
Gemma-2 2B Instruct fine-tuned on JSON dataset

This model is a Gemma-2 2B model fine-tuned on the paraloq/json_data_extraction dataset.

The model has been fine-tuned to extract data from text according to a JSON schema.

Prompt

The prompt used during training is:

"""Below is a text paired with input that provides further context. Write JSON output that matches the schema to extract information.

### Input:
{input}

### Schema:
{schema}

### Response:
"""

Using the Model

You can use the model with the transformers library or with the wrapper from [unsloth](https://unsloth.ai/blog/gemma2), which allows faster inference.

import torch
from unsloth import FastLanguageModel

# Required to avoid exceeding torch._dynamo's accumulated cache size limit
torch._dynamo.config.accumulated_cache_size_limit = 2048

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "bastienp/Gemma-2-2B-it-JSON-data-extration",
    max_seq_length = 2048,
    dtype = torch.float16,
    load_in_4bit = False,
    token = HF_TOKEN_READ,  # your Hugging Face read access token
)
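
Once loaded, generation works as with any transformers model. A minimal sketch, assuming the model and tokenizer from the snippet above and a prompt built from the template in the Prompt section; the generation settings are illustrative.

# Assumes `model` and `tokenizer` from the snippet above and `prompt` built
# from the training template; generation settings are illustrative.
FastLanguageModel.for_inference(model)  # enable unsloth's fast inference path

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens (the extracted JSON)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))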

Using the quantized model (llama.cpp)

The model is also supplied in GGUF format in 4-bit and 8-bit quantized versions.

Example code with llama.cpp:

from llama_cpp import Llama

llm = Llama.from_pretrained(
    "bastienp/Gemma-2-2B-it-JSON-data-extration",
    filename="*Q4_K_M.gguf",  # use "*Q8_K_M.gguf" for the 8-bit version
    verbose=False,
)
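
The quantized model can then be prompted the same way. A minimal sketch, assuming `prompt` is built from the training template above; the sampling settings are illustrative.

# Assumes `prompt` built from the training template above
output = llm(prompt, max_tokens=256, temperature=0.0)  # deterministic decoding
print(output["choices"][0]["text"])  # the extracted JSON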

The base model used for fine-tuning is google/gemma-2-2b-it. This repository is NOT affiliated with Google.

Gemma is provided under and subject to the Gemma Terms of Use found at ai.google.dev/gemma/terms.

  • Developed by: bastienp
  • License: gemma
  • Fine-tuned from model: google/gemma-2-2b-it