
Overview

This model is a simple proof of concept (POC) for JSON-based text completion on instruction-following tasks. It was trained on 20,000 records from the Alpaca dataset using a simple prompt template that expects and returns JSON. The prompt template looks roughly like this:

### INPUT:
```json
{"instructions": "<INSTRUCTIONS>", "input": "<INPUT>"}
```

### OUTPUT:
```json
{"response": "<OUTPUT>"}
```

Newlines are escaped, so the entire prompt is sent as a single line, like this:

### INPUT:\n```json\n{"instructions": "Explain what an alpaca is"}\n```\n### OUTPUT:\n

As this example shows, the `input` key in the input JSON can be omitted when it is not needed. The training dataset includes examples with and without additional inputs, and the model was trained to handle both cases. Ultimately, you can expect the model to behave like an Alpaca finetune on top of llama-2-7b; the only difference is that it should reliably expect and respond in JSON format.
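The prompt construction above can be sketched with a small helper. `build_prompt` is a hypothetical name (not part of this repo); it keeps the newline separators as literal `\n` two-character sequences, matching the escaped-newline example above, and relies on `json.dumps` to escape any newlines inside the instruction text itself:

```python
import json


def build_prompt(instructions, input_text=None):
    """Build the single-line prompt expected by the model.

    The "input" key is included only when an additional input is provided,
    mirroring the two cases present in the training data.
    """
    payload = {"instructions": instructions}
    if input_text is not None:
        payload["input"] = input_text
    # Template separators stay as literal backslash-n, per the card's example.
    return (
        "### INPUT:\\n```json\\n"
        + json.dumps(payload)
        + "\\n```\\n### OUTPUT:\\n"
    )
```

For the example instruction above, this reproduces the exact prompt string shown, with no raw newline characters in it.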

Training procedure

The adapter was trained for 5 epochs using QLoRA with an average training loss of 0.7535.

The following hyperparameters were used:

  • Learning Rate: 2e-4
  • Lora R: 16
  • Lora Alpha: 16
  • Lora Dropout: 0.05
  • Target Modules: "q_proj", "k_proj", "v_proj", "o_proj"
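The LoRA hyperparameters above translate to a PEFT config roughly like the following. This is a minimal sketch assuming the standard `peft` API; `bias` and `task_type` are assumptions, since the card does not state them, and the learning rate is passed to the trainer rather than to this config:

```python
from peft import LoraConfig

# Mirrors the hyperparameters listed above.
# bias and task_type are assumed values, not stated in the card.
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)
```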

The following bitsandbytes quantization config was used during training:

  • load_in_8bit: False
  • load_in_4bit: True
  • llm_int8_threshold: 6.0
  • llm_int8_skip_modules: None
  • llm_int8_enable_fp32_cpu_offload: False
  • llm_int8_has_fp16_weight: False
  • bnb_4bit_quant_type: nf4
  • bnb_4bit_use_double_quant: True
  • bnb_4bit_compute_dtype: bfloat16
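The quantization settings above correspond to a `transformers` `BitsAndBytesConfig` roughly like this (a sketch, assuming the standard API; the int8-specific fields listed above are simply the library defaults and are omitted here):

```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization and bfloat16 compute,
# matching the settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```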

Framework versions

  • PEFT 0.4.0.dev0
