---
license: llama3
library_name: transformers
tags:
- experimental
base_model:
- nbeerbower/llama-3-bophades-v1-8B
datasets:
- jondurbin/gutenberg-dpo-v0.1
- jondurbin/truthy-dpo-v0.1
- flammenai/FlameMix-DPO-v1
model-index:
- name: llama-3-sauce-v2-8B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 65.61
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/llama-3-sauce-v2-8B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 83.11
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/llama-3-sauce-v2-8B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 67.98
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/llama-3-sauce-v2-8B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 56.39
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/llama-3-sauce-v2-8B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 76.72
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/llama-3-sauce-v2-8B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 72.48
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/llama-3-sauce-v2-8B
      name: Open LLM Leaderboard
---

# llama-3-sauce-v2-8B

This model is based on Llama-3-8B and is governed by the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](LICENSE).

This is an experimental finetune of nbeerbower/llama-3-spicy-abliterated-stella-8B using various DPO datasets.

# Chat Format

Please use the ChatML format; you may experience poor results otherwise.

```
<|im_start|>system
{System Prompt Here!}<|im_end|>
<|im_start|>user
{Message from User}<|im_end|>
<|im_start|>assistant
{Message from AI}<|im_end|>
```
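For reference, here is a minimal inference sketch with 🤗 Transformers. It assumes the repository's tokenizer ships a ChatML chat template (if it does not, build the prompt string by hand following the format above); the sampling settings are illustrative, not the author's.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nbeerbower/llama-3-sauce-v2-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a limerick about hot sauce."},
]

# Assumes the repo's chat template emits ChatML as documented above
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```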
# Method

Finetuned using an A100 on Google Colab.

[Fine-tune a Mistral-7b model with Direct Preference Optimization](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac) by [Maxime Labonne](https://huggingface.co/mlabonne)

### Configuration

Dataset preparation:

```python
from datasets import concatenate_datasets, load_dataset
from transformers import AutoTokenizer

def chatml_format(example):
    # Format system message, if the example provides one
    system = ""
    if example.get('system') and len(example['system']) > 0:
        systemMessage = example['system']
        system = "<|im_start|>system\n" + systemMessage + "<|im_end|>\n"

    # Format instruction
    prompt = "<|im_start|>user\n" + example['prompt'] + "<|im_end|>\n<|im_start|>assistant\n"

    # Format chosen answer
    chosen = example['chosen'] + "<|im_end|>\n"

    # Format rejected answer
    rejected = example['rejected'] + "<|im_end|>\n"

    return {
        "prompt": system + prompt,
        "chosen": chosen,
        "rejected": rejected,
    }

# Datasets to concatenate
ds = [
    "jondurbin/truthy-dpo-v0.1",
    "jondurbin/gutenberg-dpo-v0.1",
    "flammenai/FlameMix-DPO-v1"
]

# Load each dataset and combine them
loaded_datasets = [load_dataset(dataset_name, split='train') for dataset_name in ds]
dataset = concatenate_datasets(loaded_datasets)

# Save original column names so they can be dropped after mapping
original_columns = dataset.column_names

# Tokenizer (model_name, the base model being tuned, is defined earlier in the notebook)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"

# Format dataset
dataset = dataset.map(
    chatml_format,
    remove_columns=original_columns
)
```

LoRA, model, and training settings:

```python
import torch
from peft import LoraConfig
from transformers import AutoModelForCausalLM, TrainingArguments
from trl import DPOTrainer

# LoRA configuration
peft_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']
)

# Model to fine-tune (model_name and new_model are defined earlier in the notebook)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    load_in_4bit=True
)
model.config.use_cache = False

# Reference model for DPO
ref_model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    load_in_4bit=True
)

# Training arguments
training_args = TrainingArguments(
    per_device_train_batch_size=1,
    gradient_accumulation_steps=1,
    gradient_checkpointing=True,
    learning_rate=3e-5,
    lr_scheduler_type="cosine",
    max_steps=4000,
    save_strategy="no",
    logging_steps=1,
    output_dir=new_model,
    optim="paged_adamw_32bit",
    warmup_steps=100,
    bf16=True,
    report_to="wandb",
)

# Create DPO trainer
dpo_trainer = DPOTrainer(
    model,
    ref_model,
    args=training_args,
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
    beta=0.1,
    force_use_ref_model=True
)

# Fine-tune model with DPO
dpo_trainer.train()
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nbeerbower__llama-3-sauce-v2-8B).

| Metric                          |Value|
|---------------------------------|----:|
|Avg.                             |70.38|
|AI2 Reasoning Challenge (25-Shot)|65.61|
|HellaSwag (10-Shot)              |83.11|
|MMLU (5-Shot)                    |67.98|
|TruthfulQA (0-shot)              |56.39|
|Winogrande (5-shot)              |76.72|
|GSM8k (5-shot)                   |72.48|
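To sanity-check these numbers locally, here is a hedged sketch using EleutherAI's lm-evaluation-harness (`pip install lm-eval`), which is what the leaderboard runs under the hood. Task names and arguments follow the harness's 0.4.x `simple_evaluate` API; the leaderboard pins a specific harness version, so local scores may differ slightly.

```python
# Sketch: reproduce the 25-shot ARC-Challenge row with lm-evaluation-harness.
# Assumes lm-eval 0.4.x; exact scores may not match the leaderboard's pinned version.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",  # Hugging Face transformers backend
    model_args="pretrained=nbeerbower/llama-3-sauce-v2-8B,dtype=bfloat16",
    tasks=["arc_challenge"],  # or hellaswag, mmlu, truthfulqa_mc2, winogrande, gsm8k
    num_fewshot=25,           # matches the 25-shot ARC row above
    batch_size=8,
)
print(results["results"]["arc_challenge"])  # includes acc_norm among its metrics
```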