
Base Model: mistralai/Mistral-7B-Instruct-v0.2

  • LoRA weights for mistralai/Mistral-7B-Instruct-v0.2

Noteworthy changes:

  • reduced training epochs: 3 (previously 4)

  • new training prompt (sketched below): "Teenager students write in simple sentences. You are a teenager student, and please answer the following question. {training example}"

  • old training prompt: "Teenager students write in simple sentences [with typos and grammar errors]. You are a teenager student, and please answer the following question. {training example}"
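
For reference, the new prompt simply prefixes each raw training example with the instruction text. A minimal sketch of that wrapping, assuming plain string concatenation (the example Q&A is illustrative, not taken from the training data):

prompt_prefix = (
    "Teenager students write in simple sentences. You are a teenager student, "
    "and please answer the following question. "
)
# Hypothetical training example; the real data is not reproduced in this card.
example = "Q: Does a pulley reduce the work needed to lift a box? A: No, it only changes the force you need."
formatted_example = prompt_prefix + example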

Model Details

Fine-tuned model that talks like middle school students, using simple vocabulary and grammar.

  • Trained on student Q&A data on physics topics, including pulley/ramp examples that discuss work, force, etc.

Model Description

  • Developed by: Nora T
  • Funded by [optional]: [More Information Needed]
  • Shared by [optional]: [More Information Needed]
  • Model type: [More Information Needed]
  • Language(s) (NLP): [More Information Needed]
  • License: [More Information Needed]
  • Finetuned from model [optional]: mistralai/Mistral-7B-Instruct-v0.2

Model Sources [optional]

  • Repository: https://huggingface.co/ntseng/mistralai_Mistral-7B-Instruct-v0_2_student_answer_train_examples_mistral_0416
  • Paper [optional]: [More Information Needed]
  • Demo [optional]: [More Information Needed]


How to Get Started

  1. Load the Mistral base model first:
import torch
from peft import PeftModel  # used below to load the LoRA adapter
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name_or_path = "mistralai/Mistral-7B-Instruct-v0.2"

# 4-bit NF4 quantization (requires the bitsandbytes package)
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path,
    device_map="auto",
    trust_remote_code=False,
    quantization_config=nf4_config,
    revision="main",
)

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
  2. Load in the LoRA weights:
# Path to the downloaded LoRA adapter folder
lora_model_path = "{path_to_loras_folder}/mistralai_Mistral-7B-Instruct-v0.2-testgen-LoRAs"

model = PeftModel.from_pretrained(
    model,
    lora_model_path,
    torch_dtype=torch.float16,
    force_download=True,
)
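
Once the adapter is loaded, generation works as usual. A minimal sketch, assuming the inference prompt is phrased the same way as the training prompt (the question text is only an illustrative example):

question = "Why does a pulley make it easier to lift a heavy box?"  # illustrative question
prompt = (
    "Teenager students write in simple sentences. You are a teenager student, "
    f"and please answer the following question. {question}"
)

# Mistral-7B-Instruct expects the chat template ([INST] ... [/INST] markers)
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))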

Training Hyperparams

  • LoRA Rank: 128
  • LoRA Alpha: 32
  • Batch Size: 64
  • Cutoff Length: 256
  • Learning rate: 3e-4
  • Epochs: 3
  • LoRA Dropout: 0.05
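
For reference, the LoRA settings above map onto a PEFT LoraConfig roughly as follows. This is a sketch only: the card does not list the adapted layers, so target_modules is an assumption, and batch size, cutoff length, learning rate, and epochs are trainer-level settings rather than LoraConfig fields.

from peft import LoraConfig

lora_config = LoraConfig(
    r=128,             # LoRA rank
    lora_alpha=32,     # LoRA alpha
    lora_dropout=0.05,
    # Assumed target modules (typical Mistral attention projections); not stated in this card
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)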

Training Data

Trained on a raw text file of student Q&A examples on physics topics (pulleys, ramps, work, force), with each example wrapped in the training prompt shown above.

Preprocessing [optional]

[More Information Needed]

Technical Specifications

Model Architecture and Objective

[More Information Needed]

Hardware

[More Information Needed]

Software

[More Information Needed]

Citation [optional]

Framework versions

  • PEFT 0.7.1