base_model: meta-llama/Llama-3.2-1B-Instruct
datasets:
  - KingNish/reasoning-base-20k
language:
  - en
license: llama3.2
tags:
  - text-generation-inference
  - transformers
  - unsloth
  - llama
  - trl
  - sft
  - reasoning
  - llama-3

# Model Description

This is the first iteration of this model. For testing purposes it was trained on only 10k rows, and it performed better than expected. Like o1, it first reasons and then generates a response based on that reasoning. The reasoning happens in a separate generation pass (just like o1), with no inline tags (unlike Reflection). The inference code is below.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MAX_REASONING_TOKENS = 1024
MAX_RESPONSE_TOKENS = 512

model_name = "KingNish/Reasoning-Llama-1b-v0.1"

model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Which is greater, 9.9 or 9.11?"
messages = [
    {"role": "user", "content": prompt}
]

# Generate reasoning
reasoning_template = tokenizer.apply_chat_template(messages, tokenize=False, add_reasoning_prompt=True)
reasoning_inputs = tokenizer(reasoning_template, return_tensors="pt").to(model.device)
reasoning_ids = model.generate(**reasoning_inputs, max_new_tokens=MAX_REASONING_TOKENS)
reasoning_output = tokenizer.decode(reasoning_ids[0, reasoning_inputs.input_ids.shape[1]:], skip_special_tokens=True)

# print("REASONING: " + reasoning_output)

# Generate answer
messages.append({"role": "reasoning", "content": reasoning_output})
response_template = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
response_inputs = tokenizer(response_template, return_tensors="pt").to(model.device)
response_ids = model.generate(**response_inputs, max_new_tokens=MAX_RESPONSE_TOKENS)
response_output = tokenizer.decode(response_ids[0, response_inputs.input_ids.shape[1]:], skip_special_tokens=True)

print("ANSWER: " + response_output)
```
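As a quick reference for the test prompt: treated as decimal numbers rather than version strings, 9.9 is greater than 9.11, which is the answer the model's reasoning should arrive at. A plain-Python check:

```python
# The test prompt compares 9.9 and 9.11 as decimals.
# 9.9 == 9.90, and 0.90 > 0.11, so 9.9 is the larger number.
a, b = 9.9, 9.11
print(max(a, b))  # 9.9
```

This question is a popular reasoning benchmark precisely because models often mistake the comparison for a version-number one.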

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.