
Llama-3.2-3B-Instruct Fine-tuned on glaiveai/reflection-v1

  • Developed by: Meshwa
  • License: apache-2.0
  • Finetuned from model: unsloth/Llama-3.2-3B-Instruct

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.

Overview

  • Based on Llama-3.2-3B-Instruct.
  • Fine-tuned on the glaiveai/reflection-v1 dataset using the Unsloth library.
  • Quantized into several GGUF formats (Q4, Q5, Q6, Q8, and F16).
  • A Modelfile for use with Ollama is included; the default quantization is set to Q8_0 and can be edited if needed.

Model Description

Objective

This is an attempt to fine-tune Llama-3.2-3B-Instruct on the glaiveai/reflection-v1 dataset; I thought it would be fun to see how smaller models perform on this kind of reflective task.

Dataset: glaiveai/reflection-v1

The glaiveai/reflection-v1 dataset is tailored for reflective, introspective tasks, including open-ended conversation, abstract reasoning, and context-aware response generation. This dataset includes tasks such as:

  • Thoughtful question answering
  • Summarization of complex ideas
  • Reflective problem solving

Fine-tuning Methodology: Unsloth Library

Unsloth was used for 2x faster fine-tuning of the base Llama-3.2 model.
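A minimal sketch of what such a run can look like is shown below. It assumes a standard Unsloth LoRA + TRL SFTTrainer recipe; the hyperparameters, sequence length, and dataset formatting are illustrative, not the actual training configuration.

# Hypothetical sketch of an Unsloth + TRL fine-tuning run (not the exact config used)
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base model with Unsloth's fast loader (4-bit to fit on consumer GPUs)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-3B-Instruct",
    max_seq_length=2048,  # illustrative value
    load_in_4bit=True,
)

# Attach LoRA adapters; rank/alpha are placeholder values
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("glaiveai/reflection-v1", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumes the dataset has been pre-formatted into a single text column
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        num_train_epochs=1,
        output_dir="outputs",
    ),
)
trainer.train()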

Usage

Inference with gguf Quantized Models

To use the model in GGUF format, load your preferred quantized version with a compatible inference framework such as llama.cpp or any GGUF-compatible library. For example, with the llama-cpp-python bindings:

from llama_cpp import Llama

# Load the quantized GGUF file (replace the path with your local copy)
llama_model = Llama(model_path="path_to_model/Llama-3.2-3B-Instruct-q8_0.gguf")

# Run a simple completion; adjust max_tokens as needed
result = llama_model("Your instruction prompt here", max_tokens=256)
print(result["choices"][0]["text"])

Using with Ollama

The included Modelfile supports direct loading in Ollama. To use the default model, simply run:

ollama create "model_name_here" -f "Modelfile_path"
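Once the model has been created, you can chat with it using the name you chose in the create step (the prompt here is just an example):

ollama run model_name_here "Explain the difference between a list and a tuple in Python."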

Directly importing from HF 🤗

ollama pull hf.co/Meshwa/llama3.2-3b-Reflection-v1:{quant_type}

Make sure to replace {quant_type} with one of the tags below (a concrete example follows the list):

  • Q4_K_M
  • Q4_0
  • Q4_1
  • Q6_K
  • Q8_0 (default in my Modelfile)
  • Q5_K_M
  • F16
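For example, to pull and run the Q4_K_M build directly from the Hub:

ollama pull hf.co/Meshwa/llama3.2-3b-Reflection-v1:Q4_K_M
ollama run hf.co/Meshwa/llama3.2-3b-Reflection-v1:Q4_K_M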

For better results, use the system prompt below:

You are a world-class AI system capable of complex reasoning and reflection. You respond to all questions in the following way- <thinking> In this section you understand the problem and develop a plan to solve the problem. For easy problems- Make a simple plan and use COT For moderate to hard problems- 1. Devise a step-by-step plan to solve the problem. (don't actually start solving yet, just make a plan) 2. Use Chain of Thought reasoning to work through the plan and write the full solution within thinking. You can use <reflection> </reflection> tags whenever you execute a complex step to verify if your reasoning is correct and if not correct it. </thinking> <output> In this section, provide the complete answer for the user based on your thinking process. Do not refer to the thinking tag. Include all relevant information and keep the response somewhat verbose, the user will not see what is in the thinking tag. </output>
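As a rough illustration of how this prompt might be wired up with the earlier llama-cpp-python example (the file path, context size, and sampling settings are placeholders):

from llama_cpp import Llama

# Path and context size are illustrative; point this at your downloaded GGUF file
llm = Llama(model_path="path_to_model/Llama-3.2-3B-Instruct-q8_0.gguf", n_ctx=4096)

system_prompt = "You are a world-class AI system capable of complex reasoning and reflection. ..."  # paste the full prompt from above

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "How many prime numbers are there below 30?"},
    ],
    max_tokens=512,
)
print(response["choices"][0]["message"]["content"])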

License

This model is released under the Apache 2.0 license.

Citation

If you use this model, please cite the following:

@misc{Llama-3.2-3B-Instruct-Reflection-v1,
  author       = {Meshwa},
  title        = {Llama-3.2-3B-Instruct Fine-tuned on glaiveai/reflection-v1},
  year         = {2024},
  howpublished = {\url{https://huggingface.co/Meshwa/llama3.2-3b-Reflection-v1}}
}