---
base_model:
  - meta-llama/Llama-3.1-8B-Instruct
datasets:
  - rojas-diego/Apple-MLX-QA
language:
  - en
library_name: transformers
license: mit
pipeline_tag: question-answering
---

# Meta-Llama-3.1-8B-Instruct-Apple-MLX

## Overview

This model is a merge of a QLoRA adapter into the base Meta Llama 3.1 8B Instruct model, trained to answer questions and provide guidance on MLX, Apple's machine learning framework. Fine-tuning was done with LoRA (Low-Rank Adaptation) on a custom dataset of question-answer pairs derived from the MLX documentation.
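
For reference, a merge like this can be reproduced with PEFT's `merge_and_unload`. This is a minimal sketch, not the exact procedure used to build this checkpoint; the adapter repo id below is a placeholder:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-3.1-8B-Instruct"
adapter_id = "your-namespace/your-qlora-adapter"  # placeholder, not the actual adapter repo

# Load the base model, attach the LoRA adapter, and fold the adapter
# weights back into the base so it runs without PEFT at inference time.
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
merged = PeftModel.from_pretrained(base, adapter_id).merge_and_unload()

merged.save_pretrained("merged-model")
AutoTokenizer.from_pretrained(base_id).save_pretrained("merged-model")
```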

## Dataset

The model was fine-tuned for a single epoch on the [Apple MLX QA](https://huggingface.co/datasets/rojas-diego/Apple-MLX-QA) dataset.
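
To inspect the training data, the dataset can be loaded with the `datasets` library. The split name is an assumption; check the dataset card for the actual schema:

```python
from datasets import load_dataset

# Dataset id taken from this card's metadata.
dataset = load_dataset("rojas-diego/Apple-MLX-QA", split="train")

print(dataset)     # number of rows and column names
print(dataset[0])  # first question-answer pair
```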

## Installation

To use the model, install the required dependencies (`accelerate` is needed for `device_map="auto"` in the example below):

```bash
pip install torch peft transformers accelerate jinja2==3.1.0
```

## Usage

Here's a sample code snippet to load and interact with the model:

```python
import transformers
import torch

# Hugging Face repo id of this merged model (not the base model).
model_id = "rojas-diego/Meta-Llama-3.1-8B-Instruct-Apple-MLX"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant that answers questions about Apple's MLX framework."},
    {"role": "user", "content": "How do I create an array in MLX?"},
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
)

# The pipeline returns the full chat history; the last message is the model's reply.
print(outputs[0]["generated_text"][-1])
```
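
Generation can be tuned with the standard `transformers` sampling arguments passed through the pipeline; the values below are illustrative, not recommended settings:

```python
outputs = pipeline(
    messages,
    max_new_tokens=512,
    do_sample=True,   # sample instead of greedy decoding
    temperature=0.6,
    top_p=0.9,
)
```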