
Mayonnaise LLM

Mayo is a language model fine-tuned on the Mayo dataset with Supervised Fine-Tuning (SFT) using the TRL (Transformer Reinforcement Learning) library. It is based on the TinyLlama model.

Features

  • Fine-tuned with SFT via the TRL library for improved instruction-following
  • Supports English language
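As a rough sketch of the fine-tuning setup described above, the SFT step can be run with TRL's SFTTrainer. The dataset id, base-model id, and hyperparameters below are assumptions for illustration, not the card's actual training configuration:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical dataset id -- the card only says "the Mayo dataset"
dataset = load_dataset("nroggendorff/mayo", split="train")

# Minimal training configuration (illustrative values)
config = SFTConfig(
    output_dir="mayo-sft",
    per_device_train_batch_size=4,
    num_train_epochs=1,
)

trainer = SFTTrainer(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # a TinyLlama base, per the card
    train_dataset=dataset,
    args=config,
)
trainer.train()
```

TRL handles tokenization and packing of the dataset internally, so the trainer only needs a model id and a dataset with a text or chat-formatted column.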

Usage

To use the Mayo LLM, you can load the model using the Hugging Face Transformers library:

from transformers import pipeline

# Load the model with a text-generation pipeline
pipe = pipeline("text-generation", model="nroggendorff/vegetarian-mayo")

# Build a chat-style conversation with a single user message
question = "What color is the sky?"
conv = [{"role": "user", "content": question}]

# Generate a reply and pull the assistant's message out of the returned conversation
response = pipe(conv, max_new_tokens=32)[0]["generated_text"][-1]["content"]
print(response)
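The indexing on the last line follows from the shape the pipeline returns for chat-style input: a list with one dict per input, whose "generated_text" holds the whole conversation with the model's reply appended as the last message. A minimal sketch with a mocked output (the reply text here is hypothetical, not actual model output):

```python
# Mocked pipeline output for one chat input; the assistant reply is invented
mock_output = [
    {
        "generated_text": [
            {"role": "user", "content": "What color is the sky?"},
            {"role": "assistant", "content": "The sky is blue."},
        ]
    }
]

# [0] -> first input, ["generated_text"] -> full conversation,
# [-1] -> last message (the model's reply), ["content"] -> its text
response = mock_output[0]["generated_text"][-1]["content"]
print(response)  # The sky is blue.
```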

License

This project is licensed under the MIT License.

Model size: 1.1B parameters · Tensor type: F32 · Format: Safetensors
