---
title: README
emoji: 🚀
colorFrom: pink
colorTo: indigo
sdk: static
pinned: false
---

## Usage

You can load models with the Hugging Face Transformers library:

```python
from transformers import pipeline

# Build a text-generation pipeline for the model
pipe = pipeline("text-generation", model="nroggendorff/mayo")

question = "What color is the sky?"
conv = [{"role": "user", "content": question}]

# The pipeline accepts a chat-style list of messages and returns the full
# conversation; the last message is the assistant's reply
response = pipe(conv, max_new_tokens=32)[0]["generated_text"][-1]["content"]
print(response)
```

To load models with 4-bit quantization (requires the `bitsandbytes` package and a CUDA GPU):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization and bfloat16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model_id = "nroggendorff/mayo"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)

question = "What color is the sky?"

# Format the question with the model's chat template; add_generation_prompt
# makes the model answer the question instead of continuing it
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": question}],
    tokenize=False,
    add_generation_prompt=True,
)

# Move the inputs to the same device as the quantized model
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)

generated_text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
print(generated_text)
```
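The quantized example prints the full decoded sequence, prompt included. A minimal sketch of two common alternatives, assuming the `model`, `tokenizer`, `inputs`, and `outputs` names from the block above are already in scope: slice off the prompt tokens to print only the reply, or stream tokens to stdout as they are generated with `TextStreamer`:

```python
from transformers import TextStreamer

# Keep only the tokens generated after the prompt
reply_ids = outputs[0][inputs["input_ids"].shape[-1]:]
print(tokenizer.decode(reply_ids, skip_special_tokens=True))

# Or stream tokens as they are produced, skipping the echoed prompt
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(**inputs, max_new_tokens=32, streamer=streamer)
```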