## Model Description
`mistral_7b_yo_instruct` is a Yorùbá-language text generation model fine-tuned to follow instructions.
## Intended uses & limitations

### How to use
```python
import requests

API_URL = "https://i8nykns7vw253vx3.us-east-1.aws.endpoints.huggingface.cloud"
headers = {
    "Authorization": "Bearer hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",  # your Hugging Face access token
    "Content-Type": "application/json",
}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

# Prompt: "Pẹlẹ o. Bawo ni o se wa?" ("Hello. How are you?")
output = query({
    "inputs": "Pẹlẹ o. Bawo ni o se wa?",
})

# Example model response: "O dabo. O jẹ ọjọ ti o dara." ("I am safe. It was a good day.")
print(output)
```
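The exact JSON shape returned by the endpoint depends on how it is deployed; Hugging Face text-generation endpoints commonly return a list of objects with a `generated_text` field, but that is an assumption here, not something this card guarantees. A minimal sketch of defensively extracting the text (the helper name `extract_generated_text` is ours, not part of any API):

```python
def extract_generated_text(output):
    """Pull the generated text out of a typical text-generation
    endpoint response, or return None if the shape is unexpected.

    Assumes the common `[{"generated_text": "..."}]` response shape;
    verify against your own deployment before relying on it.
    """
    if isinstance(output, list) and output and isinstance(output[0], dict):
        return output[0].get("generated_text")
    # Error responses usually arrive as a dict with an "error" key.
    if isinstance(output, dict) and "error" in output:
        raise RuntimeError(f"Endpoint error: {output['error']}")
    return None

# Example with a mocked response payload:
mock = [{"generated_text": "O dabo. O jẹ ọjọ ti o dara."}]
print(extract_generated_text(mock))  # prints the Yorùbá reply string
```

Checking the shape before indexing avoids an opaque `KeyError` or `IndexError` when the endpoint is cold-starting or rate-limited and returns an error object instead of generations.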
## Eval results

Coming soon.
## Limitations and bias

This model is limited by its training data, entity-annotated news articles drawn from a specific span of time, and may not generalize well to use cases in other domains.
## Training data

This model was fine-tuned on 60k+ instruction-following demonstrations built from an aggregation of datasets (AfriQA, XLSum, MENYO-20k) and translations of Alpaca-gpt4.
## Use and safety

We emphasize that `mistral_7b_yo_instruct` is intended only for research purposes and is not ready for general deployment, because we have not designed adequate safety measures.