---
license: apache-2.0
datasets:
- gsarti/clean_mc4_it
- FreedomIntelligence/alpaca-gpt4-italian
language:
- it
- en
---
This model was fine-tuned with Unsloth's continued pretraining mode on the gsarti/clean_mc4_it dataset (100k rows only) to improve its Italian language capability. A second fine-tuning pass was then performed on the instruction dataset FreedomIntelligence/alpaca-gpt4-italian.
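The snippet below is a minimal sketch of what the first stage could look like, assuming Unsloth's `FastLanguageModel`/`UnslothTrainer` API for continued pretraining. The LoRA rank, sequence length, batch sizes, and the `"tiny"` dataset config name are illustrative assumptions, not values reported by this card:

```python
# Hypothetical sketch of the continued-pretraining stage with Unsloth.
from datasets import load_dataset
from unsloth import FastLanguageModel, UnslothTrainer, UnslothTrainingArguments

# Load the 4-bit base model named in the card below.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2-1.5B-Instruct-bnb-4bit",
    max_seq_length=2048,  # assumption: the card does not state the context length used
    load_in_4bit=True,
)

# Attach LoRA adapters; rank and target modules are illustrative defaults.
# (Unsloth's continued-pretraining recipe optionally adds "embed_tokens"/"lm_head".)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)

# Only the first 100k rows of the Italian corpus, as described above.
# The "tiny" config is an assumption; clean_mc4_it ships several sizes.
dataset = load_dataset("gsarti/clean_mc4_it", "tiny", split="train[:100000]")

trainer = UnslothTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=UnslothTrainingArguments(
        per_device_train_batch_size=2,  # illustrative values, not the card's
        gradient_accumulation_steps=8,
        max_steps=1000,
        learning_rate=5e-5,
        embedding_learning_rate=5e-6,   # Unsloth's continued-pretraining knob
        output_dir="outputs",
    ),
)
trainer.train()
```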
# Uploaded model
- **Developed by:** e-palmisano
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2-1.5B-Instruct-bnb-4bit
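Once the adapters are merged and uploaded, the model can be used like any other Qwen2-Instruct checkpoint. A usage sketch with `transformers` follows; the repo id is a placeholder, since the card does not state the final model name:

```python
# Hypothetical inference sketch; replace MODEL_ID with this repository's id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "e-palmisano/<this-model>"  # placeholder, not stated in the card

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

# Qwen2-Instruct models ship a chat template with the tokenizer.
messages = [{"role": "user", "content": "Descrivi brevemente la città di Roma."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```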
# Evaluation
For a detailed comparison of model performance, check out the Leaderboard for Italian Language Models.
Here's a breakdown of the performance metrics:
| Metric | hellaswag_it acc_norm | arc_it acc_norm | m_mmlu_it 5-shot acc | Average |
|---|---|---|---|---|
| Accuracy Normalized | 48.05 | 32.68 | 46.89 | 42.57 |
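These tasks (`hellaswag_it`, `arc_it`, `m_mmlu_it`) are available in EleutherAI's lm-evaluation-harness, so the scores could plausibly be reproduced along these lines. The harness version, few-shot settings per task, and the placeholder repo id are assumptions:

```python
# Sketch of re-running the benchmarks with lm-evaluation-harness (pip install lm-eval).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=e-palmisano/<this-model>,dtype=bfloat16",  # placeholder id
    tasks=["hellaswag_it", "arc_it", "m_mmlu_it"],
    num_fewshot=5,  # the card reports 5-shot only for m_mmlu_it; adjust per task
)
print(results["results"])
```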
This Qwen2 model was trained 2x faster with Unsloth and Hugging Face's TRL library.
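The second, instruction-tuning stage would use TRL's `SFTTrainer` on top of the continued-pretraining checkpoint. The sketch below assumes the older TRL API (direct `tokenizer`/`dataset_text_field` arguments) and a ShareGPT-style `conversations` schema for the dataset; verify both against your installed versions and the dataset viewer before running:

```python
# Hypothetical sketch of the instruction-tuning stage with TRL's SFTTrainer.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="outputs",  # assumption: checkpoint saved by the previous stage
    max_seq_length=2048,
    load_in_4bit=True,
)

dataset = load_dataset("FreedomIntelligence/alpaca-gpt4-italian", split="train")

def format_example(example):
    # Assumption: ShareGPT-style "conversations" turns; adapt the field names
    # to the dataset's actual schema before running.
    turns = [
        {"role": "user" if t["from"] == "human" else "assistant",
         "content": t["value"]}
        for t in example["conversations"]
    ]
    return {"text": tokenizer.apply_chat_template(turns, tokenize=False)}

dataset = dataset.map(format_example)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,  # illustrative values, not the card's
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-5,
        output_dir="outputs-sft",
    ),
)
trainer.train()
```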