---
license: apache-2.0
datasets:
- gsarti/clean_mc4_it
- FreedomIntelligence/alpaca-gpt4-italian
language:
- it
- en
---


This model was first fine-tuned with Unsloth's continued pretraining mode on the gsarti/clean_mc4_it dataset (100k rows only) to improve its Italian language capabilities. A second fine-tuning stage was then performed on the instruction dataset FreedomIntelligence/alpaca-gpt4-italian.
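
Below is a minimal, illustrative sketch of what such a two-stage pipeline can look like with Unsloth and TRL. The dataset config name, hyperparameters, and the stage-2 formatting are assumptions for illustration, not the exact settings used to train this model.

```python
# Sketch of a two-stage fine-tuning flow with Unsloth + TRL.
# All hyperparameters and the dataset config below are illustrative.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Stage 1: continued pretraining on raw Italian text (subset of gsarti/clean_mc4_it)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2-1.5B-Instruct-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Config name "tiny" and the 100k-row slice are placeholders for illustration.
pretrain_ds = load_dataset("gsarti/clean_mc4_it", "tiny", split="train[:100000]")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=pretrain_ds,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=5e-5,
        output_dir="outputs_cpt",
    ),
)
trainer.train()

# Stage 2: instruction tuning on FreedomIntelligence/alpaca-gpt4-italian.
# Format each row into a single prompt/response text field (e.g. via the
# tokenizer's chat template), then run SFTTrainer again on the adapted model.
```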



# Uploaded model

- **Developed by:** e-palmisano
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2-1.5B-Instruct-bnb-4bit
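
## Usage

A minimal inference example with the transformers library. The model id placeholder below should be replaced with this repository's id on the Hugging Face Hub; the prompt and generation settings are illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<this-repo-id>"  # replace with this model's Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Spiegami brevemente cos'è il machine learning."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```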

## Evaluation

For a detailed comparison of model performance, check out the [Leaderboard for Italian Language Models](https://huggingface.co/spaces/FinancialSupport/open_ita_llm_leaderboard).

Here's a breakdown of the performance metrics:

| hellaswag_it (acc_norm) | arc_it (acc_norm) | m_mmlu_it (5-shot acc) | Average |
|:------------------------|:------------------|:-----------------------|:--------|
| 48.05                   | 32.68             | 46.89                  | 42.57   |
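
Scores of this kind are typically computed with EleutherAI's lm-evaluation-harness. The snippet below is a hedged sketch: the task names follow the Italian leaderboard, and their availability (as well as the `simple_evaluate` signature) depends on the installed lm-evaluation-harness version.

```python
# Sketch: evaluating the model on the Italian benchmarks with lm-evaluation-harness.
import lm_eval

model_args = "pretrained=<this-repo-id>,dtype=bfloat16"  # replace with this model's Hub id

# hellaswag_it and arc_it are reported as acc_norm (0-shot)
results_0shot = lm_eval.simple_evaluate(
    model="hf",
    model_args=model_args,
    tasks=["hellaswag_it", "arc_it"],
    num_fewshot=0,
)

# m_mmlu_it is reported as 5-shot accuracy
results_5shot = lm_eval.simple_evaluate(
    model="hf",
    model_args=model_args,
    tasks=["m_mmlu_it"],
    num_fewshot=5,
)

print(results_0shot["results"])
print(results_5shot["results"])
```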



This Qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)