base_model:
- RefalMachine/ruadapt_qwen2.5_3B_ext_u48_full_lr5e4_peft_mlp_32_32_bs256
---

### Model description

Instruction-tuned version of RefalMachine/ruadapt_qwen2.5_3B_ext_u48_full_lr5e4_peft_mlp_32_32_bs256 with an extended tokenizer, obtained via the LEP (Learned Embedding Propagation; paper coming soon) procedure.
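
A minimal generation sketch using the Hugging Face transformers chat API. The repo id below is an assumption inferred from the name bolded in the leaderboard table further down; generation settings are illustrative:

```python
# Sketch: load the instruct model and generate a reply (repo id is assumed).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RefalMachine/ruadapt_qwen2.5_3B_ext_u48_instruct_v4"  # assumed id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)

messages = [{"role": "user", "content": "Привет! Расскажи о себе."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```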
Thanks to the extended tokenizer, the model works more efficiently with Russian text (up to a 60% speedup over Qwen2.5-3B-Instruct when throughput is measured in characters).
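
The speedup comes from the extended vocabulary packing more Russian characters into each token. A quick sketch for checking tokenizer fertility on your own text (the sample string is a placeholder; higher chars/token means fewer generation steps per character):

```python
# Compare characters-per-token of the base and adapted tokenizers on Russian text.
from transformers import AutoTokenizer

sample = "Токенизация длинных русских текстов сильно влияет на скорость генерации."

for name in [
    "Qwen/Qwen2.5-3B-Instruct",
    "RefalMachine/ruadapt_qwen2.5_3B_ext_u48_full_lr5e4_peft_mlp_32_32_bs256",
]:
    tok = AutoTokenizer.from_pretrained(name)
    n_tokens = len(tok(sample)["input_ids"])
    print(f"{name}: {len(sample) / n_tokens:.2f} chars/token")
```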

### Metrics and quality evaluation

#### Results on Ru-Arena-General

The reference answers against which the models are compared are those of gpt-3.5-turbo-0125, which therefore has a winrate of 50% by construction.

Only part of the leaderboard is shown here; see the benchmark repository for the full results.

| Model Name                                       | Winrate  | 95% CI       | Average # Tokens |
|--------------------------------------------------|----------|--------------|------------------|
| gpt-4-1106-preview                               | 90.9     | (-1.3, 1.0)  | 541              |
| gpt-4o-mini                                      | 83.9     | (-1.8, 1.1)  | 448              |
| vikhr-nemo-12b-instruct-r-21-09-24               | 79.8     | (-2.2, 1.9)  | 627              |
| gemma-2-9b-it-sppo-iter3                         | 73.6     | (-1.6, 2.2)  | 509              |
| gemma-2-9b-it                                    | 69.2     | (-2.5, 1.9)  | 459              |
| saiga_llama3_8b_v7                               | 67.6     | (?, ?)       | 503              |
| **ruadapt_qwen2.5_3B_ext_u48_instruct_v4**       | **66.1** | **(?, ?)**   | **531**          |
| t-lite-instruct-0.1                              | 64.7     | (-2.1, 1.7)  | 810              |
| vikhr-llama3.1-8b-instruct-r-21-09-24            | 63.4     | (-2.1, 2.5)  | 618              |
| suzume-llama-3-8B-multilingual-orpo-borda-half   | 57.1     | (-1.9, 2.2)  | 682              |
| mistral-nemo-instruct-2407                       | 50.5     | (-2.7, 2.6)  | 403              |
| gpt-3.5-turbo-0125                               | 50.0     | (0.0, 0.0)   | 220              |
| c4ai-command-r-v01                               | 49.0     | (-1.7, 2.2)  | 529              |
| meta-llama-3.1-8b-instruct                       | 43.1     | (-2.8, 2.3)  | 628              |
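
Winrate is the share of pairwise judgments a model wins against the reference answers, and the 95% CI column gives a confidence interval around it, in arena-style benchmarks typically estimated by bootstrap. A minimal illustration of such an estimate over per-question win/loss outcomes (toy data, not the benchmark's actual code):

```python
# Bootstrap a 95% confidence interval for a winrate from 0/1 outcomes.
import random

def winrate_ci(wins, n_boot=10_000, seed=0):
    """wins: list of 0/1 per-question outcomes (1 = beat the reference)."""
    rng = random.Random(seed)
    point = sum(wins) / len(wins)
    boots = sorted(
        sum(rng.choices(wins, k=len(wins))) / len(wins) for _ in range(n_boot)
    )
    lo, hi = boots[int(0.025 * n_boot)], boots[int(0.975 * n_boot)]
    # Report the interval as offsets from the point estimate, as in the table.
    return point, (round(lo - point, 3), round(hi - point, 3))

outcomes = [1] * 661 + [0] * 339  # toy sample matching a 66.1% winrate
print(winrate_ci(outcomes))
```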

### How to cite

Tikhomirov M., Chernyshev D. Facilitating large language model Russian adaptation with Learned Embedding Propagation // 2024 (to appear).