---
datasets:
- IlyaGusev/saiga_scored
- IlyaGusev/saiga_preferences
- dichspace/darulm
language:
- ru
pipeline_tag: text-generation
base_model:
- RefalMachine/ruadapt_qwen2.5_3B_ext_u48_full_lr5e4_peft_mlp_32_32_bs256
---
# Model description
Instruction-tuned version of RefalMachine/ruadapt_qwen2.5_3B_ext_u48_full_lr5e4_peft_mlp_32_32_bs256 with an extended tokenizer, adapted via the LEP (Learned Embedding Propagation; paper to be published) procedure.

Thanks to the extended tokenizer, the model processes Russian text more efficiently: up to a 60% speedup compared to Qwen-2.5-3B-Instruct when throughput is measured in characters.
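
You can sanity-check the tokenizer-efficiency claim by comparing how many tokens each tokenizer needs for the same Russian text; fewer tokens per character translates into faster generation measured in characters. The sketch below assumes a hypothetical repo ID for this model (`RefalMachine/ruadapt_qwen2.5_3B_instruct`); substitute the actual repository ID. `Qwen/Qwen2.5-3B-Instruct` is the official Qwen repository.

```python
# Rough tokenizer-efficiency comparison; the first repo ID is hypothetical.
from transformers import AutoTokenizer

text = "Машинное обучение позволяет компьютерам обучаться на данных."

for repo in [
    "RefalMachine/ruadapt_qwen2.5_3B_instruct",  # hypothetical repo ID
    "Qwen/Qwen2.5-3B-Instruct",
]:
    tok = AutoTokenizer.from_pretrained(repo)
    n_tokens = len(tok(text)["input_ids"])
    print(f"{repo}: {n_tokens} tokens, {len(text) / n_tokens:.2f} chars/token")
```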
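# Usage example
A minimal inference sketch with the 🤗 Transformers library. The repo ID is the same hypothetical placeholder as above; substitute the actual repository ID of this model.

```python
# Minimal sketch, assuming a hypothetical repo ID; substitute the real one.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RefalMachine/ruadapt_qwen2.5_3B_instruct"  # hypothetical repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Build a chat prompt using the model's chat template.
messages = [{"role": "user", "content": "Привет! Расскажи о себе."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids, max_new_tokens=256, do_sample=True, temperature=0.7
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```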
# How to cite
Tikhomirov M., Chernyshev D. Facilitating large language model Russian adaptation with Learned Embedding Propagation // 2024 (to appear).

Tikhomirov M., Chernyshev D. Impact of Tokenization on LLaMa Russian Adaptation // 2023 Ivannikov Ispras Open Conference (ISPRAS). – IEEE, 2023. – pp. 163-168.