zolicsaki committed on
Commit: 6ae4e4b
1 Parent(s): 4dd9b0b

Update README.md

Files changed (1):
  1. README.md +9 -9
README.md CHANGED
````diff
@@ -28,7 +28,6 @@ SambaLingo-Arabic-Base-70B is a pretrained Bi-lingual Arabic and English model t
 - **Model type:** Language Model
 - **Language(s):** Arabic, English
 - **Finetuned from model:** [Llama-2-70b](https://huggingface.co/meta-llama/Llama-2-70b-hf)
-- **Try the chat version of this model**: [SambaLingo-chat-space](https://huggingface.co/spaces/sambanovasystems/SambaLingo-chat-space).
 - **Paper:** [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
 - **Blog Post**: [sambalingo-open-source-language-experts](https://sambanova.ai/blog/sambalingo-open-source-language-experts)
 
@@ -53,8 +52,9 @@ All pre-training is done on the [Cultura-X](https://huggingface.co/datasets/uonl
 
 ## Tokenizer Details
 We extended the vocabulary of the base llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.
-## Evaluation
 
+## Evaluation
+For evaluation results see our paper: [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
 
 ## Uses
 <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
@@ -97,12 +97,12 @@ We would like to give a special thanks to the following groups:
 
 ## Cite SambaLingo
 ```
-@software{sambalingo,
-title = {{SambaLingo: Open Source Language Experts}},
-author = {SambaNova Systems},
-url = {https://huggingface.co/sambanovasystems/SambaLingo-Arabic-Base-70B}
-month = {2},
-year = {2024},
-version = {1.0},
+@misc{csaki2024sambalingo,
+title={SambaLingo: Teaching Large Language Models New Languages},
+author={Zoltan Csaki and Bo Li and Jonathan Li and Qiantong Xu and Pian Pawakapan and Leon Zhang and Yun Du and Hengyu Zhao and Changran Hu and Urmish Thakker},
+year={2024},
+eprint={2404.05829},
+archivePrefix={arXiv},
+primaryClass={cs.CL}
 }
 ```
````
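The tokenizer change the README describes (growing Llama-2's 32,000-token vocabulary to 57,000 by adding up to 25,000 non-overlapping tokens from the new language) can be sketched in plain Python. This is a minimal illustration of the "non-overlapping" selection step only; `extend_vocab` and the toy vocabularies are hypothetical stand-ins, not SambaLingo's actual tokenizer-training code.

```python
def extend_vocab(base_vocab, new_tokens, max_new=25_000):
    """Add up to max_new tokens that are not already in base_vocab,
    assigning each a fresh id continuing from the current vocab size."""
    vocab = dict(base_vocab)
    next_id = len(vocab)
    added = 0
    for tok in new_tokens:
        if added >= max_new:
            break
        if tok not in vocab:  # skip tokens that overlap with the base vocabulary
            vocab[tok] = next_id
            next_id += 1
            added += 1
    return vocab

# Toy example: "world" overlaps with the base vocabulary, so only the
# two Arabic tokens are added; len(extended) == 4.
base = {"hello": 0, "world": 1}
extended = extend_vocab(base, ["مرحبا", "world", "عالم"])
```

In a real workflow the model's embedding matrix must also grow to match the new vocabulary size (e.g. `model.resize_token_embeddings(len(tokenizer))` in Hugging Face Transformers) before the new rows are trained.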