Update README.md
@alenusch
added the correct citation.
README.md CHANGED
@@ -74,6 +74,35 @@ tags:

Multilingual language model. This model was trained on the **61** languages from **25** language families (see the list below).

+## Paper
+
+**mGPT: Few-Shot Learners Go Multilingual**
+
+Published at TACL 2024 (MIT Press). Presented at EMNLP 2023.
+
+[Abstract](https://arxiv.org/abs/2204.07580) [PDF](https://arxiv.org/pdf/2204.07580.pdf)
+
+```
+@article{shliazhko-etal-2024-mgpt,
+    title = "m{GPT}: Few-Shot Learners Go Multilingual",
+    author = "Shliazhko, Oleh and
+      Fenogenova, Alena and
+      Tikhonova, Maria and
+      Kozlova, Anastasia and
+      Mikhailov, Vladislav and
+      Shavrina, Tatiana",
+    journal = "Transactions of the Association for Computational Linguistics",
+    volume = "12",
+    year = "2024",
+    address = "Cambridge, MA",
+    publisher = "MIT Press",
+    url = "https://aclanthology.org/2024.tacl-1.4",
+    doi = "10.1162/tacl_a_00633",
+    pages = "58--79",
+    abstract = "This paper introduces mGPT, a multilingual variant of GPT-3, pretrained on 61 languages from 25 linguistically diverse language families using Wikipedia and the C4 Corpus. We detail the design and pretraining procedure. The models undergo an intrinsic and extrinsic evaluation: language modeling in all languages, downstream evaluation on cross-lingual NLU datasets and benchmarks in 33 languages, and world knowledge probing in 23 languages. The in-context learning abilities are on par with the contemporaneous language models while covering a larger number of languages, including underrepresented and low-resource languages of the Commonwealth of Independent States and the indigenous peoples in Russia. The source code and the language models are publicly available under the MIT license.",
+}
+```
+
## Dataset

The model was pretrained on 600 GB of texts, mostly from mC4 and Wikipedia. The training data was deduplicated: deduplication includes 64-bit hashing of each text in the corpus, keeping only texts with a unique hash. We also filter documents by their text compression rate using zlib; the most strongly and weakly compressing deduplicated texts are discarded.
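
The preprocessing described in the Dataset section (hash-based deduplication followed by compression-rate filtering) can be sketched in a few lines of Python. This is only a minimal illustration of the idea, not the released mGPT pipeline: the `dedup_and_filter` helper, the choice of a truncated BLAKE2b digest as the 64-bit hash, and the `low`/`high` ratio thresholds are assumptions for the example.

```python
import hashlib
import zlib


def dedup_and_filter(texts, low=0.2, high=0.9):
    """Illustrative sketch: keep one copy of each text via a 64-bit hash,
    then drop texts whose zlib compression rate is extreme.
    `low`/`high` are made-up thresholds, not the values used for mGPT."""
    seen, kept = set(), []
    for text in texts:
        raw = text.encode("utf-8")
        digest = hashlib.blake2b(raw, digest_size=8).digest()  # 64-bit hash of the text
        if digest in seen:       # same hash -> treat as a duplicate and skip
            continue
        seen.add(digest)
        ratio = len(zlib.compress(raw)) / max(len(raw), 1)     # compression rate
        if low < ratio < high:   # discard the most strongly and most weakly compressing texts
            kept.append(text)
    return kept
```

At corpus scale these two filters would run in a streaming or distributed fashion, but the shape is the same: exact deduplication by hash, then a band-pass on the compression ratio.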