Update README.md
README.md CHANGED
@@ -36,7 +36,7 @@ GPT-NeoX-20B, a sibling model to StellarX, is a 20 billion parameter autoregress
 
 ## Training and Evaluation
 
-StellarX's training dataset comprises a comprehensive collection of English-language texts, covering various domains, thanks to the efforts of "redpajama" dataset by the group "
+StellarX's training dataset comprises a comprehensive collection of English-language texts, covering various domains, thanks to the "RedPajama" dataset created by the "togethercomputer" group.
 
 Evaluation of GPT-NeoX 20B performance has demonstrated its competence across a range of natural language tasks. Since this description provides only a brief summary, we refer readers to the GPT-NeoX paper (https://arxiv.org/abs/2204.06745), which compares GPT-NeoX-20B to other models on tasks such as OpenAI's LAMBADA, SciQ, PIQA, TriviaQA, and ARC Challenge.
 
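For context on the data source named in the diff above: below is a minimal sketch of how the RedPajama corpus can be inspected with the Hugging Face `datasets` library. The dataset IDs (`togethercomputer/RedPajama-Data-1T-Sample` and the full `togethercomputer/RedPajama-Data-1T`) come from the public Hugging Face Hub listing, not from this repository, and the `text` field name is an assumption about the record schema.

```python
# Minimal sketch: peeking at the RedPajama corpus via Hugging Face `datasets`.
# Assumes the public Hub dataset "togethercomputer/RedPajama-Data-1T-Sample";
# the full 1T-token corpus lives at "togethercomputer/RedPajama-Data-1T".
# (Script-based dataset repos may additionally require trust_remote_code=True.)
from datasets import load_dataset

# Streaming avoids downloading the whole corpus before iterating.
ds = load_dataset(
    "togethercomputer/RedPajama-Data-1T-Sample",
    split="train",
    streaming=True,
)

# Print the start of the first few documents ("text" field assumed).
for i, example in enumerate(ds):
    print(example["text"][:200].replace("\n", " "))
    if i >= 2:
        break
```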
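Likewise, a hedged sketch of how the kind of benchmark comparison cited in the evaluation paragraph is commonly reproduced with EleutherAI's lm-evaluation-harness, the tool used for the GPT-NeoX paper's evaluations. The `simple_evaluate` entry point and the exact task names assume a recent harness release (0.4+); older versions exposed a different CLI, so adjust for whatever version is installed.

```python
# Sketch: benchmark evaluation in the style of the GPT-NeoX paper, using
# EleutherAI's lm-evaluation-harness (pip install lm-eval). The API below
# assumes harness >= 0.4; older releases used a main.py CLI instead.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",  # Hugging Face transformers backend
    model_args="pretrained=EleutherAI/gpt-neox-20b,dtype=float16",
    # Task names as registered in the harness; "lambada_openai" is the
    # detokenized LAMBADA variant used in OpenAI's GPT evaluations.
    tasks=["lambada_openai", "sciq", "piqa", "triviaqa", "arc_challenge"],
    batch_size=8,
)

# Per-task metrics (accuracy, perplexity, etc.) live under results["results"].
for task, metrics in results["results"].items():
    print(task, metrics)
```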