Update README.md #4
opened by dododododo

README.md CHANGED
@@ -3,6 +3,9 @@ license: apache-2.0
 ---
 
 # MAP-CC
+
+[**🏠 Homepage**](https://chinese-tiny-llm.github.io) | [**🤗 MAP-CC**](https://huggingface.co/datasets/m-a-p/MAP-CC) | [**🤗 CHC-Bench**](https://huggingface.co/datasets/m-a-p/CHC-Bench) | [**🤗 CT-LLM**](https://huggingface.co/collections/m-a-p/chinese-tiny-llm-660d0133dff6856f94ce0fc6) | [**📖 arXiv**](https://arxiv.org/abs/2404.04167) | [**GitHub**](https://github.com/Chinese-Tiny-LLM/Chinese-Tiny-LLM)
+
 An open-source Chinese pretraining dataset with a scale of 800 billion tokens, offering the NLP community high-quality Chinese pretraining data.
 
 ## Usage Instructions