---
dataset_info:
  features:
  - name: passage
    dtype: string
  splits:
  - name: train
    num_bytes: 18979214734
    num_examples: 88328203
  download_size: 1025261393
  dataset_size: 18979214734
---

# `chinese_clean_passages_80m`

Contains more than **80 million** (88,328,203) **pure & clean** Chinese passages, without any letters, digits, or special tokens.

Passage lengths are mostly between 50 and 200 Chinese characters.

Downloading the dataset via `datasets.load_dataset()` produces 38 data shards of about 340 MB each, roughly 12 GB in total, so make sure there is enough space on your device :)

```
>>> passage_dataset = load_dataset('beyond/chinese_clean_passages_80m')
<<< Downloading data: 100%|█| 341M/341M [00:06<00:00, 52.0MB
    Downloading data: 100%|█| 342M/342M [00:06<00:00, 54.4MB
    Downloading data: 100%|█| 341M/341M [00:06<00:00, 49.1MB
    Downloading data: 100%|█| 341M/341M [00:14<00:00, 23.5MB
    Downloading data: 100%|█| 341M/341M [00:10<00:00, 33.6MB
    Downloading data: 100%|█| 342M/342M [00:07<00:00, 43.1MB
    ...(38 data shards)
```

---

Acknowledgment:\
This dataset is processed/filtered from the [CLUE pre-training corpus](https://github.com/CLUEbenchmark/CLUE).
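The "pure & clean" property described above (no letters, digits, or special tokens; typical length 50-200 characters) can be sketched as a validation function. This is an illustrative re-implementation of the stated criterion, not the actual filter used to build the dataset; the character ranges and length bounds are assumptions based on the description.

```python
import re

# Assumption: "pure Chinese" means CJK Unified Ideographs plus common
# CJK punctuation. Any ASCII letter/digit or other symbol is rejected.
# (Fullwidth digits U+FF10-U+FF19 are deliberately outside these ranges.)
PURE_CHINESE_RE = re.compile(
    r'^[\u4e00-\u9fff\u3000-\u303f\uff01-\uff0f\uff1a-\uff1f]+$'
)

def is_clean_passage(text: str, min_len: int = 50, max_len: int = 200) -> bool:
    """Check that `text` is pure Chinese and within the typical length range.

    The 50-200 bounds mirror the card's "mostly between 50 and 200
    characters" note; real passages may fall outside this range.
    """
    return bool(PURE_CHINESE_RE.fullmatch(text)) and min_len <= len(text) <= max_len
```

Such a check is useful for verifying downloaded shards or for applying the same cleanliness criterion to other corpora.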