---
dataset_info:
  features:
    - name: text
      dtype: string
  splits:
    - name: train
      num_bytes: 4333625855
      num_examples: 3246583
  download_size: 2246277774
  dataset_size: 4333625855
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
language:
  - fa
  - en
tags:
  - farsi
  - persian
  - english
  - corpus
  - normalized
---

# Dataset Summary

The Persian data in this dataset is a collection of 400k blog posts (RohanAiLab/persian_blog) gathered from more than 10 websites. The dataset can be used for different NLP tasks such as language modeling, tokenizer training, and text generation.

- To see the Persian data in the Viewer tab, click here.

The English data in this dataset is merged from the english-wiki-corpus dataset.
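
Each record has a single `text` field and everything lives in one `train` split (see the metadata above). A minimal loading sketch, assuming the Hugging Face `datasets` library; the repository id below is a placeholder, not the actual id:

```python
# Minimal usage sketch; requires the `datasets` library.
from datasets import load_dataset

repo_id = "<this-dataset-repo-id>"  # placeholder: replace with this dataset's repository id

# Stream the single `train` split so the ~2.2 GB download is not pulled in one go.
ds = load_dataset(repo_id, split="train", streaming=True)

for example in ds:
    print(example["text"][:200])  # each record has one `text` field
    break
```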

Note: If you need only the Persian corpus, click here.

Note: Both the Persian and English data in this dataset have been normalized, and unnecessary tokens have been removed.
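
Since tokenizer training is one of the use cases mentioned above, here is a rough sketch of streaming the corpus into a byte-level BPE tokenizer, assuming the `tokenizers` library; the repository id, batch size, vocabulary size, and output directory are illustrative assumptions, not values from this card:

```python
# Sketch: train a byte-level BPE tokenizer on the streamed corpus.
import os

from datasets import load_dataset
from tokenizers import ByteLevelBPETokenizer

repo_id = "<this-dataset-repo-id>"  # placeholder: replace with this dataset's repository id
ds = load_dataset(repo_id, split="train", streaming=True)

def batch_iterator(batch_size=1000):
    """Yield batches of raw text from the streamed `train` split."""
    batch = []
    for example in ds:
        batch.append(example["text"])
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch

tokenizer = ByteLevelBPETokenizer()
tokenizer.train_from_iterator(batch_iterator(), vocab_size=32000, min_frequency=2)

os.makedirs("bpe-tokenizer", exist_ok=True)
tokenizer.save_model("bpe-tokenizer")  # writes vocab.json and merges.txt
```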