Commit f454585 (1 parent: 7ad4c89), committed by adirik

filter dataset based on licenses

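Per the commit message, the corpus was filtered by license, and the diff below matches that: all fourteen original shards are deleted and a single, much larger train.parquet takes their place. The schema in the removed README has no license column, so the filter was presumably run against arXiv license metadata keyed by article ID. A minimal sketch of such a pass, assuming a hypothetical allow-list of IDs (the actual criteria and ID list are not part of this commit):

```py
from datasets import load_dataset

# Hypothetical allow-list of arXiv IDs whose licenses permit
# redistribution; in practice this would be built from arXiv
# license metadata, which is not part of this commit.
allowed_ids = {"2301.00001", "2301.00002"}

dataset = load_dataset("neuralwork/arxiver", split="train")

# Keep only the allow-listed papers.
filtered = dataset.filter(lambda row: row["id"] in allowed_ids)

# Write a single consolidated shard, mirroring the
# data/train.parquet produced by this commit.
filtered.to_parquet("data/train.parquet")
```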
README.md DELETED
@@ -1,68 +0,0 @@
- ---
- dataset_info:
-   features:
-   - name: id
-     dtype: string
-   - name: title
-     dtype: string
-   - name: abstract
-     dtype: string
-   - name: authors
-     dtype: string
-   - name: published_date
-     dtype: string
-   - name: link
-     dtype: string
-   - name: markdown
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 6952989384
-     num_examples: 138380
-   download_size: 3233113336
-   dataset_size: 6952989384
- configs:
- - config_name: default
-   data_files:
-   - split: train
-     path: data/train-*
- license: cc-by-nc-sa-4.0
- ---
- ## Arxiver Dataset
- Arxiver consists of 138,830 [arXiv](https://arxiv.org/) papers converted to multi-markdown (**.mmd**) format. Our dataset includes original arXiv article IDs, titles, abstracts, authors, publication dates, URLs and corresponding markdown files published between January 2023 and October 2023.
-
- We hope our dataset will be useful for various applications such as semantic search, domain specific language modeling, question answering and summarization.
-
- ## Curation
- The Arxiver dataset is created using a neural OCR - [Nougat](https://facebookresearch.github.io/nougat/). After OCR processing, we apply custom text processing steps to refine the data. This includes extracting author information, removing reference sections, and performing additional cleaning and formatting.
-
- ## Using Arxiver
- You can easily download and use the arxiver dataset with Hugging Face's [datasets](https://huggingface.co/datasets) library.
- ```py
- from datasets import load_dataset
-
- # whole dataset takes 3.3GB
- dataset = load_dataset("neuralwork/arxiver")
- print(dataset)
- ```
-
- Alternatively, you can stream the dataset to save disk space or to partially download the dataset:
- ```py
- from datasets import load_dataset
-
- dataset = load_dataset("neuralwork/arxiver", streaming=True)
- print(dataset)
- print(next(iter(dataset['train'])))
- ```
-
- ## References
- The original articles are maintained by [arXiv](https://arxiv.org/) and copyrighted to the original authors, please refer to the arXiv license information [page](https://info.arxiv.org/help/license/index.html) for details. We release our dataset with a Creative Commons Attribution-Noncommercial-ShareAlike (CC BY-NC-SA 4.0) license, if you use this dataset in your research or project, please cite it as follows:
- ```
- @misc{acar_arxiver2024,
-   author = {Alican Acar, Alara Dirik, Muhammet Hatipoglu},
-   title = {ArXiver},
-   year = {2024},
-   publisher = {Hugging Face},
-   howpublished = {\url{https://huggingface.co/datasets/neuralwork/arxiver}}
- }
- ```
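With the data now consolidated into one file, the shard can also be read directly with pandas (one of the libraries tagged on this dataset) instead of datasets. A short sketch, assuming a huggingface_hub version recent enough to provide the hf:// fsspec protocol:

```py
import pandas as pd

# Requires huggingface_hub for the hf:// protocol (via fsspec).
df = pd.read_parquet("hf://datasets/neuralwork/arxiver/data/train.parquet")

print(df.shape)
print(df.columns.tolist())
```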
data/train-00000-of-00014.parquet DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:3278ed0880b7732efa1038b19efd09cd261cde44540b891893996ff213a5dad1
- size 229148756
 
data/train-00001-of-00014.parquet DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:38c6bb228f4b48ea125c5f023985841fff29309db14c22c424e828f6acae2cdb
- size 231683980
 
data/train-00002-of-00014.parquet DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:72349944da1728061a93249785e06dfac16411f29d2229b9302b07fcac128311
- size 231592728
 
data/train-00004-of-00014.parquet DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:bd2cb2178409b890ed0cc4f17f777d41e6b9bd4edce0de8740d8309163940d5d
- size 232317642
 
data/train-00005-of-00014.parquet DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:bcf624a399480ac351883826d0fcd2fd1ded013d962bb86ddee197cb42c45f31
- size 230570137
 
data/train-00006-of-00014.parquet DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:60d2b87fc19973aeadb739b7d246301b5c7166f7beb81f9b0a1338efd1df67a3
- size 229294914
 
data/train-00007-of-00014.parquet DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:ef9cac24915eebb97b37c4b9ed3786e828c67fcf8c7d41c5cb5716c6c3d0e227
- size 230103952
 
data/train-00008-of-00014.parquet DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:068717bc5b20ae071774367e6551ea1f5b1b96e730e4ac0794e6c6ef19798066
- size 228600546
 
data/train-00009-of-00014.parquet DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:d05391fa4eaf5ead5d992b94337f5e0d4cd00ae1a8da1e195a711653b7ca1199
- size 232749628
 
data/train-00010-of-00014.parquet DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:0d47c9f8538a3fab558136d8ce86b8833b9c5e6eb3b3bf98ca1046fe9511a84d
- size 230664805
 
data/train-00011-of-00014.parquet DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:0da69b52db56c22fbdc60913dcd02a1385e034926eded2c642bb7ab2a7e7a318
- size 235495008
 
data/train-00012-of-00014.parquet DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:a40e5876898e41b7d310cdf8816998894057ca7f26aa2ff595f820ffd4d1f0a1
- size 231838196
 
data/train-00013-of-00014.parquet DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:a6083edfa5653b393b4e059fbad85c6183fff975dfadd9e346d2dceeced87f0c
- size 230194784
 
data/{train-00003-of-00014.parquet → train.parquet} RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:e2ec3f524daed003c067a26a1528b1a9b75a3a3bc13e4ddd975ba83ce1c19f20
- size 228858260
+ oid sha256:504f92ee194a39ac7cfc581ea577d311da5740f8aca79b90ab13af615904db1f
+ size 1441766050
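The pointer files above follow the Git LFS pointer format: oid is the SHA-256 of the actual blob and size is its byte count. A quick integrity check for the new train.parquet, assuming it has already been downloaded to a local path:

```py
import hashlib

# oid and size copied from the new LFS pointer in the hunk above.
EXPECTED_OID = "504f92ee194a39ac7cfc581ea577d311da5740f8aca79b90ab13af615904db1f"
EXPECTED_SIZE = 1441766050

h = hashlib.sha256()
size = 0
with open("data/train.parquet", "rb") as f:  # assumed local download path
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        h.update(chunk)
        size += len(chunk)

assert size == EXPECTED_SIZE, f"size mismatch: {size}"
assert h.hexdigest() == EXPECTED_OID, "sha256 does not match the LFS pointer"
```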