url (string, 61-61) | repository_url (string, 1 class) | labels_url (string, 75-75) | comments_url (string, 70-70) | events_url (string, 68-68) | html_url (string, 49-51) | id (int64, 1.05B-1.38B) | node_id (string, 18-19) | number (int64, 3.26k-4.99k) | title (string, 1-162) | user (dict) | labels (list) | state (string, 2 classes) | locked (bool, 1 class) | assignee (dict) | assignees (list) | milestone (null) | comments (sequence) | created_at (int64, 1,637B-1,664B) | updated_at (int64, 1,637B-1,664B) | closed_at (int64, 1,637B-1,664B, ⌀) | author_association (string, 3 classes) | active_lock_reason (null) | body (string, 2-36.2k, ⌀) | reactions (dict) | timeline_url (string, 70-70) | performed_via_github_app (null) | state_reason (string, 3 classes) | draft (bool, 2 classes) | pull_request (dict) | is_pull_request (bool, 2 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/4888 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4888/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4888/comments | https://api.github.com/repos/huggingface/datasets/issues/4888/events | https://github.com/huggingface/datasets/issues/4888 | 1,349,447,521 | I_kwDODunzps5Qbu9h | 4,888 | Dataset Viewer issue for subjqa | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"It's a bug in the viewer, thanks for reporting it. We're hoping to update to a new version in the next few days which should fix it.",
"Fixed \r\n\r\nhttps://huggingface.co/datasets/subjqa\r\n\r\n<img width=\"1040\" alt=\"Capture d’écran 2022-09-08 à 10 23 26\" src=\"https://user-images.githubusercontent.com/1676121/189073210-2a57ff88-8bb1-44bd-851e-0e75473cea3f.png\">\r\n"
] | 1,661,347,580,000 | 1,662,625,422,000 | 1,662,625,422,000 | MEMBER | null | ### Link
https://huggingface.co/datasets/subjqa
### Description
Getting the following error for this dataset:
```
Status code: 500
Exception: Status500Error
Message: 2 or more items returned, instead of 1
```
Not sure what's causing it though 🤔
### Owner
Yes | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4888/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4888/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4887 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4887/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4887/comments | https://api.github.com/repos/huggingface/datasets/issues/4887/events | https://github.com/huggingface/datasets/pull/4887 | 1,349,426,693 | PR_kwDODunzps49t_PM | 4,887 | Add "cc-by-nc-sa-2.0" to list of licenses | {
"login": "osanseviero",
"id": 7246357,
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/osanseviero",
"html_url": "https://github.com/osanseviero",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Sorry for the issue @albertvillanova! I think it's now fixed! :heart: "
] | 1,661,346,709,000 | 1,661,509,892,000 | 1,661,509,760,000 | MEMBER | null | Datasets side of https://github.com/huggingface/hub-docs/pull/285 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4887/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4887/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4887",
"html_url": "https://github.com/huggingface/datasets/pull/4887",
"diff_url": "https://github.com/huggingface/datasets/pull/4887.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4887.patch",
"merged_at": 1661509760000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4886 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4886/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4886/comments | https://api.github.com/repos/huggingface/datasets/issues/4886/events | https://github.com/huggingface/datasets/issues/4886 | 1,349,285,569 | I_kwDODunzps5QbHbB | 4,886 | Loading huggan/CelebA-HQ throws pyarrow.lib.ArrowInvalid | {
"login": "JeanKaddour",
"id": 11850255,
"node_id": "MDQ6VXNlcjExODUwMjU1",
"avatar_url": "https://avatars.githubusercontent.com/u/11850255?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JeanKaddour",
"html_url": "https://github.com/JeanKaddour",
"followers_url": "https://api.github.com/users/JeanKaddour/followers",
"following_url": "https://api.github.com/users/JeanKaddour/following{/other_user}",
"gists_url": "https://api.github.com/users/JeanKaddour/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JeanKaddour/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JeanKaddour/subscriptions",
"organizations_url": "https://api.github.com/users/JeanKaddour/orgs",
"repos_url": "https://api.github.com/users/JeanKaddour/repos",
"events_url": "https://api.github.com/users/JeanKaddour/events{/privacy}",
"received_events_url": "https://api.github.com/users/JeanKaddour/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi! IIRC one of the files in this dataset is corrupted due to https://github.com/huggingface/datasets/pull/4081 (fixed now).\r\n\r\n@NielsRogge Could you please re-generate and re-push this dataset (or I can do it if you share the generation script)?"
] | 1,661,340,261,000 | 1,662,654,544,000 | null | NONE | null | ## Describe the bug
Loading huggan/CelebA-HQ throws pyarrow.lib.ArrowInvalid
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('huggan/CelebA-HQ')
```
## Expected results
See https://colab.research.google.com/drive/141LJCcM2XyqprPY83nIQ-Zk3BbxWeahq?usp=sharing#scrollTo=N3ml_7f8kzDd
## Actual results
```
File "/home/jean/projects/cold_diffusion/celebA.py", line 4, in <module>
dataset = load_dataset('huggan/CelebA-HQ')
File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/load.py", line 1793, in load_dataset
builder_instance.download_and_prepare(
File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/builder.py", line 704, in download_and_prepare
self._download_and_prepare(
File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/builder.py", line 793, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/builder.py", line 1274, in _prepare_split
for key, table in logging.tqdm(
File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 67, in _generate_tables
parquet_file = pq.ParquetFile(f)
File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/pyarrow/parquet/__init__.py", line 286, in __init__
self.reader.open(
File "pyarrow/_parquet.pyx", line 1227, in pyarrow._parquet.ParquetReader.open
File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: datasets-2.4.1.dev0
- Platform: Ubuntu 18.04
- Python version: 3.10
- PyArrow version: pyarrow 9.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4886/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4886/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4885 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4885/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4885/comments | https://api.github.com/repos/huggingface/datasets/issues/4885/events | https://github.com/huggingface/datasets/issues/4885 | 1,349,181,448 | I_kwDODunzps5QauAI | 4,885 | Create dataset from list of dicts | {
"login": "sanderland",
"id": 48946947,
"node_id": "MDQ6VXNlcjQ4OTQ2OTQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/48946947?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanderland",
"html_url": "https://github.com/sanderland",
"followers_url": "https://api.github.com/users/sanderland/followers",
"following_url": "https://api.github.com/users/sanderland/following{/other_user}",
"gists_url": "https://api.github.com/users/sanderland/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanderland/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanderland/subscriptions",
"organizations_url": "https://api.github.com/users/sanderland/orgs",
"repos_url": "https://api.github.com/users/sanderland/repos",
"events_url": "https://api.github.com/users/sanderland/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanderland/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi @sanderland, thanks for your enhancement proposal.\r\n\r\nI agree with you that this would be useful.\r\n\r\nPlease note that under the hood, we use PyArrow tables as backend:\r\n- The implementation of `Dataset.from_dict` uses the PyArrow `Table.from_pydict`\r\n\r\nTherefore, I would suggest:\r\n- Implementing `Dataset.from_list` using the PyArrow `Table.from_pylist`\r\n\r\nWhat do you think?\r\nLet's see if other people have other suggestions...",
"Thanks for the quick and positive reply @albertvillanova! \r\n`from_list` seems sensible. Have opened a PR so we can discuss details there.",
"Resolved via #4890."
] | 1,661,335,284,000 | 1,662,652,972,000 | 1,662,652,972,000 | CONTRIBUTOR | null | I often find myself with data from a variety of sources, and a list of dicts is very common among these.
However, converting this to a Dataset is a little awkward, requiring either
```Dataset.from_pandas(pd.DataFrame(formatted_training_data))```
Which can error out on some more exotic values as 2-d arrays for reasons that are not entirely clear
> ArrowInvalid: ('Can only convert 1-dimensional array values', 'Conversion failed for column labels with type object')
Alternatively:
```Dataset.from_dict({k: [s[k] for s in formatted_training_data] for k in formatted_training_data[0].keys()})```
Which works, but is a little ugly.
**Describe the solution you'd like**
Either `.from_dict` accepting a list of dicts, or a `.from_records` function accepting such.
I am happy to PR this, just wanted to check you are happy to accept this and that I haven't missed something obvious, and which of the solutions would be preferred.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4885/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4885/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4884 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4884/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4884/comments | https://api.github.com/repos/huggingface/datasets/issues/4884/events | https://github.com/huggingface/datasets/pull/4884 | 1,349,105,946 | PR_kwDODunzps49s6Aj | 4,884 | Fix documentation card of math_qa dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4884). All of your documentation changes will be reflected on that endpoint."
] | 1,661,331,656,000 | 1,661,340,797,000 | 1,661,340,796,000 | MEMBER | null | Fix documentation card of math_qa dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4884/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4884/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4884",
"html_url": "https://github.com/huggingface/datasets/pull/4884",
"diff_url": "https://github.com/huggingface/datasets/pull/4884.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4884.patch",
"merged_at": 1661340796000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4883 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4883/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4883/comments | https://api.github.com/repos/huggingface/datasets/issues/4883/events | https://github.com/huggingface/datasets/issues/4883 | 1,349,083,235 | I_kwDODunzps5QaWBj | 4,883 | With dataloader RSS memory consumed by HF datasets monotonically increases | {
"login": "apsdehal",
"id": 3616806,
"node_id": "MDQ6VXNlcjM2MTY4MDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/3616806?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/apsdehal",
"html_url": "https://github.com/apsdehal",
"followers_url": "https://api.github.com/users/apsdehal/followers",
"following_url": "https://api.github.com/users/apsdehal/following{/other_user}",
"gists_url": "https://api.github.com/users/apsdehal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/apsdehal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/apsdehal/subscriptions",
"organizations_url": "https://api.github.com/users/apsdehal/orgs",
"repos_url": "https://api.github.com/users/apsdehal/repos",
"events_url": "https://api.github.com/users/apsdehal/events{/privacy}",
"received_events_url": "https://api.github.com/users/apsdehal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Are you sure there is a leak? How can I see it? You shared the script but not the output which you believe should indicate a leak.\r\n\r\nI modified your reproduction script to print only once per try as your original was printing too much info and you absolutely must add `gc.collect()` when doing any memory measurements, since python's GC is scheduled so you might be measuring the wrong thing. This gives us:\r\n\r\n```\r\nimport psutil\r\nimport os\r\nimport gc\r\nfrom transformers import BertTokenizer\r\nfrom datasets import load_dataset\r\nfrom torch.utils.data import DataLoader\r\n\r\nBATCH_SIZE = 32\r\nNUM_TRIES = 100\r\n\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-uncased\")\r\ndef transform(x):\r\n x.update(tokenizer(x[\"text\"], return_tensors=\"pt\", max_length=64, padding=\"max_length\", truncation=True))\r\n x.pop(\"text\")\r\n x.pop(\"label\")\r\n return x\r\ndataset = load_dataset(\"imdb\", split=\"train\")\r\ndataset.set_transform(transform)\r\ntrain_loader = DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=4)\r\n\r\nmem_before = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)\r\n\r\ncount = 0\r\nwhile count < NUM_TRIES:\r\n for idx, batch in enumerate(train_loader): pass\r\n gc.collect()\r\n mem_after = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)\r\n print(count, mem_after - mem_before)\r\n count += 1\r\n```\r\n\r\nNow running it:\r\n\r\n```\r\n$ python dl-leak.py \r\nReusing dataset imdb (/home/stas/.cache/huggingface/datasets/imdb/plain_text/1.0.0/2fdd8b9bcadd6e7055e742a706876ba43f19faee861df134affd7a3f60fc38a1)\r\n0 4.43359375\r\n1 4.4453125\r\n2 4.44921875\r\n3 4.44921875\r\n4 4.4609375\r\n5 4.46484375\r\n6 4.46484375\r\n7 4.46484375\r\n8 4.46484375\r\n9 4.46484375\r\n10 4.46484375\r\n11 4.46484375\r\n12 4.46484375\r\n13 4.46484375\r\n14 4.46484375\r\n15 4.46484375\r\n16 4.46484375\r\n```\r\n\r\nIt's normal that at the beginning there is a small growth in memory usage, but after 5 cycles it gets steady.",
"Unless of course you're referring the memory growth during the first try. Is that what you're referring to? And since your ds is small it's hard to see the growth - could it be just because some records are longer and it needs to allocate more memory for those?\r\n\r\nThough while experimenting with this I have observed a peculiar thing, if I concatenate 2 datasets, I don't see any growth at all. But that's probably because the program allocated additional peak RSS memory to concatenate and then is re-using the memory\r\n\r\nI basically tried to see if I make the dataset much longer, I'd expect not to see any memory growth once the 780 records of the imdb ds have been processed once.",
"It is hard to say if it is directly reproducible in this setup. Maybe it is specific to the images stored in the CM4 case which cause a memory leak. I am still running your script and seeing if I can reproduce that particular leak in this case.",
"I was able to reproduce the leak with:\r\n\r\n```\r\nimport psutil\r\nimport os\r\nimport gc\r\nfrom datasets import load_from_disk\r\nimport time\r\n\r\nDATASET_PATH = \"/hf/m4-master/data/cm4/cm4-10000-v0.1\"\r\n\r\ndataset = load_from_disk(DATASET_PATH)\r\n\r\n# truncate to a tiny dataset\r\ndataset = dataset.select(range(1000))\r\n\r\nprint(f\"dataset: {len(dataset)} records\")\r\n\r\nmem_before = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)\r\nfor idx, rec in enumerate(dataset):\r\n if idx % 100 == 0:\r\n gc.collect()\r\n mem_after = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)\r\n print(f\"{idx:4d} {mem_after - mem_before:12.4f}MB\")\r\n```\r\nYou need to adjust the DATASET_PATH record.\r\n\r\nwhich you get from\r\n\r\n```\r\ngsutil -m cp \"gs://hf-science-m4/cm4/cm4-10000-v0.1/dataset.arrow\" \"gs://hf-science-m4/cm4/cm4-10000-v0.1/dataset_info.json\" \"gs://hf-science-m4/cm4/cm4-10000-v0.1/state.json\" .\r\n```\r\n(I assume the hf folks have the perms) - it's a smallish dataset (10k)\r\n\r\nthen you run:\r\n```\r\n$ python ds.py\r\ndataset: 1000 records\r\n 0 1.0156MB\r\n 100 126.3906MB\r\n 200 142.8906MB\r\n 300 168.5586MB\r\n 400 218.3867MB\r\n 500 230.7070MB\r\n 600 238.9570MB\r\n 700 263.3789MB\r\n 800 288.1289MB\r\n 900 300.5039MB\r\n```\r\n\r\nyou should be able to see the leak ",
"This issue has nothing to do with `PIL`'s decoder. I removed it and the problem is still there.\r\n\r\nI then traced this leak to this single call: `pa_table.to_pydict()` here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/08a7b389cdd6fb49264a72aa8ccfc49a233494b6/src/datasets/formatting/formatting.py#L138-L140\r\n\r\nI can make it leak much faster by modifying that code to repeat `pa_table.to_pydict()` many times in a row. It shouldn't have that impact:\r\n\r\n```\r\nclass PythonArrowExtractor(BaseArrowExtractor[dict, list, dict]):\r\n def extract_row(self, pa_table: pa.Table) -> dict:\r\n x = [pa_table.to_pydict() for x in range(200)]\r\n return _unnest(pa_table.to_pydict())\r\n```\r\n\r\n@lhoestq - do you know what might be happening inside `pa_table.to_pydict()`, as this is in the `pyarrow` domain. Perhaps you know someone to tag from that project?\r\n\r\nProbably next need to remove `datasets` from the equation and make a reproducible case with just `pyarrow` directly.\r\n\r\nThe problem already happens with `pyarrow==6.0.0` or later (minimum for current `datasets`)\r\n\r\nI'm also trying to dig in with `objgraph` to see if there are any circular references which prevent objects from being freed, but no luck there so far. And I'm pretty sure `to_pydict` is not a python code, so the problem is likely to happen somewhere outside of python's GC.",
"This appears to be the same issue I think: https://github.com/huggingface/datasets/issues/4528\r\nI dug into the repro code there and it's the same behavior with the same leak, but it's a pure nlp dataset and thus much faster to work with. \r\n",
"I went all the way back to `pyarrow==1.0.0` and `datasets==1.12.0` and the problem is still there. How is it even possible that it wasn't noticed all this time. \r\n\r\nCould it be that the leak is in some 3rd party component `pyarrow` relies on? as while downgrading I have only downgraded the above 2 packages.\r\n",
"Also found this warning \r\n\r\n> Be careful: if you don't pass the ArrowArray struct to a consumer,\r\n> array memory will leak. This is a low-level function intended for\r\n> expert users.\r\n\r\nsee: https://github.com/apache/arrow/blob/99b57e84277f24e8ec1ddadbb11ef8b4f43c8c89/python/pyarrow/table.pxi#L2515-L2517\r\n\r\nperhaps something triggers this condition?\r\n\r\nI have no idea if it's related - this is just something that came up during my research.",
"Does it crash with OOM at some point? If it doesn't, it isn't a leak, just agressive caching or a custom allocator that doesn't like to give memory back (not uncommon). #4528 looks like it hits a steady state.\r\n\r\nI believe the underlying arrow libs use a custom C allocator. Some of those are designed not to give back to OS, but keep heap memory for themselves to re-use (hitting up the OS involves more expensive mutex locks, contention, etc). The greedy behaviour can be undesirable though. There are likely flags to change the allocator behaviour, and one could likely build without any custom allocators (or use a different one).",
"> Does it crash with OOM at some point?\r\n\r\nIn the original setup where we noticed this problem, it was indeed ending in an OOM",
"> https://github.com/huggingface/datasets/issues/4528 looks like it hits a steady state.\r\n\r\n@rwightman in the plot I shared, the steady state comes from the `time.sleep(100)` I added in the end of the script, to showcase that even the garbage collector couldn't free that allocated memory.\r\n",
"Could this be related to this discussion about a potential memory leak in pyarrow: https://issues.apache.org/jira/browse/ARROW-11007 ?\r\n\r\n(Note: I've tried `import pyarrow; pyarrow.jemalloc_set_decay_ms(0)` and the memory leak is still happening on your toy example)",
"> @lhoestq - do you know what might be happening inside pa_table.to_pydict(), as this is in the pyarrow domain. Perhaps you know someone to tag from that project?\r\n\r\n`to_pydict` calls `to_pylist` on each column (i.e. on each PyArrow Array). Then it iterates on the array and calls `as_py` on each element. The `as_py` implementation depends on the data type. For strings I think it simply gets the buffer that contains the binary string data that is defined in C++\r\n\r\nThe Arrow team is pretty responsive at [email protected] if it can help\r\n\r\n> Probably next need to remove datasets from the equation and make a reproducible case with just pyarrow directly.\r\n\r\nThat would be ideal indeed. Would be happy to help on this, can you give me access to the bucket so I can try with your data ?",
"> That would be ideal indeed. Would be happy to help on this, can you give me access to the bucket so I can try with your data ?\r\n\r\nI added you to the bucket @lhoestq ",
"It looks like an issue with memory mapping:\r\n- the amount of memory used in the end corresponds to the size of the dataset\r\n- setting `keep_in_memory=True` in `load_from_disk` loads the dataset in RAM, and **doesn't cause any memory leak**",
"Here is a code to reproduce this issue using only PyArrow and a dummy arrow file:\r\n```python\r\nimport psutil\r\nimport os\r\nimport gc\r\nimport pyarrow as pa\r\nimport time\r\n\r\nARROW_PATH = \"tmp.arrow\"\r\n\r\nif not os.path.exists(ARROW_PATH):\r\n arr = pa.array([b\"a\" * (200 * 1024)] * 1000) # ~200MB\r\n table = pa.table({\"a\": arr})\r\n\r\n with open(ARROW_PATH, \"wb\") as f:\r\n writer = pa.RecordBatchStreamWriter(f, schema=table.schema)\r\n writer.write_table(table)\r\n writer.close()\r\n\r\n\r\ndef memory_mapped_arrow_table_from_file(filename: str) -> pa.Table:\r\n memory_mapped_stream = pa.memory_map(filename)\r\n opened_stream = pa.ipc.open_stream(memory_mapped_stream)\r\n pa_table = opened_stream.read_all()\r\n return pa_table\r\n\r\n\r\ntable = memory_mapped_arrow_table_from_file(ARROW_PATH)\r\narr = table[0]\r\n\r\nmem_before = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)\r\nfor idx, x in enumerate(arr):\r\n if idx % 100 == 0:\r\n gc.collect()\r\n time.sleep(0.1)\r\n mem_after = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)\r\n print(f\"{idx:4d} {mem_after - mem_before:12.4f}MB\")\r\n```\r\nprints\r\n```\r\n 0 0.2500MB\r\n 100 19.8008MB\r\n 200 39.3320MB\r\n 300 58.8633MB\r\n 400 78.3945MB\r\n 500 97.9258MB\r\n 600 117.4570MB\r\n 700 136.9883MB\r\n 800 156.5195MB\r\n 900 176.0508MB\r\n```\r\nNote that this example simply iterates over the `pyarrow.lib.BinaryScalar` objects in the array. Running `.as_py()` is not needed to experience the memory issue.",
"@lhoestq that does indeed increase in memory, but if you iterate over array again after the first time, or re-open and remap the same file (repeat `table = memory_mapped_arrow_table_from_file(ARROW_PATH)`) before re-iterating, it doesn't move pas 195MB.... it would appear another step is needed to continue consuming memory past that.. hmmm\r\n\r\nAre the pa_tables held on to anywhere after they are iterated in the real code?\r\n\r\nin my hack, if you do a bunch cut & paste and then change the arr name for each iter \r\n\r\n```\r\ntable = memory_mapped_arrow_table_from_file(ARROW_PATH)\r\narr = table[0]\r\n\r\nfor idx, x in enumerate(arr):\r\n if idx % 100 == 0:\r\n gc.collect()\r\n time.sleep(0.1)\r\n mem_after = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)\r\n print(f\"{idx:4d} {mem_after - mem_before:12.4f}MB\")\r\n\r\ntable = memory_mapped_arrow_table_from_file(ARROW_PATH)\r\narr1 = table[0]\r\n\r\nfor idx, x in enumerate(arr1):\r\n if idx % 100 == 0:\r\n gc.collect()\r\n time.sleep(0.1)\r\n mem_after = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)\r\n print(f\"{idx:4d} {mem_after - mem_before:12.4f}MB\")\r\n\r\ntable = memory_mapped_arrow_table_from_file(ARROW_PATH)\r\narr2 = table[0]\r\n\r\nfor idx, x in enumerate(arr2):\r\n if idx % 100 == 0:\r\n gc.collect()\r\n time.sleep(0.1)\r\n mem_after = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)\r\n print(f\"{idx:4d} {mem_after - mem_before:12.4f}MB\")\r\n```\r\n\r\nit leaks, if all arr are the same name (so prev one gets cleaned up) it does not and goes back to 0, anything that could be holding onto a reference of an intermediary equivalent like arr in the real use case?\r\n\r\n\r\n\r\n",
"Yes, we have already established here https://github.com/huggingface/datasets/issues/4883#issuecomment-1232063891 that when one iterates over the whole dataset multiple times, it consumes a bit more memory in the next few repetitions and then remains steady. \r\n\r\nWhich means that when a new iterator is created over the same dataset, all the memory from the previous iterator is re-used.\r\n\r\nSo the leak happens primarily when the iterator is \"drained\" the first time. which tells me that either a circular reference is created somewhere which only gets released when the iterator is destroyed, or there is some global variable that keeps piling up the memory and doesn't release it in time.\r\n\r\nAlso I noticed some `__del__` methods which won't destroy objects automatically and there is usually a warning against using it https://stackoverflow.com/a/1481512/9201239\r\n\r\nThere are also some `weakref`s in the code which too may lead to leaks or weird problems at times.\r\n",
"@stas00 my point was, I'm not convinced @lhoestq last example illustrates the leak, but rather the differences between memory mapping and in memory usage patterns. If you destroy arr, memory map impl goes back to 0 each iteration. The amount of memory that 'looks' like it is leaked in first pass differes quite a bit between memory mapped vs in memory, but the underlying issue likely a circular reference, or reference(s) which were not cleaned up that would impact either case, but likely much more visible with mmap.",
"Thank you for clarifying, Ross. \r\n\r\nI think we agree that it's almost certain that the `datasets` iterator traps some inner variable that prevents object freeing, since if we create the iterator multiple times (and drain it) after a few runs no new memory is allocated. We could try to dig in more with `objgraph` - my main concern is if the problem happens somewhere outside of python, (i.e. in pyarrow cpp implementation) in which case it'd be much more difficult to trace. \r\n\r\nI wish there was a way on linux to tell the program to free no longer used memory at will.",
"FWIW, I revisted some code I had in the works to use HF datasets w/ timm train & val scripts. There is no leak there across multipe epochs. It uses the defaults. \r\n\r\nIt's worth noting that with imagenet `keep_in_memory=True` isn't even an option because the train arrow file is ~140GB and my local memory is less. The virtual address space reflects mmap (> 150GB) and doesn't increase over epochs that I noticed. I have some perf issues to bring up wrt to the current setup, but that's a separate and lower prio discussion to have elsewhere...",
"# Notes \r\n\r\nAfter reading many issues and trying many things here is the summary of my learning\r\n\r\nI'm now using @lhoestq repro case as it's pyarrow-isolated: https://github.com/huggingface/datasets/issues/4883#issuecomment-1242034985\r\n\r\n\r\n## 1. pyarrow memory backends\r\n\r\nit has 3 backends, I tried them all with the same results\r\n\r\n```\r\npa.set_memory_pool(pa.jemalloc_memory_pool())\r\npa.set_memory_pool(pa.mimalloc_memory_pool())\r\npa.set_memory_pool(pa.system_memory_pool())\r\n```\r\n\r\n## 2. quick release\r\n\r\nThe `jemalloc` backend supports quick release\r\n\r\n```\r\npa.jemalloc_set_decay_ms(0)\r\n```\r\n\r\nit doesn't make any difference in this case\r\n\r\n## 3. actual memory allocations\r\n\r\nthis is a useful tracer for PA memory allocators\r\n```\r\npa.log_memory_allocations(enable=True)\r\n```\r\n\r\nit nicely reports memory allocations and releases when the arrow file is created the first time.\r\n\r\nbut when we then try to do `enumerate(arr)` this logger reports 0 allocations.\r\n\r\nThis summary also reports no allocations when the script run the second time (post file creation):\r\n```\r\nmem_pool = pa.default_memory_pool()\r\nprint(f\"PyArrow mem pool info: {mem_pool.backend_name} backend, {mem_pool.bytes_allocated()} allocated, \"\r\n f\"{mem_pool.max_memory()} max allocated, \")\r\n\r\nprint(f\"PyArrow total allocated bytes: {pa.total_allocated_bytes()}\")\r\n```\r\n\r\nHowever it's easy to see by using `tracemalloc` which only measures python's memory allocations that it's PA that leaks, since `tracemalloc` shows fixed memory\r\n\r\n(this is bolted on top of the original repro script)\r\n\r\n```\r\nimport tracemalloc\r\ntracemalloc.start()\r\n\r\n[...]\r\nfor idx, x in enumerate(arr):\r\n if idx % 10 == 0:\r\n gc.collect()\r\n time.sleep(0.1)\r\n mem_after = psutil.Process(os.getpid()).memory_info().rss / 2**20\r\n mem_use = pa.total_allocated_bytes() - start_use\r\n mem_peak = pool.max_memory() - start_peak_use\r\n\r\n second_size, second_peak = tracemalloc.get_traced_memory()\r\n mem_diff = (second_size - first_size) / 2**20\r\n mem_peak_diff = (second_peak - first_peak) / 2**20\r\n\r\n # pa.jemalloc_memory_pool().release_unused()\r\n # pa.mimalloc_memory_pool().release_unused()\r\n # pa.system_memory_pool().release_unused()\r\n\r\n print(f\"{idx:4d} {mem_after - mem_before:12.4f}MB {mem_diff:12.4f} {mem_peak_diff:12.4f} {memory_mapped_stream.size()/2**20:4.4}MB {mem_use/2**20:4.4}MB {mem_peak/2**20:4.4}MB\")\r\n\r\n```\r\n\r\ngives:\r\n\r\n```\r\n 0 5.4258MB 0.0110 0.0201 195.3MB 0.0MB 0.0MB\r\n 10 25.3672MB 0.0112 0.0202 195.3MB 0.0MB 0.0MB\r\n 20 45.9336MB 0.0112 0.0203 195.3MB 0.0MB 0.0MB\r\n 30 62.4336MB 0.0112 0.0203 195.3MB 0.0MB 0.0MB\r\n 40 83.0586MB 0.0112 0.0203 195.3MB 0.0MB 0.0MB\r\n 50 103.6836MB 0.0112 0.0203 195.3MB 0.0MB 0.0MB\r\n 60 124.3086MB 0.0112 0.0203 195.3MB 0.0MB 0.0MB\r\n 70 140.8086MB 0.0112 0.0203 195.3MB 0.0MB 0.0MB\r\n 80 161.4336MB 0.0112 0.0203 195.3MB 0.0MB 0.0MB\r\n 90 182.0586MB 0.0112 0.0203 195.3MB 0.0MB 0.0MB\r\n```\r\n\r\nthe 3rd and 4th columns are `tracemalloc`'s report.\r\n\r\nthe 5th column is the size of mmaped stream - fixed.\r\n\r\nthe last 2 are the PA's malloc reports - you can see it's totally fixed and 0.\r\n\r\nSo what gives? 
PA's memory allocator says nothing was allocated and we can see python doesn't allocate any memory either.\r\n\r\nAs someone suggested in one of the PA issues that **IPC/GRPC could be the issue.** Any suggestions on how debug this one?\r\n\r\nThe main issue is that one can't step through with a python debugger as `arr` is an opaque cpp object binded to python.\r\n\r\nPlease see the next comment for a possible answer.\r\n\r\n# ref-count\r\n\r\nI also traced reference counts and they are all fixed using either `sys.getrefcount(x)` or `len(gc.get_referrers(x))`\r\n\r\nso it's not the python object\r\n\r\n# Important related discussions\r\n\r\nhttps://issues.apache.org/jira/browse/ARROW-11007 - looks very similar to our issue\r\nin particular this part of the report:\r\nhttps://issues.apache.org/jira/browse/ARROW-11007?focusedCommentId=17279642&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17279642\r\n",
"# There is no leak, just badly communicated linux RSS memory usage stats\r\n\r\nNext, lets revisit @rwightman's suggestion that there is actually no leak.\r\n\r\nAfter all - we are using mmap which **will try to map** the file to RAM as much as it can and then page out if there is no memory. i.e. MMAP is only fast if you have a lot of CPU RAM.\r\n\r\nSo let's do it:\r\n\r\n# Memory mapping OOM test\r\n\r\nWe first quickly start a cgroups-controlled shell which will instantly kill any program that consumes more than 1GB of memory:\r\n\r\n```\r\n$ systemd-run --user --scope -p MemoryHigh=1G -p MemoryMax=1G -p MemorySwapMax=1G --setenv=\"MEMLIMIT=1GB\" bash\r\n```\r\n\r\nLet's check that it indeed does so. Let's change @lhoestq's script to allocate a 10GB arrow file:\r\n\r\n```\r\n$ python -c 'import pyarrow as pa; pa.array([b\"a\" * (2000 * 1024)] * 5000)'\r\nKilled\r\n```\r\noops, that didn't work, as we tried to allocate 10GB when only 1GB is allowed. This is what we want!\r\n\r\nLet's do a sanity check - can we allocate 0.1GB?\r\n```\r\npython -c 'import pyarrow as pa; pa.array([b\"a\" * (2000 * 1024)] * 50)'\r\n```\r\n\r\nYes. So the limited shell does the right thing. It let's allocate `< 1GB` of RSS RAM.\r\n\r\nNext let's go back to @lhoestq's script but with 10GB arrow file.\r\n\r\nwe change his repro script https://github.com/huggingface/datasets/issues/4883#issuecomment-1242034985 to 50x larger file\r\n```\r\n arr = pa.array([b\"a\" * (2000 * 1024)] * 5000) # ~10000MB\r\n```\r\nwe first have to run into a normal unlimited shell so that we don't get killed (as the script allocates 10GB)\r\n\r\nlet's run the script now in the 1GB-limited shell while running a monitor:\r\n\r\n```\r\n$ htop -F python -s M_RESIDENT -u `whoami`\r\n```\r\n\r\nso we have 2 sources of RSS info just in case.\r\n\r\n```\r\n$ python pyar.py\r\n 0 4.3516MB 0.0103 0.0194 9.766e+03MB 0.0MB 0.0MB\r\n 10 24.3008MB 0.0104 0.0195 9.766e+03MB 0.0MB 0.0MB\r\n[...]\r\n4980 9730.3672MB 0.0108 0.0199 9.766e+03MB 0.0MB 0.0MB\r\n4990 9750.9922MB 0.0108 0.0199 9.766e+03MB 0.0MB 0.0MB\r\nPyArrow mem pool info: jemalloc backend, 0 allocated, 0 max allocated,\r\nPyArrow total allocated bytes: 0\r\n```\r\n\r\nBut wait, it reported 10GB RSS both in `htop` and in our log!\r\n\r\nSo that means it never allocated 10GB otherwise it'd have been killed.\r\n\r\n**Which tells us that there is no leak whatsoever** and this is just a really difficult situation where MMAPPED memory is reported as part of RSS which it probably shouldn't. As now we have no way how to measure real memory usage.\r\n\r\nI also attached the script with all the different things I have tried in it, so it should be easy to turn them on/off if you want to reproduce any of my findings.\r\n\r\n[pyar.txt](https://github.com/huggingface/datasets/files/9539430/pyar.txt)\r\n\r\njust rename it to `pyra.py` as gh doesn't let attaching scripts...\r\n\r\n(I have to remember to exit that special mem-limited shell or else I won't be able to do anything serious there.)\r\n\r\n",
"The original leak in the multi-modal code is very likely something else. But of course now it'd be very difficult to trace it using mmap.\r\n\r\nI think to debug we have to set `keep_in_memory=True` in `load_from_disk` to load the small dataset in RAM, so there will be no mmap misleading reporting component and then continue searching for another source of a leak.",
"To add to what @stas00 found, I'm gonna leave some links to where I believe the confusion came from in pyarrow's APIs, for future reference:\r\n* In the section where they talk about [efficiently writing and reading arrow data](https://arrow.apache.org/docs/dev/python/ipc.html?#efficiently-writing-and-reading-arrow-data), they give an example of how \r\n\r\n> Arrow can directly reference the data mapped from disk and avoid having to allocate its own memory. \r\n\r\nAnd where their example shows 0 RSS memory allocation, the way we used to measure RSS shows 39.6719MB allocated. Here's the script to reproduce:\r\n```\r\nimport psutil\r\nimport os\r\nimport gc\r\nimport pyarrow as pa\r\nimport time\r\nimport sys\r\n\r\n# gc.set_debug(gc.DEBUG_LEAK)\r\n# gc.set_threshold(0,0,0)\r\n\r\n#pa.set_memory_pool(pa.mimalloc_memory_pool())\r\n#pa.set_memory_pool(pa.system_memory_pool())\r\n\r\nimport tracemalloc\r\n\r\n#pa.jemalloc_set_decay_ms(0)\r\n# pa.log_memory_allocations(enable=True)\r\n\r\nBATCH_SIZE = 10000\r\nNUM_BATCHES = 1000\r\nschema = pa.schema([pa.field('nums', pa.int32())])\r\nwith pa.OSFile('bigfile.arrow', 'wb') as sink:\r\n with pa.ipc.new_file(sink, schema) as writer:\r\n for row in range(NUM_BATCHES):\r\n batch = pa.record_batch([pa.array(range(BATCH_SIZE), type=pa.int32())], schema)\r\n writer.write(batch)\r\n\r\nstart_use = pa.total_allocated_bytes()\r\npool = pa.default_memory_pool()\r\nstart_peak_use = pool.max_memory()\r\ntracemalloc.start()\r\nfirst_size, first_peak = tracemalloc.get_traced_memory()\r\nmem_before = psutil.Process(os.getpid()).memory_info().rss / 2**20\r\n\r\n\r\n# with pa.OSFile('bigfile.arrow', 'rb') as source:\r\n# loaded_array = pa.ipc.open_file(source).read_all()\r\n\r\nwith pa.memory_map('bigfile.arrow', 'rb') as source:\r\n loaded_array = pa.ipc.open_file(source).read_all()\r\n\r\n\r\nprint(\"LEN:\", len(loaded_array))\r\nprint(\"RSS: {}MB\".format(pa.total_allocated_bytes() >> 20))\r\n\r\ngc.collect()\r\ntime.sleep(0.1)\r\nmem_after = psutil.Process(os.getpid()).memory_info().rss / 2**20\r\nmem_use = pa.total_allocated_bytes() - start_use\r\nmem_peak = pool.max_memory() - start_peak_use\r\nsecond_size, second_peak = tracemalloc.get_traced_memory()\r\nmem_diff = (second_size - first_size) / 2**20\r\nmem_peak_diff = (second_peak - first_peak) / 2**20\r\n\r\nidx = 0\r\nprint(f\"{idx:4d} {mem_after - mem_before:12.4f}MB {mem_diff:12.4f} {mem_peak_diff:12.4f} {mem_use/2**20:4.4}MB {mem_peak/2**20:4.4}MB\")\r\n```\r\ngives:\r\n```\r\n\r\nLEN: 10000000\r\nRSS: 0MB\r\n 0 39.6719MB 0.0132 0.0529 0.0MB 0.0MB\r\n```\r\nWhich again just proves that we uncorrectly measure RSS, in the case of MMAPPED memory\r\n\r\n\r\n* [The recommended way to do memory profiling from Arrow's docs](https://arrow.apache.org/docs/dev/cpp/memory.html#memory-profiling)\r\n",
"@lhoestq, I have been working on a detailed article that shows that MMAP doesn't leak and it's mostly ready. I will share when it's ready.\r\n\r\nThe issue is that we still need to be able to debug memory leaks by turning MMAP off.\r\n\r\nBut, once I tried to show the user that using `load_dataset(... keep_in_memory=True)` is the way to debug an actual memory leak - guess I what I discovered? A potential actual leak.\r\n\r\nHere is the repro:\r\n\r\n```\r\n$ cat ds-mmap.py\r\nfrom datasets import load_dataset\r\nimport gc\r\nimport os\r\nimport psutil\r\n\r\nproc = psutil.Process(os.getpid())\r\ndef mem_read():\r\n gc.collect()\r\n return proc.memory_info().rss / 2**20\r\n\r\ndataset = load_dataset(\"wmt19\", 'cs-en', keep_in_memory=True, streaming=False)['train']\r\n\r\nprint(f\"{'idx':>6} {'RSS':>10} {'Δ RSS':>15}\")\r\nstep = 20000\r\nfor i in range(0, 10*step, step):\r\n mem_before = mem_read()\r\n _ = dataset[i:i+step]\r\n mem_after = mem_read()\r\n print(f\"{i:6d} {mem_after:12.4f}MB {mem_after - mem_before:12.4f}MB \")\r\n```\r\n\r\n```\r\npython ds-io.py\r\nReusing dataset wmt19 (/home/stas/.cache/huggingface/datasets/wmt19/cs-en/1.0.0/c3db1bf4240362ed1ef4673b354f468d70aac66d4e67d45f536d493a0840f0d3)\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 5.66it/s]\r\n idx RSS Δ RSS\r\n 0 1398.4609MB 3.5195MB\r\n 20000 1398.5742MB 0.1133MB\r\n 40000 1398.6016MB 0.0273MB\r\n 60000 1398.6016MB 0.0000MB\r\n 80000 1398.6016MB 0.0000MB\r\n100000 1398.6328MB 0.0312MB\r\n120000 1398.6953MB 0.0625MB\r\n140000 1398.6953MB 0.0000MB\r\n160000 1398.7500MB 0.0547MB\r\n180000 1398.7500MB 0.0000MB\r\n```",
"as I suggested on slack perhaps it was due to dataset records length variation, so with your help I wrote another repro with synthetic records which are all identical - which should remove my hypothese from the equation and we should expect 0 incremental growth as we iterate over the datasets. But alas this is not the case. There is a tiny but definite leak-like behavior.\r\n\r\nHere is the new repro:\r\n\r\n```\r\n$ cat ds-synthetic-no-mmap.py\r\nfrom datasets import load_from_disk, Dataset\r\nimport gc\r\nimport sys\r\nimport os\r\nimport psutil\r\n\r\nproc = psutil.Process(os.getpid())\r\ndef mem_read():\r\n gc.collect()\r\n return proc.memory_info().rss / 2**20\r\n\r\nDS_PATH = \"synthetic-ds\"\r\nif not os.path.exists(DS_PATH):\r\n records = 1_000_000\r\n print(\"Creating a synthetic dataset\")\r\n row = dict(foo=[dict(a='a'*500, b='b'*1000)])\r\n ds = Dataset.from_dict({k: [v] * records for k, v in row.items()})\r\n ds.save_to_disk(DS_PATH)\r\n print(\"Done. Please restart the program\")\r\n sys.exit()\r\n\r\ndataset = load_from_disk(DS_PATH, keep_in_memory=True)\r\nprint(f\"Dataset len={len(dataset)}\")\r\n\r\nprint(f\"{'idx':>8} {'RSS':>10} {'Δ RSS':>15}\")\r\nmem_start = 0\r\nstep = 25_000\r\nwarmup_iterations = 4\r\nfor idx, i in enumerate(range(0, len(dataset), step)):\r\n if idx == warmup_iterations: # skip the first few iterations while things get set up\r\n mem_start = mem_read()\r\n mem_before = mem_read()\r\n _ = dataset[i:i+step]\r\n mem_after = mem_read()\r\n print(f\"{i:8d} {mem_after:12.4f}MB {mem_after - mem_before:12.4f}MB\")\r\nmem_end = mem_read()\r\n\r\nprint(f\"Total diff: {mem_end - mem_start:12.4f}MB (after {warmup_iterations} warmup iterations)\")\r\n```\r\n\r\nand the run:\r\n\r\n```\r\n$ python ds-synthetic-no-mmap.py\r\nDataset len=1000000\r\n idx RSS Δ RSS\r\n 0 1601.9258MB 47.9688MB\r\n 25000 1641.6289MB 39.7031MB\r\n 50000 1641.8594MB 0.2305MB\r\n 75000 1642.1289MB 0.2695MB\r\n 100000 1642.1289MB 0.0000MB\r\n 125000 1642.3789MB 0.2500MB\r\n 150000 1642.3789MB 0.0000MB\r\n 175000 1642.6289MB 0.2500MB\r\n 200000 1642.6289MB 0.0000MB\r\n 225000 1642.8789MB 0.2500MB\r\n 250000 1642.8828MB 0.0039MB\r\n 275000 1643.1328MB 0.2500MB\r\n 300000 1643.1328MB 0.0000MB\r\n 325000 1643.3828MB 0.2500MB\r\n 350000 1643.3828MB 0.0000MB\r\n 375000 1643.6328MB 0.2500MB\r\n 400000 1643.6328MB 0.0000MB\r\n 425000 1643.8828MB 0.2500MB\r\n 450000 1643.8828MB 0.0000MB\r\n 475000 1644.1328MB 0.2500MB\r\n 500000 1644.1328MB 0.0000MB\r\n 525000 1644.3828MB 0.2500MB\r\n 550000 1644.3828MB 0.0000MB\r\n 575000 1644.6328MB 0.2500MB\r\n 600000 1644.6328MB 0.0000MB\r\n 625000 1644.8828MB 0.2500MB\r\n 650000 1644.8828MB 0.0000MB\r\n 675000 1645.1328MB 0.2500MB\r\n 700000 1645.1328MB 0.0000MB\r\n 725000 1645.3828MB 0.2500MB\r\n 750000 1645.3828MB 0.0000MB\r\n 775000 1645.6328MB 0.2500MB\r\n 800000 1645.6328MB 0.0000MB\r\n 825000 1645.8828MB 0.2500MB\r\n 850000 1645.8828MB 0.0000MB\r\n 875000 1646.1328MB 0.2500MB\r\n 900000 1646.1328MB 0.0000MB\r\n 925000 1646.3828MB 0.2500MB\r\n 950000 1646.3828MB 0.0000MB\r\n 975000 1646.6328MB 0.2500MB\r\nTotal diff: 4.5039MB (after 4 warmup iterations)\r\n```\r\nso I'm still not sure why we get this.\r\n\r\nAs you can see I started skipping the first few iterations where memory isn't stable yet. As the actual diff is much larger if we count all iterations.\r\n\r\nWhat do you think?",
"@stas00 my 2 cents from having looked at a LOT of memory leaks over the years, esp in Python, .3% memory increase over that many iterations of something is difficult to say with certainty it is a leak. \r\n\r\nAlso, just looking at RSS makes it hard to analyze leaks. RSS can stay near constant while you are leaking. RSS is paged in mem, if you have a big leak your RSS might not increase much (leaked mem tends not to get used again so often paged out) while your virtual page allocation could be going through the roof...",
"yes, that's true, but unless the leak is big, I'm yet to find another measurement tool.\r\n\r\nTo prove your point here is a very simple IO in a loop program that also reads the same line all over again:\r\n\r\n```\r\n$ cat mmap-no-leak-debug.py\r\nimport gc\r\nimport mmap\r\nimport os\r\nimport psutil\r\nimport sys\r\n\r\nproc = psutil.Process(os.getpid())\r\n\r\nPATH = \"./tmp.txt\"\r\n\r\ndef mem_read():\r\n gc.collect()\r\n return proc.memory_info().rss / 2**20\r\n\r\n# create a large data file with a few long lines\r\nif not os.path.exists(PATH):\r\n with open(PATH, \"w\") as fh:\r\n s = 'a'* 2**27 + \"\\n\" # 128MB\r\n # write ~2GB file\r\n for i in range(16):\r\n fh.write(s)\r\n\r\nprint(f\"{'idx':>4} {'RSS':>10} {'Δ RSS':>12} {'Δ accumulated':>10}\")\r\n\r\ntotal_read = 0\r\ncontent = ''\r\nmem_after = mem_before_acc = mem_after_acc = mem_before = proc.memory_info().rss / 2**20\r\nprint(f\"{0:4d} {mem_after:10.2f}MB {mem_after - 0:10.2f}MB {0:10.2f}MB\")\r\n\r\nmmap_mode = True if \"--mmap\" in sys.argv else False\r\n\r\nwith open(PATH, \"r\") as fh:\r\n\r\n if mmap_mode:\r\n mm = mmap.mmap(fh.fileno(), 0, access=mmap.ACCESS_READ)\r\n\r\n idx = 0\r\n while True:\r\n idx += 1\r\n mem_before = mem_read()\r\n line = mm.readline() if mmap_mode else fh.readline()\r\n if not line:\r\n break\r\n\r\n #total_read += len(line)\r\n\r\n if \"--accumulate\" in sys.argv:\r\n mem_before_acc = mem_read()\r\n content += str(line)\r\n mem_after_acc = mem_read()\r\n\r\n mem_after = mem_read()\r\n\r\n print(f\"{idx:4d} {mem_after:10.2f}MB {mem_after - mem_before:10.2f}MB {mem_after_acc - mem_before_acc:10.2f}MB\")\r\n```\r\n\r\nit has some other instrumentations to do mmap and accumulate data, but let's ignore that for now.\r\n\r\nHere it is running in a simple non-mmap IO:\r\n\r\n```\r\n$ python mmap-no-leak-debug.py\r\n idx RSS Δ RSS Δ accumulated\r\n 0 12.43MB 12.43MB 0.00MB\r\n 1 269.72MB 257.29MB 0.00MB\r\n 2 269.73MB 0.02MB 0.00MB\r\n 3 269.73MB 0.00MB 0.00MB\r\n 4 269.74MB 0.01MB 0.00MB\r\n 5 269.74MB 0.00MB 0.00MB\r\n 6 269.75MB 0.01MB 0.00MB\r\n 7 269.75MB 0.00MB 0.00MB\r\n 8 269.76MB 0.01MB 0.00MB\r\n 9 269.76MB 0.00MB 0.00MB\r\n 10 269.77MB 0.01MB 0.00MB\r\n 11 269.77MB 0.00MB 0.00MB\r\n 12 269.77MB 0.00MB 0.00MB\r\n 13 269.77MB 0.00MB 0.00MB\r\n 14 269.77MB 0.00MB 0.00MB\r\n 15 269.77MB 0.00MB 0.00MB\r\n 16 146.02MB -123.75MB 0.00MB\r\n```\r\n\r\nas you can see even this super-simplistic program that just performs `readline()` slightly increases in RSS over iterations.\r\n\r\nIf you have a better tool for measurement other than RSS, I'm all ears.",
"@stas00 if you aren't using memory maps, you should be able to clearly see the increase in the virtual mem for the process as well. Even then, it could still be challenging to determine if it's leak vs fragmentation due to problematic allocation patterns (not uncommon with Python). Using a better mem allocator like tcmalloc via LD_PRELOAD hooks could reduce impact of fragmentation across both Python and c libs. Not sure that plays nice with any allocator that arrow might use itself though. "
] | 1,661,330,574,000 | 1,663,619,676,000 | null | MEMBER | null | ## Describe the bug
When the HF datasets is used in conjunction with PyTorch Dataloader, the RSS memory of the process keeps on increasing when it should stay constant.
## Steps to reproduce the bug
Run and observe the output of this snippet which logs RSS memory.
```python
import psutil
import os
from transformers import BertTokenizer
from datasets import load_dataset
from torch.utils.data import DataLoader
BATCH_SIZE = 32
NUM_TRIES = 10
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
def transform(x):
x.update(tokenizer(x["text"], return_tensors="pt", max_length=64, padding="max_length", truncation=True))
x.pop("text")
x.pop("label")
return x
dataset = load_dataset("imdb", split="train")
dataset.set_transform(transform)
train_loader = DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=4)
mem_before = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)
count = 0
while count < NUM_TRIES:
for idx, batch in enumerate(train_loader):
mem_after = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)
print(count, idx, mem_after - mem_before)
count += 1
```
## Expected results
Memory should not increase after initial setup and loading of the dataset
## Actual results
Memory continuously increases as can be seen in the log.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.2
- Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-glibc2.10
- Python version: 3.8.13
- PyArrow version: 7.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4883/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
} | https://api.github.com/repos/huggingface/datasets/issues/4883/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4882 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4882/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4882/comments | https://api.github.com/repos/huggingface/datasets/issues/4882/events | https://github.com/huggingface/datasets/pull/4882 | 1,348,913,665 | PR_kwDODunzps49sRtv | 4,882 | Fix language tags resource file | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4882). All of your documentation changes will be reflected on that endpoint."
] | 1,661,321,161,000 | 1,661,349,513,000 | 1,661,349,510,000 | MEMBER | null | This PR fixes/updates/adds ALL language tags from IANA (as of 2022-08-08).
This PR also removes all BCP-47 suffixes (the languages file only contains language subtags, i.e. ISO 639-1 or 639-2 codes; no script/region/variant suffixes). See:
- #4753 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4882/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4882/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4882",
"html_url": "https://github.com/huggingface/datasets/pull/4882",
"diff_url": "https://github.com/huggingface/datasets/pull/4882.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4882.patch",
"merged_at": 1661349510000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4881 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4881/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4881/comments | https://api.github.com/repos/huggingface/datasets/issues/4881/events | https://github.com/huggingface/datasets/issues/4881 | 1,348,495,777 | I_kwDODunzps5QYGmh | 4,881 | Language names and language codes: connecting to a big database (rather than slow enrichment of custom list) | {
"login": "alexis-michaud",
"id": 6072524,
"node_id": "MDQ6VXNlcjYwNzI1MjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/6072524?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexis-michaud",
"html_url": "https://github.com/alexis-michaud",
"followers_url": "https://api.github.com/users/alexis-michaud/followers",
"following_url": "https://api.github.com/users/alexis-michaud/following{/other_user}",
"gists_url": "https://api.github.com/users/alexis-michaud/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexis-michaud/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexis-michaud/subscriptions",
"organizations_url": "https://api.github.com/users/alexis-michaud/orgs",
"repos_url": "https://api.github.com/users/alexis-michaud/repos",
"events_url": "https://api.github.com/users/alexis-michaud/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexis-michaud/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Thanks for opening this discussion, @alexis-michaud.\r\n\r\nAs the language validation procedure is shared with other Hugging Face projects, I'm tagging them as well.\r\n\r\nCC: @huggingface/moon-landing ",
"on the Hub side, there is not fine grained validation we just check that `language:` contains an array of lowercase strings between 2 and 3 characters long =)\r\n\r\nand for `language_bcp47:` we just check it's an array of strings.\r\n\r\nThe only page where we have a hardcoded list of languages is https://huggingface.co/languages and I've been thinking of hooking that page on an external database of languages (so any suggestion is super interesting), but it's not used for validation.\r\n\r\nThat being said, in `datasets` this file https://github.com/huggingface/datasets/blob/main/src/datasets/utils/resources/languages.json is not really used no? Or just in the tagging tool? What about just removing it?\r\n\r\nalso cc'ing @lbourdois who's been active and helpful on those subjects in the past!",
"PS @alexis-michaud is there a DB of language codes you would recommend? That would contain all `ISO 639-1, 639-2 or 639-3 codes` and be kept up to date, and ideally that would be accessible as a Node.js npm package?\r\n\r\ncc @albertvillanova too",
"> PS @alexis-michaud is there a DB of language codes you would recommend? That would contain all `ISO 639-1, 639-2 or 639-3 codes` and be kept up to date, and ideally that would be accessible as a Node.js npm package?\r\n> \r\n> cc @albertvillanova too\r\n\r\nMany thanks for your answer! \r\n\r\nThe Glottolog database is kept up to date, and has information on the closest ISO code for each Glottocode. So providing a clean table with equivalences sounds (to me) like something perfectly reasonable to expect from their team. \r\nTo what extent would [pyglottolog](https://github.com/glottolog/pyglottolog) fit the bill / do the job? (API documentation [here](https://pyglottolog.readthedocs.io/en/latest/index.html)) I'm reaching my technical limitations here: I can't assess the distance between what they offer and what the HF team needs. \r\nI have opened an Issue in [their repo](https://github.com/glottolog/glottolog-cldf/issues/13). \r\n\r\nVery interested to see where it goes from there.",
"I just tried pyglottolog to generate a file with all the current IDs (first column).\r\n\r\n`glottolog languoids` inside the `glottolog` repository.\r\n\r\n[glottolog-languoids-v4.6-10-g5c66eec874.csv](https://github.com/huggingface/datasets/files/9417456/glottolog-languoids-v4.6-10-g5c66eec874.csv)\r\n\r\n",
"Greetings @alexis-michaud and others,\r\nI think perhaps a standards-based approach here would help everyone out both at the technical and social layers of technical innovations. \r\n\r\nLet me say a few things: \r\n1. there are multiple kinds of assets in AI that should have associated language codes. \r\n * AI Training Data sets\r\n * AI models\r\n * AI outputs\r\nThese are all distinct components which should be tagged for the language and encoding methods they operate on or enhance. For example, an AI based cross-language tool from French to English (UK variety) still needs to consider if it is operating on oral language speech or written text. This is where [IANA language sub-tags](https://www.iana.org/assignments/language-subtag-registry/language-subtag-registry) come in and are so important. I link to the official source. If one wants to use middleware such as a python package or npm package to manage strings then please make sure those packages are updating codes as they are being revised. I see that @julien-c mentioned BCP-47. BCP-47 is the current standard for language tagging. Following it will make the resources you create more findable and let future users better understand or expect any biases which may have been introduced in the different AI based products.\r\n2. BCP-47 is a technical read. However, you will notice that it identifies when to use an ISO 639-1, ISO 639-2, or ISO 639-3. code. This is important for interoperability with many systems. If you are using library systems then you should likely just stick with ISO 639-3 codes.\r\n3. If you are going to use Glottolog codes use them after an `-x-` tag in the BCP-47 format to maintain BCP-47 validity. \r\n4. You should source ISO 639-3 codes directly from the [ISO 639-3 registrar](https://iso639-3.sil.org/code_tables/639/data) as these codes are updated annually, usually in February or March. ISO 639-3 codes have multiple classes: `Active`, `Deprecated`, and `Unassigned`. This means that string length checking is not a sufficient strategy for validation.\r\n5. The names of smaller languages often change depending on the language used to describe them. The [ISO639-2 documentation](https://www.loc.gov/standards/iso639-2/php/code_list.php) has a list of language names for languages with smaller populations for languages in which descriptions about these languages are often written. For example, ISO 639-2's documentation contains the names of languages as they are used in French, German, and English. ISO 639-2 rarely is updated as it is now tied to ISO 639-3's evolution and modern systems should just use ISO 639-3, but these additional names of languages in other languages may not appear in the ISO 369-3 tables.\r\n6. Glottolog codes are also updated at least annually. Usually sometime after ISO 639-3 updates.\r\n7. Please, if the material is in a written mode, please indicate which script is used unless the IANA field has a `suppress script` value. Please use the script tag that BCP-47 calls for from [ISO 15924](https://unicode.org/iso15924/iso15924-codes.html). This also updates at least annually. \r\n8. Another great place to look for language names is the [Unicode CLDR database for locales](https://cldr.unicode.org/translation/displaynames/languagelocale-names). These ought to be congruent with ISO 639-3 but, sometimes CLDR has additional references to languages (such as the french name for a language) which is not contained in ISO 639-2 or ISO 639-3.\r\n9. 
Wikidata for language names is not always a great source of authoritative information. Language names are asymmetrical. Many times they are contrived because there is no actual name for the language in the language referring... e.g. French doesn't have a name for every language in the world, often they say something like: the language of 'x' people. — English does the same. When a language name standard does not have the best name for a language the best way to handle that is to make a change request with the standards registrar. Keeping track of the source list and the version of your source list for your language codes is very important. \r\n10. Finally, It would be a great service to technologist, minority language communities, and linguists if for all resources of the three types mentioned in number 1 above you added a record to [OLAC](http://www.language-archives.org/). — I can help you with that. OLAC is a search interface for language resources.\r\n",
"Hi everybody!\r\n\r\nAbout the point:\r\n> also cc'ing @lbourdois who's been active and helpful on those subjects in the past!\r\n\r\nDiscussions on the need to improve the Hub's tagging system (applying to both datasets and models) can be found in the following discussion: https://github.com/huggingface/hub-docs/issues/193\r\nOnce this system has been redone and satisfies the identified needs, a redesign of the [Languages page](https://huggingface.co/languages) would also be relevant: https://github.com/huggingface/hub-docs/issues/194. \r\nI invite you to read them. But as a quick summary, the exchanges were oriented towards the ISO standard (the first HF system was based on it and it is generally the standard indicated in AI/DL papers) by favouring ISO 639-1 if it exists, and fallback to ISO 639-2 or ISO 639-3 if it doesn't. In addition, it is possible to add BCP-47 tags to consider existing varieties/regionalisms within a language (https://huggingface.co/datasets/AmazonScience/massive/discussions/1). If a language does not belong to either of these two standards, then a request should be made to the HF team to add it manually.\r\n\r\n\r\nTo return to the present discussion, thank you for the various databases and methodologies you mention. It makes a big difference to have linguists in the loop 🚀.\r\n\r\nI have a couple of questions where I think an expert perspective would be appreciated:\r\n- Do you think it's possible to easily handle tags that have been deprecated potentially for decades?\r\nFor example (I'm taking the case of Hebrew but this has happened for other languages) I tagged Google models with the \"iw\" [tag](https://huggingface.co/models?language=iw&sort=downloads) because I based it on what the authors gave in their [paper](https://arxiv.org/pdf/2010.11934.pdf) see table 6 page 12). It turns out that this ISO tag has in fact been deprecated since 1989 in favour of the \"he\" tag. It would therefore be necessary to have a verification that transforms the old tags into the most recent ones.\r\n\r\n- When you look up a language on Wikipedia, it usually shows, in addition to the ISO standard, the codes in the Glottolog (which you have already mentioned), [ELP](https://www.endangeredlanguages.com/?hl=en) and [Linguasphere](http://www.linguasphere.info/jr/index.php?l1=home&l2=welcome) databases. Would you have any opinion about these two other databases?\r\n\r\n- On the Hub, there is the following dataset where French people speak in English: https://huggingface.co/datasets/Datatang/French_Speaking_English_Speech_Data_by_Mobile_Phone \r\nIs there a database to take this case into account? I have not found any code in the Glottolog database. If based on an IETF BCP-47 standard, I would tend to tag the dataset with \"en-fr\" but would this be something accepted by linguists?\r\nBased on the first post in this thread that there are about 8000 languages, if one considers that a given language can be pronounced by a speaker of the other 7999, that would theoretically make about 64 million BCP-47 language1-language2 codes existing. And even much more if we consider regionalists with language1_regionalism_x-language2_regionalism_y. 
I guess there is no such database.\r\n\r\n- Are there any databases that take into account all the existing sign languages in the world?\r\nIt would be nice to have them included on the Hub.\r\n\r\n- Is there an international classification of languages?\r\nA bit like the [International Classification of Diseases](https://en.wikipedia.org/wiki/International_Classification_of_Diseases) in medicine, which is established by the WHO and used as a reference throughout the world. The idea would be to have a precise number of languages to which we would then have to assign a unique tag in order to find them later. \r\n\r\n- Finally for the CNRS team, when can we expect to see all the datasets of [Pangloss](https://pangloss.cnrs.fr/) on HF? 👀 And I don't know if you have a way to help to add also the datasets of [CoCoON](https://cocoon.huma-num.fr/exist/crdo/).",
"> I invite you to read them. But as a quick summary, the exchanges were oriented towards the ISO standard (the first HF system was based on it and it is generally the standard indicated in AI/DL papers) by favouring ISO 639-1 if it exists, and fallback to ISO 639-2 or ISO 639-3 if it doesn't. In addition, it is possible to add BCP-47 tags to consider existing varieties/regionalisms within a language (https://huggingface.co/datasets/AmazonScience/massive/discussions/1). If a language does not belong to either of these two standards, then a request should be made to the HF team to add it manually.\r\n\r\nOne comment on this fall back system (which generally follows the BCP-47 process). ISO 639-2 has some codes which refer to a language ambiguously. For example, I believe code `ara` is used for arabic. In some contexts arabic is considered a single language, however, Egyptian Arabic is quite different from Moroccan Arabic, which are both considered separate languages. These ambiguous codes are valid ISO 639-3 codes but they have a special status. They are called `macro codes`. They exist inside the ISO 639-3 standard to provide absolute fallback compatibility between ISO 639-2 and ISO 639-3. However, when considering AI and MT applications with language data, the unforeseen potential applications and the potential for bias using macro codes should be avoided for new applications of language tags to resources. For historical cases where it is not clear what resources were used to create the AI tools or datasets then I understand the use of ambiguous tag uses. So for clarity in language tagging I suggest:\r\n\r\n1. Strictly following BCP-47\r\n2. Whenever possible avoid the use of macro tags in the ISO 639-3 standard. These are BCP-47 valid, but could introduce biases in the application of their use in society. (Generally there are more specific tags available to use in the ISO 639-3 standard.)",
"> * Are there any databases that take into account all the existing sign languages in the world?\r\n> It would be nice to have them included on the Hub.\r\n\r\nSign Languages present an interesting case. As I understand the situation. The identification of sign languages has been identified as a component of their endangerment. Some sign languages do exist in ISO 639-3. For further discussion on the issue I refer readers to the following publications: \r\n\r\n* https://doi.org/10.3390/languages7010049\r\n* https://www.academia.edu/35870983/The_ethics_of_of_language_identification_and_ISO_639\r\n\r\nOne way to be BCP-47 compliant and identify a sign language which is not identified in any of the BCP-47 referenced standards is to use the ISO 639-3 code for undetermined language `und` and then apply a custom suffix indicator (as explained in BCP-47) `-x-` and a custom code, such as the ones used in https://doi.org/10.3390/languages7010049",
"> * Is there an international classification of languages?\r\n> A bit like the [International Classification of Diseases](https://en.wikipedia.org/wiki/International_Classification_of_Diseases) in medicine, which is established by the WHO and used as a reference throughout the world. The idea would be to have a precise number of languages to which we would then have to assign a unique tag in order to find them later.\r\n\r\nYes that would be the function of ISO 639-3. It is the reference standard for languages. It includes a code and its name and the status of the code. Many technical metadata standards for file and computer interoperability reference it, many technical library metadata standards reference it. Some linguists use it. Many governments reference it. \r\n\r\nIndexing diseases are different from indexing languages in several ways, one way is that diseases are the impact of a pathogen not the pathogen itself. If we take COVID-19 as an example, there are many varieties of the pathogen but broadly speaking there is only one disease — with many symptoms.\r\n\r\n",
">* When you look up a language on Wikipedia, it usually shows, in addition to the ISO standard, the codes in the Glottolog (which you have already mentioned), [ELP](https://www.endangeredlanguages.com/?hl=en) and [Linguasphere](http://www.linguasphere.info/jr/index.php?l1=home&l2=welcome) databases. Would you have any opinion about these two other databases?\r\n\r\nWhile these do appear on wikipedia, I don't know of any information system which uses these codes. I do know that glottolog did import ELP data at one time and its database does contain ELP data I'm not sure if Glottolog regularly ingests new versions of ELP data. I suspect that the use of Linguasphere data may be relevant to users of wikidata as a linked data attribute but I haven't heard of any linked data projects using Linguasphere data for analysis or product development. My impression is that it is fairly unused.",
"> * Do you think it's possible to easily handle tags that have been deprecated potentially for decades?\r\n>For example (I'm taking the case of Hebrew but this has happened for other languages) I [tag](https://huggingface.co/models?language=iw&sort=downloads)ged Google models with the \"iw\" tag because I based it on what the authors gave in their [paper](https://arxiv.org/pdf/2010.11934.pdf) see table 6 page 12). It turns out that this ISO tag has in fact been deprecated since 1989 in favour of the \"he\" tag. It would therefore be necessary to have a verification that transforms the old tags into the most recent ones.\r\n\r\nYes. You can parse the IANA file linked to above (it is regularly updated). All deprecated tags are marked as such in that file. The new prefered tag if there is one, is indicated. ISO 639-3 also indicates a code's status but their list is relevant only codes within their domain (ISO 639-3).",
"> * On the Hub, there is the following dataset where French people speak in English: https://huggingface.co/datasets/Datatang/French_Speaking_English_Speech_Data_by_Mobile_Phone\r\nIs there a database to take this case into account? I have not found any code in the Glottolog database. If based on an IETF BCP-47 standard, I would tend to tag the dataset with \"en-fr\" but would this be something accepted by linguists?\r\n\r\nI would interpret `en-fr` as english as spoken in France. `fr`in this position refers to the geo-political entity not a second language. I see no reason that other linguists should have a different option after having read BCP-47 and understood how it works.\r\n\r\nThe functional goal here is to tag a language resource as being produced by nonnative speakers, while tagging both languages. There are several problems here. The first is that BCP-47 has no way explicit way to do this. One could use the sub code `x-` with a private use code to indicate a second language and infer some meaning as to that language's role. However, there is another problem here which complexifies the situation greatly... how do we know that those english speakers (in France, or from France, or who were native French speakers) were not speaking their third or fourth language rather than their second language. So to conceptualize a sub-tag which indicates the first language of a speech act for speakers in a second (or other) language would need to be carefully crafted. It might then be proposed to the appropriate authorities. For example three sub-tags exist.\r\n\r\nThere are three registered sub-tags out of a BCP-47 allowed 35. These are `x-`, `u-`, and `t-`. `u-` and `t-` are defined in [RFC6067 ](https://www.rfc-editor.org/rfc/rfc6067)and [RFC6497](https://www.rfc-editor.org/rfc/rfc6497) . For more information see the [Unicode CLDR documentation](https://cldr.unicode.org/index/bcp47-extension) where it says: \r\n\r\n\r\n>[IETF BCP 47 ](http://www.google.com/url?q=http%3A%2F%2Ftools.ietf.org%2Fhtml%2Fbcp47&sa=D&sntz=1&usg=AOvVaw1DoMN1IBGg-JHgECBvdW1t)[Tags for Identifying Languages](http://www.google.com/url?q=http%3A%2F%2Ftools.ietf.org%2Fhtml%2Fbcp47&sa=D&sntz=1&usg=AOvVaw1DoMN1IBGg-JHgECBvdW1t) defines the language identifiers (tags) used on the Internet and in many standards. It has an extension mechanism that allows additional information to be included. The Unicode Consortium is the maintainer of the extension ‘u’ for Locale Extensions, as described in [rfc6067](https://www.google.com/url?q=https%3A%2F%2Ftools.ietf.org%2Fhtml%2Frfc6067&sa=D&sntz=1&usg=AOvVaw0gGWi0EjHfy1WId8k8oKAi), and the extension 't' for Transformed Content, as described in [rfc6497](https://www.google.com/url?q=https%3A%2F%2Ftools.ietf.org%2Fhtml%2Frfc6497&sa=D&sntz=1&usg=AOvVaw0w-OUsFX1PtaKYIq31P64I).\r\n>\r\n>The subtags available for use in the 'u' extension provide language tag extensions that provide for additional information needed for identifying locales. The 'u' subtags consist of a set of keys and associated values (types). For example, a locale identifier for British English with numeric collation has the following form: en-GB-u-kn-true\r\n>\r\n>The subtags available for use in the 't' extension provide language tag extensions that provide for additional information needed for identifying transformed content, or a request to transform content in a certain way. For example, the language tag \"ja-Kana-t-it\" can be used as a content tag indicates Japanese Katakana transformed from Italian. 
It can also be used as a request for a given transformation.\r\n>\r\n>For more details on the valid subtags for these extensions, their syntax, and their meanings, see LDML Section 3.7 [Unicode BCP 47 Extension Data](http://www.google.com/url?q=http%3A%2F%2Fwww.unicode.org%2Freports%2Ftr35%2F%23Locale_Extension_Key_and_Type_Data&sa=D&sntz=1&usg=AOvVaw0lMthb9KbTJtoOd5mvv3Ha).",
"Hi @lbourdois ! Many thanks for the detailed information.\r\n\r\n> Discussions on the need to improve the Hub's tagging system (applying to both datasets and models) can be found in the following discussion: [huggingface/hub-docs#193](https://github.com/huggingface/hub-docs/issues/193) \r\nFascinating topic! To me, the following suggestion has a lot of appeal:\r\n\"if consider that it was necessary to create an ISO 639-3 because ISO 639-1 was deficient, it would be to do the reverse and thus convert the tags from ISO 639-1 to ISO 639-2 or 3 (https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes or https://iso639-3.sil.org/code_tables/639/data).\"\r\n\r\nYes, ISO 639-1 is unsuitable because it has so few codes: less than 200. To address linguistic diversity in 'unrestricted mode', a list of all languages is wanted. \r\n\r\nThe idea of letting people use their favourite nomenclature and automatically adding the ISO 639-3 three-letter code as a tag is appealing. Thus all the HF datasets would have three-letter language tags (handy for basic search), alongside the authors' preferred tags and language names (including Glottolog tags as well as ISO 639-{1, 2}, and all other options allowed by BCP-47). \r\n\r\nRetaining the authors' original tags and language names would be best. \r\n* For language names: some people favour one name over another and it is important to respect their choice. In the case of Yongning Na: alternative names include 'Mosuo', 'Narua', 'Eastern Naxi'... and the names carry implications: people have been reported to come to blows about the use of the term 'Mosuo'. \r\n* For language tags: Glottocodes can be more fine-grained than Ethnologue (ISO 639-3), and some colleagues feel strongly about those. \r\n\r\nThus there would be a BCP-47 tag (sounds like a solid technical choice, though not 'passer-by-friendly': requiring some expertise to interpret) **plus** an ISO 639-3 tag that could be grabbed easily, and (last but not least) language names spelled out in full. Searches would be easier. No information would be lost. \r\n\r\nAre industry practices so conservative that many people are happy with two-letter codes, and consider ISO 639-3 three-letter codes an unnecessary complication? That would be a pity, since there are so many advantages to using longer lists. (Somewhat like the transition to Unicode: sooo much better!) But maybe that conservative attitude _is_ widespread, and it would then need to be taken into account. In which case, one could consider offering two-letter codes as a search option. Internally, the search engine would look up the corresponding 3-letter codes, and produce the search results accordingly. \r\n\r\nNow to the other questions:\r\n\r\n> * Do you think it's possible to easily handle tags that have been deprecated potentially for decades?\r\n> For example (I'm taking the case of Hebrew but this has happened for other languages) I tagged Google models with the \"iw\" [tag](https://huggingface.co/models?language=iw&sort=downloads) because I based it on what the authors gave in their [paper](https://arxiv.org/pdf/2010.11934.pdf) see table 6 page 12). It turns out that this ISO tag has in fact been deprecated since 1989 in favour of the \"he\" tag. It would therefore be necessary to have a verification that transforms the old tags into the most recent ones.\r\nI guess that the above suggestion takes care of this case. 
The original tag (in this example, \"iw\") is retained (facilitating cross-reference with the published paper, and respecting the real: the way the dataset was originally tagged). This old tag goes into the `BCP-47` field of the dataset, which can handle quirks & oddities like this one. And a new tag is added in the `ISO 639-3` field: the 3-letter code \"heb\". \r\n\r\n> * When you look up a language on Wikipedia, it usually shows, in addition to the ISO standard, the codes in the Glottolog (which you have already mentioned), [ELP](https://www.endangeredlanguages.com/?hl=en) and [Linguasphere](http://www.linguasphere.info/jr/index.php?l1=home&l2=welcome) databases. Would you have any opinion about these two other databases?\r\n\r\nI'm afraid I never heard about Linguasphere. The [online register for Linguasphere (PDF)](http://www.linguasphere.info/jr/pdf/index/LS_index_n-n.pdf) seems to be from 1999-2000. It seems that the level of interoperability is not very high right now. (By contrast, Glottolog has [pyglottolog](https://github.com/glottolog/pyglottolog) and in my experience contacts flow well.) \r\n\r\nThe Endangered Languages Project is something Google started but initially did not 'push' very strongly, it seems. Just airing an opinion on the public Internet, it seems that the project is now solidly rooted at University of Hawaiʻi at Mānoa. It seems that they do not generate codes of their own. They refer to ISO 639-3 (Ethnologue) as a code authority when applicable, and otherwise provide comments in so many words, such as that language L currently lacks an Ethnologue code of its own (example [here](https://www.endangeredlanguages.com/lang/10624)). \r\n\r\n> * On the Hub, there is the following dataset where French people speak in English: https://huggingface.co/datasets/Datatang/French_Speaking_English_Speech_Data_by_Mobile_Phone\r\n> Is there a database to take this case into account? I have not found any code in the Glottolog database. If based on an IETF BCP-47 standard, I would tend to tag the dataset with \"en-fr\" but would this be something accepted by linguists?\r\n> Based on the first post in this thread that there are about 8000 languages, if one considers that a given language can be pronounced by a speaker of the other 7999, that would theoretically make about 64 million BCP-47 language1-language2 codes existing. And even much more if we consider regionalists with language1_regionalism_x-language2_regionalism_y. I guess there is no such database.\r\n\r\nYes, you noted the difficulty here: that there are so many possible situations. Eventually, each dataset would required descriptors of its own. @BenjaminGalliot points out that, in addition to specifying the speakers' native languages, the degree of language proficiency would also be relevant. How many years did the speakers spend in which area? Talking which languages? In what chronological order? Etc. The complexity defies encoding. The purpose of language codes is to allow for searches that group resources into sets that make sense. Additional information is very important, but would seem to be a matter for 'comments' fields. \r\n\r\n> * Is there an international classification of languages?\r\n> A bit like the [International Classification of Diseases](https://en.wikipedia.org/wiki/International_Classification_of_Diseases) in medicine, which is established by the WHO and used as a reference throughout the world. 
The idea would be to have a precise number of languages to which we would then have to assign a unique tag in order to find them later.\r\n\r\nAs I understand, Ethnologue and Glottolog both try to do that, each in its own way. The simile with diseases seems interesting, to some extent: in both cases it's about human classification of phenomena that have complexity (though some diseases are simpler than others, whereas all languages have much complexity, in different ways).\r\n\r\n> * Finally, when can we expect to see all the datasets of [Pangloss](https://pangloss.cnrs.fr/) on HF? eyes And I don't know if you have a way to help to add also the datasets of [CoCoON](https://cocoon.huma-num.fr/exist/crdo/).\r\n\r\nThree concerns: (i) Technical specifications: we have not yet received feedback on the Japhug and Na datasets in HF. There may be technical considerations that we have not yet thought of and that would need to be taken into account before 'bulk upload'. (ii) Would there be a way to automate the process? The way @BenjaminGalliot did it for Japhug and Na, there was a manual component involved, and doing it by hand for all 200 datasets would not be an ideal workflow, given that the metadata are all clearly arranged. (iii) Some datasets are currently under a 'No derivatives' CreativeCommons license. We could go back to the depositors and argue that the 'No derivatives' mention were best omitted (see [here a similar argument about publications](https://creativecommons.org/2020/04/21/academic-publications-under-no-derivatives-licenses-is-misguided/)): again, we'd want to be sure about the way forward before we set the process into motion.\r\n\r\nOur hope would be that some colleagues try out the [OutilsPangloss](https://gitlab.com/lacito/outilspangloss) download tool, assemble datasets from Pangloss/Cocoon as they wish, then deposit them to HF.",
"> The idea of letting people use their favourite nomenclature and automatically adding the ISO 639-3 three-letter code as a tag is appealing. Thus all the HF datasets would have three-letter language tags (handy for basic search), alongside the authors' preferred tags and language names (including Glottolog tags as well as ISO 639-{1, 2}, and all other options allowed by BCP-47).\r\n> \r\n> Retaining the authors' original tags and language names would be best.\r\n> \r\n> * For language names: some people favour one name over another and it is important to respect their choice. In the case of Yongning Na: alternative names include 'Mosuo', 'Narua', 'Eastern Naxi'... and the names carry implications: people have been reported to come to blows about the use of the term 'Mosuo'.\r\n> * For language tags: Glottocodes can be more fine-grained than Ethnologue (ISO 639-3), and some colleagues feel strongly about those.\r\n> \r\n> Thus there would be a BCP-47 tag (sounds like a solid technical choice, though not 'passer-by-friendly': requiring some expertise to interpret) **plus** an ISO 639-3 tag that could be grabbed easily, and (last but not least) language names spelled out in full. Searches would be easier. No information would be lost.\r\n\r\n@alexis-michaud raises an excellent point. Language Resource users have varying search habits (or approaches). This includes cases where two or more language names refer to a single language. A search utility/interface needs to be flexible and able to present results from various kinds of input in the search process. This could be like how the terms French/Français/Franzosisch (en/fr/de) are names for the same language or it could be a variety of the following: autoglottonyms (how the speakers of the language refer to their language), or exoglottonyms (how others refer to the language). Additionally, in web based searches I have also needed to implement diacritic sensitive and insensitive logic so that users can type with or without diacritics and not have results unnecessarily excluded. \r\n\r\nDepending on how detailed of a search problem HF seeks to solve. It may be better to off load complex search to search engines like OLAC which aggregate a lot of language resources. — as I mentioned above I can assist with the informatics on creating an OLAC feed.\r\n\r\nAbstracting search logic from actual metadata may prove a useful way to lower the technical debt overhead. Technical tools and library standards use ISO and BCP-47 Standards. So, from a bibliographic metadata perspective this seems to be the way forward with the widest set of use cases. ",
"To get a visual idea of these first exchanges, I coded a Streamlit app that I put online on Spaces: https://huggingface.co/spaces/lbourdois/Language-tags-demo. \r\nThe code is in Python so I don't know if it can be used by HF who seems to need something in Node.js but it serves as a proof of concept. The advantage is also that you can directly test ideas by enter things in a search bar and see what comes up. \r\n\r\nThis application is divided into 3 points:\r\n- The first is to enter a language in natural language to get its code which can then be filled in the YAML file of the README.MD files of the HF datasets or models in order to be referenced and found by everyone.\r\nIn practice, enter the language (e.g: `English`) you are interested in to get its associated tag (e.g: `en`). You can enter several languages by separating them with a comma (e.g `French,English,German`). You will be given priority to the ISO 639-3 code if it exists otherwise the Glottocode or the BCP47 code (for varieties in particular). If none of these codes are available, it links to a page where the user can contact HF to request to add this tag. \r\nIf you enter a BCP47 code, it must be entered as follows: `Language(Territory)`, for example `French(Canada)`. Attention! If you enter a BCP-47 language, it must be entered first, otherwise the plant code will be displayed. I have to fix this problem but I am moving to a new place, I don't have an internet connection when I want and I prefer to push this first version so that you can already test things now and not have to wait days or weeks.\r\nThis point is intended to simulate the user's side of the equation, which wonders which tag he should fill in for his language.\r\n\r\n\r\n- The second is to enter a language code to obtain the name of the language in natural language.\r\nIn practice, enter the tag (ISO 639-1/2/3, Glottolog or BCP-47) you are interested in (e.g: `fra`) to get its associated language (e.g: French). You can enter several languages by separating them with a comma (e.g `fra,eng,deu`). Attention! If you enter a BCP-47 code, it must be entered first, otherwise the plant code will be displayed. Same as the other bug above (it's actually the same one).\r\nThis point is intended to simulate the side of HF that for a given tag must return the correct language.\r\n\r\n\r\n\r\nTo code these two points, I tested two approaches. \r\n\r\n1. The first one (internal DB in the app) consists in querying a database that HF would have locally at their place. To create this database, I merged the ISO 639 database (https://iso639-3.sil.org/sites/iso639-3/files/downloads/iso-639-3.tab) and the Glottolog database (https://glottolog.org/meta/downloads). The result of this merge is visible in the 3rd point of the application qui is an overview of the database.\r\nIn the image below, on line 1 of the database, we can see that the Glottocode database gives an ISO 639-3 code (column ISO639P3code) but not the ISO 639 database (column 639-3). Do you have an explanation for this phenomenon?\r\n![image](https://user-images.githubusercontent.com/58078086/188433217-bf7cb606-7af4-40b5-861f-ed662468f6e4.png)\r\n\r\n\r\nFor BCP 47 codes of the type `fr-CA`, I have retrieved the ISO-3166 alpha 1 codes of the territories (https://www.iso.org/iso-3166-country-codes.html).\r\nIn practice, what I do is if we enter `fr-CA` is that the letters before the `-` refer to a language in the `Name` column for a `639-1` == `fr` (`639-3` for `fra` or `fre`) in the base of my image above. 
Then I look at the letters after the `-` which refers to a territory. It comes out `French (Canada)`. I used https://cldr.unicode.org/translation/displaynames/languagelocale-name-patterns for the pattern that came up.\r\n\r\n\r\n2. The second approach (with langcodes lib in the app) consists in using the Python `langcodes` library (https://github.com/rspeer/langcodes) which offers a lot of features in ready-made functions. It manages for example deprecated codes, the validity of an entered code, gives languages from code in the language of your choice (by default in English, but also autoglottonyms), etc. I invite you to read the README of the library. The only negative point is that it hasn't been updated for 10 months so basing your tag system on an external tool that isn't necessarily up to date can cause problems in the long run. But it is certainly an interesting source.\r\n\r\nFinally, I have added some information on the number of people speaking/reading the language(s) searched (figures provided by langcodes which are based on those given by ISO). This is not relevant for our topic but it could be figures that could be added as information on the https://huggingface.co/languages page. \r\n\r\n\r\n\r\nWhat could be done to improve the app if I have time:\r\n- Write the text for the app's homepage to describe what it does. This could serve as a basis for a documentation that I think will be necessary to add somewhere on the HF website to explain how the language tagging system works.\r\n- Deal with the bug mentioned above\r\n- Integrate ISO 3166-1 alpha 2 territories (https://www.iso.org/obp/ui#iso:pub:PUB500001:en)? They offer a finer granularity than ISO 3166-1 alpha 1 which is limited to the country level, but they are very administrative (for French, ISO 3166-1 alpha 2 gives us the \"départements\" for example).\r\n- Add autoglottonyms? (I only handle English language names for the moment)\r\n- For each language indicate to which family it belongs, in practice this could help to make data augmentation, but especially to classify the languages and find them more easily on the page https://huggingface.co/languages.",
"Very impressive! Using the prompt 'Japhug' (a language name), the app finds the intended language:\r\n![image](https://user-images.githubusercontent.com/6072524/188441805-3af3a580-951e-4150-b5f9-64e1bde0992b.png)\r\n\r\nA first question: based on the Glottocode, would it be possible to grab the closest ISO639-3 code? In case there is no match for the exact language variety, one needs to explore the higher-level groupings, level by level. For this language (Japhug), the information provided in the extracted CSV file (`glottolog-languoids-v4.6.csv`) is: \r\n`sino1245/burm1265/naqi1236/qian1263/rgya1241/core1262/jiar1240` \r\nOne need not look further than the first higher-level grouping, [`jiar1240`](https://glottolog.org/resource/languoid/id/jiar1240), to get an ISO639-3 code, namely `jya`.\r\n\r\nThus users searching by language names would get ISO639-3 (often less fine-grained than Glottolog) as a bonus.\r\nIt might be possible to ask the Glottolog team to provide this piece of information as part of an export from their database.",
"> on line 1 of the database, we can see that the Glottocode database gives an ISO 639-3 code (column ISO639P3code) but not the ISO 639 database (column 639-3). Do you have an explanation for this phenomenon?\r\n\r\nThat is because the language name 'Aewa' is not found in the Ethnologue (ISO 639-3) export that you are using. [This export in table form](https://iso639-3.sil.org/sites/iso639-3/files/downloads/iso-639-3.tab) only has one reference name (`Ref_Name`). For the language at issue, it is not 'Aewa' but ['Awishira'](https://www.ethnologue.com/language/ash).\r\n\r\nBy contrast, the language on line 0 of the database is called 'Abinomn' by both Ethnologue and Glottolog, and accordingly, columns `ISO639P3code` and `639-3` both contain the ISO 639-3 code, `bsa`.\r\n \r\nThe full Ethnologue database records alternate names for each language, and I'd bet that 'Aewa' is recorded among alternate names for the 'Ashiwira' language. I can't check because the full Ethnologue database is paywalled. \r\n![image](https://user-images.githubusercontent.com/6072524/188461409-e8c48036-df9b-4b56-9609-41cb9c3d3c3c.png)\r\n\r\n[Glottolog](https://glottolog.org/resource/languoid/id/abis1238) does provide the corresponding ISO 639-3 code for 'Aewa', `ash`, which is an exact match (it refers to the same variety as Glottolog `abis1238`).\r\nIn this specific case, Glottolog provides all the relevant information. I'd say that Glottolog can be trusted for all the codes they provide, including ISO 639-3 codes: they only include them when the match is good. \r\n\r\nSee previous comment about the cases where there is no exact match between Glottolog and ISO 639-3 (suggested workaround: look at a higher-level grouping to get an ISO 639-3 code).",
"I will add these two points to my TODO list.\r\n- Since Glottolog can be trust, I will add a condition to the code that if there is no ISO 639-3 code in the \"official\" database (https://iso639-3.sil.org/sites/iso639-3/files/downloads/iso-639-3.tab), look for it in the \"ISO639P3code\" column of Glottolog.\r\n- For the point of adding the closest ISO 639-3 code for a Glottolog code, what convention should be adopted for the output? Just the ISO 639-3 code, or the ISO 639-3 code - Glottolog code, or the ISO 639-3 code - language name?\r\nTo use the example of `Japhug` , should it be just `jya`, or `jya-japh1234` or `jya-Japhug`?",
"> * Integrate ISO 3166-1 alpha 2 territories (https://www.iso.org/obp/ui#iso:pub:PUB500001:en)? They offer a finer granularity than ISO 3166-1 alpha 1 which is limited to the country level, but they are very administrative (for French, ISO 3166-1 alpha 2 gives us the \"départements\" for example).\r\n\r\nI'm concerned with this sort of exploration. Not because I am against innovation. In fact this is an interesting thought exercise. However, to explore this thought further creates cognitive dissidence between BCP-47 authorized codes and other code sets which are not BP-47 compliant. For that reason, I think adding additional codes is a waste of time both for HF devs and for future users who get a confusing idea about language tagging. ",
"Good job for the application!\r\n\r\n> On the Hub, there is the following dataset where French people speak in English: https://huggingface.co/datasets/Datatang/French_Speaking_English_Speech_Data_by_Mobile_Phone\r\n Is there a database to take this case into account? I have not found any code in the Glottolog database. If based on an IETF BCP-47 standard, I would tend to tag the dataset with \"en-fr\" but would this be something accepted by linguists?\r\n Based on the first post in this thread that there are about 8000 languages, if one considers that a given language can be pronounced by a speaker of the other 7999, that would theoretically make about 64 million BCP-47 language1-language2 codes existing. And even much more if we consider regionalists with language1_regionalism_x-language2_regionalism_y. I guess there is no such database.\r\n\r\n> Yes, you noted the difficulty here: that there are so many possible situations. Eventually, each dataset would required descriptors of its own. @BenjaminGalliot points out that, in addition to specifying the speakers' native languages, the degree of language proficiency would also be relevant. How many years did the speakers spend in which area? Talking which languages? In what chronological order? Etc. The complexity defies encoding. The purpose of language codes is to allow for searches that group resources into sets that make sense. Additional information is very important, but would seem to be a matter for 'comments' fields.\r\n\r\nTo briefly complete what I said on this subject in a private discussion group, there is a lot of (meta)data associated with each element of a corpus (which language level, according to which criteria, knowing that even among native speakers there are differences, some of which may go beyond what seems obvious to us from a linguistic point of view, such as socio-professional category, life history, environment in the broad sense, etc.), which can be placed in ad-hoc columns, or more freely in a comment/note column. And it is the role of the researcher (in this case a linguist, in all likelihood) to do analyses (statistics...) to determine the relevant data, including criteria that may justify separating different languages (in the broad sense), making separate corpora, etc. Putting this information in the language code is in my opinion doing the job in the opposite and wrong direction, as well as bringing other problems, like where to stop in the list of multidimensional criteria to be integrated, so in my opinion, here, the minimum is the best (the important thing is in my opinion to have well-documented data, globally, by sub-corpus or by line)...\r\n\r\n> If you are going to use Glottolog codes use them after an -x- tag in the BCP-47 format to maintain BCP-47 validity.\r\n\r\nYes, for the current corpora, I have written:\r\n\r\n```\r\nlanguage:\r\n- jya\r\n- nru\r\nlanguage_bcp47:\r\n- x-japh1234\r\n- x-yong1288\r\n```\r\n\r\n> * Add autoglottonyms? (I only handle English language names for the moment)\r\n\r\nAutoglossonyms are useful (I use them prior to other glossonyms), but I'm not sure there is an easy way to retrieve them. We can find some of them in the \"Alternative Names\" panel of Glottolog, but even if we have an API to retrieve them easily, their associated language code will often not be the one we are in (hence the need to do several cycles to find one, which might not be the right one...). 
Maybe this problem needs more investigation...\r\n\r\n> For the point of adding the closest ISO 639-3 code for a Glottolog code, what convention should be adopted for the output? Just the ISO 639-3 code, or the ISO 639-3 code - Glottolog code, or the ISO 639-3 code - language name?\r\nTo use the example of Japhug , should it be just jya, or jya-japh1234 or jya-Japhug?\r\n\r\nI strongly insist not to add **a** language name after the code, it would restart a spiral of problems, notably the choice of the language in question:\r\n* the autoglossonym: in my opinion the best choice, but you have to know it…\r\n* the English name: iniquitous,\r\n* the name in the administratively/politically dominant language of the target language if it is relevant (strictly localized without overlapping, for example): iniquitous and tendentious (and in a way a special case of the previous one)...\r\n* etc.\r\n",
"> To get a visual idea of these first exchanges, I coded a Streamlit app that I put online on Spaces: https://huggingface.co/spaces/lbourdois/Language-tags-demo.\r\n> The code is in Python so I don't know if it can be used by HF who seems to need something in Node.js but it serves as a proof of concept. The advantage is also that you can directly test ideas by enter things in a search bar and see what comes up.\r\n\r\nThis is really great. You're doing a fantastic job. I love watching the creative process evolve. It is exciting. Let me provide some links to some search interfaces for further inspiration. I always find it helpful to know how others have approached a problem when figuring out my approach. I will link to three examples Glottolog, r12a's language sub-tag chooser, and the FLEx project builder wizard. The first two are online, but the last one is in an application which must be downloaded and works only on windows or linux. I have placed some notes on each of the screenshots.\r\n\r\n* **[Glottolog](https://glottolog.org/)** | [Search Query](https://glottolog.org/glottolog?name=en&namequerytype=part&multilingual=on#2/20.9/150.0) \r\n\r\n![Glottolog1](https://user-images.githubusercontent.com/40230/188494425-84ee6ecf-6868-4684-a4ae-008973f3b367.png)\r\n![Glottolog2](https://user-images.githubusercontent.com/40230/188494426-fc1c225c-f99a-46b5-a1aa-950cf7912ce3.png)\r\n\r\n\r\n* **[r12a language sub-tag chooser](https://r12a.github.io/app-subtags/)** | [Code on github](https://github.com/r12a/app-subtags)\r\n\r\n![r12a1](https://user-images.githubusercontent.com/40230/188495349-8e53be68-8433-46ff-a0c7-c2f6e25458b6.png)\r\n\r\n\r\n* **FLEx Language Chooser** | [application page](https://software.sil.org/fieldworks/)\r\n![FLEx1](https://user-images.githubusercontent.com/40230/188499742-82c5601e-7e37-4863-bd63-8bff8c0694e3.png)\r\n\r\n",
"> In practice, what I do is if we enter `fr-CA` is that the letters before the `-` refer to a language in the `Name` column for a `639-1` == `fr` (`639-3` for `fra` or `fre`) in the base of my image above. Then I look at the letters after the `-` which refers to a territory. It comes out `French (Canada)`. I used https://cldr.unicode.org/translation/displaynames/languagelocale-name-patterns for the pattern that came up.\r\n\r\nWhat you are doing is looking at the algorithm for Locale generation rather than BCP-47's original documentation. I'm not sure there are difference, there might be. I know that locale IDs generally follow BCP-47 But I think there are some differences such as the use of `_` vs. `-`. ",
"> A first question: based on the Glottocode, would it be possible to grab the closest ISO639-3 code? In case there is no match for the exact language variety, one needs to explore the higher-level groupings, level by level. For this language (Japhug), the information provided in the extracted CSV file (`glottolog-languoids-v4.6.csv`) is: `sino1245/burm1265/naqi1236/qian1263/rgya1241/core1262/jiar1240` One need not look further than the first higher-level grouping, [`jiar1240`](https://glottolog.org/resource/languoid/id/jiar1240), to get an ISO639-3 code, namely `jya`.\r\n> \r\n> Thus users searching by language names would get ISO639-3 (often less fine-grained than Glottolog) as a bonus. It might be possible to ask the Glottolog team to provide this piece of information as part of an export from their database.\r\n\r\nThis is logical, but the fine grained assertions are not the same. That is just because they are in a hierarchical structure today doesn't mean they will be tomorrow. In some cases the glottolog is clearly referring to sub-language variants which will never receive full language status, whereas in other cases glottolog is referencing to unequal entities one or more of which should be a language. Many of the codes in glottolog have no associated documentation indicating what sort of speech variety they are. ",
"@lbourdois \r\n> * Since Glottolog can be trust, I will add a condition to the code that if there is no ISO 639-3 code in the \"official\" database (https://iso639-3.sil.org/sites/iso639-3/files/downloads/iso-639-3.tab), look for it in the \"ISO639P3code\" column of Glottolog.\r\n\r\nI'm confused here... if there is no ISO639-3 code in the official database from the registrar, why would you look for an \"unofficial\" code from someone else? What is the use case here?",
"> For the point of adding the closest ISO 639-3 code for a Glottolog code, what convention should be adopted for the output? Just the ISO 639-3 code, or the ISO 639-3 code - Glottolog code, or the ISO 639-3 code - language name?\r\nTo use the example of Japhug , should it be just jya, or jya-japh1234 or jya-Japhug?\r\n\r\n(answer edited in view of [Benjamin Galliot's comment](https://github.com/huggingface/datasets/issues/4881#issuecomment-1237420600) \r\nEasy part of the answer first: jya-Japhug is out, because, as @BenjaminGalliot pointed out above, mixing language names with language codes will make trouble. For Japhug, `jya-Japhug` looks rather good: the pair looks nice, the one (`jya`) packed together, the other (`Japhug`) good and complete while still pretty compact. But think about languages like 'Yongning Na' or 'Yucatán Maya': a code with a space in the middle, like `nru-Yongning Na`, is really unsightly and unwieldy, not?\r\n\r\nSome [principles for language naming in English](http://hdl.handle.net/10125/24725) have been put forward, with some linguistic arguments, but always supposing that such standardization is desirable, actual standardization of language names in English may well never happen.\r\n\r\nAs for `jya-japh1234`: again, at first sight it seems cute, combining two fierce competitors (Ethnologue and Glottolog) into something that gets the best of both worlds. \r\nBut @HughP has a point: _adding additional codes is a waste of time both for HF devs and for future users who get a confusing idea about language tagging_ Strong wording, for an important comment: better stick with BCP 47. \r\n\r\nSo the solution pointed out by Benjamin, from Frances Gillis-Webber and Sabine Tittel, looks attractive: \r\njya-x-japh1234\r\n\r\nOn the other hand, if the idea for HF Datasets is simply to add the closest ISO 639-3 code for a Glottolog code, maybe it could be provided simply in three letters: providing the 'raw' ISO 639-3 code `jya`. Availability of 'straight' ISO 639-3 codes could save trouble for some users, and those who want more detail could look at the rest of the metadata and general information associated with datasets.",
"The problem seems to have already been raised here: https://drops.dagstuhl.de/opus/volltexte/2019/10368/pdf/OASIcs-LDK-2019-4.pdf\r\n\r\nAn example can be seen here :\r\n\r\n> 3.1.2 The use of privateuse sub-tag\r\nIn light of unambiguous language codes being available for the two Khoisan varieties, we\r\npropose to combine the ISO 639-3 code for the parent language N‖ng, i.e., ‘ngh’, with the\r\nprivateuse sub-tag ‘x-’ and the respective Glottocodes stated above.\r\nThe language tags for N|uu and ‖’Au can then be defined accordingly:\r\nN|uu: ngh-x-nuuu1242\r\n‖’Au: ngh-x-auni1243\r\n\r\nBy the way, while searching for this, I came across this application: https://huggingface.co/spaces/cdleong/langcode-search",
"> > * Since Glottolog can be trust, I will add a condition to the code that if there is no ISO 639-3 code in the \"official\" database (https://iso639-3.sil.org/sites/iso639-3/files/downloads/iso-639-3.tab), look for it in the \"ISO639P3code\" column of Glottolog.\r\n> \r\n> I'm confused here... if there is no ISO639-3 code in the official database from the registrar, why would you look for an \"unofficial\" code from someone else? What is the use case here?\r\n\r\nHi @HughP, I'm happy to clear what confusion may exist here :innocent: Here is the use case. \r\nGuillaume Jacques (@rgyalrong) put together a sizeable corpus of the Japhug language. It is up on HF Datasets ([here](https://huggingface.co/datasets/Lacito/pangloss/viewer/japh1234)) as well as on Zenodo. \r\n\r\nZenodo is an all-purpose repository without adequate domain-specific metadata (\"[métadonnées métier](https://www.cines.fr/archivage/des-expertises/les-metadonnees/metadonnees-metier/)\"), and the deposits in there are not easy to locate. The Zenodo deposit is intended for a highly specific user case: someone reads about the dataset in a paper, goes to the address on Zenodo and grabs the dataset at one go. \r\n\r\nHF Datasets, on the other hand, allows users to look around among corpora. The Japhug corpus needs proper tagging so that HF Datasets users can find out about it. \r\nJaphug has an entry of its own in Glottolog, whereas it lacks an entry of its own in Ethnologue. Hence the practical usefulness of Glottolog. Ethnologue pools together, under the code `jya`, three different languages (Japhug, Tshobdun `tsho1240` and Zbu `zbua1234`). \r\n\r\nI hope that this helps.",
"> By the way, while searching for this, I came across this application: https://huggingface.co/spaces/cdleong/langcode-search\r\n\r\nReally relevant Space, so tagging its author @cdleong, just in case!",
"@cdleong A one-stop shop for language codes: terrific!\r\nHow do you feel about the use of Glottocodes? When searching the language names 'Japhug' and 'Yongning Na' (real examples, related to a HF Datasets deposit & various research projects), the relevant Glottocodes are retrieved, and that is great (and not that easy, notably with the space in the middle of 'Yongning Na'). But this positive result is 'hidden' in the results page. Specifically: \r\n\r\n- for Japhug: when searching by language name ('Japhug'), the result in big print is 'Failure', even though there is an available Glottocode (at bottom).\r\n![image](https://user-images.githubusercontent.com/6072524/188604619-a5032f53-6d2c-4751-b83b-bf70a5bf3b22.png)\r\nWhen searching by Glottocode (japh1234), same outcome: 'Result: failure!' (even though this _is_ the right Glottocode\r\nWhen searching by x-japh1234 (Glottocode encapsulated in BCP 47 syntax), one gets the message \r\n\r\n> ''x-japh1234' parses meaningfully as a language tag according to IANA\"\r\n\r\nbut there is paradoxically no link provided to Glottolog: the 'Glottolog' part of the results page is empty\r\n![image](https://user-images.githubusercontent.com/6072524/188605698-91a39982-ae70-4c48-94ae-cceeb06c25f5.png)\r\n\r\n- Yongning Na: the correct code is identified (yong1288) but instead of foregrounding this exact match, the first result that comes up is a completely different language, called 'Yong'. \r\n\r\nTrying to formulate a conclusion (admittedly, this note is not based on intensive testing, it is just feedback on initial contact): from a user perspective, it seems that the tool could make more extensive use of Glottolog. `langcode-search` does a great job querying Glottolog, why not make more extensive use of that information? (including: to arrive at the nearest ISO 639-3 code)"
] | 1,661,285,664,000 | 1,663,140,750,000 | null | NONE | null | **The problem:**
Language diversity is an important dimension of the diversity of datasets. To find one's way around datasets, being able to search by language name and by standardized codes appears crucial.
Currently the list of language codes is [here](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/resources/languages.json), right? At about 1,500 entries, it is roughly at 1/4th of the world's diversity of extant languages. (Probably less, as the list of 1,418 contains variants that are linguistically very close: 108 varieties of English, for instance.)
Looking forward to ever increasing coverage, how will the list of language names and language codes improve over time?
Enrichment of the custom list by HFT contributors (like [here](https://github.com/huggingface/datasets/pull/4880)) has several issues:
* progress is likely to be slow:
![image](https://user-images.githubusercontent.com/6072524/186253353-62f42168-3d31-4105-be1c-5eb1f818d528.png)
(input required from reviewers, etc.)
* the more contributors, the less consistency can be expected among contributions. No need to elaborate on how much confusion is likely to ensue as datasets accumulate.
* there is no information on which language relates with which: no encoding of the special closeness between the languages of the Northwestern Germanic branch (English+Dutch+German etc.), for instance. Information on phylogenetic closeness can be relevant to run experiments on transfer of technology from one language to its close relatives.
**A solution that seems desirable:**
Connecting to an established database that (i) aims at full coverage of the world's languages and (ii) has information on higher-level groupings, alternative names, etc.
It takes a lot of hard work to do such databases. Two important initiatives are [Ethnologue](https://www.ethnologue.com/) (ISO standard) and [Glottolog](https://glottolog.org/). Both have pros and cons. Glottolog contains references to Ethnologue identifiers, so adopting Glottolog entails getting the advantages of both sets of language codes.
Both seem technically accessible & 'developer-friendly'. Glottolog has a [GitHub repo](https://github.com/glottolog/glottolog). For Ethnologue, harvesting tools have been devised (see [here](https://github.com/lyy1994/ethnologue); I did not try it out).
In case a conversation with linguists seems in order here, I'd be happy to participate ('pro bono', of course), and to rustle up more colleagues as needed, to help this development happen.
With appreciation of HFT, | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4881/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4881/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4880 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4880/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4880/comments | https://api.github.com/repos/huggingface/datasets/issues/4880/events | https://github.com/huggingface/datasets/pull/4880 | 1,348,452,776 | PR_kwDODunzps49qyJr | 4,880 | Added names of less-studied languages | {
"login": "BenjaminGalliot",
"id": 23100612,
"node_id": "MDQ6VXNlcjIzMTAwNjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/23100612?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenjaminGalliot",
"html_url": "https://github.com/BenjaminGalliot",
"followers_url": "https://api.github.com/users/BenjaminGalliot/followers",
"following_url": "https://api.github.com/users/BenjaminGalliot/following{/other_user}",
"gists_url": "https://api.github.com/users/BenjaminGalliot/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenjaminGalliot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenjaminGalliot/subscriptions",
"organizations_url": "https://api.github.com/users/BenjaminGalliot/orgs",
"repos_url": "https://api.github.com/users/BenjaminGalliot/repos",
"events_url": "https://api.github.com/users/BenjaminGalliot/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenjaminGalliot/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"OK, I removed Glottolog codes and only added ISO 639-3 ones. The former are for the moment in corpus card description, language details, and in subcorpora names.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4880). All of your documentation changes will be reflected on that endpoint."
] | 1,661,283,158,000 | 1,661,345,566,000 | 1,661,345,566,000 | CONTRIBUTOR | null | Added names of less-studied languages (nru – Narua and jya – Japhug) for existing datasets. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4880/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4880/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4880",
"html_url": "https://github.com/huggingface/datasets/pull/4880",
"diff_url": "https://github.com/huggingface/datasets/pull/4880.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4880.patch",
"merged_at": 1661345566000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4879 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4879/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4879/comments | https://api.github.com/repos/huggingface/datasets/issues/4879/events | https://github.com/huggingface/datasets/pull/4879 | 1,348,346,407 | PR_kwDODunzps49qbOl | 4,879 | Fix Citation Information section in dataset cards | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4879). All of your documentation changes will be reflected on that endpoint."
] | 1,661,278,003,000 | 1,661,314,148,000 | 1,661,314,147,000 | MEMBER | null | Fix Citation Information section in dataset cards.
This PR partially fixes the Citation Information section in dataset cards. Subsequent PRs will follow to complete this task. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4879/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4879/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4879",
"html_url": "https://github.com/huggingface/datasets/pull/4879",
"diff_url": "https://github.com/huggingface/datasets/pull/4879.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4879.patch",
"merged_at": 1661314147000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4878 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4878/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4878/comments | https://api.github.com/repos/huggingface/datasets/issues/4878/events | https://github.com/huggingface/datasets/issues/4878 | 1,348,270,141 | I_kwDODunzps5QXPg9 | 4,878 | [not really a bug] `identical_ok` is deprecated in huggingface-hub's `upload_file` | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892884,
"node_id": "MDU6TGFiZWwxOTM1ODkyODg0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/help%20wanted",
"name": "help wanted",
"color": "008672",
"default": true,
"description": "Extra attention is needed"
},
{
"id": 1935892912,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "Further information is requested"
}
] | closed | false | null | [] | null | [
"Resolved via https://github.com/huggingface/datasets/pull/4937."
] | 1,661,274,595,000 | 1,663,077,606,000 | 1,663,077,605,000 | CONTRIBUTOR | null | In the huggingface-hub dependency, the `identical_ok` argument has no effect in `upload_file` (and it will be removed soon)
See
https://github.com/huggingface/huggingface_hub/blob/43499582b19df1ed081a5b2bd7a364e9cacdc91d/src/huggingface_hub/hf_api.py#L2164-L2169
It's used here:
https://github.com/huggingface/datasets/blob/fcfcc951a73efbc677f9def9a8707d0af93d5890/src/datasets/dataset_dict.py#L1373-L1381
https://github.com/huggingface/datasets/blob/fdcb8b144ce3ef241410281e125bd03e87b8caa1/src/datasets/arrow_dataset.py#L4354-L4362
https://github.com/huggingface/datasets/blob/fdcb8b144ce3ef241410281e125bd03e87b8caa1/src/datasets/arrow_dataset.py#L4197-L4213
We should remove it.
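For illustration, here is a rough sketch of what the call sites could look like once the argument is dropped — the example values (repo id, paths, token) are made up, only the removed keyword matters:
```python
from huggingface_hub import HfApi

api = HfApi()
api.upload_file(
    path_or_fileobj="data.parquet",  # hypothetical local file
    path_in_repo="data/train-00000-of-00001.parquet",  # hypothetical repo path
    repo_id="user/my_dataset",
    repo_type="dataset",
    token="hf_xxx",
    # identical_ok=True,  # <- simply removed: the argument is deprecated and has no effect
)
```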
Maybe the third code sample has an unexpected behavior since it uses the non-default value `identical_ok = False`, but the argument is ignored. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4878/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4878/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4877 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4877/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4877/comments | https://api.github.com/repos/huggingface/datasets/issues/4877/events | https://github.com/huggingface/datasets/pull/4877 | 1,348,246,755 | PR_kwDODunzps49qF-w | 4,877 | Fix documentation card of covid_qa_castorini dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4877). All of your documentation changes will be reflected on that endpoint."
] | 1,661,273,553,000 | 1,661,277,901,000 | 1,661,277,900,000 | MEMBER | null | Fix documentation card of covid_qa_castorini dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4877/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4877/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4877",
"html_url": "https://github.com/huggingface/datasets/pull/4877",
"diff_url": "https://github.com/huggingface/datasets/pull/4877.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4877.patch",
"merged_at": 1661277900000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4876 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4876/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4876/comments | https://api.github.com/repos/huggingface/datasets/issues/4876/events | https://github.com/huggingface/datasets/issues/4876 | 1,348,202,678 | I_kwDODunzps5QW_C2 | 4,876 | Move DatasetInfo from `datasets_infos.json` to the YAML tags in `README.md` | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"also @osanseviero @Pierrci @SBrandeis potentially",
"Love this in principle 🚀 \r\n\r\nLet's keep in mind users might rely on `dataset_infos.json` already.\r\n\r\nI'm not convinced by the two-syntax solution, wouldn't it be simpler to have only one syntax with a `default` config for datasets with only one config? ie, always having the `configs` field. This makes parsing the metadata easier IMO.\r\n\r\nMight also be good to wrap the tags under a `datasets_info` tag as follows:\r\n\r\n```yaml\r\ndescription: ...\r\ncitation: ...\r\ndataset_infos:\r\n download_size: 35142551\r\n dataset_size: 89789763\r\n version: 1.0.0\r\n configs:\r\n - ...\r\n[...]\r\n```\r\n\r\nLet's also keep in mind that extracting YAML metadata from a markdown readme is a bit more fastidious for users than just parsing a JSON file.",
"> Let's keep in mind users might rely on dataset_infos.json already.\r\n\r\nYea we'll full full backward compatibility\r\n\r\n> Let's also keep in mind that extracting YAML metadata from a markdown readme is a bit more fastidious for users than just parsing a JSON file.\r\n\r\nThe main things that may use or ingest these data IMO are:\r\n- users in the UI or IDE\r\n- `datasets` to populate `DatasetInfo` python object\r\n- moon landing which is already parsing YAML\r\n\r\nAm I missing something ? If not I think it's ok to use YAML\r\n\r\n> Might also be good to wrap the tags under a datasets_info tag as follows:\r\n\r\nMaybe one single syntax like this then ?\r\n```yaml\r\ndataset_infos:\r\n- config: unlabeled\r\n download_size: 35142551\r\n dataset_size: 89789763\r\n version: 1.0.0\r\n splits:\r\n - name: train\r\n num_examples: 10000\r\n features:\r\n - name: text\r\n dtype: string\r\n- config: labeled\r\n download_size: 35142551\r\n dataset_size: 89789763\r\n version: 1.0.0\r\n splits:\r\n - name: train\r\n num_examples: 100\r\n features:\r\n - name: text\r\n dtype: string\r\n - name: label\r\n dtype: ClassLabel\r\n names:\r\n - negative\r\n - positive\r\n```\r\nand when you have only one config\r\n```yaml\r\ndataset_infos:\r\n- config: default\r\n splits:\r\n - name: train\r\n num_examples: 10000\r\n features:\r\n - name: text\r\n dtype: string\r\n```",
"love the idea, and the trend in general to move more things (like tasks) to a single place (YAML).\r\n\r\nalso, if you browse files on a dataset's page (in \"Files and versions\"), raw `README.md` files looks nice and readable, while `.json` files are just one long line that users need to scroll. \r\n\r\n> Let's also keep in mind that extracting YAML metadata from a markdown readme is a bit more fastidious for users than just parsing a JSON file.\r\n\r\ndo users often parse `datasets_infos.json` file themselves? ",
"> do users often parse datasets_infos.json file themselves?\r\n\r\nNot AFAIK, but I'm sure there should be a few users.\r\nUsers that access these info via the `DatasetInfo` from `datasets` won't see the change though e.g.\r\n```python\r\n>> from datasets import get_datasets_infos\r\n>>> get_datasets_infos(\"squad\")\r\n{'plain_text': DatasetInfo(description='Stanford Question Answering Dataset...\r\n```",
"> Maybe one single syntax like this then ?\r\n\r\nLGTM!\r\n\r\n> The main things that may use or ingest these data IMO are:\r\n> - users in the UI or IDE\r\n> - datasets to populate DatasetInfo python object\r\n> - moon landing which is already parsing YAML\r\n\r\nFair point!\r\n\r\nHaving dataset info in the README's YAML is great for API / `huggingface_hub` consumers as well as it will be inserted in the `cardData` field out of the box 🔥 \r\n",
"Very supportive of this!\r\n\r\nNesting an array of configs inside `dataset_infos: ` sounds good to me. One small tweak is that `config: default` can be optional for the default config (which can be the first one by convention)\r\n\r\nWe'll be able to implement metadata validation on the Hub side so we ensure that those metadata are always in the right format (maybe for @coyotte508 ? cc @Pierrci). From a quick glance the `features` might be the harder part to validate here, any doc will be welcome.\r\n\r\n### Other high-level points:\r\n- as we move from mostly academic datasets to *all* datasets (which include the data inside the repos), my intuition is that more and more datasets (Hub-stored) are going to be **single-config**\r\n- similarly, less and less datasets will have a loading script, **just the data + some metadata**\r\n- to lower the barrier to entry to contribution, in the long term users shouldn't need to compute/update this data via a command line. It could be filled automatically on the Hub through a \"bot\" inside Discussions & Pull requests for instance.",
"re: `config: default`\r\n\r\nNote also that the default config is not named `default`, afaiu, but create from the repo name, eg: https://huggingface.co/datasets/nbtpj/bionlp2021SAS default config is `nbtpj--bionlp2021SAS` (which is awful)",
"> Note also that the default config is not named default, afaiu, but create from the repo name, eg: https://huggingface.co/datasets/nbtpj/bionlp2021SAS default config is nbtpj--bionlp2021SAS (which is awful)\r\n\r\nWe can change this to `default` I think or something else",
"> From a quick glance the features might be the harder part to validate here, any doc will be welcome.\r\n\r\nI dug into features validation, see:\r\n\r\n- the OpenAPI spec: https://github.com/huggingface/datasets-server/blob/main/chart/static-files/openapi.json#L460-L697\r\n- the node.js code: https://github.com/huggingface/moon-landing/blob/upgrade-datasets-server-client/server/lib/datasets/FeatureType.ts",
"> We can change this to default I think or something else\r\n\r\nI created https://github.com/huggingface/datasets/issues/4902 to discuss that",
"> Note also that the default config is not named `default`, afaiu, but create from the repo name\r\n\r\nin case of single-config you can even hide the config name from the UI IMO\r\n\r\n> I dug into features validation, see: the OpenAPI spec\r\n\r\nin moon-landing we use [Joi](https://joi.dev/api/) to validate metadata so we would need to generate from Joi code from the OpenAPI spec (or from somewhere else) but I guess that's doable – or just rewrite it manually, as it won't change often",
"I remember there was an ongoing discussion on this topic:\r\n- #3507\r\n\r\nI recall some of the concerns raised on that discussion:\r\n- @lhoestq: Tensorflow Datasets catalog includes a community catalog where you can find and use HF datasets. They are using the exported dataset_infos.json files from github to get the metadata: [#3507 (comment)](https://github.com/huggingface/datasets/issues/3507#issuecomment-1056997627)\r\n- @severo: [#3507 (comment)](https://github.com/huggingface/datasets/issues/3507#issuecomment-1042779776)\r\n - the metadata header might be very long, before reaching the start of the README/dataset card. \r\n - It also somewhat prevents including large strings like the checksums\r\n - two concepts are mixed in the same file (metadata and documentation). This means that if you're interested only in one of them, you still have to know how to parse the whole file. \r\n- @severo: the future \"datasets server\" could be in charge of generating the dataset-info.json file: [#3507 (comment)](https://github.com/huggingface/datasets/issues/3507#issuecomment-1033752157)",
"Thanks for bringing these points up !\r\n\r\n> @lhoestq: Tensorflow Datasets catalog includes a community catalog where you can find and use HF datasets. They are using the exported dataset_infos.json files from github to get the metadata: https://github.com/huggingface/datasets/issues/3507#issuecomment-1056997627\r\n\r\nThe TFDS implementation is not super advanced, so it's ok IMO as long as we don't break all the dataset scripts. Note that users can still use `to_tf_dataset`.\r\n\r\nWe had a chance to discuss the two nexts points with @julien-c as well:\r\n\r\n> @severo: https://github.com/huggingface/datasets/issues/3507#issuecomment-1042779776\r\nthe metadata header might be very long, before reaching the start of the README/dataset card.\r\n\r\nIf we don't add the checksums we should be fine. We can also set a maximum number of supported configs in the README to keep it readable.\r\n\r\n> @severo: the future \"datasets server\" could be in charge of generating the dataset-info.json file: https://github.com/huggingface/datasets/issues/3507#issuecomment-1033752157\r\n\r\nI guess the \"HF Hub actions\" could open PRs to do the same in the YAML directly\r\n",
"Thanks for linking that similar discussion for context, @albertvillanova!"
] | 1,661,271,401,000 | 1,661,782,709,000 | null | MEMBER | null | Currently there are two places to find metadata for datasets:
- datasets_infos.json, which contains **per dataset config**
- description
- citation
- license
- splits and sizes
- checksums of the data files
- feature types
- and more
- YAML tags, which contain
- license
- language
- train-eval-index
- and more
It would be nice to have a single place instead. We can rely on the YAML tags more than the JSON file for consistency with models. And it would all be indexed by our back-end directly, which is nice to have.
One way would be to move everything to the YAML tags except the checksums (there can be tens of thousands of them). The description/citation is already in the dataset card, so we probably don't need to have them in the YAML as well; it would be redundant.
Here is an example for SQuAD
```yaml
download_size: 35142551
dataset_size: 89789763
version: 1.0.0
splits:
- name: train
num_examples: 87599
num_bytes: 79317110
- name: validation
num_examples: 10570
num_bytes: 10472653
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
struct:
- name: text
list:
dtype: string
- name: answer_start
list:
dtype: int32
```
Since there is only one configuration for SQuAD, this structure is ok. For datasets with several configs, we can look at that in a second step, but IMO it would be ok to have these fields per config using another syntax
```yaml
configs:
- config: unlabeled
splits:
- name: train
num_examples: 10000
features:
- name: text
dtype: string
- config: labeled
splits:
- name: train
num_examples: 100
features:
- name: text
dtype: string
- name: label
dtype: ClassLabel
names:
- negative
- positive
```
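Whichever syntax we end up with, reading it back on the consumer side stays simple — a rough sketch (file path and key names are illustrative, nothing final) using `pyyaml` on the README front matter:
```python
import yaml

# Rough sketch: grab the YAML block between the leading "---" markers of a dataset card
# and inspect the per-config metadata proposed above.
with open("README.md", encoding="utf-8") as f:
    front_matter = f.read().split("---")[1]  # naive split, fine for a sketch

metadata = yaml.safe_load(front_matter)
for config in metadata.get("configs", []):
    split_names = [split["name"] for split in config.get("splits", [])]
    print(config["config"], split_names)
```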
So in the end you could specify a YAML tag either at the top level (for all configs) or per config in the `configs` field
Alternatively we could keep config-specific stuff in the `dataset_infos.json` as it is today
Not sure yet what's the best approach here but cc @julien-c @mariosasko @albertvillanova @polinaeterna for feedback :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4876/reactions",
"total_count": 7,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 4
} | https://api.github.com/repos/huggingface/datasets/issues/4876/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4875 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4875/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4875/comments | https://api.github.com/repos/huggingface/datasets/issues/4875/events | https://github.com/huggingface/datasets/issues/4875 | 1,348,095,686 | I_kwDODunzps5QWk7G | 4,875 | `_resolve_features` ignores the token | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi ! Your HF_ENDPOINT seems wrong because of the extra \"/\"\r\n```diff\r\n- os.environ[\"HF_ENDPOINT\"] = \"https://hub-ci.huggingface.co/\"\r\n+ os.environ[\"HF_ENDPOINT\"] = \"https://hub-ci.huggingface.co\"\r\n```\r\n\r\ncan you try again without the extra \"/\" ?",
"Oh, yes, sorry, but it's not the issue.\r\n\r\nIn my code, I set `HF_ENDPOINT=https://hub-ci.huggingface.co`. I added `os.environ[\"HF_ENDPOINT\"] = \"https://hub-ci.huggingface.co/\"` afterward just to indicate that we had to have this env var and made a mistake there",
"I can't reproduce on my side. I tried using a private dataset repo with a CSV file on hub-ci\r\n\r\nWhat's your version of `huggingface_hub` ?",
"I can't reproduce either... Not sure what has occurred, very sorry to have made you lost your time on that "
] | 1,661,266,656,000 | 1,661,358,821,000 | 1,661,358,810,000 | CONTRIBUTOR | null | ## Describe the bug
When calling [`_resolve_features()`](https://github.com/huggingface/datasets/blob/54b532a8a2f5353fdb0207578162153f7b2da2ec/src/datasets/iterable_dataset.py#L1255) on a gated dataset, ie. a dataset which requires a token to be loaded, the token seems to be ignored even if it has been provided to `load_dataset` before.
## Steps to reproduce the bug
```python
import os
os.environ["HF_ENDPOINT"] = "https://hub-ci.huggingface.co/"
hf_token = "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"
from datasets import load_dataset
# public
dataset_name = "__DUMMY_DATASETS_SERVER_USER__/repo_csv_data-16612654226756"
config_name = "__DUMMY_DATASETS_SERVER_USER__--repo_csv_data-16612654226756"
split_name = "train"
iterable_dataset = load_dataset(
dataset_name,
name=config_name,
split=split_name,
streaming=True,
use_auth_token=hf_token,
)
iterable_dataset = iterable_dataset._resolve_features()
print(iterable_dataset.features)
# gated
dataset_name = "__DUMMY_DATASETS_SERVER_USER__/repo_csv_data-16612654317644"
config_name = "__DUMMY_DATASETS_SERVER_USER__--repo_csv_data-16612654317644"
split_name = "train"
iterable_dataset = load_dataset(
dataset_name,
name=config_name,
split=split_name,
streaming=True,
use_auth_token=hf_token,
)
try:
iterable_dataset = iterable_dataset._resolve_features()
except FileNotFoundError as e:
print("FAILS")
```
## Expected results
I expect to have the same result on a public dataset and on a gated (or private) dataset, if the token has been provided.
## Actual results
An exception is thrown on gated datasets.
## Environment info
- `datasets` version: 2.4.0
- Platform: Linux-5.15.0-1017-aws-x86_64-with-glibc2.35
- Python version: 3.9.6
- PyArrow version: 7.0.0
- Pandas version: 1.4.2 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4875/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4875/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4874 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4874/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4874/comments | https://api.github.com/repos/huggingface/datasets/issues/4874/events | https://github.com/huggingface/datasets/pull/4874 | 1,347,618,197 | PR_kwDODunzps49n_nI | 4,874 | [docs] Some tiny doc tweaks | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4874). All of your documentation changes will be reflected on that endpoint."
] | 1,661,246,380,000 | 1,661,362,077,000 | 1,661,362,076,000 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4874/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4874/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4874",
"html_url": "https://github.com/huggingface/datasets/pull/4874",
"diff_url": "https://github.com/huggingface/datasets/pull/4874.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4874.patch",
"merged_at": 1661362076000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4873 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4873/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4873/comments | https://api.github.com/repos/huggingface/datasets/issues/4873/events | https://github.com/huggingface/datasets/issues/4873 | 1,347,592,022 | I_kwDODunzps5QUp9W | 4,873 | Multiple dataloader memory error | {
"login": "cyk1337",
"id": 13767887,
"node_id": "MDQ6VXNlcjEzNzY3ODg3",
"avatar_url": "https://avatars.githubusercontent.com/u/13767887?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyk1337",
"html_url": "https://github.com/cyk1337",
"followers_url": "https://api.github.com/users/cyk1337/followers",
"following_url": "https://api.github.com/users/cyk1337/following{/other_user}",
"gists_url": "https://api.github.com/users/cyk1337/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyk1337/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyk1337/subscriptions",
"organizations_url": "https://api.github.com/users/cyk1337/orgs",
"repos_url": "https://api.github.com/users/cyk1337/repos",
"events_url": "https://api.github.com/users/cyk1337/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyk1337/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi!\r\n\r\n200+ data loaders is a lot. Have you tried to reduce the number of datasets by concatenating/interleaving the ones with the same structure/task (the API is `{concatenate_datasets/interleave_datasets}([dset1, ..., dset_N])`)?",
"Hi @mariosasko, thank you for your reply. I tried pre-concatenating different datasets into one, but one key need is to keep each batch the same data type. Considering that the concatenate-then-segment operation for prefetched samples may span across different data types after concatenating/interleaving (cuz different data sources are mixed), any solution to remain the same data source for each batch?"
] | 1,661,245,190,000 | 1,662,692,577,000 | null | NONE | null | For the use of multiple datasets and tasks, we use more than 200 dataloaders and pass them into `dataloader1, dataloader2, ..., dataloader200=accelerate.prepare(dataloader1, dataloader2, ..., dataloader200)`
This causes a memory error when generating batches. Are there any solutions?
```bash
File "/home/xxx/my_code/src/utils/data_utils.py", line 54, in generate_batch
x = next(iterator)
File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/accelerate/data_loader.py", line 301, in __iter__
for batch in super().__iter__():
File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
data = self._next_data()
File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 475, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 28, in fetch
data.append(next(self.dataset_iter))
File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/accelerate/data_loader.py", line 249, in __iter__
for element in self.dataset:
File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 503, in __iter__
for key, example in self._iter():
File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 500, in _iter
yield from ex_iterable
File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 231, in __iter__
new_key = "_".join(str(key) for key in keys)
MemoryError
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4873/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4873/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4872 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4872/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4872/comments | https://api.github.com/repos/huggingface/datasets/issues/4872/events | https://github.com/huggingface/datasets/pull/4872 | 1,347,180,765 | PR_kwDODunzps49mjU9 | 4,872 | [WIP] Docs for creating an audio dataset | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4872). All of your documentation changes will be reflected on that endpoint.",
"Awesome thanks ! I think we can also encourage TAR archives as for image dataset scripts (feel free to copy paste some parts from there lol)",
"Thanks for all the great feedback @polinaeterna and @lhoestq! 🥰\r\n\r\nI added all the other feedback, and I'll look into the `librivox-indonesia` script now!",
"If you don't mind, I'm taking over this PR since we'll do a release pretty soon"
] | 1,661,216,829,000 | 1,663,603,761,000 | null | MEMBER | null | This PR is a first draft of how to create audio datasets (`AudioFolder` and loading script). Feel free to let me know if there are any specificities I'm missing for this. 🙂 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4872/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4872/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4872",
"html_url": "https://github.com/huggingface/datasets/pull/4872",
"diff_url": "https://github.com/huggingface/datasets/pull/4872.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4872.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4871 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4871/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4871/comments | https://api.github.com/repos/huggingface/datasets/issues/4871/events | https://github.com/huggingface/datasets/pull/4871 | 1,346,703,568 | PR_kwDODunzps49k9Rm | 4,871 | Fix: wmt datasets - fix CWMT zh subsets | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4871). All of your documentation changes will be reflected on that endpoint."
] | 1,661,186,529,000 | 1,661,248,820,000 | 1,661,248,819,000 | MEMBER | null | Fix https://github.com/huggingface/datasets/issues/4575
TODO: run `datasets-cli test`:
- [x] wmt17
- [x] wmt18
- [x] wmt19 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4871/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4871/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4871",
"html_url": "https://github.com/huggingface/datasets/pull/4871",
"diff_url": "https://github.com/huggingface/datasets/pull/4871.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4871.patch",
"merged_at": 1661248819000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4870 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4870/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4870/comments | https://api.github.com/repos/huggingface/datasets/issues/4870/events | https://github.com/huggingface/datasets/pull/4870 | 1,346,160,498 | PR_kwDODunzps49jGxD | 4,870 | audio folder check CI | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,661,163,353,000 | 1,661,171,672,000 | 1,661,170,780,000 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4870/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4870/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4870",
"html_url": "https://github.com/huggingface/datasets/pull/4870",
"diff_url": "https://github.com/huggingface/datasets/pull/4870.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4870.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4869 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4869/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4869/comments | https://api.github.com/repos/huggingface/datasets/issues/4869/events | https://github.com/huggingface/datasets/pull/4869 | 1,345,513,758 | PR_kwDODunzps49hBGY | 4,869 | Fix typos in documentation | {
"login": "fl-lo",
"id": 85993954,
"node_id": "MDQ6VXNlcjg1OTkzOTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/85993954?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fl-lo",
"html_url": "https://github.com/fl-lo",
"followers_url": "https://api.github.com/users/fl-lo/followers",
"following_url": "https://api.github.com/users/fl-lo/following{/other_user}",
"gists_url": "https://api.github.com/users/fl-lo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fl-lo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fl-lo/subscriptions",
"organizations_url": "https://api.github.com/users/fl-lo/orgs",
"repos_url": "https://api.github.com/users/fl-lo/repos",
"events_url": "https://api.github.com/users/fl-lo/events{/privacy}",
"received_events_url": "https://api.github.com/users/fl-lo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,661,094,603,000 | 1,661,160,339,000 | 1,661,159,398,000 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4869/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4869/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4869",
"html_url": "https://github.com/huggingface/datasets/pull/4869",
"diff_url": "https://github.com/huggingface/datasets/pull/4869.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4869.patch",
"merged_at": 1661159398000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4868 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4868/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4868/comments | https://api.github.com/repos/huggingface/datasets/issues/4868/events | https://github.com/huggingface/datasets/pull/4868 | 1,345,191,322 | PR_kwDODunzps49gBk0 | 4,868 | adding mafand to datasets | {
"login": "dadelani",
"id": 23586676,
"node_id": "MDQ6VXNlcjIzNTg2Njc2",
"avatar_url": "https://avatars.githubusercontent.com/u/23586676?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dadelani",
"html_url": "https://github.com/dadelani",
"followers_url": "https://api.github.com/users/dadelani/followers",
"following_url": "https://api.github.com/users/dadelani/following{/other_user}",
"gists_url": "https://api.github.com/users/dadelani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dadelani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dadelani/subscriptions",
"organizations_url": "https://api.github.com/users/dadelani/orgs",
"repos_url": "https://api.github.com/users/dadelani/repos",
"events_url": "https://api.github.com/users/dadelani/events{/privacy}",
"received_events_url": "https://api.github.com/users/dadelani/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892913,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEz",
"url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": "This will not be worked on"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @dadelani, thanks for your awesome contribution!!! :heart: \r\n\r\nHowever, now we are using the Hub to add new datasets, instead of this GitHub repo. \r\n\r\nYou could share this dataset under your Hub organization namespace: [Masakhane NLP](https://huggingface.co/masakhane). This way the dataset will be accessible using:\r\n```python\r\nds = load_dataset(\"masakhane/mafand\")\r\n```\r\n\r\nYou have the procedure documented in our online docs: \r\n- [Create a dataset loading script](https://huggingface.co/docs/datasets/dataset_script)\r\n- [Share](https://huggingface.co/docs/datasets/share)\r\n\r\nMoreover, datasets shared on the Hub no longer need the dummy data files.\r\n\r\nPlease, feel free to ping me if you need any further guidance/support.",
"thank you for the comment. I have moved it to the Hub https://huggingface.co/datasets/masakhane/mafand",
"Great job, @dadelani!!\r\n\r\nPlease, note that in the README.md file, the YAML tags should be preceded and followed by three dashes `---`, so that they are properly parsed. See, e.g.: https://raw.githubusercontent.com/huggingface/datasets/main/templates/README.md",
"Also you could replace the line:\r\n```\r\n# Dataset Card for [Needs More Information]\r\n```\r\nwith\r\n```\r\n# Dataset Card for MAFAND-MT\r\n```",
"Great, thank you for the feedback. I have fixed both issues."
] | 1,661,009,174,000 | 1,661,166,050,000 | 1,661,158,343,000 | CONTRIBUTOR | null | I'm adding the MAFAND dataset by Masakhane based on the paper/repository below:
Paper: https://aclanthology.org/2022.naacl-main.223/
Code: https://github.com/masakhane-io/lafand-mt
Please, help merge this
Everything works except for creating the dummy data file. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4868/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4868/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4868",
"html_url": "https://github.com/huggingface/datasets/pull/4868",
"diff_url": "https://github.com/huggingface/datasets/pull/4868.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4868.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4867 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4867/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4867/comments | https://api.github.com/repos/huggingface/datasets/issues/4867/events | https://github.com/huggingface/datasets/pull/4867 | 1,344,982,646 | PR_kwDODunzps49fZle | 4,867 | Complete tags of superglue dataset card | {
"login": "richarddwang",
"id": 17963619,
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richarddwang",
"html_url": "https://github.com/richarddwang",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660,952,679,000 | 1,661,159,643,000 | 1,661,158,711,000 | CONTRIBUTOR | null | Related to #4479 . | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4867/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4867/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4867",
"html_url": "https://github.com/huggingface/datasets/pull/4867",
"diff_url": "https://github.com/huggingface/datasets/pull/4867.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4867.patch",
"merged_at": 1661158711000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4866 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4866/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4866/comments | https://api.github.com/repos/huggingface/datasets/issues/4866/events | https://github.com/huggingface/datasets/pull/4866 | 1,344,809,132 | PR_kwDODunzps49e1CP | 4,866 | amend docstring for dunder | {
"login": "schafsam",
"id": 37704298,
"node_id": "MDQ6VXNlcjM3NzA0Mjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/37704298?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/schafsam",
"html_url": "https://github.com/schafsam",
"followers_url": "https://api.github.com/users/schafsam/followers",
"following_url": "https://api.github.com/users/schafsam/following{/other_user}",
"gists_url": "https://api.github.com/users/schafsam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/schafsam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/schafsam/subscriptions",
"organizations_url": "https://api.github.com/users/schafsam/orgs",
"repos_url": "https://api.github.com/users/schafsam/repos",
"events_url": "https://api.github.com/users/schafsam/events{/privacy}",
"received_events_url": "https://api.github.com/users/schafsam/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4866). All of your documentation changes will be reflected on that endpoint."
] | 1,660,936,155,000 | 1,662,741,191,000 | null | NONE | null | Display dunder methods in docstrings with underscores and not as bold markdown. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4866/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4866/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4866",
"html_url": "https://github.com/huggingface/datasets/pull/4866",
"diff_url": "https://github.com/huggingface/datasets/pull/4866.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4866.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4865 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4865/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4865/comments | https://api.github.com/repos/huggingface/datasets/issues/4865/events | https://github.com/huggingface/datasets/issues/4865 | 1,344,552,626 | I_kwDODunzps5QJD6y | 4,865 | Dataset Viewer issue for MoritzLaurer/multilingual_nli | {
"login": "MoritzLaurer",
"id": 41862082,
"node_id": "MDQ6VXNlcjQxODYyMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/41862082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MoritzLaurer",
"html_url": "https://github.com/MoritzLaurer",
"followers_url": "https://api.github.com/users/MoritzLaurer/followers",
"following_url": "https://api.github.com/users/MoritzLaurer/following{/other_user}",
"gists_url": "https://api.github.com/users/MoritzLaurer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MoritzLaurer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MoritzLaurer/subscriptions",
"organizations_url": "https://api.github.com/users/MoritzLaurer/orgs",
"repos_url": "https://api.github.com/users/MoritzLaurer/repos",
"events_url": "https://api.github.com/users/MoritzLaurer/events{/privacy}",
"received_events_url": "https://api.github.com/users/MoritzLaurer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting @MoritzLaurer.\r\n\r\nCurrently, the dataset preview is working properly: https://huggingface.co/datasets/MoritzLaurer/multilingual_nli\r\n\r\nPlease note that when a dataset is modified, it might take some time until the preview is completely updated.\r\n\r\n@severo might it be worth adding a clearer error message, something like \"The preview is updating, please retry later\"?",
"Thanks for your response. You are right, its now working well. I had waited for 30 min or so and refreshed several times and thought there was some other error. Yeah, a different error message sounds like a good idea to avoid confusion. ",
"I'm closing this issue then.",
"> @severo might it be worth adding a clearer error message, something like \"The preview is updating, please retry later\"?\r\n\r\nYes, it's a known issue, and we're about to ship a better version"
] | 1,660,920,920,000 | 1,661,179,634,000 | 1,661,148,800,000 | NONE | null | ### Link
_No response_
### Description
I've just uploaded a new dataset to the hub and the viewer does not work for some reason, see here: https://huggingface.co/datasets/MoritzLaurer/multilingual_nli
It displays the error:
```
Status code: 400
Exception: Status400Error
Message: The dataset does not exist.
```
Weirdly enough, the dataset viewer works for an earlier version of the same dataset. The only difference is that it is smaller; I'm not aware of any other changes I have made: https://huggingface.co/datasets/MoritzLaurer/multilingual_nli_test
Do you know why the dataset viewer is not working?
### Owner
_No response_ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4865/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4865/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4864 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4864/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4864/comments | https://api.github.com/repos/huggingface/datasets/issues/4864/events | https://github.com/huggingface/datasets/issues/4864 | 1,344,410,043 | I_kwDODunzps5QIhG7 | 4,864 | Allow pathlib PoxisPath in Dataset.read_json | {
"login": "cccntu",
"id": 31893406,
"node_id": "MDQ6VXNlcjMxODkzNDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cccntu",
"html_url": "https://github.com/cccntu",
"followers_url": "https://api.github.com/users/cccntu/followers",
"following_url": "https://api.github.com/users/cccntu/following{/other_user}",
"gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cccntu/subscriptions",
"organizations_url": "https://api.github.com/users/cccntu/orgs",
"repos_url": "https://api.github.com/users/cccntu/repos",
"events_url": "https://api.github.com/users/cccntu/events{/privacy}",
"received_events_url": "https://api.github.com/users/cccntu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 1,660,913,957,000 | 1,660,913,957,000 | null | CONTRIBUTOR | null | **Is your feature request related to a problem? Please describe.**
```
from pathlib import Path
from datasets import Dataset
ds = Dataset.read_json(Path('data.json'))
```
causes an error
```
AttributeError: 'PosixPath' object has no attribute 'decode'
```
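Until `PosixPath` is supported there, a possible workaround (just a sketch, using the generic JSON loader rather than `Dataset.read_json`) is to pass the path as a string:
```python
from pathlib import Path
from datasets import load_dataset

path = Path("data.json")
# Passing str(path) instead of the PosixPath avoids the AttributeError
ds = load_dataset("json", data_files=str(path))
```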
**Describe the solution you'd like**
It should be able to accept a `PosixPath` and read the JSON file it points to. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4864/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4864/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4863 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4863/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4863/comments | https://api.github.com/repos/huggingface/datasets/issues/4863/events | https://github.com/huggingface/datasets/issues/4863 | 1,343,737,668 | I_kwDODunzps5QF89E | 4,863 | TFDS wiki_dialog dataset to Huggingface dataset | {
"login": "djaym7",
"id": 12378820,
"node_id": "MDQ6VXNlcjEyMzc4ODIw",
"avatar_url": "https://avatars.githubusercontent.com/u/12378820?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/djaym7",
"html_url": "https://github.com/djaym7",
"followers_url": "https://api.github.com/users/djaym7/followers",
"following_url": "https://api.github.com/users/djaym7/following{/other_user}",
"gists_url": "https://api.github.com/users/djaym7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/djaym7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/djaym7/subscriptions",
"organizations_url": "https://api.github.com/users/djaym7/orgs",
"repos_url": "https://api.github.com/users/djaym7/repos",
"events_url": "https://api.github.com/users/djaym7/events{/privacy}",
"received_events_url": "https://api.github.com/users/djaym7/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [
"@albertvillanova any help ? The linked dataset is in beam format which is similar to wikipedia dataset in huggingface that you scripted..",
"Nvm, I was able to port it to huggingface datasets, will upload to the hub soon",
"https://huggingface.co/datasets/djaym7/wiki_dialog",
"Thanks for the addition, @djaym7."
] | 1,660,863,990,000 | 1,661,161,305,000 | 1,661,145,533,000 | NONE | null | ## Adding a Dataset
- **Name:** *Wiki_dialog*
- **Description:** https://github.com/google-research/dialog-inpainting#:~:text=JSON%20object%2C%20for-,example,-%3A
- **Paper:** https://arxiv.org/abs/2205.09073
- **Data:** https://github.com/google-research/dialog-inpainting
- **Motivation:** *Research and development on the biggest corpus of dialog data*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/main/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4863/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4863/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4862 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4862/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4862/comments | https://api.github.com/repos/huggingface/datasets/issues/4862/events | https://github.com/huggingface/datasets/issues/4862 | 1,343,464,699 | I_kwDODunzps5QE6T7 | 4,862 | Got "AttributeError: 'xPath' object has no attribute 'read'" when loading an excel dataset with my own code | {
"login": "yana-xuyan",
"id": 38536635,
"node_id": "MDQ6VXNlcjM4NTM2NjM1",
"avatar_url": "https://avatars.githubusercontent.com/u/38536635?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yana-xuyan",
"html_url": "https://github.com/yana-xuyan",
"followers_url": "https://api.github.com/users/yana-xuyan/followers",
"following_url": "https://api.github.com/users/yana-xuyan/following{/other_user}",
"gists_url": "https://api.github.com/users/yana-xuyan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yana-xuyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yana-xuyan/subscriptions",
"organizations_url": "https://api.github.com/users/yana-xuyan/orgs",
"repos_url": "https://api.github.com/users/yana-xuyan/repos",
"events_url": "https://api.github.com/users/yana-xuyan/events{/privacy}",
"received_events_url": "https://api.github.com/users/yana-xuyan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"What's more, the downloaded data is actually a folder instead of an excel file.",
"Hi hi, instead of using `download_and_extract` function, I only use `download` function: `base_dir = Path(dl_manager.download(urls))`. It turns out that the code works for `datasets==2.2.2`, however, it doesn't work with `datasets==2.4.0`. ",
"Hi @yana-xuyan, thanks for reporting.\r\n\r\nIndeed you already found the answer: an Excel file should be just downloaded and not downloaded-and-extracted.\r\n\r\nThe reason why is that if you call also extract, our library will try to infer the compression format (and extract it). And Excel files are viewed as ZIP files and extracted as so (into a directory). This is because the Office Open XML is indeed a zipped file under the hood): https://en.wikipedia.org/wiki/Office_Open_XML\r\n> Office Open XML (also informally known as OOXML) is a **zipped**, XML-based file format\r\n```python\r\nimport zipfile\r\n\r\nzipfile.is_zipfile(\"filename.xlsx\")\r\n```\r\nreturns `True`.",
"Hi @albertvillanova, thank you for your reply! Do you have any clue on why the same error still exists with `datasets==2.4.0` even after I don't extract the downloaded file? FYI, if I downgrade to `datasets==2.2.2`, the code works fine.",
"I guess this has to do with the cache: you should remove the previously-wrongly generated directory from the cache; otherwise `datasets` tries to re-use it."
] | 1,660,847,774,000 | 1,661,937,908,000 | 1,661,937,908,000 | NONE | null | ## Describe the bug
Loading an Excel dataset with my own loading script fails: when the downloaded file path is passed to `pd.read_excel`, it raises `AttributeError: 'xPath' object has no attribute 'read'`.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
# The dataset function is as follows:
from pathlib import Path
from typing import Dict, List, Tuple
import datasets
import pandas as pd
_CITATION = """\
"""
_DATASETNAME = "jadi_ide"
_DESCRIPTION = """\
"""
_HOMEPAGE = ""
_LICENSE = "Unknown"
_URLS = {
    _DATASETNAME: "https://github.com/fathanick/Javanese-Dialect-Identification-from-Twitter-Data/raw/main/Update 16K_Dataset.xlsx",
}

_SOURCE_VERSION = "1.0.0"


class JaDi_Ide(datasets.GeneratorBasedBuilder):
    SOURCE_VERSION = datasets.Version(_SOURCE_VERSION)

    BUILDER_CONFIGS = [
        NusantaraConfig(
            name="jadi_ide_source",
            version=SOURCE_VERSION,
            description="JaDi-Ide source schema",
            schema="source",
            subset_id="jadi_ide",
        ),
    ]

    DEFAULT_CONFIG_NAME = "source"

    def _info(self) -> datasets.DatasetInfo:
        if self.config.schema == "source":
            features = datasets.Features(
                {
                    "id": datasets.Value("string"),
                    "text": datasets.Value("string"),
                    "label": datasets.Value("string"),
                }
            )

        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=features,
            homepage=_HOMEPAGE,
            license=_LICENSE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager: datasets.DownloadManager) -> List[datasets.SplitGenerator]:
        """Returns SplitGenerators."""
        # Dataset does not have predetermined split, putting all as TRAIN
        urls = _URLS[_DATASETNAME]
        base_dir = Path(dl_manager.download_and_extract(urls))
        data_files = {"train": base_dir}

        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "filepath": data_files["train"],
                    "split": "train",
                },
            ),
        ]

    def _generate_examples(self, filepath: Path, split: str) -> Tuple[int, Dict]:
        """Yields examples as (key, example) tuples."""
        df = pd.read_excel(filepath, engine='openpyxl')
        df.columns = ["id", "text", "label"]

        if self.config.schema == "source":
            for row in df.itertuples():
                ex = {
                    "id": str(row.id),
                    "text": row.text,
                    "label": row.label,
                }
                yield row.id, ex
```
## Expected results
Expecting to load the dataset smoothly.
## Actual results
File "/home/xuyan/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 1751, in load_dataset
use_auth_token=use_auth_token,
File "/home/xuyan/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 705, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/xuyan/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 1227, in _download_and_prepare
super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
File "/home/xuyan/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 793, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/xuyan/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 1216, in _prepare_split
desc=f"Generating {split_info.name} split",
File "/home/xuyan/anaconda3/lib/python3.7/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/home/xuyan/.cache/huggingface/modules/datasets_modules/datasets/jadi_ide/7a539f2b6f726defea8fbe36ceda17bae66c370f6d6c418e3a08d760ebef7519/jadi_ide.py", line 107, in _generate_examples
df = pd.read_excel(filepath, engine='openpyxl')
File "/home/xuyan/anaconda3/lib/python3.7/site-packages/datasets/download/streaming_download_manager.py", line 701, in xpandas_read_excel
return pd.read_excel(BytesIO(filepath_or_buffer.read()), **kwargs)
AttributeError: 'xPath' object has no attribute 'read'
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: Linux-4.15.0-142-generic-x86_64-with-debian-stretch-sid
- Python version: 3.7.4
- PyArrow version: 9.0.0
- Pandas version: 0.25.1
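For reference, a minimal sketch of the change suggested in the comments above (download the Excel file without extracting it, since `.xlsx` files are zip archives and `download_and_extract` unpacks them into a directory); the rest of the script stays the same:
```python
    def _split_generators(self, dl_manager: datasets.DownloadManager) -> List[datasets.SplitGenerator]:
        """Returns SplitGenerators."""
        urls = _URLS[_DATASETNAME]
        # Download only, no extraction: keeps the .xlsx file intact for pd.read_excel
        base_dir = Path(dl_manager.download(urls))

        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"filepath": base_dir, "split": "train"},
            ),
        ]
```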
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4862/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4862/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4861 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4861/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4861/comments | https://api.github.com/repos/huggingface/datasets/issues/4861/events | https://github.com/huggingface/datasets/issues/4861 | 1,343,260,220 | I_kwDODunzps5QEIY8 | 4,861 | Using disk for memory with the method `from_dict` | {
"login": "HugoLaurencon",
"id": 44556846,
"node_id": "MDQ6VXNlcjQ0NTU2ODQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/44556846?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HugoLaurencon",
"html_url": "https://github.com/HugoLaurencon",
"followers_url": "https://api.github.com/users/HugoLaurencon/followers",
"following_url": "https://api.github.com/users/HugoLaurencon/following{/other_user}",
"gists_url": "https://api.github.com/users/HugoLaurencon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HugoLaurencon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HugoLaurencon/subscriptions",
"organizations_url": "https://api.github.com/users/HugoLaurencon/orgs",
"repos_url": "https://api.github.com/users/HugoLaurencon/repos",
"events_url": "https://api.github.com/users/HugoLaurencon/events{/privacy}",
"received_events_url": "https://api.github.com/users/HugoLaurencon/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 1,660,835,898,000 | 1,660,835,898,000 | null | NONE | null | **Is your feature request related to a problem? Please describe.**
I start with an empty dataset. In a loop, at each iteration, I create a new dataset with the method `from_dict` (based on some data I load) and concatenate this new dataset with the one from the previous iteration. After some iterations, I get an OOM error.
**Describe the solution you'd like**
The method `from_dict` loads the data in RAM. It could be good to add an option to use the disk instead.
**Describe alternatives you've considered**
To solve the problem, I have to do an intermediate step where I save the new datasets at each iteration with `save_to_disk`. Once it's done, I open them all and concatenate them.
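A rough sketch of that workaround (the `chunks` list below is only a stand-in for the data actually loaded at each iteration):
```python
from datasets import Dataset, concatenate_datasets, load_from_disk

# Stand-in for the data loaded at each iteration (assumption for illustration)
chunks = [{"text": ["a", "b"]}, {"text": ["c", "d"]}]

chunk_dirs = []
for i, data in enumerate(chunks):
    ds = Dataset.from_dict(data)   # still built in RAM, but only one chunk at a time
    out_dir = f"chunk_{i}"
    ds.save_to_disk(out_dir)       # written to disk as Arrow files
    chunk_dirs.append(out_dir)

# Reopened datasets are memory-mapped, so concatenating them does not load everything into RAM
full = concatenate_datasets([load_from_disk(d) for d in chunk_dirs])
```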
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4861/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4861/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4860 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4860/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4860/comments | https://api.github.com/repos/huggingface/datasets/issues/4860/events | https://github.com/huggingface/datasets/pull/4860 | 1,342,311,540 | PR_kwDODunzps49WjEu | 4,860 | Add collection3 dataset | {
"login": "pefimov",
"id": 16446994,
"node_id": "MDQ6VXNlcjE2NDQ2OTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/16446994?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pefimov",
"html_url": "https://github.com/pefimov",
"followers_url": "https://api.github.com/users/pefimov/followers",
"following_url": "https://api.github.com/users/pefimov/following{/other_user}",
"gists_url": "https://api.github.com/users/pefimov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pefimov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pefimov/subscriptions",
"organizations_url": "https://api.github.com/users/pefimov/orgs",
"repos_url": "https://api.github.com/users/pefimov/repos",
"events_url": "https://api.github.com/users/pefimov/events{/privacy}",
"received_events_url": "https://api.github.com/users/pefimov/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892913,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEz",
"url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": "This will not be worked on"
}
] | closed | false | null | [] | null | [
"Hi @pefimov. Thanks for you awesome work on this dataset contribution.\r\n\r\nHowever, now we are using the Hub to add new datasets, instead of this GitHub repo. \r\n\r\nYou could share this dataset under the appropriate Hub organization namespace. This way the dataset will be accessible using:\r\n```python\r\nds = load_dataset(\"<org_namespace>/collection3\")\r\n```\r\n\r\nYou have the procedure documented in our online docs: \r\n- [Create a dataset loading script](https://huggingface.co/docs/datasets/dataset_script)\r\n- [Share](https://huggingface.co/docs/datasets/share)\r\n\r\nMoreover, datasets shared on the Hub no longer need the dummy data files.\r\n\r\nPlease, feel free to ping me if you need any further guidance/support. ",
"> However, now we are using the Hub to add new datasets, instead of this GitHub repo.\r\n> \r\n> You could share this dataset under the appropriate Hub organization namespace. This way the dataset will be accessible using:\r\n> \r\n> ```python\r\n> ds = load_dataset(\"<org_namespace>/collection3\")\r\n> ```\r\n> \r\nHi @albertvillanova . Thank you for your response.\r\n\r\nI thought that Collection3 is large and important dataset in Russian presented in 2016 but not represented in huggingface.\r\n\r\nAlso I am not related to authors or organisation of dataset",
"The current policy of sharing datasets on the Hub instead of in this GitHub repo has no relation with the importance of the dataset: https://huggingface.co/docs/datasets/share#datasets-on-github-legacy \r\n> The distinction between a Hub dataset and a dataset from GitHub only comes from the legacy sharing workflow. It does not involve any ranking, decisioning, or opinion regarding the contents of the dataset itself.\r\n\r\nIt is not required to be an author/owner (or belong to the organization that is owner) of the dataset in order to share it on the Hub (as it was not the case when sharing them on this GitHub repo). \r\n\r\nIt is recommended to share it under an organization namespace that makes sense though. For this specific dataset, do you know of a clear organization under which it could be shared on the Hub? Maybe \"labinform\", or \"Information Research Laboratory\" or \"Lomonosov Moscow State University\"?\r\n\r\nIn cases like this, where the org is not evident, one possibility could be to contact the dataset owners/creators and ask them. According the publication paper, the authors are:\r\n- V.A. Mozharova\r\n- N.V. Loukachevitch\r\n\r\nI think maybe it would be worth contacting them.",
"@pefimov I have contacted the authors (and put you in CC).",
"Reply from the authors:\r\n> It is better to use name: Research Computing Center of Lomonosov Moscow State University (short name RCC-MSU)\r\n> https://rcc.msu.ru/en",
"I have created the corresponding org namespace and dataset empty repository: https://huggingface.co/datasets/RCC-MSU/collection3\r\n\r\n@pefimov feel free to open a PR on the Hub if you are willing to do so: \r\n- Go to the *Community* tab on the repo: https://huggingface.co/datasets/RCC-MSU/collection3/discussions\r\n- And click: *New pull request* button\r\n\r\nDocs: [Pull requests and Discussions](https://huggingface.co/docs/hub/repositories-pull-requests-discussions) on the Hub",
"Thanks"
] | 1,660,771,902,000 | 1,661,284,965,000 | 1,661,159,339,000 | NONE | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4860/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4860/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4860",
"html_url": "https://github.com/huggingface/datasets/pull/4860",
"diff_url": "https://github.com/huggingface/datasets/pull/4860.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4860.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4859 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4859/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4859/comments | https://api.github.com/repos/huggingface/datasets/issues/4859/events | https://github.com/huggingface/datasets/issues/4859 | 1,342,231,016 | I_kwDODunzps5QANHo | 4,859 | can't install using conda on Windows 10 | {
"login": "xoffey",
"id": 22627691,
"node_id": "MDQ6VXNlcjIyNjI3Njkx",
"avatar_url": "https://avatars.githubusercontent.com/u/22627691?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xoffey",
"html_url": "https://github.com/xoffey",
"followers_url": "https://api.github.com/users/xoffey/followers",
"following_url": "https://api.github.com/users/xoffey/following{/other_user}",
"gists_url": "https://api.github.com/users/xoffey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xoffey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xoffey/subscriptions",
"organizations_url": "https://api.github.com/users/xoffey/orgs",
"repos_url": "https://api.github.com/users/xoffey/repos",
"events_url": "https://api.github.com/users/xoffey/events{/privacy}",
"received_events_url": "https://api.github.com/users/xoffey/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [] | 1,660,766,257,000 | 1,660,766,257,000 | null | NONE | null | ## Describe the bug
I wanted to install using conda or Anaconda Navigator. That didn't work, so I had to install using pip.
## Steps to reproduce the bug
conda install -c huggingface -c conda-forge datasets
## Expected results
Should have indicated successful installation.
## Actual results
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
... took forever, so I cancelled it with ctrl-c
## Environment info
- `datasets` version: 2.4.0 # after installing with pip
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.9.12
- PyArrow version: 9.0.0
- Pandas version: 1.4.2
- conda version: 4.13.0
conda info
active environment : base
active env location : G:\anaconda2022
shell level : 1
user config file : C:\Users\michael\.condarc
populated config files : C:\Users\michael\.condarc
conda version : 4.13.0
conda-build version : 3.21.8
python version : 3.9.12.final.0
virtual packages : __cuda=11.1=0
__win=0=0
__archspec=1=x86_64
base environment : G:\anaconda2022 (writable)
conda av data dir : G:\anaconda2022\etc\conda
conda av metadata url : None
channel URLs : https://conda.anaconda.org/pytorch/win-64
https://conda.anaconda.org/pytorch/noarch
https://conda.anaconda.org/huggingface/win-64
https://conda.anaconda.org/huggingface/noarch
https://conda.anaconda.org/conda-forge/win-64
https://conda.anaconda.org/conda-forge/noarch
https://conda.anaconda.org/anaconda-fusion/win-64
https://conda.anaconda.org/anaconda-fusion/noarch
https://repo.anaconda.com/pkgs/main/win-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/r/win-64
https://repo.anaconda.com/pkgs/r/noarch
https://repo.anaconda.com/pkgs/msys2/win-64
https://repo.anaconda.com/pkgs/msys2/noarch
package cache : G:\anaconda2022\pkgs
C:\Users\michael\.conda\pkgs
C:\Users\michael\AppData\Local\conda\conda\pkgs
envs directories : G:\anaconda2022\envs
C:\Users\michael\.conda\envs
C:\Users\michael\AppData\Local\conda\conda\envs
platform : win-64
user-agent : conda/4.13.0 requests/2.27.1 CPython/3.9.12 Windows/10 Windows/10.0.19044
administrator : False
netrc file : None
offline mode : False
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4859/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4859/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4858 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4858/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4858/comments | https://api.github.com/repos/huggingface/datasets/issues/4858/events | https://github.com/huggingface/datasets/issues/4858 | 1,340,859,853 | I_kwDODunzps5P6-XN | 4,858 | map() function removes columns when input_columns is not None | {
"login": "pramodith",
"id": 16939722,
"node_id": "MDQ6VXNlcjE2OTM5NzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/16939722?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pramodith",
"html_url": "https://github.com/pramodith",
"followers_url": "https://api.github.com/users/pramodith/followers",
"following_url": "https://api.github.com/users/pramodith/following{/other_user}",
"gists_url": "https://api.github.com/users/pramodith/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pramodith/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pramodith/subscriptions",
"organizations_url": "https://api.github.com/users/pramodith/orgs",
"repos_url": "https://api.github.com/users/pramodith/repos",
"events_url": "https://api.github.com/users/pramodith/events{/privacy}",
"received_events_url": "https://api.github.com/users/pramodith/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi! Thanks for reporting! This looks like a bug. I've just opened a PR with the fix.",
"Awesome! Thank you. I'll close the issue once the PR gets merged. :-)"
] | 1,660,682,550,000 | 1,663,076,928,000 | 1,663,076,928,000 | NONE | null | ## Describe the bug
The `map` function removes features from the dataset that are not present in the _input_columns_ list, even though the removed columns are not mentioned in the _remove_columns_ argument.
## Steps to reproduce the bug
```python
from datasets import Dataset
ds = Dataset.from_dict({"a" : [1,2,3],"b" : [0,1,0], "c" : [2,4,5]})
def double(x, y):
    x = x * 2
    y = y * 2
    return {"d": x, "e": y}
ds.map(double, input_columns=["a","c"])
```
## Expected results
```
Dataset({
features: ['a', 'b', 'c', 'd', 'e'],
num_rows: 3
})
```
## Actual results
```
Dataset({
features: ['a', 'c', 'd', 'e'],
num_rows: 3
})
```
In this specific example, feature **b** should not be removed.
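Until this is fixed, a possible workaround (just a sketch) is to skip `input_columns` and pick the needed columns inside the mapped function, so nothing gets dropped:
```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3], "b": [0, 1, 0], "c": [2, 4, 5]})

def double(example):
    # Select the needed columns inside the function instead of via input_columns
    return {"d": example["a"] * 2, "e": example["c"] * 2}

ds = ds.map(double)
print(ds.column_names)  # ['a', 'b', 'c', 'd', 'e']
```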
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: linux (colab)
- Python version: 3.7.13
- PyArrow version: 6.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4858/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4858/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4857 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4857/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4857/comments | https://api.github.com/repos/huggingface/datasets/issues/4857/events | https://github.com/huggingface/datasets/issues/4857 | 1,340,397,153 | I_kwDODunzps5P5NZh | 4,857 | No preprocessed wikipedia is working on huggingface/datasets | {
"login": "aninrusimha",
"id": 30733039,
"node_id": "MDQ6VXNlcjMwNzMzMDM5",
"avatar_url": "https://avatars.githubusercontent.com/u/30733039?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aninrusimha",
"html_url": "https://github.com/aninrusimha",
"followers_url": "https://api.github.com/users/aninrusimha/followers",
"following_url": "https://api.github.com/users/aninrusimha/following{/other_user}",
"gists_url": "https://api.github.com/users/aninrusimha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aninrusimha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aninrusimha/subscriptions",
"organizations_url": "https://api.github.com/users/aninrusimha/orgs",
"repos_url": "https://api.github.com/users/aninrusimha/repos",
"events_url": "https://api.github.com/users/aninrusimha/events{/privacy}",
"received_events_url": "https://api.github.com/users/aninrusimha/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Thanks for reporting @aninrusimha.\r\n\r\nPlease, note that the preprocessed datasets are still available, as described in the dataset card, e.g.: https://huggingface.co/datasets/wikipedia\r\n```python\r\nds = load_dataset(\"wikipedia\", \"20220301.en\")\r\n``` ",
"This is working now, but I was getting an error a few days ago when running an existing script. Unfortunately I did not do a proper bug report, but for some reason I was unable to load the dataset due to a request being made to the wikimedia website. However, its working now. Thanks for the reply!"
] | 1,660,658,133,000 | 1,660,743,308,000 | 1,660,743,308,000 | NONE | null | ## Describe the bug
The 20220301 wikipedia dump has been deprecated, so there is currently no working wikipedia dump on huggingface:
https://huggingface.co/datasets/wikipedia
https://dumps.wikimedia.org/enwiki/
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4857/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4857/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4856 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4856/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4856/comments | https://api.github.com/repos/huggingface/datasets/issues/4856/events | https://github.com/huggingface/datasets/issues/4856 | 1,339,779,957 | I_kwDODunzps5P22t1 | 4,856 | file missing when load_dataset with openwebtext on windows | {
"login": "kingstarcraft",
"id": 10361976,
"node_id": "MDQ6VXNlcjEwMzYxOTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/10361976?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kingstarcraft",
"html_url": "https://github.com/kingstarcraft",
"followers_url": "https://api.github.com/users/kingstarcraft/followers",
"following_url": "https://api.github.com/users/kingstarcraft/following{/other_user}",
"gists_url": "https://api.github.com/users/kingstarcraft/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kingstarcraft/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kingstarcraft/subscriptions",
"organizations_url": "https://api.github.com/users/kingstarcraft/orgs",
"repos_url": "https://api.github.com/users/kingstarcraft/repos",
"events_url": "https://api.github.com/users/kingstarcraft/events{/privacy}",
"received_events_url": "https://api.github.com/users/kingstarcraft/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"I have tried to extract ```0015896-b1054262f7da52a0518521e29c8e352c.txt``` from ```17ecf461bfccd469a1fbc264ccb03731f8606eea7b3e2e8b86e13d18040bf5b3/urlsf_subset00-16_data.xz``` with 7-zip\r\nand put the file into cache_path ```F://huggingface/datasets/downloads/extracted/0901d27f43b7e9ac0577da0d0061c8c632ba0b70ecd1b4bfb21562d9b7486faa```\r\nthere is still raise the same error and I find the file was removed from cache_path after I run the run_mlm.py with ```python run_mlm.py --model_type roberta --tokenizer_name roberta-base --dataset_name openwebtext --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --do_train --do_eval --output_dir F:/model/roberta-base```."
] | 1,660,622,662,000 | 1,660,640,792,000 | null | NONE | null | ## Describe the bug
0015896-b1054262f7da52a0518521e29c8e352c.txt is missing when I run run_mlm.py with openwebtext. I checked the cache path and cannot find 0015896-b1054262f7da52a0518521e29c8e352c.txt, but I can find this file in 17ecf461bfccd469a1fbc264ccb03731f8606eea7b3e2e8b86e13d18040bf5b3/urlsf_subset00-16_data.xz with 7-zip.
## Steps to reproduce the bug
```sh
python run_mlm.py --model_type roberta --tokenizer_name roberta-base --dataset_name openwebtext --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --do_train --do_eval --output_dir F:/model/roberta-base
```
or
```python
from datasets import load_dataset
load_dataset("openwebtext", None, cache_dir=None, use_auth_token=None)
```
## Expected results
Loading is successful
## Actual results
Traceback (most recent call last):
File "D:\Python\v3.8.5\lib\site-packages\datasets\builder.py", line 704, in download_and_prepare
self._download_and_prepare(
File "D:\Python\v3.8.5\lib\site-packages\datasets\builder.py", line 1227, in _download_and_prepare
super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
File "D:\Python\v3.8.5\lib\site-packages\datasets\builder.py", line 795, in _download_and_prepare
raise OSError(
OSError: Cannot find data file.
Original error:
[Errno 22] Invalid argument: 'F://huggingface/datasets/downloads/extracted/0901d27f43b7e9ac0577da0d0061c8c632ba0b70ecd1b4bfb21562d9b7486faa/0015896-b1054262f7da52a0518521e29c8e352c.txt'
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: windows
- Python version: 3.8.5
- PyArrow version: 9.0.0
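In case it helps, one thing that may be worth trying (an assumption, not a confirmed fix) is forcing a clean re-download so the partially extracted cache entry is rebuilt:
```python
from datasets import load_dataset

# Rebuild the cached download/extraction from scratch; note this re-downloads the archives
ds = load_dataset("openwebtext", download_mode="force_redownload")
```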
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4856/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4856/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4855 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4855/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4855/comments | https://api.github.com/repos/huggingface/datasets/issues/4855/events | https://github.com/huggingface/datasets/issues/4855 | 1,339,699,975 | I_kwDODunzps5P2jMH | 4,855 | Dataset Viewer issue for super_glue | {
"login": "wzsxxa",
"id": 54366859,
"node_id": "MDQ6VXNlcjU0MzY2ODU5",
"avatar_url": "https://avatars.githubusercontent.com/u/54366859?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wzsxxa",
"html_url": "https://github.com/wzsxxa",
"followers_url": "https://api.github.com/users/wzsxxa/followers",
"following_url": "https://api.github.com/users/wzsxxa/following{/other_user}",
"gists_url": "https://api.github.com/users/wzsxxa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wzsxxa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wzsxxa/subscriptions",
"organizations_url": "https://api.github.com/users/wzsxxa/orgs",
"repos_url": "https://api.github.com/users/wzsxxa/repos",
"events_url": "https://api.github.com/users/wzsxxa/events{/privacy}",
"received_events_url": "https://api.github.com/users/wzsxxa/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | null | [] | null | [
"Thanks for reporting @wzsxxa.\r\n\r\nHowever the \"super_glue\" dataset is rendered properly by the Dataset preview: https://huggingface.co/datasets/super_glue"
] | 1,660,613,696,000 | 1,661,162,881,000 | 1,661,162,865,000 | NONE | null | ### Link
https://huggingface.co/datasets/super_glue
### Description
Can't view the super_glue dataset on the web page.
### Owner
_No response_ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4855/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4855/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4853 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4853/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4853/comments | https://api.github.com/repos/huggingface/datasets/issues/4853/events | https://github.com/huggingface/datasets/pull/4853 | 1,339,456,490 | PR_kwDODunzps49NFNL | 4,853 | Fix bug and checksums in exams dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660,594,677,000 | 1,660,632,237,000 | 1,660,631,346,000 | MEMBER | null | Fix #4852. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4853/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4853/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4853",
"html_url": "https://github.com/huggingface/datasets/pull/4853",
"diff_url": "https://github.com/huggingface/datasets/pull/4853.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4853.patch",
"merged_at": 1660631346000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4852 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4852/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4852/comments | https://api.github.com/repos/huggingface/datasets/issues/4852/events | https://github.com/huggingface/datasets/issues/4852 | 1,339,450,991 | I_kwDODunzps5P1mZv | 4,852 | Bug in multilingual_with_para config of exams dataset and checksums error | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @albertvillanova. Unfortunately I still get this error. Is this because the merge has yet to be released? Is there a way to track the release?",
"Hi @thesofakillers, yes you are right: the fix will be available after next release (it was planned for today; Monday at the latest).\r\n\r\nIn the meantime, you can use the version of the `exams` on our main branch by passing `revision` to `load_dataset`:\r\n```python\r\nds = load_dataset(\"exams\", revision=\"main\")\r\n```"
] | 1,660,594,492,000 | 1,663,321,855,000 | 1,660,631,347,000 | MEMBER | null | ## Describe the bug
There is a bug in the "multilingual_with_para" config of the exams dataset:
```python
ds = load_dataset("./datasets/exams", split="train")
```
raises:
```
KeyError: 'choices'
```
Moreover, there is a NonMatchingChecksumError:
```
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://github.com/mhardalov/exams-qa/raw/main/data/exams/multilingual/with_paragraphs/train_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/multilingual/with_paragraphs/dev_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/multilingual/with_paragraphs/test_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/test_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_bg_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_bg_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_hr_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_hr_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_hu_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_hu_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_it_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_it_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_mk_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_mk_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_pl_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_pl_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_pt_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_pt_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_sq_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_sq_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_sr_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_sr_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_tr_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_tr_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_vi_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_vi_with_para.jsonl.tar.gz']
```
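For anyone hitting this before a patched release is out, a possible temporary workaround (hedged sketch: it assumes the fix is already merged on the `main` branch, and that the config name can be passed as the second positional argument, as usual for `load_dataset`):
```python
from datasets import load_dataset

# Load the dataset script from the `main` branch instead of the released version.
ds = load_dataset("exams", "multilingual_with_para", split="train", revision="main")
print(ds)
```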
CC: @thesofakillers | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4852/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4852/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4851 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4851/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4851/comments | https://api.github.com/repos/huggingface/datasets/issues/4851/events | https://github.com/huggingface/datasets/pull/4851 | 1,339,085,917 | PR_kwDODunzps49L6ee | 4,851 | Fix license tag and Source Data section in billsum dataset card | {
"login": "kashif",
"id": 8100,
"node_id": "MDQ6VXNlcjgxMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kashif",
"html_url": "https://github.com/kashif",
"followers_url": "https://api.github.com/users/kashif/followers",
"following_url": "https://api.github.com/users/kashif/following{/other_user}",
"gists_url": "https://api.github.com/users/kashif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kashif/subscriptions",
"organizations_url": "https://api.github.com/users/kashif/orgs",
"repos_url": "https://api.github.com/users/kashif/repos",
"events_url": "https://api.github.com/users/kashif/events{/privacy}",
"received_events_url": "https://api.github.com/users/kashif/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"thanks @albertvillanova done thank you!"
] | 1,660,574,220,000 | 1,661,176,584,000 | 1,661,175,659,000 | CONTRIBUTOR | null | Fixed the data source and license fields | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4851/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4851/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4851",
"html_url": "https://github.com/huggingface/datasets/pull/4851",
"diff_url": "https://github.com/huggingface/datasets/pull/4851.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4851.patch",
"merged_at": 1661175659000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4850 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4850/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4850/comments | https://api.github.com/repos/huggingface/datasets/issues/4850/events | https://github.com/huggingface/datasets/pull/4850 | 1,338,702,306 | PR_kwDODunzps49KnZ8 | 4,850 | Fix test of _get_extraction_protocol for TAR files | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660,552,678,000 | 1,660,556,576,000 | 1,660,555,726,000 | MEMBER | null | While working in another PR, I discovered an xpass test (a test that is supposed to xfail but nevertheless passes) when testing `_get_extraction_protocol`: https://github.com/huggingface/datasets/runs/7818845285?check_suite_focus=true
```
XPASS tests/test_streaming_download_manager.py::test_streaming_dl_manager_get_extraction_protocol_throws[https://foo.bar/train.tar]
```
This PR:
- refactors the test so that it asserts the exceptions are raised instead of xfailing
- fixes the test for TAR files: it does not raise an exception, but returns "tar"
- fixes some wrongly named tests: swaps `test_streaming_dl_manager_get_extraction_protocol` with `test_streaming_dl_manager_get_extraction_protocol_gg_drive` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4850/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4850/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4850",
"html_url": "https://github.com/huggingface/datasets/pull/4850",
"diff_url": "https://github.com/huggingface/datasets/pull/4850.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4850.patch",
"merged_at": 1660555726000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4849 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4849/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4849/comments | https://api.github.com/repos/huggingface/datasets/issues/4849/events | https://github.com/huggingface/datasets/pull/4849 | 1,338,273,900 | PR_kwDODunzps49JN8d | 4,849 | 1.18.x | {
"login": "Mr-Robot-001",
"id": 49282718,
"node_id": "MDQ6VXNlcjQ5MjgyNzE4",
"avatar_url": "https://avatars.githubusercontent.com/u/49282718?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mr-Robot-001",
"html_url": "https://github.com/Mr-Robot-001",
"followers_url": "https://api.github.com/users/Mr-Robot-001/followers",
"following_url": "https://api.github.com/users/Mr-Robot-001/following{/other_user}",
"gists_url": "https://api.github.com/users/Mr-Robot-001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mr-Robot-001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mr-Robot-001/subscriptions",
"organizations_url": "https://api.github.com/users/Mr-Robot-001/orgs",
"repos_url": "https://api.github.com/users/Mr-Robot-001/repos",
"events_url": "https://api.github.com/users/Mr-Robot-001/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mr-Robot-001/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,660,489,759,000 | 1,660,489,802,000 | 1,660,489,802,000 | NONE | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4849/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4849/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4849",
"html_url": "https://github.com/huggingface/datasets/pull/4849",
"diff_url": "https://github.com/huggingface/datasets/pull/4849.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4849.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4848 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4848/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4848/comments | https://api.github.com/repos/huggingface/datasets/issues/4848/events | https://github.com/huggingface/datasets/pull/4848 | 1,338,271,833 | PR_kwDODunzps49JNj_ | 4,848 | a | {
"login": "Mr-Robot-001",
"id": 49282718,
"node_id": "MDQ6VXNlcjQ5MjgyNzE4",
"avatar_url": "https://avatars.githubusercontent.com/u/49282718?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mr-Robot-001",
"html_url": "https://github.com/Mr-Robot-001",
"followers_url": "https://api.github.com/users/Mr-Robot-001/followers",
"following_url": "https://api.github.com/users/Mr-Robot-001/following{/other_user}",
"gists_url": "https://api.github.com/users/Mr-Robot-001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mr-Robot-001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mr-Robot-001/subscriptions",
"organizations_url": "https://api.github.com/users/Mr-Robot-001/orgs",
"repos_url": "https://api.github.com/users/Mr-Robot-001/repos",
"events_url": "https://api.github.com/users/Mr-Robot-001/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mr-Robot-001/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,660,489,276,000 | 1,660,489,799,000 | 1,660,489,799,000 | NONE | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4848/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4848/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4848",
"html_url": "https://github.com/huggingface/datasets/pull/4848",
"diff_url": "https://github.com/huggingface/datasets/pull/4848.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4848.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4847 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4847/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4847/comments | https://api.github.com/repos/huggingface/datasets/issues/4847/events | https://github.com/huggingface/datasets/pull/4847 | 1,338,270,636 | PR_kwDODunzps49JNWX | 4,847 | Test win ci | {
"login": "Mr-Robot-001",
"id": 49282718,
"node_id": "MDQ6VXNlcjQ5MjgyNzE4",
"avatar_url": "https://avatars.githubusercontent.com/u/49282718?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mr-Robot-001",
"html_url": "https://github.com/Mr-Robot-001",
"followers_url": "https://api.github.com/users/Mr-Robot-001/followers",
"following_url": "https://api.github.com/users/Mr-Robot-001/following{/other_user}",
"gists_url": "https://api.github.com/users/Mr-Robot-001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mr-Robot-001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mr-Robot-001/subscriptions",
"organizations_url": "https://api.github.com/users/Mr-Robot-001/orgs",
"repos_url": "https://api.github.com/users/Mr-Robot-001/repos",
"events_url": "https://api.github.com/users/Mr-Robot-001/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mr-Robot-001/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,660,489,020,000 | 1,660,489,065,000 | 1,660,489,065,000 | NONE | null | aa | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4847/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4847/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4847",
"html_url": "https://github.com/huggingface/datasets/pull/4847",
"diff_url": "https://github.com/huggingface/datasets/pull/4847.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4847.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4846 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4846/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4846/comments | https://api.github.com/repos/huggingface/datasets/issues/4846/events | https://github.com/huggingface/datasets/pull/4846 | 1,337,979,897 | PR_kwDODunzps49IYSC | 4,846 | Update documentation card of miam dataset | {
"login": "PierreColombo",
"id": 22492839,
"node_id": "MDQ6VXNlcjIyNDkyODM5",
"avatar_url": "https://avatars.githubusercontent.com/u/22492839?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PierreColombo",
"html_url": "https://github.com/PierreColombo",
"followers_url": "https://api.github.com/users/PierreColombo/followers",
"following_url": "https://api.github.com/users/PierreColombo/following{/other_user}",
"gists_url": "https://api.github.com/users/PierreColombo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PierreColombo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PierreColombo/subscriptions",
"organizations_url": "https://api.github.com/users/PierreColombo/orgs",
"repos_url": "https://api.github.com/users/PierreColombo/repos",
"events_url": "https://api.github.com/users/PierreColombo/events{/privacy}",
"received_events_url": "https://api.github.com/users/PierreColombo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Ahahah :D not sur how i broke something by updating the README :D ",
"Thanks for the fix @PierreColombo. \r\n\r\nOnce a README is modified, our CI runs tests on it, requiring additional quality fixes, so that all READMEs are progressively improved and have some minimal tags/sections/information.\r\n\r\nFor this specific README file, the additional quality requirements of the CI are: https://github.com/huggingface/datasets/runs/7819924428?check_suite_focus=true\r\n```\r\nE The following issues were found for the README at `/home/runner/work/datasets/datasets/datasets/miam/README.md`:\r\nE -\tSection `Additional Information` is missing subsection: `Dataset Curators`.\r\nE -\tSection `Additional Information` is missing subsection: `Contributions`.\r\nE -\t`Additional Information` has an extra subsection: `Benchmark Curators`. Skipping further validation checks for this subsection as expected structure is unknown.\r\n```",
"Thanks a lot Albert :)))"
] | 1,660,401,535,000 | 1,660,697,404,000 | 1,660,472,768,000 | CONTRIBUTOR | null | Hi !
The paper has been published at EMNLP. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4846/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4846/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4846",
"html_url": "https://github.com/huggingface/datasets/pull/4846",
"diff_url": "https://github.com/huggingface/datasets/pull/4846.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4846.patch",
"merged_at": 1660472768000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4845 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4845/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4845/comments | https://api.github.com/repos/huggingface/datasets/issues/4845/events | https://github.com/huggingface/datasets/pull/4845 | 1,337,928,283 | PR_kwDODunzps49IOjf | 4,845 | Mark CI tests as xfail if Hub HTTP error | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660,387,511,000 | 1,661,230,632,000 | 1,661,229,746,000 | MEMBER | null | In order to make testing more robust (and avoid merges to master with red tests), we could mark tests as xfailed (instead of failed) when the Hub raises some temporary HTTP errors.
This PR:
- marks tests as xfailed only if the Hub raises a 500 error for:
- test_upstream_hub
- makes pytest report the xfailed/xpassed tests.
More tests could also be marked if needed.
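For illustration only, a minimal sketch of the idea (this is not the actual implementation in this PR; the helper name `xfail_if_500_error` and the test body are placeholders):
```python
from contextlib import contextmanager

import pytest
from requests.exceptions import HTTPError


@contextmanager
def xfail_if_500_error():
    # Report the test as xfailed (instead of failed) when the Hub returns a
    # transient 5xx error, so CI does not go red for unrelated reasons.
    try:
        yield
    except HTTPError as err:
        if err.response is not None and 500 <= err.response.status_code < 600:
            pytest.xfail(f"Hub returned a transient server error: {err}")
        raise


def test_push_dataset_dict_to_hub_multiple_files():
    with xfail_if_500_error():
        ...  # push a DatasetDict to the Hub and assert on the round-tripped result
```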
Examples of CI failures due to temporary Hub HTTP errors:
- FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_multiple_files
- https://github.com/huggingface/datasets/runs/7806855399?check_suite_focus=true
`requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: https://hub-ci.huggingface.co/api/datasets/__DUMMY_TRANSFORMERS_USER__/test-16603108028233/commit/main (Request ID: aZeAQ5yLktoGHQYBcJ3zo)`
- FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_no_token
- https://github.com/huggingface/datasets/runs/7840022996?check_suite_focus=true
`requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: https://s3.us-east-1.amazonaws.com/lfs-staging.huggingface.co/repos/81/e3/81e3b831fa9bf23190ec041f26ef7ff6d6b71c1a937b8ec1ef1f1f05b508c089/caae596caa179cf45e7c9ac0c6d9a9cb0fe2d305291bfbb2d8b648ae26ed38b6?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20220815%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20220815T144713Z&X-Amz-Expires=900&X-Amz-Signature=5ddddfe8ef2b0601e80ab41c78a4d77d921942b0d8160bcab40ff894095e6823&X-Amz-SignedHeaders=host&x-id=PutObject`
- FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_private
- https://github.com/huggingface/datasets/runs/7835921082?check_suite_focus=true
`requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: https://hub-ci.huggingface.co/api/repos/create (Request ID: gL_1I7i2dii9leBhlZen-) - Internal Error - We're working hard to fix that as soon as possible!`
- FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_to_hub_custom_features_image_list
- https://github.com/huggingface/datasets/runs/7835920900?check_suite_focus=true
- This is not 500, but 404:
`requests.exceptions.HTTPError: 404 Client Error: Not Found for url: [https://hub-ci.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/test-16605586458339.git/info/lfs/objects](https://hub-ci.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/test-16605586458339.git/info/lfs/objects/batch)`
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4845/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4845/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4845",
"html_url": "https://github.com/huggingface/datasets/pull/4845",
"diff_url": "https://github.com/huggingface/datasets/pull/4845.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4845.patch",
"merged_at": 1661229746000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4844 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4844/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4844/comments | https://api.github.com/repos/huggingface/datasets/issues/4844/events | https://github.com/huggingface/datasets/pull/4844 | 1,337,878,249 | PR_kwDODunzps49IFLa | 4,844 | Add 'val' to VALIDATION_KEYWORDS. | {
"login": "akt42",
"id": 98386959,
"node_id": "U_kgDOBd1EDw",
"avatar_url": "https://avatars.githubusercontent.com/u/98386959?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akt42",
"html_url": "https://github.com/akt42",
"followers_url": "https://api.github.com/users/akt42/followers",
"following_url": "https://api.github.com/users/akt42/following{/other_user}",
"gists_url": "https://api.github.com/users/akt42/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akt42/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akt42/subscriptions",
"organizations_url": "https://api.github.com/users/akt42/orgs",
"repos_url": "https://api.github.com/users/akt42/repos",
"events_url": "https://api.github.com/users/akt42/events{/privacy}",
"received_events_url": "https://api.github.com/users/akt42/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@mariosasko not sure about how the reviewing process works. Maybe you can have a look because we discussed this elsewhere?",
"Hi, thanks! \r\n\r\nLet's add one pattern with `val` to this test before merging: \r\nhttps://github.com/huggingface/datasets/blob/b88a656cf94c4ad972154371c83c1af759fde522/tests/test_data_files.py#L598",
"_The documentation is not available anymore as the PR was closed or merged._",
"@akt42 note that there is some info about splits keywords in the docs: https://huggingface.co/docs/datasets/main/en/repository_structure#split-names-keywords. I agree it's not clear that it applies not only to filenames, but to directories as well.\r\n\r\nI think \"val\" should be now added to the documentation source file here: https://github.com/huggingface/datasets/blob/main/docs/source/repository_structure.mdx?plain=1#L98",
"@polinaeterna Thanks for notifying us that there is a list of supported keywords\r\n\r\nI've added \"val\" to that list and a test."
] | 1,660,373,381,000 | 1,661,854,655,000 | 1,661,854,494,000 | CONTRIBUTOR | null | This PR fixes #4839 by adding the word `"val"` to the `VALIDATION_KEYWORDS` so that the `load_dataset()` method with `imagefolder` (and probably, some other directives as well) reads folders named `"val"` as well.
I think the supported keywords should also be mentioned in the documentation, but I couldn't think of a proper place to add that. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4844/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4844/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4844",
"html_url": "https://github.com/huggingface/datasets/pull/4844",
"diff_url": "https://github.com/huggingface/datasets/pull/4844.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4844.patch",
"merged_at": 1661854494000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4843 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4843/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4843/comments | https://api.github.com/repos/huggingface/datasets/issues/4843/events | https://github.com/huggingface/datasets/pull/4843 | 1,337,668,699 | PR_kwDODunzps49HaWT | 4,843 | Fix typo in streaming docs | {
"login": "flozi00",
"id": 47894090,
"node_id": "MDQ6VXNlcjQ3ODk0MDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flozi00",
"html_url": "https://github.com/flozi00",
"followers_url": "https://api.github.com/users/flozi00/followers",
"following_url": "https://api.github.com/users/flozi00/following{/other_user}",
"gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flozi00/subscriptions",
"organizations_url": "https://api.github.com/users/flozi00/orgs",
"repos_url": "https://api.github.com/users/flozi00/repos",
"events_url": "https://api.github.com/users/flozi00/events{/privacy}",
"received_events_url": "https://api.github.com/users/flozi00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660,335,501,000 | 1,660,477,410,000 | 1,660,474,929,000 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4843/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4843/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4843",
"html_url": "https://github.com/huggingface/datasets/pull/4843",
"diff_url": "https://github.com/huggingface/datasets/pull/4843.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4843.patch",
"merged_at": 1660474929000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4842 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4842/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4842/comments | https://api.github.com/repos/huggingface/datasets/issues/4842/events | https://github.com/huggingface/datasets/pull/4842 | 1,337,527,764 | PR_kwDODunzps49G8CC | 4,842 | Update stackexchange license | {
"login": "cakiki",
"id": 3664563,
"node_id": "MDQ6VXNlcjM2NjQ1NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cakiki",
"html_url": "https://github.com/cakiki",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cakiki/subscriptions",
"organizations_url": "https://api.github.com/users/cakiki/orgs",
"repos_url": "https://api.github.com/users/cakiki/repos",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"received_events_url": "https://api.github.com/users/cakiki/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660,325,946,000 | 1,660,473,798,000 | 1,660,472,929,000 | CONTRIBUTOR | null | The correct license of the stackexchange subset of the Pile is `cc-by-sa-4.0`, as can for example be seen here: https://stackoverflow.com/help/licensing | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4842/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4842/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4842",
"html_url": "https://github.com/huggingface/datasets/pull/4842",
"diff_url": "https://github.com/huggingface/datasets/pull/4842.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4842.patch",
"merged_at": 1660472929000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4841 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4841/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4841/comments | https://api.github.com/repos/huggingface/datasets/issues/4841/events | https://github.com/huggingface/datasets/pull/4841 | 1,337,401,243 | PR_kwDODunzps49Gf0I | 4,841 | Update ted_talks_iwslt license to include ND | {
"login": "cakiki",
"id": 3664563,
"node_id": "MDQ6VXNlcjM2NjQ1NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cakiki",
"html_url": "https://github.com/cakiki",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cakiki/subscriptions",
"organizations_url": "https://api.github.com/users/cakiki/orgs",
"repos_url": "https://api.github.com/users/cakiki/repos",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"received_events_url": "https://api.github.com/users/cakiki/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660,320,892,000 | 1,660,475,722,000 | 1,660,474,822,000 | CONTRIBUTOR | null | Excerpt from the paper's abstract: "Aside from its cultural and social relevance, this content, which is published under the Creative Commons BY-NC-ND license, also represents a precious language resource for the machine translation research community" | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4841/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4841/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4841",
"html_url": "https://github.com/huggingface/datasets/pull/4841",
"diff_url": "https://github.com/huggingface/datasets/pull/4841.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4841.patch",
"merged_at": 1660474822000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4840 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4840/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4840/comments | https://api.github.com/repos/huggingface/datasets/issues/4840/events | https://github.com/huggingface/datasets/issues/4840 | 1,337,342,672 | I_kwDODunzps5PtjrQ | 4,840 | Dataset Viewer issue for darragh/demo_data_raw3 | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"do you have an idea of why it can occur @huggingface/datasets? The dataset consists of a single parquet file.",
"Thanks for reporting @severo.\r\n\r\nI'm not able to reproduce that error. I get instead:\r\n```\r\nFileNotFoundError: [Errno 2] No such file or directory: 'orix/data/ChiSig/唐合乐-9-3.jpg'\r\n```\r\n\r\nWhich pyarrow version are you using? Mine is 6.0.1. ",
"OK, I get now your error when not streaming.",
"OK!\r\n\r\nIf it's useful, the pyarrow version is 7.0.0:\r\n\r\nhttps://github.com/huggingface/datasets-server/blob/487c39d87998f8d5a35972f1027d6c8e588e622d/services/worker/poetry.lock#L1537-L1543",
"Apparently, there is something weird with that Parquet file: its schema is:\r\n```\r\nimages: extension<arrow.py_extension_type<pyarrow.lib.UnknownExtensionType>>\r\n```\r\n\r\nI have forced a right schema:\r\n```python\r\nfrom datasets import Features, Image, load_dataset\r\n\r\nfeatures = Features({\"images\": Image()})\r\nds = datasets.load_dataset(\"parquet\", split=\"train\", data_files=\"train-00000-of-00001.parquet\", features=features)\r\n```\r\nand then recreated a new Parquet file:\r\n```python\r\nds.to_parquet(\"train.parquet\")\r\n```\r\n\r\nNow this Parquet file has the right schema:\r\n```\r\nimages: struct<bytes: binary, path: string>\r\n child 0, bytes: binary\r\n child 1, path: string\r\n```\r\nand can be loaded normally:\r\n```python\r\nIn [26]: ds = load_dataset(\"parquet\", split=\"train\", data_files=\"dataset.parquet\")\r\nn [27]: ds\r\nOut[27]: \r\nDataset({\r\n features: ['images'],\r\n num_rows: 20\r\n})\r\n```"
] | 1,660,317,778,000 | 1,662,623,744,000 | null | CONTRIBUTOR | null | ### Link
https://huggingface.co/datasets/darragh/demo_data_raw3
### Description
```
Exception: ValueError
Message: Arrow type extension<arrow.py_extension_type<pyarrow.lib.UnknownExtensionType>> does not have a datasets dtype equivalent.
```
reported by @NielsRogge
### Owner
No | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4840/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4840/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4839 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4839/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4839/comments | https://api.github.com/repos/huggingface/datasets/issues/4839/events | https://github.com/huggingface/datasets/issues/4839 | 1,337,206,377 | I_kwDODunzps5PtCZp | 4,839 | ImageFolder dataset builder does not read the validation data set if it is named as "val" | {
"login": "akt42",
"id": 98386959,
"node_id": "U_kgDOBd1EDw",
"avatar_url": "https://avatars.githubusercontent.com/u/98386959?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akt42",
"html_url": "https://github.com/akt42",
"followers_url": "https://api.github.com/users/akt42/followers",
"following_url": "https://api.github.com/users/akt42/following{/other_user}",
"gists_url": "https://api.github.com/users/akt42/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akt42/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akt42/subscriptions",
"organizations_url": "https://api.github.com/users/akt42/orgs",
"repos_url": "https://api.github.com/users/akt42/repos",
"events_url": "https://api.github.com/users/akt42/events{/privacy}",
"received_events_url": "https://api.github.com/users/akt42/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "akt42",
"id": 98386959,
"node_id": "U_kgDOBd1EDw",
"avatar_url": "https://avatars.githubusercontent.com/u/98386959?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akt42",
"html_url": "https://github.com/akt42",
"followers_url": "https://api.github.com/users/akt42/followers",
"following_url": "https://api.github.com/users/akt42/following{/other_user}",
"gists_url": "https://api.github.com/users/akt42/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akt42/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akt42/subscriptions",
"organizations_url": "https://api.github.com/users/akt42/orgs",
"repos_url": "https://api.github.com/users/akt42/repos",
"events_url": "https://api.github.com/users/akt42/events{/privacy}",
"received_events_url": "https://api.github.com/users/akt42/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "akt42",
"id": 98386959,
"node_id": "U_kgDOBd1EDw",
"avatar_url": "https://avatars.githubusercontent.com/u/98386959?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akt42",
"html_url": "https://github.com/akt42",
"followers_url": "https://api.github.com/users/akt42/followers",
"following_url": "https://api.github.com/users/akt42/following{/other_user}",
"gists_url": "https://api.github.com/users/akt42/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akt42/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akt42/subscriptions",
"organizations_url": "https://api.github.com/users/akt42/orgs",
"repos_url": "https://api.github.com/users/akt42/repos",
"events_url": "https://api.github.com/users/akt42/events{/privacy}",
"received_events_url": "https://api.github.com/users/akt42/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"#take"
] | 1,660,310,760,000 | 1,661,854,495,000 | 1,661,854,495,000 | CONTRIBUTOR | null | **Is your feature request related to a problem? Please describe.**
Currently, the `'imagefolder'` dataset builder in [`load_dataset()`](https://github.com/huggingface/datasets/blob/2.4.0/src/datasets/load.py#L1541) only [supports](https://github.com/huggingface/datasets/blob/6c609a322da994de149b2c938f19439bca99408e/src/datasets/data_files.py#L31) the following names for the validation directory: `["validation", "valid", "dev"]`. When the validation directory is named `'val'`, the dataset will not have a validation split. I expected this to be a trivial task but ended up spending a lot of time before realizing that only these names are supported.
Here's a minimal example of `val` not being recognized:
```python
import os
import numpy as np
import cv2
from datasets import load_dataset
# creating a dummy data set with the following structure:
# ROOT
# | -- train
# | ---- class_1
# | ---- class_2
# | -- val
# | ---- class_1
# | ---- class_2
ROOT = "data"
for which in ["train", "val"]:
    for class_name in ["class_1", "class_2"]:
        dir_name = os.path.join(ROOT, which, class_name)
        if not os.path.exists(dir_name):
            os.makedirs(dir_name)
        for i in range(10):
            cv2.imwrite(
                os.path.join(dir_name, f"{i}.png"),
                # scale to 8-bit so cv2.imwrite saves a valid grayscale image
                (np.random.random((224, 224)) * 255).astype(np.uint8)
            )
# trying to create a data set
dataset = load_dataset(
"imagefolder",
data_dir=ROOT
)
>> dataset
DatasetDict({
train: Dataset({
features: ['image', 'label'],
num_rows: 20
})
})
# ^ note how the dataset only has a 'train' subset
```
**Describe the solution you'd like**
The suggestion is to add `"val"` to [that list](https://github.com/huggingface/datasets/blob/6c609a322da994de149b2c938f19439bca99408e/src/datasets/data_files.py#L31), as it is a commonly used name for the validation directory.
Also, the documentation should explicitly mention which directory names are recognized for the train/validation/test splits, to avoid confusion.
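For illustration, a sketch of the proposed change (the constant name comes from the linked file and the PR title; the exact surrounding code may differ):
```python
# src/datasets/data_files.py (sketch): also recognize "val" as a validation-split keyword
VALIDATION_KEYWORDS = ["validation", "valid", "dev", "val"]
```
With that change, the minimal example above would be expected to return both a `train` and a `validation` split.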
**Describe alternatives you've considered**
Explicitly mention in the documentation which directory names are supported for the train/validation/test splits, without adding `val` to the above list.
**Additional context**
A question asked in the forum: [Loading an imagenet-style image dataset with train/val directories](https://discuss.huggingface.co/t/loading-an-imagenet-style-image-dataset-with-train-val-directories/21554) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4839/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4839/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4838 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4838/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4838/comments | https://api.github.com/repos/huggingface/datasets/issues/4838/events | https://github.com/huggingface/datasets/pull/4838 | 1,337,194,918 | PR_kwDODunzps49F08R | 4,838 | Fix documentation card of adv_glue dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The failing test has nothing to do with this PR:\r\n```\r\nFAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_multiple_files\r\n```"
] | 1,660,310,126,000 | 1,660,558,634,000 | 1,660,557,731,000 | MEMBER | null | Fix documentation card of adv_glue dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4838/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4838/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4838",
"html_url": "https://github.com/huggingface/datasets/pull/4838",
"diff_url": "https://github.com/huggingface/datasets/pull/4838.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4838.patch",
"merged_at": 1660557731000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4837 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4837/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4837/comments | https://api.github.com/repos/huggingface/datasets/issues/4837/events | https://github.com/huggingface/datasets/pull/4837 | 1,337,079,723 | PR_kwDODunzps49Fb6l | 4,837 | Add support for CSV metadata files to ImageFolder | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Cool thanks ! Maybe let's include this change after the refactoring from FolderBasedBuilder in #3963 to avoid dealing with too many unpleasant conflicts ?",
"@lhoestq I resolved the conflicts (AudioFolder also supports CSV metadata now). Let me know what you think.\r\n",
"@lhoestq Thanks for the suggestion! Indeed it makes more sense to use CSV as the default format in the folder-based builders."
] | 1,660,303,158,000 | 1,661,947,287,000 | 1,661,947,147,000 | CONTRIBUTOR | null | Fix #4814 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4837/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4837/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4837",
"html_url": "https://github.com/huggingface/datasets/pull/4837",
"diff_url": "https://github.com/huggingface/datasets/pull/4837.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4837.patch",
"merged_at": 1661947147000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4836 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4836/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4836/comments | https://api.github.com/repos/huggingface/datasets/issues/4836/events | https://github.com/huggingface/datasets/issues/4836 | 1,337,067,632 | I_kwDODunzps5Psghw | 4,836 | Is it possible to pass multiple links to a split in load script? | {
"login": "sadrasabouri",
"id": 43045767,
"node_id": "MDQ6VXNlcjQzMDQ1NzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/43045767?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sadrasabouri",
"html_url": "https://github.com/sadrasabouri",
"followers_url": "https://api.github.com/users/sadrasabouri/followers",
"following_url": "https://api.github.com/users/sadrasabouri/following{/other_user}",
"gists_url": "https://api.github.com/users/sadrasabouri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sadrasabouri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sadrasabouri/subscriptions",
"organizations_url": "https://api.github.com/users/sadrasabouri/orgs",
"repos_url": "https://api.github.com/users/sadrasabouri/repos",
"events_url": "https://api.github.com/users/sadrasabouri/events{/privacy}",
"received_events_url": "https://api.github.com/users/sadrasabouri/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 1,660,302,371,000 | 1,660,302,371,000 | null | NONE | null | **Is your feature request related to a problem? Please describe.**
I wanted to use a Python loading script in Hugging Face Datasets that uses different sources of text (it's a compilation of multiple datasets plus my own dataset). Based on how `load_dataset` [works](https://huggingface.co/docs/datasets/loading), I assumed I could do something like below in my loading script:
```python
...
_URL = "MY_DATASET_URL/resolve/main/data/"
_URLS = {
"train": [
"FIRST_URL_TO.txt",
_URL + "train-00000-of-00001-676bfebbc8742592.parquet"
]
}
...
```
but when loading the dataset it raises the following error:
```python
File ~/.local/lib/python3.8/site-packages/datasets/builder.py:704, in DatasetBuilder.download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
702 logger.warning("HF google storage unreachable. Downloading and preparing it from source")
703 if not downloaded_from_gcs:
--> 704 self._download_and_prepare(
705 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
...
668 if isinstance(a, str):
669 # Force-cast str subclasses to str (issue #21127)
670 parts.append(str(a))
TypeError: expected str, bytes or os.PathLike object, not list
```
**Describe the solution you'd like**
I believe that since `load_dataset` can take a list of URLs instead of a single URL for the `train` split, the same should be possible here too.
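For illustration, a minimal sketch of the kind of loading script this would enable (all names are placeholders, and whether it works end-to-end also depends on how the downloaded paths are consumed in `_generate_examples`):
```python
import datasets

# Same shape as the `_URLS` dict above: a list of files per split.
_URLS = {"train": ["FIRST_URL_TO.txt", "SECOND_URL_TO.parquet"]}


class MyDataset(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(description="placeholder")

    def _split_generators(self, dl_manager):
        # download_and_extract accepts nested structures (here a dict of lists)
        # and returns local paths with the same structure.
        downloaded = dl_manager.download_and_extract(_URLS)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"filepaths": downloaded["train"]},
            )
        ]

    def _generate_examples(self, filepaths):
        # Placeholder: parse each downloaded file (text or parquet) and yield examples.
        for key, path in enumerate(filepaths):
            yield key, {"source_file": str(path)}
```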
**Describe alternatives you've considered**
An alternative would be to download all the needed datasets locally and `push_to_hub` them all, but since the datasets in question are huge, that is not an option for me.
**Additional context**
I think loading `text` alongside `parquet` is a completely different issue, but I believe I can figure it out by defining a config for my dataset that loads each entry of `_URLS['train']` separately, either with `load_dataset("text", ...)` or `load_dataset("parquet", ...)`.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4836/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4836/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4835 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4835/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4835/comments | https://api.github.com/repos/huggingface/datasets/issues/4835/events | https://github.com/huggingface/datasets/pull/4835 | 1,336,994,835 | PR_kwDODunzps49FJg9 | 4,835 | Fix documentation card of ethos dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660,297,866,000 | 1,660,310,035,000 | 1,660,309,179,000 | MEMBER | null | Fix documentation card of ethos dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4835/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4835/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4835",
"html_url": "https://github.com/huggingface/datasets/pull/4835",
"diff_url": "https://github.com/huggingface/datasets/pull/4835.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4835.patch",
"merged_at": 1660309179000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4834 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4834/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4834/comments | https://api.github.com/repos/huggingface/datasets/issues/4834/events | https://github.com/huggingface/datasets/pull/4834 | 1,336,993,511 | PR_kwDODunzps49FJOu | 4,834 | Fix documentation card of recipe_nlg dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660,297,779,000 | 1,660,303,698,000 | 1,660,302,820,000 | MEMBER | null | Fix documentation card of recipe_nlg dataset | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4834/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4834/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4834",
"html_url": "https://github.com/huggingface/datasets/pull/4834",
"diff_url": "https://github.com/huggingface/datasets/pull/4834.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4834.patch",
"merged_at": 1660302820000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4833 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4833/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4833/comments | https://api.github.com/repos/huggingface/datasets/issues/4833/events | https://github.com/huggingface/datasets/pull/4833 | 1,336,946,965 | PR_kwDODunzps49E_Nk | 4,833 | Fix missing tags in dataset cards | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660,295,092,000 | 1,660,298,427,000 | 1,660,297,555,000 | MEMBER | null | Fix missing tags in dataset cards.
This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4833/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4833/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4833",
"html_url": "https://github.com/huggingface/datasets/pull/4833",
"diff_url": "https://github.com/huggingface/datasets/pull/4833.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4833.patch",
"merged_at": 1660297555000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4832 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4832/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4832/comments | https://api.github.com/repos/huggingface/datasets/issues/4832/events | https://github.com/huggingface/datasets/pull/4832 | 1,336,727,389 | PR_kwDODunzps49EQav | 4,832 | Fix tags in dataset cards | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The non-passing tests are caused by other missing information in the dataset cards."
] | 1,660,277,483,000 | 1,660,279,315,000 | 1,660,278,444,000 | MEMBER | null | Fix wrong tags in dataset cards. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4832/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4832/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4832",
"html_url": "https://github.com/huggingface/datasets/pull/4832",
"diff_url": "https://github.com/huggingface/datasets/pull/4832.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4832.patch",
"merged_at": 1660278444000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4831 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4831/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4831/comments | https://api.github.com/repos/huggingface/datasets/issues/4831/events | https://github.com/huggingface/datasets/pull/4831 | 1,336,199,643 | PR_kwDODunzps49Cibf | 4,831 | Add oversampling strategies to interleave datasets | {
"login": "ylacombe",
"id": 52246514,
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ylacombe",
"html_url": "https://github.com/ylacombe",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4831). All of your documentation changes will be reflected on that endpoint.",
"Hi @lhoestq, \r\nThanks for your review! I've added the requested mention in the documentation and corrected the Error type in `interleave_datasets`. \r\nI've also added test cases in `test_arrow_dataset.py`, which was useful since it allow me to detect an error in the case of an oversampling strategy with no sampling probabilities. \r\nCould you double check this part ? I've commented the code to explain the approach.\r\nThanks!\r\n"
] | 1,660,235,091,000 | 1,661,415,669,000 | 1,661,359,567,000 | CONTRIBUTOR | null | Hello everyone,
Here is a proposal to improve the `interleave_datasets` function.
Following Issue #3064 and @lhoestq's [comment](https://github.com/huggingface/datasets/issues/3064#issuecomment-1022333385), I propose here code that performs oversampling when interleaving a list of `Dataset` objects.
I have encountered this problem myself while trying to implement training on a multilingual dataset, following a training strategy similar to that of the [XLSUM paper](https://arxiv.org/pdf/2106.13822.pdf), a multilingual abstractive summarization dataset where the multilingual training set is created by sampling from the languages according to a smoothing strategy. The main idea is to sample languages that have few examples more frequently than the others.
As in Issue #3064, the current default strategy is an undersampling strategy, which stops as soon as one dataset runs out of samples. The new `all_exhausted` strategy stops building the new dataset only once every sample of every dataset has been added at least once.
How does it work in practice:
- if ``probabilities`` is `None` and the strategy is `all_exhausted`, it simply performs a round-robin interleaving that stops when the longest dataset is out of samples. Here the new dataset length will be $\text{maxDatasetLength} \times \text{nbDatasets}$.
- if ``probabilities`` is not `None` and the strategy is `all_exhausted`, it keeps track of the datasets that have run out of samples but continues adding samples from them to the new dataset, and stops as soon as every dataset has run out of samples at least once.
- In the other cases, it keeps the same behaviour as before, except that this time, when probabilities are specified, it really stops AS SOON AS a dataset is out of samples.
More on the last sentence:
The previous example of `interleave_datasets` was:
>>> from datasets import Dataset, interleave_datasets
>>> d1 = Dataset.from_dict({"a": [0, 1, 2]})
>>> d2 = Dataset.from_dict({"a": [10, 11, 12]})
>>> d3 = Dataset.from_dict({"a": [20, 21, 22]})
>>> dataset = interleave_datasets([d1, d2, d3])
>>> dataset["a"]
[0, 10, 20, 1, 11, 21, 2, 12, 22]
>>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42)
>>> dataset["a"]
[10, 0, 11, 1, 2, 20, 12]
With my implementation, `dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42)` gives:
>>> dataset["a"]
[10, 0, 11, 1, 2]
because `d1` is already out of samples just after `2` is added.
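To make the bookkeeping concrete, here is a minimal sketch — my own illustration, not the code of this PR — of how the `all_exhausted` strategy with sampling probabilities can be implemented:
```python
import numpy as np

def all_exhausted_indices(lengths, probabilities, seed=42):
    """Draw (dataset_id, sample_id) pairs until every dataset has been fully seen once."""
    rng = np.random.default_rng(seed)
    positions = [0] * len(lengths)      # next index to read from each dataset
    exhausted = [False] * len(lengths)  # whether each dataset has been fully seen at least once
    pairs = []
    while not all(exhausted):
        source = rng.choice(len(lengths), p=probabilities)
        pairs.append((source, positions[source]))
        positions[source] += 1
        if positions[source] == lengths[source]:
            exhausted[source] = True
            positions[source] = 0       # wrap around: keep oversampling the exhausted dataset
    return pairs
```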
Example of the results of applying the different strategies:
>>> from datasets import Dataset, interleave_datasets
>>> d1 = Dataset.from_dict({"a": [0, 1, 2]})
>>> d2 = Dataset.from_dict({"a": [10, 11, 12]})
>>> d3 = Dataset.from_dict({"a": [20, 21, 22]})
>>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42, stopping_strategy="all_exhausted")
>>> dataset["a"]
[10, 0, 11, 1, 2, 20, 12, 10, 0, 1, 2, 21, 0, 11, 1, 2, 0, 1, 12, 2, 10, 0, 22]
>>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42)
>>> dataset["a"]
[10, 0, 11, 1, 2]
>>> dataset = interleave_datasets([d1, d2, d3])
>>> dataset["a"]
[0, 10, 20, 1, 11, 21, 2, 12, 22]
>>> dataset = interleave_datasets([d1, d2, d3], stopping_strategy="all_exhausted")
>>> dataset["a"]
[0, 10, 20, 1, 11, 21, 2, 12, 22]
>>> d1 = Dataset.from_dict({"a": [0, 1, 2]})
>>> d2 = Dataset.from_dict({"a": [10, 11, 12, 13]})
>>> d3 = Dataset.from_dict({"a": [20, 21, 22, 23, 24]})
>>> dataset = interleave_datasets([d1, d2, d3])
>>> dataset["a"]
[0, 10, 20, 1, 11, 21, 2, 12, 22]
>>> dataset = interleave_datasets([d1, d2, d3], stopping_strategy="all_exhausted")
>>> dataset["a"]
[0, 10, 20, 1, 11, 21, 2, 12, 22, 0, 13, 23, 1, 0, 24]
>>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42)
>>> dataset["a"]
[10, 0, 11, 1, 2]
>>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42, stopping_strategy="all_exhausted")
>>> dataset["a"]
[10, 0, 11, 1, 2, 20, 12, 13, ..., 0, 1, 2, 0, 24]
**Final note:** I've been using that code for a research project involving a large-scale multilingual dataset. One should be careful when using oversampling to avoid exploding the size of the dataset. For example, if a very large dataset has a low probability of being sampled, the final dataset may be several times the size of that large dataset.
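As a rough sanity check before calling `interleave_datasets` — a back-of-the-envelope estimate of my own, nothing computed by the library — the `all_exhausted` dataset needs about $\max_i(\text{len}_i / p_i)$ examples before every source has been seen once:
```python
# made-up lengths and probabilities for illustration
lengths = [1_000_000, 10_000]
probabilities = [0.1, 0.9]

# dataset i is exhausted after ~len_i / p_i draws on average; we stop when the slowest one is done
approx_size = max(n / p for n, p in zip(lengths, probabilities))
print(f"expected interleaved size: ~{approx_size:,.0f} examples")  # ~10,000,000 here
```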
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4831/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4831/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4831",
"html_url": "https://github.com/huggingface/datasets/pull/4831",
"diff_url": "https://github.com/huggingface/datasets/pull/4831.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4831.patch",
"merged_at": 1661359567000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4830 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4830/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4830/comments | https://api.github.com/repos/huggingface/datasets/issues/4830/events | https://github.com/huggingface/datasets/pull/4830 | 1,336,177,937 | PR_kwDODunzps49Cdro | 4,830 | Fix task tags in dataset cards | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The non-passing tests are caused by other missing information in the dataset cards."
] | 1,660,233,966,000 | 1,660,235,847,000 | 1,660,234,980,000 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4830/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4830/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4830",
"html_url": "https://github.com/huggingface/datasets/pull/4830",
"diff_url": "https://github.com/huggingface/datasets/pull/4830.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4830.patch",
"merged_at": 1660234980000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4829 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4829/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4829/comments | https://api.github.com/repos/huggingface/datasets/issues/4829/events | https://github.com/huggingface/datasets/issues/4829 | 1,336,068,068 | I_kwDODunzps5Posfk | 4,829 | Misalignment between card tag validation and docs | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"(Note that the doc is aligned with the hub validation rules, and the \"ground truth\" is the hub validation rules given that they apply to all datasets, not just the canonical ones)"
] | 1,660,229,085,000 | 1,660,229,195,000 | null | MEMBER | null | ## Describe the bug
As pointed out in another issue: https://github.com/huggingface/datasets/pull/4827#discussion_r943536284
the validation of the dataset card tags is not aligned with its documentation, e.g.:
- implementation: `license: List[str]`
- docs: `license: Union[str, List[str]]`
They should be aligned.
CC: @julien-c
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4829/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4829/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4828 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4828/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4828/comments | https://api.github.com/repos/huggingface/datasets/issues/4828/events | https://github.com/huggingface/datasets/pull/4828 | 1,336,040,168 | PR_kwDODunzps49B_vb | 4,828 | Support PIL Image objects in `add_item`/`add_column` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4828). All of your documentation changes will be reflected on that endpoint."
] | 1,660,227,945,000 | 1,661,182,703,000 | null | CONTRIBUTOR | null | Fix #4796
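For context, a small usage sketch of what this change is meant to enable (the image file names below are placeholders):
```python
from datasets import Dataset
from PIL import Image

ds = Dataset.from_dict({"text": ["a", "b"]})
# with this PR, PIL images can be passed directly and the new column is inferred as an Image() feature
ds = ds.add_column("image", [Image.open("img0.png"), Image.open("img1.png")])
print(ds.features)
```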
PS: We should also improve the type inference in `OptimizedTypeSequence` to make it possible to also infer the complex types (only `Image` currently) in nested arrays (e.g. `[[pil_image], [pil_image, pil_image]]` or `[{"img": pil_image}`]), but I plan to address this in a separate PR. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4828/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4828/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4828",
"html_url": "https://github.com/huggingface/datasets/pull/4828",
"diff_url": "https://github.com/huggingface/datasets/pull/4828.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4828.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4827 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4827/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4827/comments | https://api.github.com/repos/huggingface/datasets/issues/4827/events | https://github.com/huggingface/datasets/pull/4827 | 1,335,994,312 | PR_kwDODunzps49B1zi | 4,827 | Add license metadata to pg19 | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660,225,940,000 | 1,660,230,063,000 | 1,660,229,198,000 | MEMBER | null | As reported over email by Roy Rijkers | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4827/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4827/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4827",
"html_url": "https://github.com/huggingface/datasets/pull/4827",
"diff_url": "https://github.com/huggingface/datasets/pull/4827.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4827.patch",
"merged_at": 1660229198000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4826 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4826/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4826/comments | https://api.github.com/repos/huggingface/datasets/issues/4826/events | https://github.com/huggingface/datasets/pull/4826 | 1,335,987,583 | PR_kwDODunzps49B0V3 | 4,826 | Fix language tags in dataset cards | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The non-passing tests are caused by other missing information in the dataset cards."
] | 1,660,225,634,000 | 1,660,227,468,000 | 1,660,226,592,000 | MEMBER | null | Fix language tags in all dataset cards, so that they are validated (aligned with our `languages.json` resource). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4826/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4826/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4826",
"html_url": "https://github.com/huggingface/datasets/pull/4826",
"diff_url": "https://github.com/huggingface/datasets/pull/4826.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4826.patch",
"merged_at": 1660226592000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4825 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4825/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4825/comments | https://api.github.com/repos/huggingface/datasets/issues/4825/events | https://github.com/huggingface/datasets/pull/4825 | 1,335,856,882 | PR_kwDODunzps49BYWL | 4,825 | [Windows] Fix Access Denied when using os.rename() | {
"login": "DougTrajano",
"id": 8703022,
"node_id": "MDQ6VXNlcjg3MDMwMjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8703022?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DougTrajano",
"html_url": "https://github.com/DougTrajano",
"followers_url": "https://api.github.com/users/DougTrajano/followers",
"following_url": "https://api.github.com/users/DougTrajano/following{/other_user}",
"gists_url": "https://api.github.com/users/DougTrajano/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DougTrajano/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DougTrajano/subscriptions",
"organizations_url": "https://api.github.com/users/DougTrajano/orgs",
"repos_url": "https://api.github.com/users/DougTrajano/repos",
"events_url": "https://api.github.com/users/DougTrajano/events{/privacy}",
"received_events_url": "https://api.github.com/users/DougTrajano/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Cool thank you ! Maybe we can just replace `os.rename` by `shutil.move` instead ?",
"> Cool thank you ! Maybe we can just replace `os.rename` by `shutil.move` instead ?\r\n\r\nYes, I think that could be a better solution, but I didn't test it in Linux (e.g. Ubuntu) to guarantee that `os.rename()` could be completely replaced by `shutil.move()`.",
"AFAIK `shutil.move` does call `os.rename` first before doing extra work to make it work on windows, so this is should be a safe safe change for linux ;)",
"> AFAIK `shutil.move` does call `os.rename` first before doing extra work to make it work on windows, so this is should be a safe safe change for linux ;)\r\n\r\nalright, let me change the PR then.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4825). All of your documentation changes will be reflected on that endpoint.",
"Hi @lhoestq looks like one of the tests failed, but is not related to this change, do I need to do something from my side?"
] | 1,660,219,035,000 | 1,661,346,547,000 | 1,661,346,547,000 | CONTRIBUTOR | null | In this PR, we are including an additional step when `os.rename()` raises a PermissionError.
Basically, we will use `shutil.move()` on the temp files.
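A minimal sketch of that fallback (names are illustrative, not the exact diff):
```python
import os
import shutil

def rename_with_fallback(src, dst):
    # os.rename can fail with "Access Denied" on Windows if another handle is still open;
    # shutil.move falls back to copy-and-delete when a plain rename is not possible
    try:
        os.rename(src, dst)
    except PermissionError:
        shutil.move(src, dst)
```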
Fix #2937 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4825/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4825/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4825",
"html_url": "https://github.com/huggingface/datasets/pull/4825",
"diff_url": "https://github.com/huggingface/datasets/pull/4825.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4825.patch",
"merged_at": 1661346547000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4824 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4824/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4824/comments | https://api.github.com/repos/huggingface/datasets/issues/4824/events | https://github.com/huggingface/datasets/pull/4824 | 1,335,826,639 | PR_kwDODunzps49BR5H | 4,824 | Fix titles in dataset cards | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The non-passing tests are caused by other missing information in the dataset cards."
] | 1,660,217,268,000 | 1,660,225,571,000 | 1,660,222,609,000 | MEMBER | null | Fix all the titles in the dataset cards, so that they conform to the required format. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4824/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4824/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4824",
"html_url": "https://github.com/huggingface/datasets/pull/4824",
"diff_url": "https://github.com/huggingface/datasets/pull/4824.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4824.patch",
"merged_at": 1660222609000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4823 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4823/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4823/comments | https://api.github.com/repos/huggingface/datasets/issues/4823/events | https://github.com/huggingface/datasets/pull/4823 | 1,335,687,033 | PR_kwDODunzps49A0O_ | 4,823 | Update data URL in mkqa dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660,209,373,000 | 1,660,211,510,000 | 1,660,210,672,000 | MEMBER | null | Update data URL in mkqa dataset.
Fix #4817. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4823/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4823/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4823",
"html_url": "https://github.com/huggingface/datasets/pull/4823",
"diff_url": "https://github.com/huggingface/datasets/pull/4823.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4823.patch",
"merged_at": 1660210671000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4822 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4822/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4822/comments | https://api.github.com/repos/huggingface/datasets/issues/4822/events | https://github.com/huggingface/datasets/issues/4822 | 1,335,675,352 | I_kwDODunzps5PnMnY | 4,822 | Moving dataset between namespaces breaks dataset viewer | {
"login": "cakiki",
"id": 3664563,
"node_id": "MDQ6VXNlcjM2NjQ1NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cakiki",
"html_url": "https://github.com/cakiki",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cakiki/subscriptions",
"organizations_url": "https://api.github.com/users/cakiki/orgs",
"repos_url": "https://api.github.com/users/cakiki/repos",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"received_events_url": "https://api.github.com/users/cakiki/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Let's keep open for now. We should try to reproduce"
] | 1,660,208,730,000 | 1,663,358,589,000 | null | CONTRIBUTOR | null | ## Describe the bug
I moved a dataset from my own namespace to an org and that broke the dataset viewer. To fix it, I had to manually edit the `dataset_info.json` file and change the first key in the JSON from `username--datasetname` to `orgname--datasetname`.
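For reference, a sketch of that manual edit (the file name and the old/new keys are just the placeholders from the description above):
```python
import json

path = "dataset_info.json"  # the metadata file in the dataset repository
with open(path) as f:
    info = json.load(f)
# rename the config key so that it matches the new namespace
info["orgname--datasetname"] = info.pop("username--datasetname")
with open(path, "w") as f:
    json.dump(info, f, indent=2)
```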
## Steps to reproduce the bug
What I did was:
1- Upload a dataset to my own namespace using `push_to_hub`
2- Move the dataset from my namespace to an org using the web interface.
## Expected results
For the file to be changed accordingly.
## Actual results
Broken dataset viewer.
## Environment info
- `datasets` version: 2.3.3.dev0
- Platform: Linux-4.15.0-189-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.5
- PyArrow version: 7.0.0
- Pandas version: 1.3.5
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4822/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4822/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4821 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4821/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4821/comments | https://api.github.com/repos/huggingface/datasets/issues/4821/events | https://github.com/huggingface/datasets/pull/4821 | 1,335,664,588 | PR_kwDODunzps49AvaE | 4,821 | Fix train_test_split docs | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660,208,145,000 | 1,660,211,969,000 | 1,660,211,140,000 | CONTRIBUTOR | null | I saw that `stratify` is added to the `train_test_split` method as per #4322, hence the docs can be updated. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4821/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4821/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4821",
"html_url": "https://github.com/huggingface/datasets/pull/4821",
"diff_url": "https://github.com/huggingface/datasets/pull/4821.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4821.patch",
"merged_at": 1660211140000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4820 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4820/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4820/comments | https://api.github.com/repos/huggingface/datasets/issues/4820/events | https://github.com/huggingface/datasets/issues/4820 | 1,335,117,132 | I_kwDODunzps5PlEVM | 4,820 | Terminating: fork() called from a process already using GNU OpenMP, this is unsafe. | {
"login": "talhaanwarch",
"id": 37379131,
"node_id": "MDQ6VXNlcjM3Mzc5MTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/37379131?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/talhaanwarch",
"html_url": "https://github.com/talhaanwarch",
"followers_url": "https://api.github.com/users/talhaanwarch/followers",
"following_url": "https://api.github.com/users/talhaanwarch/following{/other_user}",
"gists_url": "https://api.github.com/users/talhaanwarch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/talhaanwarch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/talhaanwarch/subscriptions",
"organizations_url": "https://api.github.com/users/talhaanwarch/orgs",
"repos_url": "https://api.github.com/users/talhaanwarch/repos",
"events_url": "https://api.github.com/users/talhaanwarch/events{/privacy}",
"received_events_url": "https://api.github.com/users/talhaanwarch/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Fixed by installing either resampy<3 or resampy>=4"
] | 1,660,160,553,000 | 1,660,161,190,000 | 1,660,161,190,000 | NONE | null | Hi, when I try to run the `prepare_dataset` function from [fine-tuning ASR tutorial 4](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_tuning_Wav2Vec2_for_English_ASR.ipynb), I get the following error:
Terminating: fork() called from a process already using GNU OpenMP, this is unsafe.
There are no other logs available, so I have no clue what the cause is. Here is the code I run:
```
def prepare_dataset(batch):
audio = batch["path"]
# batched output is "un-batched"
batch["input_values"] = processor(audio["array"], sampling_rate=audio["sampling_rate"]).input_values[0]
batch["input_length"] = len(batch["input_values"])
with processor.as_target_processor():
batch["labels"] = processor(batch["text"]).input_ids
return batch
data = data.map(prepare_dataset, remove_columns=data.column_names["train"],
num_proc=4)
```
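For what it's worth, running the same map without worker processes avoids `fork()` entirely, so the underlying exception (if any) is raised in the main process instead of silently killing a worker — a diagnostic sketch, assuming nothing else changes:
```python
data = data.map(
    prepare_dataset,
    remove_columns=data.column_names["train"],
    num_proc=1,  # no fork(), useful only to surface the real error
)
```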
Specify the actual results or traceback.
There is no traceback except
`Terminating: fork() called from a process already using GNU OpenMP, this is unsafe.`
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: Linux-5.15.0-43-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 9.0.0
- Pandas version: 1.4.3
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4820/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4820/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4819 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4819/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4819/comments | https://api.github.com/repos/huggingface/datasets/issues/4819/events | https://github.com/huggingface/datasets/pull/4819 | 1,335,064,449 | PR_kwDODunzps48-xc6 | 4,819 | Add missing language tags to resources | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660,158,402,000 | 1,660,160,749,000 | 1,660,159,935,000 | MEMBER | null | Add missing language tags to resources, required by existing datasets on GitHub. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4819/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4819/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4819",
"html_url": "https://github.com/huggingface/datasets/pull/4819",
"diff_url": "https://github.com/huggingface/datasets/pull/4819.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4819.patch",
"merged_at": 1660159935000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4818 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4818/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4818/comments | https://api.github.com/repos/huggingface/datasets/issues/4818/events | https://github.com/huggingface/datasets/pull/4818 | 1,334,941,810 | PR_kwDODunzps48-W7a | 4,818 | Add add cc-by-sa-2.5 license tag | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4818). All of your documentation changes will be reflected on that endpoint."
] | 1,660,151,919,000 | 1,660,154,101,000 | null | CONTRIBUTOR | null | - [ ] add it to moon-landing
- [ ] add it to hub-docs | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4818/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4818/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4818",
"html_url": "https://github.com/huggingface/datasets/pull/4818",
"diff_url": "https://github.com/huggingface/datasets/pull/4818.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4818.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4817 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4817/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4817/comments | https://api.github.com/repos/huggingface/datasets/issues/4817/events | https://github.com/huggingface/datasets/issues/4817 | 1,334,572,163 | I_kwDODunzps5Pi_SD | 4,817 | Outdated Link for mkqa Dataset | {
"login": "liaeh",
"id": 52380283,
"node_id": "MDQ6VXNlcjUyMzgwMjgz",
"avatar_url": "https://avatars.githubusercontent.com/u/52380283?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liaeh",
"html_url": "https://github.com/liaeh",
"followers_url": "https://api.github.com/users/liaeh/followers",
"following_url": "https://api.github.com/users/liaeh/following{/other_user}",
"gists_url": "https://api.github.com/users/liaeh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liaeh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liaeh/subscriptions",
"organizations_url": "https://api.github.com/users/liaeh/orgs",
"repos_url": "https://api.github.com/users/liaeh/repos",
"events_url": "https://api.github.com/users/liaeh/events{/privacy}",
"received_events_url": "https://api.github.com/users/liaeh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting @liaeh, we are investigating this. "
] | 1,660,135,545,000 | 1,660,210,672,000 | 1,660,210,672,000 | NONE | null | ## Describe the bug
The URL used to download the mkqa dataset is outdated: the dataset file now lives at https://github.com/apple/ml-mkqa/blob/main/dataset/mkqa.jsonl.gz, while the loading script still points to https://github.com/apple/ml-mkqa/raw/master/dataset/mkqa.jsonl.gz (the `master` branch has been renamed to `main`).
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("mkqa")
```
## Expected results
downloads the dataset
## Actual results
```python
Downloading builder script:
4.79k/? [00:00<00:00, 201kB/s]
Downloading metadata:
13.2k/? [00:00<00:00, 504kB/s]
Downloading and preparing dataset mkqa/mkqa (download: 11.35 MiB, generated: 34.29 MiB, post-processed: Unknown size, total: 45.65 MiB) to /home/lhr/.cache/huggingface/datasets/mkqa/mkqa/1.0.0/5401489c674c81257cf563417aaaa5de2c7e26a1090ce9b10eb0404f10003d4d...
Downloading data files: 0%
0/1 [00:00<?, ?it/s]
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
Input In [3], in <cell line: 3>()
1 from datasets import load_dataset
----> 3 dataset = load_dataset("mkqa")
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/load.py:1746, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1743 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES
1745 # Download and prepare data
-> 1746 builder_instance.download_and_prepare(
1747 download_config=download_config,
1748 download_mode=download_mode,
1749 ignore_verifications=ignore_verifications,
1750 try_from_hf_gcs=try_from_hf_gcs,
1751 use_auth_token=use_auth_token,
1752 )
1754 # Build dataset for splits
1755 keep_in_memory = (
1756 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
1757 )
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/builder.py:704, in DatasetBuilder.download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
702 logger.warning("HF google storage unreachable. Downloading and preparing it from source")
703 if not downloaded_from_gcs:
--> 704 self._download_and_prepare(
705 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
706 )
707 # Sync info
708 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/builder.py:1227, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verify_infos)
1226 def _download_and_prepare(self, dl_manager, verify_infos):
-> 1227 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/builder.py:771, in DatasetBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
769 split_dict = SplitDict(dataset_name=self.name)
770 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 771 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
773 # Checksums verification
774 if verify_infos and dl_manager.record_checksums:
File ~/.cache/huggingface/modules/datasets_modules/datasets/mkqa/5401489c674c81257cf563417aaaa5de2c7e26a1090ce9b10eb0404f10003d4d/mkqa.py:130, in Mkqa._split_generators(self, dl_manager)
128 # download and extract URLs
129 urls_to_download = _URLS
--> 130 downloaded_files = dl_manager.download_and_extract(urls_to_download)
132 return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]})]
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/download/download_manager.py:431, in DownloadManager.download_and_extract(self, url_or_urls)
415 def download_and_extract(self, url_or_urls):
416 """Download and extract given url_or_urls.
417
418 Is roughly equivalent to:
(...)
429 extracted_path(s): `str`, extracted paths of given URL(s).
430 """
--> 431 return self.extract(self.download(url_or_urls))
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/download/download_manager.py:309, in DownloadManager.download(self, url_or_urls)
306 download_func = partial(self._download, download_config=download_config)
308 start_time = datetime.now()
--> 309 downloaded_path_or_paths = map_nested(
310 download_func,
311 url_or_urls,
312 map_tuple=True,
313 num_proc=download_config.num_proc,
314 disable_tqdm=not is_progress_bar_enabled(),
315 desc="Downloading data files",
316 )
317 duration = datetime.now() - start_time
318 logger.info(f"Downloading took {duration.total_seconds() // 60} min")
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py:393, in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types, disable_tqdm, desc)
391 num_proc = 1
392 if num_proc <= 1 or len(iterable) <= num_proc:
--> 393 mapped = [
394 _single_map_nested((function, obj, types, None, True, None))
395 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)
396 ]
397 else:
398 split_kwds = [] # We organize the splits ourselve (contiguous splits)
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py:394, in <listcomp>(.0)
391 num_proc = 1
392 if num_proc <= 1 or len(iterable) <= num_proc:
393 mapped = [
--> 394 _single_map_nested((function, obj, types, None, True, None))
395 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)
396 ]
397 else:
398 split_kwds = [] # We organize the splits ourselve (contiguous splits)
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py:330, in _single_map_nested(args)
328 # Singleton first to spare some computation
329 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):
--> 330 return function(data_struct)
332 # Reduce logging to keep things readable in multiprocessing with tqdm
333 if rank is not None and logging.get_verbosity() < logging.WARNING:
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/download/download_manager.py:335, in DownloadManager._download(self, url_or_filename, download_config)
332 if is_relative_path(url_or_filename):
333 # append the relative path to the base_path
334 url_or_filename = url_or_path_join(self._base_path, url_or_filename)
--> 335 return cached_path(url_or_filename, download_config=download_config)
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/utils/file_utils.py:185, in cached_path(url_or_filename, download_config, **download_kwargs)
181 url_or_filename = str(url_or_filename)
183 if is_remote_url(url_or_filename):
184 # URL, so get it from the cache (downloading if necessary)
--> 185 output_path = get_from_cache(
186 url_or_filename,
187 cache_dir=cache_dir,
188 force_download=download_config.force_download,
189 proxies=download_config.proxies,
190 resume_download=download_config.resume_download,
191 user_agent=download_config.user_agent,
192 local_files_only=download_config.local_files_only,
193 use_etag=download_config.use_etag,
194 max_retries=download_config.max_retries,
195 use_auth_token=download_config.use_auth_token,
196 ignore_url_params=download_config.ignore_url_params,
197 download_desc=download_config.download_desc,
198 )
199 elif os.path.exists(url_or_filename):
200 # File, and it exists.
201 output_path = url_or_filename
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/utils/file_utils.py:530, in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token, ignore_url_params, download_desc)
525 raise FileNotFoundError(
526 f"Cannot find the requested files in the cached path at {cache_path} and outgoing traffic has been"
527 " disabled. To enable file online look-ups, set 'local_files_only' to False."
528 )
529 elif response is not None and response.status_code == 404:
--> 530 raise FileNotFoundError(f"Couldn't find file at {url}")
531 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
532 if head_error is not None:
FileNotFoundError: Couldn't find file at https://github.com/apple/ml-mkqa/raw/master/dataset/mkqa.jsonl.gz
```
## Environment info
- `datasets` version: 2.4.0
- Platform: Linux-5.13.0-40-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 9.0.0
- Pandas version: 1.4.2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4817/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4817/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4816 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4816/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4816/comments | https://api.github.com/repos/huggingface/datasets/issues/4816/events | https://github.com/huggingface/datasets/pull/4816 | 1,334,099,454 | PR_kwDODunzps487kpq | 4,816 | Update version of opus_paracrawl dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660,109,984,000 | 1,660,314,749,000 | 1,660,313,876,000 | MEMBER | null | This PR updates OPUS ParaCrawl from version 7.1 to version 9.
Fix #4815. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4816/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4816/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4816",
"html_url": "https://github.com/huggingface/datasets/pull/4816",
"diff_url": "https://github.com/huggingface/datasets/pull/4816.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4816.patch",
"merged_at": 1660313876000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4815 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4815/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4815/comments | https://api.github.com/repos/huggingface/datasets/issues/4815/events | https://github.com/huggingface/datasets/issues/4815 | 1,334,078,303 | I_kwDODunzps5PhGtf | 4,815 | Outdated loading script for OPUS ParaCrawl dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,660,108,354,000 | 1,660,313,877,000 | 1,660,313,877,000 | MEMBER | null | ## Describe the bug
Our loading script for OPUS ParaCrawl loads version 7.1 of the corpus. The current version is 9.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4815/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4815/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4814 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4814/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4814/comments | https://api.github.com/repos/huggingface/datasets/issues/4814/events | https://github.com/huggingface/datasets/issues/4814 | 1,333,356,230 | I_kwDODunzps5PeWbG | 4,814 | Support CSV as metadata file format in AudioFolder/ImageFolder | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,660,055,809,000 | 1,661,947,148,000 | 1,661,947,148,000 | CONTRIBUTOR | null | Requested here: https://discuss.huggingface.co/t/how-to-structure-an-image-dataset-repo-using-the-image-folder-approach/21004. CSV is also used in AutoTrain for specifying metadata in image datasets. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4814/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4814/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4813 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4813/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4813/comments | https://api.github.com/repos/huggingface/datasets/issues/4813/events | https://github.com/huggingface/datasets/pull/4813 | 1,333,287,756 | PR_kwDODunzps48446r | 4,813 | Fix loading example in opus dataset cards | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660,052,858,000 | 1,660,067,535,000 | 1,660,066,698,000 | MEMBER | null | This PR:
- fixes the loading examples in the dataset cards (using the corrected dataset names) for:
- opus_dgt
- opus_paracrawl
- opus_wikipedia
- adds the missing required information to their dataset cards: title, data instances/fields/splits
- enumerates the supported languages
- adds a missing citation reference for opus_wikipedia
Related to:
- #4806 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4813/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4813/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4813",
"html_url": "https://github.com/huggingface/datasets/pull/4813",
"diff_url": "https://github.com/huggingface/datasets/pull/4813.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4813.patch",
"merged_at": 1660066698000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4812 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4812/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4812/comments | https://api.github.com/repos/huggingface/datasets/issues/4812/events | https://github.com/huggingface/datasets/pull/4812 | 1,333,051,730 | PR_kwDODunzps484Fzq | 4,812 | Fix bug in function validate_type for Python >= 3.9 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660,041,162,000 | 1,660,311,683,000 | 1,660,310,824,000 | MEMBER | null | Fix `validate_type` function, so that it uses `get_origin` instead. This makes the function forward compatible.
This fixes #4811 because:
```python
In [4]: typing.Optional[str]
Out[4]: typing.Optional[str]
In [5]: get_origin(typing.Optional[str])
Out[5]: typing.Union
```
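As an illustration only (the helper below is hypothetical, not the actual `validate_type` implementation), a forward-compatible check can rely on `typing.get_origin`, which normalizes `Optional[...]` to `Union` on all supported Python versions:
```python
from typing import Optional, Union, get_origin

def is_optional_like(expected_type) -> bool:
    # `get_origin` returns `typing.Union` for both `Optional[str]` and
    # `Union[str, None]`, regardless of how the Python version prints them
    return get_origin(expected_type) is Union

assert is_optional_like(Optional[str])
assert is_optional_like(Union[str, None])
assert not is_optional_like(str)  # get_origin(str) is None
```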
Fix #4811. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4812/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4812/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4812",
"html_url": "https://github.com/huggingface/datasets/pull/4812",
"diff_url": "https://github.com/huggingface/datasets/pull/4812.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4812.patch",
"merged_at": 1660310824000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4811 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4811/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4811/comments | https://api.github.com/repos/huggingface/datasets/issues/4811/events | https://github.com/huggingface/datasets/issues/4811 | 1,333,043,421 | I_kwDODunzps5PdKDd | 4,811 | Bug in function validate_type for Python >= 3.9 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,660,040,721,000 | 1,660,310,825,000 | 1,660,310,825,000 | MEMBER | null | ## Describe the bug
The function `validate_type` assumes that the type `typing.Optional[str]` is automatically transformed to `typing.Union[str, NoneType]`.
```python
In [4]: typing.Optional[str]
Out[4]: typing.Union[str, NoneType]
```
However, this is not the case for Python >= 3.9:
```python
In [3]: typing.Optional[str]
Out[3]: typing.Optional[str]
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4811/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4811/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4810 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4810/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4810/comments | https://api.github.com/repos/huggingface/datasets/issues/4810/events | https://github.com/huggingface/datasets/pull/4810 | 1,333,038,702 | PR_kwDODunzps484C9l | 4,810 | hellaswag: add non-empty description to fix metadata issue | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4810). All of your documentation changes will be reflected on that endpoint.",
"Are the `metadata JSON file` not on their way to deprecation? 😆😇\r\n\r\nIMO, more generally than this particular PR, the contribution process should be simplified now that many validation checks happen on the hub side.\r\n\r\nKeeping this open in the meantime to get more potential feedback!"
] | 1,660,040,474,000 | 1,660,227,062,000 | null | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4810/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4810/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4810",
"html_url": "https://github.com/huggingface/datasets/pull/4810",
"diff_url": "https://github.com/huggingface/datasets/pull/4810.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4810.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4809 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4809/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4809/comments | https://api.github.com/repos/huggingface/datasets/issues/4809/events | https://github.com/huggingface/datasets/pull/4809 | 1,332,842,747 | PR_kwDODunzps483Y4h | 4,809 | Complete the mlqa dataset card | {
"login": "eldhoittangeorge",
"id": 7940237,
"node_id": "MDQ6VXNlcjc5NDAyMzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7940237?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eldhoittangeorge",
"html_url": "https://github.com/eldhoittangeorge",
"followers_url": "https://api.github.com/users/eldhoittangeorge/followers",
"following_url": "https://api.github.com/users/eldhoittangeorge/following{/other_user}",
"gists_url": "https://api.github.com/users/eldhoittangeorge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eldhoittangeorge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eldhoittangeorge/subscriptions",
"organizations_url": "https://api.github.com/users/eldhoittangeorge/orgs",
"repos_url": "https://api.github.com/users/eldhoittangeorge/repos",
"events_url": "https://api.github.com/users/eldhoittangeorge/events{/privacy}",
"received_events_url": "https://api.github.com/users/eldhoittangeorge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> Thanks for your contribution, @eldhoittangeorge.\r\n> \r\n> The CI error message: https://github.com/huggingface/datasets/runs/7743526624?check_suite_focus=true\r\n> \r\n> ```\r\n> E ValueError: The following issues have been found in the dataset cards:\r\n> E YAML tags:\r\n> E __init__() missing 5 required positional arguments: 'annotations_creators', 'language_creators', 'license', 'size_categories', and 'source_datasets'\r\n> ```\r\n\r\nI will fix the CI error.",
"@eldhoittangeorge, thanks again for all the fixes. Just a minor one before we can merge this PR: https://github.com/huggingface/datasets/runs/7744885754?check_suite_focus=true\r\n```\r\nE YAML tags:\r\nE Could not validate the metadata, found the following errors:\r\nE * field 'language_creators':\r\nE \t['unknown'] are not registered tags for 'language_creators', reference at https://github.com/huggingface/datasets/tree/main/src/datasets/utils/resources/creators.json\r\n```",
"> \r\n\r\nThanks, I updated the file. \r\nA small suggestion can you mention this link https://github.com/huggingface/datasets/tree/main/src/datasets/utils/resources/ in the contribution page. So that others will know the acceptable values for the tags."
] | 1,660,030,686,000 | 1,660,062,381,000 | 1,660,051,603,000 | CONTRIBUTOR | null | I fixed the issue #4808
Details of PR:
- Added languages included in the dataset.
- Added task id and task category.
- Updated the citation information.
Fix #4808. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4809/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4809/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4809",
"html_url": "https://github.com/huggingface/datasets/pull/4809",
"diff_url": "https://github.com/huggingface/datasets/pull/4809.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4809.patch",
"merged_at": 1660051603000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4808 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4808/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4808/comments | https://api.github.com/repos/huggingface/datasets/issues/4808/events | https://github.com/huggingface/datasets/issues/4808 | 1,332,840,217 | I_kwDODunzps5PcYcZ | 4,808 | Add more information to the dataset card of mlqa dataset | {
"login": "eldhoittangeorge",
"id": 7940237,
"node_id": "MDQ6VXNlcjc5NDAyMzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7940237?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eldhoittangeorge",
"html_url": "https://github.com/eldhoittangeorge",
"followers_url": "https://api.github.com/users/eldhoittangeorge/followers",
"following_url": "https://api.github.com/users/eldhoittangeorge/following{/other_user}",
"gists_url": "https://api.github.com/users/eldhoittangeorge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eldhoittangeorge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eldhoittangeorge/subscriptions",
"organizations_url": "https://api.github.com/users/eldhoittangeorge/orgs",
"repos_url": "https://api.github.com/users/eldhoittangeorge/repos",
"events_url": "https://api.github.com/users/eldhoittangeorge/events{/privacy}",
"received_events_url": "https://api.github.com/users/eldhoittangeorge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "eldhoittangeorge",
"id": 7940237,
"node_id": "MDQ6VXNlcjc5NDAyMzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7940237?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eldhoittangeorge",
"html_url": "https://github.com/eldhoittangeorge",
"followers_url": "https://api.github.com/users/eldhoittangeorge/followers",
"following_url": "https://api.github.com/users/eldhoittangeorge/following{/other_user}",
"gists_url": "https://api.github.com/users/eldhoittangeorge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eldhoittangeorge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eldhoittangeorge/subscriptions",
"organizations_url": "https://api.github.com/users/eldhoittangeorge/orgs",
"repos_url": "https://api.github.com/users/eldhoittangeorge/repos",
"events_url": "https://api.github.com/users/eldhoittangeorge/events{/privacy}",
"received_events_url": "https://api.github.com/users/eldhoittangeorge/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "eldhoittangeorge",
"id": 7940237,
"node_id": "MDQ6VXNlcjc5NDAyMzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7940237?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eldhoittangeorge",
"html_url": "https://github.com/eldhoittangeorge",
"followers_url": "https://api.github.com/users/eldhoittangeorge/followers",
"following_url": "https://api.github.com/users/eldhoittangeorge/following{/other_user}",
"gists_url": "https://api.github.com/users/eldhoittangeorge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eldhoittangeorge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eldhoittangeorge/subscriptions",
"organizations_url": "https://api.github.com/users/eldhoittangeorge/orgs",
"repos_url": "https://api.github.com/users/eldhoittangeorge/repos",
"events_url": "https://api.github.com/users/eldhoittangeorge/events{/privacy}",
"received_events_url": "https://api.github.com/users/eldhoittangeorge/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"#self-assign",
"Fixed by:\r\n- #4809"
] | 1,660,030,542,000 | 1,660,052,003,000 | 1,660,052,003,000 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4808/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4808/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4807 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4807/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4807/comments | https://api.github.com/repos/huggingface/datasets/issues/4807/events | https://github.com/huggingface/datasets/pull/4807 | 1,332,784,110 | PR_kwDODunzps483MSH | 4,807 | document fix in opus_gnome dataset | {
"login": "gojiteji",
"id": 38291975,
"node_id": "MDQ6VXNlcjM4MjkxOTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/38291975?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gojiteji",
"html_url": "https://github.com/gojiteji",
"followers_url": "https://api.github.com/users/gojiteji/followers",
"following_url": "https://api.github.com/users/gojiteji/following{/other_user}",
"gists_url": "https://api.github.com/users/gojiteji/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gojiteji/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gojiteji/subscriptions",
"organizations_url": "https://api.github.com/users/gojiteji/orgs",
"repos_url": "https://api.github.com/users/gojiteji/repos",
"events_url": "https://api.github.com/users/gojiteji/events{/privacy}",
"received_events_url": "https://api.github.com/users/gojiteji/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Duplicate:\r\n- #4806 "
] | 1,660,027,093,000 | 1,660,030,083,000 | 1,660,030,083,000 | CONTRIBUTOR | null | I fixed a issue #4805.
I changed `"gnome"` to `"opus_gnome"` in[ README.md](https://github.com/huggingface/datasets/tree/main/datasets/opus_gnome#dataset-summary). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4807/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4807/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4807",
"html_url": "https://github.com/huggingface/datasets/pull/4807",
"diff_url": "https://github.com/huggingface/datasets/pull/4807.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4807.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4806 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4806/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4806/comments | https://api.github.com/repos/huggingface/datasets/issues/4806/events | https://github.com/huggingface/datasets/pull/4806 | 1,332,664,038 | PR_kwDODunzps482yiS | 4,806 | Fix opus_gnome dataset card | {
"login": "gojiteji",
"id": 38291975,
"node_id": "MDQ6VXNlcjM4MjkxOTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/38291975?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gojiteji",
"html_url": "https://github.com/gojiteji",
"followers_url": "https://api.github.com/users/gojiteji/followers",
"following_url": "https://api.github.com/users/gojiteji/following{/other_user}",
"gists_url": "https://api.github.com/users/gojiteji/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gojiteji/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gojiteji/subscriptions",
"organizations_url": "https://api.github.com/users/gojiteji/orgs",
"repos_url": "https://api.github.com/users/gojiteji/repos",
"events_url": "https://api.github.com/users/gojiteji/events{/privacy}",
"received_events_url": "https://api.github.com/users/gojiteji/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@gojiteji why have you closed this PR and created an identical one?\r\n- #4807 ",
"@albertvillanova \r\nI forgot to follow \"How to create a Pull\" in CONTRIBUTING.md in this branch.",
"Both are identical. And you can push additional commits to this branch.",
"I see. Thank you for your comment.",
"Anyway, @gojiteji thanks for your contribution and this fix.",
"Once you have modified the `opus_gnome` dataset card, our Continuous Integration test suite performs some tests on it that make some additional requirements: the errors that appear have nothing to do with your contribution, but with these additional quality requirements.",
"> the errors that appear have nothing to do with your contribution, but with these additional quality requirements.\r\n\r\nIs there anything I should do?",
"If you would like to address them as well in this PR, it would be awesome: https://github.com/huggingface/datasets/runs/7741104780?check_suite_focus=true\r\n",
"These are the 2 error messages:\r\n```\r\nE ValueError: The following issues have been found in the dataset cards:\r\nE README Validation:\r\nE The following issues were found for the README at `/home/runner/work/datasets/datasets/datasets/opus_gnome/README.md`:\r\nE -\tNo first-level heading starting with `Dataset Card for` found in README. Skipping further validation for this README.\r\n\r\nE The following issues have been found in the dataset cards:\r\nE YAML tags:\r\nE Could not validate the metadata, found the following errors:\r\nE * field 'language':\r\nE \t['ara', 'cat', 'foo', 'gr', 'nqo', 'tmp'] are not registered tags for 'language', reference at https://github.com/huggingface/datasets/tree/main/src/datasets/utils/resources/languages.json\r\n```",
"In principle there are 2 errors:\r\n\r\nThe first one says, the title of the README does not start with `Dataset Card for`:\r\n- The README title is: `# Dataset Card Creation Guide`\r\n- According to the [template here](https://github.com/huggingface/datasets/blob/main/templates/README.md), it should be: `# Dataset Card for [Dataset Name]`",
"In relation with the languages:\r\n- you should check whether the language codes are properly spelled\r\n- and if so, adding them to our `languages.json` file, so that they are properly validated",
"Thank you for the detailed information. I'm checking it now.",
"```\r\nE ValueError: The following issues have been found in the dataset cards:\r\nE README Validation:\r\nE The following issues were found for the README at `/home/runner/work/datasets/datasets/datasets/opus_gnome/README.md`:\r\nE -\tExpected some content in section `Data Instances` but it is empty.\r\nE -\tExpected some content in section `Data Fields` but it is empty.\r\nE -\tExpected some content in section `Data Splits` but it is empty.\r\n```",
"I added `ara`, `cat`, `gr`, and `nqo` to `languages.json` and removed `foo` and `tmp` from `README.md`.\r\nI also write Data Instances, Data Fields, and Data Splits in `README.md`.",
"Thanks for your investigation and fixes to the dataset card structure! I'm just making some suggestions before merging this PR: see below.",
"Should I create PR for `config.json` to add ` ara cat gr nqo` first?\r\nI think I can pass this failing after that.\r\n\r\nOr removing `ara, cat, gr, nqo, foo, tmp` from `README.md`. ",
"Once you address these issues, all the CI tests will pass.",
"Once the remaining changes are addressed (see unresolved above), we will be able to merge this:\r\n- [ ] Remove \"ara\" from README\r\n- [ ] Remove \"cat\" from README\r\n- [ ] Remove \"gr\" from README\r\n- [ ] Replace \"tmp\" with \"tyj\" in README\r\n- [ ] Add \"tyj\" to `languages.json`:\r\n ```\r\n \"tyj\": \"Tai Do; Tai Yo\",",
"I did the five changes."
] | 1,660,016,415,000 | 1,660,046,806,000 | 1,660,045,924,000 | CONTRIBUTOR | null | I fixed a issue #4805.
I changed `"gnome"` to `"opus_gnome"` in[ README.md](https://github.com/huggingface/datasets/tree/main/datasets/opus_gnome#dataset-summary).
Fix #4805 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4806/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4806/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4806",
"html_url": "https://github.com/huggingface/datasets/pull/4806",
"diff_url": "https://github.com/huggingface/datasets/pull/4806.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4806.patch",
"merged_at": 1660045924000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4805 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4805/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4805/comments | https://api.github.com/repos/huggingface/datasets/issues/4805/events | https://github.com/huggingface/datasets/issues/4805 | 1,332,653,531 | I_kwDODunzps5Pbq3b | 4,805 | Wrong example in opus_gnome dataset card | {
"login": "gojiteji",
"id": 38291975,
"node_id": "MDQ6VXNlcjM4MjkxOTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/38291975?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gojiteji",
"html_url": "https://github.com/gojiteji",
"followers_url": "https://api.github.com/users/gojiteji/followers",
"following_url": "https://api.github.com/users/gojiteji/following{/other_user}",
"gists_url": "https://api.github.com/users/gojiteji/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gojiteji/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gojiteji/subscriptions",
"organizations_url": "https://api.github.com/users/gojiteji/orgs",
"repos_url": "https://api.github.com/users/gojiteji/repos",
"events_url": "https://api.github.com/users/gojiteji/events{/privacy}",
"received_events_url": "https://api.github.com/users/gojiteji/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 1,660,015,287,000 | 1,660,045,925,000 | 1,660,045,925,000 | CONTRIBUTOR | null | ## Describe the bug
I found that [the example in the opus_gnome dataset card](https://github.com/huggingface/datasets/tree/main/datasets/opus_gnome#dataset-summary) doesn't work.
## Steps to reproduce the bug
```python
load_dataset("gnome", lang1="it", lang2="pl")
```
`"gnome"` should be `"opus_gnome"`
## Expected results
```bash
100%
1/1 [00:00<00:00, 42.09it/s]
DatasetDict({
train: Dataset({
features: ['id', 'translation'],
num_rows: 8368
})
})
```
## Actual results
```bash
Couldn't find 'gnome' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/main/datasets/gnome/gnome.py
```
## Environment info
- `datasets` version: 2.4.0
- Platform: Linux-5.4.0-120-generic-x86_64-with-glibc2.27
- Python version: 3.9.13
- PyArrow version: 9.0.0
- Pandas version: 1.4.3
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4805/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4805/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4804 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4804/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4804/comments | https://api.github.com/repos/huggingface/datasets/issues/4804/events | https://github.com/huggingface/datasets/issues/4804 | 1,332,630,358 | I_kwDODunzps5PblNW | 4,804 | streaming dataset with concatenating splits raises an error | {
"login": "Bing-su",
"id": 37621276,
"node_id": "MDQ6VXNlcjM3NjIxMjc2",
"avatar_url": "https://avatars.githubusercontent.com/u/37621276?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bing-su",
"html_url": "https://github.com/Bing-su",
"followers_url": "https://api.github.com/users/Bing-su/followers",
"following_url": "https://api.github.com/users/Bing-su/following{/other_user}",
"gists_url": "https://api.github.com/users/Bing-su/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bing-su/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bing-su/subscriptions",
"organizations_url": "https://api.github.com/users/Bing-su/orgs",
"repos_url": "https://api.github.com/users/Bing-su/repos",
"events_url": "https://api.github.com/users/Bing-su/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bing-su/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi! Only the name of a particular split (\"train\", \"test\", ...) is supported as a split pattern if `streaming=True`. We plan to address this limitation soon."
] | 1,660,012,916,000 | 1,660,740,156,000 | null | NONE | null | ## Describe the bug
Streaming a dataset with concatenated splits (e.g. `split="train+validation"`) raises an error.
## Steps to reproduce the bug
```python
from datasets import load_dataset
# no error
repo = "nateraw/ade20k-tiny"
dataset = load_dataset(repo, split="train+validation")
```
```python
from datasets import load_dataset
# error
repo = "nateraw/ade20k-tiny"
dataset = load_dataset(repo, split="train+validation", streaming=True)
```
```sh
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-4-a6ae02d63899>](https://localhost:8080/#) in <module>()
3 # error
4 repo = "nateraw/ade20k-tiny"
----> 5 dataset = load_dataset(repo, split="train+validation", streaming=True)
1 frames
[/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in as_streaming_dataset(self, split, base_path)
1030 splits_generator = splits_generators[split]
1031 else:
-> 1032 raise ValueError(f"Bad split: {split}. Available splits: {list(splits_generators)}")
1033
1034 # Create a dataset for each of the given splits
ValueError: Bad split: train+validation. Available splits: ['validation', 'train']
```
[Colab](https://colab.research.google.com/drive/1wMj08_0bym9jnGgByib4lsBPu8NCZBG9?usp=sharing)
## Expected results
The dataset loads successfully, or an error is raised saying that concatenated splits are not supported in streaming mode.
## Actual results
The `ValueError` shown above.
## Environment info
- `datasets` version: 2.4.0
- Platform: Windows-10-10.0.22000-SP0 (windows11 x64)
- Python version: 3.9.13
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
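A possible workaround sketch until split patterns like `"train+validation"` are supported with `streaming=True` (assuming it is acceptable to iterate over the splits back-to-back):
```python
from itertools import chain

from datasets import load_dataset

repo = "nateraw/ade20k-tiny"
train = load_dataset(repo, split="train", streaming=True)
val = load_dataset(repo, split="validation", streaming=True)

# Each split is an IterableDataset, so they can simply be consumed in sequence
for example in chain(train, val):
    ...
```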
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4804/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4804/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4803 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4803/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4803/comments | https://api.github.com/repos/huggingface/datasets/issues/4803/events | https://github.com/huggingface/datasets/issues/4803 | 1,332,079,562 | I_kwDODunzps5PZevK | 4,803 | Support `pipeline` argument in inspect.py functions | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 1,659,974,484,000 | 1,659,974,484,000 | null | CONTRIBUTOR | null | **Is your feature request related to a problem? Please describe.**
The `wikipedia` dataset requires a `pipeline` argument to build the list of splits:
https://huggingface.co/datasets/wikipedia/blob/main/wikipedia.py#L937
But this is currently not supported in `get_dataset_config_info`:
https://github.com/huggingface/datasets/blob/main/src/datasets/inspect.py#L373-L375
which is called by other functions, e.g. `get_dataset_split_names`.
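For illustration, a minimal way to hit the limitation (a sketch; the exact exception raised is an assumption, but the call fails because the inspect helpers never pass a `pipeline` argument down to the builder):
```python
from datasets import get_dataset_split_names

# "20220301.en" is one of the preprocessed wikipedia configs; this currently fails
# because the wikipedia builder's `_split_generators` also expects a Beam `pipeline`.
get_dataset_split_names("wikipedia", "20220301.en")
```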
**Additional context**
The dataset viewer is not working out-of-the-box on `wikipedia` for this reason:
https://huggingface.co/datasets/wikipedia/viewer
<img width="637" alt="Capture d’écran 2022-08-08 à 12 01 16" src="https://user-images.githubusercontent.com/1676121/183461838-5330783b-0269-4ba7-a999-314cde2023d8.png">
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4803/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4803/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4802 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4802/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4802/comments | https://api.github.com/repos/huggingface/datasets/issues/4802/events | https://github.com/huggingface/datasets/issues/4802 | 1,331,676,691 | I_kwDODunzps5PX8YT | 4,802 | `with_format` behavior is inconsistent on different datasets | {
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi! You can get a `torch.Tensor` if you do the following:\r\n```python\r\nraw = load_dataset(\"beans\", split=\"train\")\r\nraw = raw.select(range(100))\r\n\r\npreprocessor = AutoFeatureExtractor.from_pretrained(\"nateraw/vit-base-beans\")\r\n\r\nfrom datasets import Array3D\r\nfeatures = raw.features.copy()\r\nfeatures[\"pixel_values\"] = datasets.Array3D(shape=(3, 224, 224), dtype=\"float32\")\r\n\r\ndef preprocess_func(examples):\r\n imgs = [img.convert(\"RGB\") for img in examples[\"image\"]]\r\n return preprocessor(imgs)\r\n\r\ndata = raw.map(preprocess_func, batched=True, features=features)\r\n\r\nprint(type(data[0][\"pixel_values\"]))\r\n\r\ndata = data.with_format(\"torch\", columns=[\"pixel_values\"])\r\n\r\nprint(type(data[0][\"pixel_values\"]))\r\n```\r\n\r\nThe reason for this \"inconsistency\" in the default case is the way PyArrow infers the type of multi-dim arrays (in this case, the `pixel_values` column). If the type is not specified manually, PyArrow assumes it is a dynamic-length sequence (it needs to know the type before writing the first batch to a cache file, and it can't be sure the array is fixed ahead of time; `ArrayXD` is our way of telling that the dims are fixed), so it already fails to convert the corresponding array to NumPy properly (you get an array of `np.object` arrays). And `with_format(\"torch\")` replaces NumPy arrays with Torch tensors, so this bad formatting propagates."
] | 1,659,955,294,000 | 1,660,063,749,000 | null | CONTRIBUTOR | null | ## Describe the bug
I found a case where `with_format` does not transform the dataset to the requested format.
## Steps to reproduce the bug
Run:
```python
from transformers import AutoTokenizer, AutoFeatureExtractor
from datasets import load_dataset
raw = load_dataset("glue", "sst2", split="train")
raw = raw.select(range(100))
tokenizer = AutoTokenizer.from_pretrained("philschmid/tiny-bert-sst2-distilled")
def preprocess_func(examples):
return tokenizer(examples["sentence"], padding=True, max_length=256, truncation=True)
data = raw.map(preprocess_func, batched=True)
print(type(data[0]["input_ids"]))
data = data.with_format("torch", columns=["input_ids"])
print(type(data[0]["input_ids"]))
```
printing as expected:
```python
<class 'list'>
<class 'torch.Tensor'>
```
Then run:
```python
raw = load_dataset("beans", split="train")
raw = raw.select(range(100))
preprocessor = AutoFeatureExtractor.from_pretrained("nateraw/vit-base-beans")
def preprocess_func(examples):
imgs = [img.convert("RGB") for img in examples["image"]]
return preprocessor(imgs)
data = raw.map(preprocess_func, batched=True)
print(type(data[0]["pixel_values"]))
data = data.with_format("torch", columns=["pixel_values"])
print(type(data[0]["pixel_values"]))
```
printing, unexpectedly:
```python
<class 'list'>
<class 'list'>
```
## Expected results
`with_format` should transform the dataset into the requested format, but that is not the case here.
## Actual results
`type(data[0]["pixel_values"])` should be `torch.Tensor` in the example above
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: dev version, commit 44af3fafb527302282f6b6507b952de7435f0979
- Platform: Linux
- Python version: 3.9.12
- PyArrow version: 7.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4802/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4802/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4801 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4801/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4801/comments | https://api.github.com/repos/huggingface/datasets/issues/4801/events | https://github.com/huggingface/datasets/pull/4801 | 1,331,337,418 | PR_kwDODunzps48yTYu | 4,801 | Fix fine classes in trec dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,659,935,462,000 | 1,661,185,754,000 | 1,661,184,855,000 | MEMBER | null | This PR:
- replaces the fine labels, so that there are 50 instead of 47
- once the missing labels are added, all labels (fine and coarse) have been re-ordered, so that they align with the order in: https://cogcomp.seas.upenn.edu/Data/QA/QC/definition.html
- the feature names have been fixed: `fine_label` instead of `label-fine`
- to snake_case (underscores instead of hyphens)
- words have been reordered
Fix #4790. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4801/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4801/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4801",
"html_url": "https://github.com/huggingface/datasets/pull/4801",
"diff_url": "https://github.com/huggingface/datasets/pull/4801.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4801.patch",
"merged_at": 1661184855000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4800 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4800/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4800/comments | https://api.github.com/repos/huggingface/datasets/issues/4800/events | https://github.com/huggingface/datasets/pull/4800 | 1,331,288,128 | PR_kwDODunzps48yIss | 4,800 | support LargeListArray in pyarrow | {
"login": "xwwwwww",
"id": 48146603,
"node_id": "MDQ6VXNlcjQ4MTQ2NjAz",
"avatar_url": "https://avatars.githubusercontent.com/u/48146603?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xwwwwww",
"html_url": "https://github.com/xwwwwww",
"followers_url": "https://api.github.com/users/xwwwwww/followers",
"following_url": "https://api.github.com/users/xwwwwww/following{/other_user}",
"gists_url": "https://api.github.com/users/xwwwwww/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xwwwwww/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xwwwwww/subscriptions",
"organizations_url": "https://api.github.com/users/xwwwwww/orgs",
"repos_url": "https://api.github.com/users/xwwwwww/repos",
"events_url": "https://api.github.com/users/xwwwwww/events{/privacy}",
"received_events_url": "https://api.github.com/users/xwwwwww/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4800). All of your documentation changes will be reflected on that endpoint.",
"Hi, thanks for working on this! Can you run `make style` at the repo root to fix the code quality error in CI and add a test?",
"Hi, I have fixed the code quality error and added a test",
"It seems that CI fails due to the lack of memory for allocating a large array, while I pass the test locally.",
"Also, the current implementation of the NumPy-to-PyArrow conversion creates a lot of copies, which is not ideal for large arrays.\r\n\r\nWe can improve performance significantly if we rewrite this part:\r\nhttps://github.com/huggingface/datasets/blob/83f695c14507a3a38e9f4d84612cf49e5f50c153/src/datasets/features/features.py#L1322-L1323\r\n\r\nas\r\n```python\r\n values = pa.array(arr.ravel(), type=type) \r\n```",
"@xwwwwww Feel free to ignore https://github.com/huggingface/datasets/pull/4800#issuecomment-1212280549 and revert the changes you've made to address it. \r\n\r\nWithout copying the array, this would be possible:\r\n```python\r\narr = np.array([\r\n [1, 2, 3],\r\n [4, 5, 6]\r\n])\r\n\r\ndset = Dataset.from_dict({\"data\": [arr]})\r\n\r\narr[0][0] = 100 # this change would be reflected in dset's PyArrow table -> a breaking change and also probably unexpected by the user \r\n```",
"> @xwwwwww Feel free to ignore [#4800 (comment)](https://github.com/huggingface/datasets/pull/4800#issuecomment-1212280549) and revert the changes you've made to address it.\r\n> \r\n> Without copying the array, this would be possible:\r\n> \r\n> ```python\r\n> arr = np.array([\r\n> [1, 2, 3],\r\n> [4, 5, 6]\r\n> ])\r\n> \r\n> dset = Dataset.from_dict({\"data\": [arr]})\r\n> \r\n> arr[0][0] = 100 # this change would be reflected in dset's PyArrow table -> a breaking change and also probably unexpected by the user \r\n> ```\r\n\r\nOh, that makes sense.",
"passed tests in ubuntu while failed in windows",
"@mariosasko Hi, do you have any clue about this failure in windows?",
"Perhaps we can skip the added test on Windows then.\r\n\r\nNot sure if this can help, but the ERR tool available on Windows outputs the following for the returned error code `-1073741819`:\r\n```\r\n# for decimal -1073741819 / hex 0xc0000005\r\n ISCSI_ERR_SETUP_NETWORK_NODE iscsilog.h\r\n# Failed to setup initiator portal. Error status is given in\r\n# the dump data.\r\n STATUS_ACCESS_VIOLATION ntstatus.h\r\n# The instruction at 0x%p referenced memory at 0x%p. The\r\n# memory could not be %s.\r\n USBD_STATUS_DEV_NOT_RESPONDING usb.h\r\n# as an HRESULT: Severity: FAILURE (1), FACILITY_NONE (0x0), Code 0x5\r\n# for decimal 5 / hex 0x5\r\n WINBIO_FP_TOO_FAST winbio_err.h\r\n# Move your finger more slowly on the fingerprint reader.\r\n# as an HRESULT: Severity: FAILURE (1), FACILITY_NULL (0x0), Code 0x5\r\n ERROR_ACCESS_DENIED winerror.h\r\n# Access is denied.\r\n# 5 matches found for \"-1073741819\"\r\n```",
"What's the proper way to skip the added test in windows?\r\nI tried `if platform.system() == 'Linux'`, but the CI test seems stuck",
"@mariosasko Hi, any idea about this :)",
"Hi again! We want to skip the test on Windows but not on Linux. You can use this decorator to do so: \r\n```python\r\[email protected](os.name == \"nt\" and (os.getenv(\"CIRCLECI\") == \"true\" or os.getenv(\"GITHUB_ACTIONS\") == \"true\"), reason=\"The Windows CI runner does not have enough RAM to run this test\")\r\[email protected](...)\r\ndef test_large_array_xd_with_np(...):\r\n ...\r\n```",
"> Hi again! We want to skip the test on Windows but not on Linux. You can use this decorator to do so:\r\n> \r\n> ```python\r\n> @pytest.mark.skipif(os.name == \"nt\" and (os.getenv(\"CIRCLECI\") == \"true\" or os.getenv(\"GITHUB_ACTIONS\") == \"true\"), reason=\"The Windows CI runner does not have enough RAM to run this test\")\r\n> @pytest.mark.parametrize(...)\r\n> def test_large_array_xd_with_np(...):\r\n> ...\r\n> ```\r\n\r\nCI on windows still stucks :(",
"@mariosasko Hi, could you please take a look at this issue",
"@mariosasko Hi, all checks have passed, and we are finally ready to merge this PR :)",
"@lhoestq @albertvillanova Perhaps other maintainers can take a look and merge this PR :)"
] | 1,659,931,126,000 | 1,663,598,482,000 | null | CONTRIBUTOR | null | ```python
import numpy as np
import datasets
a = np.zeros((5000000, 768))
res = datasets.Dataset.from_dict({"embedding": a})
'''
File "/home/wenjiaxin/anaconda3/envs/data/lib/python3.8/site-packages/datasets/arrow_writer.py", line 178, in __arrow_array__
out = numpy_to_pyarrow_listarray(data)
File "/home/wenjiaxin/anaconda3/envs/data/lib/python3.8/site-packages/datasets/features/features.py", line 1173, in numpy_to_pyarrow_listarray
offsets = pa.array(np.arange(n_offsets + 1) * step_offsets, type=pa.int32())
File "pyarrow/array.pxi", line 312, in pyarrow.lib.array
File "pyarrow/array.pxi", line 83, in pyarrow.lib._ndarray_to_array
File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Integer value 2147483904 not in range: -2147483648 to 2147483647
'''
```
Loading a large numpy array currently raises the error above as the type of offsets is `int32`.
PyArrow supports [LargeListArray](https://arrow.apache.org/docs/python/generated/pyarrow.LargeListArray.html) for this case.
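A minimal sketch of that idea (for illustration only, not the exact code of this PR): with `int64` offsets and `LargeListArray`, the `int32` offset limit of 2,147,483,647 elements no longer applies.
```python
import numpy as np
import pyarrow as pa

arr = np.zeros((1000, 768))  # same structure as above, smaller so it runs quickly

n_rows, row_len = arr.shape
values = pa.array(arr.ravel())
# int64 offsets instead of the int32 offsets that overflow in the traceback above
offsets = pa.array(np.arange(n_rows + 1, dtype=np.int64) * row_len, type=pa.int64())
large_list = pa.LargeListArray.from_arrays(offsets, values)
print(large_list.type)  # large_list<item: double>
```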
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4800/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4800/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4800",
"html_url": "https://github.com/huggingface/datasets/pull/4800",
"diff_url": "https://github.com/huggingface/datasets/pull/4800.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4800.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4799 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4799/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4799/comments | https://api.github.com/repos/huggingface/datasets/issues/4799/events | https://github.com/huggingface/datasets/issues/4799 | 1,330,889,854 | I_kwDODunzps5PU8R- | 4,799 | video dataset loader/parser | {
"login": "nollied",
"id": 26421036,
"node_id": "MDQ6VXNlcjI2NDIxMDM2",
"avatar_url": "https://avatars.githubusercontent.com/u/26421036?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nollied",
"html_url": "https://github.com/nollied",
"followers_url": "https://api.github.com/users/nollied/followers",
"following_url": "https://api.github.com/users/nollied/following{/other_user}",
"gists_url": "https://api.github.com/users/nollied/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nollied/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nollied/subscriptions",
"organizations_url": "https://api.github.com/users/nollied/orgs",
"repos_url": "https://api.github.com/users/nollied/repos",
"events_url": "https://api.github.com/users/nollied/events{/privacy}",
"received_events_url": "https://api.github.com/users/nollied/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi! We've just started discussing the video support in `datasets` (decoding backends, video feature type, etc.), so I believe we should have something tangible by the end of this year.\r\n\r\nAlso, if you have additional video features in mind that you would like to see, feel free to let us know",
"Coool thanks @mariosasko "
] | 1,659,837,252,000 | 1,660,063,371,000 | 1,660,063,371,000 | CONTRIBUTOR | null | you know how you can [use `load_dataset` with any arbitrary csv file](https://huggingface.co/docs/datasets/loading#csv)? and you can also [use it to load a local image dataset](https://huggingface.co/docs/datasets/image_load#local-files)?
Could you please add functionality to load a video dataset? It would be really cool if I could point it to a bunch of video files and use PyTorch to start looping through batches of videos. Like if my batch size is 16, each sample in the batch is a frame from a video. I'm competing in the [MineRL challenge](https://www.aicrowd.com/challenges/neurips-2022-minerl-basalt-competition), and it would be awesome to use the HF ecosystem. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4799/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4799/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4798 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4798/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4798/comments | https://api.github.com/repos/huggingface/datasets/issues/4798/events | https://github.com/huggingface/datasets/pull/4798 | 1,330,699,942 | PR_kwDODunzps48wVEG | 4,798 | Shard generator | {
"login": "marianna13",
"id": 43296932,
"node_id": "MDQ6VXNlcjQzMjk2OTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/43296932?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marianna13",
"html_url": "https://github.com/marianna13",
"followers_url": "https://api.github.com/users/marianna13/followers",
"following_url": "https://api.github.com/users/marianna13/following{/other_user}",
"gists_url": "https://api.github.com/users/marianna13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marianna13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marianna13/subscriptions",
"organizations_url": "https://api.github.com/users/marianna13/orgs",
"repos_url": "https://api.github.com/users/marianna13/repos",
"events_url": "https://api.github.com/users/marianna13/events{/privacy}",
"received_events_url": "https://api.github.com/users/marianna13/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi, thanks!\r\n\r\n> I was using Hugging Face datasets to process some very large datasets and found that it would be quite handy to have a feature that will allow to \"split\" these large datasets into chunks with equal size\r\n\r\n`map`, the method we use for processing in `datasets`, already does that if `batched=True`. And you can control the batch size with `batch_size`.\r\n\r\n> Even better - be able to run through these chunks one by one in simple and convenient way\r\n\r\nIt's not hard to do this \"manually\" with the existing API:\r\n```python\r\nbatch_size = <BATCH_SIZE>\r\nfor i in range(len(dset) // batch_size)\r\n shard = dset[i * batch_size:(i+1) * batch_size] # a dict of lists\r\n shard = Dataset.from_dict(shard)\r\n```\r\n(should be of similar performance to your implementation)\r\n\r\nStill, I think an API like that could be useful if implemented efficiently (see [this](https://discuss.huggingface.co/t/why-is-it-so-slow-to-access-data-through-iteration-with-hugginface-dataset/20385) discussion to understand what's the issue with `select`/`__getitem__` on which your implementation relies on), which can be done with `pa.Table.to_reader` in PyArrow 8.0.0+, .\r\n\r\n@lhoestq @albertvillanova wdyt? We could use such API to efficiently iterate over the batches in `map` before processing them.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4798). All of your documentation changes will be reflected on that endpoint.",
"This is more efficient since it doesn't bring the data in memory:\r\n```python\r\nfor i in range(len(dset) // batch_size)\r\n start = i * batch_size\r\n end = min((i+1) * batch_size, len(dset))\r\n shard = dset.select(range(start, end))\r\n```\r\n\r\n@marianna13 can you give more details on when it would be handy to have this shard generator ?",
"> This is more efficient since it doesn't bring the data in memory:\r\n> \r\n> ```python\r\n> for i in range(len(dset) // batch_size)\r\n> start = i * batch_size\r\n> end = min((i+1) * batch_size, len(dset))\r\n> shard = dset.select(range(start, end))\r\n> ```\r\n> \r\n> @marianna13 can you give more details on when it would be handy to have this shard generator ?\r\n\r\nSure! I used such generator when I needed to process a very large dataset (>1TB) in parallel, I've found out empirically that it's much more efficient to do that by processing only one part of the dataset with the shard generator. I tried to use a map with batching but it causesd oom errors, I tried to use the normal shard and here's what I came up with. So I thought it might be helpful to someone else!",
"I see thanks ! `map` should work just fine even at this scale, feel free to open an issue if you'd like to discuss your OOM issue.\r\n\r\nRegarding `shard_generator`, since it is pretty straightforward to get shards I'm not sure we need that extra Dataset method"
] | 1,659,777,246,000 | 1,660,906,235,000 | null | NONE | null | Hi everyone! I was using Hugging Face datasets to process some very large datasets and found that it would be quite handy to have a feature that allows "splitting" these large datasets into chunks of equal size. Even better: being able to run through these chunks one by one in a simple and convenient way. So I decided to add a method called shard_generator() to the main Dataset class. It works similarly to the shard method, but it returns a generator of datasets of equal size (defined by the shard_size attribute).
Example:
```python
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> ds
Dataset({
features: ['text', 'label'],
num_rows: 1066
})
>>> next(ds.shard_generator(300))
Dataset({
features: ['text', 'label'],
num_rows: 300
})
```
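For reference, a minimal sketch of how such a generator can be built on top of the existing `select` API (an illustration of the idea, not necessarily the exact code of this PR):
```python
from datasets import Dataset

def shard_generator(dataset: Dataset, shard_size: int):
    """Yield contiguous shards of `shard_size` rows; the last shard may be smaller."""
    for start in range(0, len(dataset), shard_size):
        yield dataset.select(range(start, min(start + shard_size, len(dataset))))
```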
I hope it can be helpful to someone. Thanks! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4798/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4798/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4798",
"html_url": "https://github.com/huggingface/datasets/pull/4798",
"diff_url": "https://github.com/huggingface/datasets/pull/4798.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4798.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4797 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4797/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4797/comments | https://api.github.com/repos/huggingface/datasets/issues/4797/events | https://github.com/huggingface/datasets/pull/4797 | 1,330,000,998 | PR_kwDODunzps48uL-t | 4,797 | Torgo dataset creation | {
"login": "YingLi001",
"id": 75192317,
"node_id": "MDQ6VXNlcjc1MTkyMzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/75192317?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YingLi001",
"html_url": "https://github.com/YingLi001",
"followers_url": "https://api.github.com/users/YingLi001/followers",
"following_url": "https://api.github.com/users/YingLi001/following{/other_user}",
"gists_url": "https://api.github.com/users/YingLi001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YingLi001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YingLi001/subscriptions",
"organizations_url": "https://api.github.com/users/YingLi001/orgs",
"repos_url": "https://api.github.com/users/YingLi001/repos",
"events_url": "https://api.github.com/users/YingLi001/events{/privacy}",
"received_events_url": "https://api.github.com/users/YingLi001/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @YingLi001, thanks for your proposal to add this dataset.\r\n\r\nHowever, now we add datasets directly to the Hub (instead of our GitHub repository). You have the instructions in our docs: \r\n- [Create a dataset loading script](https://huggingface.co/docs/datasets/dataset_script)\r\n- [Create a dataset card](https://huggingface.co/docs/datasets/dataset_card)\r\n- [Share](https://huggingface.co/docs/datasets/share)\r\n\r\nFeel free to ask if you need any additional support/help."
] | 1,659,709,106,000 | 1,660,070,760,000 | 1,660,070,760,000 | NONE | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4797/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4797/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4797",
"html_url": "https://github.com/huggingface/datasets/pull/4797",
"diff_url": "https://github.com/huggingface/datasets/pull/4797.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4797.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4796 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4796/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4796/comments | https://api.github.com/repos/huggingface/datasets/issues/4796/events | https://github.com/huggingface/datasets/issues/4796 | 1,329,887,810 | I_kwDODunzps5PRHpC | 4,796 | ArrowInvalid: Could not convert <PIL.Image.Image image mode=RGB when adding image to Dataset | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@mariosasko I'm getting a similar issue when creating a Dataset from a Pandas dataframe, like so:\r\n\r\n```\r\nfrom datasets import Dataset, Features, Image, Value\r\nimport pandas as pd\r\nimport requests\r\nimport PIL\r\n\r\n# we need to define the features ourselves\r\nfeatures = Features({\r\n 'a': Value(dtype='int32'),\r\n 'b': Image(),\r\n})\r\n\r\nurl = \"http://images.cocodataset.org/val2017/000000039769.jpg\"\r\nimage = PIL.Image.open(requests.get(url, stream=True).raw)\r\n\r\ndf = pd.DataFrame({\"a\": [1, 2], \r\n \"b\": [image, image]})\r\n\r\ndataset = Dataset.from_pandas(df, features=features) \r\n```\r\nresults in \r\n\r\n```\r\nArrowInvalid: ('Could not convert <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x480 at 0x7F7991A15C10> with type JpegImageFile: did not recognize Python value type when inferring an Arrow data type', 'Conversion failed for column b with type object')\r\n```\r\n\r\nWill the PR linked above also fix that?",
"I would expect this to work, but it doesn't. Shouldn't be too hard to fix tho (in a subsequent PR).",
"Hi @mariosasko just wanted to check in if there is a PR to follow for this. I was looking to create a demo app using this. If it's not working I can just use byte encoded images in the dataset which are not displayed. ",
"Hi @darraghdog! No PR yet, but I plan to fix this before the next release."
] | 1,659,703,279,000 | 1,660,912,890,000 | null | CONTRIBUTOR | null | ## Describe the bug
When adding a Pillow image to an existing Dataset on the hub, `add_item` fails due to the Pillow image not being automatically converted into the Image feature.
## Steps to reproduce the bug
```python
from datasets import load_dataset
from PIL import Image
dataset = load_dataset("hf-internal-testing/example-documents")
# load any random Pillow image
image = Image.open("/content/cord_example.png").convert("RGB")
new_image = {'image': image}
dataset['test'] = dataset['test'].add_item(new_image)
```
## Expected results
The image should be automatically cast to the Image feature when using `add_item`. For now, this can be worked around by using `encode_example`:
```python
import datasets
feature = datasets.Image(decode=False)
new_image = {'image': feature.encode_example(image)}
dataset['test'] = dataset['test'].add_item(new_image)
```
## Actual results
```
ArrowInvalid: Could not convert <PIL.Image.Image image mode=RGB size=576x864 at 0x7F7CCC4589D0> with type Image: did not recognize Python value type when inferring an Arrow data type
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4796/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4796/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4795 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4795/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4795/comments | https://api.github.com/repos/huggingface/datasets/issues/4795/events | https://github.com/huggingface/datasets/issues/4795 | 1,329,525,732 | I_kwDODunzps5PPvPk | 4,795 | Missing MBPP splits | {
"login": "stadlerb",
"id": 2452384,
"node_id": "MDQ6VXNlcjI0NTIzODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2452384?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stadlerb",
"html_url": "https://github.com/stadlerb",
"followers_url": "https://api.github.com/users/stadlerb/followers",
"following_url": "https://api.github.com/users/stadlerb/following{/other_user}",
"gists_url": "https://api.github.com/users/stadlerb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stadlerb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stadlerb/subscriptions",
"organizations_url": "https://api.github.com/users/stadlerb/orgs",
"repos_url": "https://api.github.com/users/stadlerb/repos",
"events_url": "https://api.github.com/users/stadlerb/events{/privacy}",
"received_events_url": "https://api.github.com/users/stadlerb/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Thanks for reporting this as well, @stadlerb.\r\n\r\nI suggest waiting for the answer of the data owners... ",
"@albertvillanova The first author of the paper responded to the upstream issue:\r\n> Task IDs 11-510 are the 500 test problems. We use 90 problems (511-600) for validation and then remaining 374 for fine-tuning (601-974). The other problems can be used as desired, either for training or few-shot prompting (although this should be specified).",
"Thanks for the follow-up, @stadlerb.\r\n\r\nWould you be willing to open a Pull Request to address this issue? :wink: ",
"Opened a [PR](https://github.com/huggingface/datasets/pull/4943) to implement this--lmk if you have any feedback"
] | 1,659,682,261,000 | 1,663,072,044,000 | 1,663,072,044,000 | NONE | null | (@albertvillanova)
The [MBPP dataset on the Hub](https://huggingface.co/datasets/mbpp) has only a test split for both its "full" and its "sanitized" subset, while the [paper](https://arxiv.org/abs/2108.07732) states in subsection 2.1 regarding the full split:
> In the experiments described later in the paper, we hold out 10 problems for **few-shot prompting**, another 500 as our **test** dataset (which is used to evaluate both few-shot inference and fine-tuned models), 374 problems for **fine-tuning**, and the rest for **validation**.
If the dataset on the Hub should reproduce most closely what the original authors use, I guess this four-way split should be reflected.
The paper doesn't explicitly state the task_id ranges of the splits, but the [GitHub readme](https://github.com/google-research/google-research/tree/master/mbpp) referenced in the paper specifies exact task_id ranges, although it misstates the total number of samples:
> We specify a train and test split to use for evaluation. Specifically:
>
> * Task IDs 11-510 are used for evaluation.
> * Task IDs 1-10 and 511-1000 are used for training and/or prompting. We typically used 1-10 for few-shot prompting, although you can feel free to use any of the training examples.
I.e. the few-shot, train and validation splits are combined into one split, with a soft suggestion of using the first ten for few-shot prompting. It is not explicitly stated whether the 374 fine-tuning samples mentioned in the paper have task_id 511 to 784 or 601 to 974 or are randomly sampled from task_id 511 to 974.
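In the meantime, the quoted ranges can be applied manually (a sketch; it assumes the column is named `task_id` and only uses the coarse test/rest distinction from the readme):
```python
from datasets import load_dataset

mbpp = load_dataset("mbpp", "full", split="test")  # currently the only exposed split

few_shot = mbpp.filter(lambda ex: 1 <= ex["task_id"] <= 10)
test = mbpp.filter(lambda ex: 11 <= ex["task_id"] <= 510)
train_or_validation = mbpp.filter(lambda ex: ex["task_id"] >= 511)
```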
Regarding the "sanitized" split the paper states the following:
> For evaluations involving the edited dataset, we perform comparisons with 100 problems that appear in both the original and edited dataset, using the same held out 10 problems for few-shot prompting and 374 problems for fine-tuning.
The statement doesn't appear to be very precise, as among the 10 few-shot problems, those with task_id 1, 5 and 10 are not even part of the sanitized variant, and many from the task_id range from 511 to 974 are missing (e.g. task_id 511 to 553). I suppose the idea is that the task_id ranges for each split remain the same, even if some of the task_ids are not present. That would result in 7 few-shot, 257 test, 141 train and 22 validation examples in the sanitized split. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4795/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4795/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4792 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4792/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4792/comments | https://api.github.com/repos/huggingface/datasets/issues/4792/events | https://github.com/huggingface/datasets/issues/4792 | 1,328,593,929 | I_kwDODunzps5PMLwJ | 4,792 | Add DocVQA | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | null | [] | null | [
"Thanks for proposing, @NielsRogge.\r\n\r\nPlease, note this dataset requires registering in their website and their Terms and Conditions state we cannot distribute their URL:\r\n```\r\n1. You will NOT distribute the download URLs\r\n...\r\n```"
] | 1,659,618,446,000 | 1,659,936,680,000 | null | CONTRIBUTOR | null | ## Adding a Dataset
- **Name:** DocVQA
- **Description:** Document Visual Question Answering (DocVQA) seeks to inspire a “purpose-driven” point of view in Document Analysis and Recognition research, where the document content is extracted and used to respond to high-level tasks defined by the human consumers of this information.
- **Paper:** https://arxiv.org/abs/2007.00398
- **Data:** https://www.docvqa.org/datasets/docvqa
- **Motivation:** Models like LayoutLM and Donut in the Transformers library are fine-tuned on DocVQA. Would be very handy to directly load this dataset from the hub.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/main/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4792/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4792/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4791 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4791/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4791/comments | https://api.github.com/repos/huggingface/datasets/issues/4791/events | https://github.com/huggingface/datasets/issues/4791 | 1,328,571,064 | I_kwDODunzps5PMGK4 | 4,791 | Dataset Viewer issue for Team-PIXEL/rendered-wikipedia-english | {
"login": "xplip",
"id": 25847814,
"node_id": "MDQ6VXNlcjI1ODQ3ODE0",
"avatar_url": "https://avatars.githubusercontent.com/u/25847814?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xplip",
"html_url": "https://github.com/xplip",
"followers_url": "https://api.github.com/users/xplip/followers",
"following_url": "https://api.github.com/users/xplip/following{/other_user}",
"gists_url": "https://api.github.com/users/xplip/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xplip/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xplip/subscriptions",
"organizations_url": "https://api.github.com/users/xplip/orgs",
"repos_url": "https://api.github.com/users/xplip/repos",
"events_url": "https://api.github.com/users/xplip/events{/privacy}",
"received_events_url": "https://api.github.com/users/xplip/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting. It's a known issue that should be fixed soon. Meanwhile, I had to manually trigger the dataset viewer. It's OK now.\r\nNote that the extreme aspect ratio of the images generates another issue, that we're inspecting."
] | 1,659,617,356,000 | 1,659,620,596,000 | 1,659,620,596,000 | NONE | null | ### Link
https://huggingface.co/datasets/Team-PIXEL/rendered-wikipedia-english/viewer/rendered-wikipedia-en/train
### Description
The dataset can be loaded fine but the viewer shows this error:
```
Server Error
Status code: 400
Exception: Status400Error
Message: The dataset does not exist.
```
I'm guessing this is because I recently renamed the dataset. Based on related issues (e.g. https://github.com/huggingface/datasets/issues/4759), is there something server-side that needs to be refreshed?
### Owner
Yes | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4791/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4791/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4790 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4790/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4790/comments | https://api.github.com/repos/huggingface/datasets/issues/4790/events | https://github.com/huggingface/datasets/issues/4790 | 1,328,546,904 | I_kwDODunzps5PMARY | 4,790 | Issue with fine classes in trec dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,659,616,131,000 | 1,661,184,856,000 | 1,661,184,856,000 | MEMBER | null | ## Describe the bug
According to their paper, the TREC dataset contains 2 kinds of classes:
- 6 coarse classes: TREC-6
- 50 fine classes: TREC-50
However, our implementation only has 47 (instead of 50) fine classes. The reason for this is that we only considered the last segment of the label, which is repeated for several coarse classes:
- We have one `desc` fine label instead of 2:
- `DESC:desc`
- `HUM:desc`
- We have one `other` fine label instead of 3:
- `ENTY:other`
- `LOC:other`
- `NUM:other`
From their paper:
> We define a two-layered taxonomy, which represents a natural semantic classification for typical answers in the TREC task. The hierarchy contains 6 coarse classes and 50 fine classes,
> Each coarse class contains a non-overlapping set of fine classes.
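A minimal illustration of the collapse described above (a sketch, not the dataset script itself):
```python
labels = ["DESC:desc", "HUM:desc", "ENTY:other", "LOC:other", "NUM:other"]

# Keeping only the last segment merges distinct fine classes...
print({label.split(":")[-1] for label in labels})  # {'desc', 'other'} -> 2 classes
# ...whereas the full "COARSE:fine" pairs stay distinct
print(len(set(labels)))  # 5
```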
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4790/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4790/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4789 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4789/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4789/comments | https://api.github.com/repos/huggingface/datasets/issues/4789/events | https://github.com/huggingface/datasets/pull/4789 | 1,328,409,253 | PR_kwDODunzps48o3Kk | 4,789 | Update doc upload_dataset.mdx | {
"login": "mishig25",
"id": 11827707,
"node_id": "MDQ6VXNlcjExODI3NzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mishig25",
"html_url": "https://github.com/mishig25",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"repos_url": "https://api.github.com/users/mishig25/repos",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,659,608,640,000 | 1,662,741,430,000 | 1,662,741,298,000 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4789/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4789/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4789",
"html_url": "https://github.com/huggingface/datasets/pull/4789",
"diff_url": "https://github.com/huggingface/datasets/pull/4789.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4789.patch",
"merged_at": 1662741298000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4788 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4788/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4788/comments | https://api.github.com/repos/huggingface/datasets/issues/4788/events | https://github.com/huggingface/datasets/pull/4788 | 1,328,246,021 | PR_kwDODunzps48oUNx | 4,788 | Fix NonMatchingChecksumError in mbpp dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you for the quick response! Before noticing that you already had implemented the fix, I already had implemened my own version. I'd also suggest bumping the major version because the contents of the dataset changed, even if only slightly.\r\nI'll attach my version of the affected files: [mbpp-checksum-changes.zip](https://github.com/huggingface/datasets/files/9258161/mbpp-checksum-changes.zip).",
"Hi @stadlerb, thanks for your feedback.\r\n\r\nWe normally update the major version whenever there is a new dataset release, usually with a breaking change in schema. The patch version is updated whenever there is a small correction in the dataset that does not change its schema.\r\n\r\nAs a side note for future contributions, please note that this dataset is hosted in our library GitHub repository. Therefore, the PRs to GitHub-hosted datasets needs being done through GitHub.\r\n\r\nCurrently added datasets are hosted on the Hub and for them, PRs can be done through the Hub.",
"I just noticed another problem with the dataset: The [GitHub page](https://github.com/google-research/google-research/tree/master/mbpp) and the [paper](http://arxiv.org/abs/2108.07732) mention a train-test split, which is not reflected in the dataloader. I'll open a new issue regarding this later."
] | 1,659,601,060,000 | 1,659,634,440,000 | 1,659,633,661,000 | MEMBER | null | Fix issue reported on the Hub: https://huggingface.co/datasets/mbpp/discussions/1
Fix #4787. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4788/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4788/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4788",
"html_url": "https://github.com/huggingface/datasets/pull/4788",
"diff_url": "https://github.com/huggingface/datasets/pull/4788.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4788.patch",
"merged_at": 1659633661000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4787 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4787/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4787/comments | https://api.github.com/repos/huggingface/datasets/issues/4787/events | https://github.com/huggingface/datasets/issues/4787 | 1,328,243,911 | I_kwDODunzps5PK2TH | 4,787 | NonMatchingChecksumError in mbpp dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,659,600,951,000 | 1,659,633,661,000 | 1,659,633,661,000 | MEMBER | null | ## Describe the bug
As reported on the Hub in [Fix Checksum Mismatch](https://huggingface.co/datasets/mbpp/discussions/1), there is a `NonMatchingChecksumError` when loading the mbpp dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset

ds = load_dataset("mbpp", "full")
```
## Expected results
Loading of the dataset without any exception raised.
## Actual results
```
NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-1-a3fbdd3ed82e> in <module>
----> 1 ds = load_dataset("mbpp", "full")
.../huggingface/datasets/src/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1791
1792 # Download and prepare data
-> 1793 builder_instance.download_and_prepare(
1794 download_config=download_config,
1795 download_mode=download_mode,
.../huggingface/datasets/src/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
702 logger.warning("HF google storage unreachable. Downloading and preparing it from source")
703 if not downloaded_from_gcs:
--> 704 self._download_and_prepare(
705 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
706 )
.../huggingface/datasets/src/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos)
1225
1226 def _download_and_prepare(self, dl_manager, verify_infos):
-> 1227 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
1228
1229 def _get_examples_iterable_for_split(self, split_generator: SplitGenerator) -> ExamplesIterable:
.../huggingface/datasets/src/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
773 # Checksums verification
774 if verify_infos and dl_manager.record_checksums:
--> 775 verify_checksums(
776 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
777 )
.../huggingface/datasets/src/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
38 if len(bad_urls) > 0:
39 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls))
41 logger.info("All the checksums matched successfully" + for_verification_name)
42
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://raw.githubusercontent.com/google-research/google-research/master/mbpp/mbpp.jsonl']
```
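Editorial note (hedged sketch, not part of the original report): until the dataset script ships updated checksums, one possible interim workaround is to skip checksum verification and force a fresh download. The `ignore_verifications` and `download_mode` parameters are taken from the `load_dataset` signature shown in the traceback above.
```python
from datasets import load_dataset

# Workaround sketch, assuming datasets 2.x: skipping verification means the recorded
# checksum is simply not compared against the freshly downloaded file, so the
# NonMatchingChecksumError is not raised.
ds = load_dataset(
    "mbpp",
    "full",
    ignore_verifications=True,
    download_mode="force_redownload",
)
```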
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4787/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4787/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4786 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4786/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4786/comments | https://api.github.com/repos/huggingface/datasets/issues/4786/events | https://github.com/huggingface/datasets/issues/4786 | 1,327,340,828 | I_kwDODunzps5PHZ0c | 4,786 | .save_to_disk('path', fs=s3) TypeError | {
"login": "hongknop",
"id": 110547763,
"node_id": "U_kgDOBpbTMw",
"avatar_url": "https://avatars.githubusercontent.com/u/110547763?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hongknop",
"html_url": "https://github.com/hongknop",
"followers_url": "https://api.github.com/users/hongknop/followers",
"following_url": "https://api.github.com/users/hongknop/following{/other_user}",
"gists_url": "https://api.github.com/users/hongknop/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hongknop/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hongknop/subscriptions",
"organizations_url": "https://api.github.com/users/hongknop/orgs",
"repos_url": "https://api.github.com/users/hongknop/repos",
"events_url": "https://api.github.com/users/hongknop/events{/privacy}",
"received_events_url": "https://api.github.com/users/hongknop/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 1,659,538,169,000 | 1,659,540,180,000 | 1,659,540,180,000 | NONE | null | The following code:
```python
import datasets
train_dataset, test_dataset = datasets.load_dataset("imdb", split=["train", "test"])
s3 = datasets.filesystems.S3FileSystem(key=aws_access_key_id, secret=aws_secret_access_key)
train_dataset.save_to_disk("s3://datasets/", fs=s3)
```
produces the following traceback:
```shell
File "C:\Users\Hong Knop\AppData\Local\Programs\Python\Python310\lib\site-packages\botocore\auth.py", line 374, in scope
return '/'.join(scope)
```
I invoked `print(scope)` in `auth.py` (line 373) and found this:
```python
[('4VA08VLL3VTKQJKCAI8M',), '20220803', 'us-east-1', 's3', 'aws4_request']
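# Editorial note (assumption, not part of the original report): the first scope element is a
# tuple rather than a string, which is why `'/'.join(scope)` raises a TypeError. A trailing
# comma when assigning the credential (e.g. `aws_access_key_id = "4VA08VLL3VTKQJKCAI8M",`)
# would produce exactly this tuple; passing plain strings to S3FileSystem(key=..., secret=...)
# should avoid the error.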
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4786/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4786/timeline | null | completed | null | null | false |