url (string, len 61) | repository_url (string, 1 value) | labels_url (string, len 75) | comments_url (string, len 70) | events_url (string, len 68) | html_url (string, len 49-51) | id (int64, 1.05B-1.38B) | node_id (string, len 18-19) | number (int64, 3.26k-4.99k) | title (string, len 1-162) | user (dict) | labels (list) | state (string, 2 values) | locked (bool, 1 class) | assignee (dict) | assignees (list) | milestone (null) | comments (sequence) | created_at (int64, 1,637B-1,664B) | updated_at (int64, 1,637B-1,664B) | closed_at (int64, 1,637B-1,664B, nullable) | author_association (string, 3 values) | active_lock_reason (null) | body (string, len 2-36.2k, nullable) | reactions (dict) | timeline_url (string, len 70) | performed_via_github_app (null) | state_reason (string, 3 values) | draft (bool, 2 classes) | pull_request (dict) | is_pull_request (bool, 2 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/4172 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4172/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4172/comments | https://api.github.com/repos/huggingface/datasets/issues/4172/events | https://github.com/huggingface/datasets/pull/4172 | 1,204,433,160 | PR_kwDODunzps42O7LW | 4,172 | Update assin2 dataset_infos.json | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649,937,186,000 | 1,650,034,062,000 | 1,650,033,682,000 | MEMBER | null | Following comments in https://github.com/huggingface/datasets/issues/4003 we found that it was outdated and causing an error when loading the dataset | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4172/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4172/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4172",
"html_url": "https://github.com/huggingface/datasets/pull/4172",
"diff_url": "https://github.com/huggingface/datasets/pull/4172.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4172.patch",
"merged_at": 1650033682000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4170 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4170/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4170/comments | https://api.github.com/repos/huggingface/datasets/issues/4170/events | https://github.com/huggingface/datasets/pull/4170 | 1,204,413,620 | PR_kwDODunzps42O2-L | 4,170 | to_tf_dataset rewrite | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"[Magic is now banned](https://www.youtube.com/watch?v=WIn58XoY728#t=36s) by decree of @sgugger. This is honestly much cleaner, and the functionality will make much more sense in `transformers` anyway!",
"@gante I renamed the default collator to `minimal_tf_collate_fn`!",
"@lhoestq @sgugger @gante \r\n\r\nI think this should now be ready, it looks good in testing! I'll try a few more notebooks today and tomorrow to be sure before I merge. Key changes are:\r\n\r\n- No column autodetection magic (will make a separate PR to add this as a `transformers` function)\r\n- Drops non-numerical features automatically (this is more of a 'DataLoader' method, we'll have a separate method to expose 'raw' datasets to `tf.data`)\r\n- Better autodetection of numerical features.\r\n- Shouldn't randomly crash mid-function :skull: \r\n\r\nWe definitely have some questions still to resolve about how to handle making a 'DataLoader' dataset versus a 'raw' dataset - see [the Notion doc](https://www.notion.so/huggingface2/Splitting-to_tf_dataset-c2e0773c4bec484384064b30ed634383) if you're interested. Still, since this PR is just fixes/improvements to an existing method which never supported non-numerical features anyway, we can merge it before we've resolved those issues, and then think about how to name and split things afterwards.",
"P.S. I'll take out the region comments at the end before I merge, I promise! They're just helpful while I'm editing it",
"+1 for the tests\r\n\r\n> Drops non-numerical features automatically\r\n\r\nCan you give more details on how this work and the rationale as well ? This is not explained in the docs\r\n\r\nAlso why are you adding `error_on_missing` and `auto_fix_label_names ` ? The rationale is not clear to me. In particular I think it is sensible enough to expect users to not ask columns that don't exist, and to rename a label column when required.",
"@lhoestq I rewrote those parts - they were causing some other issues too! `error_on_missing` and `auto_fix_label_names` have been removed. The new logic is to simply drop (before batch collation) all columns the user doesn't ask for, but not to raise errors if the user asked for columns not in the dataset, as they may be added by the collator. Hopefully this cleans it up and matches the documentation better!",
"@lhoestq New tests are now in!",
"Seeing some other random tests failing that don't look to be associated with this PR.",
"@lhoestq I can't figure out these test failures! They don't seem related to this PR at all, but I rebased to the latest version and they keep happening, even though they're not visible on master.",
"Thanks for the ping, will take a look tomorrow :)\r\n\r\nMaybe the rebase didn't go well for the code recently merged about label alignment from https://github.com/huggingface/datasets/pull/4277 ?",
"It's very strange! The rebase looks fine to me. I might try to move my changes to a new branch from `master` and see if I can figure out which change causes this problem to appear.",
"@lhoestq Got it! It was caused by a name collision - I was importing `typing.Sequence`, but the code also needed `features.Sequence`. The tests from that PR were expecting the latter but got the former, and then crashed.",
"@lhoestq Thanks! Also, when you're ready, don't merge it immediately! I'd like to do a quick round of manual testing with the very final build once you're happy to make sure it still works in our notebooks and examples.",
"@lhoestq Tests look good to me, merging now!"
] | 1,649,935,858,000 | 1,654,525,872,000 | 1,654,525,329,000 | MEMBER | null | This PR rewrites almost all of `to_tf_dataset()`, which makes it kind of hard to list all the changes, but the most critical ones are:
- Much better stability and no more dropping unexpected column names (Sorry @NielsRogge)
- Doesn't clobber custom transforms on the data (Sorry @NielsRogge again)
- Much better handling of the situation when the `collate_fn` adds columns that aren't in the dataset.
- Better inference of shapes and data types
- Lots of hacky special-casing code removed
- Can return string columns (as `tf.String`)
- Most arguments have default values, calling the method should be much simpler
- ~~Can accept a `model` argument and only return columns that are valid inputs to that model~~
- Drops the `dummy_labels` argument - this was a workaround for Keras issues that have been resolved by changes in `transformers`. Also remove it from tests and the Overview notebook.
I still have a couple of TODOs remaining and some testing to do, so don't merge yet, but it should be mostly ready for review at this point! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4170/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4170/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4170",
"html_url": "https://github.com/huggingface/datasets/pull/4170",
"diff_url": "https://github.com/huggingface/datasets/pull/4170.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4170.patch",
"merged_at": 1654525329000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4169 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4169/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4169/comments | https://api.github.com/repos/huggingface/datasets/issues/4169/events | https://github.com/huggingface/datasets/issues/4169 | 1,203,995,869 | I_kwDODunzps5Hw4Td | 4,169 | Timit_asr dataset cannot be previewed recently | {
"login": "YingLi001",
"id": 75192317,
"node_id": "MDQ6VXNlcjc1MTkyMzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/75192317?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YingLi001",
"html_url": "https://github.com/YingLi001",
"followers_url": "https://api.github.com/users/YingLi001/followers",
"following_url": "https://api.github.com/users/YingLi001/following{/other_user}",
"gists_url": "https://api.github.com/users/YingLi001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YingLi001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YingLi001/subscriptions",
"organizations_url": "https://api.github.com/users/YingLi001/orgs",
"repos_url": "https://api.github.com/users/YingLi001/repos",
"events_url": "https://api.github.com/users/YingLi001/events{/privacy}",
"received_events_url": "https://api.github.com/users/YingLi001/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for reporting. The bug has already been detected, and we hope to fix it soon.",
"TIMIT is now a dataset that requires manual download, see #4145 \r\n\r\nTherefore it might take a bit more time to fix it",
"> TIMIT is now a dataset that requires manual download, see #4145\r\n> \r\n> Therefore it might take a bit more time to fix it\r\n\r\nThank you for your quickly response. Exactly, I also found the manual download issue in the morning. But when I used *list_datasets()* to check the available datasets, *'timit_asr'* is still in the list. So I am a little bit confused. If *'timit_asr'* need to be manually downloaded, does that mean we can **not** automatically download it **any more** in the future?",
"Yes exactly. If you try to load the dataset it will ask you to download it manually first, and to pass the downloaded and extracted data like `load_dataset(\"timir_asr\", data_dir=\"path/to/extracted/data\")`\r\n\r\nThe URL we were using was coming from a host that doesn't have the permission to redistribute the data, and the dataset owners (LDC) notified us about it."
] | 1,649,906,911,000 | 1,651,853,211,000 | 1,651,853,211,000 | NONE | null | ## Dataset viewer issue for '*timit_asr*'
**Link:** *https://huggingface.co/datasets/timit_asr*
Issue: The timit-asr dataset cannot be previewed recently.
Am I the one who added this dataset ? Yes-No
No | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4169/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4169/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4168 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4168/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4168/comments | https://api.github.com/repos/huggingface/datasets/issues/4168/events | https://github.com/huggingface/datasets/pull/4168 | 1,203,867,540 | PR_kwDODunzps42NL6F | 4,168 | Add code examples to API docs | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> Do you think it is clearer to make every code example fully reproducible so when users copy the code they can actually run it and get an output? This seems quite repetitive - maybe even unnecessary - but it is definitely clearer.\r\n\r\nI think it's ok to be repetitive to get more clarity. Many users come from `transformers` and may have little experience with some processing methods (especially torch users).\r\n\r\n> Should we showcase a function with more than one parameter to highlight different use-cases (it's pretty basic right now, but I'd be happy to add more)?\r\n\r\nMaybe let's do it case by case, depending on whether there are parameters that are likely to be used often ?\r\n\r\n> For the class_encode_column function, let me know if there is a simpler dataset with fewer columns (currently using winograd_wsc) so it is easier for users to see what changed.\r\n\r\nYou can try with `boolq`, it has a boolean column that can be converted to labels\r\n\r\n> Where possible, I try to show the input before and the output after using a function like flatten for example. Do you think this is too much and just showing the usage (ie, >>> ds.flatten()) will be sufficient?\r\n\r\nNo I don't think it's too much, it's nice this way thanks :)",
"Updated each code example so they are fully reproducible (where applicable)! The next step will be to identify some functions where we can show off some parameters that are useful or commonly used. Some useful parameters can be:\r\n\r\n- use `map(batched=True)` to process batches of examples.\r\n- set a seed in `shuffle`.\r\n- set `shuffle` and `seed` in `train_test_split`.\r\n\r\nLet me know if you think of anything else related to the functions in `arrow_dataset.py`!",
"Cool thanks ! I think you can also do `num_proc` for `map`"
] | 1,649,891,018,000 | 1,651,085,617,000 | 1,651,085,314,000 | MEMBER | null | This PR adds code examples for functions related to the base Datasets class to highlight usage. Most of the examples use the `rotten_tomatoes` dataset since it is nice and small. Several things I would appreciate feedback on:
- Do you think it is clearer to make every code example fully reproducible so when users copy the code they can actually run it and get an output? This seems quite repetitive - maybe even unnecessary - but it is definitely clearer. Personally, I think we might be able to get away with not including this since users probably want to try the function on their own dataset. For example:
```py
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> code example goes here
```
- Should we showcase a function with more than one parameter to highlight different use-cases (it's pretty basic right now, but I'd be happy to add more)?
- For the `class_encode_column` function, let me know if there is a simpler dataset with fewer columns (currently using `winograd_wsc`) so it is easier for users to see what changed.
- Where possible, I try to show the input before and the output after using a function like `flatten` for example. Do you think this is too much and just showing the usage (ie, `>>> ds.flatten()`) will be sufficient?
Thanks :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4168/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4168/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4168",
"html_url": "https://github.com/huggingface/datasets/pull/4168",
"diff_url": "https://github.com/huggingface/datasets/pull/4168.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4168.patch",
"merged_at": 1651085314000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4167 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4167/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4167/comments | https://api.github.com/repos/huggingface/datasets/issues/4167/events | https://github.com/huggingface/datasets/pull/4167 | 1,203,761,614 | PR_kwDODunzps42M1O5 | 4,167 | Avoid rate limit in update hub repositories | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I also set GIT_LFS_SKIP_SMUDGE=1 to speed up git clones",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649,881,937,000 | 1,649,883,401,000 | 1,649,883,032,000 | MEMBER | null | use http.extraHeader to avoid rate limit | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4167/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4167/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4167",
"html_url": "https://github.com/huggingface/datasets/pull/4167",
"diff_url": "https://github.com/huggingface/datasets/pull/4167.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4167.patch",
"merged_at": 1649883032000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4166 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4166/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4166/comments | https://api.github.com/repos/huggingface/datasets/issues/4166/events | https://github.com/huggingface/datasets/pull/4166 | 1,203,758,004 | PR_kwDODunzps42M0dS | 4,166 | Fix exact match | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649,881,686,000 | 1,651,580,611,000 | 1,651,580,187,000 | CONTRIBUTOR | null | Clarify docs and add clarifying example to the exact_match metric | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4166/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4166/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4166",
"html_url": "https://github.com/huggingface/datasets/pull/4166",
"diff_url": "https://github.com/huggingface/datasets/pull/4166.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4166.patch",
"merged_at": 1651580187000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4165 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4165/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4165/comments | https://api.github.com/repos/huggingface/datasets/issues/4165/events | https://github.com/huggingface/datasets/pull/4165 | 1,203,730,187 | PR_kwDODunzps42MubF | 4,165 | Fix google bleu typos, examples | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649,879,994,000 | 1,651,580,632,000 | 1,651,580,204,000 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4165/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4165/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4165",
"html_url": "https://github.com/huggingface/datasets/pull/4165",
"diff_url": "https://github.com/huggingface/datasets/pull/4165.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4165.patch",
"merged_at": 1651580204000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4164 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4164/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4164/comments | https://api.github.com/repos/huggingface/datasets/issues/4164/events | https://github.com/huggingface/datasets/pull/4164 | 1,203,661,346 | PR_kwDODunzps42MfxX | 4,164 | Fix duplicate key in multi_news | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649,875,704,000 | 1,649,883,856,000 | 1,649,883,482,000 | MEMBER | null | To merge after this job succeeded: https://github.com/huggingface/datasets/runs/6012207928 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4164/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4164/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4164",
"html_url": "https://github.com/huggingface/datasets/pull/4164",
"diff_url": "https://github.com/huggingface/datasets/pull/4164.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4164.patch",
"merged_at": 1649883482000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4163 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4163/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4163/comments | https://api.github.com/repos/huggingface/datasets/issues/4163/events | https://github.com/huggingface/datasets/issues/4163 | 1,203,539,268 | I_kwDODunzps5HvI1E | 4,163 | Optional Content Warning for Datasets | {
"login": "TristanThrush",
"id": 20826878,
"node_id": "MDQ6VXNlcjIwODI2ODc4",
"avatar_url": "https://avatars.githubusercontent.com/u/20826878?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TristanThrush",
"html_url": "https://github.com/TristanThrush",
"followers_url": "https://api.github.com/users/TristanThrush/followers",
"following_url": "https://api.github.com/users/TristanThrush/following{/other_user}",
"gists_url": "https://api.github.com/users/TristanThrush/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TristanThrush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TristanThrush/subscriptions",
"organizations_url": "https://api.github.com/users/TristanThrush/orgs",
"repos_url": "https://api.github.com/users/TristanThrush/repos",
"events_url": "https://api.github.com/users/TristanThrush/events{/privacy}",
"received_events_url": "https://api.github.com/users/TristanThrush/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi! You can use the `extra_gated_prompt` YAML field in a dataset card for displaying custom messages/warnings that the user must accept before gaining access to the actual dataset. This option also keeps the viewer hidden until the user agrees to terms. ",
"Hi @mariosasko, thanks for explaining how to add this feature. \r\n\r\nIf the current dataset yaml is:\r\n```\r\n---\r\nannotations_creators:\r\n- expert\r\nlanguage_creators:\r\n- expert-generated\r\nlanguages:\r\n- en\r\nlicense:\r\n- cc-by-4.0\r\nmultilinguality:\r\n- monolingual\r\npretty_name: HatemojiBuild\r\nsize_categories:\r\n- 1K<n<10K\r\nsource_datasets:\r\n- original\r\ntask_categories:\r\n- text-classification\r\ntask_ids:\r\n- hate-speech-detection\r\n---\r\n```\r\n\r\nCan you provide a minimal working example of how to added the gated prompt?\r\n\r\nThanks!",
"```\r\n---\r\nannotations_creators:\r\n- expert\r\nlanguage_creators:\r\n- expert-generated\r\nlanguages:\r\n- en\r\nlicense:\r\n- cc-by-4.0\r\nmultilinguality:\r\n- monolingual\r\npretty_name: HatemojiBuild\r\nsize_categories:\r\n- 1K<n<10K\r\nsource_datasets:\r\n- original\r\ntask_categories:\r\n- text-classification\r\ntask_ids:\r\n- hate-speech-detection\r\nextra_gated_prompt: \"This repository contains harmful content.\"\r\n---\r\n```\r\n\\+ enable `User Access requests` under the Settings pane.\r\n\r\nThere's a brief guide here https://discuss.huggingface.co/t/how-to-customize-the-user-access-requests-message/13953 , and you can see the field in action here, https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0/blob/main/README.md (you need to agree the terms in the Dataset Card pane to be able to access the files pane, so this comes up 403 at first).\r\n\r\nAnd a working example here! https://huggingface.co/datasets/DDSC/dkhate :) Great to be able to mitigate harms in text.",
"-- is there a way to gate content anonymously, i.e. without registering which users access it?",
"+1 to @leondz's question. One scenario is if you don't want the dataset to be indexed by search engines or viewed in browser b/c of upstream conditions on data, but don't want to collect emails. Some ability to turn off the dataset viewer or add a gating mechanism without emails would be fantastic."
] | 1,649,867,881,000 | 1,654,807,142,000 | null | MEMBER | null | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
We now have hate speech datasets on the hub, like this one: https://huggingface.co/datasets/HannahRoseKirk/HatemojiBuild
I'm wondering if there is an option to select a content warning message that appears before the dataset preview? Otherwise, people immediately see hate speech when clicking on this dataset.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
Implementation of a content warning message that separates users from the dataset preview until they click out of the warning.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
Possibly just a way to remove the dataset preview completely? I think I like the content warning option better, though.
**Additional context**
Add any other context about the feature request here.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4163/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4163/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4162 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4162/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4162/comments | https://api.github.com/repos/huggingface/datasets/issues/4162/events | https://github.com/huggingface/datasets/pull/4162 | 1,203,421,909 | PR_kwDODunzps42LtGO | 4,162 | Add Conceptual 12M | {
"login": "thomasw21",
"id": 24695242,
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomasw21",
"html_url": "https://github.com/thomasw21",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Looks like your dummy_data.zip file is not in the right location ;)\r\ndatasets/datasets/conceptual_12m/dummy/default/0.0.0/dummy_data.zip\r\n->\r\ndatasets/conceptual_12m/dummy/default/0.0.0/dummy_data.zip"
] | 1,649,861,843,000 | 1,650,010,381,000 | 1,650,009,985,000 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4162/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4162/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4162",
"html_url": "https://github.com/huggingface/datasets/pull/4162",
"diff_url": "https://github.com/huggingface/datasets/pull/4162.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4162.patch",
"merged_at": 1650009985000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4161 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4161/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4161/comments | https://api.github.com/repos/huggingface/datasets/issues/4161/events | https://github.com/huggingface/datasets/pull/4161 | 1,203,230,485 | PR_kwDODunzps42LEhi | 4,161 | Add Visual Genome | {
"login": "thomasw21",
"id": 24695242,
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomasw21",
"html_url": "https://github.com/thomasw21",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hum there seems to be some issues with tasks in test:\r\n - some tasks don't fit anything in `tasks.json`. Do I remove them in `task_categories`?\r\n - some tasks should exist, typically `visual-question-answering` (https://github.com/huggingface/datasets/blame/9f2ff14673cac1f1ad56d80221a793f5938b68c7/src/datasets/utils/resources/tasks.json#L195) yet the exception is failing on me. I'm guessing it's because my `master` is not up-to-date. However this means that the testing only tests my branch instead of the one merged with master?\r\n \r\n cc @mariosasko @lhoestq ",
"> some tasks don't fit anything in tasks.json. Do I remove them in task_categories?\r\n\r\nYou can keep them, but add `other-` as a prefix to those tasks to make the CI ignore it\r\n\r\n> some tasks should exist, typically visual-question-answering (https://github.com/huggingface/datasets/blame/9f2ff14673cac1f1ad56d80221a793f5938b68c7/src/datasets/utils/resources/tasks.json#L195) yet the exception is failing on me. I'm guessing it's because my master is not up-to-date. However this means that the testing only tests my branch instead of the one merged with master?\r\n\r\nFeel free to merge upstream/master into your branch ;)\r\n\r\nEDIT: actually I just noticed you've already done this, thanks !",
"After offline discussions: will keep that image essentially it's necessary as I have a mapping that creates a mapping between url and local path (images are downloaded via a zip file) and dummy data needs to store that dummy image. The issue is when I read an annotation, I get a url, compute the local path, and basically I assume the local path exists since I've extracted all the images ... This isn't true if dummy data doesn't have all the images, so instead I've added a script that \"fixes\" the dummy data after using the CLI, it essentially adds the dummy image in the zip corresponding to the url."
] | 1,649,852,724,000 | 1,650,555,769,000 | 1,650,546,532,000 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4161/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4161/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4161",
"html_url": "https://github.com/huggingface/datasets/pull/4161",
"diff_url": "https://github.com/huggingface/datasets/pull/4161.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4161.patch",
"merged_at": 1650546532000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4160 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4160/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4160/comments | https://api.github.com/repos/huggingface/datasets/issues/4160/events | https://github.com/huggingface/datasets/issues/4160 | 1,202,845,874 | I_kwDODunzps5Hsfiy | 4,160 | RGBA images not showing | {
"login": "cceyda",
"id": 15624271,
"node_id": "MDQ6VXNlcjE1NjI0Mjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/15624271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cceyda",
"html_url": "https://github.com/cceyda",
"followers_url": "https://api.github.com/users/cceyda/followers",
"following_url": "https://api.github.com/users/cceyda/following{/other_user}",
"gists_url": "https://api.github.com/users/cceyda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cceyda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cceyda/subscriptions",
"organizations_url": "https://api.github.com/users/cceyda/orgs",
"repos_url": "https://api.github.com/users/cceyda/repos",
"events_url": "https://api.github.com/users/cceyda/events{/privacy}",
"received_events_url": "https://api.github.com/users/cceyda/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
},
{
"id": 4030246674,
"node_id": "LA_kwDODunzps7wOK8S",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer-rgba-images",
"name": "dataset-viewer-rgba-images",
"color": "6C5FC0",
"default": false,
"description": ""
}
] | closed | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting. It's a known issue, and we hope to fix it soon.",
"Fixed, thanks!"
] | 1,649,833,163,000 | 1,655,829,791,000 | 1,655,829,791,000 | CONTRIBUTOR | null | ## Dataset viewer issue for ceyda/smithsonian_butterflies_transparent
[**Link:** *link to the dataset viewer page*](https://huggingface.co/datasets/ceyda/smithsonian_butterflies_transparent)
![image](https://user-images.githubusercontent.com/15624271/163117683-e91edb28-41bf-43d9-b371-5c62e14f40c9.png)
Am I the one who added this dataset ? Yes
👉 More of a general issue of 'RGBA' png images not being supported
(the dataset itself is just for the huggan sprint and not that important, consider it just an example) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4160/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4160/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4159 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4159/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4159/comments | https://api.github.com/repos/huggingface/datasets/issues/4159/events | https://github.com/huggingface/datasets/pull/4159 | 1,202,522,153 | PR_kwDODunzps42Izmd | 4,159 | Add `TruthfulQA` dataset | {
"login": "jon-tow",
"id": 41410219,
"node_id": "MDQ6VXNlcjQxNDEwMjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/41410219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jon-tow",
"html_url": "https://github.com/jon-tow",
"followers_url": "https://api.github.com/users/jon-tow/followers",
"following_url": "https://api.github.com/users/jon-tow/following{/other_user}",
"gists_url": "https://api.github.com/users/jon-tow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jon-tow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jon-tow/subscriptions",
"organizations_url": "https://api.github.com/users/jon-tow/orgs",
"repos_url": "https://api.github.com/users/jon-tow/repos",
"events_url": "https://api.github.com/users/jon-tow/events{/privacy}",
"received_events_url": "https://api.github.com/users/jon-tow/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Bump. (I'm not sure which reviewer to `@` but, previously, @lhoestq has been very helpful 🤗 )"
] | 1,649,805,544,000 | 1,654,703,493,000 | 1,654,699,414,000 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4159/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4159/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4159",
"html_url": "https://github.com/huggingface/datasets/pull/4159",
"diff_url": "https://github.com/huggingface/datasets/pull/4159.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4159.patch",
"merged_at": 1654699414000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4158 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4158/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4158/comments | https://api.github.com/repos/huggingface/datasets/issues/4158/events | https://github.com/huggingface/datasets/pull/4158 | 1,202,376,843 | PR_kwDODunzps42ITg3 | 4,158 | Add AUC ROC Metric | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649,796,808,000 | 1,651,002,110,000 | 1,651,001,722,000 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4158/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4158/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4158",
"html_url": "https://github.com/huggingface/datasets/pull/4158",
"diff_url": "https://github.com/huggingface/datasets/pull/4158.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4158.patch",
"merged_at": 1651001722000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4157 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4157/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4157/comments | https://api.github.com/repos/huggingface/datasets/issues/4157/events | https://github.com/huggingface/datasets/pull/4157 | 1,202,239,622 | PR_kwDODunzps42H2Wf | 4,157 | Fix formatting in BLEU metric card | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649,788,191,000 | 1,649,860,225,000 | 1,649,859,394,000 | CONTRIBUTOR | null | Fix #4148 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4157/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4157/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4157",
"html_url": "https://github.com/huggingface/datasets/pull/4157",
"diff_url": "https://github.com/huggingface/datasets/pull/4157.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4157.patch",
"merged_at": 1649859394000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4156 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4156/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4156/comments | https://api.github.com/repos/huggingface/datasets/issues/4156/events | https://github.com/huggingface/datasets/pull/4156 | 1,202,220,531 | PR_kwDODunzps42HySw | 4,156 | Adding STSb-TR dataset | {
"login": "figenfikri",
"id": 12762065,
"node_id": "MDQ6VXNlcjEyNzYyMDY1",
"avatar_url": "https://avatars.githubusercontent.com/u/12762065?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/figenfikri",
"html_url": "https://github.com/figenfikri",
"followers_url": "https://api.github.com/users/figenfikri/followers",
"following_url": "https://api.github.com/users/figenfikri/following{/other_user}",
"gists_url": "https://api.github.com/users/figenfikri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/figenfikri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/figenfikri/subscriptions",
"organizations_url": "https://api.github.com/users/figenfikri/orgs",
"repos_url": "https://api.github.com/users/figenfikri/repos",
"events_url": "https://api.github.com/users/figenfikri/events{/privacy}",
"received_events_url": "https://api.github.com/users/figenfikri/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,649,787,005,000 | 1,657,120,792,000 | null | NONE | null | Semantic Textual Similarity benchmark Turkish (STSb-TR) dataset introduced in our paper [Semantic Similarity Based Evaluation for Abstractive News Summarization](https://aclanthology.org/2021.gem-1.3.pdf) added. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4156/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4156/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4156",
"html_url": "https://github.com/huggingface/datasets/pull/4156",
"diff_url": "https://github.com/huggingface/datasets/pull/4156.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4156.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4155 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4155/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4155/comments | https://api.github.com/repos/huggingface/datasets/issues/4155/events | https://github.com/huggingface/datasets/pull/4155 | 1,202,183,608 | PR_kwDODunzps42Hqam | 4,155 | Make HANS dataset streamable | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649,784,853,000 | 1,649,851,426,000 | 1,649,851,055,000 | CONTRIBUTOR | null | Fix #4133 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4155/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4155/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4155",
"html_url": "https://github.com/huggingface/datasets/pull/4155",
"diff_url": "https://github.com/huggingface/datasets/pull/4155.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4155.patch",
"merged_at": 1649851054000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4154 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4154/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4154/comments | https://api.github.com/repos/huggingface/datasets/issues/4154/events | https://github.com/huggingface/datasets/pull/4154 | 1,202,145,721 | PR_kwDODunzps42Hh14 | 4,154 | Generate tasks.json taxonomy from `huggingface_hub` | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Ok recomputed the json file, this should be ready to review now! @lhoestq ",
"Note: the generated JSON from `hf/hub-docs` can be found in the output of a GitHub Action run on that repo, for instance in https://github.com/huggingface/hub-docs/runs/6006686983?check_suite_focus=true\r\n\r\n(click on \"Run export-tasks script\")",
"Should we not add the tasks with hideInDatasets?",
"yes, probably true – i'll change that in a PR in `hub-docs`",
"Yes that's good :) feel free to merge",
"thanks to the both of you!"
] | 1,649,783,566,000 | 1,649,932,352,000 | 1,649,931,973,000 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4154/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4154/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4154",
"html_url": "https://github.com/huggingface/datasets/pull/4154",
"diff_url": "https://github.com/huggingface/datasets/pull/4154.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4154.patch",
"merged_at": 1649931973000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4153 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4153/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4153/comments | https://api.github.com/repos/huggingface/datasets/issues/4153/events | https://github.com/huggingface/datasets/pull/4153 | 1,202,040,506 | PR_kwDODunzps42HLA8 | 4,153 | Adding Text-based NP Enrichment (TNE) dataset | {
"login": "yanaiela",
"id": 8031035,
"node_id": "MDQ6VXNlcjgwMzEwMzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8031035?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yanaiela",
"html_url": "https://github.com/yanaiela",
"followers_url": "https://api.github.com/users/yanaiela/followers",
"following_url": "https://api.github.com/users/yanaiela/following{/other_user}",
"gists_url": "https://api.github.com/users/yanaiela/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yanaiela/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yanaiela/subscriptions",
"organizations_url": "https://api.github.com/users/yanaiela/orgs",
"repos_url": "https://api.github.com/users/yanaiela/repos",
"events_url": "https://api.github.com/users/yanaiela/events{/privacy}",
"received_events_url": "https://api.github.com/users/yanaiela/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hey @lhoestq, can you please have a look? 🙏",
"Great, thanks again @lhoestq! I think we're good to go now",
"Done"
] | 1,649,778,423,000 | 1,651,586,748,000 | 1,651,586,748,000 | CONTRIBUTOR | null | Added the [TNE](https://github.com/yanaiela/TNE) dataset to the library | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4153/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4153/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4153",
"html_url": "https://github.com/huggingface/datasets/pull/4153",
"diff_url": "https://github.com/huggingface/datasets/pull/4153.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4153.patch",
"merged_at": 1651586748000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4152 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4152/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4152/comments | https://api.github.com/repos/huggingface/datasets/issues/4152/events | https://github.com/huggingface/datasets/issues/4152 | 1,202,034,115 | I_kwDODunzps5HpZXD | 4,152 | ArrayND error in pyarrow 5 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Where do we bump the required pyarrow version? Any inputs on how I fix this issue? ",
"We need to bump it in `setup.py` as well as update some CI job to use pyarrow 6 instead of 5 in `.circleci/config.yaml` and `.github/workflows/benchmarks.yaml`"
] | 1,649,778,100,000 | 1,651,656,586,000 | 1,651,656,586,000 | MEMBER | null | As found in https://github.com/huggingface/datasets/pull/3903, the ArrayND features fail on pyarrow 5:
```python
import pyarrow as pa
from datasets import Array2D
from datasets.table import cast_array_to_feature
arr = pa.array([[[0]]])
feature_type = Array2D(shape=(1, 1), dtype="int64")
cast_array_to_feature(arr, feature_type)
```
raises
```python
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-8-04610f9fa78c> in <module>
----> 1 cast_array_to_feature(pa.array([[[0]]]), Array2D(shape=(1, 1), dtype="int32"))
~/Desktop/hf/datasets/src/datasets/table.py in wrapper(array, *args, **kwargs)
1672 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
1673 else:
-> 1674 return func(array, *args, **kwargs)
1675
1676 return wrapper
~/Desktop/hf/datasets/src/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str)
1806 return array_cast(array, get_nested_type(feature), allow_number_to_str=allow_number_to_str)
1807 elif not isinstance(feature, (Sequence, dict, list, tuple)):
-> 1808 return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
1809 raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
1810
~/Desktop/hf/datasets/src/datasets/table.py in wrapper(array, *args, **kwargs)
1672 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
1673 else:
-> 1674 return func(array, *args, **kwargs)
1675
1676 return wrapper
~/Desktop/hf/datasets/src/datasets/table.py in array_cast(array, pa_type, allow_number_to_str)
1705 array = array.storage
1706 if isinstance(pa_type, pa.ExtensionType):
-> 1707 return pa_type.wrap_array(array)
1708 elif pa.types.is_struct(array.type):
1709 if pa.types.is_struct(pa_type) and (
AttributeError: 'Array2DExtensionType' object has no attribute 'wrap_array'
```
The thing is that `cast_array_to_feature` is called when writing an Arrow file, so creating an Arrow dataset using any ArrayND type currently fails.
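If we keep supporting pyarrow 5, one option is a small compatibility shim around the extension type. This is only a sketch, not the library's actual fix: the helper name is made up, and it assumes `pa.ExtensionArray.from_storage` (available well before pyarrow 6) accepts the extension type plus its storage array.
```python
import pyarrow as pa

def wrap_array_compat(pa_type: pa.ExtensionType, storage: pa.Array) -> pa.Array:
    # pyarrow >= 6 exposes ExtensionType.wrap_array; on pyarrow 5 fall back to
    # ExtensionArray.from_storage, which builds the extension array from its storage.
    if hasattr(pa_type, "wrap_array"):
        return pa_type.wrap_array(storage)
    return pa.ExtensionArray.from_storage(pa_type, storage)
```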
`wrap_array` has been added in pyarrow 6, so we can either bump the required pyarrow version or fix this for pyarrow 5 (e.g. with a compatibility shim like the sketch above). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4152/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4152/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4151 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4151/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4151/comments | https://api.github.com/repos/huggingface/datasets/issues/4151/events | https://github.com/huggingface/datasets/pull/4151 | 1,201,837,999 | PR_kwDODunzps42GgLu | 4,151 | Add missing label for emotion description | {
"login": "lijiazheng99",
"id": 44396506,
"node_id": "MDQ6VXNlcjQ0Mzk2NTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/44396506?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lijiazheng99",
"html_url": "https://github.com/lijiazheng99",
"followers_url": "https://api.github.com/users/lijiazheng99/followers",
"following_url": "https://api.github.com/users/lijiazheng99/following{/other_user}",
"gists_url": "https://api.github.com/users/lijiazheng99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lijiazheng99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lijiazheng99/subscriptions",
"organizations_url": "https://api.github.com/users/lijiazheng99/orgs",
"repos_url": "https://api.github.com/users/lijiazheng99/repos",
"events_url": "https://api.github.com/users/lijiazheng99/events{/privacy}",
"received_events_url": "https://api.github.com/users/lijiazheng99/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,649,769,457,000 | 1,649,771,930,000 | 1,649,771,930,000 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4151/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4151/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4151",
"html_url": "https://github.com/huggingface/datasets/pull/4151",
"diff_url": "https://github.com/huggingface/datasets/pull/4151.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4151.patch",
"merged_at": 1649771930000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4150 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4150/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4150/comments | https://api.github.com/repos/huggingface/datasets/issues/4150/events | https://github.com/huggingface/datasets/issues/4150 | 1,201,689,730 | I_kwDODunzps5HoFSC | 4,150 | Inconsistent splits generation for datasets without loading script (packaged dataset puts everything into a single split) | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 1,649,762,155,000 | 1,651,179,764,000 | 1,651,179,764,000 | CONTRIBUTOR | null | ## Describe the bug
Splits for dataset loaders without scripts are prepared inconsistently. I think it might be confusing for users.
## Steps to reproduce the bug
* If you load a packaged dataset from the Hub, it infers splits from the directory structure / filenames (check out the data [here](https://huggingface.co/datasets/nateraw/test-imagefolder-dataset)):
```python
ds = load_dataset("nateraw/test-imagefolder-dataset")
print(ds)
### Output:
DatasetDict({
train: Dataset({
features: ['image', 'label'],
num_rows: 6
})
test: Dataset({
features: ['image', 'label'],
num_rows: 4
})
})
```
* If you do the same from locally stored data, specifying only the directory path, you'll get the same:
```python
ds = load_dataset("/path/to/local/data/test-imagefolder-dataset")
print(ds)
### Output:
DatasetDict({
train: Dataset({
features: ['image', 'label'],
num_rows: 6
})
test: Dataset({
features: ['image', 'label'],
num_rows: 4
})
})
```
* However, if you explicitly specify the package name (like `imagefolder`, `csv`, `json`), all the data is put into a single split (a workaround sketch is shown after this example):
```python
ds = load_dataset("imagefolder", data_dir="/path/to/local/data/test-imagefolder-dataset")
print(ds)
### Output:
DatasetDict({
train: Dataset({
features: ['image', 'label'],
num_rows: 10
})
})
```
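Until the behaviors are aligned, a possible workaround (a sketch, assuming the train/test layout of the dataset above) is to spell out the split patterns explicitly instead of relying on `data_dir`:
```python
from datasets import load_dataset

# Map each split to its files explicitly so both splits are created.
data_files = {
    "train": "/path/to/local/data/test-imagefolder-dataset/train/**",
    "test": "/path/to/local/data/test-imagefolder-dataset/test/**",
}
ds = load_dataset("imagefolder", data_files=data_files)
print(ds)  # DatasetDict with both a train and a test split
```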
## Expected results
For `load_dataset("imagefolder", data_dir="/path/to/local/data/test-imagefolder-dataset")` I expect the same output as in the first two options. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4150/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4150/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4149 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4149/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4149/comments | https://api.github.com/repos/huggingface/datasets/issues/4149/events | https://github.com/huggingface/datasets/issues/4149 | 1,201,389,221 | I_kwDODunzps5Hm76l | 4,149 | load_dataset for winoground returning decoding error | {
"login": "odellus",
"id": 4686956,
"node_id": "MDQ6VXNlcjQ2ODY5NTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4686956?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/odellus",
"html_url": "https://github.com/odellus",
"followers_url": "https://api.github.com/users/odellus/followers",
"following_url": "https://api.github.com/users/odellus/following{/other_user}",
"gists_url": "https://api.github.com/users/odellus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/odellus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/odellus/subscriptions",
"organizations_url": "https://api.github.com/users/odellus/orgs",
"repos_url": "https://api.github.com/users/odellus/repos",
"events_url": "https://api.github.com/users/odellus/events{/privacy}",
"received_events_url": "https://api.github.com/users/odellus/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"I thought I had fixed it with this after some helpful hints from @severo\r\n```python\r\nimport datasets \r\ntoken = 'hf_XXXXX'\r\ndataset = datasets.load_dataset(\r\n 'facebook/winoground', \r\n name='facebook--winoground', \r\n split='train', \r\n streaming=True,\r\n use_auth_token=token,\r\n)\r\n```\r\nbut I found out that wasn't the case\r\n```python\r\n[x for x in dataset]\r\n...\r\nClientResponseError: 401, message='Unauthorized', url=URL('https://huggingface.co/datasets/facebook/winoground/resolve/a86a60456fbbd242e9a744199071a6bd3e7fd9de/examples.jsonl')\r\n```",
"Hi ! This dataset structure (image + labels in a JSON file) is not supported yet, though we're adding support for this in in #4069 \r\n\r\nThe following structure will be supported soon:\r\n```\r\nmetadata.json\r\nimages/\r\n image0.png\r\n image1.png\r\n ...\r\n```\r\nWhere `metadata.json` is a JSON Lines file with labels or other metadata, and each line must have a \"file_name\" field with the name of the image file.\r\n\r\nFor the moment are only supported:\r\n- JSON files only\r\n- image files only\r\n\r\nSince this dataset is a mix of the two, at the moment it fails trying to read the images as JSON.\r\n\r\nTherefore to be able to load this dataset we need to wait for the new structure to be supported (very soon ^^), or add a dataset script in the repository that reads both the JSON and the images cc @TristanThrush \r\n",
"We'll also investigate the issue with the streaming download manager in https://github.com/huggingface/datasets/issues/4139 ;) thanks for reporting",
"Are there any updates on this?",
"In the meantime, anyone can always download the images.zip and examples.jsonl files directly from huggingface.co - let me know if anyone has issues with that.",
"I mirrored the files at https://huggingface.co/datasets/facebook/winoground in a folder on my local machine `winground`\r\nand when I tried\r\n```python\r\nimport datasets\r\nds = datasets.load_from_disk('./winoground')\r\n```\r\nI get the following error\r\n```python\r\n--------------------------------------------------------------------------\r\nFileNotFoundError Traceback (most recent call last)\r\nInput In [2], in <cell line: 1>()\r\n----> 1 ds = datasets.load_from_disk('./winoground')\r\n\r\nFile ~/.local/lib/python3.8/site-packages/datasets/load.py:1759, in load_from_disk(dataset_path, fs, keep_in_memory)\r\n 1757 return DatasetDict.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory)\r\n 1758 else:\r\n-> 1759 raise FileNotFoundError(\r\n 1760 f\"Directory {dataset_path} is neither a dataset directory nor a dataset dict directory.\"\r\n 1761 )\r\n\r\nFileNotFoundError: Directory ./winoground is neither a dataset directory nor a dataset dict directory.\r\n```\r\nso still some work to be done on the backend imo.",
"Note that `load_from_disk` is the function that reloads an Arrow dataset saved with `my_dataset.save_to_disk`.\r\n\r\nOnce we do support images with metadata you'll be able to use `load_dataset(\"facebook/winoground\")` directly (or `load_dataset(\"./winoground\")` of you've cloned the winoground repository locally).",
"Apologies for the delay. I added a custom dataset loading script for winoground. It should work now, with an auth token:\r\n\r\n`examples = load_dataset('facebook/winoground', use_auth_token=<your auth token>)`\r\n\r\nLet me know if there are any issues",
"Adding the dataset loading script definitely didn't take as long as I thought it would 😅",
"killer"
] | 1,649,751,376,000 | 1,651,707,638,000 | 1,651,707,638,000 | CONTRIBUTOR | null | ## Describe the bug
I am trying to use datasets to load winoground and I'm getting a JSON decoding error.
## Steps to reproduce the bug
```python
from datasets import load_dataset
token = 'hf_XXXXX' # my HF access token
datasets = load_dataset('facebook/winoground', use_auth_token=token)
```
## Expected results
I downloaded images.zip and examples.jsonl manually. I was expecting to have some trouble decoding the JSON, so I didn't use `jsonlines` but instead was able to get a complete set of 400 examples by doing
```python
import json
with open('examples.jsonl', 'r') as f:
examples = f.read().split('\n')
# Thinking this would error if the JSON is not utf-8 encoded
json_data = [json.loads(x) for x in examples]
print(json_data[-1])
```
and I see
```python
{'caption_0': 'someone is overdoing it',
'caption_1': 'someone is doing it over',
'collapsed_tag': 'Relation',
'id': 399,
'image_0': 'ex_399_img_0',
'image_1': 'ex_399_img_1',
'num_main_preds': 1,
'secondary_tag': 'Morpheme-Level',
'tag': 'Scope, Preposition'}
```
so I'm not sure what's going on here honestly. The file `examples.jsonl` doesn't have non-UTF-8 encoded text.
## Actual results
During the split operation after downloading, datasets encounters an error in the JSON ([trace](https://gist.github.com/odellus/e55d390ca203386bf551f38e0c63a46b) abbreviated for brevity).
```
datasets/packaged_modules/json/json.py:144 in Json._generate_tables(self, files)
...
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
```
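For reference, a rough sketch of building the dataset from the manually downloaded files once images.zip is extracted. It assumes the archive unpacks to `images/<name>.png`; adjust the directory and extension to whatever the zip actually contains.
```python
import json
from datasets import Dataset, Image

# Read the manually downloaded examples.jsonl and point each example at its image files.
with open("examples.jsonl", encoding="utf-8") as f:
    examples = [json.loads(line) for line in f if line.strip()]

columns = {key: [ex[key] for ex in examples] for key in examples[0]}
columns["image_0"] = [f"images/{name}.png" for name in columns["image_0"]]
columns["image_1"] = [f"images/{name}.png" for name in columns["image_1"]]

ds = Dataset.from_dict(columns)
ds = ds.cast_column("image_0", Image()).cast_column("image_1", Image())
```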
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.4
- Platform: Linux-5.13.0-39-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 7.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4149/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4149/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4148 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4148/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4148/comments | https://api.github.com/repos/huggingface/datasets/issues/4148/events | https://github.com/huggingface/datasets/issues/4148 | 1,201,169,242 | I_kwDODunzps5HmGNa | 4,148 | fix confusing bleu metric example | {
"login": "aizawa-naoki",
"id": 6253193,
"node_id": "MDQ6VXNlcjYyNTMxOTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6253193?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aizawa-naoki",
"html_url": "https://github.com/aizawa-naoki",
"followers_url": "https://api.github.com/users/aizawa-naoki/followers",
"following_url": "https://api.github.com/users/aizawa-naoki/following{/other_user}",
"gists_url": "https://api.github.com/users/aizawa-naoki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aizawa-naoki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aizawa-naoki/subscriptions",
"organizations_url": "https://api.github.com/users/aizawa-naoki/orgs",
"repos_url": "https://api.github.com/users/aizawa-naoki/repos",
"events_url": "https://api.github.com/users/aizawa-naoki/events{/privacy}",
"received_events_url": "https://api.github.com/users/aizawa-naoki/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [] | 1,649,744,306,000 | 1,649,859,394,000 | 1,649,859,394,000 | NONE | null | **Is your feature request related to a problem? Please describe.**
I would like to see the example in "Metric Card for BLEU" changed.
The 0th element in the predictions list is not closed with a square bracket, and the 1st list is missing a comma.
The BLEU score is calculated correctly, but it is difficult to understand, so it would be helpful if you could correct this.
```
>> predictions = [
... ["hello", "there", "general", "kenobi", # <- no closing square bracket.
... ["foo", "bar" "foobar"] # <- no comma between "bar" and "foobar"
... ]
>>> references = [
... [["hello", "there", "general", "kenobi"]],
... [["foo", "bar", "foobar"]]
... ]
>>> bleu = datasets.load_metric("bleu")
>>> results = bleu.compute(predictions=predictions, references=references)
>>> print(results)
{'bleu': 0.6370964381207871, ...
```
**Describe the solution you'd like**
```
>>> predictions = [
...     ["hello", "there", "general", "kenobi"],  # <- closing square bracket added
...     ["foo", "bar", "foobar"]  # <- comma added between "bar" and "foobar"
... ]
# and
>>> print(results)
{'bleu': 1.0, ...
```
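For completeness, a runnable version of the corrected example (a sketch; BLEU should come out as 1.0 here because the fixed predictions match the references exactly):
```python
import datasets

predictions = [
    ["hello", "there", "general", "kenobi"],
    ["foo", "bar", "foobar"],
]
references = [
    [["hello", "there", "general", "kenobi"]],
    [["foo", "bar", "foobar"]],
]
bleu = datasets.load_metric("bleu")
results = bleu.compute(predictions=predictions, references=references)
print(results["bleu"])  # expected: 1.0
```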
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4148/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4148/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4147 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4147/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4147/comments | https://api.github.com/repos/huggingface/datasets/issues/4147/events | https://github.com/huggingface/datasets/pull/4147 | 1,200,756,008 | PR_kwDODunzps42CtPl | 4,147 | Adjust path to datasets tutorial in How-To | {
"login": "NimaBoscarino",
"id": 6765188,
"node_id": "MDQ6VXNlcjY3NjUxODg=",
"avatar_url": "https://avatars.githubusercontent.com/u/6765188?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NimaBoscarino",
"html_url": "https://github.com/NimaBoscarino",
"followers_url": "https://api.github.com/users/NimaBoscarino/followers",
"following_url": "https://api.github.com/users/NimaBoscarino/following{/other_user}",
"gists_url": "https://api.github.com/users/NimaBoscarino/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NimaBoscarino/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NimaBoscarino/subscriptions",
"organizations_url": "https://api.github.com/users/NimaBoscarino/orgs",
"repos_url": "https://api.github.com/users/NimaBoscarino/repos",
"events_url": "https://api.github.com/users/NimaBoscarino/events{/privacy}",
"received_events_url": "https://api.github.com/users/NimaBoscarino/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649,726,434,000 | 1,649,752,344,000 | 1,649,751,962,000 | MEMBER | null | The link in the How-To overview page to the Datasets tutorials is currently broken. This is just a small adjustment to make it match the format used in https://github.com/huggingface/datasets/blob/master/docs/source/tutorial.md.
(Edit to add: The link in the PR deployment (https://moon-ci-docs.huggingface.co/docs/datasets/pr_4147/en/how_to) is also broken since it's actually hardcoded to `master` and not dynamic to the branch name, but other links seem to behave similarly.) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4147/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4147/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4147",
"html_url": "https://github.com/huggingface/datasets/pull/4147",
"diff_url": "https://github.com/huggingface/datasets/pull/4147.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4147.patch",
"merged_at": 1649751962000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4146 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4146/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4146/comments | https://api.github.com/repos/huggingface/datasets/issues/4146/events | https://github.com/huggingface/datasets/issues/4146 | 1,200,215,789 | I_kwDODunzps5Hidbt | 4,146 | SAMSum dataset viewer not working | {
"login": "aakashnegi10",
"id": 39906333,
"node_id": "MDQ6VXNlcjM5OTA2MzMz",
"avatar_url": "https://avatars.githubusercontent.com/u/39906333?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aakashnegi10",
"html_url": "https://github.com/aakashnegi10",
"followers_url": "https://api.github.com/users/aakashnegi10/followers",
"following_url": "https://api.github.com/users/aakashnegi10/following{/other_user}",
"gists_url": "https://api.github.com/users/aakashnegi10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aakashnegi10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aakashnegi10/subscriptions",
"organizations_url": "https://api.github.com/users/aakashnegi10/orgs",
"repos_url": "https://api.github.com/users/aakashnegi10/repos",
"events_url": "https://api.github.com/users/aakashnegi10/events{/privacy}",
"received_events_url": "https://api.github.com/users/aakashnegi10/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"https://huggingface.co/datasets/samsum\r\n\r\n```\r\nStatus code: 400\r\nException: ValueError\r\nMessage: Cannot seek streaming HTTP file\r\n```",
"Currently, only the datasets that can be streamed support the dataset viewer. Maybe @lhoestq @albertvillanova or @mariosasko could give more details about why the dataset cannot be streamed.",
"It looks like the host (https://arxiv.org) doesn't allow HTTP Range requests, which is what we use to stream data.\r\n\r\nThis can be fix if we host the data ourselves, which is ok since the dataset is under CC BY-NC-ND 4.0"
] | 1,649,694,177,000 | 1,651,249,569,000 | 1,651,249,569,000 | NONE | null | ## Dataset viewer issue for '*name of the dataset*'
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4146/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4146/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4145 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4145/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4145/comments | https://api.github.com/repos/huggingface/datasets/issues/4145/events | https://github.com/huggingface/datasets/pull/4145 | 1,200,209,781 | PR_kwDODunzps42A6Rt | 4,145 | Redirect TIMIT download from LDC | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"CI is failing because some tags are outdated, but they're fixed in #4067 ",
"_The documentation is not available anymore as the PR was closed or merged._",
"We may do a release pretty soon (today ?), let me know if it's fine to include it in the new release",
"Fine to include this change!"
] | 1,649,693,875,000 | 1,649,864,371,000 | 1,649,863,984,000 | MEMBER | null | LDC data is protected under US copyright laws and under various legal agreements between the Linguistic Data Consortium/the University of Pennsylvania and data providers which prohibit redistribution of that data by anyone other than LDC. Similarly, LDC's membership agreements, non-member user agreement and various corpus-specific license agreements specifically state that users cannot publish, retransmit, disclose, copy, reproduce or redistribute LDC databases to others outside their organizations.
LDC explicitly asked us to remove the download script for the TIMIT dataset. In this PR I remove all means to download the dataset, and redirect users to download the data from https://catalog.ldc.upenn.edu/LDC93S1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4145/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4145/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4145",
"html_url": "https://github.com/huggingface/datasets/pull/4145",
"diff_url": "https://github.com/huggingface/datasets/pull/4145.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4145.patch",
"merged_at": 1649863983000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4144 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4144/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4144/comments | https://api.github.com/repos/huggingface/datasets/issues/4144/events | https://github.com/huggingface/datasets/pull/4144 | 1,200,016,983 | PR_kwDODunzps42ARmu | 4,144 | Fix splits in local packaged modules, local datasets without script and hub datasets without script | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks !\r\nI'm in favor of this change, even though it's a breaking change:\r\n\r\nif you had a dataset\r\n```\r\ndata/\r\n train.csv\r\n test.csv\r\n```\r\n\r\nthen running this code would now return both train and test splits:\r\n```python\r\nload_dataset(\"csv\", data_dir=\"data/\")\r\n```\r\nwhereas right now it returns only a train split with the data from both CSV files.\r\n\r\nIn my opinion it's ok do do this breaking change because:\r\n- it makes this behavior consistent with `load_dataset(\"path/to/data\")` that also returns both splits: data_files resolution must be the same\r\n- I don't expect too many affected users (unless people really wanted to group train and test images in the train split on purpose ?) compared to the many new users to come (especially with #4069 )\r\n- this usage will become more and more common as we add packaged builder and imagefolder/audiofolder usage grows, so it may be better to do this change early\r\n\r\nLet me know if you think this is acceptable @mariosasko @albertvillanova or not, and if you think we need to first have a warning for some time before switching to this new behavior",
"Also, if people really want to put train and test, say, images in a single train split they could do \r\n`load_dataset(\"imagefolder\", data_files={\"train\": \"/path/to/data/**})`. Probably (arguably :)), if this is a more counterintuitive case, then it should require manual files specification, not a default one (in which we expect that users do want to infer splits from filenames / dir structure but currently they have to pass smth like `{\"train\": \"/path/to/data/train*\", \"test\": \"/path/to/data/test*\"}` explicitly as `data_files`) ",
"I also like this change, and I don't think we even need a warning during the transition period, considering I've been asked several times since the release of `imagefolder` why splits are not correctly inferred if the directory structure is as follows:\r\n```\r\ndata_dir\r\n train\r\n label_a\r\n 0.jpg\r\n ...\r\n label_b \r\n 0.jpg\r\n ...\r\n test\r\n label_a\r\n 0.jpg\r\n ...\r\n label_b \r\n 0.jpg\r\n ...\r\n```",
"Cool ! Feel free to add a test (maybe something similar to `test_PackagedDatasetModuleFactory_with_data_dir` but with a data_dir that contains several splits) and mark this PR as ready for review then @polinaeterna :)",
"@lhoestq @mariosasko do you think it's a good idea to do the same with `HubDatasetModuleFactoryWithoutScript` and `LocalDatasetModuleFactoryWithoutScript` (see the latest change). If we agree on the current change, doing \r\n```python\r\nds = load_dataset(\"polinaeterna/jsonl_test\", data_dir=\"data/\")\r\n```\r\non dataset with the following structure:\r\n```\r\ntrain.jsonl\r\ntest.jsonl\r\ndata/\r\n train.jsonl\r\n test.jsonl\r\n```\r\nwill result in having two splits from files under `data/` dir in specified repo, while master version returns a single train split. \r\nThe same would be for local dataset without script if doing smth like:\r\n```python\r\nds = load_dataset(\"/home/polina/workspace/repos/jsonl_test\", data_dir=\"/home/polina/workspace/repos/jsonl_test/data\")\r\n```\r\n(though I'm not sure I understand this use case :D)\r\nLet me know if you think we should preserve the same logic for all factories or if I should roll back this change.",
"@lhoestq to test passing subdirectory (`base_path`) to data_files functions and methods, I extended the temporary test directory with data so that it contains subdirectory. Because of that the number of files in this directory increased, so I had to change some numbers and patterns to account for this change - [907ddf0](https://github.com/huggingface/datasets/pull/4144/commits/907ddf09d3afece5afbae18675c859d6e453f2bf)\r\n\r\nDo you think it's ok? Another option is to create another tmp dir and do all the checks inside it. "
] | 1,649,685,453,000 | 1,651,223,534,000 | 1,651,179,765,000 | CONTRIBUTOR | null | fixes #4150
I suggest inferring the splits structure from files when `data_dir` is passed, using `get_patterns_locally`, analogous to what's done in `LocalDatasetModuleFactoryWithoutScript` with `self.path`, instead of matching files with `data_dir/**` patterns and putting them all into a single default (train) split (see the sketch below).
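For example, with the suggested change the packaged loader would infer the splits itself (a sketch of the intended behavior, assuming a local `data/` directory containing `train.csv` and `test.csv`):
```python
from datasets import load_dataset

# data/ contains train.csv and test.csv
ds = load_dataset("csv", data_dir="data/")
print(ds)
# Intended result: a DatasetDict with separate "train" and "test" splits,
# instead of a single "train" split holding all the files.
```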
I would also suggest aligning `HubDatasetModuleFactoryWithoutScript` and `LocalDatasetModuleFactoryWithoutScript` with this logic (remove `data_files = os.path.join(data_dir, "**")`). It's not reflected in the current code yet because I'd like to discuss it first, as I might be unaware of some use cases. @lhoestq @mariosasko @albertvillanova WDYT? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4144/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4144/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4144",
"html_url": "https://github.com/huggingface/datasets/pull/4144",
"diff_url": "https://github.com/huggingface/datasets/pull/4144.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4144.patch",
"merged_at": 1651179764000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4143 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4143/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4143/comments | https://api.github.com/repos/huggingface/datasets/issues/4143/events | https://github.com/huggingface/datasets/issues/4143 | 1,199,937,961 | I_kwDODunzps5HhZmp | 4,143 | Unable to download `Wikipedia` 20220301.en version | {
"login": "beyondguo",
"id": 37113676,
"node_id": "MDQ6VXNlcjM3MTEzNjc2",
"avatar_url": "https://avatars.githubusercontent.com/u/37113676?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/beyondguo",
"html_url": "https://github.com/beyondguo",
"followers_url": "https://api.github.com/users/beyondguo/followers",
"following_url": "https://api.github.com/users/beyondguo/following{/other_user}",
"gists_url": "https://api.github.com/users/beyondguo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/beyondguo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/beyondguo/subscriptions",
"organizations_url": "https://api.github.com/users/beyondguo/orgs",
"repos_url": "https://api.github.com/users/beyondguo/repos",
"events_url": "https://api.github.com/users/beyondguo/events{/privacy}",
"received_events_url": "https://api.github.com/users/beyondguo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi! We've recently updated the Wikipedia script, so these changes are only available on master and can be fetched as follows:\r\n```python\r\ndataset_wikipedia = load_dataset(\"wikipedia\", \"20220301.en\", revision=\"master\")\r\n```",
"Hi, how can I load the previous \"20200501.en\" version of wikipedia which had been downloaded to the default path? Thanks!",
"@JiaQiSJTU just reinstall the previous verision of the package, e.g. `!pip install -q datasets==1.0.0`"
] | 1,649,682,014,000 | 1,660,696,675,000 | 1,650,560,654,000 | NONE | null | ## Describe the bug
Unable to download the `Wikipedia` dataset, 20220301.en version
## Steps to reproduce the bug
```python
!pip install apache_beam mwparserfromhell
dataset_wikipedia = load_dataset("wikipedia", "20220301.en")
```
## Actual results
```
ValueError: BuilderConfig 20220301.en not found.
Available: ['20200501.aa', '20200501.ab', '20200501.ace', '20200501.ady', '20200501.af', '20200501.ak', '20200501.als', '20200501.am', '20200501.an', '20200501.ang', '20200501.ar', '20200501.arc', '20200501.arz', '20200501.as', '20200501.ast', '20200501.atj', '20200501.av', '20200501.ay', '20200501.az', '20200501.azb', '20200501.ba', '20200501.bar', '20200501.bat-smg', '20200501.bcl', '20200501.be', '20200501.be-x-old', '20200501.bg', '20200501.bh', '20200501.bi', '20200501.bjn', '20200501.bm', '20200501.bn', '20200501.bo', '20200501.bpy', '20200501.br', '20200501.bs', '20200501.bug', '20200501.bxr', '20200501.ca', '20200501.cbk-zam', '20200501.cdo', '20200501.ce', '20200501.ceb', '20200501.ch', '20200501.cho', '20200501.chr', '20200501.chy', '20200501.ckb', '20200501.co', '20200501.cr', '20200501.crh', '20200501.cs', '20200501.csb', '20200501.cu', '20200501.cv', '20200501.cy', '20200501.da', '20200501.de', '20200501.din', '20200501.diq', '20200501.dsb', '20200501.dty', '20200501.dv', '20200501.dz', '20200501.ee', '20200501.el', '20200501.eml', '20200501.en', '20200501.eo', '20200501.es', '20200501.et', '20200501.eu', '20200501.ext', '20200501.fa', '20200501.ff', '20200501.fi', '20200501.fiu-vro', '20200501.fj', '20200501.fo', '20200501.fr', '20200501.frp', '20200501.frr', '20200501.fur', '20200501.fy', '20200501.ga', '20200501.gag', '20200501.gan', '20200501.gd', '20200501.gl', '20200501.glk', '20200501.gn', '20200501.gom', '20200501.gor', '20200501.got', '20200501.gu', '20200501.gv', '20200501.ha', '20200501.hak', '20200501.haw', '20200501.he', '20200501.hi', '20200501.hif', '20200501.ho', '20200501.hr', '20200501.hsb', '20200501.ht', '20200501.hu', '20200501.hy', '20200501.ia', '20200501.id', '20200501.ie', '20200501.ig', '20200501.ii', '20200501.ik', '20200501.ilo', '20200501.inh', '20200501.io', '20200501.is', '20200501.it', '20200501.iu', '20200501.ja', '20200501.jam', '20200501.jbo', '20200501.jv', '20200501.ka', '20200501.kaa', '20200501.kab', '20200501.kbd', '20200501.kbp', '20200501.kg', '20200501.ki', '20200501.kj', '20200501.kk', '20200501.kl', '20200501.km', '20200501.kn', '20200501.ko', '20200501.koi', '20200501.krc', '20200501.ks', '20200501.ksh', '20200501.ku', '20200501.kv', '20200501.kw', '20200501.ky', '20200501.la', '20200501.lad', '20200501.lb', '20200501.lbe', '20200501.lez', '20200501.lfn', '20200501.lg', '20200501.li', '20200501.lij', '20200501.lmo', '20200501.ln', '20200501.lo', '20200501.lrc', '20200501.lt', '20200501.ltg', '20200501.lv', '20200501.mai', '20200501.map-bms', '20200501.mdf', '20200501.mg', '20200501.mh', '20200501.mhr', '20200501.mi', '20200501.min', '20200501.mk', '20200501.ml', '20200501.mn', '20200501.mr', '20200501.mrj', '20200501.ms', '20200501.mt', '20200501.mus', '20200501.mwl', '20200501.my', '20200501.myv', '20200501.mzn', '20200501.na', '20200501.nah', '20200501.nap', '20200501.nds', '20200501.nds-nl', '20200501.ne', '20200501.new', '20200501.ng', '20200501.nl', '20200501.nn', '20200501.no', '20200501.nov', '20200501.nrm', '20200501.nso', '20200501.nv', '20200501.ny', '20200501.oc', '20200501.olo', '20200501.om', '20200501.or', '20200501.os', '20200501.pa', '20200501.pag', '20200501.pam', '20200501.pap', '20200501.pcd', '20200501.pdc', '20200501.pfl', '20200501.pi', '20200501.pih', '20200501.pl', '20200501.pms', '20200501.pnb', '20200501.pnt', '20200501.ps', '20200501.pt', '20200501.qu', '20200501.rm', '20200501.rmy', '20200501.rn', '20200501.ro', '20200501.roa-rup', '20200501.roa-tara', '20200501.ru', '20200501.rue', '20200501.rw', 
'20200501.sa', '20200501.sah', '20200501.sat', '20200501.sc', '20200501.scn', '20200501.sco', '20200501.sd', '20200501.se', '20200501.sg', '20200501.sh', '20200501.si', '20200501.simple', '20200501.sk', '20200501.sl', '20200501.sm', '20200501.sn', '20200501.so', '20200501.sq', '20200501.sr', '20200501.srn', '20200501.ss', '20200501.st', '20200501.stq', '20200501.su', '20200501.sv', '20200501.sw', '20200501.szl', '20200501.ta', '20200501.tcy', '20200501.te', '20200501.tet', '20200501.tg', '20200501.th', '20200501.ti', '20200501.tk', '20200501.tl', '20200501.tn', '20200501.to', '20200501.tpi', '20200501.tr', '20200501.ts', '20200501.tt', '20200501.tum', '20200501.tw', '20200501.ty', '20200501.tyv', '20200501.udm', '20200501.ug', '20200501.uk', '20200501.ur', '20200501.uz', '20200501.ve', '20200501.vec', '20200501.vep', '20200501.vi', '20200501.vls', '20200501.vo', '20200501.wa', '20200501.war', '20200501.wo', '20200501.wuu', '20200501.xal', '20200501.xh', '20200501.xmf', '20200501.yi', '20200501.yo', '20200501.za', '20200501.zea', '20200501.zh', '20200501.zh-classical', '20200501.zh-min-nan', '20200501.zh-yue', '20200501.zu']
```
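For reference, a sketch of the workaround suggested in the comments on this issue (the 20220301.* configs only exist in the updated script, so either upgrade `datasets` or pin the script revision):
```python
from datasets import load_dataset

# The 20220301 configs were added after the 2.0.0 release, so fetch the script from master.
dataset_wikipedia = load_dataset("wikipedia", "20220301.en", revision="master")
```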
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: Ubuntu
- Python version: 3.6
- PyArrow version: 6.0.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4143/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4143/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4142 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4142/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4142/comments | https://api.github.com/repos/huggingface/datasets/issues/4142/events | https://github.com/huggingface/datasets/issues/4142 | 1,199,794,750 | I_kwDODunzps5Hg2o- | 4,142 | Add ObjectFolder 2.0 dataset | {
"login": "osanseviero",
"id": 7246357,
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/osanseviero",
"html_url": "https://github.com/osanseviero",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | null | [] | null | [] | 1,649,674,671,000 | 1,649,674,671,000 | null | MEMBER | null | ## Adding a Dataset
- **Name:** ObjectFolder 2.0
- **Description:** ObjectFolder 2.0 is a dataset of 1,000 objects in the form of implicit representations. It contains 1,000 Object Files each containing the complete multisensory profile for an object instance.
- **Paper:** https://arxiv.org/abs/2204.02389
- **Data:** https://github.com/rhgao/ObjectFolder
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4142/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4142/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4141 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4141/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4141/comments | https://api.github.com/repos/huggingface/datasets/issues/4141/events | https://github.com/huggingface/datasets/issues/4141 | 1,199,610,885 | I_kwDODunzps5HgJwF | 4,141 | Why is the dataset not visible under the dataset preview section? | {
"login": "Nid989",
"id": 75028682,
"node_id": "MDQ6VXNlcjc1MDI4Njgy",
"avatar_url": "https://avatars.githubusercontent.com/u/75028682?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nid989",
"html_url": "https://github.com/Nid989",
"followers_url": "https://api.github.com/users/Nid989/followers",
"following_url": "https://api.github.com/users/Nid989/following{/other_user}",
"gists_url": "https://api.github.com/users/Nid989/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Nid989/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nid989/subscriptions",
"organizations_url": "https://api.github.com/users/Nid989/orgs",
"repos_url": "https://api.github.com/users/Nid989/repos",
"events_url": "https://api.github.com/users/Nid989/events{/privacy}",
"received_events_url": "https://api.github.com/users/Nid989/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | null | [] | null | [] | 1,649,666,202,000 | 1,649,703,332,000 | 1,649,696,989,000 | NONE | null | ## Dataset viewer issue for '*name of the dataset*'
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4141/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4141/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4140 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4140/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4140/comments | https://api.github.com/repos/huggingface/datasets/issues/4140/events | https://github.com/huggingface/datasets/issues/4140 | 1,199,492,356 | I_kwDODunzps5Hfs0E | 4,140 | Error loading arxiv data set | {
"login": "yjqiu",
"id": 5383918,
"node_id": "MDQ6VXNlcjUzODM5MTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5383918?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjqiu",
"html_url": "https://github.com/yjqiu",
"followers_url": "https://api.github.com/users/yjqiu/followers",
"following_url": "https://api.github.com/users/yjqiu/following{/other_user}",
"gists_url": "https://api.github.com/users/yjqiu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjqiu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjqiu/subscriptions",
"organizations_url": "https://api.github.com/users/yjqiu/orgs",
"repos_url": "https://api.github.com/users/yjqiu/repos",
"events_url": "https://api.github.com/users/yjqiu/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjqiu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi! I think this error may be related to using an older version of the library. I was able to load the dataset without any issues using the latest version of `datasets`. Can you upgrade to the latest version of `datasets` and try again? :)",
"Hi! As @stevhliu suggested, to fix the issue, update the lib to the newest version with:\r\n```\r\npip install -U datasets\r\n```\r\nand download the dataset as follows:\r\n```python\r\nfrom datasets import load_dataset\r\ndset = load_dataset('scientific_papers', 'arxiv', download_mode=\"force_redownload\")\r\n```",
"Thanks for the quick response! It works now. The problem is that I used nlp. load_dataset instead of datasets. load_dataset."
] | 1,649,660,794,000 | 1,649,780,648,000 | 1,649,780,648,000 | NONE | null | ## Describe the bug
A clear and concise description of what the bug is.
I met the error below when loading the arxiv dataset via `nlp.load_dataset('scientific_papers', 'arxiv',)`.
```
Traceback (most recent call last):
File "scripts/summarization.py", line 354, in <module>
main(args)
File "scripts/summarization.py", line 306, in main
model.hf_datasets = nlp.load_dataset('scientific_papers', 'arxiv')
File "/opt/conda/envs/longformer/lib/python3.7/site-packages/nlp/load.py", line 549, in load_dataset
download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,
File "/opt/conda/envs/longformer/lib/python3.7/site-packages/nlp/builder.py", line 463, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/opt/conda/envs/longformer/lib/python3.7/site-packages/nlp/builder.py", line 522, in _download_and_prepare
self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
File "/opt/conda/envs/longformer/lib/python3.7/site-packages/nlp/utils/info_utils.py", line 38, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
nlp.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?id=1b3rmCSIoh6VhD4HKWjI4HOW-cSwcwbeC&export=download', 'https://drive.google.com/uc?id=1lvsqvsFi3W-pE1SqNZI0s8NR9rC1tsja&export=download']
```
I then tried to skip the verification steps by passing `ignore_verifications=True`, and there is another error.
```
Traceback (most recent call last):
File "/opt/conda/envs/longformer/lib/python3.7/site-packages/nlp/builder.py", line 537, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/opt/conda/envs/longformer/lib/python3.7/site-packages/nlp/builder.py", line 810, in _prepare_split
for key, record in utils.tqdm(generator, unit=" examples", total=split_info.num_examples, leave=False):
File "/opt/conda/envs/longformer/lib/python3.7/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/opt/conda/envs/longformer/lib/python3.7/site-packages/nlp/datasets/scientific_papers/9e4f2cfe3d8494e9f34a84ce49c3214605b4b52a3d8eb199104430d04c52cc12/scientific_papers.py", line 108, in _generate_examples
with open(path, encoding="utf-8") as f:
NotADirectoryError: [Errno 20] Not a directory: '/home/username/.cache/huggingface/datasets/downloads/c0deae7af7d9c87f25dfadf621f7126f708d7dcac6d353c7564883084a000076/arxiv-dataset/train.txt'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "scripts/summarization.py", line 354, in <module>
main(args)
File "scripts/summarization.py", line 306, in main
model.hf_datasets = nlp.load_dataset('scientific_papers', 'arxiv', ignore_verifications=True)
File "/opt/conda/envs/longformer/lib/python3.7/site-packages/nlp/load.py", line 549, in load_dataset
download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,
File "/opt/conda/envs/longformer/lib/python3.7/site-packages/nlp/builder.py", line 463, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/opt/conda/envs/longformer/lib/python3.7/site-packages/nlp/builder.py", line 539, in _download_and_prepare
raise OSError("Cannot find data file. " + (self.manual_download_instructions or ""))
OSError: Cannot find data file.
```
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
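# The calls below mirror the ones quoted in the tracebacks above; the exact
# environment is an assumption (the error was hit with the legacy `nlp` package).
import nlp

nlp.load_dataset('scientific_papers', 'arxiv')  # NonMatchingChecksumError
nlp.load_dataset('scientific_papers', 'arxiv', ignore_verifications=True)  # OSError: Cannot find data file.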
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform:
- Python version:
- PyArrow version:
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4140/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4140/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4139 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4139/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4139/comments | https://api.github.com/repos/huggingface/datasets/issues/4139/events | https://github.com/huggingface/datasets/issues/4139 | 1,199,443,822 | I_kwDODunzps5Hfg9u | 4,139 | Dataset viewer issue for Winoground | {
"login": "alcinos",
"id": 7438704,
"node_id": "MDQ6VXNlcjc0Mzg3MDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/7438704?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alcinos",
"html_url": "https://github.com/alcinos",
"followers_url": "https://api.github.com/users/alcinos/followers",
"following_url": "https://api.github.com/users/alcinos/following{/other_user}",
"gists_url": "https://api.github.com/users/alcinos/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alcinos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alcinos/subscriptions",
"organizations_url": "https://api.github.com/users/alcinos/orgs",
"repos_url": "https://api.github.com/users/alcinos/repos",
"events_url": "https://api.github.com/users/alcinos/events{/privacy}",
"received_events_url": "https://api.github.com/users/alcinos/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
},
{
"id": 4030248571,
"node_id": "LA_kwDODunzps7wOLZ7",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer-gated",
"name": "dataset-viewer-gated",
"color": "51F745",
"default": false,
"description": ""
}
] | closed | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
},
{
"login": "SBrandeis",
"id": 33657802,
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SBrandeis",
"html_url": "https://github.com/SBrandeis",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"type": "User",
"site_admin": false
},
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"related (same dataset): https://github.com/huggingface/datasets/issues/4149. But the issue is different. Looking at it",
"I thought this issue was related to the error I was seeing, but upon consideration I'd think the dataset viewer would return a 500 (unable to create the split like me) or a 404 (unable to load split b/c it was never created) error if it was having the issue I was seeing in #4149. 401 message makes it look like dataset viewer isn't passing through the identity of the user who has signed the licensing agreement when making the request to GET [examples.jsonl](https://huggingface.co/datasets/facebook/winoground/resolve/a86a60456fbbd242e9a744199071a6bd3e7fd9de/examples.jsonl).",
"Pinging @SBrandeis, as it seems related to gated datasets and access tokens.",
"To replicate:\r\n\r\n```python\r\n>>> import datasets\r\n>>> dataset= datasets.load_dataset('facebook/winoground', name='facebook--winoground', split='train', use_auth_token=\"hf_app_...\", streaming=True)\r\n>>> next(iter(dataset))\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 497, in __iter__\r\n for key, example in self._iter():\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 494, in _iter\r\n yield from ex_iterable\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 87, in __iter__\r\n yield from self.generate_examples_fn(**self.kwargs)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 439, in wrapper\r\n for key, table in generate_tables_fn(**kwargs):\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py\", line 85, in _generate_tables\r\n for file_idx, file in enumerate(files):\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/utils/streaming_download_manager.py\", line 679, in __iter__\r\n yield from self.generator(*self.args, **self.kwargs)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/utils/streaming_download_manager.py\", line 731, in _iter_from_urlpaths\r\n for dirpath, _, filenames in xwalk(urlpath, use_auth_token=use_auth_token):\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/utils/streaming_download_manager.py\", line 623, in xwalk\r\n for dirpath, dirnames, filenames in fs.walk(main_hop):\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/spec.py\", line 372, in walk\r\n listing = self.ls(path, detail=True, **kwargs)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/asyn.py\", line 85, in wrapper\r\n return sync(self.loop, func, *args, **kwargs)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/asyn.py\", line 65, in sync\r\n raise return_result\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/asyn.py\", line 25, in _runner\r\n result[0] = await coro\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py\", line 196, in _ls\r\n out = await self._ls_real(url, detail=detail, **kwargs)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py\", line 150, in _ls_real\r\n self._raise_not_found_for_status(r, url)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py\", line 208, in _raise_not_found_for_status\r\n response.raise_for_status()\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/aiohttp/client_reqrep.py\", line 1004, in raise_for_status\r\n raise ClientResponseError(\r\naiohttp.client_exceptions.ClientResponseError: 401, message='Unauthorized', url=URL('https://huggingface.co/datasets/facebook/winoground/resolve/a86a60456fbbd242e9a744199071a6bd3e7fd9de/examples.jsonl')\r\n```\r\n\r\n*edited 
to fix `use_token` -> `use_auth_token`, thx @odellus*",
"~~Using your command to replicate and changing `use_token` to `use_auth_token` fixes the problem I was seeing in #4149.~~\r\nNevermind it gave me an iterator to a method returning the same 401s. Changing `use_token` to `use_auth_token` does not fix the issue.",
"After investigation with @severo , we found a potential culprit: https://github.com/huggingface/datasets/blob/3cd0a009a43f9f174056d70bfa2ca32216181926/src/datasets/utils/streaming_download_manager.py#L610-L624\r\n\r\nThe streaming manager does not seem to pass `use_auth_token` to `fsspec` when streaming and not iterating content of a zip archive\r\n\r\ncc @albertvillanova @lhoestq ",
"I was able to reproduce it on a private dataset, let me work on a fix",
"Hey @lhoestq, Thanks for working on a fix! Any plans to merge #4173 into master? ",
"Thanks for the heads up, I still need to fix some tests that are failing in the CI before merging ;)",
"The fix has been merged, we'll do a new release soon, and update the dataset viewer",
"Fixed, thanks!\r\n<img width=\"1119\" alt=\"Capture d’écran 2022-06-21 à 18 41 09\" src=\"https://user-images.githubusercontent.com/1676121/174853571-afb0749c-4178-4c89-ab40-bb162a449788.png\">\r\n"
] | 1,649,657,501,000 | 1,655,829,838,000 | 1,655,829,838,000 | NONE | null | ## Dataset viewer issue for 'Winoground'
**Link:** [*link to the dataset viewer page*](https://huggingface.co/datasets/facebook/winoground/viewer/facebook--winoground/train)
*short description of the issue*
Getting 401, message='Unauthorized'
The dataset is subject to authorization, but I can access the files from the interface, so I assume I've been granted access to it. I'd assume the permission somehow doesn't propagate to the dataset viewer tool.
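For reference, a minimal sketch of the load call for this gated dataset (mirroring the reproduction in the comments below; the token value is a placeholder for a user access token):
```python
import datasets

dataset = datasets.load_dataset(
    "facebook/winoground",
    name="facebook--winoground",
    split="train",
    streaming=True,
    use_auth_token="hf_xxx",  # placeholder token of a user who accepted the access conditions
)
next(iter(dataset))  # at the time of this report: 401 Unauthorized
```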
Am I the one who added this dataset ? No
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4139/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4139/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4138 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4138/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4138/comments | https://api.github.com/repos/huggingface/datasets/issues/4138/events | https://github.com/huggingface/datasets/issues/4138 | 1,199,291,730 | I_kwDODunzps5He71S | 4,138 | Incorrect Russian filenames encoding after extraction by datasets.DownloadManager.download_and_extract() | {
"login": "iluvvatar",
"id": 55381086,
"node_id": "MDQ6VXNlcjU1MzgxMDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/55381086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iluvvatar",
"html_url": "https://github.com/iluvvatar",
"followers_url": "https://api.github.com/users/iluvvatar/followers",
"following_url": "https://api.github.com/users/iluvvatar/following{/other_user}",
"gists_url": "https://api.github.com/users/iluvvatar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iluvvatar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iluvvatar/subscriptions",
"organizations_url": "https://api.github.com/users/iluvvatar/orgs",
"repos_url": "https://api.github.com/users/iluvvatar/repos",
"events_url": "https://api.github.com/users/iluvvatar/events{/privacy}",
"received_events_url": "https://api.github.com/users/iluvvatar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"To reproduce:\r\n\r\n```python\r\n>>> import datasets\r\n>>> datasets.get_dataset_split_names('MalakhovIlya/RuREBus', config_name='raw_txt')\r\nTraceback (most recent call last):\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 280, in get_dataset_config_info\r\n for split_generator in builder._split_generators(\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/MalakhovIlya--RuREBus/21046f5f1a0cf91187d68c30918d78d934ec7113ec435e146776d4f28f12c4ed/RuREBus.py\", line 101, in _split_generators\r\n decode_file_names(folder)\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/MalakhovIlya--RuREBus/21046f5f1a0cf91187d68c30918d78d934ec7113ec435e146776d4f28f12c4ed/RuREBus.py\", line 26, in decode_file_names\r\n for root, dirs, files in os.walk(folder, topdown=False):\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/streaming.py\", line 66, in wrapper\r\n return function(*args, use_auth_token=use_auth_token, **kwargs)\r\nTypeError: xwalk() got an unexpected keyword argument 'topdown'\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 323, in get_dataset_split_names\r\n info = get_dataset_config_info(\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 285, in get_dataset_config_info\r\n raise SplitsNotFoundError(\"The split names could not be parsed from the dataset config.\") from err\r\ndatasets.inspect.SplitsNotFoundError: The split names could not be parsed from the dataset config.\r\n```\r\n\r\nIt's not related to the dataset viewer. Maybe @albertvillanova or @lhoestq could help more on this issue.",
"Hi! This issue stems from the fact that `xwalk`, which is a streamable version of `os.walk`, doesn't support the `topdown` param due to `fsspec`'s `walk` also not supporting it, so fixing this issue could be tricky. \r\n\r\n@MalakhovIlyaPavlovich You can avoid the error by tweaking your data processing and not using this param. (and `Path.rename`, which also cannot be streamed) ",
"@mariosasko thank you for your reply. I couldn't reproduce error showed by @severo either on Ubuntu 20.04.3 LTS, Windows 10 and Google Colab environments. But trying to avoid using os.walk(topdown=False) and Path.rename(), In _split_generators I replaced\r\n```\r\ndef decode_file_names(folder):\r\n for root, dirs, files in os.walk(folder, topdown=False):\r\n root = Path(root)\r\n for file in files:\r\n old_name = root / Path(file)\r\n new_name = root / Path(\r\n file.encode('cp437').decode('cp866'))\r\n old_name.rename(new_name)\r\n for dir in dirs:\r\n old_name = root / Path(dir)\r\n new_name = root / Path(dir.encode('cp437').decode('cp866'))\r\n old_name.rename(new_name)\r\n\r\nfolder = dl_manager.download_and_extract(self._RAW_TXT_URLS)['raw_txt']\r\ndecode_file_names(folder)\r\n```\r\nby\r\n```\r\ndef extract(zip_file_path):\r\n p = Path(zip_file_path)\r\n dest_dir = str(p.parent / 'extracted' / p.stem)\r\n os.makedirs(dest_dir, exist_ok=True)\r\n with zipfile.ZipFile(zip_file_path) as archive:\r\n for file_info in tqdm(archive.infolist(), desc='Extracting'):\r\n filename = file_info.filename.encode('cp437').decode('cp866')\r\n target = os.path.join(dest_dir, *filename.split('/'))\r\n os.makedirs(os.path.dirname(target), exist_ok=True)\r\n if not file_info.is_dir():\r\n with archive.open(file_info) as source, open(target, 'wb') as dest:\r\n shutil.copyfileobj(source, dest)\r\n return dest_dir\r\n\r\nzip_file = dl_manager.download(self._RAW_TXT_URLS)['raw_txt']\r\nif not is_url(zip_file):\r\n folder = extract(zip_file)\r\nelse:\r\n folder = None\r\n```\r\nand now everything works well except data viewer for \"raw_txt\" subset: dataset preview on hub shows \"No data.\". As far as I understand dl_manager.download returns original URL when we are calling datasets.get_dataset_split_names and my suspicions are that dataset viewer can do smth similar. I couldn't find information about how it works. I would be very grateful, if you could tell me how to fix this)",
"This is what I get when I try to stream the `raw_txt` subset:\r\n```python\r\n>>> dset = load_dataset(\"MalakhovIlya/RuREBus\", \"raw_txt\", split=\"raw_txt\", streaming=True)\r\n>>> next(iter(dset))\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\nStopIteration\r\n```\r\nSo there is a bug in your script.",
"streaming=True helped me to find solution. I fixed\r\n```\r\ndef extract(zip_file_path):\r\n p = Path(zip_file_path)\r\n dest_dir = str(p.parent / 'extracted' / p.stem)\r\n os.makedirs(dest_dir, exist_ok=True)\r\n with zipfile.ZipFile(zip_file_path) as archive:\r\n for file_info in tqdm(archive.infolist(), desc='Extracting'):\r\n filename = file_info.filename.encode('cp437').decode('cp866')\r\n target = os.path.join(dest_dir, *filename.split('/'))\r\n os.makedirs(os.path.dirname(target), exist_ok=True)\r\n if not file_info.is_dir():\r\n with archive.open(file_info) as source, open(target, 'wb') as dest:\r\n shutil.copyfileobj(source, dest)\r\n return dest_dir\r\n\r\nzip_file = dl_manager.download(self._RAW_TXT_URLS)['raw_txt']\r\nfolder = extract(zip_file)\r\n```\r\nby \r\n```\r\nfolder = dl_manager.download_and_extract(self._RAW_TXT_URLS)['raw_txt']\r\npath = os.path.join(folder, 'MED_txt/unparsed_txt')\r\nfor root, dirs, files in os.walk(path):\r\n decoded_root_name = Path(root).name.encode('cp437').decode('cp866')\r\n```\r\n@mariosasko thank you for your help :)"
] | 1,649,642,833,000 | 1,650,338,146,000 | 1,650,123,989,000 | NONE | null | ## Dataset viewer issue for 'MalakhovIlya/RuREBus'
**Link:** https://huggingface.co/datasets/MalakhovIlya/RuREBus
**Description**
Using os.walk(topdown=False) in DatasetBuilder causes the following error:
Status code: 400
Exception: TypeError
Message: xwalk() got an unexpected keyword argument 'topdown'
Couldn't find where "xwalk" comes from. How can I fix this?
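For illustration, a minimal sketch of the pattern that appears to trigger this (the folder name is a placeholder; in streaming mode `os.walk` is replaced by the library's `xwalk`, which does not accept `topdown`):
```python
import os

# Works locally, but fails under the dataset viewer's streaming mode,
# because the patched xwalk() has no `topdown` parameter:
for root, dirs, files in os.walk("extracted_folder", topdown=False):
    pass
```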
Am I the one who added this dataset ? Yes
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4138/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4138/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4137 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4137/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4137/comments | https://api.github.com/repos/huggingface/datasets/issues/4137/events | https://github.com/huggingface/datasets/pull/4137 | 1,199,000,453 | PR_kwDODunzps419D6A | 4,137 | Add single dataset citations for TweetEval | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The `test_dataset_cards` method is failing with the error:\r\n\r\n```\r\nif error_messages:\r\n> raise ValueError(\"\\n\".join(error_messages))\r\nE ValueError: The following issues have been found in the dataset cards:\r\nE YAML tags:\r\nE The following typing errors are found: {'annotations_creators': \"(Expected `typing.List` with length > 0. Found value of type: `<class 'list'>`, with length: 0.\\n)\\nOR\\n(Expected `typing.Dict` with length > 0. Found value of type: `<class 'list'>`, with length: 0.\\n)\"}\r\n```\r\n\r\nAdding `found` as annotation creators."
] | 1,649,591,514,000 | 1,649,750,242,000 | 1,649,749,875,000 | CONTRIBUTOR | null | This PR adds single data citations as per request of the original creators of the TweetEval dataset.
This is a recent email from the creator:
> Could I ask you a favor? Would you be able to add at the end of the README the citations of the single datasets as well? You can just copy our readme maybe? https://github.com/cardiffnlp/tweeteval#citing-tweeteval
(just to be sure that the creator of the single datasets also get credits when tweeteval is used)
Please let me know if this looks okay or if any changes are needed.
Thanks,
Gunjan
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4137/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4137/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4137",
"html_url": "https://github.com/huggingface/datasets/pull/4137",
"diff_url": "https://github.com/huggingface/datasets/pull/4137.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4137.patch",
"merged_at": 1649749875000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4135 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4135/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4135/comments | https://api.github.com/repos/huggingface/datasets/issues/4135/events | https://github.com/huggingface/datasets/pull/4135 | 1,198,307,610 | PR_kwDODunzps416-Rn | 4,135 | Support streaming xtreme dataset for PAN-X config | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649,485,188,000 | 1,651,826,380,000 | 1,649,660,054,000 | MEMBER | null | Support streaming xtreme dataset for PAN-X config. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4135/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4135/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4135",
"html_url": "https://github.com/huggingface/datasets/pull/4135",
"diff_url": "https://github.com/huggingface/datasets/pull/4135.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4135.patch",
"merged_at": 1649660054000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4134 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4134/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4134/comments | https://api.github.com/repos/huggingface/datasets/issues/4134/events | https://github.com/huggingface/datasets/issues/4134 | 1,197,937,146 | I_kwDODunzps5HZxH6 | 4,134 | ELI5 supporting documents | {
"login": "Slayer-007",
"id": 69015896,
"node_id": "MDQ6VXNlcjY5MDE1ODk2",
"avatar_url": "https://avatars.githubusercontent.com/u/69015896?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Slayer-007",
"html_url": "https://github.com/Slayer-007",
"followers_url": "https://api.github.com/users/Slayer-007/followers",
"following_url": "https://api.github.com/users/Slayer-007/following{/other_user}",
"gists_url": "https://api.github.com/users/Slayer-007/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Slayer-007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Slayer-007/subscriptions",
"organizations_url": "https://api.github.com/users/Slayer-007/orgs",
"repos_url": "https://api.github.com/users/Slayer-007/repos",
"events_url": "https://api.github.com/users/Slayer-007/events{/privacy}",
"received_events_url": "https://api.github.com/users/Slayer-007/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892912,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "Further information is requested"
}
] | open | false | null | [] | null | [
"Hi ! Please post your question on the [forum](https://discuss.huggingface.co/), more people will be able to help you there ;)"
] | 1,649,460,987,000 | 1,649,857,966,000 | null | NONE | null | If I am using dense search to create supporting documents for ELI5, how much time will it take? Because I read somewhere that it takes about 18 hrs. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4134/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4134/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4133 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4133/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4133/comments | https://api.github.com/repos/huggingface/datasets/issues/4133/events | https://github.com/huggingface/datasets/issues/4133 | 1,197,830,623 | I_kwDODunzps5HZXHf | 4,133 | HANS dataset preview broken | {
"login": "pietrolesci",
"id": 61748653,
"node_id": "MDQ6VXNlcjYxNzQ4NjUz",
"avatar_url": "https://avatars.githubusercontent.com/u/61748653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pietrolesci",
"html_url": "https://github.com/pietrolesci",
"followers_url": "https://api.github.com/users/pietrolesci/followers",
"following_url": "https://api.github.com/users/pietrolesci/following{/other_user}",
"gists_url": "https://api.github.com/users/pietrolesci/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pietrolesci/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pietrolesci/subscriptions",
"organizations_url": "https://api.github.com/users/pietrolesci/orgs",
"repos_url": "https://api.github.com/users/pietrolesci/repos",
"events_url": "https://api.github.com/users/pietrolesci/events{/privacy}",
"received_events_url": "https://api.github.com/users/pietrolesci/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3287858981,
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming",
"name": "streaming",
"color": "fef2c0",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [
"The dataset cannot be loaded, be it in normal or streaming mode.\r\n\r\n```python\r\n>>> import datasets\r\n>>> dataset=datasets.load_dataset(\"hans\", split=\"train\", streaming=True)\r\n>>> next(iter(dataset))\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 497, in __iter__\r\n for key, example in self._iter():\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 494, in _iter\r\n yield from ex_iterable\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 87, in __iter__\r\n yield from self.generate_examples_fn(**self.kwargs)\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/hans/1bbcb735c482acd54f2e118074b59cfd2bf5f7a5a285d4d540d1e632216672ac/hans.py\", line 121, in _generate_examples\r\n for idx, line in enumerate(open(filepath, \"rb\")):\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/spec.py\", line 1595, in __next__\r\n out = self.readline()\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/spec.py\", line 1592, in readline\r\n return self.readuntil(b\"\\n\")\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/spec.py\", line 1581, in readuntil\r\n self.seek(start + found + len(char))\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py\", line 676, in seek\r\n raise ValueError(\"Cannot seek streaming HTTP file\")\r\nValueError: Cannot seek streaming HTTP file\r\n>>> dataset=datasets.load_dataset(\"hans\", split=\"train\", streaming=False)\r\nDownloading and preparing dataset hans/plain_text (download: 29.51 MiB, generated: 30.34 MiB, post-processed: Unknown size, total: 59.85 MiB) to /home/slesage/.cache/huggingface/datasets/hans/plain_text/1.0.0/1bbcb735c482acd54f2e118074b59cfd2bf5f7a5a285d4d540d1e632216672ac...\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1687, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 605, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 1104, in _download_and_prepare\r\n super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 694, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 1087, in _prepare_split\r\n for key, record in logging.tqdm(\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/tqdm/std.py\", line 1180, in __iter__\r\n for obj in iterable:\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/hans/1bbcb735c482acd54f2e118074b59cfd2bf5f7a5a285d4d540d1e632216672ac/hans.py\", line 121, in 
_generate_examples\r\n for idx, line in enumerate(open(filepath, \"rb\")):\r\nValueError: readline of closed file\r\n```\r\n\r\n",
"Hi! I've opened a PR that should make this dataset stremable. You can test it as follows:\r\n```python\r\nfrom datasets import load_dataset\r\ndset = load_dataset(\"hans\", split=\"train\", streaming=True, revision=\"49decd29839c792ecc24ac88f861cbdec30c1c40\")\r\n```\r\n\r\n@severo The current script doesn't throw an error in normal mode (only in streaming mode) on my local machine or in Colab. Can you update your installation of `datasets` and see if that fixes the issue?",
"Thanks for this. It works well, thanks! The dataset viewer is using https://github.com/huggingface/datasets/releases/tag/2.0.0, I'm eager to upgrade to 2.0.1 😉"
] | 1,649,451,975,000 | 1,649,851,054,000 | 1,649,851,054,000 | NONE | null | ## Dataset viewer issue for '*hans*'
**Link:** [https://huggingface.co/datasets/hans](https://huggingface.co/datasets/hans)
HANS dataset preview is broken with error 400
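For reference, a minimal sketch (taken from the reproduction in the comments below) that hits the underlying streaming error:
```python
from datasets import load_dataset

dataset = load_dataset("hans", split="train", streaming=True)
next(iter(dataset))  # ValueError: Cannot seek streaming HTTP file
```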
Am I the one who added this dataset ? No
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4133/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4133/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4132 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4132/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4132/comments | https://api.github.com/repos/huggingface/datasets/issues/4132/events | https://github.com/huggingface/datasets/pull/4132 | 1,197,661,720 | PR_kwDODunzps41460R | 4,132 | Support streaming xtreme dataset for PAWS-X config | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649,442,332,000 | 1,651,826,382,000 | 1,649,451,764,000 | MEMBER | null | Support streaming xtreme dataset for PAWS-X config. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4132/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4132/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4132",
"html_url": "https://github.com/huggingface/datasets/pull/4132",
"diff_url": "https://github.com/huggingface/datasets/pull/4132.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4132.patch",
"merged_at": 1649451764000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4131 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4131/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4131/comments | https://api.github.com/repos/huggingface/datasets/issues/4131/events | https://github.com/huggingface/datasets/pull/4131 | 1,197,472,249 | PR_kwDODunzps414Zt1 | 4,131 | Support streaming xtreme dataset for udpos config | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649,431,849,000 | 1,651,826,386,000 | 1,649,435,287,000 | MEMBER | null | Support streaming xtreme dataset for udpos config. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4131/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4131/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4131",
"html_url": "https://github.com/huggingface/datasets/pull/4131",
"diff_url": "https://github.com/huggingface/datasets/pull/4131.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4131.patch",
"merged_at": 1649435287000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4130 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4130/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4130/comments | https://api.github.com/repos/huggingface/datasets/issues/4130/events | https://github.com/huggingface/datasets/pull/4130 | 1,197,456,857 | PR_kwDODunzps414Wqx | 4,130 | Add SBU Captions Photo Dataset | {
"login": "thomasw21",
"id": 24695242,
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomasw21",
"html_url": "https://github.com/thomasw21",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649,431,059,000 | 1,649,760,451,000 | 1,649,760,089,000 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4130/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4130/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4130",
"html_url": "https://github.com/huggingface/datasets/pull/4130",
"diff_url": "https://github.com/huggingface/datasets/pull/4130.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4130.patch",
"merged_at": 1649760089000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4129 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4129/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4129/comments | https://api.github.com/repos/huggingface/datasets/issues/4129/events | https://github.com/huggingface/datasets/issues/4129 | 1,197,376,796 | I_kwDODunzps5HXoUc | 4,129 | dataset metadata for reproducibility | {
"login": "nbroad1881",
"id": 24982805,
"node_id": "MDQ6VXNlcjI0OTgyODA1",
"avatar_url": "https://avatars.githubusercontent.com/u/24982805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nbroad1881",
"html_url": "https://github.com/nbroad1881",
"followers_url": "https://api.github.com/users/nbroad1881/followers",
"following_url": "https://api.github.com/users/nbroad1881/following{/other_user}",
"gists_url": "https://api.github.com/users/nbroad1881/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nbroad1881/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nbroad1881/subscriptions",
"organizations_url": "https://api.github.com/users/nbroad1881/orgs",
"repos_url": "https://api.github.com/users/nbroad1881/repos",
"events_url": "https://api.github.com/users/nbroad1881/events{/privacy}",
"received_events_url": "https://api.github.com/users/nbroad1881/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 1,649,427,448,000 | 1,649,427,448,000 | null | NONE | null | When pulling a dataset from the hub, it would be useful to have some metadata about the specific dataset and version that is used. The metadata could then be passed to the `Trainer` which could then be saved to a model card. This is useful for people who run many experiments on different versions (commits/branches) of the same dataset.
The dataset could have a list of “source datasets” metadata and ignore what happens to them before arriving in the Trainer (i.e. ignore mapping, filtering, etc.).
Here is a basic representation (made by @lhoestq )
```python
>>> from datasets import load_dataset
>>>
>>> my_dataset = load_dataset(...)["train"]
>>> my_dataset = my_dataset.map(...)
>>>
>>> my_dataset.sources
[HFHubDataset(repo_id=..., revision=..., arguments={...})]
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4129/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4129/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4128 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4128/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4128/comments | https://api.github.com/repos/huggingface/datasets/issues/4128/events | https://github.com/huggingface/datasets/pull/4128 | 1,197,326,311 | PR_kwDODunzps4138I6 | 4,128 | More robust `cast_to_python_objects` in `TypedSequence` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649,424,815,000 | 1,649,858,861,000 | 1,649,858,476,000 | CONTRIBUTOR | null | Adds a fallback to run an expensive version of `cast_to_python_objects` which exhaustively checks entire lists to avoid the `ArrowInvalid: Could not convert` error in `TypedSequence`. Currently, this error can happen in situations where only some images are decoded in `map`, in which case `cast_to_python_objects` fails to recognize that it needs to cast `PIL.Image` objects if they are not at the beginning of the sequence and stops after the first image dictionary (e.g., if `data` is `[{"bytes": None, "path": "some path"}, PIL.Image(), ...]`)
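For illustration, a minimal sketch of the kind of mixed list described above (the values are placeholders):
```python
from PIL import Image

data = [
    {"bytes": None, "path": "some path"},  # still-encoded image entry
    Image.new("RGB", (8, 8)),              # already-decoded PIL image
]
# Before this fix, casting such a list could stop after the first dictionary and
# fail with `ArrowInvalid: Could not convert ...`.
```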
Fix #4124 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4128/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4128/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4128",
"html_url": "https://github.com/huggingface/datasets/pull/4128",
"diff_url": "https://github.com/huggingface/datasets/pull/4128.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4128.patch",
"merged_at": 1649858476000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4127 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4127/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4127/comments | https://api.github.com/repos/huggingface/datasets/issues/4127/events | https://github.com/huggingface/datasets/pull/4127 | 1,197,297,756 | PR_kwDODunzps4132EN | 4,127 | Add configs with processed data in medical_dialog dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649,423,296,000 | 1,651,826,390,000 | 1,649,434,851,000 | MEMBER | null | There exist processed data files that do not require parsing the raw data files (which can take a long time).
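For example (the config name below is an assumption for illustration; see the dataset card for the exact names of the new processed configs):
```python
from datasets import load_dataset

# Loads the pre-processed data directly, without parsing the raw dialog files.
dataset = load_dataset("medical_dialog", "processed.en")
```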
Fix #4122. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4127/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4127/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4127",
"html_url": "https://github.com/huggingface/datasets/pull/4127",
"diff_url": "https://github.com/huggingface/datasets/pull/4127.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4127.patch",
"merged_at": 1649434851000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4126 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4126/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4126/comments | https://api.github.com/repos/huggingface/datasets/issues/4126/events | https://github.com/huggingface/datasets/issues/4126 | 1,196,665,194 | I_kwDODunzps5HU6lq | 4,126 | dataset viewer issue for common_voice | {
"login": "laphang",
"id": 24724502,
"node_id": "MDQ6VXNlcjI0NzI0NTAy",
"avatar_url": "https://avatars.githubusercontent.com/u/24724502?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/laphang",
"html_url": "https://github.com/laphang",
"followers_url": "https://api.github.com/users/laphang/followers",
"following_url": "https://api.github.com/users/laphang/following{/other_user}",
"gists_url": "https://api.github.com/users/laphang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/laphang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/laphang/subscriptions",
"organizations_url": "https://api.github.com/users/laphang/orgs",
"repos_url": "https://api.github.com/users/laphang/repos",
"events_url": "https://api.github.com/users/laphang/events{/privacy}",
"received_events_url": "https://api.github.com/users/laphang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
},
{
"id": 4027368468,
"node_id": "LA_kwDODunzps7wDMQU",
"url": "https://api.github.com/repos/huggingface/datasets/labels/audio_column",
"name": "audio_column",
"color": "F83ACF",
"default": false,
"description": ""
}
] | closed | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Yes, it's a known issue, and we expect to fix it soon.",
"Fixed.\r\n\r\n<img width=\"1393\" alt=\"Capture d’écran 2022-04-25 à 15 42 05\" src=\"https://user-images.githubusercontent.com/1676121/165101176-d729d85b-efff-45a8-bad1-b69223edba5f.png\">\r\n"
] | 1,649,374,468,000 | 1,650,894,137,000 | 1,650,894,136,000 | NONE | null | ## Dataset viewer issue for 'common_voice'
**Link:** https://huggingface.co/datasets/common_voice
Server Error
Status code: 400
Exception: TypeError
Message: __init__() got an unexpected keyword argument 'audio_column'
Am I the one who added this dataset? No
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4126/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4126/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4125 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4125/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4125/comments | https://api.github.com/repos/huggingface/datasets/issues/4125/events | https://github.com/huggingface/datasets/pull/4125 | 1,196,633,936 | PR_kwDODunzps411qeR | 4,125 | BIG-bench | {
"login": "andersjohanandreassen",
"id": 43357549,
"node_id": "MDQ6VXNlcjQzMzU3NTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/43357549?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andersjohanandreassen",
"html_url": "https://github.com/andersjohanandreassen",
"followers_url": "https://api.github.com/users/andersjohanandreassen/followers",
"following_url": "https://api.github.com/users/andersjohanandreassen/following{/other_user}",
"gists_url": "https://api.github.com/users/andersjohanandreassen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andersjohanandreassen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andersjohanandreassen/subscriptions",
"organizations_url": "https://api.github.com/users/andersjohanandreassen/orgs",
"repos_url": "https://api.github.com/users/andersjohanandreassen/repos",
"events_url": "https://api.github.com/users/andersjohanandreassen/events{/privacy}",
"received_events_url": "https://api.github.com/users/andersjohanandreassen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"> It looks like the CI is failing on windows because our windows CI is unable to clone the bigbench repository (maybe it has to do with filenames that are longer than 256 characters, which windows don't like). Could the smaller installation of bigbench via pip solve this issue ?\r\n> Otherwise we can see how to remove this limitation in our windows CI.\r\n\r\nI'm not sure.\r\nIf it's git's fault that it can't handle the long filenames, it will possibly be resolved by the pip install. If it's an issue with windows not liking long filenames after it's installed, then it will not be resolved.\r\nI don't have a windows computer to try it on, but I might be able to tweek this PR and do an experiment to find out. \r\nWe're waiting for a quota increase for the pip install (https://github.com/pypa/pypi-support/issues/1782). It's been pending for 2-3 weeks, and I don't have an estimate for when it will be resolved. \r\n\r\n>Regarding the dummy data zip files, I think we can just keep datasets/bigbench/dummy/abstract_narrative_understanding/1.0.0/dummy_data.zip and remove all the other ones. We just require to have at least one dummy_data.zip file.\r\n\r\nSounds great. I will trim that down. ",
"Do you know what are the other tests dependencies that have conflicts with bigbench ? I can try to split the CI to end up with a compatible list of test dependencies",
"Hi @lhoestq,\r\n\r\nI haven't played with eliminating requirements form the test dependencies, and I've been trying to resolve this by modifying the bigbench repo to become compatible. \r\nIn the original bigbench repo, the version requirements were strict, and specifically it had a datasets==1.17.0 requirement which was causing trouble. \r\nI'm working on PR https://github.com/google/BIG-bench/pull/766 to get some more flexible versions that might be compatible with the test dependencies in HF/datasets.\r\nWe're somewhat flexible in modifying these version numbers if we can figure out what the exact conflict is. \r\n\r\nI've spent some time experimenting with different versions, but I don't have a very efficient way of doing this debugging on my work computer (which for some reason doesn't produce the same sets of errors running python 3.9 instead of 3.6 or 3.7 in the tests). \r\nIt currently fails at \r\n> The conflict is caused by:\r\n> bert-score 0.3.6 depends on matplotlib\r\n> big-bench 0.0.1 depends on matplotlib<4.0 and >=3.5.1\r\n\r\nwhich doesn't seem like it can be the real issue. \r\n\r\nIf you have any advice for how to resolve these conflicts, that would be greatly appreciated!",
"Hi again @lhoestq, \r\nAfter some more or less random guessing of conflicting packages, I've managed to find a configuration that seems to be compatible with HF/datasets. \r\n\r\nThe errors went away after removing version limits on matplotlib and scipy, and loosening numpy from 1.19 -> 1.17 in the bigbench requirements. \r\n\r\nI might do some more tweaking to see if it lets me set some minimal limits on matplotlib and scipy, but I think we at least can move forward.\r\n\r\nThe WIN tests are still failing, now because of \r\n\r\n> Did not find path entry C:\\tools\\miniconda3\\bin\r\n>C:\\tools\\miniconda3\\envs\\py37\\python.exe: No module named pytest\r\n\r\nI have no way of debugging this locally, and unless there's some way to get more verbose logs, I don't know why it's not finding pytest. Would you be able to take a quick look? \r\n\r\nUpdate: Actually, I see it's still failing because of the long filenames. So perhaps the pytest error is just because the previous steps failed. ",
"One more update on the WIN errors. \r\nI think all the long filenames are in files in the github repo that does not need to be included. \r\nWe will try to remove them .",
"Hi ! The remaining error seems to be a `UnicodeDecodeError` from `setup.py`. I think you can fix your setup.py:\r\n```diff\r\n- with open(os.path.join(os.path.dirname(__file__), fname)) as f:\r\n+ with open(os.path.join(os.path.dirname(__file__), fname), encoding=\"utf-8\") as f:\r\n```\r\nIndeed on windows, when you `open` a file it doesn't always use \"utf-8\" by default",
"Hi @lhoestq, \r\nThe dependency issues seems to now be resolved 🎉 \r\n\r\nNow, the WIN tests are failing at\r\n> ERROR tests/test_arrow_dataset.py::test_dummy_dataset_serialize_s3 - botocore...\r\n> ERROR tests/test_dataset_dict.py::test_dummy_dataset_serialize_s3 - botocore...\r\n\r\nIs this testing the dummy dataset that's added in bigbench? If so, I might need some help getting the right format in.\r\n\r\nThe error message I'm seeing is \r\n> raise EndpointConnectionError(endpoint_url=request.url, error=e)\r\n> E botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: \"http://127.0.0.1:5555/test\"\r\n\r\nWhich seems unrelated, but perhaps the real issue is somewhere I'm not seeing? ",
"Woohoo awesome !\r\n\r\nLet me check the CI error",
"Can you try to re-run the CI, just in case CircleCI messed up ?",
"Hi @lhoestq, \r\nRerunning did not seem to solve the problem. \r\nThe `test_dummy_dataset_serialize_s3` error still seems to remain.",
"Hi again @lhoestq, \r\nI'm not sure if this is informative or not in terms of debugging, but I deleted the dummy data and the errors for windows still fail and the others still pass. \r\nDo you have any idea what could be causing this error on windows?",
"_The documentation is not available anymore as the PR was closed or merged._",
"Now the last question: let's have the dataset under`google/bigbench` @andersjohanandreassen ?\r\n\r\nI think it would be nicer, this way you and anyone in your team can update the dataset card whevener you want without going through a github PR. You just need to join the https://huggingface.co/google page using your google email :)",
"Hi @lhoestq, \r\n\r\nThank you so much for the help! I really appreciate it!!!\r\n\r\nAfter some discussion with the other bigbench organizers, I think there is a slight preference for bigbench to not be under google/bigbench since this is a collaboration with researchers from many different institutions/organizations beyond Google. \r\n\r\nI see the drawback with the updates to the dataset card having to go through a PR, but hopefully that won't be very frequent. \r\n\r\nWe're finalizing putting the bigbench api on pip, so once that's finalized I just need to update the setup.py with the correct dependency and I think we are ready to merge. ",
"Ok perfect, thank you !",
"I noticed that in the latest windows CI run it takes forever to install the dependencies, was there any change in the bigbench dependencies recently ?",
"oh, sorry! I just did a double check on the dependencies, and it seems like there is at least one left that should have been removed. There's also one new one added. \r\nLet me get those removed again. Will ping you here when it's updated. ",
"It looks like there is a circular dependency in `bigbench` at https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz\r\n\r\n```python\r\n>>> import bigbench.api.util as bb_utils\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/bigbench/api/util.py\", line 29, in <module>\r\n import bigbench.models.query_logging_model as query_logging_model\r\n File \"/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/bigbench/models/query_logging_model.py\", line 23, in <module>\r\n import bigbench.api.util as util\r\nAttributeError: module 'bigbench.api' has no attribute 'util'\r\n```",
"Hi @lhoestq , \r\nI think we are ready to merge! \r\n\r\nI have one minor question that I haven't been able to figure out: \r\nIs there a way to bypass the `verify_infos` from triggering? I have `max_examples` as an argument to allow for selecting a fixed subset of the datasets (some of the tasks have *very* many examples). But this is a variable that's not specified by the configs, so it raises an `NonMatchingSplitsSizesError`.\r\nI wasn't able to work my way around this, but perhaps there is a way to bypass this that I'm not seeing?\r\nIf this cannot be done, I'm happy to ignore this for now.\r\n\r\nRegarding pypi, we are working on a release there, but I'm told there is some issue that there is a problem regarding the upload, and we are not sure when it will be resolved, and it's not in my control. \r\nI think merging this PR with the GCS is a great idea, and I will open a new PR when the pypi version is ready. ",
"Cool ! Merging then :D\r\n\r\n> Is there a way to bypass the verify_infos from triggering? I have max_examples as an argument to allow for selecting a fixed subset of the datasets (some of the tasks have very many examples). But this is a variable that's not specified by the configs, so it raises an NonMatchingSplitsSizesError.\r\n\r\nThis is a bug, I opened an issue [here](https://github.com/huggingface/datasets/issues/4462). It should be easy to fix :)",
"The bigbench page is available here ! https://huggingface.co/datasets/bigbench\r\n\r\nI think we can update the dataset viewer to install bigbench on it, but since this is production code I'd rather use the version on pypi for bigbench when it comes out"
] | 1,649,370,810,000 | 1,654,711,068,000 | 1,654,709,552,000 | CONTRIBUTOR | null | This PR adds all BIG-bench json tasks to huggingface/datasets. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4125/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4125/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4125",
"html_url": "https://github.com/huggingface/datasets/pull/4125",
"diff_url": "https://github.com/huggingface/datasets/pull/4125.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4125.patch",
"merged_at": 1654709552000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4124 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4124/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4124/comments | https://api.github.com/repos/huggingface/datasets/issues/4124/events | https://github.com/huggingface/datasets/issues/4124 | 1,196,469,842 | I_kwDODunzps5HUK5S | 4,124 | Image decoding often fails when transforming Image datasets | {
"login": "RafayAK",
"id": 17025191,
"node_id": "MDQ6VXNlcjE3MDI1MTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/17025191?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RafayAK",
"html_url": "https://github.com/RafayAK",
"followers_url": "https://api.github.com/users/RafayAK/followers",
"following_url": "https://api.github.com/users/RafayAK/following{/other_user}",
"gists_url": "https://api.github.com/users/RafayAK/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RafayAK/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RafayAK/subscriptions",
"organizations_url": "https://api.github.com/users/RafayAK/orgs",
"repos_url": "https://api.github.com/users/RafayAK/repos",
"events_url": "https://api.github.com/users/RafayAK/events{/privacy}",
"received_events_url": "https://api.github.com/users/RafayAK/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"A quick hack I have found is that we can call the image first before running the transforms and it makes sure the image is decoded before being passed on.\r\n\r\nFor this I just needed to add `example['img'] = example['img']` to the top of my `generate_flipped_data` function, defined above, so that image decode in invoked.\r\n\r\nAfter this minor change this function works:\r\n```python\r\ndef generate_flipped_data(example, p=0.5):\r\n \"\"\"\r\n A Dataset mapping functions that transforms some of the image up-side-down.\r\n If the probability value (p) is 0.5 approximately half the images will be flipped upside-down\r\n Args:\r\n example: An example from the dataset containing a Python dictionary with \"img\" and \"is_flipped\" key-value pair\r\n p: probability of flipping the image up-side-down, Default 0.5\r\n\r\n Returns:\r\n example: A Dataset object\r\n\r\n \"\"\"\r\n example['img'] = example['img'] # <<< This is the only change\r\n if rng.random() > p: # the flip the image and set is_flipped column to 1\r\n example['img'] = example['img'].transpose(\r\n 1) # ImageOps.flip(example['img']) #example['img'].transpose(Image.FLIP_TOP_BOTTOM)\r\n example['is_flipped'] = 1\r\n\r\n return example\r\n```",
"Hi @RafayAK, thanks for reporting.\r\n\r\nCurrent implementation of the Image feature performs the decoding only if the \"img\" field is accessed by the mapped function.\r\n\r\nIn your original `generate_flipped_data` function:\r\n- it only accesses the \"img\" field (and thus performs decoding) if `rng.random() > p`;\r\n- on the other hand, for the cases where `rng.random() <= p`, the \"img\" field is not accessed and thus no decoding is performed for those examples\r\n\r\nBy adding the code line `example['img'] = example['img']`, you make sure the \"img\" field is accessed in all cases, and the decoding is done for all examples.\r\n\r\nAlso note that there is a little bug in your implementation: `p` is not the probability of flipping, but the probability of not-flipping; the larger is `p`, the smaller is the probability of flipping.\r\n\r\nSome refactoring (fixing also `p`):\r\n```python\r\ndef generate_flipped_data(example, p=0.5):\r\n \"\"\"\r\n A Dataset mapping functions that transforms some of the image up-side-down.\r\n If the probability value (p) is 0.5 approximately half the images will be flipped upside-down.\r\n\r\n Args:\r\n example: An example from the dataset containing a Python dictionary with \"img\" and \"is_flipped\" key-value pair\r\n p: probability of flipping the image up-side-down, Default 0.5\r\n\r\n Returns:\r\n example: A Dataset object\r\n\r\n \"\"\"\r\n do_flip = rng.random() < p # Note the \"<\" sign here instead of \">\"\r\n example['img'] = example['img'].transpose(1) if do_flip else example['img'] # Note \"img\" is always accessed\r\n example['is_flipped'] = 1 if do_flip else 0\r\n return example",
"@albertvillanova Thanks for letting me know this is intended behavior. The docs are severely lacking on this, if I hadn't posted this here I would have never found out how I'm actually supposed to modify images in a Dataset object.",
"@albertvillanova Secondly if you check the error message it shows that around 1999 images were successfully created, I'm pretty sure some of them were also flipped during the process. Back to my main contention, sometimes the decoding takes place other times it fails. \r\n\r\nI suppose to run `map` on any dataset all the examples should be invoked even if on some of them we end up doing nothing, is that right?",
"Hi @RafayAK! I've opened a PR with the fix, which adds a fallback to reattempt casting to PyArrow format with a more robust (but more expensive) procedure if the first attempt fails. Feel free to test it by installing `datasets` from the PR branch with the following command:\r\n```\r\npip install git+https://github.com/huggingface/datasets.git@fix-4124\r\n```",
"@mariosasko I'll try this right away and report back.",
"@mariosasko Thanks a lot for looking into this, now the `map` function at least behaves as one would expect a function to behave. \r\n\r\nLooking forward to exploring Hugging Face more and even contributing 😃.\r\n\r\n```bash\r\n $ conda list | grep datasets\r\ndatasets 2.0.1.dev0 pypi_0 pypi\r\n\r\n```\r\n\r\n```python\r\ndef preprocess_data(dataset):\r\n \"\"\"\r\n Helper funtion to pre-process HuggingFace Cifar-100 Dataset to remove fine_label and coarse_label columns and\r\n add is_flipped column\r\n Args:\r\n dataset: HuggingFace CIFAR-100 Dataset Object\r\n\r\n Returns:\r\n new_dataset: A Dataset object with \"img\" and \"is_flipped\" columns only\r\n\r\n \"\"\"\r\n # remove fine_label and coarse_label columns\r\n new_dataset = dataset.remove_columns(['fine_label', 'coarse_label'])\r\n # add the column for is_flipped\r\n new_dataset = new_dataset.add_column(name=\"is_flipped\", column=np.zeros((len(new_dataset)), dtype=np.uint8))\r\n\r\n return new_dataset\r\n\r\n\r\ndef generate_flipped_data(example, p=0.5):\r\n \"\"\"\r\n A Dataset mapping functions that transforms some of the image up-side-down.\r\n If the probability value (p) is 0.5 approximately half the images will be flipped upside-down\r\n Args:\r\n example: An example from the dataset containing a Python dictionary with \"img\" and \"is_flipped\" key-value pair\r\n p: probability of flipping the image up-side-down, Default 0.5\r\n\r\n Returns:\r\n example: A Dataset object\r\n\r\n \"\"\"\r\n # example['img'] = example['img']\r\n if rng.random() > p: # the flip the image and set is_flipped column to 1\r\n example['img'] = example['img'].transpose(\r\n 1) # ImageOps.flip(example['img']) #example['img'].transpose(Image.FLIP_TOP_BOTTOM)\r\n example['is_flipped'] = 1\r\n\r\n return example\r\n\r\nmy_test = preprocess_data(test_dataset)\r\nmy_test = my_test.map(generate_flipped_data)\r\n```\r\n\r\nThe output now show the function was applied successfully:\r\n``` bash\r\n/home/rafay/anaconda3/envs/pytorch_new/bin/python /home/rafay/Documents/you_only_live_once/upside_down_detector/create_dataset.py\r\nDownloading builder script: 5.61kB [00:00, 3.16MB/s] \r\nDownloading metadata: 4.21kB [00:00, 2.56MB/s] \r\nReusing dataset cifar100 (/home/rafay/.cache/huggingface/datasets/cifar100/cifar100/1.0.0/f365c8b725c23e8f0f8d725c3641234d9331cd2f62919d1381d1baa5b3ba3142)\r\nReusing dataset cifar100 (/home/rafay/.cache/huggingface/datasets/cifar100/cifar100/1.0.0/f365c8b725c23e8f0f8d725c3641234d9331cd2f62919d1381d1baa5b3ba3142)\r\n100%|██████████| 10000/10000 [00:01<00:00, 5149.15ex/s]\r\n```\r\n"
] | 1,649,359,045,000 | 1,649,858,476,000 | 1,649,858,476,000 | NONE | null | ## Describe the bug
When transforming/modifying images in an image dataset using the `map` function, the PIL images often fail to decode before the image transforms are applied, causing errors.
Using a debugger, it is easy to see what the problem is: the Image decode invocation does not take place, and the image passed around is still raw bytes:
```
[{'bytes': b'\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x00 \x00\x00\x00 \x08\x02\x00\x00\x00\xfc\x18\xed\xa3\x00\x00\x08\x02IDATx\x9cEVIs[\xc7\x11\xeemf\xde\x82\x8d\x80\x08\x89"\xb5V\\\xb6\x94(\xe5\x9f\x90\xca5\x7f$\xa7T\xe5\x9f&9\xd9\x8a\\.\xdb\xa4$J\xa4\x00\x02x\xc0{\xb3t\xe7\x00\xca\x99\xd3\\f\xba\xba\xbf\xa5?|\xfa\xf4\xa2\xeb\xba\xedv\xa3f^\xf8\xd5\x0bY\xb6\x10\xb3\xaaDq\xcd\x83\x87\xdf5\xf3gZ\x1a\x04\x0f\xa0fp\xfa\xe0\xd4\x07?\x9dN\xc4\xb1\x99\xfd\xf2\xcb/\x97\x97\x97H\xa2\xaaf\x16\x82\xaf\xeb\xca{\xbf\xd9l.\xdf\x7f\xfa\xcb_\xff&\x88\x08\x00\x80H\xc0\x80@.;\x0f\x8c@#v\xe3\xe5\xfc\xd1\x9f\xee6q\xbf\xdf\xa6\x14\'\x93\xf1\xc3\xe5\xe3\xd1x\x14c\x8c1\xa5\x1c\x9dsM\xd3\xb4\xed\x08\x89SJ)\xa5\xedv\xbb^\xafNO\x97D\x84Hf ....
```
## Steps to reproduce the bug
```python
from datasets import load_dataset, Dataset
import numpy as np
# seeded NumPy random number generator for reproducible results.
rng = np.random.default_rng(seed=0)
test_dataset = load_dataset('cifar100', split="test")
def preprocess_data(dataset):
"""
Helper function to pre-process HuggingFace Cifar-100 Dataset to remove fine_label and coarse_label columns and
add is_flipped column
Args:
dataset: HuggingFace CIFAR-100 Dataset Object
Returns:
new_dataset: A Dataset object with "img" and "is_flipped" columns only
"""
# remove fine_label and coarse_label columns
new_dataset = dataset.remove_columns(['fine_label', 'coarse_label'])
# add the column for is_flipped
new_dataset = new_dataset.add_column(name="is_flipped", column=np.zeros((len(new_dataset)), dtype=np.uint8))
return new_dataset
def generate_flipped_data(example, p=0.5):
"""
A Dataset mapping function that turns some of the images upside-down.
If the probability value (p) is 0.5 approximately half the images will be flipped upside-down
Args:
example: An example from the dataset containing a Python dictionary with "img" and "is_flipped" key-value pair
p: the probability of flipping the image upside-down, Default 0.5
Returns:
example: A Dataset object
"""
# example['img'] = example['img']
if rng.random() > p:  # then flip the image and set the is_flipped column to 1
example['img'] = example['img'].transpose(
1) # ImageOps.flip(example['img']) #example['img'].transpose(Image.FLIP_TOP_BOTTOM)
example['is_flipped'] = 1
return example
my_test = preprocess_data(test_dataset)
my_test = my_test.map(generate_flipped_data)
```
## Expected results
The dataset should be transformed without problems.
## Actual results
```
/home/rafay/anaconda3/envs/pytorch_new/bin/python /home/rafay/Documents/you_only_live_once/upside_down_detector/create_dataset.py
Reusing dataset cifar100 (/home/rafay/.cache/huggingface/datasets/cifar100/cifar100/1.0.0/f365c8b725c23e8f0f8d725c3641234d9331cd2f62919d1381d1baa5b3ba3142)
Reusing dataset cifar100 (/home/rafay/.cache/huggingface/datasets/cifar100/cifar100/1.0.0/f365c8b725c23e8f0f8d725c3641234d9331cd2f62919d1381d1baa5b3ba3142)
20%|█▉ | 1999/10000 [00:00<00:01, 5560.44ex/s]
Traceback (most recent call last):
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2326, in _map_single
writer.write(example)
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 441, in write
self.write_examples_on_file()
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 399, in write_examples_on_file
self.write_batch(batch_examples=batch_examples)
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 492, in write_batch
arrays.append(pa.array(typed_sequence))
File "pyarrow/array.pxi", line 230, in pyarrow.lib.array
File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 185, in __arrow_array__
out = pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))
File "pyarrow/array.pxi", line 316, in pyarrow.lib.array
File "pyarrow/array.pxi", line 39, in pyarrow.lib._sequence_to_array
File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Could not convert <PIL.Image.Image image mode=RGB size=32x32 at 0x7F56AEE61DE0> with type Image: did not recognize Python value type when inferring an Arrow data type
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/rafay/Documents/you_only_live_once/upside_down_detector/create_dataset.py", line 55, in <module>
my_test = my_test.map(generate_flipped_data)
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 1953, in map
return self._map_single(
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 519, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 486, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/fingerprint.py", line 458, in wrapper
out = func(self, *args, **kwargs)
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2360, in _map_single
writer.finalize()
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 522, in finalize
self.write_examples_on_file()
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 399, in write_examples_on_file
self.write_batch(batch_examples=batch_examples)
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 492, in write_batch
arrays.append(pa.array(typed_sequence))
File "pyarrow/array.pxi", line 230, in pyarrow.lib.array
File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 185, in __arrow_array__
out = pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))
File "pyarrow/array.pxi", line 316, in pyarrow.lib.array
File "pyarrow/array.pxi", line 39, in pyarrow.lib._sequence_to_array
File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Could not convert <PIL.Image.Image image mode=RGB size=32x32 at 0x7F56AEE61DE0> with type Image: did not recognize Python value type when inferring an Arrow data type
Process finished with exit code 1
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: Linux(Fedora 35)
- Python version: 3.10
- PyArrow version: 7.0.0
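Update: the workaround described in the comments (touching the `img` field so the decode always runs) looks roughly like this sketch; it reuses the `rng` defined in the snippet above:
```python
def generate_flipped_data_workaround(example, p=0.5):
    # Accessing the field unconditionally forces the Image feature to decode,
    # so every example reaches the transform as a PIL.Image rather than raw bytes.
    example['img'] = example['img']
    if rng.random() > p:
        example['img'] = example['img'].transpose(1)
        example['is_flipped'] = 1
    return example

# my_test = my_test.map(generate_flipped_data_workaround)
```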
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4124/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4124/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4123 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4123/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4123/comments | https://api.github.com/repos/huggingface/datasets/issues/4123/events | https://github.com/huggingface/datasets/issues/4123 | 1,196,367,512 | I_kwDODunzps5HTx6Y | 4,123 | Building C4 takes forever | {
"login": "StellaAthena",
"id": 15899312,
"node_id": "MDQ6VXNlcjE1ODk5MzEy",
"avatar_url": "https://avatars.githubusercontent.com/u/15899312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StellaAthena",
"html_url": "https://github.com/StellaAthena",
"followers_url": "https://api.github.com/users/StellaAthena/followers",
"following_url": "https://api.github.com/users/StellaAthena/following{/other_user}",
"gists_url": "https://api.github.com/users/StellaAthena/gists{/gist_id}",
"starred_url": "https://api.github.com/users/StellaAthena/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StellaAthena/subscriptions",
"organizations_url": "https://api.github.com/users/StellaAthena/orgs",
"repos_url": "https://api.github.com/users/StellaAthena/repos",
"events_url": "https://api.github.com/users/StellaAthena/events{/privacy}",
"received_events_url": "https://api.github.com/users/StellaAthena/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi @StellaAthena, thanks for reporting.\r\n\r\nPlease note, that our `datasets` library performs several operations in order to load a dataset, among them:\r\n- it downloads all the required files: for C4 \"en\", 378.69 GB of JSON GZIPped files\r\n- it parses their content to generate the dataset\r\n- it caches the dataset in an Arrow file: for C4 \"en\", this file size is 1.87 TB\r\n- it memory-maps the Arrow file\r\n\r\nIf it suits your use case, you might load this dataset in streaming mode:\r\n- no Arrow file is generated\r\n- you can iterate over elements immediately (no need to wait to download all the entire files)\r\n\r\n```python\r\nIn [45]: from datasets import load_dataset\r\n ...: ds = load_dataset(\"c4\", \"en\", split=\"train\", streaming=True)\r\n ...: for item in ds:\r\n ...: print(item)\r\n ...: break\r\n ...: \r\n{'text': 'Beginners BBQ Class Taking Place in Missoula!\\nDo you want to get better at making delicious BBQ? You will have the opportunity, put this on your calendar now. Thursday, September 22nd join World Class BBQ Champion, Tony Balay from Lonestar Smoke Rangers. He will be teaching a beginner level class for everyone who wants to get better with their culinary skills.\\nHe will teach you everything you need to know to compete in a KCBS BBQ competition, including techniques, recipes, timelines, meat selection and trimming, plus smoker and fire information.\\nThe cost to be in the class is $35 per person, and for spectators it is free. Included in the cost will be either a t-shirt or apron and you will be tasting samples of each meat that is prepared.', 'timestamp': '2019-04-25T12:57:54Z', 'url': 'https://klyq.com/beginners-bbq-class-taking-place-in-missoula/'}\r\n```\r\nI hope this is useful for your use case."
] | 1,649,353,290,000 | 1,649,424,139,000 | null | NONE | null | ## Describe the bug
C4-en is a 300 GB dataset. However, when I try to download it through the Hub, it takes over _six hours_ to generate the train/test split from the downloaded files. This is an absurd amount of time and an unnecessary waste of resources.
## Steps to reproduce the bug
```python
c4 = datasets.load_dataset("c4", "en")
```
## Expected results
I would like to be able to download pre-split data.
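In the meantime, the streaming mode suggested in the comments avoids building the (roughly 1.9 TB) Arrow cache file entirely; a minimal sketch:
```python
from datasets import load_dataset

# Streaming yields examples while the gzipped JSON shards are being read,
# so no multi-hour train/test split generation is needed up front.
c4_stream = load_dataset("c4", "en", split="train", streaming=True)
print(next(iter(c4_stream)))
```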
## Environment info
- `datasets` version: 2.0.0
- Platform: Linux-5.13.0-35-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 7.0.0
- Pandas version: 1.4.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4123/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4123/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4122 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4122/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4122/comments | https://api.github.com/repos/huggingface/datasets/issues/4122/events | https://github.com/huggingface/datasets/issues/4122 | 1,196,095,072 | I_kwDODunzps5HSvZg | 4,122 | medical_dialog zh has very slow _generate_examples | {
"login": "nbroad1881",
"id": 24982805,
"node_id": "MDQ6VXNlcjI0OTgyODA1",
"avatar_url": "https://avatars.githubusercontent.com/u/24982805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nbroad1881",
"html_url": "https://github.com/nbroad1881",
"followers_url": "https://api.github.com/users/nbroad1881/followers",
"following_url": "https://api.github.com/users/nbroad1881/following{/other_user}",
"gists_url": "https://api.github.com/users/nbroad1881/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nbroad1881/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nbroad1881/subscriptions",
"organizations_url": "https://api.github.com/users/nbroad1881/orgs",
"repos_url": "https://api.github.com/users/nbroad1881/repos",
"events_url": "https://api.github.com/users/nbroad1881/events{/privacy}",
"received_events_url": "https://api.github.com/users/nbroad1881/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @nbroad1881, thanks for reporting.\r\n\r\nLet me have a look to try to improve its performance. ",
"Thanks @nbroad1881 for reporting! I don't recall it taking so long. I will also have a look at this. \r\n@albertvillanova please let me know if I am doing something unnecessary or time consuming.",
"Hi @nbroad1881 and @vrindaprabhu,\r\n\r\nAs a workaround for the performance of the parsing of the raw data files (this could be addressed in a subsequent PR), I have found that there are also processed data files, that do not require parsing. I have added these as new configurations `processed.en` and `processed.zh`:\r\n```python\r\nds = load_dataset(\"medical_dialog\", \"processed.zh\")\r\n```"
] | 1,649,340,051,000 | 1,649,434,851,000 | 1,649,434,851,000 | NONE | null | ## Describe the bug
After downloading the files from Google Drive, `load_dataset("medical_dialog", "zh", data_dir="./")` takes an unreasonable amount of time. Generating the train/test split for 33% of the dataset takes over 4.5 hours.
## Steps to reproduce the bug
The easiest way I've found to download the files from Google Drive is to use `gdown` from within Google Colab, because the download speeds are very high (both are hosted on Google Cloud).
```python
file_ids = [
"1AnKxGEuzjeQsDHHqL3NqI_aplq2hVL_E",
"1tt7weAT1SZknzRFyLXOT2fizceUUVRXX",
"1A64VBbsQ_z8wZ2LDox586JIyyO6mIwWc",
"1AKntx-ECnrxjB07B6BlVZcFRS4YPTB-J",
"1xUk8AAua_x27bHUr-vNoAuhEAjTxOvsu",
"1ezKTfe7BgqVN5o-8Vdtr9iAF0IueCSjP",
"1tA7bSOxR1RRNqZst8cShzhuNHnayUf7c",
"1pA3bCFA5nZDhsQutqsJcH3d712giFb0S",
"1pTLFMdN1A3ro-KYghk4w4sMz6aGaMOdU",
"1dUSnG0nUPq9TEQyHd6ZWvaxO0OpxVjXD",
"1UfCH05nuWiIPbDZxQzHHGAHyMh8dmPQH",
]
for i in file_ids:
url = f"https://drive.google.com/uc?id={i}"
!gdown $url
from datasets import load_dataset
ds = load_dataset("medical_dialog", "zh", data_dir="./")
```
## Expected results
Faster load time
## Actual results
`Generating train split: 33%: 625519/1921127 [4:31:03<31:39:20, 11.37 examples/s]`
## Environment info
- `datasets` version: 2.0.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
@vrindaprabhu, could you take a look at this since you implemented it? I think the `_generate_examples` function might need to be rewritten. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4122/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4122/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4121 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4121/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4121/comments | https://api.github.com/repos/huggingface/datasets/issues/4121/events | https://github.com/huggingface/datasets/issues/4121 | 1,196,000,018 | I_kwDODunzps5HSYMS | 4,121 | datasets.load_metric can not load a local metirc | {
"login": "Gare-Ng",
"id": 51749469,
"node_id": "MDQ6VXNlcjUxNzQ5NDY5",
"avatar_url": "https://avatars.githubusercontent.com/u/51749469?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Gare-Ng",
"html_url": "https://github.com/Gare-Ng",
"followers_url": "https://api.github.com/users/Gare-Ng/followers",
"following_url": "https://api.github.com/users/Gare-Ng/following{/other_user}",
"gists_url": "https://api.github.com/users/Gare-Ng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Gare-Ng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Gare-Ng/subscriptions",
"organizations_url": "https://api.github.com/users/Gare-Ng/orgs",
"repos_url": "https://api.github.com/users/Gare-Ng/repos",
"events_url": "https://api.github.com/users/Gare-Ng/events{/privacy}",
"received_events_url": "https://api.github.com/users/Gare-Ng/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 1,649,335,736,000 | 1,649,339,607,000 | 1,649,339,607,000 | NONE | null | ## Describe the bug
No matter how I hard try to tell load_metric that I want to load a local metric file, it still continues to fetch things on the Internet. And unfortunately it says 'ConnectionError: Couldn't reach'. However I can download this file without connectionerror and tell load_metric its local directory. And it comes back where it begins...
## Steps to reproduce the bug
```python
metric = load_metric(path=r'C:\Users\Gare\PycharmProjects\Gare\blue\bleu.py')
ConnectionError: Couldn't reach https://github.com/tensorflow/nmt/raw/master/nmt/scripts/bleu.py
metric = load_metric(path='bleu')
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.12.1/metrics/bleu/bleu.py
metric = load_metric(path='./blue/bleu.py')
ConnectionError: Couldn't reach https://github.com/tensorflow/nmt/raw/master/nmt/scripts/bleu.py
```
## Expected results
I did read the docs [here](https://huggingface.co/docs/datasets/package_reference/loading_methods#datasets.load_metric). There is no parameter other than `path` that would help the function distinguish between a local and an online file. Given the code above, it should load from the local file.
## Actual results
> metric = load_metric(path=r'C:\Users\Gare\PycharmProjects\Gare\blue\bleu.py')
> ~\AppData\Local\Temp\ipykernel_19636\1855752034.py in <module>
----> 1 metric = load_metric(path=r'C:\Users\Gare\PycharmProjects\Gare\blue\bleu.py')
D:\Program Files\Anaconda\envs\Gare\lib\site-packages\datasets\load.py in load_metric(path, config_name, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, script_version, **metric_init_kwargs)
817 if data_files is None and data_dir is not None:
818 data_files = os.path.join(data_dir, "**")
--> 819
820 self.name = name
821 self.revision = revision
D:\Program Files\Anaconda\envs\Gare\lib\site-packages\datasets\load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, dynamic_modules_path, return_resolved_file_path, return_associated_base_path, data_files, **download_kwargs)
639 self,
640 path: str,
--> 641 download_config: Optional[DownloadConfig] = None,
642 download_mode: Optional[DownloadMode] = None,
643 dynamic_modules_path: Optional[str] = None,
D:\Program Files\Anaconda\envs\Gare\lib\site-packages\datasets\utils\file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
297 token = hf_api.HfFolder.get_token()
298 if token:
--> 299 headers["authorization"] = f"Bearer {token}"
300 return headers
301
D:\Program Files\Anaconda\envs\Gare\lib\site-packages\datasets\utils\file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token)
604 def _resumable_file_manager():
605 with open(incomplete_path, "a+b") as f:
--> 606 yield f
607
608 temp_file_manager = _resumable_file_manager
ConnectionError: Couldn't reach https://github.com/tensorflow/nmt/raw/master/nmt/scripts/bleu.py
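Note that all three attempts fail on the same external URL rather than on my local path, so a quick way to check the underlying fetch independently of load_metric is (a sketch; `requests` assumed available):
```python
import requests

# This is the URL the tracebacks above cannot reach; if this request also fails,
# the problem is network access to GitHub rather than how the local path is resolved.
url = "https://github.com/tensorflow/nmt/raw/master/nmt/scripts/bleu.py"
print(requests.get(url, timeout=10).status_code)
```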
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: Windows-10-10.0.22000-SP0
- Python version: 3.7.13
- PyArrow version: 7.0.0
- Pandas version: 1.3.4
Any advice would be appreciated. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4121/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4121/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4120 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4120/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4120/comments | https://api.github.com/repos/huggingface/datasets/issues/4120/events | https://github.com/huggingface/datasets/issues/4120 | 1,195,887,430 | I_kwDODunzps5HR8tG | 4,120 | Representing dictionaries (json) objects as features | {
"login": "yanaiela",
"id": 8031035,
"node_id": "MDQ6VXNlcjgwMzEwMzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8031035?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yanaiela",
"html_url": "https://github.com/yanaiela",
"followers_url": "https://api.github.com/users/yanaiela/followers",
"following_url": "https://api.github.com/users/yanaiela/following{/other_user}",
"gists_url": "https://api.github.com/users/yanaiela/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yanaiela/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yanaiela/subscriptions",
"organizations_url": "https://api.github.com/users/yanaiela/orgs",
"repos_url": "https://api.github.com/users/yanaiela/repos",
"events_url": "https://api.github.com/users/yanaiela/events{/privacy}",
"received_events_url": "https://api.github.com/users/yanaiela/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 1,649,329,661,000 | 1,649,329,661,000 | null | CONTRIBUTOR | null | In the process of adding a new dataset to the hub, I stumbled upon the inability to represent dictionaries that contain different key names, unknown in advance (and may differ between samples), original asked in the [forum](https://discuss.huggingface.co/t/representing-nested-dictionary-with-different-keys/16442).
For instance:
```
sample1 = {"nps": {
"a": {"id": 0, "text": "text1"},
"b": {"id": 1, "text": "text2"},
}}
sample2 = {"nps": {
"a": {"id": 0, "text": "text1"},
"b": {"id": 1, "text": "text2"},
"c": {"id": 2, "text": "text3"},
}}
sample3 = {"nps": {
"a": {"id": 0, "text": "text1"},
"b": {"id": 1, "text": "text2"},
"c": {"id": 2, "text": "text3"},
"d": {"id": 3, "text": "text4"},
}}
```
the `nps` field cannot be represented as a Feature while maintaining its original structure.
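For reference, the closest I can get with the current feature types is to flatten each dict into a list of records with an explicit key field (a sketch; it loses the dict-style access I'd prefer):
```python
from datasets import Dataset

# Hypothetical flattening: each entry of "nps" becomes a record with an explicit
# "key" column, so samples with different or extra keys share one fixed schema.
sample2_flat = {
    "nps": [
        {"key": "a", "id": 0, "text": "text1"},
        {"key": "b", "id": 1, "text": "text2"},
        {"key": "c", "id": 2, "text": "text3"},
    ]
}
ds = Dataset.from_dict({"nps": [sample2_flat["nps"]]})
print(ds.features)
```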
@lhoestq suggested adding JSON as a new feature type, which would solve this problem.
An alternative solution would be to change the original data format, but that isn't optimal in my case. Moreover, JSON is a common structure that will likely be useful in future datasets as well. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4120/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4120/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4119 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4119/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4119/comments | https://api.github.com/repos/huggingface/datasets/issues/4119/events | https://github.com/huggingface/datasets/pull/4119 | 1,195,641,298 | PR_kwDODunzps41yXHF | 4,119 | Hotfix failing CI tests on Windows | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649,317,126,000 | 1,649,324,844,000 | 1,649,318,233,000 | MEMBER | null | This PR makes a hotfix for our CI Windows tests: https://app.circleci.com/pipelines/github/huggingface/datasets/11092/workflows/9cfdb1dd-0fec-4fe0-8122-5f533192ebdc/jobs/67414
Fix #4118
I guess this issue is related to this PR:
- huggingface/huggingface_hub#815 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4119/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4119/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4119",
"html_url": "https://github.com/huggingface/datasets/pull/4119",
"diff_url": "https://github.com/huggingface/datasets/pull/4119.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4119.patch",
"merged_at": 1649318233000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4118 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4118/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4118/comments | https://api.github.com/repos/huggingface/datasets/issues/4118/events | https://github.com/huggingface/datasets/issues/4118 | 1,195,638,944 | I_kwDODunzps5HRACg | 4,118 | Failing CI tests on Windows | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,649,316,985,000 | 1,649,318,233,000 | 1,649,318,233,000 | MEMBER | null | ## Describe the bug
Our CI Windows tests have been failing since yesterday: https://app.circleci.com/pipelines/github/huggingface/datasets/11092/workflows/9cfdb1dd-0fec-4fe0-8122-5f533192ebdc/jobs/67414
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4118/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4118/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4117 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4117/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4117/comments | https://api.github.com/repos/huggingface/datasets/issues/4117/events | https://github.com/huggingface/datasets/issues/4117 | 1,195,552,406 | I_kwDODunzps5HQq6W | 4,117 | AttributeError: module 'huggingface_hub' has no attribute 'hf_api' | {
"login": "arymbe",
"id": 4567991,
"node_id": "MDQ6VXNlcjQ1Njc5OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/4567991?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arymbe",
"html_url": "https://github.com/arymbe",
"followers_url": "https://api.github.com/users/arymbe/followers",
"following_url": "https://api.github.com/users/arymbe/following{/other_user}",
"gists_url": "https://api.github.com/users/arymbe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arymbe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arymbe/subscriptions",
"organizations_url": "https://api.github.com/users/arymbe/orgs",
"repos_url": "https://api.github.com/users/arymbe/repos",
"events_url": "https://api.github.com/users/arymbe/events{/privacy}",
"received_events_url": "https://api.github.com/users/arymbe/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @arymbe, thanks for reporting.\r\n\r\nUnfortunately, I'm not able to reproduce your problem.\r\n\r\nCould you please write the complete stack trace? That way we will be able to see which package originates the exception.",
"Hello, thank you for your fast replied. this is the complete error that I got\r\n\r\n---------------------------------------------------------------------------\r\n\r\nAttributeError Traceback (most recent call last)\r\n\r\n---------------------------------------------------------------------------\r\n\r\nAttributeError Traceback (most recent call last)\r\n\r\nInput In [27], in <module>\r\n----> 1 from datasets import load_dataset\r\n\r\nvenv/lib/python3.8/site-packages/datasets/__init__.py:39, in <module>\r\n 37 from .arrow_dataset import Dataset, concatenate_datasets\r\n 38 from .arrow_reader import ReadInstruction\r\n---> 39 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder\r\n 40 from .combine import interleave_datasets\r\n 41 from .dataset_dict import DatasetDict, IterableDatasetDict\r\n\r\nvenv/lib/python3.8/site-packages/datasets/builder.py:40, in <module>\r\n 32 from .arrow_reader import (\r\n 33 HF_GCP_BASE_URL,\r\n 34 ArrowReader,\r\n (...)\r\n 37 ReadInstruction,\r\n 38 )\r\n 39 from .arrow_writer import ArrowWriter, BeamWriter\r\n---> 40 from .data_files import DataFilesDict, sanitize_patterns\r\n 41 from .dataset_dict import DatasetDict, IterableDatasetDict\r\n 42 from .features import Features\r\n\r\nvenv/lib/python3.8/site-packages/datasets/data_files.py:297, in <module>\r\n 292 except FileNotFoundError:\r\n 293 raise FileNotFoundError(f\"The directory at {base_path} doesn't contain any data file\") from None\r\n 296 def _resolve_single_pattern_in_dataset_repository(\r\n--> 297 dataset_info: huggingface_hub.hf_api.DatasetInfo,\r\n 298 pattern: str,\r\n 299 allowed_extensions: Optional[list] = None,\r\n 300 ) -> List[PurePath]:\r\n 301 data_files_ignore = FILES_TO_IGNORE\r\n 302 fs = HfFileSystem(repo_info=dataset_info)\r\n\r\nAttributeError: module 'huggingface_hub' has no attribute 'hf_api'",
"This is weird... It is long ago that the package `huggingface_hub` has a submodule called `hf_api`.\r\n\r\nMaybe you have a problem with your installed `huggingface_hub`...\r\n\r\nCould you please try to update it?\r\n```shell\r\npip install -U huggingface_hub\r\n```",
"Yap, I've updated several times. Then, I've tried numeral combination of datasets and huggingface_hub versions. However, I think your point is right that there is a problem with my huggingface_hub installation. I'll try another way to find the solution. I'll update it later when I get the solution. Thank you :)",
"I'm sorry I can't reproduce your problem.\r\n\r\nMaybe you could try to create a new Python virtual environment and install all dependencies there from scratch. You can use either:\r\n- Python venv: https://docs.python.org/3/library/venv.html\r\n- or conda venv (if you are using conda): https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html",
"Facing the same issue.\r\n\r\nResponse from `pip show datasets`\r\n```\r\nName: datasets\r\nVersion: 1.15.1\r\nSummary: HuggingFace community-driven open-source library of datasets\r\nHome-page: https://github.com/huggingface/datasets\r\nAuthor: HuggingFace Inc.\r\nAuthor-email: [email protected]\r\nLicense: Apache 2.0\r\nLocation: /usr/local/lib/python3.8/dist-packages\r\nRequires: aiohttp, dill, fsspec, huggingface-hub, multiprocess, numpy, packaging, pandas, pyarrow, requests, tqdm, xxhash\r\nRequired-by: lm-eval\r\n```\r\n\r\nResponse from `pip show huggingface_hub`\r\n\r\n```\r\nName: huggingface-hub\r\nVersion: 0.8.1\r\nSummary: Client library to download and publish models, datasets and other repos on the huggingface.co hub\r\nHome-page: https://github.com/huggingface/huggingface_hub\r\nAuthor: Hugging Face, Inc.\r\nAuthor-email: [email protected]\r\nLicense: Apache\r\nLocation: /usr/local/lib/python3.8/dist-packages\r\nRequires: filelock, packaging, pyyaml, requests, tqdm, typing-extensions\r\nRequired-by: datasets\r\n```\r\n\r\nresponse from `datasets-cli env`\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/datasets-cli\", line 5, in <module>\r\n from datasets.commands.datasets_cli import main\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/__init__.py\", line 37, in <module>\r\n from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/builder.py\", line 44, in <module>\r\n from .data_files import DataFilesDict, _sanitize_patterns\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/data_files.py\", line 120, in <module>\r\n dataset_info: huggingface_hub.hf_api.DatasetInfo,\r\n File \"/usr/local/lib/python3.8/dist-packages/huggingface_hub/__init__.py\", line 105, in __getattr__\r\n raise AttributeError(f\"No {package_name} attribute {name}\")\r\nAttributeError: No huggingface_hub attribute hf_api\r\n```",
"A workaround: \r\nI changed lines around Line 125 in `__init__.py` of `huggingface_hub` to something like\r\n```\r\n__getattr__, __dir__, __all__ = _attach(\r\n __name__,\r\n submodules=['hf_api'],\r\n```\r\nand it works ( which gives `datasets` direct access to `huggingface_hub.hf_api` ).",
"I was getting the same issue. After trying a few versions, following combination worked for me.\r\ndataset==2.3.2\r\nhuggingface_hub==0.7.0\r\n\r\nIn another environment, I just installed latest repos from pip through `pip install -U transformers datasets tokenizers evaluate`, resulting in following versions. This also worked. Hope it helps someone. \r\n\r\ndatasets-2.3.2 evaluate-0.1.2 huggingface-hub-0.8.1 responses-0.18.0 tokenizers-0.12.1 transformers-4.20.1",
"For layoutlm_v3 finetune\r\ndatasets-2.3.2 evaluate-0.1.2 huggingface-hub-0.8.1 responses-0.18.0 tokenizers-0.12.1 transformers-4.12.5",
"(For layoutlmv3 fine-tuning) In my case, modifying `requirements.txt` as below worked.\r\n\r\n- python = 3.7\r\n\r\n```\r\ndatasets==2.3.2\r\nevaluate==0.1.2\r\nhuggingface-hub==0.8.1\r\nresponse==0.5.0\r\ntokenizers==0.10.1\r\ntransformers==4.12.5\r\nseqeval==1.2.2\r\ndeepspeed==0.5.7\r\ntensorboard==2.7.0\r\nseqeval==1.2.2\r\nsentencepiece\r\ntimm==0.4.12\r\nPillow\r\neinops\r\ntextdistance\r\nshapely\r\n```",
"> For layoutlm_v3 finetune datasets-2.3.2 evaluate-0.1.2 huggingface-hub-0.8.1 responses-0.18.0 tokenizers-0.12.1 transformers-4.12.5\r\n\r\nGOOD!! Thanks!"
] | 1,649,310,756,000 | 1,659,026,644,000 | 1,650,382,595,000 | NONE | null | ## Describe the bug
Could you help me, please? I got the following error.
AttributeError: module 'huggingface_hub' has no attribute 'hf_api'
## Steps to reproduce the bug
The error occurs when I import the `datasets` library:
# Sample code to reproduce the bug
from datasets import list_datasets, load_dataset, list_metrics, load_metric
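A quick way to check whether a version mismatch between the two libraries is the culprit (a hypothetical diagnostic, not part of the original report; later comments in this thread resolve the error by aligning the `datasets` and `huggingface_hub` versions):
```python
import huggingface_hub
print("huggingface_hub:", huggingface_hub.__version__)

import datasets  # this is the import that raises the AttributeError when the versions clash
print("datasets:", datasets.__version__)
```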
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: macOS-12.3-x86_64-i386-64bit
- Python version: 3.8.9
- PyArrow version: 7.0.0
- Pandas version: 1.3.5
- Huggingface-hub: 0.5.0
- Transformers: 4.18.0
Thank you in advance. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4117/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4117/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4116 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4116/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4116/comments | https://api.github.com/repos/huggingface/datasets/issues/4116/events | https://github.com/huggingface/datasets/pull/4116 | 1,194,926,459 | PR_kwDODunzps41wCEO | 4,116 | Pretty print dataset info files | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"maybe just do it from now on no? (i.e. not for existing `dataset_infos.json` files)",
"_The documentation is not available anymore as the PR was closed or merged._",
"> maybe just do it from now on no? (i.e. not for existing dataset_infos.json files)\r\n\r\nYes, or do this only for datasets created with `push_to_hub` to (always) keep the GH datasets small? \r\n",
"yep sounds good too on my side! ",
"I reverted the change to avoid the size increase and added the `pretty_print` flag, which pretty-prints the JSON, and that flag is only True for datasets created with `push_to_hub`. "
] | 1,649,266,848,000 | 1,649,417,281,000 | 1,649,416,913,000 | CONTRIBUTOR | null | Adds indentation to the `dataset_infos.json` file when saving for nicer diffs.
(suggested by @julien-c)
This PR also updates the info files of the GH datasets. Note that this change adds more than **10 MB** to the repo size (the total file size before the change: 29.672298 MB, after: 41.666475 MB), so I'm not sure this change is a good idea.
`src/datasets/info.py` is the only relevant file for reviewers.
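A minimal sketch of the serialization difference this PR is about (illustrative only; the real logic lives in `src/datasets/info.py` and, per the comments above, is gated by a `pretty_print` flag used for datasets created with `push_to_hub`):
```python
import json

info = {"description": "demo", "download_size": 2085, "features": {"text": {"dtype": "string"}}}

compact = json.dumps(info)           # previous behavior: one long line, noisy diffs
pretty = json.dumps(info, indent=4)  # pretty-printed: nicer diffs, larger file
print(len(compact), len(pretty))
```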
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4116/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4116/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4116",
"html_url": "https://github.com/huggingface/datasets/pull/4116",
"diff_url": "https://github.com/huggingface/datasets/pull/4116.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4116.patch",
"merged_at": 1649416913000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4115 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4115/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4115/comments | https://api.github.com/repos/huggingface/datasets/issues/4115/events | https://github.com/huggingface/datasets/issues/4115 | 1,194,907,555 | I_kwDODunzps5HONej | 4,115 | ImageFolder add option to ignore some folders like '.ipynb_checkpoints' | {
"login": "cceyda",
"id": 15624271,
"node_id": "MDQ6VXNlcjE1NjI0Mjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/15624271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cceyda",
"html_url": "https://github.com/cceyda",
"followers_url": "https://api.github.com/users/cceyda/followers",
"following_url": "https://api.github.com/users/cceyda/following{/other_user}",
"gists_url": "https://api.github.com/users/cceyda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cceyda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cceyda/subscriptions",
"organizations_url": "https://api.github.com/users/cceyda/orgs",
"repos_url": "https://api.github.com/users/cceyda/repos",
"events_url": "https://api.github.com/users/cceyda/events{/privacy}",
"received_events_url": "https://api.github.com/users/cceyda/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Maybe it would be nice to ignore private dirs like this one (ones starting with `.`) by default. \r\n\r\nCC @mariosasko ",
"Maybe we can add a `ignore_hidden_files` flag to the builder configs of our packaged loaders (to be consistent across all of them), wdyt @lhoestq @albertvillanova? ",
"I think they should always ignore them actually ! Not sure if adding a flag would be helpful",
"@lhoestq But what if the user explicitly requests those files via regex?\r\n\r\n`glob.glob` ignores hidden files (files starting with \".\") by default unless they are explicitly requested, but fsspec's `glob` doesn't follow this behavior, which is probably a bug, so maybe we can raise an issue or open a PR in their repo?",
"> @lhoestq But what if the user explicitly requests those files via regex?\r\n\r\nUsually hidden files are meant to be ignored. If they are data files, they must be placed outside a hidden directory in the first place right ? I think it's more sensible to explain this than adding a flag.\r\n\r\n> glob.glob ignores hidden files (files starting with \".\") by default unless they are explicitly requested, but fsspec's glob doesn't follow this behavior, which is probably a bug, so maybe we can raise an issue or open a PR in their repo?\r\n\r\nAfter globbing using `fsspec`, we already ignore files that start with a `.` in `_resolve_single_pattern_locally` and `_resolve_single_pattern_in_dataset_repository`, I guess we can just account for parent directories as well ?\r\n\r\nWe could open an issue on `fsspec` but I think they won't change this since it's an important breaking change for them."
] | 1,649,266,183,000 | 1,654,088,656,000 | 1,654,088,656,000 | CONTRIBUTOR | null | **Is your feature request related to a problem? Please describe.**
I sometimes like to peek at the dataset images from JupyterLab, so an '.ipynb_checkpoints' folder appears where my dataset is and (I just realized) leads to accidental duplicate image additions. This is an easy thing to miss, especially if the dataset is very large.
**Describe the solution you'd like**
Maybe add an `ignore` option, or something .gitignore-style:
`dataset = load_dataset("imagefolder", data_dir="./data/original", ignore="regex?")`
**Describe alternatives you've considered**
The files could be filtered out manually, e.g. as in the sketch below.
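A rough sketch of that manual workaround (hypothetical paths and extension; `data_files` is an existing `load_dataset` argument):
```python
from pathlib import Path
from datasets import load_dataset

data_dir = Path("./data/original")
image_files = [
    str(p)
    for p in data_dir.rglob("*.png")        # adjust the extension(s) as needed
    if ".ipynb_checkpoints" not in p.parts  # skip Jupyter checkpoint copies
]
dataset = load_dataset("imagefolder", data_files=image_files)
```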
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4115/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4115/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4114 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4114/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4114/comments | https://api.github.com/repos/huggingface/datasets/issues/4114/events | https://github.com/huggingface/datasets/issues/4114 | 1,194,855,345 | I_kwDODunzps5HOAux | 4,114 | Allow downloading just some columns of a dataset | {
"login": "osanseviero",
"id": 7246357,
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/osanseviero",
"html_url": "https://github.com/osanseviero",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"In the general case you can’t always reduce the quantity of data to download, since you can’t parse CSV or JSON data without downloading the whole files right ? ^^ However we could explore this case-by-case I guess",
"Actually for csv pandas has `usecols` which allows loading a subset of columns in a more efficient way afaik, but yes, you're right this might be more complex than I thought."
] | 1,649,263,126,000 | 1,649,318,186,000 | null | MEMBER | null | **Is your feature request related to a problem? Please describe.**
Some people are interested in doing label analysis of a CV dataset without downloading all the images. Downloading the whole dataset does not always make sense for this kind of use case.
**Describe the solution you'd like**
Be able to just download some columns of a dataset, such as doing
```python
load_dataset("huggan/wikiart",columns=["artist", "genre"])
```
Although this might make things a bit complicated in terms of local caching of datasets. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4114/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4114/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4113 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4113/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4113/comments | https://api.github.com/repos/huggingface/datasets/issues/4113/events | https://github.com/huggingface/datasets/issues/4113 | 1,194,843,532 | I_kwDODunzps5HN92M | 4,113 | Multiprocessing with FileLock fails in python 3.9 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [] | 1,649,262,429,000 | 1,649,262,429,000 | null | MEMBER | null | On python 3.9, this code hangs:
```python
from multiprocessing import Pool
from filelock import FileLock
def run(i):
    print(f"got the lock in multi process [{i}]")

with FileLock("tmp.lock"):
    with Pool(2) as pool:
        pool.map(run, range(2))
```
This is because the subprocesses try to acquire the lock from the main process for some reason. This is not the case in older versions of python.
This can cause many issues in python 3.9. In particular, we use multiprocessing to fetch data files when you load a dataset (as long as there are >16 data files). Therefore `imagefolder` hangs, and I expect any dataset that needs to download >16 files to hang as well.
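As an illustration of the failure mode (a sketch, not the eventual fix): if the pool is created while the lock is *not* held, the hang does not occur, which is consistent with the children tripping over the lock held by the parent.
```python
from multiprocessing import Pool
from filelock import FileLock

def run(i):
    print(f"got the lock in multi process [{i}]")

with FileLock("tmp.lock"):
    pass  # do whatever needs the lock, then release it before spawning workers

with Pool(2) as pool:  # pool created after the lock is released
    pool.map(run, range(2))
```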
Let's see if we can fix this and have a CI that runs on 3.9.
cc @mariosasko @julien-c | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4113/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4113/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4112 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4112/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4112/comments | https://api.github.com/repos/huggingface/datasets/issues/4112/events | https://github.com/huggingface/datasets/issues/4112 | 1,194,752,765 | I_kwDODunzps5HNnr9 | 4,112 | ImageFolder with Grayscale images dataset | {
"login": "ChainYo",
"id": 50595514,
"node_id": "MDQ6VXNlcjUwNTk1NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/50595514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ChainYo",
"html_url": "https://github.com/ChainYo",
"followers_url": "https://api.github.com/users/ChainYo/followers",
"following_url": "https://api.github.com/users/ChainYo/following{/other_user}",
"gists_url": "https://api.github.com/users/ChainYo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ChainYo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChainYo/subscriptions",
"organizations_url": "https://api.github.com/users/ChainYo/orgs",
"repos_url": "https://api.github.com/users/ChainYo/repos",
"events_url": "https://api.github.com/users/ChainYo/events{/privacy}",
"received_events_url": "https://api.github.com/users/ChainYo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi! Replacing:\r\n```python\r\ntransformed_dataset = dataset.with_transform(transforms)\r\ntransformed_dataset.set_format(type=\"torch\", device=\"cuda\")\r\n```\r\n\r\nwith:\r\n```python\r\ndef transform_func(examples):\r\n examples[\"image\"] = [transforms(img).to(\"cuda\") for img in examples[\"image\"]]\r\n return examples\r\n\r\ntransformed_dataset = dataset.with_transform(transform_func)\r\n```\r\nshould fix the issue. `datasets` doesn't support chaining of transforms (you can think of `set_format`/`with_format` as a predefined transform func for `set_transform`/`with_transforms`), so the last transform (in your case, `set_format`) takes precedence over the previous ones (in your case `with_format`). And the PyTorch formatter is not supported by the Image feature, hence the error (adding support for that is on our short-term roadmap).",
"Ok thanks a lot for the code snippet!\r\n\r\nI love the way `datasets` is easy to use but it made it really long to pre-process all the images (400.000 in my case) before training anything. `ImageFolder` from pytorch is faster in my case but force me to have the images on my local machine.\r\n\r\nI don't know how to speed up the process without switching to `ImageFolder` :smile: ",
"You can pass `ignore_verifications=True` in `load_dataset` to skip checksum verification, which takes a lot of time if the number of files is large. We will consider making this the default behavior."
] | 1,649,257,800,000 | 1,650,622,913,000 | 1,650,622,912,000 | NONE | null | Hi, I'm facing a problem with a grayscale images dataset I have uploaded [here](https://huggingface.co/datasets/ChainYo/rvl-cdip) (RVL-CDIP)
I'm getting an error when I try to use the images for training a model with a PyTorch DataLoader. Here is the full traceback:
```bash
AttributeError: Caught AttributeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1765, in __getitem__
return self._getitem(
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1750, in _getitem
formatted_output = format_table(
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 532, in format_table
return formatter(pa_table, query_type=query_type)
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 281, in __call__
return self.format_row(pa_table)
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py", line 58, in format_row
return self.recursive_tensorize(row)
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py", line 54, in recursive_tensorize
return map_nested(self._recursive_tensorize, data_struct, map_list=False)
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 314, in map_nested
mapped = [
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 315, in <listcomp>
_single_map_nested((function, obj, types, None, True, None))
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 267, in _single_map_nested
return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 267, in <dictcomp>
return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 251, in _single_map_nested
return function(data_struct)
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py", line 51, in _recursive_tensorize
return self._tensorize(data_struct)
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py", line 38, in _tensorize
if np.issubdtype(value.dtype, np.integer):
AttributeError: 'bytes' object has no attribute 'dtype'
```
I don't really understand why the image is still a bytes object even though I applied transformations to it. Here is the code I used to upload the dataset (and it worked well):
```python
train_dataset = load_dataset("imagefolder", data_dir="data/train")
train_dataset = train_dataset["train"]
test_dataset = load_dataset("imagefolder", data_dir="data/test")
test_dataset = test_dataset["train"]
val_dataset = load_dataset("imagefolder", data_dir="data/val")
val_dataset = val_dataset["train"]
dataset = DatasetDict({
    "train": train_dataset,
    "val": val_dataset,
    "test": test_dataset
})
dataset.push_to_hub("ChainYo/rvl-cdip")
```
Now here is the code I am using to get the dataset and prepare it for training:
```python
img_size = 512
batch_size = 128
normalize = [(0.5), (0.5)]
data_dir = "ChainYo/rvl-cdip"
dataset = load_dataset(data_dir, split="train")
transforms = transforms.Compose([
    transforms.Resize(img_size),
    transforms.CenterCrop(img_size),
    transforms.ToTensor(),
    transforms.Normalize(*normalize)
])
transformed_dataset = dataset.with_transform(transforms)
transformed_dataset.set_format(type="torch", device="cuda")
train_dataloader = torch.utils.data.DataLoader(
    transformed_dataset, batch_size=batch_size, shuffle=True, num_workers=4, pin_memory=True
)
```
But this gets me the error above. I don't understand why it's behaving this way.
Do I need to map something on the dataset? Something like this:
```python
labels = dataset.features["label"].names
num_labels = dataset.features["label"].num_classes
def preprocess_data(examples):
    images = [ex.convert("RGB") for ex in examples["image"]]
    labels = [ex for ex in examples["label"]]
    return {"images": images, "labels": labels}

features = Features({
    "images": Image(decode=True, id=None),
    "labels": ClassLabel(num_classes=num_labels, names=labels)
})
decoded_dataset = dataset.map(preprocess_data, remove_columns=dataset.column_names, features=features, batched=True, batch_size=100)
```
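For reference, the fix suggested in the first reply above is to do the tensor conversion inside a single transform function instead of chaining `with_transform` with `set_format`; a minimal sketch reusing the `transforms` and `dataset` objects defined above (the `.convert("RGB")` call is an assumption for grayscale inputs):
```python
def transform_func(examples):
    examples["image"] = [transforms(img.convert("RGB")) for img in examples["image"]]
    return examples

transformed_dataset = dataset.with_transform(transform_func)
```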
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4112/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4112/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4111 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4111/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4111/comments | https://api.github.com/repos/huggingface/datasets/issues/4111/events | https://github.com/huggingface/datasets/pull/4111 | 1,194,660,699 | PR_kwDODunzps41vJCt | 4,111 | Update security policy | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649,253,591,000 | 1,649,324,790,000 | 1,649,324,427,000 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4111/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4111/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4111",
"html_url": "https://github.com/huggingface/datasets/pull/4111",
"diff_url": "https://github.com/huggingface/datasets/pull/4111.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4111.patch",
"merged_at": 1649324427000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4110 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4110/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4110/comments | https://api.github.com/repos/huggingface/datasets/issues/4110/events | https://github.com/huggingface/datasets/pull/4110 | 1,194,581,375 | PR_kwDODunzps41u4Je | 4,110 | Matthews Correlation Metric Card | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649,249,975,000 | 1,651,585,397,000 | 1,651,584,973,000 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4110/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4110/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4110",
"html_url": "https://github.com/huggingface/datasets/pull/4110",
"diff_url": "https://github.com/huggingface/datasets/pull/4110.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4110.patch",
"merged_at": 1651584972000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4109 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4109/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4109/comments | https://api.github.com/repos/huggingface/datasets/issues/4109/events | https://github.com/huggingface/datasets/pull/4109 | 1,194,579,257 | PR_kwDODunzps41u3sm | 4,109 | Add Spearmanr Metric Card | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"changes made! @lhoestq let me know what you think ",
"The CI fail is unrelated to this PR and fixed on master, feel free to merge :)"
] | 1,649,249,873,000 | 1,651,596,626,000 | 1,651,596,217,000 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4109/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4109/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4109",
"html_url": "https://github.com/huggingface/datasets/pull/4109",
"diff_url": "https://github.com/huggingface/datasets/pull/4109.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4109.patch",
"merged_at": 1651596217000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4108 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4108/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4108/comments | https://api.github.com/repos/huggingface/datasets/issues/4108/events | https://github.com/huggingface/datasets/pull/4108 | 1,194,578,584 | PR_kwDODunzps41u3j2 | 4,108 | Perplexity Speedup | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"WRT the high values, can you add some unit tests with some [string, model] pairs and their resulting perplexity code, and @TristanThrush can run the same pairs through his version of the code?",
"_The documentation is not available anymore as the PR was closed or merged._",
"I thought that the perplexity metric should output the average perplexity value of all the strings that it gets as input (not a perplexity value per string, as the new version does).\r\n@lhoestq , @TristanThrush thoughts?",
"> I thought that the perplexity metric should output the average perplexity value of all the strings that it gets as input (not a perplexity value per string, as the new version does). @lhoestq , @TristanThrush thoughts?\r\n\r\nI support this change from Emi. If we have a perplexity function that loads GPT2 and then returns an average over all of the strings, then it is impossible to get multiple perplexities of a batch of strings efficiently. If we have this new perplexity function that is built for batching, then it is possible to get a batch of perplexities efficiently and you can still compute the average efficiently afterwards.",
"Thanks a lot for working on this @emibaylor @TristanThrush :)\r\n\r\nFor consistency with the other metrics, I think it's nice if we return the mean perplexity. Though I agree that having the separate perplexities per sample can also be useful. What do you think about returning both ?\r\n```python\r\nreturn {\"perplexities\": ppls, \"mean_perplexity\": np.mean(ppls)}\r\n```\r\nwe're also doing this for the COMET metric.",
"> Thanks a lot for working on this @emibaylor @TristanThrush :)\r\n> \r\n> For consistency with the other metrics, I think it's nice if we return the mean perplexity. Though I agree that having the separate perplexities per sample can also be useful. What do you think about returning both ?\r\n> \r\n> ```python\r\n> return {\"perplexities\": ppls, \"mean_perplexity\": np.mean(ppls)}\r\n> ```\r\n> \r\n> we're also doing this for the COMET metric.\r\n\r\nThanks! Sounds great to me.",
"The CI fail is unrelated to your PR and has been fixed on master, feel free to merge the master branch into your PR to fix the CI ;)"
] | 1,649,249,841,000 | 1,650,459,654,000 | 1,650,459,282,000 | CONTRIBUTOR | null | This PR makes necessary changes to perplexity such that:
- it runs much faster (via batching)
- it throws an error when input is empty, or when input is one word without <BOS> token
- it adds the option to add a <BOS> token
Issues:
- The values returned are extremely high, and I'm worried they aren't correct. Even if they are correct, they are sometimes returned as `inf`, which is not very useful (see [comment below](https://github.com/huggingface/datasets/pull/4108#discussion_r843931094) for some of the output values).
- If the values are not correct, can you help me find the error?
- If the values are correct, it might be worth it to measure something like perplexity per word, which would allow us to get actual values for the larger perplexities, instead of just `inf`
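A tiny numerical sketch (made-up NLL values) of why normalising by length, i.e. perplexity per word/token, stays finite while exponentiating a summed loss overflows to `inf`:
```python
import numpy as np

token_nlls = np.array([2.3, 3.1, 2.8, 3.5] * 200)  # hypothetical per-token negative log-likelihoods

unnormalized = np.exp(token_nlls.sum())  # overflows to inf for long inputs
per_token = np.exp(token_nlls.mean())    # finite regardless of sequence length
print(unnormalized, per_token)
```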
Future:
- `stride` is not currently implemented here. I have some thoughts on how to make it happen with batching, but I think it would be better to get another set of eyes to look at any possible errors causing such large values now rather than later. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4108/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4108/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4108",
"html_url": "https://github.com/huggingface/datasets/pull/4108",
"diff_url": "https://github.com/huggingface/datasets/pull/4108.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4108.patch",
"merged_at": 1650459282000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4107 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4107/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4107/comments | https://api.github.com/repos/huggingface/datasets/issues/4107/events | https://github.com/huggingface/datasets/issues/4107 | 1,194,484,885 | I_kwDODunzps5HMmSV | 4,107 | Unable to view the dataset and loading the same dataset throws the error - ArrowInvalid: Exceeded maximum rows | {
"login": "Pavithree",
"id": 23344465,
"node_id": "MDQ6VXNlcjIzMzQ0NDY1",
"avatar_url": "https://avatars.githubusercontent.com/u/23344465?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pavithree",
"html_url": "https://github.com/Pavithree",
"followers_url": "https://api.github.com/users/Pavithree/followers",
"following_url": "https://api.github.com/users/Pavithree/following{/other_user}",
"gists_url": "https://api.github.com/users/Pavithree/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pavithree/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pavithree/subscriptions",
"organizations_url": "https://api.github.com/users/Pavithree/orgs",
"repos_url": "https://api.github.com/users/Pavithree/repos",
"events_url": "https://api.github.com/users/Pavithree/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pavithree/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Thanks for reporting. I'm looking at it",
" It's not related to the dataset viewer in itself. I can replicate the error with:\r\n\r\n```\r\n>>> import datasets as ds\r\n>>> d = ds.load_dataset('Pavithree/explainLikeImFive')\r\nUsing custom data configuration Pavithree--explainLikeImFive-b68b6d8112cd8a51\r\nDownloading and preparing dataset json/Pavithree--explainLikeImFive to /home/slesage/.cache/huggingface/datasets/json/Pavithree--explainLikeImFive-b68b6d8112cd8a51/0.0.0/ac0ca5f5289a6cf108e706efcf040422dbbfa8e658dee6a819f20d76bb84d26b...\r\nDownloading data: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 305M/305M [00:03<00:00, 98.6MB/s]\r\nDownloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 17.9M/17.9M [00:00<00:00, 75.7MB/s]\r\nDownloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 11.9M/11.9M [00:00<00:00, 70.6MB/s]\r\nDownloading data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:05<00:00, 1.92s/it]\r\nExtracting data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1948.42it/s]\r\nFailed to read file '/home/slesage/.cache/huggingface/datasets/downloads/5fee9c8819754df277aee6f252e4db6897d785231c21938407b8862ca871d246' with error <class 'pyarrow.lib.ArrowInvalid'>: Exceeded maximum rows\r\nTraceback (most recent call last):\r\n File \"/home/slesage/hf/datasets/src/datasets/packaged_modules/json/json.py\", line 144, in _generate_tables\r\n dataset = json.load(f)\r\n File \"/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/json/__init__.py\", line 293, in load\r\n return loads(fp.read(),\r\n File \"/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/json/__init__.py\", line 357, in loads\r\n return _default_decoder.decode(s)\r\n File \"/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/json/decoder.py\", line 340, in decode\r\n raise JSONDecodeError(\"Extra data\", s, end)\r\njson.decoder.JSONDecodeError: Extra data: line 1 column 916 (char 915)\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets/src/datasets/load.py\", line 1691, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 605, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 694, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 1151, in _prepare_split\r\n for key, table in logging.tqdm(\r\n File 
\"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/tqdm/std.py\", line 1168, in __iter__\r\n for obj in iterable:\r\n File \"/home/slesage/hf/datasets/src/datasets/packaged_modules/json/json.py\", line 146, in _generate_tables\r\n raise e\r\n File \"/home/slesage/hf/datasets/src/datasets/packaged_modules/json/json.py\", line 122, in _generate_tables\r\n pa_table = paj.read_json(\r\n File \"pyarrow/_json.pyx\", line 246, in pyarrow._json.read_json\r\n File \"pyarrow/error.pxi\", line 143, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 99, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Exceeded maximum rows\r\n```\r\n\r\ncc @lhoestq @albertvillanova @mariosasko ",
"It seems that train.json is not a valid JSON Lines file: it has several JSON objects in the first line (the 915th character in the first line starts a new object, and there's no \"\\n\")\r\n\r\nYou need to have one JSON object per line",
"I'm closing this issue.\r\n\r\n@Pavithree, please, feel free to re-open it if fixing the JSON file does not solve it.",
"Thank you! that fixes the issue."
] | 1,649,245,035,000 | 1,649,401,987,000 | 1,649,255,995,000 | NONE | null | ## Dataset viewer issue - -ArrowInvalid: Exceeded maximum rows
**Link:** *https://huggingface.co/datasets/Pavithree/explainLikeImFive*
*This is a subset of the original eli5 dataset https://huggingface.co/datasets/vblagoje/lfqa. I just filtered the data samples which belong to one particular subreddit thread. However, the dataset preview for the train split returns the error mentioned below:
Status code: 400
Exception: ArrowInvalid
Message: Exceeded maximum rows
When I try to load the same dataset, it returns the same ArrowInvalid: Exceeded maximum rows error*
Am I the one who added this dataset? Yes
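For reference, a minimal sketch (not from the original report) of writing the records in the one-JSON-object-per-line format that the loader expects, assuming the filtered examples are available as a Python list:
```python
import json

records = [{"q_id": "abc", "title": "Why ...?"}, {"q_id": "def", "title": "How ...?"}]  # hypothetical examples

with open("train.json", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")  # exactly one JSON object per line
```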
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4107/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4107/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4106 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4106/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4106/comments | https://api.github.com/repos/huggingface/datasets/issues/4106/events | https://github.com/huggingface/datasets/pull/4106 | 1,194,393,892 | PR_kwDODunzps41uPpa | 4,106 | Support huggingface_hub 0.5 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Looks like GH actions is not able to resolve `huggingface_hub` 0.5.0, I'm investivating",
"_The documentation is not available anymore as the PR was closed or merged._",
"I'm glad to see changes in `huggingface_hub` are simplifying code here.",
"seems to supersede #4102, feel free to close mine :)",
"maybe just cherry-pick the docstring fix",
"I think I've found the issue:\r\n- https://github.com/huggingface/huggingface_hub/pull/790",
"Good catch, `huggingface_hub` doesn't support python 3.6 anymore indeed, therefore we should keep support for 0.4.0. I'm reverting the requirement version bump for now.\r\n\r\nWe can update the requirement once we drop support for python 3.6 in `datasets`",
"@lhoestq, I've opened this PR on `huggingface_hub`: \r\n- https://github.com/huggingface/huggingface_hub/pull/823\r\n\r\nIs there any strong reason why `huggingface_hub` no longer supports Python 3.6? ",
"I think `datasets` can drop support for 3.6 soon. But for now maybe let's keep support for 0.4.0, python 3.6 users are not affected by https://github.com/huggingface/datasets/issues/4105 anyway.\r\n\r\n`huggingface_hub` doesn't not have to support 3.6 again just for the CI IMO",
"@lhoestq I commented on the PR, that IMO it is not a good practice to drop support for Python 3.6 without a previous deprecation cycle.",
"Re-added support for older versions. I ended up checking `huggingface_hub` version to use the old, deprecated API for <0.5.0",
"I find it good practice to have all dependency version related code in a single file so that when you decide to remove support for an old version of a dependency it's easy to find and remove them, hence suggesting `utils/_fixes.py` in https://github.com/huggingface/datasets/issues/4105#issuecomment-1090041204",
"good idea, thanks !",
"I used your suggestion @adrinjalali , I just replace the try/except with a check on the version of `huggingface_hub`"
] | 1,649,240,125,000 | 1,649,413,723,000 | 1,649,413,343,000 | MEMBER | null | Following https://github.com/huggingface/datasets/issues/4105
`huggingface_hub` deprecated some parameters in `HfApi` in 0.5. This PR updates all the calls to `HfApi` so they no longer use the deprecated parameters, <s>and I set the `huggingface_hub` requirement to `>=0.5.0`</s>
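For illustration, a sketch of the version-gated call pattern mentioned in the comment thread; the repository and organization names are placeholders, and the exact kwargs follow the deprecation notes quoted above rather than the final merged code.

```python
from packaging import version
import huggingface_hub
from huggingface_hub import HfApi

api = HfApi()
if version.parse(huggingface_hub.__version__) < version.parse("0.5.0"):
    # Old API: separate name/organization arguments.
    api.create_repo(name="my_dataset", organization="my_org", repo_type="dataset")
else:
    # New API (0.5+): a single repo_id argument.
    api.create_repo(repo_id="my_org/my_dataset", repo_type="dataset")
```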
cc @adrinjalali @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4106/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4106/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4106",
"html_url": "https://github.com/huggingface/datasets/pull/4106",
"diff_url": "https://github.com/huggingface/datasets/pull/4106.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4106.patch",
"merged_at": 1649413343000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4105 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4105/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4105/comments | https://api.github.com/repos/huggingface/datasets/issues/4105/events | https://github.com/huggingface/datasets/issues/4105 | 1,194,297,119 | I_kwDODunzps5HL4cf | 4,105 | push to hub fails with huggingface-hub 0.5.0 | {
"login": "frascuchon",
"id": 2518789,
"node_id": "MDQ6VXNlcjI1MTg3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2518789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/frascuchon",
"html_url": "https://github.com/frascuchon",
"followers_url": "https://api.github.com/users/frascuchon/followers",
"following_url": "https://api.github.com/users/frascuchon/following{/other_user}",
"gists_url": "https://api.github.com/users/frascuchon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/frascuchon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/frascuchon/subscriptions",
"organizations_url": "https://api.github.com/users/frascuchon/orgs",
"repos_url": "https://api.github.com/users/frascuchon/repos",
"events_url": "https://api.github.com/users/frascuchon/events{/privacy}",
"received_events_url": "https://api.github.com/users/frascuchon/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi ! Indeed there was a breaking change in `huggingface_hub` 0.5.0 in `HfApi.create_repo`, which is called here in `datasets` by passing the org name in both the `repo_id` and the `organization` arguments:\r\n\r\nhttps://github.com/huggingface/datasets/blob/2230f7f7d7fbaf102cff356f5a8f3bd1561bea43/src/datasets/arrow_dataset.py#L3363-L3369\r\n\r\nI think we should fix that in `huggingface_hub`, will keep you posted. In the meantime please use `huggingface_hub` 0.4.0",
"I'll be sending a fix for this later today on the `huggingface_hub` side.\r\n\r\nThe error would be converted to a `FutureWarning` if `datasets` uses kwargs instead of positional, for example here: \r\n\r\nhttps://github.com/huggingface/datasets/blob/2230f7f7d7fbaf102cff356f5a8f3bd1561bea43/src/datasets/arrow_dataset.py#L3363-L3369\r\n\r\nto be:\r\n\r\n``` python\r\n api.create_repo(\r\n name=dataset_name,\r\n token=token,\r\n repo_type=\"dataset\",\r\n organization=organization,\r\n private=private,\r\n )\r\n```\r\n\r\nBut `name` and `organization` are deprecated in `huggingface_hub=0.5`, and people should pass `repo_id='org/name` instead. Note that `repo_id` was introduced in 0.5 and if `datasets` wants to support older `huggingface_hub` versions (which I encourage it to do), there needs to be a helper function to do that. It can be something like:\r\n\r\n\r\n```python\r\ndef create_repo(\r\n client,\r\n name: str,\r\n token: Optional[str] = None,\r\n organization: Optional[str] = None,\r\n private: Optional[bool] = None,\r\n repo_type: Optional[str] = None,\r\n exist_ok: Optional[bool] = False,\r\n space_sdk: Optional[str] = None,\r\n) -> str:\r\n try:\r\n return client.create_repo(\r\n repo_id=f\"{organization}/{name}\",\r\n token=token,\r\n private=private,\r\n repo_type=repo_type,\r\n exist_ok=exist_ok,\r\n space_sdk=space_sdk,\r\n )\r\n except TypeError:\r\n return client.create_repo(\r\n name=name,\r\n organization=organization,\r\n token=token,\r\n private=private,\r\n repo_type=repo_type,\r\n exist_ok=exist_ok,\r\n space_sdk=space_sdk,\r\n )\r\n```\r\n\r\nin a `utils/_fixes.py` kinda file and and be used internally.\r\n\r\nI'll be sending a patch to `huggingface_hub` to convert the error reported in this issue to a `FutureWarning`.",
"PR with the hotfix on the `huggingface_hub` side: https://github.com/huggingface/huggingface_hub/pull/822",
"We can definitely change `push_to_hub` to use `repo_id` in `datasets` and require `huggingface_hub>=0.5.0`.\r\n\r\nLet me open a PR :)",
"`huggingface_hub` 0.5.1 just got released with a fix, feel free to update `huggingface_hub` ;)"
] | 1,649,235,597,000 | 1,649,860,247,000 | 1,649,860,247,000 | NONE | null | ## Describe the bug
`ds.push_to_hub` is failing when updating a dataset in the form "org_id/repo_id"
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset("rubrix/news_test")
ds.push_to_hub("<your-user>/news_test", token="<your-token>")
```
## Expected results
The dataset is successfully uploaded
## Actual results
A validation error is raised:
```bash
if repo_id and (name or organization):
> raise ValueError(
"Only pass `repo_id` and leave deprecated `name` and "
"`organization` to be None."
E ValueError: Only pass `repo_id` and leave deprecated `name` and `organization` to be None.
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.1
- `huggingface-hub`: 0.5
- Platform: macOS
- Python version: 3.8.12
- PyArrow version: 6.0.0
cc @adrinjalali
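For reference, a sketch of the workarounds discussed in the comments above: either pin `huggingface_hub` below 0.5.0 or upgrade once the patched release is out. The user name and token below are placeholders.

```python
# Workaround 1: pin the hub client to a version that still accepts the
# arguments datasets passes to create_repo:
#   pip install "huggingface_hub==0.4.0"
#
# Workaround 2: upgrade to the patched release once it is available:
#   pip install "huggingface_hub>=0.5.1"

from datasets import load_dataset

ds = load_dataset("rubrix/news_test")
ds.push_to_hub("my-user/news_test", token="hf_xxx")  # placeholder repo id and token
```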
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4105/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4105/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4104 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4104/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4104/comments | https://api.github.com/repos/huggingface/datasets/issues/4104/events | https://github.com/huggingface/datasets/issues/4104 | 1,194,072,966 | I_kwDODunzps5HLBuG | 4,104 | Add time series data - stock market | {
"login": "INF800",
"id": 45640029,
"node_id": "MDQ6VXNlcjQ1NjQwMDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/45640029?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/INF800",
"html_url": "https://github.com/INF800",
"followers_url": "https://api.github.com/users/INF800/followers",
"following_url": "https://api.github.com/users/INF800/following{/other_user}",
"gists_url": "https://api.github.com/users/INF800/gists{/gist_id}",
"starred_url": "https://api.github.com/users/INF800/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/INF800/subscriptions",
"organizations_url": "https://api.github.com/users/INF800/orgs",
"repos_url": "https://api.github.com/users/INF800/repos",
"events_url": "https://api.github.com/users/INF800/events{/privacy}",
"received_events_url": "https://api.github.com/users/INF800/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | null | [] | null | [
"Can I use instructions present in below link for time series dataset as well? \r\nhttps://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md ",
"cc'ing @kashif and @NielsRogge for visibility!",
"@INF800 happy to add this dataset! I will try to set a PR by the end of the day... if you can kindly point me to the dataset? Also, note we have a bunch of time series datasets checked in e.g. `electricity_load_diagrams` or `monash_tsf`, and ideally this dataset could also be in a similar format. ",
"Thankyou. This is how raw data looks like before cleaning for an individual stocks:\r\n\r\n1. https://github.com/INF800/marktech/tree/raw-data/f/data/raw\r\n2. https://github.com/INF800/marktech/tree/raw-data/t/data/raw\r\n3. https://github.com/INF800/marktech/tree/raw-data/rdfn/data/raw\r\n4. https://github.com/INF800/marktech/tree/raw-data/irbt/data/raw\r\n5. https://github.com/INF800/marktech/tree/raw-data/hll/data/raw\r\n6. https://github.com/INF800/marktech/tree/raw-data/infy/data/raw\r\n7. https://github.com/INF800/marktech/tree/raw-data/reli/data/raw\r\n8. https://github.com/INF800/marktech/tree/raw-data/hdbk/data/raw\r\n\r\n> Scraping is automated using GitHub Actions. So, everyday we will see a new file added in the above links.\r\n\r\nI can rewrite the cleaning scripts to make sure it fits HF dataset standards. (P.S I am very much new to HF dataset)\r\n\r\nThe data set above can be converted into univariate regression / multivariate regression / sequence to sequence generation dataset etc. So, do we have some kind of transformation modules that will read the dataset as some type of dataset (`GenericTimeData`) and convert it to other possible dataset relating to a specific ML task. **By having this kind of transformation module, I only have to add data once** and use transformation module whenever necessary\r\n\r\nAdditionally, having some kind of versioning for the dataset will be really helpful because it will keep on updating - especially time series datasets ",
"thanks @INF800 I'll have a look. I believe it should be possible to incorporate this into the time-series format.",
"Referencing https://github.com/qingsongedu/time-series-transformers-review",
"@INF800 yes I am aware of the review repository and paper which is more or less a collection of abstracts etc. I am working on a unified library of implementations of these papers together with datasets to be then able to compare/contrast and build upon the research etc. but I am not ready to share them publicly just yet.\r\n\r\nIn any case regarding your dataset at the moment its seems from looking at the csv files, its mixture of textual and numerical data, sometimes in the same column etc. As you know, for time series models we would need just numeric data so I would need your help in disambiguating the dataset you have collected and also perhaps starting with just numerical data to start with... \r\n\r\nDo you think you can make a version with just numerical data?",
"> @INF800 yes I am aware of the review repository and paper which is more or less a collection of abstracts etc. I am working on a unified library of implementations of these papers together with datasets to be then able to compare/contrast and build upon the research etc. but I am not ready to share them publicly just yet.\r\n> \r\n> In any case regarding your dataset at the moment its seems from looking at the csv files, its mixture of textual and numerical data, sometimes in the same column etc. As you know, for time series models we would need just numeric data so I would need your help in disambiguating the dataset you have collected and also perhaps starting with just numerical data to start with...\r\n> \r\n> Do you think you can make a version with just numerical data?\r\n\r\nWill share the numeric data and conversion script within end of this week. \r\n\r\nI am on a business trip currently - it is in my desktop."
] | 1,649,224,018,000 | 1,649,668,030,000 | null | NONE | null | ## Adding a Time Series Dataset
- **Name:** 2min ticker data for stock market
- **Description:** Data for 8 stocks, collected for 1 month after the start of the Ukraine-Russia war: 4 NSE stocks and 4 NASDAQ stocks, along with technical indicators (additional features) as shown in the image below
- **Data:** Collected by myself from investing.com
- **Motivation:** Test the applicability of transformer-based models on a stock market / time series problem
![image](https://user-images.githubusercontent.com/45640029/161904077-52fe97cb-3720-4e3f-98ee-7f6720a056e2.png) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4104/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4104/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4103 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4103/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4103/comments | https://api.github.com/repos/huggingface/datasets/issues/4103/events | https://github.com/huggingface/datasets/pull/4103 | 1,193,987,104 | PR_kwDODunzps41s3T4 | 4,103 | Add the `GSM8K` dataset | {
"login": "jon-tow",
"id": 41410219,
"node_id": "MDQ6VXNlcjQxNDEwMjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/41410219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jon-tow",
"html_url": "https://github.com/jon-tow",
"followers_url": "https://api.github.com/users/jon-tow/followers",
"following_url": "https://api.github.com/users/jon-tow/following{/other_user}",
"gists_url": "https://api.github.com/users/jon-tow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jon-tow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jon-tow/subscriptions",
"organizations_url": "https://api.github.com/users/jon-tow/orgs",
"repos_url": "https://api.github.com/users/jon-tow/repos",
"events_url": "https://api.github.com/users/jon-tow/events{/privacy}",
"received_events_url": "https://api.github.com/users/jon-tow/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The CI is failing because it's outdated, but the task tags are updated on `master`, merging :)"
] | 1,649,218,072,000 | 1,649,777,908,000 | 1,649,758,876,000 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4103/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4103/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4103",
"html_url": "https://github.com/huggingface/datasets/pull/4103",
"diff_url": "https://github.com/huggingface/datasets/pull/4103.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4103.patch",
"merged_at": 1649758876000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4102 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4102/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4102/comments | https://api.github.com/repos/huggingface/datasets/issues/4102/events | https://github.com/huggingface/datasets/pull/4102 | 1,193,616,722 | PR_kwDODunzps41roGx | 4,102 | [hub] Fix `api.create_repo` call? | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4102). All of your documentation changes will be reflected on that endpoint.",
"Closing in favor of https://github.com/huggingface/datasets/pull/4106"
] | 1,649,186,512,000 | 1,649,752,906,000 | 1,649,752,906,000 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4102/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4102/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4102",
"html_url": "https://github.com/huggingface/datasets/pull/4102",
"diff_url": "https://github.com/huggingface/datasets/pull/4102.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4102.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4101 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4101/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4101/comments | https://api.github.com/repos/huggingface/datasets/issues/4101/events | https://github.com/huggingface/datasets/issues/4101 | 1,193,399,204 | I_kwDODunzps5HIdOk | 4,101 | How can I download only the train and test split for full numbers using load_dataset()? | {
"login": "Nakkhatra",
"id": 64383902,
"node_id": "MDQ6VXNlcjY0MzgzOTAy",
"avatar_url": "https://avatars.githubusercontent.com/u/64383902?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nakkhatra",
"html_url": "https://github.com/Nakkhatra",
"followers_url": "https://api.github.com/users/Nakkhatra/followers",
"following_url": "https://api.github.com/users/Nakkhatra/following{/other_user}",
"gists_url": "https://api.github.com/users/Nakkhatra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Nakkhatra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nakkhatra/subscriptions",
"organizations_url": "https://api.github.com/users/Nakkhatra/orgs",
"repos_url": "https://api.github.com/users/Nakkhatra/repos",
"events_url": "https://api.github.com/users/Nakkhatra/events{/privacy}",
"received_events_url": "https://api.github.com/users/Nakkhatra/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi! Can you please specify the full name of the dataset? IIRC `full_numbers` is one of the configs of the `svhn` dataset, and its generation is slow due to data being stored in binary Matlab files. Even if you specify a specific split, `datasets` downloads all of them, but we plan to fix that soon and only download the requested split.\r\n\r\nIf you are in a hurry, download the `svhn` script [here](`https://huggingface.co/datasets/svhn/blob/main/svhn.py`), remove [this code](https://huggingface.co/datasets/svhn/blob/main/svhn.py#L155-L162), and run:\r\n```python\r\nfrom datasets import load_dataset\r\ndset = load_dataset(\"path/to/your/local/script.py\", \"full_numbers\")\r\n```\r\n\r\nAnd to make loading easier in Colab, you can create a dataset repo on the Hub and upload the script there. Or push the script to Google Drive and mount the drive in Colab."
] | 1,649,174,415,000 | 1,649,250,541,000 | null | NONE | null | How can I download only the train and test split for full numbers using load_dataset()?
I do not need the extra split, and it will take 40 minutes just to download in Colab. I have very little time. Please help. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4101/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4101/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4100 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4100/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4100/comments | https://api.github.com/repos/huggingface/datasets/issues/4100/events | https://github.com/huggingface/datasets/pull/4100 | 1,193,393,959 | PR_kwDODunzps41q4ce | 4,100 | Improve RedCaps dataset card | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I find this preprocessing a bit too specific to add it as a method to `datasets` as it's only useful in the context of CV (and we support multiple modalities). However, I agree it would be great to move this code to another lib to avoid code duplication. Maybe we should create a package with preprocessing functions/transforms for this purpose?"
] | 1,649,174,234,000 | 1,649,858,934,000 | 1,649,858,546,000 | CONTRIBUTOR | null | This PR modifies the RedCaps card to:
* fix the formatting of the Point of Contact fields on the Hub
* speed up the image fetching logic (aligning it with the [img2dataset](https://github.com/rom1504/img2dataset) tool) and make it more robust (return None if **any** exception is thrown) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4100/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4100/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4100",
"html_url": "https://github.com/huggingface/datasets/pull/4100",
"diff_url": "https://github.com/huggingface/datasets/pull/4100.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4100.patch",
"merged_at": 1649858546000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4099 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4099/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4099/comments | https://api.github.com/repos/huggingface/datasets/issues/4099/events | https://github.com/huggingface/datasets/issues/4099 | 1,193,253,768 | I_kwDODunzps5HH5uI | 4,099 | UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128) | {
"login": "andreybond",
"id": 20210017,
"node_id": "MDQ6VXNlcjIwMjEwMDE3",
"avatar_url": "https://avatars.githubusercontent.com/u/20210017?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andreybond",
"html_url": "https://github.com/andreybond",
"followers_url": "https://api.github.com/users/andreybond/followers",
"following_url": "https://api.github.com/users/andreybond/following{/other_user}",
"gists_url": "https://api.github.com/users/andreybond/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andreybond/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andreybond/subscriptions",
"organizations_url": "https://api.github.com/users/andreybond/orgs",
"repos_url": "https://api.github.com/users/andreybond/repos",
"events_url": "https://api.github.com/users/andreybond/events{/privacy}",
"received_events_url": "https://api.github.com/users/andreybond/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @andreybond, thanks for reporting.\r\n\r\nUnfortunately, I'm not able to able to reproduce your issue:\r\n```python\r\nIn [4]: from datasets import load_dataset\r\n ...: datasets = load_dataset(\"nielsr/XFUN\", \"xfun.ja\")\r\n\r\nIn [5]: datasets\r\nOut[5]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'input_ids', 'bbox', 'labels', 'image', 'entities', 'relations'],\r\n num_rows: 194\r\n })\r\n validation: Dataset({\r\n features: ['id', 'input_ids', 'bbox', 'labels', 'image', 'entities', 'relations'],\r\n num_rows: 71\r\n })\r\n})\r\n```\r\n\r\nThe only reason I can imagine this issue may arise is if your default encoding is not \"UTF-8\" (and it is ASCII instead). This is usually the case on Windows machines; but you say your environment is a Linux machine. Maybe you change your machine default encoding?\r\n\r\nCould you please check this?\r\n```python\r\nIn [6]: import sys\r\n\r\nIn [7]: sys.getdefaultencoding()\r\nOut[7]: 'utf-8'\r\n```",
"I opened a PR in the original dataset loading script:\r\n- microsoft/unilm#677\r\n\r\nand fixed the corresponding dataset script on the Hub:\r\n- https://huggingface.co/datasets/nielsr/XFUN/commit/73ba5e026621e05fb756ae0f267eb49971f70ebd",
"import sys\r\nsys.getdefaultencoding()\r\n\r\nreturned: 'utf-8'\r\n\r\n---------------------\r\n\r\nI've just cloned master branch - your fix works! Thank you!"
] | 1,649,169,758,000 | 1,649,227,064,000 | 1,649,226,954,000 | NONE | null | ## Describe the bug
The error "UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128)" is thrown when downloading the dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
datasets = load_dataset("nielsr/XFUN", "xfun.ja")
```
## Expected results
Dataset should be downloaded without exceptions
## Actual results
Stack trace (for the second-time execution):
Downloading and preparing dataset xfun/xfun.ja to /root/.cache/huggingface/datasets/nielsr___xfun/xfun.ja/0.0.0/e06e948b673d1be9a390a83c05c10e49438bf03dd85ae9a4fe06f8747a724477...
Downloading data files: 100%
2/2 [00:00<00:00, 88.48it/s]
Extracting data files: 100%
2/2 [00:00<00:00, 79.60it/s]
UnicodeDecodeErrorTraceback (most recent call last)
<ipython-input-31-79c26bd1109c> in <module>
1 from datasets import load_dataset
2
----> 3 datasets = load_dataset("nielsr/XFUN", "xfun.ja")
/usr/local/lib/python3.6/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
/usr/local/lib/python3.6/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
604 )
605
--> 606 # By default, return all splits
607 if split is None:
608 split = {s: s for s in self.info.splits}
/usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos)
/usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
692 Args:
693 split: `datasets.Split` which subset of the data to read.
--> 694
695 Returns:
696 `Dataset`
/usr/local/lib/python3.6/dist-packages/datasets/builder.py in _prepare_split(self, split_generator, check_duplicate_keys)
/usr/local/lib/python3.6/dist-packages/tqdm/notebook.py in __iter__(self)
252 if not self.disable:
253 self.display(check_delay=False)
--> 254
255 def __iter__(self):
256 try:
/usr/local/lib/python3.6/dist-packages/tqdm/std.py in __iter__(self)
1183 for obj in iterable:
1184 yield obj
-> 1185 return
1186
1187 mininterval = self.mininterval
~/.cache/huggingface/modules/datasets_modules/datasets/nielsr--XFUN/e06e948b673d1be9a390a83c05c10e49438bf03dd85ae9a4fe06f8747a724477/XFUN.py in _generate_examples(self, filepaths)
140 logger.info("Generating examples from = %s", filepath)
141 with open(filepath[0], "r") as f:
--> 142 data = json.load(f)
143
144 for doc in data["documents"]:
/usr/lib/python3.6/json/__init__.py in load(fp, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
294
295 """
--> 296 return loads(fp.read(),
297 cls=cls, object_hook=object_hook,
298 parse_float=parse_float, parse_int=parse_int,
/usr/lib/python3.6/encodings/ascii.py in decode(self, input, final)
24 class IncrementalDecoder(codecs.IncrementalDecoder):
25 def decode(self, input, final=False):
---> 26 return codecs.ascii_decode(input, self.errors)[0]
27
28 class StreamWriter(Codec,codecs.StreamWriter):
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128)
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0 (but reproduced with many previous versions)
- Platform: Docker: Linux da5b74136d6b 5.3.0-1031-azure #32~18.04.1-Ubuntu SMP Mon Jun 22 15:27:23 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux ; Base docker image is : huggingface/transformers-pytorch-cpu
- Python version: 3.6.9
- PyArrow version: 6.0.1
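A minimal sketch of the fix applied to the loading script (see the comments above): pass an explicit encoding instead of relying on the platform default. The path is a placeholder; in the real script it comes from the download manager.

```python
import json

# Placeholder path standing in for the file handed over by the download manager.
filepath = "xfun.ja.train.json"

# The original script called open(filepath, "r") without an encoding argument,
# so the platform default (ASCII in this container) was used and the Japanese
# text failed to decode. An explicit UTF-8 encoding avoids that.
with open(filepath, "r", encoding="utf-8") as f:
    data = json.load(f)
```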
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4099/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4099/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4098 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4098/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4098/comments | https://api.github.com/repos/huggingface/datasets/issues/4098/events | https://github.com/huggingface/datasets/pull/4098 | 1,193,245,522 | PR_kwDODunzps41qXjo | 4,098 | Proposing WikiSplit metric card | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"A quick Github tip ;) To avoid running N times the CI, you can push all the changes at once: go to Files Changed tab, and on each suggestion there's a \"add to commit batch\" and then you can do one commit for all the suggestions you want to approve ;)"
] | 1,649,169,394,000 | 1,649,173,717,000 | 1,649,173,348,000 | CONTRIBUTOR | null | Pinging @lhoestq to ensure that my distinction between the dataset and the metric are clear :sweat_smile: | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4098/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4098/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4098",
"html_url": "https://github.com/huggingface/datasets/pull/4098",
"diff_url": "https://github.com/huggingface/datasets/pull/4098.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4098.patch",
"merged_at": 1649173348000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4097 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4097/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4097/comments | https://api.github.com/repos/huggingface/datasets/issues/4097/events | https://github.com/huggingface/datasets/pull/4097 | 1,193,205,751 | PR_kwDODunzps41qPEu | 4,097 | Updating FrugalScore metric card | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649,167,764,000 | 1,649,171,255,000 | 1,649,170,906,000 | CONTRIBUTOR | null | removing duplicate paragraph | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4097/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4097/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4097",
"html_url": "https://github.com/huggingface/datasets/pull/4097",
"diff_url": "https://github.com/huggingface/datasets/pull/4097.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4097.patch",
"merged_at": 1649170906000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4096 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4096/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4096/comments | https://api.github.com/repos/huggingface/datasets/issues/4096/events | https://github.com/huggingface/datasets/issues/4096 | 1,193,165,229 | I_kwDODunzps5HHkGt | 4,096 | Add support for streaming Zarr stores for hosted datasets | {
"login": "jacobbieker",
"id": 7170359,
"node_id": "MDQ6VXNlcjcxNzAzNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7170359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jacobbieker",
"html_url": "https://github.com/jacobbieker",
"followers_url": "https://api.github.com/users/jacobbieker/followers",
"following_url": "https://api.github.com/users/jacobbieker/following{/other_user}",
"gists_url": "https://api.github.com/users/jacobbieker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jacobbieker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jacobbieker/subscriptions",
"organizations_url": "https://api.github.com/users/jacobbieker/orgs",
"repos_url": "https://api.github.com/users/jacobbieker/repos",
"events_url": "https://api.github.com/users/jacobbieker/events{/privacy}",
"received_events_url": "https://api.github.com/users/jacobbieker/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @jacobbieker, thanks for your request and study of possible alternatives.\r\n\r\nWe are very interested in finding a way to make `datasets` useful to you.\r\n\r\nLooking at the Zarr docs, I saw that among its storage alternatives, there is the ZIP file format: https://zarr.readthedocs.io/en/stable/api/storage.html#zarr.storage.ZipStore\r\n\r\nThis might be convenient for many reasons:\r\n- On the one hand, we avoid the Git issue with huge number of small files: chunks files are compressed into a single ZIP file\r\n- On the other hand, the ZIP file format is specially suited for streaming data because it allows random access to its component files (i.e. it supports random access to its chunks)\r\n\r\nAnyway, I think that a Python loading script will be necessary: you need to implement additional logic to select certain chunks (based on date or other criteria).\r\n\r\nPlease, let me know if this makes sense to you.",
"Ah okay, I missed the option of zip files for zarr, I'll try that with our repos and see if it works! Thanks a lot!",
"Hi @jacobbieker, does the Zarr ZipStore work for your use case?",
"Hi,\r\n\r\nYes, it seems to! I got it working for https://huggingface.co/datasets/openclimatefix/mrms thanks for the help! ",
"On behalf of the Zarr developers, let me say THANK YOU for working to support Zarr on HF! 🙏 Zarr is a 100% open-source and community driven project (fiscally sponsored by NumFocus). We see it as an ideal format for ML training datasets, particularly in scientific domains.\r\n\r\nI think the solution of zipping the Zarr store is a reasonable way to balance the constraints of Git LFS with the structure of Zarr.\r\n\r\nIt would be amazing to get something on the [Hugging Face Datasets Docs](https://huggingface.co/docs/datasets/index) about how to best work with Zarr. Let me know if there's a way I could help with that effort.",
"Also just noting here that I was able to lazily open @jacobbieker's dataset over the internet from HF hub 🚀 !\r\n\r\n```python\r\nimport xarray as xr\r\nurl = \"https://huggingface.co/datasets/openclimatefix/mrms/resolve/main/data/2016_001.zarr.zip\"\r\nzip_url = 'zip:///::' + url\r\nds = xr.open_dataset(zip_url, engine='zarr', chunks={})\r\n```\r\n\r\n<img width=\"740\" alt=\"image\" src=\"https://user-images.githubusercontent.com/1197350/164508663-bc75cdc0-734d-44f4-9562-2877ecfdf433.png\">\r\n",
"However, I wasn't able to get streaming working using the Datasets api:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\nds = load_dataset(\"openclimatefix/mrms\", streaming=True, split='train')\r\nitem = next(iter(ds))\r\n```\r\n\r\n<details>\r\n<summary>FileNotFoundError traceback</summary>\r\n\r\n```\r\nNo config specified, defaulting to: mrms/2021\r\nzip://::https://huggingface.co/datasets/openclimatefix/mrms/resolve/main/data/2016_001.zarr.zip\r\ndata/2016_001.zarr.zip\r\nzip://2016_001.zarr.zip::https://huggingface.co/datasets/openclimatefix/mrms/resolve/main/data/2016_001.zarr.zip\r\n---------------------------------------------------------------------------\r\nFileNotFoundError Traceback (most recent call last)\r\nInput In [1], in <cell line: 3>()\r\n 1 from datasets import load_dataset\r\n 2 ds = load_dataset(\"openclimatefix/mrms\", streaming=True, split='train')\r\n----> 3 item = next(iter(ds))\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/datasets/iterable_dataset.py:497, in IterableDataset.__iter__(self)\r\n 496 def __iter__(self):\r\n--> 497 for key, example in self._iter():\r\n 498 if self.features:\r\n 499 # we encode the example for ClassLabel feature types for example\r\n 500 encoded_example = self.features.encode_example(example)\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/datasets/iterable_dataset.py:494, in IterableDataset._iter(self)\r\n 492 else:\r\n 493 ex_iterable = self._ex_iterable\r\n--> 494 yield from ex_iterable\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/datasets/iterable_dataset.py:87, in ExamplesIterable.__iter__(self)\r\n 86 def __iter__(self):\r\n---> 87 yield from self.generate_examples_fn(**self.kwargs)\r\n\r\nFile ~/.cache/huggingface/modules/datasets_modules/datasets/openclimatefix--mrms/2a6f697014d7eb3caf586ca137d47ca38785ae2fe36248611b021f8248b59936/mrms.py:150, in MRMS._generate_examples(self, filepath, split)\r\n 147 filepath = \"[https://huggingface.co/datasets/openclimatefix/mrms/resolve/main/data/2016_001.zarr.zip](https://huggingface.co/datasets/openclimatefix/mrms/resolve/main/data/2016_001.zarr.zip%3C/span%3E%3Cspan) style=\"color:rgb(175,0,0)\">\"\r\n 148 # TODO: This method handles input defined in _split_generators to yield (key, example) tuples from the dataset.\r\n 149 # The `key` is for legacy reasons (tfds) and is not important in itself, but must be unique for each example.\r\n--> 150 with zarr.storage.FSStore(fsspec.open(\"zip::\" + filepath, mode='r'), mode='r') as store:\r\n 151 data = xr.open_zarr(store)\r\n 152 for key, row in enumerate(data[\"time\"].values):\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/zarr/storage.py:1120, in FSStore.__init__(self, url, normalize_keys, key_separator, mode, exceptions, dimension_separator, **storage_options)\r\n 1117 import fsspec\r\n 1118 self.normalize_keys = normalize_keys\r\n-> 1120 protocol, _ = fsspec.core.split_protocol(url)\r\n 1121 # set auto_mkdir to True for local file system\r\n 1122 if protocol in (None, \"file\") and not storage_options.get(\"auto_mkdir\"):\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/fsspec/core.py:514, in split_protocol(urlpath)\r\n 512 def split_protocol(urlpath):\r\n 513 \"\"\"Return protocol, path pair\"\"\"\r\n--> 514 urlpath = stringify_path(urlpath)\r\n 515 if \"://\" in urlpath:\r\n 516 protocol, path = urlpath.split(\"://\", 1)\r\n\r\nFile 
/opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/fsspec/utils.py:315, in stringify_path(filepath)\r\n 313 return filepath\r\n 314 elif hasattr(filepath, \"__fspath__\"):\r\n--> 315 return filepath.__fspath__()\r\n 316 elif isinstance(filepath, pathlib.Path):\r\n 317 return str(filepath)\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/fsspec/core.py:98, in OpenFile.__fspath__(self)\r\n 96 def __fspath__(self):\r\n 97 # may raise if cannot be resolved to local file\r\n---> 98 return self.open().__fspath__()\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/fsspec/core.py:140, in OpenFile.open(self)\r\n 132 def open(self):\r\n 133 \"\"\"Materialise this as a real open file without context\r\n 134 \r\n 135 The file should be explicitly closed to avoid enclosed file\r\n (...)\r\n 138 been deleted; but a with-context is better style.\r\n 139 \"\"\"\r\n--> 140 out = self.__enter__()\r\n 141 closer = out.close\r\n 142 fobjects = self.fobjects.copy()[:-1]\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/fsspec/core.py:103, in OpenFile.__enter__(self)\r\n 100 def __enter__(self):\r\n 101 mode = self.mode.replace(\"t\", \"\").replace(\"b\", \"\") + \"b\"\r\n--> 103 f = self.fs.open(self.path, mode=mode)\r\n 105 self.fobjects = [f]\r\n 107 if self.compression is not None:\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/fsspec/spec.py:1009, in AbstractFileSystem.open(self, path, mode, block_size, cache_options, compression, **kwargs)\r\n 1007 else:\r\n 1008 ac = kwargs.pop(\"autocommit\", not self._intrans)\r\n-> 1009 f = self._open(\r\n 1010 path,\r\n 1011 mode=mode,\r\n 1012 block_size=block_size,\r\n 1013 autocommit=ac,\r\n 1014 cache_options=cache_options,\r\n 1015 **kwargs,\r\n 1016 )\r\n 1017 if compression is not None:\r\n 1018 from fsspec.compression import compr\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/fsspec/implementations/zip.py:96, in ZipFileSystem._open(self, path, mode, block_size, autocommit, cache_options, **kwargs)\r\n 94 if mode != \"rb\":\r\n 95 raise NotImplementedError\r\n---> 96 info = self.info(path)\r\n 97 out = self.zip.open(path, \"r\")\r\n 98 out.size = info[\"size\"]\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/fsspec/archive.py:42, in AbstractArchiveFileSystem.info(self, path, **kwargs)\r\n 40 return self.dir_cache[path + \"/\"]\r\n 41 else:\r\n---> 42 raise FileNotFoundError(path)\r\n\r\nFileNotFoundError:\r\n```\r\n\r\n</details>\r\n\r\nIs this a bug? Or am I just doing it wrong...",
"I'm still messing around with that dataset, so the data might have moved. I currently have each year of MRMS precipitation rate data as it's own zarr, but as they are quite large (on order of 100GB each) I'm working to split them into single days, and as such they are still being moved around, I was just trying to get a proof of concept working originally. ",
"I've mostly finished rearranging the data now and uploading some more, so this works now:\r\n```python\r\nimport datasets\r\nds = datasets.load_dataset(\"openclimatefix/mrms\", streaming=True, split=\"train\")\r\nitem = next(iter(ds))\r\nprint(item.keys())\r\nprint(item[\"timestamp\"])\r\n```\r\n\r\nThe MRMS data now goes most of 2016-2022, with quite a few gaps I'm working on filling in"
] | 1,649,165,912,000 | 1,650,873,852,000 | 1,650,528,778,000 | NONE | null | **Is your feature request related to a problem? Please describe.**
Lots of geospatial data is stored in the Zarr format. This format works well for n-dimensional data and coordinates, and can have good compression. Unfortunately, HF datasets doesn't support streaming data in Zarr format as far as I can tell. Zarr stores are designed to be easily streamed in from cloud storage, especially with xarray and fsspec. Since geospatial data tends to be very large (on the order of terabytes or tens of terabytes for a single dataset), it can be difficult for users to store the dataset locally. Just adding Zarr stores with HF git doesn't work well (see https://github.com/huggingface/datasets/issues/3823) as Zarr splits the data into lots of small chunks for fast loading, and that doesn't work well with git. I've somewhat gotten around that issue by tarring each Zarr store and uploading it as a single file, which seems to be working (see https://huggingface.co/datasets/openclimatefix/gfs-reforecast for example data files, although the script isn't written yet). This does mean that streaming doesn't quite work though. On the other hand, in https://huggingface.co/datasets/openclimatefix/eumetsat_uk_hrv we stream in a Zarr store from a public GCP bucket quite easily.
**Describe the solution you'd like**
A way to upload Zarr stores for hosted datasets so that we can stream it with xarray and fsspec.
**Describe alternatives you've considered**
Tarring each Zarr store individually and just extracting it in the dataset script -> Downside: this is a lot of data that probably doesn't fit locally for many potential users.
Pre-preparing examples in a format like Parquet -> Would use a lot more storage and offer a lot less flexibility; in eumetsat_uk_hrv, we use the one Zarr store for multiple different configurations.
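For context, a short sketch of the access pattern reported as working in the comments above: lazily opening a zipped Zarr store straight from the Hub with xarray and fsspec (requires `zarr`, `fsspec` and `aiohttp`; the URL is one of the files mentioned in the thread).

```python
import xarray as xr

# Zipped Zarr store hosted on the Hub (from the openclimatefix/mrms thread above).
url = "https://huggingface.co/datasets/openclimatefix/mrms/resolve/main/data/2016_001.zarr.zip"

# Chained fsspec URL: fetch the zip over HTTP and expose its contents as a Zarr store.
ds = xr.open_dataset("zip:///::" + url, engine="zarr", chunks={})
print(ds)
```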
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4096/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4096/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4095 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4095/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4095/comments | https://api.github.com/repos/huggingface/datasets/issues/4095/events | https://github.com/huggingface/datasets/pull/4095 | 1,192,573,353 | PR_kwDODunzps41oIFI | 4,095 | fix typo in rename_column error message | {
"login": "hunterlang",
"id": 680821,
"node_id": "MDQ6VXNlcjY4MDgyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/680821?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hunterlang",
"html_url": "https://github.com/hunterlang",
"followers_url": "https://api.github.com/users/hunterlang/followers",
"following_url": "https://api.github.com/users/hunterlang/following{/other_user}",
"gists_url": "https://api.github.com/users/hunterlang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hunterlang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hunterlang/subscriptions",
"organizations_url": "https://api.github.com/users/hunterlang/orgs",
"repos_url": "https://api.github.com/users/hunterlang/repos",
"events_url": "https://api.github.com/users/hunterlang/events{/privacy}",
"received_events_url": "https://api.github.com/users/hunterlang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4095). All of your documentation changes will be reflected on that endpoint."
] | 1,649,130,956,000 | 1,649,148,886,000 | 1,649,148,353,000 | CONTRIBUTOR | null | I feel bad submitting such a tiny change as a PR but it confused me today 😄 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4095/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4095/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4095",
"html_url": "https://github.com/huggingface/datasets/pull/4095",
"diff_url": "https://github.com/huggingface/datasets/pull/4095.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4095.patch",
"merged_at": 1649148353000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4094 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4094/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4094/comments | https://api.github.com/repos/huggingface/datasets/issues/4094/events | https://github.com/huggingface/datasets/issues/4094 | 1,192,534,414 | I_kwDODunzps5HFKGO | 4,094 | Helo Mayfrends | {
"login": "Budigming",
"id": 102933353,
"node_id": "U_kgDOBiKjaQ",
"avatar_url": "https://avatars.githubusercontent.com/u/102933353?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Budigming",
"html_url": "https://github.com/Budigming",
"followers_url": "https://api.github.com/users/Budigming/followers",
"following_url": "https://api.github.com/users/Budigming/following{/other_user}",
"gists_url": "https://api.github.com/users/Budigming/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Budigming/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Budigming/subscriptions",
"organizations_url": "https://api.github.com/users/Budigming/orgs",
"repos_url": "https://api.github.com/users/Budigming/repos",
"events_url": "https://api.github.com/users/Budigming/events{/privacy}",
"received_events_url": "https://api.github.com/users/Budigming/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [] | 1,649,126,577,000 | 1,649,143,002,000 | 1,649,143,002,000 | NONE | null | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons to have this dataset*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4094/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4094/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4093 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4093/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4093/comments | https://api.github.com/repos/huggingface/datasets/issues/4093/events | https://github.com/huggingface/datasets/issues/4093 | 1,192,523,161 | I_kwDODunzps5HFHWZ | 4,093 | elena-soare/crawled-ecommerce: missing dataset | {
"login": "seevaratnam",
"id": 17519354,
"node_id": "MDQ6VXNlcjE3NTE5MzU0",
"avatar_url": "https://avatars.githubusercontent.com/u/17519354?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/seevaratnam",
"html_url": "https://github.com/seevaratnam",
"followers_url": "https://api.github.com/users/seevaratnam/followers",
"following_url": "https://api.github.com/users/seevaratnam/following{/other_user}",
"gists_url": "https://api.github.com/users/seevaratnam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/seevaratnam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/seevaratnam/subscriptions",
"organizations_url": "https://api.github.com/users/seevaratnam/orgs",
"repos_url": "https://api.github.com/users/seevaratnam/repos",
"events_url": "https://api.github.com/users/seevaratnam/events{/privacy}",
"received_events_url": "https://api.github.com/users/seevaratnam/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"It's a bug! Thanks for reporting, I'm looking at it.",
"By the way, the error on our part is due to the huge size of every row (~90MB). The dataset viewer does not support such big dataset rows for the moment.\r\nAnyway, we're working to give a hint about this in the dataset viewer.",
"Fixed. See https://huggingface.co/datasets/elena-soare/crawled-ecommerce/viewer/elena-soare--crawled-ecommerce/train.\r\n\r\n<img width=\"1552\" alt=\"Capture d’écran 2022-04-12 à 11 23 51\" src=\"https://user-images.githubusercontent.com/1676121/162929722-2e2b80e2-154a-4b61-87bd-e341bd6c46e6.png\">\r\n\r\nThanks for reporting!"
] | 1,649,125,519,000 | 1,649,756,093,000 | 1,649,756,093,000 | NONE | null | elena-soare/crawled-ecommerce
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4093/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4093/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4092 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4092/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4092/comments | https://api.github.com/repos/huggingface/datasets/issues/4092/events | https://github.com/huggingface/datasets/pull/4092 | 1,192,499,903 | PR_kwDODunzps41n40R | 4,092 | Fix dataset `amazon_us_reviews` metadata - 4/4/2022 | {
"login": "trentonstrong",
"id": 191985,
"node_id": "MDQ6VXNlcjE5MTk4NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/191985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/trentonstrong",
"html_url": "https://github.com/trentonstrong",
"followers_url": "https://api.github.com/users/trentonstrong/followers",
"following_url": "https://api.github.com/users/trentonstrong/following{/other_user}",
"gists_url": "https://api.github.com/users/trentonstrong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/trentonstrong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/trentonstrong/subscriptions",
"organizations_url": "https://api.github.com/users/trentonstrong/orgs",
"repos_url": "https://api.github.com/users/trentonstrong/repos",
"events_url": "https://api.github.com/users/trentonstrong/events{/privacy}",
"received_events_url": "https://api.github.com/users/trentonstrong/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"cc: @albertvillanova just FYI"
] | 1,649,122,785,000 | 1,649,421,341,000 | 1,649,420,971,000 | CONTRIBUTOR | null | Fixes #4048 by running `datasets-cli test` to reprocess data and regenerate metadata. Additionally I've updated the README to include up-to-date counts for the subsets. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4092/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4092/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4092",
"html_url": "https://github.com/huggingface/datasets/pull/4092",
"diff_url": "https://github.com/huggingface/datasets/pull/4092.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4092.patch",
"merged_at": 1649420970000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4091 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4091/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4091/comments | https://api.github.com/repos/huggingface/datasets/issues/4091/events | https://github.com/huggingface/datasets/issues/4091 | 1,192,023,855 | I_kwDODunzps5HDNcv | 4,091 | Build a Dataset One Example at a Time Without Loading All Data Into Memory | {
"login": "aravind-tonita",
"id": 99340348,
"node_id": "U_kgDOBevQPA",
"avatar_url": "https://avatars.githubusercontent.com/u/99340348?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aravind-tonita",
"html_url": "https://github.com/aravind-tonita",
"followers_url": "https://api.github.com/users/aravind-tonita/followers",
"following_url": "https://api.github.com/users/aravind-tonita/following{/other_user}",
"gists_url": "https://api.github.com/users/aravind-tonita/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aravind-tonita/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aravind-tonita/subscriptions",
"organizations_url": "https://api.github.com/users/aravind-tonita/orgs",
"repos_url": "https://api.github.com/users/aravind-tonita/repos",
"events_url": "https://api.github.com/users/aravind-tonita/events{/privacy}",
"received_events_url": "https://api.github.com/users/aravind-tonita/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi! Yes, the problem with `add_item` is that it keeps examples in memory, so you are left with these options:\r\n* writing a dataset loading script in which you iterate over `custom_example_dict_streamer` and yield the examples (in `_generate examples`)\r\n* storing the data in a JSON/CSV/Parquet/TXT file and using `Dataset.from_{format}`\r\n* using `add_item` + `save_to_disk` on smaller chunks: \r\n ```python\r\n from datasets import Dataset, concatenate_datasets\r\n MAX_SAMPLES_IN_MEMORY = 1000\r\n samples_in_dset = 0\r\n dset = Dataset.from_dict({\"col1\": [], \"col2\": []}) # empty dataset\r\n path_to_save_dir = \"path/to/save/dir\"\r\n num_chunks = 0\r\n for example_dict in custom_example_dict_streamer(\"/path/to/raw/data\"):\r\n dset = dset.add_item(example_dict)\r\n samples_in_dset += 1\r\n if samples_in_dset == MAX_SAMPLES_IN_MEMORY:\r\n samples_in_dset = 0\r\n dset.save_to_disk(f\"{path_to_save_dir}{num_chunks}\")\r\n num_chunks =+ 1\r\n dset = Dataset.from_dict({\"col1\": [], \"col2\": []}) # empty dataset\r\n if samples_in_dset > 0:\r\n dset.save_to_disk(f\"{path_to_save_dir}{num_chunks}\")\r\n num_chunks =+ 1\r\n loaded_dsets = [] # memory-mapped\r\n for chunk_num in range(num_chunks):\r\n dset = Dataset.load_from_disk(f\"{path_to_save_dir}{chunk_num}\") \r\n loaded_dsets.append(dset)\r\n final_dset = concatenate_datasets(dset)\r\n ```\r\n If you still have issues with this approach, you can try to delete unused datasets with `gc.collect()` to free some memory. ",
"This is really elegant, thank you @mariosasko! I will try this."
] | 1,649,089,164,000 | 1,650,465,060,000 | 1,650,465,060,000 | NONE | null | **Is your feature request related to a problem? Please describe.**
I have a very large dataset stored on disk in a custom format. I have some custom code that reads one data example at a time and yields it in the form of a dictionary. I want to construct a `Dataset` with all examples, and then save it to disk. I later want to load the saved `Dataset` and use it like any other HuggingFace dataset, get splits, wrap it in a PyTorch `DataLoader`, etc. **Crucially, I do not ever want to materialize all the data in memory while building the dataset.**
**Describe the solution you'd like**
I would like to be able to do something like the following. Notice how each example is read and then immediately added to the dataset. We do not store all the data in memory when constructing the `Dataset`. If it helps, I will know the schema of my dataset before hand.
```
# Initialize an empty Dataset, possibly from a known schema.
dataset = Dataset()
# Read in examples one by one using a custom data streamer.
for example_dict in custom_example_dict_streamer("/path/to/raw/data"):
    # Add this example to the dataset but do not store it in memory.
    dataset.add_item(example_dict)
# Save the final dataset to disk as an Arrow-backed dataset.
dataset.save_to_disk("/path/to/dataset")
...
# I'd like to be able to later `load_from_disk` and use the loaded Dataset
# just like any other memory-mapped pyarrow-backed HuggingFace dataset...
loaded_dataset = Dataset.load_from_disk("/path/to/dataset")
loaded_dataset.set_format(type="torch", columns=["foo", "bar", "baz"])
dataloader = torch.utils.data.DataLoader(loaded_dataset, batch_size=16)
...
```
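(For reference, a minimal, hypothetical sketch of the loading-script route suggested in the comments, which yields examples one at a time so the full dataset never has to be held in memory. The feature names and the streamer are placeholders taken from this issue, not a real implementation.)
```python
# Hedged sketch of a GeneratorBasedBuilder loading script; feature names and
# the custom streamer below are illustrative placeholders only.
import datasets


def custom_example_dict_streamer(raw_path):
    # Stand-in for the custom reader described in this issue: it should yield
    # one example dict at a time from the raw data on disk.
    yield {"foo": "example", "bar": 0.0, "baz": 1}


class MyStreamedDataset(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features(
                {
                    "foo": datasets.Value("string"),
                    "bar": datasets.Value("float32"),
                    "baz": datasets.Value("int64"),
                }
            )
        )

    def _split_generators(self, dl_manager):
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"raw_path": "/path/to/raw/data"},
            )
        ]

    def _generate_examples(self, raw_path):
        # Examples are yielded one by one and written to Arrow incrementally,
        # so the whole dataset never has to fit in memory.
        for idx, example_dict in enumerate(custom_example_dict_streamer(raw_path)):
            yield idx, example_dict
```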
**Describe alternatives you've considered**
I initially tried to read all the data into memory, construct a Pandas DataFrame and then call `Dataset.from_pandas`. This would not work as it requires storing all the data in memory. It seems that there is an `add_item` method already -- I tried to implement something like the desired API written above, but I've not been able to initialize an empty `Dataset` (this seems to require several layers of constructing `datasets.table.Table` which requires constructing a `pyarrow.lib.Table`, etc). I also considered writing my data to multiple sharded CSV files or JSON files and then using `from_csv` or `from_json`. I'd prefer not to do this because (1) I'd prefer to avoid the intermediate step of creating these temp CSV/JSON files and (2) I'm not sure if `from_csv` and `from_json` use memory-mapping.
Do you have any suggestions on how I'd be able to achieve this use case? Does something already exist to support this? Thank you very much in advance! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4091/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4091/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4090 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4090/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4090/comments | https://api.github.com/repos/huggingface/datasets/issues/4090/events | https://github.com/huggingface/datasets/pull/4090 | 1,191,956,734 | PR_kwDODunzps41mEs5 | 4,090 | Avoid writing empty license files | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649,085,817,000 | 1,649,335,605,000 | 1,649,335,243,000 | MEMBER | null | This PR avoids the creation of empty `LICENSE` files. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4090/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4090/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4090",
"html_url": "https://github.com/huggingface/datasets/pull/4090",
"diff_url": "https://github.com/huggingface/datasets/pull/4090.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4090.patch",
"merged_at": 1649335243000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4089 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4089/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4089/comments | https://api.github.com/repos/huggingface/datasets/issues/4089/events | https://github.com/huggingface/datasets/pull/4089 | 1,191,915,196 | PR_kwDODunzps41l7yd | 4,089 | Create metric card for Frugal Score | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649,084,029,000 | 1,649,168,086,000 | 1,649,167,610,000 | CONTRIBUTOR | null | Proposing metric card for Frugal Score.
@albertvillanova or @lhoestq -- there are certain aspects that I'm not 100% sure on (such as how exactly the distillation between BertScore and FrugalScore is done) -- so if you find that something isn't clear, please let me know! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4089/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4089/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4089",
"html_url": "https://github.com/huggingface/datasets/pull/4089",
"diff_url": "https://github.com/huggingface/datasets/pull/4089.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4089.patch",
"merged_at": 1649167610000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4088 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4088/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4088/comments | https://api.github.com/repos/huggingface/datasets/issues/4088/events | https://github.com/huggingface/datasets/pull/4088 | 1,191,901,172 | PR_kwDODunzps41l4yE | 4,088 | Remove unused legacy Beam utils | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649,083,431,000 | 1,649,172,207,000 | 1,649,171,861,000 | MEMBER | null | This PR removes unused legacy custom `WriteToParquet`, once official Apache Beam includes the patch since version 2.22.0:
- Patch PR: https://github.com/apache/beam/pull/11699
- Issue: https://issues.apache.org/jira/browse/BEAM-10022
In relation with:
- #204 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4088/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4088/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4088",
"html_url": "https://github.com/huggingface/datasets/pull/4088",
"diff_url": "https://github.com/huggingface/datasets/pull/4088.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4088.patch",
"merged_at": 1649171861000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4087 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4087/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4087/comments | https://api.github.com/repos/huggingface/datasets/issues/4087/events | https://github.com/huggingface/datasets/pull/4087 | 1,191,819,805 | PR_kwDODunzps41lnfO | 4,087 | Fix BeamWriter output Parquet file | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649,080,010,000 | 1,649,170,840,000 | 1,649,170,488,000 | MEMBER | null | Since now, the `BeamWriter` saved a Parquet file with a simplified schema, where each field value was serialized to JSON. That resulted in Parquet files larger than Arrow files.
This PR:
- writes Parquet file preserving original schema and without serialization, thus avoiding serialization overhead and resulting in a smaller output file size.
- fixes `parquet_to_arrow` function | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4087/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4087/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4087",
"html_url": "https://github.com/huggingface/datasets/pull/4087",
"diff_url": "https://github.com/huggingface/datasets/pull/4087.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4087.patch",
"merged_at": 1649170488000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4086 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4086/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4086/comments | https://api.github.com/repos/huggingface/datasets/issues/4086/events | https://github.com/huggingface/datasets/issues/4086 | 1,191,373,374 | I_kwDODunzps5HAuo- | 4,086 | Dataset viewer issue for McGill-NLP/feedbackQA | {
"login": "cslizc",
"id": 54827718,
"node_id": "MDQ6VXNlcjU0ODI3NzE4",
"avatar_url": "https://avatars.githubusercontent.com/u/54827718?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cslizc",
"html_url": "https://github.com/cslizc",
"followers_url": "https://api.github.com/users/cslizc/followers",
"following_url": "https://api.github.com/users/cslizc/following{/other_user}",
"gists_url": "https://api.github.com/users/cslizc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cslizc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cslizc/subscriptions",
"organizations_url": "https://api.github.com/users/cslizc/orgs",
"repos_url": "https://api.github.com/users/cslizc/repos",
"events_url": "https://api.github.com/users/cslizc/events{/privacy}",
"received_events_url": "https://api.github.com/users/cslizc/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @cslizc, thanks for reporting.\r\n\r\nI have just forced the refresh of the corresponding cache and the preview is working now.",
"thank you so much"
] | 1,649,057,240,000 | 1,649,111,393,000 | 1,649,059,305,000 | NONE | null | ## Dataset viewer issue for '*McGill-NLP/feedbackQA*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/McGill-NLP/feedbackQA)*
*short description of the issue*
The dataset can be loaded correctly with `load_dataset` but the preview doesn't work. Error message:
```
Status code: 400
Exception: Status400Error
Message: Not found. Maybe the cache is missing, or maybe the dataset does not exist.
```
Am I the one who added this dataset ? Yes
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4086/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4086/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4085 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4085/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4085/comments | https://api.github.com/repos/huggingface/datasets/issues/4085/events | https://github.com/huggingface/datasets/issues/4085 | 1,190,621,345 | I_kwDODunzps5G93Ch | 4,085 | datasets.set_progress_bar_enabled(False) not working in datasets v2 | {
"login": "virilo",
"id": 3381112,
"node_id": "MDQ6VXNlcjMzODExMTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3381112?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/virilo",
"html_url": "https://github.com/virilo",
"followers_url": "https://api.github.com/users/virilo/followers",
"following_url": "https://api.github.com/users/virilo/following{/other_user}",
"gists_url": "https://api.github.com/users/virilo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/virilo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/virilo/subscriptions",
"organizations_url": "https://api.github.com/users/virilo/orgs",
"repos_url": "https://api.github.com/users/virilo/repos",
"events_url": "https://api.github.com/users/virilo/events{/privacy}",
"received_events_url": "https://api.github.com/users/virilo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Now, I can't find any reference to set_progress_bar_enabled in the code.\r\n\r\nI think it have been deleted",
"Hi @virilo,\r\n\r\nPlease note that since `datasets` version 2.0.0, we have aligned with `transformers` the management of the progress bar (among other things):\r\n- #3897\r\n\r\nNow, you should update your code to use `datasets.logging.disable_progress_bar`.\r\n\r\nYou have more info in our docs: [Logging methods](https://huggingface.co/docs/datasets/package_reference/logging_methods)",
"One important thing for beginner like me is: from datasets.utils.logging import disable_progress_bar\r\nDo not forget the 'utils' or you will waste a long time like me...."
] | 1,648,903,210,000 | 1,663,381,083,000 | 1,649,054,674,000 | NONE | null | ## Describe the bug
datasets.set_progress_bar_enabled(False) not working in datasets v2
## Steps to reproduce the bug
```python
datasets.set_progress_bar_enabled(False)
```
## Expected results
datasets not using any progress bar
## Actual results
AttributeError: module 'datasets' has no attribute 'set_progress_bar_enabled'
## Environment info
datasets version 2
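For reference, a minimal sketch of the replacement call pointed out in the comments — in `datasets` >= 2.0, progress-bar control moved to the logging utilities:
```python
# Minimal sketch based on the fix described in the comments:
# datasets >= 2.0 toggles progress bars via its logging utilities.
from datasets.utils.logging import disable_progress_bar, enable_progress_bar

disable_progress_bar()   # turn progress bars off
# ... run dataset operations ...
enable_progress_bar()    # turn them back on if needed
```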
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4085/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4085/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4084 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4084/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4084/comments | https://api.github.com/repos/huggingface/datasets/issues/4084/events | https://github.com/huggingface/datasets/issues/4084 | 1,190,060,415 | I_kwDODunzps5G7uF_ | 4,084 | Errors in `Train with Datasets` Tensorflow code section on Huggingface.co | {
"login": "blackhat-coder",
"id": 57095771,
"node_id": "MDQ6VXNlcjU3MDk1Nzcx",
"avatar_url": "https://avatars.githubusercontent.com/u/57095771?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/blackhat-coder",
"html_url": "https://github.com/blackhat-coder",
"followers_url": "https://api.github.com/users/blackhat-coder/followers",
"following_url": "https://api.github.com/users/blackhat-coder/following{/other_user}",
"gists_url": "https://api.github.com/users/blackhat-coder/gists{/gist_id}",
"starred_url": "https://api.github.com/users/blackhat-coder/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/blackhat-coder/subscriptions",
"organizations_url": "https://api.github.com/users/blackhat-coder/orgs",
"repos_url": "https://api.github.com/users/blackhat-coder/repos",
"events_url": "https://api.github.com/users/blackhat-coder/events{/privacy}",
"received_events_url": "https://api.github.com/users/blackhat-coder/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @blackhat-coder, thanks for reporting.\r\n\r\nPlease note that the `transformers` library updated their data collators API last year (version 4.10.0):\r\n- huggingface/transformers#13105\r\n\r\nnow requiring to pass `return_tensors` argument at Data Collator instantiation.\r\n\r\nAnd therefore, we also updated in the `datasets` library documentation all the examples using `transformers` data collators.\r\n\r\nIf you would like to follow our examples, please update your installed `transformers` version:\r\n```\r\npip install -U transformers\r\n```"
] | 1,648,832,567,000 | 1,649,057,077,000 | 1,649,056,891,000 | NONE | null | ## Describe the bug
Hi
### Error 1
Running the Tensforlow code on [Huggingface](https://huggingface.co/docs/datasets/use_dataset) gives a TypeError: __init__() got an unexpected keyword argument 'return_tensors'
### Error 2
`DataCollatorWithPadding` isn't imported
## Steps to reproduce the bug
```python
import tensorflow as tf
from datasets import load_dataset
from transformers import AutoTokenizer
dataset = load_dataset('glue', 'mrpc', split='train')
tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
dataset = dataset.map(lambda e: tokenizer(e['sentence1'], truncation=True, padding='max_length'), batched=True)
data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf")
train_dataset = dataset["train"].to_tf_dataset(
columns=['input_ids', 'token_type_ids', 'attention_mask', 'label'],
shuffle=True,
batch_size=16,
collate_fn=data_collator,
)
```
This is the same code on Huggingface.co
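For reference, a hedged sketch of the corrected collator setup: it adds the missing `DataCollatorWithPadding` import and assumes `transformers` >= 4.10.0, where the `return_tensors` argument was introduced (see the maintainer's reply in the comments).
```python
# Hedged sketch of the corrected setup: adds the missing import and assumes
# transformers >= 4.10.0, whose data collators accept `return_tensors`.
from transformers import AutoTokenizer, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf")
```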
## Actual results
TypeError: __init__() got an unexpected keyword argument 'return_tensors'
## Environment info
- `datasets` version: 2.0.0
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.9.7
- PyArrow version: 6.0.0
- Pandas version: 1.4.1
> | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4084/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4084/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4083 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4083/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4083/comments | https://api.github.com/repos/huggingface/datasets/issues/4083/events | https://github.com/huggingface/datasets/pull/4083 | 1,190,025,878 | PR_kwDODunzps41gEbu | 4,083 | Add SacreBLEU Metric Card | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,830,296,000 | 1,649,796,300,000 | 1,649,795,920,000 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4083/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4083/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4083",
"html_url": "https://github.com/huggingface/datasets/pull/4083",
"diff_url": "https://github.com/huggingface/datasets/pull/4083.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4083.patch",
"merged_at": 1649795920000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4082 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4082/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4082/comments | https://api.github.com/repos/huggingface/datasets/issues/4082/events | https://github.com/huggingface/datasets/pull/4082 | 1,189,965,845 | PR_kwDODunzps41f3fb | 4,082 | Add chrF(++) Metric Card | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,827,132,000 | 1,649,796,235,000 | 1,649,795,886,000 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4082/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4082/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4082",
"html_url": "https://github.com/huggingface/datasets/pull/4082",
"diff_url": "https://github.com/huggingface/datasets/pull/4082.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4082.patch",
"merged_at": 1649795886000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4081 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4081/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4081/comments | https://api.github.com/repos/huggingface/datasets/issues/4081/events | https://github.com/huggingface/datasets/pull/4081 | 1,189,916,472 | PR_kwDODunzps41fsxW | 4,081 | Close parquet writer properly in `push_to_hub` | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq / @albertvillanova / @mariosasko \r\nI am facing the same scenario. Let me explain the situation point. I have a glue ETL job\r\n\r\n1--> My files are in parquet format and stored in AWS s3.\r\n2--> I am iterating a loop for a data set where the same file name can occur with diffrent other data.\r\n3--> I read the parquet and saved it in a pandas data frame.\r\n4--> Done some operation on that data frame\r\n5--> upload the updated data frame into the S3 parquet file. Below are code snippet what I am using to save the updated \r\n data frame into parquet format and load into S3\r\n `header_name_column_list = dict(data_frame)\r\n header_list = []\r\n for col_id, col_type in header_name_column_list.items():\r\n header_list.append(pyarrow.field(col_id, pyarrow.string()))\r\n table_schema = pyarrow.schema(header_list)\r\n table = pyarrow.Table.from_pandas(data_frame, schema=table_schema, preserve_index=False)\r\n writer = parquet.ParquetWriter(b_buffer, table.schema)\r\n writer.write_table(table)\r\n writer.close()\r\n b_buffer.seek(0)\r\n .....\r\n ....\r\n self.s3_client.upload_fileobj(\r\n b_buffer,\r\n self.bucket,\r\n file_key,\r\n ExtraArgs=extra_args)`\r\n\r\nBut when I executed the glue etl job, the first time it works properly and but in the next iteration, when I try to open the same file got that error.\r\n\r\n\r\n<html>\r\n<body>\r\n<!--StartFragment-->\r\n\r\nINFO:Iot-dsip-de-duplication-job:Dataframe uploaded: s3://abc/2022/07/12/file1_ft_20220714122108.3065_12345.parquet INFO:Iot-dsip-de-duplication-job:Sleep for 60 sec\r\nINFO:Iot-dsip-de-duplication-job:start after sleep\r\n.......................\r\n..........................\r\n..........................\r\nERROR:Iot-dsip-de-duplication-job:Failed to read data from parquet file s3://abc/2022/07/12/file1_ft_20220714122108.3065_12345.parquet, error is : Invalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.INFO:Iot-dsip-de-duplication-job:Empty dataframe found\r\n\r\n<!--EndFragment-->\r\n</body>\r\n</html>\r\n\r\nAny clue will be really helpful..I got stuck with this problem."
] | 1,648,825,130,000 | 1,657,826,526,000 | 1,648,829,779,000 | MEMBER | null | We don’t call writer.close(), which causes https://github.com/huggingface/datasets/issues/4077. It can happen that we upload the file before the writer is garbage collected and writes the footer.
I fixed this by explicitly closing the parquet writer.
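For illustration only (this is not the PR's actual diff): the general pattern is to close the `pyarrow` ParquetWriter — or use it as a context manager — so the footer is written before the buffer is read or uploaded.
```python
# Illustrative sketch only, not the actual change in this PR: the Parquet
# footer is only written when the writer is closed, so close it before
# reading/uploading the buffer.
import io
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({"col": [1, 2, 3]})
buffer = io.BytesIO()

writer = pq.ParquetWriter(buffer, table.schema)
writer.write_table(table)
writer.close()      # without this, the file has no footer and is unreadable

buffer.seek(0)      # the buffer now holds a complete, valid Parquet file
print(pq.read_table(buffer))
```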
Close https://github.com/huggingface/datasets/issues/4077. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4081/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4081/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4081",
"html_url": "https://github.com/huggingface/datasets/pull/4081",
"diff_url": "https://github.com/huggingface/datasets/pull/4081.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4081.patch",
"merged_at": 1648829779000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4080 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4080/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4080/comments | https://api.github.com/repos/huggingface/datasets/issues/4080/events | https://github.com/huggingface/datasets/issues/4080 | 1,189,667,296 | I_kwDODunzps5G6OHg | 4,080 | NonMatchingChecksumError for downloading conll2012_ontonotesv5 dataset | {
"login": "richarddwang",
"id": 17963619,
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richarddwang",
"html_url": "https://github.com/richarddwang",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
},
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @richarddwang,\r\n\r\n\r\nIndeed, we have recently updated the loading script of that dataset (and fixed that bug as well):\r\n- #4002\r\n\r\nThat fix will be available in our next `datasets` library release. In the meantime, you can incorporate that fix by:\r\n- installing `datasets` from our GitHub repo:\r\n```bash\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\n- forcing the data files to be redownloaded\r\n```python\r\nds = load_dataset('conll2012_ontonotesv5', 'english_v4', split=\"test\", download_mode=\"force_redownload\")\r\n```\r\n\r\nFeel free to re-open this issue if the problem persists. \r\n\r\nDuplicate of:\r\n- #4031"
] | 1,648,812,868,000 | 1,648,821,550,000 | 1,648,821,550,000 | CONTRIBUTOR | null | ## Steps to reproduce the bug
```python
datasets.load_dataset("conll2012_ontonotesv5", "english_v12")
```
## Actual results
```
Downloading builder script: 32.2kB [00:00, 9.72MB/s]
Downloading metadata: 20.0kB [00:00, 10.4MB/s]
Downloading and preparing dataset conll2012_ontonotesv5/english_v12 (download: 174.83 MiB, generated: 204.29 MiB, post-processed: Unknown size, total: 379.12 MiB) to ...
Traceback (most recent call last):
File "/home/yisiang/lgtn/conll2012/run.py", line 86, in <module>
train()
File "/home/yisiang/lgtn/conll2012/run.py", line 65, in train
trainer.fit(model, datamodule=dm)
File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 740, in fit
self._call_and_handle_interrupt(
File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 685, in _call_and_handle_inte
rrupt
return trainer_fn(*args, **kwargs)
File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 777, in _fit_impl
self._run(model, ckpt_path=ckpt_path)
File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1131, in _run
self._data_connector.prepare_data()
File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/trainer/connectors/data_connector.py", line 154, in pre
pare_data
self.trainer.datamodule.prepare_data()
File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/core/datamodule.py", line 474, in wrapped_fn
fn(*args, **kwargs)
File "/home/yisiang/lgtn/_abstract_task/data.py", line 43, in prepare_data
raw_dsets = datasets.load_dataset(**load_dataset_kwargs)
File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/datasets/load.py", line 1687, in load_dataset
builder_instance.download_and_prepare(
File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/datasets/builder.py", line 605, in download_and_prepare
self._download_and_prepare(
File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/datasets/builder.py", line 1104, in _download_and_prepare
super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/datasets/builder.py", line 676, in _download_and_prepare
verify_checksums(
File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://md-datasets-cache-zipfiles-prod.s3.eu-west-1.amazonaws.com/zmycy7t9h9-1.zip']
```
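For readers hitting the same checksum error: the workaround suggested by the maintainers in the comments on this issue is to install `datasets` from source (so the patched loading script is picked up) and force the data files to be redownloaded. Roughly, that looks like this:
```python
# First: pip install git+https://github.com/huggingface/datasets#egg=datasets
from datasets import load_dataset

# force_redownload discards the stale cached archive whose checksum no longer matches
ds = load_dataset(
    "conll2012_ontonotesv5",
    "english_v12",
    download_mode="force_redownload",
)
```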
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4080/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4080/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4079 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4079/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4079/comments | https://api.github.com/repos/huggingface/datasets/issues/4079/events | https://github.com/huggingface/datasets/pull/4079 | 1,189,521,576 | PR_kwDODunzps41eYRC | 4,079 | Increase max retries for GitHub datasets | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,805,643,000 | 1,648,827,160,000 | 1,648,826,831,000 | MEMBER | null | As GitHub recurrently raises connectivity issues, this PR increases the number of max retries to request GitHub datasets, as previously done for GitHub metrics:
- #4063
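Until that change lands in a release, a minimal sketch of how a caller can already raise the retry count on their side — assuming the public `DownloadConfig` API and its `max_retries` field — would be:
```python
from datasets import DownloadConfig, load_dataset

# Bump the number of HTTP retries used when fetching GitHub-hosted dataset scripts and files.
download_config = DownloadConfig(max_retries=5)

ds = load_dataset(
    "conll2012_ontonotesv5",  # placeholder: any GitHub-hosted dataset
    "english_v4",
    download_config=download_config,
)
```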
Note that this is a temporary solution, while we decide when and how to load GitHub datasets from the Hub:
- #4059
Fix #2048
Related to:
- #4051
- #3210
- #2787
- #2075
- #2036
CC: @lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4079/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4079/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4079",
"html_url": "https://github.com/huggingface/datasets/pull/4079",
"diff_url": "https://github.com/huggingface/datasets/pull/4079.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4079.patch",
"merged_at": 1648826830000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4078 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4078/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4078/comments | https://api.github.com/repos/huggingface/datasets/issues/4078/events | https://github.com/huggingface/datasets/pull/4078 | 1,189,513,572 | PR_kwDODunzps41eWnl | 4,078 | Fix GithubMetricModuleFactory instantiation with None download_config | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,805,218,000 | 1,648,824,291,000 | 1,648,823,967,000 | MEMBER | null | Recent PR:
- #4063
introduced a potential bug when `GithubMetricModuleFactory` is instantiated with a None `download_config`.
This PR adds instantiation tests and fixes that potential issue.
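A minimal sketch of the defensive pattern such a fix typically relies on (the class below is an illustrative stand-in, not the actual factory implementation):
```python
from copy import deepcopy
from dataclasses import dataclass
from typing import Optional

from datasets import DownloadConfig


@dataclass
class MetricModuleFactorySketch:
    # Illustrative stand-in for the real factory: a None download_config must be
    # replaced by a default instance before it is copied and mutated.
    name: str
    download_config: Optional[DownloadConfig] = None

    def __post_init__(self):
        config = self.download_config if self.download_config is not None else DownloadConfig()
        # Work on a copy so the caller's config object is never modified.
        self.download_config = deepcopy(config)
```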
CC: @lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4078/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4078/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4078",
"html_url": "https://github.com/huggingface/datasets/pull/4078",
"diff_url": "https://github.com/huggingface/datasets/pull/4078.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4078.patch",
"merged_at": 1648823967000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4077 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4077/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4077/comments | https://api.github.com/repos/huggingface/datasets/issues/4077/events | https://github.com/huggingface/datasets/issues/4077 | 1,189,467,585 | I_kwDODunzps5G5dXB | 4,077 | ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file. | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,648,802,953,000 | 1,648,829,779,000 | 1,648,829,779,000 | CONTRIBUTOR | null | ## Describe the bug
When uploading a relatively large image dataset of > 1GB, reloading doesn't work for me, even though pushing to the hub went just fine.
Basically, I do:
```
from datasets import load_dataset
dataset = load_dataset("imagefolder", data_files="path_to_my_files")
dataset.push_to_hub("dataset_name") # works fine, no errors
reloaded_dataset = load_dataset("dataset_name")
```
and it returns:
```
/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.
```
I created a Colab notebook to reproduce my error: https://colab.research.google.com/drive/141LJCcM2XyqprPY83nIQ-Zk3BbxWeahq?usp=sharing
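One thing that might help narrow this down (a hypothetical diagnostic, with the repo id as a placeholder) is listing what actually landed in the Hub repo, to check whether the Parquet shards produced by `push_to_hub` are really there:
```python
from huggingface_hub import HfApi

api = HfApi()
# Lists every file in the dataset repo; the shards written by push_to_hub
# should appear here with a .parquet extension.
print(api.list_repo_files("username/dataset_name", repo_type="dataset"))
```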
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4077/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4077/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4076 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4076/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4076/comments | https://api.github.com/repos/huggingface/datasets/issues/4076/events | https://github.com/huggingface/datasets/pull/4076 | 1,188,478,867 | PR_kwDODunzps41a1n2 | 4,076 | Add ROUGE Metric Card | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,751,674,000 | 1,649,796,225,000 | 1,649,795,858,000 | CONTRIBUTOR | null | Add ROUGE metric card.
I've left the 'Values from popular papers' section empty for the time being because I don't know the summarization literature very well and am therefore not sure which paper(s) to pull from (note that the original ROUGE paper does not seem to present specific values, just correlations with human judgements). Any suggestions on which paper(s) to pull from would be helpful! :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4076/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4076/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4076",
"html_url": "https://github.com/huggingface/datasets/pull/4076",
"diff_url": "https://github.com/huggingface/datasets/pull/4076.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4076.patch",
"merged_at": 1649795858000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4075 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4075/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4075/comments | https://api.github.com/repos/huggingface/datasets/issues/4075/events | https://github.com/huggingface/datasets/issues/4075 | 1,188,462,162 | I_kwDODunzps5G1n5S | 4,075 | Add CCAgT dataset | {
"login": "johnnv1",
"id": 20444345,
"node_id": "MDQ6VXNlcjIwNDQ0MzQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/20444345?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johnnv1",
"html_url": "https://github.com/johnnv1",
"followers_url": "https://api.github.com/users/johnnv1/followers",
"following_url": "https://api.github.com/users/johnnv1/following{/other_user}",
"gists_url": "https://api.github.com/users/johnnv1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/johnnv1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johnnv1/subscriptions",
"organizations_url": "https://api.github.com/users/johnnv1/orgs",
"repos_url": "https://api.github.com/users/johnnv1/repos",
"events_url": "https://api.github.com/users/johnnv1/events{/privacy}",
"received_events_url": "https://api.github.com/users/johnnv1/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 3608941089,
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision",
"name": "vision",
"color": "bfdadc",
"default": false,
"description": "Vision datasets"
}
] | closed | false | {
"login": "johnnv1",
"id": 20444345,
"node_id": "MDQ6VXNlcjIwNDQ0MzQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/20444345?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johnnv1",
"html_url": "https://github.com/johnnv1",
"followers_url": "https://api.github.com/users/johnnv1/followers",
"following_url": "https://api.github.com/users/johnnv1/following{/other_user}",
"gists_url": "https://api.github.com/users/johnnv1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/johnnv1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johnnv1/subscriptions",
"organizations_url": "https://api.github.com/users/johnnv1/orgs",
"repos_url": "https://api.github.com/users/johnnv1/repos",
"events_url": "https://api.github.com/users/johnnv1/events{/privacy}",
"received_events_url": "https://api.github.com/users/johnnv1/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "johnnv1",
"id": 20444345,
"node_id": "MDQ6VXNlcjIwNDQ0MzQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/20444345?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johnnv1",
"html_url": "https://github.com/johnnv1",
"followers_url": "https://api.github.com/users/johnnv1/followers",
"following_url": "https://api.github.com/users/johnnv1/following{/other_user}",
"gists_url": "https://api.github.com/users/johnnv1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/johnnv1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johnnv1/subscriptions",
"organizations_url": "https://api.github.com/users/johnnv1/orgs",
"repos_url": "https://api.github.com/users/johnnv1/repos",
"events_url": "https://api.github.com/users/johnnv1/events{/privacy}",
"received_events_url": "https://api.github.com/users/johnnv1/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Awesome ! Let us know if you have questions or if we can help ;) I'm assigning you\r\n\r\nPS: if possible, please try to not use Google Drive links in your dataset script, since Google Drive has download quotas and is not always reliable.",
"HI, I was waiting to come out in the second version to do the implementation.\r\n\r\n- Paper: https://dx.doi.org/10.2139/ssrn.4126881\r\n- Data: [Data mendelay](http://doi.org/10.17632/wg4bpm33hj.2)",
"Nice ! 🚀 ",
"The link of CCAgT dataset is: https://huggingface.co/datasets/lapix/CCAgT"
] | 1,648,750,828,000 | 1,657,134,222,000 | 1,657,134,222,000 | NONE | null | ## Adding a Dataset
- **Name:** CCAgT dataset: Images of Cervical Cells with AgNOR Stain Technique
- **Description:** The dataset contains 2540 images (1600x1200, where each pixel is 0.111μm×0.111μm) from three different slides, with at least one nucleus per image. The images come from fields of a sample cervical slide stained with silver, a method known as Argyrophilic Nucleolar Organizer Regions (AgNOR).
- **Paper:** https://doi.org/10.1109/cbms49503.2020.00110
- **Data:** https://arquivos.ufsc.br/d/373be2177a33426a9e6c/ or https://drive.google.com/drive/u/4/folders/1TBpYCv6S1ydASLauSzcsvO7Wc5O-WUw0
- **Motivation:** This is a unique dataset (because of the staining technique) with real data for a major health problem, cervical cancer.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
Hi, this is a public version of the dataset I have been working on; a new version will be released soon. Until that new version is out, I thought I would add this dataset here, if it makes sense for the repository. You can assign the task to me if possible. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4075/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4075/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4074 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4074/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4074/comments | https://api.github.com/repos/huggingface/datasets/issues/4074/events | https://github.com/huggingface/datasets/issues/4074 | 1,188,449,142 | I_kwDODunzps5G1kt2 | 4,074 | Error in google/xtreme_s dataset card | {
"login": "wranai",
"id": 1048544,
"node_id": "MDQ6VXNlcjEwNDg1NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1048544?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wranai",
"html_url": "https://github.com/wranai",
"followers_url": "https://api.github.com/users/wranai/followers",
"following_url": "https://api.github.com/users/wranai/following{/other_user}",
"gists_url": "https://api.github.com/users/wranai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wranai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wranai/subscriptions",
"organizations_url": "https://api.github.com/users/wranai/orgs",
"repos_url": "https://api.github.com/users/wranai/repos",
"events_url": "https://api.github.com/users/wranai/events{/privacy}",
"received_events_url": "https://api.github.com/users/wranai/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
},
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | null | [] | null | [
"Hi @wranai, thanks for reporting.\r\n\r\nPlease note that the information about language families and groups is taken form the original paper: [XTREME-S: Evaluating Cross-lingual Speech Representations](https://arxiv.org/abs/2203.10752).\r\n\r\nIf that information is wrong, feel free to contact the paper's authors to suggest that correction.\r\n\r\nJust note that Hungarian language (contrary to their geographically surrounding neighbor languages) belongs to the Uralic (languages) family, together with (among others) Finnish, Estonian, some other languages in northern regions of Scandinavia..."
] | 1,648,750,065,000 | 1,648,800,776,000 | 1,648,800,776,000 | NONE | null | **Link:** https://huggingface.co/datasets/google/xtreme_s
Not a big deal, but Hungarian is considered an Eastern European language, together with Serbian, Slovak, and Slovenian (all correctly categorized; Slovenia is mostly to the west of Hungary, by the way).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4074/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4074/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4073 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4073/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4073/comments | https://api.github.com/repos/huggingface/datasets/issues/4073/events | https://github.com/huggingface/datasets/pull/4073 | 1,188,364,711 | PR_kwDODunzps41adPA | 4,073 | Create a metric card for Competition MATH | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,745,339,000 | 1,648,839,759,000 | 1,648,839,433,000 | CONTRIBUTOR | null | Proposing metric card for Competition MATH | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4073/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4073/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4073",
"html_url": "https://github.com/huggingface/datasets/pull/4073",
"diff_url": "https://github.com/huggingface/datasets/pull/4073.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4073.patch",
"merged_at": 1648839432000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4072 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4072/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4072/comments | https://api.github.com/repos/huggingface/datasets/issues/4072/events | https://github.com/huggingface/datasets/pull/4072 | 1,188,266,410 | PR_kwDODunzps41aIUG | 4,072 | Add installation instructions to image_process doc | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,740,577,000 | 1,648,746,346,000 | 1,648,746,019,000 | CONTRIBUTOR | null | This PR adds the installation instructions for the Image feature to the image process doc. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4072/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4072/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4072",
"html_url": "https://github.com/huggingface/datasets/pull/4072",
"diff_url": "https://github.com/huggingface/datasets/pull/4072.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4072.patch",
"merged_at": 1648746019000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4071 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4071/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4071/comments | https://api.github.com/repos/huggingface/datasets/issues/4071/events | https://github.com/huggingface/datasets/issues/4071 | 1,187,587,683 | I_kwDODunzps5GySZj | 4,071 | Loading issue for xuyeliu/notebookCDG dataset | {
"login": "Jun-jie-Huang",
"id": 46160972,
"node_id": "MDQ6VXNlcjQ2MTYwOTcy",
"avatar_url": "https://avatars.githubusercontent.com/u/46160972?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jun-jie-Huang",
"html_url": "https://github.com/Jun-jie-Huang",
"followers_url": "https://api.github.com/users/Jun-jie-Huang/followers",
"following_url": "https://api.github.com/users/Jun-jie-Huang/following{/other_user}",
"gists_url": "https://api.github.com/users/Jun-jie-Huang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jun-jie-Huang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jun-jie-Huang/subscriptions",
"organizations_url": "https://api.github.com/users/Jun-jie-Huang/orgs",
"repos_url": "https://api.github.com/users/Jun-jie-Huang/repos",
"events_url": "https://api.github.com/users/Jun-jie-Huang/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jun-jie-Huang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | null | [] | null | [
"Hi @Jun-jie-Huang,\r\n\r\nAs the error message says, \".pkl\" data files are not supported.\r\n\r\nIf you would like to share your dataset on the Hub, you would need:\r\n- either to create a Python loading script, that loads the data in any format\r\n- or to transform your data files to one of the supported formats (listed in the error message above: CSV, JSON, Parquet, TXT,...)\r\n\r\nYou can find the details in our docs: \r\n- How to share a dataset: https://huggingface.co/docs/datasets/share\r\n- How to create a dataset loading script: https://huggingface.co/docs/datasets/dataset_script\r\n\r\nFeel free to re-open this issue and ping us if you need further assistance."
] | 1,648,708,589,000 | 1,648,714,621,000 | 1,648,714,576,000 | NONE | null | ## Dataset viewer issue for '*xuyeliu/notebookCDG*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/xuyeliu/notebookCDG)*
*Couldn't load xuyeliu/notebookCDG with the provided script:*
```
from datasets import load_dataset
dataset = load_dataset("xuyeliu/notebookCDG/dataset_notebook.pkl")
```
I get an error message as follows:
FileNotFoundError: Couldn't find a dataset script at /home/code_documentation/code/xuyeliu/notebookCDG/notebookCDG.py or any data file in the same directory. Couldn't find 'xuyeliu/notebookCDG' on the Hugging Face Hub either: FileNotFoundError: Unable to resolve any data file that matches ['**train*'] in dataset repository xuyeliu/notebookCDG with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip']
Am I the one who added this dataset? No
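For reference, one possible workaround until the data is shared in a supported format (hypothetical: it assumes the pickle holds a pandas-compatible table) is to download the raw `.pkl` from the Hub and convert it before calling `load_dataset`:
```python
import pandas as pd
from huggingface_hub import hf_hub_download
from datasets import load_dataset

# Fetch the raw pickle file from the dataset repo on the Hub.
path = hf_hub_download(
    repo_id="xuyeliu/notebookCDG",
    filename="dataset_notebook.pkl",
    repo_type="dataset",
)

# Convert it to JSON Lines, a format load_dataset understands.
pd.read_pickle(path).to_json("notebookCDG.jsonl", orient="records", lines=True)

dataset = load_dataset("json", data_files="notebookCDG.jsonl")
```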
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4071/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4071/timeline | null | completed | null | null | false |