url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | performed_via_github_app | state_reason | draft | pull_request | is_pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/4070 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4070/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4070/comments | https://api.github.com/repos/huggingface/datasets/issues/4070/events | https://github.com/huggingface/datasets/pull/4070 | 1,186,810,205 | PR_kwDODunzps41VMYq | 4,070 | Create metric card for seqeval | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,663,681,000 | 1,648,839,778,000 | 1,648,839,445,000 | CONTRIBUTOR | null | Proposing metric card for seqeval. Not sure which values to report for Popular papers though. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4070/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4070/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4070",
"html_url": "https://github.com/huggingface/datasets/pull/4070",
"diff_url": "https://github.com/huggingface/datasets/pull/4070.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4070.patch",
"merged_at": 1648839445000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4069 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4069/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4069/comments | https://api.github.com/repos/huggingface/datasets/issues/4069/events | https://github.com/huggingface/datasets/pull/4069 | 1,186,790,578 | PR_kwDODunzps41VIMJ | 4,069 | Add support for metadata files to `imagefolder` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Love it !\r\n\r\n+1 to using JSON Lines rather than CSV. I've also seen image datasets for which JSON Lines was used.\r\n\r\nA `file_name` column sounds good as well, and it means we could reuse the same name for audio. And ok to check the metadata file by default :)\r\n\r\nYou suggested to name the file infos.json - since we already have a datasets_infos.json file, maybe it would be nice to have a name for the metadata/annotations that doesn't contain \"info\" ? (e.g. metadata.json, annotations.json, labels.json)",
"@lhoestq I've addressed your comments and my TODOs. Additionally, I've updated `encode_nested_example`/`decode_nested_example` to support null values in place of a dictionary (if it's not top-level) since JSON Lines also supports this. ",
"@lhoestq Sure, feel free to add more tests if you have the time. ",
"I created a dedicated test file for `imagefolder`, moved some existing tests there from `test_packaged_modules.py`, and added an end-to-end test of `imagefolder` with metadata. I tested for train split only, and for two splits train and test.\r\n\r\nLet me know if the test looks ok to you. I'll add similar tests but with the other structures we support on tuesday",
"Thanks a lot for working on this! The test looks great :). ",
"Added a test for archives. Will also add a test when the metadata file is not named correctly, and see if we can raise an informative error"
] | 1,648,662,471,000 | 1,651,582,140,000 | 1,651,581,736,000 | CONTRIBUTOR | null | This PR adds support for metadata files to `imagefolder`, making it possible to specify image fields other than `image` and `label`, which are inferred from the directory structure of the loaded dataset.
To be parsed as an image metadata file, a file should be named `"info.csv"` and should have the following structure:
```
image_id,some_col1_name,some_col2_name
rel/path/to/image1.jpg,image1_col1_value,image1_col2_value
rel/path/to/image2.jpg,image2_col1_value,image2_col2_value
...
```
This is how the resolution works:
```
- path/to/imagefolder/directory
- info.csv
- 10.jpg # referenced as 10.jpg in "info.csv"
- Cat
- 0.jpg # referenced as Cat/0.jpg in "info.csv"
- 1.jpg # referenced as Cat/1.jpg in "info.csv"
- Dog
- 0.jpg # referenced as Dog/0.jpg in "info.csv"
- 1.jpg # referenced as Dog/1.jpg in "info.csv"
```
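A minimal sketch of the intended usage, assuming the `with_metadata` flag proposed in this PR (not a final API):
```python
from datasets import load_dataset

# hypothetical sketch based on this PR's proposal (API may change)
ds = load_dataset("imagefolder", data_dir="path/to/imagefolder/directory", with_metadata=True)
# columns from "info.csv" are added alongside the inferred "image" and "label" columns
print(ds["train"].column_names)  # e.g. ["image", "label", "some_col1_name", "some_col2_name"]
```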
Open questions:
1. IMO it makes more sense to store image metadata as JSON Lines than CSV. CSV is sufficient for textual metadata but not the best for representing bounding boxes, for instance (see the JSON Lines sketch after this list). Also, JSON Lines is stricter, which is good in this case (CSV supports various delimiters, the header line is optional, etc., so it's easier to enforce rules on JSON Lines than it is on CSV)
2. A better name for the `image_id` column, which contains image identifiers? Maybe `image_file` or `image_filename`?
3. WDYT about making `with_metadata=True` the default behavior if the loaded repo/directory contains an `info.csv` file?
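To illustrate point 1, a hypothetical JSON Lines metadata entry (the field names besides `image_id` are made up for illustration, not a proposed schema):
```json
{"image_id": "Cat/0.jpg", "caption": "a cat sitting on a sofa", "bboxes": [[12, 34, 56, 78]]}
```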
An example repository: https://huggingface.co/datasets/mariosasko/PetImages. Can be loaded by installing `datasets` from the PR branch and running `load_dataset("mariosasko/PetImages", with_metadata=True)`.
cc: @abhishekkrthakur (this PR should address https://huggingface.slack.com/archives/C02JB9L6JKF/p1645450017434029?thread_ts=1645157416.389499&cid=C02JB9L6JKF)
TODOs:
- [x] Test
- [x] Metadata file nesting
```
- path/to/imagefolder/directory
- info.csv
- 10.jpg
- Cat
- info.csv # should have higher precedence in this directory than the top-level info.csv, but we choose the first "eligible" metadata file currently
- 0.jpg
- 1.jpg
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4069/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4069/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4069",
"html_url": "https://github.com/huggingface/datasets/pull/4069",
"diff_url": "https://github.com/huggingface/datasets/pull/4069.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4069.patch",
"merged_at": 1651581736000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4068 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4068/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4068/comments | https://api.github.com/repos/huggingface/datasets/issues/4068/events | https://github.com/huggingface/datasets/pull/4068 | 1,186,765,422 | PR_kwDODunzps41VC0I | 4,068 | Improve out of bounds error message | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,660,930,000 | 1,648,715,948,000 | 1,648,715,637,000 | MEMBER | null | In 1.18.4 with https://github.com/huggingface/datasets/pull/3719 we introduced an error message for users using `select` with out-of-bounds indices. The message ended up being confusing for some users because it mentioned negative indices, which is not the main use case.
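For illustration, this is the situation the message covers (a minimal sketch; the exact exception type and wording depend on the `datasets` version):
```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3]})
try:
    ds.select([5])  # index 5 is out of bounds for a dataset of size 3
except IndexError as err:
    print(err)  # intended to read like Python's "list index out of range"
```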
I replaced it with a message that is very similar to the one you get when you try to access a list with an out-of-range index. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4068/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4068/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4068",
"html_url": "https://github.com/huggingface/datasets/pull/4068",
"diff_url": "https://github.com/huggingface/datasets/pull/4068.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4068.patch",
"merged_at": 1648715636000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4067 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4067/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4067/comments | https://api.github.com/repos/huggingface/datasets/issues/4067/events | https://github.com/huggingface/datasets/pull/4067 | 1,186,731,905 | PR_kwDODunzps41U7qc | 4,067 | Update datasets task tags to align tags with models | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Looks good, but I think we are missing some scripts with outdated tags (RedCaps, MNIST, ...).",
"Just updated the tags of vision datasets :)\r\nWe can figure out one for image datasets without labels like PASS - not sure how to name the task for this, maybe `image-fill-mask` (for consistency with language modeling for pretraining) / `masked-auto-encoding` (from ViT). Let's see that in another PR later"
] | 1,648,658,972,000 | 1,649,871,447,000 | 1,649,871,071,000 | MEMBER | null | **Requires https://github.com/huggingface/datasets/pull/4066 to be merged first**
Following https://github.com/huggingface/datasets/pull/4066 we need to update many dataset tags to use the new ones. This PR takes care of this and is quite big - feel free to review only certain tags if you don't want to spend too much time on it.
Note that the CI will never be green for this PR, because many dataset cards have missing tags or sections, and fixing them is out of scope of this PR (the CI on master will be green anyway) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4067/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4067/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4067",
"html_url": "https://github.com/huggingface/datasets/pull/4067",
"diff_url": "https://github.com/huggingface/datasets/pull/4067.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4067.patch",
"merged_at": 1649871071000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4066 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4066/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4066/comments | https://api.github.com/repos/huggingface/datasets/issues/4066/events | https://github.com/huggingface/datasets/pull/4066 | 1,186,728,104 | PR_kwDODunzps41U63x | 4,066 | Tasks alignment with models | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Yay! This is exciting! Note that we would probably be able to generate this JSON directly from `huggingface/hub-docs`' `Types.ts` file (cc @osanseviero)",
"The following issue should make this much easier :smile: https://github.com/huggingface/hub-docs/issues/83",
"So far I think I've addressed all the comments that I got on slack, but feel free to do a review @osanseviero and let me know if it sounds good to you",
"It just occurred to me that we should probably restart the `datasets-tagging` space once this is merged to update all the task categories there: https://huggingface.co/spaces/huggingface/datasets-tagging",
"Yes, let me update it now",
"Updated: https://huggingface.co/spaces/huggingface/datasets-tagging",
"current automated export is visible at #4154"
] | 1,648,658,756,000 | 1,649,855,572,000 | 1,649,420,400,000 | MEMBER | null | I updated our `tasks.json` file with the new task taxonomy that is aligned with models.
The rule that defines a task is the following:
**Two tasks are different if and only if the steps of their pipelines are different**, i.e. if they can’t reasonably be implemented using the same coherent code (level of granularity/complexity of the code to be defined - ideally I’d like to say “HF user’s level”) - this is the same definition as in `transformers`
I will update the tags of all the datasets in this repository [in another PR](https://github.com/huggingface/datasets/pull/4067) for readability.
Main changes:
- conditional-text-generation is split between summarization, translation, text-generation and text2text-generation
- speech-processing is split into automatic-speech-recognition, audio-classification, etc.
- structure-prediction is renamed token-classification
- abstractive-qa now belongs to text2text-generation
Here is just a simplified YAML dump of `tasks.json`:
```yaml
audio-classification:
- keyword-spotting
- speaker-identification
- speaker-intent-classification
- emotion-recognition
- speaker-language-identification
audio-to-audio: []
automatic-speech-recognition: []
conversational:
- dialogue-generation
feature-extraction: []
fill-mask:
- slot-filling
- masked-language-modeling
image-classification:
- multi-label-image-classification
- multi-class-image-classification
image-segmentation:
- instance-segmentation
- semantic-segmentation
- panoptic-segmentation
image-to-text:
- image-captioning
multiple-choice:
- multiple-choice-qa
- multiple-choice-coreference-resolution
object-detection:
- face-detection
- vehicle-detection
question-answering:
- extractive-qa
- open-domain-qa
- closed-domain-qa
sentence-similarity: []
tabular-classification: []
tabular-to-text:
- rdf-to-text
summarization:
- news-articles-summarization
- news-articles-headline-generation
table-to-text: []
table-question-answering: []
text-classification:
- acceptability-classification
- entity-linking-classification
- fact-checking
- intent-classification
- multi-class-classification
- multi-label-classification
- natural-language-inference
- semantic-similarity-classification
- sentiment-classification
- topic-classification
- semantic-similarity-scoring
- sentiment-scoring
- sentiment-analysis
- hate-speech-detection
- text-scoring
text-generation:
- dialogue-modeling
- language-modeling
text-retrieval:
- document-retrieval
- utterance-retrieval
- entity-linking-retrieval
- fact-checking-retrieval
text-to-image: []
text-to-tabular:
- relation-extraction
- semantic-role-labeling
text-to-speech: []
text2text-generation:
- text-simplification
- explanation-generation
- abstractive-qa
- open-domain-abstractive-qa
- closed-domain-qa
- open-book-qa
- closed-book-qa
time-series-forecasting:
- univariate-time-series-forecasting
- multivariate-time-series-forecasting
token-classification:
- named-entity-recognition
- part-of-speech-tagging
- parsing
- lemmatization
- word-sense-disambiguation
- coreference-resolution
translation: []
visual-question-answering: []
voice-activity-detection: []
zero-shot-classification: []
zero-shot-image-classification: []
reinforcement-learning: []
other: []
```
Feel free to comment and give suggestions, especially if you think we can also align this list with other projects
cc @julien-c @osanseviero @severo @lewtun @yjernite @albertvillanova @mariosasko @polinaeterna | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4066/reactions",
"total_count": 7,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 5,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4066/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4066",
"html_url": "https://github.com/huggingface/datasets/pull/4066",
"diff_url": "https://github.com/huggingface/datasets/pull/4066.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4066.patch",
"merged_at": 1649420400000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4065 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4065/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4065/comments | https://api.github.com/repos/huggingface/datasets/issues/4065/events | https://github.com/huggingface/datasets/pull/4065 | 1,186,722,478 | PR_kwDODunzps41U5rq | 4,065 | Create metric card for METEOR | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,658,430,000 | 1,648,746,730,000 | 1,648,746,470,000 | CONTRIBUTOR | null | Proposing a metric card for METEOR | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4065/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4065/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4065",
"html_url": "https://github.com/huggingface/datasets/pull/4065",
"diff_url": "https://github.com/huggingface/datasets/pull/4065.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4065.patch",
"merged_at": 1648746470000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4064 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4064/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4064/comments | https://api.github.com/repos/huggingface/datasets/issues/4064/events | https://github.com/huggingface/datasets/pull/4064 | 1,186,650,321 | PR_kwDODunzps41UqXS | 4,064 | Contributing MedMCQA dataset | {
"login": "monk1337",
"id": 17107749,
"node_id": "MDQ6VXNlcjE3MTA3NzQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/17107749?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/monk1337",
"html_url": "https://github.com/monk1337",
"followers_url": "https://api.github.com/users/monk1337/followers",
"following_url": "https://api.github.com/users/monk1337/following{/other_user}",
"gists_url": "https://api.github.com/users/monk1337/gists{/gist_id}",
"starred_url": "https://api.github.com/users/monk1337/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/monk1337/subscriptions",
"organizations_url": "https://api.github.com/users/monk1337/orgs",
"repos_url": "https://api.github.com/users/monk1337/repos",
"events_url": "https://api.github.com/users/monk1337/events{/privacy}",
"received_events_url": "https://api.github.com/users/monk1337/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq Could you please take a look?\r\nThank you!!",
"Hi, thank you for the modifications and suggestions. Please check the changes.",
"Can you run `make style` to fix the code formatting please ?\r\n\r\nOh and was wrong with the dummy_data.zip file, it must actually be placed at `datasets/medmcqa/dummy/1.1.0/dummy_data.zip` - sorry about that\r\n\r\nCan you also set the class label names to `names=[\"a\", \"b\", \"c\", \"d\"]` to make it explicit which label corresponds to each answer ? You might have to regenerate `dataset_infos.json` after that",
"Hi, \r\n\r\n1) Changed the dummy data folder\r\n\r\n2) The labels are not ['a', 'b', 'c', 'd'] rather the labels are [1,2,3,4] where 1 represents the 1'st option, 2nd represents 2nd option so on, and its int.\r\n\r\nI tried changing to ['a','b','c','d'] and while generating `dataset_infos.json` getting this error :\r\n\r\n`ValueError: Class label 4 greater than configured num_classes 4`\r\nPlease check.",
"@lhoestq [lhoestq](https://github.com/lhoestq) Please check",
"You have this error because we expect the labels to start at 0, not 1. I think you just need to pass `int(data[\"cop\"]) - 1` when generating the examples.\r\n\r\nSorry for the delay in responding btw",
"@lhoestq I corrected that but here is another issue I am facing while generating `dataset_infos.json`\r\n\r\nI am using `\" \"` if it's test set and otherwise it's the correct option\r\n\r\nhttps://github.com/monk1337/datasets/blob/179f81d48cdd3093302e498babce04c0bf1e33b3/datasets/medmcqa/medmcqa.py#L111\r\n` \"cop\": \"\" if split == \"test\" else int(data[\"cop\"]) -1,\r\n`\r\n\r\nbut while running this command :\r\n\r\n`datasets-cli test datasets/medmcqa --save_infos --all_configs\r\n`\r\n\r\ngiving this error:\r\n\r\n```\r\n/content/datasets# datasets-cli test datasets/medmcqa --save_infos --all_configs\r\nUsing custom data configuration default\r\nTesting builder 'default' (1/1)\r\nDownloading and preparing dataset med_mcqa/default (download: 52.72 MiB, generated: 128.73 MiB, post-processed: Unknown size, total: 181.46 MiB) to /root/.cache/huggingface/datasets/med_mcqa/default/1.1.0/4c8e418778967b6d9603f79bbfc4fdfbcfffc389664d9aeb85e102cfde418043...\r\nTraceback (most recent call last): \r\n File \"/usr/local/bin/datasets-cli\", line 33, in <module>\r\n sys.exit(load_entry_point('datasets', 'console_scripts', 'datasets-cli')())\r\n File \"/content/datasets/src/datasets/commands/datasets_cli.py\", line 33, in main\r\n service.run()\r\n File \"/content/datasets/src/datasets/commands/test.py\", line 162, in run\r\n try_from_hf_gcs=False,\r\n File \"/content/datasets/src/datasets/builder.py\", line 606, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/content/datasets/src/datasets/builder.py\", line 1104, in _download_and_prepare\r\n super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)\r\n File \"/content/datasets/src/datasets/builder.py\", line 694, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/content/datasets/src/datasets/builder.py\", line 1095, in _prepare_split\r\n example = self.info.features.encode_example(record)\r\n File \"/content/datasets/src/datasets/features/features.py\", line 1356, in encode_example\r\n return encode_nested_example(self, example)\r\n File \"/content/datasets/src/datasets/features/features.py\", line 1007, in encode_nested_example\r\n return {k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in zip_dict(schema, obj)}\r\n File \"/content/datasets/src/datasets/features/features.py\", line 1007, in <dictcomp>\r\n return {k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in zip_dict(schema, obj)}\r\n File \"/content/datasets/src/datasets/features/features.py\", line 1052, in encode_nested_example\r\n return schema.encode_example(obj) if obj is not None else None\r\n File \"/content/datasets/src/datasets/features/features.py\", line 897, in encode_example\r\n example_data = self.str2int(example_data)\r\n File \"/content/datasets/src/datasets/features/features.py\", line 854, in str2int\r\n output.append(self._str2int[str(value)])\r\nKeyError: ''\r\n```",
"Hey ! You can use this instead:\r\n`\"cop\": -1 if split == \"test\" else int(data[\"cop\"]) -1`",
"@lhoestq Thank you for your assistance, and I have updated the `dataset_infos.json` without any error. All the issues are resolved. Please review and approve if it's ready to merge.",
"Thanks ! There are two things to fic the CI:\r\n1. run `make style` to fix code formatting\r\n2. fix the dummy_data.zip file. Currently it's created from a directory called \"dummy\" that contains the JSON file, but it should be called \"dummy_data\" instead",
"@lhoestq Please check if anything else needs to be done :) ",
"Let me gently remind you that you can check the CI before pinging reviewers, this way you can know if something needs to be fixed right away.\r\n\r\nRight now, if you check the CI, you will see that you didn't fix the code formatting, and that you didn't fix the dummy data.\r\n\r\nLet me take a look",
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @lhoestq, I am sorry if I pinged multiple times; I have already corrected the dummy_data file issues and format issue before pinging for the merge request, as you commented last time\r\n\r\n_fix the dummy_data.zip file. Currently, it's created from a directory called \"dummy\" that contains the JSON file, but it should be called \"dummy_data\" instead._\r\n\r\nI fixed the file name and location.\r\n\r\nAnd I also ran the commands last time.\r\n\r\n```\r\nmake style\r\nflake8 datasets\r\n```\r\nPlease let me know if anything else needs to be changed.",
"Thanks a lot @monk1337 ! :)"
] | 1,648,654,967,000 | 1,651,830,040,000 | 1,651,826,576,000 | CONTRIBUTOR | null | Adding MedMCQA dataset ( https://paperswithcode.com/dataset/medmcqa )
**Name**: MedMCQA
**Description**: MedMCQA is a large-scale, Multiple-Choice Question Answering (MCQA) dataset designed to address real-world medical entrance exam questions.
MedMCQA has more than 194k high-quality AIIMS & NEET PG entrance exam MCQs covering 2.4k healthcare topics and 21 medical subjects, collected with an average token length of 12.77 and high topical diversity.
The dataset contains questions about the following topics: Anesthesia, Anatomy, Biochemistry, Dental, ENT, Forensic Medicine (FM), Obstetrics and Gynecology (O&G), Medicine, Microbiology, Ophthalmology, Orthopedics, Pathology, Pediatrics, Pharmacology, Physiology,
Psychiatry, Radiology, Skin, Preventive & Social Medicine (PSM), and Surgery
**Code**: https://github.com/medmcqa/medmcqa
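A hypothetical usage sketch once this script is merged (the column names shown are assumptions for illustration, apart from `cop`, which is discussed in this PR's comments):
```python
from datasets import load_dataset

# hypothetical usage once this dataset script is merged
medmcqa = load_dataset("medmcqa")
example = medmcqa["train"][0]
print(example["question"])  # question text (column name assumed)
print(example["cop"])       # 0-indexed correct-option label
```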
All files are in place:
**a dataset script**: medmcqa.py
**a dataset card with tags and information**: README.md
**a metadata file**: dataset_infos.json
**a dummy-data file**: Please help me generate this file; I was facing a
` raise JSONDecodeError("Extra data", s, end)` error | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4064/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4064/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4064",
"html_url": "https://github.com/huggingface/datasets/pull/4064",
"diff_url": "https://github.com/huggingface/datasets/pull/4064.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4064.patch",
"merged_at": 1651826576000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4063 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4063/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4063/comments | https://api.github.com/repos/huggingface/datasets/issues/4063/events | https://github.com/huggingface/datasets/pull/4063 | 1,186,611,368 | PR_kwDODunzps41UiDm | 4,063 | Increase max retries for GitHub metrics | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,653,168,000 | 1,648,737,772,000 | 1,648,737,467,000 | MEMBER | null | As GitHub recurrently has connectivity issues, this PR increases the maximum number of retries when requesting GitHub-hosted metrics.
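A minimal sketch of the retry-with-backoff pattern (illustrative only; not the actual implementation in `datasets`):
```python
import time

import requests


def get_with_retries(url: str, max_retries: int = 5, base_wait: float = 1.0) -> requests.Response:
    """Retry a GET request with exponential backoff on connectivity errors."""
    for attempt in range(1, max_retries + 1):
        try:
            response = requests.get(url, timeout=10.0)
            response.raise_for_status()
            return response
        except requests.RequestException:
            if attempt == max_retries:
                raise
            time.sleep(base_wait * 2 ** (attempt - 1))  # wait 1s, 2s, 4s, ...
```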
Related to:
- #3134
Also related to:
- #4059 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4063/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4063/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4063",
"html_url": "https://github.com/huggingface/datasets/pull/4063",
"diff_url": "https://github.com/huggingface/datasets/pull/4063.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4063.patch",
"merged_at": 1648737467000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4062 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4062/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4062/comments | https://api.github.com/repos/huggingface/datasets/issues/4062/events | https://github.com/huggingface/datasets/issues/4062 | 1,186,330,732 | I_kwDODunzps5Gtfhs | 4,062 | Loading mozilla-foundation/common_voice_7_0 dataset failed | {
"login": "aapot",
"id": 19529125,
"node_id": "MDQ6VXNlcjE5NTI5MTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/19529125?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aapot",
"html_url": "https://github.com/aapot",
"followers_url": "https://api.github.com/users/aapot/followers",
"following_url": "https://api.github.com/users/aapot/following{/other_user}",
"gists_url": "https://api.github.com/users/aapot/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aapot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aapot/subscriptions",
"organizations_url": "https://api.github.com/users/aapot/orgs",
"repos_url": "https://api.github.com/users/aapot/repos",
"events_url": "https://api.github.com/users/aapot/events{/privacy}",
"received_events_url": "https://api.github.com/users/aapot/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @aapot, thanks for reporting.\r\n\r\nWe are investigating the cause of this issue. We will keep you informed. ",
"When making HTTP request from code line:\r\n```\r\nresponse = requests.get(f\"{_API_URL}/bucket/dataset/{path}/{use_cdn}\", timeout=10.0).json()\r\n```\r\nit cannot be decoded to JSON because it raises a 404 Not Found error.\r\n\r\nThe request is fixed if removing the `/{use_cdn}` from the URL.\r\n\r\nMaybe there was a change in the Common Voice API?\r\n\r\nCC: @anton-l @patrickvonplaten @polinaeterna ",
"We have contacted by email the data owners of the Common Voice dataset.",
"Hotfix: https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0/commit/17b237961e4f7f84a2a0aea645abe5428a9d568e",
"I have also made the hotfix for all the rest of Common Voice script versions: 8.0, 6.1, 6.0,..., 1.0",
"Hey, is there anything new?\r\nI could not load the dataset.",
"cc @lhoestq @polinaeterna ",
"Hi @ngoquanghuy99! The dataset should load fine if you go through the following steps:\r\n\r\n1. Go to https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0 and click \"Access repository\" if you see a message about sharing your contact information with Mozilla Foundation at the top of the page. If you've already done that then skip to step 2.\r\n2. Run the command `huggingface-cli login` in your terminal or notebook to authenticate your machine.\r\n3. Load the dataset with `use_auth_token=True`:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"mozilla-foundation/common_voice_9_0\", \"ab\", use_auth_token=True)\r\n```",
"Thanks @anton-l \r\nI could load the dataset now, but in another way.\r\nThanks anyways!"
] | 1,648,640,381,000 | 1,655,796,983,000 | 1,648,714,684,000 | NONE | null | ## Describe the bug
I wanted to load the `mozilla-foundation/common_voice_7_0` dataset with the `fi` language and `test` split from `datasets` in a Colab/Kaggle notebook, but I am getting a `JSONDecodeError: [Errno Expecting value] Not Found: 0` error while loading it. The bug seems to affect other languages and splits as well, not just `fi` and `test`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("mozilla-foundation/common_voice_7_0", "fi", split="test", use_auth_token="YOUR TOKEN")
```
## Expected results
Load the `mozilla-foundation/common_voice_7_0` dataset successfully.
## Actual results
```
JSONDecodeError Traceback (most recent call last)
/opt/conda/lib/python3.7/site-packages/requests/models.py in json(self, **kwargs)
909 try:
--> 910 return complexjson.loads(self.text, **kwargs)
911 except JSONDecodeError as e:
/opt/conda/lib/python3.7/site-packages/simplejson/__init__.py in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, use_decimal, **kw)
524 and not use_decimal and not kw):
--> 525 return _default_decoder.decode(s)
526 if cls is None:
/opt/conda/lib/python3.7/site-packages/simplejson/decoder.py in decode(self, s, _w, _PY3)
369 s = str(s, self.encoding)
--> 370 obj, end = self.raw_decode(s)
371 end = _w(s, end).end()
/opt/conda/lib/python3.7/site-packages/simplejson/decoder.py in raw_decode(self, s, idx, _w, _PY3)
399 idx += 3
--> 400 return self.scan_once(s, idx=_w(s, idx).end())
JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
JSONDecodeError Traceback (most recent call last)
/tmp/ipykernel_358/370980805.py in <module>
1 # load Common Voice 7.0 dataset from Huggingface with Finnish "test" split
----> 2 test_dataset = load_dataset("mozilla-foundation/common_voice_7_0", "fi", split="test", use_auth_token=True)
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1690 ignore_verifications=ignore_verifications,
1691 try_from_hf_gcs=try_from_hf_gcs,
-> 1692 use_auth_token=use_auth_token,
1693 )
1694
/opt/conda/lib/python3.7/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
604 if not downloaded_from_gcs:
605 self._download_and_prepare(
--> 606 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
607 )
608 # Sync info
/opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos)
1102
1103 def _download_and_prepare(self, dl_manager, verify_infos):
-> 1104 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
1105
1106 def _get_examples_iterable_for_split(self, split_generator: SplitGenerator) -> ExamplesIterable:
/opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
670 split_dict = SplitDict(dataset_name=self.name)
671 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 672 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
673
674 # Checksums verification
~/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_7_0/fe20cac47c166e25b1f096ab661832e3da7cf298ed4a91dcaa1343ad972d175b/common_voice_7_0.py in _split_generators(self, dl_manager)
151
152 self._log_download(self.config.name, bundle_version, hf_auth_token)
--> 153 archive = dl_manager.download(self._get_bundle_url(self.config.name, bundle_url_template))
154
155 if self.config.version < datasets.Version("5.0.0"):
~/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_7_0/fe20cac47c166e25b1f096ab661832e3da7cf298ed4a91dcaa1343ad972d175b/common_voice_7_0.py in _get_bundle_url(self, locale, url_template)
130 path = urllib.parse.quote(path.encode("utf-8"), safe="~()*!.'")
131 use_cdn = self.config.size_bytes < 20 * 1024 * 1024 * 1024
--> 132 response = requests.get(f"{_API_URL}/bucket/dataset/{path}/{use_cdn}", timeout=10.0).json()
133 return response["url"]
134
/opt/conda/lib/python3.7/site-packages/requests/models.py in json(self, **kwargs)
915 raise RequestsJSONDecodeError(e.message)
916 else:
--> 917 raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
918
919 @property
JSONDecodeError: [Errno Expecting value] Not Found: 0
```
## Environment info
- `datasets` version: 2.0.0
- Platform: Linux-5.10.90+-x86_64-with-debian-bullseye-sid
- Python version: 3.7.12
- PyArrow version: 5.0.0
- Pandas version: 1.3.5
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4062/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4062/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4061 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4061/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4061/comments | https://api.github.com/repos/huggingface/datasets/issues/4061/events | https://github.com/huggingface/datasets/issues/4061 | 1,186,317,071 | I_kwDODunzps5GtcMP | 4,061 | Loading cnn_dailymail dataset failed | {
"login": "Arij-Aladel",
"id": 68355048,
"node_id": "MDQ6VXNlcjY4MzU1MDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/68355048?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Arij-Aladel",
"html_url": "https://github.com/Arij-Aladel",
"followers_url": "https://api.github.com/users/Arij-Aladel/followers",
"following_url": "https://api.github.com/users/Arij-Aladel/following{/other_user}",
"gists_url": "https://api.github.com/users/Arij-Aladel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Arij-Aladel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Arij-Aladel/subscriptions",
"organizations_url": "https://api.github.com/users/Arij-Aladel/orgs",
"repos_url": "https://api.github.com/users/Arij-Aladel/repos",
"events_url": "https://api.github.com/users/Arij-Aladel/events{/privacy}",
"received_events_url": "https://api.github.com/users/Arij-Aladel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @Arij-Aladel, thanks for reporting.\r\n\r\nThis issue was already reported \r\n- #3784\r\n\r\nand its root cause is a change in the Google Drive service. See:\r\n- #3786 \r\n\r\nWe have already fixed it in our 2.0.0 release. See:\r\n- #3787 \r\n\r\nPlease, update your `datasets` version:\r\n```\r\npip install -U datasets\r\n```\r\nand retry loading the dataset by forcing its redownload:\r\n```python\r\ndataset = load_dataset(\"cnn_dailymail\", \"3.0.0\", download_mode=\"force_redownload\")\r\n```"
] | 1,648,639,742,000 | 1,648,647,374,000 | 1,648,647,374,000 | NONE | null | ## Describe the bug
I wanted to load the `cnn_dailymail` dataset from Hugging Face Datasets in JupyterLab, but I am getting a `NotADirectoryError: [Errno 20] Not a directory` error while loading it.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('cnn_dailymail', '3.0.0')
```
## Expected results
Load the `cnn_dailymail` dataset successfully.
## Actual results
Loading failed with the following error:
> NotADirectoryError: [Errno 20] Not a directory
## Environment info
- `datasets` version: 1.8.0
- Platform: Ubuntu-20.04
- Python version: 3.9.10
- PyArrow version: 3.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4061/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4061/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4060 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4060/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4060/comments | https://api.github.com/repos/huggingface/datasets/issues/4060/events | https://github.com/huggingface/datasets/pull/4060 | 1,186,281,033 | PR_kwDODunzps41Tbmg | 4,060 | Deprecate canonical Multilingual Librispeech | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Yes, as discussed in #4006 we should update facebook/multilingual_librispeech indeed before we do a release. @anton-l could you help taking care of updating facebook/multilingual_librispeech ? We need to update the task template\r\n```python\r\ntask_templates=[AutomaticSpeechRecognition(audio_column=\"audio\", transcription_column=\"text\")],\r\n```\r\nand write that `datasets>=2.1` is necessary to load it in the dataset card.\r\n\r\nOnce the change is done we can merge this PR and do the release I think",
"@polinaeterna @lhoestq \r\nUpdated the script and the dataset card: https://huggingface.co/datasets/facebook/multilingual_librispeech ",
"@anton-l @lhoestq now previewer doesn't work for this datasets as it cannot recognize new `audio_column` argument:\r\n![image](https://user-images.githubusercontent.com/16348744/161233533-3170760b-5141-4525-9592-6675669c223a.png)\r\n\r\nI'm not an expert in previewer things, where should I look into the corresponding code?",
"Yes, there are several datasets with the same error, eg https://github.com/huggingface/datasets-preview-backend/issues/188. I'm not sure what I should do to fix this? Upgrade datasets to master?\r\n",
"@anton-l ended up removing the task template in facebook/multilingual_librispeech to make it work for the current version of `datasets` and fix the viewer :) thanks !",
"@lhoestq can we merge now? ^^"
] | 1,648,637,816,000 | 1,648,817,645,000 | 1,648,817,331,000 | CONTRIBUTOR | null | Deprecate canonical Multilingual Librispeech in favor of [the community one](https://huggingface.co/datasets/facebook/multilingual_librispeech), which supports streaming.
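For example, streaming the community version could look like this (the config name `"german"` is an assumption about the available configs):
```python
from datasets import load_dataset

# streaming load of the community version; config name assumed for illustration
mls = load_dataset("facebook/multilingual_librispeech", "german", split="train", streaming=True)
print(next(iter(mls)))
```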
However, there is a problem regarding the new ASR template schema: since it changed, I guess all community datasets that use this template do not work with the new version of the library, including MLS. Should we somehow notify users about that, or is it possible to change this line ourselves? For MLS specifically, I cannot change the code directly as I'm not a member of the Facebook org.
Hm, and the code should be changed after the release, no? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4060/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4060/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4060",
"html_url": "https://github.com/huggingface/datasets/pull/4060",
"diff_url": "https://github.com/huggingface/datasets/pull/4060.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4060.patch",
"merged_at": 1648817331000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4059 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4059/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4059/comments | https://api.github.com/repos/huggingface/datasets/issues/4059/events | https://github.com/huggingface/datasets/pull/4059 | 1,186,149,949 | PR_kwDODunzps41TC-o | 4,059 | Load GitHub datasets from Hub | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Currently the github datasets versioning is synced with the `datasets` lib versioning: when you load a github dataset using `datasets==x.y.z`, then the version of the dataset will be the one at the git tag `x.y.z`. This is for reproducibility reasons.\r\n\r\nWe could stop having this behavior and always use the latest version of the dataset, but when we do a breaking change it will break github datasets for previous versions of the library. It could be nice to think about tools that will allow backward compatibility if we ever need to to a breaking change in some datasets. Maybe a way to specify which revision of the dataset to use based on the `datasets` major version.\r\n\r\nIf we keep this behavior, then maybe add a note in setup.py to push to PyPI only after the `Update Hub repositories` CI job is done. It can take a few minutes to add the version tag to all the dataset repositories on the Hub. If we push to PyPI before the tags are pushed, then some users might get some 404 if at the same time they installed `datasets` and run `load_dataset`.",
"@lhoestq I was going to increase the `max_retries` as done for metrics:\r\n- #4063 \r\n\r\nBut then I realized that loading from the Hub would work as well. That is why I opened this PR.\r\n\r\nDefinitely, we should decide which behavior we want:\r\n- We have been working in the direction of eliminating the distinctions between canonical/community datasets\r\n- If we continue to go in that direction, then passing (or not passing) `revision` should have the same behavior for canonical/community\r\n- If we want to continue to tight the library version with the canonical datasets version, that is definitely a difference between canonical and community datasets\r\n\r\nNot sure what could be better in the long term...",
"> We could stop having this behavior and always use the latest version of the dataset, but when we do a breaking change it will break github datasets for previous versions of the library. \r\n\r\nNot sure of understanding this. Previous versions of the `datasets` library will continue to download GitHub datasets from GitHub, syncing library/dataset versions... Where is the problem?",
"Yes you're right, previous versions of `datasets` will still continue to download from github, but not future versions.\r\nIf we release `datasets` 2.1 by removing this behavior and if one day we release `datasets` 3.0 with a breaking change in the dataset scripts, then all version >=2.1 will break.",
"Ideally we should drop the differences between github datasets and community datasets, and maybe provide a way to fallback on an older version of a dataset repository if the user's `datasets` version is too old and incompatible with it.",
"I just noticed I literally opened the same PR lol\r\n\r\nI'm still convinced that we should do a better version compatibility check but we can see that later IMO",
"Normally in open source projects, when there is a duplicate PR, the latter is tagged as \"duplicate\" and closed. :stuck_out_tongue_winking_eye: \r\n\r\nLet me make things clear in my mind: so you say that the blocking point that was preventing this PR from merging, now is no longer a blocking point and could be addresses in a subsequent PR?",
"Let me close the duplicate one, sorry\r\n\r\n> Let me make things clear my mind: so you say that the blocking point that was preventing this PR from merging now is no longer a blocking point and could be addresses in a subsequent PR?\r\n\r\nYes 🙈",
"> Note that after this PR, all the changes made to a dataset will affect all the datasets version from now on\r\n\r\nYes, we have aligned this behavior with Hub datasets, as this is already the case for Hub datasets."
] | 1,648,632,116,000 | 1,663,332,206,000 | 1,663,332,043,000 | MEMBER | null | We have recurrently had connection errors when requesting GitHub because sometimes the site is not available.
This PR requests the Hub instead, once all GitHub datasets are mirrored on the Hub.
Fix #2048
Related to:
- #4051
- #3210
- #2787
- #2075
- #2036
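As a reproducibility side note, one can already pin a loading script to a fixed revision explicitly; a minimal sketch below, with a placeholder dataset name and tag not taken from this PR:
```python
from datasets import load_dataset

# Pin the loading script to a specific git tag/branch instead of the default resolution
ds = load_dataset("glue", "sst2", revision="2.0.0")
```
| {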
"url": "https://api.github.com/repos/huggingface/datasets/issues/4059/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4059/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4059",
"html_url": "https://github.com/huggingface/datasets/pull/4059",
"diff_url": "https://github.com/huggingface/datasets/pull/4059.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4059.patch",
"merged_at": 1663332043000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4058 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4058/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4058/comments | https://api.github.com/repos/huggingface/datasets/issues/4058/events | https://github.com/huggingface/datasets/pull/4058 | 1,185,611,600 | PR_kwDODunzps41RPhl | 4,058 | Updated annotations for nli_tr dataset | {
"login": "e-budur",
"id": 2246791,
"node_id": "MDQ6VXNlcjIyNDY3OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2246791?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/e-budur",
"html_url": "https://github.com/e-budur",
"followers_url": "https://api.github.com/users/e-budur/followers",
"following_url": "https://api.github.com/users/e-budur/following{/other_user}",
"gists_url": "https://api.github.com/users/e-budur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/e-budur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/e-budur/subscriptions",
"organizations_url": "https://api.github.com/users/e-budur/orgs",
"repos_url": "https://api.github.com/users/e-budur/repos",
"events_url": "https://api.github.com/users/e-budur/events{/privacy}",
"received_events_url": "https://api.github.com/users/e-budur/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you so much @[lhoestq](https://github.com/lhoestq) for the time you take to your review the PR!"
] | 1,648,597,619,000 | 1,649,796,912,000 | 1,649,759,842,000 | CONTRIBUTOR | null | This PR adds annotation tags for the `nli_tr` dataset so that it is searchable via the relevant query parameters.
The annotations in this PR are based on the existing annotations of `snli` and `multi_nli` datasets as `nli_tr` is a machine-generated extension of those datasets.
This PR is intended only for updating the annotation labels; a follow-up PR will fill in the missing sections of the `README.md`.
Thanks for taking the time to review it. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4058/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4058/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4058",
"html_url": "https://github.com/huggingface/datasets/pull/4058",
"diff_url": "https://github.com/huggingface/datasets/pull/4058.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4058.patch",
"merged_at": 1649759842000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4057 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4057/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4057/comments | https://api.github.com/repos/huggingface/datasets/issues/4057/events | https://github.com/huggingface/datasets/issues/4057 | 1,185,442,001 | I_kwDODunzps5GqGjR | 4,057 | `load_dataset` consumes too much memory for audio + tar archives | {
"login": "JFCeron",
"id": 50839826,
"node_id": "MDQ6VXNlcjUwODM5ODI2",
"avatar_url": "https://avatars.githubusercontent.com/u/50839826?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JFCeron",
"html_url": "https://github.com/JFCeron",
"followers_url": "https://api.github.com/users/JFCeron/followers",
"following_url": "https://api.github.com/users/JFCeron/following{/other_user}",
"gists_url": "https://api.github.com/users/JFCeron/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JFCeron/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JFCeron/subscriptions",
"organizations_url": "https://api.github.com/users/JFCeron/orgs",
"repos_url": "https://api.github.com/users/JFCeron/repos",
"events_url": "https://api.github.com/users/JFCeron/events{/privacy}",
"received_events_url": "https://api.github.com/users/JFCeron/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi ! Could it be because you need to free the memory used by `tarfile` by emptying the tar `members` by any chance ?\r\n```python\r\n yield key, {\"audio\": {\"path\": audio_name, \"bytes\": audio_file_obj.read()}}\r\n audio_tarfile.members = [] # free memory\r\n key += 1\r\n```\r\n\r\nand then you can set `DEFAULT_WRITER_BATCH_SIZE` to whatever value makes more sense for your dataset.\r\n\r\nLet me know if the issue persists (which could happen, given that you managed to run your generator without RAM issues and using os.walk didn't solve the issue)",
"Thanks for your reply! Tried it but the issue persists. ",
"I also run out of memory when loading `mozilla-foundation/common_voice_8_0` that also uses `tarfile` via `dl_manager.iter_archive`. There seems to be some data files that stay in memory somewhere\r\n\r\nI don't have the issue with other compression formats like gzipped files",
"I'm facing a similar memory leak issue when loading cv8. As you said @lhoestq \r\n\r\n`load_dataset(\"mozilla-foundation/common_voice_8_0\", \"en\", use_auth_token=True, writer_batch_size=1)`\r\n\r\nThis issue is happening on a 32GB RAM machine. \r\n\r\nAny updates on how to fix this?",
"I've run a memory profiler to see where's the leak comes from:\r\n\r\n![image](https://user-images.githubusercontent.com/5097052/165101712-e7060ae5-77b2-4f6a-92bd-2996dbd60b36.png)\r\n\r\n... it seems that it's related to the tarfile lib buffer reader. But I don't know why it's only happening on the huggingface script",
"I have the same problem when loading video into numpy. \r\n```\r\nyield id,{ \r\n \"video\": imageio.v3.imread(video_path),\r\n \"label\": int(label)\r\n}\r\n```\r\nSince video files are heavy, it can only processes a dozen samples before OOM.",
"For video datasets I think you can just define the max number of video that can stay in memory by adding this class attribute to your dataset builer:\r\n```py\r\nDEFAULT_WRITER_BATCH_SIZE = 8 # only 8 videos at a time in memory before flushing the dataset writer\r\n```",
"same thing happens for me with `load_dataset(\"mozilla-foundation/common_voice_8_0\", \"en\", use_auth_token=True, writer_batch_size=1)` on azure ml. seems to fill up `tmp` and not release that memory until OOM",
"I'll add that I'm encountering the same issue with\r\n`load_dataset('wikipedia', 'ceb', runner='DirectRunner', split='train')`.\r\nSame for `'es'` in place of `'ceb'`.",
"> I'll add that I'm encountering the same issue with\r\n> load_dataset('wikipedia', 'ceb', runner='DirectRunner', split='train').\r\n> Same for 'es' in place of 'ceb'.\r\n\r\nThis is because the Apache Beam `DirectRunner` runs with the full data in memory unfortunately. Optimizing the `DirectRunner` is not in the scope of the `datasets` library, but rather in the Apache Beam project I believe. If you have memory issues with the `DirectRunner`, please consider switching to a machine with more RAM, or to distributed processing runtimes like Spark, Flink or DataFlow. There is a bit of documentation here: https://huggingface.co/docs/datasets/beam",
"> > I'll add that I'm encountering the same issue with\r\n> > `load_dataset('wikipedia', 'ceb', runner='DirectRunner', split='train')`.\r\n> > Same for `'es'` in place of `'ceb'`.\r\n> \r\n> This is because the Apache Beam `DirectRunner` runs with the full data in memory unfortunately. Optimizing the `DirectRunner` is not in the scope of the `datasets` library, but rather in the Apache Beam project I believe. If you have memory issues with the `DirectRunner`, please consider switching to a machine with more RAM, or to distributed processing runtimes like Spark, Flink or DataFlow. There is a bit of documentation here: https://huggingface.co/docs/datasets/beam\r\n\r\nFair enough, but this line of code crashed an AWS instance with 1024GB of RAM! I have also tried with `Runner='Flink'` on an environment with 51GB of RAM, which also failed.\r\n\r\nApache Beam has tons of open tickets already - is it worth submitting one to them over this?",
"> Fair enough, but this line of code crashed an AWS instance with 1024GB of RAM!\r\n\r\nWhat, wikipedia is not even bigger than 20GB\r\n\r\ncc @albertvillanova",
"> > Fair enough, but this line of code crashed an AWS instance with 1024GB of RAM!\r\n> \r\n> What, wikipedia is not even bigger than 20GB\r\n> \r\n> cc @albertvillanova\r\n\r\nLuckily, on Colab you can watch the call stack at the bottom of the screen - much of the time and space complexity seems to come from `_parse_and_clean_wikicode()` rather than the actual download process. As far as I can tell, the script is loading the full dataset and then cleaning it all at once, which is consuming a lot of memory.",
"I think we are mixing many different bugs in this Issue page:\r\n- TAR archive with audio files\r\n- video file\r\n- distributed parsing of Wikipedia using Apache Beam\r\n\r\n@dan-the-meme-man may I ask you to open a separate Issue for your problem? Then I will address it. It is important to fix it because we are currently working on a Datasets enhancement to be able to provide all Wikipedias already preprocessed.\r\n\r\nOn the other hand, I think we could keep this Issue page for the original problem: TAR archive with audio files. That is not fixed yet either.",
"Is there an update on the TAR archive issue with audio files? Happy to lend a hand in fixing this :)",
"I found the issue with Common Voice 8 and opened a PR to fix it: https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0/discussions/2\r\n\r\nBasically the `metadata` dict that contains the transcripts per audio file was continuously getting filled with bytes from `f.read()` because of this code:\r\n```python\r\nresult = metadata[path]\r\nresult[\"audio\"] = {\"path\": path, \"bytes\": f.read()}\r\n```\r\ncopying the result with `result = dict(metadata[path])` fixes it: the bytes are no longer added to `metadata`\r\n\r\nI also opened PRs to the other CV datasets",
"Amazing, that's a great find! Thanks @lhoestq!",
"I'm closing this one for now, but feel free to reopen if you encounter other memory issues with audio datasets"
] | 1,648,589,935,000 | 1,660,645,375,000 | 1,660,645,375,000 | NONE | null |
## Description
`load_dataset` consumes more and more memory until it's killed, even though it's made with a generator. I'm adding a loading script for a new dataset, made up of ~15s audio coming from a tar file. Tried setting `DEFAULT_WRITER_BATCH_SIZE = 1` as per the discussion in #741 but the problem persists.
## Steps to reproduce the bug
Here's my implementation of `_generate_examples`:
```python
class MyDatasetBuilder(datasets.GeneratorBasedBuilder):
    DEFAULT_WRITER_BATCH_SIZE = 1
    ...

    def _split_generators(self, dl_manager):
        archive_path = dl_manager.download(_DL_URLS[self.config.name])
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "audio_tarfile_path": archive_path["audio_tarfile"]
                },
            ),
        ]

    def _generate_examples(self, audio_tarfile_path):
        key = 0
        with tarfile.open(audio_tarfile_path, mode="r|") as audio_tarfile:
            for audio_tarinfo in audio_tarfile:
                audio_name = audio_tarinfo.name
                audio_file_obj = audio_tarfile.extractfile(audio_tarinfo)
                yield key, {"audio": {"path": audio_name, "bytes": audio_file_obj.read()}}
                key += 1
```
I then try to load via `ds = load_dataset('./datasets/my_new_dataset', writer_batch_size=1)`, and memory usage grows until all 8GB of my machine are taken and the process is killed (`Killed`). I also tried an untarred version of this using `os.walk`, but the same happened.
I created a script to confirm that one can safely go through such a generator, which runs just fine with memory <500MB at all times.
```python
import tarfile

def generate_examples():
    audio_tarfile = tarfile.open("audios.tar", mode="r|")
    key = 0
    for audio_tarinfo in audio_tarfile:
        audio_name = audio_tarinfo.name
        audio_file_obj = audio_tarfile.extractfile(audio_tarinfo)
        yield key, {"audio": {"path": audio_name, "bytes": audio_file_obj.read()}}
        key += 1

if __name__ == "__main__":
    examples = generate_examples()
    for example in examples:
        pass
```
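To narrow down where the memory actually goes, a small diagnostic harness around the same loop may help. This is a sketch using the standard library's `tracemalloc`, not part of the original report; the archive name and print interval are arbitrary choices:
```python
import tarfile
import tracemalloc

tracemalloc.start()
with tarfile.open("audios.tar", mode="r|") as audio_tarfile:
    for i, audio_tarinfo in enumerate(audio_tarfile):
        member = audio_tarfile.extractfile(audio_tarinfo)
        if member is not None:  # extractfile returns None for directories
            member.read()
        if i % 1000 == 0:
            current, peak = tracemalloc.get_traced_memory()
            print(f"{i}: current={current / 2**20:.1f} MiB, peak={peak / 2**20:.1f} MiB")
```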
## Expected results
Memory consumption should be similar to the non-huggingface script.
## Actual results
Process is killed after consuming too much memory.
## Environment info
- `datasets` version: 2.0.1.dev0
- Platform: Linux-4.19.0-20-cloud-amd64-x86_64-with-debian-10.12
- Python version: 3.7.12
- PyArrow version: 6.0.1
- Pandas version: 1.3.5 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4057/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4057/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4056 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4056/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4056/comments | https://api.github.com/repos/huggingface/datasets/issues/4056/events | https://github.com/huggingface/datasets/issues/4056 | 1,185,155,775 | I_kwDODunzps5GpAq_ | 4,056 | Unexpected behavior of _TempDirWithCustomCleanup | {
"login": "JonasGeiping",
"id": 22680696,
"node_id": "MDQ6VXNlcjIyNjgwNjk2",
"avatar_url": "https://avatars.githubusercontent.com/u/22680696?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JonasGeiping",
"html_url": "https://github.com/JonasGeiping",
"followers_url": "https://api.github.com/users/JonasGeiping/followers",
"following_url": "https://api.github.com/users/JonasGeiping/following{/other_user}",
"gists_url": "https://api.github.com/users/JonasGeiping/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JonasGeiping/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JonasGeiping/subscriptions",
"organizations_url": "https://api.github.com/users/JonasGeiping/orgs",
"repos_url": "https://api.github.com/users/JonasGeiping/repos",
"events_url": "https://api.github.com/users/JonasGeiping/events{/privacy}",
"received_events_url": "https://api.github.com/users/JonasGeiping/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi ! Would setting TMPDIR at the beginning of your python script/session work ? I mean, even before importing transformers, datasets, etc. and using them ? I think this would be the most robust solution given any library that uses `tempfile`. I don't think we aim to support environment variables to be changed at run time",
"Hi, yeah setting the environment variable before the imports / as environment variable outside is another way to fix this. I am just arguing that `datasets` already uses its own global variable to track temporary files: `_TEMP_DIR_FOR_TEMP_CACHE_FILES`, and the creation of this global variable should respect TMPDIR instead of relying on tempfile to do so."
] | 1,648,573,102,000 | 1,648,652,884,000 | null | NONE | null | ## Describe the bug
This is not 100% a bug in `datasets`, but behavior that surprised me, and I think this could be made more robust on the `datasets` side.
When using `datasets.disable_caching()`, cache files are written to a temporary directory. This directory should be based on the environment variable TMPDIR. I want to set TMPDIR at runtime using `os.environ["TMPDIR"] = something`, but depending on other imported modules this can fail to take effect.
## Steps to reproduce the bug
`_TempDirWithCustomCleanup` relies on `tempfile` to generate a path to a temporary directory. However, `tempfile` generates the path only once. This can be a problem when trying to set TMPDIR at runtime whenever other code imports `tempfile` first and does something unexpected.
For example (after too much trial and error) I found out that a different part of the code base I work with defines a class `PatchedDataCollatorForLanguageModeling(transformers.DataCollatorForLanguageModeling)` based on a `transformers` class. This import is enough to trigger `tempfile` to generate a temporary path, leading to the wrong path being cached in `tempfile.tempdir`.
## Suggestion:
I could also file this as a bug with `transformers`, but I think fixing this on the `datasets` side would be much more robust:
Datasets could recompute the temporary path once (technically possible via `tempfile._get_default_tempdir` or by resetting
the global variable `tempfile.tempdir` to None) before setting its own global `_TEMP_DIR_FOR_TEMP_CACHE_FILES`.
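A minimal sketch of that recomputation, relying only on documented `tempfile` behavior (`tempfile.tempdir` caches the chosen directory, and setting it to None forces `gettempdir()` to pick it again, honoring the current TMPDIR); the path below is a placeholder:
```python
import os
import tempfile

os.environ["TMPDIR"] = "/path/to/scratch"  # placeholder; must exist and be writable
tempfile.tempdir = None  # drop the cached value so TMPDIR is re-read
print(tempfile.gettempdir())  # now resolves under the new TMPDIR
```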
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4056/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4056/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4055 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4055/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4055/comments | https://api.github.com/repos/huggingface/datasets/issues/4055/events | https://github.com/huggingface/datasets/pull/4055 | 1,184,976,292 | PR_kwDODunzps41PGF1 | 4,055 | [DO NOT MERGE] Test doc-builder | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Docs built successfully, so closing this."
] | 1,648,564,742,000 | 1,648,643,474,000 | 1,648,643,152,000 | MEMBER | null | This is a test PR to ensure the changes in https://github.com/huggingface/doc-builder/pull/164 don't break anything in `datasets` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4055/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4055/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4055",
"html_url": "https://github.com/huggingface/datasets/pull/4055",
"diff_url": "https://github.com/huggingface/datasets/pull/4055.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4055.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4054 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4054/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4054/comments | https://api.github.com/repos/huggingface/datasets/issues/4054/events | https://github.com/huggingface/datasets/pull/4054 | 1,184,575,368 | PR_kwDODunzps41Nwjz | 4,054 | Support float data types in pearsonr/spearmanr metrics | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,546,150,000 | 1,648,562,879,000 | 1,648,562,540,000 | MEMBER | null | Fix #4053. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4054/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4054/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4054",
"html_url": "https://github.com/huggingface/datasets/pull/4054",
"diff_url": "https://github.com/huggingface/datasets/pull/4054.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4054.patch",
"merged_at": 1648562540000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4053 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4053/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4053/comments | https://api.github.com/repos/huggingface/datasets/issues/4053/events | https://github.com/huggingface/datasets/issues/4053 | 1,184,500,378 | I_kwDODunzps5Gmgqa | 4,053 | Modify datatype from `int32` to `float` for pearsonr, spearmanr. | {
"login": "Woodywarhol9",
"id": 86637320,
"node_id": "MDQ6VXNlcjg2NjM3MzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/86637320?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Woodywarhol9",
"html_url": "https://github.com/Woodywarhol9",
"followers_url": "https://api.github.com/users/Woodywarhol9/followers",
"following_url": "https://api.github.com/users/Woodywarhol9/following{/other_user}",
"gists_url": "https://api.github.com/users/Woodywarhol9/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Woodywarhol9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Woodywarhol9/subscriptions",
"organizations_url": "https://api.github.com/users/Woodywarhol9/orgs",
"repos_url": "https://api.github.com/users/Woodywarhol9/repos",
"events_url": "https://api.github.com/users/Woodywarhol9/events{/privacy}",
"received_events_url": "https://api.github.com/users/Woodywarhol9/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@Woodywarhol9 good catch, thanks for reporting.\r\n\r\nWe are fixing this."
] | 1,648,542,461,000 | 1,648,562,540,000 | 1,648,562,540,000 | NONE | null | **Is your feature request related to a problem? Please describe.**
- Now [Pearsonr](https://github.com/huggingface/datasets/blob/master/metrics/pearsonr/pearsonr.py) and [Spearmanr](https://github.com/huggingface/datasets/blob/master/metrics/spearmanr/spearmanr.py) both get input data as 'int32'.
**Describe the solution you'd like**
- Considering that those metrics are widely used for the STS task (where labels are in 'float' data type),
it would be better to change the data type from 'int32' to 'float' to get exact similarity values.
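A minimal sketch of the requested change as it could appear in the metrics' `_info` features; hedged, since the exact dtype string (`float32` vs `float64`) is the maintainers' call:
```python
import datasets

features = datasets.Features(
    {
        "predictions": datasets.Value("float32"),  # was "int32"
        "references": datasets.Value("float32"),  # was "int32"
    }
)
```
| {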
"url": "https://api.github.com/repos/huggingface/datasets/issues/4053/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4053/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4052 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4052/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4052/comments | https://api.github.com/repos/huggingface/datasets/issues/4052/events | https://github.com/huggingface/datasets/issues/4052 | 1,184,447,977 | I_kwDODunzps5GmT3p | 4,052 | metric = metric_cls( TypeError: 'NoneType' object is not callable | {
"login": "klyuhang9",
"id": 39409233,
"node_id": "MDQ6VXNlcjM5NDA5MjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/39409233?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/klyuhang9",
"html_url": "https://github.com/klyuhang9",
"followers_url": "https://api.github.com/users/klyuhang9/followers",
"following_url": "https://api.github.com/users/klyuhang9/following{/other_user}",
"gists_url": "https://api.github.com/users/klyuhang9/gists{/gist_id}",
"starred_url": "https://api.github.com/users/klyuhang9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/klyuhang9/subscriptions",
"organizations_url": "https://api.github.com/users/klyuhang9/orgs",
"repos_url": "https://api.github.com/users/klyuhang9/repos",
"events_url": "https://api.github.com/users/klyuhang9/events{/privacy}",
"received_events_url": "https://api.github.com/users/klyuhang9/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @klyuhang9,\r\n\r\nI'm sorry but I can't reproduce your problem:\r\n```python\r\nIn [2]: metric = load_metric('glue', 'rte')\r\nDownloading builder script: 5.76kB [00:00, 2.40MB/s]\r\n```\r\n\r\nCould you please, retry to load the metric? Sometimes there are temporary connectivity issues.\r\n\r\nFeel free to re-open this issue of the problem persists."
] | 1,648,539,788,000 | 1,648,562,761,000 | 1,648,562,761,000 | NONE | null | Hi, friend. I've run into a problem.
When I run the code:
`metric = load_metric('glue', 'rte')`
The following error is raised:
`metric = metric_cls(`
`TypeError: 'NoneType' object is not callable`
I don't know why. Thanks for your help!
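If a partially downloaded script is the culprit, retrying while bypassing the cache may help; a sketch, assuming a flaky connection is the cause:
```python
from datasets import load_metric

# Force a fresh download of the metric script instead of reusing a broken cached copy
metric = load_metric("glue", "rte", download_mode="force_redownload")
```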
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4052/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4052/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4051 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4051/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4051/comments | https://api.github.com/repos/huggingface/datasets/issues/4051/events | https://github.com/huggingface/datasets/issues/4051 | 1,184,400,179 | I_kwDODunzps5GmIMz | 4,051 | ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/glue/glue.py | {
"login": "klyuhang9",
"id": 39409233,
"node_id": "MDQ6VXNlcjM5NDA5MjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/39409233?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/klyuhang9",
"html_url": "https://github.com/klyuhang9",
"followers_url": "https://api.github.com/users/klyuhang9/followers",
"following_url": "https://api.github.com/users/klyuhang9/following{/other_user}",
"gists_url": "https://api.github.com/users/klyuhang9/gists{/gist_id}",
"starred_url": "https://api.github.com/users/klyuhang9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/klyuhang9/subscriptions",
"organizations_url": "https://api.github.com/users/klyuhang9/orgs",
"repos_url": "https://api.github.com/users/klyuhang9/repos",
"events_url": "https://api.github.com/users/klyuhang9/events{/privacy}",
"received_events_url": "https://api.github.com/users/klyuhang9/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @klyuhang9,\r\n\r\nI'm sorry but I can't reproduce your problem:\r\n```python\r\nIn [4]: ds = load_dataset(\"glue\", \"sst2\", download_mode=\"force_redownload\")\r\nDownloading builder script: 28.8kB [00:00, 9.15MB/s] \r\nDownloading metadata: 28.7kB [00:00, 10.7MB/s] \r\nDownloading and preparing dataset glue/sst2 (download: 7.09 MiB, generated: 4.78 MiB, post-processed: Unknown size, total: 11.88 MiB) to .../.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad...\r\nDownloading data: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7.44M/7.44M [00:01<00:00, 4.12MB/s]\r\nDataset glue downloaded and prepared to .../.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad. Subsequent calls will reuse this data. \r\n100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1047.96it/s]\r\n\r\nIn [5]: ds\r\nOut[5]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 67349\r\n })\r\n validation: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 872\r\n })\r\n test: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 1821\r\n })\r\n})\r\n```\r\n\r\nPlease, note that sometimes GitHub has some temporary connectivity issues. Feel free to retry and re-open this issue if the problem persists.",
"Maybe it's because we are in China.",
"Are you able to access the URL in your web browser?",
"> Are you able to access the URL in your web browser?\r\n\r\nYes, with or without a VPN, we (people in China) can access the URL. And we can even use wget to download these files. We can download the pretrained language model automatically with the code.\r\nHowever, we CANNOT access glue.py & metric.py automatically. Every time, it will raise ConnectionError, and we have to download datasets manually (SQuAD is extremely hard to preprocess) and replace metric.py with scipy.metrics. If this problem is solved, many Chinese will save a lot of time.",
 ConnectionError: Couldn't reach">
"> ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/glue/glue.py\r\n> \r\n> I don't know why; it is ok when I use\r\n\r\nIf you search for `ConnectionError: Couldn't reach` on www.baidu.com (the Chinese Google; Google is banned and some people cannot access it), you will find that there are many questions about accessing `https://raw.githubusercontent.com`. There are some solutions, like adding `185.199.108.133 raw.githubusercontent.com` to `C:/Windows/System32/drivers/etc/hosts`, but it is time-consuming, hard for newcomers, and sometimes does not work."
] | 1,648,537,231,000 | 1,651,994,852,000 | 1,648,542,565,000 | NONE | null | Hi, I've run into a problem.
When I run the code:
`dataset = load_dataset('glue','sst2')`
The following error is raised:
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/glue/glue.py
I don't know why; it is ok when I use Google Chrome to view this url.
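Since the raw URL is reachable outside the library (e.g. via `wget`, as noted in the comments above), one workaround sketch is fetching the script manually and loading it from a local path:
```python
import urllib.request
from datasets import load_dataset

# Fetch the loading script once, then point load_dataset at the local file
url = "https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/glue/glue.py"
urllib.request.urlretrieve(url, "glue.py")
dataset = load_dataset("./glue.py", "sst2")
```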
Thanks for your help! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4051/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4051/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4050 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4050/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4050/comments | https://api.github.com/repos/huggingface/datasets/issues/4050/events | https://github.com/huggingface/datasets/pull/4050 | 1,184,346,501 | PR_kwDODunzps41NAMF | 4,050 | Add RVL-CDIP dataset | {
"login": "dnaveenr",
"id": 17746528,
"node_id": "MDQ6VXNlcjE3NzQ2NTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dnaveenr",
"html_url": "https://github.com/dnaveenr",
"followers_url": "https://api.github.com/users/dnaveenr/followers",
"following_url": "https://api.github.com/users/dnaveenr/following{/other_user}",
"gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions",
"organizations_url": "https://api.github.com/users/dnaveenr/orgs",
"repos_url": "https://api.github.com/users/dnaveenr/repos",
"events_url": "https://api.github.com/users/dnaveenr/events{/privacy}",
"received_events_url": "https://api.github.com/users/dnaveenr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks a lot for inputs. I'll use the URL suggested and check.\r\n\r\n> we need to implement the streamable (can't use os.path.join) and the non-streamable versions of _generate_examples.\r\n\r\nSure. I will check the reference and try this out, will get back to you if I face any issues.\r\n\r\n> The labels-only data file URL doesn't work for me, so feel free to ask the authors whether they are OK with us hosting the file on the Hub/S3 (to speed up the streamable version)\r\n\r\nJust checked. The author (Adam Harley) has responded positively and allowed us to host the file. Do I share the file with you for hosting it on Hub/S3 ?",
"> Just checked. The author (Adam Harley) has responded positively and allowed us to host the file. Do I share the file with you for hosting it on Hub/S3 ?\r\n\r\nYes, feel free to e-mail me the file. Then I'll create a repo under my namespace and push the file there. We run a GH action on a GH dataset after merging to create its repo on the Hub, so after this PR is merged, I'll push the file to the \"official\" namespace and update the download link.",
"> You can use this URL to avoid manual download: https://drive.google.com/uc?export=download&id=0Bz1dfcnrpXM-MUt4cHNzUEFXcmc\r\n\r\nFor some reason, the direct download doesn't seem to work for me even with this URL. \r\n```\r\nDownloading and preparing dataset rvl_cdip/default to ~/.cache/huggingface/datasets/rvl_cdip/default/1.0.0/ea152149e06310d60a9ef3c3020199dd4780bb952a773ba5aac6b57d59f12628...\r\nDownloading data files: 100%|█████████████████████████████████████████████████████| 1/1 [00:00<00:00, 6307.22it/s]\r\n{'rvl-cdip': '~/.cache/huggingface/datasets/downloads/07ef956a33750078d570d76fefe9fed49f7dc32ecf6e872d690de11e66bbe869'}\r\n```\r\nAnd this directory does not exist. Am I doing something wrong ?\r\nTo verify, I tried using [gdown](https://github.com/wkentaro/gdown) for the above URL, we get the following : \r\n```\r\nAccess denied with the following error:\r\n\r\n Cannot retrieve the public link of the file. You may need to change\r\n the permission to 'Anyone with the link', or have had many accesses. \r\n\r\nYou may still be able to access the file from the browser:\r\n```\r\n----\r\n\r\n> Yes, feel free to e-mail me the file. Then I'll create a repo under my namespace and push the file there. We run a GH action on a GH dataset after merging to create its repo on the Hub, so after this PR is merged, I'll push the file to the \"official\" namespace and update the download link.\r\n\r\nGot it. I've sent you an email with the file. Thank you.",
"Actually this URL works for direct download :\r\n`https://drive.google.com/uc?export=download&confirm=pbef&id=0Bz1dfcnrpXM-MUt4cHNzUEFXcmc`\r\nRef : https://github.com/wkentaro/gdown/issues/146#issuecomment-1042382215\r\n\r\nI'm working on the streamable versions of _generate_examples as well, will update you regarding this.",
"Google Drive is a tricky host, and it's easy to exceed daily download quota limits, so if we are allowed to host the `rvl-cdip.tar.gz` file, I can push it to the Hub.",
"Just checked, the authors have agreed. He mentioned that he had complaints about the GDrive link.\r\nYou can push it to the Hub and share the link. :)",
"I have added :\r\n- streaming support for rvl-cdip.tar.gz file. [ Need to test this ]\r\n\r\nIs it possible for you to upload the train.txt, test.txt, val.txt files separately to the Hub instead of labels_only.tar.gz file.\r\nCurrently during the tests in stream mode, we get : \r\n`NotImplementedError: Extraction protocol for TAR archives like 'https://huggingface.co/datasets/mariosasko/rvl_cdip/resolve/main/labels_only.tar.gz' is not implemented in streaming mode. Please use dl_manager.iter_archive instead.`\r\nIf the label files are present as .txt files then we can directly use dl_manager.download.\r\n\r\n\r\n",
"The rvl-cdip.tar.gz archive and txt files with the labels are on the Hub!",
"- Added 🤗 Hub download links.\r\n- streamable and non-streamable versions of _generate_examples.\r\n- Updated dummy data, both real and dummy dataset tests have passed.\r\n\r\n",
"I've removed the extraction of the archive file locally as suggested. Let me know if any other changes are required. :)",
"The check for **Update Hub repositories / update-hub-repositories** has failed.\r\n\r\n> https://github.com/huggingface/datasets/runs/6116502392?check_suite_focus=true\r\n\r\n",
"Hi ! Thanks for reporting ;) yes this CI job has been failing for a few days. I'm working on fixing it, and I'm manually running it on my side in the meantime",
"Great. :D Thank you @lhoestq "
] | 1,648,533,602,000 | 1,650,621,307,000 | 1,650,561,341,000 | CONTRIBUTOR | null | Resolves #2762
Dataset Request : Add RVL-CDIP dataset [#2762](https://github.com/huggingface/datasets/issues/2762)
This PR adds the RVL-CDIP dataset.
The dataset is distributed via a Google Drive link and wasn't getting downloaded automatically, so I have provided `manual_download_instructions`.
- I have added the dummy_data.zip as well.
I need input on how to run the real-data and dummy-data tests for datasets that require a manual download.
Inputs and suggestions for improvement are welcome. Thank you.
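For the streaming question discussed in the comments above, a rough sketch of the `dl_manager.iter_archive` pattern; the URL constants, builder name, and label-file format are assumptions, not the final implementation:
```python
import datasets

_ARCHIVE_URL = "https://example.com/rvl-cdip.tar.gz"  # placeholder URL
_LABELS_URL = "https://example.com/train.txt"  # placeholder URL

class RvlCdip(datasets.GeneratorBasedBuilder):  # placeholder builder name
    def _split_generators(self, dl_manager):
        archive_path = dl_manager.download(_ARCHIVE_URL)
        labels_path = dl_manager.download(_LABELS_URL)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "files": dl_manager.iter_archive(archive_path),
                    "labels_path": labels_path,
                },
            ),
        ]

    def _generate_examples(self, files, labels_path):
        with open(labels_path, encoding="utf-8") as f:
            # assumed "<path> <label>" format per line
            labels = dict(line.strip().rsplit(" ", 1) for line in f)
        for key, (path, file) in enumerate(files):
            if path in labels:
                yield key, {"image": {"path": path, "bytes": file.read()}, "label": int(labels[path])}
```
| {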
"url": "https://api.github.com/repos/huggingface/datasets/issues/4050/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4050/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4050",
"html_url": "https://github.com/huggingface/datasets/pull/4050",
"diff_url": "https://github.com/huggingface/datasets/pull/4050.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4050.patch",
"merged_at": 1650561341000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4049 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4049/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4049/comments | https://api.github.com/repos/huggingface/datasets/issues/4049/events | https://github.com/huggingface/datasets/pull/4049 | 1,183,832,893 | PR_kwDODunzps41LSjv | 4,049 | Create metric card for the Code Eval metric | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"if possible, give relevant names to your Pull requests @sashavor (make it easier to scan the repo activity) Thanks!",
"updating them now! thanks for the feedback @julien-c "
] | 1,648,492,463,000 | 1,648,561,092,000 | 1,648,560,770,000 | CONTRIBUTOR | null | Creating initial Code Eval metric card | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4049/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4049/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4049",
"html_url": "https://github.com/huggingface/datasets/pull/4049",
"diff_url": "https://github.com/huggingface/datasets/pull/4049.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4049.patch",
"merged_at": 1648560770000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4048 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4048/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4048/comments | https://api.github.com/repos/huggingface/datasets/issues/4048/events | https://github.com/huggingface/datasets/issues/4048 | 1,183,804,576 | I_kwDODunzps5Gj2yg | 4,048 | Split size error on `amazon_us_reviews` / `PC_v1_00` dataset | {
"login": "trentonstrong",
"id": 191985,
"node_id": "MDQ6VXNlcjE5MTk4NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/191985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/trentonstrong",
"html_url": "https://github.com/trentonstrong",
"followers_url": "https://api.github.com/users/trentonstrong/followers",
"following_url": "https://api.github.com/users/trentonstrong/following{/other_user}",
"gists_url": "https://api.github.com/users/trentonstrong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/trentonstrong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/trentonstrong/subscriptions",
"organizations_url": "https://api.github.com/users/trentonstrong/orgs",
"repos_url": "https://api.github.com/users/trentonstrong/repos",
"events_url": "https://api.github.com/users/trentonstrong/events{/privacy}",
"received_events_url": "https://api.github.com/users/trentonstrong/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | closed | false | {
"login": "trentonstrong",
"id": 191985,
"node_id": "MDQ6VXNlcjE5MTk4NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/191985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/trentonstrong",
"html_url": "https://github.com/trentonstrong",
"followers_url": "https://api.github.com/users/trentonstrong/followers",
"following_url": "https://api.github.com/users/trentonstrong/following{/other_user}",
"gists_url": "https://api.github.com/users/trentonstrong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/trentonstrong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/trentonstrong/subscriptions",
"organizations_url": "https://api.github.com/users/trentonstrong/orgs",
"repos_url": "https://api.github.com/users/trentonstrong/repos",
"events_url": "https://api.github.com/users/trentonstrong/events{/privacy}",
"received_events_url": "https://api.github.com/users/trentonstrong/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "trentonstrong",
"id": 191985,
"node_id": "MDQ6VXNlcjE5MTk4NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/191985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/trentonstrong",
"html_url": "https://github.com/trentonstrong",
"followers_url": "https://api.github.com/users/trentonstrong/followers",
"following_url": "https://api.github.com/users/trentonstrong/following{/other_user}",
"gists_url": "https://api.github.com/users/trentonstrong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/trentonstrong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/trentonstrong/subscriptions",
"organizations_url": "https://api.github.com/users/trentonstrong/orgs",
"repos_url": "https://api.github.com/users/trentonstrong/repos",
"events_url": "https://api.github.com/users/trentonstrong/events{/privacy}",
"received_events_url": "https://api.github.com/users/trentonstrong/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Follow-up: I have confirmed there are no duplicate lines via `sort amazon_reviews_us_PC_v1_00.tsv | uniq -cd` after extracting the raw file.",
"Hi @trentonstrong, thanks for reporting!\r\n\r\nI confirm that loading this dataset configuration throws a `NonMatchingSplitsSizesError`:\r\n```\r\nNonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=350242049, num_examples=785730, dataset_name='amazon_us_reviews'), 'recorded': SplitInfo(name='train', num_bytes=3982712078, num_examples=6908554, dataset_name='amazon_us_reviews')}]\r\n```\r\n\r\nAlso thank you for your offer to fix this. You can find information about how to update the metadata JSON file here: https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#automatically-add-code-metadata\r\n```shell\r\ndatasets-cli test datasets/amazon_us_reviews --save_infos --all_configs\r\n```\r\nPlease, feel free to open a PR with this fix. And do not hesitate to ping me if you need any help.",
"No sweat. Will get it patched up ASAP."
] | 1,648,491,124,000 | 1,649,420,970,000 | 1,649,420,970,000 | CONTRIBUTOR | null | ## Describe the bug
When downloading this subset as of 3-28-2022 you will encounter a split-size error after the dataset is extracted. The extracted dataset has roughly 6M rows, while the split expects <1M.
Upon digging a little deeper, I downloaded the raw files from `https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_PC_v1_00.tsv.gz` and extracted them. A line count via `wc -l` confirms the ~6M number that we see, and the data looks valid at a glance (I did not check for duplicate rows). My guess is that this file has either been updated in place or there is a bug in the dataset metadata.
Happy to submit a PR and fix this up if it turns out to be a metadata issue, but wanted to get some other :eyes: on it first.
## Steps to reproduce the bug
```python
load_dataset('amazon_us_reviews', 'PC_v1_00')
```
## Expected results
Dataset is downloaded and extracted successfully.
## Actual results
A split-size exception is thrown.
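As an interim workaround (a sketch, not an endorsed fix, since it skips the very verification that caught this problem), the split-size check can be bypassed while the metadata is being corrected:
```python
from datasets import load_dataset

# `ignore_verifications=True` skips checksum and split-size verification,
# so the (larger) extracted data loads despite the stale metadata.
dataset = load_dataset("amazon_us_reviews", "PC_v1_00", ignore_verifications=True)
```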
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 7.0.0
- Pandas version: 1.4.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4048/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4048/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4047 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4047/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4047/comments | https://api.github.com/repos/huggingface/datasets/issues/4047/events | https://github.com/huggingface/datasets/issues/4047 | 1,183,789,237 | I_kwDODunzps5GjzC1 | 4,047 | Dataset.unique(column: str) -> ArrowNotImplementedError | {
"login": "orkenstein",
"id": 1461936,
"node_id": "MDQ6VXNlcjE0NjE5MzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1461936?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/orkenstein",
"html_url": "https://github.com/orkenstein",
"followers_url": "https://api.github.com/users/orkenstein/followers",
"following_url": "https://api.github.com/users/orkenstein/following{/other_user}",
"gists_url": "https://api.github.com/users/orkenstein/gists{/gist_id}",
"starred_url": "https://api.github.com/users/orkenstein/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/orkenstein/subscriptions",
"organizations_url": "https://api.github.com/users/orkenstein/orgs",
"repos_url": "https://api.github.com/users/orkenstein/repos",
"events_url": "https://api.github.com/users/orkenstein/events{/privacy}",
"received_events_url": "https://api.github.com/users/orkenstein/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @orkenstein, thanks for reporting.\r\n\r\nPlease note that for this case, our `datasets` library uses under the hood the Apache Arrow `unique` function: https://arrow.apache.org/docs/python/generated/pyarrow.compute.unique.html#pyarrow.compute.unique\r\n\r\nAnd currently the Apache Arrow `unique` function is only implemented for these input types (see info in their [docs](https://arrow.apache.org/docs/cpp/compute.html#array-wise-vector-functions)): Boolean, Null, Numeric, Temporal, Binary- and String-like.\r\n\r\nHowever, the data types of the `wikiann` dataset are all `list<item: string>` (see its [dataset card](https://huggingface.co/datasets/wikiann#data-fields)), and thus, not yet supported by the Apache Arrow `unique` function.",
"As a workaround solution you can use pandas:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset('wikiann', 'en', split='train')\r\ndf = dataset.to_pandas()\r\nunique_df = df[~df.tokens.apply(tuple).duplicated()] # from https://stackoverflow.com/a/46958336/17517845\r\n```\r\n\r\nNote that pandas loads the dataset in memory (this one is small so it's fine).",
"@lhoestq thank you! I will fall back to this method for now"
] | 1,648,490,372,000 | 1,648,837,497,000 | 1,648,837,497,000 | NONE | null | ## Describe the bug
I'm trying to use the `unique()` function, but it fails.
## Steps to reproduce the bug
1. Get dataset
2. Call `unique`
3. Error
# Sample code to reproduce the bug
```python
!pip show datasets
from datasets import load_dataset
dataset = load_dataset('wikiann', 'en')
dataset['train'].column_names
dataset['train'].unique(dataset['train'].column_names[0])
```
## Expected results
It would be nice to actually see unique items
## Actual results
Error:
```python
---------------------------------------------------------------------------
ArrowNotImplementedError Traceback (most recent call last)
[<ipython-input-10-5e0de07ed42c>](https://s0qyv2vjaji-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220324-060046-RC00_436956229#) in <module>()
6
7 dataset['train'].column_names
----> 8 dataset['train'].unique(dataset['train'].column_names[0])
5 frames
/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowNotImplementedError: Function unique has no kernel matching input types (array[list<item: string>])
```
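One workaround sketch, assuming you only need uniqueness over whole token sequences: serialize each `list<string>` into a plain string with `map`, then call `unique` on that string column, which Arrow's kernel does support:
```python
from datasets import load_dataset

dataset = load_dataset("wikiann", "en", split="train")

# Join each token list into one string so Arrow's `unique` kernel, which
# handles string arrays but not list<item: string>, can process it.
# (Assumes tokens contain no spaces; pick a rarer separator otherwise.)
dataset = dataset.map(lambda example: {"tokens_str": " ".join(example["tokens"])})
unique_sequences = dataset.unique("tokens_str")
```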
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: Google Collab
- Python version: 3.7.13
- PyArrow version: 6.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4047/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4047/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4046 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4046/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4046/comments | https://api.github.com/repos/huggingface/datasets/issues/4046/events | https://github.com/huggingface/datasets/pull/4046 | 1,183,723,360 | PR_kwDODunzps41K6_H | 4,046 | Create metric card for XNLI | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,486,678,000 | 1,648,560,779,000 | 1,648,560,450,000 | CONTRIBUTOR | null | Proposing a metric card for XNLI | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4046/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4046/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4046",
"html_url": "https://github.com/huggingface/datasets/pull/4046",
"diff_url": "https://github.com/huggingface/datasets/pull/4046.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4046.patch",
"merged_at": 1648560450000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4045 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4045/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4045/comments | https://api.github.com/repos/huggingface/datasets/issues/4045/events | https://github.com/huggingface/datasets/pull/4045 | 1,183,661,091 | PR_kwDODunzps41KtfV | 4,045 | Fix CLI dummy data generation | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,483,755,000 | 1,648,739,052,000 | 1,648,738,746,000 | MEMBER | null | PR:
- #3868
broke the CLI dummy data generation.
Fix #4044. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4045/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4045/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4045",
"html_url": "https://github.com/huggingface/datasets/pull/4045",
"diff_url": "https://github.com/huggingface/datasets/pull/4045.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4045.patch",
"merged_at": 1648738746000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4044 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4044/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4044/comments | https://api.github.com/repos/huggingface/datasets/issues/4044/events | https://github.com/huggingface/datasets/issues/4044 | 1,183,658,942 | I_kwDODunzps5GjTO- | 4,044 | CLI dummy data generation is broken | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,648,483,657,000 | 1,648,738,746,000 | 1,648,738,746,000 | MEMBER | null | ## Describe the bug
We get a TypeError when running CLI dummy data generation:
```shell
datasets-cli dummy_data datasets/<your-dataset-folder> --auto_generate
```
gives:
```
File ".../huggingface/datasets/src/datasets/commands/dummy_data.py", line 361, in _autogenerate_dummy_data
dataset_builder._prepare_split(split_generator)
TypeError: _prepare_split() missing 1 required positional argument: 'check_duplicate_keys'
```
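Based on the traceback, a minimal sketch of the likely fix in `src/datasets/commands/dummy_data.py` (the argument value is an assumption; dummy data generation should not need duplicate-key checking):
```python
# Hypothetical patch: pass the newly required argument explicitly.
dataset_builder._prepare_split(split_generator, check_duplicate_keys=False)
```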
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4044/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4044/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4043 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4043/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4043/comments | https://api.github.com/repos/huggingface/datasets/issues/4043/events | https://github.com/huggingface/datasets/pull/4043 | 1,183,624,475 | PR_kwDODunzps41Kl0b | 4,043 | Create metric card for CUAD | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,481,938,000 | 1,648,567,256,000 | 1,648,566,919,000 | CONTRIBUTOR | null | Proposing a CUAD metric card | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4043/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4043/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4043",
"html_url": "https://github.com/huggingface/datasets/pull/4043",
"diff_url": "https://github.com/huggingface/datasets/pull/4043.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4043.patch",
"merged_at": 1648566919000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4041 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4041/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4041/comments | https://api.github.com/repos/huggingface/datasets/issues/4041/events | https://github.com/huggingface/datasets/issues/4041 | 1,183,599,461 | I_kwDODunzps5GjEtl | 4,041 | Add support for IIIF in datasets | {
"login": "davanstrien",
"id": 8995957,
"node_id": "MDQ6VXNlcjg5OTU5NTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8995957?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davanstrien",
"html_url": "https://github.com/davanstrien",
"followers_url": "https://api.github.com/users/davanstrien/followers",
"following_url": "https://api.github.com/users/davanstrien/following{/other_user}",
"gists_url": "https://api.github.com/users/davanstrien/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davanstrien/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davanstrien/subscriptions",
"organizations_url": "https://api.github.com/users/davanstrien/orgs",
"repos_url": "https://api.github.com/users/davanstrien/repos",
"events_url": "https://api.github.com/users/davanstrien/events{/privacy}",
"received_events_url": "https://api.github.com/users/davanstrien/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi! Thanks for the detailed analysis of adding IIIF support. I like the idea of \"using IIIF through datasets scripts\" due to its ease of use. Another approach that I like is yielding image ids and using the `piffle` library (which offers a bit more flexibility) + `map` to download + cache images. We can handle bad URLs in `map` by returning `None`. Plus, we can add a `Dataset Preprocessing` section with the code that explains this approach to the card of such datasets. WDYT?\r\n\r\n> currently, IIIF is mainly used by cultural heritage organizations (museums, archives etc.) The adoption of IIIF in this sector has been growing but it's possible that adoption won't be extended to other industries which may also be a source of image data for training ML models.\r\n\r\nThis is why (currently) adding a new feature type would be overkill, IMO.\r\n"
] | 1,648,480,765,000 | 1,649,182,853,000 | null | CONTRIBUTOR | null | This is a feature request for support for IIIF in `datasets`. Apologies for the long issue. I have also used a different format to the usual feature request since I think that makes more sense but happy to use the standard template if preferred.
## What is [IIIF](https://iiif.io/)?
IIIF (International Image Interoperability Framework)
> is a set of open standards for delivering high-quality, attributed digital objects online at scale. It’s also an international community developing and implementing the IIIF APIs. IIIF is backed by a consortium of leading cultural institutions.
The tl;dr is that IIIF provides various specifications for implementing useful functionality for:
- Institutions to make available images for various use cases
- Users to have a consistent way of interacting/requesting these images
- For developers to have a common standard for developing tools for working with IIIF images that will work across all institutions that implement a particular IIIF standard (for example the image viewer for the BNF can also work for the Library of Congress if they both use IIIF).
Some institutions with various levels of IIIF support include: the British Library, the Internet Archive, the Library of Congress, and Wikidata. There are also many smaller institutions with IIIF support. An incomplete list can be found here: https://iiif.io/guides/finding_resources/
## IIIF APIs
IIIF consists of a number of APIs which could be integrated with `datasets`. I think the most obvious candidate for inclusion would be the [Image API](https://iiif.io/api/image/3.0/).
### IIIF Image API
The Image API https://iiif.io/api/image/3.0/ is likely the most suitable first candidate for integration with datasets. The Image API offers a consistent protocol for requesting images via a URL:
```{scheme}://{server}{/prefix}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}```
A concrete example of this:
```https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/full/0/default.jpg```
As you can see the scheme offers a number of options that can be specified in the URL, for example, size. Using the example URL we return:
![](https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/full/0/default.jpg)
We can change the size to request a 250-by-250 image; this is done by changing the size from `full` to `250,250`, i.e. switching the URL to `https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/250,250/0/default.jpg`
![](https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/250,250/0/default.jpg)
We can also request the image with max width 250 and max height 250 whilst maintaining the aspect ratio, using `!w,h`, i.e. changing the URL to `https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/!250,250/0/default.jpg`
![](https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/!250,250/0/default.jpg)
A full overview of the options for size can be found here: https://iiif.io/api/image/3.0/#42-size
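Because these options occupy fixed positions in the URL path, they are easy to rewrite programmatically. A minimal sketch (the helper name is mine; it assumes a well-formed Image API URL ending in `{region}/{size}/{rotation}/{quality}.{format}`):
```python
def set_iiif_size(url: str, size: str) -> str:
    # The last four path segments are region/size/rotation/quality.format,
    # so splitting from the right isolates the size segment.
    parts = url.rsplit("/", 4)
    parts[2] = size
    return "/".join(parts)

url = "https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/full/0/default.jpg"
print(set_iiif_size(url, "!250,250"))
# -> https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/!250,250/0/default.jpg
```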
## Why would/could this be useful for datasets?
There are a few reasons why support for the IIIF Image API could be useful. Broadly the ability to have more control over how an image is returned from a server is useful for many ML workflows:
- images can be requested at the right size, which prevents having to download/stream large images when the actual desired size is much smaller
- a subset of an image can be selected: it is possible to request a sub-region of an image, which could be useful, for example, when you already have a bounding box for a subset of an image and then want to use that subset for another task. For example, https://github.com/Living-with-machines/nnanno uses IIIF to request the parts of a newspaper image that have been detected as 'photograph', 'illustration' etc. for downstream use.
- options for quality, rotation, and format can all be encoded in the URL request.
These options may become particularly useful when pre-training models on large image datasets, where the cost of downloading images at 1600-pixel width when you actually want 240 has a larger impact.
## What could this look like in datasets?
I think there are various ways in which support for IIIF could potentially be included in `datasets`. These suggestions aren't fully fleshed out, but hopefully give a sense of possible approaches that match existing `datasets` methods.
### Use through datasets scripts
Loading images via URL is already supported. There are a few possible 'extras' that could be included when using IIIF. One option is to leverage the IIIF protocol in datasets scripts, i.e. the dataset script can expose the IIIF options via the dataset script:
```python
ds = load_dataset("iiif_dataset", image_size="250,250", fmt="jpg")
```
This is already possible. The approach to parsing the IIIF URLs would be left to the person creating the dataset script.
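For instance, since `load_dataset` forwards extra keyword arguments to the builder config, a loading script could expose IIIF options through a custom `BuilderConfig` (a sketch; the attribute names are illustrative):
```python
import datasets


class IIIFConfig(datasets.BuilderConfig):
    def __init__(self, image_size="full", fmt="jpg", **kwargs):
        super().__init__(**kwargs)
        # IIIF Image API options used when building the request URLs.
        self.image_size = image_size
        self.fmt = fmt
```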
### Support through dataset scripts (with some datasets support)
This is similar to the above, but `datasets` would offer some way of declaring that a field is an IIIF URL and would then expose the options associated with IIIF images automatically, i.e. if you did something like:
```python
features = {"label": ClassLabel(names=['dog','cat']),
"url": datasets.IIIFURL()}
```
inside your loading script, you would automatically have exposed `size`, `fmt` etc. options when loading the dataset.
### Other possible integrations
Some other possible ways (in pseudocode) that a user could interact with IIIF URLs:
The ability to cast to an `IIIFImage` feature type:
```
ds.cast_column('url', IIIFImage, download=False)
```
The ability to specify some options associated with IIIF URLs:
```
ds = ds.set_iiif_options(column='url', size="250,250")
```
I think all of these would rely on having an `IIIFImage` feature type - this would be a little bit of a Frankenstein between a `string` and `datasets.Image`. I think most of the actual image behaviour would be exactly the same as `datasets.Image`, the difference would be that the underlying URL could be modified in various ways.
## prerequisite requirements
There are a few pre-requisites that I can anticipate. This doesn't cover a full implementation of IIIF support which would have different requirements depending on the approach taken to implementing IIIF. Some of these features would be useful independently of adding IIIF support:
### Support for handling failed images loaded via a URL (or a specific `IIIFImage` feature)
Working with images via web requests will inevitably return the odd failed request. If these images are then requested and don't return, it would be useful to have `None` returned instead of an error. For example, when using `push_to_hub`, `datasets` will try to include the image but currently fails on bad URLs.
```python
from datasets import Dataset
import datasets
urls = ['https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/!250,250/0/default.jpg']*3
urls.append("badurl.com/image.jpg")
data = {"url":urls}
ds = Dataset.from_dict(data)
ds = ds.cast_column('url', datasets.Image())
ds[3]['url']
```
returns a `FileNotFoundError`. For streaming large datasets of images via their URLs, it could be useful to have `None` returned instead. This has implications for the actual training loop, i.e. you now need to somehow skip those examples, so it might not be desirable to support this.
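A possible user-side sketch in the meantime: download manually inside `map` instead of relying on the `Image` feature, so failures become `None` values (stored here as raw bytes; decode with PIL downstream):
```python
import requests


def fetch_image_bytes(example):
    # Return None instead of raising when the URL is bad or the request fails.
    try:
        response = requests.get(example["url"], timeout=10)
        response.raise_for_status()
        example["image_bytes"] = response.content
    except Exception:
        example["image_bytes"] = None
    return example


ds = ds.map(fetch_image_bytes)
```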
### Caching support
Since IIIF requests images via a URL, it would be great to have a way of not requesting the same images multiple times. This is tracked in https://github.com/huggingface/datasets/issues/3142, and I think this would also be very desirable to have here, particularly as one of the primary use cases of IIIF may be unsupervised pre-training on large datasets of IIIF URLs.
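Until native caching exists, a crude URL-level cache is easy to sketch on the user side (the cache directory and hashing scheme here are my own choices, not a `datasets` API):
```python
import hashlib
import os

import requests

CACHE_DIR = "./iiif_cache"


def cached_get(url: str) -> bytes:
    # Store each response body under a hash of its URL so repeated requests
    # for the same image hit the local disk instead of the IIIF server.
    os.makedirs(CACHE_DIR, exist_ok=True)
    path = os.path.join(CACHE_DIR, hashlib.sha256(url.encode()).hexdigest())
    if os.path.exists(path):
        with open(path, "rb") as f:
            return f.read()
    data = requests.get(url, timeout=10).content
    with open(path, "wb") as f:
        f.write(data)
    return data
```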
### Support for Parsing IIIF URLs
This gets closer to the actual implementation. Here the requirement would be some way for `datasets` to parse a URL that the users specify is an IIIF URL. An example of a Python library that does this: https://github.com/Princeton-CDH/piffle. I also have a rough version that uses `dataclasses` which I can share.
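That rough `dataclasses` version could look something like the following sketch (field names are mine, mirroring the Image API path layout):
```python
from dataclasses import dataclass


@dataclass
class IIIFImageURL:
    base: str            # {scheme}://{server}{/prefix}/{identifier}
    region: str
    size: str
    rotation: str
    quality_format: str  # {quality}.{format}

    @classmethod
    def parse(cls, url: str) -> "IIIFImageURL":
        return cls(*url.rsplit("/", 4))

    def __str__(self) -> str:
        return "/".join([self.base, self.region, self.size, self.rotation, self.quality_format])
```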
## Why it might not be worthwhile/suitable for datasets
There are some reasons that this might not be worth implementing:
- currently, IIIF is mainly used by cultural heritage organizations (museums, archives, etc.). The adoption of IIIF in this sector has been growing, but it's possible that adoption won't extend to other industries, which may also be a source of image data for training ML models.
- It may end up being better to leave this to the user. It would, for example, be possible for someone to write `map` functions to change an IIIF URL to the correct size, etc. Adding direct support for IIIF in `datasets` may potentially not be worth the trouble.
- The impact of different approaches to image scaling can affect the downstream model's performance, see: https://twitter.com/wightmanr/status/1479528581466243073?s=20. Since different IIIF image servers may implement different approaches to resizing images, this could have a downstream impact on model performance. I think this is something that could be flagged to the end-user in the documentation. This probably also falls into the general "gotchas" that arguably aren't the `datasets` library's role to protect users from.
Some of the requirements outlined above would be useful for images anyway. These could be implemented prior to a final decision about whether IIIF support could/should be added to datasets.
## Suggested next steps:
I realise this is a long and slightly open-ended issue. I am happy to clarify/answer questions on IIIF and possible integrations. If the prerequisite requirements seem worth exploring/are better explored in their own issues let me know and I can open new issues for those.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4041/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4041/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4039 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4039/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4039/comments | https://api.github.com/repos/huggingface/datasets/issues/4039/events | https://github.com/huggingface/datasets/pull/4039 | 1,183,468,927 | PR_kwDODunzps41KFIf | 4,039 | Support streaming xcopa dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,475,155,000 | 1,648,484,808,000 | 1,648,484,506,000 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4039/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4039/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4039",
"html_url": "https://github.com/huggingface/datasets/pull/4039",
"diff_url": "https://github.com/huggingface/datasets/pull/4039.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4039.patch",
"merged_at": 1648484506000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4038 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4038/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4038/comments | https://api.github.com/repos/huggingface/datasets/issues/4038/events | https://github.com/huggingface/datasets/pull/4038 | 1,183,189,827 | PR_kwDODunzps41JKUG | 4,038 | [DO NOT MERGE] Test doc-builder with skipped installation feature | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Fix in https://github.com/huggingface/doc-builder/pull/162 works as expected (docs build), closing this"
] | 1,648,461,511,000 | 1,648,470,845,000 | 1,648,470,549,000 | MEMBER | null | This PR is just for testing that we can build PR docs with changes made on the [`skip-install-for-real`](https://github.com/huggingface/doc-builder/tree/skip-install-for-real) branch of `doc-builder`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4038/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4038/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4038",
"html_url": "https://github.com/huggingface/datasets/pull/4038",
"diff_url": "https://github.com/huggingface/datasets/pull/4038.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4038.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4037 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4037/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4037/comments | https://api.github.com/repos/huggingface/datasets/issues/4037/events | https://github.com/huggingface/datasets/issues/4037 | 1,183,144,486 | I_kwDODunzps5GhVom | 4,037 | Error while building documentation | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"After some investigation, maybe the bug is in `doc-builder`.\r\n\r\nI've opened an issue there:\r\n- huggingface/doc-builder#160",
"Fixed by @lewtun (thank you):\r\n- huggingface/doc-builder@31fe6c8bc7225810e281c2f6c6cd32f38828c504"
] | 1,648,459,364,000 | 1,648,461,712,000 | 1,648,461,648,000 | MEMBER | null | ## Describe the bug
Documentation building is failing:
- https://github.com/huggingface/datasets/runs/5716300989?check_suite_focus=true
```
ValueError: There was an error when converting ../datasets/docs/source/package_reference/main_classes.mdx to the MDX format.
Unable to find datasets.filesystems.S3FileSystem in datasets. Make sure the path to that object is correct.
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4037/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4037/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4036 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4036/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4036/comments | https://api.github.com/repos/huggingface/datasets/issues/4036/events | https://github.com/huggingface/datasets/pull/4036 | 1,183,126,893 | PR_kwDODunzps41I854 | 4,036 | Fix building of documentation | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Superseded by huggingface/doc-builder@31fe6c8bc7225810e281c2f6c6cd32f38828c504"
] | 1,648,458,552,000 | 1,648,466,311,000 | 1,648,466,002,000 | MEMBER | null | Documentation building is failing:
- https://github.com/huggingface/datasets/runs/5716300989?check_suite_focus=true
```
ValueError: There was an error when converting ../datasets/docs/source/package_reference/main_classes.mdx to the MDX format.
Unable to find datasets.filesystems.S3FileSystem in datasets. Make sure the path to that object is correct.
```
Fix #4037. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4036/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4036/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4036",
"html_url": "https://github.com/huggingface/datasets/pull/4036",
"diff_url": "https://github.com/huggingface/datasets/pull/4036.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4036.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4035 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4035/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4035/comments | https://api.github.com/repos/huggingface/datasets/issues/4035/events | https://github.com/huggingface/datasets/pull/4035 | 1,183,067,456 | PR_kwDODunzps41Iwb2 | 4,035 | Add zero_division argument to precision and recall metrics | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,648,455,554,000 | 1,648,461,187,000 | 1,648,461,186,000 | MEMBER | null | Fix #4025. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4035/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4035/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4035",
"html_url": "https://github.com/huggingface/datasets/pull/4035",
"diff_url": "https://github.com/huggingface/datasets/pull/4035.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4035.patch",
"merged_at": 1648461186000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4034 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4034/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4034/comments | https://api.github.com/repos/huggingface/datasets/issues/4034/events | https://github.com/huggingface/datasets/pull/4034 | 1,183,033,285 | PR_kwDODunzps41IpN1 | 4,034 | Fix null checksum in xcopa dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,648,453,694,000 | 1,648,454,774,000 | 1,648,454,774,000 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4034/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4034/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4034",
"html_url": "https://github.com/huggingface/datasets/pull/4034",
"diff_url": "https://github.com/huggingface/datasets/pull/4034.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4034.patch",
"merged_at": 1648454774000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4033 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4033/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4033/comments | https://api.github.com/repos/huggingface/datasets/issues/4033/events | https://github.com/huggingface/datasets/pull/4033 | 1,182,984,445 | PR_kwDODunzps41Ie6w | 4,033 | Fix checksum error in cats_vs_dogs dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,450,885,000 | 1,648,453,779,000 | 1,648,453,464,000 | MEMBER | null | Recent PR updated the metadata JSON file of cats_vs_dogs dataset:
- #3878
However, that new JSON file contains a None checksum.
This PR fixes it.
Fix #4032. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4033/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4033/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4033",
"html_url": "https://github.com/huggingface/datasets/pull/4033",
"diff_url": "https://github.com/huggingface/datasets/pull/4033.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4033.patch",
"merged_at": 1648453464000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4032 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4032/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4032/comments | https://api.github.com/repos/huggingface/datasets/issues/4032/events | https://github.com/huggingface/datasets/issues/4032 | 1,182,595,697 | I_kwDODunzps5GfPpx | 4,032 | can't download cats_vs_dogs dataset | {
"login": "RRaphaell",
"id": 74569835,
"node_id": "MDQ6VXNlcjc0NTY5ODM1",
"avatar_url": "https://avatars.githubusercontent.com/u/74569835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RRaphaell",
"html_url": "https://github.com/RRaphaell",
"followers_url": "https://api.github.com/users/RRaphaell/followers",
"following_url": "https://api.github.com/users/RRaphaell/following{/other_user}",
"gists_url": "https://api.github.com/users/RRaphaell/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RRaphaell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RRaphaell/subscriptions",
"organizations_url": "https://api.github.com/users/RRaphaell/orgs",
"repos_url": "https://api.github.com/users/RRaphaell/repos",
"events_url": "https://api.github.com/users/RRaphaell/events{/privacy}",
"received_events_url": "https://api.github.com/users/RRaphaell/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting @RRaphaell.\r\n\r\nWe are fixing it."
] | 1,648,400,739,000 | 1,648,453,464,000 | 1,648,453,464,000 | NONE | null | ## Describe the bug
Can't download the cats_vs_dogs dataset; error: Checksums didn't match for dataset source files.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("cats_vs_dogs")
```
## Expected results
loaded successfully.
## Actual results
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip']
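Once the metadata fix is merged, a sketch for picking it up before the next release (mirroring the approach suggested for similar checksum issues) is to install `datasets` from source and force a fresh download:
```python
# First: pip install git+https://github.com/huggingface/datasets
from datasets import load_dataset

# Force a re-download so verification runs against the repaired metadata.
dataset = load_dataset("cats_vs_dogs", download_mode="force_redownload")
```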
## Environment info
fresh google colab notebook
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4032/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4032/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4031 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4031/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4031/comments | https://api.github.com/repos/huggingface/datasets/issues/4031/events | https://github.com/huggingface/datasets/issues/4031 | 1,182,415,124 | I_kwDODunzps5GejkU | 4,031 | Cannot load the dataset conll2012_ontonotesv5 | {
"login": "cathyxl",
"id": 8326473,
"node_id": "MDQ6VXNlcjgzMjY0NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8326473?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cathyxl",
"html_url": "https://github.com/cathyxl",
"followers_url": "https://api.github.com/users/cathyxl/followers",
"following_url": "https://api.github.com/users/cathyxl/following{/other_user}",
"gists_url": "https://api.github.com/users/cathyxl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cathyxl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cathyxl/subscriptions",
"organizations_url": "https://api.github.com/users/cathyxl/orgs",
"repos_url": "https://api.github.com/users/cathyxl/repos",
"events_url": "https://api.github.com/users/cathyxl/events{/privacy}",
"received_events_url": "https://api.github.com/users/cathyxl/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @cathyxl, thanks for reporting.\r\n\r\nIndeed, we have recently updated the loading script of that dataset (and fixed that bug as well):\r\n- #4002\r\n\r\nThat fix will be available in our next `datasets` library release. In the meantime, you can incorporate that fix by:\r\n- installing `datasets` from our GitHub repo:\r\n```bash\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\n- forcing the data files to be redownloaded\r\n```python\r\nds = load_dataset('conll2012_ontonotesv5', 'english_v4', split=\"test\", download_mode=\"force_redownload\")\r\n```\r\n\r\nFeel free to re-open this issue if the problem persists."
] | 1,648,366,703,000 | 1,648,450,711,000 | 1,648,449,078,000 | NONE | null | ## Describe the bug
Cannot load the dataset conll2012_ontonotesv5
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset
dataset = load_dataset('conll2012_ontonotesv5', 'english_v4', split="test")
print(dataset)
```
## Expected results
The datasets should be downloaded successfully
## Actual results
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://md-datasets-cache-zipfiles-prod.s3.eu-west-1.amazonaws.com/zmycy7t9h9-1.zip']
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: Linux-5.4.0-88-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 7.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4031/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4031/timeline | null | completed | null | null | false |
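For reference, the two-step workaround quoted in the comment above, combined into one snippet (it assumes `datasets` has already been reinstalled from the GitHub repo as described):

```python
# Workaround from the comment above: after reinstalling datasets from GitHub,
# force the cached, mismatching source files to be redownloaded.
from datasets import load_dataset

ds = load_dataset(
    "conll2012_ontonotesv5",
    "english_v4",
    split="test",
    download_mode="force_redownload",
)
```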
https://api.github.com/repos/huggingface/datasets/issues/4030 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4030/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4030/comments | https://api.github.com/repos/huggingface/datasets/issues/4030/events | https://github.com/huggingface/datasets/pull/4030 | 1,182,157,056 | PR_kwDODunzps41FxjE | 4,030 | Use a constant for the articles regex in SQuAD v2 | {
"login": "bryant1410",
"id": 3905501,
"node_id": "MDQ6VXNlcjM5MDU1MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bryant1410",
"html_url": "https://github.com/bryant1410",
"followers_url": "https://api.github.com/users/bryant1410/followers",
"following_url": "https://api.github.com/users/bryant1410/following{/other_user}",
"gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions",
"organizations_url": "https://api.github.com/users/bryant1410/orgs",
"repos_url": "https://api.github.com/users/bryant1410/repos",
"events_url": "https://api.github.com/users/bryant1410/events{/privacy}",
"received_events_url": "https://api.github.com/users/bryant1410/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,335,990,000 | 1,649,781,045,000 | 1,649,761,224,000 | CONTRIBUTOR | null | The main reason for doing this is to be able to change the articles list if using another language, for example. It's not the most elegant solution but at least it makes the metric more extensible with no drawbacks.
BTW, what could be the best way to make this more generic (i.e., SQuAD in other languages)? Maybe receive a regex as an optional param, with the current value as the default? Similarly for SQuAD v1 (can't they re-use code?). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4030/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4030/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4030",
"html_url": "https://github.com/huggingface/datasets/pull/4030",
"diff_url": "https://github.com/huggingface/datasets/pull/4030.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4030.patch",
"merged_at": 1649761224000
} | true |
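To illustrate the idea discussed in this PR, a hedged sketch of keeping the article list in one swappable constant; the names below are illustrative, not the actual metric code:

```python
# Illustrative sketch only; the real code lives in the squad_v2 metric script.
import re

ARTICLES_REGEX = re.compile(r"\b(a|an|the)\b", re.UNICODE)  # replace for other languages

def remove_articles(text: str, articles_regex: re.Pattern = ARTICLES_REGEX) -> str:
    return re.sub(articles_regex, " ", text)

print(remove_articles("the cat sat on a mat"))  # articles replaced by spaces
```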
https://api.github.com/repos/huggingface/datasets/issues/4029 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4029/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4029/comments | https://api.github.com/repos/huggingface/datasets/issues/4029/events | https://github.com/huggingface/datasets/issues/4029 | 1,181,057,011 | I_kwDODunzps5GZX_z | 4,029 | Add FAISS .range_search() method for retrieving all texts from dataset above similarity threshold | {
"login": "MoritzLaurer",
"id": 41862082,
"node_id": "MDQ6VXNlcjQxODYyMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/41862082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MoritzLaurer",
"html_url": "https://github.com/MoritzLaurer",
"followers_url": "https://api.github.com/users/MoritzLaurer/followers",
"following_url": "https://api.github.com/users/MoritzLaurer/following{/other_user}",
"gists_url": "https://api.github.com/users/MoritzLaurer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MoritzLaurer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MoritzLaurer/subscriptions",
"organizations_url": "https://api.github.com/users/MoritzLaurer/orgs",
"repos_url": "https://api.github.com/users/MoritzLaurer/repos",
"events_url": "https://api.github.com/users/MoritzLaurer/events{/privacy}",
"received_events_url": "https://api.github.com/users/MoritzLaurer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi ! You can access the faiss index with\r\n```python\r\nfaiss_index = my_dataset.get_index(\"my_index_name\").faiss_index\r\n```\r\nand then do whatever you want with it, e.g. query it using range_search:\r\n```python\r\nthreshold = 0.95\r\nlimits, distances, indices = faiss_index.range_search(x=xq, thresh=threshold)\r\n\r\ntexts = dataset[indices]\r\n```",
"wow, that's great, thank you for the explanation. (if that's not already in the documentation, could be worth adding it)\r\n\r\nwhich type of faiss index is Datasets using? I looked into faiss recently and I understand that there are several different types of indexes and the choice is important, e.g. regarding which distance metric you use (euclidian vs. cosine/dot product), the size of my dataset etc. can I chose the type of index somehow as well?",
"`Dataset.add_faiss_index` has a `string_factory` parameter, used to set the type of index (see the faiss documentation about [index factory](https://github.com/facebookresearch/faiss/wiki/The-index-factory)). Alternatively, you can pass an index you've defined yourself using faiss with the `custom_index` parameter of `Dataset.add_faiss_index` \r\n\r\nHere is the full documentation of `Dataset.add_faiss_index`: https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.Dataset.add_faiss_index",
"great thanks, I will try it out"
] | 1,648,229,493,000 | 1,651,826,152,000 | 1,651,826,152,000 | NONE | null | **Is your feature request related to a problem? Please describe.**
I would like to retrieve all texts from a dataset, which are semantically similar to a specific input text (query), above a certain (cosine) similarity threshold. My dataset is very large (Wikipedia), so I need to use Datasets and FAISS for this. I would like to be able to repeat many different queries on the dataset quickly.
**Describe the solution you'd like**
Dataset objects currently have the `.get_nearest_examples()` method for text retrieval via FAISS, but this only allows retrieving a fixed number K of texts instead of everything above a specified similarity threshold.
It would be great if HF Datasets would also support the FAISS method .range_search() for retrieving texts above a certain similarity threshold.
see details here: https://github.com/facebookresearch/faiss/issues/1273
**Describe alternatives you've considered**
I've considered using native FAISS, but doing this via HF Datasets would be better. My assumption is that Dataset features like dataset streaming make it easier to work with large datasets.
**Additional context**
The concrete use-case is: I have a large dataset (wikipedia) and I would like to retrieve all paragraphs which are similar to a query. I will use sentence-transformers for encoding the texts.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4029/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4029/timeline | null | completed | null | null | false |
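Putting the answers above together, a minimal end-to-end sketch; the encoder choice, column name, and threshold are assumptions for illustration, not part of the original thread:

```python
# Sketch combining the comments above; model and threshold are assumptions.
import faiss
from datasets import load_dataset
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # hypothetical encoder

ds = load_dataset("crime_and_punish", split="train[:1000]")
ds = ds.map(lambda x: {"embeddings": model.encode(x["line"], normalize_embeddings=True)})

# Normalized vectors + inner product == cosine similarity
ds.add_faiss_index(column="embeddings", string_factory="Flat",
                   metric_type=faiss.METRIC_INNER_PRODUCT)

query = model.encode(["a passage about guilt"], normalize_embeddings=True)
faiss_index = ds.get_index("embeddings").faiss_index
limits, distances, indices = faiss_index.range_search(x=query, thresh=0.95)
texts = ds[[int(i) for i in indices]]  # every row with cosine similarity > 0.95
```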
https://api.github.com/repos/huggingface/datasets/issues/4028 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4028/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4028/comments | https://api.github.com/repos/huggingface/datasets/issues/4028/events | https://github.com/huggingface/datasets/pull/4028 | 1,181,022,675 | PR_kwDODunzps41B429 | 4,028 | Fix docs on audio feature installation | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,227,311,000 | 1,648,743,647,000 | 1,648,743,320,000 | MEMBER | null | This PR:
- Removes the explicit installation of `librosa` (this is installed with `pip install datasets[audio]`)
- Adds the warning for Linux users to install manually the non-Python package `libsndfile`
- Explains that the installation of `torchaudio` is only necessary to support loading audio datasets containing MP3 audio files
Related to #4000. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4028/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4028/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4028",
"html_url": "https://github.com/huggingface/datasets/pull/4028",
"diff_url": "https://github.com/huggingface/datasets/pull/4028.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4028.patch",
"merged_at": 1648743320000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4027 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4027/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4027/comments | https://api.github.com/repos/huggingface/datasets/issues/4027/events | https://github.com/huggingface/datasets/issues/4027 | 1,180,991,344 | I_kwDODunzps5GZH9w | 4,027 | ElasticSearch Indexing example: TypeError: __init__() missing 1 required positional argument: 'scheme' | {
"login": "MoritzLaurer",
"id": 41862082,
"node_id": "MDQ6VXNlcjQxODYyMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/41862082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MoritzLaurer",
"html_url": "https://github.com/MoritzLaurer",
"followers_url": "https://api.github.com/users/MoritzLaurer/followers",
"following_url": "https://api.github.com/users/MoritzLaurer/following{/other_user}",
"gists_url": "https://api.github.com/users/MoritzLaurer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MoritzLaurer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MoritzLaurer/subscriptions",
"organizations_url": "https://api.github.com/users/MoritzLaurer/orgs",
"repos_url": "https://api.github.com/users/MoritzLaurer/repos",
"events_url": "https://api.github.com/users/MoritzLaurer/events{/privacy}",
"received_events_url": "https://api.github.com/users/MoritzLaurer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi, @MoritzLaurer, thanks for reporting.\r\n\r\nNormally this is due to a mismatch between the versions of your Elasticsearch client and server:\r\n- your ES client is passing only keyword arguments to your ES server\r\n- whereas your ES server expects a positional argument called 'scheme'\r\n\r\nIn order to fix this, you should align the major versions of both Elasticsearch client and server.\r\n\r\nYou can have more info:\r\n- on this other issue page: https://github.com/huggingface/datasets/issues/3956#issuecomment-1072115173\r\n- Elasticsearch client docs: https://www.elastic.co/guide/en/elasticsearch/client/python-api/current/overview.html#_compatibility\r\n\r\nFeel free to re-open this issue if the problem persists.\r\n\r\nDuplicate of:\r\n- #3956",
"1. Check elasticsearch version\r\n```\r\nimport elasticsearch\r\nprint(elasticsearch.__version__)\r\n```\r\nEx: 7.9.1\r\n2. Uninstall current elasticsearch package\r\n`pip uninstall elasticsearch`\r\n3. Install elasticsearch 7.9.1 package\r\n`pip install elasticsearch==7.9.1`"
] | 1,648,225,348,000 | 1,649,327,392,000 | 1,648,454,336,000 | NONE | null | ## Describe the bug
I am following the example in the documentation for Elasticsearch step by step (on Google Colab): https://huggingface.co/docs/datasets/faiss_es#elasticsearch
```
from datasets import load_dataset
squad = load_dataset('crime_and_punish', split='train[:1000]')
```
When I run the line:
`squad.add_elasticsearch_index("context", host="localhost", port="9200")`
I get the error:
`TypeError: __init__() missing 1 required positional argument: 'scheme'`
## Expected results
No error message
## Actual results
```
TypeError Traceback (most recent call last)
[<ipython-input-23-9205593edef3>](https://localhost:8080/#) in <module>()
1 import elasticsearch
----> 2 squad.add_elasticsearch_index("text", host="localhost", port="9200")
6 frames
[/usr/local/lib/python3.7/dist-packages/elasticsearch/_sync/client/utils.py](https://localhost:8080/#) in host_mapping_to_node_config(host)
209 options["path_prefix"] = options.pop("url_prefix")
210
--> 211 return NodeConfig(**options) # type: ignore
212
213
TypeError: __init__() missing 1 required positional argument: 'scheme'
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.2.0
- Platform: Linux, Google Colab
- Python version: Google Colab (probably 3.7)
- PyArrow version: ?
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4027/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4027/timeline | null | completed | null | null | false |
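The version check suggested in the last comment, for reference; the client's major version must match the server's:

```python
# From the comment above: check the client version, then pin it to the
# server's major version (e.g. `pip install elasticsearch==7.9.1`).
import elasticsearch
print(elasticsearch.__version__)
```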
https://api.github.com/repos/huggingface/datasets/issues/4026 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4026/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4026/comments | https://api.github.com/repos/huggingface/datasets/issues/4026/events | https://github.com/huggingface/datasets/pull/4026 | 1,180,968,774 | PR_kwDODunzps41Btcm | 4,026 | Support streaming xtreme dataset for bucc18 config | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,224,040,000 | 1,648,225,610,000 | 1,648,225,312,000 | MEMBER | null | Support streaming xtreme dataset for bucc18 config. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4026/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4026/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4026",
"html_url": "https://github.com/huggingface/datasets/pull/4026",
"diff_url": "https://github.com/huggingface/datasets/pull/4026.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4026.patch",
"merged_at": 1648225312000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4025 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4025/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4025/comments | https://api.github.com/repos/huggingface/datasets/issues/4025/events | https://github.com/huggingface/datasets/issues/4025 | 1,180,963,105 | I_kwDODunzps5GZBEh | 4,025 | Missing argument in precision/recall | {
"login": "Dref360",
"id": 8976546,
"node_id": "MDQ6VXNlcjg5NzY1NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dref360",
"html_url": "https://github.com/Dref360",
"followers_url": "https://api.github.com/users/Dref360/followers",
"following_url": "https://api.github.com/users/Dref360/following{/other_user}",
"gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dref360/subscriptions",
"organizations_url": "https://api.github.com/users/Dref360/orgs",
"repos_url": "https://api.github.com/users/Dref360/repos",
"events_url": "https://api.github.com/users/Dref360/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dref360/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for the suggestion, @Dref360.\r\n\r\nWe are adding that argument. "
] | 1,648,223,752,000 | 1,648,461,186,000 | 1,648,461,186,000 | CONTRIBUTOR | null | **Is your feature request related to a problem? Please describe.**
[`sklearn.metrics.precision_score`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_score.html) accepts an argument `zero_division`, but it is not available in [precision Metric](https://github.com/huggingface/datasets/blob/master/metrics/precision/precision.py#L117)
Same issue is present for Recall.
**Describe the solution you'd like**
Support for `**kwargs` or adding a new field for `zero_division`.
**Describe alternatives you've considered**
I could filter the warnings myself, but that is not ideal.
**Additional context**
I can make the requested changes if this is approved. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4025/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4025/timeline | null | completed | null | null | false |
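For context, the underlying scikit-learn behavior the issue asks `datasets` to expose; whether the metric forwards `zero_division` depends on the installed version:

```python
# scikit-learn already supports this; the request is to forward the argument.
from sklearn.metrics import precision_score, recall_score

refs = [0, 0, 1, 1]
preds = [0, 0, 0, 0]  # no positive predictions -> precision is 0/0

print(precision_score(refs, preds, zero_division=0))  # 0.0, no UndefinedMetricWarning
print(recall_score(refs, preds, zero_division=0))     # 0.0
```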
https://api.github.com/repos/huggingface/datasets/issues/4024 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4024/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4024/comments | https://api.github.com/repos/huggingface/datasets/issues/4024/events | https://github.com/huggingface/datasets/pull/4024 | 1,180,951,817 | PR_kwDODunzps41Bp3V | 4,024 | Doc: image_process small tip | {
"login": "FrancescoSaverioZuppichini",
"id": 15908060,
"node_id": "MDQ6VXNlcjE1OTA4MDYw",
"avatar_url": "https://avatars.githubusercontent.com/u/15908060?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FrancescoSaverioZuppichini",
"html_url": "https://github.com/FrancescoSaverioZuppichini",
"followers_url": "https://api.github.com/users/FrancescoSaverioZuppichini/followers",
"following_url": "https://api.github.com/users/FrancescoSaverioZuppichini/following{/other_user}",
"gists_url": "https://api.github.com/users/FrancescoSaverioZuppichini/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FrancescoSaverioZuppichini/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FrancescoSaverioZuppichini/subscriptions",
"organizations_url": "https://api.github.com/users/FrancescoSaverioZuppichini/orgs",
"repos_url": "https://api.github.com/users/FrancescoSaverioZuppichini/repos",
"events_url": "https://api.github.com/users/FrancescoSaverioZuppichini/events{/privacy}",
"received_events_url": "https://api.github.com/users/FrancescoSaverioZuppichini/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"This tip is unnecessary, i.e., Pillow will already be installed since the `Image` feature requires it for encoding and decoding. Thanks anyway.\r\n\r\ncc @stevhliu I've noticed we are missing the installation section in the doc (`pip install datasets[vision]`). I can add it myself."
] | 1,648,223,072,000 | 1,648,740,935,000 | 1,648,740,620,000 | NONE | null | I've added a small tip in the `image_process` doc | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4024/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4024/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4024",
"html_url": "https://github.com/huggingface/datasets/pull/4024",
"diff_url": "https://github.com/huggingface/datasets/pull/4024.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4024.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4023 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4023/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4023/comments | https://api.github.com/repos/huggingface/datasets/issues/4023/events | https://github.com/huggingface/datasets/pull/4023 | 1,180,840,399 | PR_kwDODunzps41BSZT | 4,023 | Replace yahoo_answers_topics data url | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The CI is failing because of issues in the dataset cards that are unrelated to this PR - merging"
] | 1,648,217,337,000 | 1,648,462,376,000 | 1,648,462,072,000 | MEMBER | null | I replaced the Google Drive URL of the dataset by the FastAI one, since we've had some issues with Google Drive. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4023/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4023/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4023",
"html_url": "https://github.com/huggingface/datasets/pull/4023",
"diff_url": "https://github.com/huggingface/datasets/pull/4023.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4023.patch",
"merged_at": 1648462072000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4022 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4022/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4022/comments | https://api.github.com/repos/huggingface/datasets/issues/4022/events | https://github.com/huggingface/datasets/pull/4022 | 1,180,816,682 | PR_kwDODunzps41BNeA | 4,022 | Replace dbpedia_14 data url | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,216,041,000 | 1,648,220,617,000 | 1,648,220,329,000 | MEMBER | null | I replaced the Google Drive URL of the dataset by the FastAI one, since we've had some issues with Google Drive. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4022/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4022/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4022",
"html_url": "https://github.com/huggingface/datasets/pull/4022",
"diff_url": "https://github.com/huggingface/datasets/pull/4022.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4022.patch",
"merged_at": 1648220329000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4021 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4021/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4021/comments | https://api.github.com/repos/huggingface/datasets/issues/4021/events | https://github.com/huggingface/datasets/pull/4021 | 1,180,805,092 | PR_kwDODunzps41BLAf | 4,021 | Fix `map` remove_columns on empty dataset | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,215,389,000 | 1,648,561,291,000 | 1,648,560,944,000 | MEMBER | null | On an empty dataset, the `remove_columns` parameter of `map` currently doesn't actually remove the columns:
```python
>>> import datasets
>>> ds = datasets.load_dataset("glue", "rte")
>>> ds_filtered = ds.filter(lambda x: x["label"] != -1)
>>> ds_mapped = ds_filtered.map(lambda x: x, remove_columns=["label"])
>>> print(repr(ds_mapped.column_names))
{
'train': ['sentence1', 'sentence2', 'idx'],
'validation': ['sentence1', 'sentence2', 'idx'],
'test': ['sentence1', 'sentence2', 'label', 'idx']
}
```
I fixed this error and updated the tests | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4021/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4021/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4021",
"html_url": "https://github.com/huggingface/datasets/pull/4021",
"diff_url": "https://github.com/huggingface/datasets/pull/4021.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4021.patch",
"merged_at": 1648560944000
} | true |
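A toy check of the behavior this PR fixes (a sketch with made-up data; the GLUE example above is the original report):

```python
# Minimal reproduction of the bug fixed here: remove_columns on a 0-row dataset.
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"], "label": [0, 1]})
empty = ds.filter(lambda x: False)  # keeps 0 rows
mapped = empty.map(lambda x: x, remove_columns=["label"])
assert "label" not in mapped.column_names  # holds once this fix is applied
```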
https://api.github.com/repos/huggingface/datasets/issues/4020 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4020/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4020/comments | https://api.github.com/repos/huggingface/datasets/issues/4020/events | https://github.com/huggingface/datasets/pull/4020 | 1,180,636,754 | PR_kwDODunzps41Am4R | 4,020 | Replace amazon_polarity data URL | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,205,457,000 | 1,648,220,556,000 | 1,648,220,261,000 | MEMBER | null | I replaced the Google Drive URL of the dataset by the FastAI one, since we've had some issues with Google Drive. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4020/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4020/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4020",
"html_url": "https://github.com/huggingface/datasets/pull/4020",
"diff_url": "https://github.com/huggingface/datasets/pull/4020.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4020.patch",
"merged_at": 1648220261000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4019 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4019/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4019/comments | https://api.github.com/repos/huggingface/datasets/issues/4019/events | https://github.com/huggingface/datasets/pull/4019 | 1,180,628,293 | PR_kwDODunzps41AlFk | 4,019 | Make yelp_polarity streamable | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The CI is failing because of the incomplete dataset card - this is unrelated to the goal of this PR so we can ignore it"
] | 1,648,204,971,000 | 1,648,220,539,000 | 1,648,220,236,000 | MEMBER | null | It was using `dl_manager.download_and_extract` on a TAR archive, which is not supported in streaming mode. I replaced this by `dl_manager.iter_archive` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4019/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4019/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4019",
"html_url": "https://github.com/huggingface/datasets/pull/4019",
"diff_url": "https://github.com/huggingface/datasets/pull/4019.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4019.patch",
"merged_at": 1648220235000
} | true |
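The `iter_archive` pattern this PR switches to, sketched below; this is a simplified illustration, not the actual `yelp_polarity` script, and `_DOWNLOAD_URL` is a placeholder:

```python
# Simplified illustration of streaming a TAR archive in a loading script.
import datasets

_DOWNLOAD_URL = "https://example.com/archive.tar.gz"  # placeholder URL

class YelpLikeDataset(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        archive = dl_manager.download(_DOWNLOAD_URL)  # download only, no extraction
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"files": dl_manager.iter_archive(archive),
                            "filename": "train.csv"},
            ),
        ]

    def _generate_examples(self, files, filename):
        for path, f in files:  # iter_archive yields (name, file object) pairs
            if path.endswith(filename):
                for idx, line in enumerate(f):
                    yield idx, {"text": line.decode("utf-8")}
```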
https://api.github.com/repos/huggingface/datasets/issues/4018 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4018/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4018/comments | https://api.github.com/repos/huggingface/datasets/issues/4018/events | https://github.com/huggingface/datasets/pull/4018 | 1,180,622,816 | PR_kwDODunzps41Aj7g | 4,018 | Replace yelp_review_full data url | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,204,638,000 | 1,648,220,462,000 | 1,648,220,170,000 | MEMBER | null | I replaced the Google Drive URL of the Yelp review dataset by the FastAI one, since we've had some issues with Google Drive.
Close https://github.com/huggingface/datasets/issues/4005 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4018/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4018/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4018",
"html_url": "https://github.com/huggingface/datasets/pull/4018",
"diff_url": "https://github.com/huggingface/datasets/pull/4018.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4018.patch",
"merged_at": 1648220170000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4017 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4017/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4017/comments | https://api.github.com/repos/huggingface/datasets/issues/4017/events | https://github.com/huggingface/datasets/pull/4017 | 1,180,595,160 | PR_kwDODunzps41Ad_L | 4,017 | Support streaming scan dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,203,088,000 | 1,648,210,135,000 | 1,648,209,832,000 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4017/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4017/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4017",
"html_url": "https://github.com/huggingface/datasets/pull/4017",
"diff_url": "https://github.com/huggingface/datasets/pull/4017.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4017.patch",
"merged_at": 1648209832000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4016 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4016/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4016/comments | https://api.github.com/repos/huggingface/datasets/issues/4016/events | https://github.com/huggingface/datasets/pull/4016 | 1,180,557,828 | PR_kwDODunzps41AWBk | 4,016 | Support streaming blimp dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,201,150,000 | 1,648,207,158,000 | 1,648,206,853,000 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4016/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4016/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4016",
"html_url": "https://github.com/huggingface/datasets/pull/4016",
"diff_url": "https://github.com/huggingface/datasets/pull/4016.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4016.patch",
"merged_at": 1648206853000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4015 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4015/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4015/comments | https://api.github.com/repos/huggingface/datasets/issues/4015/events | https://github.com/huggingface/datasets/issues/4015 | 1,180,510,856 | I_kwDODunzps5GXSqI | 4,015 | Can not correctly parse the classes with imagefolder | {
"login": "YiSyuanChen",
"id": 21264909,
"node_id": "MDQ6VXNlcjIxMjY0OTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/21264909?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YiSyuanChen",
"html_url": "https://github.com/YiSyuanChen",
"followers_url": "https://api.github.com/users/YiSyuanChen/followers",
"following_url": "https://api.github.com/users/YiSyuanChen/following{/other_user}",
"gists_url": "https://api.github.com/users/YiSyuanChen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YiSyuanChen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YiSyuanChen/subscriptions",
"organizations_url": "https://api.github.com/users/YiSyuanChen/orgs",
"repos_url": "https://api.github.com/users/YiSyuanChen/repos",
"events_url": "https://api.github.com/users/YiSyuanChen/events{/privacy}",
"received_events_url": "https://api.github.com/users/YiSyuanChen/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"I found that the problem arises because the image files in my folder are actually symbolic links (for my own reasons). After modifications, the classes can now be correctly parsed. Therefore, I close this issue.",
"HI, I have a question. How much time did you load the ImageNet data files? "
] | 1,648,198,277,000 | 1,648,429,323,000 | 1,648,200,476,000 | NONE | null | ## Describe the bug
I tried to load my own image dataset with imagefolder, but the classes are parsed incorrectly.
## Steps to reproduce the bug
I organized my dataset (ImageNet) in the following structure:
```
- imagenet/
- train/
- n01440764/
- ILSVRC2012_val_00000293.jpg
- ......
- n01695060/
- ......
- val/
- n01440764/
- n01695060/
- ......
```
At first, I followed the instructions from the Huggingface [example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification#using-your-own-data) to load my data as:
```
from datasets import load_dataset
data_files = {'train': 'imagenet/train', 'val': 'imagenet/val'}
ds = load_dataset("nateraw/image-folder", data_files=data_files, task="image-classification")
```
but it resulted following error (I mask my personal path as <PERSONAL_PATH>):
```
FileNotFoundError: Unable to find 'https://huggingface.co/datasets/nateraw/image-folder/resolve/main/imagenet/train' at <PERSONAL_PATH>/ImageNet/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
```
Next, I followed a recent issue #3960 to load data as:
```
from datasets import load_dataset
data_files = {'train': ['imagenet/train/**'], 'val': ['imagenet/val/**']}
ds = load_dataset("imagefolder", data_files=data_files, task="image-classification")
```
and the data can be loaded without error as: (I copy val folder to train folder for illustration)
```
>>> ds
DatasetDict({
train: Dataset({
features: ['image', 'labels'],
num_rows: 50000
})
val: Dataset({
features: ['image', 'labels'],
num_rows: 50000
})
})
```
However, the parsed classes are wrong (there should be 1000 classes):
```
>>> ds["train"].features
{'image': Image(decode=True, id=None), 'labels': ClassLabel(num_classes=1, names=['val'], id=None)}
```
## Expected results
I expect the "labels" feature in ds["train"].features to contain 1000 classes.
## Actual results
The "labels" in ds["train"].features contains only 1 wrong class.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: Ubuntu 18.04
- Python version: Python 3.7.12
- PyArrow version: 7.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4015/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4015/timeline | null | completed | null | null | false |
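The same call as in the report, for reference; per the author's resolution above, it parses the classes correctly once the images are real files rather than symbolic links:

```python
# Same call as in the report; works once the images are real files, not symlinks.
from datasets import load_dataset

data_files = {"train": ["imagenet/train/**"], "val": ["imagenet/val/**"]}
ds = load_dataset("imagefolder", data_files=data_files, task="image-classification")
print(ds["train"].features["labels"].num_classes)  # expected: 1000
```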
https://api.github.com/repos/huggingface/datasets/issues/4014 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4014/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4014/comments | https://api.github.com/repos/huggingface/datasets/issues/4014/events | https://github.com/huggingface/datasets/pull/4014 | 1,180,481,229 | PR_kwDODunzps41AGBu | 4,014 | Support streaming id_clickbait dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,196,308,000 | 1,648,198,711,000 | 1,648,198,412,000 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4014/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4014/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4014",
"html_url": "https://github.com/huggingface/datasets/pull/4014",
"diff_url": "https://github.com/huggingface/datasets/pull/4014.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4014.patch",
"merged_at": 1648198412000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4013 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4013/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4013/comments | https://api.github.com/repos/huggingface/datasets/issues/4013/events | https://github.com/huggingface/datasets/issues/4013 | 1,180,427,174 | I_kwDODunzps5GW-Om | 4,013 | Cannot preview "hazal/Turkish-Biomedical-corpus-trM" | {
"login": "hazalturkmen",
"id": 42860397,
"node_id": "MDQ6VXNlcjQyODYwMzk3",
"avatar_url": "https://avatars.githubusercontent.com/u/42860397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hazalturkmen",
"html_url": "https://github.com/hazalturkmen",
"followers_url": "https://api.github.com/users/hazalturkmen/followers",
"following_url": "https://api.github.com/users/hazalturkmen/following{/other_user}",
"gists_url": "https://api.github.com/users/hazalturkmen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hazalturkmen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hazalturkmen/subscriptions",
"organizations_url": "https://api.github.com/users/hazalturkmen/orgs",
"repos_url": "https://api.github.com/users/hazalturkmen/repos",
"events_url": "https://api.github.com/users/hazalturkmen/events{/privacy}",
"received_events_url": "https://api.github.com/users/hazalturkmen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @hazalturkmen, thanks for reporting.\r\n\r\nNote that your dataset repository does not contain any loading script; it only contains a data file named `tr_article_2`.\r\n\r\nWhen there is no loading script but only data files, the `datasets` library tries to infer how to load the data by looking at the data file extensions. However, your data file does not have any extension.\r\n\r\nNote that current supported data file extensions are: 'csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip'.\r\n\r\nYou have more info on our docs: [How to share a dataset](https://huggingface.co/docs/datasets/share).",
"thanks for reply :)"
] | 1,648,192,322,000 | 1,649,059,501,000 | 1,648,217,771,000 | NONE | null | ## Dataset viewer issue for 'hazal/Turkish-Biomedical-corpus-trM'
**Link:** *https://huggingface.co/datasets/hazal/Turkish-Biomedical-corpus-trM*
*I cannot see the dataset preview.*
```
Server Error
Status code: 400
Exception: HTTPError
Message: 403 Client Error: Forbidden for url: https://huggingface.co/api/datasets/hazal/Turkish-Biomedical-corpus-trM?full=true
```
Am I the one who added this dataset? Yes
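As the first comment above explains, the loader infers the format from the data file's extension; until the file is renamed, a workaround sketch is to name the builder explicitly (treating the file as plain text is an assumption about its contents):
```python
from datasets import load_dataset

# "tr_article_2" is the extension-less data file in the repo.
url = "https://huggingface.co/datasets/hazal/Turkish-Biomedical-corpus-trM/resolve/main/tr_article_2"
ds = load_dataset("text", data_files={"train": url})
```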
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4013/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4013/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4012 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4012/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4012/comments | https://api.github.com/repos/huggingface/datasets/issues/4012/events | https://github.com/huggingface/datasets/pull/4012 | 1,180,350,083 | PR_kwDODunzps40_qgo | 4,012 | Rename wer to cer | {
"login": "pmgautam",
"id": 28428143,
"node_id": "MDQ6VXNlcjI4NDI4MTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/28428143?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pmgautam",
"html_url": "https://github.com/pmgautam",
"followers_url": "https://api.github.com/users/pmgautam/followers",
"following_url": "https://api.github.com/users/pmgautam/following{/other_user}",
"gists_url": "https://api.github.com/users/pmgautam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pmgautam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pmgautam/subscriptions",
"organizations_url": "https://api.github.com/users/pmgautam/orgs",
"repos_url": "https://api.github.com/users/pmgautam/repos",
"events_url": "https://api.github.com/users/pmgautam/events{/privacy}",
"received_events_url": "https://api.github.com/users/pmgautam/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,648,184,765,000 | 1,648,475,845,000 | 1,648,475,845,000 | CONTRIBUTOR | null | The `wer` variable is renamed to `cer` in the README file.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4012/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4012/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4012",
"html_url": "https://github.com/huggingface/datasets/pull/4012",
"diff_url": "https://github.com/huggingface/datasets/pull/4012.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4012.patch",
"merged_at": 1648475845000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4011 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4011/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4011/comments | https://api.github.com/repos/huggingface/datasets/issues/4011/events | https://github.com/huggingface/datasets/pull/4011 | 1,179,885,965 | PR_kwDODunzps40-Ho0 | 4,011 | Fix SQuAD v2 metric docs on `references` format | {
"login": "bryant1410",
"id": 3905501,
"node_id": "MDQ6VXNlcjM5MDU1MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bryant1410",
"html_url": "https://github.com/bryant1410",
"followers_url": "https://api.github.com/users/bryant1410/followers",
"following_url": "https://api.github.com/users/bryant1410/following{/other_user}",
"gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions",
"organizations_url": "https://api.github.com/users/bryant1410/orgs",
"repos_url": "https://api.github.com/users/bryant1410/repos",
"events_url": "https://api.github.com/users/bryant1410/events{/privacy}",
"received_events_url": "https://api.github.com/users/bryant1410/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4011). All of your documentation changes will be reflected on that endpoint."
] | 1,648,146,430,000 | 1,657,120,792,000 | null | CONTRIBUTOR | null | `references` is not a list of dictionaries but a dictionary that has a list in its values.
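For reference, a minimal usage sketch of the corrected format, where `answers` is a single dictionary whose values are lists (the ids and values below are illustrative):
```python
from datasets import load_metric

squad_v2_metric = load_metric("squad_v2")
predictions = [{"id": "id1", "prediction_text": "1976", "no_answer_probability": 0.0}]
references = [{"id": "id1", "answers": {"text": ["1976"], "answer_start": [97]}}]
print(squad_v2_metric.compute(predictions=predictions, references=references))
```
| {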
"url": "https://api.github.com/repos/huggingface/datasets/issues/4011/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4011/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4011",
"html_url": "https://github.com/huggingface/datasets/pull/4011",
"diff_url": "https://github.com/huggingface/datasets/pull/4011.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4011.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4010 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4010/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4010/comments | https://api.github.com/repos/huggingface/datasets/issues/4010/events | https://github.com/huggingface/datasets/pull/4010 | 1,179,848,036 | PR_kwDODunzps409_QV | 4,010 | Fix None issue with Sequence of dict | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Merging since I'd like do do a patch release soon for this one"
] | 1,648,144,739,000 | 1,648,462,433,000 | 1,648,462,120,000 | MEMBER | null | `Features.encode_example` currently fails if the features contain a sequence of dicts like `Sequence({"subcolumn": Value("int32")})` and `None` is passed instead of the dict.
```python
File "/Users/quentinlhoest/Desktop/hf/datasets/src/datasets/features/features.py", line 1310, in encode_example
return encode_nested_example(self, example)
File "/Users/quentinlhoest/Desktop/hf/datasets/src/datasets/features/features.py", line 973, in encode_nested_example
return {k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in zip_dict(schema, obj)}
File "/Users/quentinlhoest/Desktop/hf/datasets/src/datasets/features/features.py", line 973, in <dictcomp>
return {k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in zip_dict(schema, obj)}
File "/Users/quentinlhoest/Desktop/hf/datasets/src/datasets/features/features.py", line 998, in encode_nested_example
for k, (sub_schema, sub_objs) in zip_dict(schema.feature, obj):
File "/Users/quentinlhoest/Desktop/hf/datasets/src/datasets/utils/py_utils.py", line 207, in zip_dict
yield key, tuple(d[key] for d in dicts)
File "/Users/quentinlhoest/Desktop/hf/datasets/src/datasets/utils/py_utils.py", line 207, in <genexpr>
yield key, tuple(d[key] for d in dicts)
TypeError: 'NoneType' object is not subscriptable
```
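A minimal repro sketch of the failing case:
```python
from datasets import Features, Sequence, Value

features = Features({"a": Sequence({"subcolumn": Value("int32")})})
features.encode_example({"a": None})  # raised the TypeError above before this fix
```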
I fixed this issue and updated the tests (this case was missing in the tests) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4010/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4010/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4010",
"html_url": "https://github.com/huggingface/datasets/pull/4010",
"diff_url": "https://github.com/huggingface/datasets/pull/4010.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4010.patch",
"merged_at": 1648462120000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4009 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4009/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4009/comments | https://api.github.com/repos/huggingface/datasets/issues/4009/events | https://github.com/huggingface/datasets/issues/4009 | 1,179,658,611 | I_kwDODunzps5GUClz | 4,009 | AMI load_dataset error: sndfile library not found | {
"login": "i-am-neo",
"id": 102043285,
"node_id": "U_kgDOBhUOlQ",
"avatar_url": "https://avatars.githubusercontent.com/u/102043285?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/i-am-neo",
"html_url": "https://github.com/i-am-neo",
"followers_url": "https://api.github.com/users/i-am-neo/followers",
"following_url": "https://api.github.com/users/i-am-neo/following{/other_user}",
"gists_url": "https://api.github.com/users/i-am-neo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/i-am-neo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/i-am-neo/subscriptions",
"organizations_url": "https://api.github.com/users/i-am-neo/orgs",
"repos_url": "https://api.github.com/users/i-am-neo/repos",
"events_url": "https://api.github.com/users/i-am-neo/events{/privacy}",
"received_events_url": "https://api.github.com/users/i-am-neo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Issue unresolved, see [4000](https://github.com/huggingface/datasets/issues/4009#issue-1179658611)"
] | 1,648,134,818,000 | 1,648,136,798,000 | 1,648,135,049,000 | NONE | null | ## Describe the bug
I get an error message when loading the AMI dataset.
## Steps to reproduce the bug
`python3 -c "from datasets import load_dataset; print(load_dataset('ami', 'headset-single', split='validation')[0])"`
## Expected results
The AMI dataset loads and the first validation example is printed.
## Actual results
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/load.py", line 1707, in load_dataset
use_auth_token=use_auth_token,
File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py", line 595, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py", line 690, in _download_and_prepare
) from None
OSError: Cannot find data file.
Original error:
sndfile library not found
## Environment info
- `datasets` version: 1.18.3
- Platform: Linux-4.19.0-18-cloud-amd64-x86_64-with-debian-10.11
- Python version: 3.7.3
- PyArrow version: 7.0.0
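For context, the "sndfile library not found" message comes from the `soundfile` package, which needs the system `libsndfile` library; a quick diagnostic sketch:
```python
# If libsndfile is missing, this import raises the same
# "sndfile library not found" OSError, independently of `datasets`.
import soundfile

print(soundfile.__libsndfile_version__)
```
Installing the system package (e.g. `libsndfile1` on Debian) usually resolves it.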
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4009/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4009/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4008 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4008/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4008/comments | https://api.github.com/repos/huggingface/datasets/issues/4008/events | https://github.com/huggingface/datasets/pull/4008 | 1,179,591,068 | PR_kwDODunzps409Ixp | 4,008 | Support streaming daily_dialog dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Yay! I love this dataset!"
] | 1,648,131,803,000 | 1,648,135,741,000 | 1,648,133,218,000 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4008/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4008/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4008",
"html_url": "https://github.com/huggingface/datasets/pull/4008",
"diff_url": "https://github.com/huggingface/datasets/pull/4008.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4008.patch",
"merged_at": 1648133218000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4007 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4007/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4007/comments | https://api.github.com/repos/huggingface/datasets/issues/4007/events | https://github.com/huggingface/datasets/issues/4007 | 1,179,381,021 | I_kwDODunzps5GS-0d | 4,007 | set_format does not work with multi dimension tensor | {
"login": "phihung",
"id": 5902432,
"node_id": "MDQ6VXNlcjU5MDI0MzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5902432?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phihung",
"html_url": "https://github.com/phihung",
"followers_url": "https://api.github.com/users/phihung/followers",
"following_url": "https://api.github.com/users/phihung/following{/other_user}",
"gists_url": "https://api.github.com/users/phihung/gists{/gist_id}",
"starred_url": "https://api.github.com/users/phihung/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phihung/subscriptions",
"organizations_url": "https://api.github.com/users/phihung/orgs",
"repos_url": "https://api.github.com/users/phihung/repos",
"events_url": "https://api.github.com/users/phihung/events{/privacy}",
"received_events_url": "https://api.github.com/users/phihung/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi! Use the `ArrayXD` feature type (where X is the number of dimensions) to get correctly formated tensors. So in your case, define the dataset as follows :\r\n```python\r\nds = Dataset.from_dict({\"A\": [torch.rand((2, 2))]}, features=Features({\"A\": Array2D(shape=(2, 2), dtype=\"float32\")}))\r\n```\r\n",
"Hi @mariosasko I'm facing the same issue and the only work around I've found so far is to convert my `DatasetDict` to a dictionary and then create new objects with `Dataset.from_dict`.\r\n```\r\ndataset = load_dataset(\"my_dataset.py\")\r\ndataset = dataset.map(lambda example: blabla(example))\r\ndict_dataset_test = dataset[\"test\"].to_dict()\r\n...\r\ndataset_test = Dataset.from_dict(dict_dataset_test, features=Features(features))\r\n```\r\nHowever, converting a `Dataset` object to a dict takes quite a lot of time and memory... Is there a way to directly create an `Array2D` without having to transform the original `Dataset` to a dict?",
"Hi! Yes, you can directly pass the `Features` dictionary as `features` in `map` to cast the column to `Array2D`:\r\n```python\r\ndataset = dataset.map(lambda example: blabla(example), features=Features(features))\r\n```\r\nOr you can use `cast` after `map` to do that:\r\n```python\r\ndataset = dataset.map(lambda example: blabla(example))\r\ndataset = dataset.cast(Features(features))\r\n```",
"Fantastic thank you @mariosasko\r\nThe first option you suggested is indeed way faster 😃 "
] | 1,648,121,263,000 | 1,648,625,337,000 | 1,648,132,769,000 | NONE | null | ## Describe the bug
`set_format` only converts the innermost dimension of a multi-dimensional list to a tensor, so a 2-D array comes back as a list of 1-D tensors instead of a single 2-D tensor.
## Steps to reproduce the bug
```python
import torch
from datasets import Dataset
ds = Dataset.from_dict({"A": [torch.rand((2, 2))]})
# ds = Dataset.from_dict({"A": [np.random.rand(2, 2)]}) # => same result
ds = ds.with_format("torch")
print(ds[0])
```
## Expected results
```
{'A': [tensor([[0.6689, 0.1516], [0.1403, 0.5567]])]}
```
## Actual results
```
{'A': [tensor([0.6689, 0.1516]), tensor([0.1403, 0.5567])]}
```
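As suggested in the first comment above, declaring the column as an `Array2D` feature yields the expected shape; a minimal sketch:
```python
import torch
from datasets import Array2D, Dataset, Features

ds = Dataset.from_dict(
    {"A": [torch.rand((2, 2))]},
    features=Features({"A": Array2D(shape=(2, 2), dtype="float32")}),
)
print(ds.with_format("torch")[0])  # a single 2x2 tensor
```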
## Environment info
- datasets version: 2.0.0
- Platform: Mac OSX
- Python version: 3.8.12
- PyArrow version: 7.0.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4007/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4007/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4006 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4006/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4006/comments | https://api.github.com/repos/huggingface/datasets/issues/4006/events | https://github.com/huggingface/datasets/pull/4006 | 1,179,367,195 | PR_kwDODunzps408YnW | 4,006 | Use audio feature in ASR task template | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,120,522,000 | 1,648,142,369,000 | 1,648,140,482,000 | MEMBER | null | The AutomaticSpeechRecognition task template is outdated: it still uses the file path column as input instead of the audio column.
I changed that and updated all the datasets as well as the tests.
The only community dataset that will need to be updated is `facebook/multilingual_librispeech`. It has almost zero usage unfortunately (probably because users load the duplicate `multilingual_librispeech` directly instead), but it means we can update it.
(this makes me think that we should deprecate `multilingual_librispeech` and redirect users to `facebook/multilingual_librispeech`).
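A sketch of what the updated template amounts to (the parameter names here are an assumption based on this change, not a confirmed signature):
```python
from datasets.tasks import AutomaticSpeechRecognition

# The template now maps an Audio-typed column instead of a file-path column.
task = AutomaticSpeechRecognition(audio_column="audio", transcription_column="text")
```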
This PR is also useful for the AudioFolder in https://github.com/huggingface/datasets/pull/3963 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4006/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4006/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4006",
"html_url": "https://github.com/huggingface/datasets/pull/4006",
"diff_url": "https://github.com/huggingface/datasets/pull/4006.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4006.patch",
"merged_at": 1648140482000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4005 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4005/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4005/comments | https://api.github.com/repos/huggingface/datasets/issues/4005/events | https://github.com/huggingface/datasets/issues/4005 | 1,179,365,663 | I_kwDODunzps5GS7Ef | 4,005 | Yelp not working | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"I don't think it's an issue with the dataset-viewer. Maybe @lhoestq or @albertvillanova could confirm.\r\n\r\n```python\r\n>>> from datasets import load_dataset, DownloadMode\r\n>>> import itertools\r\n>>> # without streaming\r\n>>> dataset = load_dataset(\"yelp_review_full\", name=\"yelp_review_full\", split=\"train\", download_mode=DownloadMode.FORCE_REDOWNLOAD)\r\n\r\nDownloading builder script: 4.39kB [00:00, 5.97MB/s]\r\nDownloading metadata: 2.13kB [00:00, 3.14MB/s]\r\nDownloading and preparing dataset yelp_review_full/yelp_review_full (download: 187.06 MiB, generated: 496.94 MiB, post-processed: Unknown size, total: 684.00 MiB) to /home/slesage/.cache/huggingface/datasets/yelp_review_full/yelp_review_full/1.0.0/13c31a618ba62568ec8572a222a283dfc29a6517776a3ac5945fb508877dde43...\r\nDownloading data: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.10k/1.10k [00:00<00:00, 1.39MB/s]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets/src/datasets/load.py\", line 1687, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 605, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 1104, in _download_and_prepare\r\n super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 676, in _download_and_prepare\r\n verify_checksums(\r\n File \"/home/slesage/hf/datasets/src/datasets/utils/info_utils.py\", line 40, in verify_checksums\r\n raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\ndatasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://drive.google.com/uc?export=download&id=0Bz8a_Dbh9QhbZlU4dXhHTFhZQU0']\r\n\r\n>>> # with streaming\r\n>>> dataset = load_dataset(\"yelp_review_full\", name=\"yelp_review_full\", split=\"train\", download_mode=DownloadMode.FORCE_REDOWNLOAD, streaming=True)\r\n\r\nDownloading builder script: 4.39kB [00:00, 5.53MB/s]\r\nDownloading metadata: 2.13kB [00:00, 3.14MB/s]\r\nTraceback (most recent call last):\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/implementations/http.py\", line 375, in _info\r\n await _file_info(\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/implementations/http.py\", line 736, in _file_info\r\n r.raise_for_status()\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/aiohttp/client_reqrep.py\", line 1000, in raise_for_status\r\n raise ClientResponseError(\r\naiohttp.client_exceptions.ClientResponseError: 403, message='Forbidden', url=URL('https://doc-0g-bs-docs.googleusercontent.com/docs/securesc/ha0ro937gcuc7l7deffksulhg5h7mbp1/gklhpdq1arj8v15qrg7ces34a8c3413d/1648144575000/07511006523564980941/*/0Bz8a_Dbh9QhbZlU4dXhHTFhZQU0?e=download')\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets/src/datasets/load.py\", line 1677, in load_dataset\r\n return builder_instance.as_streaming_dataset(\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 906, in 
as_streaming_dataset\r\n splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/yelp_review_full/13c31a618ba62568ec8572a222a283dfc29a6517776a3ac5945fb508877dde43/yelp_review_full.py\", line 102, in _split_generators\r\n data_dir = dl_manager.download_and_extract(my_urls)\r\n File \"/home/slesage/hf/datasets/src/datasets/utils/streaming_download_manager.py\", line 800, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"/home/slesage/hf/datasets/src/datasets/utils/streaming_download_manager.py\", line 778, in extract\r\n urlpaths = map_nested(self._extract, path_or_paths, map_tuple=True)\r\n File \"/home/slesage/hf/datasets/src/datasets/utils/py_utils.py\", line 306, in map_nested\r\n return function(data_struct)\r\n File \"/home/slesage/hf/datasets/src/datasets/utils/streaming_download_manager.py\", line 783, in _extract\r\n protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token)\r\n File \"/home/slesage/hf/datasets/src/datasets/utils/streaming_download_manager.py\", line 372, in _get_extraction_protocol\r\n with fsspec.open(urlpath, **kwargs) as f:\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/core.py\", line 102, in __enter__\r\n f = self.fs.open(self.path, mode=mode)\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/spec.py\", line 978, in open\r\n f = self._open(\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/implementations/http.py\", line 335, in _open\r\n size = size or self.info(path, **kwargs)[\"size\"]\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/asyn.py\", line 88, in wrapper\r\n return sync(self.loop, func, *args, **kwargs)\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/asyn.py\", line 69, in sync\r\n raise result[0]\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/asyn.py\", line 25, in _runner\r\n result[0] = await coro\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/implementations/http.py\", line 388, in _info\r\n raise FileNotFoundError(url) from exc\r\nFileNotFoundError: https://drive.google.com/uc?export=download&id=0Bz8a_Dbh9QhbZlU4dXhHTFhZQU0&confirm=t\r\n```\r\n\r\nAnd this is before even trying to access the rows with\r\n\r\n```python\r\n>>> rows = list(itertools.islice(dataset, 100))\r\n>>> rows = list(dataset.take(100))\r\n```",
"Yet another issue related to google drive not being nice. Most likely your IP has been banned from using their API programmatically. Do you know if we are allowed to host and redistribute the data ourselves ?",
"Hi,\r\n\r\nFacing the same issue while loading the dataset: \r\n\r\n`Error: {NonMatchingChecksumError}Checksums didn't match for dataset source files`\r\n\r\nThanks",
"> Facing the same issue while loading the dataset:\r\n> \r\n> Error: {NonMatchingChecksumError}Checksums didn't match for dataset source files\r\n\r\nThanks for reporting. I think this is the same issue. Feel free to try again later, once Google Drive stopped blocking you. You can retry by passing `download_mode=\"force_redownload\"` to `load_dataset`",
"I noticed that FastAI hosts the Yelp dataset at https://s3.amazonaws.com/fast-ai-nlp/yelp_review_full_csv.tgz (from their catalog [here](https://course.fast.ai/datasets))\r\n\r\nLet's update the yelp dataset script to download from there instead of Google Drive",
"I updated the link to not use Google Drive anymore, we will do a release early next week with the updated download url of the dataset :)"
] | 1,648,120,440,000 | 1,648,220,397,000 | 1,648,220,170,000 | MEMBER | null | ## Dataset viewer issue for 'yelp_review_full'
**Link:** https://huggingface.co/datasets/yelp_review_full/viewer/yelp_review_full/train
Doesn't work:
```
Server error
Status code: 400
Exception: Error
Message: line contains NULL
```
Am I the one who added this dataset? No
A seemingly identical copy of the dataset, https://huggingface.co/datasets/SetFit/yelp_review_full, works. The original one, https://huggingface.co/datasets/yelp_review_full, has > 20K downloads.
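As noted in the comments above, once Google Drive stops blocking the requesting IP, a forced re-download is worth a retry (sketch):
```python
from datasets import load_dataset

ds = load_dataset("yelp_review_full", download_mode="force_redownload")
```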
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4005/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4005/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4004 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4004/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4004/comments | https://api.github.com/repos/huggingface/datasets/issues/4004/events | https://github.com/huggingface/datasets/pull/4004 | 1,179,320,795 | PR_kwDODunzps408Onj | 4,004 | ASSIN 2 dataset: replace broken Google Drive _URLS by links on github | {
"login": "ruanchaves",
"id": 14352388,
"node_id": "MDQ6VXNlcjE0MzUyMzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/14352388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ruanchaves",
"html_url": "https://github.com/ruanchaves",
"followers_url": "https://api.github.com/users/ruanchaves/followers",
"following_url": "https://api.github.com/users/ruanchaves/following{/other_user}",
"gists_url": "https://api.github.com/users/ruanchaves/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ruanchaves/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ruanchaves/subscriptions",
"organizations_url": "https://api.github.com/users/ruanchaves/orgs",
"repos_url": "https://api.github.com/users/ruanchaves/repos",
"events_url": "https://api.github.com/users/ruanchaves/events{/privacy}",
"received_events_url": "https://api.github.com/users/ruanchaves/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,118,259,000 | 1,648,476,106,000 | 1,648,475,799,000 | CONTRIBUTOR | null | Closes #4003.
Fixes the checksum error by replacing the Google Drive URLs with the files hosted here: [Multilingual Transformer Ensembles for Portuguese Natural Language Tasks](https://github.com/ruanchaves/assin) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4004/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4004/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4004",
"html_url": "https://github.com/huggingface/datasets/pull/4004",
"diff_url": "https://github.com/huggingface/datasets/pull/4004.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4004.patch",
"merged_at": 1648475799000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4003 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4003/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4003/comments | https://api.github.com/repos/huggingface/datasets/issues/4003/events | https://github.com/huggingface/datasets/issues/4003 | 1,179,286,877 | I_kwDODunzps5GSn1d | 4,003 | ASSIN2 dataset checksum bug | {
"login": "ruanchaves",
"id": 14352388,
"node_id": "MDQ6VXNlcjE0MzUyMzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/14352388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ruanchaves",
"html_url": "https://github.com/ruanchaves",
"followers_url": "https://api.github.com/users/ruanchaves/followers",
"following_url": "https://api.github.com/users/ruanchaves/following{/other_user}",
"gists_url": "https://api.github.com/users/ruanchaves/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ruanchaves/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ruanchaves/subscriptions",
"organizations_url": "https://api.github.com/users/ruanchaves/orgs",
"repos_url": "https://api.github.com/users/ruanchaves/repos",
"events_url": "https://api.github.com/users/ruanchaves/events{/privacy}",
"received_events_url": "https://api.github.com/users/ruanchaves/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Using latest code, I am still facing the issue.\r\n\r\n```python\r\n(base) vimos@vimosmu ➜ ~ ipython\r\nPython 3.6.7 | packaged by conda-forge | (default, Nov 6 2019, 16:19:42) \r\nType 'copyright', 'credits' or 'license' for more information\r\nIPython 7.11.1 -- An enhanced Interactive Python. Type '?' for help.\r\n\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: load_dataset(\"assin2\")\r\nDownloading builder script: 4.24kB [00:00, 244kB/s]\r\nDownloading metadata: 2.58kB [00:00, 2.19MB/s]\r\nUsing custom data configuration default\r\nDownloading and preparing dataset assin2/default (download: 2.02 MiB, generated: 1.21 MiB, post-processed: Unknown size, total: 3.23 MiB) to /home/vimos/.cache/huggingface/datasets/assin2/default/1.0.0/8467f7acbda82f62ab960ca869dc1e96350e0e103a1ef7eaa43bbee530b80061...\r\nDownloading data: 1.51MB [00:00, 102MB/s]\r\nDownloading data: 116kB [00:00, 63.6MB/s]\r\nDownloading data: 493kB [00:00, 95.8MB/s] \r\nDownloading data files: 100%|██████████████████████████████████████████| 3/3 [00:00<00:00, 8.27it/s]\r\n---------------------------------------------------------------------------\r\nExpectedMoreDownloadedFiles Traceback (most recent call last)\r\n<ipython-input-2-b367d1ffd68e> in <module>\r\n----> 1 load_dataset(\"assin2\")\r\n\r\n~/anaconda3/lib/python3.6/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)\r\n 1694 ignore_verifications=ignore_verifications,\r\n 1695 try_from_hf_gcs=try_from_hf_gcs,\r\n-> 1696 use_auth_token=use_auth_token,\r\n 1697 )\r\n 1698\r\n\r\n~/anaconda3/lib/python3.6/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)\r\n 604 if not downloaded_from_gcs:\r\n 605 self._download_and_prepare(\r\n--> 606 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 607 )\r\n 608 # Sync info\r\n\r\n~/anaconda3/lib/python3.6/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos)\r\n 1102\r\n 1103 def _download_and_prepare(self, dl_manager, verify_infos):\r\n-> 1104 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)\r\n 1105\r\n 1106 def _get_examples_iterable_for_split(self, split_generator: SplitGenerator) -> ExamplesIterable:\r\n\r\n~/anaconda3/lib/python3.6/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 675 if verify_infos:\r\n 676 verify_checksums(\r\n--> 677 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), \"dataset source files\"\r\n 678 )\r\n 679\r\n\r\n~/anaconda3/lib/python3.6/site-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)\r\n 31 return\r\n 32 if len(set(expected_checksums) - set(recorded_checksums)) > 0:\r\n---> 33 raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums)))\r\n 34 if len(set(recorded_checksums) - set(expected_checksums)) > 0:\r\n 35 raise UnexpectedDownloadedFile(str(set(recorded_checksums) - set(expected_checksums)))\r\n\r\nExpectedMoreDownloadedFiles: {'https://drive.google.com/u/0/uc?id=1kb7xq6Mb3eaqe9cOAo70BaG9ypwkIqEU&export=download', 
'https://drive.google.com/u/0/uc?id=1J3FpQaHxpM-FDfBUyooh-sZF-B-bM_lU&export=download', 'https://drive.google.com/u/0/uc?id=1Q9j1a83CuKzsHCGaNulSkNxBm7Dkn7Ln&export=download'}\r\n```",
"That's true. Steps to reproduce the bug on Google Colab:\r\n\r\n```\r\ngit clone https://github.com/huggingface/datasets.git\r\ncd datasets\r\npip install -e .\r\npython -c \"from datasets import load_dataset; print(load_dataset('assin2')['train'][0])\"\r\n```\r\n\r\nHowever the dataset will load without any problems if you just install version 2.0.0:\r\n\r\n ```\r\npip install datasets\r\npython -c \"from datasets import load_dataset; print(load_dataset('assin2')['train'][0])\"\r\n```\r\n\r\nAny thoughts @lhoestq ?",
"Right indeed ! Let me open a PR to fix this.\r\nThe dataset_infos.json file that stores some metadata about the dataset to download (and is used to verify it was correctly downloaded) hasn't been updated correctly",
"Not sure what the status of this is, but personally I am still getting this error, with glue.",
"Can you open a new issue if you got an error with glue please ?",
"Have posted at #4241"
] | 1,648,116,530,000 | 1,651,068,885,000 | 1,648,475,799,000 | CONTRIBUTOR | null | ## Describe the bug
Checksum error after trying to load the [ASSIN 2 dataset](https://huggingface.co/datasets/assin2).
`NonMatchingChecksumError` triggered by calling `load_dataset("assin2")`.
Similar to #3952, #3942, #3941, etc.
```
---------------------------------------------------------------------------
NonMatchingChecksumError Traceback (most recent call last)
[<ipython-input-13-c664a92ad5e7>](https://localhost:8080/#) in <module>()
----> 1 load_dataset('assin2')
4 frames
[/usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py](https://localhost:8080/#) in verify_checksums(expected_checksums, recorded_checksums, verification_name)
38 if len(bad_urls) > 0:
39 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls))
41 logger.info("All the checksums matched successfully" + for_verification_name)
42
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/u/0/uc?id=1Q9j1a83CuKzsHCGaNulSkNxBm7Dkn7Ln&export=download']
```
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("assin2")
```
## Expected results
Load the dataset.
## Actual results
The dataset won't load.
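A temporary workaround sketch; note that this merely skips the failing verification rather than fixing the stale checksum metadata:
```python
from datasets import load_dataset

ds = load_dataset("assin2", ignore_verifications=True)
```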
## Environment info
- `datasets` version: 2.0.1.dev0
- Platform: Google Colab
- Python version: 3.7.12
- PyArrow version: 6.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4003/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4003/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4002 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4002/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4002/comments | https://api.github.com/repos/huggingface/datasets/issues/4002/events | https://github.com/huggingface/datasets/pull/4002 | 1,179,263,787 | PR_kwDODunzps408Cfp | 4,002 | Support streaming conll2012_ontonotesv5 dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,115,396,000 | 1,648,119,221,000 | 1,648,118,927,000 | MEMBER | null | Use another URL with a single ZIP file (instead of the previous one with a ZIP file inside another ZIP file). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4002/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4002/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4002",
"html_url": "https://github.com/huggingface/datasets/pull/4002",
"diff_url": "https://github.com/huggingface/datasets/pull/4002.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4002.patch",
"merged_at": 1648118927000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4001 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4001/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4001/comments | https://api.github.com/repos/huggingface/datasets/issues/4001/events | https://github.com/huggingface/datasets/issues/4001 | 1,179,231,418 | I_kwDODunzps5GSaS6 | 4,001 | How to use generate this multitask dataset for SQUAD? I am getting a value error. | {
"login": "gsk1692",
"id": 1963097,
"node_id": "MDQ6VXNlcjE5NjMwOTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1963097?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gsk1692",
"html_url": "https://github.com/gsk1692",
"followers_url": "https://api.github.com/users/gsk1692/followers",
"following_url": "https://api.github.com/users/gsk1692/following{/other_user}",
"gists_url": "https://api.github.com/users/gsk1692/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gsk1692/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gsk1692/subscriptions",
"organizations_url": "https://api.github.com/users/gsk1692/orgs",
"repos_url": "https://api.github.com/users/gsk1692/repos",
"events_url": "https://api.github.com/users/gsk1692/events{/privacy}",
"received_events_url": "https://api.github.com/users/gsk1692/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi! Replacing `nlp.<obj>` with `datasets.<obj>` in the script should fix the problem. `nlp` has been renamed to `datasets` more than a year ago, so please use `datasets` instead to avoid weird issues.",
"Thank You! Was able to solve with the help of this.",
"But I request you to please fix the same in the dataset hub explorer as well...",
"May I ask how to get this dataset?"
] | 1,648,113,711,000 | 1,648,288,101,000 | 1,648,265,743,000 | NONE | null | ## Dataset viewer issue for 'squad_multitask*'
**Link:** https://huggingface.co/datasets/vershasaxena91/squad_multitask
I am trying to generate the multitask dataset for the SQuAD dataset. However, it gives the error below in the dataset explorer as well as on my local machine.
I tried the command: `dataset = load_dataset("vershasaxena91/squad_multitask", 'highlight_qg_format')`
Error:
Status code: 400
Exception: TypeError
Message: argument of type 'Value' is not iterable
Kindly advise.
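Per the first comment above, the fix is to update the dataset script to the renamed library; an illustrative sketch (the exact lines in the script are assumptions):
```python
# before (in the repo's loading script):
#   features = nlp.Features({"question": nlp.Value("string")})
# after: `nlp` was renamed to `datasets` more than a year ago
import datasets

features = datasets.Features({"question": datasets.Value("string")})
```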
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4001/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4001/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4000 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4000/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4000/comments | https://api.github.com/repos/huggingface/datasets/issues/4000/events | https://github.com/huggingface/datasets/issues/4000 | 1,178,844,616 | I_kwDODunzps5GQ73I | 4,000 | load_dataset error: sndfile library not found | {
"login": "i-am-neo",
"id": 102043285,
"node_id": "U_kgDOBhUOlQ",
"avatar_url": "https://avatars.githubusercontent.com/u/102043285?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/i-am-neo",
"html_url": "https://github.com/i-am-neo",
"followers_url": "https://api.github.com/users/i-am-neo/followers",
"following_url": "https://api.github.com/users/i-am-neo/following{/other_user}",
"gists_url": "https://api.github.com/users/i-am-neo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/i-am-neo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/i-am-neo/subscriptions",
"organizations_url": "https://api.github.com/users/i-am-neo/orgs",
"repos_url": "https://api.github.com/users/i-am-neo/repos",
"events_url": "https://api.github.com/users/i-am-neo/events{/privacy}",
"received_events_url": "https://api.github.com/users/i-am-neo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @i-am-neo,\r\n\r\nThe audio support is an extra feature of `datasets` and therefore it must be installed as an additional optional dependency:\r\n```shell\r\npip install datasets[audio]\r\n```\r\nAdditionally, for specific MP3 support (which is not the case for AMI dataset, that contains WAV audio files), there is another third-party dependency on `torchaudio`.\r\n\r\nYou have all the information in our docs: https://huggingface.co/docs/datasets/audio_process#installation",
"Thanks @albertvillanova . Unfortunately the error persists after installing ```datasets[audio]```. Can you direct towards a solution?\r\n\r\n```\r\npip3 install datasets[audio]\r\n```\r\n### log\r\nRequirement already satisfied: datasets[audio] in ./.virtualenvs/hubert/lib/python3.7/site-packages (1.18.3)\r\nRequirement already satisfied: numpy>=1.17 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (1.21.5)\r\nRequirement already satisfied: xxhash in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (3.0.0)\r\nRequirement already satisfied: fsspec[http]>=2021.05.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (2022.2.0)\r\nRequirement already satisfied: dill in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (0.3.4)\r\nRequirement already satisfied: pandas in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (1.3.5)\r\nRequirement already satisfied: huggingface-hub<1.0.0,>=0.1.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (0.4.0)\r\nRequirement already satisfied: packaging in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (21.3)\r\nRequirement already satisfied: multiprocess in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (0.70.12.2)\r\nRequirement already satisfied: pyarrow!=4.0.0,>=3.0.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (7.0.0)\r\nRequirement already satisfied: tqdm>=4.62.1 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (4.63.1)\r\nRequirement already satisfied: aiohttp in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (3.8.1)\r\nRequirement already satisfied: importlib-metadata in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (4.11.3)\r\nRequirement already satisfied: requests>=2.19.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (2.27.1)\r\nRequirement already satisfied: librosa in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (0.9.1)\r\nRequirement already satisfied: pyyaml in ./.virtualenvs/hubert/lib/python3.7/site-packages (from huggingface-hub<1.0.0,>=0.1.0->datasets[audio]) (6.0)\r\nRequirement already satisfied: typing-extensions>=3.7.4.3 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from huggingface-hub<1.0.0,>=0.1.0->datasets[audio]) (4.1.1)\r\nRequirement already satisfied: filelock in ./.virtualenvs/hubert/lib/python3.7/site-packages (from huggingface-hub<1.0.0,>=0.1.0->datasets[audio]) (3.6.0)\r\nRequirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from packaging->datasets[audio]) (3.0.7)\r\nRequirement already satisfied: idna<4,>=2.5 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->datasets[audio]) (3.3)\r\nRequirement already satisfied: certifi>=2017.4.17 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->datasets[audio]) (2021.10.8)\r\nRequirement already satisfied: charset-normalizer~=2.0.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->datasets[audio]) (2.0.12)\r\nRequirement already satisfied: urllib3<1.27,>=1.21.1 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->datasets[audio]) (1.26.9)\r\nRequirement already satisfied: attrs>=17.3.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from 
aiohttp->datasets[audio]) (21.4.0)\r\nRequirement already satisfied: frozenlist>=1.1.1 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from aiohttp->datasets[audio]) (1.3.0)\r\nRequirement already satisfied: aiosignal>=1.1.2 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from aiohttp->datasets[audio]) (1.2.0)\r\nRequirement already satisfied: yarl<2.0,>=1.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from aiohttp->datasets[audio]) (1.7.2)\r\nRequirement already satisfied: asynctest==0.13.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from aiohttp->datasets[audio]) (0.13.0)\r\nRequirement already satisfied: multidict<7.0,>=4.5 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from aiohttp->datasets[audio]) (6.0.2)\r\nRequirement already satisfied: async-timeout<5.0,>=4.0.0a3 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from aiohttp->datasets[audio]) (4.0.2)\r\nRequirement already satisfied: zipp>=0.5 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from importlib-metadata->datasets[audio]) (3.7.0)\r\nRequirement already satisfied: decorator>=4.0.10 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (5.1.1)\r\nRequirement already satisfied: soundfile>=0.10.2 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (0.10.3.post1)\r\nRequirement already satisfied: numba>=0.45.1 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (0.55.1)\r\nRequirement already satisfied: pooch>=1.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (1.6.0)\r\nRequirement already satisfied: resampy>=0.2.2 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (0.2.2)\r\nRequirement already satisfied: audioread>=2.1.5 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (2.1.9)\r\nRequirement already satisfied: joblib>=0.14 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (1.1.0)\r\nRequirement already satisfied: scipy>=1.2.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (1.7.3)\r\nRequirement already satisfied: scikit-learn>=0.19.1 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (1.0.2)\r\nRequirement already satisfied: python-dateutil>=2.7.3 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from pandas->datasets[audio]) (2.8.2)\r\nRequirement already satisfied: pytz>=2017.3 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from pandas->datasets[audio]) (2022.1)\r\nRequirement already satisfied: setuptools in ./.virtualenvs/hubert/lib/python3.7/site-packages (from numba>=0.45.1->librosa->datasets[audio]) (60.10.0)\r\nRequirement already satisfied: llvmlite<0.39,>=0.38.0rc1 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from numba>=0.45.1->librosa->datasets[audio]) (0.38.0)\r\nRequirement already satisfied: appdirs>=1.3.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from pooch>=1.0->librosa->datasets[audio]) (1.4.4)\r\nRequirement already satisfied: six>=1.5 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from python-dateutil>=2.7.3->pandas->datasets[audio]) (1.16.0)\r\nRequirement already satisfied: threadpoolctl>=2.0.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from scikit-learn>=0.19.1->librosa->datasets[audio]) (3.1.0)\r\nRequirement already satisfied: cffi>=1.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from 
soundfile>=0.10.2->librosa->datasets[audio]) (1.15.0)\r\nRequirement already satisfied: pycparser in ./.virtualenvs/hubert/lib/python3.7/site-packages (from cffi>=1.0->soundfile>=0.10.2->librosa->datasets[audio]) (2.21)\r\n\r\n### reload\r\n```\r\npython3 -c \"from datasets import load_dataset; print(load_dataset('ami', 'headset-single', split='validation')[0])\"\r\n```\r\n\r\n### log\r\nDownloading and preparing dataset ami/headset-single (download: 10.71 GiB, generated: 49.99 MiB, post-processed: Unknown size, total: 10.76 GiB) to /home/neo/.cache/huggingface/datasets/ami/headset-single/1.6.2/2accdf810f7c0585f78f4bcfa47684fbb980e35d29ecf126e6906dbecb872d9e...\r\nAMI corpus cannot be downloaded using multi-processing. Setting number of downloaded processes `num_proc` to 1. \r\n100%|██████████████████████████████████████████████████████| 136/136 [00:00<00:00, 33542.59it/s]\r\n100%|█████████████████████████████████████████████████████████| 136/136 [00:06<00:00, 22.28it/s]\r\n100%|████████████████████████████████████████████████████████| 18/18 [00:00<00:00, 21558.39it/s]\r\n100%|█████████████████████████████████████████████████████████| 18/18 [00:00<00:00, 2996.41it/s]\r\n100%|████████████████████████████████████████████████████████| 16/16 [00:00<00:00, 23431.87it/s]\r\n100%|█████████████████████████████████████████████████████████| 16/16 [00:00<00:00, 2697.52it/s]\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/load.py\", line 1707, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py\", line 595, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py\", line 690, in _download_and_prepare\r\n ) from None\r\nOSError: Cannot find data file. 
\r\nOriginal error:\r\nsndfile library not found\r\n\r\n### just to double-check as per your docs\r\n```\r\npip3 install librosa torchaudio\r\n```\r\n\r\n### logs\r\nRequirement already satisfied: librosa in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (0.9.1)\r\nRequirement already satisfied: torchaudio in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (0.11.0+cu113)\r\nRequirement already satisfied: audioread>=2.1.5 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (2.1.9)\r\nRequirement already satisfied: joblib>=0.14 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (1.1.0)\r\nRequirement already satisfied: packaging>=20.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (21.3)\r\nRequirement already satisfied: scikit-learn>=0.19.1 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (1.0.2)\r\nRequirement already satisfied: scipy>=1.2.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (1.7.3)\r\nRequirement already satisfied: decorator>=4.0.10 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (5.1.1)\r\nRequirement already satisfied: resampy>=0.2.2 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (0.2.2)\r\nRequirement already satisfied: pooch>=1.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (1.6.0)\r\nRequirement already satisfied: numpy>=1.17.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (1.21.5)\r\nRequirement already satisfied: soundfile>=0.10.2 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (0.10.3.post1)\r\nRequirement already satisfied: numba>=0.45.1 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (0.55.1)\r\nRequirement already satisfied: torch==1.11.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from torchaudio) (1.11.0+cu113)\r\nRequirement already satisfied: typing-extensions in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from torch==1.11.0->torchaudio) (4.1.1)\r\nRequirement already satisfied: setuptools in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from numba>=0.45.1->librosa) (60.10.0)\r\nRequirement already satisfied: llvmlite<0.39,>=0.38.0rc1 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from numba>=0.45.1->librosa) (0.38.0)\r\nRequirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from packaging>=20.0->librosa) (3.0.7)\r\nRequirement already satisfied: requests>=2.19.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from pooch>=1.0->librosa) (2.27.1)\r\nRequirement already satisfied: appdirs>=1.3.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from pooch>=1.0->librosa) (1.4.4)\r\nRequirement already satisfied: six>=1.3 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from resampy>=0.2.2->librosa) (1.16.0)\r\nRequirement already satisfied: threadpoolctl>=2.0.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from scikit-learn>=0.19.1->librosa) (3.1.0)\r\nRequirement already satisfied: cffi>=1.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from soundfile>=0.10.2->librosa) (1.15.0)\r\nRequirement already satisfied: pycparser in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from cffi>=1.0->soundfile>=0.10.2->librosa) (2.21)\r\nRequirement 
already satisfied: charset-normalizer~=2.0.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->pooch>=1.0->librosa) (2.0.12)\r\nRequirement already satisfied: certifi>=2017.4.17 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->pooch>=1.0->librosa) (2021.10.8)\r\nRequirement already satisfied: urllib3<1.27,>=1.21.1 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->pooch>=1.0->librosa) (1.26.9)\r\nRequirement already satisfied: idna<4,>=2.5 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->pooch>=1.0->librosa) (3.3)\r\n\r\n### try loading again\r\n```\r\npython3 -c \"from datasets import load_dataset; print(load_dataset('ami', 'headset-single', split='validation')[0])\"\r\n```\r\n\r\n### same error\r\nDownloading and preparing dataset ami/headset-single (download: 10.71 GiB, generated: 49.99 MiB, post-processed: Unknown size, total: 10.76 GiB) to /home/neo/.cache/huggingface/datasets/ami/headset-single/1.6.2/2accdf810f7c0585f78f4bcfa47684fbb980e35d29ecf126e6906dbecb872d9e...\r\nAMI corpus cannot be downloaded using multi-processing. Setting number of downloaded processes `num_proc` to 1. \r\n100%|██████████████████████████████████████████████████████| 136/136 [00:00<00:00, 33542.59it/s]\r\n100%|█████████████████████████████████████████████████████████| 136/136 [00:06<00:00, 22.28it/s]\r\n100%|████████████████████████████████████████████████████████| 18/18 [00:00<00:00, 21558.39it/s]\r\n100%|█████████████████████████████████████████████████████████| 18/18 [00:00<00:00, 2996.41it/s]\r\n100%|████████████████████████████████████████████████████████| 16/16 [00:00<00:00, 23431.87it/s]\r\n100%|█████████████████████████████████████████████████████████| 16/16 [00:00<00:00, 2697.52it/s]\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/load.py\", line 1707, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py\", line 595, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py\", line 690, in _download_and_prepare\r\n ) from None\r\nOSError: Cannot find data file. \r\nOriginal error:\r\nsndfile library not found\r\n",
"Hi @i-am-neo, thanks again for your detailed report.\r\n\r\nOur `datasets` library support for audio relies on a third-party Python library called `librosa`, which is installed when you do:\r\n```shell\r\npip install datasets[audio]\r\n```\r\n\r\nHowever, the `librosa` library has a dependency on `soundfile`; and `soundfile` depends on a non-Python package called `sndfile`. \r\n\r\nOn Linux (which is your case), this must be installed manually using your operating system package manager, for example:\r\n```shell\r\nsudo apt-get install libsndfile1\r\n```\r\n\r\nPlease, let me know if this works and if so, I will update our docs with all this information.",
"@albertvillanova thanks, all good. The key is ```libsndfile1``` - it may help others to note that in your docs. I had installed libsndfile previously."
] | 1,648,086,752,000 | 1,648,230,813,000 | 1,648,230,813,000 | NONE | null | ## Describe the bug
Can't load ami dataset
## Steps to reproduce the bug
```
python3 -c "from datasets import load_dataset; print(load_dataset('ami', 'headset-single', split='validation')[0])"
```
## Expected results
The dataset downloads successfully and the first example of the validation split is printed.
## Actual results
Downloading and preparing dataset ami/headset-single (download: 10.71 GiB, generated: 49.99 MiB, post-processed: Unknown size, total: 10.76 GiB) to /home/neo/.cache/huggingface/datasets/ami/headset-single/1.6.2/2accdf810f7c0585f78f4bcfa47684fbb980e35d29ecf126e6906dbecb872d9e...
AMI corpus cannot be downloaded using multi-processing. Setting number of downloaded processes `num_proc` to 1.
100%|██████████████████████████████████████████████████████| 136/136 [00:00<00:00, 36004.88it/s]
100%|█████████████████████████████████████████████████████████| 136/136 [00:01<00:00, 79.10it/s]
100%|████████████████████████████████████████████████████████| 18/18 [00:00<00:00, 25343.23it/s]
100%|█████████████████████████████████████████████████████████| 18/18 [00:00<00:00, 2874.78it/s]
100%|████████████████████████████████████████████████████████| 16/16 [00:00<00:00, 27950.38it/s]
100%|█████████████████████████████████████████████████████████| 16/16 [00:00<00:00, 2892.25it/s]
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/load.py", line 1707, in load_dataset
use_auth_token=use_auth_token,
File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py", line 595, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py", line 690, in _download_and_prepare
) from None
OSError: Cannot find data file.
Original error:
sndfile library not found
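As the maintainer's reply in the comments above explains, the error comes from a missing native library rather than a Python package: `librosa` depends on `soundfile`, which in turn needs the system-level `sndfile` library. A minimal sketch of the fix, assuming a Debian/Ubuntu system:
```shell
# Install the native sndfile backend required by soundfile/librosa
sudo apt-get install libsndfile1
# Then (re)install the audio extras
pip install 'datasets[audio]'
```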
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.3
- Platform: Linux-4.19.0-18-cloud-amd64-x86_64-with-debian-10.11
- Python version: 3.7.3
- PyArrow version: 7.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4000/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4000/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3999 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3999/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3999/comments | https://api.github.com/repos/huggingface/datasets/issues/3999/events | https://github.com/huggingface/datasets/pull/3999 | 1,178,685,280 | PR_kwDODunzps406WN_ | 3,999 | Docs maintenance | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,070,853,000 | 1,648,659,705,000 | 1,648,659,398,000 | MEMBER | null | This PR links some functions to the API reference. These functions previously only showed up in code format because the path to the actual API was incorrect. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3999/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3999/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3999",
"html_url": "https://github.com/huggingface/datasets/pull/3999",
"diff_url": "https://github.com/huggingface/datasets/pull/3999.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3999.patch",
"merged_at": 1648659398000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3998 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3998/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3998/comments | https://api.github.com/repos/huggingface/datasets/issues/3998/events | https://github.com/huggingface/datasets/pull/3998 | 1,178,631,986 | PR_kwDODunzps406KyA | 3,998 | Fix Audio.encode_example() when writing an array | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@albertvillanova do you think [this line](https://github.com/huggingface/datasets/pull/3998/files#diff-58e348f6e4deaa5f3119e420a5d48ebb82875a78c28628831748fb54f59b2c78R67) is enough? that's why we missed this bug, we didn't check this case"
] | 1,648,067,533,000 | 1,648,563,704,000 | 1,648,563,373,000 | CONTRIBUTOR | null | Closes #3996 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3998/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3998/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3998",
"html_url": "https://github.com/huggingface/datasets/pull/3998",
"diff_url": "https://github.com/huggingface/datasets/pull/3998.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3998.patch",
"merged_at": 1648563373000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3997 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3997/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3997/comments | https://api.github.com/repos/huggingface/datasets/issues/3997/events | https://github.com/huggingface/datasets/pull/3997 | 1,178,566,568 | PR_kwDODunzps4058xr | 3,997 | Sync Features dictionaries | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,063,431,000 | 1,649,865,147,000 | 1,649,864,779,000 | CONTRIBUTOR | null | This PR adds a wrapper to the `Features` class to keep the secondary dict, `_column_requires_decoding`, aligned with the main dict (as discussed in https://github.com/huggingface/datasets/pull/3723#discussion_r806912731).
A more elegant approach would be to subclass `UserDict` and override `__setitem__` and `__delitem__` (see the sketch after this list), but this PR doesn't implement it for the following reasons:
* it requires replacing all occurrences of `isinstance(obj, dict)` in `features.py` with `isinstance(obj, Mapping)`, which is five times slower than `isinstance(obj, dict)` on my machine
* it is a breaking change, i.e., `isinstance(Features(...), dict)` would return `False` afterwards
* IMO, it makes sense to be consistent in the user-facing API and subclass either `dict` or `UserDict`. The problem with the latter is that it can't be used for `DatasetDict`, because `DatasetDict` exposes the `data` property, which is also used by `UserDict`, so this would result in a collision.
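For illustration, here is a minimal sketch of the rejected `UserDict` variant; the `_requires_decoding` helper is hypothetical and stands in for the real decoding check:
```python
from collections import UserDict


def _requires_decoding(feature) -> bool:
    # Hypothetical stand-in: in `datasets`, decodable features such as
    # Audio/Image expose a `decode` attribute
    return getattr(feature, "decode", False)


class SyncedFeatures(UserDict):
    """Keeps the secondary dict aligned by intercepting every mutation."""

    def __init__(self, *args, **kwargs):
        # Must exist before super().__init__, which populates the dict
        # through __setitem__
        self._column_requires_decoding = {}
        super().__init__(*args, **kwargs)

    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        self._column_requires_decoding[key] = _requires_decoding(value)

    def __delitem__(self, key):
        super().__delitem__(key)
        del self._column_requires_decoding[key]
```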
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3997/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3997/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3997",
"html_url": "https://github.com/huggingface/datasets/pull/3997",
"diff_url": "https://github.com/huggingface/datasets/pull/3997.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3997.patch",
"merged_at": 1649864779000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3996 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3996/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3996/comments | https://api.github.com/repos/huggingface/datasets/issues/3996/events | https://github.com/huggingface/datasets/issues/3996 | 1,178,415,905 | I_kwDODunzps5GPTMh | 3,996 | Audio.encode_example() throws an error when writing example from array | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Good catch ! Yes I think passing `format=\"wav\"` is the right thing to do",
"Thanks @polinaeterna for reporting this issue.\r\n\r\nIn relation to the decoding of MP3 audio files without torchaudio, I remember Patrick made some tests and these had quite bad performance. That is why he proposed to support MP3 files only with torchaudio. But yes, nice to give an alternative to non-torchaudio users (with a big warning on performance).",
"> I remember Patrick made some tests and these had quite bad performance. That is why he proposed to support MP3 files only with torchaudio.\r\n\r\nYeah, I know, but as far as I understand, some users just categorically don't want to have torchaudio in their environment. Anyway, it's just a more or less random example, they can use any library they like following the same logic (I'm just not a big expert in decoding utils so if you can give me some presentation / resources about that I would really appreciate it 🤗)"
] | 1,648,055,507,000 | 1,648,563,373,000 | 1,648,563,373,000 | CONTRIBUTOR | null | ## Describe the bug
When trying to call `Audio().encode_example()` with a pre-existing array (see [this line](https://github.com/huggingface/datasets/blob/master/src/datasets/features/audio.py#L73)), `sf.write()` throws an error:
`TypeError: No format specified and unable to get format from file extension: <_io.BytesIO object at 0x7f4218c0db30>`
## Steps to reproduce the bug
### Sample code to reproduce the bug
```python
# download sample file
!wget https://huggingface.co/datasets/polinaeterna/test_encode_example/resolve/main/common_voice_vi_21824030.mp3
arr, sr = librosa.load("common_voice_vi_21824030.mp3")
Audio().encode_example({
"path": "common_voice_vi_21824030.mp3",
"array": arr,
"sampling_rate":sr
})
```
## Expected results
An encoded example (`{"bytes": b'....', "path": 'path'}`)
## Actual results
```python
TypeError Traceback (most recent call last)
Input In [3], in <module>
1 arr, sr = librosa.load("common_voice_vi_21824030.mp3")
----> 3 Audio().encode_example({
4 "path": "common_voice_vi_21824030.mp3",
5 "array": arr,
6 "sampling_rate":sr
7 })
File ~/workspace/datasets/src/datasets/features/audio.py:75, in Audio.encode_example(self, value)
73 elif isinstance(value, dict) and "array" in value:
74 buffer = BytesIO()
---> 75 sf.write(buffer, value["array"], value["sampling_rate"])
76 return {"bytes": buffer.getvalue(), "path": value.get("path")}
77 elif value.get("bytes") is not None or value.get("path") is not None:
File ~/miniconda3/envs/datasets/lib/python3.8/site-packages/soundfile.py:314, in write(file, data, samplerate, subtype, endian, format, closefd)
312 else:
313 channels = data.shape[1]
--> 314 with SoundFile(file, 'w', samplerate, channels,
315 subtype, endian, format, closefd) as f:
316 f.write(data)
File ~/miniconda3/envs/datasets/lib/python3.8/site-packages/soundfile.py:627, in SoundFile.__init__(self, file, mode, samplerate, channels, subtype, endian, format, closefd)
625 mode_int = _check_mode(mode)
626 self._mode = mode
--> 627 self._info = _create_info_struct(file, mode, samplerate, channels,
628 format, subtype, endian)
629 self._file = self._open(file, mode_int, closefd)
630 if set(mode).issuperset('r+') and self.seekable():
631 # Move write position to 0 (like in Python file objects)
File ~/miniconda3/envs/datasets/lib/python3.8/site-packages/soundfile.py:1416, in _create_info_struct(file, mode, samplerate, channels, format, subtype, endian)
1414 original_format = format
1415 if format is None:
-> 1416 format = _get_format_from_filename(file, mode)
1417 assert isinstance(format, (_unicode, str))
1418 else:
File ~/miniconda3/envs/datasets/lib/python3.8/site-packages/soundfile.py:1457, in _get_format_from_filename(file, mode)
1455 pass
1456 if format.upper() not in _formats and 'r' not in mode:
-> 1457 raise TypeError("No format specified and unable to get format from "
1458 "file extension: {0!r}".format(file))
1459 return format
TypeError: No format specified and unable to get format from file extension: <_io.BytesIO object at 0x7fd8daf88180>
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: datasets master
- Platform: Ubuntu 20.04
- Python version: python 3.8.12
- PyArrow version: 6.0.1
## Solution
I guess we just need to add the `format` arg in [this line](https://github.com/huggingface/datasets/blob/master/src/datasets/features/audio.py#L75), like this:
```python
sf.write(buffer, value["array"], value["sampling_rate"], format="wav")
```
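A quick sanity check of the proposed fix: an in-memory buffer carries no file extension, so `soundfile` cannot infer the container and the `format` argument has to be explicit (the array below is dummy data, one second of silence):
```python
import io

import numpy as np
import soundfile as sf

buffer = io.BytesIO()
arr = np.zeros(16000, dtype=np.float32)  # dummy audio: 1 s of silence at 16 kHz
sf.write(buffer, arr, 16000, format="wav")  # succeeds with an explicit format
assert len(buffer.getvalue()) > 0
# Omitting format="wav" here raises the TypeError shown above
```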
BTW, I discovered this when trying to decode audio in MP3 format without torchaudio (which would be useful for TensorFlow users), like this:
```python
from datasets import load_dataset, Features, Audio
ds = load_dataset("common_voice", "vi", split="test")
ds = ds.remove_columns("audio")
ds = ds.select(range(3))  # keep 3 samples just for testing (select is not in-place)
def load_mp3_with_librosa(example):
arr, sr = librosa.load(example["path"])
example["audio"] = {
"path": example["path"],
"array": arr,
"sampling_rate": sr
}
return example
updated_dataset = ds.map(lambda example: load_mp3_with_librosa(example),
features=Features(
{"audio": Audio(decode=False)}
))
```
@lhoestq @mariosasko @albertvillanova Am I right in my logic? Do we agree that we can set `wav` as the format? 🤗 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3996/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3996/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3995 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3995/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3995/comments | https://api.github.com/repos/huggingface/datasets/issues/3995/events | https://github.com/huggingface/datasets/pull/3995 | 1,178,232,623 | PR_kwDODunzps404054 | 3,995 | Close `PIL.Image` file handler in `Image.decode_example` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,047,108,000 | 1,648,059,892,000 | 1,648,059,567,000 | CONTRIBUTOR | null | Closes the file handler of the PIL image object in `Image.decode_example` to avoid the `Too many open files` error.
To pass [the image equality checks](https://app.circleci.com/pipelines/github/huggingface/datasets/10774/workflows/d56670e6-16bb-4c64-b601-a152c5acf5ed/jobs/65825) in CI, `Image.decode_example` calls `image.load()` regardless of how the image object is created (not only for the `PIL.Image.open(local_path)` case). This is needed because `load()` sets the `readonly` attribute of a `PIL.Image` object to 0 (it's 1 after `PIL.Image.open(file_like)`), and in the older PIL versions (only fixed on main), that attribute is considered in `PIL.Image.__eq__`. More info can be found here: https://github.com/python-pillow/Pillow/issues/5926.
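For reference, a minimal sketch of the pattern described above, assuming `path` points to a local image file (the exact implementation in this PR may differ):
```python
from io import BytesIO

from PIL import Image


def decode_example(path):
    # Read the bytes eagerly so no OS-level file handle stays attached
    # to the PIL image, avoiding "Too many open files"
    with open(path, "rb") as f:
        image = Image.open(BytesIO(f.read()))
    # Force a full read; this also resets `readonly` to 0, which older
    # Pillow versions compare in PIL.Image.__eq__
    image.load()
    return image
```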
Fix #3985
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3995/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3995/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3995",
"html_url": "https://github.com/huggingface/datasets/pull/3995",
"diff_url": "https://github.com/huggingface/datasets/pull/3995.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3995.patch",
"merged_at": 1648059566000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3994 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3994/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3994/comments | https://api.github.com/repos/huggingface/datasets/issues/3994/events | https://github.com/huggingface/datasets/pull/3994 | 1,178,211,138 | PR_kwDODunzps404wWu | 3,994 | Change audio column from string path to Audio feature in ASR task | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,648,046,092,000 | 1,648,050,223,000 | 1,648,050,223,000 | CONTRIBUTOR | null | Will fix #3990 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3994/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3994/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3994",
"html_url": "https://github.com/huggingface/datasets/pull/3994",
"diff_url": "https://github.com/huggingface/datasets/pull/3994.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3994.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3993 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3993/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3993/comments | https://api.github.com/repos/huggingface/datasets/issues/3993/events | https://github.com/huggingface/datasets/issues/3993 | 1,178,201,495 | I_kwDODunzps5GOe2X | 3,993 | Streaming dataset + interleave + DataLoader hangs with multiple workers | {
"login": "jpilaul",
"id": 614861,
"node_id": "MDQ6VXNlcjYxNDg2MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/614861?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jpilaul",
"html_url": "https://github.com/jpilaul",
"followers_url": "https://api.github.com/users/jpilaul/followers",
"following_url": "https://api.github.com/users/jpilaul/following{/other_user}",
"gists_url": "https://api.github.com/users/jpilaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jpilaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jpilaul/subscriptions",
"organizations_url": "https://api.github.com/users/jpilaul/orgs",
"repos_url": "https://api.github.com/users/jpilaul/repos",
"events_url": "https://api.github.com/users/jpilaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/jpilaul/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Same thing occurs when streaming files loaded from disk.",
"Hi ! Thanks for reporting, could this be related to https://github.com/huggingface/datasets/issues/3950 ?\r\n\r\nCurrently streaming datasets only works in single process, but we're working on having in work in distributed setups as well :)",
"Hi, thanks for your reply. It seems related :)"
] | 1,648,045,649,000 | 1,648,562,585,000 | null | NONE | null | ## Describe the bug
Interleaving multiple iterable datasets that use `load_dataset` in streaming mode hangs when passed to `torch.utils.data.DataLoader` with multiple workers.
## Steps to reproduce the bug
```python
from datasets import interleave_datasets, load_dataset
from torch.utils.data import DataLoader
en_dataset = load_dataset('oscar', "unshuffled_deduplicated_en", split='train', streaming=True)
fr_dataset = load_dataset('oscar', "unshuffled_deduplicated_fr", split='train', streaming=True)
it_dataset = load_dataset('oscar', "unshuffled_deduplicated_it", split='train', streaming=True)
de_dataset = load_dataset('oscar', "unshuffled_deduplicated_de", split='train', streaming=True)
multilingual_dataset = interleave_datasets([en_dataset, fr_dataset, de_dataset, it_dataset])
multilingual_dataset = multilingual_dataset.with_format('torch')
next(iter(multilingual_dataset)) # works fairly fast
dataloader = DataLoader(multilingual_dataset, batch_size=8, num_workers=4)
for batch in dataloader:
print(len(batch)) # prints nothing after 30 min of waiting
dataloader = DataLoader(multilingual_dataset, batch_size=8, num_workers=0)
for batch in dataloader:
print(len(batch)) # prints right away
```
## Expected results
It should be possible to iterate over the dataset with multiple workers.
## Actual results
Prints results with `next(iter(multilingual_dataset))` and with `num_workers=0`, but prints nothing with `num_workers=4` or any other number above 0.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.1.dev0
- `pytorch` version: 1.10.0+cu113
- Python version: 3.7
- PyArrow version: 6.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3993/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3993/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3992 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3992/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3992/comments | https://api.github.com/repos/huggingface/datasets/issues/3992/events | https://github.com/huggingface/datasets/issues/3992 | 1,177,946,153 | I_kwDODunzps5GNggp | 3,992 | Image column is not decoded in map when using with with_transform | {
"login": "phihung",
"id": 5902432,
"node_id": "MDQ6VXNlcjU5MDI0MzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5902432?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phihung",
"html_url": "https://github.com/phihung",
"followers_url": "https://api.github.com/users/phihung/followers",
"following_url": "https://api.github.com/users/phihung/following{/other_user}",
"gists_url": "https://api.github.com/users/phihung/gists{/gist_id}",
"starred_url": "https://api.github.com/users/phihung/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phihung/subscriptions",
"organizations_url": "https://api.github.com/users/phihung/orgs",
"repos_url": "https://api.github.com/users/phihung/repos",
"events_url": "https://api.github.com/users/phihung/events{/privacy}",
"received_events_url": "https://api.github.com/users/phihung/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi! This behavior stems from this line: https://github.com/huggingface/datasets/blob/799b817d97590ddc97cbd38d07469403e030de8c/src/datasets/arrow_dataset.py#L1919\r\nBasically, the `Image`/`Audio` columns are decoded only if the `format_type` attribute is `None` (`set_format`/`with_format` and `set_transform`/`with_transform` assign a non-`None` value to it) and the `input_columns` param is not specified (see https://github.com/huggingface/datasets/issues/3756). We will remove these limitations soon.\r\n\r\n\r\n\r\n"
] | 1,648,032,673,000 | 1,648,050,439,000 | null | NONE | null | ## Describe the bug
Image column is not _decoded_ in **map** when used with `with_transform`
## Steps to reproduce the bug
```python
from datasets import Image, Dataset
def add_C(batch):
batch["C"] = batch["A"]
return batch
ds = Dataset.from_dict({"A": ["image.png"]}).cast_column("A", Image())
ds = ds.with_transform(lambda x: x) # <= This line causes the problem
ds = ds.map(add_C, batched=True)
print(ds[0])
```
## Expected results
```
{'C': <PIL.PngImagePlugin.PngImageFile>, ...}
```
## Actual results
```
{'C': {'bytes': None, 'path': 'image.png'}, ...}
```
If we remove the `with_transform` line, we get the expected result.
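Until this is fixed, one possible workaround (a sketch based on the observation above, reusing the definitions from the snippet) is to attach the transform only after mapping, so decoding still happens inside `map`:
```python
ds = Dataset.from_dict({"A": ["image.png"]}).cast_column("A", Image())
ds = ds.map(add_C, batched=True)       # "A" is decoded during map here
ds = ds.with_transform(lambda x: x)    # attach the transform afterwards
print(ds[0])
```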
## Environment info
- `datasets` version: 2.0.0
- Platform: Mac OSX
- Python version: 3.8.12
- PyArrow version: 7.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3992/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3992/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3991 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3991/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3991/comments | https://api.github.com/repos/huggingface/datasets/issues/3991/events | https://github.com/huggingface/datasets/issues/3991 | 1,177,362,901 | I_kwDODunzps5GLSHV | 3,991 | Add Lung Image Database Consortium image collection (LIDC-IDRI) dataset | {
"login": "omarespejel",
"id": 4755430,
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarespejel",
"html_url": "https://github.com/omarespejel",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 3608941089,
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision",
"name": "vision",
"color": "bfdadc",
"default": false,
"description": "Vision datasets"
}
] | open | false | null | [] | null | [] | 1,647,987,365,000 | 1,648,040,236,000 | null | NONE | null | ## Adding a Dataset
- **Name:** *Lung Image Database Consortium image collection (LIDC-IDRI)*
- **Description:** *Consists of diagnostic and lung cancer screening thoracic computed tomography (CT) scans with marked-up annotated lesions. It is a web-accessible international resource for development, training, and evaluation of computer-assisted diagnostic (CAD) methods for lung cancer detection and diagnosis. Initiated by the National Cancer Institute (NCI), further advanced by the Foundation for the National Institutes of Health (FNIH), and accompanied by the Food and Drug Administration (FDA) through active participation, this public-private partnership demonstrates the success of a consortium founded on a consensus-based process.*
- **Data:** *[The Cancer Imaging Archive (TCIA)](https://wiki.cancerimagingarchive.net/display/Public/LIDC-IDRI)*
- **Motivation:** *Key dataset in the healthcare community*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
FYI @osanseviero @abidlabs | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3991/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3991/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3990 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3990/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3990/comments | https://api.github.com/repos/huggingface/datasets/issues/3990/events | https://github.com/huggingface/datasets/issues/3990 | 1,176,976,247 | I_kwDODunzps5GJzt3 | 3,990 | Improve AutomaticSpeechRecognition task template | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"There is an open PR to do that: #3364. I just haven't had time to finish it... ",
"> There is an open PR to do that: #3364. I just haven't had time to finish it...\r\n\r\n😬 thanks..."
] | 1,647,963,668,000 | 1,648,055,560,000 | 1,648,055,560,000 | CONTRIBUTOR | null | **Is your feature request related to a problem? Please describe.**
The [AutomaticSpeechRecognition task template](https://github.com/huggingface/datasets/blob/master/src/datasets/tasks/automatic_speech_recognition.py) is outdated, as it uses the path to the audio file as an audio column instead of an `Audio` feature itself (I guess because the `Audio` feature didn't exist at the time this template was created).
**Describe the solution you'd like**
Change the audio column from a string path to an `Audio` feature.
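A sketch of what the updated template usage could look like; the `audio_column` parameter name is an assumption, not the final API:
```python
from datasets import Audio, Features, Value
from datasets.tasks import AutomaticSpeechRecognition

features = Features(
    {
        "audio": Audio(sampling_rate=16_000),
        "transcription": Value("string"),
    }
)
# Hypothetical signature: an `audio_column` pointing at an Audio feature
# instead of the old string-path column
task = AutomaticSpeechRecognition(
    audio_column="audio", transcription_column="transcription"
)
```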
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3990/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3990/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3989 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3989/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3989/comments | https://api.github.com/repos/huggingface/datasets/issues/3989/events | https://github.com/huggingface/datasets/pull/3989 | 1,176,955,078 | PR_kwDODunzps400l1S | 3,989 | Remove old wikipedia leftovers | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> This makes me think we shouldn't advise the use of load_dataset in dataset scripts, since it doesn't guarantee that the cache will work as expected (the cache directory is not set correctly, and the required disk space for downloaded files is not recorded)\r\n\r\n@lhoestq, do you think it could be a good idea to add a comment in this script WARNING that using load_dataset in a script is not good practice and that people should avoid using that script as a template to create other scripts? ",
"good idea ! :)"
] | 1,647,962,746,000 | 1,648,740,926,000 | 1,648,740,616,000 | MEMBER | null | After updating Wikipedia dataset, remove old wikipedia leftovers from doc.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3989/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3989/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3989",
"html_url": "https://github.com/huggingface/datasets/pull/3989",
"diff_url": "https://github.com/huggingface/datasets/pull/3989.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3989.patch",
"merged_at": 1648740616000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3988 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3988/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3988/comments | https://api.github.com/repos/huggingface/datasets/issues/3988/events | https://github.com/huggingface/datasets/pull/3988 | 1,176,858,540 | PR_kwDODunzps400RGb | 3,988 | More consistent references in docs | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Looks good, thanks for working on this!"
] | 1,647,958,721,000 | 1,647,968,792,000 | 1,647,967,844,000 | CONTRIBUTOR | null | Aligns the internal references with the style discussed in https://github.com/huggingface/datasets/pull/3980.
cc @stevhliu | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3988/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3988/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3988",
"html_url": "https://github.com/huggingface/datasets/pull/3988",
"diff_url": "https://github.com/huggingface/datasets/pull/3988.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3988.patch",
"merged_at": 1647967843000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3987 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3987/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3987/comments | https://api.github.com/repos/huggingface/datasets/issues/3987/events | https://github.com/huggingface/datasets/pull/3987 | 1,176,481,659 | PR_kwDODunzps40zAxF | 3,987 | Fix Faiss custom_index device | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,647,940,284,000 | 1,648,124,339,000 | 1,648,124,052,000 | MEMBER | null | Currently, if both `custom_index` and `device` are passed to `FaissIndex`, `device` is silently ignored.
This PR fixes this by raising a `ValueError` if both arguments are passed.
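A minimal sketch of that guard (a hypothetical helper; the argument names follow the description above, and the real check lives inside `FaissIndex`):
```python
from typing import Optional


def _check_faiss_index_args(custom_index=None, device: Optional[int] = None):
    # Reject the ambiguous combination instead of silently ignoring `device`.
    if custom_index is not None and device is not None:
        raise ValueError(
            "Cannot pass both `custom_index` and `device`. "
            "Either transfer the index to the device yourself or drop `device`."
        )
```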
Alternatively, the `custom_index` could be transferred to the target `device`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3987/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3987/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3987",
"html_url": "https://github.com/huggingface/datasets/pull/3987",
"diff_url": "https://github.com/huggingface/datasets/pull/3987.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3987.patch",
"merged_at": 1648124052000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3986 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3986/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3986/comments | https://api.github.com/repos/huggingface/datasets/issues/3986/events | https://github.com/huggingface/datasets/issues/3986 | 1,176,429,565 | I_kwDODunzps5GHuP9 | 3,986 | Dataset loads indefinitely after modifying default cache path (~/.cache/huggingface) | {
"login": "kelvinAI",
"id": 10686779,
"node_id": "MDQ6VXNlcjEwNjg2Nzc5",
"avatar_url": "https://avatars.githubusercontent.com/u/10686779?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kelvinAI",
"html_url": "https://github.com/kelvinAI",
"followers_url": "https://api.github.com/users/kelvinAI/followers",
"following_url": "https://api.github.com/users/kelvinAI/following{/other_user}",
"gists_url": "https://api.github.com/users/kelvinAI/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kelvinAI/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kelvinAI/subscriptions",
"organizations_url": "https://api.github.com/users/kelvinAI/orgs",
"repos_url": "https://api.github.com/users/kelvinAI/repos",
"events_url": "https://api.github.com/users/kelvinAI/events{/privacy}",
"received_events_url": "https://api.github.com/users/kelvinAI/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi ! I didn't managed to reproduce the issue. When you kill the process, is there any stacktrace that shows at what point in the code python is hanging ?",
"Hi @lhoestq , I've traced the issue back to file locking. It's similar to this thread, using Lustre filesystem as well. https://github.com/huggingface/datasets/issues/329 . In this case the user was able to modify and add -o flock option while mounting and it solved the problem. \r\nHowever in other cases such as mine, we do not have the permissions to modify the commands while mounting. I'm still trying to figure out a workaround. Any ideas how can we use a mounted Lustre filesystem with no flock option?\r\n",
"Hi @kelvinAI , I've had this issue on our institution's system which uses Lustre (in addition to our compute nodes being siloed off from external network access). The workaround I made for downloading/loading datasets was to set the `$HFHOME` environment variable to a location on the node's local storage (SSD), effectively a location that gets cleared regularly and sometimes gets used for temporary or cached files which is pretty common, e.g. \"scratch\" storage. Maybe your sysadmins, if you have them, could point you to subdirectories on a node that aren't linked to the Lustre filesystem. After downloading to scratch I found that the transformers, modules, and metrics cached folders were fine to move to my user drives on the Lustre filesystem but cached datasets that had fingerprints still had some issues with filelock, so it would help to use the function `my_dataset.save_to_disk('path/on/lustre_fs')` and static class function `Dataset.load_from_disk('path/on/lustre_fs')`. In rough steps:\r\n\r\n1. Initially download to scratch storage with `ds = datasets.load_dataset(dataset_name)`\r\n2. Call `ds.save_to_disk(my_path_on_lustre)` with a path in your user space on the Lustre filesystem\r\n3. Load datasets with `from datasets import Dataset; new_ds = Dataset.load_from_disk(my_path_on_lustre)`\r\n\r\nObviously this hinges on there existing scratch storage on the nodes you're using. Fingers crossed.",
"Hi @jpmcd , thanks for sharing your experience. For my case, the Lustre filesystem (with more storage space) is the scratch storage like the one you've mentioned. We have a local storage for each user but unfortunately there's not enough space in it to 'cache' huge datasets, hence that is why I tried changing HF_HOME to point to the scratch disk with more space and encountered the flock issue. Unfortunately I'm not aware of any viable solution to this for now so I simply fall back to using torch dataset. "
] | 1,647,937,401,000 | 1,655,191,207,000 | null | NONE | null | ## Describe the bug
The dataset loads indefinitely after modifying the cache path (~/.cache/huggingface).
If none of the environment variables are set, this custom dataset loads fine (a JSON-based dataset with a custom dataset loading script).
**Update**: Transformers modules face the same issue during loading.
Issue:
- Dataset loading stalls / freezes indefinitely when HF_HOME is changed to a custom directory
- No error code, had to terminate the process
- There are some files created in the cache directory:
```
custom_cache_dir
| -- modules
| -- __init__.py
| -- datasets_modules
| -- __init__.py
| -- datasets
| -- __init__.py
| -- script.py (Dataset loading script)
| -- script.lock
```
There's no error nor any logs thrown, so I'm out of ideas on how to debug this. The custom dataset works fine if the default ~/.cache dir is used, but unfortunately it's out of space and we do not have permissions to modify the disk.
## Steps to reproduce the bug
What I've tried:
- Modifying HF_HOME (https://github.com/huggingface/transformers/issues/8703)
- Modifying HF_DATASETS_CACHE (https://huggingface.co/docs/datasets/v1.12.0/cache.html)
- Modifying cache_dir param during runtime
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset('test_dataset', cache_dir='/path/to/new/cache')
```
- Disabling dataset cache
```python
>>> from datasets import set_caching_enabled
>>> set_caching_enabled(False)
```
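For reference, a minimal sketch of the `save_to_disk`/`load_from_disk` workaround described in the comments above (all paths are placeholders):
```python
from datasets import Dataset, load_dataset

# 1. Download to node-local scratch storage (placeholder path, not on Lustre)
ds = load_dataset("test_dataset", cache_dir="/local/scratch/hf_cache")

# 2. Persist the split to the shared filesystem
ds["train"].save_to_disk("/lustre/user/test_dataset_train")

# 3. Later, reload it without touching the download cache (and its lock files)
train_ds = Dataset.load_from_disk("/lustre/user/test_dataset_train")
```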
## Expected results
Datasets should load/cache as usual, the only exception being that the cache directory is different.
## Actual results
Any of the actions taken above to change the cache directory result in the dataset loading indefinitely, without terminating.
## Environment info
- `transformers` version: 4.18.0.dev0
- Platform: Linux-4.15.0-54-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): 2.4.1 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3986/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3986/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3985 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3985/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3985/comments | https://api.github.com/repos/huggingface/datasets/issues/3985/events | https://github.com/huggingface/datasets/issues/3985 | 1,175,982,937 | I_kwDODunzps5GGBNZ | 3,985 | [image feature] Too many files open error when image feature is returned as a path | {
"login": "apsdehal",
"id": 3616806,
"node_id": "MDQ6VXNlcjM2MTY4MDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/3616806?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/apsdehal",
"html_url": "https://github.com/apsdehal",
"followers_url": "https://api.github.com/users/apsdehal/followers",
"following_url": "https://api.github.com/users/apsdehal/following{/other_user}",
"gists_url": "https://api.github.com/users/apsdehal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/apsdehal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/apsdehal/subscriptions",
"organizations_url": "https://api.github.com/users/apsdehal/orgs",
"repos_url": "https://api.github.com/users/apsdehal/repos",
"events_url": "https://api.github.com/users/apsdehal/events{/privacy}",
"received_events_url": "https://api.github.com/users/apsdehal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 1,647,899,645,000 | 1,648,059,567,000 | 1,648,059,567,000 | MEMBER | null | ## Describe the bug
PR in context: #3967. If I load the dataset in this PR (TextVQA) and do a simple list comprehension over the dataset, I get a `Too many open files` error. This happens because of the way we load the image feature when a str path is returned from `_generate_examples`. Specifically, at https://github.com/huggingface/datasets/blob/508eb4ab5d52f590baa677b4f64b1cc069139f7b/src/datasets/features/image.py#L110, we open the file handle to the image but never close it, which in my understanding is causing the issue.
## Steps to reproduce the bug
Pull the PR locally and run the following code
```python
from datasets import load_dataset
dataset = load_dataset("./datasets/textvqa")["train"]
data = [item for item in dataset]
# Error happens
```
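A plausible fix, as a sketch (assuming the decoding path ultimately produces a `PIL.Image`): read the bytes inside a context manager so the OS file handle is released immediately:
```python
from io import BytesIO

import PIL.Image


def decode_image_from_path(path: str) -> "PIL.Image.Image":
    # Read the bytes inside a context manager so the OS file handle is
    # closed right away, instead of staying open with the lazy Image object.
    with open(path, "rb") as f:
        data = f.read()
    return PIL.Image.open(BytesIO(data))
```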
## Expected results
List comprehension should work smoothly
## Actual results
A `Too many open files` error
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.1.dev0
- Platform: macOS-12.2-arm64-arm-64bit
- Python version: 3.10.0
- PyArrow version: 7.0.0
- Pandas version: 1.4.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3985/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3985/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3984 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3984/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3984/comments | https://api.github.com/repos/huggingface/datasets/issues/3984/events | https://github.com/huggingface/datasets/issues/3984 | 1,175,822,117 | I_kwDODunzps5GFZ8l | 3,984 | Local and automatic tests fail | {
"login": "MarkusSagen",
"id": 20767068,
"node_id": "MDQ6VXNlcjIwNzY3MDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/20767068?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MarkusSagen",
"html_url": "https://github.com/MarkusSagen",
"followers_url": "https://api.github.com/users/MarkusSagen/followers",
"following_url": "https://api.github.com/users/MarkusSagen/following{/other_user}",
"gists_url": "https://api.github.com/users/MarkusSagen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MarkusSagen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MarkusSagen/subscriptions",
"organizations_url": "https://api.github.com/users/MarkusSagen/orgs",
"repos_url": "https://api.github.com/users/MarkusSagen/repos",
"events_url": "https://api.github.com/users/MarkusSagen/events{/privacy}",
"received_events_url": "https://api.github.com/users/MarkusSagen/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi ! To be able to run the tests, you need to install all the test dependencies and additional ones with\r\n```\r\npip install -e .[tests]\r\npip install -r additional-tests-requirements.txt --no-deps\r\n```\r\n\r\nIn particular, you probably need to `sacrebleu`. It looks like it wasn't able to instantiate `sacrebleu.TER` properly."
] | 1,647,889,657,000 | 1,648,473,525,000 | null | NONE | null | ## Describe the bug
Running the tests, either from CircleCI on a PR or locally, fails even with no changes. The failures seem to be in `test_metric_common.py`.
## Steps to reproduce the bug
```shell
git clone https://github.com/huggingface/datasets.git
cd datasets
```
```shell
python -m pip install -e .
pytest
```
## Expected results
All tests passing
## Actual results
```
tests/test_metric_common.py:91:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../.pyenv/versions/3.8.5/lib/python3.8/doctest.py:1336: in __run
exec(compile(example.source, filename, "single",
<doctest datasets_modules.metrics.ter.c0cfb5adedac7eb15ffa47bba6a70fabd80f3eb906ee508abf5e1906285d1155.ter.Ter[3]>:1: in <module>
???
../datasets/src/datasets/metric.py:430: in compute
output = self._compute(**inputs, **compute_kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = Metric(name: "ter", features: {'predictions': Value(dtype='string', id='sequence'), 'references': Sequence(feature=Val...ences=references)
>>> print(results)
{'score': 0.0, 'num_edits': 0, 'ref_length': 6.5}
""", stored examples: 0)
predictions = ['hello there general kenobi', 'foo bar foobar']
references = [['hello there general kenobi', 'hello there !'], ['foo bar foobar', 'foo bar foobar']]
normalized = False, no_punct = False, asian_support = False, case_sensitive = False
def _compute(
self,
predictions,
references,
normalized: bool = False,
no_punct: bool = False,
asian_support: bool = False,
case_sensitive: bool = False,
):
references_per_prediction = len(references[0])
if any(len(refs) != references_per_prediction for refs in references):
raise ValueError("Sacrebleu requires the same number of references for each prediction")
transformed_references = [[refs[i] for refs in references] for i in range(references_per_prediction)]
> sb_ter = TER(normalized, no_punct, asian_support, case_sensitive)
E TypeError: __init__() takes 2 positional arguments but 5 were given
/tmp/pytest-of-markussagen/pytest-1/cache/modules/datasets_modules/metrics/ter/c0cfb5adedac7eb15ffa47bba6a70fabd80f3eb906ee508abf5e1906285d1155/ter.py:130: TypeError
------------------------------ Captured stdout call -------------------------------
Trying:
predictions = ["hello there general kenobi", "foo bar foobar"]
Expecting nothing
ok
Trying:
references = [["hello there general kenobi", "hello there !"], ["foo bar foobar", "foo bar foobar"]]
Expecting nothing
ok
Trying:
ter = datasets.load_metric("ter")
Expecting nothing
ok
Trying:
results = ter.compute(predictions=predictions, references=references)
Expecting nothing
================================ warnings summary =================================
../.pyenv/versions/3.8.5/envs/huggingface/lib/python3.8/site-packages/hdfs/config.py:15
/home/markussagen/.pyenv/versions/3.8.5/envs/huggingface/lib/python3.8/site-packages/hdfs/config.py:15: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
from imp import load_source
../datasets/src/datasets/commands/test.py:35
/home/markussagen/datasets/src/datasets/commands/test.py:35: PytestCollectionWarning: cannot collect test class 'TestCommand' because it has a __init__ constructor (from: tests/commands/test_test.py)
class TestCommand(BaseDatasetsCLICommand):
tests/commands/test_test.py:33
/home/markussagen/mydataset/tests/commands/test_test.py:33: PytestCollectionWarning: cannot collect test class 'TestCommandArgs' because it has a __new__ constructor (from: tests/commands/test_test.py)
class TestCommandArgs:
tests/test_arrow_dataset.py: 760 warnings
tests/test_formatting.py: 60 warnings
tests/test_search.py: 31 warnings
tests/features/test_array_xd.py: 117 warnings
/home/markussagen/datasets/src/datasets/formatting/formatting.py:197: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
(isinstance(x, np.ndarray) and (x.dtype == np.object or x.shape != array[0].shape))
tests/test_arrow_dataset.py: 154 warnings
tests/features/test_array_xd.py: 1 warning
/home/markussagen/datasets/src/datasets/formatting/formatting.py:201: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
return np.array(array, copy=False, **{**self.np_array_kwargs, "dtype": np.object})
tests/test_arrow_dataset.py: 60 warnings
/home/markussagen/datasets/src/datasets/arrow_dataset.py:3105: DeprecationWarning: `np.str` is a deprecated alias for the builtin `str`. To silence this warning, use `str` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.str_` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
elif np.issubdtype(values.dtype, np.str):
tests/test_arrow_dataset.py: 138 warnings
tests/test_formatting.py: 21 warnings
/home/markussagen/datasets/src/datasets/formatting/tf_formatter.py:69: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
data_struct.dtype == np.object
tests/test_arrow_dataset.py: 240 warnings
tests/test_formatting.py: 20 warnings
/home/markussagen/datasets/src/datasets/formatting/torch_formatter.py:49: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
if data_struct.dtype == np.object: # pytorch tensors cannot be instantied from an array of objects
tests/test_arrow_dataset.py: 12 warnings
tests/test_search.py: 2 warnings
tests/features/test_array_xd.py: 6 warnings
tests/features/test_image.py: 4 warnings
/home/markussagen/datasets/src/datasets/features/features.py:1129: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
[0] + [len(arr) for arr in l_arr], dtype=np.object
tests/test_dataset_common.py::LocalDatasetTest::test_builder_class_banking77
/tmp/pytest-of-markussagen/pytest-1/cache/modules/datasets_modules/datasets/banking77/aec0289529599d4572d76ab00c8944cb84f88410ad0c9e7da26189d31f62a55b/banking77.py:24: DeprecationWarning: invalid escape sequence \~
_CITATION = """\
tests/test_dataset_common.py::LocalDatasetTest::test_builder_class_universal_dependencies
/tmp/pytest-of-markussagen/pytest-1/cache/modules/datasets_modules/datasets/universal_dependencies/065e728dfe9a8371434a6e87132c2386a6eacab1a076d3a12aa417b994e6ef7d/universal_dependencies.py:6: DeprecationWarning: invalid escape sequence \=
_CITATION = """\
tests/test_filesystem.py: 105 warnings
/home/markussagen/.pyenv/versions/3.8.5/envs/huggingface/lib/python3.8/site-packages/responses/__init__.py:398: DeprecationWarning: stream argument is deprecated. Use stream parameter in request directly
warn(
tests/test_formatting.py::FormatterTest::test_jax_formatter
tests/test_formatting.py::FormatterTest::test_jax_formatter
tests/test_formatting.py::FormatterTest::test_jax_formatter
tests/test_formatting.py::FormatterTest::test_jax_formatter
tests/test_formatting.py::FormatterTest::test_jax_formatter_np_array_kwargs
tests/test_formatting.py::FormatterTest::test_jax_formatter_np_array_kwargs
tests/test_formatting.py::FormatterTest::test_jax_formatter_np_array_kwargs
tests/test_formatting.py::FormatterTest::test_jax_formatter_np_array_kwargs
/home/markussagen/datasets/src/datasets/formatting/jax_formatter.py:57: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
if data_struct.dtype == np.object: # jax arrays cannot be instantied from an array of objects
tests/test_formatting.py::FormatterTest::test_jax_formatter
tests/test_formatting.py::FormatterTest::test_jax_formatter
tests/test_formatting.py::FormatterTest::test_jax_formatter
/home/markussagen/.pyenv/versions/3.8.5/envs/huggingface/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py:3567: UserWarning: Explicitly requested dtype <class 'jax._src.numpy.lax_numpy.int64'> requested in array is not available, and will be truncated to dtype int32. To enable more dtypes, set the jax_enable_x64 configuration option or the JAX_ENABLE_X64 shell environment variable. See https://github.com/google/jax#current-gotchas for more.
lax._check_user_dtype_supported(dtype, "array")
tests/test_metric_common.py::LocalMetricTest::test_load_metric_frugalscore
/home/markussagen/.pyenv/versions/3.8.5/envs/huggingface/lib/python3.8/site-packages/apscheduler/util.py:95: PytzUsageWarning: The zone attribute is specific to pytz's interface; please migrate to a new time zone provider. For more details on how to do so, see https://pytz-deprecation-shim.readthedocs.io/en/latest/migration.html
if obj.zone == 'local':
tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_to_hub_custom_features_audio
/home/markussagen/.pyenv/versions/3.8.5/envs/huggingface/lib/python3.8/site-packages/librosa/core/constantq.py:1059: DeprecationWarning: `np.complex` is a deprecated alias for the builtin `complex`. To silence this warning, use `complex` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.complex128` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
dtype=np.complex,
tests/features/test_array_xd.py::test_array_xd_with_none
/home/markussagen/mydataset/tests/features/test_array_xd.py:338: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
assert isinstance(arr, np.ndarray) and arr.dtype == np.object and arr.shape == (3,)
-- Docs: https://docs.pytest.org/en/stable/warnings.html
============================= short test summary info =============================
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_bleurt - I...
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_chrf - Att...
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_code_eval
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_comet - Im...
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_competition_math
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_coval - Im...
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_frugalscore
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_perplexity
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_ter - Type...
```
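For what it's worth, the `TypeError` above is consistent with an outdated `sacrebleu` being picked up; with `sacrebleu` 2.x the keyword form works. A quick local check (a sketch, assuming `sacrebleu >= 2.0` is installed):
```python
from sacrebleu.metrics import TER  # requires sacrebleu >= 2.0

# Instantiate with keyword arguments, matching the call in the metric script
scorer = TER(normalized=False, no_punct=False, asian_support=False, case_sensitive=False)
print(scorer.corpus_score(["hello there general kenobi"], [["hello there general kenobi"]]))
```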
## Environment info
- `datasets` version: 2.0.1.dev0
- Platform: Linux-5.16.11-76051611-generic-x86_64-with-glibc2.33
- Python version: 3.8.5
- PyArrow version: 5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3984/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3984/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3983 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3983/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3983/comments | https://api.github.com/repos/huggingface/datasets/issues/3983/events | https://github.com/huggingface/datasets/issues/3983 | 1,175,759,412 | I_kwDODunzps5GFKo0 | 3,983 | Infinitely attempting lock | {
"login": "jyrr",
"id": 11869652,
"node_id": "MDQ6VXNlcjExODY5NjUy",
"avatar_url": "https://avatars.githubusercontent.com/u/11869652?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jyrr",
"html_url": "https://github.com/jyrr",
"followers_url": "https://api.github.com/users/jyrr/followers",
"following_url": "https://api.github.com/users/jyrr/following{/other_user}",
"gists_url": "https://api.github.com/users/jyrr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jyrr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jyrr/subscriptions",
"organizations_url": "https://api.github.com/users/jyrr/orgs",
"repos_url": "https://api.github.com/users/jyrr/repos",
"events_url": "https://api.github.com/users/jyrr/events{/privacy}",
"received_events_url": "https://api.github.com/users/jyrr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! Thanks for reporting. We're using `py-filelock` as our locking mechanism.\r\n\r\nCan you try deleting the .lock file mentioned in the logs and try again ? Make sure that no other process is generating the `cnn_dailymail` dataset.\r\n\r\nIf it doesn't work, could you try to set up a lock using the latest version of `py-filelock` and see if it works ?\r\n\r\n```\r\npip install filelock\r\n```\r\nhere is a code example from the `py-filelock` documentation that you can try:\r\n```python\r\nfrom filelock import Timeout, FileLock\r\n\r\nlock = FileLock(\"high_ground.txt.lock\")\r\nwith lock:\r\n with open(\"high_ground.txt\", \"a\") as f:\r\n f.write(\"You were the chosen one.\")\r\n```"
] | 1,647,886,317,000 | 1,651,853,538,000 | 1,651,853,538,000 | NONE | null | I am trying to run one of the examples of the `transformers` repo, which makes use of `datasets`.
It is important to note that I am trying to run this via a Databricks notebook, and that all the files reside in the Databricks Filesystem (DBFS).
```
%sh
python /dbfs/transformers/examples/pytorch/summarization/run_summarization.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir /dbfs/transformers/tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate \
--log_level debug \
--cache_dir /dbfs/transformers/cache
```
All goes well until acquiring a lock --
```
03/21/2022 17:53:19 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock
03/21/2022 17:53:19 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ...
03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock
03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ...
03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock
03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ...
03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock
03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ...
03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock
03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ...
03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock
03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ...
```
and so on.
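A quick way to check whether DBFS supports this locking primitive at all is a minimal probe (adapted from the `py-filelock` snippet in the comment above; the path is a placeholder, and the timeout turns an infinite wait into an explicit error):
```python
from filelock import FileLock, Timeout

# Placeholder path on the DBFS mount
lock = FileLock("/dbfs/transformers/cache/probe.lock", timeout=10)
try:
    with lock:
        print("Lock acquired: DBFS supports this locking mechanism.")
except Timeout:
    print("Timed out: the lock cannot be acquired on this filesystem.")
```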
I imagine this has to do with DBFS -- is there a way to tackle this? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3983/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3983/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3982 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3982/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3982/comments | https://api.github.com/repos/huggingface/datasets/issues/3982/events | https://github.com/huggingface/datasets/pull/3982 | 1,175,478,099 | PR_kwDODunzps40vrR_ | 3,982 | Exclude Google Drive tests of the CI | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I was thinking exactly the same: running unit tests that request continuously a third-party API is not a good idea."
] | 1,647,873,256,000 | 1,648,744,682,000 | 1,647,874,295,000 | MEMBER | null | These tests make the CI spam the Google Drive API, and the CI now gets banned by Google Drive very often.
I think we can just skip these tests from the CI for now.
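A hypothetical way to express that skip (a sketch, not necessarily how this PR does it): gate the tests on a CI environment variable so they stay runnable locally:
```python
import os

import pytest

# Skip on CI, where Google Drive rate-limits/bans the shared runner IPs
require_no_ci = pytest.mark.skipif(
    os.getenv("CI") == "true", reason="Google Drive bans CI IPs"
)


@require_no_ci
def test_streaming_google_drive_url():
    ...
```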
In the future, we could have a CI job that runs only once a day or once a week for such cases.
cc @albertvillanova @mariosasko @severo
Close #3415
![image](https://user-images.githubusercontent.com/42851186/159283608-fdeca1ac-b57f-4fa3-bf09-6fa5361c494f.png)
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3982/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3982/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3982",
"html_url": "https://github.com/huggingface/datasets/pull/3982",
"diff_url": "https://github.com/huggingface/datasets/pull/3982.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3982.patch",
"merged_at": 1647874295000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3981 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3981/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3981/comments | https://api.github.com/repos/huggingface/datasets/issues/3981/events | https://github.com/huggingface/datasets/pull/3981 | 1,175,423,517 | PR_kwDODunzps40vfra | 3,981 | Add TER metric card | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,647,870,876,000 | 1,648,562,231,000 | 1,648,561,900,000 | CONTRIBUTOR | null | Add TER metric card
This card is still missing content for the following sections:
- **Limitations & Biases**
- **Values from Papers**
If anyone has any ideas for either of the above, feel free to either add them or point me to them and I'll add them! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3981/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3981/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3981",
"html_url": "https://github.com/huggingface/datasets/pull/3981",
"diff_url": "https://github.com/huggingface/datasets/pull/3981.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3981.patch",
"merged_at": 1648561900000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3980 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3980/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3980/comments | https://api.github.com/repos/huggingface/datasets/issues/3980/events | https://github.com/huggingface/datasets/pull/3980 | 1,175,412,905 | PR_kwDODunzps40vdcH | 3,980 | Add tip on how to speed up loading with ImageFolder | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for adding that tip! 👍 \r\n\r\nFor the docs syntax, it might be better if we hide the package name/full path to the class or function and only show the name of it. I think it's easier for users to read the function name (eg,`cast_column`) instead of the full path which can be a bit lengthy for some functions like `datasets.IterableDataset.remove_columns` (and if we like this idea, we can align the rest of the docs on it). ",
"> For the docs syntax, it might be better if we hide the package name/full path to the class or function and only show the name of it. I think it's easier for users to read the function name (eg,cast_column) instead of the full path which can be a bit lengthy for some functions like datasets.IterableDataset.remove_columns (and if we like this idea, we can align the rest of the docs on it).\r\n\r\nThat's also OK, as long as we are consistent.\r\n\r\n@lhoestq @albertvillanova @polinaeterna Which one of these two styles do you prefer?",
"Agree on hiding `datasets` name. Not sure about hiding class name as it's anyway not visible for users if they use `Dataset.cast_column` or `IterableDataset.cast_column` when working with their datasets. But I agree that the most important thing is to be consistent :)",
"Good points! :)\r\n\r\nI think it'll be good to show the class name since some functions have different parameters. For example, if users click on `IterableDataset.map` and then `Dataset.map`, they'll see different parameters and have to figure out why (which isn't too difficult I guess lol). But showing the class name avoids any confusion upfront. "
] | 1,647,870,358,000 | 1,647,956,385,000 | 1,647,956,096,000 | CONTRIBUTOR | null | This PR does two things:
* adds a tip on how to speed up loading of a large number of files with ImageFolder (motivated by [this issue](https://github.com/huggingface/datasets/issues/3960))
* replaces the current references to the `Dataset` methods in the Image Processing doc with their fully qualified counterparts (to align it with the Audio Processing doc)
cc @stevhliu | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3980/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3980/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3980",
"html_url": "https://github.com/huggingface/datasets/pull/3980",
"diff_url": "https://github.com/huggingface/datasets/pull/3980.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3980.patch",
"merged_at": 1647956096000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3979 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3979/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3979/comments | https://api.github.com/repos/huggingface/datasets/issues/3979/events | https://github.com/huggingface/datasets/pull/3979 | 1,175,258,969 | PR_kwDODunzps40u8NY | 3,979 | Fix google drive streaming for small files | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Actually the CI fails because of this\r\n![image](https://user-images.githubusercontent.com/42851186/159281771-78e611b1-6b04-4a87-8324-b6ba2d8c6a6a.png)\r\n\r\nIt looks like we can't have a proper way to test google drive in the CI right now. Though it seems to work locally if you're not banned. I think I'll just disable those tests for now",
"this fix will not be included?",
"No we can't do anything except stop using google drive when possible"
] | 1,647,862,726,000 | 1,648,141,151,000 | 1,647,872,758,000 | MEMBER | null | Google Drive made another change recently, following #3787 and #3843.
In particular, Google Drive now returns 403 for GET requests with `confirm=t` when a file doesn't have a virus-warning message. I fixed this by passing `confirm=t` if and only if there is one (i.e. when the status code of a HEAD request is 200).
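A rough sketch of that conditional (a hypothetical helper; the real logic lives in the library's download/streaming code):
```python
import requests


def gdrive_url_with_confirm(url: str) -> str:
    # Only append confirm=t when the bare URL serves the virus-warning page
    # (HEAD returns 200, per the description above); otherwise it now 403s.
    head = requests.head(url)
    if head.status_code == 200:
        sep = "&" if "?" in url else "?"
        return f"{url}{sep}confirm=t"
    return url
```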
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3979/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3979/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3979",
"html_url": "https://github.com/huggingface/datasets/pull/3979",
"diff_url": "https://github.com/huggingface/datasets/pull/3979.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3979.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3978 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3978/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3978/comments | https://api.github.com/repos/huggingface/datasets/issues/3978/events | https://github.com/huggingface/datasets/issues/3978 | 1,175,226,456 | I_kwDODunzps5GDIhY | 3,978 | I can't view HFcallback dataset for ASR Space | {
"login": "kingabzpro",
"id": 36753484,
"node_id": "MDQ6VXNlcjM2NzUzNDg0",
"avatar_url": "https://avatars.githubusercontent.com/u/36753484?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kingabzpro",
"html_url": "https://github.com/kingabzpro",
"followers_url": "https://api.github.com/users/kingabzpro/followers",
"following_url": "https://api.github.com/users/kingabzpro/following{/other_user}",
"gists_url": "https://api.github.com/users/kingabzpro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kingabzpro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kingabzpro/subscriptions",
"organizations_url": "https://api.github.com/users/kingabzpro/orgs",
"repos_url": "https://api.github.com/users/kingabzpro/repos",
"events_url": "https://api.github.com/users/kingabzpro/events{/privacy}",
"received_events_url": "https://api.github.com/users/kingabzpro/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"the dataset viewer is working on this dataset. I imagine the issue is that we would expect to be able to listen to the audio files in the `Please Record Your Voice file` column, right?\r\n\r\nmaybe @lhoestq or @albertvillanova could help\r\n\r\n<img width=\"1019\" alt=\"Capture d’écran 2022-03-24 à 17 36 20\" src=\"https://user-images.githubusercontent.com/1676121/159966006-57dcf8f7-b65f-4200-ac8c-66859318a8bb.png\">\r\n",
"The structure of the dataset is not supported. Only the CSV file is parsed and the audio files are ignored.\r\n\r\nWe're working on supporting audio datasets with a specific structure in #3963 ",
"Got it."
] | 1,647,860,869,000 | 1,649,079,278,000 | null | NONE | null | ## Dataset viewer issue for '*Urdu-ASR-flags*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/kingabzpro/Urdu-ASR-flags)*
*I think the dataset should show something, and if you want me to add a script, please show me the documentation. I thought this was supposed to be an automatic task.*
Am I the one who added this dataset? Yes
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3978/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3978/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3977 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3977/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3977/comments | https://api.github.com/repos/huggingface/datasets/issues/3977/events | https://github.com/huggingface/datasets/issues/3977 | 1,175,049,927 | I_kwDODunzps5GCdbH | 3,977 | Adapt `docs/README.md` for datasets | {
"login": "qqaatw",
"id": 24835382,
"node_id": "MDQ6VXNlcjI0ODM1Mzgy",
"avatar_url": "https://avatars.githubusercontent.com/u/24835382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qqaatw",
"html_url": "https://github.com/qqaatw",
"followers_url": "https://api.github.com/users/qqaatw/followers",
"following_url": "https://api.github.com/users/qqaatw/following{/other_user}",
"gists_url": "https://api.github.com/users/qqaatw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qqaatw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qqaatw/subscriptions",
"organizations_url": "https://api.github.com/users/qqaatw/orgs",
"repos_url": "https://api.github.com/users/qqaatw/repos",
"events_url": "https://api.github.com/users/qqaatw/events{/privacy}",
"received_events_url": "https://api.github.com/users/qqaatw/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | open | false | null | [] | null | [
"Thanks for reporting @qqaatw.\r\n\r\nYes, we should definitely adapt that file for `datasets`. "
] | 1,647,851,209,000 | 1,647,852,855,000 | null | CONTRIBUTOR | null | ## Describe the bug
Currently, `docs/README.md` is a direct copy from `transformers`; we should probably adapt this file for `datasets`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3977/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3977/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3976 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3976/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3976/comments | https://api.github.com/repos/huggingface/datasets/issues/3976/events | https://github.com/huggingface/datasets/pull/3976 | 1,175,043,780 | PR_kwDODunzps40uOY6 | 3,976 | Fix main classes reference in docs | {
"login": "qqaatw",
"id": 24835382,
"node_id": "MDQ6VXNlcjI0ODM1Mzgy",
"avatar_url": "https://avatars.githubusercontent.com/u/24835382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qqaatw",
"html_url": "https://github.com/qqaatw",
"followers_url": "https://api.github.com/users/qqaatw/followers",
"following_url": "https://api.github.com/users/qqaatw/following{/other_user}",
"gists_url": "https://api.github.com/users/qqaatw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qqaatw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qqaatw/subscriptions",
"organizations_url": "https://api.github.com/users/qqaatw/orgs",
"repos_url": "https://api.github.com/users/qqaatw/repos",
"events_url": "https://api.github.com/users/qqaatw/events{/privacy}",
"received_events_url": "https://api.github.com/users/qqaatw/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3976). All of your documentation changes will be reflected on that endpoint.",
"Not sure why some section titles end with `[[datasets.xxx]]`, like this: https://moon-ci-docs.huggingface.co/docs/datasets/pr_3976/en/package_reference/main_classes#datasetdict[[datasets.datasetdict]]",
"Thanks ! I think this has been fixed already in https://github.com/huggingface/datasets/pull/3925 though\r\n\r\nI'm closing this one then if it's fine for you"
] | 1,647,850,786,000 | 1,649,773,179,000 | 1,649,773,178,000 | CONTRIBUTOR | null | Currently the section index (on the page's right side) of the [main classes reference](https://huggingface.co/docs/datasets/master/en/package_reference/main_classes) incorrectly displays `Tensor returned:`, this PR fixes this issue by wrapping code examples in this page with markdown code block.
There are other examples in datasets library having this issue. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3976/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3976/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3976",
"html_url": "https://github.com/huggingface/datasets/pull/3976",
"diff_url": "https://github.com/huggingface/datasets/pull/3976.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3976.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3975 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3975/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3975/comments | https://api.github.com/repos/huggingface/datasets/issues/3975/events | https://github.com/huggingface/datasets/pull/3975 | 1,174,678,942 | PR_kwDODunzps40tKdS | 3,975 | Update many missing tags to dataset README's | {
"login": "MarkusSagen",
"id": 20767068,
"node_id": "MDQ6VXNlcjIwNzY3MDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/20767068?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MarkusSagen",
"html_url": "https://github.com/MarkusSagen",
"followers_url": "https://api.github.com/users/MarkusSagen/followers",
"following_url": "https://api.github.com/users/MarkusSagen/following{/other_user}",
"gists_url": "https://api.github.com/users/MarkusSagen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MarkusSagen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MarkusSagen/subscriptions",
"organizations_url": "https://api.github.com/users/MarkusSagen/orgs",
"repos_url": "https://api.github.com/users/MarkusSagen/repos",
"events_url": "https://api.github.com/users/MarkusSagen/events{/privacy}",
"received_events_url": "https://api.github.com/users/MarkusSagen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,647,808,947,000 | 1,647,887,992,000 | 1,647,887,992,000 | NONE | null | I've started to go through the available datasets and noticed that there are 127 datasets that do not have all the tags, so I started filling them in, starting with some of the most common and QA datasets.
I'm not 100% certain that the task_id is correct for SuperGLUE.
If anyone is browsing the issues and would like to help make Hugging Face datasets even more feature-complete and awesome, feel free to use this tool I wrote to find the missing tags in the [datacards](https://github.com/Hugging-Face-Supporter/datacards) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3975/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3975/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3975",
"html_url": "https://github.com/huggingface/datasets/pull/3975",
"diff_url": "https://github.com/huggingface/datasets/pull/3975.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3975.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3974 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3974/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3974/comments | https://api.github.com/repos/huggingface/datasets/issues/3974/events | https://github.com/huggingface/datasets/pull/3974 | 1,174,485,044 | PR_kwDODunzps40ssrA | 3,974 | Add XFUN dataset | {
"login": "qqaatw",
"id": 24835382,
"node_id": "MDQ6VXNlcjI0ODM1Mzgy",
"avatar_url": "https://avatars.githubusercontent.com/u/24835382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qqaatw",
"html_url": "https://github.com/qqaatw",
"followers_url": "https://api.github.com/users/qqaatw/followers",
"following_url": "https://api.github.com/users/qqaatw/following{/other_user}",
"gists_url": "https://api.github.com/users/qqaatw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qqaatw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qqaatw/subscriptions",
"organizations_url": "https://api.github.com/users/qqaatw/orgs",
"repos_url": "https://api.github.com/users/qqaatw/repos",
"events_url": "https://api.github.com/users/qqaatw/events{/privacy}",
"received_events_url": "https://api.github.com/users/qqaatw/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3974). All of your documentation changes will be reflected on that endpoint.",
"Not sure how to generate dummy data.\r\n\r\nThe downloaded file structure is \r\n\r\n- document file paths\r\n - (a json file containing all documents info, document images folder)\r\n - (a json file containing all documents info, document images folder)\r\n - ...",
"Hey @mariosasko, thanks for the review. I'm not sure how to suggest these changes to the owner @ranpox, and I did spend some time to write the model card and hope to get it on the official repo. Is that possible?",
"Since the author is not responding, maybe we can go ahead with this PR ?",
"Go for it!\n\nOn Tue, Apr 12, 2022 at 10:24 AM Quentin Lhoest ***@***.***>\nwrote:\n\n> Since the author is not responding, maybe we can go ahead with this PR ?\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/pull/3974#issuecomment-1096797650>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ATFNL66EVUFWS3P2FOAS7SLVEWBP3ANCNFSM5RFH3MXA>\n> .\n> You are receiving this because you are subscribed to this thread.Message\n> ID: ***@***.***>\n>\n",
"@qqaatw Do you plan to finish this PR? I can give you some pointers and help you with the code if needed.",
"@mariosasko Yes, I'll apply all of the suggestions when I have some time."
] | 1,647,768,294,000 | 1,657,120,791,000 | null | CONTRIBUTOR | null | This PR adds the XFUN dataset.
Home page and repository: https://github.com/doc-analysis/XFUND
Source code: https://github.com/microsoft/unilm/blob/master/layoutlmft/layoutlmft/data/datasets/xfun.py | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3974/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3974/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3974",
"html_url": "https://github.com/huggingface/datasets/pull/3974",
"diff_url": "https://github.com/huggingface/datasets/pull/3974.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3974.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3973 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3973/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3973/comments | https://api.github.com/repos/huggingface/datasets/issues/3973/events | https://github.com/huggingface/datasets/issues/3973 | 1,174,455,431 | I_kwDODunzps5GAMSH | 3,973 | ConnectionError and SSLError | {
"login": "yanyu2015",
"id": 11142054,
"node_id": "MDQ6VXNlcjExMTQyMDU0",
"avatar_url": "https://avatars.githubusercontent.com/u/11142054?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yanyu2015",
"html_url": "https://github.com/yanyu2015",
"followers_url": "https://api.github.com/users/yanyu2015/followers",
"following_url": "https://api.github.com/users/yanyu2015/following{/other_user}",
"gists_url": "https://api.github.com/users/yanyu2015/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yanyu2015/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yanyu2015/subscriptions",
"organizations_url": "https://api.github.com/users/yanyu2015/orgs",
"repos_url": "https://api.github.com/users/yanyu2015/repos",
"events_url": "https://api.github.com/users/yanyu2015/events{/privacy}",
"received_events_url": "https://api.github.com/users/yanyu2015/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi ! You can download the `oscar.py` file from this repository at `/datasets/oscar/oscar.py`.\r\n\r\nThen you can load the dataset by passing the local path to `oscar.py` to `load_dataset`:\r\n```python\r\nload_dataset(\"path/to/oscar.py\", \"unshuffled_deduplicated_it\")\r\n```",
"it works,but another error occurs.\r\n```\r\nConnectionError: Couldn't reach https://s3.amazonaws.com/datasets.huggingface.co/oscar/1.0/unshuffled/deduplicated/it/it_sha256.txt (SSLError(MaxRetryError(\"HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/oscar/1.0/unshuffled/deduplicated/it/it_sha256.txt (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1129)')))\")))\r\n```\r\nI can access `https://s3.amazonaws.com/datasets.huggingface.co/oscar/1.0/unshuffled/deduplicated/it/it_sha256.txt` and `https://aws.amazon.com/cn/s3/` directly, so why it reports a SSLError, should I need tomodify the host file?",
"Could it be an issue with your python environment or your version of OpenSSL ?",
"you are so wise!\r\nit report [ConnectionError] in python 3.9.7\r\nand works well in python 3.8.12\r\n\r\nI need you help again: how can I specify the path for download files?\r\nthe data is too large and my C hardware is not enough",
"Cool ! And you can specify the path for download files with to the `cache_dir` parameter:\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('oscar', 'unshuffled_deduplicated_it', cache_dir='path/to/directory')",
"It takes me some days to download data completely, Despise sometimes it occurs again, change py version is feasible way to avoid this ConnectionEror.\r\nparameter `cache_dir` works well, thanks for your kindness again!"
] | 1,647,758,737,000 | 1,648,628,012,000 | 1,648,628,012,000 | NONE | null | Code:
```python
from datasets import load_dataset
dataset = load_dataset('oscar', 'unshuffled_deduplicated_it')
```
Bug report:
```
---------------------------------------------------------------------------
ConnectionError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_29788/2615425180.py in <module>
----> 1 dataset = load_dataset('oscar', 'unshuffled_deduplicated_it')
D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1658
1659 # Create a dataset builder
-> 1660 builder_instance = load_dataset_builder(
1661 path=path,
1662 name=name,
D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs)
1484 download_config = download_config.copy() if download_config else DownloadConfig()
1485 download_config.use_auth_token = use_auth_token
-> 1486 dataset_module = dataset_module_factory(
1487 path,
1488 revision=revision,
D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1236 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
1237 ) from None
-> 1238 raise e1 from None
1239 else:
1240 raise FileNotFoundError(
D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1173 if path.count("/") == 0: # even though the dataset is on the Hub, we get it from GitHub for now
1174 # TODO(QL): use a Hub dataset module factory instead of GitHub
-> 1175 return GithubDatasetModuleFactory(
1176 path,
1177 revision=revision,
D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\load.py in get_module(self)
531 revision = self.revision
532 try:
--> 533 local_path = self.download_loading_script(revision)
534 except FileNotFoundError:
535 if revision is not None or os.getenv("HF_SCRIPTS_VERSION", None) is not None:
D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\load.py in download_loading_script(self, revision)
511 if download_config.download_desc is None:
512 download_config.download_desc = "Downloading builder script"
--> 513 return cached_path(file_path, download_config=download_config)
514
515 def download_dataset_infos_file(self, revision: Optional[str]) -> str:
D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\utils\file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
232 if is_remote_url(url_or_filename):
233 # URL, so get it from the cache (downloading if necessary)
--> 234 output_path = get_from_cache(
235 url_or_filename,
236 cache_dir=cache_dir,
D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\utils\file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token, ignore_url_params, download_desc)
580 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
581 if head_error is not None:
--> 582 raise ConnectionError(f"Couldn't reach {url} ({repr(head_error)})")
583 elif response is not None:
584 raise ConnectionError(f"Couldn't reach {url} (error {response.status_code})")
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/oscar/oscar.py (SSLError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.0.0/datasets/oscar/oscar.py (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1129)')))")))
```
It may be caused by an SSLError (perhaps due to network restrictions in China?), because it works well on Google Colab.
So how can I download this dataset manually?
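A consolidated sketch of the workaround from the comment thread above (the paths are placeholders, not real locations): download `oscar.py` from this repository and pass its local path to `load_dataset`, optionally redirecting the cache to a drive with enough space:
```python
# workaround sketch grounded in the comments above; "path/to/..." values
# are placeholders to fill in
from datasets import load_dataset

dataset = load_dataset(
    "path/to/oscar.py",               # local copy of datasets/oscar/oscar.py
    "unshuffled_deduplicated_it",
    cache_dir="path/to/large/drive",  # avoids filling the default cache location
)
```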
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3973/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3973/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3972 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3972/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3972/comments | https://api.github.com/repos/huggingface/datasets/issues/3972/events | https://github.com/huggingface/datasets/pull/3972 | 1,174,402,033 | PR_kwDODunzps40sdVu | 3,972 | Adding Roman Urdu Hate Speech dataset | {
"login": "bp-high",
"id": 53102161,
"node_id": "MDQ6VXNlcjUzMTAyMTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/53102161?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bp-high",
"html_url": "https://github.com/bp-high",
"followers_url": "https://api.github.com/users/bp-high/followers",
"following_url": "https://api.github.com/users/bp-high/following{/other_user}",
"gists_url": "https://api.github.com/users/bp-high/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bp-high/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bp-high/subscriptions",
"organizations_url": "https://api.github.com/users/bp-high/orgs",
"repos_url": "https://api.github.com/users/bp-high/repos",
"events_url": "https://api.github.com/users/bp-high/events{/privacy}",
"received_events_url": "https://api.github.com/users/bp-high/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq can you review when you have some time? Also were the previous CI fails due to the Google Drive tests which were excluded by #3982 ?",
"> were the previous CI fails due to the Google Drive tests which were excluded by https://github.com/huggingface/datasets/pull/3982 ?\r\n\r\nYes exactly, merging `master` into your branch fixed the CI ;)",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,647,735,566,000 | 1,648,223,779,000 | 1,648,223,480,000 | CONTRIBUTOR | null | This pull request adds the Roman Urdu Hate Speech dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3972/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3972/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3972",
"html_url": "https://github.com/huggingface/datasets/pull/3972",
"diff_url": "https://github.com/huggingface/datasets/pull/3972.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3972.patch",
"merged_at": 1648223480000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3971 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3971/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3971/comments | https://api.github.com/repos/huggingface/datasets/issues/3971/events | https://github.com/huggingface/datasets/pull/3971 | 1,174,329,442 | PR_kwDODunzps40sS4W | 3,971 | Applied index-filters on scores in search.py. | {
"login": "vishalsrao",
"id": 36671559,
"node_id": "MDQ6VXNlcjM2NjcxNTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/36671559?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vishalsrao",
"html_url": "https://github.com/vishalsrao",
"followers_url": "https://api.github.com/users/vishalsrao/followers",
"following_url": "https://api.github.com/users/vishalsrao/following{/other_user}",
"gists_url": "https://api.github.com/users/vishalsrao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vishalsrao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vishalsrao/subscriptions",
"organizations_url": "https://api.github.com/users/vishalsrao/orgs",
"repos_url": "https://api.github.com/users/vishalsrao/repos",
"events_url": "https://api.github.com/users/vishalsrao/events{/privacy}",
"received_events_url": "https://api.github.com/users/vishalsrao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,647,715,422,000 | 1,649,774,903,000 | 1,649,774,518,000 | CONTRIBUTOR | null | Updated search.py to resolve the issue mentioned in https://github.com/huggingface/datasets/issues/3961.
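As context, a minimal sketch (assumed semantics, not the actual diff) of the index filtering described in the line below: FAISS pads missing neighbors with index -1 and fills the corresponding scores with meaningless values, so both arrays are masked together.
```python
import numpy as np

# assumed semantics, not the actual patch: drop the padding entries that
# FAISS emits when fewer than k neighbors exist
def filter_by_index(scores: np.ndarray, indices: np.ndarray):
    keep = indices >= 0
    return scores[keep], indices[keep]

scores, indices = filter_by_index(np.array([0.1, 0.5, 1e38]), np.array([12, 7, -1]))
print(scores, indices)  # [0.1 0.5] [12  7]
```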
Applied index-filters on scores in get_nearest_examples and get_nearest_examples_batch methods of search.py. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3971/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3971/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3971",
"html_url": "https://github.com/huggingface/datasets/pull/3971",
"diff_url": "https://github.com/huggingface/datasets/pull/3971.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3971.patch",
"merged_at": 1649774518000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3970 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3970/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3970/comments | https://api.github.com/repos/huggingface/datasets/issues/3970/events | https://github.com/huggingface/datasets/pull/3970 | 1,174,327,367 | PR_kwDODunzps40sSfx | 3,970 | Apply index-filters on scores in get_nearest_examples and get_nearest… | {
"login": "vishalsrao",
"id": 36671559,
"node_id": "MDQ6VXNlcjM2NjcxNTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/36671559?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vishalsrao",
"html_url": "https://github.com/vishalsrao",
"followers_url": "https://api.github.com/users/vishalsrao/followers",
"following_url": "https://api.github.com/users/vishalsrao/following{/other_user}",
"gists_url": "https://api.github.com/users/vishalsrao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vishalsrao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vishalsrao/subscriptions",
"organizations_url": "https://api.github.com/users/vishalsrao/orgs",
"repos_url": "https://api.github.com/users/vishalsrao/repos",
"events_url": "https://api.github.com/users/vishalsrao/events{/privacy}",
"received_events_url": "https://api.github.com/users/vishalsrao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,647,714,751,000 | 1,647,715,092,000 | 1,647,715,092,000 | CONTRIBUTOR | null | Updated search.py to resolve the issue mentioned in https://github.com/huggingface/datasets/issues/3961.
Applied index-filters on scores in get_nearest_examples and get_nearest_examples_batch methods of search.py. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3970/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3970/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3970",
"html_url": "https://github.com/huggingface/datasets/pull/3970",
"diff_url": "https://github.com/huggingface/datasets/pull/3970.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3970.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3969 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3969/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3969/comments | https://api.github.com/repos/huggingface/datasets/issues/3969/events | https://github.com/huggingface/datasets/issues/3969 | 1,174,273,824 | I_kwDODunzps5F_f8g | 3,969 | Cannot preview cnn_dailymail dataset | {
"login": "hasan-besh",
"id": 75482871,
"node_id": "MDQ6VXNlcjc1NDgyODcx",
"avatar_url": "https://avatars.githubusercontent.com/u/75482871?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hasan-besh",
"html_url": "https://github.com/hasan-besh",
"followers_url": "https://api.github.com/users/hasan-besh/followers",
"following_url": "https://api.github.com/users/hasan-besh/following{/other_user}",
"gists_url": "https://api.github.com/users/hasan-besh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hasan-besh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hasan-besh/subscriptions",
"organizations_url": "https://api.github.com/users/hasan-besh/orgs",
"repos_url": "https://api.github.com/users/hasan-besh/repos",
"events_url": "https://api.github.com/users/hasan-besh/events{/privacy}",
"received_events_url": "https://api.github.com/users/hasan-besh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I guess the cache got corrupted due to a previous issue with Google Drive service.\r\n\r\nThe cache should be regenerated, e.g. by passing `download_mode=\"force_redownload\"`.\r\n\r\nCC: @severo ",
"Note that the dataset preview uses its own cache, not `datasets`' cache. So `download_mode=\"force_redownload\"` doesn't help. But yes indeed the cache must be refreshed.\r\n\r\nThe CNN Dailymail dataste is currently hosted on Google Drive, which is an unreliable host and we've had many issues with it. Unless we found another most reliable host for the data, we will keep running into issues from time to time.\r\n\r\nAt Hugging Face we're not allowed to host the CNN Dailymail data by ourselves AFAIK",
"Yes @lhoestq, I didn't explain myself well: my previous message was addressed to @severo. ",
"I remove the tag dataset-viewer, since it's more an issue with the hosting on Google Drive",
"Sounds good. I was looking for another host of this dataset but couldn't find any (yet)",
"It seems like the issue is with the streaming mode, not with the hosting:\r\n\r\n```python\r\n>>> import datasets\r\n>>> dataset = datasets.load_dataset('cnn_dailymail', name=\"3.0.0\", split=\"train\", streaming=True, download_mode=\"force_redownload\")\r\nDownloading builder script: 9.35kB [00:00, 10.2MB/s]\r\nDownloading metadata: 9.50kB [00:00, 12.2MB/s]\r\n>>> len(list(dataset))\r\n0\r\n>>> dataset = datasets.load_dataset('cnn_dailymail', name=\"3.0.0\", split=\"train\", streaming=False)\r\nReusing dataset cnn_dailymail (/home/slesage/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234)\r\n>>> len(dataset)\r\n287113\r\n```\r\n\r\nNote, in particular, that the streaming mode is failing silently, returning 0 row while I would have expected an exception instead. The result is that the dataset viewer shows `No data` instead of a detailed error.\r\n\r\n<img width=\"1511\" alt=\"Capture d’écran 2022-04-12 à 11 50 46\" src=\"https://user-images.githubusercontent.com/1676121/162935341-d50f1e73-d053-41d4-917f-e79708a0ca23.png\">\r\n",
"Well this is because the host (Google Drive) returns a document that is not the actual data, but an error page",
"Do you think that `datasets` should detect this anyway and throw an exception?",
"Yes it definitely should ! I don't have the bandwidth to work on this right now though",
"Indeed, streaming was not supported: tgz archives were not properly iterated.\r\n\r\nI've opened a PR to support streaming.\r\n\r\nHowever, keep in mind that Google Drive will keep generating issues from time to time, like 403,..."
] | 1,647,698,937,000 | 1,650,469,969,000 | 1,650,469,969,000 | NONE | null | ## Dataset viewer issue for '*cnn_dailymail*'
**Link:** https://huggingface.co/datasets/cnn_dailymail
*short description of the issue*
Am I the one who added this dataset ? Yes-No
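Building on the streaming discussion in the comments above, a minimal sketch (not part of the original report) of an explicit probe that raises instead of failing silently with "No data":
```python
# sketch of the explicit check discussed in the comments: probe the stream
# and raise instead of silently yielding zero rows
from itertools import islice

from datasets import load_dataset

stream = load_dataset("cnn_dailymail", name="3.0.0", split="train", streaming=True)
if not list(islice(stream, 1)):
    raise RuntimeError("streaming yielded no rows; archive iteration is likely broken")
```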
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3969/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3969/timeline | null | completed | null | null | false |