Dataset schema (reconstructed from the viewer header):

| Column | Type | Observed values |
|---|---|---|
| id | int64 | 599M to 2.47B |
| url | string | length 58-61 |
| repository_url | string | 1 class (constant value) |
| events_url | string | length 65-68 |
| labels | list | length 0-4 |
| active_lock_reason | null | |
| updated_at | string | length 20 |
| assignees | list | length 0-4 |
| html_url | string | length 46-51 |
| author_association | string | 4 classes |
| state_reason | string | 3 classes |
| draft | bool | 2 classes |
| milestone | dict | |
| comments | sequence | length 0-30 |
| title | string | length 1-290 |
| reactions | dict | |
| node_id | string | length 18-32 |
| pull_request | dict | |
| created_at | string | length 20 |
| comments_url | string | length 67-70 |
| body | string | length 0-228k, nullable (⌀) |
| user | dict | |
| labels_url | string | length 72-75 |
| timeline_url | string | length 67-70 |
| state | string | 2 classes |
| locked | bool | 1 class |
| number | int64 | 1 to 7.11k |
| performed_via_github_app | null | |
| closed_at | string | length 20, nullable (⌀) |
| assignee | dict | |
| is_pull_request | bool | 2 classes |

Each record below lists these fields in this order, separated by `|`.
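As an aside, a minimal sketch of loading and inspecting such a dump with `datasets`; the repository id below is purely hypothetical, since this export does not name its Hub location:

```python
from datasets import load_dataset

# Hypothetical repo id; replace with the real Hub location of this dump.
issues = load_dataset("some-org/datasets-github-issues", split="train")

print(issues.features)     # column name -> feature type, matching the table above
print(issues[0]["title"])  # e.g. the title of the first record
```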
2,473,367,848 | https://api.github.com/repos/huggingface/datasets/issues/7109 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7109/events | [] | null | 2024-08-19T13:29:12Z | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | https://github.com/huggingface/datasets/issues/7109 | MEMBER | null | null | null | [] | ConnectionError for gated datasets and unauthenticated users | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7109/reactions"
} | I_kwDODunzps6TbJko | null | 2024-08-19T13:27:45Z | https://api.github.com/repos/huggingface/datasets/issues/7109/comments | Since the Hub returns dataset info for gated datasets and unauthenticated users, there is dead code: https://github.com/huggingface/datasets/blob/98fdc9e78e6d057ca66e58a37f49d6618aab8130/src/datasets/load.py#L1846-L1852
We should remove the dead code and properly handle this case: currently we are raising a `ConnectionError` instead of a `DatasetNotFoundError` (as before).
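A rough sketch of the intended handling (the helper name and the exact exception mapping are assumptions, not the final implementation):

```python
from huggingface_hub import HfApi
from huggingface_hub.utils import GatedRepoError, RepositoryNotFoundError
from datasets.exceptions import DatasetNotFoundError

def resolve_dataset_info(repo_id: str, token=None):
    """Map Hub-side failures to DatasetNotFoundError instead of ConnectionError."""
    try:
        return HfApi().dataset_info(repo_id, token=token)
    except (GatedRepoError, RepositoryNotFoundError) as err:
        raise DatasetNotFoundError(f"Dataset '{repo_id}' not found or gated.") from err
```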
See:
- https://github.com/huggingface/dataset-viewer/issues/3025
- https://github.com/huggingface/huggingface_hub/issues/2457 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/7109/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7109/timeline | open | false | 7,109 | null | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | false |
2,470,665,327 | https://api.github.com/repos/huggingface/datasets/issues/7108 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7108/events | [] | null | 2024-08-19T13:21:12Z | [] | https://github.com/huggingface/datasets/issues/7108 | NONE | completed | null | null | [
"I don't reproduce, I was able to create a new repo: https://huggingface.co/datasets/severo/reproduce-datasets-issues-7108. Can you confirm it's still broken?",
"I have just tried again.\r\n\r\nFirefox: The `Create dataset` doesn't work. It has worked in the past. It's my preferred browser.\r\n\r\nChrome: The `Create dataset` works.\r\n\r\nIt seems to be a Firefox specific issue.",
"I have updated Firefox 129.0 (64 bit), and now the `Create dataset` is working again in Firefox.\r\n\r\nUX: It would be nice with better error messages on HuggingFace.",
"maybe an issue with the cookie. cc @Wauplin @coyotte508 "
] | website broken: Create a new dataset repository, doesn't create a new repo in Firefox | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7108/reactions"
} | I_kwDODunzps6TQ1xv | null | 2024-08-16T17:23:00Z | https://api.github.com/repos/huggingface/datasets/issues/7108/comments | ### Describe the bug
This issue is also reported here:
https://discuss.huggingface.co/t/create-a-new-dataset-repository-broken-page/102644
This page is broken.
https://huggingface.co/new-dataset
I fill in the form with my text, and click `Create Dataset`.
![Screenshot 2024-08-16 at 15 55 37](https://github.com/user-attachments/assets/de16627b-7a55-4bcf-9f0b-a48227aabfe6)
Then the form gets wiped, no repo is created, and no error message is visible in the developer console.
![Screenshot 2024-08-16 at 15 56 54](https://github.com/user-attachments/assets/0520164b-431c-40a5-9634-11fd62c4f4c3)
# Idea for improvement
For better UX, if the repo cannot be created, show an error message indicating that something went wrong.
# Workaround that works for me
```python
from huggingface_hub import HfApi

repo_id = 'simon-arc-solve-fractal-v3'
api = HfApi()

# Create the dataset repo under the authenticated user's namespace.
username = api.whoami()['name']
repo_url = api.create_repo(repo_id=repo_id, exist_ok=True, private=True, repo_type="dataset")
print(repo_url)
```
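For completeness, a hedged follow-up that reuses `api`, `username`, and `repo_id` from above; the file name is hypothetical:

```python
# Push a local file into the newly created dataset repo.
api.upload_file(
    path_or_fileobj="data.jsonl",  # hypothetical local file
    path_in_repo="data.jsonl",
    repo_id=f"{username}/{repo_id}",
    repo_type="dataset",
)
```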
### Steps to reproduce the bug
Go to https://huggingface.co/new-dataset
Fill in the form.
Click `Create dataset`.
Now the form is cleared, and the page doesn't jump anywhere.
### Expected behavior
The moment the user clicks `Create dataset`, the repo gets created and the page jumps to the created repo.
### Environment info
Firefox 128.0.3 (64-bit)
macOS Sonoma 14.5
| {
"avatar_url": "https://avatars.githubusercontent.com/u/147971?v=4",
"events_url": "https://api.github.com/users/neoneye/events{/privacy}",
"followers_url": "https://api.github.com/users/neoneye/followers",
"following_url": "https://api.github.com/users/neoneye/following{/other_user}",
"gists_url": "https://api.github.com/users/neoneye/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/neoneye",
"id": 147971,
"login": "neoneye",
"node_id": "MDQ6VXNlcjE0Nzk3MQ==",
"organizations_url": "https://api.github.com/users/neoneye/orgs",
"received_events_url": "https://api.github.com/users/neoneye/received_events",
"repos_url": "https://api.github.com/users/neoneye/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/neoneye/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neoneye/subscriptions",
"type": "User",
"url": "https://api.github.com/users/neoneye"
} | https://api.github.com/repos/huggingface/datasets/issues/7108/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7108/timeline | closed | false | 7,108 | null | 2024-08-19T06:52:48Z | null | false |
2,470,444,732 | https://api.github.com/repos/huggingface/datasets/issues/7107 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7107/events | [] | null | 2024-08-18T09:28:43Z | [] | https://github.com/huggingface/datasets/issues/7107 | NONE | completed | null | null | [
"There seems to be a PR related to the load_dataset path that went into 2.21.0 -- https://github.com/huggingface/datasets/pull/6862/files\r\n\r\nTaking a look at it now",
"+1\r\n\r\nDowngrading to 2.20.0 fixed my issue, hopefully helpful for others.",
"I tried adding a simple test to `test_load.py` with the alpaca eval dataset but the test didn't fail :(. \r\n\r\nSo looks like this might have something to do with the environment? ",
"There was an issue with the script of the \"tatsu-lab/alpaca_eval\" dataset.\r\n\r\nI was fixed with this PR: \r\n- [Fix FileNotFoundError](https://huggingface.co/datasets/tatsu-lab/alpaca_eval/discussions/2)\r\n\r\nIt should work now if you retry to load the dataset."
] | load_dataset broken in 2.21.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7107/reactions"
} | I_kwDODunzps6TP_68 | null | 2024-08-16T14:59:51Z | https://api.github.com/repos/huggingface/datasets/issues/7107/comments | ### Describe the bug
`eval_set = datasets.load_dataset("tatsu-lab/alpaca_eval", "alpaca_eval_gpt4_baseline", trust_remote_code=True)`
worked in 2.20.0 and earlier, but fails in 2.21.0.
In 2.20.0:
![Screenshot 2024-08-16 at 3 57 10 PM](https://github.com/user-attachments/assets/0516489b-8187-486d-bee8-88af3381dee9)
In 2.21.0:
![Screenshot 2024-08-16 at 3 57 24 PM](https://github.com/user-attachments/assets/bc257570-f461-41e4-8717-90a69ed7c24f)
### Steps to reproduce the bug
1. Spin up a new Google Colab
2. `pip install datasets==2.21.0`
3. `import datasets`
4. `eval_set = datasets.load_dataset("tatsu-lab/alpaca_eval", "alpaca_eval_gpt4_baseline", trust_remote_code=True)`
5. Observe that an error is thrown.
### Expected behavior
Try steps 1-5 again, but replace the datasets version with 2.20.0; it will work.
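A sketch of that stop-gap (per the comments above, the dataset script has since been fixed upstream, so pinning should only be needed on unpatched setups):

```python
# Pin the previous release first, e.g.:  pip install "datasets==2.20.0"
import datasets
assert datasets.__version__ == "2.20.0"

eval_set = datasets.load_dataset(
    "tatsu-lab/alpaca_eval", "alpaca_eval_gpt4_baseline", trust_remote_code=True
)
```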
### Environment info
- `datasets` version: 2.21.0
- Platform: Linux-6.1.85+-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.23.5
- PyArrow version: 17.0.0
- Pandas version: 2.1.4
- `fsspec` version: 2024.5.0
| {
"avatar_url": "https://avatars.githubusercontent.com/u/1911631?v=4",
"events_url": "https://api.github.com/users/anjor/events{/privacy}",
"followers_url": "https://api.github.com/users/anjor/followers",
"following_url": "https://api.github.com/users/anjor/following{/other_user}",
"gists_url": "https://api.github.com/users/anjor/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/anjor",
"id": 1911631,
"login": "anjor",
"node_id": "MDQ6VXNlcjE5MTE2MzE=",
"organizations_url": "https://api.github.com/users/anjor/orgs",
"received_events_url": "https://api.github.com/users/anjor/received_events",
"repos_url": "https://api.github.com/users/anjor/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/anjor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anjor/subscriptions",
"type": "User",
"url": "https://api.github.com/users/anjor"
} | https://api.github.com/repos/huggingface/datasets/issues/7107/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7107/timeline | closed | false | 7,107 | null | 2024-08-18T09:27:12Z | null | false |
2,469,854,262 | https://api.github.com/repos/huggingface/datasets/issues/7106 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7106/events | [] | null | 2024-08-16T09:31:37Z | [] | https://github.com/huggingface/datasets/pull/7106 | MEMBER | null | false | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7106). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | Rename LargeList.dtype to LargeList.feature | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7106/reactions"
} | PR_kwDODunzps54jntM | {
"diff_url": "https://github.com/huggingface/datasets/pull/7106.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7106",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7106.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7106"
} | 2024-08-16T09:12:04Z | https://api.github.com/repos/huggingface/datasets/issues/7106/comments | Rename `LargeList.dtype` to `LargeList.feature`.
Note that `dtype` is usually used for NumPy data types ("int64", "float32",...): see `Value.dtype`.
However, the `LargeList` attribute (like `Sequence.feature`) expects a `FeatureType` instead.
With this renaming:
- we avoid confusion about the expected type and
- we also align `LargeList` with `Sequence`.
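A brief sketch of the attribute after the rename; hedged, since it mirrors the PR description rather than a released API, and the import path may vary by version:

```python
from datasets import Features, Value
from datasets.features import LargeList

# LargeList now exposes `.feature`, mirroring Sequence.feature.
features = Features({"scores": LargeList(Value("float32"))})
print(features["scores"].feature)  # Value(dtype='float32')
```
| {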
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/7106/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7106/timeline | open | false | 7,106 | null | null | null | true |
2,468,207,039 | https://api.github.com/repos/huggingface/datasets/issues/7105 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7105/events | [] | null | 2024-08-19T15:08:49Z | [] | https://github.com/huggingface/datasets/pull/7105 | MEMBER | null | false | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7105). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Nice\r\n\r\n<img width=\"141\" alt=\"Capture d’écran 2024-08-19 à 15 25 00\" src=\"https://github.com/user-attachments/assets/18c7b3ec-a57e-45d7-9b19-0b12df9feccd\">\r\n",
"fyi the CI failure on test_py310_numpy2 is unrelated to this PR (it's a dependency install failure)"
] | Use `huggingface_hub` cache | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 2,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7105/reactions"
} | PR_kwDODunzps54eZ0D | {
"diff_url": "https://github.com/huggingface/datasets/pull/7105.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7105",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7105.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7105"
} | 2024-08-15T14:45:22Z | https://api.github.com/repos/huggingface/datasets/issues/7105/comments | wip
- use `hf_hub_download()` from `huggingface_hub` for HF files
- `datasets` cache_dir is still used for:
- caching datasets as Arrow files (that back `Dataset` objects)
- extracted archives, uncompressed files
- files downloaded via http (datasets with scripts)
- I removed code that was made for http files (and also the dummy_data / mock_download_manager stuff that happened to rely on it and has been legacy for a while now)
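To illustrate the direction, a sketch of the `huggingface_hub` call this PR adopts; the repo id and filename are hypothetical, and the actual integration is internal to `datasets`:

```python
from huggingface_hub import hf_hub_download

# Resolve one file from a dataset repo into the shared huggingface_hub cache.
path = hf_hub_download(
    repo_id="user/my_dataset",      # hypothetical repo
    filename="data/train.parquet",  # hypothetical file
    repo_type="dataset",
)
print(path)  # local path inside the hub cache
```
| {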
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | https://api.github.com/repos/huggingface/datasets/issues/7105/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7105/timeline | open | false | 7,105 | null | null | null | true |
2,467,788,212 | https://api.github.com/repos/huggingface/datasets/issues/7104 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7104/events | [] | null | 2024-08-15T10:24:13Z | [] | https://github.com/huggingface/datasets/pull/7104 | MEMBER | null | false | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7104). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005343 / 0.011353 (-0.006010) | 0.003562 / 0.011008 (-0.007447) | 0.062785 / 0.038508 (0.024277) | 0.031459 / 0.023109 (0.008349) | 0.246497 / 0.275898 (-0.029401) | 0.268258 / 0.323480 (-0.055222) | 0.003201 / 0.007986 (-0.004785) | 0.004153 / 0.004328 (-0.000175) | 0.049003 / 0.004250 (0.044753) | 0.042780 / 0.037052 (0.005728) | 0.263857 / 0.258489 (0.005368) | 0.278578 / 0.293841 (-0.015263) | 0.030357 / 0.128546 (-0.098190) | 0.012341 / 0.075646 (-0.063305) | 0.206010 / 0.419271 (-0.213262) | 0.036244 / 0.043533 (-0.007289) | 0.245799 / 0.255139 (-0.009340) | 0.265467 / 0.283200 (-0.017733) | 0.019473 / 0.141683 (-0.122210) | 1.147913 / 1.452155 (-0.304242) | 1.209968 / 1.492716 (-0.282749) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.099393 / 0.018006 (0.081387) | 0.300898 / 0.000490 (0.300408) | 0.000258 / 0.000200 (0.000058) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018888 / 0.037411 (-0.018523) | 0.062452 / 0.014526 (0.047926) | 0.073799 / 0.176557 (-0.102757) | 0.121297 / 0.737135 (-0.615839) | 0.074855 / 0.296338 (-0.221484) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283969 / 0.215209 (0.068760) | 2.808820 / 2.077655 (0.731165) | 1.446106 / 1.504120 (-0.058014) | 1.321622 / 1.541195 (-0.219573) | 1.348317 / 
1.468490 (-0.120173) | 0.738369 / 4.584777 (-3.846408) | 2.349825 / 3.745712 (-1.395887) | 2.913964 / 5.269862 (-2.355897) | 1.870585 / 4.565676 (-2.695092) | 0.080141 / 0.424275 (-0.344134) | 0.005174 / 0.007607 (-0.002433) | 0.335977 / 0.226044 (0.109933) | 3.356267 / 2.268929 (1.087338) | 1.811149 / 55.444624 (-53.633475) | 1.510685 / 6.876477 (-5.365792) | 1.524960 / 2.142072 (-0.617112) | 0.803900 / 4.805227 (-4.001328) | 0.138294 / 6.500664 (-6.362370) | 0.042241 / 0.075469 (-0.033229) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.975597 / 1.841788 (-0.866191) | 11.395109 / 8.074308 (3.320801) | 9.837724 / 10.191392 (-0.353668) | 0.141474 / 0.680424 (-0.538950) | 0.015075 / 0.534201 (-0.519126) | 0.304285 / 0.579283 (-0.274998) | 0.267845 / 0.434364 (-0.166519) | 0.342808 / 0.540337 (-0.197529) | 0.434299 / 1.386936 (-0.952637) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005612 / 0.011353 (-0.005741) | 0.003808 / 0.011008 (-0.007201) | 0.050533 / 0.038508 (0.012024) | 0.032635 / 0.023109 (0.009526) | 0.265522 / 0.275898 (-0.010376) | 0.289763 / 0.323480 (-0.033716) | 0.004395 / 0.007986 (-0.003590) | 0.002868 / 0.004328 (-0.001460) | 0.048443 / 0.004250 (0.044193) | 0.040047 / 0.037052 (0.002995) | 0.279013 / 0.258489 (0.020524) | 0.314499 / 0.293841 (0.020658) | 0.032321 / 0.128546 (-0.096225) | 0.011902 / 0.075646 (-0.063744) | 0.059827 / 0.419271 (-0.359445) | 0.034388 / 0.043533 (-0.009145) | 0.270660 / 0.255139 (0.015521) | 0.290776 / 0.283200 (0.007576) | 0.017875 / 0.141683 (-0.123808) | 1.188085 / 1.452155 (-0.264070) | 1.221384 / 1.492716 (-0.271332) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095619 / 0.018006 (0.077613) | 0.305331 / 0.000490 (0.304841) | 0.000217 / 0.000200 (0.000018) | 0.000049 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022481 / 0.037411 (-0.014930) | 0.076957 / 0.014526 (0.062431) | 0.087830 / 0.176557 (-0.088726) | 0.128290 / 0.737135 (-0.608845) | 0.090565 / 0.296338 (-0.205774) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291861 / 0.215209 (0.076652) | 2.869776 / 2.077655 (0.792121) | 1.575114 / 1.504120 (0.070994) | 1.449873 / 1.541195 (-0.091322) | 1.450333 / 1.468490 (-0.018158) | 0.723319 / 4.584777 (-3.861458) | 0.972603 / 3.745712 (-2.773109) | 2.940909 / 5.269862 (-2.328953) | 1.889664 / 4.565676 (-2.676012) | 0.078654 / 0.424275 (-0.345621) | 0.005197 / 0.007607 (-0.002410) | 0.344380 / 0.226044 (0.118336) | 3.387509 / 2.268929 (1.118580) | 1.981590 / 55.444624 (-53.463034) | 1.643214 / 6.876477 (-5.233263) | 1.640435 / 2.142072 (-0.501638) | 0.802037 / 4.805227 (-4.003191) | 0.133016 / 6.500664 (-6.367648) | 0.040861 / 0.075469 (-0.034608) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.026372 / 1.841788 (-0.815416) | 11.959931 / 8.074308 (3.885623) | 10.122523 / 10.191392 (-0.068869) | 0.144443 / 0.680424 (-0.535981) | 0.015629 / 0.534201 (-0.518572) | 0.304802 / 0.579283 (-0.274481) | 0.120538 / 0.434364 (-0.313826) | 0.343394 / 0.540337 (-0.196943) | 0.437544 / 1.386936 (-0.949392) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#84832c07f614e5f51a762166b2fa9ac27e988173 \"CML watermark\")\n"
] | remove more script docs | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7104/reactions"
} | PR_kwDODunzps54dAhE | {
"diff_url": "https://github.com/huggingface/datasets/pull/7104.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7104",
"merged_at": "2024-08-15T10:18:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7104.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7104"
} | 2024-08-15T10:13:26Z | https://api.github.com/repos/huggingface/datasets/issues/7104/comments | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | https://api.github.com/repos/huggingface/datasets/issues/7104/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7104/timeline | closed | false | 7,104 | null | 2024-08-15T10:18:25Z | null | true |
2,467,664,581 | https://api.github.com/repos/huggingface/datasets/issues/7103 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7103/events | [] | null | 2024-08-16T09:18:29Z | [] | https://github.com/huggingface/datasets/pull/7103 | MEMBER | null | false | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7103). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005255 / 0.011353 (-0.006098) | 0.003344 / 0.011008 (-0.007664) | 0.062062 / 0.038508 (0.023554) | 0.030154 / 0.023109 (0.007045) | 0.233728 / 0.275898 (-0.042170) | 0.258799 / 0.323480 (-0.064681) | 0.004105 / 0.007986 (-0.003880) | 0.002708 / 0.004328 (-0.001621) | 0.048689 / 0.004250 (0.044439) | 0.041864 / 0.037052 (0.004812) | 0.247221 / 0.258489 (-0.011268) | 0.274067 / 0.293841 (-0.019774) | 0.029108 / 0.128546 (-0.099439) | 0.011867 / 0.075646 (-0.063779) | 0.203181 / 0.419271 (-0.216090) | 0.035162 / 0.043533 (-0.008371) | 0.239723 / 0.255139 (-0.015416) | 0.256679 / 0.283200 (-0.026521) | 0.018362 / 0.141683 (-0.123321) | 1.139974 / 1.452155 (-0.312181) | 1.193946 / 1.492716 (-0.298770) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.135477 / 0.018006 (0.117471) | 0.298500 / 0.000490 (0.298011) | 0.000225 / 0.000200 (0.000025) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018743 / 0.037411 (-0.018668) | 0.062999 / 0.014526 (0.048474) | 0.073466 / 0.176557 (-0.103090) | 0.119227 / 0.737135 (-0.617908) | 0.074338 / 0.296338 (-0.222000) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.280747 / 0.215209 (0.065538) | 2.750660 / 2.077655 (0.673006) | 1.461004 / 1.504120 (-0.043116) | 1.348439 / 1.541195 (-0.192756) | 1.365209 / 
1.468490 (-0.103281) | 0.718416 / 4.584777 (-3.866361) | 2.333568 / 3.745712 (-1.412144) | 2.854639 / 5.269862 (-2.415223) | 1.821144 / 4.565676 (-2.744532) | 0.077234 / 0.424275 (-0.347041) | 0.005111 / 0.007607 (-0.002497) | 0.330749 / 0.226044 (0.104705) | 3.277189 / 2.268929 (1.008260) | 1.825886 / 55.444624 (-53.618739) | 1.515078 / 6.876477 (-5.361399) | 1.527288 / 2.142072 (-0.614785) | 0.786922 / 4.805227 (-4.018305) | 0.131539 / 6.500664 (-6.369125) | 0.042365 / 0.075469 (-0.033104) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.961809 / 1.841788 (-0.879979) | 11.184540 / 8.074308 (3.110232) | 9.473338 / 10.191392 (-0.718054) | 0.138460 / 0.680424 (-0.541964) | 0.014588 / 0.534201 (-0.519613) | 0.301503 / 0.579283 (-0.277780) | 0.261092 / 0.434364 (-0.173271) | 0.336480 / 0.540337 (-0.203857) | 0.427665 / 1.386936 (-0.959271) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005517 / 0.011353 (-0.005836) | 0.003417 / 0.011008 (-0.007591) | 0.049338 / 0.038508 (0.010830) | 0.033411 / 0.023109 (0.010302) | 0.264328 / 0.275898 (-0.011570) | 0.286750 / 0.323480 (-0.036730) | 0.004299 / 0.007986 (-0.003686) | 0.002506 / 0.004328 (-0.001823) | 0.049511 / 0.004250 (0.045260) | 0.041471 / 0.037052 (0.004418) | 0.276732 / 0.258489 (0.018243) | 0.311908 / 0.293841 (0.018067) | 0.031683 / 0.128546 (-0.096863) | 0.011700 / 0.075646 (-0.063946) | 0.060084 / 0.419271 (-0.359188) | 0.037757 / 0.043533 (-0.005776) | 0.265342 / 0.255139 (0.010203) | 0.287782 / 0.283200 (0.004583) | 0.018692 / 0.141683 (-0.122990) | 1.163462 / 1.452155 (-0.288692) | 1.219236 / 1.492716 (-0.273481) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094102 / 0.018006 (0.076096) | 0.303976 / 0.000490 (0.303487) | 0.000208 / 0.000200 (0.000008) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023252 / 0.037411 (-0.014160) | 0.076986 / 0.014526 (0.062461) | 0.088831 / 0.176557 (-0.087726) | 0.128661 / 0.737135 (-0.608475) | 0.089082 / 0.296338 (-0.207256) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297428 / 0.215209 (0.082218) | 2.951568 / 2.077655 (0.873913) | 1.597627 / 1.504120 (0.093508) | 1.466556 / 1.541195 (-0.074639) | 1.455522 / 1.468490 (-0.012968) | 0.723576 / 4.584777 (-3.861201) | 0.951113 / 3.745712 (-2.794599) | 2.889671 / 5.269862 (-2.380190) | 1.877330 / 4.565676 (-2.688347) | 0.079124 / 0.424275 (-0.345151) | 0.005146 / 0.007607 (-0.002461) | 0.344063 / 0.226044 (0.118018) | 3.432190 / 2.268929 (1.163261) | 1.927049 / 55.444624 (-53.517576) | 1.638552 / 6.876477 (-5.237924) | 1.647791 / 2.142072 (-0.494282) | 0.800526 / 4.805227 (-4.004701) | 0.131858 / 6.500664 (-6.368806) | 0.040852 / 0.075469 (-0.034618) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.025536 / 1.841788 (-0.816252) | 11.798302 / 8.074308 (3.723994) | 10.012051 / 10.191392 (-0.179341) | 0.137701 / 0.680424 (-0.542723) | 0.015151 / 0.534201 (-0.519050) | 0.298972 / 0.579283 (-0.280311) | 0.123816 / 0.434364 (-0.310548) | 0.337292 / 0.540337 (-0.203046) | 0.432729 / 1.386936 (-0.954207) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#bececdac927160b5c7e883736d7cc79d5699ad0a \"CML watermark\")\n"
] | Fix args of feature docstrings | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7103/reactions"
} | PR_kwDODunzps54clrp | {
"diff_url": "https://github.com/huggingface/datasets/pull/7103.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7103",
"merged_at": "2024-08-15T10:33:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7103.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7103"
} | 2024-08-15T08:46:08Z | https://api.github.com/repos/huggingface/datasets/issues/7103/comments | Fix Args section of feature docstrings.
Currently, some args do not appear in the docs because they are not properly parsed due to the lack of their type (between parentheses).
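For context, a hedged toy example of the docstring layout the doc parser expects; an argument is only picked up when its type follows in parentheses:

```python
class ExampleFeature:
    """Toy feature type (hypothetical) showing the expected Args layout.

    Args:
        dtype (str):
            Parsed correctly, because the type is given in parentheses.
        id:
            Missing the parenthesized type, so doc tooling may drop this entry.
    """
```
| {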
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/7103/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7103/timeline | closed | false | 7,103 | null | 2024-08-15T10:33:30Z | null | true |
2,466,893,106 | https://api.github.com/repos/huggingface/datasets/issues/7102 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7102/events | [] | null | 2024-08-15T16:17:31Z | [] | https://github.com/huggingface/datasets/issues/7102 | NONE | null | null | null | [
"Hi @lajd , I was skeptical about how we are saving the shards each as their own dataset (arrow file) in the script above, and so I updated the script to try out saving the shards in a few different file formats. From the experiments I ran, I saw binary format show significantly the best performance, with arrow and parquet about the same. However, I was unable to reproduce a drastically slower iteration speed after shuffling in any case when using the revised script -- pasting below:\r\n\r\n```python\r\nimport time\r\nfrom datasets import load_dataset, Dataset, IterableDataset\r\nfrom pathlib import Path\r\nimport torch\r\nimport pandas as pd\r\nimport pickle\r\nimport pyarrow as pa\r\nimport pyarrow.parquet as pq\r\n\r\n\r\ndef generate_random_example():\r\n return {\r\n 'inputs': torch.randn(128).tolist(),\r\n 'indices': torch.randint(0, 10000, (2, 20000)).tolist(),\r\n 'values': torch.randn(20000).tolist(),\r\n }\r\n\r\n\r\ndef generate_shard_data(examples_per_shard: int = 512):\r\n return [generate_random_example() for _ in range(examples_per_shard)]\r\n\r\n\r\ndef save_shard_as_arrow(shard_idx, save_dir, examples_per_shard):\r\n # Generate shard data\r\n shard_data = generate_shard_data(examples_per_shard)\r\n\r\n # Convert data to a Hugging Face Dataset\r\n dataset = Dataset.from_dict({\r\n 'inputs': [example['inputs'] for example in shard_data],\r\n 'indices': [example['indices'] for example in shard_data],\r\n 'values': [example['values'] for example in shard_data],\r\n })\r\n\r\n # Define the shard save path\r\n shard_write_path = Path(save_dir) / f\"shard_{shard_idx}\"\r\n\r\n # Save the dataset to disk using the Arrow format\r\n dataset.save_to_disk(str(shard_write_path))\r\n\r\n return str(shard_write_path)\r\n\r\n\r\ndef save_shard_as_parquet(shard_idx, save_dir, examples_per_shard):\r\n # Generate shard data\r\n shard_data = generate_shard_data(examples_per_shard)\r\n\r\n # Convert data to a pandas DataFrame for easy conversion to Parquet\r\n df = pd.DataFrame(shard_data)\r\n\r\n # Define the shard save path\r\n shard_write_path = Path(save_dir) / f\"shard_{shard_idx}.parquet\"\r\n\r\n # Convert DataFrame to PyArrow Table for Parquet saving\r\n table = pa.Table.from_pandas(df)\r\n\r\n # Save the table as a Parquet file\r\n pq.write_table(table, shard_write_path)\r\n\r\n return str(shard_write_path)\r\n\r\n\r\ndef save_shard_as_binary(shard_idx, save_dir, examples_per_shard):\r\n # Generate shard data\r\n shard_data = generate_shard_data(examples_per_shard)\r\n\r\n # Define the shard save path\r\n shard_write_path = Path(save_dir) / f\"shard_{shard_idx}.bin\"\r\n\r\n # Save each example as a serialized binary object using pickle\r\n with open(shard_write_path, 'wb') as f:\r\n for example in shard_data:\r\n f.write(pickle.dumps(example))\r\n\r\n return str(shard_write_path)\r\n\r\n\r\ndef generate_split_shards(save_dir, filetype=\"parquet\", num_shards: int = 16, examples_per_shard: int = 512):\r\n shard_filepaths = []\r\n for shard_idx in range(num_shards):\r\n if filetype == \"parquet\":\r\n shard_filepaths.append(save_shard_as_parquet(shard_idx, save_dir, examples_per_shard))\r\n elif filetype == \"binary\":\r\n shard_filepaths.append(save_shard_as_binary(shard_idx, save_dir, examples_per_shard))\r\n elif filetype == \"arrow\":\r\n shard_filepaths.append(save_shard_as_arrow(shard_idx, save_dir, examples_per_shard))\r\n else:\r\n raise ValueError(f\"Unsupported filetype: {filetype}. 
Choose either 'parquet' or 'binary'.\")\r\n return shard_filepaths\r\n\r\n\r\ndef _binary_dataset_generator(files):\r\n for filepath in files:\r\n with open(filepath, 'rb') as f:\r\n while True:\r\n try:\r\n example = pickle.load(f)\r\n yield example\r\n except EOFError:\r\n break\r\n\r\n\r\ndef load_binary_dataset(shard_filepaths):\r\n return IterableDataset.from_generator(\r\n _binary_dataset_generator, gen_kwargs={\"files\": shard_filepaths},\r\n )\r\n\r\n\r\ndef load_parquet_dataset(shard_filepaths):\r\n # Load the dataset as an IterableDataset\r\n return load_dataset(\r\n \"parquet\",\r\n data_files={split: shard_filepaths},\r\n streaming=True,\r\n split=split,\r\n )\r\n\r\n\r\ndef load_arrow_dataset(shard_filepaths):\r\n # Load the dataset as an IterableDataset\r\n shard_filepaths = [f + \"/data-00000-of-00001.arrow\" for f in shard_filepaths]\r\n return load_dataset(\r\n \"arrow\",\r\n data_files={split: shard_filepaths},\r\n streaming=True,\r\n split=split,\r\n )\r\n\r\n\r\ndef load_dataset_wrapper(filetype: str, shard_filepaths: list[str]):\r\n if filetype == \"parquet\":\r\n return load_parquet_dataset(shard_filepaths)\r\n if filetype == \"binary\":\r\n return load_binary_dataset(shard_filepaths)\r\n if filetype == \"arrow\":\r\n return load_arrow_dataset(shard_filepaths)\r\n else:\r\n raise ValueError(\"Unsupported filetype\")\r\n\r\n\r\n# Example usage:\r\nsplit = \"train\"\r\nsplit_save_dir = \"/tmp/random_split\"\r\n\r\nfiletype = \"binary\" # or \"parquet\", or \"arrow\"\r\nnum_shards = 16\r\n\r\nshard_filepaths = generate_split_shards(split_save_dir, filetype=filetype, num_shards=num_shards)\r\ndataset = load_dataset_wrapper(filetype=filetype, shard_filepaths=shard_filepaths)\r\n\r\ndataset = dataset.shuffle(buffer_size=100, seed=42)\r\n\r\nstart_time = time.time()\r\nfor count, item in enumerate(dataset):\r\n if count > 0 and count % 100 == 0:\r\n elapsed_time = time.time() - start_time\r\n iterations_per_second = count / elapsed_time\r\n print(f\"Processed {count} items at an average of {iterations_per_second:.2f} iterations/second\")\r\n```",
"update: I was able to reproduce the issue you described -- but ONLY if I do \r\n\r\n```\r\nrandom_dataset = random_dataset.with_format(\"numpy\")\r\n```\r\n\r\nIf I do this, I see similar numbers as what you reported. If I do not use numpy format, parquet and arrow are about 17 iterations per second regardless of whether or not we shuffle. Using binary, (again no numpy format tried with this yet), still shows the fastest speeds on average (shuffle and no shuffle) of about 850 it/sec.\r\n\r\nI suspect some issues with arrow and numpy being optimized for sequential reads, and shuffling cuases issuses... hmm"
] | Slow iteration speeds when using IterableDataset.shuffle with load_dataset(data_files=..., streaming=True) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7102/reactions"
} | I_kwDODunzps6TCc0y | null | 2024-08-14T21:44:44Z | https://api.github.com/repos/huggingface/datasets/issues/7102/comments | ### Describe the bug
When I load a dataset from a number of arrow files, as in:
```python
random_dataset = load_dataset(
"arrow",
data_files={split: shard_filepaths},
streaming=True,
split=split,
)
```
I'm able to get fast iteration speeds when iterating over the dataset without shuffling.
When I shuffle the dataset, the iteration speed is reduced by ~1000x.
It's very possible the way I'm loading dataset shards is not appropriate; if so please advise!
Thanks for the help
### Steps to reproduce the bug
Here's the full code to reproduce the issue:
- Generate a random dataset
- Create shards of data independently using `Dataset.save_to_disk()`
- The code below will generate 16 shards (arrow files) of 512 examples each
```python
import time
from pathlib import Path
from multiprocessing import Pool, cpu_count
import torch
from datasets import Dataset, load_dataset
split = "train"
split_save_dir = "/tmp/random_split"
def generate_random_example():
return {
'inputs': torch.randn(128).tolist(),
'indices': torch.randint(0, 10000, (2, 20000)).tolist(),
'values': torch.randn(20000).tolist(),
}
def generate_shard_dataset(examples_per_shard: int = 512):
dataset_dict = {
'inputs': [],
'indices': [],
'values': []
}
for _ in range(examples_per_shard):
example = generate_random_example()
dataset_dict['inputs'].append(example['inputs'])
dataset_dict['indices'].append(example['indices'])
dataset_dict['values'].append(example['values'])
return Dataset.from_dict(dataset_dict)
def save_shard(shard_idx, save_dir, examples_per_shard):
shard_dataset = generate_shard_dataset(examples_per_shard)
shard_write_path = Path(save_dir) / f"shard_{shard_idx}"
shard_dataset.save_to_disk(shard_write_path)
return str(Path(shard_write_path) / "data-00000-of-00001.arrow")
def generate_split_shards(save_dir, num_shards: int = 16, examples_per_shard: int = 512):
with Pool(cpu_count()) as pool:
args = [(m, save_dir, examples_per_shard) for m in range(num_shards)]
shard_filepaths = pool.starmap(save_shard, args)
return shard_filepaths
shard_filepaths = generate_split_shards(split_save_dir)
```
Load the dataset as an `IterableDataset`:
```python
random_dataset = load_dataset(
"arrow",
data_files={split: shard_filepaths},
streaming=True,
split=split,
)
random_dataset = random_dataset.with_format("numpy")
```
Observe the iterations/second when iterating over the dataset directly, versus applying shuffling before iterating:
Without shuffling, this gives ~1500 iterations/second
```python
start_time = time.time()
for count, item in enumerate(random_dataset):
if count > 0 and count % 100 == 0:
elapsed_time = time.time() - start_time
iterations_per_second = count / elapsed_time
print(f"Processed {count} items at an average of {iterations_per_second:.2f} iterations/second")
```
```
Processed 100 items at an average of 705.74 iterations/second
Processed 200 items at an average of 1169.68 iterations/second
Processed 300 items at an average of 1497.97 iterations/second
Processed 400 items at an average of 1739.62 iterations/second
Processed 500 items at an average of 1931.11 iterations/second
```
When shuffling, this gives ~3 iterations/second:
```python
random_dataset = random_dataset.shuffle(buffer_size=100,seed=42)
start_time = time.time()
for count, item in enumerate(random_dataset):
if count > 0 and count % 100 == 0:
elapsed_time = time.time() - start_time
iterations_per_second = count / elapsed_time
print(f"Processed {count} items at an average of {iterations_per_second:.2f} iterations/second")
```
```
Processed 100 items at an average of 3.75 iterations/second
Processed 200 items at an average of 3.93 iterations/second
```
### Expected behavior
Iterations per second should be barely affected by shuffling, especially with a small buffer size
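One hedged mitigation, based on the comment thread's observation that the slowdown appears when `.with_format("numpy")` is combined with `.shuffle()`: keep the stream in plain Python format and convert fields per example (this reuses `shard_filepaths` and `split` from above):

```python
import numpy as np
from datasets import load_dataset

# Skip .with_format("numpy"); shuffle the plain python-formatted stream.
random_dataset = load_dataset(
    "arrow",
    data_files={split: shard_filepaths},
    streaming=True,
    split=split,
).shuffle(buffer_size=100, seed=42)

for count, item in enumerate(random_dataset):
    inputs = np.asarray(item["inputs"])  # convert only the fields you need
    if count >= 200:
        break
```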
### Environment info
Datasets version: 2.21.0
Python 3.10
Ubuntu 22.04 | {
"avatar_url": "https://avatars.githubusercontent.com/u/13192126?v=4",
"events_url": "https://api.github.com/users/lajd/events{/privacy}",
"followers_url": "https://api.github.com/users/lajd/followers",
"following_url": "https://api.github.com/users/lajd/following{/other_user}",
"gists_url": "https://api.github.com/users/lajd/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lajd",
"id": 13192126,
"login": "lajd",
"node_id": "MDQ6VXNlcjEzMTkyMTI2",
"organizations_url": "https://api.github.com/users/lajd/orgs",
"received_events_url": "https://api.github.com/users/lajd/received_events",
"repos_url": "https://api.github.com/users/lajd/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lajd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lajd/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lajd"
} | https://api.github.com/repos/huggingface/datasets/issues/7102/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7102/timeline | open | false | 7,102 | null | null | null | false |
2,466,510,783 | https://api.github.com/repos/huggingface/datasets/issues/7101 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7101/events | [] | null | 2024-08-18T10:33:38Z | [] | https://github.com/huggingface/datasets/issues/7101 | NONE | null | null | null | [
"Having looked into this further it seems the core of the issue is with two different formats in the same repo.\r\n\r\nWhen the `parquet` config is first, the `WebDataset`s are loaded as `parquet`, if the `WebDataset` configs are first, the `parquet` is loaded as `WebDataset`.\r\n\r\nA workaround in my case would be to just turn the `parquet` into a `WebDataset`, although I'd still need the Dataset Viewer config limit increasing. In other cases using the same format may not be possible.\r\n\r\nRelevant code: \r\n- [HubDatasetModuleFactoryWithoutScript](https://github.com/huggingface/datasets/blob/5f42139a2c5583a55d34a2f60d537f5fba285c28/src/datasets/load.py#L964)\r\n- [get_data_patterns](https://github.com/huggingface/datasets/blob/5f42139a2c5583a55d34a2f60d537f5fba285c28/src/datasets/data_files.py#L415)"
] | `load_dataset` from Hub with `name` to specify `config` using incorrect builder type when multiple data formats are present | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7101/reactions"
} | I_kwDODunzps6TA_e_ | null | 2024-08-14T18:12:25Z | https://api.github.com/repos/huggingface/datasets/issues/7101/comments | Following [documentation](https://huggingface.co/docs/datasets/repository_structure#define-your-splits-and-subsets-in-yaml) I had defined different configs for [`Dataception`](https://huggingface.co/datasets/bigdata-pw/Dataception), a dataset of datasets:
```yaml
configs:
- config_name: dataception
data_files:
- path: dataception.parquet
split: train
default: true
- config_name: dataset_5423
data_files:
- path: datasets/5423.tar
split: train
...
- config_name: dataset_721736
data_files:
- path: datasets/721736.tar
split: train
```
The intent was for metadata to be browsable via Dataset Viewer, in addition to each individual dataset, and to allow datasets to be loaded by specifying the config/name to `load_dataset`.
While testing `load_dataset` I encountered the following error:
```python
>>> dataset = load_dataset("bigdata-pw/Dataception", "dataset_7691")
Downloading readme: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 467k/467k [00:00<00:00, 1.99MB/s]
Downloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 71.0M/71.0M [00:02<00:00, 26.8MB/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "datasets\load.py", line 2145, in load_dataset
builder_instance.download_and_prepare(
File "datasets\builder.py", line 1027, in download_and_prepare
self._download_and_prepare(
File "datasets\builder.py", line 1100, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "datasets\packaged_modules\parquet\parquet.py", line 58, in _split_generators
self.info.features = datasets.Features.from_arrow_schema(pq.read_schema(f))
^^^^^^^^^^^^^^^^^
File "pyarrow\parquet\core.py", line 2325, in read_schema
file = ParquetFile(
^^^^^^^^^^^^
File "pyarrow\parquet\core.py", line 318, in __init__
self.reader.open(
File "pyarrow\_parquet.pyx", line 1470, in pyarrow._parquet.ParquetReader.open
File "pyarrow\error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.
```
The correct file is downloaded; however, the incorrect builder type (`parquet`) is detected due to other content in the repository. It would appear that the selected config needs to be taken into account when inferring the builder.
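A possible workaround sketch (assumed and untested, with the tar URL constructed from the config pattern above): name the builder explicitly so that it is not inferred from the rest of the repository contents.
```python
from datasets import load_dataset

# select the webdataset builder by name instead of letting it be inferred
ds = load_dataset(
    "webdataset",
    data_files="https://huggingface.co/datasets/bigdata-pw/Dataception/resolve/main/datasets/7691.tar",
    split="train",
)
```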
Note that I have removed the additional configs from the repository because of this issue. There is also a limit of 3000 configs anyway, so the Dataset Viewer doesn't work as I intended. I'll add them back if it assists with testing.
| {
"avatar_url": "https://avatars.githubusercontent.com/u/106811348?v=4",
"events_url": "https://api.github.com/users/hlky/events{/privacy}",
"followers_url": "https://api.github.com/users/hlky/followers",
"following_url": "https://api.github.com/users/hlky/following{/other_user}",
"gists_url": "https://api.github.com/users/hlky/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hlky",
"id": 106811348,
"login": "hlky",
"node_id": "U_kgDOBl3P1A",
"organizations_url": "https://api.github.com/users/hlky/orgs",
"received_events_url": "https://api.github.com/users/hlky/received_events",
"repos_url": "https://api.github.com/users/hlky/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hlky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hlky/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hlky"
} | https://api.github.com/repos/huggingface/datasets/issues/7101/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7101/timeline | open | false | 7,101 | null | null | null | false |
2,465,529,414 | https://api.github.com/repos/huggingface/datasets/issues/7100 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7100/events | [] | null | 2024-08-14T11:01:51Z | [] | https://github.com/huggingface/datasets/issues/7100 | NONE | null | null | null | [] | IterableDataset: cannot resolve features from list of numpy arrays | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7100/reactions"
} | I_kwDODunzps6S9P5G | null | 2024-08-14T11:01:51Z | https://api.github.com/repos/huggingface/datasets/issues/7100/comments | ### Describe the bug
When resolving the features of an `IterableDataset` whose column contains lists of numpy arrays, a `pyarrow.lib.ArrowInvalid: Can only convert 1-dimensional array values` error is raised.
```
Traceback (most recent call last):
File "test.py", line 6
iter_ds = iter_ds._resolve_features()
File "lib/python3.10/site-packages/datasets/iterable_dataset.py", line 2876, in _resolve_features
features = _infer_features_from_batch(self.with_format(None)._head())
File "lib/python3.10/site-packages/datasets/iterable_dataset.py", line 63, in _infer_features_from_batch
pa_table = pa.Table.from_pydict(batch)
File "pyarrow/table.pxi", line 1813, in pyarrow.lib._Tabular.from_pydict
File "pyarrow/table.pxi", line 5339, in pyarrow.lib._from_pydict
File "pyarrow/array.pxi", line 374, in pyarrow.lib.asarray
File "pyarrow/array.pxi", line 344, in pyarrow.lib.array
File "pyarrow/array.pxi", line 42, in pyarrow.lib._sequence_to_array
File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Can only convert 1-dimensional array values
```
### Steps to reproduce the bug
```python
from datasets import Dataset
import numpy as np
# build a column whose values are lists of 2-D numpy arrays
iter_ds = Dataset.from_dict({'a': [[[1, 2, 3], [1, 2, 3]]]}).to_iterable_dataset().map(lambda x: {'a': [np.array(x['a'])]})
iter_ds = iter_ds._resolve_features() # errors here
```
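A minimal workaround sketch, assuming the array shape is fixed and known up front: pass explicit `features` to `map` so feature resolution is skipped; the `Array2D` shape below is an assumption taken from the toy example.
```python
from datasets import Array2D, Dataset, Features
import numpy as np

iter_ds = Dataset.from_dict({'a': [[[1, 2, 3], [1, 2, 3]]]}).to_iterable_dataset()
# declaring the features up front avoids _resolve_features() and its pyarrow cast
iter_ds = iter_ds.map(
    lambda x: {'a': [np.array(x['a'])]},
    features=Features({'a': [Array2D(shape=(2, 3), dtype='int64')]}),  # assumed shape
)
```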
### Expected behavior
features can be successfully resolved
### Environment info
- `datasets` version: 2.21.0
- Platform: Linux-5.15.0-94-generic-x86_64-with-glibc2.35
- Python version: 3.10.13
- `huggingface_hub` version: 0.23.4
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/18899212?v=4",
"events_url": "https://api.github.com/users/VeryLazyBoy/events{/privacy}",
"followers_url": "https://api.github.com/users/VeryLazyBoy/followers",
"following_url": "https://api.github.com/users/VeryLazyBoy/following{/other_user}",
"gists_url": "https://api.github.com/users/VeryLazyBoy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/VeryLazyBoy",
"id": 18899212,
"login": "VeryLazyBoy",
"node_id": "MDQ6VXNlcjE4ODk5MjEy",
"organizations_url": "https://api.github.com/users/VeryLazyBoy/orgs",
"received_events_url": "https://api.github.com/users/VeryLazyBoy/received_events",
"repos_url": "https://api.github.com/users/VeryLazyBoy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/VeryLazyBoy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VeryLazyBoy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/VeryLazyBoy"
} | https://api.github.com/repos/huggingface/datasets/issues/7100/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7100/timeline | open | false | 7,100 | null | null | null | false |
2,465,221,827 | https://api.github.com/repos/huggingface/datasets/issues/7099 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7099/events | [] | null | 2024-08-14T08:45:17Z | [] | https://github.com/huggingface/datasets/pull/7099 | MEMBER | null | false | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7099). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005649 / 0.011353 (-0.005704) | 0.003918 / 0.011008 (-0.007091) | 0.064333 / 0.038508 (0.025825) | 0.031909 / 0.023109 (0.008800) | 0.249020 / 0.275898 (-0.026878) | 0.273563 / 0.323480 (-0.049917) | 0.004184 / 0.007986 (-0.003802) | 0.002809 / 0.004328 (-0.001519) | 0.049066 / 0.004250 (0.044816) | 0.043324 / 0.037052 (0.006272) | 0.257889 / 0.258489 (-0.000600) | 0.285410 / 0.293841 (-0.008431) | 0.030681 / 0.128546 (-0.097865) | 0.012389 / 0.075646 (-0.063258) | 0.206172 / 0.419271 (-0.213100) | 0.036500 / 0.043533 (-0.007032) | 0.253674 / 0.255139 (-0.001465) | 0.272086 / 0.283200 (-0.011114) | 0.019558 / 0.141683 (-0.122125) | 1.149501 / 1.452155 (-0.302653) | 1.198036 / 1.492716 (-0.294680) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.139977 / 0.018006 (0.121971) | 0.301149 / 0.000490 (0.300659) | 0.000253 / 0.000200 (0.000053) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019137 / 0.037411 (-0.018274) | 0.062616 / 0.014526 (0.048090) | 0.075965 / 0.176557 (-0.100591) | 0.120976 / 0.737135 (-0.616159) | 0.076384 / 0.296338 (-0.219954) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283801 / 0.215209 (0.068592) | 2.794074 / 2.077655 (0.716419) | 1.475633 / 1.504120 (-0.028487) | 1.336270 / 1.541195 (-0.204925) | 1.376159 / 
1.468490 (-0.092331) | 0.718768 / 4.584777 (-3.866009) | 2.375970 / 3.745712 (-1.369742) | 2.969121 / 5.269862 (-2.300741) | 1.900236 / 4.565676 (-2.665440) | 0.082463 / 0.424275 (-0.341812) | 0.005159 / 0.007607 (-0.002448) | 0.329057 / 0.226044 (0.103012) | 3.250535 / 2.268929 (0.981607) | 1.846415 / 55.444624 (-53.598210) | 1.496622 / 6.876477 (-5.379855) | 1.538125 / 2.142072 (-0.603947) | 0.806127 / 4.805227 (-3.999101) | 0.135272 / 6.500664 (-6.365392) | 0.042668 / 0.075469 (-0.032801) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.983035 / 1.841788 (-0.858753) | 11.725835 / 8.074308 (3.651527) | 9.962818 / 10.191392 (-0.228574) | 0.131928 / 0.680424 (-0.548496) | 0.015784 / 0.534201 (-0.518417) | 0.301640 / 0.579283 (-0.277643) | 0.266251 / 0.434364 (-0.168113) | 0.339723 / 0.540337 (-0.200614) | 0.443384 / 1.386936 (-0.943552) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006301 / 0.011353 (-0.005052) | 0.004346 / 0.011008 (-0.006662) | 0.051406 / 0.038508 (0.012898) | 0.032263 / 0.023109 (0.009154) | 0.273715 / 0.275898 (-0.002183) | 0.300982 / 0.323480 (-0.022498) | 0.004533 / 0.007986 (-0.003452) | 0.002911 / 0.004328 (-0.001418) | 0.050464 / 0.004250 (0.046214) | 0.041131 / 0.037052 (0.004078) | 0.289958 / 0.258489 (0.031469) | 0.328632 / 0.293841 (0.034791) | 0.033545 / 0.128546 (-0.095001) | 0.013145 / 0.075646 (-0.062501) | 0.062241 / 0.419271 (-0.357031) | 0.035095 / 0.043533 (-0.008438) | 0.273303 / 0.255139 (0.018164) | 0.293652 / 0.283200 (0.010452) | 0.019980 / 0.141683 (-0.121703) | 1.155432 / 1.452155 (-0.296722) | 1.211409 / 1.492716 (-0.281307) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094885 / 0.018006 (0.076879) | 0.307423 / 0.000490 (0.306933) | 0.000254 / 0.000200 (0.000054) | 0.000068 / 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023462 / 0.037411 (-0.013949) | 0.081980 / 0.014526 (0.067454) | 0.089890 / 0.176557 (-0.086666) | 0.131058 / 0.737135 (-0.606078) | 0.091873 / 0.296338 (-0.204465) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.298522 / 0.215209 (0.083313) | 2.981771 / 2.077655 (0.904116) | 1.632515 / 1.504120 (0.128395) | 1.502885 / 1.541195 (-0.038310) | 1.496868 / 1.468490 (0.028377) | 0.750145 / 4.584777 (-3.834632) | 0.988853 / 3.745712 (-2.756859) | 3.029162 / 5.269862 (-2.240700) | 1.952304 / 4.565676 (-2.613373) | 0.082418 / 0.424275 (-0.341857) | 0.005724 / 0.007607 (-0.001883) | 0.356914 / 0.226044 (0.130870) | 3.523804 / 2.268929 (1.254875) | 1.983254 / 55.444624 (-53.461370) | 1.673135 / 6.876477 (-5.203342) | 1.716639 / 2.142072 (-0.425433) | 0.821568 / 4.805227 (-3.983659) | 0.136113 / 6.500664 (-6.364551) | 0.041593 / 0.075469 (-0.033876) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.044670 / 1.841788 (-0.797118) | 12.739375 / 8.074308 (4.665066) | 10.263619 / 10.191392 (0.072227) | 0.132811 / 0.680424 (-0.547613) | 0.015491 / 0.534201 (-0.518710) | 0.305545 / 0.579283 (-0.273738) | 0.129226 / 0.434364 (-0.305138) | 0.345532 / 0.540337 (-0.194805) | 0.460406 / 1.386936 (-0.926530) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ebec2691fb1e40145429f63375cef3f46d3011ab \"CML watermark\")\n"
] | Set dev version | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7099/reactions"
} | PR_kwDODunzps54U7s4 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7099.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7099",
"merged_at": "2024-08-14T08:39:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7099.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7099"
} | 2024-08-14T08:31:17Z | https://api.github.com/repos/huggingface/datasets/issues/7099/comments | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/7099/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7099/timeline | closed | false | 7,099 | null | 2024-08-14T08:39:25Z | null | true |
2,465,016,562 | https://api.github.com/repos/huggingface/datasets/issues/7098 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7098/events | [] | null | 2024-08-14T06:41:07Z | [] | https://github.com/huggingface/datasets/pull/7098 | MEMBER | null | false | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7098). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | Release: 2.21.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7098/reactions"
} | PR_kwDODunzps54UPMS | {
"diff_url": "https://github.com/huggingface/datasets/pull/7098.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7098",
"merged_at": "2024-08-14T06:41:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7098.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7098"
} | 2024-08-14T06:35:13Z | https://api.github.com/repos/huggingface/datasets/issues/7098/comments | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/7098/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7098/timeline | closed | false | 7,098 | null | 2024-08-14T06:41:06Z | null | true |
2,458,455,489 | https://api.github.com/repos/huggingface/datasets/issues/7097 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7097/events | [] | null | 2024-08-09T18:26:37Z | [] | https://github.com/huggingface/datasets/issues/7097 | NONE | null | null | null | [] | Some of DownloadConfig's properties are always being overridden in load.py | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7097/reactions"
} | I_kwDODunzps6SiQ3B | null | 2024-08-09T18:26:37Z | https://api.github.com/repos/huggingface/datasets/issues/7097/comments | ### Describe the bug
The `extract_compressed_file` and `force_extract` properties of `DownloadConfig` are always set to `True` in the `dataset_module_factory` function in `load.py`. This behavior is very annoying because previously extracted data is ignored and the archives are re-extracted every time the dataset is loaded.
See this image below:
![image](https://github.com/user-attachments/assets/9e76ebb7-09b1-4c95-adc8-a959b536f93c)
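Roughly what happens inside `dataset_module_factory` (a paraphrase of the behavior shown above, not the exact source):
```python
# the user-supplied values are overwritten unconditionally, so a caller passing
# force_extract=False never reaches the download manager with that setting
download_config = download_config.copy() if download_config else DownloadConfig()
download_config.extract_compressed_file = True
download_config.force_extract = True
```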
### Steps to reproduce the bug
1. Have a local dataset that contains archived files (zip, tar.gz, etc)
2. Build a dataset loading script to download and extract these files
3. Run the load_dataset function with a DownloadConfig that specifically set `force_extract` to False
4. The extraction process will start no matter if the archives was extracted previously
### Expected behavior
The extraction process should not run when the archives were previously extracted and `force_extract` is set to False.
### Environment info
datasets==2.20.0
python3.9 | {
"avatar_url": "https://avatars.githubusercontent.com/u/29772899?v=4",
"events_url": "https://api.github.com/users/ductai199x/events{/privacy}",
"followers_url": "https://api.github.com/users/ductai199x/followers",
"following_url": "https://api.github.com/users/ductai199x/following{/other_user}",
"gists_url": "https://api.github.com/users/ductai199x/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ductai199x",
"id": 29772899,
"login": "ductai199x",
"node_id": "MDQ6VXNlcjI5NzcyODk5",
"organizations_url": "https://api.github.com/users/ductai199x/orgs",
"received_events_url": "https://api.github.com/users/ductai199x/received_events",
"repos_url": "https://api.github.com/users/ductai199x/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ductai199x/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ductai199x/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ductai199x"
} | https://api.github.com/repos/huggingface/datasets/issues/7097/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7097/timeline | open | false | 7,097 | null | null | null | false |
2,456,929,173 | https://api.github.com/repos/huggingface/datasets/issues/7096 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7096/events | [] | null | 2024-08-15T17:25:26Z | [] | https://github.com/huggingface/datasets/pull/7096 | CONTRIBUTOR | null | false | null | [
"Hi @albertvillanova, is this PR looking okay to you? Anything else you'd like to see?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7096). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005278 / 0.011353 (-0.006075) | 0.003536 / 0.011008 (-0.007472) | 0.062604 / 0.038508 (0.024096) | 0.030704 / 0.023109 (0.007595) | 0.242178 / 0.275898 (-0.033720) | 0.264335 / 0.323480 (-0.059145) | 0.004118 / 0.007986 (-0.003868) | 0.002789 / 0.004328 (-0.001539) | 0.048813 / 0.004250 (0.044563) | 0.041787 / 0.037052 (0.004735) | 0.252369 / 0.258489 (-0.006120) | 0.280981 / 0.293841 (-0.012859) | 0.029646 / 0.128546 (-0.098900) | 0.012093 / 0.075646 (-0.063553) | 0.203036 / 0.419271 (-0.216235) | 0.035814 / 0.043533 (-0.007719) | 0.248929 / 0.255139 (-0.006210) | 0.266568 / 0.283200 (-0.016632) | 0.018761 / 0.141683 (-0.122922) | 1.188443 / 1.452155 (-0.263712) | 1.219324 / 1.492716 (-0.273392) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095256 / 0.018006 (0.077250) | 0.301069 / 0.000490 (0.300579) | 0.000219 / 0.000200 (0.000019) | 0.000054 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018541 / 0.037411 (-0.018870) | 0.067333 / 0.014526 (0.052807) | 0.075483 / 0.176557 (-0.101073) | 0.121301 / 0.737135 (-0.615834) | 0.076924 / 0.296338 (-0.219414) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284722 / 0.215209 (0.069513) | 2.817656 / 2.077655 (0.740001) | 1.483827 / 1.504120 (-0.020293) | 1.363072 / 1.541195 (-0.178123) | 1.380472 / 
1.468490 (-0.088018) | 0.739543 / 4.584777 (-3.845234) | 2.390699 / 3.745712 (-1.355013) | 2.980347 / 5.269862 (-2.289515) | 1.897881 / 4.565676 (-2.667795) | 0.078827 / 0.424275 (-0.345448) | 0.005193 / 0.007607 (-0.002414) | 0.342739 / 0.226044 (0.116695) | 3.370871 / 2.268929 (1.101942) | 1.846475 / 55.444624 (-53.598150) | 1.577860 / 6.876477 (-5.298617) | 1.628606 / 2.142072 (-0.513466) | 0.815686 / 4.805227 (-3.989541) | 0.134985 / 6.500664 (-6.365679) | 0.042330 / 0.075469 (-0.033139) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.962530 / 1.841788 (-0.879258) | 11.271449 / 8.074308 (3.197141) | 9.615452 / 10.191392 (-0.575940) | 0.140322 / 0.680424 (-0.540101) | 0.014057 / 0.534201 (-0.520144) | 0.306212 / 0.579283 (-0.273071) | 0.266758 / 0.434364 (-0.167606) | 0.341229 / 0.540337 (-0.199108) | 0.428974 / 1.386936 (-0.957962) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005980 / 0.011353 (-0.005373) | 0.003831 / 0.011008 (-0.007177) | 0.049837 / 0.038508 (0.011329) | 0.030602 / 0.023109 (0.007493) | 0.274107 / 0.275898 (-0.001791) | 0.298175 / 0.323480 (-0.025305) | 0.004492 / 0.007986 (-0.003494) | 0.002840 / 0.004328 (-0.001489) | 0.048984 / 0.004250 (0.044733) | 0.040001 / 0.037052 (0.002949) | 0.286130 / 0.258489 (0.027641) | 0.321546 / 0.293841 (0.027705) | 0.032675 / 0.128546 (-0.095871) | 0.012222 / 0.075646 (-0.063424) | 0.060321 / 0.419271 (-0.358950) | 0.034456 / 0.043533 (-0.009077) | 0.272408 / 0.255139 (0.017269) | 0.294714 / 0.283200 (0.011515) | 0.018568 / 0.141683 (-0.123115) | 1.169826 / 1.452155 (-0.282329) | 1.223906 / 1.492716 (-0.268810) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093734 / 0.018006 (0.075727) | 0.305915 / 0.000490 (0.305425) | 0.000210 / 0.000200 (0.000010) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022389 / 0.037411 (-0.015022) | 0.076640 / 0.014526 (0.062114) | 0.088660 / 0.176557 (-0.087897) | 0.128998 / 0.737135 (-0.608137) | 0.090346 / 0.296338 (-0.205992) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291642 / 0.215209 (0.076433) | 2.897270 / 2.077655 (0.819615) | 1.571564 / 1.504120 (0.067444) | 1.449533 / 1.541195 (-0.091662) | 1.458744 / 1.468490 (-0.009746) | 0.725465 / 4.584777 (-3.859312) | 0.962597 / 3.745712 (-2.783115) | 3.035056 / 5.269862 (-2.234806) | 1.902542 / 4.565676 (-2.663135) | 0.079869 / 0.424275 (-0.344407) | 0.005172 / 0.007607 (-0.002435) | 0.352099 / 0.226044 (0.126055) | 3.469058 / 2.268929 (1.200129) | 1.953402 / 55.444624 (-53.491222) | 1.647182 / 6.876477 (-5.229294) | 1.686473 / 2.142072 (-0.455599) | 0.797218 / 4.805227 (-4.008009) | 0.134161 / 6.500664 (-6.366503) | 0.041563 / 0.075469 (-0.033906) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.045855 / 1.841788 (-0.795933) | 12.271390 / 8.074308 (4.197082) | 10.186889 / 10.191392 (-0.004503) | 0.141141 / 0.680424 (-0.539283) | 0.015482 / 0.534201 (-0.518719) | 0.305699 / 0.579283 (-0.273584) | 0.128539 / 0.434364 (-0.305825) | 0.348492 / 0.540337 (-0.191845) | 0.444867 / 1.386936 (-0.942069) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#93dc73501298ccb1d31d854ba20fcf2c3b2fea8b \"CML watermark\")\n"
] | Automatically create `cache_dir` from `cache_file_name` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7096/reactions"
} | PR_kwDODunzps535Xkr | {
"diff_url": "https://github.com/huggingface/datasets/pull/7096.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7096",
"merged_at": "2024-08-15T10:13:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7096.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7096"
} | 2024-08-09T01:34:06Z | https://api.github.com/repos/huggingface/datasets/issues/7096/comments | You get a pretty unhelpful error message when specifying a `cache_file_name` in a directory that doesn't exist, e.g. `cache_file_name="./cache/data.map"`
```python
import datasets
cache_file_name="./cache/train.map"
dataset = datasets.load_dataset("ylecun/mnist")
dataset["train"].map(lambda x: x, cache_file_name=cache_file_name)
```
```
FileNotFoundError: [Errno 2] No such file or directory: '/.../cache/tmp48r61siw'
```
The directory is simple enough to create automatically, and I was expecting that to be the behavior.
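Until this lands, a minimal workaround sketch: create the cache directory yourself before calling `map`.
```python
import os

os.makedirs(os.path.dirname(cache_file_name) or ".", exist_ok=True)
dataset["train"].map(lambda x: x, cache_file_name=cache_file_name)
```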
cc: @albertvillanova @lhoestq | {
"avatar_url": "https://avatars.githubusercontent.com/u/27844407?v=4",
"events_url": "https://api.github.com/users/ringohoffman/events{/privacy}",
"followers_url": "https://api.github.com/users/ringohoffman/followers",
"following_url": "https://api.github.com/users/ringohoffman/following{/other_user}",
"gists_url": "https://api.github.com/users/ringohoffman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ringohoffman",
"id": 27844407,
"login": "ringohoffman",
"node_id": "MDQ6VXNlcjI3ODQ0NDA3",
"organizations_url": "https://api.github.com/users/ringohoffman/orgs",
"received_events_url": "https://api.github.com/users/ringohoffman/received_events",
"repos_url": "https://api.github.com/users/ringohoffman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ringohoffman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ringohoffman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ringohoffman"
} | https://api.github.com/repos/huggingface/datasets/issues/7096/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7096/timeline | closed | false | 7,096 | null | 2024-08-15T10:13:22Z | null | true |
2,454,418,130 | https://api.github.com/repos/huggingface/datasets/issues/7094 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7094/events | [] | null | 2024-08-07T21:53:06Z | [] | https://github.com/huggingface/datasets/pull/7094 | NONE | null | false | null | [] | Add Arabic Docs to Datasets | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7094/reactions"
} | PR_kwDODunzps53w2b7 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7094.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7094",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7094.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7094"
} | 2024-08-07T21:53:06Z | https://api.github.com/repos/huggingface/datasets/issues/7094/comments | Translate Docs into Arabic issue-number : #7093
[Arabic Docs](https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/ar/index.mdx)
[English Docs](https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/en/index.mdx)
@stevhliu | {
"avatar_url": "https://avatars.githubusercontent.com/u/53489256?v=4",
"events_url": "https://api.github.com/users/AhmedAlmaghz/events{/privacy}",
"followers_url": "https://api.github.com/users/AhmedAlmaghz/followers",
"following_url": "https://api.github.com/users/AhmedAlmaghz/following{/other_user}",
"gists_url": "https://api.github.com/users/AhmedAlmaghz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AhmedAlmaghz",
"id": 53489256,
"login": "AhmedAlmaghz",
"node_id": "MDQ6VXNlcjUzNDg5MjU2",
"organizations_url": "https://api.github.com/users/AhmedAlmaghz/orgs",
"received_events_url": "https://api.github.com/users/AhmedAlmaghz/received_events",
"repos_url": "https://api.github.com/users/AhmedAlmaghz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AhmedAlmaghz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AhmedAlmaghz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AhmedAlmaghz"
} | https://api.github.com/repos/huggingface/datasets/issues/7094/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7094/timeline | open | false | 7,094 | null | null | null | true |
2,454,413,074 | https://api.github.com/repos/huggingface/datasets/issues/7093 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7093/events | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | null | 2024-08-07T21:48:05Z | [] | https://github.com/huggingface/datasets/issues/7093 | NONE | null | null | null | [] | Add Arabic Docs to datasets | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7093/reactions"
} | I_kwDODunzps6SS18S | null | 2024-08-07T21:48:05Z | https://api.github.com/repos/huggingface/datasets/issues/7093/comments | ### Feature request
Add Arabic Docs to datasets
[Datasets Arabic](https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/ar/index.mdx)
### Motivation
@AhmedAlmaghz
https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/ar/index.mdx
### Your contribution
@AhmedAlmaghz
https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/ar/index.mdx | {
"avatar_url": "https://avatars.githubusercontent.com/u/53489256?v=4",
"events_url": "https://api.github.com/users/AhmedAlmaghz/events{/privacy}",
"followers_url": "https://api.github.com/users/AhmedAlmaghz/followers",
"following_url": "https://api.github.com/users/AhmedAlmaghz/following{/other_user}",
"gists_url": "https://api.github.com/users/AhmedAlmaghz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AhmedAlmaghz",
"id": 53489256,
"login": "AhmedAlmaghz",
"node_id": "MDQ6VXNlcjUzNDg5MjU2",
"organizations_url": "https://api.github.com/users/AhmedAlmaghz/orgs",
"received_events_url": "https://api.github.com/users/AhmedAlmaghz/received_events",
"repos_url": "https://api.github.com/users/AhmedAlmaghz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AhmedAlmaghz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AhmedAlmaghz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AhmedAlmaghz"
} | https://api.github.com/repos/huggingface/datasets/issues/7093/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7093/timeline | open | false | 7,093 | null | null | null | false |
2,451,393,658 | https://api.github.com/repos/huggingface/datasets/issues/7092 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7092/events | [] | null | 2024-08-08T16:35:01Z | [] | https://github.com/huggingface/datasets/issues/7092 | NONE | null | null | null | [
"I’ll take a look",
"Possible definitions of done for this issue:\r\n\r\n1. A fix so you can load your dataset specifically\r\n2. A general fix for datasets similar to this in the `datasets` library\r\n\r\nOption 1 is trivial. I think option 2 requires significant changes to the library.\r\n\r\nSince you outlined something akin to option 2 in `Expected behavior` I'm assuming that's what you'd like to see done. Is that right?\r\n\r\nIn the meantime, here's a solution for option 1:\r\n\r\n```python\r\nimport datasets\r\n\r\ndata_dir = './data/annotated/api'\r\n\r\nfeatures = datasets.Features({'id': datasets.Value(dtype='string'),\r\n 'name': datasets.Value(dtype='string'),\r\n 'author': datasets.Value(dtype='string'),\r\n 'description': datasets.Value(dtype='string'),\r\n 'tags': datasets.Sequence(feature=datasets.Value(dtype='string'), length=-1),\r\n 'likes': datasets.Value(dtype='int64'),\r\n 'viewed': datasets.Value(dtype='int64'),\r\n 'published': datasets.Value(dtype='int64'),\r\n 'date': datasets.Value(dtype='string'),\r\n 'time_retrieved': datasets.Value(dtype='string'),\r\n 'image_code': datasets.Value(dtype='string'),\r\n 'image_inputs': [{'channel': datasets.Value(dtype='int64'),\r\n 'ctype': datasets.Value(dtype='string'),\r\n 'id': datasets.Value(dtype='int64'),\r\n 'published': datasets.Value(dtype='int64'),\r\n 'sampler': {'filter': datasets.Value(dtype='string'),\r\n 'internal': datasets.Value(dtype='string'),\r\n 'srgb': datasets.Value(dtype='string'),\r\n 'vflip': datasets.Value(dtype='string'),\r\n 'wrap': datasets.Value(dtype='string')},\r\n 'src': datasets.Value(dtype='string')}],\r\n 'common_code': datasets.Value(dtype='string'),\r\n 'sound_code': datasets.Value(dtype='string'),\r\n 'sound_inputs': [{'channel': datasets.Value(dtype='int64'),\r\n 'ctype': datasets.Value(dtype='string'),\r\n 'id': datasets.Value(dtype='int64'),\r\n 'published': datasets.Value(dtype='int64'),\r\n 'sampler': {'filter': datasets.Value(dtype='string'),\r\n 'internal': datasets.Value(dtype='string'),\r\n 'srgb': datasets.Value(dtype='string'),\r\n 'vflip': datasets.Value(dtype='string'),\r\n 'wrap': datasets.Value(dtype='string')},\r\n 'src': datasets.Value(dtype='string')}],\r\n 'buffer_a_code': datasets.Value(dtype='string'),\r\n 'buffer_a_inputs': [{'channel': datasets.Value(dtype='int64'),\r\n 'ctype': datasets.Value(dtype='string'),\r\n 'id': datasets.Value(dtype='int64'),\r\n 'published': datasets.Value(dtype='int64'),\r\n 'sampler': {'filter': datasets.Value(dtype='string'),\r\n 'internal': datasets.Value(dtype='string'),\r\n 'srgb': datasets.Value(dtype='string'),\r\n 'vflip': datasets.Value(dtype='string'),\r\n 'wrap': datasets.Value(dtype='string')},\r\n 'src': datasets.Value(dtype='string')}],\r\n 'buffer_b_code': datasets.Value(dtype='string'),\r\n 'buffer_b_inputs': [{'channel': datasets.Value(dtype='int64'),\r\n 'ctype': datasets.Value(dtype='string'),\r\n 'id': datasets.Value(dtype='int64'),\r\n 'published': datasets.Value(dtype='int64'),\r\n 'sampler': {'filter': datasets.Value(dtype='string'),\r\n 'internal': datasets.Value(dtype='string'),\r\n 'srgb': datasets.Value(dtype='string'),\r\n 'vflip': datasets.Value(dtype='string'),\r\n 'wrap': datasets.Value(dtype='string')},\r\n 'src': datasets.Value(dtype='string')}],\r\n 'buffer_c_code': datasets.Value(dtype='string'),\r\n 'buffer_c_inputs': [{'channel': datasets.Value(dtype='int64'),\r\n 'ctype': datasets.Value(dtype='string'),\r\n 'id': datasets.Value(dtype='int64'),\r\n 'published': datasets.Value(dtype='int64'),\r\n 'sampler': {'filter': 
datasets.Value(dtype='string'),\r\n 'internal': datasets.Value(dtype='string'),\r\n 'srgb': datasets.Value(dtype='string'),\r\n 'vflip': datasets.Value(dtype='string'),\r\n 'wrap': datasets.Value(dtype='string')},\r\n 'src': datasets.Value(dtype='string')}],\r\n 'buffer_d_code': datasets.Value(dtype='string'),\r\n 'buffer_d_inputs': [{'channel': datasets.Value(dtype='int64'),\r\n 'ctype': datasets.Value(dtype='string'),\r\n 'id': datasets.Value(dtype='int64'),\r\n 'published': datasets.Value(dtype='int64'),\r\n 'sampler': {'filter': datasets.Value(dtype='string'),\r\n 'internal': datasets.Value(dtype='string'),\r\n 'srgb': datasets.Value(dtype='string'),\r\n 'vflip': datasets.Value(dtype='string'),\r\n 'wrap': datasets.Value(dtype='string')},\r\n 'src': datasets.Value(dtype='string')}],\r\n 'cube_a_code': datasets.Value(dtype='string'),\r\n 'cube_a_inputs': [{'channel': datasets.Value(dtype='int64'),\r\n 'ctype': datasets.Value(dtype='string'),\r\n 'id': datasets.Value(dtype='int64'),\r\n 'published': datasets.Value(dtype='int64'),\r\n 'sampler': {'filter': datasets.Value(dtype='string'),\r\n 'internal': datasets.Value(dtype='string'),\r\n 'srgb': datasets.Value(dtype='string'),\r\n 'vflip': datasets.Value(dtype='string'),\r\n 'wrap': datasets.Value(dtype='string')},\r\n 'src': datasets.Value(dtype='string')}],\r\n 'thumbnail': datasets.Value(dtype='string'),\r\n 'access': datasets.Value(dtype='string'),\r\n 'license': datasets.Value(dtype='string'),\r\n 'functions': datasets.Sequence(feature=datasets.Sequence(feature=datasets.Value(dtype='int64'), length=-1), length=-1),\r\n 'test': datasets.Value(dtype='string')})\r\n\r\ndatasets.load_dataset('json', data_dir=data_dir, features=features)\r\n```",
"As pointed out by @hvaara, you can define explicit features so that you avoid the `datasets` library having to infer them (from the first few samples).\r\n\r\nNote that the feature inference is done from the first few samples of JSON-Lines on purpose, so that the entire data does not need to be parsed twice (it would be inefficient for very large datasets).",
"I understand this. But can there be a solution that doesn't require the end user to write this shema by hand(in my case there is some fields that contain a nested structure)? \r\n\r\nMaybe offer an option to infer the shema automatically before loading the dataset. Or perhaps - trigger such a method when this error arises? \r\n\r\nIs this \"first few files\" heuristics accessible via kwargs perhaps. Maybe an error that says \r\n`Cloud not cast some structure into feature shema, consider increasing shema_files to a large number or all\".\r\n\r\nThere might be efficient implementations to solve this problem for larger datasets. ",
"@Vipitis raised a good point on the HF Discord regarding the use of a [dataset script](https://huggingface.co/docs/datasets/en/dataset_script) to provide the schema during initialization. Using this approach requires setting `trust_remote_code=True`, which is not allowed in certain evaluation frameworks.\r\n\r\nFor cases where using a dataset script is acceptable, would it be helpful to add functionality to the library (not necessarily in `load_dataset`) that can automatically discover the feature definitions and output them, so you don't have to manually define them?\r\n\r\nAlternatively, for situations where features need to be known at load-time without using a dataset script, another option could be loading the dataset schema from a file format that doesn't require `trust_remote_code=True`."
] | load_dataset with multiple jsonlines files interprets datastructure too early | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7092/reactions"
} | I_kwDODunzps6SHUx6 | null | 2024-08-06T17:42:55Z | https://api.github.com/repos/huggingface/datasets/issues/7092/comments | ### Describe the bug
likely related to #6460
using `datasets.load_dataset("json", data_dir= ... )` with multiple `.jsonl` files will error if one of the files (maybe the first file?) contains a full column of empty data.
### Steps to reproduce the bug
real world example:
data is available in this [PR branch](https://github.com/Vipitis/shadertoys-dataset/pull/3/commits/cb1e7157814f74acb09d5dc2f1be3c0a868a9933). Because my files are chunked by month, some months happen to contain only empty data for some columns; these values are `[]`. Otherwise the structure is identical across all files.
```python
from datasets import load_dataset
ds = load_dataset("json", data_dir="./data/annotated/api")
```
you get a long error trace, where in the middle it says something like
```cs
TypeError: Couldn't cast array of type struct<id: int64, src: string, ctype: string, channel: int64, sampler: struct<filter: string, wrap: string, vflip: string, srgb: string, internal: string>, published: int64> to null
```
toy example: (on request)
### Expected behavior
Some suggestions
1. give a better error message to the user
2. consider all files before deciding on a data structure for a given column.
3. if a new structure is encountered that can't be cast to null, replace the inferred null type with the new structure (maybe something for pyarrow)
as a workaround I have lazily implemented the following (essentially step 2)
```python
import os

import jsonlines

import datasets

# read every file fully before building the dataset, so that feature inference
# effectively sees all rows instead of only the first few of the first file
api_files = os.listdir("./data/annotated/api")
api_files = [f"./data/annotated/api/{f}" for f in api_files]

api_file_contents = []
for f in api_files:
    with jsonlines.open(f) as reader:
        for obj in reader:
            api_file_contents.append(obj)

ds = datasets.Dataset.from_list(api_file_contents)
```
this works fine for my use case, but it is potentially slower and less memory efficient for really large datasets (where this problem is unlikely to happen in the first place).
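A follow-up sketch for avoiding a hand-written schema, assuming the eager load above succeeded: read the inferred features back from the dataset and pass them to `load_dataset` (which accepts a `features` argument) on subsequent loads.
```python
# reuse the schema inferred by the eager load, so load_dataset never guesses
features = ds.features
ds2 = datasets.load_dataset("json", data_dir="./data/annotated/api", features=features)
```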
### Environment info
- `datasets` version: 2.20.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.9.4
- `huggingface_hub` version: 0.23.4
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2023.10.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/23384483?v=4",
"events_url": "https://api.github.com/users/Vipitis/events{/privacy}",
"followers_url": "https://api.github.com/users/Vipitis/followers",
"following_url": "https://api.github.com/users/Vipitis/following{/other_user}",
"gists_url": "https://api.github.com/users/Vipitis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Vipitis",
"id": 23384483,
"login": "Vipitis",
"node_id": "MDQ6VXNlcjIzMzg0NDgz",
"organizations_url": "https://api.github.com/users/Vipitis/orgs",
"received_events_url": "https://api.github.com/users/Vipitis/received_events",
"repos_url": "https://api.github.com/users/Vipitis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Vipitis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vipitis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Vipitis"
} | https://api.github.com/repos/huggingface/datasets/issues/7092/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7092/timeline | open | false | 7,092 | null | null | null | false |
2,449,699,490 | https://api.github.com/repos/huggingface/datasets/issues/7090 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7090/events | [] | null | 2024-08-06T00:35:05Z | [] | https://github.com/huggingface/datasets/issues/7090 | NONE | null | null | null | [] | The test test_move_script_doesnt_change_hash fails because it runs the 'python' command while the python executable has a different name | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7090/reactions"
} | I_kwDODunzps6SA3Ki | null | 2024-08-06T00:35:05Z | https://api.github.com/repos/huggingface/datasets/issues/7090/comments | ### Describe the bug
Tests should use the same Python executable path as the one they were launched with, which in the case of FreeBSD is /usr/local/bin/python3.11.
Failure:
```
if err_filename is not None:
> raise child_exception_type(errno_num, err_msg, err_filename)
E FileNotFoundError: [Errno 2] No such file or directory: 'python'
```
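A minimal sketch of the usual fix, assuming the test shells out via `subprocess`: use `sys.executable` instead of the literal string `"python"` so the child process runs under the same interpreter that launched the test suite.
```python
import subprocess
import sys

# sys.executable is the interpreter running the tests, e.g. /usr/local/bin/python3.11
result = subprocess.run([sys.executable, "-c", "print('ok')"], capture_output=True, text=True)
assert result.stdout.strip() == "ok"
```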
### Steps to reproduce the bug
regular test run using PyTest
### Expected behavior
n/a
### Environment info
FreeBSD 14.1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/271906?v=4",
"events_url": "https://api.github.com/users/yurivict/events{/privacy}",
"followers_url": "https://api.github.com/users/yurivict/followers",
"following_url": "https://api.github.com/users/yurivict/following{/other_user}",
"gists_url": "https://api.github.com/users/yurivict/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yurivict",
"id": 271906,
"login": "yurivict",
"node_id": "MDQ6VXNlcjI3MTkwNg==",
"organizations_url": "https://api.github.com/users/yurivict/orgs",
"received_events_url": "https://api.github.com/users/yurivict/received_events",
"repos_url": "https://api.github.com/users/yurivict/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yurivict/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yurivict/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yurivict"
} | https://api.github.com/repos/huggingface/datasets/issues/7090/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7090/timeline | open | false | 7,090 | null | null | null | false |
2,449,479,500 | https://api.github.com/repos/huggingface/datasets/issues/7089 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7089/events | [] | null | 2024-08-05T21:05:11Z | [] | https://github.com/huggingface/datasets/issues/7089 | NONE | null | null | null | [] | Missing pyspark dependency causes the testsuite to error out, instead of a few tests to be skipped | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7089/reactions"
} | I_kwDODunzps6SABdM | null | 2024-08-05T21:05:11Z | https://api.github.com/repos/huggingface/datasets/issues/7089/comments | ### Describe the bug
see the subject
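For reference, a minimal sketch of the usual guard (assuming pytest, and a placeholder test name): skip the pyspark-dependent module when the import fails instead of erroring out.
```python
import pytest

# skips this module at collection time when pyspark is missing,
# instead of raising ImportError and failing the whole test suite
pyspark = pytest.importorskip("pyspark")

def test_from_spark_placeholder():
    ...
```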
### Steps to reproduce the bug
regular tests
### Expected behavior
n/a
### Environment info
version 2.20.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/271906?v=4",
"events_url": "https://api.github.com/users/yurivict/events{/privacy}",
"followers_url": "https://api.github.com/users/yurivict/followers",
"following_url": "https://api.github.com/users/yurivict/following{/other_user}",
"gists_url": "https://api.github.com/users/yurivict/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yurivict",
"id": 271906,
"login": "yurivict",
"node_id": "MDQ6VXNlcjI3MTkwNg==",
"organizations_url": "https://api.github.com/users/yurivict/orgs",
"received_events_url": "https://api.github.com/users/yurivict/received_events",
"repos_url": "https://api.github.com/users/yurivict/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yurivict/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yurivict/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yurivict"
} | https://api.github.com/repos/huggingface/datasets/issues/7089/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7089/timeline | open | false | 7,089 | null | null | null | false |
2,447,383,940 | https://api.github.com/repos/huggingface/datasets/issues/7088 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7088/events | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | null | 2024-08-05T00:45:50Z | [] | https://github.com/huggingface/datasets/issues/7088 | NONE | null | null | null | [] | Disable warning when using with_format format on tensors | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7088/reactions"
} | I_kwDODunzps6R4B2E | null | 2024-08-05T00:45:50Z | https://api.github.com/repos/huggingface/datasets/issues/7088/comments | ### Feature request
If we write this code:
```python
"""Get data and define datasets."""
from enum import StrEnum
from datasets import load_dataset
from torch.utils.data import DataLoader
from torchvision import transforms
class Split(StrEnum):
"""Describes what type of split to use in the dataloader"""
TRAIN = "train"
TEST = "test"
VAL = "validation"
class ImageNetDataLoader(DataLoader):
"""Create an ImageNetDataloader"""
_preprocess_transform = transforms.Compose(
[
transforms.Resize(256),
transforms.CenterCrop(224),
]
)
def __init__(self, batch_size: int = 4, split: Split = Split.TRAIN):
dataset = (
load_dataset(
"imagenet-1k",
split=split,
trust_remote_code=True,
streaming=True,
)
.with_format("torch")
.map(self._preprocess)
)
super().__init__(dataset=dataset, batch_size=batch_size)
def _preprocess(self, data):
if data["image"].shape[0] < 3:
data["image"] = data["image"].repeat(3, 1, 1)
data["image"] = self._preprocess_transform(data["image"].float())
return data
if __name__ == "__main__":
dataloader = ImageNetDataLoader(batch_size=2)
for batch in dataloader:
print(batch["image"])
break
```
This will trigger a user warning:
```bash
datasets\formatting\torch_formatter.py:85: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
return torch.tensor(value, **{**default_dtype, **self.torch_tensor_kwargs})
```
### Motivation
This happens because of the way the formatted tensor is returned in `TorchFormatter._tensorize`.
This function handles values of different types; according to some tests, the possible value types are `int`, `numpy.ndarray` and `torch.Tensor`.
In particular, this warning is triggered when the value type is `torch.Tensor`, because this is not the suggested PyTorch way of doing it (see the snippet after the links below):
- https://stackoverflow.com/questions/55266154/pytorch-preferred-way-to-copy-a-tensor
- https://discuss.pytorch.org/t/it-is-recommended-to-use-source-tensor-clone-detach-or-sourcetensor-clone-detach-requires-grad-true/101218#:~:text=The%20warning%20points%20to%20wrapping%20a%20tensor%20in%20torch.tensor%2C%20which%20is%20not%20recommended.%0AInstead%20of%20torch.tensor(outputs)%20use%20outputs.clone().detach()%20or%20the%20same%20with%20.requires_grad_(True)%2C%20if%20necessary.
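For illustration, the two copy styles side by side in a minimal standalone snippet (not taken from the formatter itself):
```python
import torch

t = torch.tensor([1.0, 2.0])

t_copy = t.clone().detach()  # recommended copy construction: no warning
t_warn = torch.tensor(t)     # emits the UserWarning quoted above
```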
### Your contribution
A solution that I found to be working is to change the current way of doing it:
```python
return torch.tensor(value, **{**default_dtype, **self.torch_tensor_kwargs})
```
To:
```python
if (isinstance(value, torch.Tensor)):
tensor = value.clone().detach()
if self.torch_tensor_kwargs.get('requires_grad', False):
tensor.requires_grad_()
return tensor
else:
return torch.tensor(value, **{**default_dtype, **self.torch_tensor_kwargs})
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/42048782?v=4",
"events_url": "https://api.github.com/users/Haislich/events{/privacy}",
"followers_url": "https://api.github.com/users/Haislich/followers",
"following_url": "https://api.github.com/users/Haislich/following{/other_user}",
"gists_url": "https://api.github.com/users/Haislich/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Haislich",
"id": 42048782,
"login": "Haislich",
"node_id": "MDQ6VXNlcjQyMDQ4Nzgy",
"organizations_url": "https://api.github.com/users/Haislich/orgs",
"received_events_url": "https://api.github.com/users/Haislich/received_events",
"repos_url": "https://api.github.com/users/Haislich/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Haislich/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Haislich/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Haislich"
} | https://api.github.com/repos/huggingface/datasets/issues/7088/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7088/timeline | open | false | 7,088 | null | null | null | false |
2,447,158,643 | https://api.github.com/repos/huggingface/datasets/issues/7087 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7087/events | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | null | 2024-08-06T06:59:23Z | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | https://github.com/huggingface/datasets/issues/7087 | NONE | completed | null | null | [
"Thanks for reporting.\r\n\r\nIt is weird, because the language entry is in the list. See: https://github.com/huggingface/huggingface.js/blob/98e32f0ed4ee057a596f66a1dec738e5db9643d5/packages/languages/src/languages_iso_639_3.ts#L15186-L15189\r\n\r\nI have reported the issue:\r\n- https://github.com/huggingface/huggingface.js/issues/834\r\n\r\n",
"As explained in the reported issue above, the problem only appears in the autocomplete field: you can still enter the `lut` language directly in the markdown editor window."
] | Unable to create dataset card for Lushootseed language | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7087/reactions"
} | I_kwDODunzps6R3K1z | null | 2024-08-04T14:27:04Z | https://api.github.com/repos/huggingface/datasets/issues/7087/comments | ### Feature request
While I was creating the dataset which contained all documents from the Lushootseed Wikipedia, the dataset card asked me to enter which language the dataset was in. Since Lushootseed is a critically endangered language, it was not available as one of the options. Is it possible to allow entering languages that aren't available in the options?
### Motivation
I'd like to add more information about my dataset in the dataset card, and the language is one of the most important pieces of information, since the entire dataset is primarily concerned with collecting Lushootseed documents.
### Your contribution
I can submit a pull request | {
"avatar_url": "https://avatars.githubusercontent.com/u/134876525?v=4",
"events_url": "https://api.github.com/users/vaishnavsudarshan/events{/privacy}",
"followers_url": "https://api.github.com/users/vaishnavsudarshan/followers",
"following_url": "https://api.github.com/users/vaishnavsudarshan/following{/other_user}",
"gists_url": "https://api.github.com/users/vaishnavsudarshan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vaishnavsudarshan",
"id": 134876525,
"login": "vaishnavsudarshan",
"node_id": "U_kgDOCAoNbQ",
"organizations_url": "https://api.github.com/users/vaishnavsudarshan/orgs",
"received_events_url": "https://api.github.com/users/vaishnavsudarshan/received_events",
"repos_url": "https://api.github.com/users/vaishnavsudarshan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vaishnavsudarshan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vaishnavsudarshan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vaishnavsudarshan"
} | https://api.github.com/repos/huggingface/datasets/issues/7087/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7087/timeline | closed | false | 7,087 | null | 2024-08-06T06:59:22Z | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | false |
2,445,516,829 | https://api.github.com/repos/huggingface/datasets/issues/7086 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7086/events | [] | null | 2024-08-02T18:12:23Z | [] | https://github.com/huggingface/datasets/issues/7086 | NONE | null | null | null | [] | load_dataset ignores cached datasets and tries to hit HF Hub, resulting in API rate limit errors | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7086/reactions"
} | I_kwDODunzps6Rw6Ad | null | 2024-08-02T18:12:23Z | https://api.github.com/repos/huggingface/datasets/issues/7086/comments | ### Describe the bug
I have been running lm-eval-harness a lot, which has resulted in hitting an API rate limit. This seems strange, since all of the data should be cached locally. I have in fact verified this.
### Steps to reproduce the bug
1. Be Me
2. Run `load_dataset("TAUR-Lab/MuSR")`
3. Hit rate limit error
4. Dataset is in .cache/huggingface/datasets
5. ???
### Expected behavior
We should not run into API rate limits if we have cached the dataset
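Until the cache lookup is fixed, one documented workaround is full offline mode, which forces `datasets` to serve everything from the local cache; a minimal sketch (assuming the dataset is already cached):
```python
import os

# Must be set before importing datasets, since the config is read at import time.
os.environ["HF_DATASETS_OFFLINE"] = "1"

from datasets import load_dataset

dataset = load_dataset("TAUR-Lab/MuSR")  # no Hub calls, so no rate limit
```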
### Environment info
datasets 2.16.0
python 3.10.4 | {
"avatar_url": "https://avatars.githubusercontent.com/u/11379648?v=4",
"events_url": "https://api.github.com/users/tginart/events{/privacy}",
"followers_url": "https://api.github.com/users/tginart/followers",
"following_url": "https://api.github.com/users/tginart/following{/other_user}",
"gists_url": "https://api.github.com/users/tginart/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tginart",
"id": 11379648,
"login": "tginart",
"node_id": "MDQ6VXNlcjExMzc5NjQ4",
"organizations_url": "https://api.github.com/users/tginart/orgs",
"received_events_url": "https://api.github.com/users/tginart/received_events",
"repos_url": "https://api.github.com/users/tginart/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tginart/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tginart/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tginart"
} | https://api.github.com/repos/huggingface/datasets/issues/7086/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7086/timeline | open | false | 7,086 | null | null | null | false |
2,440,008,618 | https://api.github.com/repos/huggingface/datasets/issues/7085 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7085/events | [] | null | 2024-08-14T16:04:24Z | [] | https://github.com/huggingface/datasets/issues/7085 | NONE | null | null | null | [
"@lhoestq I detected this regression over on [DataDreamer](https://github.com/datadreamer-dev/DataDreamer)'s test suite. I put in these [monkey patches](https://github.com/datadreamer-dev/DataDreamer/blob/4cbaf9f39cf7bedde72bbaa68346e169788fbecb/src/_patches/datasets_reset_state_hack.py) in case that fixed it our tests failing in case it helps you figure out where this is coming from. I found it hard to reason through the resumable IterableDataset code though, so hopefully you have more intuition to implement a proper fix.",
"I believe these lines in `TypedExamplesIterable` are responsible for stopping the re-iteration of `IterableDataset`:\r\n\r\nhttps://github.com/huggingface/datasets/blob/ebec2691fb1e40145429f63375cef3f46d3011ab/src/datasets/iterable_dataset.py#L1616-L1619\r\n\r\nIn contrast to other `Iterable`s, there is no check on whether `self._state_dict` is None or not. This particular case stands out and seems less straightforward to comprehend why. @lhoestq could you please assist us with this? Your help is much appreciated."
] | [Regression] IterableDataset is broken on 2.20.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7085/reactions"
} | I_kwDODunzps6Rb5Oq | null | 2024-07-31T13:01:59Z | https://api.github.com/repos/huggingface/datasets/issues/7085/comments | ### Describe the bug
In the latest version of datasets there is a major regression, after creating an `IterableDataset` from a generator and applying a few operations (`map`, `select`), you can no longer iterate through the dataset multiple times.
The issue seems to stem from the recent addition of "resumable IterableDatasets" (#6658) (@lhoestq). It seems like it's keeping state when it shouldn't.
### Steps to reproduce the bug
Minimal Reproducible Example (comparing `datasets==2.17.0` and `datasets==2.20.0`)
```
#!/bin/bash
# List of dataset versions to test
versions=("2.17.0" "2.20.0")
# Loop through each version
for version in "${versions[@]}"; do
# Install the specific version of the datasets library
pip3 install -q datasets=="$version" 2>/dev/null
# Run the Python script
python3 - <<EOF
from datasets import IterableDataset
from datasets.features.features import Features, Value
def test_gen():
yield from [{"foo": i} for i in range(10)]
features = Features([("foo", Value("int64"))])
d = IterableDataset.from_generator(test_gen, features=features)
mapped = d.map(lambda row: {"foo": row["foo"] * 2})
column = mapped.select_columns(["foo"])
print("Version $version - Iterate Once:", list(column))
print("Version $version - Iterate Twice:", list(column))
EOF
done
```
The output looks like this:
```
Version 2.17.0 - Iterate Once: [{'foo': 0}, {'foo': 2}, {'foo': 4}, {'foo': 6}, {'foo': 8}, {'foo': 10}, {'foo': 12}, {'foo': 14}, {'foo': 16}, {'foo': 18}]
Version 2.17.0 - Iterate Twice: [{'foo': 0}, {'foo': 2}, {'foo': 4}, {'foo': 6}, {'foo': 8}, {'foo': 10}, {'foo': 12}, {'foo': 14}, {'foo': 16}, {'foo': 18}]
Version 2.20.0 - Iterate Once: [{'foo': 0}, {'foo': 2}, {'foo': 4}, {'foo': 6}, {'foo': 8}, {'foo': 10}, {'foo': 12}, {'foo': 14}, {'foo': 16}, {'foo': 18}]
Version 2.20.0 - Iterate Twice: []
```
### Expected behavior
The expected behavior is that version 2.20.0 should behave the same as 2.17.0.
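Condensed into a single assertion (same logic as the script above, minus the bash wrapper):
```python
from datasets import IterableDataset

def gen():
    yield from ({"foo": i} for i in range(3))

ds = IterableDataset.from_generator(gen).map(lambda row: {"foo": row["foo"] * 2})

# Holds on 2.17.0; on 2.20.0 the second pass yields [].
assert list(ds) == list(ds)
```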
### Environment info
`datasets==2.20.0` on any platform. | {
"avatar_url": "https://avatars.githubusercontent.com/u/5404177?v=4",
"events_url": "https://api.github.com/users/AjayP13/events{/privacy}",
"followers_url": "https://api.github.com/users/AjayP13/followers",
"following_url": "https://api.github.com/users/AjayP13/following{/other_user}",
"gists_url": "https://api.github.com/users/AjayP13/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AjayP13",
"id": 5404177,
"login": "AjayP13",
"node_id": "MDQ6VXNlcjU0MDQxNzc=",
"organizations_url": "https://api.github.com/users/AjayP13/orgs",
"received_events_url": "https://api.github.com/users/AjayP13/received_events",
"repos_url": "https://api.github.com/users/AjayP13/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AjayP13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AjayP13/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AjayP13"
} | https://api.github.com/repos/huggingface/datasets/issues/7085/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7085/timeline | open | false | 7,085 | null | null | null | false |
2,439,519,534 | https://api.github.com/repos/huggingface/datasets/issues/7084 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7084/events | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | null | 2024-07-31T09:05:58Z | [] | https://github.com/huggingface/datasets/issues/7084 | NONE | null | null | null | [] | More easily support streaming local files | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7084/reactions"
} | I_kwDODunzps6RaB0u | null | 2024-07-31T09:03:15Z | https://api.github.com/repos/huggingface/datasets/issues/7084/comments | ### Feature request
Simplify downloading and streaming datasets locally. Specifically, perhaps add an option to `load_dataset(..., streaming="download_first")` or add better support for streaming symlinked or arrow files.
### Motivation
I have downloaded FineWeb-edu locally and am currently trying to stream the dataset from the local files. I have both the raw parquet files using `huggingface-cli download --repo-type dataset HuggingFaceFW/fineweb-edu` and the processed arrow files using `load_dataset("HuggingFaceFW/fineweb-edu")`.
Streaming the files locally does not work well for either file type, for two different reasons.
**Arrow files**
When running `load_dataset("arrow", data_files={"train": "~/.cache/huggingface/datasets/HuggingFaceFW___fineweb-edu/default/0.0.0/5b89d1ea9319fe101b3cbdacd89a903aca1d6052/fineweb-edu-train-*.arrow"})` resolving the data files is fast, but because `arrow` is not included in the known [extensions file list](https://github.com/huggingface/datasets/blob/ce4a0c573920607bc6c814605734091b06b860e7/src/datasets/utils/file_utils.py#L738) , all files are opened and scanned to determine the compression type. Adding `arrow` to the known extension types resolves this issue.
**Parquet files**
When running `load_dataset("parquet", data_files={"train": "~/.cache/huggingface/hub/dataset-HuggingFaceFW___fineweb-edu/snapshots/5b89d1ea9319fe101b3cbdacd89a903aca1d6052/data/CC-MAIN-*/train-*.parquet"})` the paths do not get resolved, because the parquet files are symlinked from the blobs (which contain all files in case there are different versions). This occurs because the [pattern matching](https://github.com/huggingface/datasets/blob/ce4a0c573920607bc6c814605734091b06b860e7/src/datasets/data_files.py#L389) checks if the path is a file and does not check for symlinks. Symlinks (at least on my machine) are of type "other"; a workaround sketch follows.
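A hypothetical user-side workaround, pending the fix: resolve the symlinked snapshot files to the blobs they point at, so that pattern matching sees regular files (the cache path is the one from the example above):
```python
from pathlib import Path
from datasets import load_dataset

root = Path(
    "~/.cache/huggingface/hub/dataset-HuggingFaceFW___fineweb-edu/"
    "snapshots/5b89d1ea9319fe101b3cbdacd89a903aca1d6052"
).expanduser()
# Path.resolve() follows the symlinks down to the blob files.
files = [str(p.resolve()) for p in root.glob("data/CC-MAIN-*/train-*.parquet")]
ds = load_dataset("parquet", data_files={"train": files}, streaming=True)
```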
### Your contribution
I have created a PR for fixing arrow file streaming and symlinks. However, I have not checked locally if the tests work or new tests need to be added.
IMO, the easiest option would be to add a `streaming=download_first` option, but I'm afraid that exceeds my current knowledge of how the datasets library works. https://github.com/huggingface/datasets/pull/7083 | {
"avatar_url": "https://avatars.githubusercontent.com/u/23191892?v=4",
"events_url": "https://api.github.com/users/fschlatt/events{/privacy}",
"followers_url": "https://api.github.com/users/fschlatt/followers",
"following_url": "https://api.github.com/users/fschlatt/following{/other_user}",
"gists_url": "https://api.github.com/users/fschlatt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fschlatt",
"id": 23191892,
"login": "fschlatt",
"node_id": "MDQ6VXNlcjIzMTkxODky",
"organizations_url": "https://api.github.com/users/fschlatt/orgs",
"received_events_url": "https://api.github.com/users/fschlatt/received_events",
"repos_url": "https://api.github.com/users/fschlatt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fschlatt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fschlatt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fschlatt"
} | https://api.github.com/repos/huggingface/datasets/issues/7084/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7084/timeline | open | false | 7,084 | null | null | null | false |
2,439,518,466 | https://api.github.com/repos/huggingface/datasets/issues/7083 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7083/events | [] | null | 2024-08-15T14:08:04Z | [] | https://github.com/huggingface/datasets/pull/7083 | NONE | null | false | null | [] | fix streaming from arrow files | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7083/reactions"
} | PR_kwDODunzps5292hC | {
"diff_url": "https://github.com/huggingface/datasets/pull/7083.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7083",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7083.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7083"
} | 2024-07-31T09:02:42Z | https://api.github.com/repos/huggingface/datasets/issues/7083/comments | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/23191892?v=4",
"events_url": "https://api.github.com/users/fschlatt/events{/privacy}",
"followers_url": "https://api.github.com/users/fschlatt/followers",
"following_url": "https://api.github.com/users/fschlatt/following{/other_user}",
"gists_url": "https://api.github.com/users/fschlatt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fschlatt",
"id": 23191892,
"login": "fschlatt",
"node_id": "MDQ6VXNlcjIzMTkxODky",
"organizations_url": "https://api.github.com/users/fschlatt/orgs",
"received_events_url": "https://api.github.com/users/fschlatt/received_events",
"repos_url": "https://api.github.com/users/fschlatt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fschlatt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fschlatt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fschlatt"
} | https://api.github.com/repos/huggingface/datasets/issues/7083/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7083/timeline | open | false | 7,083 | null | null | null | true |
2,437,354,975 | https://api.github.com/repos/huggingface/datasets/issues/7082 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7082/events | [] | null | 2024-08-08T08:29:55Z | [] | https://github.com/huggingface/datasets/pull/7082 | MEMBER | null | false | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7082). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005280 / 0.011353 (-0.006073) | 0.003726 / 0.011008 (-0.007282) | 0.067028 / 0.038508 (0.028520) | 0.030833 / 0.023109 (0.007724) | 0.256888 / 0.275898 (-0.019010) | 0.271252 / 0.323480 (-0.052228) | 0.003149 / 0.007986 (-0.004836) | 0.004031 / 0.004328 (-0.000298) | 0.051178 / 0.004250 (0.046927) | 0.042751 / 0.037052 (0.005699) | 0.268385 / 0.258489 (0.009896) | 0.295547 / 0.293841 (0.001706) | 0.030218 / 0.128546 (-0.098328) | 0.012033 / 0.075646 (-0.063613) | 0.206389 / 0.419271 (-0.212882) | 0.036227 / 0.043533 (-0.007306) | 0.258778 / 0.255139 (0.003639) | 0.276027 / 0.283200 (-0.007172) | 0.020309 / 0.141683 (-0.121374) | 1.109689 / 1.452155 (-0.342466) | 1.139979 / 1.492716 (-0.352738) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093615 / 0.018006 (0.075609) | 0.301279 / 0.000490 (0.300789) | 0.000207 / 0.000200 (0.000007) | 0.000048 / 0.000054 (-0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018697 / 0.037411 (-0.018715) | 0.062627 / 0.014526 (0.048101) | 0.075119 / 0.176557 (-0.101438) | 0.119960 / 0.737135 (-0.617175) | 0.074606 / 0.296338 (-0.221732) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281042 / 0.215209 (0.065833) | 2.746232 / 2.077655 (0.668578) | 1.422351 / 1.504120 (-0.081769) | 1.290087 / 1.541195 (-0.251108) | 1.321067 / 
1.468490 (-0.147423) | 0.727514 / 4.584777 (-3.857263) | 2.407086 / 3.745712 (-1.338626) | 2.914191 / 5.269862 (-2.355670) | 1.872206 / 4.565676 (-2.693471) | 0.079538 / 0.424275 (-0.344738) | 0.005250 / 0.007607 (-0.002357) | 0.335536 / 0.226044 (0.109491) | 3.324922 / 2.268929 (1.055994) | 1.790688 / 55.444624 (-53.653936) | 1.475738 / 6.876477 (-5.400739) | 1.492465 / 2.142072 (-0.649607) | 0.812342 / 4.805227 (-3.992885) | 0.135036 / 6.500664 (-6.365628) | 0.041484 / 0.075469 (-0.033985) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.948425 / 1.841788 (-0.893363) | 11.321564 / 8.074308 (3.247256) | 9.635661 / 10.191392 (-0.555731) | 0.142793 / 0.680424 (-0.537631) | 0.014988 / 0.534201 (-0.519213) | 0.300209 / 0.579283 (-0.279074) | 0.262303 / 0.434364 (-0.172061) | 0.337927 / 0.540337 (-0.202411) | 0.427962 / 1.386936 (-0.958975) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005664 / 0.011353 (-0.005689) | 0.003946 / 0.011008 (-0.007062) | 0.050034 / 0.038508 (0.011526) | 0.031652 / 0.023109 (0.008543) | 0.281139 / 0.275898 (0.005241) | 0.299203 / 0.323480 (-0.024277) | 0.004332 / 0.007986 (-0.003653) | 0.002769 / 0.004328 (-0.001560) | 0.048336 / 0.004250 (0.044086) | 0.039744 / 0.037052 (0.002692) | 0.289344 / 0.258489 (0.030855) | 0.320470 / 0.293841 (0.026629) | 0.032372 / 0.128546 (-0.096174) | 0.012090 / 0.075646 (-0.063557) | 0.060838 / 0.419271 (-0.358433) | 0.034227 / 0.043533 (-0.009306) | 0.275007 / 0.255139 (0.019868) | 0.293455 / 0.283200 (0.010256) | 0.017203 / 0.141683 (-0.124480) | 1.141577 / 1.452155 (-0.310578) | 1.176761 / 1.492716 (-0.315955) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093562 / 0.018006 (0.075556) | 0.302695 / 0.000490 (0.302205) | 0.000215 / 0.000200 (0.000015) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022638 / 0.037411 (-0.014774) | 0.078788 / 0.014526 (0.064262) | 0.088474 / 0.176557 (-0.088082) | 0.128421 / 0.737135 (-0.608714) | 0.089297 / 0.296338 (-0.207041) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.302669 / 0.215209 (0.087459) | 2.963855 / 2.077655 (0.886200) | 1.600053 / 1.504120 (0.095933) | 1.461456 / 1.541195 (-0.079739) | 1.469877 / 1.468490 (0.001387) | 0.725752 / 4.584777 (-3.859025) | 0.968970 / 3.745712 (-2.776742) | 2.910502 / 5.269862 (-2.359359) | 1.902762 / 4.565676 (-2.662914) | 0.079977 / 0.424275 (-0.344298) | 0.005582 / 0.007607 (-0.002025) | 0.351626 / 0.226044 (0.125581) | 3.520593 / 2.268929 (1.251664) | 1.968950 / 55.444624 (-53.475675) | 1.662190 / 6.876477 (-5.214286) | 1.677909 / 2.142072 (-0.464163) | 0.791541 / 4.805227 (-4.013687) | 0.134647 / 6.500664 (-6.366017) | 0.040687 / 0.075469 (-0.034782) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.028885 / 1.841788 (-0.812903) | 11.928358 / 8.074308 (3.854050) | 10.199165 / 10.191392 (0.007773) | 0.142930 / 0.680424 (-0.537493) | 0.016479 / 0.534201 (-0.517722) | 0.302993 / 0.579283 (-0.276290) | 0.128878 / 0.434364 (-0.305486) | 0.342591 / 0.540337 (-0.197747) | 0.456735 / 1.386936 (-0.930201) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d298f5549893228c03e9e3a42727327cb83f3dff \"CML watermark\")\n"
] | Support HTTP authentication in non-streaming mode | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7082/reactions"
} | PR_kwDODunzps522dTJ | {
"diff_url": "https://github.com/huggingface/datasets/pull/7082.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7082",
"merged_at": "2024-08-08T08:24:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7082.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7082"
} | 2024-07-30T09:25:49Z | https://api.github.com/repos/huggingface/datasets/issues/7082/comments | Support HTTP authentication in non-streaming mode, by supporting the passing of HTTP storage_options in non-streaming mode.
- Note that currently, HTTP authentication is supported only in streaming mode (a usage sketch follows).
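A sketch of the intended usage; the auth kwargs mirror the documented streaming pattern, and the URL and credentials are placeholders:
```python
import aiohttp
from datasets import load_dataset

# fsspec forwards client_kwargs to aiohttp.ClientSession.
storage_options = {"client_kwargs": {"auth": aiohttp.BasicAuth("user", "pass")}}
ds = load_dataset(
    "csv",
    data_files="https://example.com/protected/data.csv",
    storage_options=storage_options,  # with this PR, no streaming=True needed
)
```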
This is necessary, for example, if a remote HTTP host requires authentication to download the data. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/7082/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7082/timeline | closed | false | 7,082 | null | 2024-08-08T08:24:06Z | null | true |
2,437,059,657 | https://api.github.com/repos/huggingface/datasets/issues/7081 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7081/events | [] | null | 2024-07-30T08:30:37Z | [] | https://github.com/huggingface/datasets/pull/7081 | MEMBER | null | false | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7081). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005665 / 0.011353 (-0.005688) | 0.004130 / 0.011008 (-0.006878) | 0.064231 / 0.038508 (0.025723) | 0.030738 / 0.023109 (0.007628) | 0.251896 / 0.275898 (-0.024002) | 0.275182 / 0.323480 (-0.048298) | 0.003364 / 0.007986 (-0.004621) | 0.003569 / 0.004328 (-0.000759) | 0.049407 / 0.004250 (0.045157) | 0.048177 / 0.037052 (0.011124) | 0.253739 / 0.258489 (-0.004751) | 0.304087 / 0.293841 (0.010246) | 0.030457 / 0.128546 (-0.098089) | 0.012762 / 0.075646 (-0.062885) | 0.214312 / 0.419271 (-0.204959) | 0.036673 / 0.043533 (-0.006860) | 0.251838 / 0.255139 (-0.003301) | 0.274049 / 0.283200 (-0.009151) | 0.021133 / 0.141683 (-0.120550) | 1.143743 / 1.452155 (-0.308412) | 1.203681 / 1.492716 (-0.289036) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094668 / 0.018006 (0.076662) | 0.300323 / 0.000490 (0.299833) | 0.000222 / 0.000200 (0.000022) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018565 / 0.037411 (-0.018846) | 0.066096 / 0.014526 (0.051570) | 0.075700 / 0.176557 (-0.100857) | 0.122185 / 0.737135 (-0.614950) | 0.077688 / 0.296338 (-0.218651) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288804 / 0.215209 (0.073595) | 2.838336 / 2.077655 (0.760681) | 1.530575 / 1.504120 (0.026455) | 1.406716 / 1.541195 (-0.134478) | 1.438885 / 
1.468490 (-0.029605) | 0.744809 / 4.584777 (-3.839968) | 2.447992 / 3.745712 (-1.297721) | 3.126261 / 5.269862 (-2.143601) | 1.999687 / 4.565676 (-2.565990) | 0.081536 / 0.424275 (-0.342739) | 0.005827 / 0.007607 (-0.001780) | 0.346367 / 0.226044 (0.120323) | 3.373268 / 2.268929 (1.104339) | 1.890293 / 55.444624 (-53.554332) | 1.590384 / 6.876477 (-5.286093) | 1.652101 / 2.142072 (-0.489971) | 0.805888 / 4.805227 (-3.999339) | 0.137687 / 6.500664 (-6.362977) | 0.044536 / 0.075469 (-0.030933) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.998393 / 1.841788 (-0.843395) | 12.392241 / 8.074308 (4.317933) | 10.055638 / 10.191392 (-0.135754) | 0.132347 / 0.680424 (-0.548077) | 0.014635 / 0.534201 (-0.519566) | 0.301939 / 0.579283 (-0.277344) | 0.266756 / 0.434364 (-0.167608) | 0.342730 / 0.540337 (-0.197608) | 0.435463 / 1.386936 (-0.951473) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006421 / 0.011353 (-0.004932) | 0.004494 / 0.011008 (-0.006514) | 0.051315 / 0.038508 (0.012806) | 0.035570 / 0.023109 (0.012460) | 0.271635 / 0.275898 (-0.004263) | 0.297082 / 0.323480 (-0.026398) | 0.004572 / 0.007986 (-0.003414) | 0.002886 / 0.004328 (-0.001443) | 0.049152 / 0.004250 (0.044902) | 0.043000 / 0.037052 (0.005948) | 0.281921 / 0.258489 (0.023432) | 0.321097 / 0.293841 (0.027256) | 0.033488 / 0.128546 (-0.095058) | 0.012835 / 0.075646 (-0.062811) | 0.061831 / 0.419271 (-0.357441) | 0.034674 / 0.043533 (-0.008858) | 0.272885 / 0.255139 (0.017746) | 0.292726 / 0.283200 (0.009527) | 0.019906 / 0.141683 (-0.121777) | 1.132234 / 1.452155 (-0.319920) | 1.155359 / 1.492716 (-0.337357) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096943 / 0.018006 (0.078937) | 0.308980 / 0.000490 (0.308490) | 0.000225 / 0.000200 (0.000025) | 0.000047 / 0.000054 (-0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023551 / 0.037411 (-0.013861) | 0.081682 / 0.014526 (0.067156) | 0.090987 / 0.176557 (-0.085569) | 0.132542 / 0.737135 (-0.604593) | 0.092844 / 0.296338 (-0.203494) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.304190 / 0.215209 (0.088981) | 2.958591 / 2.077655 (0.880936) | 1.610211 / 1.504120 (0.106091) | 1.488216 / 1.541195 (-0.052978) | 1.525429 / 1.468490 (0.056939) | 0.752811 / 4.584777 (-3.831966) | 0.967887 / 3.745712 (-2.777825) | 2.982760 / 5.269862 (-2.287102) | 1.996623 / 4.565676 (-2.569053) | 0.080783 / 0.424275 (-0.343492) | 0.005337 / 0.007607 (-0.002270) | 0.354996 / 0.226044 (0.128951) | 3.540788 / 2.268929 (1.271860) | 1.997445 / 55.444624 (-53.447179) | 1.682232 / 6.876477 (-5.194245) | 1.883198 / 2.142072 (-0.258875) | 0.814444 / 4.805227 (-3.990783) | 0.135798 / 6.500664 (-6.364867) | 0.041750 / 0.075469 (-0.033719) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.048688 / 1.841788 (-0.793099) | 13.122809 / 8.074308 (5.048501) | 10.893354 / 10.191392 (0.701962) | 0.133710 / 0.680424 (-0.546713) | 0.016357 / 0.534201 (-0.517844) | 0.304364 / 0.579283 (-0.274919) | 0.126457 / 0.434364 (-0.307907) | 0.345747 / 0.540337 (-0.194591) | 0.441620 / 1.386936 (-0.945316) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#27ea8e8ead3e76bb07aa645f882945495d238ef3 \"CML watermark\")\n"
] | Set load_from_disk path type as PathLike | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7081/reactions"
} | PR_kwDODunzps521cGm | {
"diff_url": "https://github.com/huggingface/datasets/pull/7081.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7081",
"merged_at": "2024-07-30T08:21:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7081.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7081"
} | 2024-07-30T07:00:38Z | https://api.github.com/repos/huggingface/datasets/issues/7081/comments | Set `load_from_disk` path type as `PathLike`. This way it is aligned with `save_to_disk`. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/7081/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7081/timeline | closed | false | 7,081 | null | 2024-07-30T08:21:50Z | null | true |
2,434,275,664 | https://api.github.com/repos/huggingface/datasets/issues/7080 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7080/events | [] | null | 2024-07-29T01:42:43Z | [] | https://github.com/huggingface/datasets/issues/7080 | NONE | null | null | null | [] | Generating train split takes a long time | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7080/reactions"
} | I_kwDODunzps6RGBlQ | null | 2024-07-29T01:42:43Z | https://api.github.com/repos/huggingface/datasets/issues/7080/comments | ### Describe the bug
Loading a simple webdataset takes ~45 minutes.
### Steps to reproduce the bug
```
from datasets import load_dataset
dataset = load_dataset("PixArt-alpha/SAM-LLaVA-Captions10M")
```
### Expected behavior
The dataset should load immediately as it does when loaded through a normal indexed WebDataset loader. Generating splits should be optional and there should be a message showing how to disable it.
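In the meantime, streaming mode skips split generation entirely, which may be the intended escape hatch here (a minimal sketch):
```python
from datasets import load_dataset

# streaming=True avoids materializing the "Generating train split" step.
dataset = load_dataset("PixArt-alpha/SAM-LLaVA-Captions10M", streaming=True)
```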
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-4.18.0-372.32.1.el8_6.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.14
- `huggingface_hub` version: 0.24.1
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/35648800?v=4",
"events_url": "https://api.github.com/users/alexanderswerdlow/events{/privacy}",
"followers_url": "https://api.github.com/users/alexanderswerdlow/followers",
"following_url": "https://api.github.com/users/alexanderswerdlow/following{/other_user}",
"gists_url": "https://api.github.com/users/alexanderswerdlow/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alexanderswerdlow",
"id": 35648800,
"login": "alexanderswerdlow",
"node_id": "MDQ6VXNlcjM1NjQ4ODAw",
"organizations_url": "https://api.github.com/users/alexanderswerdlow/orgs",
"received_events_url": "https://api.github.com/users/alexanderswerdlow/received_events",
"repos_url": "https://api.github.com/users/alexanderswerdlow/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alexanderswerdlow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexanderswerdlow/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alexanderswerdlow"
} | https://api.github.com/repos/huggingface/datasets/issues/7080/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7080/timeline | open | false | 7,080 | null | null | null | false |
2,433,363,298 | https://api.github.com/repos/huggingface/datasets/issues/7079 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7079/events | [] | null | 2024-07-27T20:06:44Z | [] | https://github.com/huggingface/datasets/issues/7079 | NONE | completed | null | null | [
"same issue here. @albertvillanova @lhoestq ",
"Also impacted by this issue in many of my datasets (though not all) - in my case, this also seems to affect datasets that have been updated recently. Git cloning and the web interface still work:\r\n- https://huggingface.co/api/datasets/acmc/cheat_reduced\r\n- https://huggingface.co/api/datasets/acmc/ghostbuster_reuter_reduced\r\n- https://huggingface.co/api/datasets/acmc/ghostbuster_wp_reduced\r\n- https://huggingface.co/api/datasets/acmc/ghostbuster_essay_reduced\r\n\r\nOddly enough, the system status looks good: https://status.huggingface.co/",
"Hey how to download these datasets using git cloning?",
"Also reported here\r\nhttps://github.com/huggingface/huggingface_hub/issues/2425",
"I have been getting the same error for the past 8 hours as well",
"Same error since yesterday, fails on any new dataset created",
"Same here. I cannot download the HelpSteer2 dataset: https://huggingface.co/datasets/nvidia/HelpSteer2 which has been uploaded about a month ago",
"> Hey how to download these datasets using git cloning?\n\nYou'll find a guide [here](https://huggingface.co/docs/hub/en/datasets-downloading) 👍🏻",
"Same here for imdb dataset",
"It also happens with this dataset: https://huggingface.co/datasets/ylacombe/jenny-tts-6h-tagged",
"same here for all datsets in the sentence-tramsformers repo and related collections.\r\n\r\nsame issue with dataset that i recently uploaded on my repo.\r\nseems that the upload date of the datset is not relevat (getting this issue with both old datasets and newer ones)\r\n\r\nfor some reason, i was able to get the dataset by turning it private and accessing it with the id token (accessing it as public while providing the token doesn not work)..... but i can say if that is just a random coincidence.\r\n\r\nseems not much deterministic, for a specific dataset (sentence-transformer nq ) , that was \"down\" since some hours , worked for like 5-10 minutes, then stopped again\r\n\r\nnow even this dataset (that worked since some min ago, and that i'm in the middle of processing steps) stopped working: _https://huggingface.co/datasets/bobox/msmarco-bm25-EduScore/_\r\n\r\nas already pointed out, there are no updates on **_https://status.huggingface.co/_**\r\n\r\n\\n\r\n\\n\r\n\r\nan example of the whole error message:\r\n``` \r\nHfHubHTTPError \r\n\r\n[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)\r\n 2592 \r\n 2593 # Create a dataset builder\r\n-> 2594 builder_instance = load_dataset_builder(\r\n 2595 path=path,\r\n 2596 name=name,\r\n\r\n[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, use_auth_token, storage_options, trust_remote_code, _require_default_config_name, **config_kwargs)\r\n 2264 download_config = download_config.copy() if download_config else DownloadConfig()\r\n 2265 download_config.storage_options.update(storage_options)\r\n-> 2266 dataset_module = dataset_module_factory(\r\n 2267 path,\r\n 2268 revision=revision,\r\n\r\n[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs)\r\n 1912 f\"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}\"\r\n 1913 ) from None\r\n-> 1914 raise e1 from None\r\n 1915 else:\r\n 1916 raise FileNotFoundError(\r\n\r\n[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs)\r\n 1832 hf_api = HfApi(config.HF_ENDPOINT)\r\n 1833 try:\r\n-> 1834 dataset_info = hf_api.dataset_info(\r\n 1835 repo_id=path,\r\n 1836 revision=revision,\r\n\r\n[/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_validators.py](https://localhost:8080/#) in _inner_fn(*args, **kwargs)\r\n 112 kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs)\r\n 113 \r\n--> 114 return fn(*args, **kwargs)\r\n 115 \r\n 116 return _inner_fn # type: 
ignore\r\n\r\n[/usr/local/lib/python3.10/dist-packages/huggingface_hub/hf_api.py](https://localhost:8080/#) in dataset_info(self, repo_id, revision, timeout, files_metadata, token)\r\n 2362 \r\n 2363 r = get_session().get(path, headers=headers, timeout=timeout, params=params)\r\n-> 2364 hf_raise_for_status(r)\r\n 2365 data = r.json()\r\n 2366 return DatasetInfo(**data)\r\n\r\n[/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_errors.py](https://localhost:8080/#) in hf_raise_for_status(response, endpoint_name)\r\n 369 # Convert `HTTPError` into a `HfHubHTTPError` to display request information\r\n 370 # as well (request id and/or server error message)\r\n--> 371 raise HfHubHTTPError(str(e), response=response) from e\r\n 372 \r\n 373 \r\n\r\nHfHubHTTPError: 500 Server Error: Internal Server Error for url: https://huggingface.co/api/datasets/bobox/xSum-processed (Request ID: Root=1-66a527f0-756cfbc35cc466f075382289;7d5dc06a-37e9-4c22-874d-92b0b1023276)\r\n\r\nInternal Error - We're working hard to fix this as soon as possible!\r\n``` ",
"we're working on a fix !",
"We fixed the issue, you can load datasets again, sorry for the inconvenience !",
"I can confirm, it's working now. I can load the dataset, yay. Thank you @lhoestq ",
"@lhoestq thank you so much! "
] | HfHubHTTPError: 500 Server Error: Internal Server Error for url: | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 4,
"heart": 3,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 7,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7079/reactions"
} | I_kwDODunzps6RCi1i | null | 2024-07-27T08:21:03Z | https://api.github.com/repos/huggingface/datasets/issues/7079/comments | ### Describe the bug
Newly uploaded datasets have yielded an error since yesterday.
Old datasets work fine.
It seems the datasets API server returns a 500.
I'm getting the same error when I invoke `load_dataset` with my dataset.
There is a long discussion about it here, but I'm not sure anyone from Hugging Face has seen it.
https://discuss.huggingface.co/t/hfhubhttperror-500-server-error-internal-server-error-for-url/99580/1
### Steps to reproduce the bug
This API URL:
https://huggingface.co/api/datasets/neoneye/simon-arc-shape-v4-rev3
responds with:
```
{"error":"Internal Error - We're working hard to fix this as soon as possible!"}
```
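The same check from Python — a minimal sketch using only the `requests` package; the output shown is what was observed during the incident:
```python
import requests

resp = requests.get("https://huggingface.co/api/datasets/neoneye/simon-arc-shape-v4-rev3")
print(resp.status_code)  # 500 while the incident was ongoing
print(resp.json())       # {'error': "Internal Error - We're working hard to fix this as soon as possible!"}
```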
### Expected behavior
The API should return no error for newer datasets.
With older datasets, I can load the data fine.
### Environment info
# Browser
When I access the API in the browser:
https://huggingface.co/api/datasets/neoneye/simon-arc-shape-v4-rev3
```
{"error":"Internal Error - We're working hard to fix this as soon as possible!"}
```
### Request headers
```
Accept text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
Accept-Encoding gzip, deflate, br, zstd
Accept-Language en-US,en;q=0.5
Connection keep-alive
Host huggingface.co
Priority u=1
Sec-Fetch-Dest document
Sec-Fetch-Mode navigate
Sec-Fetch-Site cross-site
Upgrade-Insecure-Requests 1
User-Agent Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:127.0) Gecko/20100101 Firefox/127.0
```
### Response headers
```
X-Firefox-Spdy h2
access-control-allow-origin https://huggingface.co
access-control-expose-headers X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,X-Total-Count,ETag,Link,Accept-Ranges,Content-Range
content-length 80
content-type application/json; charset=utf-8
cross-origin-opener-policy same-origin
date Fri, 26 Jul 2024 19:09:45 GMT
etag W/"50-9qrwU+BNI4SD0Fe32p/nofkmv0c"
referrer-policy strict-origin-when-cross-origin
vary Origin
via 1.1 1624c79cd07e6098196697a6a7907e4a.cloudfront.net (CloudFront)
x-amz-cf-id SP8E7n5qRaP6i9c9G83dNAiOzJBU4GXSrDRAcVNTomY895K35H0nJQ==
x-amz-cf-pop CPH50-C1
x-cache Error from cloudfront
x-error-message Internal Error - We're working hard to fix this as soon as possible!
x-powered-by huggingface-moon
x-request-id Root=1-66a3f479-026417465ef42f49349fdca1
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/147971?v=4",
"events_url": "https://api.github.com/users/neoneye/events{/privacy}",
"followers_url": "https://api.github.com/users/neoneye/followers",
"following_url": "https://api.github.com/users/neoneye/following{/other_user}",
"gists_url": "https://api.github.com/users/neoneye/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/neoneye",
"id": 147971,
"login": "neoneye",
"node_id": "MDQ6VXNlcjE0Nzk3MQ==",
"organizations_url": "https://api.github.com/users/neoneye/orgs",
"received_events_url": "https://api.github.com/users/neoneye/received_events",
"repos_url": "https://api.github.com/users/neoneye/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/neoneye/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neoneye/subscriptions",
"type": "User",
"url": "https://api.github.com/users/neoneye"
} | https://api.github.com/repos/huggingface/datasets/issues/7079/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7079/timeline | closed | false | 7,079 | null | 2024-07-27T19:52:30Z | null | false |
2,433,270,271 | https://api.github.com/repos/huggingface/datasets/issues/7078 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7078/events | [] | null | 2024-07-27T05:50:57Z | [] | https://github.com/huggingface/datasets/pull/7078 | MEMBER | null | false | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7078). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005262 / 0.011353 (-0.006090) | 0.003733 / 0.011008 (-0.007275) | 0.062619 / 0.038508 (0.024111) | 0.029491 / 0.023109 (0.006382) | 0.248947 / 0.275898 (-0.026951) | 0.278741 / 0.323480 (-0.044739) | 0.003173 / 0.007986 (-0.004813) | 0.002777 / 0.004328 (-0.001551) | 0.049344 / 0.004250 (0.045094) | 0.043103 / 0.037052 (0.006051) | 0.252402 / 0.258489 (-0.006087) | 0.288030 / 0.293841 (-0.005811) | 0.029425 / 0.128546 (-0.099121) | 0.012058 / 0.075646 (-0.063589) | 0.204509 / 0.419271 (-0.214762) | 0.035721 / 0.043533 (-0.007812) | 0.249121 / 0.255139 (-0.006018) | 0.272171 / 0.283200 (-0.011029) | 0.019515 / 0.141683 (-0.122168) | 1.130088 / 1.452155 (-0.322067) | 1.148856 / 1.492716 (-0.343860) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093613 / 0.018006 (0.075607) | 0.300830 / 0.000490 (0.300340) | 0.000219 / 0.000200 (0.000019) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018381 / 0.037411 (-0.019030) | 0.061515 / 0.014526 (0.046989) | 0.074370 / 0.176557 (-0.102186) | 0.120751 / 0.737135 (-0.616384) | 0.074971 / 0.296338 (-0.221367) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.280499 / 0.215209 (0.065290) | 2.763114 / 2.077655 (0.685459) | 1.458696 / 1.504120 (-0.045424) | 1.331214 / 1.541195 (-0.209981) | 1.343157 / 
1.468490 (-0.125333) | 0.732775 / 4.584777 (-3.852002) | 2.381485 / 3.745712 (-1.364227) | 2.930117 / 5.269862 (-2.339745) | 1.887617 / 4.565676 (-2.678059) | 0.080543 / 0.424275 (-0.343732) | 0.005136 / 0.007607 (-0.002471) | 0.336924 / 0.226044 (0.110879) | 3.343071 / 2.268929 (1.074142) | 1.823677 / 55.444624 (-53.620948) | 1.572300 / 6.876477 (-5.304176) | 1.564040 / 2.142072 (-0.578032) | 0.802369 / 4.805227 (-4.002858) | 0.135198 / 6.500664 (-6.365466) | 0.041499 / 0.075469 (-0.033970) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.961202 / 1.841788 (-0.880585) | 11.275695 / 8.074308 (3.201387) | 9.508052 / 10.191392 (-0.683340) | 0.136921 / 0.680424 (-0.543503) | 0.014055 / 0.534201 (-0.520146) | 0.300076 / 0.579283 (-0.279208) | 0.263403 / 0.434364 (-0.170961) | 0.340871 / 0.540337 (-0.199466) | 0.433452 / 1.386936 (-0.953484) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005683 / 0.011353 (-0.005670) | 0.003596 / 0.011008 (-0.007412) | 0.049913 / 0.038508 (0.011405) | 0.033275 / 0.023109 (0.010166) | 0.266011 / 0.275898 (-0.009887) | 0.295182 / 0.323480 (-0.028298) | 0.004336 / 0.007986 (-0.003649) | 0.002787 / 0.004328 (-0.001541) | 0.049035 / 0.004250 (0.044784) | 0.039833 / 0.037052 (0.002781) | 0.283520 / 0.258489 (0.025031) | 0.317437 / 0.293841 (0.023596) | 0.032578 / 0.128546 (-0.095968) | 0.011744 / 0.075646 (-0.063902) | 0.060174 / 0.419271 (-0.359097) | 0.034182 / 0.043533 (-0.009351) | 0.271821 / 0.255139 (0.016682) | 0.292189 / 0.283200 (0.008989) | 0.017045 / 0.141683 (-0.124638) | 1.127742 / 1.452155 (-0.324413) | 1.180621 / 1.492716 (-0.312095) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093798 / 0.018006 (0.075792) | 0.310715 / 0.000490 (0.310226) | 0.000213 / 0.000200 (0.000013) | 0.000046 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022379 / 0.037411 (-0.015032) | 0.076823 / 0.014526 (0.062298) | 0.088086 / 0.176557 (-0.088471) | 0.128926 / 0.737135 (-0.608210) | 0.089187 / 0.296338 (-0.207151) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293982 / 0.215209 (0.078773) | 2.930932 / 2.077655 (0.853277) | 1.576425 / 1.504120 (0.072305) | 1.445163 / 1.541195 (-0.096031) | 1.462118 / 1.468490 (-0.006372) | 0.725816 / 4.584777 (-3.858961) | 0.949767 / 3.745712 (-2.795945) | 2.832821 / 5.269862 (-2.437041) | 1.897064 / 4.565676 (-2.668612) | 0.079853 / 0.424275 (-0.344423) | 0.005352 / 0.007607 (-0.002255) | 0.344551 / 0.226044 (0.118507) | 3.442506 / 2.268929 (1.173578) | 1.938700 / 55.444624 (-53.505925) | 1.662205 / 6.876477 (-5.214272) | 1.769061 / 2.142072 (-0.373011) | 0.818089 / 4.805227 (-3.987139) | 0.134612 / 6.500664 (-6.366052) | 0.040419 / 0.075469 (-0.035050) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.032267 / 1.841788 (-0.809521) | 11.902598 / 8.074308 (3.828290) | 10.342229 / 10.191392 (0.150837) | 0.140509 / 0.680424 (-0.539915) | 0.015593 / 0.534201 (-0.518608) | 0.303326 / 0.579283 (-0.275957) | 0.127391 / 0.434364 (-0.306973) | 0.342095 / 0.540337 (-0.198243) | 0.438978 / 1.386936 (-0.947958) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#30000fb6ca53126917ee17e1b1987f94f07a1569 \"CML watermark\")\n"
] | Fix CI test_convert_to_parquet | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7078/reactions"
} | PR_kwDODunzps52oq4n | {
"diff_url": "https://github.com/huggingface/datasets/pull/7078.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7078",
"merged_at": "2024-07-27T05:44:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7078.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7078"
} | 2024-07-27T05:32:40Z | https://api.github.com/repos/huggingface/datasets/issues/7078/comments | Fix `test_convert_to_parquet` by patching `HfApi.preupload_lfs_files` and revert temporary fix:
- #7074 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/7078/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7078/timeline | closed | false | 7,078 | null | 2024-07-27T05:44:32Z | null | true |
2,432,345,489 | https://api.github.com/repos/huggingface/datasets/issues/7077 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7077/events | [] | null | 2024-07-30T07:52:26Z | [] | https://github.com/huggingface/datasets/issues/7077 | NONE | null | null | null | [
"I confirm that `column_names` values are not copied to `names` variable because in this case `CsvConfig.__post_init__` is not called: `CsvConfig` is instantiated with default values and afterwards the `config_kwargs` are used to overwrite its attributes.\r\n\r\n@luismsgomes in the meantime, you can avoid the bug if you pass `names` instead of `column_names`."
] | column_names ignored by load_dataset() when loading CSV file | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7077/reactions"
} | I_kwDODunzps6Q-qWR | null | 2024-07-26T14:18:04Z | https://api.github.com/repos/huggingface/datasets/issues/7077/comments | ### Describe the bug
load_dataset() ignores the column_names kwarg when loading a CSV file. Instead, it uses whatever values are on the first line of the file.
### Steps to reproduce the bug
Call `load_dataset` to load data from a CSV file and specify the `column_names` kwarg.
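A minimal sketch of such a call (the file name and column names here are illustrative, not from the original report):
```python
from datasets import load_dataset

# data.csv has no header row; we want to assign the column names ourselves.
ds = load_dataset("csv", data_files="data.csv", column_names=["text", "label"], split="train")

# Observed bug: the columns are named after the values on the file's first line,
# and that line is consumed as a header instead of being kept as a data row.
print(ds.column_names)
```
As noted in the maintainer comment above, passing `names=["text", "label"]` instead of `column_names` avoids the bug in the meantime.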
### Expected behavior
The resulting dataset should have the specified column names **and** the first line of the file should be considered as data values.
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-5.10.0-30-cloud-amd64-x86_64-with-glibc2.31
- Python version: 3.9.2
- `huggingface_hub` version: 0.24.2
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/9130265?v=4",
"events_url": "https://api.github.com/users/luismsgomes/events{/privacy}",
"followers_url": "https://api.github.com/users/luismsgomes/followers",
"following_url": "https://api.github.com/users/luismsgomes/following{/other_user}",
"gists_url": "https://api.github.com/users/luismsgomes/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/luismsgomes",
"id": 9130265,
"login": "luismsgomes",
"node_id": "MDQ6VXNlcjkxMzAyNjU=",
"organizations_url": "https://api.github.com/users/luismsgomes/orgs",
"received_events_url": "https://api.github.com/users/luismsgomes/received_events",
"repos_url": "https://api.github.com/users/luismsgomes/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/luismsgomes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/luismsgomes/subscriptions",
"type": "User",
"url": "https://api.github.com/users/luismsgomes"
} | https://api.github.com/repos/huggingface/datasets/issues/7077/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7077/timeline | open | false | 7,077 | null | null | null | false |
2,432,275,393 | https://api.github.com/repos/huggingface/datasets/issues/7076 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7076/events | [] | null | 2024-07-27T05:48:17Z | [] | https://github.com/huggingface/datasets/pull/7076 | MEMBER | null | true | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7076). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 🧪 Do not mock create_commit | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7076/reactions"
} | PR_kwDODunzps52lTDe | {
"diff_url": "https://github.com/huggingface/datasets/pull/7076.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7076",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7076.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7076"
} | 2024-07-26T13:44:42Z | https://api.github.com/repos/huggingface/datasets/issues/7076/comments | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/342922?v=4",
"events_url": "https://api.github.com/users/coyotte508/events{/privacy}",
"followers_url": "https://api.github.com/users/coyotte508/followers",
"following_url": "https://api.github.com/users/coyotte508/following{/other_user}",
"gists_url": "https://api.github.com/users/coyotte508/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/coyotte508",
"id": 342922,
"login": "coyotte508",
"node_id": "MDQ6VXNlcjM0MjkyMg==",
"organizations_url": "https://api.github.com/users/coyotte508/orgs",
"received_events_url": "https://api.github.com/users/coyotte508/received_events",
"repos_url": "https://api.github.com/users/coyotte508/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/coyotte508/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/coyotte508/subscriptions",
"type": "User",
"url": "https://api.github.com/users/coyotte508"
} | https://api.github.com/repos/huggingface/datasets/issues/7076/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7076/timeline | closed | false | 7,076 | null | 2024-07-27T05:48:17Z | null | true |
2,432,027,412 | https://api.github.com/repos/huggingface/datasets/issues/7075 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7075/events | [] | null | 2024-07-26T11:46:52Z | [] | https://github.com/huggingface/datasets/pull/7075 | MEMBER | null | false | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7075). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005717 / 0.011353 (-0.005636) | 0.004102 / 0.011008 (-0.006906) | 0.064343 / 0.038508 (0.025835) | 0.031510 / 0.023109 (0.008400) | 0.254534 / 0.275898 (-0.021364) | 0.275080 / 0.323480 (-0.048400) | 0.004243 / 0.007986 (-0.003742) | 0.002782 / 0.004328 (-0.001546) | 0.049554 / 0.004250 (0.045303) | 0.045291 / 0.037052 (0.008239) | 0.264118 / 0.258489 (0.005629) | 0.296476 / 0.293841 (0.002635) | 0.030298 / 0.128546 (-0.098248) | 0.012646 / 0.075646 (-0.063000) | 0.208403 / 0.419271 (-0.210869) | 0.036365 / 0.043533 (-0.007168) | 0.250294 / 0.255139 (-0.004845) | 0.276057 / 0.283200 (-0.007143) | 0.018687 / 0.141683 (-0.122996) | 1.128970 / 1.452155 (-0.323184) | 1.170923 / 1.492716 (-0.321793) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.134953 / 0.018006 (0.116947) | 0.301722 / 0.000490 (0.301232) | 0.000242 / 0.000200 (0.000042) | 0.000050 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019650 / 0.037411 (-0.017761) | 0.063404 / 0.014526 (0.048878) | 0.074883 / 0.176557 (-0.101674) | 0.122846 / 0.737135 (-0.614289) | 0.077410 / 0.296338 (-0.218928) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287710 / 0.215209 (0.072501) | 2.813834 / 2.077655 (0.736179) | 1.454710 / 1.504120 (-0.049410) | 1.327303 / 1.541195 (-0.213891) | 1.375064 / 
1.468490 (-0.093426) | 0.746831 / 4.584777 (-3.837946) | 2.361008 / 3.745712 (-1.384705) | 3.080869 / 5.269862 (-2.188993) | 1.969927 / 4.565676 (-2.595749) | 0.081045 / 0.424275 (-0.343230) | 0.005168 / 0.007607 (-0.002440) | 0.342657 / 0.226044 (0.116613) | 3.404883 / 2.268929 (1.135955) | 1.840761 / 55.444624 (-53.603863) | 1.535400 / 6.876477 (-5.341076) | 1.584613 / 2.142072 (-0.557460) | 0.828003 / 4.805227 (-3.977224) | 0.135564 / 6.500664 (-6.365100) | 0.042717 / 0.075469 (-0.032752) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.985301 / 1.841788 (-0.856487) | 11.945913 / 8.074308 (3.871605) | 9.887577 / 10.191392 (-0.303815) | 0.141261 / 0.680424 (-0.539163) | 0.014961 / 0.534201 (-0.519240) | 0.304134 / 0.579283 (-0.275150) | 0.264733 / 0.434364 (-0.169631) | 0.349993 / 0.540337 (-0.190345) | 0.440390 / 1.386936 (-0.946546) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006145 / 0.011353 (-0.005207) | 0.004259 / 0.011008 (-0.006749) | 0.051245 / 0.038508 (0.012737) | 0.034873 / 0.023109 (0.011764) | 0.274149 / 0.275898 (-0.001749) | 0.299761 / 0.323480 (-0.023719) | 0.004457 / 0.007986 (-0.003529) | 0.002938 / 0.004328 (-0.001390) | 0.049547 / 0.004250 (0.045297) | 0.042441 / 0.037052 (0.005389) | 0.284961 / 0.258489 (0.026472) | 0.322197 / 0.293841 (0.028356) | 0.033850 / 0.128546 (-0.094696) | 0.012615 / 0.075646 (-0.063031) | 0.061967 / 0.419271 (-0.357304) | 0.035229 / 0.043533 (-0.008304) | 0.273941 / 0.255139 (0.018802) | 0.293395 / 0.283200 (0.010195) | 0.020566 / 0.141683 (-0.121117) | 1.173423 / 1.452155 (-0.278732) | 1.219948 / 1.492716 (-0.272768) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096131 / 0.018006 (0.078125) | 0.305548 / 0.000490 (0.305059) | 0.000217 / 0.000200 (0.000017) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023847 / 0.037411 (-0.013564) | 0.079536 / 0.014526 (0.065010) | 0.088889 / 0.176557 (-0.087667) | 0.129181 / 0.737135 (-0.607954) | 0.090879 / 0.296338 (-0.205460) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.299315 / 0.215209 (0.084106) | 2.952656 / 2.077655 (0.875001) | 1.587354 / 1.504120 (0.083234) | 1.453420 / 1.541195 (-0.087774) | 1.501784 / 1.468490 (0.033294) | 0.711481 / 4.584777 (-3.873296) | 0.971790 / 3.745712 (-2.773922) | 2.897636 / 5.269862 (-2.372226) | 1.947086 / 4.565676 (-2.618591) | 0.079700 / 0.424275 (-0.344575) | 0.005395 / 0.007607 (-0.002212) | 0.351340 / 0.226044 (0.125296) | 3.416472 / 2.268929 (1.147543) | 2.007559 / 55.444624 (-53.437066) | 1.660401 / 6.876477 (-5.216076) | 1.837049 / 2.142072 (-0.305024) | 0.817306 / 4.805227 (-3.987921) | 0.135176 / 6.500664 (-6.365488) | 0.041477 / 0.075469 (-0.033992) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.030033 / 1.841788 (-0.811755) | 12.528661 / 8.074308 (4.454353) | 10.603212 / 10.191392 (0.411820) | 0.142434 / 0.680424 (-0.537989) | 0.015603 / 0.534201 (-0.518598) | 0.304516 / 0.579283 (-0.274767) | 0.125324 / 0.434364 (-0.309040) | 0.343092 / 0.540337 (-0.197245) | 0.443359 / 1.386936 (-0.943577) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#45c5b3daacd7af212cae4c848a56e14d3cac291f \"CML watermark\")\n"
] | Update required soxr version from pre-release to release | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7075/reactions"
} | PR_kwDODunzps52kciD | {
"diff_url": "https://github.com/huggingface/datasets/pull/7075.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7075",
"merged_at": "2024-07-26T11:40:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7075.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7075"
} | 2024-07-26T11:24:35Z | https://api.github.com/repos/huggingface/datasets/issues/7075/comments | Update required `soxr` version from pre-release to release 0.4.0: https://github.com/dofuuz/python-soxr/releases/tag/v0.4.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/7075/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7075/timeline | closed | false | 7,075 | null | 2024-07-26T11:40:49Z | null | true |
2,431,772,703 | https://api.github.com/repos/huggingface/datasets/issues/7074 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7074/events | [] | null | 2024-07-26T09:23:33Z | [] | https://github.com/huggingface/datasets/pull/7074 | MEMBER | null | false | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7074). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005168 / 0.011353 (-0.006185) | 0.003572 / 0.011008 (-0.007436) | 0.062755 / 0.038508 (0.024247) | 0.030371 / 0.023109 (0.007262) | 0.250240 / 0.275898 (-0.025658) | 0.268091 / 0.323480 (-0.055389) | 0.003260 / 0.007986 (-0.004726) | 0.002706 / 0.004328 (-0.001622) | 0.048957 / 0.004250 (0.044706) | 0.044441 / 0.037052 (0.007389) | 0.251801 / 0.258489 (-0.006688) | 0.289401 / 0.293841 (-0.004440) | 0.028991 / 0.128546 (-0.099555) | 0.011871 / 0.075646 (-0.063775) | 0.203722 / 0.419271 (-0.215549) | 0.035911 / 0.043533 (-0.007622) | 0.248070 / 0.255139 (-0.007069) | 0.266480 / 0.283200 (-0.016720) | 0.019831 / 0.141683 (-0.121852) | 1.143429 / 1.452155 (-0.308726) | 1.160102 / 1.492716 (-0.332614) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096740 / 0.018006 (0.078734) | 0.302473 / 0.000490 (0.301983) | 0.000219 / 0.000200 (0.000019) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018367 / 0.037411 (-0.019045) | 0.062346 / 0.014526 (0.047820) | 0.074416 / 0.176557 (-0.102140) | 0.120507 / 0.737135 (-0.616628) | 0.076536 / 0.296338 (-0.219802) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284093 / 0.215209 (0.068884) | 2.738805 / 2.077655 (0.661150) | 1.469263 / 1.504120 (-0.034856) | 1.349122 / 1.541195 (-0.192073) | 1.355578 / 
1.468490 (-0.112912) | 0.720364 / 4.584777 (-3.864413) | 2.360339 / 3.745712 (-1.385373) | 2.941134 / 5.269862 (-2.328728) | 1.888692 / 4.565676 (-2.676984) | 0.077111 / 0.424275 (-0.347164) | 0.005070 / 0.007607 (-0.002537) | 0.334122 / 0.226044 (0.108078) | 3.298378 / 2.268929 (1.029450) | 1.868514 / 55.444624 (-53.576111) | 1.528561 / 6.876477 (-5.347916) | 1.535319 / 2.142072 (-0.606754) | 0.778591 / 4.805227 (-4.026636) | 0.131364 / 6.500664 (-6.369300) | 0.041697 / 0.075469 (-0.033773) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.970243 / 1.841788 (-0.871544) | 11.324752 / 8.074308 (3.250443) | 9.612381 / 10.191392 (-0.579011) | 0.138842 / 0.680424 (-0.541582) | 0.014479 / 0.534201 (-0.519722) | 0.309415 / 0.579283 (-0.269868) | 0.264654 / 0.434364 (-0.169710) | 0.343695 / 0.540337 (-0.196642) | 0.435323 / 1.386936 (-0.951613) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005680 / 0.011353 (-0.005673) | 0.003614 / 0.011008 (-0.007394) | 0.060575 / 0.038508 (0.022067) | 0.031103 / 0.023109 (0.007994) | 0.269083 / 0.275898 (-0.006815) | 0.291556 / 0.323480 (-0.031923) | 0.004354 / 0.007986 (-0.003632) | 0.002739 / 0.004328 (-0.001589) | 0.049056 / 0.004250 (0.044806) | 0.039759 / 0.037052 (0.002707) | 0.280608 / 0.258489 (0.022119) | 0.324798 / 0.293841 (0.030957) | 0.032030 / 0.128546 (-0.096516) | 0.011862 / 0.075646 (-0.063784) | 0.060011 / 0.419271 (-0.359261) | 0.033960 / 0.043533 (-0.009573) | 0.271114 / 0.255139 (0.015975) | 0.293922 / 0.283200 (0.010722) | 0.019497 / 0.141683 (-0.122185) | 1.137871 / 1.452155 (-0.314284) | 1.180656 / 1.492716 (-0.312061) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094201 / 0.018006 (0.076194) | 0.306657 / 0.000490 (0.306167) | 0.000215 / 0.000200 (0.000015) | 0.000068 / 0.000054 (0.000014) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022562 / 0.037411 (-0.014850) | 0.077170 / 0.014526 (0.062644) | 0.088915 / 0.176557 (-0.087642) | 0.129455 / 0.737135 (-0.607680) | 0.091571 / 0.296338 (-0.204767) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300753 / 0.215209 (0.085544) | 2.941929 / 2.077655 (0.864274) | 1.613451 / 1.504120 (0.109331) | 1.498365 / 1.541195 (-0.042830) | 1.517124 / 1.468490 (0.048634) | 0.709209 / 4.584777 (-3.875568) | 0.950478 / 3.745712 (-2.795235) | 2.799328 / 5.269862 (-2.470533) | 1.872895 / 4.565676 (-2.692782) | 0.078233 / 0.424275 (-0.346042) | 0.005613 / 0.007607 (-0.001994) | 0.349590 / 0.226044 (0.123545) | 3.500213 / 2.268929 (1.231284) | 2.001155 / 55.444624 (-53.443469) | 1.704845 / 6.876477 (-5.171632) | 1.810722 / 2.142072 (-0.331350) | 0.795326 / 4.805227 (-4.009901) | 0.132913 / 6.500664 (-6.367751) | 0.041209 / 0.075469 (-0.034260) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.029513 / 1.841788 (-0.812274) | 12.005617 / 8.074308 (3.931309) | 10.119379 / 10.191392 (-0.072013) | 0.139767 / 0.680424 (-0.540657) | 0.015241 / 0.534201 (-0.518960) | 0.301164 / 0.579283 (-0.278119) | 0.121563 / 0.434364 (-0.312801) | 0.336672 / 0.540337 (-0.203666) | 0.431526 / 1.386936 (-0.955410) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#92bdab56a3c7d5ded10e8ae4134c943e32d3bc86 \"CML watermark\")\n"
] | Fix CI by temporarily marking test_convert_to_parquet as expected to fail | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7074/reactions"
} | PR_kwDODunzps52jkw4 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7074.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7074",
"merged_at": "2024-07-26T09:16:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7074.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7074"
} | 2024-07-26T09:03:33Z | https://api.github.com/repos/huggingface/datasets/issues/7074/comments | As a hotfix for CI, temporarily mark test_convert_to_parquet as expected to fail.
Fix #7073.
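A minimal sketch of the temporary marker (the decorator placement and reason string are illustrative; the actual test lives in `tests/test_hub.py`):
```python
import pytest

@pytest.mark.xfail(reason="Hub CI returns 404 'Invalid rev id: refs/pr/1'; see issue #7073")
def test_convert_to_parquet():
    ...  # original test body left unchanged
```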
Revert once root cause is fixed. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/7074/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7074/timeline | closed | false | 7,074 | null | 2024-07-26T09:16:12Z | null | true |
2,431,706,568 | https://api.github.com/repos/huggingface/datasets/issues/7073 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7073/events | [] | null | 2024-07-27T05:48:02Z | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | https://github.com/huggingface/datasets/issues/7073 | MEMBER | completed | null | null | [
"Any recent change in the API backend rejecting parameter `revision=\"refs/pr/1\"` to `HfApi.preupload_lfs_files`?\r\n```\r\nf\"{endpoint}/api/{repo_type}s/{repo_id}/preupload/{revision}\"\r\n\r\nhttps://hub-ci.huggingface.co/api/datasets/__DUMMY_TRANSFORMERS_USER__/test-dataset-5188a8-17219154347516/preupload/refs%2Fpr%2F1.\r\nInvalid rev id: refs/pr/1\r\n```\r\n@Wauplin @huggingface/datasets @huggingface/moon-landing @huggingface/moon-landing-back ",
"I have temporarily fixed the CI with:\r\n- #7074\r\n\r\nHowever, the underlying issue must be fixed and #7074 must be reverted.",
"Hmm does it do the preupload call before creating the ref cc @Wauplin ?\r\n\r\n(in that case it should do a preupload call on the base branch with `?create_pr=1`)",
"@coyotte508, the CI test was implemented 2 months ago and it was working OK until yesterday. See the CI status of the commits in the main branch of `datasets`: https://github.com/huggingface/datasets/commits/main/",
"Yes i get that\r\n\r\nWe changed the preupload response to return the commit id in https://github.com/huggingface-internal/moon-landing/pull/10756\r\n\r\nThis line is probably causing the error: https://github.com/huggingface-internal/moon-landing/pull/10756/files#diff-558f6f9865e30bfa091b94d6a4a900138103ddb4eb0bec96b6deec5bf5626fa0R2322\r\n\r\nIt's weird the error is returned, it means that maybe a ref with 0 history (not even the first commit) was created\r\n\r\nDoes this change have any impact in production, or just the CI test? If it's just the CI test it should be fixed on your side, if it impacts production we can look at a solution",
"@coyotte508 it impacts production: `convert_to_parquet` raises the above error when the dataset has more that one configs/subsets:\r\n- First subset calls `push_to_hub` with `create_pr=True`\r\n- Second subset uses the `refs/pr/#` returned by the call above, and calls `push_to_hub` with `revision=\"refs/pr/#\"`",
"I tried removing the `mock_commit` call: https://github.com/huggingface/datasets/pull/7076\r\n\r\nAnd the tests seem to work.\r\n\r\nSo it's probably because the commit is not actually called, it doesn't actually create the pull request on the remote (and the associated `refs/pr/1`). But the `preupload` call is not mocked.\r\n\r\nAnyway it shouldn't impact production, since production isn't mocked",
"@coyotte508 thanks a lot for the investigation and sorry for the noise. \r\nI promise not trying to fix things when I have a slight fever: my head does not work well.\r\n\r\nWe need indeed to mock `preupload_lfs_files`: before it was not necessary, but now it is.",
"I fixed the test in:\r\n- #7078\r\n\r\nThanks again, @coyotte508."
] | CI is broken for convert_to_parquet: Invalid rev id: refs/pr/1 404 error causes RevisionNotFoundError | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7073/reactions"
} | I_kwDODunzps6Q8OXI | null | 2024-07-26T08:27:41Z | https://api.github.com/repos/huggingface/datasets/issues/7073/comments | See: https://github.com/huggingface/datasets/actions/runs/10095313567/job/27915185756
```
FAILED tests/test_hub.py::test_convert_to_parquet - huggingface_hub.utils._errors.RevisionNotFoundError: 404 Client Error. (Request ID: Root=1-66a25839-31ce7b475e70e7db1e4d44c2;b0c8870f-d5ef-4bf2-a6ff-0191f3df0f64)
Revision Not Found for url: https://hub-ci.huggingface.co/api/datasets/__DUMMY_TRANSFORMERS_USER__/test-dataset-5188a8-17219154347516/preupload/refs%2Fpr%2F1.
Invalid rev id: refs/pr/1
```
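For context, `convert_to_parquet` pushes the first config with `create_pr=True` and then reuses the returned PR ref for the remaining configs. A sketch of that call pattern (the repo id, files, and config names are illustrative):
```python
from datasets import load_dataset

ds_a = load_dataset("csv", data_files="a.csv")  # first config/subset
ds_b = load_dataset("csv", data_files="b.csv")  # second config/subset

info = ds_a.push_to_hub("user/dataset", config_name="config_a", create_pr=True)
# The failing preupload call happens here, on the PR ref (e.g. "refs/pr/1"):
ds_b.push_to_hub("user/dataset", config_name="config_b", revision=info.pr_revision)
```
The relevant part of the traceback: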
```
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/datasets/hub.py:86: in convert_to_parquet
dataset.push_to_hub(
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/datasets/dataset_dict.py:1722: in push_to_hub
split_additions, uploaded_size, dataset_nbytes = self[split]._push_parquet_shards_to_hub(
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/datasets/arrow_dataset.py:5511: in _push_parquet_shards_to_hub
api.preupload_lfs_files(
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/huggingface_hub/hf_api.py:4231: in preupload_lfs_files
_fetch_upload_modes(
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py:118: in _inner_fn
return fn(*args, **kwargs)
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/huggingface_hub/_commit_api.py:507: in _fetch_upload_modes
hf_raise_for_status(resp)
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/7073/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7073/timeline | closed | false | 7,073 | null | 2024-07-26T09:16:13Z | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | false |
2,430,577,916 | https://api.github.com/repos/huggingface/datasets/issues/7072 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7072/events | [] | null | 2024-07-25T20:36:11Z | [] | https://github.com/huggingface/datasets/issues/7072 | NONE | not_planned | null | null | [] | nm | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7072/reactions"
} | I_kwDODunzps6Q36z8 | null | 2024-07-25T17:03:24Z | https://api.github.com/repos/huggingface/datasets/issues/7072/comments | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/26392883?v=4",
"events_url": "https://api.github.com/users/brettdavies/events{/privacy}",
"followers_url": "https://api.github.com/users/brettdavies/followers",
"following_url": "https://api.github.com/users/brettdavies/following{/other_user}",
"gists_url": "https://api.github.com/users/brettdavies/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/brettdavies",
"id": 26392883,
"login": "brettdavies",
"node_id": "MDQ6VXNlcjI2MzkyODgz",
"organizations_url": "https://api.github.com/users/brettdavies/orgs",
"received_events_url": "https://api.github.com/users/brettdavies/received_events",
"repos_url": "https://api.github.com/users/brettdavies/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/brettdavies/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brettdavies/subscriptions",
"type": "User",
"url": "https://api.github.com/users/brettdavies"
} | https://api.github.com/repos/huggingface/datasets/issues/7072/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7072/timeline | closed | false | 7,072 | null | 2024-07-25T20:36:11Z | null | false |
2,430,313,011 | https://api.github.com/repos/huggingface/datasets/issues/7071 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7071/events | [] | null | 2024-07-25T15:36:59Z | [] | https://github.com/huggingface/datasets/issues/7071 | NONE | null | null | null | [] | Filter hangs | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7071/reactions"
} | I_kwDODunzps6Q26Iz | null | 2024-07-25T15:29:05Z | https://api.github.com/repos/huggingface/datasets/issues/7071/comments | ### Describe the bug
When trying to filter my custom dataset, the process hangs, regardless of the lambda function used. It appears to be an issue with the way the Images are being handled. The dataset in question is a preprocessed version of https://huggingface.co/datasets/danaaubakirova/patfig where, notably, I have converted the data to the Parquet format.
### Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset('lcolonn/patfig', split='test')
ds_filtered = ds.filter(lambda row: row['cpc_class'] != 'Y')
```
Eventually I press Ctrl+C and obtain this stack trace:
```
>>> ds_filtered = ds.filter(lambda row: row['cpc_class'] != 'Y')
Filter: 0%| | 0/998 [00:00<?, ? examples/s]Filter: 0%| | 0/998 [00:35<?, ? examples/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 567, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/fingerprint.py", line 482, in wrapper
out = func(dataset, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3714, in filter
indices = self.map(
^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 602, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 567, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3161, in map
for rank, done, content in Dataset._map_single(**dataset_kwargs):
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3552, in _map_single
batch = apply_function_on_filtered_inputs(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3421, in apply_function_on_filtered_inputs
processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 6478, in get_indices_from_mask_function
num_examples = len(batch[next(iter(batch.keys()))])
~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/formatting/formatting.py", line 273, in __getitem__
value = self.format(key)
^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/formatting/formatting.py", line 376, in format
return self.formatter.format_column(self.pa_table.select([key]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/formatting/formatting.py", line 443, in format_column
column = self.python_features_decoder.decode_column(column, pa_table.column_names[0])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/formatting/formatting.py", line 219, in decode_column
return self.features.decode_column(column, column_name) if self.features else column
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/features/features.py", line 2008, in decode_column
[decode_nested_example(self[column_name], value) if value is not None else None for value in column]
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/features/features.py", line 2008, in <listcomp>
[decode_nested_example(self[column_name], value) if value is not None else None for value in column]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/features/features.py", line 1351, in decode_nested_example
return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/features/image.py", line 188, in decode_example
image.load() # to avoid "Too many open files" errors
^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/PIL/ImageFile.py", line 293, in load
n, err_code = decoder.decode(b)
^^^^^^^^^^^^^^^^^
KeyboardInterrupt
```
Warning: this can even appear to crash some machines.
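A possible workaround (my own suggestion, untested on this dataset) is to restrict `filter` to the metadata column via `input_columns`, so the `Image` feature is never decoded while the indices are computed:
```python
from datasets import load_dataset

ds = load_dataset('lcolonn/patfig', split='test')
# Only the 'cpc_class' values are handed to the predicate,
# so the images are not decoded during filtering.
ds_filtered = ds.filter(
    lambda cpc_class: cpc_class != 'Y',
    input_columns=['cpc_class'],
)
```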
### Expected behavior
Should return the filtered dataset
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-6.5.0-41-generic-x86_64-with-glibc2.35
- Python version: 3.11.9
- `huggingface_hub` version: 0.24.0
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/61711045?v=4",
"events_url": "https://api.github.com/users/lucienwalewski/events{/privacy}",
"followers_url": "https://api.github.com/users/lucienwalewski/followers",
"following_url": "https://api.github.com/users/lucienwalewski/following{/other_user}",
"gists_url": "https://api.github.com/users/lucienwalewski/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lucienwalewski",
"id": 61711045,
"login": "lucienwalewski",
"node_id": "MDQ6VXNlcjYxNzExMDQ1",
"organizations_url": "https://api.github.com/users/lucienwalewski/orgs",
"received_events_url": "https://api.github.com/users/lucienwalewski/received_events",
"repos_url": "https://api.github.com/users/lucienwalewski/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lucienwalewski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucienwalewski/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lucienwalewski"
} | https://api.github.com/repos/huggingface/datasets/issues/7071/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7071/timeline | open | false | 7,071 | null | null | null | false |
2,430,285,235 | https://api.github.com/repos/huggingface/datasets/issues/7070 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7070/events | [] | null | 2024-07-25T15:19:34Z | [] | https://github.com/huggingface/datasets/issues/7070 | NONE | null | null | null | [] | how set_transform affects batch size? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7070/reactions"
} | I_kwDODunzps6Q2zWz | null | 2024-07-25T15:19:34Z | https://api.github.com/repos/huggingface/datasets/issues/7070/comments | ### Describe the bug
I am trying to fine-tune w2v-bert for an ASR task. Since my dataset is very large, I preferred the on-the-fly approach with `set_transform`, so I changed the preprocessing function to this:
```
def prepare_dataset(batch):
    input_features = processor(batch["audio"], sampling_rate=16000).input_features[0]
    input_length = len(input_features)
    labels = processor.tokenizer(batch["text"], padding=False).input_ids
    batch = {
        "input_features": [input_features],
        "input_length": [input_length],
        "labels": [labels]
    }
    return batch

train_ds.set_transform(prepare_dataset)
val_ds.set_transform(prepare_dataset)
```
After this, I also had to change the `DataCollatorCTCWithPadding` class like this:
```
@dataclass
class DataCollatorCTCWithPadding:
    processor: Wav2Vec2BertProcessor
    padding: Union[bool, str] = True

    def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
        # Separate input_features and labels
        input_features = [{"input_features": feature["input_features"][0]} for feature in features]
        labels = [feature["labels"][0] for feature in features]

        # Pad input features
        batch = self.processor.pad(
            input_features,
            padding=self.padding,
            return_tensors="pt",
        )

        # Pad and process labels
        label_features = self.processor.tokenizer.pad(
            {"input_ids": labels},
            padding=self.padding,
            return_tensors="pt",
        )
        labels = label_features["input_ids"]
        attention_mask = label_features["attention_mask"]

        # Replace padding with -100 to ignore these tokens during loss calculation
        labels = labels.masked_fill(attention_mask.ne(1), -100)

        batch["labels"] = labels
        return batch
```
Now a strange thing is happening: no matter how much I increase the batch size, GPU VRAM usage does not change, while the total number of steps shown in the progress bar (logging) does. Is this normal, or have I made a mistake?
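For context, my understanding (happy to be corrected) is that with `set_transform` the transform runs lazily on whatever slice is accessed, so the batch size is decided by the caller (the DataLoader/Trainer), not by the transform itself. A hypothetical illustration:
```python
# The transform is applied on the fly to the requested slice:
one = train_ds[0]    # prepare_dataset sees a batch of 1 example
many = train_ds[:8]  # prepare_dataset sees a batch of 8 examples
# Note: with batches larger than 1, `.input_features[0]` above keeps only
# the first example, so prepare_dataset as written assumes size-1 batches.
```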
### Steps to reproduce the bug
I can share my code if needed.
### Expected behavior
For each step, `prepare_dataset` should be applied to as many examples as the batch size, and the result given to the model as one batch.
### Environment info
All packages are at their latest versions. | {
"avatar_url": "https://avatars.githubusercontent.com/u/103993288?v=4",
"events_url": "https://api.github.com/users/VafaKnm/events{/privacy}",
"followers_url": "https://api.github.com/users/VafaKnm/followers",
"following_url": "https://api.github.com/users/VafaKnm/following{/other_user}",
"gists_url": "https://api.github.com/users/VafaKnm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/VafaKnm",
"id": 103993288,
"login": "VafaKnm",
"node_id": "U_kgDOBjLPyA",
"organizations_url": "https://api.github.com/users/VafaKnm/orgs",
"received_events_url": "https://api.github.com/users/VafaKnm/received_events",
"repos_url": "https://api.github.com/users/VafaKnm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/VafaKnm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VafaKnm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/VafaKnm"
} | https://api.github.com/repos/huggingface/datasets/issues/7070/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7070/timeline | open | false | 7,070 | null | null | null | false |
2,429,281,339 | https://api.github.com/repos/huggingface/datasets/issues/7069 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7069/events | [] | null | 2024-07-31T07:10:07Z | [] | https://github.com/huggingface/datasets/pull/7069 | MEMBER | null | false | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7069). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"cc @Wauplin maybe it's a `huggingface_hub` bug ?\r\n\r\nEDIT: ah actually the issue is opened at https://github.com/huggingface/huggingface_hub/issues/2419",
"I think we need to make this fix anyway, ~~unless we pin the lower version of huggingface-hub (once they release the patch)~~.\r\n- Calling create_branch with a PR ref raises an error",
"Comment by @Wauplin: https://github.com/huggingface/huggingface_hub/pull/2426#issuecomment-2257657543\r\n> I think this should be something to fix in datasets directly. Having a 400 Bad request when trying to create the branch refs/pr/1 seems normal to me since it's not a branch.",
"does this mean we should use `create_pull_request()` in that case ?",
"> does this mean we should use create_pull_request() in that case ?\r\n\r\nIf user wants to push some data to a new PR, they can already pass `create_pr=True` which will automatically do the job for you (without using `revision`). If user is passing `revision=\"refs/pr/1\"` explicitly, you should assume the PR already exists.",
"ah yes we do pass create_pr in `preupload_lfs_files()` ! sounds good then",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005806 / 0.011353 (-0.005547) | 0.004082 / 0.011008 (-0.006927) | 0.064277 / 0.038508 (0.025769) | 0.032289 / 0.023109 (0.009180) | 0.242066 / 0.275898 (-0.033832) | 0.272574 / 0.323480 (-0.050906) | 0.003281 / 0.007986 (-0.004705) | 0.002957 / 0.004328 (-0.001371) | 0.049930 / 0.004250 (0.045679) | 0.047306 / 0.037052 (0.010253) | 0.252216 / 0.258489 (-0.006273) | 0.286678 / 0.293841 (-0.007163) | 0.030182 / 0.128546 (-0.098364) | 0.012967 / 0.075646 (-0.062680) | 0.204949 / 0.419271 (-0.214323) | 0.036999 / 0.043533 (-0.006534) | 0.243577 / 0.255139 (-0.011562) | 0.265044 / 0.283200 (-0.018156) | 0.021149 / 0.141683 (-0.120534) | 1.112293 / 1.452155 (-0.339862) | 1.186483 / 1.492716 (-0.306233) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093239 / 0.018006 (0.075233) | 0.286372 / 0.000490 (0.285883) | 0.000224 / 0.000200 (0.000024) | 0.000062 / 0.000054 (0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019042 / 0.037411 (-0.018369) | 0.063690 / 0.014526 (0.049164) | 0.075034 / 0.176557 (-0.101523) | 0.123053 / 0.737135 (-0.614083) | 0.076843 / 0.296338 (-0.219495) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.276554 / 0.215209 (0.061345) | 2.749338 / 2.077655 (0.671683) | 1.442764 / 1.504120 (-0.061356) | 1.327860 / 1.541195 (-0.213335) | 1.369885 / 
1.468490 (-0.098606) | 0.722645 / 4.584777 (-3.862132) | 2.430707 / 3.745712 (-1.315005) | 3.105293 / 5.269862 (-2.164568) | 1.961617 / 4.565676 (-2.604060) | 0.077728 / 0.424275 (-0.346547) | 0.005189 / 0.007607 (-0.002418) | 0.335511 / 0.226044 (0.109467) | 3.315618 / 2.268929 (1.046690) | 1.858254 / 55.444624 (-53.586371) | 1.552173 / 6.876477 (-5.324304) | 1.627086 / 2.142072 (-0.514987) | 0.790871 / 4.805227 (-4.014356) | 0.136958 / 6.500664 (-6.363706) | 0.043207 / 0.075469 (-0.032262) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.969314 / 1.841788 (-0.872473) | 12.145318 / 8.074308 (4.071010) | 9.834839 / 10.191392 (-0.356553) | 0.141896 / 0.680424 (-0.538528) | 0.014304 / 0.534201 (-0.519897) | 0.306091 / 0.579283 (-0.273192) | 0.260994 / 0.434364 (-0.173369) | 0.348096 / 0.540337 (-0.192242) | 0.441458 / 1.386936 (-0.945478) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005989 / 0.011353 (-0.005363) | 0.003907 / 0.011008 (-0.007102) | 0.050819 / 0.038508 (0.012310) | 0.033178 / 0.023109 (0.010069) | 0.279059 / 0.275898 (0.003161) | 0.300161 / 0.323480 (-0.023319) | 0.004383 / 0.007986 (-0.003603) | 0.002834 / 0.004328 (-0.001495) | 0.048779 / 0.004250 (0.044528) | 0.040502 / 0.037052 (0.003450) | 0.291786 / 0.258489 (0.033297) | 0.323827 / 0.293841 (0.029986) | 0.032175 / 0.128546 (-0.096371) | 0.012157 / 0.075646 (-0.063489) | 0.060796 / 0.419271 (-0.358476) | 0.033924 / 0.043533 (-0.009609) | 0.278047 / 0.255139 (0.022908) | 0.297878 / 0.283200 (0.014678) | 0.019137 / 0.141683 (-0.122546) | 1.138996 / 1.452155 (-0.313158) | 1.172731 / 1.492716 (-0.319985) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.110148 / 0.018006 (0.092142) | 0.307232 / 0.000490 (0.306742) | 0.000209 / 0.000200 (0.000009) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023082 / 0.037411 (-0.014330) | 0.076590 / 0.014526 (0.062065) | 0.088444 / 0.176557 (-0.088113) | 0.129293 / 0.737135 (-0.607842) | 0.090470 / 0.296338 (-0.205868) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.305016 / 0.215209 (0.089807) | 2.931671 / 2.077655 (0.854016) | 1.586055 / 1.504120 (0.081935) | 1.463517 / 1.541195 (-0.077678) | 1.479654 / 1.468490 (0.011164) | 0.726194 / 4.584777 (-3.858583) | 0.970512 / 3.745712 (-2.775200) | 2.850496 / 5.269862 (-2.419365) | 1.920112 / 4.565676 (-2.645564) | 0.079921 / 0.424275 (-0.344354) | 0.005367 / 0.007607 (-0.002240) | 0.347022 / 0.226044 (0.120978) | 3.472425 / 2.268929 (1.203497) | 1.965400 / 55.444624 (-53.479225) | 1.669116 / 6.876477 (-5.207361) | 1.859504 / 2.142072 (-0.282568) | 0.802703 / 4.805227 (-4.002525) | 0.134776 / 6.500664 (-6.365888) | 0.041800 / 0.075469 (-0.033669) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.039665 / 1.841788 (-0.802122) | 12.024071 / 8.074308 (3.949763) | 10.338743 / 10.191392 (0.147351) | 0.139495 / 0.680424 (-0.540929) | 0.015249 / 0.534201 (-0.518952) | 0.298580 / 0.579283 (-0.280703) | 0.124625 / 0.434364 (-0.309739) | 0.341868 / 0.540337 (-0.198470) | 0.431396 / 1.386936 (-0.955540) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#65b9499348fa4c6e5bfa977ee9b5e8574bf64eea \"CML watermark\")\n"
] | Fix push_to_hub by not calling create_branch if PR branch | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7069/reactions"
} | PR_kwDODunzps52betB | {
"diff_url": "https://github.com/huggingface/datasets/pull/7069.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7069",
"merged_at": "2024-07-30T10:51:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7069.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7069"
} | 2024-07-25T07:50:04Z | https://api.github.com/repos/huggingface/datasets/issues/7069/comments | Fix `push_to_hub` by not calling `create_branch` when the revision is a PR branch (e.g. `refs/pr/1`).
Note that currently create_branch raises a 400 Bad Request error if the user passes a PR branch (e.g. `refs/pr/1`).
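For illustration, the intended guard is roughly of this shape (a sketch, not the exact diff):
```python
# Only create real branches: PR refs such as "refs/pr/1" are not branches,
# so calling create_branch on them triggers the 400 Bad Request above.
if revision is not None and not revision.startswith("refs/pr/"):
    api.create_branch(repo_id, branch=revision, token=token, repo_type="dataset", exist_ok=True)
```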
EDIT:
~~Fix push_to_hub by not calling create_branch if branch exists.~~
Note that currently create_branch raises a 403 Forbidden error even if all these conditions are met:
- exist_ok is passed
- the branch already exists
- the user does not have WRITE permission
Fix #7067.
Related issue:
- https://github.com/huggingface/huggingface_hub/issues/2419 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/7069/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7069/timeline | closed | false | 7,069 | null | 2024-07-30T10:51:01Z | null | true |
2,426,657,434 | https://api.github.com/repos/huggingface/datasets/issues/7068 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7068/events | [] | null | 2024-07-29T07:02:07Z | [] | https://github.com/huggingface/datasets/pull/7068 | MEMBER | null | false | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7068). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005725 / 0.011353 (-0.005628) | 0.004149 / 0.011008 (-0.006859) | 0.065051 / 0.038508 (0.026543) | 0.030220 / 0.023109 (0.007111) | 0.256768 / 0.275898 (-0.019130) | 0.269767 / 0.323480 (-0.053713) | 0.003256 / 0.007986 (-0.004730) | 0.003378 / 0.004328 (-0.000951) | 0.049407 / 0.004250 (0.045156) | 0.046041 / 0.037052 (0.008988) | 0.270977 / 0.258489 (0.012488) | 0.288771 / 0.293841 (-0.005070) | 0.030401 / 0.128546 (-0.098145) | 0.012203 / 0.075646 (-0.063443) | 0.227365 / 0.419271 (-0.191906) | 0.036356 / 0.043533 (-0.007176) | 0.262763 / 0.255139 (0.007624) | 0.268172 / 0.283200 (-0.015028) | 0.020698 / 0.141683 (-0.120984) | 1.171679 / 1.452155 (-0.280476) | 1.155353 / 1.492716 (-0.337363) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.138740 / 0.018006 (0.120733) | 0.300962 / 0.000490 (0.300473) | 0.000240 / 0.000200 (0.000040) | 0.000050 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019056 / 0.037411 (-0.018355) | 0.062922 / 0.014526 (0.048396) | 0.075339 / 0.176557 (-0.101218) | 0.122587 / 0.737135 (-0.614548) | 0.078622 / 0.296338 (-0.217716) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.273878 / 0.215209 (0.058669) | 2.753188 / 2.077655 (0.675533) | 1.446877 / 1.504120 (-0.057243) | 1.325034 / 1.541195 (-0.216160) | 1.332849 / 
1.468490 (-0.135641) | 0.721042 / 4.584777 (-3.863735) | 2.457241 / 3.745712 (-1.288471) | 3.008013 / 5.269862 (-2.261848) | 1.925773 / 4.565676 (-2.639903) | 0.077725 / 0.424275 (-0.346550) | 0.005232 / 0.007607 (-0.002375) | 0.331398 / 0.226044 (0.105354) | 3.273689 / 2.268929 (1.004761) | 1.818291 / 55.444624 (-53.626334) | 1.532233 / 6.876477 (-5.344244) | 1.545236 / 2.142072 (-0.596837) | 0.809853 / 4.805227 (-3.995374) | 0.137571 / 6.500664 (-6.363093) | 0.042829 / 0.075469 (-0.032640) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.962599 / 1.841788 (-0.879189) | 11.593394 / 8.074308 (3.519086) | 9.564848 / 10.191392 (-0.626544) | 0.131547 / 0.680424 (-0.548876) | 0.014724 / 0.534201 (-0.519477) | 0.309343 / 0.579283 (-0.269940) | 0.263476 / 0.434364 (-0.170888) | 0.350755 / 0.540337 (-0.189582) | 0.445279 / 1.386936 (-0.941657) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005818 / 0.011353 (-0.005534) | 0.004028 / 0.011008 (-0.006980) | 0.050337 / 0.038508 (0.011829) | 0.033234 / 0.023109 (0.010125) | 0.273498 / 0.275898 (-0.002400) | 0.299130 / 0.323480 (-0.024350) | 0.004391 / 0.007986 (-0.003595) | 0.002854 / 0.004328 (-0.001474) | 0.048616 / 0.004250 (0.044365) | 0.040354 / 0.037052 (0.003302) | 0.287980 / 0.258489 (0.029491) | 0.323940 / 0.293841 (0.030099) | 0.033031 / 0.128546 (-0.095515) | 0.012539 / 0.075646 (-0.063108) | 0.061129 / 0.419271 (-0.358143) | 0.034410 / 0.043533 (-0.009123) | 0.276367 / 0.255139 (0.021228) | 0.295266 / 0.283200 (0.012066) | 0.018558 / 0.141683 (-0.123125) | 1.149051 / 1.452155 (-0.303104) | 1.207995 / 1.492716 (-0.284721) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095732 / 0.018006 (0.077726) | 0.305774 / 0.000490 (0.305284) | 0.000222 / 0.000200 (0.000022) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023680 / 0.037411 (-0.013731) | 0.077147 / 0.014526 (0.062621) | 0.088850 / 0.176557 (-0.087706) | 0.130219 / 0.737135 (-0.606917) | 0.090582 / 0.296338 (-0.205756) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.306099 / 0.215209 (0.090890) | 2.952515 / 2.077655 (0.874861) | 1.593090 / 1.504120 (0.088970) | 1.471887 / 1.541195 (-0.069308) | 1.484277 / 1.468490 (0.015787) | 0.741158 / 4.584777 (-3.843619) | 0.976520 / 3.745712 (-2.769192) | 2.904631 / 5.269862 (-2.365231) | 1.940287 / 4.565676 (-2.625389) | 0.079828 / 0.424275 (-0.344447) | 0.005482 / 0.007607 (-0.002125) | 0.353376 / 0.226044 (0.127332) | 3.502412 / 2.268929 (1.233483) | 1.976571 / 55.444624 (-53.468053) | 1.675141 / 6.876477 (-5.201336) | 1.821075 / 2.142072 (-0.320998) | 0.814290 / 4.805227 (-3.990937) | 0.135227 / 6.500664 (-6.365437) | 0.041631 / 0.075469 (-0.033838) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.041495 / 1.841788 (-0.800293) | 12.275647 / 8.074308 (4.201339) | 10.569540 / 10.191392 (0.378148) | 0.143136 / 0.680424 (-0.537288) | 0.015010 / 0.534201 (-0.519191) | 0.302177 / 0.579283 (-0.277106) | 0.125924 / 0.434364 (-0.308440) | 0.340977 / 0.540337 (-0.199360) | 0.438467 / 1.386936 (-0.948469) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#baea190799dfa22493621fe06584b006b57f16ce \"CML watermark\")\n"
] | Fix prepare_single_hop_path_and_storage_options | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7068/reactions"
} | PR_kwDODunzps52SwXS | {
"diff_url": "https://github.com/huggingface/datasets/pull/7068.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7068",
"merged_at": "2024-07-29T06:56:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7068.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7068"
} | 2024-07-24T05:52:34Z | https://api.github.com/repos/huggingface/datasets/issues/7068/comments | Fix `_prepare_single_hop_path_and_storage_options`:
- Do not pass HF authentication headers and HF user-agent to non-HF HTTP URLs
- Do not overwrite passed `storage_options` nested values:
  - Before, when passed
    ```DownloadConfig(storage_options={"https": {"client_kwargs": {"raise_for_status": True}}})```,
    it was overwritten to
    ```{"https": {"client_kwargs": {"trust_env": True}}}```
  - Now, the result combines both:
    ```{"https": {"client_kwargs": {"trust_env": True, "raise_for_status": True}}}``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/7068/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7068/timeline | closed | false | 7,068 | null | 2024-07-29T06:56:15Z | null | true |
2,425,460,168 | https://api.github.com/repos/huggingface/datasets/issues/7067 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7067/events | [] | null | 2024-07-30T10:51:02Z | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | https://github.com/huggingface/datasets/issues/7067 | NONE | completed | null | null | [
"Many users have encountered the same issue, which has caused inconvenience.\r\n\r\nhttps://discuss.huggingface.co/t/convert-to-parquet-fails-for-datasets-with-multiple-configs/86733\r\n",
"Thanks for reporting.\r\n\r\nI will make the code more robust.",
"I have opened an issue in the huggingface-hub repo:\r\n- https://github.com/huggingface/huggingface_hub/issues/2419\r\n\r\nI am opening a PR to avoid calling `create_branch` if the branch already exists."
] | Convert_to_parquet fails for datasets with multiple configs | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7067/reactions"
} | I_kwDODunzps6QkZXI | null | 2024-07-23T15:09:33Z | https://api.github.com/repos/huggingface/datasets/issues/7067/comments | When a dataset has multiple configs and you use the `datasets-cli convert_to_parquet` command (to avoid the data-viewer issues that loading scripts cause), the conversion only succeeds for the first config. When it starts converting the second config, it throws an error:
```
Traceback (most recent call last):
File "/opt/anaconda3/envs/dl/bin/datasets-cli", line 8, in <module>
sys.exit(main())
File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/datasets/commands/datasets_cli.py", line 41, in main
service.run()
File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/datasets/commands/convert_to_parquet.py", line 83, in run
dataset.push_to_hub(
File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/datasets/dataset_dict.py", line 1713, in push_to_hub
api.create_branch(repo_id, branch=revision, token=token, repo_type="dataset", exist_ok=True)
File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 5503, in create_branch
hf_raise_for_status(response)
File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 358, in hf_raise_for_status
raise BadRequestError(message, response=response) from e
huggingface_hub.utils._errors.BadRequestError: (Request ID: Root=1-669fc665-7c2e80d75f4337496ee95402;731fcdc7-0950-4eec-99cf-ce047b8d003f)
Bad request:
Invalid reference for a branch: refs/pr/1
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/97585031?v=4",
"events_url": "https://api.github.com/users/HuangZhen02/events{/privacy}",
"followers_url": "https://api.github.com/users/HuangZhen02/followers",
"following_url": "https://api.github.com/users/HuangZhen02/following{/other_user}",
"gists_url": "https://api.github.com/users/HuangZhen02/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/HuangZhen02",
"id": 97585031,
"login": "HuangZhen02",
"node_id": "U_kgDOBdEHhw",
"organizations_url": "https://api.github.com/users/HuangZhen02/orgs",
"received_events_url": "https://api.github.com/users/HuangZhen02/received_events",
"repos_url": "https://api.github.com/users/HuangZhen02/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/HuangZhen02/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HuangZhen02/subscriptions",
"type": "User",
"url": "https://api.github.com/users/HuangZhen02"
} | https://api.github.com/repos/huggingface/datasets/issues/7067/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7067/timeline | closed | false | 7,067 | null | 2024-07-30T10:51:02Z | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | false |
2,425,125,160 | https://api.github.com/repos/huggingface/datasets/issues/7066 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7066/events | [] | null | 2024-07-23T12:43:59Z | [] | https://github.com/huggingface/datasets/issues/7066 | MEMBER | null | null | null | [] | One subset per file in repo ? | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7066/reactions"
} | I_kwDODunzps6QjHko | null | 2024-07-23T12:43:59Z | https://api.github.com/repos/huggingface/datasets/issues/7066/comments | Right now we consider all the files of a dataset to be the same data, e.g.
```
single_subset_dataset/
├── train0.jsonl
├── train1.jsonl
└── train2.jsonl
```
but in cases like this, each file is actually a different subset of the dataset and should be loaded separately:
```
many_subsets_dataset/
├── animals.jsonl
├── trees.jsonl
└── metadata.jsonl
```
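A hypothetical digit-masking heuristic (my own illustration, not a settled design) could look like this:
```python
import re
from collections import defaultdict

def group_subsets(paths):
    # Files whose names differ only by digits fall under the same key.
    groups = defaultdict(list)
    for path in paths:
        # "train0.jsonl" and "train1.jsonl" both map to "train.jsonl"
        groups[re.sub(r"\d+", "", path)].append(path)
    return dict(groups)

group_subsets(["train0.jsonl", "train1.jsonl", "train2.jsonl"])
# -> one group, i.e. a single subset
group_subsets(["animals.jsonl", "trees.jsonl", "metadata.jsonl"])
# -> three groups, i.e. three subsets
```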
It would be nice to detect those subsets automatically using a simple heuristic. For example, could we group files together when their path names are identical except for some digits, as in the sketch above? | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | https://api.github.com/repos/huggingface/datasets/issues/7066/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7066/timeline | open | false | 7,066 | null | null | null | false |
2,424,734,953 | https://api.github.com/repos/huggingface/datasets/issues/7065 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7065/events | [] | null | 2024-07-23T09:37:56Z | [] | https://github.com/huggingface/datasets/issues/7065 | NONE | null | null | null | [] | Cannot get item after loading from disk and then converting to iterable. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7065/reactions"
} | I_kwDODunzps6QhoTp | null | 2024-07-23T09:37:56Z | https://api.github.com/repos/huggingface/datasets/issues/7065/comments | ### Describe the bug
A dataset generated from local files works fine.
```py
root = "/home/data/train"
file_list1 = glob(os.path.join(root, "*part1.flac"))
file_list2 = glob(os.path.join(root, "*part2.flac"))
ds = (
    Dataset.from_dict({"part1": file_list1, "part2": file_list2})
    .cast_column("part1", Audio(sampling_rate=None, mono=False))
    .cast_column("part2", Audio(sampling_rate=None, mono=False))
)
ids = ds.to_iterable_dataset(128)
ids = ids.shuffle(buffer_size=10000, seed=42)
dataloader = DataLoader(ids, num_workers=4, batch_size=8, persistent_workers=True)

for batch in dataloader:
    break
```
But after saving it to disk and loading it back, I cannot get data as expected.
```py
root = "/home/data/train"
file_list1 = glob(os.path.join(root, "*part1.flac"))
file_list2 = glob(os.path.join(root, "*part2.flac"))
ds = (
    Dataset.from_dict({"part1": file_list1, "part2": file_list2})
    .cast_column("part1", Audio(sampling_rate=None, mono=False))
    .cast_column("part2", Audio(sampling_rate=None, mono=False))
)
ds.save_to_disk("./train")
ds = datasets.load_from_disk("./train")
ids = ds.to_iterable_dataset(128)
ids = ids.shuffle(buffer_size=10000, seed=42)
dataloader = DataLoader(ids, num_workers=4, batch_size=8, persistent_workers=True)

for batch in dataloader:
    break
```
After a long wait, an error occurs:
```
Loading dataset from disk: 100%|█████████████████████████████████████████████████████████████████████████| 165/165 [00:00<00:00, 6422.18it/s]
Traceback (most recent call last):
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1133, in _try_get_data
data = self._data_queue.get(timeout=timeout)
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/multiprocessing/queues.py", line 113, in get
if not self._poll(timeout):
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/multiprocessing/connection.py", line 257, in poll
return self._poll(timeout)
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/multiprocessing/connection.py", line 424, in _poll
r = wait([self], timeout)
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/multiprocessing/connection.py", line 931, in wait
ready = selector.select(timeout)
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/selectors.py", line 416, in select
fd_event_list = self._selector.poll(timeout)
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
_error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 3490529) is killed by signal: Killed.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 39, in <module>
cli.main()
File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 430, in main
run()
File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 284, in run_file
runpy.run_path(target, run_name="__main__")
File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 321, in run_path
return _run_module_code(code, init_globals, run_name,
File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 135, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 124, in _run_code
exec(code, run_globals)
File "/home/hanzerui/workspace/NetEase/test/test_datasets.py", line 60, in <module>
for batch in dataloader:
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 631, in __next__
data = self._next_data()
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1329, in _next_data
idx, data = self._get_data()
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1295, in _get_data
success, data = self._try_get_data()
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1146, in _try_get_data
raise RuntimeError(f'DataLoader worker (pid(s) {pids_str}) exited unexpectedly') from e
RuntimeError: DataLoader worker (pid(s) 3490529) exited unexpectedly
```
It seems that streaming is not supported by `load_from_disk`; does that mean I cannot convert the result to an iterable dataset?
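A narrowing-down step that might help (a suggestion, untested): iterate the iterable dataset directly, without the DataLoader, to see whether the slowdown comes from the dataset itself or from the worker processes:
```python
ids = ds.to_iterable_dataset(128).shuffle(buffer_size=10000, seed=42)
# No multiprocessing involved here; if this is already slow,
# the problem is in the dataset, not in the DataLoader workers.
for i, example in enumerate(ids):
    if i == 8:
        break
```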
### Steps to reproduce the bug
1. Create a `Dataset` from local files with `from_dict`
2. Save it to disk with `save_to_disk`
3. Load it from disk with `load_from_disk`
4. Convert to iterable with `to_iterable_dataset`
5. Loop the dataset
### Expected behavior
Items should be produced at least as fast as with the original dataset generated from the dict.
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-6.5.0-41-generic-x86_64-with-glibc2.35
- Python version: 3.10.14
- `huggingface_hub` version: 0.23.2
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/21305646?v=4",
"events_url": "https://api.github.com/users/happyTonakai/events{/privacy}",
"followers_url": "https://api.github.com/users/happyTonakai/followers",
"following_url": "https://api.github.com/users/happyTonakai/following{/other_user}",
"gists_url": "https://api.github.com/users/happyTonakai/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/happyTonakai",
"id": 21305646,
"login": "happyTonakai",
"node_id": "MDQ6VXNlcjIxMzA1NjQ2",
"organizations_url": "https://api.github.com/users/happyTonakai/orgs",
"received_events_url": "https://api.github.com/users/happyTonakai/received_events",
"repos_url": "https://api.github.com/users/happyTonakai/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/happyTonakai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/happyTonakai/subscriptions",
"type": "User",
"url": "https://api.github.com/users/happyTonakai"
} | https://api.github.com/repos/huggingface/datasets/issues/7065/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7065/timeline | open | false | 7,065 | null | null | null | false |
2,424,613,104 | https://api.github.com/repos/huggingface/datasets/issues/7064 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7064/events | [] | null | 2024-07-25T13:51:25Z | [] | https://github.com/huggingface/datasets/pull/7064 | CONTRIBUTOR | null | false | null | [
"Looks good to me ! :)\r\n\r\nyou might want to add the `map` num_proc argument as well, for people who want to make it run faster",
"Thanks for the feedback @lhoestq! The last commits include:\r\n- Adding the `num_proc` parameter to `batch`\r\n- Adding tests similar to the one done for `IterableDataset.batch()`\r\n- Updated the documentation -> I think they are actually misplaced in the `Stream` page. But could not find a better place atm. Where would you put this documentation?\r\n\r\nWDYT?",
"You can put the documentation in process.mdx :)",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7064). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"I reset the head to the commit before I added the `Dataset.batch()` documentation to `stream.mdx` and instead added the documentation to `process.mdx`. ",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005736 / 0.011353 (-0.005617) | 0.003959 / 0.011008 (-0.007049) | 0.063259 / 0.038508 (0.024751) | 0.030705 / 0.023109 (0.007596) | 0.245706 / 0.275898 (-0.030192) | 0.278766 / 0.323480 (-0.044714) | 0.003354 / 0.007986 (-0.004632) | 0.004246 / 0.004328 (-0.000082) | 0.049346 / 0.004250 (0.045095) | 0.046439 / 0.037052 (0.009386) | 0.257930 / 0.258489 (-0.000559) | 0.295562 / 0.293841 (0.001722) | 0.030529 / 0.128546 (-0.098017) | 0.012465 / 0.075646 (-0.063182) | 0.205595 / 0.419271 (-0.213677) | 0.036319 / 0.043533 (-0.007214) | 0.243872 / 0.255139 (-0.011267) | 0.275834 / 0.283200 (-0.007366) | 0.020330 / 0.141683 (-0.121353) | 1.108337 / 1.452155 (-0.343817) | 1.150406 / 1.492716 (-0.342310) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.113498 / 0.018006 (0.095491) | 0.306654 / 0.000490 (0.306164) | 0.000238 / 0.000200 (0.000038) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019092 / 0.037411 (-0.018319) | 0.063180 / 0.014526 (0.048654) | 0.078244 / 0.176557 (-0.098313) | 0.126106 / 0.737135 (-0.611030) | 0.078651 / 0.296338 (-0.217687) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284132 / 0.215209 (0.068923) | 2.781250 / 2.077655 (0.703595) | 1.471864 / 1.504120 (-0.032256) | 1.354661 / 1.541195 (-0.186534) | 1.362839 / 
1.468490 (-0.105651) | 0.719126 / 4.584777 (-3.865651) | 2.396969 / 3.745712 (-1.348743) | 2.987924 / 5.269862 (-2.281938) | 1.910555 / 4.565676 (-2.655121) | 0.078612 / 0.424275 (-0.345663) | 0.005170 / 0.007607 (-0.002437) | 0.333876 / 0.226044 (0.107832) | 3.298340 / 2.268929 (1.029412) | 1.853332 / 55.444624 (-53.591292) | 1.551919 / 6.876477 (-5.324557) | 1.585677 / 2.142072 (-0.556395) | 0.802487 / 4.805227 (-4.002741) | 0.134828 / 6.500664 (-6.365837) | 0.041966 / 0.075469 (-0.033503) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.992277 / 1.841788 (-0.849511) | 11.626887 / 8.074308 (3.552578) | 9.715623 / 10.191392 (-0.475769) | 0.140306 / 0.680424 (-0.540117) | 0.014528 / 0.534201 (-0.519673) | 0.306247 / 0.579283 (-0.273036) | 0.263067 / 0.434364 (-0.171297) | 0.342325 / 0.540337 (-0.198013) | 0.432299 / 1.386936 (-0.954637) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006004 / 0.011353 (-0.005349) | 0.003890 / 0.011008 (-0.007118) | 0.050408 / 0.038508 (0.011900) | 0.031880 / 0.023109 (0.008771) | 0.273114 / 0.275898 (-0.002784) | 0.296653 / 0.323480 (-0.026826) | 0.004569 / 0.007986 (-0.003416) | 0.002831 / 0.004328 (-0.001497) | 0.050032 / 0.004250 (0.045782) | 0.040468 / 0.037052 (0.003415) | 0.284718 / 0.258489 (0.026229) | 0.321754 / 0.293841 (0.027913) | 0.033863 / 0.128546 (-0.094684) | 0.012183 / 0.075646 (-0.063463) | 0.060805 / 0.419271 (-0.358466) | 0.034919 / 0.043533 (-0.008614) | 0.274354 / 0.255139 (0.019215) | 0.293477 / 0.283200 (0.010277) | 0.019418 / 0.141683 (-0.122265) | 1.151571 / 1.452155 (-0.300584) | 1.217174 / 1.492716 (-0.275542) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097326 / 0.018006 (0.079320) | 0.316277 / 0.000490 (0.315787) | 0.000225 / 0.000200 (0.000025) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022932 / 0.037411 (-0.014479) | 0.077455 / 0.014526 (0.062929) | 0.088949 / 0.176557 (-0.087608) | 0.129447 / 0.737135 (-0.607688) | 0.093705 / 0.296338 (-0.202634) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.303918 / 0.215209 (0.088709) | 2.973866 / 2.077655 (0.896211) | 1.593165 / 1.504120 (0.089045) | 1.465312 / 1.541195 (-0.075883) | 1.484503 / 1.468490 (0.016013) | 0.731849 / 4.584777 (-3.852928) | 0.953337 / 3.745712 (-2.792375) | 2.887815 / 5.269862 (-2.382047) | 1.923618 / 4.565676 (-2.642058) | 0.080073 / 0.424275 (-0.344202) | 0.005460 / 0.007607 (-0.002148) | 0.359876 / 0.226044 (0.133832) | 3.532251 / 2.268929 (1.263323) | 1.987778 / 55.444624 (-53.456846) | 1.685572 / 6.876477 (-5.190905) | 1.827141 / 2.142072 (-0.314932) | 0.815953 / 4.805227 (-3.989274) | 0.136698 / 6.500664 (-6.363967) | 0.042185 / 0.075469 (-0.033285) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.032508 / 1.841788 (-0.809280) | 12.526918 / 8.074308 (4.452610) | 10.202942 / 10.191392 (0.011550) | 0.145920 / 0.680424 (-0.534504) | 0.015643 / 0.534201 (-0.518558) | 0.300465 / 0.579283 (-0.278818) | 0.126786 / 0.434364 (-0.307578) | 0.342885 / 0.540337 (-0.197453) | 0.438139 / 1.386936 (-0.948797) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9c98e069b47d40a219b6f27e62ed85a5bb17449e \"CML watermark\")\n"
] | Add `batch` method to `Dataset` class | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7064/reactions"
} | PR_kwDODunzps52Lz2- | {
"diff_url": "https://github.com/huggingface/datasets/pull/7064.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7064",
"merged_at": "2024-07-25T13:45:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7064.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7064"
} | 2024-07-23T08:40:43Z | https://api.github.com/repos/huggingface/datasets/issues/7064/comments | This PR introduces a new `batch` method to the `Dataset` class, aligning its functionality with the `IterableDataset.batch()` method (implemented in #7054). The implementation also uses the existing `map` method for efficient batching of examples.
Key changes:
- Add `batch` method to `Dataset` class in `arrow_dataset.py`
- Utilize `map` method for batching
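A minimal sketch of the idea (the helper below is illustrative and assumes the same semantics as `IterableDataset.batch`, not the exact code in this PR):
```python
from datasets import Dataset

def batch(dataset: Dataset, batch_size: int, drop_last_batch: bool = False) -> Dataset:
    def batch_fn(batch):
        # wrap each column's list of values in another list,
        # so that one output row holds one full batch
        return {k: [v] for k, v in batch.items()}

    return dataset.map(
        batch_fn,
        batched=True,
        batch_size=batch_size,
        drop_last_batch=drop_last_batch,
    )
```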
Closes #7063
Once the approach is approved, I will create the tests and update the documentation. | {
"avatar_url": "https://avatars.githubusercontent.com/u/61876623?v=4",
"events_url": "https://api.github.com/users/lappemic/events{/privacy}",
"followers_url": "https://api.github.com/users/lappemic/followers",
"following_url": "https://api.github.com/users/lappemic/following{/other_user}",
"gists_url": "https://api.github.com/users/lappemic/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lappemic",
"id": 61876623,
"login": "lappemic",
"node_id": "MDQ6VXNlcjYxODc2NjIz",
"organizations_url": "https://api.github.com/users/lappemic/orgs",
"received_events_url": "https://api.github.com/users/lappemic/received_events",
"repos_url": "https://api.github.com/users/lappemic/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lappemic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lappemic/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lappemic"
} | https://api.github.com/repos/huggingface/datasets/issues/7064/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7064/timeline | closed | false | 7,064 | null | 2024-07-25T13:45:20Z | null | true |
2,424,488,648 | https://api.github.com/repos/huggingface/datasets/issues/7063 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7063/events | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | null | 2024-07-25T13:45:21Z | [] | https://github.com/huggingface/datasets/issues/7063 | CONTRIBUTOR | completed | null | null | [] | Add `batch` method to `Dataset` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7063/reactions"
} | I_kwDODunzps6QgsLI | null | 2024-07-23T07:36:59Z | https://api.github.com/repos/huggingface/datasets/issues/7063/comments | ### Feature request
Add a `batch` method to the Dataset class, similar to the one recently implemented for `IterableDataset` in PR #7054.
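A hypothetical usage sketch, mirroring the `IterableDataset.batch` signature (the exact API is open for discussion):
```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(10))})
batched = ds.batch(batch_size=4)  # proposed method, not yet in the library
print(batched[0]["x"])  # [0, 1, 2, 3]
```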
### Motivation
A batched iteration speeds up data loading significantly (see e.g. #6279)
### Your contribution
I plan to open a PR to implement this. | {
"avatar_url": "https://avatars.githubusercontent.com/u/61876623?v=4",
"events_url": "https://api.github.com/users/lappemic/events{/privacy}",
"followers_url": "https://api.github.com/users/lappemic/followers",
"following_url": "https://api.github.com/users/lappemic/following{/other_user}",
"gists_url": "https://api.github.com/users/lappemic/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lappemic",
"id": 61876623,
"login": "lappemic",
"node_id": "MDQ6VXNlcjYxODc2NjIz",
"organizations_url": "https://api.github.com/users/lappemic/orgs",
"received_events_url": "https://api.github.com/users/lappemic/received_events",
"repos_url": "https://api.github.com/users/lappemic/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lappemic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lappemic/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lappemic"
} | https://api.github.com/repos/huggingface/datasets/issues/7063/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7063/timeline | closed | false | 7,063 | null | 2024-07-25T13:45:21Z | null | false |
2,424,467,484 | https://api.github.com/repos/huggingface/datasets/issues/7062 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7062/events | [] | null | 2024-07-23T14:28:27Z | [] | https://github.com/huggingface/datasets/pull/7062 | MEMBER | null | false | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7062). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005591 / 0.011353 (-0.005761) | 0.003992 / 0.011008 (-0.007016) | 0.063932 / 0.038508 (0.025424) | 0.034572 / 0.023109 (0.011463) | 0.252532 / 0.275898 (-0.023366) | 0.271233 / 0.323480 (-0.052247) | 0.005146 / 0.007986 (-0.002840) | 0.002844 / 0.004328 (-0.001484) | 0.049555 / 0.004250 (0.045305) | 0.044111 / 0.037052 (0.007059) | 0.270131 / 0.258489 (0.011642) | 0.318109 / 0.293841 (0.024269) | 0.030247 / 0.128546 (-0.098300) | 0.012438 / 0.075646 (-0.063209) | 0.205160 / 0.419271 (-0.214112) | 0.036228 / 0.043533 (-0.007305) | 0.250664 / 0.255139 (-0.004475) | 0.263884 / 0.283200 (-0.019315) | 0.018141 / 0.141683 (-0.123541) | 1.128504 / 1.452155 (-0.323650) | 1.182543 / 1.492716 (-0.310173) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094576 / 0.018006 (0.076570) | 0.301153 / 0.000490 (0.300664) | 0.000246 / 0.000200 (0.000046) | 0.000065 / 0.000054 (0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019143 / 0.037411 (-0.018268) | 0.062788 / 0.014526 (0.048262) | 0.074688 / 0.176557 (-0.101869) | 0.121799 / 0.737135 (-0.615336) | 0.076200 / 0.296338 (-0.220138) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.277002 / 0.215209 (0.061793) | 2.735738 / 2.077655 (0.658083) | 1.430408 / 1.504120 (-0.073712) | 1.309795 / 1.541195 (-0.231400) | 1.339083 / 
1.468490 (-0.129407) | 0.702540 / 4.584777 (-3.882237) | 2.352468 / 3.745712 (-1.393244) | 2.913698 / 5.269862 (-2.356164) | 1.871739 / 4.565676 (-2.693938) | 0.077054 / 0.424275 (-0.347221) | 0.005055 / 0.007607 (-0.002552) | 0.330550 / 0.226044 (0.104505) | 3.272556 / 2.268929 (1.003627) | 1.805268 / 55.444624 (-53.639356) | 1.504791 / 6.876477 (-5.371686) | 1.511361 / 2.142072 (-0.630712) | 0.784451 / 4.805227 (-4.020776) | 0.132182 / 6.500664 (-6.368482) | 0.042516 / 0.075469 (-0.032954) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.946939 / 1.841788 (-0.894849) | 11.369607 / 8.074308 (3.295299) | 9.667350 / 10.191392 (-0.524042) | 0.138689 / 0.680424 (-0.541735) | 0.014416 / 0.534201 (-0.519785) | 0.300685 / 0.579283 (-0.278598) | 0.259709 / 0.434364 (-0.174655) | 0.341271 / 0.540337 (-0.199066) | 0.435609 / 1.386936 (-0.951327) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005726 / 0.011353 (-0.005627) | 0.004071 / 0.011008 (-0.006937) | 0.050837 / 0.038508 (0.012329) | 0.047000 / 0.023109 (0.023890) | 0.278543 / 0.275898 (0.002645) | 0.300526 / 0.323480 (-0.022954) | 0.004483 / 0.007986 (-0.003503) | 0.002835 / 0.004328 (-0.001494) | 0.050925 / 0.004250 (0.046675) | 0.041834 / 0.037052 (0.004782) | 0.285059 / 0.258489 (0.026570) | 0.324557 / 0.293841 (0.030716) | 0.038949 / 0.128546 (-0.089597) | 0.012145 / 0.075646 (-0.063501) | 0.061791 / 0.419271 (-0.357481) | 0.034493 / 0.043533 (-0.009040) | 0.274034 / 0.255139 (0.018895) | 0.295886 / 0.283200 (0.012686) | 0.018524 / 0.141683 (-0.123159) | 1.148766 / 1.452155 (-0.303388) | 1.207966 / 1.492716 (-0.284750) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094078 / 0.018006 (0.076071) | 0.307850 / 0.000490 (0.307361) | 0.000224 / 0.000200 (0.000024) | 0.000079 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023502 / 0.037411 (-0.013910) | 0.077321 / 0.014526 (0.062795) | 0.091147 / 0.176557 (-0.085410) | 0.131111 / 0.737135 (-0.606025) | 0.090906 / 0.296338 (-0.205432) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290700 / 0.215209 (0.075491) | 2.833655 / 2.077655 (0.756001) | 1.546371 / 1.504120 (0.042251) | 1.415337 / 1.541195 (-0.125858) | 1.445752 / 1.468490 (-0.022738) | 0.737880 / 4.584777 (-3.846897) | 0.961549 / 3.745712 (-2.784164) | 2.844021 / 5.269862 (-2.425841) | 2.023547 / 4.565676 (-2.542130) | 0.079791 / 0.424275 (-0.344484) | 0.005449 / 0.007607 (-0.002158) | 0.356381 / 0.226044 (0.130337) | 3.515555 / 2.268929 (1.246627) | 1.920407 / 55.444624 (-53.524217) | 1.628637 / 6.876477 (-5.247839) | 1.752995 / 2.142072 (-0.389077) | 0.807264 / 4.805227 (-3.997963) | 0.133627 / 6.500664 (-6.367037) | 0.041861 / 0.075469 (-0.033609) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.035643 / 1.841788 (-0.806144) | 12.114792 / 8.074308 (4.040484) | 10.185844 / 10.191392 (-0.005548) | 0.142354 / 0.680424 (-0.538070) | 0.015466 / 0.534201 (-0.518734) | 0.304681 / 0.579283 (-0.274603) | 0.124297 / 0.434364 (-0.310067) | 0.339907 / 0.540337 (-0.200430) | 0.436266 / 1.386936 (-0.950670) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#856eb84569006ab9389ddbcce8b7141befeab9cc \"CML watermark\")\n"
] | Avoid calling http_head for non-HTTP URLs | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7062/reactions"
} | PR_kwDODunzps52LUPR | {
"diff_url": "https://github.com/huggingface/datasets/pull/7062.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7062",
"merged_at": "2024-07-23T14:21:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7062.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7062"
} | 2024-07-23T07:25:09Z | https://api.github.com/repos/huggingface/datasets/issues/7062/comments | Avoid calling `http_head` for non-HTTP URLs by adding an `else` statement.
Currently, it makes an unnecessary HTTP call (which adds latency) for non-HTTP protocols, like FTP, S3,...
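Roughly, the guard looks like this (an illustrative sketch of the idea, with `requests.head` standing in for the library's internal HEAD helper; not the literal diff):
```python
import requests
from urllib.parse import urlparse

def head_if_http(url: str):
    # only probe HTTP(S) URLs with a HEAD request
    if urlparse(url).scheme in ("http", "https"):
        return requests.head(url, timeout=10)
    # FTP, S3, ...: skip the HTTP call entirely
    return None
```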
I discovered this while working on an unrelated issue. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/7062/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7062/timeline | closed | false | 7,062 | null | 2024-07-23T14:21:08Z | null | true |
2,423,786,881 | https://api.github.com/repos/huggingface/datasets/issues/7061 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7061/events | [] | null | 2024-07-22T21:18:12Z | [] | https://github.com/huggingface/datasets/issues/7061 | NONE | null | null | null | [] | Custom Dataset | Still Raise Error while handling errors in _generate_examples | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7061/reactions"
} | I_kwDODunzps6QeA2B | null | 2024-07-22T21:18:12Z | https://api.github.com/repos/huggingface/datasets/issues/7061/comments | ### Describe the bug
I followed this [example](https://discuss.huggingface.co/t/error-handling-in-iterabledataset/72827/3) to handle errors in a custom dataset. I am writing a dataset script which reads jsonl files, and I need to handle errors and continue reading the remaining files without raising an exception and exiting the run.
```
def _generate_examples(self, filepaths):
    errors = []
    id_ = 0
    for filepath in filepaths:
        try:
            with open(filepath, 'r') as f:
                for line in f:
                    json_obj = json.loads(line)
                    yield id_, json_obj
                    id_ += 1
        except Exception as exc:
            logger.error(f"error occur at filepath: {filepath}")
            errors.append(exc)
```
It seems the logger.error message is printed, but the exception is still raised and the run exits.
```
Downloading and preparing dataset custom_dataset/default to /home/myuser/.cache/huggingface/datasets/custom_dataset/default-a14cdd566afee0a6/1.0.0/acfcc9fb9c57034b580c4252841
ERROR: datasets_modules.datasets.custom_dataset.acfcc9fb9c57034b580c4252841bb890a5617cbd28678dd4be5e52b81188ad02.custom_dataset: 2024-07-22 10:47:42,167: error occur at filepath: '/home/myuser/ds/corrupted-file.jsonl
Traceback (most recent call last):
File "/home/myuser/.cache/huggingface/modules/datasets_modules/datasets/custom_dataset/ac..2/custom_dataset.py", line 48, in _generate_examples
json_obj = json.loads(line)
File "myenv/lib/python3.8/json/__init__.py", line 357, in loads
return _default_decoder.decode(s)
File "myenv/lib/python3.8/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "myenv/lib/python3.8/json/decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Invalid control character at: line 1 column 4 (char 3)
Generating train split: 0 examples [00:06, ? examples/s]>
RemoteTraceback:
"""
Traceback (most recent call last):
File "myenv/lib/python3.8/site-packages/datasets/builder.py", line 1637, in _prepare_split_single
num_examples, num_bytes = writer.finalize()
File "myenv/lib/python3.8/site-packages/datasets/arrow_writer.py", line 594, in finalize
raise SchemaInferenceError("Please pass `features` or at least one example when writing data")
datasets.arrow_writer.SchemaInferenceError: Please pass `features` or at least one example when writing data
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "myenv/lib/python3.8/site-packages/multiprocess/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "myenv/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 1353, in
_write_generator_to_queue
for i, result in enumerate(func(**kwargs)):
File "myenv/lib/python3.8/site-packages/datasets/builder.py", line 1646, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
"""
The above exception was the direct cause of the following exception:
│ │
│ myenv/lib/python3.8/site-packages/datasets/utils/py_utils. │
│ py:1377 in <listcomp> │
│ │
│ 1374 │ │ │ │ if all(async_result.ready() for async_result in async_results) and queue │
│ 1375 │ │ │ │ │ break │
│ 1376 │ │ # we get the result in case there's an error to raise │
│ ❱ 1377 │ │ [async_result.get() for async_result in async_results] │
│ 1378 │
│ │
│ ╭──────────────────────────────── locals ─────────────────────────────────╮ │
│ │ .0 = <list_iterator object at 0x7f2cc1f0ce20> │ │
│ │ async_result = <multiprocess.pool.ApplyResult object at 0x7f2cc1f79c10> │ │
│ ╰─────────────────────────────────────────────────────────────────────────╯ │
│ │
│ myenv/lib/python3.8/site-packages/multiprocess/pool.py:771 │
│ in get │
│ │
│ 768 │ │ if self._success: │
│ 769 │ │ │ return self._value │
│ 770 │ │ else: │
│ ❱ 771 │ │ │ raise self._value │
│ 772 │ │
│ 773 │ def _set(self, i, obj): │
│ 774 │ │ self._success, self._value = obj │
│ │
│ ╭────────────────────────────── locals ──────────────────────────────╮ │
│ │ self = <multiprocess.pool.ApplyResult object at 0x7f2cc1f79c10> │ │
│ │ timeout = None │ │
│ ╰────────────────────────────────────────────────────────────────────╯ │
DatasetGenerationError: An error occurred while generating the dataset
```
### Steps to reproduce the bug
same as above
### Expected behavior
It should handle the error and continue reading the remaining files.
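For reference, here is a sketch (my own, not from the linked example) that catches the error per line, so a single corrupted line skips only itself rather than aborting the whole file; note the build can still fail with `SchemaInferenceError` if no file yields any example at all:
```python
import json
import logging

logger = logging.getLogger(__name__)

def _generate_examples(self, filepaths):
    id_ = 0
    for filepath in filepaths:
        with open(filepath, "r") as f:
            for line in f:
                try:
                    json_obj = json.loads(line)
                except json.JSONDecodeError as exc:
                    # log and skip the corrupted line instead of re-raising
                    logger.error(f"skipping corrupted line in {filepath}: {exc}")
                    continue
                yield id_, json_obj
                id_ += 1
```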
### Environment info
python 3.9 | {
"avatar_url": "https://avatars.githubusercontent.com/u/68266028?v=4",
"events_url": "https://api.github.com/users/hahmad2008/events{/privacy}",
"followers_url": "https://api.github.com/users/hahmad2008/followers",
"following_url": "https://api.github.com/users/hahmad2008/following{/other_user}",
"gists_url": "https://api.github.com/users/hahmad2008/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hahmad2008",
"id": 68266028,
"login": "hahmad2008",
"node_id": "MDQ6VXNlcjY4MjY2MDI4",
"organizations_url": "https://api.github.com/users/hahmad2008/orgs",
"received_events_url": "https://api.github.com/users/hahmad2008/received_events",
"repos_url": "https://api.github.com/users/hahmad2008/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hahmad2008/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hahmad2008/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hahmad2008"
} | https://api.github.com/repos/huggingface/datasets/issues/7061/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7061/timeline | open | false | 7,061 | null | null | null | false |
2,423,188,419 | https://api.github.com/repos/huggingface/datasets/issues/7060 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7060/events | [] | null | 2024-07-23T13:28:44Z | [] | https://github.com/huggingface/datasets/pull/7060 | NONE | null | false | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7060). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | WebDataset BuilderConfig | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7060/reactions"
} | PR_kwDODunzps52G71g | {
"diff_url": "https://github.com/huggingface/datasets/pull/7060.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7060",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7060.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7060"
} | 2024-07-22T15:41:07Z | https://api.github.com/repos/huggingface/datasets/issues/7060/comments | This PR adds `WebDatasetConfig`.
Closes #7055 | {
"avatar_url": "https://avatars.githubusercontent.com/u/106811348?v=4",
"events_url": "https://api.github.com/users/hlky/events{/privacy}",
"followers_url": "https://api.github.com/users/hlky/followers",
"following_url": "https://api.github.com/users/hlky/following{/other_user}",
"gists_url": "https://api.github.com/users/hlky/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hlky",
"id": 106811348,
"login": "hlky",
"node_id": "U_kgDOBl3P1A",
"organizations_url": "https://api.github.com/users/hlky/orgs",
"received_events_url": "https://api.github.com/users/hlky/received_events",
"repos_url": "https://api.github.com/users/hlky/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hlky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hlky/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hlky"
} | https://api.github.com/repos/huggingface/datasets/issues/7060/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7060/timeline | closed | false | 7,060 | null | 2024-07-23T13:28:44Z | null | true |
2,422,827,892 | https://api.github.com/repos/huggingface/datasets/issues/7059 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7059/events | [] | null | 2024-07-22T13:02:53Z | [] | https://github.com/huggingface/datasets/issues/7059 | NONE | null | null | null | [] | None values are skipped when reading jsonl in subobjects | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7059/reactions"
} | I_kwDODunzps6QaWt0 | null | 2024-07-22T13:02:42Z | https://api.github.com/repos/huggingface/datasets/issues/7059/comments | ### Describe the bug
I have been fighting against my machine since this morning only to find out this is some kind of a bug.
When loading a dataset composed of `metadata.jsonl`, if you have nullable values (Optional[str]), they can be ignored by the parser, shifting things around.
E.g., here are two versions of the same dataset:
[not-buggy.tar.gz](https://github.com/user-attachments/files/16333532/not-buggy.tar.gz)
[buggy.tar.gz](https://github.com/user-attachments/files/16333553/buggy.tar.gz)
### Steps to reproduce the bug
1. Load the `buggy.tar.gz` dataset
2. Print the baselines via `dts = load_dataset("./data")["train"][0]["baselines"]`
3. Load the `not-buggy.tar.gz` dataset
4. Print the baselines via `dts = load_dataset("./data")["train"][0]["baselines"]` (see the snippet below)
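For reference, the lookup used in steps 2 and 4 (paths as in the archives above):
```python
from datasets import load_dataset

dts = load_dataset("./data")["train"]
print(dts[0]["baselines"])
```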
### Expected behavior
Both should have 4 baseline entries:
1. Buggy should have None followed by three lists
2. Non-Buggy should have four lists, and the first one should be an empty list.
Case 1 (buggy) does not work; case 2 (non-buggy) works, even though `None` is accepted in positions other than the first one.
### Environment info
- `datasets` version: 2.19.1
- Platform: Linux-6.5.0-44-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.23.0
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.3.1
| {
"avatar_url": "https://avatars.githubusercontent.com/u/1929830?v=4",
"events_url": "https://api.github.com/users/PonteIneptique/events{/privacy}",
"followers_url": "https://api.github.com/users/PonteIneptique/followers",
"following_url": "https://api.github.com/users/PonteIneptique/following{/other_user}",
"gists_url": "https://api.github.com/users/PonteIneptique/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/PonteIneptique",
"id": 1929830,
"login": "PonteIneptique",
"node_id": "MDQ6VXNlcjE5Mjk4MzA=",
"organizations_url": "https://api.github.com/users/PonteIneptique/orgs",
"received_events_url": "https://api.github.com/users/PonteIneptique/received_events",
"repos_url": "https://api.github.com/users/PonteIneptique/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/PonteIneptique/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PonteIneptique/subscriptions",
"type": "User",
"url": "https://api.github.com/users/PonteIneptique"
} | https://api.github.com/repos/huggingface/datasets/issues/7059/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7059/timeline | open | false | 7,059 | null | null | null | false |
2,422,560,355 | https://api.github.com/repos/huggingface/datasets/issues/7058 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7058/events | [] | null | 2024-07-22T10:49:20Z | [] | https://github.com/huggingface/datasets/issues/7058 | CONTRIBUTOR | null | null | null | [] | New feature type: Document | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7058/reactions"
} | I_kwDODunzps6QZVZj | null | 2024-07-22T10:49:20Z | https://api.github.com/repos/huggingface/datasets/issues/7058/comments | It would be useful for PDF.
https://github.com/huggingface/dataset-viewer/issues/2991#issuecomment-2242656069 | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
} | https://api.github.com/repos/huggingface/datasets/issues/7058/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7058/timeline | open | false | 7,058 | null | null | null | false |
2,422,498,520 | https://api.github.com/repos/huggingface/datasets/issues/7057 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7057/events | [] | null | 2024-07-22T10:34:14Z | [] | https://github.com/huggingface/datasets/pull/7057 | CONTRIBUTOR | null | false | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7057). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005617 / 0.011353 (-0.005736) | 0.003994 / 0.011008 (-0.007014) | 0.064188 / 0.038508 (0.025680) | 0.030939 / 0.023109 (0.007829) | 0.248712 / 0.275898 (-0.027186) | 0.273417 / 0.323480 (-0.050063) | 0.003340 / 0.007986 (-0.004646) | 0.002823 / 0.004328 (-0.001506) | 0.049985 / 0.004250 (0.045734) | 0.046872 / 0.037052 (0.009820) | 0.254554 / 0.258489 (-0.003935) | 0.288142 / 0.293841 (-0.005699) | 0.030540 / 0.128546 (-0.098006) | 0.012295 / 0.075646 (-0.063352) | 0.204589 / 0.419271 (-0.214683) | 0.036383 / 0.043533 (-0.007150) | 0.254277 / 0.255139 (-0.000862) | 0.267962 / 0.283200 (-0.015237) | 0.021173 / 0.141683 (-0.120510) | 1.126933 / 1.452155 (-0.325221) | 1.190841 / 1.492716 (-0.301875) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093622 / 0.018006 (0.075616) | 0.297967 / 0.000490 (0.297477) | 0.000241 / 0.000200 (0.000041) | 0.000057 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018623 / 0.037411 (-0.018789) | 0.062210 / 0.014526 (0.047684) | 0.074369 / 0.176557 (-0.102187) | 0.120585 / 0.737135 (-0.616550) | 0.075966 / 0.296338 (-0.220372) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285440 / 0.215209 (0.070231) | 2.804275 / 2.077655 (0.726620) | 1.484539 / 1.504120 (-0.019580) | 1.366587 / 1.541195 (-0.174607) | 1.355269 / 
1.468490 (-0.113221) | 0.722289 / 4.584777 (-3.862488) | 2.344567 / 3.745712 (-1.401145) | 2.831779 / 5.269862 (-2.438083) | 1.899800 / 4.565676 (-2.665876) | 0.078657 / 0.424275 (-0.345619) | 0.005188 / 0.007607 (-0.002420) | 0.340150 / 0.226044 (0.114106) | 3.390915 / 2.268929 (1.121986) | 1.836473 / 55.444624 (-53.608152) | 1.520718 / 6.876477 (-5.355759) | 1.723448 / 2.142072 (-0.418624) | 0.810281 / 4.805227 (-3.994946) | 0.136008 / 6.500664 (-6.364657) | 0.044005 / 0.075469 (-0.031465) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.989982 / 1.841788 (-0.851806) | 11.671075 / 8.074308 (3.596767) | 9.805471 / 10.191392 (-0.385921) | 0.141637 / 0.680424 (-0.538787) | 0.014551 / 0.534201 (-0.519650) | 0.310077 / 0.579283 (-0.269206) | 0.266838 / 0.434364 (-0.167526) | 0.348894 / 0.540337 (-0.191444) | 0.451530 / 1.386936 (-0.935406) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005639 / 0.011353 (-0.005713) | 0.003935 / 0.011008 (-0.007074) | 0.050147 / 0.038508 (0.011639) | 0.031023 / 0.023109 (0.007914) | 0.268361 / 0.275898 (-0.007537) | 0.295774 / 0.323480 (-0.027706) | 0.005029 / 0.007986 (-0.002956) | 0.002832 / 0.004328 (-0.001496) | 0.049806 / 0.004250 (0.045556) | 0.040515 / 0.037052 (0.003463) | 0.283298 / 0.258489 (0.024809) | 0.321946 / 0.293841 (0.028105) | 0.031833 / 0.128546 (-0.096714) | 0.012137 / 0.075646 (-0.063510) | 0.060510 / 0.419271 (-0.358761) | 0.033754 / 0.043533 (-0.009779) | 0.268079 / 0.255139 (0.012940) | 0.292468 / 0.283200 (0.009268) | 0.017268 / 0.141683 (-0.124414) | 1.159922 / 1.452155 (-0.292233) | 1.188961 / 1.492716 (-0.303755) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096930 / 0.018006 (0.078923) | 0.306921 / 0.000490 (0.306431) | 0.000226 / 0.000200 (0.000026) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022811 / 0.037411 (-0.014600) | 0.077298 / 0.014526 (0.062772) | 0.088949 / 0.176557 (-0.087608) | 0.130763 / 0.737135 (-0.606372) | 0.090429 / 0.296338 (-0.205909) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300866 / 0.215209 (0.085657) | 2.963375 / 2.077655 (0.885720) | 1.595753 / 1.504120 (0.091633) | 1.463091 / 1.541195 (-0.078104) | 1.481182 / 1.468490 (0.012692) | 0.712939 / 4.584777 (-3.871838) | 0.956694 / 3.745712 (-2.789018) | 2.802890 / 5.269862 (-2.466971) | 1.891092 / 4.565676 (-2.674585) | 0.077570 / 0.424275 (-0.346706) | 0.005536 / 0.007607 (-0.002072) | 0.351958 / 0.226044 (0.125914) | 3.459114 / 2.268929 (1.190185) | 1.989488 / 55.444624 (-53.455137) | 1.676271 / 6.876477 (-5.200205) | 1.808073 / 2.142072 (-0.334000) | 0.786920 / 4.805227 (-4.018307) | 0.132220 / 6.500664 (-6.368444) | 0.041602 / 0.075469 (-0.033867) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.031759 / 1.841788 (-0.810029) | 12.007776 / 8.074308 (3.933467) | 10.568254 / 10.191392 (0.376862) | 0.143176 / 0.680424 (-0.537248) | 0.015556 / 0.534201 (-0.518645) | 0.304484 / 0.579283 (-0.274799) | 0.125508 / 0.434364 (-0.308855) | 0.340017 / 0.540337 (-0.200320) | 0.434285 / 1.386936 (-0.952651) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#16fa4421f44b22bbbc607f379a93f45af468d1fc \"CML watermark\")\n"
] | Update load_hub.mdx | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7057/reactions"
} | PR_kwDODunzps52EjGC | {
"diff_url": "https://github.com/huggingface/datasets/pull/7057.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7057",
"merged_at": "2024-07-22T10:28:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7057.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7057"
} | 2024-07-22T10:17:46Z | https://api.github.com/repos/huggingface/datasets/issues/7057/comments | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
} | https://api.github.com/repos/huggingface/datasets/issues/7057/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7057/timeline | closed | false | 7,057 | null | 2024-07-22T10:28:10Z | null | true |
2,422,192,257 | https://api.github.com/repos/huggingface/datasets/issues/7056 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7056/events | [] | null | 2024-07-22T15:37:01Z | [] | https://github.com/huggingface/datasets/pull/7056 | CONTRIBUTOR | null | false | null | [
"Oh cool !\r\n\r\nThe time it takes to resume depends on the expected maximum distance in this case right ? Do you know its relationship with $B$ ?\r\n\r\nIn your test it already as high as 15k for $B=1024$, which is ok for text datasets but is maybe not ideal for datasets with heavy samples like audio/image/video ? Though for heavy samples datasets the buffer size is generally much smaller to avoid memory issues.\r\n\r\nMaybe we could just add a warning message on resuming to tell the user that it might take some time to recover the shuffle buffer (with a progress bar maybe ?), and have the option to stop + re-run with an env variable to disable shuffle buffer recovering ? WDYT ?",
"> The time it takes to resume depends on the expected maximum distance in this case right ? Do you know its relationship with $B$\r\n\r\nHi, I created a histogram to visualize the distances in the simulation exp.\r\n![](https://github.com/user-attachments/assets/464f7a86-051c-412f-b48a-461f7e7c9f20)\r\nI think there is no guarantee as to when the oldest example will be yielded. It could stay in the buffer until the entire shard is consumed. However, this can be rare, and in most cases, the pushed examples will be yielded very quickly. In the figure above, most examples are yielded within $2B$ steps. Things will improve if the dataset is split into enough shards and each shard is not too large.\r\n\r\nI agree that we may need to add some warnings or provide some options to allow users to make their own choices.",
"Maybe there's a middle ground between rebuilding the buffer from scratch and storing the entire buffer, but the logic is a bit complicated and takes time to implement. At least for now, we have a way to make shuffled `IterableDataset` resumable :)",
"@lhoestq I'm not sure if it's ok to use progress bar when having multiple workers. \r\nHow about passing an arg `resumable=True` to `IterableDataset.shuffle` to allow for controling of the behaviors?",
"I feel like the default behavior should ideally be fast and perfect resuming.\r\n\r\nLoading from disk is a good option for this (although it's not always possible to serialize the content of the buffer, in that case the buffer would restart empty and we can show a warning). \r\n\r\nThe state_dict() would be part of the training state_dict that is saved to disk along with the model and optimizer anyway. Cc @muellerzr from that worked on storing training state_dicts for the `accelerate` lib, in case you have an opinion.\r\n\r\nI also feel like it is simpler and more intuitive to users. It doesn't require to explain why we need to stream a lot of data just to recover a buffer.\r\n\r\n> Maybe there's a middle ground between rebuilding the buffer from scratch and storing the entire buffer, but the logic is a bit complicated and takes time to implement.\r\n\r\ndefinitely, and it would also make things even harder to understand to users",
"@lhoestq \r\n> Loading from disk is a good option for this (although it's not always possible to serialize the content of the buffer, in that case the buffer would restart empty and we can show a warning).\r\nThe state_dict() would be part of the training state_dict that is saved to disk along with the model and optimizer anyway. Cc @muellerzr from that worked on storing training state_dicts for the accelerate lib, in case you have an opinion.\r\nI also feel like it is simpler and more intuitive to users. It doesn't require to explain why we need to stream a lot of data just to recover a buffer.\r\n\r\nYea, agree with you. But here's the thing: saving buffers as state dict can get pretty tricky. When it comes to tokenized text data, working with multi-worker shuffle can take around x hundreds GB of memories in my case. That's just not feasible for most machine envs out there, and can be more severe for audio/video data.\r\n\r\nAlso, serializing the buffer does take a major toll on performance, and in my experience, I've had to lean heavily on numpy/torch tensor operations to manage those tokenized text data efficiently, which isn't easily transferable to other scenarios—it's kind of a custom fix that works for now, but it's not a one-size-fits-all solution. So, for me it's not that ideal to directly serialize the buffer content with those limitations.\r\n\r\n",
"> When it comes to tokenized text data, working with multi-worker shuffle can taken around x hundreds GB memories in my case.\r\n\r\nit's kinda close to the size of a model + optimizer no ?\r\n\r\nAnyway that makes sense and adding the feature to recover a buffer shuffle (at least as an opt-in for now, we can decide on the default later based on users feedback and experience).\r\n\r\nAre you ok with adding `buffer_resuming_mode=` to `.shuffle()` to enable buffer recovering using your method with `buffer_resuming_mode=\"recover_from_source\"` ? (feel free to suggest other names for the parameter and value)",
"@lhoestq \r\n> Are you ok with adding buffer_resuming_mode= to .shuffle() to enable buffer recovering using your method with buffer_resuming_mode=\"recover_from_source\" ? (feel free to suggest other names for the parameter and value)\r\n\r\nOf course, appreciate your feedbacks."
] | Make `BufferShuffledExamplesIterable` resumable | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7056/reactions"
} | PR_kwDODunzps52DgOu | {
"diff_url": "https://github.com/huggingface/datasets/pull/7056.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7056",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7056.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7056"
} | 2024-07-22T07:50:02Z | https://api.github.com/repos/huggingface/datasets/issues/7056/comments | This PR aims to implement a resumable `BufferShuffledExamplesIterable`.
Instead of saving the entire buffer content, which is very memory-intensive, the newly implemented `BufferShuffledExamplesIterable` saves only the minimal state necessary for recovery, e.g., the random generator states and the state of the first example in the buffer dict.
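Conceptually, the saved buffer state has roughly this shape (a sketch assembled from the `buffer_state_dict` printed in the output further below; the values and the simplified `first_state` are copied from that run):
```python
import numpy as np

rng = np.random.default_rng(42)

buffer_state_dict = {
    "num_taken": 6560,              # examples already yielded from the buffer
    "global_example_idx": 356,      # stream position of the oldest buffered example
    "index_offset": 0,
    "first_state": {"shard_idx": 0, "shard_example_idx": 1000},  # where to restart the source iterator
    "bit_generator_state": rng.bit_generator.state,  # RNG state to replay the same random picks
}
```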
The idea is that since the buffer size is limited, even if the entire buffer is discarded, we can rebuild it as long as the state of the oldest example is recorded. For buffer size $B$, the expected distance between when an example is pushed and when it is yielded is
$d = \sum_{k=1}^{\infty} k \cdot \frac{1}{B} \left(1 - \frac{1}{B}\right)^{k-1} = B$.
Simulation experiments support these claims:
```py
from random import randint
BUFFER_SIZE = 1024
dists = []
buffer = []
for i in range(10000000):
    if i < BUFFER_SIZE:
        buffer.append(i)
    else:
        index = randint(0, BUFFER_SIZE - 1)
        dists.append(i - buffer[index])
        buffer[index] = i
print(f"MIN DIST: {min(dists)}\nMAX DIST: {max(dists)}\nAVG DIST: {sum(dists) / len(dists):.2f}\n")
```
which produces the following output:
```py
MIN DIST: 1
MAX DIST: 15136
AVG DIST: 1023.95
```
The overall time for reconstructing the buffer and recovery should not be too long.
The following code mimics the cases of resuming online tokenization by `datasets` and `StatefulDataLoader` under distributed scenarios,
```py
import pickle
import time
from itertools import chain
from typing import Any, Dict, List
import torch
from datasets import load_dataset
from torchdata.stateful_dataloader import StatefulDataLoader
from tqdm import tqdm
from transformers import AutoTokenizer, DataCollatorForLanguageModeling
tokenizer = AutoTokenizer.from_pretrained('fla-hub/gla-1.3B-100B')
tokenizer.pad_token = tokenizer.eos_token
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
torch.manual_seed(42)
def tokenize(examples: Dict[str, List[Any]]) -> Dict[str, List[List[int]]]:
    input_ids = tokenizer(examples['text'])['input_ids']
    input_ids = list(chain(*input_ids))
    total_length = len(input_ids)
    chunk_size = 2048
    total_length = (total_length // chunk_size) * chunk_size
    # the last chunk smaller than chunk_size will be discarded
    return {'input_ids': [input_ids[i: i+chunk_size] for i in range(0, total_length, chunk_size)]}
batch_size = 16
num_workers = 5
context_length = 2048
rank = 1
world_size = 32
prefetch_factor = 2
steps = 2048
path = 'fla-hub/slimpajama-test'
dataset = load_dataset(
    path=path,
    split='train',
    streaming=True,
    trust_remote_code=True
)
dataset = dataset.map(tokenize, batched=True, remove_columns=next(iter(dataset)).keys())
dataset = dataset.shuffle(seed=42)
loader = StatefulDataLoader(dataset=dataset,
                            batch_size=batch_size,
                            collate_fn=data_collator,
                            num_workers=num_workers,
                            persistent_workers=False,
                            prefetch_factor=prefetch_factor)
start = time.time()
for i, batch in tqdm(enumerate(loader)):
    if i == 0:
        print(f'{i}\n{batch["input_ids"]}')
    if i == steps - 1:
        print(f'{i}\n{batch["input_ids"]}')
        state_dict = loader.state_dict()
    if i == steps:
        print(f'{i}\n{batch["input_ids"]}')
        break
print(f"{time.time() - start:.2f}s elapsed")
print(f"{len(pickle.dumps(state_dict)) / 1024**2:.2f}MB states in total")
for worker in state_dict['_snapshot']['_worker_snapshots'].keys():
    print(f"{worker} {len(pickle.dumps(state_dict['_snapshot']['_worker_snapshots'][worker])) / 1024**2:.2f}MB")
print(state_dict['_snapshot']['_worker_snapshots']['worker_0']['dataset_state'])
loader = StatefulDataLoader(dataset=dataset,
                            batch_size=batch_size,
                            collate_fn=data_collator,
                            num_workers=num_workers,
                            persistent_workers=False,
                            prefetch_factor=prefetch_factor)
print("Loading state dict")
loader.load_state_dict(state_dict)
start = time.time()
for batch in loader:
    print(batch['input_ids'])
    break
print(f"{time.time() - start:.2f}s elapsed")
```
and the outputs are
```py
0
tensor([[ 909, 395, 19082, ..., 13088, 16232, 395],
[ 601, 28705, 28770, ..., 28733, 923, 288],
[21753, 15071, 13977, ..., 9369, 28723, 415],
...,
[21763, 28751, 20300, ..., 28781, 28734, 4775],
[ 354, 396, 10214, ..., 298, 429, 28770],
[ 333, 6149, 28768, ..., 2773, 340, 351]])
2047
tensor([[28723, 415, 3889, ..., 272, 3065, 2609],
[ 403, 3214, 3629, ..., 403, 21163, 16434],
[28723, 13, 28749, ..., 28705, 28750, 28734],
...,
[ 2778, 2251, 28723, ..., 354, 684, 429],
[ 5659, 298, 1038, ..., 5290, 297, 22153],
[ 938, 28723, 1537, ..., 9123, 28733, 12154]])
2048
tensor([[ 769, 278, 12531, ..., 28721, 19309, 28739],
[ 415, 23347, 622, ..., 3937, 2426, 28725],
[28745, 4345, 28723, ..., 338, 28725, 583],
...,
[ 1670, 28709, 5809, ..., 28734, 28760, 393],
[ 340, 1277, 624, ..., 325, 28790, 1329],
[ 523, 1144, 3409, ..., 359, 359, 17422]])
65.97s elapsed
0.00MB states in total
worker_0 0.00MB
worker_1 0.00MB
worker_2 0.00MB
worker_3 0.00MB
worker_4 0.00MB
{'ex_iterable': {'ex_iterable': {'shard_idx': 0, 'shard_example_idx': 14000}, 'num_examples_since_previous_state': 166, 'previous_state_example_idx': 7394, 'previous_state': {'shard_idx': 0, 'shard_example_idx': 13000}}, 'num_taken': 6560, 'global_example_idx': 7560, 'buffer_state_dict': {'num_taken': 6560, 'global_example_idx': 356, 'index_offset': 0, 'first_state': {'ex_iterable': {'shard_idx': 0, 'shard_example_idx': 1000}, 'num_examples_since_previous_state': 356, 'previous_state_example_idx': 0, 'previous_state': {'shard_idx': 0, 'shard_example_idx': 0}}, 'bit_generator_state': {'state': {'state': 274674114334540486603088602300644985544, 'inc': 332724090758049132448979897138935081983}, 'bit_generator': 'PCG64', 'has_uint32': 0, 'uinteger': 0}}}
Loading state dict
tensor([[ 769, 278, 12531, ..., 28721, 19309, 28739],
[ 415, 23347, 622, ..., 3937, 2426, 28725],
[28745, 4345, 28723, ..., 338, 28725, 583],
...,
[ 1670, 28709, 5809, ..., 28734, 28760, 393],
[ 340, 1277, 624, ..., 325, 28790, 1329],
[ 523, 1144, 3409, ..., 359, 359, 17422]])
24.60s elapsed
```
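Distilled, the checkpoint/resume pattern exercised above is the following (a minimal sketch of my test script, assuming torchdata's `StatefulDataLoader` prototype; variable names as in the script):
```python
from torchdata.stateful_dataloader import StatefulDataLoader

loader = StatefulDataLoader(dataset, batch_size=batch_size, num_workers=num_workers)
state = None
for step, batch in enumerate(loader):
    if step == steps - 1:
        state = loader.state_dict()  # snapshot mid-stream
    if step == steps:
        break

# later (or after a crash): rebuild the loader and resume from the snapshot
loader = StatefulDataLoader(dataset, batch_size=batch_size, num_workers=num_workers)
loader.load_state_dict(state)
next(iter(loader))  # yields the same batch as step `steps` above
```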
I'm not sure whether this PR complies with the `datasets` code style. I'm looking for your help @lhoestq, and I'm happy to improve the code further if you have any suggestions.
| {
"avatar_url": "https://avatars.githubusercontent.com/u/18402347?v=4",
"events_url": "https://api.github.com/users/yzhangcs/events{/privacy}",
"followers_url": "https://api.github.com/users/yzhangcs/followers",
"following_url": "https://api.github.com/users/yzhangcs/following{/other_user}",
"gists_url": "https://api.github.com/users/yzhangcs/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yzhangcs",
"id": 18402347,
"login": "yzhangcs",
"node_id": "MDQ6VXNlcjE4NDAyMzQ3",
"organizations_url": "https://api.github.com/users/yzhangcs/orgs",
"received_events_url": "https://api.github.com/users/yzhangcs/received_events",
"repos_url": "https://api.github.com/users/yzhangcs/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yzhangcs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yzhangcs/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yzhangcs"
} | https://api.github.com/repos/huggingface/datasets/issues/7056/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7056/timeline | open | false | 7,056 | null | null | null | true |
2,421,708,891 | https://api.github.com/repos/huggingface/datasets/issues/7055 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7055/events | [] | null | 2024-07-24T13:26:30Z | [] | https://github.com/huggingface/datasets/issues/7055 | NONE | completed | null | null | [
"Since `datasets` uses is built on Arrow to store the data, it requires each sample to have the same columns.\r\n\r\nThis can be fixed by specifyign in advance the name of all the possible columns in the `dataset_info` in YAML, and missing values will be `None`",
"Thanks. This currently doesn't work for WebDataset because there's no `BuilderConfig` with `features` and in turn `_info` is missing `features=self.config.features`. I'll prepare a PR to fix this.\r\n\r\nNote it may be useful to add the [expected format of `features`](https://github.com/huggingface/datasets/blob/16fa4421f44b22bbbc607f379a93f45af468d1fc/src/datasets/features/features.py#L1757) to the documentation for [`Builder Parameters`](https://huggingface.co/docs/datasets/repository_structure#builder-parameters).\r\n",
"Oh good catch ! thanks\r\n\r\n> Note it may be useful to add the [expected format of features](https://github.com/huggingface/datasets/blob/16fa4421f44b22bbbc607f379a93f45af468d1fc/src/datasets/features/features.py#L1757) to the documentation for [Buil](https://huggingface.co/docs/datasets/repository_structure#builder-parameters)\r\n\r\nGood idea, let me open a PR",
"#7060 ",
"Actually I just tried with `datasets` on the `main` branch and having `features` defined in `dataset_info` worked for me\r\n\r\n```python\r\n>>> list(load_dataset(\"/Users/quentinlhoest/tmp\", streaming=True, split=\"train\"))\r\n[{'txt': 'hello there\\n', 'other': None}]\r\n```\r\nwhere `tmp` contains data.tar with \"hello there\\n\" in a text file and the README.md:\r\n```\r\n---\r\ndataset_info:\r\n features:\r\n - name: txt\r\n dtype: string\r\n - name: other\r\n dtype: string\r\n---\r\n\r\nThis is a dataset card\r\n```\r\n\r\nWhat error did you get when you tried to specify the columns in `dataset_info` ?",
"If you review the changes in #7060 you'll note that `features` are not passed to `DatasetInfo`.\r\n\r\nIn your case the features are being extracted by [this code](https://github.com/huggingface/datasets/blob/e83d6fa574710fcb44e341087239d2687183f62b/src/datasets/packaged_modules/webdataset/webdataset.py#L72-L98).\r\n\r\nTry with the `Steps to reproduce the bug`. It's the same error mentioned in `Describe the bug` because `features` are not passed to `DatasetInfo`.\r\n\r\n`features` are [not used](https://github.com/huggingface/datasets/blob/e83d6fa574710fcb44e341087239d2687183f62b/src/datasets/builder.py#L365-L366) when the `BuilderConfig` has no `features` attribute. `WebDataset` uses the default [`BuilderConfig`](https://github.com/huggingface/datasets/blob/e83d6fa574710fcb44e341087239d2687183f62b/src/datasets/builder.py#L101-L124).\r\n\r\nThere is a [warning](https://github.com/huggingface/datasets/blob/e83d6fa574710fcb44e341087239d2687183f62b/src/datasets/load.py#L640-L648) that `features` are ignored.\r\n\r\nNote that as mentioned in `Describe the bug` this could also be resolved by removing the check [here](https://github.com/huggingface/datasets/blob/e83d6fa574710fcb44e341087239d2687183f62b/src/datasets/packaged_modules/webdataset/webdataset.py#L76-L80) because Arrow actually handles this itself, Arrow sets any missing fields to `None`, at least in my case.",
"Note for anyone else who encounters this issue, every dataset type except folder-based types supported features in the [documented](https://huggingface.co/docs/datasets/repository_structure#builder-parameters) manner; [Arrow](https://github.com/huggingface/datasets/blob/e83d6fa574710fcb44e341087239d2687183f62b/src/datasets/packaged_modules/arrow/arrow.py#L15-L21), [csv](https://github.com/huggingface/datasets/blob/e83d6fa574710fcb44e341087239d2687183f62b/src/datasets/packaged_modules/csv/csv.py#L25-L68), [generator](https://github.com/huggingface/datasets/blob/e83d6fa574710fcb44e341087239d2687183f62b/src/datasets/packaged_modules/generator/generator.py#L8-L19), [json](https://github.com/huggingface/datasets/blob/e83d6fa574710fcb44e341087239d2687183f62b/src/datasets/packaged_modules/json/json.py#L42-L52), [pandas](https://github.com/huggingface/datasets/blob/e83d6fa574710fcb44e341087239d2687183f62b/src/datasets/packaged_modules/pandas/pandas.py#L14-L20), [parquet](https://github.com/huggingface/datasets/blob/e83d6fa574710fcb44e341087239d2687183f62b/src/datasets/packaged_modules/parquet/parquet.py#L16-L24), [spark](https://github.com/huggingface/datasets/blob/e83d6fa574710fcb44e341087239d2687183f62b/src/datasets/packaged_modules/spark/spark.py#L31-L37), [sql](https://github.com/huggingface/datasets/blob/e83d6fa574710fcb44e341087239d2687183f62b/src/datasets/packaged_modules/sql/sql.py#L24-L35) and [text](https://github.com/huggingface/datasets/blob/e83d6fa574710fcb44e341087239d2687183f62b/src/datasets/packaged_modules/text/text.py#L18-L27). `WebDataset` is different and requires [`dataset_info` which is vaguely documented](https://huggingface.co/docs/datasets/dataset_script#optional-generate-dataset-metadata) under dataset loading scripts.",
"Thanks for explaining. I see the Dataset Viewer is still failing - I'll update `datasets` in the Viewer to fix this"
] | WebDataset with different prefixes are unsupported | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7055/reactions"
} | I_kwDODunzps6QWFhb | null | 2024-07-22T01:14:19Z | https://api.github.com/repos/huggingface/datasets/issues/7055/comments | ### Describe the bug
Consider a WebDataset with multiple images for each item where the number of images may vary: [example](https://huggingface.co/datasets/bigdata-pw/fashion-150k)
Due to this [code](https://github.com/huggingface/datasets/blob/87f4c2088854ff33e817e724e75179e9975c1b02/src/datasets/packaged_modules/webdataset/webdataset.py#L76-L80), an error is raised.
```
The TAR archives of the dataset should be in WebDataset format, but the files in the archive don't share the same prefix or the same types.
```
The purpose of this check is unclear because PyArrow supports different keys.
Removing the check allows the dataset to be loaded and there's no issue when iterating through the dataset.
```
>>> from datasets import load_dataset
>>> path = "shards/*.tar"
>>> dataset = load_dataset("webdataset", data_files={"train": path}, split="train", streaming=True)
Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 152/152 [00:00<00:00, 56458.93it/s]
>>> dataset
IterableDataset({
features: ['__key__', '__url__', '1.jpg', '2.jpg', '3.jpg', '4.jpg', 'json'],
n_shards: 152
})
```
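For what it's worth, here is a minimal sketch (mine, not code from `datasets`) showing that PyArrow itself unifies rows with heterogeneous keys, filling missing fields with nulls:
```python
import pyarrow as pa

# rows with different key sets, as in a WebDataset item with 1 vs. 2 images
table = pa.Table.from_pylist([
    {"__key__": "a", "1.jpg": b"...", "json": b"{}"},
    {"__key__": "b", "1.jpg": b"...", "2.jpg": b"...", "json": b"{}"},
])
print(table.schema)       # union of all keys
print(table["2.jpg"][0])  # null for the row that lacks the field
```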
### Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("bigdata-pw/fashion-150k")
```
### Expected behavior
Dataset loads without error
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-5.14.0-467.el9.x86_64-x86_64-with-glibc2.34
- Python version: 3.9.19
- `huggingface_hub` version: 0.23.4
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/106811348?v=4",
"events_url": "https://api.github.com/users/hlky/events{/privacy}",
"followers_url": "https://api.github.com/users/hlky/followers",
"following_url": "https://api.github.com/users/hlky/following{/other_user}",
"gists_url": "https://api.github.com/users/hlky/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hlky",
"id": 106811348,
"login": "hlky",
"node_id": "U_kgDOBl3P1A",
"organizations_url": "https://api.github.com/users/hlky/orgs",
"received_events_url": "https://api.github.com/users/hlky/received_events",
"repos_url": "https://api.github.com/users/hlky/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hlky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hlky/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hlky"
} | https://api.github.com/repos/huggingface/datasets/issues/7055/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7055/timeline | closed | false | 7,055 | null | 2024-07-23T13:28:46Z | null | false |
2,418,548,995 | https://api.github.com/repos/huggingface/datasets/issues/7054 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7054/events | [] | null | 2024-07-23T13:25:13Z | [] | https://github.com/huggingface/datasets/pull/7054 | CONTRIBUTOR | null | false | null | [
"Cool ! Thanks for diving into it :)\r\n\r\nYour implementation is great and indeed supports shuffling and batching, you just need to additionally account for state_dict (for dataset [checkpointing+resuming](https://huggingface.co/docs/datasets/main/en/use_with_pytorch#checkpoint-and-resume))\r\n\r\nThat being said, I believe the implementation can be made simpler by relying on `IterableDataset.map()` which already implements all this. Maybe something like\r\n\r\n```python\r\n\r\ndef batch(self, batch_size: int, drop_last_batch: bool = False) -> \"IterableDataset\":\r\n def batch(unbatched: dict[str, list]) -> dict[str, list]:\r\n return {k: [v] for k, v in unbatched}\r\n\r\n return self.map(batch, batched=True, batch_size=batch_size, drop_last_batch=drop_last_batch)\r\n```\r\n\r\nAnd this way no need to reimplement everything !\r\n\r\n(my only small concern is that it's not an Arrow-optimized function so it requires the examples to be manipulated as python objects even if the original data is in Arrow format (e.g. when streaming Parquet files) but it's not a big deal and we can see later if we need to optimize this)",
"Thanks a lot for the feedback @lhoestq! I definitely could have saved some time looking into it properly first. 😅 \r\n\r\nImplemented the `.batch()` method, added a proper docsrtring for documentation, and added tests.\r\n\r\nLet me know what you think and if this needs some update.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7054). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Thanks for the feedbak @lhoestq!\r\n\r\nApplied it and referenced the `batched=True` option in the `map` function and highlighted the difference. Hope i got this right.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005181 / 0.011353 (-0.006172) | 0.003714 / 0.011008 (-0.007294) | 0.063060 / 0.038508 (0.024552) | 0.030885 / 0.023109 (0.007776) | 0.239060 / 0.275898 (-0.036838) | 0.262480 / 0.323480 (-0.061000) | 0.004103 / 0.007986 (-0.003883) | 0.002696 / 0.004328 (-0.001632) | 0.048706 / 0.004250 (0.044456) | 0.042577 / 0.037052 (0.005525) | 0.249928 / 0.258489 (-0.008561) | 0.283252 / 0.293841 (-0.010589) | 0.029304 / 0.128546 (-0.099242) | 0.012001 / 0.075646 (-0.063646) | 0.204467 / 0.419271 (-0.214804) | 0.035639 / 0.043533 (-0.007894) | 0.243850 / 0.255139 (-0.011289) | 0.261609 / 0.283200 (-0.021590) | 0.018302 / 0.141683 (-0.123381) | 1.096040 / 1.452155 (-0.356115) | 1.135917 / 1.492716 (-0.356800) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091976 / 0.018006 (0.073970) | 0.296396 / 0.000490 (0.295906) | 0.000203 / 0.000200 (0.000003) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018405 / 0.037411 (-0.019007) | 0.062470 / 0.014526 (0.047944) | 0.073340 / 0.176557 (-0.103216) | 0.119474 / 0.737135 (-0.617661) | 0.075750 / 0.296338 (-0.220588) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279586 / 0.215209 (0.064377) | 2.768542 / 2.077655 (0.690887) | 1.449158 / 1.504120 (-0.054962) | 1.328760 / 1.541195 (-0.212435) | 1.336338 / 
1.468490 (-0.132152) | 0.732582 / 4.584777 (-3.852195) | 2.325558 / 3.745712 (-1.420154) | 2.898077 / 5.269862 (-2.371784) | 1.893107 / 4.565676 (-2.672569) | 0.078788 / 0.424275 (-0.345487) | 0.005273 / 0.007607 (-0.002335) | 0.334887 / 0.226044 (0.108842) | 3.304173 / 2.268929 (1.035244) | 1.834743 / 55.444624 (-53.609882) | 1.527463 / 6.876477 (-5.349014) | 1.538824 / 2.142072 (-0.603249) | 0.785646 / 4.805227 (-4.019581) | 0.134876 / 6.500664 (-6.365788) | 0.042894 / 0.075469 (-0.032575) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.976635 / 1.841788 (-0.865152) | 11.217156 / 8.074308 (3.142848) | 9.616971 / 10.191392 (-0.574421) | 0.127276 / 0.680424 (-0.553148) | 0.014344 / 0.534201 (-0.519857) | 0.301896 / 0.579283 (-0.277387) | 0.259615 / 0.434364 (-0.174749) | 0.340693 / 0.540337 (-0.199645) | 0.429145 / 1.386936 (-0.957791) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005534 / 0.011353 (-0.005819) | 0.003795 / 0.011008 (-0.007213) | 0.049761 / 0.038508 (0.011253) | 0.031311 / 0.023109 (0.008202) | 0.276032 / 0.275898 (0.000134) | 0.297316 / 0.323480 (-0.026164) | 0.004396 / 0.007986 (-0.003590) | 0.002693 / 0.004328 (-0.001635) | 0.049025 / 0.004250 (0.044775) | 0.039707 / 0.037052 (0.002654) | 0.284264 / 0.258489 (0.025775) | 0.319962 / 0.293841 (0.026121) | 0.031842 / 0.128546 (-0.096705) | 0.012192 / 0.075646 (-0.063454) | 0.059895 / 0.419271 (-0.359376) | 0.033676 / 0.043533 (-0.009856) | 0.275917 / 0.255139 (0.020778) | 0.292637 / 0.283200 (0.009437) | 0.017992 / 0.141683 (-0.123691) | 1.199329 / 1.452155 (-0.252826) | 1.259083 / 1.492716 (-0.233633) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092770 / 0.018006 (0.074764) | 0.313363 / 0.000490 (0.312873) | 0.000212 / 0.000200 (0.000013) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022977 / 0.037411 (-0.014434) | 0.076839 / 0.014526 (0.062314) | 0.088289 / 0.176557 (-0.088267) | 0.128625 / 0.737135 (-0.608510) | 0.089348 / 0.296338 (-0.206990) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300881 / 0.215209 (0.085672) | 2.946499 / 2.077655 (0.868845) | 1.599686 / 1.504120 (0.095566) | 1.479332 / 1.541195 (-0.061862) | 1.476910 / 1.468490 (0.008420) | 0.720536 / 4.584777 (-3.864241) | 0.944822 / 3.745712 (-2.800890) | 2.771864 / 5.269862 (-2.497998) | 1.886573 / 4.565676 (-2.679103) | 0.078462 / 0.424275 (-0.345813) | 0.005392 / 0.007607 (-0.002215) | 0.354984 / 0.226044 (0.128939) | 3.516449 / 2.268929 (1.247520) | 1.977033 / 55.444624 (-53.467592) | 1.671922 / 6.876477 (-5.204555) | 1.785755 / 2.142072 (-0.356318) | 0.795330 / 4.805227 (-4.009897) | 0.132895 / 6.500664 (-6.367769) | 0.041178 / 0.075469 (-0.034291) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.031780 / 1.841788 (-0.810008) | 11.855600 / 8.074308 (3.781292) | 10.245599 / 10.191392 (0.054207) | 0.140649 / 0.680424 (-0.539775) | 0.015332 / 0.534201 (-0.518869) | 0.299402 / 0.579283 (-0.279881) | 0.120007 / 0.434364 (-0.314357) | 0.337770 / 0.540337 (-0.202568) | 0.433679 / 1.386936 (-0.953257) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e83d6fa574710fcb44e341087239d2687183f62b \"CML watermark\")\n"
] | Add batching to `IterableDataset` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7054/reactions"
} | PR_kwDODunzps514T1f | {
"diff_url": "https://github.com/huggingface/datasets/pull/7054.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7054",
"merged_at": "2024-07-23T10:34:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7054.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7054"
} | 2024-07-19T10:11:47Z | https://api.github.com/repos/huggingface/datasets/issues/7054/comments | I've taken a try at implementing a batched `IterableDataset` as requested in issue #6279. This PR adds a new `BatchedExamplesIterable` class and a `.batch()` method to the `IterableDataset` class.
The main changes are:
1. A new `BatchedExamplesIterable` that groups examples into batches.
2. A `.batch()` method for `IterableDataset` to easily create batched versions.
3. Support for shuffling and sharding to work with PyTorch DataLoader and multiple workers.
I'm not sure if this is exactly what you had in mind, and I haven't fully tested it yet, so I'd really appreciate your feedback. Does this seem like it's heading in the right direction? I'm happy to make any changes or explore different approaches if needed.
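For illustration, here's how I'd expect the new method to be used (a hypothetical usage sketch until the API is finalized):
```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(10))}).to_iterable_dataset()
for batch in ds.batch(batch_size=4, drop_last_batch=True):
    print(batch)  # {'x': [0, 1, 2, 3]}, then {'x': [4, 5, 6, 7]}
```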
Pinging @lhoestq | {
"avatar_url": "https://avatars.githubusercontent.com/u/61876623?v=4",
"events_url": "https://api.github.com/users/lappemic/events{/privacy}",
"followers_url": "https://api.github.com/users/lappemic/followers",
"following_url": "https://api.github.com/users/lappemic/following{/other_user}",
"gists_url": "https://api.github.com/users/lappemic/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lappemic",
"id": 61876623,
"login": "lappemic",
"node_id": "MDQ6VXNlcjYxODc2NjIz",
"organizations_url": "https://api.github.com/users/lappemic/orgs",
"received_events_url": "https://api.github.com/users/lappemic/received_events",
"repos_url": "https://api.github.com/users/lappemic/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lappemic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lappemic/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lappemic"
} | https://api.github.com/repos/huggingface/datasets/issues/7054/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7054/timeline | closed | false | 7,054 | null | 2024-07-23T10:34:28Z | null | true |
2,416,423,791 | https://api.github.com/repos/huggingface/datasets/issues/7053 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7053/events | [] | null | 2024-07-18T15:17:42Z | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | https://github.com/huggingface/datasets/issues/7053 | NONE | completed | null | null | [
"Hi,\r\n\r\nThis issue was fixed in `datasets` 2.15.0:\r\n- #6105\r\n\r\nYou will need to update your `datasets`:\r\n```\r\npip install -U datasets\r\n```",
"Duplicate of:\r\n- #6100"
] | Datasets.datafiles resolve_pattern `TypeError: can only concatenate tuple (not "str") to tuple` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7053/reactions"
} | I_kwDODunzps6QB7Nv | null | 2024-07-18T13:42:35Z | https://api.github.com/repos/huggingface/datasets/issues/7053/comments | ### Describe the bug
In `data_files.py`, line 332:
`fs, _, _ = get_fs_token_paths(pattern, storage_options=storage_options)`
If we run the code on AWS, `fs.protocol` will be a tuple like `('file', 'local')`.
So `isinstance(fs.protocol, str) == False`, and
`protocol_prefix = fs.protocol + "://" if fs.protocol != "file" else ""` will raise
`TypeError: can only concatenate tuple (not "str") to tuple`.
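A tuple-safe workaround could look like this (my illustration only; per the comments above, the actual fix shipped in `datasets` 2.15.0):
```python
protocol = fs.protocol if isinstance(fs.protocol, str) else fs.protocol[0]
protocol_prefix = protocol + "://" if protocol != "file" else ""
```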
### Steps to reproduce the bug
Steps to reproduce:
1. Run on a cloud server like AWS,
2. `import datasets.data_files as datafile`
3. `datafile.resolve_pattern('path/to/dataset', '.')`
4. `TypeError: can only concatenate tuple (not "str") to tuple`
### Expected behavior
Should return path of the dataset, with fs.protocol at the beginning
### Environment info
- `datasets` version: 2.14.0
- Platform: Linux-3.10.0-1160.119.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.19
- Huggingface_hub version: 0.23.5
- PyArrow version: 16.1.0
- Pandas version: 1.1.5 | {
"avatar_url": "https://avatars.githubusercontent.com/u/48289218?v=4",
"events_url": "https://api.github.com/users/MatthewYZhang/events{/privacy}",
"followers_url": "https://api.github.com/users/MatthewYZhang/followers",
"following_url": "https://api.github.com/users/MatthewYZhang/following{/other_user}",
"gists_url": "https://api.github.com/users/MatthewYZhang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/MatthewYZhang",
"id": 48289218,
"login": "MatthewYZhang",
"node_id": "MDQ6VXNlcjQ4Mjg5MjE4",
"organizations_url": "https://api.github.com/users/MatthewYZhang/orgs",
"received_events_url": "https://api.github.com/users/MatthewYZhang/received_events",
"repos_url": "https://api.github.com/users/MatthewYZhang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/MatthewYZhang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MatthewYZhang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/MatthewYZhang"
} | https://api.github.com/repos/huggingface/datasets/issues/7053/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7053/timeline | closed | false | 7,053 | null | 2024-07-18T15:16:18Z | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | false |
2,411,682,730 | https://api.github.com/repos/huggingface/datasets/issues/7052 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7052/events | [] | null | 2024-07-29T06:47:55Z | [] | https://github.com/huggingface/datasets/pull/7052 | NONE | null | true | null | [] | Adding `Music` feature for symbolic music modality (MIDI, abc) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7052/reactions"
} | PR_kwDODunzps51iuop | {
"diff_url": "https://github.com/huggingface/datasets/pull/7052.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7052",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7052.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7052"
} | 2024-07-16T17:26:04Z | https://api.github.com/repos/huggingface/datasets/issues/7052/comments | ⚠️ (WIP) ⚠️
### What this PR does
This PR adds a `Music` feature for the symbolic music modality, in particular [MIDI](https://en.wikipedia.org/wiki/Musical_Instrument_Digital_Interface) and [abc](https://en.wikipedia.org/wiki/ABC_notation) files.
### Motivations
These two file formats are widely used in [Music Information Retrieval (MIR)](https://en.wikipedia.org/wiki/Music_information_retrieval) for tasks such as music generation, music transcription and music synthesis. Having a dedicated feature in the datasets library would both encourage researchers to share datasets of this modality and make them more easily usable for end users, who would benefit from the perks of the library.
These file formats are supported by [symusic](https://github.com/Yikai-Liao/symusic), a lightweight Python library with C++ bindings (via nanobind) that allows reading, writing and manipulating them efficiently. The library is actively developed and may in the future also support other file formats such as [musicXML](https://en.wikipedia.org/wiki/MusicXML). As such, this PR relies on it.
The music data can then easily be tokenized with appropriate tokenizers such as [MidiTok](https://github.com/Natooz/MidiTok) or converted to pianorolls matrices by symusic.
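As a quick illustration (my sketch based on the symusic README, not code from this PR), loading a file is a one-liner:
```python
from symusic import Score

# the Score constructor loads a MIDI (or abc) file directly
score = Score("path/to/file.mid")
print(len(score.tracks))  # instrument tracks, each exposing its notes
```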
**Jul 16th 2024:**
* the tests for the `Music` feature are currently failing due to unsupported access to the `LazyBatch` in `test_dataset_with_music_feature_map` and `test_dataset_with_music_feature_map_resample_music` (see TODOs). I am a beginner with PyArrow, so I'll take any advice to make this work;
* additional tests including the `Music` feature with parquet and WebDataset should be implemented. As of right now, I am waiting for your feedback before taking further steps;
* a `MusicFolder` should also be implemented to mirror the usage of the `Image` and `Audio` features; I'm waiting for your feedback on this too.
CCing @lhoestq and @albertvillanova | {
"avatar_url": "https://avatars.githubusercontent.com/u/56734983?v=4",
"events_url": "https://api.github.com/users/Natooz/events{/privacy}",
"followers_url": "https://api.github.com/users/Natooz/followers",
"following_url": "https://api.github.com/users/Natooz/following{/other_user}",
"gists_url": "https://api.github.com/users/Natooz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Natooz",
"id": 56734983,
"login": "Natooz",
"node_id": "MDQ6VXNlcjU2NzM0OTgz",
"organizations_url": "https://api.github.com/users/Natooz/orgs",
"received_events_url": "https://api.github.com/users/Natooz/received_events",
"repos_url": "https://api.github.com/users/Natooz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Natooz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Natooz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Natooz"
} | https://api.github.com/repos/huggingface/datasets/issues/7052/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7052/timeline | closed | false | 7,052 | null | 2024-07-29T06:47:55Z | null | true |
2,409,353,929 | https://api.github.com/repos/huggingface/datasets/issues/7051 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7051/events | [] | null | 2024-08-05T20:58:04Z | [] | https://github.com/huggingface/datasets/issues/7051 | NONE | completed | null | null | [
"This is not possible right now afaik :/\r\n\r\nMaybe we could have something like this ? wdyt ?\r\n\r\n```python\r\nds = interleave_datasets(\r\n [shuffled_dataset_a, dataset_b],\r\n probabilities=probabilities,\r\n stopping_strategy='all_exhausted',\r\n reshuffle_each_iteration=True,\r\n)",
"That would be helpful for this case! \r\n\r\nIf there was some way for from_generator to iterate over just a single shard of some dataset that would probably be more ideal. Maybe something like\r\n\r\n```\r\ndef from_dataset_generator(dataset, generator_fn, gen_kwargs):\r\n # calls generator_fn(dataset=dataset_shard, **gen_kwargs)\r\n```\r\n\r\nAnother transform I was trying to implement is an input bucketing transform. Essentially you need to iterate through a dataset and reorder the examples in them, which is not really possible with a `map()` call. But using `from_generator()` causes the final dataset to be a single shard and loses speed gains from multiple dataloader workers",
"I see, there are some internal functions to get a single shard already but the public `.shard()` method hasn't been implemented yet for `IterableDataset` :/\r\n\r\n(see the use of `ex_iterable.shard_data_sources` in `IterableDataset._prepare_ex_iterable_for_iteration` for example)",
"Would that be something planned on the roadmap for the near future, or do you suggest hacking through with internal APIs for now?",
"Ok this turned out to be not too difficult. Are there any obvious issues with my implementation?\r\n\r\n```\r\nclass ShuffleEveryEpochIterable(iterable_dataset._BaseExamplesIterable):\r\n \"\"\"ExamplesIterable that reshuffles the dataset every epoch.\"\"\"\r\n\r\n def __init__(\r\n self,\r\n ex_iterable: iterable_dataset._BaseExamplesIterable,\r\n generator: np.random.Generator,\r\n ):\r\n \"\"\"Constructor.\"\"\"\r\n super().__init__()\r\n self.ex_iterable = ex_iterable\r\n self.generator = generator\r\n\r\n def _init_state_dict(self) -> dict:\r\n self._state_dict = {\r\n 'ex_iterable': self.ex_iterable._init_state_dict(),\r\n 'epoch': 0,\r\n }\r\n return self._state_dict\r\n\r\n @typing.override\r\n def __iter__(self):\r\n epoch = self._state_dict['epoch'] if self._state_dict else 0\r\n for i in itertools.count(epoch):\r\n # Create effective seed using i (subtract in order to avoir overflow in long_scalars)\r\n effective_seed = copy.deepcopy(self.generator).integers(0, 1 << 63) - i\r\n effective_seed = (1 << 63) + effective_seed if effective_seed < 0 else effective_seed\r\n generator = np.random.default_rng(effective_seed)\r\n self.ex_iterable = self.ex_iterable.shuffle_data_sources(generator)\r\n if self._state_dict:\r\n self._state_dict['epoch'] = i\r\n self._state_dict['ex_iterable'] = self.ex_iterable._init_state_dict()\r\n it = iter(self.ex_iterable)\r\n yield from it\r\n\r\n @typing.override\r\n def shuffle_data_sources(self, generator):\r\n ex_iterable = self.ex_iterable.shuffle_data_sources(generator)\r\n return ShuffleEveryEpochIterable(ex_iterable, generator=generator)\r\n\r\n @typing.override\r\n def shard_data_sources(self, worker_id: int, num_workers: int):\r\n ex_iterable = self.ex_iterable.shard_data_sources(worker_id, num_workers)\r\n return ShuffleEveryEpochIterable(ex_iterable, generator=self.generator)\r\n\r\n @typing.override\r\n @property\r\n def n_shards(self) -> int:\r\n return self.ex_iterable.n_shards\r\n \r\ngenerator = np.random.default_rng(seed)\r\nshuffling = iterable_dataset.ShufflingConfig(generator=generator, _original_seed=seed)\r\nex_iterable = iterable_dataset.BufferShuffledExamplesIterable(\r\n dataset._ex_iterable, buffer_size=buffer_size, generator=generator\r\n)\r\nex_iterable = ShuffleEveryEpochIterable(ex_iterable, generator=generator)\r\ndataset = datasets.IterableDataset(\r\n ex_iterable=ex_iterable,\r\n info=dataset._info.copy(),\r\n split=dataset._split,\r\n formatting=dataset._formatting,\r\n shuffling=shuffling,\r\n distributed=copy.deepcopy(dataset._distributed),\r\n token_per_repo_id=dataset._token_per_repo_id,\r\n)\r\n```\r\n",
"Nice ! This iterable is infinite though no ? How would `interleave_dataset` know when to stop ?\r\n\r\nMaybe the re-shuffling can be implemented directly in `RandomlyCyclingMultiSourcesExamplesIterable` (which is the iterable used by `interleave_dataset`) ?",
"Infinite is fine for my usecases fortunately."
] | How to set_epoch with interleave_datasets? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7051/reactions"
} | I_kwDODunzps6Pm9LJ | null | 2024-07-15T18:24:52Z | https://api.github.com/repos/huggingface/datasets/issues/7051/comments | Let's say I have dataset A which has 100k examples, and dataset B which has 100m examples.
I want to train on an interleaved dataset of A+B, with stopping_strategy='all_exhausted' so dataset B doesn't repeat any examples. But every time A is exhausted I want it to be reshuffled (e.g. by calling set_epoch)
Of course I want to interleave as IterableDatasets / streaming mode so B doesn't have to get tokenized completely at the start.
How could I achieve this? I was thinking something like, if I wrap dataset A in some new IterableDataset with from_generator() and manually call set_epoch before interleaving it? But I'm not sure how to keep the number of shards in that dataset...
Something like
```
dataset_a = load_dataset(...)
dataset_b = load_dataset(...)
def epoch_shuffled_dataset(ds):
# How to make this maintain the number of shards in ds??
for epoch in itertools.count():
ds.set_epoch(epoch)
yield from iter(ds)
shuffled_dataset_a = IterableDataset.from_generator(epoch_shuffled_dataset, gen_kwargs={'ds': dataset_a})
interleaved = interleave_datasets([shuffled_dataset_a, dataset_b], probs, stopping_strategy='all_exhausted')
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4",
"events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}",
"followers_url": "https://api.github.com/users/jonathanasdf/followers",
"following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}",
"gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jonathanasdf",
"id": 511073,
"login": "jonathanasdf",
"node_id": "MDQ6VXNlcjUxMTA3Mw==",
"organizations_url": "https://api.github.com/users/jonathanasdf/orgs",
"received_events_url": "https://api.github.com/users/jonathanasdf/received_events",
"repos_url": "https://api.github.com/users/jonathanasdf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jonathanasdf"
} | https://api.github.com/repos/huggingface/datasets/issues/7051/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7051/timeline | closed | false | 7,051 | null | 2024-08-05T20:58:04Z | null | false |
2,409,048,733 | https://api.github.com/repos/huggingface/datasets/issues/7050 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7050/events | [] | null | 2024-07-15T16:06:15Z | [] | https://github.com/huggingface/datasets/pull/7050 | MEMBER | null | false | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7050). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005707 / 0.011353 (-0.005646) | 0.004381 / 0.011008 (-0.006627) | 0.063711 / 0.038508 (0.025202) | 0.031882 / 0.023109 (0.008772) | 0.250056 / 0.275898 (-0.025842) | 0.287616 / 0.323480 (-0.035863) | 0.003327 / 0.007986 (-0.004658) | 0.003717 / 0.004328 (-0.000611) | 0.049103 / 0.004250 (0.044853) | 0.048821 / 0.037052 (0.011769) | 0.259688 / 0.258489 (0.001199) | 0.311469 / 0.293841 (0.017628) | 0.030667 / 0.128546 (-0.097879) | 0.013091 / 0.075646 (-0.062555) | 0.204737 / 0.419271 (-0.214534) | 0.038312 / 0.043533 (-0.005221) | 0.250055 / 0.255139 (-0.005084) | 0.272199 / 0.283200 (-0.011001) | 0.021161 / 0.141683 (-0.120522) | 1.116095 / 1.452155 (-0.336060) | 1.153588 / 1.492716 (-0.339129) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.107828 / 0.018006 (0.089822) | 0.315898 / 0.000490 (0.315408) | 0.000228 / 0.000200 (0.000028) | 0.000048 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018873 / 0.037411 (-0.018539) | 0.063374 / 0.014526 (0.048848) | 0.076424 / 0.176557 (-0.100133) | 0.123468 / 0.737135 (-0.613667) | 0.077432 / 0.296338 (-0.218906) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288931 / 0.215209 (0.073722) | 2.828745 / 2.077655 (0.751091) | 1.471061 / 1.504120 (-0.033059) | 1.332289 / 1.541195 (-0.208906) | 1.379797 / 
1.468490 (-0.088693) | 0.708053 / 4.584777 (-3.876724) | 2.382431 / 3.745712 (-1.363281) | 2.952672 / 5.269862 (-2.317190) | 1.957517 / 4.565676 (-2.608160) | 0.078730 / 0.424275 (-0.345546) | 0.005093 / 0.007607 (-0.002514) | 0.338147 / 0.226044 (0.112102) | 3.340841 / 2.268929 (1.071912) | 1.857083 / 55.444624 (-53.587541) | 1.533659 / 6.876477 (-5.342818) | 1.750549 / 2.142072 (-0.391523) | 0.804125 / 4.805227 (-4.001103) | 0.134618 / 6.500664 (-6.366046) | 0.042517 / 0.075469 (-0.032952) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.968608 / 1.841788 (-0.873180) | 12.326994 / 8.074308 (4.252686) | 9.464889 / 10.191392 (-0.726503) | 0.143979 / 0.680424 (-0.536445) | 0.014577 / 0.534201 (-0.519624) | 0.303205 / 0.579283 (-0.276078) | 0.269866 / 0.434364 (-0.164498) | 0.344846 / 0.540337 (-0.195491) | 0.443794 / 1.386936 (-0.943142) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006452 / 0.011353 (-0.004900) | 0.004264 / 0.011008 (-0.006745) | 0.051355 / 0.038508 (0.012847) | 0.035188 / 0.023109 (0.012079) | 0.267697 / 0.275898 (-0.008201) | 0.295853 / 0.323480 (-0.027627) | 0.004611 / 0.007986 (-0.003374) | 0.005395 / 0.004328 (0.001066) | 0.049903 / 0.004250 (0.045652) | 0.044582 / 0.037052 (0.007530) | 0.284706 / 0.258489 (0.026217) | 0.321623 / 0.293841 (0.027782) | 0.033228 / 0.128546 (-0.095318) | 0.013077 / 0.075646 (-0.062569) | 0.061867 / 0.419271 (-0.357405) | 0.034625 / 0.043533 (-0.008908) | 0.269088 / 0.255139 (0.013949) | 0.284899 / 0.283200 (0.001699) | 0.019972 / 0.141683 (-0.121710) | 1.157976 / 1.452155 (-0.294178) | 1.181658 / 1.492716 (-0.311058) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.111072 / 0.018006 (0.093066) | 0.333310 / 0.000490 (0.332820) | 0.000251 / 0.000200 (0.000051) | 0.000059 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023760 / 0.037411 (-0.013652) | 0.080746 / 0.014526 (0.066221) | 0.090231 / 0.176557 (-0.086326) | 0.132200 / 0.737135 (-0.604936) | 0.095679 / 0.296338 (-0.200660) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297404 / 0.215209 (0.082195) | 2.919779 / 2.077655 (0.842124) | 1.577470 / 1.504120 (0.073350) | 1.452924 / 1.541195 (-0.088271) | 1.523683 / 1.468490 (0.055193) | 0.743801 / 4.584777 (-3.840976) | 1.006944 / 3.745712 (-2.738768) | 3.218161 / 5.269862 (-2.051701) | 2.069762 / 4.565676 (-2.495914) | 0.082900 / 0.424275 (-0.341375) | 0.005239 / 0.007607 (-0.002368) | 0.360124 / 0.226044 (0.134080) | 3.505349 / 2.268929 (1.236420) | 1.959324 / 55.444624 (-53.485300) | 1.663782 / 6.876477 (-5.212694) | 1.725745 / 2.142072 (-0.416327) | 0.825268 / 4.805227 (-3.979959) | 0.138577 / 6.500664 (-6.362087) | 0.042716 / 0.075469 (-0.032753) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.021138 / 1.841788 (-0.820650) | 13.907954 / 8.074308 (5.833646) | 11.023796 / 10.191392 (0.832404) | 0.135224 / 0.680424 (-0.545200) | 0.016232 / 0.534201 (-0.517969) | 0.330389 / 0.579283 (-0.248894) | 0.131702 / 0.434364 (-0.302662) | 0.372499 / 0.540337 (-0.167838) | 0.472702 / 1.386936 (-0.914234) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#87f4c2088854ff33e817e724e75179e9975c1b02 \"CML watermark\")\n"
] | add checkpoint and resume title in docs | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7050/reactions"
} | PR_kwDODunzps51Z1Yp | {
"diff_url": "https://github.com/huggingface/datasets/pull/7050.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7050",
"merged_at": "2024-07-15T15:59:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7050.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7050"
} | 2024-07-15T15:38:04Z | https://api.github.com/repos/huggingface/datasets/issues/7050/comments | (minor) just to make it more prominent in the docs page for the soon-to-be-released new torchdata | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | https://api.github.com/repos/huggingface/datasets/issues/7050/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7050/timeline | closed | false | 7,050 | null | 2024-07-15T15:59:56Z | null | true |
2,408,514,366 | https://api.github.com/repos/huggingface/datasets/issues/7049 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7049/events | [] | null | 2024-07-18T11:33:34Z | [] | https://github.com/huggingface/datasets/issues/7049 | NONE | completed | null | null | [
"In addition, when I use `set_format ` and index the ds, the following error occurs:\r\nthe code\r\n```python\r\nds.set_format(type=\"np\", colums=\"pixel_values\")\r\n```\r\nerror\r\n<img width=\"918\" alt=\"image\" src=\"https://github.com/user-attachments/assets/b28bbff2-20ea-4d28-ab62-b4ed2d944996\">\r\n",
"> Some people use the set_format function to convert the column back, but doesn't this lose precision?\r\n\r\nUnder the hood the data is saved in Arrow format using the same precision as your numpy arrays?\r\nBy default the Arrow data is read as python lists, but you can indeed read them back as numpy arrays with the same precision",
"(you can fix your second issue by fixing the typo `colums` -> `columns`)",
"> (you can fix your second issue by fixing the typo `colums` -> `columns`)\r\n\r\nYou are right, I was careless. Thank you.",
"> > Some people use the set_format function to convert the column back, but doesn't this lose precision?\r\n> \r\n> Under the hood the data is saved in Arrow format using the same precision as your numpy arrays? By default the Arrow data is read as python lists, but you can indeed read them back as numpy arrays with the same precision\r\n\r\nYes, after testing I found that there was no loss of precision. Thanks again for your answer."
] | Save nparray as list | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7049/reactions"
} | I_kwDODunzps6PjwM- | null | 2024-07-15T11:36:11Z | https://api.github.com/repos/huggingface/datasets/issues/7049/comments | ### Describe the bug
When I use the `map` function to convert images into features, `datasets` saves the ndarray as a list. Some people use the `set_format` function to convert the column back, but doesn't this lose precision?
### Steps to reproduce the bug
the map function
```python
def convert_image_to_features(inst, processor, image_dir):
image_file = inst["image_url"]
file = image_file.split("/")[-1]
image_path = os.path.join(image_dir, file)
image = Image.open(image_path)
image = image.convert("RGBA")
inst["pixel_values"] = processor(images=image, return_tensors="np")["pixel_values"]
return inst
```
main function
```python
map_fun = partial(
convert_image_to_features, processor=processor, image_dir=image_dir
)
ds = ds.map(map_fun, batched=False, num_proc=20)
print(type(ds[0]["pixel_values"]))
```
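For reference, the column can be read back as numpy like this (a sketch; as the maintainers note in the comments above, precision is preserved in Arrow):
```python
ds.set_format(type="np", columns=["pixel_values"])
print(type(ds[0]["pixel_values"]))  # <class 'numpy.ndarray'>
```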
### Expected behavior
(type `<list>`)
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-4.19.91-009.ali4000.alios7.x86_64-x86_64-with-glibc2.35
- Python version: 3.11.5
- `huggingface_hub` version: 0.23.4
- PyArrow version: 14.0.2
- Pandas version: 2.1.4
- `fsspec` version: 2023.10.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/48399040?v=4",
"events_url": "https://api.github.com/users/Sakurakdx/events{/privacy}",
"followers_url": "https://api.github.com/users/Sakurakdx/followers",
"following_url": "https://api.github.com/users/Sakurakdx/following{/other_user}",
"gists_url": "https://api.github.com/users/Sakurakdx/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Sakurakdx",
"id": 48399040,
"login": "Sakurakdx",
"node_id": "MDQ6VXNlcjQ4Mzk5MDQw",
"organizations_url": "https://api.github.com/users/Sakurakdx/orgs",
"received_events_url": "https://api.github.com/users/Sakurakdx/received_events",
"repos_url": "https://api.github.com/users/Sakurakdx/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Sakurakdx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sakurakdx/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Sakurakdx"
} | https://api.github.com/repos/huggingface/datasets/issues/7049/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7049/timeline | closed | false | 7,049 | null | 2024-07-18T11:33:34Z | null | false |
2,408,487,547 | https://api.github.com/repos/huggingface/datasets/issues/7048 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7048/events | [] | null | 2024-07-16T10:11:25Z | [] | https://github.com/huggingface/datasets/issues/7048 | NONE | completed | null | null | [
"Could you please check your `numpy` version?",
"I got this issue while using numpy version 2.0. \r\n\r\nI solved it by switching back to numpy 1.26.0 :) ",
"We recently added support for numpy 2.0, but it is not released yet.",
"Ok I see, thanks! I think we can close this issue for now as switching back to version 1.26.0 solves the problem :) "
] | ImportError: numpy.core.multiarray when using `filter` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7048/reactions"
} | I_kwDODunzps6Pjpp7 | null | 2024-07-15T11:21:04Z | https://api.github.com/repos/huggingface/datasets/issues/7048/comments | ### Describe the bug
I can't apply the `filter` method to my dataset.
### Steps to reproduce the bug
The following snippet generates a bug:
```python
from datasets import load_dataset
ami = load_dataset('kamilakesbi/ami', 'ihm')
ami['train'].filter(
lambda example: example["file_name"] == 'EN2001a'
)
```
I get the following error:
`ImportError: numpy.core.multiarray failed to import (auto-generated because you didn't call 'numpy.import_array()' after cimporting numpy; use '<void>numpy._import_array' to disable if you are certain you don't need it).`
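According to the comments, this comes from a numpy 2.x incompatibility; a quick version check, plus the reported workaround of pinning numpy (e.g. `pip install "numpy<2"`) — a workaround, not a fix inside `datasets`:
```python
import numpy as np

# The error appears with numpy 2.x; downgrading to 1.26.0 resolved it per the comments.
print(np.__version__)
```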
### Expected behavior
It should work properly!
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- `huggingface_hub` version: 0.23.4
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/45195979?v=4",
"events_url": "https://api.github.com/users/kamilakesbi/events{/privacy}",
"followers_url": "https://api.github.com/users/kamilakesbi/followers",
"following_url": "https://api.github.com/users/kamilakesbi/following{/other_user}",
"gists_url": "https://api.github.com/users/kamilakesbi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kamilakesbi",
"id": 45195979,
"login": "kamilakesbi",
"node_id": "MDQ6VXNlcjQ1MTk1OTc5",
"organizations_url": "https://api.github.com/users/kamilakesbi/orgs",
"received_events_url": "https://api.github.com/users/kamilakesbi/received_events",
"repos_url": "https://api.github.com/users/kamilakesbi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kamilakesbi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kamilakesbi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kamilakesbi"
} | https://api.github.com/repos/huggingface/datasets/issues/7048/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7048/timeline | closed | false | 7,048 | null | 2024-07-16T10:11:25Z | null | false |
2,406,495,084 | https://api.github.com/repos/huggingface/datasets/issues/7047 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7047/events | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | null | 2024-07-17T12:07:08Z | [] | https://github.com/huggingface/datasets/issues/7047 | NONE | null | null | null | [
"To anyone else who finds themselves in this predicament, it's possible to read the parquet file in the same way that datasets writes it, and then manually break it into pieces. Although, you need a couple of magic options (`thrift_*`) to deal with the huge metadata, otherwise pyarrow immediately crashes.\r\n```python\r\nimport pyarrow.parquet as pq\r\nimport pyarrow as pa\r\n\r\nr = pq.ParquetReader()\r\n\r\nr.open(\"./outrageous-file.parquet\",thrift_string_size_limit=2**31-1, thrift_container_size_limit=2**31-1)\r\n\r\nfrom more_itertools import chunked\r\nimport tqdm\r\n\r\nfor i,chunk in tqdm.tqdm(enumerate(chunked(range(r.num_row_groups),10000))):\r\n w = pq.ParquetWriter(f\"./chunks.parquet/chunk{i}.parquet\",schema=r.schema_arrow)\r\n for idx in chunk:\r\n w.write_table(r.read_row_group(idx))\r\n w.close()\r\n```",
"You can also use `.shard()` and call `to_parquet()` on each shard in the meantime:\r\n\r\n```python\r\nnum_shards = 128\r\noutput_path_template = \"output_dir/{index:05d}.parquet\"\r\nfor index in range(num_shards):\r\n shard = ds.shard(index=index, num_shards=num_shards, contiguous=True)\r\n shard.to_parquet(output_path_template.format(index=index))\r\n```"
] | Save Dataset as Sharded Parquet | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7047/reactions"
} | I_kwDODunzps6PcDNs | null | 2024-07-12T23:47:51Z | https://api.github.com/repos/huggingface/datasets/issues/7047/comments | ### Feature request
`to_parquet` currently saves the dataset as one massive, monolithic parquet file, rather than as several small parquet files. It should shard large datasets automatically.
### Motivation
This default behavior makes me very sad because a program I ran for 6 hours saved its results using `to_parquet`, putting the entire billion+ row dataset into a 171 GB *single-shard parquet file* which pyarrow, Apache Spark, etc. all cannot work with without completely exhausting the memory of my system. I was previously able to work with larger-than-memory parquet files, but not this one. I *assume* this is happening because the file is a single shard. Making sharding the default behavior puts `datasets` in parity with other frameworks, such as Spark, which automatically shard when a large dataset is saved as parquet.
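For comparison, a minimal sketch of a sharded parquet write with `pyarrow.dataset.write_dataset` (assumptions: a `pyarrow.Table` named `table`; the row limits are illustrative) — this relates to the contribution idea below:
```python
import pyarrow.dataset as pads

# Writes one parquet file per ~1M rows under output_dir instead of a single monolithic file.
pads.write_dataset(
    table,
    "output_dir",
    format="parquet",
    basename_template="part-{i}.parquet",
    max_rows_per_file=1_000_000,
    max_rows_per_group=100_000,
)
```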
### Your contribution
I could change the logic here https://github.com/huggingface/datasets/blob/bf6f41e94d9b2f1c620cf937a2e85e5754a8b960/src/datasets/io/parquet.py#L109-L158
to use `pyarrow.dataset.write_dataset`, which seems to support sharding, or periodically open new files. We would only shard if the user passed in a path rather than file handle. | {
"avatar_url": "https://avatars.githubusercontent.com/u/43631024?v=4",
"events_url": "https://api.github.com/users/tom-p-reichel/events{/privacy}",
"followers_url": "https://api.github.com/users/tom-p-reichel/followers",
"following_url": "https://api.github.com/users/tom-p-reichel/following{/other_user}",
"gists_url": "https://api.github.com/users/tom-p-reichel/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tom-p-reichel",
"id": 43631024,
"login": "tom-p-reichel",
"node_id": "MDQ6VXNlcjQzNjMxMDI0",
"organizations_url": "https://api.github.com/users/tom-p-reichel/orgs",
"received_events_url": "https://api.github.com/users/tom-p-reichel/received_events",
"repos_url": "https://api.github.com/users/tom-p-reichel/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tom-p-reichel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tom-p-reichel/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tom-p-reichel"
} | https://api.github.com/repos/huggingface/datasets/issues/7047/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7047/timeline | open | false | 7,047 | null | null | null | false |
2,405,485,582 | https://api.github.com/repos/huggingface/datasets/issues/7046 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7046/events | [] | null | 2024-07-12T13:04:40Z | [] | https://github.com/huggingface/datasets/pull/7046 | MEMBER | null | false | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7046). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005897 / 0.011353 (-0.005456) | 0.003958 / 0.011008 (-0.007050) | 0.063684 / 0.038508 (0.025176) | 0.031743 / 0.023109 (0.008634) | 0.246725 / 0.275898 (-0.029173) | 0.275519 / 0.323480 (-0.047961) | 0.003347 / 0.007986 (-0.004639) | 0.004089 / 0.004328 (-0.000240) | 0.049591 / 0.004250 (0.045341) | 0.049386 / 0.037052 (0.012333) | 0.264929 / 0.258489 (0.006440) | 0.317157 / 0.293841 (0.023316) | 0.029929 / 0.128546 (-0.098617) | 0.012264 / 0.075646 (-0.063382) | 0.209208 / 0.419271 (-0.210064) | 0.037073 / 0.043533 (-0.006460) | 0.247999 / 0.255139 (-0.007140) | 0.273457 / 0.283200 (-0.009742) | 0.020354 / 0.141683 (-0.121328) | 1.109874 / 1.452155 (-0.342281) | 1.180085 / 1.492716 (-0.312631) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.099935 / 0.018006 (0.081929) | 0.305607 / 0.000490 (0.305118) | 0.000214 / 0.000200 (0.000014) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.020019 / 0.037411 (-0.017392) | 0.066608 / 0.014526 (0.052083) | 0.079354 / 0.176557 (-0.097202) | 0.123416 / 0.737135 (-0.613719) | 0.078171 / 0.296338 (-0.218167) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281627 / 0.215209 (0.066418) | 2.809807 / 2.077655 (0.732152) | 1.467007 / 1.504120 (-0.037112) | 1.351367 / 1.541195 (-0.189828) | 1.396782 / 
1.468490 (-0.071708) | 0.735605 / 4.584777 (-3.849172) | 2.378455 / 3.745712 (-1.367257) | 2.971739 / 5.269862 (-2.298122) | 2.004970 / 4.565676 (-2.560707) | 0.078156 / 0.424275 (-0.346119) | 0.005276 / 0.007607 (-0.002331) | 0.340370 / 0.226044 (0.114325) | 3.347552 / 2.268929 (1.078624) | 1.851098 / 55.444624 (-53.593527) | 1.518079 / 6.876477 (-5.358398) | 1.703145 / 2.142072 (-0.438927) | 0.799574 / 4.805227 (-4.005654) | 0.133591 / 6.500664 (-6.367074) | 0.043329 / 0.075469 (-0.032141) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.977268 / 1.841788 (-0.864520) | 12.720209 / 8.074308 (4.645901) | 9.798126 / 10.191392 (-0.393266) | 0.132106 / 0.680424 (-0.548318) | 0.014456 / 0.534201 (-0.519745) | 0.312965 / 0.579283 (-0.266318) | 0.271348 / 0.434364 (-0.163016) | 0.343951 / 0.540337 (-0.196386) | 0.449814 / 1.386936 (-0.937122) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005944 / 0.011353 (-0.005409) | 0.004054 / 0.011008 (-0.006954) | 0.050573 / 0.038508 (0.012065) | 0.034580 / 0.023109 (0.011470) | 0.261439 / 0.275898 (-0.014459) | 0.286057 / 0.323480 (-0.037423) | 0.004463 / 0.007986 (-0.003523) | 0.002891 / 0.004328 (-0.001437) | 0.049169 / 0.004250 (0.044919) | 0.041622 / 0.037052 (0.004570) | 0.275216 / 0.258489 (0.016727) | 0.305847 / 0.293841 (0.012006) | 0.032615 / 0.128546 (-0.095932) | 0.012304 / 0.075646 (-0.063343) | 0.062890 / 0.419271 (-0.356382) | 0.033846 / 0.043533 (-0.009687) | 0.262758 / 0.255139 (0.007619) | 0.279451 / 0.283200 (-0.003748) | 0.018953 / 0.141683 (-0.122730) | 1.149158 / 1.452155 (-0.302997) | 1.173981 / 1.492716 (-0.318735) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.100462 / 0.018006 (0.082456) | 0.308390 / 0.000490 (0.307900) | 0.000207 / 0.000200 (0.000007) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023089 / 0.037411 (-0.014322) | 0.078610 / 0.014526 (0.064084) | 0.090348 / 0.176557 (-0.086208) | 0.130784 / 0.737135 (-0.606351) | 0.092538 / 0.296338 (-0.203801) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.296255 / 0.215209 (0.081046) | 2.899159 / 2.077655 (0.821504) | 1.603524 / 1.504120 (0.099404) | 1.418002 / 1.541195 (-0.123192) | 1.470221 / 1.468490 (0.001731) | 0.722129 / 4.584777 (-3.862648) | 0.956146 / 3.745712 (-2.789566) | 3.011640 / 5.269862 (-2.258222) | 1.910966 / 4.565676 (-2.654711) | 0.078771 / 0.424275 (-0.345504) | 0.005154 / 0.007607 (-0.002453) | 0.354001 / 0.226044 (0.127956) | 3.484224 / 2.268929 (1.215296) | 1.913612 / 55.444624 (-53.531012) | 1.634492 / 6.876477 (-5.241985) | 1.693292 / 2.142072 (-0.448780) | 0.816837 / 4.805227 (-3.988390) | 0.136631 / 6.500664 (-6.364033) | 0.042291 / 0.075469 (-0.033178) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.994887 / 1.841788 (-0.846901) | 13.144865 / 8.074308 (5.070557) | 10.820098 / 10.191392 (0.628706) | 0.132557 / 0.680424 (-0.547867) | 0.015467 / 0.534201 (-0.518734) | 0.302026 / 0.579283 (-0.277257) | 0.128763 / 0.434364 (-0.305601) | 0.347908 / 0.540337 (-0.192430) | 0.444829 / 1.386936 (-0.942107) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#bf6f41e94d9b2f1c620cf937a2e85e5754a8b960 \"CML watermark\")\n"
] | Support librosa and numpy 2.0 for Python 3.10 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7046/reactions"
} | PR_kwDODunzps51N05n | {
"diff_url": "https://github.com/huggingface/datasets/pull/7046.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7046",
"merged_at": "2024-07-12T12:58:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7046.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7046"
} | 2024-07-12T12:42:47Z | https://api.github.com/repos/huggingface/datasets/issues/7046/comments | Support librosa and numpy 2.0 for Python 3.10 by installing soxr 0.4.0b1 pre-release:
- https://github.com/dofuuz/python-soxr/releases/tag/v0.4.0b1
- https://github.com/dofuuz/python-soxr/issues/28 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/7046/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7046/timeline | closed | false | 7,046 | null | 2024-07-12T12:58:17Z | null | true |
2,405,447,858 | https://api.github.com/repos/huggingface/datasets/issues/7045 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7045/events | [] | null | 2024-07-12T12:38:53Z | [] | https://github.com/huggingface/datasets/pull/7045 | MEMBER | null | false | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7045). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005426 / 0.011353 (-0.005927) | 0.003896 / 0.011008 (-0.007112) | 0.063492 / 0.038508 (0.024984) | 0.030199 / 0.023109 (0.007090) | 0.249892 / 0.275898 (-0.026006) | 0.291311 / 0.323480 (-0.032168) | 0.004389 / 0.007986 (-0.003597) | 0.002829 / 0.004328 (-0.001500) | 0.049685 / 0.004250 (0.045435) | 0.043351 / 0.037052 (0.006299) | 0.264265 / 0.258489 (0.005776) | 0.290463 / 0.293841 (-0.003378) | 0.030007 / 0.128546 (-0.098539) | 0.012146 / 0.075646 (-0.063500) | 0.203841 / 0.419271 (-0.215430) | 0.037159 / 0.043533 (-0.006373) | 0.253377 / 0.255139 (-0.001762) | 0.275990 / 0.283200 (-0.007209) | 0.018334 / 0.141683 (-0.123349) | 1.112616 / 1.452155 (-0.339539) | 1.157507 / 1.492716 (-0.335209) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097781 / 0.018006 (0.079775) | 0.314381 / 0.000490 (0.313891) | 0.000217 / 0.000200 (0.000017) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018704 / 0.037411 (-0.018708) | 0.062293 / 0.014526 (0.047767) | 0.073997 / 0.176557 (-0.102559) | 0.120309 / 0.737135 (-0.616826) | 0.075592 / 0.296338 (-0.220747) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283178 / 0.215209 (0.067969) | 2.798027 / 2.077655 (0.720372) | 1.431320 / 1.504120 (-0.072800) | 1.316135 / 1.541195 (-0.225060) | 1.345528 / 
1.468490 (-0.122962) | 0.717300 / 4.584777 (-3.867477) | 2.401019 / 3.745712 (-1.344693) | 2.866411 / 5.269862 (-2.403451) | 1.933198 / 4.565676 (-2.632479) | 0.079505 / 0.424275 (-0.344771) | 0.005089 / 0.007607 (-0.002519) | 0.333614 / 0.226044 (0.107569) | 3.315449 / 2.268929 (1.046520) | 1.807667 / 55.444624 (-53.636957) | 1.490537 / 6.876477 (-5.385939) | 1.633305 / 2.142072 (-0.508767) | 0.807732 / 4.805227 (-3.997495) | 0.133825 / 6.500664 (-6.366839) | 0.041696 / 0.075469 (-0.033774) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.969063 / 1.841788 (-0.872724) | 11.825985 / 8.074308 (3.751677) | 9.808041 / 10.191392 (-0.383351) | 0.143338 / 0.680424 (-0.537085) | 0.014714 / 0.534201 (-0.519487) | 0.304360 / 0.579283 (-0.274923) | 0.266863 / 0.434364 (-0.167501) | 0.342374 / 0.540337 (-0.197963) | 0.442120 / 1.386936 (-0.944816) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005574 / 0.011353 (-0.005778) | 0.003735 / 0.011008 (-0.007273) | 0.051021 / 0.038508 (0.012513) | 0.032825 / 0.023109 (0.009716) | 0.267775 / 0.275898 (-0.008123) | 0.286015 / 0.323480 (-0.037464) | 0.004332 / 0.007986 (-0.003653) | 0.002796 / 0.004328 (-0.001532) | 0.050183 / 0.004250 (0.045933) | 0.040191 / 0.037052 (0.003138) | 0.279777 / 0.258489 (0.021288) | 0.312161 / 0.293841 (0.018320) | 0.031993 / 0.128546 (-0.096553) | 0.012168 / 0.075646 (-0.063478) | 0.061622 / 0.419271 (-0.357650) | 0.033577 / 0.043533 (-0.009956) | 0.267300 / 0.255139 (0.012161) | 0.284595 / 0.283200 (0.001396) | 0.018476 / 0.141683 (-0.123207) | 1.135917 / 1.452155 (-0.316237) | 1.164516 / 1.492716 (-0.328200) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.108194 / 0.018006 (0.090188) | 0.309514 / 0.000490 (0.309025) | 0.000211 / 0.000200 (0.000011) | 0.000053 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022998 / 0.037411 (-0.014413) | 0.077126 / 0.014526 (0.062600) | 0.088779 / 0.176557 (-0.087778) | 0.128646 / 0.737135 (-0.608489) | 0.089895 / 0.296338 (-0.206443) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295131 / 0.215209 (0.079922) | 2.887380 / 2.077655 (0.809726) | 1.586450 / 1.504120 (0.082330) | 1.449831 / 1.541195 (-0.091363) | 1.468805 / 1.468490 (0.000315) | 0.721578 / 4.584777 (-3.863199) | 0.970499 / 3.745712 (-2.775214) | 2.975604 / 5.269862 (-2.294258) | 1.935809 / 4.565676 (-2.629867) | 0.078504 / 0.424275 (-0.345771) | 0.005219 / 0.007607 (-0.002388) | 0.347168 / 0.226044 (0.121124) | 3.417040 / 2.268929 (1.148111) | 1.928707 / 55.444624 (-53.515917) | 1.629398 / 6.876477 (-5.247078) | 1.653014 / 2.142072 (-0.489058) | 0.796097 / 4.805227 (-4.009130) | 0.133956 / 6.500664 (-6.366708) | 0.041567 / 0.075469 (-0.033902) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.995511 / 1.841788 (-0.846277) | 12.577211 / 8.074308 (4.502903) | 10.562561 / 10.191392 (0.371169) | 0.144288 / 0.680424 (-0.536136) | 0.016345 / 0.534201 (-0.517856) | 0.304364 / 0.579283 (-0.274920) | 0.134630 / 0.434364 (-0.299734) | 0.341494 / 0.540337 (-0.198843) | 0.436238 / 1.386936 (-0.950698) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3b708bb6611a88c3f00f58ec3c63fe0da2c2b1e1 \"CML watermark\")\n"
] | Fix tensorflow min version depending on Python version | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7045/reactions"
} | PR_kwDODunzps51Nsie | {
"diff_url": "https://github.com/huggingface/datasets/pull/7045.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7045",
"merged_at": "2024-07-12T12:33:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7045.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7045"
} | 2024-07-12T12:20:23Z | https://api.github.com/repos/huggingface/datasets/issues/7045/comments | Fix tensorflow min version depending on Python version.
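An illustrative sketch of such conditional pins using PEP 508 environment markers (the version numbers and the `TESTS_REQUIRE` name are placeholders, not the exact pins in this PR):
```python
# Hypothetical excerpt from setup.py: the TensorFlow minimum depends on the Python version.
TESTS_REQUIRE = [
    "tensorflow>=2.6.0; python_version < '3.10'",
    "tensorflow>=2.16.1; python_version >= '3.10'",
]
```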
Related to:
- #6991 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/7045/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7045/timeline | closed | false | 7,045 | null | 2024-07-12T12:33:00Z | null | true |
2,405,002,987 | https://api.github.com/repos/huggingface/datasets/issues/7044 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7044/events | [] | null | 2024-07-12T09:06:32Z | [] | https://github.com/huggingface/datasets/pull/7044 | MEMBER | null | false | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7044). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005797 / 0.011353 (-0.005556) | 0.004017 / 0.011008 (-0.006991) | 0.063829 / 0.038508 (0.025321) | 0.031329 / 0.023109 (0.008220) | 0.249388 / 0.275898 (-0.026510) | 0.273129 / 0.323480 (-0.050351) | 0.004250 / 0.007986 (-0.003736) | 0.002821 / 0.004328 (-0.001507) | 0.049250 / 0.004250 (0.044999) | 0.046175 / 0.037052 (0.009123) | 0.252040 / 0.258489 (-0.006449) | 0.296537 / 0.293841 (0.002696) | 0.030579 / 0.128546 (-0.097967) | 0.012436 / 0.075646 (-0.063210) | 0.205829 / 0.419271 (-0.213443) | 0.036979 / 0.043533 (-0.006554) | 0.251354 / 0.255139 (-0.003785) | 0.272262 / 0.283200 (-0.010938) | 0.019047 / 0.141683 (-0.122636) | 1.112410 / 1.452155 (-0.339745) | 1.137445 / 1.492716 (-0.355271) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097270 / 0.018006 (0.079264) | 0.309329 / 0.000490 (0.308839) | 0.000221 / 0.000200 (0.000021) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019021 / 0.037411 (-0.018390) | 0.066801 / 0.014526 (0.052276) | 0.075280 / 0.176557 (-0.101276) | 0.122499 / 0.737135 (-0.614637) | 0.077424 / 0.296338 (-0.218914) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279469 / 0.215209 (0.064259) | 2.787511 / 2.077655 (0.709856) | 1.411389 / 1.504120 (-0.092731) | 1.285796 / 1.541195 (-0.255399) | 1.354252 / 
1.468490 (-0.114238) | 0.735341 / 4.584777 (-3.849436) | 2.418557 / 3.745712 (-1.327155) | 2.983406 / 5.269862 (-2.286455) | 2.005853 / 4.565676 (-2.559823) | 0.080440 / 0.424275 (-0.343835) | 0.005242 / 0.007607 (-0.002365) | 0.343557 / 0.226044 (0.117513) | 3.358984 / 2.268929 (1.090055) | 1.816709 / 55.444624 (-53.627915) | 1.500225 / 6.876477 (-5.376252) | 1.715405 / 2.142072 (-0.426667) | 0.829054 / 4.805227 (-3.976174) | 0.138352 / 6.500664 (-6.362312) | 0.043709 / 0.075469 (-0.031760) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.969135 / 1.841788 (-0.872652) | 12.510750 / 8.074308 (4.436442) | 10.140368 / 10.191392 (-0.051024) | 0.133117 / 0.680424 (-0.547307) | 0.015775 / 0.534201 (-0.518426) | 0.302203 / 0.579283 (-0.277080) | 0.268214 / 0.434364 (-0.166150) | 0.347041 / 0.540337 (-0.193296) | 0.456095 / 1.386936 (-0.930841) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006255 / 0.011353 (-0.005098) | 0.004453 / 0.011008 (-0.006555) | 0.052298 / 0.038508 (0.013790) | 0.034808 / 0.023109 (0.011699) | 0.274723 / 0.275898 (-0.001175) | 0.297199 / 0.323480 (-0.026281) | 0.004499 / 0.007986 (-0.003486) | 0.003086 / 0.004328 (-0.001242) | 0.051315 / 0.004250 (0.047065) | 0.042764 / 0.037052 (0.005712) | 0.285636 / 0.258489 (0.027147) | 0.321819 / 0.293841 (0.027978) | 0.033350 / 0.128546 (-0.095196) | 0.013457 / 0.075646 (-0.062189) | 0.063930 / 0.419271 (-0.355342) | 0.034537 / 0.043533 (-0.008996) | 0.272630 / 0.255139 (0.017491) | 0.289245 / 0.283200 (0.006045) | 0.018910 / 0.141683 (-0.122773) | 1.153064 / 1.452155 (-0.299091) | 1.207065 / 1.492716 (-0.285651) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093008 / 0.018006 (0.075002) | 0.301313 / 0.000490 (0.300823) | 0.000214 / 0.000200 (0.000014) | 0.000054 / 0.000054 (-0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023168 / 0.037411 (-0.014244) | 0.080837 / 0.014526 (0.066312) | 0.089667 / 0.176557 (-0.086889) | 0.135849 / 0.737135 (-0.601286) | 0.092082 / 0.296338 (-0.204257) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.298933 / 0.215209 (0.083723) | 2.847736 / 2.077655 (0.770082) | 1.550268 / 1.504120 (0.046148) | 1.425675 / 1.541195 (-0.115520) | 1.469251 / 1.468490 (0.000761) | 0.720446 / 4.584777 (-3.864331) | 0.976149 / 3.745712 (-2.769563) | 3.081804 / 5.269862 (-2.188057) | 1.982797 / 4.565676 (-2.582880) | 0.078598 / 0.424275 (-0.345677) | 0.005229 / 0.007607 (-0.002379) | 0.345475 / 0.226044 (0.119430) | 3.421312 / 2.268929 (1.152384) | 1.929034 / 55.444624 (-53.515590) | 1.631523 / 6.876477 (-5.244953) | 1.671996 / 2.142072 (-0.470077) | 0.776916 / 4.805227 (-4.028311) | 0.133966 / 6.500664 (-6.366699) | 0.042183 / 0.075469 (-0.033286) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.993023 / 1.841788 (-0.848764) | 12.981642 / 8.074308 (4.907334) | 10.610457 / 10.191392 (0.419065) | 0.146748 / 0.680424 (-0.533676) | 0.016556 / 0.534201 (-0.517645) | 0.303613 / 0.579283 (-0.275670) | 0.132671 / 0.434364 (-0.301693) | 0.344786 / 0.540337 (-0.195552) | 0.443049 / 1.386936 (-0.943887) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8419c40a085d67eb5832cecebf3ef8213112857d \"CML watermark\")\n"
] | Mark tests that require librosa | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7044/reactions"
} | PR_kwDODunzps51MLbh | {
"diff_url": "https://github.com/huggingface/datasets/pull/7044.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7044",
"merged_at": "2024-07-12T09:00:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7044.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7044"
} | 2024-07-12T08:06:59Z | https://api.github.com/repos/huggingface/datasets/issues/7044/comments | Mark tests that require `librosa`.
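A sketch of a common pattern for such a marker (the general approach, not necessarily this PR's exact helper):
```python
import pytest

try:
    import librosa  # optional dependency, installed via the `audio` extra
except ImportError:
    librosa = None

require_librosa = pytest.mark.skipif(librosa is None, reason="test requires librosa")

@require_librosa
def test_resampling_with_librosa():
    ...
```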
Note that `librosa` is an optional dependency (installed with the `audio` extra) and we should be able to test environments without that library installed. This is the case if we want to test NumPy 2.0, which is currently incompatible with `librosa` due to its dependency on `soxr`:
- https://github.com/dofuuz/python-soxr/issues/28 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/7044/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7044/timeline | closed | false | 7,044 | null | 2024-07-12T09:00:09Z | null | true |
2,404,951,714 | https://api.github.com/repos/huggingface/datasets/issues/7043 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7043/events | [] | null | 2024-07-12T08:12:55Z | [] | https://github.com/huggingface/datasets/pull/7043 | MEMBER | null | false | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7043). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005147 / 0.011353 (-0.006205) | 0.003403 / 0.011008 (-0.007605) | 0.061367 / 0.038508 (0.022859) | 0.030295 / 0.023109 (0.007186) | 0.233503 / 0.275898 (-0.042395) | 0.252644 / 0.323480 (-0.070836) | 0.004072 / 0.007986 (-0.003913) | 0.002678 / 0.004328 (-0.001650) | 0.049099 / 0.004250 (0.044848) | 0.043032 / 0.037052 (0.005979) | 0.248823 / 0.258489 (-0.009666) | 0.274895 / 0.293841 (-0.018946) | 0.029307 / 0.128546 (-0.099239) | 0.011186 / 0.075646 (-0.064460) | 0.197142 / 0.419271 (-0.222129) | 0.035924 / 0.043533 (-0.007609) | 0.234728 / 0.255139 (-0.020411) | 0.252990 / 0.283200 (-0.030209) | 0.017589 / 0.141683 (-0.124094) | 1.108252 / 1.452155 (-0.343903) | 1.135949 / 1.492716 (-0.356767) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093096 / 0.018006 (0.075090) | 0.289284 / 0.000490 (0.288794) | 0.000208 / 0.000200 (0.000008) | 0.000038 / 0.000054 (-0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017633 / 0.037411 (-0.019778) | 0.060621 / 0.014526 (0.046095) | 0.073194 / 0.176557 (-0.103363) | 0.120176 / 0.737135 (-0.616959) | 0.073575 / 0.296338 (-0.222764) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.277168 / 0.215209 (0.061959) | 2.689714 / 2.077655 (0.612060) | 1.427558 / 1.504120 (-0.076562) | 1.331350 / 1.541195 (-0.209844) | 1.353069 / 
1.468490 (-0.115421) | 0.716657 / 4.584777 (-3.868120) | 2.321145 / 3.745712 (-1.424567) | 2.757986 / 5.269862 (-2.511876) | 1.851604 / 4.565676 (-2.714072) | 0.089530 / 0.424275 (-0.334745) | 0.004884 / 0.007607 (-0.002723) | 0.327859 / 0.226044 (0.101814) | 3.290749 / 2.268929 (1.021821) | 1.831090 / 55.444624 (-53.613535) | 1.509247 / 6.876477 (-5.367229) | 1.616545 / 2.142072 (-0.525527) | 0.775228 / 4.805227 (-4.029999) | 0.133794 / 6.500664 (-6.366870) | 0.040644 / 0.075469 (-0.034825) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.950816 / 1.841788 (-0.890972) | 11.109938 / 8.074308 (3.035630) | 9.560673 / 10.191392 (-0.630719) | 0.130685 / 0.680424 (-0.549738) | 0.014096 / 0.534201 (-0.520105) | 0.297222 / 0.579283 (-0.282061) | 0.262777 / 0.434364 (-0.171587) | 0.340983 / 0.540337 (-0.199355) | 0.426107 / 1.386936 (-0.960829) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005547 / 0.011353 (-0.005806) | 0.003425 / 0.011008 (-0.007584) | 0.049791 / 0.038508 (0.011283) | 0.032660 / 0.023109 (0.009550) | 0.257640 / 0.275898 (-0.018258) | 0.283483 / 0.323480 (-0.039997) | 0.004330 / 0.007986 (-0.003655) | 0.002297 / 0.004328 (-0.002032) | 0.047999 / 0.004250 (0.043748) | 0.039875 / 0.037052 (0.002822) | 0.273300 / 0.258489 (0.014811) | 0.303384 / 0.293841 (0.009543) | 0.031696 / 0.128546 (-0.096851) | 0.011913 / 0.075646 (-0.063733) | 0.060330 / 0.419271 (-0.358942) | 0.033253 / 0.043533 (-0.010280) | 0.255378 / 0.255139 (0.000240) | 0.271647 / 0.283200 (-0.011553) | 0.018772 / 0.141683 (-0.122910) | 1.116079 / 1.452155 (-0.336075) | 1.165133 / 1.492716 (-0.327583) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094325 / 0.018006 (0.076319) | 0.297523 / 0.000490 (0.297034) | 0.000210 / 0.000200 (0.000011) | 0.000047 / 0.000054 (-0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022485 / 0.037411 (-0.014926) | 0.073731 / 0.014526 (0.059205) | 0.089039 / 0.176557 (-0.087518) | 0.124035 / 0.737135 (-0.613101) | 0.088053 / 0.296338 (-0.208286) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286676 / 0.215209 (0.071467) | 2.794678 / 2.077655 (0.717024) | 1.541401 / 1.504120 (0.037281) | 1.432928 / 1.541195 (-0.108267) | 1.454940 / 1.468490 (-0.013550) | 0.721779 / 4.584777 (-3.862998) | 0.956514 / 3.745712 (-2.789198) | 2.889533 / 5.269862 (-2.380329) | 1.863980 / 4.565676 (-2.701696) | 0.078366 / 0.424275 (-0.345909) | 0.005137 / 0.007607 (-0.002470) | 0.338835 / 0.226044 (0.112791) | 3.320921 / 2.268929 (1.051993) | 1.903654 / 55.444624 (-53.540970) | 1.615294 / 6.876477 (-5.261182) | 1.624777 / 2.142072 (-0.517295) | 0.792417 / 4.805227 (-4.012810) | 0.133321 / 6.500664 (-6.367343) | 0.040127 / 0.075469 (-0.035342) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.982357 / 1.841788 (-0.859430) | 11.585106 / 8.074308 (3.510798) | 9.991577 / 10.191392 (-0.199815) | 0.149292 / 0.680424 (-0.531131) | 0.015693 / 0.534201 (-0.518508) | 0.297416 / 0.579283 (-0.281867) | 0.118565 / 0.434364 (-0.315799) | 0.335640 / 0.540337 (-0.204697) | 0.429484 / 1.386936 (-0.957452) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3091d7608f20e182f21bb7d0b68be66c0798509a \"CML watermark\")\n"
] | Add decorator as explicit test dependency | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7043/reactions"
} | PR_kwDODunzps51MAN0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7043.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7043",
"merged_at": "2024-07-12T08:07:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7043.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7043"
} | 2024-07-12T07:35:23Z | https://api.github.com/repos/huggingface/datasets/issues/7043/comments | Add decorator as explicit test dependency.
We have used the `decorator` library in our CI tests since PR:
- #4845
However, we did not add it as an explicit test requirement; we depended on it indirectly through other libraries' dependencies. A sketch of the kind of change is shown below.
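A minimal sketch (assuming the test extras are declared in a `TESTS_REQUIRE` list in `setup.py`):
```python
TESTS_REQUIRE = [
    # ...
    "decorator",  # used directly by our CI tests since #4845; declared explicitly now
]
```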
I discovered this while testing NumPy 2.0 and removing incompatible libraries. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/7043/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7043/timeline | closed | false | 7,043 | null | 2024-07-12T08:07:10Z | null | true |
2,404,605,836 | https://api.github.com/repos/huggingface/datasets/issues/7042 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7042/events | [] | null | 2024-08-15T10:07:44Z | [] | https://github.com/huggingface/datasets/pull/7042 | CONTRIBUTOR | null | false | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005135 / 0.011353 (-0.006218) | 0.003389 / 0.011008 (-0.007619) | 0.063053 / 0.038508 (0.024545) | 0.031597 / 0.023109 (0.008487) | 0.237519 / 0.275898 (-0.038379) | 0.263101 / 0.323480 (-0.060379) | 0.003109 / 0.007986 (-0.004877) | 0.002699 / 0.004328 (-0.001630) | 0.048611 / 0.004250 (0.044361) | 0.042937 / 0.037052 (0.005884) | 0.253760 / 0.258489 (-0.004729) | 0.275444 / 0.293841 (-0.018397) | 0.028952 / 0.128546 (-0.099594) | 0.011837 / 0.075646 (-0.063809) | 0.207620 / 0.419271 (-0.211651) | 0.035727 / 0.043533 (-0.007806) | 0.241770 / 0.255139 (-0.013369) | 0.270509 / 0.283200 (-0.012691) | 0.020709 / 0.141683 (-0.120974) | 1.135722 / 1.452155 (-0.316432) | 1.200355 / 1.492716 (-0.292361) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092555 / 0.018006 (0.074549) | 0.284719 / 0.000490 (0.284229) | 0.000210 / 0.000200 (0.000010) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018431 / 0.037411 (-0.018980) | 0.063618 / 0.014526 (0.049092) | 0.075371 / 0.176557 (-0.101185) | 0.120982 / 0.737135 (-0.616153) | 0.075718 / 0.296338 (-0.220620) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279439 / 0.215209 (0.064230) | 2.722274 / 2.077655 (0.644619) | 1.442314 / 1.504120 (-0.061806) | 1.323166 / 1.541195 (-0.218029) | 1.339642 / 
1.468490 (-0.128848) | 0.723451 / 4.584777 (-3.861326) | 2.334879 / 3.745712 (-1.410833) | 2.938745 / 5.269862 (-2.331116) | 1.867278 / 4.565676 (-2.698398) | 0.078704 / 0.424275 (-0.345571) | 0.005128 / 0.007607 (-0.002479) | 0.338634 / 0.226044 (0.112589) | 3.266239 / 2.268929 (0.997311) | 1.815276 / 55.444624 (-53.629349) | 1.487158 / 6.876477 (-5.389319) | 1.547550 / 2.142072 (-0.594522) | 0.804458 / 4.805227 (-4.000769) | 0.139186 / 6.500664 (-6.361479) | 0.042935 / 0.075469 (-0.032534) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.978223 / 1.841788 (-0.863564) | 11.350997 / 8.074308 (3.276689) | 10.082980 / 10.191392 (-0.108412) | 0.145067 / 0.680424 (-0.535357) | 0.014132 / 0.534201 (-0.520069) | 0.302162 / 0.579283 (-0.277121) | 0.264603 / 0.434364 (-0.169761) | 0.338466 / 0.540337 (-0.201871) | 0.427891 / 1.386936 (-0.959045) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006078 / 0.011353 (-0.005275) | 0.004030 / 0.011008 (-0.006978) | 0.051646 / 0.038508 (0.013138) | 0.031263 / 0.023109 (0.008154) | 0.279437 / 0.275898 (0.003539) | 0.304489 / 0.323480 (-0.018991) | 0.004553 / 0.007986 (-0.003433) | 0.002869 / 0.004328 (-0.001459) | 0.050638 / 0.004250 (0.046387) | 0.041091 / 0.037052 (0.004038) | 0.290681 / 0.258489 (0.032192) | 0.332059 / 0.293841 (0.038218) | 0.033353 / 0.128546 (-0.095193) | 0.012506 / 0.075646 (-0.063141) | 0.061788 / 0.419271 (-0.357484) | 0.034150 / 0.043533 (-0.009382) | 0.278258 / 0.255139 (0.023119) | 0.298084 / 0.283200 (0.014885) | 0.019106 / 0.141683 (-0.122577) | 1.164475 / 1.452155 (-0.287679) | 1.204804 / 1.492716 (-0.287912) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.100053 / 0.018006 (0.082047) | 0.301255 / 0.000490 (0.300765) | 0.000220 / 0.000200 (0.000020) | 0.000057 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023536 / 0.037411 (-0.013876) | 0.078513 / 0.014526 (0.063987) | 0.090281 / 0.176557 (-0.086276) | 0.129607 / 0.737135 (-0.607528) | 0.090742 / 0.296338 (-0.205596) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.304082 / 0.215209 (0.088873) | 2.909401 / 2.077655 (0.831747) | 1.587210 / 1.504120 (0.083090) | 1.458713 / 1.541195 (-0.082482) | 1.472579 / 1.468490 (0.004089) | 0.716542 / 4.584777 (-3.868235) | 0.947557 / 3.745712 (-2.798155) | 2.908044 / 5.269862 (-2.361817) | 1.886382 / 4.565676 (-2.679294) | 0.078105 / 0.424275 (-0.346170) | 0.005802 / 0.007607 (-0.001805) | 0.357883 / 0.226044 (0.131839) | 3.490958 / 2.268929 (1.222029) | 1.946574 / 55.444624 (-53.498050) | 1.645167 / 6.876477 (-5.231310) | 1.649242 / 2.142072 (-0.492830) | 0.796864 / 4.805227 (-4.008363) | 0.134206 / 6.500664 (-6.366458) | 0.041439 / 0.075469 (-0.034030) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.012311 / 1.841788 (-0.829477) | 12.396967 / 8.074308 (4.322659) | 10.382494 / 10.191392 (0.191102) | 0.157395 / 0.680424 (-0.523029) | 0.015154 / 0.534201 (-0.519047) | 0.302209 / 0.579283 (-0.277074) | 0.127430 / 0.434364 (-0.306934) | 0.348933 / 0.540337 (-0.191404) | 0.442930 / 1.386936 (-0.944006) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#69d9f455c3c51625e6c9ffcade122313e9098f3c \"CML watermark\")\n"
] | Improved the tutorial by adding a link for loading datasets | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7042/reactions"
} | PR_kwDODunzps51K8CM | {
"diff_url": "https://github.com/huggingface/datasets/pull/7042.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7042",
"merged_at": "2024-08-15T10:01:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7042.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7042"
} | 2024-07-12T03:49:54Z | https://api.github.com/repos/huggingface/datasets/issues/7042/comments | Improved the tutorial by letting readers know about loading datasets with common files and including a link. I left the local files section alone because the methods were already listed with code snippets. | {
"avatar_url": "https://avatars.githubusercontent.com/u/41874659?v=4",
"events_url": "https://api.github.com/users/AmboThom/events{/privacy}",
"followers_url": "https://api.github.com/users/AmboThom/followers",
"following_url": "https://api.github.com/users/AmboThom/following{/other_user}",
"gists_url": "https://api.github.com/users/AmboThom/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AmboThom",
"id": 41874659,
"login": "AmboThom",
"node_id": "MDQ6VXNlcjQxODc0NjU5",
"organizations_url": "https://api.github.com/users/AmboThom/orgs",
"received_events_url": "https://api.github.com/users/AmboThom/received_events",
"repos_url": "https://api.github.com/users/AmboThom/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AmboThom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmboThom/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AmboThom"
} | https://api.github.com/repos/huggingface/datasets/issues/7042/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7042/timeline | closed | false | 7,042 | null | 2024-08-15T10:01:59Z | null | true |
2,404,576,038 | https://api.github.com/repos/huggingface/datasets/issues/7041 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7041/events | [] | null | 2024-07-22T13:55:17Z | [] | https://github.com/huggingface/datasets/issues/7041 | NONE | null | null | null | [
"`filter` add an indices mapping on top of the dataset, so `sort` has to gather all the rows that are kept to form a new Arrow table and sort the table. Gathering all the rows can take some time, but is a necessary step. You can try calling `ds = ds.flatten_indices()` before sorting to remove the indices mapping."
] | `sort` after `filter` unreasonably slow | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7041/reactions"
} | I_kwDODunzps6PUusm | null | 2024-07-12T03:29:27Z | https://api.github.com/repos/huggingface/datasets/issues/7041/comments | ### Describe the bug
as the title says ...
### Steps to reproduce the bug
`sort` on its own behaves normally.
```python
from datasets import Dataset
import random
nums = [{"k":random.choice(range(0,1000))} for _ in range(100000)]
ds = Dataset.from_list(nums)
print("start sort")
ds = ds.sort("k")
print("finish sort")
```
But `sort` after `filter` is extremely slow.
```python
from datasets import Dataset
import random
nums = [{"k":random.choice(range(0,1000))} for _ in range(100000)]
ds = Dataset.from_list(nums)
ds = ds.filter(lambda x: x > 100, input_columns="k")
print("start sort")
ds = ds.sort("k")
print("finish sort")
```
### Expected behavior
Is this a bug, or is it a misuse of the `sort` function?
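For reference, a possible workaround (a sketch only, echoing the `flatten_indices()` suggestion from the comments — not necessarily the intended usage) is to materialize the kept rows before sorting, so `sort` runs on a contiguous Arrow table instead of gathering rows through an indices mapping:
```python
from datasets import Dataset
import random

nums = [{"k": random.choice(range(0, 1000))} for _ in range(100000)]
ds = Dataset.from_list(nums)
ds = ds.filter(lambda x: x > 100, input_columns="k")
ds = ds.flatten_indices()  # rewrite the kept rows into a new contiguous table
ds = ds.sort("k")          # now sorts without gathering through an indices mapping
```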
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-3.10.0-1127.19.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.10.13
- `huggingface_hub` version: 0.23.4
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2023.10.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/56711045?v=4",
"events_url": "https://api.github.com/users/Tobin-rgb/events{/privacy}",
"followers_url": "https://api.github.com/users/Tobin-rgb/followers",
"following_url": "https://api.github.com/users/Tobin-rgb/following{/other_user}",
"gists_url": "https://api.github.com/users/Tobin-rgb/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Tobin-rgb",
"id": 56711045,
"login": "Tobin-rgb",
"node_id": "MDQ6VXNlcjU2NzExMDQ1",
"organizations_url": "https://api.github.com/users/Tobin-rgb/orgs",
"received_events_url": "https://api.github.com/users/Tobin-rgb/received_events",
"repos_url": "https://api.github.com/users/Tobin-rgb/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Tobin-rgb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tobin-rgb/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Tobin-rgb"
} | https://api.github.com/repos/huggingface/datasets/issues/7041/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7041/timeline | open | false | 7,041 | null | null | null | false |
2,402,918,335 | https://api.github.com/repos/huggingface/datasets/issues/7040 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7040/events | [] | null | 2024-07-11T14:11:56Z | [] | https://github.com/huggingface/datasets/issues/7040 | NONE | null | null | null | [
"When you pass `streaming=True`, the cache is ignored. The remote data URL is used instead and the data is streamed from the remote server.",
"Thanks for your reply! So is there any solution to get my expected behavior besides clone the whole repo ? Or could I adjust my script to load the downloaded arrow files and generate the dataset streamingly?"
] | load `streaming=True` dataset with downloaded cache | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7040/reactions"
} | I_kwDODunzps6POZ-_ | null | 2024-07-11T11:14:13Z | https://api.github.com/repos/huggingface/datasets/issues/7040/comments | ### Describe the bug
We built a dataset that contains several HDF5 files and wrote a script using `h5py` to generate it. The HDF5 files are large, and the processed dataset cache takes even more disk space, so we hope to try a streaming iterable dataset. Unfortunately, `h5py` can't turn a remote URL into an HDF5 file descriptor directly, so we use `fsspec` as an interface, like below:
```python
def _generate_examples(self, filepath, split):
    for file in filepath:
        with fsspec.open(file, "rb") as fs:
            with h5py.File(fs, "r") as fp:
                # for event_id in sorted(list(fp.keys())):
                event_ids = list(fp.keys())
                ......
```
### Steps to reproduce the bug
The `fsspec` interface works, but it takes 10+ minutes to print the first 10 examples, which is even longer than the download time. I'm not sure whether it just caches the whole HDF5 file before generating the examples.
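(As an aside, one thing that might reduce the slow first reads — sketched here with a hypothetical URL and cache path, so treat it as an untested assumption — is fsspec's block-caching layer, which downloads and caches only the byte ranges `h5py` actually touches instead of the whole file:)
```python
import fsspec
import h5py

# Sketch: chain fsspec's "blockcache" protocol in front of the remote URL.
# "https://example.com/data.h5" and the cache directory are placeholders.
with fsspec.open(
    "blockcache::https://example.com/data.h5",
    mode="rb",
    blockcache={"cache_storage": "/tmp/hdf5_blocks"},
) as fs:
    with h5py.File(fs, "r") as fp:
        event_ids = list(fp.keys())
```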
### Expected behavior
So does the following make sense so far?
1. download the files
```python
dataset = datasets.load_dataset('path/to/myscripts', split="train", name="event", trust_remote_code=True)
```
2. load the iterable dataset faster (using the raw file cache at path `.cache/huggingface/datasets/downloads`)
```python
dataset = datasets.load_dataset('path/to/myscripts', split="train", name="event", trust_remote_code=True, streaming=True)
```
I ran some tests, but the code above doesn't produce the expected result, and I'm not sure whether this is supported. I also found issue #6327, which seemed similar to mine, but I couldn't find a solution there.
### Environment info
- `datasets` = 2.18.0
- `h5py` = 3.10.0
- `fsspec` = 2023.10.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/39429965?v=4",
"events_url": "https://api.github.com/users/wanghaoyucn/events{/privacy}",
"followers_url": "https://api.github.com/users/wanghaoyucn/followers",
"following_url": "https://api.github.com/users/wanghaoyucn/following{/other_user}",
"gists_url": "https://api.github.com/users/wanghaoyucn/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wanghaoyucn",
"id": 39429965,
"login": "wanghaoyucn",
"node_id": "MDQ6VXNlcjM5NDI5OTY1",
"organizations_url": "https://api.github.com/users/wanghaoyucn/orgs",
"received_events_url": "https://api.github.com/users/wanghaoyucn/received_events",
"repos_url": "https://api.github.com/users/wanghaoyucn/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wanghaoyucn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wanghaoyucn/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wanghaoyucn"
} | https://api.github.com/repos/huggingface/datasets/issues/7040/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7040/timeline | open | false | 7,040 | null | null | null | false |
2,402,403,390 | https://api.github.com/repos/huggingface/datasets/issues/7039 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7039/events | [] | null | 2024-07-11T07:27:58Z | [] | https://github.com/huggingface/datasets/pull/7039 | MEMBER | null | true | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7039). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"The test before confirms the bug.\r\n\r\nThere are different possible solutions to this issue:\r\n- the easiest would be to write multiple JSON files, one for each batch; this solution can be done in parallel if `num_proc` is passed\r\n- alternatively, we could tweak the writing and remove the extra `[` and `]` characters; this solution will only be valid if `orient=\"records\"`\r\n- others?"
] | Fix export to JSON when dataset larger than batch size | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7039/reactions"
} | PR_kwDODunzps51DgCY | {
"diff_url": "https://github.com/huggingface/datasets/pull/7039.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7039",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7039.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7039"
} | 2024-07-11T06:52:22Z | https://api.github.com/repos/huggingface/datasets/issues/7039/comments | Fix export to JSON (`lines=False`) when the dataset is larger than the batch size.
Fix #7037. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/7039/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7039/timeline | open | false | 7,039 | null | null | null | true |
2,402,081,227 | https://api.github.com/repos/huggingface/datasets/issues/7038 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7038/events | [] | null | 2024-07-11T05:28:39Z | [] | https://github.com/huggingface/datasets/issues/7038 | NONE | not_planned | null | null | [
"This is the `datasets` repository, and the issue should be opened in the `transformers` repo instead."
] | Yes, can definitely elaborate: | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7038/reactions"
} | I_kwDODunzps6PLNnL | null | 2024-07-11T02:22:30Z | https://api.github.com/repos/huggingface/datasets/issues/7038/comments | Yes, can definitely elaborate:
Say I want to use HF Trainer with an arbitrary PyTorch optimizer (`AdamW` here just as an example). Then I should intuitively extend `Trainer` like:
```python
class CustomOptimizerTrainer(Trainer):
    @staticmethod
    def get_optimizer_cls_and_kwargs(args: HfTrainingArguments, model=None) -> tuple[type[torch.optim.Optimizer], dict[str, Any]]:
        optimizer = torch.optim.AdamW
        optimizer_kwargs = {
            "lr": 4e-3,
            "betas": (0.9, 0.999),
            "weight_decay": 0.05,
        }
        return optimizer, optimizer_kwargs
```
However, this won't take effect, because `Trainer.create_optimizer` hardcodes the `Trainer` class name when calling `get_optimizer_cls_and_kwargs`:
https://github.com/huggingface/transformers/blob/6c1d0b069de22d7ed8aa83f733c25045eea0585d/src/transformers/trainer.py#L1076
`CustomOptimizerTrainer.get_optimizer_cls_and_kwargs` will never be called.
So I could either:
- also override the entire `create_optimizer` and rewrite `Trainer.get_optimizer_cls_and_kwargs` to `self.get_optimizer_cls_and_kwargs` (overkill)
- or monkey-patch (not ideal):
```python
class CustomOptimizerTrainer(Trainer):
    # def get_optimizer_cls_and_kwargs ...

    def create_optimizer(self):
        trainer_get_optimizer_fn = Trainer.get_optimizer_cls_and_kwargs
        Trainer.get_optimizer_cls_and_kwargs = self.get_optimizer_cls_and_kwargs
        optimizer = super().create_optimizer()
        Trainer.get_optimizer_cls_and_kwargs = trainer_get_optimizer_fn
        return optimizer
```
But I think the best fix is to change `Trainer.get_optimizer_cls_and_kwargs` to `self.get_optimizer_cls_and_kwargs` in the original source of `Trainer.create_optimizer`.
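Roughly, the change would look like this (a sketch of the relevant line, not the verbatim `transformers` source):
```python
# Inside Trainer.create_optimizer (sketch):
# before:
#   optimizer_cls, optimizer_kwargs = Trainer.get_optimizer_cls_and_kwargs(self.args, opt_model)
# after — dispatch through the instance so subclass overrides take effect:
optimizer_cls, optimizer_kwargs = self.get_optimizer_cls_and_kwargs(self.args, opt_model)
```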
I also made `get_optimizer_cls_and_kwargs` an instance method instead of a static method, but that probably doesn't matter as much and can be reverted; it does break the calling syntax used in the existing tests, though.
Please let me know if that's clearer and if you agree! Thanks!
_Originally posted by @apoorvkh in https://github.com/huggingface/transformers/issues/31875#issuecomment-2221491647_
| {
"avatar_url": "https://avatars.githubusercontent.com/u/165458456?v=4",
"events_url": "https://api.github.com/users/Khaliq88/events{/privacy}",
"followers_url": "https://api.github.com/users/Khaliq88/followers",
"following_url": "https://api.github.com/users/Khaliq88/following{/other_user}",
"gists_url": "https://api.github.com/users/Khaliq88/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Khaliq88",
"id": 165458456,
"login": "Khaliq88",
"node_id": "U_kgDOCdyyGA",
"organizations_url": "https://api.github.com/users/Khaliq88/orgs",
"received_events_url": "https://api.github.com/users/Khaliq88/received_events",
"repos_url": "https://api.github.com/users/Khaliq88/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Khaliq88/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Khaliq88/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Khaliq88"
} | https://api.github.com/repos/huggingface/datasets/issues/7038/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7038/timeline | closed | false | 7,038 | null | 2024-07-11T05:28:39Z | null | false |
2,400,192,419 | https://api.github.com/repos/huggingface/datasets/issues/7037 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7037/events | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | null | 2024-07-10T13:07:44Z | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | https://github.com/huggingface/datasets/issues/7037 | NONE | null | null | null | [
"Thanks for reporting, @LinglingGreat.\r\n\r\nI confirm this is a bug."
] | A bug of Dataset.to_json() function | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7037/reactions"
} | I_kwDODunzps6PEAej | null | 2024-07-10T09:11:22Z | https://api.github.com/repos/huggingface/datasets/issues/7037/comments | ### Describe the bug
When using the `Dataset.to_json()` function with `lines=False`, an unexpected error occurs. The stored data should be a single list, but it actually turns into multiple back-to-back lists, which causes an error when reading the data again.
The reason is that `to_json()` writes to the file in several segments based on the batch size. This is not a problem when `lines=True`, but it is incorrect when `lines=False`, because writing in several passes produces multiple lists (when `len(dataset) > batch_size`).
### Steps to reproduce the bug
try this code:
```python
from datasets import load_dataset
import json
train_dataset = load_dataset("Anthropic/hh-rlhf", data_dir="harmless-base")["train"]
output_path = "./harmless-base_hftojs.json"
print(len(train_dataset))
train_dataset.to_json(output_path, lines=False, force_ascii=False, indent=2)
with open(output_path, encoding="utf-8") as f:
    data = json.loads(f.read())
```
It raises an error: `json.decoder.JSONDecodeError: Extra data: line 4003 column 1 (char 1373709)`.
Extra square brackets have appeared here:
<img width="265" alt="image" src="https://github.com/huggingface/datasets/assets/26499566/81492332-386d-42e8-88d1-b6d4ae3682cc">
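Until this is fixed, a possible workaround (a sketch; it materializes the whole dataset in memory, so it only suits datasets that fit in RAM) is to serialize in a single pass with the standard `json` module:
```python
import json

# Sketch: bypass to_json()'s batched writer by dumping the rows as one list.
with open(output_path, "w", encoding="utf-8") as f:
    json.dump(train_dataset.to_list(), f, ensure_ascii=False, indent=2)
```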
### Expected behavior
The code runs normally.
### Environment info
datasets=2.20.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/26499566?v=4",
"events_url": "https://api.github.com/users/LinglingGreat/events{/privacy}",
"followers_url": "https://api.github.com/users/LinglingGreat/followers",
"following_url": "https://api.github.com/users/LinglingGreat/following{/other_user}",
"gists_url": "https://api.github.com/users/LinglingGreat/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LinglingGreat",
"id": 26499566,
"login": "LinglingGreat",
"node_id": "MDQ6VXNlcjI2NDk5NTY2",
"organizations_url": "https://api.github.com/users/LinglingGreat/orgs",
"received_events_url": "https://api.github.com/users/LinglingGreat/received_events",
"repos_url": "https://api.github.com/users/LinglingGreat/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LinglingGreat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LinglingGreat/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LinglingGreat"
} | https://api.github.com/repos/huggingface/datasets/issues/7037/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7037/timeline | open | false | 7,037 | null | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | false |
2,400,035,672 | https://api.github.com/repos/huggingface/datasets/issues/7036 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7036/events | [] | null | 2024-07-26T07:58:00Z | [] | https://github.com/huggingface/datasets/pull/7036 | MEMBER | null | false | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7036). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005582 / 0.011353 (-0.005771) | 0.003968 / 0.011008 (-0.007041) | 0.063672 / 0.038508 (0.025164) | 0.032360 / 0.023109 (0.009251) | 0.241351 / 0.275898 (-0.034547) | 0.264926 / 0.323480 (-0.058554) | 0.003186 / 0.007986 (-0.004800) | 0.003423 / 0.004328 (-0.000906) | 0.049600 / 0.004250 (0.045350) | 0.045558 / 0.037052 (0.008506) | 0.253326 / 0.258489 (-0.005163) | 0.289474 / 0.293841 (-0.004367) | 0.030285 / 0.128546 (-0.098261) | 0.012424 / 0.075646 (-0.063222) | 0.203914 / 0.419271 (-0.215358) | 0.036569 / 0.043533 (-0.006964) | 0.245252 / 0.255139 (-0.009887) | 0.261971 / 0.283200 (-0.021228) | 0.018276 / 0.141683 (-0.123406) | 1.120386 / 1.452155 (-0.331769) | 1.181736 / 1.492716 (-0.310980) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095427 / 0.018006 (0.077421) | 0.300666 / 0.000490 (0.300176) | 0.000205 / 0.000200 (0.000005) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019255 / 0.037411 (-0.018156) | 0.062645 / 0.014526 (0.048119) | 0.074822 / 0.176557 (-0.101734) | 0.121222 / 0.737135 (-0.615913) | 0.076136 / 0.296338 (-0.220202) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279756 / 0.215209 (0.064547) | 2.769680 / 2.077655 (0.692025) | 1.466156 / 1.504120 (-0.037964) | 1.348337 / 1.541195 (-0.192857) | 1.348311 / 
1.468490 (-0.120179) | 0.710414 / 4.584777 (-3.874363) | 2.379192 / 3.745712 (-1.366520) | 2.990227 / 5.269862 (-2.279635) | 1.909749 / 4.565676 (-2.655928) | 0.079677 / 0.424275 (-0.344598) | 0.005116 / 0.007607 (-0.002491) | 0.335442 / 0.226044 (0.109398) | 3.308757 / 2.268929 (1.039828) | 1.831681 / 55.444624 (-53.612944) | 1.528642 / 6.876477 (-5.347835) | 1.554577 / 2.142072 (-0.587496) | 0.777722 / 4.805227 (-4.027505) | 0.132164 / 6.500664 (-6.368501) | 0.042277 / 0.075469 (-0.033193) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.964461 / 1.841788 (-0.877327) | 11.436569 / 8.074308 (3.362261) | 9.801367 / 10.191392 (-0.390025) | 0.130214 / 0.680424 (-0.550210) | 0.015288 / 0.534201 (-0.518913) | 0.303992 / 0.579283 (-0.275292) | 0.258128 / 0.434364 (-0.176236) | 0.347259 / 0.540337 (-0.193078) | 0.438156 / 1.386936 (-0.948780) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006019 / 0.011353 (-0.005334) | 0.003872 / 0.011008 (-0.007136) | 0.050763 / 0.038508 (0.012255) | 0.033993 / 0.023109 (0.010884) | 0.271789 / 0.275898 (-0.004109) | 0.298849 / 0.323480 (-0.024631) | 0.004486 / 0.007986 (-0.003500) | 0.002789 / 0.004328 (-0.001540) | 0.049926 / 0.004250 (0.045676) | 0.040470 / 0.037052 (0.003418) | 0.287533 / 0.258489 (0.029044) | 0.320066 / 0.293841 (0.026225) | 0.033039 / 0.128546 (-0.095508) | 0.011842 / 0.075646 (-0.063804) | 0.061016 / 0.419271 (-0.358256) | 0.034807 / 0.043533 (-0.008726) | 0.272079 / 0.255139 (0.016940) | 0.291603 / 0.283200 (0.008403) | 0.018676 / 0.141683 (-0.123007) | 1.171214 / 1.452155 (-0.280940) | 1.210691 / 1.492716 (-0.282025) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093045 / 0.018006 (0.075038) | 0.301045 / 0.000490 (0.300556) | 0.000213 / 0.000200 (0.000013) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022616 / 0.037411 (-0.014795) | 0.077271 / 0.014526 (0.062746) | 0.088959 / 0.176557 (-0.087598) | 0.129961 / 0.737135 (-0.607174) | 0.090495 / 0.296338 (-0.205843) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.301864 / 0.215209 (0.086655) | 2.947486 / 2.077655 (0.869831) | 1.587123 / 1.504120 (0.083003) | 1.453799 / 1.541195 (-0.087396) | 1.474296 / 1.468490 (0.005806) | 0.718609 / 4.584777 (-3.866168) | 0.948426 / 3.745712 (-2.797286) | 2.877275 / 5.269862 (-2.392586) | 1.930940 / 4.565676 (-2.634736) | 0.079207 / 0.424275 (-0.345068) | 0.005379 / 0.007607 (-0.002228) | 0.357969 / 0.226044 (0.131925) | 3.576455 / 2.268929 (1.307527) | 1.985058 / 55.444624 (-53.459566) | 1.663730 / 6.876477 (-5.212747) | 1.812752 / 2.142072 (-0.329320) | 0.800200 / 4.805227 (-4.005027) | 0.135124 / 6.500664 (-6.365540) | 0.041211 / 0.075469 (-0.034258) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.032394 / 1.841788 (-0.809394) | 12.082436 / 8.074308 (4.008128) | 10.198703 / 10.191392 (0.007311) | 0.143578 / 0.680424 (-0.536846) | 0.015576 / 0.534201 (-0.518625) | 0.301450 / 0.579283 (-0.277833) | 0.126596 / 0.434364 (-0.307768) | 0.339437 / 0.540337 (-0.200900) | 0.445454 / 1.386936 (-0.941482) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#347f1664a31c1c0fcb6a1a0914ebfb99c134e116 \"CML watermark\")\n"
] | Fix doc generation when NamedSplit is used as parameter default value | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7036/reactions"
} | PR_kwDODunzps507bZk | {
"diff_url": "https://github.com/huggingface/datasets/pull/7036.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7036",
"merged_at": "2024-07-26T07:51:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7036.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7036"
} | 2024-07-10T07:58:46Z | https://api.github.com/repos/huggingface/datasets/issues/7036/comments | Fix doc generation when `NamedSplit` is used as parameter default value.
Fix #7035. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/7036/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7036/timeline | closed | false | 7,036 | null | 2024-07-26T07:51:52Z | null | true |
2,400,021,225 | https://api.github.com/repos/huggingface/datasets/issues/7035 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7035/events | [
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] | null | 2024-07-26T07:51:53Z | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | https://github.com/huggingface/datasets/issues/7035 | MEMBER | completed | null | null | [] | Docs are not generated when a parameter defaults to a NamedSplit value | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7035/reactions"
} | I_kwDODunzps6PDWrp | null | 2024-07-10T07:51:24Z | https://api.github.com/repos/huggingface/datasets/issues/7035/comments | While generating the docs, we get an error when some parameter defaults to a `NamedSplit` value, like:
```python
def call_function(split=Split.TRAIN):
...
```
The error is: ValueError: Equality not supported between split train and <class 'inspect._empty'>
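The failing comparison can be reproduced directly (a minimal sketch reusing `call_function` from the snippet above):
```python
import inspect

# Sketch: mirrors the check doc-builder runs on every parameter default.
# NamedSplit.__ne__ delegates to __eq__, which raises for unsupported types.
param = inspect.signature(call_function).parameters["split"]
param.default != inspect._empty  # raises ValueError instead of returning a bool
```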
See: https://github.com/huggingface/datasets/actions/runs/9869660902/job/27254359863?pr=7015
```
Building the MDX files: 97%|█████████▋| 58/60 [00:00<00:00, 91.94it/s]
Traceback (most recent call last):
  File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/build_doc.py", line 197, in build_mdx_files
    content, new_anchors, source_files, errors = resolve_autodoc(
  File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/build_doc.py", line 123, in resolve_autodoc
    doc = autodoc(
  File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/autodoc.py", line 499, in autodoc
    method_doc, check = document_object(
  File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/autodoc.py", line 395, in document_object
    signature = format_signature(obj)
  File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/autodoc.py", line 126, in format_signature
    if param.default != inspect._empty:
  File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/datasets/splits.py", line 136, in __ne__
    return not self.__eq__(other)
  File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/datasets/splits.py", line 379, in __eq__
    raise ValueError(f"Equality not supported between split {self} and {other}")
ValueError: Equality not supported between split train and <class 'inspect._empty'>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/home/runner/work/datasets/datasets/.venv/bin/doc-builder", line 8, in <module>
    sys.exit(main())
  File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/commands/doc_builder_cli.py", line 47, in main
    args.func(args)
  File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/commands/build.py", line 102, in build_command
    build_doc(
  File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/build_doc.py", line 367, in build_doc
    anchors_mapping, source_files_mapping = build_mdx_files(
  File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/build_doc.py", line 230, in build_mdx_files
    raise type(e)(f"There was an error when converting {file} to the MDX format.\n" + e.args[0]) from e
ValueError: There was an error when converting ../datasets/docs/source/package_reference/main_classes.mdx to the MDX format.
Equality not supported between split train and <class 'inspect._empty'>
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/7035/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7035/timeline | closed | false | 7,035 | null | 2024-07-26T07:51:53Z | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | false |
2,397,525,974 | https://api.github.com/repos/huggingface/datasets/issues/7034 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7034/events | [] | null | 2024-08-13T08:22:25Z | [] | https://github.com/huggingface/datasets/pull/7034 | CONTRIBUTOR | null | false | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005319 / 0.011353 (-0.006034) | 0.003979 / 0.011008 (-0.007030) | 0.063858 / 0.038508 (0.025350) | 0.031064 / 0.023109 (0.007955) | 0.232761 / 0.275898 (-0.043137) | 0.260362 / 0.323480 (-0.063118) | 0.004271 / 0.007986 (-0.003715) | 0.002801 / 0.004328 (-0.001527) | 0.049471 / 0.004250 (0.045220) | 0.043432 / 0.037052 (0.006379) | 0.247467 / 0.258489 (-0.011022) | 0.271926 / 0.293841 (-0.021915) | 0.030063 / 0.128546 (-0.098483) | 0.012659 / 0.075646 (-0.062988) | 0.204650 / 0.419271 (-0.214622) | 0.036340 / 0.043533 (-0.007192) | 0.237480 / 0.255139 (-0.017659) | 0.255955 / 0.283200 (-0.027244) | 0.017922 / 0.141683 (-0.123761) | 1.152251 / 1.452155 (-0.299904) | 1.195610 / 1.492716 (-0.297106) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095411 / 0.018006 (0.077405) | 0.296836 / 0.000490 (0.296346) | 0.000226 / 0.000200 (0.000026) | 0.000054 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018547 / 0.037411 (-0.018865) | 0.063423 / 0.014526 (0.048897) | 0.073587 / 0.176557 (-0.102970) | 0.120327 / 0.737135 (-0.616808) | 0.076185 / 0.296338 (-0.220154) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282815 / 0.215209 (0.067606) | 2.781204 / 2.077655 (0.703549) | 1.432489 / 1.504120 (-0.071631) | 1.312018 / 1.541195 (-0.229177) | 1.328290 / 
1.468490 (-0.140200) | 0.734169 / 4.584777 (-3.850608) | 2.380654 / 3.745712 (-1.365058) | 2.904945 / 5.269862 (-2.364916) | 1.872079 / 4.565676 (-2.693598) | 0.078329 / 0.424275 (-0.345946) | 0.005151 / 0.007607 (-0.002457) | 0.338957 / 0.226044 (0.112912) | 3.353638 / 2.268929 (1.084709) | 1.812223 / 55.444624 (-53.632401) | 1.514860 / 6.876477 (-5.361617) | 1.528539 / 2.142072 (-0.613533) | 0.798711 / 4.805227 (-4.006516) | 0.135129 / 6.500664 (-6.365535) | 0.042355 / 0.075469 (-0.033114) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.954665 / 1.841788 (-0.887122) | 11.431925 / 8.074308 (3.357617) | 9.652583 / 10.191392 (-0.538809) | 0.132538 / 0.680424 (-0.547886) | 0.015517 / 0.534201 (-0.518683) | 0.303826 / 0.579283 (-0.275457) | 0.267530 / 0.434364 (-0.166834) | 0.340775 / 0.540337 (-0.199562) | 0.429909 / 1.386936 (-0.957027) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005819 / 0.011353 (-0.005533) | 0.003829 / 0.011008 (-0.007179) | 0.049707 / 0.038508 (0.011199) | 0.030810 / 0.023109 (0.007701) | 0.269637 / 0.275898 (-0.006261) | 0.295857 / 0.323480 (-0.027623) | 0.004462 / 0.007986 (-0.003523) | 0.002823 / 0.004328 (-0.001505) | 0.048544 / 0.004250 (0.044294) | 0.039692 / 0.037052 (0.002639) | 0.286837 / 0.258489 (0.028348) | 0.319874 / 0.293841 (0.026034) | 0.033319 / 0.128546 (-0.095227) | 0.012318 / 0.075646 (-0.063329) | 0.060319 / 0.419271 (-0.358953) | 0.034341 / 0.043533 (-0.009192) | 0.271132 / 0.255139 (0.015993) | 0.292577 / 0.283200 (0.009377) | 0.018298 / 0.141683 (-0.123384) | 1.136871 / 1.452155 (-0.315284) | 1.192894 / 1.492716 (-0.299822) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098890 / 0.018006 (0.080884) | 0.307830 / 0.000490 (0.307341) | 0.000214 / 0.000200 (0.000014) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023066 / 0.037411 (-0.014346) | 0.076732 / 0.014526 (0.062206) | 0.088154 / 0.176557 (-0.088403) | 0.129849 / 0.737135 (-0.607286) | 0.089368 / 0.296338 (-0.206970) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.298298 / 0.215209 (0.083089) | 2.914801 / 2.077655 (0.837147) | 1.609280 / 1.504120 (0.105160) | 1.486971 / 1.541195 (-0.054223) | 1.496254 / 1.468490 (0.027764) | 0.723780 / 4.584777 (-3.860997) | 0.972436 / 3.745712 (-2.773276) | 2.993773 / 5.269862 (-2.276089) | 1.911170 / 4.565676 (-2.654506) | 0.080599 / 0.424275 (-0.343677) | 0.005713 / 0.007607 (-0.001894) | 0.350510 / 0.226044 (0.124465) | 3.464035 / 2.268929 (1.195107) | 2.001558 / 55.444624 (-53.443066) | 1.691888 / 6.876477 (-5.184589) | 1.732348 / 2.142072 (-0.409724) | 0.818572 / 4.805227 (-3.986655) | 0.136770 / 6.500664 (-6.363894) | 0.041722 / 0.075469 (-0.033748) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.021225 / 1.841788 (-0.820563) | 11.941224 / 8.074308 (3.866915) | 10.118500 / 10.191392 (-0.072892) | 0.146167 / 0.680424 (-0.534257) | 0.015700 / 0.534201 (-0.518501) | 0.301511 / 0.579283 (-0.277772) | 0.122716 / 0.434364 (-0.311648) | 0.349048 / 0.540337 (-0.191290) | 0.444940 / 1.386936 (-0.941996) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a5c7fe5484e3b487eed4750fc6cc27c04bf90bd8 \"CML watermark\")\n"
] | chore: fix typos in docs | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7034/reactions"
} | PR_kwDODunzps50y-ya | {
"diff_url": "https://github.com/huggingface/datasets/pull/7034.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7034",
"merged_at": "2024-08-13T08:16:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7034.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7034"
} | 2024-07-09T08:35:05Z | https://api.github.com/repos/huggingface/datasets/issues/7034/comments | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/150505746?v=4",
"events_url": "https://api.github.com/users/hattizai/events{/privacy}",
"followers_url": "https://api.github.com/users/hattizai/followers",
"following_url": "https://api.github.com/users/hattizai/following{/other_user}",
"gists_url": "https://api.github.com/users/hattizai/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hattizai",
"id": 150505746,
"login": "hattizai",
"node_id": "U_kgDOCPiJEg",
"organizations_url": "https://api.github.com/users/hattizai/orgs",
"received_events_url": "https://api.github.com/users/hattizai/received_events",
"repos_url": "https://api.github.com/users/hattizai/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hattizai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hattizai/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hattizai"
} | https://api.github.com/repos/huggingface/datasets/issues/7034/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7034/timeline | closed | false | 7,034 | null | 2024-08-13T08:16:22Z | null | true |
2,397,419,768 | https://api.github.com/repos/huggingface/datasets/issues/7033 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7033/events | [] | null | 2024-07-26T12:56:16Z | [] | https://github.com/huggingface/datasets/issues/7033 | CONTRIBUTOR | completed | null | null | [
"Thanks for reporting, @pminervini.\r\n\r\nI agree we should give the option to define the split name.\r\n\r\nIndeed, there is a PR that addresses precisely this issue:\r\n- #7015\r\n\r\nI am reviewing it.",
"Booom! thank you guys :)"
] | `from_generator` does not allow to specify the split name | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7033/reactions"
} | I_kwDODunzps6O5bj4 | null | 2024-07-09T07:47:58Z | https://api.github.com/repos/huggingface/datasets/issues/7033/comments | ### Describe the bug
I'm building train, dev, and test splits using `from_generator`; however, in all three cases, the logger prints `Generating train split:`.
It's not possible to change the split name since it seems to be hardcoded: https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/generator/generator.py
### Steps to reproduce the bug
```
In [1]: from datasets import Dataset
In [2]: def gen():
...: yield {"pokemon": "bulbasaur", "type": "grass"}
...:
In [3]: ds = Dataset.from_generator(gen)
Generating train split: 1 examples [00:00, 133.89 examples/s]
```
### Expected behavior
It should be possible to specify any split name.
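For illustration, a minimal sketch of what the requested API could look like (the `split` argument here is the one proposed in #7015 and is hypothetical for the version reported below):

```python
from datasets import Dataset

def gen():
    yield {"pokemon": "bulbasaur", "type": "grass"}

# hypothetical: pass the desired split name instead of the hardcoded "train"
ds = Dataset.from_generator(gen, split="validation")
# expected log: Generating validation split: 1 examples
```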
### Environment info
- `datasets` version: 2.19.2
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.5
- `huggingface_hub` version: 0.23.3
- PyArrow version: 15.0.0
- Pandas version: 2.0.3
- `fsspec` version: 2023.10.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/227357?v=4",
"events_url": "https://api.github.com/users/pminervini/events{/privacy}",
"followers_url": "https://api.github.com/users/pminervini/followers",
"following_url": "https://api.github.com/users/pminervini/following{/other_user}",
"gists_url": "https://api.github.com/users/pminervini/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/pminervini",
"id": 227357,
"login": "pminervini",
"node_id": "MDQ6VXNlcjIyNzM1Nw==",
"organizations_url": "https://api.github.com/users/pminervini/orgs",
"received_events_url": "https://api.github.com/users/pminervini/received_events",
"repos_url": "https://api.github.com/users/pminervini/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/pminervini/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pminervini/subscriptions",
"type": "User",
"url": "https://api.github.com/users/pminervini"
} | https://api.github.com/repos/huggingface/datasets/issues/7033/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7033/timeline | closed | false | 7,033 | null | 2024-07-26T09:31:56Z | null | false |
2,395,531,699 | https://api.github.com/repos/huggingface/datasets/issues/7032 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7032/events | [] | null | 2024-07-12T15:07:03Z | [] | https://github.com/huggingface/datasets/pull/7032 | CONTRIBUTOR | null | false | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7032). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@albertvillanova hm I don't know tbh, it's just that \"mlfoundations/dclm-baseline-1.0\" dataset contains [files](https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0/tree/main/global-shard_01_of_10/local-shard_0_of_10) with this extension and these files seem to be valid ",
"not sure why CI is failing but seems to be unrelated to this pr? can I merge @lhoestq @albertvillanova ?",
"yes you can merge, the CI failure is unrelated (surely an issue with hub-ci)",
"ah why not, you could try opening a PR\r\n\r\nbtw there is a channel with them at (internal) https://app.slack.com/client/T1RCG4490/C079AKTV11P if you want to let them know",
"@lhoestq, your previous comment was addressed to me or Polina?\r\n\r\n@polinaeterna let me know if it is OK for you.",
"I opened https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0/discussions/7",
"Should we close this PR then?"
] | Register `.zstd` extension for zstd-compressed files | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7032/reactions"
} | PR_kwDODunzps50sJTq | {
"diff_url": "https://github.com/huggingface/datasets/pull/7032.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7032",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7032.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7032"
} | 2024-07-08T12:39:50Z | https://api.github.com/repos/huggingface/datasets/issues/7032/comments | For example, the https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0 dataset files have the `.zstd` extension, which is currently ignored (only `.zst` is registered). | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | https://api.github.com/repos/huggingface/datasets/issues/7032/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7032/timeline | closed | false | 7,032 | null | 2024-07-12T15:07:03Z | null | true |
2,395,401,692 | https://api.github.com/repos/huggingface/datasets/issues/7031 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7031/events | [] | null | 2024-07-08T11:47:29Z | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | https://github.com/huggingface/datasets/issues/7031 | MEMBER | not_planned | null | null | [] | CI quality is broken: use ruff check instead | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7031/reactions"
} | I_kwDODunzps6Oxu3c | null | 2024-07-08T11:42:24Z | https://api.github.com/repos/huggingface/datasets/issues/7031/comments | CI quality is broken: https://github.com/huggingface/datasets/actions/runs/9838873879/job/27159697027
```
error: `ruff <path>` has been removed. Use `ruff check <path>` instead.
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/7031/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7031/timeline | closed | false | 7,031 | null | 2024-07-08T11:47:29Z | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | false |
2,393,411,631 | https://api.github.com/repos/huggingface/datasets/issues/7030 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7030/events | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | null | 2024-07-13T14:35:59Z | [] | https://github.com/huggingface/datasets/issues/7030 | NONE | completed | null | null | [
"You can disable progress bars for all of `datasets` with `disable_progress_bars`. [Link](https://huggingface.co/docs/datasets/en/package_reference/utilities#datasets.enable_progress_bars)\r\n\r\nSo you could do something like:\r\n\r\n```python\r\nfrom datasets import load_from_disk, enable_progress_bars, disable_progress_bars\r\n\r\ndisable_progress_bars()\r\n# Your code\r\nload_from_disk(....)\r\n\r\nenable_progress_bars()\r\n```\r\n",
"Thank you! Closing the issue."
] | Add option to disable progress bar when reading a dataset ("Loading dataset from disk") | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7030/reactions"
} | I_kwDODunzps6OqJAv | null | 2024-07-06T05:43:37Z | https://api.github.com/repos/huggingface/datasets/issues/7030/comments | ### Feature request
Add an option to `load_from_disk` that disables the progress bar, even when the number of files is larger than 16.
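As noted in the comments above, a global workaround already exists; a short sketch contrasting it with the per-call option requested here (the path is illustrative):

```python
from datasets import load_from_disk, disable_progress_bars, enable_progress_bars

disable_progress_bars()                  # existing global switch
ds = load_from_disk("/path/to/dataset")  # no "Loading dataset from disk" bar
enable_progress_bars()                   # restore progress bars afterwards
```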
### Motivation
I am reading a lot of datasets, which creates lots of log output.
<img width="1432" alt="image" src="https://github.com/huggingface/datasets/assets/57996478/8d4bbf03-6b89-44b6-937c-932f01b4eb2a">
### Your contribution
Seems like an easy fix to make. I can create a PR if necessary. | {
"avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4",
"events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}",
"followers_url": "https://api.github.com/users/yuvalkirstain/followers",
"following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}",
"gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yuvalkirstain",
"id": 57996478,
"login": "yuvalkirstain",
"node_id": "MDQ6VXNlcjU3OTk2NDc4",
"organizations_url": "https://api.github.com/users/yuvalkirstain/orgs",
"received_events_url": "https://api.github.com/users/yuvalkirstain/received_events",
"repos_url": "https://api.github.com/users/yuvalkirstain/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yuvalkirstain"
} | https://api.github.com/repos/huggingface/datasets/issues/7030/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7030/timeline | closed | false | 7,030 | null | 2024-07-13T14:35:59Z | null | false |
2,391,366,696 | https://api.github.com/repos/huggingface/datasets/issues/7029 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7029/events | [] | null | 2024-07-17T12:44:03Z | [] | https://github.com/huggingface/datasets/issues/7029 | NONE | null | null | null | [
"hi ! can you share the full stack trace ? this should help locate what files is not written in the cache_dir"
] | load_dataset on AWS lambda throws OSError(30, 'Read-only file system') error | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7029/reactions"
} | I_kwDODunzps6OiVwo | null | 2024-07-04T19:15:16Z | https://api.github.com/repos/huggingface/datasets/issues/7029/comments | ### Describe the bug
I'm using AWS lambda to run a python application. I run the `load_dataset` function with cache_dir="/tmp" and is still throws the OSError(30, 'Read-only file system') error. Is even updated all the HF envs to point to /tmp dir but the issue still persists. I can confirm that the I can write to /tmp directory.
### Steps to reproduce the bug
```python
d = load_dataset(
path=hugging_face_link,
split=split,
token=token,
cache_dir="/tmp/hugging_face_cache",
)
```
### Expected behavior
Everything written to the file system as part of the `load_dataset` call should go under the /tmp directory.
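One commonly suggested mitigation (untested here; the env var names come from the Hugging Face docs, and the exact paths are illustrative) is to point every cache at /tmp *before* importing `datasets`, since some cache locations are resolved at import time:

```python
import os

# must be set before importing datasets / huggingface_hub
os.environ["HF_HOME"] = "/tmp/hf_home"
os.environ["HF_DATASETS_CACHE"] = "/tmp/hf_datasets_cache"

from datasets import load_dataset

d = load_dataset(
    path=hugging_face_link,  # placeholders from the report above
    split=split,
    token=token,
    cache_dir="/tmp/hugging_face_cache",
)
```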
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-5.10.216-225.855.amzn2.x86_64-x86_64-with-glibc2.26
- Python version: 3.11.9
- `huggingface_hub` version: 0.19.4
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2023.10.0
"avatar_url": "https://avatars.githubusercontent.com/u/171606538?v=4",
"events_url": "https://api.github.com/users/sugam-nexusflow/events{/privacy}",
"followers_url": "https://api.github.com/users/sugam-nexusflow/followers",
"following_url": "https://api.github.com/users/sugam-nexusflow/following{/other_user}",
"gists_url": "https://api.github.com/users/sugam-nexusflow/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sugam-nexusflow",
"id": 171606538,
"login": "sugam-nexusflow",
"node_id": "U_kgDOCjqCCg",
"organizations_url": "https://api.github.com/users/sugam-nexusflow/orgs",
"received_events_url": "https://api.github.com/users/sugam-nexusflow/received_events",
"repos_url": "https://api.github.com/users/sugam-nexusflow/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sugam-nexusflow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sugam-nexusflow/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sugam-nexusflow"
} | https://api.github.com/repos/huggingface/datasets/issues/7029/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7029/timeline | open | false | 7,029 | null | null | null | false |
2,391,077,531 | https://api.github.com/repos/huggingface/datasets/issues/7028 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7028/events | [] | null | 2024-07-04T15:26:35Z | [] | https://github.com/huggingface/datasets/pull/7028 | MEMBER | null | false | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7028). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005748 / 0.011353 (-0.005605) | 0.004109 / 0.011008 (-0.006899) | 0.067017 / 0.038508 (0.028509) | 0.031950 / 0.023109 (0.008841) | 0.239939 / 0.275898 (-0.035959) | 0.266339 / 0.323480 (-0.057141) | 0.003176 / 0.007986 (-0.004809) | 0.003556 / 0.004328 (-0.000773) | 0.050725 / 0.004250 (0.046475) | 0.047711 / 0.037052 (0.010658) | 0.251048 / 0.258489 (-0.007441) | 0.287049 / 0.293841 (-0.006792) | 0.029919 / 0.128546 (-0.098627) | 0.012562 / 0.075646 (-0.063085) | 0.212903 / 0.419271 (-0.206369) | 0.036570 / 0.043533 (-0.006963) | 0.240975 / 0.255139 (-0.014164) | 0.266473 / 0.283200 (-0.016726) | 0.019959 / 0.141683 (-0.121724) | 1.152224 / 1.452155 (-0.299931) | 1.186046 / 1.492716 (-0.306671) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095836 / 0.018006 (0.077829) | 0.303402 / 0.000490 (0.302913) | 0.000210 / 0.000200 (0.000010) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.020552 / 0.037411 (-0.016859) | 0.063619 / 0.014526 (0.049093) | 0.076969 / 0.176557 (-0.099588) | 0.123368 / 0.737135 (-0.613767) | 0.077005 / 0.296338 (-0.219334) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282005 / 0.215209 (0.066796) | 2.794144 / 2.077655 (0.716489) | 1.463569 / 1.504120 (-0.040551) | 1.334295 / 1.541195 (-0.206899) | 1.387198 / 
1.468490 (-0.081292) | 0.707654 / 4.584777 (-3.877123) | 2.341698 / 3.745712 (-1.404014) | 2.865131 / 5.269862 (-2.404731) | 1.945168 / 4.565676 (-2.620509) | 0.077926 / 0.424275 (-0.346349) | 0.005470 / 0.007607 (-0.002137) | 0.336498 / 0.226044 (0.110454) | 3.330262 / 2.268929 (1.061334) | 1.865574 / 55.444624 (-53.579050) | 1.536932 / 6.876477 (-5.339545) | 1.720960 / 2.142072 (-0.421113) | 0.794753 / 4.805227 (-4.010475) | 0.133491 / 6.500664 (-6.367173) | 0.042437 / 0.075469 (-0.033032) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.976788 / 1.841788 (-0.865000) | 11.895137 / 8.074308 (3.820829) | 9.211969 / 10.191392 (-0.979423) | 0.141798 / 0.680424 (-0.538626) | 0.014354 / 0.534201 (-0.519847) | 0.306044 / 0.579283 (-0.273239) | 0.265016 / 0.434364 (-0.169348) | 0.340877 / 0.540337 (-0.199460) | 0.470449 / 1.386936 (-0.916487) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006134 / 0.011353 (-0.005219) | 0.004023 / 0.011008 (-0.006985) | 0.050419 / 0.038508 (0.011911) | 0.033853 / 0.023109 (0.010744) | 0.266799 / 0.275898 (-0.009099) | 0.291248 / 0.323480 (-0.032232) | 0.004474 / 0.007986 (-0.003511) | 0.002847 / 0.004328 (-0.001481) | 0.049895 / 0.004250 (0.045645) | 0.041160 / 0.037052 (0.004108) | 0.278818 / 0.258489 (0.020329) | 0.314027 / 0.293841 (0.020186) | 0.032303 / 0.128546 (-0.096243) | 0.012367 / 0.075646 (-0.063279) | 0.061495 / 0.419271 (-0.357776) | 0.033512 / 0.043533 (-0.010021) | 0.266168 / 0.255139 (0.011029) | 0.283129 / 0.283200 (-0.000071) | 0.018674 / 0.141683 (-0.123009) | 1.124453 / 1.452155 (-0.327701) | 1.164527 / 1.492716 (-0.328189) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098522 / 0.018006 (0.080516) | 0.315069 / 0.000490 (0.314579) | 0.000202 / 0.000200 (0.000002) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022809 / 0.037411 (-0.014602) | 0.078409 / 0.014526 (0.063883) | 0.088558 / 0.176557 (-0.087998) | 0.130004 / 0.737135 (-0.607131) | 0.090507 / 0.296338 (-0.205832) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291323 / 0.215209 (0.076114) | 2.836363 / 2.077655 (0.758708) | 1.548889 / 1.504120 (0.044769) | 1.423857 / 1.541195 (-0.117337) | 1.461667 / 1.468490 (-0.006823) | 0.714956 / 4.584777 (-3.869821) | 0.948170 / 3.745712 (-2.797542) | 3.036151 / 5.269862 (-2.233711) | 1.923824 / 4.565676 (-2.641853) | 0.078002 / 0.424275 (-0.346273) | 0.005198 / 0.007607 (-0.002409) | 0.337007 / 0.226044 (0.110963) | 3.310255 / 2.268929 (1.041327) | 1.910371 / 55.444624 (-53.534253) | 1.619855 / 6.876477 (-5.256622) | 1.682093 / 2.142072 (-0.459979) | 0.789903 / 4.805227 (-4.015324) | 0.132117 / 6.500664 (-6.368547) | 0.041312 / 0.075469 (-0.034157) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.997658 / 1.841788 (-0.844130) | 12.447878 / 8.074308 (4.373570) | 10.277662 / 10.191392 (0.086270) | 0.143580 / 0.680424 (-0.536844) | 0.016472 / 0.534201 (-0.517729) | 0.307235 / 0.579283 (-0.272048) | 0.125469 / 0.434364 (-0.308895) | 0.339525 / 0.540337 (-0.200813) | 0.427371 / 1.386936 (-0.959566) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#689447f8c86f777829a4db9ccc5d8133c12ec84c \"CML watermark\")\n"
] | Fix ci | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7028/reactions"
} | PR_kwDODunzps50dQ1w | {
"diff_url": "https://github.com/huggingface/datasets/pull/7028.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7028",
"merged_at": "2024-07-04T15:19:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7028.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7028"
} | 2024-07-04T15:11:08Z | https://api.github.com/repos/huggingface/datasets/issues/7028/comments | ...after last pr errors | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | https://api.github.com/repos/huggingface/datasets/issues/7028/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7028/timeline | closed | false | 7,028 | null | 2024-07-04T15:19:16Z | null | true |
2,391,013,330 | https://api.github.com/repos/huggingface/datasets/issues/7027 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7027/events | [] | null | 2024-07-04T14:40:46Z | [] | https://github.com/huggingface/datasets/pull/7027 | MEMBER | null | false | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7027). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005612 / 0.011353 (-0.005741) | 0.004023 / 0.011008 (-0.006985) | 0.065578 / 0.038508 (0.027070) | 0.030476 / 0.023109 (0.007367) | 0.237131 / 0.275898 (-0.038767) | 0.269388 / 0.323480 (-0.054092) | 0.003364 / 0.007986 (-0.004622) | 0.002938 / 0.004328 (-0.001390) | 0.050867 / 0.004250 (0.046617) | 0.049456 / 0.037052 (0.012403) | 0.249587 / 0.258489 (-0.008902) | 0.291132 / 0.293841 (-0.002709) | 0.029373 / 0.128546 (-0.099174) | 0.012266 / 0.075646 (-0.063380) | 0.206239 / 0.419271 (-0.213033) | 0.037192 / 0.043533 (-0.006340) | 0.244902 / 0.255139 (-0.010237) | 0.269779 / 0.283200 (-0.013421) | 0.019870 / 0.141683 (-0.121813) | 1.123697 / 1.452155 (-0.328458) | 1.181256 / 1.492716 (-0.311460) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.108535 / 0.018006 (0.090529) | 0.317838 / 0.000490 (0.317348) | 0.000216 / 0.000200 (0.000016) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019097 / 0.037411 (-0.018315) | 0.063836 / 0.014526 (0.049310) | 0.075446 / 0.176557 (-0.101111) | 0.124503 / 0.737135 (-0.612632) | 0.077730 / 0.296338 (-0.218608) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284688 / 0.215209 (0.069479) | 2.817832 / 2.077655 (0.740178) | 1.487342 / 1.504120 (-0.016778) | 1.354037 / 1.541195 (-0.187158) | 1.426904 / 
1.468490 (-0.041586) | 0.728754 / 4.584777 (-3.856022) | 2.361140 / 3.745712 (-1.384573) | 2.926215 / 5.269862 (-2.343647) | 1.981767 / 4.565676 (-2.583909) | 0.079278 / 0.424275 (-0.344997) | 0.005567 / 0.007607 (-0.002040) | 0.336590 / 0.226044 (0.110546) | 3.371062 / 2.268929 (1.102134) | 1.845343 / 55.444624 (-53.599282) | 1.537699 / 6.876477 (-5.338777) | 1.731407 / 2.142072 (-0.410665) | 0.796148 / 4.805227 (-4.009079) | 0.133830 / 6.500664 (-6.366835) | 0.043117 / 0.075469 (-0.032352) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.980786 / 1.841788 (-0.861001) | 12.653553 / 8.074308 (4.579245) | 9.402636 / 10.191392 (-0.788756) | 0.143756 / 0.680424 (-0.536667) | 0.014896 / 0.534201 (-0.519304) | 0.328796 / 0.579283 (-0.250487) | 0.275108 / 0.434364 (-0.159255) | 0.343397 / 0.540337 (-0.196940) | 0.472301 / 1.386936 (-0.914635) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005882 / 0.011353 (-0.005471) | 0.003982 / 0.011008 (-0.007026) | 0.050484 / 0.038508 (0.011976) | 0.035217 / 0.023109 (0.012108) | 0.271683 / 0.275898 (-0.004215) | 0.291498 / 0.323480 (-0.031982) | 0.004429 / 0.007986 (-0.003557) | 0.002928 / 0.004328 (-0.001401) | 0.049386 / 0.004250 (0.045136) | 0.040868 / 0.037052 (0.003815) | 0.280968 / 0.258489 (0.022479) | 0.314880 / 0.293841 (0.021039) | 0.032590 / 0.128546 (-0.095956) | 0.012319 / 0.075646 (-0.063327) | 0.060354 / 0.419271 (-0.358917) | 0.034138 / 0.043533 (-0.009394) | 0.267491 / 0.255139 (0.012352) | 0.283077 / 0.283200 (-0.000123) | 0.017784 / 0.141683 (-0.123899) | 1.154835 / 1.452155 (-0.297320) | 1.179271 / 1.492716 (-0.313446) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.100519 / 0.018006 (0.082513) | 0.309043 / 0.000490 (0.308553) | 0.000222 / 0.000200 (0.000022) | 0.000055 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024056 / 0.037411 (-0.013356) | 0.077810 / 0.014526 (0.063284) | 0.092682 / 0.176557 (-0.083875) | 0.132101 / 0.737135 (-0.605034) | 0.091986 / 0.296338 (-0.204352) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.298186 / 0.215209 (0.082977) | 2.905134 / 2.077655 (0.827479) | 1.552364 / 1.504120 (0.048245) | 1.424644 / 1.541195 (-0.116551) | 1.457667 / 1.468490 (-0.010823) | 0.717606 / 4.584777 (-3.867171) | 0.944470 / 3.745712 (-2.801242) | 3.056236 / 5.269862 (-2.213626) | 1.946453 / 4.565676 (-2.619223) | 0.080525 / 0.424275 (-0.343750) | 0.005235 / 0.007607 (-0.002372) | 0.348561 / 0.226044 (0.122516) | 3.449350 / 2.268929 (1.180421) | 1.930165 / 55.444624 (-53.514459) | 1.620883 / 6.876477 (-5.255593) | 1.671963 / 2.142072 (-0.470109) | 0.801978 / 4.805227 (-4.003249) | 0.134494 / 6.500664 (-6.366170) | 0.041888 / 0.075469 (-0.033581) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.005961 / 1.841788 (-0.835826) | 12.687638 / 8.074308 (4.613330) | 10.398730 / 10.191392 (0.207338) | 0.134503 / 0.680424 (-0.545920) | 0.015839 / 0.534201 (-0.518362) | 0.307465 / 0.579283 (-0.271819) | 0.130805 / 0.434364 (-0.303559) | 0.349079 / 0.540337 (-0.191259) | 0.437609 / 1.386936 (-0.949327) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#cc6ac9e5f70811a450198203ddc077c0c7bff206 \"CML watermark\")\n"
] | Missing line from previous pr | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7027/reactions"
} | PR_kwDODunzps50dCsE | {
"diff_url": "https://github.com/huggingface/datasets/pull/7027.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7027",
"merged_at": "2024-07-04T14:34:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7027.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7027"
} | 2024-07-04T14:34:29Z | https://api.github.com/repos/huggingface/datasets/issues/7027/comments | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | https://api.github.com/repos/huggingface/datasets/issues/7027/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7027/timeline | closed | false | 7,027 | null | 2024-07-04T14:34:36Z | null | true |
2,390,983,889 | https://api.github.com/repos/huggingface/datasets/issues/7026 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7026/events | [] | null | 2024-07-04T14:28:36Z | [] | https://github.com/huggingface/datasets/pull/7026 | MEMBER | null | false | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7026). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005637 / 0.011353 (-0.005716) | 0.003967 / 0.011008 (-0.007041) | 0.064187 / 0.038508 (0.025679) | 0.031356 / 0.023109 (0.008246) | 0.239203 / 0.275898 (-0.036695) | 0.261033 / 0.323480 (-0.062447) | 0.003256 / 0.007986 (-0.004730) | 0.003416 / 0.004328 (-0.000913) | 0.049673 / 0.004250 (0.045423) | 0.047021 / 0.037052 (0.009969) | 0.252146 / 0.258489 (-0.006343) | 0.283663 / 0.293841 (-0.010178) | 0.030223 / 0.128546 (-0.098324) | 0.012342 / 0.075646 (-0.063304) | 0.213061 / 0.419271 (-0.206211) | 0.036867 / 0.043533 (-0.006665) | 0.242589 / 0.255139 (-0.012550) | 0.265584 / 0.283200 (-0.017616) | 0.019149 / 0.141683 (-0.122533) | 1.108909 / 1.452155 (-0.343246) | 1.148484 / 1.492716 (-0.344232) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096815 / 0.018006 (0.078809) | 0.299633 / 0.000490 (0.299143) | 0.000212 / 0.000200 (0.000013) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018947 / 0.037411 (-0.018464) | 0.061640 / 0.014526 (0.047114) | 0.074621 / 0.176557 (-0.101935) | 0.120830 / 0.737135 (-0.616305) | 0.075472 / 0.296338 (-0.220866) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284626 / 0.215209 (0.069417) | 2.805299 / 2.077655 (0.727644) | 1.469879 / 1.504120 (-0.034241) | 1.355524 / 1.541195 (-0.185671) | 1.388246 / 
1.468490 (-0.080244) | 0.726740 / 4.584777 (-3.858037) | 2.387461 / 3.745712 (-1.358251) | 2.834137 / 5.269862 (-2.435724) | 1.915750 / 4.565676 (-2.649927) | 0.079223 / 0.424275 (-0.345052) | 0.005489 / 0.007607 (-0.002118) | 0.335517 / 0.226044 (0.109473) | 3.299332 / 2.268929 (1.030403) | 1.817726 / 55.444624 (-53.626898) | 1.520834 / 6.876477 (-5.355642) | 1.696285 / 2.142072 (-0.445788) | 0.815147 / 4.805227 (-3.990080) | 0.136566 / 6.500664 (-6.364098) | 0.043482 / 0.075469 (-0.031987) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.981382 / 1.841788 (-0.860406) | 11.472890 / 8.074308 (3.398582) | 9.274181 / 10.191392 (-0.917211) | 0.133051 / 0.680424 (-0.547373) | 0.015417 / 0.534201 (-0.518784) | 0.306098 / 0.579283 (-0.273185) | 0.261424 / 0.434364 (-0.172940) | 0.338946 / 0.540337 (-0.201391) | 0.460776 / 1.386936 (-0.926160) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005806 / 0.011353 (-0.005547) | 0.004274 / 0.011008 (-0.006734) | 0.050831 / 0.038508 (0.012323) | 0.033717 / 0.023109 (0.010607) | 0.280561 / 0.275898 (0.004663) | 0.302437 / 0.323480 (-0.021043) | 0.004543 / 0.007986 (-0.003442) | 0.002905 / 0.004328 (-0.001424) | 0.048897 / 0.004250 (0.044646) | 0.041089 / 0.037052 (0.004037) | 0.291439 / 0.258489 (0.032950) | 0.319762 / 0.293841 (0.025921) | 0.033178 / 0.128546 (-0.095368) | 0.012336 / 0.075646 (-0.063311) | 0.061033 / 0.419271 (-0.358238) | 0.034018 / 0.043533 (-0.009515) | 0.278514 / 0.255139 (0.023375) | 0.295648 / 0.283200 (0.012448) | 0.018621 / 0.141683 (-0.123062) | 1.160250 / 1.452155 (-0.291905) | 1.183867 / 1.492716 (-0.308850) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096354 / 0.018006 (0.078348) | 0.301907 / 0.000490 (0.301417) | 0.000205 / 0.000200 (0.000006) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022357 / 0.037411 (-0.015054) | 0.076218 / 0.014526 (0.061692) | 0.088172 / 0.176557 (-0.088385) | 0.128621 / 0.737135 (-0.608515) | 0.089250 / 0.296338 (-0.207089) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292633 / 0.215209 (0.077424) | 2.862456 / 2.077655 (0.784801) | 1.581967 / 1.504120 (0.077847) | 1.459822 / 1.541195 (-0.081373) | 1.475896 / 1.468490 (0.007406) | 0.728550 / 4.584777 (-3.856226) | 0.958819 / 3.745712 (-2.786893) | 3.011074 / 5.269862 (-2.258788) | 1.934393 / 4.565676 (-2.631283) | 0.079831 / 0.424275 (-0.344444) | 0.005249 / 0.007607 (-0.002358) | 0.346334 / 0.226044 (0.120290) | 3.438979 / 2.268929 (1.170051) | 1.935567 / 55.444624 (-53.509057) | 1.648723 / 6.876477 (-5.227754) | 1.685489 / 2.142072 (-0.456583) | 0.800992 / 4.805227 (-4.004236) | 0.139388 / 6.500664 (-6.361276) | 0.042518 / 0.075469 (-0.032951) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.031715 / 1.841788 (-0.810072) | 12.486711 / 8.074308 (4.412403) | 10.430191 / 10.191392 (0.238799) | 0.146884 / 0.680424 (-0.533540) | 0.015735 / 0.534201 (-0.518466) | 0.303938 / 0.579283 (-0.275346) | 0.140374 / 0.434364 (-0.293989) | 0.338508 / 0.540337 (-0.201830) | 0.429551 / 1.386936 (-0.957385) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e32336195f3ea69988148df5f129f9f59d3ab595 \"CML watermark\")\n"
] | Fix check_library_imports | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7026/reactions"
} | PR_kwDODunzps50c8Mf | {
"diff_url": "https://github.com/huggingface/datasets/pull/7026.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7026",
"merged_at": "2024-07-04T14:20:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7026.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7026"
} | 2024-07-04T14:18:38Z | https://api.github.com/repos/huggingface/datasets/issues/7026/comments | move it to after the `trust_remote_code` check
Note that it only affects local datasets that already exist on disk, not datasets loaded from HF directly | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | https://api.github.com/repos/huggingface/datasets/issues/7026/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7026/timeline | closed | false | 7,026 | null | 2024-07-04T14:20:02Z | null | true |
2,390,488,546 | https://api.github.com/repos/huggingface/datasets/issues/7025 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7025/events | [] | null | 2024-07-31T06:15:50Z | [] | https://github.com/huggingface/datasets/pull/7025 | CONTRIBUTOR | null | false | null | [
"requesting review - @albertvillanova @lhoestq ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7025). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@lhoestq rebased the PR, It would be really helpful to have this feature into datasets, please let me know if there is anything pending on this PR, thanks. ",
"@lhoestq \r\n\r\nHave added the unit test to generate tables for both the arrow formats - file and streaming.\r\n\r\nLet me know if we have any docs changes as well. Thanks\r\n\r\n<img width=\"568\" alt=\"Screenshot 2024-07-25 at 7 04 26 PM\" src=\"https://github.com/user-attachments/assets/69fd0906-bda9-45fa-8f7e-8092e351ac29\">\r\n",
"@lhoestq any update on this thread? Thanks",
"Timely PR!\r\nCan we please look into this?",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005737 / 0.011353 (-0.005615) | 0.003894 / 0.011008 (-0.007114) | 0.067510 / 0.038508 (0.029002) | 0.033431 / 0.023109 (0.010321) | 0.262766 / 0.275898 (-0.013132) | 0.283776 / 0.323480 (-0.039704) | 0.003296 / 0.007986 (-0.004689) | 0.003577 / 0.004328 (-0.000752) | 0.052165 / 0.004250 (0.047915) | 0.047815 / 0.037052 (0.010763) | 0.263528 / 0.258489 (0.005039) | 0.292980 / 0.293841 (-0.000861) | 0.031535 / 0.128546 (-0.097011) | 0.012966 / 0.075646 (-0.062680) | 0.218827 / 0.419271 (-0.200444) | 0.039181 / 0.043533 (-0.004352) | 0.263768 / 0.255139 (0.008629) | 0.288012 / 0.283200 (0.004813) | 0.020562 / 0.141683 (-0.121121) | 1.180547 / 1.452155 (-0.271608) | 1.269283 / 1.492716 (-0.223433) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098951 / 0.018006 (0.080944) | 0.318922 / 0.000490 (0.318433) | 0.000214 / 0.000200 (0.000014) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021315 / 0.037411 (-0.016097) | 0.067728 / 0.014526 (0.053202) | 0.079428 / 0.176557 (-0.097129) | 0.127472 / 0.737135 (-0.609663) | 0.080455 / 0.296338 (-0.215883) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.308725 / 0.215209 (0.093516) | 3.043555 / 2.077655 (0.965900) | 1.587419 / 1.504120 (0.083299) | 1.444421 / 1.541195 (-0.096774) | 1.470703 / 
1.468490 (0.002213) | 0.784005 / 4.584777 (-3.800772) | 2.582064 / 3.745712 (-1.163648) | 3.140269 / 5.269862 (-2.129592) | 2.031099 / 4.565676 (-2.534577) | 0.086999 / 0.424275 (-0.337277) | 0.005923 / 0.007607 (-0.001684) | 0.361333 / 0.226044 (0.135289) | 3.587173 / 2.268929 (1.318244) | 1.961448 / 55.444624 (-53.483177) | 1.649868 / 6.876477 (-5.226609) | 1.698595 / 2.142072 (-0.443478) | 0.858552 / 4.805227 (-3.946676) | 0.146001 / 6.500664 (-6.354663) | 0.046049 / 0.075469 (-0.029421) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.022644 / 1.841788 (-0.819144) | 12.655994 / 8.074308 (4.581686) | 10.205832 / 10.191392 (0.014440) | 0.156073 / 0.680424 (-0.524351) | 0.015550 / 0.534201 (-0.518651) | 0.327762 / 0.579283 (-0.251521) | 0.299212 / 0.434364 (-0.135152) | 0.367549 / 0.540337 (-0.172788) | 0.474499 / 1.386936 (-0.912437) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005904 / 0.011353 (-0.005448) | 0.004245 / 0.011008 (-0.006763) | 0.054309 / 0.038508 (0.015801) | 0.037490 / 0.023109 (0.014381) | 0.293540 / 0.275898 (0.017642) | 0.324068 / 0.323480 (0.000588) | 0.004675 / 0.007986 (-0.003311) | 0.003091 / 0.004328 (-0.001238) | 0.052972 / 0.004250 (0.048721) | 0.045545 / 0.037052 (0.008493) | 0.301465 / 0.258489 (0.042976) | 0.342822 / 0.293841 (0.048981) | 0.033958 / 0.128546 (-0.094588) | 0.013311 / 0.075646 (-0.062336) | 0.064050 / 0.419271 (-0.355222) | 0.038127 / 0.043533 (-0.005406) | 0.297383 / 0.255139 (0.042244) | 0.312244 / 0.283200 (0.029044) | 0.019395 / 0.141683 (-0.122288) | 1.244335 / 1.452155 (-0.207820) | 1.305547 / 1.492716 (-0.187169) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.101847 / 0.018006 (0.083840) | 0.330827 / 0.000490 (0.330337) | 0.000211 / 0.000200 (0.000011) | 0.000047 / 0.000054 (-0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025734 / 0.037411 (-0.011677) | 0.085020 / 0.014526 (0.070494) | 0.096724 / 0.176557 (-0.079833) | 0.141276 / 0.737135 (-0.595859) | 0.099150 / 0.296338 (-0.197189) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.316058 / 0.215209 (0.100849) | 3.059459 / 2.077655 (0.981804) | 1.638394 / 1.504120 (0.134274) | 1.505313 / 1.541195 (-0.035881) | 1.526635 / 1.468490 (0.058145) | 0.777259 / 4.584777 (-3.807518) | 1.059575 / 3.745712 (-2.686137) | 2.952334 / 5.269862 (-2.317528) | 2.003894 / 4.565676 (-2.561782) | 0.084464 / 0.424275 (-0.339811) | 0.007343 / 0.007607 (-0.000265) | 0.366218 / 0.226044 (0.140174) | 3.705588 / 2.268929 (1.436660) | 2.047029 / 55.444624 (-53.397595) | 1.766970 / 6.876477 (-5.109507) | 1.883804 / 2.142072 (-0.258268) | 0.865780 / 4.805227 (-3.939447) | 0.143180 / 6.500664 (-6.357485) | 0.044943 / 0.075469 (-0.030527) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.141391 / 1.841788 (-0.700397) | 13.244917 / 8.074308 (5.170609) | 10.907863 / 10.191392 (0.716471) | 0.156087 / 0.680424 (-0.524337) | 0.016487 / 0.534201 (-0.517714) | 0.331377 / 0.579283 (-0.247906) | 0.148863 / 0.434364 (-0.285501) | 0.370443 / 0.540337 (-0.169895) | 0.499647 / 1.386936 (-0.887289) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ce4a0c573920607bc6c814605734091b06b860e7 \"CML watermark\")\n"
] | feat: support non streamable arrow file binary format | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7025/reactions"
} | PR_kwDODunzps50bSyD | {
"diff_url": "https://github.com/huggingface/datasets/pull/7025.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7025",
"merged_at": "2024-07-31T06:09:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7025.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7025"
} | 2024-07-04T10:11:12Z | https://api.github.com/repos/huggingface/datasets/issues/7025/comments | Support Arrow files (`.arrow`) that are in the non-streamable (random-access file) binary format. | {
"avatar_url": "https://avatars.githubusercontent.com/u/15800200?v=4",
"events_url": "https://api.github.com/users/kmehant/events{/privacy}",
"followers_url": "https://api.github.com/users/kmehant/followers",
"following_url": "https://api.github.com/users/kmehant/following{/other_user}",
"gists_url": "https://api.github.com/users/kmehant/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kmehant",
"id": 15800200,
"login": "kmehant",
"node_id": "MDQ6VXNlcjE1ODAwMjAw",
"organizations_url": "https://api.github.com/users/kmehant/orgs",
"received_events_url": "https://api.github.com/users/kmehant/received_events",
"repos_url": "https://api.github.com/users/kmehant/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kmehant/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kmehant/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kmehant"
} | https://api.github.com/repos/huggingface/datasets/issues/7025/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7025/timeline | closed | false | 7,025 | null | 2024-07-31T06:09:31Z | null | true |
2,390,141,626 | https://api.github.com/repos/huggingface/datasets/issues/7024 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7024/events | [] | null | 2024-07-04T07:21:47Z | [] | https://github.com/huggingface/datasets/issues/7024 | NONE | null | null | null | [] | Streaming dataset not returning data | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7024/reactions"
} | I_kwDODunzps6Odqq6 | null | 2024-07-04T07:21:47Z | https://api.github.com/repos/huggingface/datasets/issues/7024/comments | ### Describe the bug
I've decided to post here because I'm still not sure what the issue is, or whether I'm using IterableDatasets incorrectly.
I'm following the guide at https://huggingface.co/learn/cookbook/en/fine_tuning_code_llm_on_single_gpu pretty much to a tee and have verified that it works when fine-tuning on the provided dataset.
However, I'm doing some data preprocessing (filtering out entries), and when I swap the dataset for mine, it fails to train. I eventually fixed this by simply setting `streaming=False` in `load_dataset`.
Could this be some sort of network/firewall issue on my end?
### Steps to reproduce the bug
I made a post with greater description about how I reproduced this problem before I found my workaround: https://discuss.huggingface.co/t/problem-with-custom-iterator-of-streaming-dataset-not-returning-anything/94551
Here is the problematic dataset snippet, which works when `streaming=False` (and with the `buffer_size` keyword removed from `shuffle`):
```python
from datasets import load_dataset

commitpackft = load_dataset(
"chargoddard/commitpack-ft-instruct", split="train", streaming=True
).filter(lambda example: example["language"] == "Python")
def form_template(example):
"""Forms a template for each example following the alpaca format for CommitPack"""
example["content"] = (
"### Human: " + example["instruction"] + " " + example["input"] + " ### Assistant: " + example["output"]
)
return example
dataset = commitpackft.map(
form_template,
remove_columns=["id", "language", "license", "instruction", "input", "output"],
).shuffle(
seed=42, buffer_size=10000
) # remove everything since its all inside "content" now
validation_data = dataset.take(4000)
train_data = dataset.skip(4000)
```
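For reference, a minimal sketch of the non-streaming workaround mentioned above, with the same dataset name and split sizes; `buffer_size` is dropped because the map-style `Dataset.shuffle` does not take one, and `select` stands in for `take`/`skip` (assumptions about intent, not something I have run against this dataset):

```python
from datasets import load_dataset

commitpackft = load_dataset(
    "chargoddard/commitpack-ft-instruct", split="train", streaming=False
).filter(lambda example: example["language"] == "Python")

dataset = commitpackft.map(
    form_template,  # same template function as defined above
    remove_columns=["id", "language", "license", "instruction", "input", "output"],
).shuffle(seed=42)  # no buffer_size: the whole dataset is on disk

validation_data = dataset.select(range(4000))
train_data = dataset.select(range(4000, len(dataset)))
```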
The annoying part is that it only fails during training, and I don't know exactly when it will fail, except that it always fails during evaluation.
### Expected behavior
The expected behavior is that I should get something back from the iterator when it is called, instead of getting nothing or being stuck in a loop somewhere.
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-5.4.0-121-generic-x86_64-with-glibc2.31
- Python version: 3.11.7
- `huggingface_hub` version: 0.23.4
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0
| {
"avatar_url": "https://avatars.githubusercontent.com/u/91670254?v=4",
"events_url": "https://api.github.com/users/johnwee1/events{/privacy}",
"followers_url": "https://api.github.com/users/johnwee1/followers",
"following_url": "https://api.github.com/users/johnwee1/following{/other_user}",
"gists_url": "https://api.github.com/users/johnwee1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/johnwee1",
"id": 91670254,
"login": "johnwee1",
"node_id": "U_kgDOBXbG7g",
"organizations_url": "https://api.github.com/users/johnwee1/orgs",
"received_events_url": "https://api.github.com/users/johnwee1/received_events",
"repos_url": "https://api.github.com/users/johnwee1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/johnwee1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johnwee1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/johnwee1"
} | https://api.github.com/repos/huggingface/datasets/issues/7024/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7024/timeline | open | false | 7,024 | null | null | null | false |
2,388,090,424 | https://api.github.com/repos/huggingface/datasets/issues/7023 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7023/events | [] | null | 2024-07-03T09:24:46Z | [] | https://github.com/huggingface/datasets/pull/7023 | MEMBER | null | false | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7023). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005669 / 0.011353 (-0.005684) | 0.004233 / 0.011008 (-0.006775) | 0.063550 / 0.038508 (0.025041) | 0.031269 / 0.023109 (0.008160) | 0.234280 / 0.275898 (-0.041618) | 0.264517 / 0.323480 (-0.058963) | 0.003310 / 0.007986 (-0.004676) | 0.003640 / 0.004328 (-0.000688) | 0.050139 / 0.004250 (0.045889) | 0.046909 / 0.037052 (0.009856) | 0.253101 / 0.258489 (-0.005388) | 0.280281 / 0.293841 (-0.013560) | 0.029558 / 0.128546 (-0.098989) | 0.012537 / 0.075646 (-0.063110) | 0.209624 / 0.419271 (-0.209648) | 0.036857 / 0.043533 (-0.006676) | 0.236957 / 0.255139 (-0.018182) | 0.260510 / 0.283200 (-0.022689) | 0.019802 / 0.141683 (-0.121881) | 1.141747 / 1.452155 (-0.310407) | 1.172617 / 1.492716 (-0.320099) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.107381 / 0.018006 (0.089375) | 0.308401 / 0.000490 (0.307911) | 0.000227 / 0.000200 (0.000027) | 0.000056 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019504 / 0.037411 (-0.017907) | 0.063920 / 0.014526 (0.049394) | 0.075375 / 0.176557 (-0.101181) | 0.122707 / 0.737135 (-0.614428) | 0.080015 / 0.296338 (-0.216324) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288716 / 0.215209 (0.073507) | 2.862022 / 2.077655 (0.784368) | 1.472510 / 1.504120 (-0.031610) | 1.332989 / 1.541195 (-0.208206) | 1.395140 / 
1.468490 (-0.073350) | 0.728042 / 4.584777 (-3.856735) | 2.409914 / 3.745712 (-1.335799) | 2.912514 / 5.269862 (-2.357347) | 1.986980 / 4.565676 (-2.578697) | 0.078587 / 0.424275 (-0.345688) | 0.005601 / 0.007607 (-0.002006) | 0.342510 / 0.226044 (0.116466) | 3.354621 / 2.268929 (1.085692) | 1.852472 / 55.444624 (-53.592153) | 1.542567 / 6.876477 (-5.333910) | 1.726756 / 2.142072 (-0.415317) | 0.794567 / 4.805227 (-4.010660) | 0.135279 / 6.500664 (-6.365386) | 0.042591 / 0.075469 (-0.032878) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.968336 / 1.841788 (-0.873452) | 12.334614 / 8.074308 (4.260305) | 9.638775 / 10.191392 (-0.552617) | 0.143625 / 0.680424 (-0.536799) | 0.015475 / 0.534201 (-0.518726) | 0.313357 / 0.579283 (-0.265926) | 0.271257 / 0.434364 (-0.163107) | 0.362074 / 0.540337 (-0.178263) | 0.468595 / 1.386936 (-0.918341) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006243 / 0.011353 (-0.005110) | 0.004496 / 0.011008 (-0.006512) | 0.051271 / 0.038508 (0.012763) | 0.035718 / 0.023109 (0.012609) | 0.272623 / 0.275898 (-0.003275) | 0.297060 / 0.323480 (-0.026420) | 0.004801 / 0.007986 (-0.003185) | 0.003060 / 0.004328 (-0.001269) | 0.049990 / 0.004250 (0.045740) | 0.042413 / 0.037052 (0.005360) | 0.281268 / 0.258489 (0.022779) | 0.327224 / 0.293841 (0.033383) | 0.033745 / 0.128546 (-0.094801) | 0.012777 / 0.075646 (-0.062869) | 0.061808 / 0.419271 (-0.357464) | 0.034428 / 0.043533 (-0.009105) | 0.272211 / 0.255139 (0.017072) | 0.327260 / 0.283200 (0.044061) | 0.019756 / 0.141683 (-0.121927) | 1.137768 / 1.452155 (-0.314387) | 1.220347 / 1.492716 (-0.272369) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.099737 / 0.018006 (0.081731) | 0.304627 / 0.000490 (0.304137) | 0.000210 / 0.000200 (0.000011) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023177 / 0.037411 (-0.014234) | 0.077505 / 0.014526 (0.062979) | 0.088957 / 0.176557 (-0.087599) | 0.129187 / 0.737135 (-0.607948) | 0.090386 / 0.296338 (-0.205953) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291558 / 0.215209 (0.076349) | 2.874297 / 2.077655 (0.796642) | 1.562316 / 1.504120 (0.058196) | 1.439950 / 1.541195 (-0.101244) | 1.492316 / 1.468490 (0.023826) | 0.729885 / 4.584777 (-3.854892) | 0.985075 / 3.745712 (-2.760637) | 3.108313 / 5.269862 (-2.161549) | 1.998072 / 4.565676 (-2.567604) | 0.079367 / 0.424275 (-0.344908) | 0.005210 / 0.007607 (-0.002398) | 0.347335 / 0.226044 (0.121290) | 3.519375 / 2.268929 (1.250446) | 1.949395 / 55.444624 (-53.495229) | 1.650379 / 6.876477 (-5.226097) | 1.691606 / 2.142072 (-0.450466) | 0.816023 / 4.805227 (-3.989204) | 0.135318 / 6.500664 (-6.365346) | 0.041390 / 0.075469 (-0.034079) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.018964 / 1.841788 (-0.822823) | 13.120135 / 8.074308 (5.045827) | 10.618095 / 10.191392 (0.426703) | 0.134507 / 0.680424 (-0.545917) | 0.015895 / 0.534201 (-0.518306) | 0.302864 / 0.579283 (-0.276420) | 0.131117 / 0.434364 (-0.303247) | 0.342374 / 0.540337 (-0.197964) | 0.441640 / 1.386936 (-0.945296) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c5fdb68cd12d069f05a3db8add8e6feab3c06930 \"CML watermark\")\n"
] | Remove dead code for pyarrow < 15.0.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7023/reactions"
} | PR_kwDODunzps50TDot | {
"diff_url": "https://github.com/huggingface/datasets/pull/7023.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7023",
"merged_at": "2024-07-03T09:17:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7023.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7023"
} | 2024-07-03T09:05:03Z | https://api.github.com/repos/huggingface/datasets/issues/7023/comments | Remove dead code for pyarrow < 15.0.0.
Code is dead since the merge of:
- #6892
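For illustration, this is the kind of version guard that becomes dead once pyarrow >= 15.0.0 is the minimum requirement (a hypothetical sketch assuming the `packaging` library is available, not a quote from the codebase):

```python
import pyarrow as pa
from packaging import version

# Hypothetical guard: with pyarrow >= 15.0.0 required, the `else` branch can
# never execute, so both it and the version check itself can be deleted.
if version.parse(pa.__version__) >= version.parse("15.0.0"):
    supports_modern_casts = True
else:
    supports_modern_casts = False  # dead code under the new requirement
```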
Fix #7022. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/7023/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7023/timeline | closed | false | 7,023 | null | 2024-07-03T09:17:35Z | null | true |
2,388,064,650 | https://api.github.com/repos/huggingface/datasets/issues/7022 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7022/events | [
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] | null | 2024-07-03T09:17:36Z | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | https://github.com/huggingface/datasets/issues/7022 | MEMBER | completed | null | null | [] | There is dead code after we require pyarrow >= 15.0.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7022/reactions"
} | I_kwDODunzps6OVvmK | null | 2024-07-03T08:52:57Z | https://api.github.com/repos/huggingface/datasets/issues/7022/comments | There are code lines specific to pyarrow versions < 15.0.0.
However, we require pyarrow >= 15.0.0 since the merge of PR:
- #6892
Those lines are now dead code and should be removed. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/7022/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7022/timeline | closed | false | 7,022 | null | 2024-07-03T09:17:36Z | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | false |
2,387,948,935 | https://api.github.com/repos/huggingface/datasets/issues/7021 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7021/events | [] | null | 2024-07-03T08:47:49Z | [] | https://github.com/huggingface/datasets/pull/7021 | MEMBER | null | false | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7021). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005126 / 0.011353 (-0.006227) | 0.003417 / 0.011008 (-0.007591) | 0.063274 / 0.038508 (0.024766) | 0.030896 / 0.023109 (0.007787) | 0.246661 / 0.275898 (-0.029237) | 0.275037 / 0.323480 (-0.048443) | 0.003243 / 0.007986 (-0.004742) | 0.003460 / 0.004328 (-0.000868) | 0.049665 / 0.004250 (0.045414) | 0.045826 / 0.037052 (0.008773) | 0.254360 / 0.258489 (-0.004129) | 0.294934 / 0.293841 (0.001094) | 0.029115 / 0.128546 (-0.099431) | 0.011908 / 0.075646 (-0.063738) | 0.207429 / 0.419271 (-0.211842) | 0.036371 / 0.043533 (-0.007162) | 0.249127 / 0.255139 (-0.006012) | 0.273982 / 0.283200 (-0.009218) | 0.019318 / 0.141683 (-0.122365) | 1.108985 / 1.452155 (-0.343169) | 1.147234 / 1.492716 (-0.345482) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.104830 / 0.018006 (0.086824) | 0.313453 / 0.000490 (0.312964) | 0.000213 / 0.000200 (0.000013) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019140 / 0.037411 (-0.018271) | 0.062160 / 0.014526 (0.047634) | 0.073537 / 0.176557 (-0.103020) | 0.119605 / 0.737135 (-0.617530) | 0.074707 / 0.296338 (-0.221632) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282600 / 0.215209 (0.067391) | 2.805560 / 2.077655 (0.727906) | 1.471312 / 1.504120 (-0.032808) | 1.360920 / 1.541195 (-0.180275) | 1.361132 / 
1.468490 (-0.107358) | 0.714791 / 4.584777 (-3.869986) | 2.405224 / 3.745712 (-1.340488) | 2.814498 / 5.269862 (-2.455363) | 1.896792 / 4.565676 (-2.668884) | 0.078138 / 0.424275 (-0.346137) | 0.005430 / 0.007607 (-0.002177) | 0.345529 / 0.226044 (0.119485) | 3.366205 / 2.268929 (1.097277) | 1.862820 / 55.444624 (-53.581805) | 1.555970 / 6.876477 (-5.320507) | 1.665102 / 2.142072 (-0.476970) | 0.798679 / 4.805227 (-4.006548) | 0.132601 / 6.500664 (-6.368064) | 0.041819 / 0.075469 (-0.033650) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.972545 / 1.841788 (-0.869242) | 11.250626 / 8.074308 (3.176318) | 9.211127 / 10.191392 (-0.980265) | 0.130818 / 0.680424 (-0.549605) | 0.014123 / 0.534201 (-0.520078) | 0.298384 / 0.579283 (-0.280899) | 0.269736 / 0.434364 (-0.164628) | 0.341322 / 0.540337 (-0.199015) | 0.466915 / 1.386936 (-0.920021) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005884 / 0.011353 (-0.005469) | 0.003983 / 0.011008 (-0.007025) | 0.050295 / 0.038508 (0.011787) | 0.033906 / 0.023109 (0.010797) | 0.271364 / 0.275898 (-0.004534) | 0.290652 / 0.323480 (-0.032828) | 0.004503 / 0.007986 (-0.003483) | 0.002946 / 0.004328 (-0.001382) | 0.049336 / 0.004250 (0.045086) | 0.040987 / 0.037052 (0.003935) | 0.283088 / 0.258489 (0.024599) | 0.313132 / 0.293841 (0.019291) | 0.032545 / 0.128546 (-0.096001) | 0.012622 / 0.075646 (-0.063024) | 0.060574 / 0.419271 (-0.358698) | 0.033625 / 0.043533 (-0.009908) | 0.266765 / 0.255139 (0.011626) | 0.286164 / 0.283200 (0.002964) | 0.018840 / 0.141683 (-0.122843) | 1.167874 / 1.452155 (-0.284281) | 1.170767 / 1.492716 (-0.321950) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.102266 / 0.018006 (0.084260) | 0.309530 / 0.000490 (0.309040) | 0.000210 / 0.000200 (0.000010) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023879 / 0.037411 (-0.013533) | 0.076837 / 0.014526 (0.062311) | 0.088718 / 0.176557 (-0.087839) | 0.129422 / 0.737135 (-0.607714) | 0.090051 / 0.296338 (-0.206287) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287325 / 0.215209 (0.072116) | 2.844051 / 2.077655 (0.766397) | 1.552338 / 1.504120 (0.048218) | 1.422390 / 1.541195 (-0.118804) | 1.458580 / 1.468490 (-0.009910) | 0.712103 / 4.584777 (-3.872674) | 0.935116 / 3.745712 (-2.810596) | 2.891878 / 5.269862 (-2.377984) | 1.884683 / 4.565676 (-2.680994) | 0.077810 / 0.424275 (-0.346465) | 0.005087 / 0.007607 (-0.002520) | 0.337981 / 0.226044 (0.111937) | 3.346176 / 2.268929 (1.077248) | 1.892525 / 55.444624 (-53.552100) | 1.595472 / 6.876477 (-5.281004) | 1.595617 / 2.142072 (-0.546455) | 0.779581 / 4.805227 (-4.025647) | 0.131042 / 6.500664 (-6.369623) | 0.040665 / 0.075469 (-0.034804) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.063560 / 1.841788 (-0.778227) | 12.030321 / 8.074308 (3.956013) | 10.213963 / 10.191392 (0.022571) | 0.142954 / 0.680424 (-0.537470) | 0.015700 / 0.534201 (-0.518501) | 0.311536 / 0.579283 (-0.267747) | 0.127064 / 0.434364 (-0.307300) | 0.351636 / 0.540337 (-0.188702) | 0.442281 / 1.386936 (-0.944655) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9ccc1f3d533712baf15cb7a93182add3e5446165 \"CML watermark\")\n"
] | Fix casting list array to fixed size list | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7021/reactions"
} | PR_kwDODunzps50SlKR | {
"diff_url": "https://github.com/huggingface/datasets/pull/7021.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7021",
"merged_at": "2024-07-03T08:41:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7021.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7021"
} | 2024-07-03T07:58:57Z | https://api.github.com/repos/huggingface/datasets/issues/7021/comments | Fix casting a list array to a fixed-size list.
This bug was introduced in [datasets-2.17.0](https://github.com/huggingface/datasets/releases/tag/2.17.0) by PR: https://github.com/huggingface/datasets/pull/6283/files#diff-1cb2b66aa9311d729cfd83013dad56cf5afcda35b39dfd0bfe9c3813a049eab0R1899
- #6283
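A runnable sketch of the behavior the fix presumably restores, mirroring the code path from the traceback in #7020 but reading `list_size` instead of the nonexistent `length` attribute (an assumption about the fix, based on that traceback):

```python
import pyarrow as pa

pa_type = pa.list_(pa.int64(), 2)  # FixedSizeListType
arr = pa.array([[0, 1], [2, 3]])   # plain ListArray to be cast

# Slice the flat values the way the traceback's code does, using list_size:
values = arr.values[arr.offset * pa_type.list_size : (arr.offset + len(arr)) * pa_type.list_size]
fixed = pa.FixedSizeListArray.from_arrays(values, pa_type.list_size)
print(fixed.type)  # fixed_size_list<item: int64>[2]
```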
Fix #7020. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/7021/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7021/timeline | closed | false | 7,021 | null | 2024-07-03T08:41:55Z | null | true |
2,387,940,990 | https://api.github.com/repos/huggingface/datasets/issues/7020 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7020/events | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | null | 2024-07-03T08:41:56Z | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | https://github.com/huggingface/datasets/issues/7020 | MEMBER | completed | null | null | [] | Casting list array to fixed size list raises error | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7020/reactions"
} | I_kwDODunzps6OVRZ- | null | 2024-07-03T07:54:49Z | https://api.github.com/repos/huggingface/datasets/issues/7020/comments | When trying to cast a list array to a fixed-size list, an AttributeError is raised:
> AttributeError: 'pyarrow.lib.FixedSizeListType' object has no attribute 'length'
Steps to reproduce the bug:
```python
import pyarrow as pa
from datasets.table import array_cast
arr = pa.array([[0, 1]])
array_cast(arr, pa.list_(pa.int64(), 2))
```
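For context, the target type does exist and exposes its size, just under a different attribute name; a minimal probe (runnable on any recent pyarrow) shows what the failing code should be reading, as the stack trace below confirms:

```python
import pyarrow as pa

t = pa.list_(pa.int64(), 2)  # a FixedSizeListType
print(t.list_size)           # 2, the size attribute this type actually exposes
# t.length                   # would raise AttributeError, exactly as in the trace below
```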
Stack trace:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-12-6cb90a1d8216> in <module>
3
4 arr = pa.array([[0, 1]])
----> 5 array_cast(arr, pa.list_(pa.int64(), 2))
~/huggingface/datasets/src/datasets/table.py in wrapper(array, *args, **kwargs)
1802 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
1803 else:
-> 1804 return func(array, *args, **kwargs)
1805
1806 return wrapper
~/huggingface/datasets/src/datasets/table.py in array_cast(array, pa_type, allow_primitive_to_str, allow_decimal_to_str)
1920 else:
1921 array_values = array.values[
-> 1922 array.offset * pa_type.length : (array.offset + len(array)) * pa_type.length
1923 ]
1924 return pa.FixedSizeListArray.from_arrays(_c(array_values, pa_type.value_type), pa_type.list_size)
AttributeError: 'pyarrow.lib.FixedSizeListType' object has no attribute 'length'
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/7020/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7020/timeline | closed | false | 7,020 | null | 2024-07-03T08:41:56Z | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | false |
2,385,793,897 | https://api.github.com/repos/huggingface/datasets/issues/7019 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7019/events | [] | null | 2024-08-12T14:49:45Z | [] | https://github.com/huggingface/datasets/pull/7019 | MEMBER | null | false | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7019). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@albertvillanova really happy to see this fix.\r\n\r\nHave you attempted to save a dataset to disk after this? I attempted to utilize your fix in a build from source, and while I can now successfully get a dataset object from a polars df containing a large list, I am getting the following error when attempting to save the resulting dataset to disk:\r\n```\r\nFile \"/Users/x/VSCodeProjects/HuggingFace/hf.py\", line 9, in <module>\r\n dataset.save_to_disk(\"data/test.hf\")\r\n File \"/Users/x/VSCodeProjects/HuggingFace/datasets/src/datasets/arrow_dataset.py\", line 1591, in save_to_disk\r\n for kwargs in kwargs_per_job:\r\n File \"/Users/x/VSCodeProjects/HuggingFace/datasets/src/datasets/arrow_dataset.py\", line 1568, in <genexpr>\r\n \"shard\": self.shard(num_shards=num_shards, index=shard_idx, contiguous=True),\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/x/VSCodeProjects/HuggingFace/datasets/src/datasets/arrow_dataset.py\", line 4757, in shard\r\n return self.select(\r\n ^^^^^^^^^^^^\r\n File \"/Users/x/VSCodeProjects/HuggingFace/datasets/src/datasets/arrow_dataset.py\", line 567, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/x/VSCodeProjects/HuggingFace/datasets/src/datasets/fingerprint.py\", line 482, in wrapper\r\n out = func(dataset, *args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/x/VSCodeProjects/HuggingFace/datasets/src/datasets/arrow_dataset.py\", line 3892, in select\r\n return self._select_contiguous(start, length, new_fingerprint=new_fingerprint)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/x/VSCodeProjects/HuggingFace/datasets/src/datasets/arrow_dataset.py\", line 567, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/x/VSCodeProjects/HuggingFace/datasets/src/datasets/fingerprint.py\", line 482, in wrapper\r\n out = func(dataset, *args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/x/VSCodeProjects/HuggingFace/datasets/src/datasets/arrow_dataset.py\", line 3955, in _select_contiguous\r\n return Dataset(\r\n ^^^^^^^^\r\n File \"/Users/x/VSCodeProjects/HuggingFace/datasets/src/datasets/arrow_dataset.py\", line 731, in __init__\r\n raise ValueError(\r\nValueError: External features info don't match the dataset:\r\nGot\r\n{'0': Value(dtype='int64', id=None), '1': Value(dtype='int64', id=None), '2': Value(dtype='int64', id=None), '3': Value(dtype='int64', id=None), '4': Value(dtype='int64', id=None), '5': Value(dtype='int64', id=None), '6': Value(dtype='int64', id=None), '7': Value(dtype='int64', id=None), '8': Value(dtype='int64', id=None), '9': Value(dtype='int64', id=None), '10': Value(dtype='int64', id=None), '11': Value(dtype='int64', id=None), '12': Value(dtype='int64', id=None), '13': Value(dtype='int64', id=None), '14': Value(dtype='int64', id=None), '15': Value(dtype='int64', id=None), '16': Value(dtype='int64', id=None), '17': Value(dtype='int64', id=None), '18': Value(dtype='int64', id=None), '19': Value(dtype='int64', id=None), 'A': Sequence(feature=Value(dtype='int64', id=None), length=-1, large=False, id=None), 'B': Sequence(feature=Value(dtype='int64', id=None), length=-1, large=False, id=None), 'C': Sequence(feature=Value(dtype='int64', id=None), length=-1, large=False, id=None), 'D': Sequence(feature=Value(dtype='int64', id=None), length=-1, large=False, 
id=None), '__index_level_0__': Value(dtype='int64', id=None)}\r\nwith type\r\nstruct<0: int64, 1: int64, 2: int64, 3: int64, 4: int64, 5: int64, 6: int64, 7: int64, 8: int64, 9: int64, 10: int64, 11: int64, 12: int64, 13: int64, 14: int64, 15: int64, 16: int64, 17: int64, 18: int64, 19: int64, A: list<item: int64>, B: list<item: int64>, C: list<item: int64>, D: list<item: int64>, __index_level_0__: int64>\r\n\r\nbut expected something like\r\n{'0': Value(dtype='int64', id=None), '1': Value(dtype='int64', id=None), '2': Value(dtype='int64', id=None), '3': Value(dtype='int64', id=None), '4': Value(dtype='int64', id=None), '5': Value(dtype='int64', id=None), '6': Value(dtype='int64', id=None), '7': Value(dtype='int64', id=None), '8': Value(dtype='int64', id=None), '9': Value(dtype='int64', id=None), '10': Value(dtype='int64', id=None), '11': Value(dtype='int64', id=None), '12': Value(dtype='int64', id=None), '13': Value(dtype='int64', id=None), '14': Value(dtype='int64', id=None), '15': Value(dtype='int64', id=None), '16': Value(dtype='int64', id=None), '17': Value(dtype='int64', id=None), '18': Value(dtype='int64', id=None), '19': Value(dtype='int64', id=None), 'A': Sequence(feature=Value(dtype='int64', id=None), length=-1, large=True, id=None), 'B': Sequence(feature=Value(dtype='int64', id=None), length=-1, large=True, id=None), 'C': Sequence(feature=Value(dtype='int64', id=None), length=-1, large=True, id=None), 'D': Sequence(feature=Value(dtype='int64', id=None), length=-1, large=True, id=None), '__index_level_0__': Value(dtype='int64', id=None)}\r\nwith type\r\nstruct<0: int64, 1: int64, 2: int64, 3: int64, 4: int64, 5: int64, 6: int64, 7: int64, 8: int64, 9: int64, 10: int64, 11: int64, 12: int64, 13: int64, 14: int64, 15: int64, 16: int64, 17: int64, 18: int64, 19: int64, A: large_list<item: int64>, B: large_list<item: int64>, C: large_list<item: int64>, D: large_list<item: int64>, __index_level_0__: int64>\r\n```\r\n\r\ncode to reproduce is actually 2 separate scripts below.\r\n\r\ncreating test data:\r\n```\r\nimport pandas as pd\r\nimport numpy as np\r\n\r\ndf = pd.DataFrame(np.random.randint(0, 100000, size=(100000, 20)))\r\nfeatureVector = np.random.randint(0, 100000, size=(100000, 1000)).tolist()\r\n\r\ndf['A'] = featureVector\r\ndf['B'] = featureVector\r\ndf['C'] = featureVector\r\ndf['D'] = featureVector\r\n\r\ndf.to_parquet('data/train_data.parquet', engine='pyarrow')\r\n```\r\n\r\nloading data, converting to HF dataset, attempting to save to disk\r\n```\r\nimport datasets\r\nimport polars as pl\r\n\r\ndf = pl.read_parquet('data/train_data.parquet')\r\n\r\ndataset = datasets.Dataset.from_polars(df)\r\n\r\ndataset.save_to_disk(\"data/test.hf\")\r\n```\r\n\r\nIf this isn't the appropriate place to put this, let me know. Since it isn't merged yet I didn't think raising an issue was appropriate.",
"Thanks for your useful review comments, @dakotamurdock. \r\n\r\nI am investigating that issue to fix it in this PR.",
"Hi @albertvillanova thanks for your work! When is the fix planned to be released?\r\n\r\nI tested your feature branch and managed to load from a polars dataframe with the large_list type, persist to disk, load and convert it again. Also asserted that they are both equal.\r\n\r\n```\r\n> print(df[:3])\r\n\r\nshape: (3, 6)\r\n┌─────────────────┬─────────────────┬──────────┬────────────────┬─────────────────┬────────────────┐\r\n│ plain_text ┆ title ┆ language ┆ language_score ┆ plain_text_hash ┆ response_objec │\r\n│ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ t │\r\n│ str ┆ str ┆ str ┆ f64 ┆ str ┆ --- │\r\n│ ┆ ┆ ┆ ┆ ┆ list[struct[3] │\r\n│ ┆ ┆ ┆ ┆ ┆ ] │\r\n╞═════════════════╪═════════════════╪══════════╪════════════════╪═════════════════╪════════════════╡\r\n│ Royal fans ┆ Prince Louis ┆ en ┆ 0.987661 ┆ 37e438dccb283d1 ┆ [{\"Prince Loui │\r\n│ delighted by ┆ delights crowd ┆ ┆ ┆ f3be2d9d4bb7ed3 ┆ s\",\"waves\",\"cr │\r\n│ Prince… ┆ wi… ┆ ┆ ┆ … ┆ ow… │\r\n│ There have been ┆ Reactions After ┆ en ┆ 0.991371 ┆ 37fafbb69dfcfa5 ┆ [{\"David │\r\n│ diverse reacti… ┆ Davido Alleged… ┆ ┆ ┆ 303d5e3e6917a35 ┆ Adedeji │\r\n│ ┆ ┆ ┆ ┆ … ┆ Adeleke\",\"is … │\r\n│ Betfred will ┆ Betfred to pay ┆ en ┆ 0.980579 ┆ 922e19e6f598e9b ┆ [{\"Betfred\",\"w │\r\n│ pay a £3.25 ┆ £3.25 million ┆ ┆ ┆ 14cdb6772829cbd ┆ ill │\r\n│ milli… ┆ f… ┆ ┆ ┆ … ┆ pay\",\"£3.25 … │\r\n└─────────────────┴─────────────────┴──────────┴────────────────┴─────────────────┴────────────────┘\r\n\r\n> Dataset.from_polars(df).save_to_disk('./test')\r\n\r\nSaving the dataset (1/1 shards): 100%|██████████| 14997/14997 [00:00<00:00, 225472.09 examples/s]\r\n\r\n> another_df = load_from_disk('./test').to_polars()\r\n> print(another_df[:3])\r\n\r\nshape: (3, 6)\r\n┌─────────────────┬─────────────────┬──────────┬────────────────┬─────────────────┬────────────────┐\r\n│ plain_text ┆ title ┆ language ┆ language_score ┆ plain_text_hash ┆ response_objec │\r\n│ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ t │\r\n│ str ┆ str ┆ str ┆ f64 ┆ str ┆ --- │\r\n│ ┆ ┆ ┆ ┆ ┆ list[struct[3] │\r\n│ ┆ ┆ ┆ ┆ ┆ ] │\r\n╞═════════════════╪═════════════════╪══════════╪════════════════╪═════════════════╪════════════════╡\r\n│ Royal fans ┆ Prince Louis ┆ en ┆ 0.987661 ┆ 37e438dccb283d1 ┆ [{\"Prince Loui │\r\n│ delighted by ┆ delights crowd ┆ ┆ ┆ f3be2d9d4bb7ed3 ┆ s\",\"waves\",\"cr │\r\n│ Prince… ┆ wi… ┆ ┆ ┆ … ┆ ow… │\r\n│ There have been ┆ Reactions After ┆ en ┆ 0.991371 ┆ 37fafbb69dfcfa5 ┆ [{\"David │\r\n│ diverse reacti… ┆ Davido Alleged… ┆ ┆ ┆ 303d5e3e6917a35 ┆ Adedeji │\r\n│ ┆ ┆ ┆ ┆ … ┆ Adeleke\",\"is … │\r\n│ Betfred will ┆ Betfred to pay ┆ en ┆ 0.980579 ┆ 922e19e6f598e9b ┆ [{\"Betfred\",\"w │\r\n│ pay a £3.25 ┆ £3.25 million ┆ ┆ ┆ 14cdb6772829cbd ┆ ill │\r\n│ milli… ┆ f… ┆ ┆ ┆ … ┆ pay\",\"£3.25 … │\r\n└─────────────────┴─────────────────┴──────────┴────────────────┴─────────────────┴────────────────┘\r\n\r\n> another_df.equals(df)\r\n\r\nTrue\r\n```\r\n\r\nThis is indeed the error I was getting with datasets==2.19.1\r\n```\r\nDataset.from_polars(df).save_to_disk('./test')\r\nValueError: Arrow type large_list<item: struct<entity1: large_string, relationship: large_string, entity2: large_string>> does not have a datasets dtype equivalent.\r\n```",
"@EdoardoLuciani thanks for your feedback!\r\nI think we should make a new release soon: last one was on June 13.\r\nWhat do you think, @huggingface/datasets?\r\nThe only potential problem I see are the breaking changes once we remove all deprecated code...",
"Your issue was fixed, @dakotamurdock.",
"I am working in a big refactoring of the approach to support large_list: implement a new `LargeList` type instead of using `Sequence.large` attribute.",
"There are many feature-functions and most of them are not properly covered by tests.\r\n\r\nI am adding tests and fixing these feature-functions.",
"I think this PR is ready for review, @huggingface/datasets.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005640 / 0.011353 (-0.005713) | 0.003926 / 0.011008 (-0.007083) | 0.063103 / 0.038508 (0.024595) | 0.032088 / 0.023109 (0.008979) | 0.238615 / 0.275898 (-0.037283) | 0.268379 / 0.323480 (-0.055101) | 0.003146 / 0.007986 (-0.004840) | 0.002813 / 0.004328 (-0.001516) | 0.049681 / 0.004250 (0.045431) | 0.044577 / 0.037052 (0.007525) | 0.249782 / 0.258489 (-0.008708) | 0.282548 / 0.293841 (-0.011293) | 0.029986 / 0.128546 (-0.098560) | 0.012474 / 0.075646 (-0.063172) | 0.203347 / 0.419271 (-0.215925) | 0.035950 / 0.043533 (-0.007583) | 0.243410 / 0.255139 (-0.011729) | 0.267056 / 0.283200 (-0.016143) | 0.022086 / 0.141683 (-0.119597) | 1.145513 / 1.452155 (-0.306641) | 1.207583 / 1.492716 (-0.285133) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095584 / 0.018006 (0.077578) | 0.304264 / 0.000490 (0.303774) | 0.000215 / 0.000200 (0.000015) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019460 / 0.037411 (-0.017952) | 0.062268 / 0.014526 (0.047742) | 0.074943 / 0.176557 (-0.101613) | 0.121657 / 0.737135 (-0.615478) | 0.075930 / 0.296338 (-0.220408) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288975 / 0.215209 (0.073766) | 2.869610 / 2.077655 (0.791955) | 1.491057 / 1.504120 (-0.013063) | 1.384160 / 1.541195 (-0.157035) | 1.380977 / 
1.468490 (-0.087513) | 0.723181 / 4.584777 (-3.861596) | 2.397960 / 3.745712 (-1.347752) | 2.899919 / 5.269862 (-2.369942) | 1.878714 / 4.565676 (-2.686962) | 0.078162 / 0.424275 (-0.346113) | 0.005115 / 0.007607 (-0.002493) | 0.337599 / 0.226044 (0.111555) | 3.367450 / 2.268929 (1.098522) | 1.823745 / 55.444624 (-53.620880) | 1.540528 / 6.876477 (-5.335949) | 1.546146 / 2.142072 (-0.595927) | 0.796927 / 4.805227 (-4.008300) | 0.134389 / 6.500664 (-6.366275) | 0.042298 / 0.075469 (-0.033172) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.959687 / 1.841788 (-0.882101) | 11.505269 / 8.074308 (3.430961) | 9.631551 / 10.191392 (-0.559841) | 0.142301 / 0.680424 (-0.538123) | 0.013912 / 0.534201 (-0.520289) | 0.314940 / 0.579283 (-0.264343) | 0.263134 / 0.434364 (-0.171229) | 0.352966 / 0.540337 (-0.187372) | 0.440421 / 1.386936 (-0.946515) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005878 / 0.011353 (-0.005475) | 0.003866 / 0.011008 (-0.007142) | 0.051347 / 0.038508 (0.012839) | 0.032662 / 0.023109 (0.009553) | 0.270701 / 0.275898 (-0.005197) | 0.345277 / 0.323480 (0.021797) | 0.004485 / 0.007986 (-0.003501) | 0.002782 / 0.004328 (-0.001546) | 0.048302 / 0.004250 (0.044051) | 0.040355 / 0.037052 (0.003303) | 0.285196 / 0.258489 (0.026707) | 0.320339 / 0.293841 (0.026499) | 0.032937 / 0.128546 (-0.095610) | 0.012298 / 0.075646 (-0.063348) | 0.061579 / 0.419271 (-0.357692) | 0.034129 / 0.043533 (-0.009403) | 0.265985 / 0.255139 (0.010846) | 0.302066 / 0.283200 (0.018867) | 0.018812 / 0.141683 (-0.122871) | 1.175705 / 1.452155 (-0.276450) | 1.197207 / 1.492716 (-0.295510) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096076 / 0.018006 (0.078070) | 0.312793 / 0.000490 (0.312303) | 0.000228 / 0.000200 (0.000028) | 0.000053 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022858 / 0.037411 (-0.014553) | 0.077160 / 0.014526 (0.062634) | 0.089742 / 0.176557 (-0.086815) | 0.130929 / 0.737135 (-0.606207) | 0.093431 / 0.296338 (-0.202907) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.298884 / 0.215209 (0.083675) | 2.961050 / 2.077655 (0.883395) | 1.620694 / 1.504120 (0.116574) | 1.499331 / 1.541195 (-0.041863) | 1.513118 / 1.468490 (0.044628) | 0.734738 / 4.584777 (-3.850039) | 0.972978 / 3.745712 (-2.772734) | 2.928172 / 5.269862 (-2.341690) | 1.903667 / 4.565676 (-2.662010) | 0.079207 / 0.424275 (-0.345068) | 0.005803 / 0.007607 (-0.001804) | 0.350144 / 0.226044 (0.124099) | 3.519456 / 2.268929 (1.250528) | 1.983809 / 55.444624 (-53.460815) | 1.690527 / 6.876477 (-5.185950) | 1.739301 / 2.142072 (-0.402772) | 0.802045 / 4.805227 (-4.003182) | 0.133041 / 6.500664 (-6.367623) | 0.042112 / 0.075469 (-0.033357) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.030056 / 1.841788 (-0.811731) | 12.077692 / 8.074308 (4.003384) | 9.988253 / 10.191392 (-0.203139) | 0.142745 / 0.680424 (-0.537679) | 0.015842 / 0.534201 (-0.518359) | 0.299055 / 0.579283 (-0.280228) | 0.123788 / 0.434364 (-0.310576) | 0.352782 / 0.540337 (-0.187555) | 0.451140 / 1.386936 (-0.935796) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0cf0be8906063d09456285be9c9f7ce5789726ae \"CML watermark\")\n"
] | Support pyarrow large_list | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7019/reactions"
} | PR_kwDODunzps50LMjW | {
"diff_url": "https://github.com/huggingface/datasets/pull/7019.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7019",
"merged_at": "2024-08-12T14:43:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7019.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7019"
} | 2024-07-02T09:52:52Z | https://api.github.com/repos/huggingface/datasets/issues/7019/comments | Allow Polars round trip by supporting pyarrow large list.
Fix #6834, fix #6984.
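A minimal sketch of the round trip this unblocks (illustrative values; assumes the `from_polars`/`to_polars` helpers and a Polars version with `DataFrame.equals`):
```python
import polars as pl
from datasets import Dataset

# Polars list columns are backed by Arrow large_list, which previously raised
# "Arrow type large_list<...> does not have a datasets dtype equivalent".
df = pl.DataFrame({"tokens": [[1, 2, 3], [4, 5]]})
ds = Dataset.from_polars(df)
assert ds.to_polars().equals(df)  # lossless round trip back to Polars
```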
Supersede and close #4800, close #6835, close #6986. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/7019/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7019/timeline | closed | false | 7,019 | null | 2024-08-12T14:43:45Z | null | true |
2,383,700,286 | https://api.github.com/repos/huggingface/datasets/issues/7018 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7018/events | [] | null | 2024-08-05T09:21:55Z | [] | https://github.com/huggingface/datasets/issues/7018 | NONE | null | null | null | [
"In my case the error was:\r\n```\r\nValueError: You are trying to load a dataset that was saved using `save_to_disk`. Please use `load_from_disk` instead.\r\n```\r\nDid you try `load_from_disk`?",
"More generally, any reason there is no API consistency between save_to_disk and push_to_hub ? \r\n\r\nWould be nice to be able to save_to_disk and then upload manually to the hub and load_dataset (which works in some situations but not all)..."
] | `load_dataset` fails to load dataset saved by `save_to_disk` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7018/reactions"
} | I_kwDODunzps6OFGE- | null | 2024-07-01T12:19:19Z | https://api.github.com/repos/huggingface/datasets/issues/7018/comments | ### Describe the bug
This code fails to load the dataset it just saved:
```python
from datasets import load_dataset
from transformers import AutoTokenizer
MODEL = "google-bert/bert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
dataset = load_dataset("yelp_review_full")
def tokenize_function(examples):
return tokenizer(examples["text"], padding="max_length", truncation=True)
tokenized_datasets = dataset.map(tokenize_function, batched=True)
tokenized_datasets.save_to_disk("dataset")
tokenized_datasets = load_dataset("dataset/") # raises
```
It raises `ValueError: Couldn't infer the same data file format for all splits. Got {NamedSplit('train'): ('arrow', {}), NamedSplit('test'): ('json', {})}`.
I believe this bug is caused by the [logic that tries to infer dataset format](https://github.com/huggingface/datasets/blob/9af8dd3de7626183a9a9ec8973cebc672d690400/src/datasets/load.py#L556). It counts the most common file extension. However, a small dataset can fit in a single `.arrow` file and have two JSON metadata files, causing the format to be inferred as JSON:
```shell
$ ls -l dataset/test
-rw-r--r-- 1 sliedes sliedes 191498784 Jul 1 13:55 data-00000-of-00001.arrow
-rw-r--r-- 1 sliedes sliedes 1730 Jul 1 13:55 dataset_info.json
-rw-r--r-- 1 sliedes sliedes 249 Jul 1 13:55 state.json
```
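A minimal illustration of that heuristic (a simplification of the linked inference logic, not the exact implementation):
```python
from collections import Counter

# The three files saved above for the "test" split:
files = ["data-00000-of-00001.arrow", "dataset_info.json", "state.json"]
counts = Counter(f.rsplit(".", 1)[-1] for f in files)
print(counts.most_common(1))  # [('json', 2)] -> the split is misdetected as JSON
```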
### Steps to reproduce the bug
Execute the code above.
### Expected behavior
The dataset is loaded successfully.
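For completeness, the workaround suggested in the comments is `load_from_disk`, which matches the `save_to_disk` on-disk layout:
```python
from datasets import load_from_disk

tokenized_datasets = load_from_disk("dataset")  # loads the DatasetDict saved above
```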
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-6.9.3-arch1-1-x86_64-with-glibc2.39
- Python version: 3.12.4
- `huggingface_hub` version: 0.23.4
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0
| {
"avatar_url": "https://avatars.githubusercontent.com/u/2307997?v=4",
"events_url": "https://api.github.com/users/sliedes/events{/privacy}",
"followers_url": "https://api.github.com/users/sliedes/followers",
"following_url": "https://api.github.com/users/sliedes/following{/other_user}",
"gists_url": "https://api.github.com/users/sliedes/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sliedes",
"id": 2307997,
"login": "sliedes",
"node_id": "MDQ6VXNlcjIzMDc5OTc=",
"organizations_url": "https://api.github.com/users/sliedes/orgs",
"received_events_url": "https://api.github.com/users/sliedes/received_events",
"repos_url": "https://api.github.com/users/sliedes/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sliedes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sliedes/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sliedes"
} | https://api.github.com/repos/huggingface/datasets/issues/7018/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7018/timeline | open | false | 7,018 | null | null | null | false |
2,383,647,419 | https://api.github.com/repos/huggingface/datasets/issues/7017 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7017/events | [] | null | 2024-07-01T12:12:32Z | [] | https://github.com/huggingface/datasets/pull/7017 | MEMBER | null | false | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7017). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005520 / 0.011353 (-0.005832) | 0.004216 / 0.011008 (-0.006792) | 0.063465 / 0.038508 (0.024957) | 0.032116 / 0.023109 (0.009007) | 0.242486 / 0.275898 (-0.033412) | 0.262554 / 0.323480 (-0.060925) | 0.004218 / 0.007986 (-0.003768) | 0.003264 / 0.004328 (-0.001064) | 0.050306 / 0.004250 (0.046056) | 0.044995 / 0.037052 (0.007942) | 0.257797 / 0.258489 (-0.000693) | 0.284595 / 0.293841 (-0.009246) | 0.030623 / 0.128546 (-0.097924) | 0.012245 / 0.075646 (-0.063401) | 0.205496 / 0.419271 (-0.213775) | 0.039327 / 0.043533 (-0.004206) | 0.246834 / 0.255139 (-0.008305) | 0.269296 / 0.283200 (-0.013903) | 0.017714 / 0.141683 (-0.123969) | 1.127246 / 1.452155 (-0.324909) | 1.172147 / 1.492716 (-0.320569) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.137621 / 0.018006 (0.119615) | 0.299843 / 0.000490 (0.299353) | 0.000248 / 0.000200 (0.000048) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018968 / 0.037411 (-0.018443) | 0.062636 / 0.014526 (0.048111) | 0.074098 / 0.176557 (-0.102459) | 0.121139 / 0.737135 (-0.615996) | 0.075121 / 0.296338 (-0.221217) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289907 / 0.215209 (0.074698) | 2.872250 / 2.077655 (0.794595) | 1.508635 / 1.504120 (0.004515) | 1.345356 / 1.541195 (-0.195839) | 1.361858 / 
1.468490 (-0.106632) | 0.738961 / 4.584777 (-3.845816) | 2.414616 / 3.745712 (-1.331097) | 2.843464 / 5.269862 (-2.426398) | 1.953716 / 4.565676 (-2.611961) | 0.079063 / 0.424275 (-0.345212) | 0.005498 / 0.007607 (-0.002109) | 0.346211 / 0.226044 (0.120166) | 3.446294 / 2.268929 (1.177366) | 1.857191 / 55.444624 (-53.587433) | 1.536924 / 6.876477 (-5.339553) | 1.655782 / 2.142072 (-0.486290) | 0.800508 / 4.805227 (-4.004719) | 0.136116 / 6.500664 (-6.364548) | 0.042648 / 0.075469 (-0.032821) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.964286 / 1.841788 (-0.877501) | 11.574645 / 8.074308 (3.500336) | 9.351631 / 10.191392 (-0.839761) | 0.139693 / 0.680424 (-0.540731) | 0.014368 / 0.534201 (-0.519833) | 0.303953 / 0.579283 (-0.275330) | 0.263302 / 0.434364 (-0.171062) | 0.342436 / 0.540337 (-0.197901) | 0.457195 / 1.386936 (-0.929741) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005526 / 0.011353 (-0.005827) | 0.003959 / 0.011008 (-0.007050) | 0.049979 / 0.038508 (0.011471) | 0.032695 / 0.023109 (0.009586) | 0.269461 / 0.275898 (-0.006437) | 0.296622 / 0.323480 (-0.026858) | 0.004410 / 0.007986 (-0.003576) | 0.002708 / 0.004328 (-0.001621) | 0.048413 / 0.004250 (0.044163) | 0.040567 / 0.037052 (0.003515) | 0.278854 / 0.258489 (0.020364) | 0.318839 / 0.293841 (0.024998) | 0.031228 / 0.128546 (-0.097318) | 0.012411 / 0.075646 (-0.063236) | 0.060077 / 0.419271 (-0.359194) | 0.033072 / 0.043533 (-0.010461) | 0.275281 / 0.255139 (0.020142) | 0.292588 / 0.283200 (0.009388) | 0.018218 / 0.141683 (-0.123465) | 1.124877 / 1.452155 (-0.327278) | 1.164880 / 1.492716 (-0.327836) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095098 / 0.018006 (0.077092) | 0.298341 / 0.000490 (0.297851) | 0.000225 / 0.000200 (0.000025) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022502 / 0.037411 (-0.014909) | 0.076650 / 0.014526 (0.062124) | 0.088851 / 0.176557 (-0.087705) | 0.128261 / 0.737135 (-0.608875) | 0.089305 / 0.296338 (-0.207033) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.298704 / 0.215209 (0.083495) | 2.917605 / 2.077655 (0.839951) | 1.568964 / 1.504120 (0.064844) | 1.437668 / 1.541195 (-0.103527) | 1.458787 / 1.468490 (-0.009704) | 0.732347 / 4.584777 (-3.852430) | 0.960834 / 3.745712 (-2.784878) | 2.947899 / 5.269862 (-2.321963) | 1.885576 / 4.565676 (-2.680100) | 0.079093 / 0.424275 (-0.345182) | 0.005199 / 0.007607 (-0.002408) | 0.353754 / 0.226044 (0.127710) | 3.495197 / 2.268929 (1.226268) | 1.936840 / 55.444624 (-53.507785) | 1.622797 / 6.876477 (-5.253680) | 1.627132 / 2.142072 (-0.514940) | 0.804007 / 4.805227 (-4.001221) | 0.135990 / 6.500664 (-6.364674) | 0.041606 / 0.075469 (-0.033863) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.004860 / 1.841788 (-0.836928) | 12.027573 / 8.074308 (3.953265) | 10.478055 / 10.191392 (0.286663) | 0.143946 / 0.680424 (-0.536477) | 0.015538 / 0.534201 (-0.518663) | 0.302592 / 0.579283 (-0.276691) | 0.123177 / 0.434364 (-0.311187) | 0.340752 / 0.540337 (-0.199585) | 0.436536 / 1.386936 (-0.950400) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#100361d7ccae451a34c6bd9e48dee55d6a3c6006 \"CML watermark\")\n"
] | Support fsspec 2024.6.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7017/reactions"
} | PR_kwDODunzps50D3gi | {
"diff_url": "https://github.com/huggingface/datasets/pull/7017.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7017",
"merged_at": "2024-07-01T12:06:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7017.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7017"
} | 2024-07-01T11:57:15Z | https://api.github.com/repos/huggingface/datasets/issues/7017/comments | Support fsspec 2024.6.1. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/7017/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7017/timeline | closed | false | 7,017 | null | 2024-07-01T12:06:24Z | null | true |
2,383,262,608 | https://api.github.com/repos/huggingface/datasets/issues/7016 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7016/events | [
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
},
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | null | 2024-07-20T06:51:58Z | [] | https://github.com/huggingface/datasets/issues/7016 | NONE | null | null | null | [
"There is an open issue #2514 about this which also proposes solutions."
] | `drop_duplicates` method | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7016/reactions"
} | I_kwDODunzps6ODbOQ | null | 2024-07-01T09:01:06Z | https://api.github.com/repos/huggingface/datasets/issues/7016/comments | ### Feature request
A `drop_duplicates` method for Hugging Face datasets (similar in simplicity to the `pandas` one)
### Motivation
Ease of use
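For context, the current workaround is a round trip through pandas, which a built-in method would replace (a sketch; assumes the dataset fits in memory):
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "a"]})
deduped = Dataset.from_pandas(ds.to_pandas().drop_duplicates(), preserve_index=False)
print(deduped["text"])  # ['a', 'b']
```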
### Your contribution
I don't think I am good enough to help. | {
"avatar_url": "https://avatars.githubusercontent.com/u/26205298?v=4",
"events_url": "https://api.github.com/users/MohamedAliRashad/events{/privacy}",
"followers_url": "https://api.github.com/users/MohamedAliRashad/followers",
"following_url": "https://api.github.com/users/MohamedAliRashad/following{/other_user}",
"gists_url": "https://api.github.com/users/MohamedAliRashad/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/MohamedAliRashad",
"id": 26205298,
"login": "MohamedAliRashad",
"node_id": "MDQ6VXNlcjI2MjA1Mjk4",
"organizations_url": "https://api.github.com/users/MohamedAliRashad/orgs",
"received_events_url": "https://api.github.com/users/MohamedAliRashad/received_events",
"repos_url": "https://api.github.com/users/MohamedAliRashad/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/MohamedAliRashad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MohamedAliRashad/subscriptions",
"type": "User",
"url": "https://api.github.com/users/MohamedAliRashad"
} | https://api.github.com/repos/huggingface/datasets/issues/7016/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7016/timeline | open | false | 7,016 | null | null | null | false |
2,383,151,220 | https://api.github.com/repos/huggingface/datasets/issues/7015 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7015/events | [] | null | 2024-07-26T09:37:51Z | [] | https://github.com/huggingface/datasets/pull/7015 | CONTRIBUTOR | null | false | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7015). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@albertvillanova thanks for the review, please take a look",
"@albertvillanova please take a look",
"Thank you again! Your PR is merged.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005267 / 0.011353 (-0.006086) | 0.003711 / 0.011008 (-0.007297) | 0.062288 / 0.038508 (0.023780) | 0.031357 / 0.023109 (0.008248) | 0.233592 / 0.275898 (-0.042306) | 0.257722 / 0.323480 (-0.065758) | 0.003124 / 0.007986 (-0.004861) | 0.003335 / 0.004328 (-0.000994) | 0.048594 / 0.004250 (0.044344) | 0.043853 / 0.037052 (0.006801) | 0.248589 / 0.258489 (-0.009900) | 0.278474 / 0.293841 (-0.015367) | 0.029573 / 0.128546 (-0.098973) | 0.011779 / 0.075646 (-0.063868) | 0.204989 / 0.419271 (-0.214282) | 0.035734 / 0.043533 (-0.007799) | 0.240064 / 0.255139 (-0.015075) | 0.263105 / 0.283200 (-0.020094) | 0.018764 / 0.141683 (-0.122919) | 1.115705 / 1.452155 (-0.336449) | 1.175457 / 1.492716 (-0.317260) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092664 / 0.018006 (0.074657) | 0.297893 / 0.000490 (0.297403) | 0.000217 / 0.000200 (0.000017) | 0.000047 / 0.000054 (-0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019056 / 0.037411 (-0.018355) | 0.062472 / 0.014526 (0.047946) | 0.073462 / 0.176557 (-0.103094) | 0.119723 / 0.737135 (-0.617412) | 0.074420 / 0.296338 (-0.221919) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283131 / 0.215209 (0.067922) | 2.776694 / 2.077655 (0.699039) | 1.455586 / 1.504120 (-0.048534) | 1.323902 / 1.541195 (-0.217293) | 1.333169 / 
1.468490 (-0.135321) | 0.723921 / 4.584777 (-3.860856) | 2.385842 / 3.745712 (-1.359870) | 2.926843 / 5.269862 (-2.343018) | 1.896773 / 4.565676 (-2.668903) | 0.079754 / 0.424275 (-0.344521) | 0.005188 / 0.007607 (-0.002419) | 0.342466 / 0.226044 (0.116421) | 3.404204 / 2.268929 (1.135275) | 1.856575 / 55.444624 (-53.588049) | 1.554507 / 6.876477 (-5.321970) | 1.564065 / 2.142072 (-0.578007) | 0.810363 / 4.805227 (-3.994864) | 0.135537 / 6.500664 (-6.365127) | 0.041987 / 0.075469 (-0.033482) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.962288 / 1.841788 (-0.879500) | 11.310837 / 8.074308 (3.236529) | 9.630034 / 10.191392 (-0.561358) | 0.131108 / 0.680424 (-0.549316) | 0.015225 / 0.534201 (-0.518976) | 0.304211 / 0.579283 (-0.275072) | 0.272707 / 0.434364 (-0.161657) | 0.341550 / 0.540337 (-0.198787) | 0.444528 / 1.386936 (-0.942408) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005665 / 0.011353 (-0.005688) | 0.003916 / 0.011008 (-0.007092) | 0.049946 / 0.038508 (0.011438) | 0.031760 / 0.023109 (0.008651) | 0.273826 / 0.275898 (-0.002072) | 0.300193 / 0.323480 (-0.023287) | 0.004350 / 0.007986 (-0.003635) | 0.002749 / 0.004328 (-0.001579) | 0.048451 / 0.004250 (0.044201) | 0.039798 / 0.037052 (0.002746) | 0.284570 / 0.258489 (0.026081) | 0.318855 / 0.293841 (0.025014) | 0.032724 / 0.128546 (-0.095822) | 0.012103 / 0.075646 (-0.063543) | 0.059857 / 0.419271 (-0.359414) | 0.034185 / 0.043533 (-0.009348) | 0.276079 / 0.255139 (0.020940) | 0.294070 / 0.283200 (0.010871) | 0.018168 / 0.141683 (-0.123515) | 1.149681 / 1.452155 (-0.302473) | 1.191349 / 1.492716 (-0.301367) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092676 / 0.018006 (0.074669) | 0.304971 / 0.000490 (0.304481) | 0.000203 / 0.000200 (0.000003) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023110 / 0.037411 (-0.014301) | 0.079117 / 0.014526 (0.064591) | 0.087457 / 0.176557 (-0.089099) | 0.128295 / 0.737135 (-0.608840) | 0.089747 / 0.296338 (-0.206592) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.305158 / 0.215209 (0.089949) | 2.992277 / 2.077655 (0.914623) | 1.595369 / 1.504120 (0.091249) | 1.462955 / 1.541195 (-0.078240) | 1.476269 / 1.468490 (0.007779) | 0.731652 / 4.584777 (-3.853124) | 0.961053 / 3.745712 (-2.784659) | 2.800259 / 5.269862 (-2.469602) | 1.881249 / 4.565676 (-2.684428) | 0.079503 / 0.424275 (-0.344772) | 0.005252 / 0.007607 (-0.002355) | 0.354921 / 0.226044 (0.128877) | 3.495272 / 2.268929 (1.226343) | 1.956419 / 55.444624 (-53.488205) | 1.654941 / 6.876477 (-5.221536) | 1.782506 / 2.142072 (-0.359567) | 0.816487 / 4.805227 (-3.988741) | 0.135870 / 6.500664 (-6.364794) | 0.041114 / 0.075469 (-0.034355) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.050346 / 1.841788 (-0.791442) | 12.510129 / 8.074308 (4.435821) | 10.524835 / 10.191392 (0.333443) | 0.152388 / 0.680424 (-0.528036) | 0.016073 / 0.534201 (-0.518128) | 0.301956 / 0.579283 (-0.277327) | 0.126871 / 0.434364 (-0.307493) | 0.339554 / 0.540337 (-0.200783) | 0.435873 / 1.386936 (-0.951064) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ead089d949febdce415d79ef3802e188316c0b26 \"CML watermark\")\n"
] | add split argument to Generator | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7015/reactions"
} | PR_kwDODunzps50CJuE | {
"diff_url": "https://github.com/huggingface/datasets/pull/7015.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7015",
"merged_at": "2024-07-26T09:31:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7015.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7015"
} | 2024-07-01T08:09:25Z | https://api.github.com/repos/huggingface/datasets/issues/7015/comments | ## Actual
When creating a multi-split dataset using generators like
```python
datasets.DatasetDict({
"val": datasets.Dataset.from_generator(
generator=generator_val,
features=features
),
"test": datasets.Dataset.from_generator(
generator=generator_test,
features=features,
)
})
```
It displays (for both test and val)
```
Generating train split
```
## Expected
I would like to improve this behavior by doing:
```python
datasets.DatasetDict({
"val": datasets.Dataset.from_generator(
generator=generator_val,
features=features,
split="val"
),
"test": datasets.Dataset.from_generator(
generator=generator_test,
features=features,
split="test"
)
})
```
It would display
```
Generating val split
```
and
```
Generating test split
```
## Proposal
This PR adds an explicit `split` argument and replaces the implicit "train" split in the following classes/functions (see the sketch after this list):
* Generator
* from_generator
* AbstractDatasetInputStream
* GeneratorDatasetInputStream
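A sketch of the resulting call, echoing the "Expected" snippet above (hedged: exact split handling may differ in the merged version):
```python
import datasets

ds_val = datasets.Dataset.from_generator(
    generator=lambda: iter([{"x": 0}, {"x": 1}]),
    split=datasets.NamedSplit("val"),  # the new argument
)
print(ds_val.split)  # val
```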
Please share your feedback. | {
"avatar_url": "https://avatars.githubusercontent.com/u/156736?v=4",
"events_url": "https://api.github.com/users/piercus/events{/privacy}",
"followers_url": "https://api.github.com/users/piercus/followers",
"following_url": "https://api.github.com/users/piercus/following{/other_user}",
"gists_url": "https://api.github.com/users/piercus/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/piercus",
"id": 156736,
"login": "piercus",
"node_id": "MDQ6VXNlcjE1NjczNg==",
"organizations_url": "https://api.github.com/users/piercus/orgs",
"received_events_url": "https://api.github.com/users/piercus/received_events",
"repos_url": "https://api.github.com/users/piercus/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/piercus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/piercus/subscriptions",
"type": "User",
"url": "https://api.github.com/users/piercus"
} | https://api.github.com/repos/huggingface/datasets/issues/7015/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7015/timeline | closed | false | 7,015 | null | 2024-07-26T09:31:56Z | null | true |
2,382,985,847 | https://api.github.com/repos/huggingface/datasets/issues/7014 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7014/events | [] | null | 2024-07-01T07:16:36Z | [] | https://github.com/huggingface/datasets/pull/7014 | MEMBER | null | false | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7014). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"The failing CI tests are unrelated to this PR.\r\n\r\nWe can see that now the integration tests on Windows finish in a reasonable amount of time, e.g. 8m 10s.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005219 / 0.011353 (-0.006134) | 0.003825 / 0.011008 (-0.007183) | 0.063082 / 0.038508 (0.024574) | 0.031258 / 0.023109 (0.008149) | 0.232288 / 0.275898 (-0.043610) | 0.261140 / 0.323480 (-0.062340) | 0.003185 / 0.007986 (-0.004801) | 0.002807 / 0.004328 (-0.001522) | 0.049438 / 0.004250 (0.045188) | 0.045112 / 0.037052 (0.008059) | 0.245327 / 0.258489 (-0.013162) | 0.277941 / 0.293841 (-0.015900) | 0.029190 / 0.128546 (-0.099357) | 0.012071 / 0.075646 (-0.063575) | 0.204351 / 0.419271 (-0.214921) | 0.036546 / 0.043533 (-0.006987) | 0.235999 / 0.255139 (-0.019140) | 0.269069 / 0.283200 (-0.014131) | 0.019047 / 0.141683 (-0.122636) | 1.117213 / 1.452155 (-0.334941) | 1.202807 / 1.492716 (-0.289909) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096680 / 0.018006 (0.078674) | 0.304513 / 0.000490 (0.304023) | 0.000211 / 0.000200 (0.000011) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019526 / 0.037411 (-0.017885) | 0.062239 / 0.014526 (0.047713) | 0.073988 / 0.176557 (-0.102569) | 0.122156 / 0.737135 (-0.614980) | 0.075727 / 0.296338 (-0.220611) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284125 / 0.215209 (0.068916) | 2.804235 / 2.077655 (0.726581) | 1.463729 / 1.504120 (-0.040390) | 1.337854 / 1.541195 (-0.203341) | 1.340435 / 
1.468490 (-0.128055) | 0.711647 / 4.584777 (-3.873130) | 2.365194 / 3.745712 (-1.380518) | 2.839193 / 5.269862 (-2.430669) | 1.909730 / 4.565676 (-2.655947) | 0.077399 / 0.424275 (-0.346876) | 0.005432 / 0.007607 (-0.002175) | 0.332281 / 0.226044 (0.106236) | 3.301854 / 2.268929 (1.032925) | 1.836672 / 55.444624 (-53.607952) | 1.511144 / 6.876477 (-5.365333) | 1.624167 / 2.142072 (-0.517905) | 0.803453 / 4.805227 (-4.001775) | 0.132760 / 6.500664 (-6.367904) | 0.042323 / 0.075469 (-0.033146) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.951576 / 1.841788 (-0.890212) | 11.476809 / 8.074308 (3.402501) | 9.208285 / 10.191392 (-0.983107) | 0.131797 / 0.680424 (-0.548626) | 0.014362 / 0.534201 (-0.519839) | 0.316051 / 0.579283 (-0.263232) | 0.269250 / 0.434364 (-0.165114) | 0.366949 / 0.540337 (-0.173388) | 0.471047 / 1.386936 (-0.915889) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005905 / 0.011353 (-0.005448) | 0.003892 / 0.011008 (-0.007116) | 0.050513 / 0.038508 (0.012005) | 0.030903 / 0.023109 (0.007794) | 0.268835 / 0.275898 (-0.007063) | 0.288825 / 0.323480 (-0.034655) | 0.004372 / 0.007986 (-0.003614) | 0.002805 / 0.004328 (-0.001523) | 0.048497 / 0.004250 (0.044246) | 0.040665 / 0.037052 (0.003613) | 0.279842 / 0.258489 (0.021352) | 0.310715 / 0.293841 (0.016874) | 0.032133 / 0.128546 (-0.096413) | 0.012288 / 0.075646 (-0.063358) | 0.059719 / 0.419271 (-0.359552) | 0.033825 / 0.043533 (-0.009708) | 0.264670 / 0.255139 (0.009531) | 0.283799 / 0.283200 (0.000599) | 0.017968 / 0.141683 (-0.123715) | 1.160578 / 1.452155 (-0.291577) | 1.198602 / 1.492716 (-0.294115) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094388 / 0.018006 (0.076382) | 0.301861 / 0.000490 (0.301371) | 0.000212 / 0.000200 (0.000012) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022901 / 0.037411 (-0.014510) | 0.076816 / 0.014526 (0.062290) | 0.089203 / 0.176557 (-0.087354) | 0.129040 / 0.737135 (-0.608096) | 0.090758 / 0.296338 (-0.205580) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.301191 / 0.215209 (0.085982) | 2.962887 / 2.077655 (0.885232) | 1.607134 / 1.504120 (0.103014) | 1.477817 / 1.541195 (-0.063377) | 1.485984 / 1.468490 (0.017494) | 0.717358 / 4.584777 (-3.867419) | 0.976018 / 3.745712 (-2.769694) | 2.951509 / 5.269862 (-2.318352) | 1.910619 / 4.565676 (-2.655057) | 0.078579 / 0.424275 (-0.345697) | 0.005209 / 0.007607 (-0.002398) | 0.345287 / 0.226044 (0.119243) | 3.487012 / 2.268929 (1.218084) | 1.938104 / 55.444624 (-53.506521) | 1.639341 / 6.876477 (-5.237136) | 1.617874 / 2.142072 (-0.524198) | 0.793721 / 4.805227 (-4.011506) | 0.136834 / 6.500664 (-6.363830) | 0.041211 / 0.075469 (-0.034258) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.988106 / 1.841788 (-0.853682) | 12.035176 / 8.074308 (3.960868) | 10.594559 / 10.191392 (0.403167) | 0.149917 / 0.680424 (-0.530507) | 0.015913 / 0.534201 (-0.518288) | 0.307658 / 0.579283 (-0.271625) | 0.130645 / 0.434364 (-0.303719) | 0.348450 / 0.540337 (-0.191887) | 0.443559 / 1.386936 (-0.943377) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9af8dd3de7626183a9a9ec8973cebc672d690400 \"CML watermark\")\n"
] | Skip faiss tests on Windows to avoid running CI for 360 minutes | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7014/reactions"
} | PR_kwDODunzps50BlwV | {
"diff_url": "https://github.com/huggingface/datasets/pull/7014.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7014",
"merged_at": "2024-07-01T07:10:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7014.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7014"
} | 2024-07-01T06:45:35Z | https://api.github.com/repos/huggingface/datasets/issues/7014/comments | Skip faiss tests on Windows to avoid running CI for 360 minutes.
Fix #7013.
Revert once the underlying issue is fixed. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/7014/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7014/timeline | closed | false | 7,014 | null | 2024-07-01T07:10:27Z | null | true |
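PR #7014 above sidesteps the hang by not collecting the faiss tests on Windows at all. A minimal sketch of that kind of platform-conditional skip in pytest (illustrative only; the merged patch may mark the tests differently, e.g. with a class-level decorator):

```python
# Sketch (assumed, not the merged patch): skip faiss tests on Windows,
# where they crash the pytest-xdist workers and hang CI until the
# 360-minute job timeout.
import platform

import pytest

# A module-level marker skips every test in the module when the condition
# holds; pytest evaluates it once at collection time.
pytestmark = pytest.mark.skipif(
    platform.system() == "Windows",
    reason="faiss tests crash xdist workers on Windows (see issue #7013)",
)


def test_add_faiss_index():
    # Placeholder standing in for the real faiss index test.
    assert True
```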
2,382,976,738 | https://api.github.com/repos/huggingface/datasets/issues/7013 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7013/events | [
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] | null | 2024-07-01T07:10:28Z | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | https://github.com/huggingface/datasets/issues/7013 | MEMBER | completed | null | null | [] | CI is broken for faiss tests on Windows: node down: Not properly terminated | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7013/reactions"
} | I_kwDODunzps6OCVbi | null | 2024-07-01T06:40:03Z | https://api.github.com/repos/huggingface/datasets/issues/7013/comments | Faiss tests on Windows make the CI run indefinitely until the maximum execution time (360 minutes) is reached.
See: https://github.com/huggingface/datasets/actions/runs/9712659783
```
test (integration, windows-latest, deps-minimum)
The job running on runner GitHub Actions 60 has exceeded the maximum execution time of 360 minutes.
test (integration, windows-latest, deps-latest)
The job running on runner GitHub Actions 238 has exceeded the maximum execution time of 360 minutes.
```
```
____________________________ tests/test_search.py _____________________________
[gw1] win32 -- Python 3.8.10 C:\hostedtoolcache\windows\Python\3.8.10\x64\python.exe
worker 'gw1' crashed while running 'tests/test_search.py::IndexableDatasetTest::test_add_faiss_index'
____________________________ tests/test_search.py _____________________________
[gw2] win32 -- Python 3.8.10 C:\hostedtoolcache\windows\Python\3.8.10\x64\python.exe
worker 'gw2' crashed while running 'tests/test_search.py::IndexableDatasetTest::test_add_faiss_index'
```
```
tests/test_search.py::IndexableDatasetTest::test_add_faiss_index
[gw0] node down: Not properly terminated
[gw0] FAILED tests/test_search.py::IndexableDatasetTest::test_add_faiss_index
replacing crashed worker gw0
tests/test_search.py::IndexableDatasetTest::test_add_faiss_index
[gw1] node down: Not properly terminated
[gw1] FAILED tests/test_search.py::IndexableDatasetTest::test_add_faiss_index
replacing crashed worker gw1
tests/test_search.py::IndexableDatasetTest::test_add_faiss_index
[gw2] node down: Not properly terminated
[gw2] FAILED tests/test_search.py::IndexableDatasetTest::test_add_faiss_index
replacing crashed worker gw2
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/7013/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7013/timeline | closed | false | 7,013 | null | 2024-07-01T07:10:28Z | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | false |
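The `node down: Not properly terminated` lines in issue #7013 are pytest-xdist's crash protocol: when a worker process dies mid-test (here, a native crash inside faiss), the controller fails the test that worker was running and spawns a replacement. A hypothetical test that should reproduce the same reporting shape locally under `pytest -n 2` by killing its own worker:

```python
# Hypothetical crash repro (assumes pytest-xdist is installed).
# Run with `pytest -n 2`: the worker exits without the normal shutdown
# handshake, so the controller reports the worker as crashed and
# replaces it, much like the faiss CI logs above.
import os


def test_hard_crash():
    os._exit(1)  # terminate the worker abruptly, like a native-code crash
```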
2,380,934,047 | https://api.github.com/repos/huggingface/datasets/issues/7012 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7012/events | [] | null | 2024-07-11T02:06:16Z | [] | https://github.com/huggingface/datasets/pull/7012 | NONE | null | false | null | [] | Raise an error when a nested object is expected to be a mapping that displays the object | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7012/reactions"
} | PR_kwDODunzps5z61A3 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7012.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7012",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7012.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7012"
} | 2024-06-28T18:10:59Z | https://api.github.com/repos/huggingface/datasets/issues/7012/comments | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/22511797?v=4",
"events_url": "https://api.github.com/users/sebbyjp/events{/privacy}",
"followers_url": "https://api.github.com/users/sebbyjp/followers",
"following_url": "https://api.github.com/users/sebbyjp/following{/other_user}",
"gists_url": "https://api.github.com/users/sebbyjp/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sebbyjp",
"id": 22511797,
"login": "sebbyjp",
"node_id": "MDQ6VXNlcjIyNTExNzk3",
"organizations_url": "https://api.github.com/users/sebbyjp/orgs",
"received_events_url": "https://api.github.com/users/sebbyjp/received_events",
"repos_url": "https://api.github.com/users/sebbyjp/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sebbyjp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sebbyjp/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sebbyjp"
} | https://api.github.com/repos/huggingface/datasets/issues/7012/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7012/timeline | closed | false | 7,012 | null | 2024-07-11T02:06:16Z | null | true |
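PR #7012's title is the whole proposal: when a nested value must be a mapping, raise an error that displays the offending object instead of a bare type complaint. A sketch of the pattern with a hypothetical helper name (not the library's internal encoding API):

```python
# Hypothetical helper: surface the offending value in the TypeError so the
# user can see exactly what was passed where a mapping was expected.
from collections.abc import Mapping
from typing import Any


def expect_mapping(obj: Any, field: str) -> Mapping:
    if not isinstance(obj, Mapping):
        raise TypeError(
            f"Expected a mapping for nested field {field!r}, "
            f"got {type(obj).__name__} instead: {obj!r}"
        )
    return obj


expect_mapping({"answer_start": [1]}, "answers")      # passes through
# expect_mapping([1, 2, 3], "answers")  # -> TypeError showing [1, 2, 3]
```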
2,379,785,262 | https://api.github.com/repos/huggingface/datasets/issues/7011 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7011/events | [] | null | 2024-06-28T12:25:25Z | [] | https://github.com/huggingface/datasets/pull/7011 | MEMBER | null | false | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7011). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005589 / 0.011353 (-0.005764) | 0.003855 / 0.011008 (-0.007153) | 0.063445 / 0.038508 (0.024937) | 0.030815 / 0.023109 (0.007706) | 0.244052 / 0.275898 (-0.031846) | 0.269916 / 0.323480 (-0.053563) | 0.003130 / 0.007986 (-0.004856) | 0.003349 / 0.004328 (-0.000980) | 0.049338 / 0.004250 (0.045088) | 0.045314 / 0.037052 (0.008261) | 0.250646 / 0.258489 (-0.007844) | 0.295828 / 0.293841 (0.001987) | 0.029808 / 0.128546 (-0.098738) | 0.012299 / 0.075646 (-0.063347) | 0.204946 / 0.419271 (-0.214325) | 0.036387 / 0.043533 (-0.007146) | 0.244316 / 0.255139 (-0.010823) | 0.269308 / 0.283200 (-0.013892) | 0.019226 / 0.141683 (-0.122457) | 1.138739 / 1.452155 (-0.313416) | 1.155265 / 1.492716 (-0.337451) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094085 / 0.018006 (0.076078) | 0.299764 / 0.000490 (0.299275) | 0.000205 / 0.000200 (0.000005) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018361 / 0.037411 (-0.019050) | 0.062665 / 0.014526 (0.048139) | 0.075888 / 0.176557 (-0.100668) | 0.120915 / 0.737135 (-0.616221) | 0.075465 / 0.296338 (-0.220873) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279698 / 0.215209 (0.064489) | 2.784544 / 2.077655 (0.706889) | 1.498441 / 1.504120 (-0.005679) | 1.379789 / 1.541195 (-0.161406) | 1.388480 / 
1.468490 (-0.080011) | 0.724249 / 4.584777 (-3.860528) | 2.343139 / 3.745712 (-1.402573) | 2.816179 / 5.269862 (-2.453683) | 1.908737 / 4.565676 (-2.656940) | 0.077686 / 0.424275 (-0.346589) | 0.005444 / 0.007607 (-0.002163) | 0.344084 / 0.226044 (0.118039) | 3.367548 / 2.268929 (1.098619) | 1.849200 / 55.444624 (-53.595424) | 1.556390 / 6.876477 (-5.320087) | 1.672902 / 2.142072 (-0.469170) | 0.795457 / 4.805227 (-4.009770) | 0.133521 / 6.500664 (-6.367143) | 0.042883 / 0.075469 (-0.032586) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.959094 / 1.841788 (-0.882694) | 11.399783 / 8.074308 (3.325475) | 9.075784 / 10.191392 (-1.115608) | 0.142897 / 0.680424 (-0.537527) | 0.014765 / 0.534201 (-0.519436) | 0.302259 / 0.579283 (-0.277024) | 0.261148 / 0.434364 (-0.173216) | 0.340302 / 0.540337 (-0.200035) | 0.459203 / 1.386936 (-0.927733) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005821 / 0.011353 (-0.005532) | 0.003964 / 0.011008 (-0.007044) | 0.049904 / 0.038508 (0.011396) | 0.031061 / 0.023109 (0.007952) | 0.270002 / 0.275898 (-0.005896) | 0.289489 / 0.323480 (-0.033991) | 0.004477 / 0.007986 (-0.003509) | 0.002800 / 0.004328 (-0.001528) | 0.048029 / 0.004250 (0.043779) | 0.040486 / 0.037052 (0.003434) | 0.278442 / 0.258489 (0.019953) | 0.312606 / 0.293841 (0.018765) | 0.032920 / 0.128546 (-0.095626) | 0.012572 / 0.075646 (-0.063075) | 0.060589 / 0.419271 (-0.358682) | 0.034147 / 0.043533 (-0.009386) | 0.275282 / 0.255139 (0.020143) | 0.314073 / 0.283200 (0.030873) | 0.017555 / 0.141683 (-0.124128) | 1.149974 / 1.452155 (-0.302181) | 1.183715 / 1.492716 (-0.309002) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095616 / 0.018006 (0.077610) | 0.302101 / 0.000490 (0.301611) | 0.000201 / 0.000200 (0.000001) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022245 / 0.037411 (-0.015166) | 0.076890 / 0.014526 (0.062364) | 0.088471 / 0.176557 (-0.088085) | 0.128364 / 0.737135 (-0.608771) | 0.089907 / 0.296338 (-0.206431) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.302662 / 0.215209 (0.087453) | 2.979054 / 2.077655 (0.901399) | 1.576534 / 1.504120 (0.072414) | 1.443784 / 1.541195 (-0.097410) | 1.476000 / 1.468490 (0.007510) | 0.740580 / 4.584777 (-3.844197) | 0.953349 / 3.745712 (-2.792363) | 2.925619 / 5.269862 (-2.344243) | 1.904701 / 4.565676 (-2.660975) | 0.078404 / 0.424275 (-0.345872) | 0.005179 / 0.007607 (-0.002429) | 0.357217 / 0.226044 (0.131173) | 3.494812 / 2.268929 (1.225884) | 1.927345 / 55.444624 (-53.517280) | 1.627162 / 6.876477 (-5.249315) | 1.676748 / 2.142072 (-0.465324) | 0.798826 / 4.805227 (-4.006401) | 0.133617 / 6.500664 (-6.367047) | 0.041229 / 0.075469 (-0.034240) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.017046 / 1.841788 (-0.824742) | 12.045942 / 8.074308 (3.971634) | 10.430383 / 10.191392 (0.238991) | 0.144497 / 0.680424 (-0.535926) | 0.015809 / 0.534201 (-0.518392) | 0.304701 / 0.579283 (-0.274582) | 0.126496 / 0.434364 (-0.307868) | 0.340308 / 0.540337 (-0.200030) | 0.434917 / 1.386936 (-0.952019) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#054e57a8468af9fff5b75c08d2d6adf3e05fa763 \"CML watermark\")\n"
] | Re-enable raising error from huggingface-hub FutureWarning in CI | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7011/reactions"
} | PR_kwDODunzps5z27Fs | {
"diff_url": "https://github.com/huggingface/datasets/pull/7011.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7011",
"merged_at": "2024-06-28T12:19:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7011.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7011"
} | 2024-06-28T07:28:32Z | https://api.github.com/repos/huggingface/datasets/issues/7011/comments | Re-enable raising errors from huggingface-hub FutureWarning in tests, now that the fix in transformers
- https://github.com/huggingface/transformers/pull/31007
was just released yesterday in transformers-4.42.0: https://github.com/huggingface/transformers/releases/tag/v4.42.0
Fix #7010. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/7011/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7011/timeline | closed | false | 7,011 | null | 2024-06-28T12:19:28Z | null | true |
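The switch that PRs #7011 and #6876 toggle is, at bottom, a warnings filter: CI escalates huggingface-hub's FutureWarning to an error so deprecated call patterns fail tests instead of scrolling by. A self-contained sketch of the mechanism (the repository's real filter lives in its pytest configuration and may be scoped to huggingface_hub only):

```python
# Escalate FutureWarning to a hard error, then trigger one to show the effect.
import warnings

warnings.filterwarnings("error", category=FutureWarning)

try:
    warnings.warn("`use_auth_token` is deprecated, use `token`", FutureWarning)
except FutureWarning as err:
    # Under the CI filter, this exception fails the test run.
    print(f"would fail CI: {err}")
```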
2,379,777,480 | https://api.github.com/repos/huggingface/datasets/issues/7010 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7010/events | [
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] | null | 2024-06-28T12:19:30Z | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | https://github.com/huggingface/datasets/issues/7010 | MEMBER | completed | null | null | [] | Re-enable raising error from huggingface-hub FutureWarning in CI | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7010/reactions"
} | I_kwDODunzps6N2IXI | null | 2024-06-28T07:23:40Z | https://api.github.com/repos/huggingface/datasets/issues/7010/comments | Re-enable raising errors from huggingface-hub FutureWarning in CI, which was disabled by PR:
- #6876
Note that this can only be done once transformers releases the fix:
- https://github.com/huggingface/transformers/pull/31007 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/7010/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7010/timeline | closed | false | 7,010 | null | 2024-06-28T12:19:29Z | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | false |
2,379,619,132 | https://api.github.com/repos/huggingface/datasets/issues/7009 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7009/events | [] | null | 2024-06-28T07:17:26Z | [] | https://github.com/huggingface/datasets/pull/7009 | MEMBER | null | false | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7009). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005481 / 0.011353 (-0.005872) | 0.003580 / 0.011008 (-0.007428) | 0.062682 / 0.038508 (0.024174) | 0.031125 / 0.023109 (0.008015) | 0.239443 / 0.275898 (-0.036455) | 0.262950 / 0.323480 (-0.060529) | 0.003129 / 0.007986 (-0.004857) | 0.003393 / 0.004328 (-0.000935) | 0.048765 / 0.004250 (0.044514) | 0.044363 / 0.037052 (0.007311) | 0.248632 / 0.258489 (-0.009857) | 0.285056 / 0.293841 (-0.008785) | 0.029674 / 0.128546 (-0.098872) | 0.011963 / 0.075646 (-0.063684) | 0.204122 / 0.419271 (-0.215150) | 0.035867 / 0.043533 (-0.007665) | 0.245422 / 0.255139 (-0.009717) | 0.267165 / 0.283200 (-0.016035) | 0.018556 / 0.141683 (-0.123127) | 1.132112 / 1.452155 (-0.320043) | 1.173512 / 1.492716 (-0.319204) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092749 / 0.018006 (0.074743) | 0.298946 / 0.000490 (0.298457) | 0.000211 / 0.000200 (0.000011) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019496 / 0.037411 (-0.017915) | 0.062209 / 0.014526 (0.047683) | 0.074656 / 0.176557 (-0.101901) | 0.121238 / 0.737135 (-0.615897) | 0.075810 / 0.296338 (-0.220528) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.278089 / 0.215209 (0.062880) | 2.725602 / 2.077655 (0.647948) | 1.413346 / 1.504120 (-0.090774) | 1.290352 / 1.541195 (-0.250843) | 1.306732 / 
1.468490 (-0.161758) | 0.713945 / 4.584777 (-3.870832) | 2.380131 / 3.745712 (-1.365581) | 2.804548 / 5.269862 (-2.465314) | 1.896506 / 4.565676 (-2.669170) | 0.078303 / 0.424275 (-0.345972) | 0.005475 / 0.007607 (-0.002132) | 0.340162 / 0.226044 (0.114117) | 3.355732 / 2.268929 (1.086803) | 1.776012 / 55.444624 (-53.668613) | 1.507006 / 6.876477 (-5.369471) | 1.607234 / 2.142072 (-0.534838) | 0.796458 / 4.805227 (-4.008769) | 0.135888 / 6.500664 (-6.364776) | 0.042352 / 0.075469 (-0.033118) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.988337 / 1.841788 (-0.853450) | 11.299311 / 8.074308 (3.225003) | 9.166845 / 10.191392 (-1.024547) | 0.140351 / 0.680424 (-0.540073) | 0.013932 / 0.534201 (-0.520269) | 0.302157 / 0.579283 (-0.277126) | 0.259355 / 0.434364 (-0.175009) | 0.339850 / 0.540337 (-0.200488) | 0.465345 / 1.386936 (-0.921591) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005707 / 0.011353 (-0.005646) | 0.003846 / 0.011008 (-0.007162) | 0.050100 / 0.038508 (0.011591) | 0.031810 / 0.023109 (0.008701) | 0.265120 / 0.275898 (-0.010778) | 0.286635 / 0.323480 (-0.036845) | 0.004329 / 0.007986 (-0.003657) | 0.002757 / 0.004328 (-0.001571) | 0.050864 / 0.004250 (0.046614) | 0.039872 / 0.037052 (0.002820) | 0.277675 / 0.258489 (0.019186) | 0.310251 / 0.293841 (0.016410) | 0.032458 / 0.128546 (-0.096088) | 0.012072 / 0.075646 (-0.063574) | 0.060539 / 0.419271 (-0.358733) | 0.033772 / 0.043533 (-0.009761) | 0.265992 / 0.255139 (0.010853) | 0.286152 / 0.283200 (0.002953) | 0.018210 / 0.141683 (-0.123473) | 1.151461 / 1.452155 (-0.300694) | 1.199998 / 1.492716 (-0.292718) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094109 / 0.018006 (0.076103) | 0.298190 / 0.000490 (0.297701) | 0.000199 / 0.000200 (-0.000001) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022431 / 0.037411 (-0.014980) | 0.076319 / 0.014526 (0.061794) | 0.090023 / 0.176557 (-0.086533) | 0.128189 / 0.737135 (-0.608946) | 0.089564 / 0.296338 (-0.206774) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.298887 / 0.215209 (0.083678) | 2.928580 / 2.077655 (0.850926) | 1.565379 / 1.504120 (0.061259) | 1.424704 / 1.541195 (-0.116490) | 1.446336 / 1.468490 (-0.022154) | 0.716348 / 4.584777 (-3.868429) | 0.967465 / 3.745712 (-2.778247) | 2.967318 / 5.269862 (-2.302544) | 1.918878 / 4.565676 (-2.646798) | 0.077167 / 0.424275 (-0.347108) | 0.005271 / 0.007607 (-0.002336) | 0.342376 / 0.226044 (0.116332) | 3.386044 / 2.268929 (1.117115) | 1.915308 / 55.444624 (-53.529316) | 1.612729 / 6.876477 (-5.263748) | 1.621278 / 2.142072 (-0.520794) | 0.804639 / 4.805227 (-4.000589) | 0.132596 / 6.500664 (-6.368069) | 0.041075 / 0.075469 (-0.034394) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.996521 / 1.841788 (-0.845267) | 12.328856 / 8.074308 (4.254548) | 10.585154 / 10.191392 (0.393762) | 0.131720 / 0.680424 (-0.548704) | 0.016777 / 0.534201 (-0.517424) | 0.300424 / 0.579283 (-0.278860) | 0.128526 / 0.434364 (-0.305838) | 0.339961 / 0.540337 (-0.200377) | 0.441661 / 1.386936 (-0.945275) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a16477ddf8f96e590e9597225a5d180cce343f26 \"CML watermark\")\n"
] | Support ruff 0.5.0 in CI | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7009/reactions"
} | PR_kwDODunzps5z2Xe6 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7009.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7009",
"merged_at": "2024-06-28T07:11:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7009.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7009"
} | 2024-06-28T05:37:36Z | https://api.github.com/repos/huggingface/datasets/issues/7009/comments | Support ruff 0.5.0 in CI and revert:
- #7007
Fix #7008. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/7009/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7009/timeline | closed | false | 7,009 | null | 2024-06-28T07:11:17Z | null | true |
2,379,591,141 | https://api.github.com/repos/huggingface/datasets/issues/7008 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7008/events | [
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] | null | 2024-06-28T07:11:18Z | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | https://github.com/huggingface/datasets/issues/7008 | MEMBER | completed | null | null | [] | Support ruff 0.5.0 in CI | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7008/reactions"
} | I_kwDODunzps6N1a3l | null | 2024-06-28T05:11:26Z | https://api.github.com/repos/huggingface/datasets/issues/7008/comments | Support ruff 0.5.0 in CI.
Also revert:
- #7007 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | https://api.github.com/repos/huggingface/datasets/issues/7008/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7008/timeline | closed | false | 7,008 | null | 2024-06-28T07:11:18Z | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | false |
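Issues #7008/#7009 close out the ruff bump: #7007 (reverted here) presumably capped the linter below 0.5.0 to unbreak CI, and #7009 adapts the codebase so the cap can go. A hypothetical view of that constraint dance in a setup.py quality extra (the real bounds in the repository may differ):

```python
# Hypothetical "quality" extra from setup.py, before and after the fix.
# While ruff 0.5.0 broke CI (issue #7008), a temporary upper bound held it back:
QUALITY_REQUIRE_DURING_BREAKAGE = ["ruff>=0.3.0,<0.5.0"]

# Once #7009 made the code style ruff-0.5.0 clean, the cap could be dropped:
QUALITY_REQUIRE = ["ruff>=0.3.0"]
```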