Commit 1c45f7c
Parent(s): 7143728
Upload README.md with huggingface_hub

README.md CHANGED
@@ -11,9 +11,9 @@ dataset_info:
   - name: poster
     dtype: string
   - name: date_utc
-    dtype: timestamp[
+    dtype: timestamp[ns]
   - name: flair
-    dtype:
+    dtype: 'null'
   - name: title
     dtype: string
   - name: score
@@ -24,11 +24,32 @@ dataset_info:
     dtype: string
   splits:
   - name: train
-    num_bytes:
-    num_examples:
-    download_size:
-    dataset_size:
+    num_bytes: 440321
+    num_examples: 200
+    download_size: 279819
+    dataset_size: 440321
 ---
 # Dataset Card for "dataset-creator-reddit-amitheasshole"
 
-[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+--- Generated Part of README Below ---
+
+
+## Dataset Overview
+The goal is to have an open dataset of [r/amitheasshole](https://www.reddit.com/r/amitheasshole/) submissions. I'm leveraging PRAW and the Reddit API to download them.
+
+There is a limit of 1000 results per API call and limited search functionality, so this is run every hour to get new submissions.
+
+## Creation Details
+This was created by [derek-thomas/dataset-creator-reddit-amitheasshole](https://huggingface.co/spaces/derek-thomas/dataset-creator-reddit-amitheasshole)
+
+## Update Frequency
+The dataset is updated hourly, with the most recent update being `2023-10-27 16:00:00 UTC+0000`, when we added **200 new rows**.
+
+## Licensing
+[Reddit Licensing terms](https://www.redditinc.com/policies/data-api-terms) as accessed on October 25:
+> The Content created with or submitted to our Services by Users (“User Content”) is owned by Users and not by Reddit. Subject to your complete and ongoing compliance with the Data API Terms, Reddit grants you a non-exclusive, non-transferable, non-sublicensable, and revocable license to copy and display the User Content using the Data API solely as necessary to develop, deploy, distribute, and run your App to your App Users. You may not modify the User Content except to format it for such display. You will comply with any requirements or restrictions imposed on usage of User Content by their respective owners, which may include "all rights reserved" notices, Creative Commons licenses, or other terms and conditions that may be agreed upon between you and the owners. Except as expressly permitted by this section, no other rights or licenses are granted or implied, including any right to use User Content for other purposes, such as for training a machine learning or AI model, without the express permission of rightsholders in the applicable User Content
+
+My take is that you can't use this data for *training* without getting permission.
+
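For reference, the `date_utc` column in the diff is typed `timestamp[ns]`, while Reddit's API reports submission times as Unix epoch seconds (`created_utc`). A minimal stdlib sketch of that conversion (the function name is illustrative, not the Space's actual code):

```python
from datetime import datetime, timezone

def epoch_to_utc(created_utc: float) -> datetime:
    """Convert Reddit's created_utc epoch seconds to an aware UTC datetime."""
    return datetime.fromtimestamp(created_utc, tz=timezone.utc)

# This epoch corresponds to the update timestamp quoted in the README.
print(epoch_to_utc(1698422400.0).isoformat())  # → 2023-10-27T16:00:00+00:00
```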
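Because each hourly run re-reads the newest listing (capped at 1000 results), freshly fetched rows have to be de-duplicated against what is already stored before appending. A sketch under that assumption (the helper and field names are hypothetical, not the Space's code):

```python
# Merge freshly fetched submissions into the existing rows, keeping the
# first occurrence of each submission id (the ids here are illustrative).
def merge_new_rows(existing: list[dict], fetched: list[dict]) -> list[dict]:
    seen = {row["id"] for row in existing}
    merged = list(existing)
    for row in fetched:
        if row["id"] not in seen:
            seen.add(row["id"])
            merged.append(row)
    return merged

existing = [{"id": "abc1", "title": "AITA for ..."}]
fetched = [{"id": "abc1", "title": "AITA for ..."},
           {"id": "def2", "title": "AITA because ..."}]
print(len(merge_new_rows(existing, fetched)))  # → 2
```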