The full dataset viewer is not available. Only a preview of the rows is shown.
The dataset generation failed because of a cast error
Error code:   DatasetGenerationCastError
Exception:    DatasetGenerationCastError
Message:      An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 6 new columns ({'creation_date', 'date_accessed', 'subscribers', 'current_users', 'appearances', 'time_accessed_UTC'}) and 1 missing columns ({'response_code'}).

This happened while the csv dataset builder was generating data using

hf://datasets/davidwisdom/reddit-randomness/summary.csv (at revision 01740f7cd9ffa5855819bd828d5dcb03578abf0e)

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2256, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              subreddit: string
              subscribers: int64
              current_users: int64
              creation_date: string
              date_accessed: string
              time_accessed_UTC: string
              appearances: int64
              -- schema metadata --
              pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 1127
              to
              {'subreddit': Value(dtype='string', id=None), 'response_code': Value(dtype='int64', id=None)}
              because column names don't match
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1321, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 935, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2013, in _prepare_split_single
                  raise DatasetGenerationCastError.from_cast_error(
              datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
              
              All the data files must have the same columns, but at some point there are 6 new columns ({'creation_date', 'date_accessed', 'subscribers', 'current_users', 'appearances', 'time_accessed_UTC'}) and 1 missing columns ({'response_code'}).
              
              This happened while the csv dataset builder was generating data using
              
              hf://datasets/davidwisdom/reddit-randomness/summary.csv (at revision 01740f7cd9ffa5855819bd828d5dcb03578abf0e)
              
              Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
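The failure above can be reproduced locally before re-uploading: the builder only needs the CSV headers to agree, so comparing them with the standard library is enough. This is an illustrative sketch, not part of the `datasets` API; the function names are made up here.

```python
import csv

def csv_columns(path):
    """Return the header row of a CSV file as a set of column names."""
    with open(path, newline="") as f:
        return set(next(csv.reader(f)))

def diff_columns(reference_path, other_path):
    """Compare two CSV headers.

    Returns (new, missing): columns present in `other_path` but not in
    `reference_path`, and columns the reference has that `other_path` lacks.
    """
    ref = csv_columns(reference_path)
    other = csv_columns(other_path)
    return other - ref, ref - other
```

Run against the two files involved here, the "new" set would contain the six snapshot columns (`subscribers`, `current_users`, `creation_date`, `date_accessed`, `time_accessed_UTC`, `appearances`) and the "missing" set would contain `response_code`, matching the error message.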

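The second fix the message suggests, separate configurations, is declared in the YAML front matter of the dataset's README.md. A minimal sketch follows; the `responses.csv` filename is an assumption, since only `summary.csv` is named in the error:

```yaml
configs:
  - config_name: summary
    data_files: "summary.csv"      # subscribers, current_users, creation_date, ...
  - config_name: responses
    data_files: "responses.csv"    # hypothetical file holding subreddit, response_code
```

With two configurations, each file is cast against its own schema, so the column mismatch no longer aborts generation.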

subreddit (string) | response_code (int64)
changemyview | 302
Terraform | 302
lostpause | 302
USPS | 302
MaliciousCompliance | 302
BATProject | 302
obs | 302
apple | 302
IndoorGarden | 302
Dimension20 | 302
honkaiimpact3 | 302
chimebank | 302
confusing_perspective | 302
geopolitics | 302
awardtravel | 302
Home | 302
AmITheDevil | 302
runescape | 302
benzodiazepines | 302
interactivebrokers | 302
Nootropics | 302
2007scape | 302
thesopranos | 302
HottestFemaleAthletes | 302
SuddenlyGay | 302
QuotesPorn | 302
formula1 | 302
Genshin_Impact | 302
Hairloss | 302
SomethingWasWrong | 302
maryland | 302
RPI | 302
inkarnate | 302
distantsocializing | 302
clevercomebacks | 302
OnlyFans | 302
Discord_Bots | 302
UniUK | 302
CalamityMod | 302
HealthAnxiety | 302
CrucibleGuidebook | 302
oilpen | 302
Chonkers | 302
yorku | 302
reactiongifs | 302
ClassyPornstars | 302
MxRMods | 302
trashy | 302
Porsche | 302
Drugs | 302
oneplus | 302
youseeingthisshit | 302
DestinyFashion | 302
fastfood | 302
Aphantasia | 302
FPSAimTrainer | 302
CFA | 302
comedyhomicide | 302
discordapp | 302
FedEx | 302
TeslaModelY | 302
NYStateOfMind | 302
beer | 302
SubredditDrama | 302
SUMC | 302
Techno | 302
USC | 302
AskVet | 302
GreenBayPackers | 302
FanTheories | 302
WidescreenWallpaper | 302
PS4Pro | 302
datascience | 302
French | 302
ModernWarzone | 302
kitchener | 302
CrazyIdeas | 302
JuiceWRLD | 302
adhdwomen | 302
lebanon | 302
tasker | 302
learnjava | 302
halifax | 302
kitchener | 302
AWSCertifications | 302
Music | 302
AltStore | 302
Beastars | 302
malegrooming | 302
kansascity | 302
japanlife | 302
ContraPoints | 302
canberra | 302
imaginarymaps | 302
Tetris | 302
BMW | 302
travisscott | 302
Crunchyroll | 302
fightporn | 302
Webull | 302
End of preview.