Dataset viewer preview: an `image` column (image widths 84–478 px) paired with a `labels` column (`ClassLabel`, 15 classes).
A human action recognition dataset from Kaggle. Origin: https://dphi.tech/challenges/data-sprint-76-human-activity-recognition/233/data
The data instances have the following fields:

- `image`: a `PIL.Image.Image` object containing the image. Note that when accessing the image column (`dataset[0]["image"]`), the image file is automatically decoded. Decoding a large number of image files can take a significant amount of time, so it is important to query the sample index before the `"image"` column: `dataset[0]["image"]` should always be preferred over `dataset["image"][0]`. A minimal sketch of this access pattern follows the label mapping below.
- `labels`: an `int` classification label. All `test` data is labeled 0. The label mapping is:

```python
{
    'calling': 0,
    'clapping': 1,
    'cycling': 2,
    'dancing': 3,
    'drinking': 4,
    'eating': 5,
    'fighting': 6,
    'hugging': 7,
    'laughing': 8,
    'listening_to_music': 9,
    'running': 10,
    'sitting': 11,
    'sleeping': 12,
    'texting': 13,
    'using_laptop': 14
}
```
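As a minimal sketch of the recommended access pattern (row index first, then column):

```python
from datasets import load_dataset

ds = load_dataset("Bingsu/Human_Action_Recognition")

# Preferred: index the row first, so only this sample's image is decoded.
sample = ds["train"][0]
image = sample["image"]   # PIL.Image.Image
label = sample["labels"]  # int in [0, 14]

# Avoid: ds["train"]["image"][0] materializes the whole image column,
# decoding every image file before returning the first one.
```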
|               | train | test |
|---------------|-------|------|
| # of examples | 12600 | 5400 |
```python
>>> from datasets import load_dataset
>>> ds = load_dataset("Bingsu/Human_Action_Recognition")
>>> ds
DatasetDict({
    test: Dataset({
        features: ['image', 'labels'],
        num_rows: 5400
    })
    train: Dataset({
        features: ['image', 'labels'],
        num_rows: 12600
    })
})
>>> ds["train"].features
{'image': Image(decode=True, id=None),
 'labels': ClassLabel(num_classes=15, names=['calling', 'clapping', 'cycling', 'dancing', 'drinking', 'eating', 'fighting', 'hugging', 'laughing', 'listening_to_music', 'running', 'sitting', 'sleeping', 'texting', 'using_laptop'], id=None)}
>>> ds["train"][0]
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=240x160>,
 'labels': 11}
```
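Since `labels` is a `ClassLabel` feature, integer labels can be converted to and from their string names with `int2str` and `str2int`; continuing the session above:

```python
>>> ds["train"].features["labels"].int2str(11)
'sitting'
>>> ds["train"].features["labels"].str2int("sleeping")
12
```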