---
license: apache-2.0
dataset_info:
- config_name: CLEVRER
  features:
  - name: video_filename
    dtype: string
  - name: scene_index
    dtype: int64
  - name: question_text
    dtype: string
  - name: answer_text
    dtype: string
  - name: attributes_list
    sequence: string
  splits:
  - name: train
    num_bytes: 2029869
    num_examples: 13374
  download_size: 203081
  dataset_size: 2029869
- config_name: VG_v1
  features:
  - name: img_id
    dtype: int64
  - name: orig_qa
    dtype: string
  - name: question_text
    dtype: string
  - name: answer_text
    dtype: string
  splits:
  - name: train
    num_bytes: 26281742
    num_examples: 424507
  download_size: 7732035
  dataset_size: 26281742
- config_name: vg_V1
  features:
  - name: img_id
    dtype: int64
  - name: orig_qa
    dtype: string
  - name: question_text
    dtype: string
  - name: answer_text
    dtype: string
  splits:
  - name: train
    num_bytes: 26281742
    num_examples: 424507
  download_size: 7732035
  dataset_size: 26281742
configs:
- config_name: CLEVRER
  data_files:
  - split: train
    path: CLEVRER/train-*
- config_name: VG_v1
  data_files:
  - split: train
    path: VG_v1/train-*
- config_name: vg_V1
  data_files:
  - split: train
    path: vg_V1/train-*
---

Here we create two datasets (derived from the existing CLEVRER and VisualGenome datasets) for the Object Counting instruction-tuning task.

### CLEVRER, a video dataset

CLEVRER provides QA pairs for each of its 5,000 training videos.
```json
{
  "video_filename": str,
  "scene_index": int (matches the filename),
  "questions": [
    {
      "question_type": str,
      "question_subtype": str,
      "question_text": str,
      "answer_text": str,
      "program": list (question attributes)
    }
  ]
}
```
We select questions of type 'descriptive' and subtype 'count'; these are object-counting questions. The 'program' list reflects how complex a question is (longer means more complex), so we filter out questions whose program is longer than 9 to reduce difficulty.
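The filtering step above can be sketched as follows. This is a minimal illustration assuming one annotation dict per video in the schema shown above; the function name and input structure are assumptions, not the original preprocessing script.

```python
def select_counting_questions(annotation, max_program_len=9):
    """Keep 'descriptive'/'count' questions whose program has at most
    max_program_len steps, and flatten them into per-question records."""
    selected = []
    for q in annotation["questions"]:
        if q.get("question_type") != "descriptive":
            continue
        if q.get("question_subtype") != "count":
            continue
        # longer programs indicate harder questions; drop them
        if len(q.get("program", [])) > max_program_len:
            continue
        selected.append({
            "video_filename": annotation["video_filename"],
            "scene_index": annotation["scene_index"],
            "question_text": q["question_text"],
            "answer_text": q["answer_text"],
        })
    return selected
```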

CLEVRER already contains both positive questions and negative questions (asking about non-existent objects), so we skip generating negative samples for CLEVRER.

Some questions are event-specific, counting moving/stationary objects when a certain event happens, e.g., 'How many objects are stationary when the yellow object enters the scene?'

Videos can be downloaded from: http://clevrer.csail.mit.edu/


### VisualGenome, an image dataset

We generate negative questions about non-existent objects in each image. We use the version 1 image set; download it from: https://homes.cs.washington.edu/~ranjay/visualgenome/api.html

VisualGenome has 100K+ images. Each object in an image has associated attributes; we focus only on the color attributes.

For each image, we add (1) three questions about non-existent objects and (2) one question about a non-existent attribute of an existing object as negative samples.

The original VG QA dataset already contains Object Counting questions; we include them here with 'orig_qa' == 'Yes'. The negative questions we generated have 'orig_qa' == 'No'.
```json
{
  "img_id": int,
  "orig_qa": "Yes" / "No",
  "question_text": "How many <attribute> <objects in plural form> are there?",
  "answer_text": <number>. (if the objects exist) or None. (if they do not)
}
```
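The negative-sample generation described above can be sketched as follows. This is a hypothetical illustration: the vocabulary inputs, the naive "+s" pluralization, and the function name are assumptions, not the original scripts; `present_objects` maps each object name in the image to its set of colors.

```python
import random

def make_negative_questions(img_id, present_objects, vocab_objects,
                            vocab_colors, rng=None, n_objects=3):
    """Build negative QA records: questions about absent objects, plus one
    about a color an existing object does not have (schema as above)."""
    rng = rng or random.Random(0)
    records = []
    # (1) three objects that do not appear in the image
    absent = [o for o in vocab_objects if o not in present_objects]
    for obj in rng.sample(absent, min(n_objects, len(absent))):
        records.append({
            "img_id": img_id,
            "orig_qa": "No",
            "question_text": f"How many {obj}s are there?",
            "answer_text": "None.",
        })
    # (2) one color the chosen present object does not have
    if present_objects:
        obj, colors = rng.choice(sorted(present_objects.items()))
        wrong_colors = [c for c in vocab_colors if c not in colors]
        if wrong_colors:
            records.append({
                "img_id": img_id,
                "orig_qa": "No",
                "question_text": f"How many {rng.choice(wrong_colors)} {obj}s are there?",
                "answer_text": "None.",
            })
    return records
```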
For more details, please refer to the dataset.