feat(README): add dataset structure notes

README.md
This version of the dataset was prepared by combining the [JSON annotations](https://github.com/zwenyu/SPHERE-VLM/tree/main/eval_datasets/coco_test2017_annotations) with the corresponding images from [MS COCO-2017](https://cocodataset.org).
The script used, `prepare_parquet.py`, can be found in [our GitHub repository](https://github.com/zwenyu/SPHERE-VLM) and is to be executed from the repository root.
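For programmatic access to the resulting parquet files, a minimal loading sketch with the 🤗 `datasets` library could look as follows. The repository ID is a placeholder, and the assumption that each subset listed under "Dataset Structure" below maps to a config name is ours, so adjust as needed:

```python
from datasets import load_dataset

# "position_only" is one of the subset names listed under "Dataset Structure".
# The repository ID is a placeholder -- substitute the actual Hub dataset ID.
ds = load_dataset("<hub-org>/<dataset-id>", name="position_only")

for split_name, split in ds.items():
    print(split_name, split.num_rows)  # e.g. 357 samples for position_only
    print(split[0]["question"])        # the question passed to the VLM
```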
## Dataset Structure

The dataset is split into the following subsets:

1. **Single-skill**:
   1. **Position** (`position_only`) - 357 samples
      - Egocentric: 172, Allocentric: 185
   2. **Counting** (`counting_only-paired-distance_and_counting` + `counting_only-paired-position_and_counting`) - 201 samples
      - The `counting_only-paired-distance_and_counting` subset comprises questions corresponding to those in `distance_and_counting`, and similarly for `counting_only-paired-position_and_counting` with `position_and_counting`.
      - For instance, every question in `distance_and_counting` (e.g. "How many crows are on the railing farther from the viewer?") has a corresponding question in `counting_only-paired-distance_and_counting` that counts all such instances (e.g. "How many crows are in the photo?").
   3. **Distance** (`distance_only`) - 202 samples
   4. **Size** (`size_only`) - 198 samples
2. **Multi-skill**:
   1. **Position + Counting** (`position_and_counting`) - 169 samples
      - Egocentric: 64, Allocentric: 105
   2. **Distance + Counting** (`distance_and_counting`) - 158 samples
   3. **Distance + Size** (`distance_and_size`) - 199 samples
3. **Reasoning**:
   1. **Object occlusion** (`object_occlusion`) - 402 samples
      - Intermediate: 202, Final: 200
      - The `object_occlusion_w_intermediate` subset contains final questions with intermediate answers prefixed in the following format (a construction sketch follows this list):
        > "Given that for the question: \<intermediate step question\> The answer is: \<intermediate step answer\>. \<final step question\> Answer the question directly."
      - For instance, given the two questions "Which object is thicker?" (intermediate) and "Where can a child be hiding?" (final) in `object_occlusion`, the corresponding question in `object_occlusion_w_intermediate` is:
        > "Given that for the question: Which object is thicker? Fire hydrant or tree trunk? The answer is: Tree trunk. Where can a child be hiding? Behind the fire hydrant or behind the tree? Answer the question directly."
   2. **Object manipulation** (`object_manipulation`) - 399 samples
      - Intermediate: 199, Final: 200
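To make the format above concrete, here is a minimal sketch of how the prefixed questions in `object_occlusion_w_intermediate` could be assembled from an `object_occlusion` intermediate/final pair. The helper and its parameter names are illustrative and are not taken from `prepare_parquet.py`:

```python
def with_intermediate(intermediate_q: str, intermediate_a: str, final_q: str) -> str:
    """Prefix a final question with its intermediate step, following the
    format quoted above. (Illustrative helper, not from prepare_parquet.py.)"""
    return (
        f"Given that for the question: {intermediate_q} "
        f"The answer is: {intermediate_a}. "
        f"{final_q} Answer the question directly."
    )

# Reproduces the worked example above:
print(with_intermediate(
    "Which object is thicker? Fire hydrant or tree trunk?",
    "Tree trunk",
    "Where can a child be hiding? Behind the fire hydrant or behind the tree?",
))
```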
### Data Fields

The data fields are as follows:

- `question_id`: A unique ID for the question.
- `question`: Question to be passed to the VLM.
- `option`: A list of options that the VLM can select from. For counting tasks, this field is left as null.
- `answer`: The expected answer, which must be either one of the strings in `option` (for non-counting tasks) or a number (for counting tasks). (A sanity-check sketch follows this list.)
- `metadata`:
  - `viewpoint`: Either "allo" (allocentric) or "ego" (egocentric).
  - `format`: Expected format of the answer, e.g. "bool" (boolean), "name", "num" (numeric), "pos" (position).
  - `source_dataset`: Currently, this is "coco_test2017" ([MS COCO-2017](https://cocodataset.org)) for our entire set of annotations.
  - `source_img_id`: Source image ID in [MS COCO-2017](https://cocodataset.org).
  - `skill`: For reasoning tasks, a list of skills tested by the question, e.g. "count", "dist" (distance), "pos" (position), "shape", "size", "vis" (visual).
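The `option`/`answer` semantics above lend themselves to a quick sanity check. The sketch below assumes that `metadata` deserializes to a plain dict and that counting answers stringify to something numeric; neither is guaranteed by the schema description:

```python
def check_example(ex: dict) -> None:
    if ex["option"] is None:
        # Counting task: `answer` should be a number.
        float(ex["answer"])  # raises ValueError/TypeError if not numeric
    else:
        # Non-counting task: `answer` must be one of the option strings.
        assert ex["answer"] in ex["option"], ex["question_id"]

    # When present, `viewpoint` is either "allo" or "ego".
    viewpoint = (ex.get("metadata") or {}).get("viewpoint")
    assert viewpoint in (None, "allo", "ego"), ex["question_id"]

# Usage, e.g. over a loaded subset: for ex in split: check_example(ex)
```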
## Licensing Information

Please note that the images are subject to the [Terms of Use of MS COCO-2017](https://cocodataset.org/#termsofuse):