This dataset repackages a subset of [google/docci] images and descriptions
for Mini-LLaVA training/inference. We only reorganized the files and converted
the metadata to data/docci_converted.json. The original content and copyright
remain with the DOCCI authors.
Attribution: Onoe et al., DOCCI: Descriptions of Connected and Contrasting Images, ECCV 2024.
Original dataset: google/docci (CC BY 4.0).
Changes from the original: file renaming, directory layout, and metadata JSON conversion.
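Since the converted metadata lives in a single JSON file, loading it is plain json work. Below is a minimal, self-contained sketch of the round trip; the record keys used here (example_id, image, description) are assumptions modeled on the data instance in the card below, so inspect data/docci_converted.json to confirm the actual schema before relying on them.

```python
import json
from pathlib import Path

# Hypothetical records mirroring the converted metadata; the actual keys
# in data/docci_converted.json may differ, so inspect one record first.
sample = [
    {
        "example_id": "qual_dev_00000",
        "image": "images/qual_dev_00000.jpg",
        "description": "An indoor angled down medium close-up front view ...",
    }
]

# Write and read back a small sample to illustrate the format.
path = Path("docci_sample.json")
path.write_text(json.dumps(sample, ensure_ascii=False, indent=2), encoding="utf-8")

records = json.loads(path.read_text(encoding="utf-8"))
for rec in records:
    print(rec["example_id"], rec["image"])
```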
Original README.md is shown below.
---
annotations_creators:
  - expert-generated
  - crowdsourced
language:
  - en
language_creators:
  - other
license:
  - cc-by-4.0
multilinguality:
  - monolingual
pretty_name: DOCCI
size_categories:
  - 10K<n<100K
source_datasets:
  - original
tags: []
task_categories:
  - text-to-image
  - image-to-text
task_ids:
  - image-captioning
---
Dataset Card for DOCCI
Dataset Summary
DOCCI (Descriptions of Connected and Contrasting Images) is a collection of images paired with detailed descriptions. The descriptions explain the key elements of the images, as well as secondary information such as background, lighting, and settings. The images were taken specifically to support assessment of precise visual properties. DOCCI also includes many related images that differ from one another in key ways. All descriptions are manually annotated to ensure they adequately distinguish each image from its counterparts.
Supported Tasks
Text-to-Image and Image-to-Text generation
Languages
English
Dataset Structure
Data Instances
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=1536x2048>,
'example_id': 'qual_dev_00000',
'description': 'An indoor angled down medium close-up front view of a real sized stuffed dog with white and black colored fur wearing a blue hard hat with a light on it. A couple inches to the right of the dog is a real sized black and white penguin that is also wearing a blue hard hat with a light on it. The dog is sitting, and is facing slightly towards the right while looking to its right with its mouth slightly open, showing its pink tongue. The dog and penguin are placed on a gray and white carpet, and placed against a white drawer that has a large gray cushion on top of it. Behind the gray cushion is a transparent window showing green trees on the outside.'
}
Data Fields
| Name | Explanation |
|---|---|
| image | PIL.JpegImagePlugin.JpegImageFile |
| example_id | The unique ID of an example follows this format: <SPLIT_NAME>_<EXAMPLE_NUMBER>. |
| description | Text description of the associated image. |
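The ID format above can be split back into its parts. Since a split name such as qual_dev itself contains underscores, split only on the last one (a small sketch):

```python
def parse_example_id(example_id: str) -> tuple[str, int]:
    """Split an ID like 'qual_dev_00000' into ('qual_dev', 0).

    The <SPLIT_NAME> part may itself contain underscores, so split
    only on the last one.
    """
    split_name, number = example_id.rsplit("_", 1)
    return split_name, int(number)

print(parse_example_id("qual_dev_00000"))  # → ('qual_dev', 0)
```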
Data Splits
| Dataset | Train | Test | Qual Dev | Qual Test |
|---|---|---|---|---|
| DOCCI | 9,647 | 5,000 | 100 | 100 |
| DOCCI-AAR | 4,932 | 5,000 | -- | -- |
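For quick sanity checks in code, the split sizes from the table can be captured as plain dictionaries (numbers copied from the table above):

```python
# Split sizes as reported in the table above.
docci_splits = {"train": 9647, "test": 5000, "qual_dev": 100, "qual_test": 100}
docci_aar_splits = {"train": 4932, "test": 5000}

print(sum(docci_splits.values()))      # → 14847
print(sum(docci_aar_splits.values()))  # → 9932
```

The DOCCI total (14,847) is consistent with the 10K<n<100K size category declared in the card metadata.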
Dataset Creation
Curation Rationale
DOCCI is designed as an evaluation dataset for both text-to-image (T2I) and image-to-text (I2T) generation. Please see our paper for more details.
Source Data
Initial Data Collection
All images were taken by one of the authors and their family.
Annotations
Annotation process
All text descriptions were written by human annotators. We do not rely on any automated process in our data annotation pipeline. Please see Appendix A of our paper for details about image curation.
Personal and Sensitive Information
We manually reviewed all images for personally identifiable information (PII), removing some images and blurring detected faces, phone numbers, and URLs to protect privacy. For text descriptions, we instructed annotators to exclude any PII, such as people's names, phone numbers, and URLs. After the annotation phase, we employed automatic tools to scan for PII, ensuring the descriptions remained free of such information.
Considerations for Using the Data
Social Impact of Dataset
[More Information Needed]
Discussion of Biases
[More Information Needed]
Other Known Limitations
[More Information Needed]
Licensing Information
CC BY 4.0
Citation Information
@inproceedings{OnoeDocci2024,
author = {Yasumasa Onoe and Sunayana Rane and Zachary Berger and Yonatan Bitton and Jaemin Cho and Roopal Garg and
Alexander Ku and Zarana Parekh and Jordi Pont-Tuset and Garrett Tanzer and Su Wang and Jason Baldridge},
title = {{DOCCI: Descriptions of Connected and Contrasting Images}},
booktitle = {ECCV},
year = {2024}
}