# MindEye: fMRI-to-Image Reconstruction Dataset
MindEye is a groundbreaking fMRI-to-image dataset that enables state-of-the-art reconstruction and retrieval of viewed natural scene images from human brain activity.
- Built on the Natural Scenes Dataset (NSD), containing brain responses from 4 participants who passively viewed MS-COCO natural scenes during 7-Tesla fMRI scanning
- Achieves >90% accuracy across multiple reconstruction metrics and >93% top-1 retrieval accuracy, marking a major breakthrough in neural decoding
- Maps fMRI brain activity to CLIP image embeddings through specialized contrastive learning frameworks and diffusion-based generative models
- Combines high-level semantic information with low-level perceptual features, enabling fine-grained decoding that can distinguish between highly similar images (e.g., different zebras)
- Demonstrates scalability to billion-image retrieval tasks using LAION-5B, extending its impact to internet-scale benchmarks
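The contrastive mapping described above can be sketched as a symmetric InfoNCE objective between brain-derived embeddings and CLIP image embeddings: matching rows are positive pairs, all other rows in the batch are negatives. This is an illustrative NumPy sketch under those assumptions, not MindEye's actual training code; the function name and temperature value are ours.

```python
import numpy as np

def clip_style_contrastive_loss(brain_emb, image_emb, temperature=0.07):
    """Symmetric InfoNCE loss: row i of brain_emb and image_emb is a
    positive pair; every other row in the batch is a negative."""
    brain = brain_emb / np.linalg.norm(brain_emb, axis=1, keepdims=True)
    image = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    logits = brain @ image.T / temperature  # (batch, batch) cosine similarities

    def xent_diagonal(l):
        # cross-entropy with the positive pair on the diagonal
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    # average the brain->image and image->brain retrieval directions
    return (xent_diagonal(logits) + xent_diagonal(logits.T)) / 2

# Toy check: aligned pairs should score a much lower loss than random pairs
rng = np.random.default_rng(0)
image = rng.normal(size=(8, 16))
loss_aligned = clip_style_contrastive_loss(image + 0.01 * rng.normal(size=(8, 16)), image)
loss_random = clip_style_contrastive_loss(rng.normal(size=(8, 16)), image)
```

Minimizing this loss pulls each brain embedding toward the CLIP embedding of the image the subject saw, which is what makes the retrieval numbers reported below possible.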
Figure: Paired side-by-side example. Left: MS-COCO stimulus shown during scanning; Right: MindEye reconstruction (Subject 01) derived from the `fmri_data` and `image_embeddings` fields.
## Table of Contents
- Quickstart
- Dataset Configurations
- Provenance & Processing
- Intended Uses & Limitations
- Bias & Fairness Considerations
- Results
- Dataset Scale
- Citation
- License
## Quickstart
```python
from datasets import load_dataset

# 1. Load the default NSD brain trials (streaming recommended for large configs)
nsd_ds = load_dataset("medarc/fmri-fm", name="nsd-brain-trials", streaming=True, split="train")
example = next(iter(nsd_ds))
print(f"Subject: {example['subject_id']}, Session: {example['session']}, NSD Image ID: {example['nsd_image_id']}")
print(f"fMRI Data (first 5 values of first frame): {example['fmri_data'][0][:5]}")

# 2. Load the supplementary HCP brain trials (streaming)
hcp_ds = load_dataset("medarc/fmri-fm", name="hcp-brain-trials", streaming=True, split="train")
hcp_example = next(iter(hcp_ds))
print(f"HCP Subject: {hcp_example['subject_id']}, Task: {hcp_example['task']}")

# 3. Load CLIP image embeddings for all NSD stimuli (dense table, non-streaming)
clip_ds = load_dataset("medarc/fmri-fm", name="clip-image-embeddings", split="train")
print(f"Total embeddings: {len(clip_ds)}")
print(f"Embedding vector length (first image): {len(clip_ds[0]['image_embeddings'])}")

# 4. Load HCP session metadata (non-streaming, so rows can be indexed directly)
metadata_ds = load_dataset("medarc/fmri-fm", name="hcp-session-metadata", split="train")
meta_example = metadata_ds[0]
print(f"Example Session Key: {meta_example['session_key']}")
print(f"Number of voxels in session: {meta_example['n_voxels']}")
```
## Dataset Configurations
### nsd-brain-trials (Default)
Contains the primary fMRI signals from the Natural Scenes Dataset, used as the main inputs for the MindEye model.
| Field Name | Type | Description |
|---|---|---|
| `subject_id` | string | Subject identifier (e.g., `"subj01"`) |
| `session` | int32 | Scanning session number (1-40) |
| `run` | int32 | fMRI run number within session |
| `fmri_data` | sequence[sequence[float16]] | Brain activity tensor |
| `nsd_image_id` | int32 | NSD image identifier (links to COCO dataset) |
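Since `fmri_data` is a nested sequence (frames of voxel values), a common first step is to stack it into an array and collapse the frame axis into a single voxel pattern per trial. This is a minimal sketch on a hand-made toy record; averaging over frames is an illustrative simplification, not necessarily the official pipeline.

```python
import numpy as np

# Hypothetical record shaped like one row of `nsd-brain-trials`
# (real rows have thousands of voxels; this toy one has 3 frames x 5 voxels)
example = {
    "subject_id": "subj01",
    "session": 1,
    "run": 2,
    "nsd_image_id": 12345,
    "fmri_data": [[0.1, 0.2, 0.3, 0.4, 0.5],
                  [0.2, 0.3, 0.4, 0.5, 0.6],
                  [0.3, 0.4, 0.5, 0.6, 0.7]],
}

# Stack frames into a (frames, voxels) array, then average over time
# to get one voxel pattern for this trial
x = np.asarray(example["fmri_data"], dtype=np.float32)
trial_pattern = x.mean(axis=0)
print(trial_pattern.shape)  # (5,)
```

The resulting per-trial pattern is what a decoding model would consume, paired with the stimulus looked up via `nsd_image_id`.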
### hcp-brain-trials
Contains supplementary fMRI data from the Human Connectome Project (HCP) for various cognitive tasks.
| Field Name | Type | Description |
|---|---|---|
| `subject_id` | string | HCP subject identifier (6-digit code) |
| `modality` | string | Imaging modality (`"tfMRI"` or `"rfMRI"`) |
| `task` | string | HCP task name (RELATIONAL, SOCIAL, WM, etc.) |
| `fmri_data` | sequence[sequence[float16]] | Brain activity tensor |
| `trial_type` | string | Specific trial condition (relation, mental, etc.) |
### clip-image-embeddings
Contains the target CLIP (ViT-L/14) image embeddings for the 73,000 NSD stimulus images.
| Field Name | Type | Description |
|---|---|---|
| `image_embeddings` | sequence[sequence[float32]] | CLIP ViT-L/14 embeddings for NSD images |
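These embeddings serve as retrieval targets: a brain-predicted embedding can be scored against the whole bank by cosine similarity, and the top-scoring image is the retrieval result. A small self-contained sketch with a random stand-in bank; the function name and shapes are illustrative assumptions, not part of the dataset API.

```python
import numpy as np

def top1_retrieval(pred_emb, image_bank):
    """Return the index of the bank row most cosine-similar to pred_emb."""
    bank = image_bank / np.linalg.norm(image_bank, axis=1, keepdims=True)
    q = pred_emb / np.linalg.norm(pred_emb)
    return int(np.argmax(bank @ q))

# Toy bank of 100 "CLIP" embeddings; the query is a noisy copy of image 42,
# standing in for an embedding decoded from brain activity
rng = np.random.default_rng(0)
bank = rng.normal(size=(100, 768))
query = bank[42] + 0.01 * rng.normal(size=768)
print(top1_retrieval(query, bank))  # 42
```

The top-1 retrieval metrics reported in the Results section are this procedure applied over the held-out test images.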
### hcp-session-metadata
Detailed metadata for each fMRI session in the HCP dataset.
| Field Name | Type | Description |
|---|---|---|
| `session_key` | string | Unique session identifier |
| `subject_id` | string | HCP subject identifier |
| `task` | string | HCP task name |
| `n_voxels` | int32 | Total voxels in session (77,763) |
### Other Configurations
- `hcp-session-archives`: Raw `.tar` archives of preprocessed fMRI data from the HCP dataset.
- `semantic-clusters`: Semantic cluster assignments for the 73k COCO/NSD images.
- `brain-parcellations`: Brain atlas files (Schaefer 2018, Yeo 2011) used for feature engineering.
- `hcp-task-mapping`: JSON file mapping HCP task conditions to numerical target IDs.
## Provenance & Processing
This dataset is constructed from two major sources:
- Natural Scenes Dataset (NSD)
- Human Connectome Project (HCP)
The primary neuroimaging data was collected using a 7-Tesla (7T) fMRI scanner. All fMRI data underwent rigorous preprocessing including:
- GLMsingle: General Linear Model single-trial estimation
- Z-scoring: Session-wise normalization of signals
- Brain parcellation: Feature vectors constructed via atlases such as the Schaefer 2018 (400 Parcels) and Yeo 2011 (7 Networks), reducing volumetric brain data into region-wise summaries suitable for machine learning tasks
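The z-scoring and parcellation steps above can be sketched as follows. This is an illustrative NumPy sketch assuming a (timepoints, voxels) layout and an integer parcel label per voxel; the function names are ours, not the pipeline's.

```python
import numpy as np

def zscore_session(X):
    """Session-wise z-scoring: give each voxel's time series zero mean
    and unit variance within one scanning session."""
    mu = X.mean(axis=0, keepdims=True)
    sd = X.std(axis=0, keepdims=True)
    return (X - mu) / np.where(sd == 0, 1.0, sd)  # guard constant voxels

def parcellate(X, labels, n_parcels):
    """Average voxels into atlas parcels: X is (timepoints, voxels) and
    labels[v] in [0, n_parcels) assigns voxel v to a parcel."""
    out = np.zeros((X.shape[0], n_parcels))
    for p in range(n_parcels):
        out[:, p] = X[:, labels == p].mean(axis=1)
    return out

# Toy session: 10 timepoints, 6 voxels grouped into 3 parcels of 2 voxels
rng = np.random.default_rng(0)
X = rng.normal(loc=5.0, scale=2.0, size=(10, 6))
labels = np.array([0, 0, 1, 1, 2, 2])
Z = zscore_session(X)
P = parcellate(Z, labels, n_parcels=3)
```

With a real atlas such as Schaefer 2018, `n_parcels` would be 400 and `labels` would come from the `brain-parcellations` configuration.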
## Intended Uses & Limitations
### Recommended Uses
- Neuroscience Research: Understanding visual cortex representations, brain-computer interfaces, and neural mechanisms of visual perception
- Computer Vision: Developing novel multimodal learning approaches between brain signals and visual data, contrastive learning research
- AI Model Development: Training and benchmarking brain decoding models, diffusion-based reconstruction systems
- Medical Applications: Research into locked-in syndrome communication, depression assessment through visual bias analysis, neurological disorder diagnosis
### Out-of-Scope Uses
- ⚠️ Clinical Diagnosis: Not validated for medical diagnosis without extensive additional clinical validation and regulatory approval
- ⚠️ Cross-subject Generalization: Models are subject-specific and do not generalize across individuals without additional training data
- ⚠️ Non-consensual Applications: Requires active participant compliance; easily defeated by head movement, unrelated thinking, or non-compliance
### Known Limitations
- Subject Specificity: Each participant requires individual model training with extensive fMRI data (up to 40 hours scanning)
- Compliance Requirement: Non-invasive neuroimaging requires participant cooperation and cannot be used covertly
- Single-trial Degradation: Performance significantly degrades when using single-trial vs. averaged responses
## Bias & Fairness Considerations
- **Sampling Biases:** Limited to 4 participants, all capable of undergoing extensive MRI scanning. Geographic and cultural representation not specified.
- **Image Distribution:** Images limited to MS-COCO natural scenes, which may not represent diverse visual experiences across cultures, environments, or individual visual preferences.
- **Technical Access:** Requires expensive 7-Tesla MRI equipment and substantial computational resources, limiting accessibility and reproducibility across research groups.
## Results
| Category | Best Prior SOTA | MindEye |
|---|---|---|
| Pixel Correlation | 0.254 (Ozcelik) | 0.309 |
| SSIM | 0.356 (Ozcelik) | 0.323 |
| Image Retrieval (top-1) | 94.2% (Ozcelik) | 97.8% |
| Brain Retrieval (top-1) | 30.3% (Ozcelik) | 90.1% |
| CLIP Identification | 91.5% (Ozcelik) | 94.1% |
| Parameter Efficiency | 1.45B (Ozcelik LL) | 206M |
## Dataset Scale
- Training samples: 24,980 across 4 subjects (individual trials preserved)
- Test samples: 982 (averaged across 3 repetitions per image)
- Voxels per subject: 13,000-16,000 from nsdgeneral brain region
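The 3-repetition averaging used for the test set can be sketched like this: group single-trial responses by `nsd_image_id`, then average the repeats. Toy data throughout; only the field name follows the `nsd-brain-trials` schema.

```python
import numpy as np
from collections import defaultdict

# Toy trials: (nsd_image_id, voxel_pattern), each test image repeated 3 times
rng = np.random.default_rng(0)
trials = [(img_id, rng.normal(size=4)) for img_id in [7, 7, 7, 9, 9, 9]]

# Group single-trial responses by image and average the repetitions
by_image = defaultdict(list)
for img_id, pattern in trials:
    by_image[img_id].append(pattern)
averaged = {img_id: np.mean(patterns, axis=0)
            for img_id, patterns in by_image.items()}
print(len(averaged))  # 2 unique test images
```

Averaging boosts the signal-to-noise ratio, which is why the single-trial performance degradation noted under Known Limitations matters for real-time applications.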
## Citation
Please cite:
```bibtex
@article{scotti2023reconstructing,
  title={Reconstructing the Mind's Eye: fMRI-to-Image with Contrastive Learning and Diffusion Priors},
  author={Paul S. Scotti and Atmadeep Banerjee and Jimmie Goode and Stepan Shabalin and Alex Nguyen and Ethan Cohen and Aidan J. Dempster and Nathalie Verlinde and Elad Yundler and David Weisberg and Kenneth A. Norman and Tanishq Mathew Abraham},
  journal={arXiv preprint arXiv:2305.18274},
  year={2023},
  url={https://arxiv.org/abs/2305.18274v2}
}
```
## License
MIT License
Copyright (c) 2022 MEDARC
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.