Access is open to verified users who agree to strict privacy, no-harm, and non-identification policies compliant with EDPB guidelines.
EEG FOUNDATION MODEL RESPONSIBLE USE AGREEMENT
This model is available for general use (research, commercial, and personal), provided that you strictly adhere to the following privacy and safety standards. By requesting access, you agree to be bound by the ethical principles below and by the regulatory guidance outlined in EDPB Opinion 28/2024.
- No Privacy Intrusion or Reconstruction: You acknowledge that AI models trained on personal data may not be fully anonymous and can be vulnerable to attacks. You expressly agree NOT to:
- Attempt to extract, infer, or reconstruct subject-level EEG data or personal information from the model weights or outputs.
- Perform "Model Inversion" or "Membership Inference" attacks to extract statistical data related to specific individuals.
- Attempt to re-identify individuals from the model's embeddings.
- No Harm, Surveillance, or Discrimination: In line with protecting fundamental rights, you will not use this model for:
- Biometric Identification: Continuous monitoring, behavioral profiling, or identification of natural persons.
- Discrimination: Any purpose that leads to unfair treatment of individuals or groups, or exploits vulnerabilities (e.g., age, disability).
- Manipulation: Coercing or exploiting users, particularly vulnerable populations, or infringing on human autonomy.
- Fair Use, Security, and Data Minimisation: If you deploy this model, you accept accountability for the processing. You must:
- Minimize Data: Ensure any additional data used with the model is limited, pseudonymised where possible, and securely handled.
- Be Transparent: Any research or deployment must clearly state the purpose, limitations, and safeguards implemented to protect rights.
- Secure the Deployment: Implement measures to prevent unauthorized access or adversarial attacks on the model.
- Redistribution and Access Revocation
- No Redistribution: You will not share, host, or distribute the model weights or derivatives without permission; other users must obtain the model through this repository so that they, too, agree to these terms.
- Dataset Withdrawal: If any underlying dataset becomes closed or restricted, access to this model may be revoked or replaced by a retrained version.
Model Card for REVE-large
REVE (project page here) is a transformer-based foundation model for EEG signal processing. It was trained on 60k hours of EEG data from various sources and is designed to be adaptable to any electrode configuration and a wide range of EEG-based tasks.
Model Details
Architecture
REVE (Representation for EEG with Versatile Embeddings) is a pretrained encoder explicitly designed to generalize across diverse EEG signals. REVE introduces a novel 4D positional encoding scheme that enables it to process signals of arbitrary length and electrode arrangement. Using a masked autoencoding objective, we pretrain REVE on over 60,000 hours of EEG data from 92 datasets spanning 25,000 subjects.
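The exact encoding is not reproduced here, but the general idea of a positional code built from each electrode's 3D coordinates plus the temporal position of each patch can be sketched as follows. This is a minimal illustration under assumed conventions (sinusoidal features, an even split of the embedding across the four axes); the function names and shapes are ours, not REVE's:

```python
import torch

def sincos_features(values: torch.Tensor, dim: int) -> torch.Tensor:
    # Standard sinusoidal features for one coordinate axis.
    # values: (...,) float tensor -> (..., dim)
    half = dim // 2
    freqs = torch.exp(-torch.arange(half) * (torch.log(torch.tensor(10000.0)) / half))
    angles = values.unsqueeze(-1) * freqs
    return torch.cat([angles.sin(), angles.cos()], dim=-1)

def positional_encoding_4d(xyz: torch.Tensor, t: torch.Tensor, d_model: int = 256) -> torch.Tensor:
    # Hypothetical 4D code: sinusoidal features for the x, y, z electrode
    # coordinates and the patch time index, concatenated to d_model dims.
    # xyz: (channels, 3) electrode positions; t: (time_patches,) patch indices.
    d = d_model // 4  # even split across the four axes (an assumption)
    ex = sincos_features(xyz[:, 0], d)
    ey = sincos_features(xyz[:, 1], d)
    ez = sincos_features(xyz[:, 2], d)
    et = sincos_features(t.float(), d)
    space = torch.cat([ex, ey, ez], dim=-1)               # (channels, 3*d)
    space = space.unsqueeze(1).expand(-1, t.numel(), -1)  # (channels, T, 3*d)
    time = et.unsqueeze(0).expand(xyz.size(0), -1, -1)    # (channels, T, d)
    return torch.cat([space, time], dim=-1)               # (channels, T, d_model)
```

Because the spatial part depends only on physical coordinates, any montage can be encoded without retraining, which is what allows arbitrary electrode arrangements.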
Developed by: the BRAIN team and UdeM
Funded by: This research was supported by the French National Research Agency (ANR) through its AI@IMT program and grant ANR-24-CE23-7365, as well as by a grant from the Brittany region. Further support was provided by a Discovery Grant from the Natural Sciences and Engineering Research Council of Canada (NSERC), by funding from the Canada Research Chairs program and the Fonds de recherche du Québec – Nature et technologies (FRQ-NT). This work was granted access to the HPC resources of IDRIS under the allocation 2024-AD011015237R1 made by GENCI, as well as HPC provided by Digital Alliance Canada.
Model Sources
Uses
Example script to extract embeddings with REVE, using our position bank:
```python
from transformers import AutoModel

# Load the position bank (maps electrode names to 3D coordinates) and the model
pos_bank = AutoModel.from_pretrained("brain-bzh/reve-positions", trust_remote_code=True)
model = AutoModel.from_pretrained("brain-bzh/reve-large", trust_remote_code=True)

eeg_data = ...  # EEG data as a torch Tensor (batch_size, channels, time_points), sampled at 200 Hz
electrode_names = [...]  # list of electrode names matching the channel order of eeg_data

positions = pos_bank(electrode_names)  # electrode coordinates, shape (channels, 3)
# Expand the positions to match the batch size: (batch_size, channels, 3)
positions = positions.expand(eeg_data.size(0), -1, -1)

output = model(eeg_data, positions)
```
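The model expects input sampled at 200 Hz. If your recordings use a different rate, resample them first; a minimal sketch using torchaudio (the 500 Hz source rate and shapes are hypothetical, and any resampler would do):

```python
import torch
import torchaudio.functional as AF

# Hypothetical batch: 4 recordings, 19 channels, 10 s at 500 Hz
raw = torch.randn(4, 19, 5000)

# Resample along the last (time) dimension to the 200 Hz the model expects
eeg_data = AF.resample(raw, orig_freq=500, new_freq=200)  # -> (4, 19, 2000)
```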
Evaluation results
| Benchmark | Accuracy (self-reported) |
|---|---|
| Mumtaz-LP | 0.985 |
| MAT-LP | 0.712 |
| TUAB-LP | 0.821 |
| PhysionetMIT-LP | 0.617 |
| BCIC-IV-2a-LP | 0.603 |
| ISRUC-LP | 0.758 |
| HMC-LP | 0.710 |
| BCIC2020-3A-LP | 0.390 |
| TUEV-LP | 0.630 |
| FACED-LP | 0.469 |
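The -LP suffix presumably denotes linear probing: the encoder is kept frozen and only a linear classifier is trained on its embeddings. A minimal sketch, continuing from the usage example above (the pooling choice, output shape, and the binary task are assumptions):

```python
import torch
import torch.nn as nn

# Freeze the pretrained encoder; only the linear head will be trained
for p in model.parameters():
    p.requires_grad = False

emb = model(eeg_data, positions)   # assuming token embeddings of shape (batch, tokens, hidden)
features = emb.mean(dim=1)         # mean-pool over tokens -> (batch, hidden)

num_classes = 2                    # hypothetical binary task (e.g., normal/abnormal)
head = nn.Linear(features.size(-1), num_classes)

labels = torch.randint(0, num_classes, (features.size(0),))  # dummy labels for illustration
loss = nn.functional.cross_entropy(head(features), labels)
loss.backward()                    # gradients flow only into the linear head
```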