# 🧬 OmniGenBench Hub: Curated Genomic Datasets for AI Research
Welcome to OmniGenBench Hub, your one-stop repository for high-quality, curated genomic datasets designed specifically for AI and machine learning research in computational biology. Our hub provides seamlessly integrated datasets that work directly with the OmniGenBench framework, enabling researchers to focus on model development rather than data preprocessing.
## 🎯 What is OmniGenBench Hub?
OmniGenBench Hub is a centralized collection of genomic datasets that have been:
- ✅ Carefully curated and quality-controlled for research use
- ✅ Standardized for consistent formatting and structure
- ✅ Optimized for seamless integration with the OmniGenBench framework
- ✅ Validated through extensive testing and benchmarking
- ✅ Documented with comprehensive metadata and usage examples
## 🚀 Quick Start
Getting started with our datasets is incredibly simple:
```python
from omnigenbench import OmniDatasetForSequenceClassification, OmniTokenizer

# Initialize the tokenizer
tokenizer = OmniTokenizer.from_pretrained("yangheng/OmniGenome-52M")

# Load any dataset from the hub
datasets = OmniDatasetForSequenceClassification.from_hub(
    dataset_name="deepsea_tfb_prediction",
    tokenizer=tokenizer,
    max_length=512
)

# That's it! Your datasets are ready for training and evaluation.
# Access the train, validation, and test splits:
train_dataset = datasets['train']
valid_dataset = datasets['valid']
test_dataset = datasets['test']
```
## 📊 Available Datasets
Our hub currently hosts 3 high-quality datasets spanning multiple genomic AI tasks. Each dataset is carefully organized with data files and configuration scripts.
### 📋 Dataset Categories and Overview
| Category | Dataset Name | Task Type | Description | Size |
|---|---|---|---|---|
| 🧬 Gene Regulation | `deepsea_tfb_prediction` | Classification | Transcription Factor Binding Site Prediction | 822 MB |
| 🔬 Protein Synthesis | `translation_efficiency_prediction` | Regression | mRNA Translation Efficiency Prediction | 868 KB |
| ⚕️ Functional Genomics | `variant_effective_prediction` | Multi-class | Variant Effect Prediction on Protein Function | 1.64 GB |
### 📖 Detailed Dataset Descriptions
#### 1. 🧬 DeepSEA TFB Prediction (`deepsea_tfb_prediction`)
- **Research Area**: Gene Regulation and Transcription Factor Binding
- **Task**: Binary/multi-class classification of transcription factor binding sites
- **Applications**:
  - Regulatory element discovery
  - Gene expression prediction
  - Epigenomic analysis
  - Drug target identification
- **Data Format**: DNA sequences with binding labels
- **Usage Example**:
```python
from omnigenbench import OmniDatasetForSequenceClassification, OmniTokenizer

tokenizer = OmniTokenizer.from_pretrained("yangheng/OmniGenome-52M")

tfb_datasets = OmniDatasetForSequenceClassification.from_hub(
    dataset_name="deepsea_tfb_prediction",
    tokenizer=tokenizer,
    max_length=512
)

# Access the training dataset
train_data = tfb_datasets['train']
```
#### 2. 🔬 Translation Efficiency Prediction (`translation_efficiency_prediction`)
- **Research Area**: Protein Synthesis and mRNA Biology
- **Task**: Regression for predicting ribosome loading efficiency
- **Applications**:
  - mRNA therapeutic design
  - Protein production optimization
  - Synthetic biology applications
  - Gene expression engineering
- **Data Format**: mRNA sequences with continuous efficiency scores
- **Usage Example**:
```python
from omnigenbench import OmniDatasetForSequenceRegression, OmniTokenizer

tokenizer = OmniTokenizer.from_pretrained("yangheng/OmniGenome-52M")

translation_datasets = OmniDatasetForSequenceRegression.from_hub(
    dataset_name="translation_efficiency_prediction",
    tokenizer=tokenizer,
    max_length=256
)

# Access the training dataset
train_data = translation_datasets['train']
```
#### 3. ⚕️ Variant Effect Prediction (`variant_effective_prediction`)
- **Research Area**: Functional Genomics and Precision Medicine
- **Task**: Multi-class classification of genetic variant effects
- **Applications**:
  - Clinical variant interpretation
  - Precision medicine
  - Pharmacogenomics
  - Disease risk assessment
- **Data Format**: Protein sequences with variant effect annotations
- **Usage Example**:
```python
from omnigenbench import OmniDatasetForTokenClassification, OmniTokenizer

tokenizer = OmniTokenizer.from_pretrained("yangheng/OmniGenome-52M")

variant_datasets = OmniDatasetForTokenClassification.from_hub(
    dataset_name="variant_effective_prediction",
    tokenizer=tokenizer,
    max_length=1024
)

# Access the training dataset
train_data = variant_datasets['train']
```
## 📁 Dataset Structure
Each dataset in our hub follows a standardized structure for consistency and ease of use:
```
dataset_name.zip
├── data/
│   ├── train.csv        # Training data
│   ├── valid.csv        # Validation data
│   ├── test.csv         # Test data
│   └── metadata.json    # Dataset metadata
├── config.py            # Dataset configuration and loading scripts
└── README.md            # Dataset-specific documentation
```
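Because every archive follows this layout, the bundled `metadata.json` can be inspected directly once the archive is unpacked. The sketch below is only an illustration: the field names shown (`name`, `num_labels`) are assumptions, not a published schema, and actual metadata fields may vary by dataset.

```python
import json
from pathlib import Path


def load_metadata(dataset_dir: str) -> dict:
    """Read a dataset's metadata.json from its data/ folder."""
    metadata_path = Path(dataset_dir) / "data" / "metadata.json"
    with metadata_path.open() as f:
        return json.load(f)


# Example (assumes the archive was extracted to ./deepsea_tfb_prediction):
# meta = load_metadata("./deepsea_tfb_prediction")
# print(meta.get("name"), meta.get("num_labels"))
```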
## 🛠️ Integration with OmniGenBench
Our datasets are designed to work seamlessly with the OmniGenBench ecosystem:
### For Classification Tasks
```python
from omnigenbench import OmniDatasetForSequenceClassification, OmniTokenizer

# Initialize the tokenizer
tokenizer = OmniTokenizer.from_pretrained("yangheng/OmniGenome-52M")

# Load classification datasets
datasets = OmniDatasetForSequenceClassification.from_hub(
    dataset_name="deepsea_tfb_prediction",
    tokenizer=tokenizer,
    max_length=512
)

# Access individual splits
train_dataset = datasets['train']
valid_dataset = datasets['valid']
test_dataset = datasets['test']
```
### For Regression Tasks
```python
from omnigenbench import OmniDatasetForSequenceRegression, OmniTokenizer

# Initialize the tokenizer
tokenizer = OmniTokenizer.from_pretrained("yangheng/OmniGenome-52M")

# Load regression datasets
datasets = OmniDatasetForSequenceRegression.from_hub(
    dataset_name="translation_efficiency_prediction",
    tokenizer=tokenizer,
    max_length=256
)

# Access individual splits
train_dataset = datasets['train']
valid_dataset = datasets['valid']
test_dataset = datasets['test']
```
### For Token-level Tasks
```python
from omnigenbench import OmniDatasetForTokenClassification, OmniTokenizer

# Initialize the tokenizer
tokenizer = OmniTokenizer.from_pretrained("yangheng/OmniGenome-52M")

# Load token-level datasets
datasets = OmniDatasetForTokenClassification.from_hub(
    dataset_name="variant_effective_prediction",
    tokenizer=tokenizer,
    max_length=1024
)

# Access individual splits
train_dataset = datasets['train']
valid_dataset = datasets['valid']
test_dataset = datasets['test']
```
## 📚 Educational Resources
Each dataset comes with comprehensive educational materials:
- 📓 **Tutorial Notebooks**: Step-by-step guides for using each dataset
- 🔬 **Scientific Background**: Detailed biological context and significance
- 💡 **Best Practices**: Recommended approaches and methodologies
- 📊 **Benchmark Results**: Performance baselines from state-of-the-art models
## 🌟 Key Features
### ✨ Plug-and-Play Integration
- One-line dataset loading with automatic preprocessing
- Compatible with all major deep learning frameworks
- Seamless integration with Hugging Face ecosystem
### 🔧 Standardized Format
- Consistent data structure across all datasets
- Unified API for different task types
- Automatic train/validation/test splitting
### 🔬 Research-Ready
- High-quality, peer-reviewed datasets
- Comprehensive evaluation metrics
- Reproducible benchmark results
### 🚀 Performance Optimized
- Efficient data loading and caching
- Memory-optimized for large-scale training
- Multi-processing support for faster iteration
## 📘 Usage Guidelines
### Basic Usage Pattern
```python
# 1. Import the framework
from omnigenbench import (
    OmniDatasetForSequenceClassification,
    OmniTokenizer,
    OmniModelForSequenceClassification,
    Trainer
)

# 2. Initialize the tokenizer
tokenizer = OmniTokenizer.from_pretrained("yangheng/OmniGenome-52M")

# 3. Load your chosen dataset
datasets = OmniDatasetForSequenceClassification.from_hub(
    dataset_name="deepsea_tfb_prediction",
    tokenizer=tokenizer,
    max_length=512
)

# 4. Initialize the model
model = OmniModelForSequenceClassification(
    model_name_or_path="yangheng/OmniGenome-52M",
    tokenizer=tokenizer,
    num_labels=2  # Adjust based on your task
)

# 5. Create a trainer and start training
trainer = Trainer(
    model=model,
    train_dataset=datasets['train'],
    valid_dataset=datasets['valid'],
    epochs=10,
    batch_size=16,
    learning_rate=1e-4
)

# 6. Train the model
trainer.train()

# 7. Evaluate on the test set
test_results = trainer.evaluate(datasets['test'])
print(f"Test results: {test_results}")

# 8. Run inference on new sequences
predictions = model.inference(["ATCGATCGATCG", "GCTAGCTAGCTA"])
print(f"Predictions: {predictions}")
```
### Advanced Configuration
```python
from omnigenbench import OmniDatasetForSequenceRegression, OmniTokenizer

# Initialize the tokenizer
tokenizer = OmniTokenizer.from_pretrained("yangheng/OmniGenome-52M")

# Custom configuration for specific needs
datasets = OmniDatasetForSequenceRegression.from_hub(
    dataset_name="translation_efficiency_prediction",
    tokenizer=tokenizer,
    max_length=512,
    cache_dir="./cache"
)

# Create a DataLoader with custom settings
train_loader = datasets['train'].get_dataloader(
    batch_size=32,
    shuffle=True,
    num_workers=4,
    pin_memory=True
)

# Use in a training loop
for batch in train_loader:
    input_ids = batch['input_ids']
    attention_mask = batch['attention_mask']
    labels = batch['labels']
    # Your training code here...
```
## 🤝 Contributing
We welcome contributions to expand our dataset collection! If you have a high-quality genomic dataset that would benefit the research community:
1. **Format** your dataset according to our standards
2. **Include** comprehensive documentation and metadata
3. **Provide** benchmark results using standard evaluation metrics
4. **Submit** a pull request with your dataset and documentation
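As a rough starting point for step 1, the standard layout shown in the Dataset Structure section can be scaffolded programmatically before packaging. This is only a sketch: the column names (`sequence`, `label`) and the helper name `scaffold_dataset` are illustrative assumptions, not a published contribution schema.

```python
import csv
import json
from pathlib import Path


def scaffold_dataset(root: str, name: str) -> None:
    """Create the standard data/ layout with placeholder CSV splits."""
    data_dir = Path(root) / name / "data"
    data_dir.mkdir(parents=True, exist_ok=True)
    header = ["sequence", "label"]  # assumed column names, adjust per task
    for split in ("train", "valid", "test"):
        with (data_dir / f"{split}.csv").open("w", newline="") as f:
            csv.writer(f).writerow(header)
    # Minimal metadata stub; fill in real fields before submitting
    (data_dir / "metadata.json").write_text(
        json.dumps({"name": name, "columns": header}, indent=2)
    )


# scaffold_dataset(".", "my_new_dataset")
```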
## 📜 Licensing
All datasets in OmniGenBench Hub are released under the Apache 2.0 License, ensuring:
- ✅ Free use for research and commercial applications
- ✅ Modification and redistribution rights
- ✅ Patent protection for users
- ✅ Clear attribution requirements
## 📬 Support and Contact
- Documentation: OmniGenBench Docs
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Email: Contact the maintainer at [[email protected]]
## 🎯 Roadmap
### Coming Soon
- 📊 **Additional Dataset Categories**: Epigenomics, Proteomics, Multi-omics
- 🔧 **Enhanced Tools**: Advanced preprocessing utilities and evaluation metrics
- 🌐 **Community Features**: User-contributed datasets and collaborative benchmarks
- 📱 **API Expansion**: REST API for programmatic access
### Future Releases
- 10+ new datasets covering diverse genomic tasks
- Automated benchmarking pipeline for consistent evaluation
- Multi-modal datasets combining sequence, structure, and functional data
- Real-time dataset updates and version control
## 🙏 Acknowledgments
OmniGenBench Hub is built on the foundation of outstanding research from the genomics and AI communities. We thank:
- The original dataset creators and research teams
- The open-source community for tools and frameworks
- Beta testers and early adopters for valuable feedback
- The Hugging Face team for hosting infrastructure
## 📊 Dataset Statistics Summary
| Metric | Value |
|---|---|
| Total Datasets | 3 |
| Total Size | 2.46 GB |
| Task Types | Classification, Regression, Multi-class |
| Sequence Types | DNA, RNA, Protein |
| Research Areas | Gene Regulation, Protein Synthesis, Functional Genomics |
🧬 **Start your genomic AI journey today with OmniGenBench Hub!**

*Empowering researchers worldwide with high-quality genomic datasets for breakthrough discoveries in computational biology.*