OIBench Dataset
Dataset Overview
OIBench is a high-quality, private, and challenging olympiad-level informatics benchmark consisting of 250 carefully curated original problems.
This Hugging Face repository contains the problem statements, reference solutions, and associated metadata such as test cases, pseudo code, and difficulty levels. The data has been processed and stored in Parquet format for efficient access and analysis.
We provide complete information for all 250 problems in the dataset. Load it with dataset = load_dataset("AGI-Eval/OIBench"); the test cases are large, so the default Dataset Viewer on Hugging Face may not display them fully.
We also provide the competition records of human participants in human_participants_data.parquet. For detailed usage, see https://github.com/AGI-Eval-Official/OIBench
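If you only need the human records, one minimal sketch (assuming the Parquet file sits at the repository root and that pandas and pyarrow are installed) is to fetch it with huggingface_hub and read it with pandas:

```python
# Sketch: download human_participants_data.parquet and load it with pandas.
# Assumes the file is stored at the root of the dataset repository.
from huggingface_hub import hf_hub_download
import pandas as pd

path = hf_hub_download(
    repo_id="AGI-Eval/OIBench",
    filename="human_participants_data.parquet",
    repo_type="dataset",
)
human_df = pd.read_parquet(path)
print(human_df.shape)
print(human_df.columns.tolist())  # inspect which columns are available
```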
Dataset Structure
The dataset includes the following fields:
- id: Problem ID (e.g., 000, 001, ..., 249)
- prob_zh: Problem description in Chinese
- prob_en: Problem description in English
- algorithm_tag_zh: Algorithm tags in Chinese
- algorithm_tag_en: Algorithm tags in English
- level: Problem difficulty
- canonical_solution: Official solution code in C++
- test_case: List of test cases; each test case is an object containing:
  - input: The input for the test case
  - output: The expected output for the test case
- pseudo_code: Pseudo code for the algorithm
- buggy_code: Buggy code for the problem
- corrupted_code: Incomplete code for the problem
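As a quick illustration of these fields, the sketch below loads the dataset and inspects one problem. The split name is not fixed here, so it simply takes the first split returned; the nested test_case access assumes the input/output object structure described above.

```python
# Sketch: inspect the fields of a single OIBench problem.
from datasets import load_dataset

dataset = load_dataset("AGI-Eval/OIBench")
split = dataset[list(dataset.keys())[0]]   # take whichever split is present
example = split[0]

print(example["id"], example["level"], example["algorithm_tag_en"])
print(example["prob_en"][:200])            # first part of the English statement
print(len(example["test_case"]))           # number of test cases
case = example["test_case"][0]             # assumed {"input": ..., "output": ...}
print(case["input"][:100])
print(case["output"][:100])
```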
Usage
You can load the dataset in your Python code using the following example:
```python
from datasets import load_dataset

dataset = load_dataset("AGI-Eval/OIBench")
print(dataset)
```
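Beyond inspection, a rough sketch of checking a reference solution locally (assuming g++ is on your PATH and that test_case entries are input/output objects as described in the field list) might look like this:

```python
# Sketch: compile the official C++ solution of one problem and run it on its
# first test case. Assumes g++ is installed and test_case entries are
# {"input": ..., "output": ...} objects as described above.
import os
import subprocess
import tempfile

from datasets import load_dataset

dataset = load_dataset("AGI-Eval/OIBench")
example = dataset[list(dataset.keys())[0]][0]

with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "sol.cpp")
    binary = os.path.join(tmp, "sol")
    with open(src, "w") as f:
        f.write(example["canonical_solution"])
    subprocess.run(["g++", "-O2", "-std=c++17", src, "-o", binary], check=True)

    case = example["test_case"][0]
    result = subprocess.run([binary], input=case["input"],
                            capture_output=True, text=True, timeout=10)
    print("output matches:", result.stdout.strip() == case["output"].strip())
```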
For more usage details, refer to our GitHub Repo: https://github.com/AGI-Eval-Official/OIBench
Citation
```bibtex
@misc{zhu2025oibenchbenchmarkingstrongreasoning,
  title={OIBench: Benchmarking Strong Reasoning Models with Olympiad in Informatics},
  author={Yaoming Zhu and Junxin Wang and Yiyang Li and Lin Qiu and ZongYu Wang and Jun Xu and Xuezhi Cao and Yuhuai Wei and Mingshi Wang and Xunliang Cai and Rong Ma},
  year={2025},
  eprint={2506.10481},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2506.10481},
}
```
Corresponding Author: Lin Qiu ([email protected])