---
license: apache-2.0
configs:
- config_name: default
  data_files:
  - split: test
    path: longbench_pro.json
task_categories:
- question-answering
- text-classification
- table-question-answering
- summarization
language:
- en
- zh
tags:
- Long Context
- Realistic
- Comprehensive
pretty_name: LongBench Pro
size_categories:
- 1K<n<10K
---
<div align="center">
<img src="images/logo.png" width="80" alt="LongBench-Pro Logo"/>
<h1>LongBench Pro: A More Realistic and Comprehensive Bilingual Long-Context Evaluation Benchmark</h1>
</div>
<div align="center">
[![Dataset](https://img.shields.io/badge/Dataset-yellow?logo=huggingface&logoColor=yellow&labelColor=white)](https://huggingface.co/datasets/caskcsg/LongBench-Pro) &nbsp;&nbsp;
[![Code](https://img.shields.io/badge/Code-181717?logo=github&logoColor=181717&labelColor=white)](https://github.com/caskcsg/longcontext/tree/main/LongBench-Pro) &nbsp;&nbsp;
[![Paper](https://img.shields.io/badge/Paper-red?logo=arxiv&logoColor=B31B1B&labelColor=white)]() &nbsp;&nbsp;
[![Leaderboard](https://img.shields.io/badge/🏆-Leaderboard-blue?labelColor=white)](https://huggingface.co/spaces/caskcsg/LongBench-Pro-Leaderboard)
</div>
---
**LongBench Pro** contains **1,500 samples**, built entirely on **authentic, natural long documents**, and spans **11 primary tasks and 25 secondary tasks** that cover all long-context capabilities assessed by existing benchmarks. It employs **diverse evaluation metrics**, enabling more fine-grained measurement of model abilities, and provides a balanced set of **bilingual samples in English and Chinese**.
In addition, **LongBench Pro** introduces a multi-dimensional taxonomy to support a comprehensive evaluation of models under different operating conditions:
- **Context Requirement**: *Full* context (global integration) versus *Partial* context (localized retrieval);
- **Length**: Six lengths uniformly distributed from *8k to 256k* tokens, used to analyze scaling behavior;
- **Difficulty**: Four levels ranging from *Easy to Extreme*, defined based on model performance; a sketch of slicing results along these dimensions is shown below.
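
The three dimensions are stored as per-sample fields (see the Data Format section below), so evaluation results can be broken down along the taxonomy directly. The snippet below is a minimal sketch of such a breakdown; it assumes you already have per-sample scores from your own evaluation run, and the zero scores used here are only placeholders.
```python
from collections import defaultdict
from datasets import load_dataset

dataset = load_dataset("caskcsg/LongBench-Pro", split="test")

# Placeholder scores keyed by sample id; replace these with the real per-sample
# scores produced by your evaluation run.
scores = {sample_id: 0.0 for sample_id in dataset["id"]}

# Group scores along the taxonomy dimensions: length, difficulty, context requirement.
buckets = defaultdict(list)
for sample in dataset:
    key = (sample["token_length"], sample["difficulty"], sample["contextual_requirement"])
    buckets[key].append(scores[sample["id"]])

for (length, difficulty, requirement), values in buckets.items():
    mean = sum(values) / len(values)
    print(f"{length} | {difficulty} | {requirement} | mean score: {mean:.3f}")
```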
<div align="center">
<img src="images/bench_comparison.png" width="100%"/>
</div>
## 🧩 Task Framework
<div align="center">
<img src="images/task_definition.png" width="100%"/>
<br />
<br />
<img src="images/task_map.png" width="80%"/>
<br />
<b>Task mapping between LongBench Pro and existing benchmarks</b>
</div>
## 📊 Dataset Statistics
<div align="center">
<img src="images/sample_distrubution.png" width="100%"/>
</div>
## 📝 Data Format
**LongBench Pro** organizes data in the following format:
```json
{
  "id": "Sample ID: unique for each sample.",
  "context": "Long context: 14 types of texts covering domains such as news, medicine, science, literature, law, and education, in various forms such as reports, tables, code, dialogues, lists, and JSON.",
  "language": "Sample language: English or Chinese.",
  "token_length": "Sample token length: 8k, 16k, 32k, 64k, 128k, or 256k (calculated using the Qwen tokenizer).",
  "primary_task": "Primary task type: 11 types.",
  "secondary_task": "Secondary task type: 25 types.",
  "contextual_requirement": "Contextual requirement: Full or Partial.",
  "question_nonthinking": "Non-thinking prompt of the question: a direct answer is required.",
  "question_thinking": "Thinking prompt of the question: think first, then answer.",
  "answer": ["List of components that constitute the answer."],
  "difficulty": "Sample difficulty: Easy, Moderate, Hard, or Extreme."
}
```
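As a quick sanity check of this format, the snippet below loads one sample, prints its metadata fields, and re-measures the context length with a Qwen tokenizer. The checkpoint `Qwen/Qwen2.5-7B-Instruct` is only an assumed example; the exact Qwen tokenizer version used to compute `token_length` is not specified here.
```python
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("caskcsg/LongBench-Pro", split="test")
sample = dataset[0]

# Print the metadata fields described above.
for field in ("id", "language", "token_length", "primary_task",
              "secondary_task", "contextual_requirement", "difficulty"):
    print(f"{field}: {sample[field]}")

# Re-measure the context length. The tokenizer checkpoint is an assumption;
# the dataset card only states that lengths were computed with the Qwen tokenizer.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
num_tokens = len(tokenizer.encode(sample["context"]))
print(f"measured context tokens: {num_tokens} (reported bucket: {sample['token_length']})")
```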
## 🧰 How to use it?
### Loading Data
You can download and load **LongBench Pro** data using the following code:
```python
from datasets import load_dataset
dataset = load_dataset('caskcsg/LongBench-Pro', split='test')
```
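Each taxonomy field is an ordinary column, so subsets can be selected with `Dataset.filter`. The value strings below (e.g. `"English"`, `"128k"`, `"Hard"`) are assumptions based on the field descriptions above; check them against the loaded data, for example with `set(dataset["language"])`.
```python
# Keep, e.g., English samples in the 128k length bucket with Hard difficulty.
subset = dataset.filter(
    lambda s: s["language"] == "English"
    and s["token_length"] == "128k"
    and s["difficulty"] == "Hard"
)
print(len(subset), "samples selected")
```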
### Evaluation
Please refer to our [GitHub repository](https://github.com/caskcsg/longcontext/tree/main/LongBench-Pro) for automated evaluation.
## 📖 Citation
*Coming Soon...*