---
license: apache-2.0
configs:
  - config_name: default
    data_files:
      - split: test
        path: longbench_pro.json
task_categories:
  - question-answering
  - text-classification
  - table-question-answering
  - summarization
language:
  - en
  - zh
tags:
  - Long Context
  - Realistic
  - Comprehensive
pretty_name: LongBench Pro
size_categories:
  - 1K<n<10K
---

# LongBench Pro: A More Realistic and Comprehensive Bilingual Long-Context Evaluation Benchmark

Dataset    Code    Paper    Leaderboard


LongBench-Pro contains 1,500 samples built entirely from authentic, natural long documents. It spans 11 primary tasks and 25 secondary tasks, covering all long-context capabilities assessed by existing benchmarks, employs diverse evaluation metrics for a more fine-grained measurement of model abilities, and provides a balanced set of bilingual samples in English and Chinese.

In addition, LongBench Pro introduces a multi-dimensional taxonomy to support a comprehensive evaluation of models under different operating conditions:

- **Context Requirement**: Full context (global integration) versus Partial context (localized retrieval);
- **Length**: six lengths uniformly distributed from 8k to 256k tokens, used to analyze scaling behavior;
- **Difficulty**: four levels ranging from Easy to Extreme, defined based on model performance.

## 🧩 Task Framework

*Task mapping between LongBench Pro and existing benchmarks*

## 📊 Dataset Statistics

## 📝 Data Format

LongBench Pro organizes data in the following format:

```json
{
    "id": "Sample ID: unique for each sample.",
    "context": "Long context: 14 types of texts covering domains such as news, medicine, science, literature, law, and education, with various forms such as reports, tables, code, dialogues, lists, and JSON.",
    "language": "Sample language: English or Chinese.",
    "token_length": "Sample token length: 8k, 16k, 32k, 64k, 128k, or 256k (calculated using the Qwen tokenizer).",
    "primary_task": "Primary task type: 11 types.",
    "secondary_task": "Secondary task type: 25 types.",
    "contextual_requirement": "Contextual Requirement: Full or Partial.",
    "question_nonthinking": "Non-thinking prompt of the question: direct answer required.",
    "question_thinking": "Thinking prompt of the question: think first, then answer.",
    "answer": ["List of components that constitute the answer."],
    "difficulty": "Sample difficulty: Easy, Moderate, Hard, or Extreme."
}
```
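Each sample carries both a non-thinking and a thinking variant of the question. A minimal sketch of selecting the prompt for a given mode, using the field names from the format above (how `context` and the question are concatenated here is an assumption, not the official evaluation recipe):

```python
def build_prompt(sample, thinking=False):
    """Pick the thinking or non-thinking question variant for a sample.

    Field names follow the LongBench-Pro record format; the simple
    context-then-question concatenation is an illustrative assumption.
    """
    key = "question_thinking" if thinking else "question_nonthinking"
    return f"{sample['context']}\n\n{sample[key]}"

# Toy record with the same keys as a real sample.
sample = {
    "context": "…long document…",
    "question_nonthinking": "Answer directly.",
    "question_thinking": "Think step by step, then answer.",
}
print(build_prompt(sample, thinking=True))
```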

## 🧰 How to use it?

### Loading Data

You can download and load LongBench Pro data using the following code:

```python
from datasets import load_dataset

dataset = load_dataset('caskcsg/LongBench-Pro', split='test')
```
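Once loaded, samples can be sliced along the taxonomy dimensions (language, token length, difficulty) for per-condition analysis. A minimal sketch over plain records with the documented field names; the helper `slice_samples` and the toy records are illustrative, not part of the dataset or the `datasets` API:

```python
def slice_samples(samples, language=None, token_length=None, difficulty=None):
    """Return samples matching every taxonomy filter that is not None.

    Field names follow the LongBench-Pro record format; in practice the
    records would come from load_dataset('caskcsg/LongBench-Pro', split='test').
    """
    def keep(s):
        return ((language is None or s["language"] == language)
                and (token_length is None or s["token_length"] == token_length)
                and (difficulty is None or s["difficulty"] == difficulty))
    return [s for s in samples if keep(s)]

# Toy stand-ins with the same taxonomy keys as real samples.
samples = [
    {"id": "1", "language": "en", "token_length": "8k",   "difficulty": "Easy"},
    {"id": "2", "language": "zh", "token_length": "128k", "difficulty": "Hard"},
    {"id": "3", "language": "en", "token_length": "128k", "difficulty": "Extreme"},
]

long_en = slice_samples(samples, language="en", token_length="128k")
print([s["id"] for s in long_en])  # → ['3']
```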

### Evaluation

Please refer to our GitHub repo for automated evaluation.

## 📖 Citation

Coming Soon...