---
license: apache-2.0
task_categories:
  - text-retrieval
tags:
  - long-context
  - retrieval
  - llm-evaluation
  - benchmark
---

# Difficult Long-context Retrieval Tasks

This dataset is designed to evaluate the performance of Long-Context Language Models (LCLMs) on challenging retrieval tasks. Although LCLMs are characterized by their extensive context windows, many long-context benchmarks contain tasks that even the most advanced models struggle to complete. Our research indicates that the difficulty of these tasks primarily stems from two basic problems: "multi-matching retrieval," which requires retrieving multiple items simultaneously, and "logic-based retrieval," which requires logical judgment within the retrieval criteria. Although these two problems seem straightforward, they prove to be hyper-multi-step in nature, which explains why LCLMs struggle with more advanced long-context tasks.

The tasks we provide are:

**😄 Simple tasks, which are easy for Long-Context LMs:**

- `simple_k2v`: direct key-to-value retrieval. The key is given and the model needs to retrieve the corresponding value.
- `simple_v2k`: direct value-to-key retrieval. The value is given and the model needs to retrieve the corresponding key.
- `multi_step(kv)`: multi-step (formal) KV retrieval. The model needs to retrieve multiple values with multiple queries, concatenate those values to form a new key, and finally retrieve the value corresponding to that new key (all three query types are sketched after this list).
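To make the task formats concrete, here is a minimal sketch of the three simple query types over a synthetic KV store. The UUID-based key/value layout is an illustrative assumption, not the exact format used in the dataset prompts:

```python
import uuid

# Synthetic KV store; the dataset's actual key/value format may differ.
kv_store = {str(uuid.uuid4()): str(uuid.uuid4()) for _ in range(100)}

# simple_k2v: the key is given, retrieve its value (a single lookup).
key = next(iter(kv_store))
value = kv_store[key]

# simple_v2k: the value is given, retrieve its key (a reverse lookup).
recovered_key = next(k for k, v in kv_store.items() if v == value)

# multi_step(kv): retrieve several values, concatenate them into a new key,
# then retrieve the value of that new key. Here we plant one such chain.
keys = list(kv_store)[:3]
new_key = "".join(kv_store[k] for k in keys)
kv_store[new_key] = str(uuid.uuid4())  # the final answer
final_value = kv_store[new_key]
```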

**😵 Difficult tasks, which are nearly unsolvable for Long-Context LMs:**

- `logic(kv)`: logic-based KV retrieval. All values are integers in the range 0-9. A value range is given and the model needs to retrieve the key whose value falls within it.
- `logic(resume)`: logic-based student resume retrieval. A GPA range is given and the model needs to retrieve the student whose GPA is within the range.
- `multi_match(kv)`: multi-match KV retrieval. The value is given and the model needs to retrieve multiple corresponding keys (see the sketch after this list).
- `multi_match(resume)`: multi-match student resume retrieval. A university name is given and the model needs to retrieve multiple corresponding students who are from this university.
- `multi_match_last(kv)`: multi-match KV retrieval in which all gold keys except the last one are already given in the prompt, so the model only needs to retrieve the final key.
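Unlike the simple tasks, these cannot be answered with a single verbatim lookup: logic-based retrieval requires a judgment per item, and multi-matching requires scanning the entire context to be sure no match is missed. A minimal sketch over a synthetic store (format assumed, as above):

```python
import random
import uuid

# Synthetic store in the style of logic(kv): values are digits 0-9.
kv_store = {str(uuid.uuid4()): random.randint(0, 9) for _ in range(100)}

# logic(kv): given a value range, find the key whose value falls inside it.
# Every item must be judged against the condition rather than matched
# verbatim. (In the dataset the range presumably singles out one key;
# here we simply collect every match.)
lo, hi = 3, 5
logic_answer = [k for k, v in kv_store.items() if lo <= v <= hi]

# multi_match(kv): given a value, find *all* keys that map to it. A model
# cannot stop at the first hit; it must keep scanning to the end.
target = 7
multi_match_answer = [k for k, v in kv_store.items() if v == target]
```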

## The meaning of file names

For example:

- `logic_kv_10` means the logic-based KV retrieval task with a context containing 10 KV items.
- `3_match_resume_100` means the multi-match student resume retrieval task with a context containing 100 resumes, where the model needs to retrieve 3 students.
- `concat_3_kv_100_cot` means the multi-step KV retrieval task with a context containing 100 KV items, where the model needs to concatenate 3 values retrieved with 3 queries; the `cot` suffix means the prompt style is Chain-of-Thought (CoT). (A helper that decodes these names is sketched below.)
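The naming scheme can be decoded programmatically. The following hypothetical helper only covers the three example forms above; other task files may follow variants not shown here:

```python
import re

def parse_task_filename(name: str) -> dict:
    """Decode the file-name convention described above (illustrative only)."""
    # e.g. "concat_3_kv_100_cot": multi-step KV retrieval, 3 queries,
    # 100 KV items, optional prompt-style suffix.
    m = re.fullmatch(r"concat_(\d+)_kv_(\d+)(?:_(cot|one-by-one))?", name)
    if m:
        return {"task": "multi_step_kv", "num_queries": int(m.group(1)),
                "num_items": int(m.group(2)), "prompt_style": m.group(3)}
    # e.g. "logic_kv_10": logic-based retrieval over 10 items.
    m = re.fullmatch(r"logic_(kv|resume)_(\d+)", name)
    if m:
        return {"task": f"logic_{m.group(1)}", "num_items": int(m.group(2))}
    # e.g. "3_match_resume_100": multi-match retrieval, 3 gold answers
    # among 100 items.
    m = re.fullmatch(r"(\d+)_match_(kv|resume)_(\d+)", name)
    if m:
        return {"task": f"multi_match_{m.group(2)}",
                "num_matches": int(m.group(1)), "num_items": int(m.group(3))}
    raise ValueError(f"Unrecognized task file name: {name}")

print(parse_task_filename("3_match_resume_100"))
# {'task': 'multi_match_resume', 'num_matches': 3, 'num_items': 100}
```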

## Columns in the dataset

- `prompt`: the full prompt of the task.
- `gold_keys`: the gold keys of the KV retrieval task, given as a string if there is only one gold key and as a list of strings otherwise. In student resume retrieval, this is the student name (or a list of student names).
- `gold_values`: the gold values of the KV retrieval task, given as a string if there is only one gold value and as a list of strings otherwise. In student resume retrieval, this is the student's GPA or university (or a list of them).

Note that in the logic-based and multi-match retrieval tasks, `gold_keys` is the actual answer to the prompt.
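A minimal loading sketch follows. It assumes the repository id is `yuyijiong/difficult_retrieval`, that each task lives in its own data file named as described above, and that the file is JSON Lines; adjust the file name, format, and split to the actual repository layout:

```python
from datasets import load_dataset

ds = load_dataset(
    "yuyijiong/difficult_retrieval",
    data_files="logic_kv_10.jsonl",  # hypothetical file name
    split="train",                   # assumed split name
)

def as_list(x):
    # gold_keys / gold_values are a plain string when there is a single
    # answer and a list of strings otherwise; normalize to a list.
    return [x] if isinstance(x, str) else list(x)

example = ds[0]
answers = as_list(example["gold_keys"])  # the answer in logic / multi-match tasks
print(example["prompt"][:200])
print(answers)
```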

## Sample Usage

You can use the `evaluate.py` script from the GitHub repository to test the performance of LLMs on these difficult retrieval tasks (or on other retrieval tasks). To choose different tasks, models, and prompt types, modify the code in `evaluate.py` directly.

The prompt styles provided are:

- `None`: the default prompt, which lets the model give the answer directly.
- `"cot"`: adds a Chain-of-Thought (CoT) prompt, guiding the model to "think step by step".
- `"one-by-one"`: adds a one-by-one prompt, guiding the model to "examine every item one by one".

For more detailed usage instructions, including linear probing of hidden states and attention analysis, please refer to the GitHub repository.