Dataset: rlhn · Modalities: Text · Formats: parquet · Languages: English · Libraries: Datasets, Dask
Change task category to text-ranking

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +155 -135
README.md CHANGED
@@ -1,138 +1,158 @@
  ---
- dataset_info:
-   features:
-   - name: query_id
-     dtype: string
-   - name: query
-     dtype: string
-   - name: positive_passages
-     list:
-     - name: docid
-       dtype: string
-     - name: text
-       dtype: string
-     - name: title
-       dtype: string
-   - name: negative_passages
-     list:
-     - name: docid
-       dtype: string
-     - name: text
-       dtype: string
-     - name: title
-       dtype: string
-   - name: subset
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 10528804827
-     num_examples: 648766
-   download_size: 6214353445
-   dataset_size: 10528804827
- configs:
- - config_name: default
-   data_files:
-   - split: train
-     path: data/train-*
- license: cc-by-sa-4.0
- task_categories:
- - question-answering
- language:
- - en
- pretty_name: HN Remove 680K
- size_categories:
- - 100K<n<1M
  ---

- # Dataset Card for HN-Remove 680K
-
- ## Dataset Description
- [Repository](https://github.com/castorini/rlhn) |
- [Paper](https://huggingface.co/papers/2505.16967) |
- [ArXiv](https://arxiv.org/abs/2505.16967)
-
- RLHN is a cascading LLM framework designed to accurately relabel hard negatives in existing IR/RAG training datasets, such as MS MARCO and HotpotQA.
-
- This Tevatron dataset (680K training pairs) contains the queries, positives, and hard negatives (with false negatives dropped) for 7 datasets in the BGE training collection.
-
- This repository contains the training pairs that can be used to fine-tune embedding, ColBERT (multi-vector), and reranker models.
-
- The original, uncleaned dataset (lower quality; still containing false negatives) can be found at [rlhn/default-680K](https://huggingface.co/datasets/rlhn/default-680K/).
-
- > Note: RLHN datasets are not **new** training datasets, but rather existing BGE collection training datasets with hard negatives cleaned!
-
- ## Dataset Structure
-
- To access the data using HuggingFace `datasets`:
- ```python
- import datasets
-
- rlhn = datasets.load_dataset('rlhn/hn-remove-680K')
-
- # training set:
- for data in rlhn['train']:
-     query_id = data["query_id"]  # md5 hash of the query text
-     query = data["query"]        # query text
-     subset = data["subset"]      # source training dataset, e.g., fiqa or msmarco_passage
-
-     # positive passages
-     for positive_passage in data["positive_passages"]:
-         doc_id = positive_passage["docid"]
-         title = positive_passage["title"]  # title is usually empty; prepended to text
-         text = positive_passage["text"]    # contains both the title & text
-
-     # hard negative passages
-     for negative_passage in data["negative_passages"]:
-         doc_id = negative_passage["docid"]
-         title = negative_passage["title"]  # title is usually empty; prepended to text
-         text = negative_passage["text"]    # contains both the title & text
- ```
-
- ## Original Dataset Statistics
- The following table lists the number of training pairs contributed by each training dataset included in RLHN. These numbers are for the default setting.
-
- | Dataset           | 100K splits | 250K splits | 400K splits | 680K splits |
- |-------------------|-------------|-------------|-------------|-------------|
- | arguana           | 4,065       | 4,065       | 4,065       | 4,065       |
- | fever             | 28,755      | 28,755      | 28,755      | 28,755      |
- | fiqa              | 5,500       | 5,500       | 5,500       | 5,500       |
- | hotpotqa          | 10,250      | 30,000      | 84,516      | 84,516      |
- | msmarco_passage   | 49,571      | 145,000     | 210,000     | 485,823     |
- | nq                | 6,110       | 30,000      | 58,568      | 58,568      |
- | scidocsrr         | 12,654      | 12,654      | 12,654      | 12,654      |
- | **total**         | **96,167**  | **255,974** | **404,058** | **679,881** |
-
- ## License
- The RLHN dataset is made available under the CC-BY-SA 4.0 license.
-
- ## Hashing & IDs
-
- We generate an md5 hash as the unique identifier (ID) for both queries and documents, using the code below:
-
- ```python
- import hashlib
-
- def get_md5_hash(text):
-     """Calculates the MD5 hash of a given string.
-
-     Args:
-         text: The string to hash.
-
-     Returns:
-         The MD5 hash of the string as a hexadecimal string.
-     """
-     text_bytes = text.encode('utf-8')  # encode the string to bytes
-     md5_hash = hashlib.md5(text_bytes).hexdigest()
-     return md5_hash
- ```
-
- ## Citation
- ```
- @misc{thakur2025relabel,
-   title={Fixing Data That Hurts Performance: Cascading LLMs to Relabel Hard Negatives for Robust Information Retrieval},
-   author={Nandan Thakur and Crystina Zhang and Xueguang Ma and Jimmy Lin},
-   year={2025},
-   eprint={2505.16967},
-   archivePrefix={arXiv},
-   primaryClass={cs.IR},
-   url={https://arxiv.org/abs/2505.16967},
- }
- ```
  ---
+ library_name: transformers
+ tags:
+ - text-ranking
  ---

+ # Model Card for Model ID
+
+ Model presented in the paper [Fixing Data That Hurts Performance: Cascading LLMs to Relabel Hard Negatives for Robust Information Retrieval](https://huggingface.co/papers/2505.16967).
+
+ ## Model Details
+
+ ### Model Description
+
+ This model is a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. It is fine-tuned using a novel approach of cascading LLM prompts to identify and relabel hard negatives for robust information retrieval.
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** Text-ranking
+ - **Language(s) (NLP):** en
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ - **Repository:** [More Information Needed]
+ - **Paper:** [Fixing Data That Hurts Performance: Cascading LLMs to Relabel Hard Negatives for Robust Information Retrieval](https://huggingface.co/papers/2505.16967)
+ - **Code:** https://github.com/luo-junyu/rlhn
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ ### Direct Use
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed]
+
+ #### Speeds, Sizes, Times [optional]
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ [More Information Needed]
+
+ #### Factors
+
+ [More Information Needed]
+
+ #### Metrics
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+ ## Model Examination [optional]
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
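For context on what this diff removes: the original card described an ID scheme where every query and document ID is the MD5 hex digest of its raw text. That scheme can be sanity-checked with a short, self-contained sketch; the sample query string below is hypothetical, not taken from the dataset:

```python
import hashlib

def get_md5_hash(text: str) -> str:
    """Return the MD5 hex digest of `text`, the ID scheme the original card describes."""
    return hashlib.md5(text.encode("utf-8")).hexdigest()

# Hypothetical query text; real IDs come from the dataset's own query/document strings.
query = "what is a hard negative in dense retrieval?"
query_id = get_md5_hash(query)
assert len(query_id) == 32  # MD5 hex digests are always 32 characters
print(query_id)
```

Because the digest is deterministic, recomputing it over a record's `query` field and comparing against its stored `query_id` is a cheap integrity check when merging splits.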