MT-RAG Benchmark - Retrieval Results
This dataset contains experimental results from the Multi-Turn RAG (MT-RAG) benchmark, focusing on the retrieval task across multiple domains.
Dataset Description
Competition: MT-RAG Benchmark - Task A (Retrieval)
Date: January 2026
Domains: CLAPNQ, CLOUD, FIQA, GOVT
Contents
1. Baseline Results with Ground Truth Rewrites
Directory: submissions/baselines_rewrite/
Results for 5 retrieval models using query rewrites provided in the original dataset:
- BM25: Traditional sparse retrieval
- SPLADE: Learned sparse retrieval (best sparse model)
- BGE-1.5: Dense retrieval (BAAI/bge-large-en-v1.5)
- BGE-M3: Multilingual, multi-functionality dense retrieval (BAAI/bge-m3)
- Voyage-3: Commercial dense retrieval API
20 experiments total (5 models × 4 domains)
2. Hybrid Retrieval Results
Directory: submissions/hybrid/
Results combining sparse and dense methods using Reciprocal Rank Fusion (RRF); a short fusion sketch follows this section:
- SPLADE + Voyage-3: For CLAPNQ and GOVT (strongest domains)
- SPLADE + BGE-1.5: For CLOUD and FIQA (cost-effective)
Both with and without query rewrites.
8 experiments total (4 configurations × 2 rewrite variants)
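As an illustration of the fusion step, here is a minimal RRF sketch in Python. The constant k = 60 is a common default and an assumption here; the actual fusion settings used in these experiments are described in RESULTS_SUMMARY.md.

def rrf_fuse(rankings, k=60):
    """Fuse ranked lists of document ids into a single RRF-ordered list."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Illustrative document ids ranked by a sparse and a dense retriever for one query
sparse_ranking = ["doc3", "doc1", "doc7"]   # e.g. SPLADE
dense_ranking = ["doc1", "doc7", "doc9"]    # e.g. Voyage-3
print(rrf_fuse([sparse_ranking, dense_ranking]))  # ['doc1', 'doc7', 'doc3', 'doc9']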
File Structure
submissions/
├── baselines_rewrite/
│   ├── A2_baseline_bm25_rewrite/
│   │   ├── clapnq/
│   │   │   ├── metrics.json
│   │   │   └── retrieval_results.jsonl
│   │   ├── cloud/
│   │   ├── fiqa/
│   │   └── govt/
│   ├── A2_baseline_splade_rewrite/
│   ├── A2_baseline_bge15_rewrite/
│   ├── A2_baseline_bgem3_rewrite/
│   └── A2_baseline_voyage_rewrite/
│
├── hybrid/
│   ├── hybrid_splade_voyage_norewrite/
│   ├── hybrid_splade_voyage_rewrite/
│   ├── hybrid_splade_bge15_norewrite/
│   └── hybrid_splade_bge15_rewrite/
│
└── RESULTS_SUMMARY.md
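As a sketch, the whole tree can be fetched locally with huggingface_hub. The repository id is taken from the citation URL below, and the examples in the following sections assume the files sit under this local path.

# Sketch: download the full submissions/ tree from the Hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="vania-janet/multiturn-rag-retrieval-data",  # id from the citation URL below
    repo_type="dataset",
)
print(local_dir)  # local path containing submissions/ and RESULTS_SUMMARY.md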
Metrics
Each experiment includes a metrics.json with the following metrics (a loading sketch follows this list):
- nDCG @ 5, 10, 20, 100
- Recall @ 5, 10, 20, 100
- MAP @ 5, 10, 20, 100
- Precision @ 5, 10, 20, 100
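For example, nDCG@10 could be collected across the baseline experiments with a few lines of Python. The key name "ndcg_at_10" is an assumption; adjust it to the keys actually present in metrics.json.

# Sketch: gather nDCG@10 for every (model, domain) baseline experiment.
import json
from pathlib import Path

root = Path("submissions/baselines_rewrite")  # path inside the downloaded dataset
for metrics_file in sorted(root.glob("*/*/metrics.json")):
    model, domain = metrics_file.parts[-3], metrics_file.parts[-2]
    metrics = json.loads(metrics_file.read_text())
    print(model, domain, metrics.get("ndcg_at_10"))  # key name is an assumption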
Best Results (nDCG@10)
| Domain | Configuration | Score |
|---|---|---|
| CLAPNQ | SPLADE + Voyage-3 (Rewrite) | 0.56266 |
| GOVT | SPLADE + Voyage-3 (Rewrite) | 0.53445 |
| CLOUD | SPLADE + BGE-1.5 (Rewrite) | 0.44028 |
| FIQA | SPLADE + BGE-1.5 (Rewrite) | 0.40589 |
Average: 0.48582
Key Findings
- Ground truth rewrites are effective: +9% to +26% improvement over last-turn queries
- SPLADE is the best sparse retriever: Consistent performance across all domains (avg 0.457)
- Hybrid methods outperform individual retrievers: +3% to +10% improvement
- Domain-specific optimization matters: Voyage-3 for the strongest domains (CLAPNQ, GOVT), BGE-1.5 for the weaker, cost-sensitive ones (CLOUD, FIQA)
- BGE-M3 underperforms with rewrites: it should be avoided in rewrite scenarios
Retrieval Results Format
Each retrieval_results.jsonl contains one JSON object per query:
{
  "task_id": "clapnq_123",
  "question": "How do I configure SSL?",
  "contexts": [
    {
      "document_id": "doc123_chunk5",
      "score": 0.85,
      "text": "To configure SSL..."
    },
    ...
  ],
  "Collection": "clapnq",
  "turn_id": 3
}
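A minimal reading sketch follows; the path is illustrative, and contexts are assumed to be ordered by descending score as in the example above.

# Sketch: stream one experiment's retrieval results and print the top hit per query.
import json

path = "submissions/hybrid/hybrid_splade_voyage_rewrite/clapnq/retrieval_results.jsonl"  # illustrative
with open(path) as f:
    for line in f:
        record = json.loads(line)
        top = record["contexts"][0]  # assumed score-ordered, highest first
        print(record["task_id"], record["turn_id"], top["document_id"], top["score"])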
Citation
If you use this dataset, please cite:
@dataset{mt_rag_retrieval_results_2026,
title={MT-RAG Benchmark Retrieval Results},
author={Your Name},
year={2026},
publisher={Hugging Face},
url={https://huggingface.co/datasets/vania-janet/multiturn-rag-retrieval-data}
}
License
Apache 2.0
Additional Information
For methodology details, see RESULTS_SUMMARY.md in the dataset.