---
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: text
    dtype: string
  - name: title
    dtype: string
  splits:
  - name: train
    num_bytes: 13638576512
    num_examples: 20970784
  download_size: 7557029888
  dataset_size: 13638576512
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- question-answering
language:
- en
size_categories:
- 10M<n<100M
---
# Wikipedia Dump without Duplicates
## Dataset Summary
This is a cleaned and de-duplicated version of the English Wikipedia dump of December 20, 2018, originally distributed with the [DPR repository](https://github.com/facebookresearch/DPR). After duplicate removal, the corpus contains **20,970,784** passages of 100 words each.
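
A minimal loading sketch with the 🤗 `datasets` library is shown below; the repository ID is a placeholder (this card does not state it), so substitute the ID displayed at the top of this page:

```python
from datasets import load_dataset

# Placeholder repository ID -- replace with the ID shown on this dataset page.
REPO_ID = "florin-hf/wiki-dump-2018-no-duplicates"  # assumed name, not verified

# The YAML header above declares a single "train" split with
# features: id (int64), text (string), title (string).
corpus = load_dataset(REPO_ID, split="train")

sample = corpus[0]
print(sample["id"], sample["title"])
print(sample["text"][:80])  # first 80 characters of the 100-word passage
```
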
The original, non-deduplicated corpus can be downloaded from [this link](https://dl.fbaipublicfiles.com/dpr/wikipedia_split/psgs_w100.tsv.gz).
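
If you are working from that raw DPR file instead, the sketch below streams it row by row, assuming the standard DPR layout (a header row, then tab-separated `id`, `text`, `title` columns):

```python
import csv
import gzip

# Stream the large gzipped TSV without decompressing it to disk.
with gzip.open("psgs_w100.tsv.gz", mode="rt", encoding="utf-8", newline="") as f:
    reader = csv.reader(f, delimiter="\t")
    header = next(reader)  # expected: ["id", "text", "title"]
    for i, (passage_id, text, title) in enumerate(reader):
        if i == 3:  # print a few rows, then stop
            break
        print(passage_id, title, "->", text[:60])
```
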
The corpus is used in the paper [A Tale of Trust and Accuracy: Base vs. Instruct LLMs in RAG Systems](https://arxiv.org/abs/2406.14972), where it supports experiments comparing base and instruct large language models in Retrieval-Augmented Generation (RAG) systems.