---
dataset_info:
  features:
    - name: id
      dtype: int64
    - name: text
      dtype: string
    - name: title
      dtype: string
  splits:
    - name: train
      num_bytes: 13638576512
      num_examples: 20970784
  download_size: 7557029888
  dataset_size: 13638576512
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
task_categories:
  - question-answering
language:
  - en
size_categories:
  - 10M<n<100M
---

# Wikipedia Dump without Duplicates

## Dataset Summary

This is a cleaned and de-duplicated version of the English Wikipedia dump dated December 20, 2018. Originally sourced from the DPR repository, it has been processed to remove duplicates, resulting in a final count of 20,970,784 passages, each consisting of 100 words. The original corpus is available for download via this link.
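
A minimal loading sketch with the 🤗 `datasets` library is shown below. The repository id used here is an assumption for illustration, not something stated in this card; substitute the actual Hub id of this dataset.

```python
from datasets import load_dataset

# Stream the corpus to avoid materializing the full ~7.5 GB download at once.
# NOTE: the repository id below is a hypothetical placeholder; replace it with
# the real dataset id on the Hugging Face Hub.
corpus = load_dataset("florin-hf/wiki-dump-2018-no-duplicates", split="train", streaming=True)

for passage in corpus.take(3):
    # Each record exposes the features declared in the metadata above:
    # id (int64), text (string, a 100-word passage), title (string).
    print(passage["id"], passage["title"])
    print(passage["text"][:200], "...")
```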

The corpus is used in the research paper *A Tale of Trust and Accuracy: Base vs. Instruct LLMs in RAG Systems*, supporting experiments that compare base and instruct large language models within retrieval-augmented generation (RAG) systems.