---
annotations_creators:
  - machine-generated
language:
  - en
license: cc-by-4.0
tags:
  - medical
  - rag
  - synthetic-qa
  - lay-symptom
pretty_name: MediMaven-QA v1.0
size_categories:
  - 100K<n<1M
dataset_info:
  config_name: qa_long
  features:
    - name: chunk_id
      dtype: string
    - name: question
      dtype: string
    - name: answer
      dtype: string
  splits:
    - name: train
      num_bytes: 52621793
      num_examples: 143280
  download_size: 26138154
  dataset_size: 52621793
configs:
  - config_name: qa_long
    data_files:
      - split: train
        path: qa_long/train-*
---

# 🩺 MediMaven-QA v1.0

MediMaven-QA is a chunk-level, citation-preserving medical question-answer corpus purpose-built for Retrieval-Augmented Generation (RAG).
It bridges everyday lay-symptom narratives with trustworthy clinical content from curated web sources.

## 📦 Dataset Contents

| Config (name) | Rows | What it holds | Typical use-case |
| --- | --- | --- | --- |
| `chunks` | 70,248 | 200-token, sentence-aware context windows with rich metadata (`id`, `url`, `title`, `section`, `source`, `n_token`, `text`) | RAG context store / retriever training |
| `qa_wide` | 70,018 | List-of-dict QA per `chunk_id`; a single row may hold ≥ 1 QA pair (see the flattening sketch below) | Fast retrieval + generation, keeps chunk linkage |
| `qa_long` | 143,221 | Fully exploded (`chunk_id`, `question`, `answer`) rows | Classic supervised QA fine-tuning or eval |
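
For workflows that start from `qa_wide`, the nested QA lists can be flattened into `qa_long`-style rows, as sketched below. This is a minimal sketch, assuming `qa_wide` is exposed as a config like `qa_long`, that the repo id matches the Quick Load snippet further down, and that each row's `qa` field materialises as a list of `{"question", "answer"}` dicts (adjust if it comes back as a dict of lists).

```python
from datasets import Dataset, load_dataset

# Sketch: flatten qa_wide (one row per chunk, a list of QA dicts) into
# qa_long-style rows (one row per question/answer pair).
qa_wide = load_dataset("bernard-kyei/medimaven-qa-data", "qa_wide", split="train")

flat = {"chunk_id": [], "question": [], "answer": []}
for row in qa_wide:
    for pair in row["qa"]:                       # assumed: list of QA dicts
        flat["chunk_id"].append(row["chunk_id"])
        flat["question"].append(pair["question"])
        flat["answer"].append(pair["answer"])

qa_long_like = Dataset.from_dict(flat)
```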

> ⚠️ **Disclaimer:** This corpus is for research & benchmarking only.
> It is not a diagnostic tool and should not be used in clinical workflows.

## 🚀 Quick Load

```python
from datasets import load_dataset

# QA pairs, one row per (chunk_id, question, answer)
qa_long = load_dataset("bernard-kyei/medimaven-qa-data", "qa_long", split="train")

# accompany with the chunks config to get the source contexts
chunks  = load_dataset("bernard-kyei/medimaven-qa-data", "kb_chunks", split="train")

print(qa_long[0]["question"])
print(qa_long[0]["answer"])
```

πŸ› οΈ Generation Pipeline

| Stage | Tooling | Notes |
| --- | --- | --- |
| 1️⃣ Crawl | Scrapy + Splash | Mayo Clinic, NHS.uk, WebMD, Cleveland Clinic (public-domain / permissive T&Cs) |
| 2️⃣ Chunk | spaCy sentencizer | ≈200 tokens / chunk; keeps heading context (see the sketch below) |
| 3️⃣ Synthetic QA | GPT-4o-mini (`gpt-4o-mini-2024-05-preview`) | • 1 concise lay Q<br>• 1 symptom-narrative Q<br>→ cost $40 for 143 k pairs |
| 4️⃣ Versioning | Weights & Biases Artifacts | `kb_chunks`, `qa_wide`, `qa_long` |
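
The chunking stage is not published as code in this card; the snippet below is a minimal sketch of sentence-aware, roughly 200-token chunking built on spaCy's rule-based `sentencizer`. The heading prefix stands in for the "keeps heading context" behaviour, and spaCy token counts will differ slightly from whatever tokenizer the real pipeline used.

```python
import spacy

# Rule-based sentence splitting only; no trained pipeline required.
nlp = spacy.blank("en")
nlp.add_pipe("sentencizer")

def chunk_text(text: str, heading: str = "", max_tokens: int = 200) -> list[str]:
    """Group whole sentences into ~max_tokens chunks, prefixing the section heading."""
    chunks, current, n_tokens = [], [], 0
    for sent in nlp(text).sents:
        if current and n_tokens + len(sent) > max_tokens:
            chunks.append((heading + "\n" if heading else "") + " ".join(current))
            current, n_tokens = [], 0
        current.append(sent.text)
        n_tokens += len(sent)          # spaCy token count for this sentence
    if current:
        chunks.append((heading + "\n" if heading else "") + " ".join(current))
    return chunks
```

For example, `chunk_text(article_body, heading="Symptoms")` yields heading-prefixed windows in the spirit of the `text` field in the `chunks` config.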

## 📊 Key Stats

| Metric | Value |
| --- | --- |
| Total context tokens | 27.4 M |
| Avg. tokens / chunk | 390 |
| Unique host domains | 4 |
| QA pairs / chunk (mean) | 2.0 |
| % symptom-narrative Qs | 51 % |
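
The headline numbers can be re-derived from the loaded splits. A quick check, assuming the `qa_long` and `chunks` variables from the Quick Load snippet and the `n_token` column listed in the contents table:

```python
from collections import Counter

pairs_per_chunk = Counter(qa_long["chunk_id"])
print(sum(pairs_per_chunk.values()) / len(pairs_per_chunk))  # mean QA pairs / chunk (~2.0)

n_tokens = chunks["n_token"]
print(sum(n_tokens) / len(n_tokens))        # avg tokens / chunk
print(sum(n_tokens) / 1e6, "M tokens")      # total context tokens
```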

## 🧩 Dataset Structure (Arrow schema)

<details>
<summary>click to expand</summary>

```text
┌──────────────┬─────────────────────┐
│    chunks    │  qa_wide / qa_long  │
├──────────────┼─────────────────────┤
│ id: string   │ chunk_id: string    │
│ url: string  │ question: string    │
│ title: str   │ answer: string      │
│ section: str │ -- qa_wide only --  │
│ source: str  │ qa: list            │
│ text: str    │                     │
│ n_token: int │                     │
└──────────────┴─────────────────────┘
```

</details>
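
The same schema can also be inspected programmatically via the `features` attribute of the splits loaded in Quick Load:

```python
# Arrow-backed feature schemas, matching the diagram above.
print(chunks.features)     # id, url, title, section, source, n_token, text
print(qa_long.features)    # chunk_id, question, answer

# Or hand the QA split to pandas for quick exploration.
print(qa_long.to_pandas().head())
```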

## 📜 Citation

```bibtex
@misc{KyeiMensah2025MediMavenQA,
  author  = {Kyei-Mensah, Bernard},
  title   = {MediMaven-QA: A Citation-Preserving Medical Q\&A Dataset with Symptom Narratives},
  year    = {2025},
  url     = {https://huggingface.co/datasets/dranreb1660/medimaven-qa-data},
  note    = {Version 1.0}
}
```

πŸ—’οΈ Changelog

| Date (UTC) | Version | Highlights |
| --- | --- | --- |
| 2025-05-27 | v1.0 | • Sentence-aware chunking<br>• 143 k synthetic QA pairs<br>• Cost optimisation to $25 |