---
annotations_creators:
  - machine-generated
language:
  - en
license: cc-by-4.0
tags:
  - medical
  - rag
  - synthetic-qa
  - lay-symptom
pretty_name: MediMaven-QA v1.1
size_categories:
  - 100K<n<1M
dataset_info:
  - config_name: kb_chunks
    features:
      - name: id
        dtype: string
      - name: url
        dtype: string
      - name: title
        dtype: string
      - name: section
        dtype: string
      - name: source
        dtype: string
      - name: text
        dtype: string
      - name: retrieved_date
        dtype: string
      - name: n_tokens
        dtype: int64
    splits:
      - name: train
        num_bytes: 133140842
        num_examples: 70743
    download_size: 51361461
    dataset_size: 133140842
  - config_name: qa_long
    features:
      - name: chunk_id
        dtype: string
      - name: question
        dtype: string
      - name: answer
        dtype: string
    splits:
      - name: train
        num_bytes: 52621793
        num_examples: 143280
    download_size: 26138154
    dataset_size: 52621793
  - config_name: qa_wide
    features:
      - name: chunk_id
        dtype: string
      - name: qa
        list:
          - name: answer
            dtype: string
          - name: question
            dtype: string
    splits:
      - name: train
        num_bytes: 49971385
        num_examples: 70018
    download_size: 27339393
    dataset_size: 49971385
configs:
  - config_name: kb_chunks
    data_files:
      - split: train
        path: kb_chunks/train-*
  - config_name: qa_long
    data_files:
      - split: train
        path: qa_long/train-*
  - config_name: qa_wide
    data_files:
      - split: train
        path: qa_wide/train-*
---


# 🩺 MediMaven-QA v1.1

MediMaven-QA is a multi-source, medically grounded question-answer (QA) corpus designed for retrieval-augmented generation (RAG), LLM fine-tuning, and benchmarking factual consistency in healthcare chatbots. Version 1.1 aggregates Mayo Clinic, NHS.uk, WebMD, and CDC consumer health pages, then auto-generates lay-friendly QA pairs with GPT-4o-mini. Each passage is PHI-scrubbed, CC-BY licensed, and versioned with Weights & Biases.

- ~150 k QA pairs
- ~28 M tokens
- Balanced mix of concise FAQs and narrative symptom descriptions

## 📦 Dataset Contents

| Config | Rows | What it holds | Typical use-case |
|---|---|---|---|
| `kb_chunks` | 70,743 | 400-token, sentence-aware context windows with rich metadata (`id`, `url`, `title`, `section`, `source`, `n_tokens`, `text`) | RAG context store / retriever training |
| `qa_wide` | 70,018 | List-of-dict QA per `chunk_id`; a single row may hold ≥1 QA pair | Fast retrieval + generation; keeps chunk linkage |
| `qa_long` | 143,280 | Fully exploded (`chunk_id`, `question`, `answer`) rows | Classic supervised QA fine-tuning or eval |
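The difference between `qa_wide` and `qa_long` is purely shape. A minimal sketch of exploding wide rows into long rows, using toy data in place of the real splits:

```python
# Toy rows mirroring the qa_wide schema: one row per chunk_id,
# each carrying a list of {question, answer} dicts.
wide_rows = [
    {"chunk_id": "c1", "qa": [
        {"question": "What causes gout?",
         "answer": "A buildup of uric acid crystals in the joints."},
        {"question": "I woke up with a painful, swollen big toe. What could it be?",
         "answer": "Sudden pain and swelling in the big toe is a classic sign of a gout flare."},
    ]},
    {"chunk_id": "c2", "qa": [
        {"question": "How is a gout flare treated?",
         "answer": "Typically with NSAIDs, colchicine, or corticosteroids."},
    ]},
]

def explode_wide_to_long(rows):
    """Flatten qa_wide-style rows into qa_long-style (chunk_id, question, answer) rows."""
    return [
        {"chunk_id": r["chunk_id"], "question": p["question"], "answer": p["answer"]}
        for r in rows
        for p in r["qa"]
    ]

long_rows = explode_wide_to_long(wide_rows)
print(len(long_rows))  # 2 wide rows -> 3 long rows
```

This is why `qa_long` has roughly twice as many rows as `qa_wide`: most chunks carry two QA pairs.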

> ⚠️ **Disclaimer:** This corpus is for research & benchmarking only. It is not a diagnostic tool and should not be used in clinical workflows.

## 🚀 Quick Load

```python
from datasets import load_dataset

# Pick one of these configs
qa_long = load_dataset("dranreb1660/medimaven-qa-data", "qa_long", split="train")
qa_wide = load_dataset("dranreb1660/medimaven-qa-data", "qa_wide", split="train")
# Pair with kb_chunks to retrieve the source contexts
chunks = load_dataset("dranreb1660/medimaven-qa-data", "kb_chunks", split="train")

print(qa_long[0]["question"])
print(qa_long[0]["answer"])
```
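To assemble (context, question, answer) triples for RAG training or evaluation, join QA rows back to their chunks via `chunk_id`. A sketch with toy rows standing in for the downloaded splits:

```python
# Toy rows standing in for the kb_chunks and qa_long splits.
chunk_rows = [
    {"id": "c1", "title": "Gout", "section": "Overview", "source": "mayo",
     "text": "Gout is a form of arthritis caused by uric acid crystal buildup."},
]
qa_rows = [
    {"chunk_id": "c1", "question": "What causes gout?",
     "answer": "A buildup of uric acid crystals in the joints."},
]

# Build an id -> text lookup, then attach each QA pair to its source context.
ctx = {c["id"]: c["text"] for c in chunk_rows}
triples = [
    {"context": ctx[q["chunk_id"]], "question": q["question"], "answer": q["answer"]}
    for q in qa_rows
    if q["chunk_id"] in ctx
]
print(triples[0]["context"][:4])  # "Gout"
```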

πŸ› οΈ Creation Pipeline

  1. Scraped Mayo Clinic, NHS.uk, WebMD, (licensed for public use).
  2. Sentence-aware chunking (spaCy).
  3. GPT-4o-mini synthetic Q-A:
    • one concise lay question
    • one symptom-narrative question (e.g., β€œI woke up with a painful lump …”)
  4. Cost: $40 for ~150 k pairs.
  5. Versions tracked as W&B artifacts.
Stage Tooling Notes
1️⃣ Crawl Scrapy + Splash Mayo Clinic, NHS.uk, WebMD, (public-domain / permissive T&Cs)
2️⃣ Chunk spaCy sentenciser β‰ˆ400 tokens / chunk; keeps heading context
3️⃣ Synthetic QA GPT-4o-mini (gpt-4o-mini-2024-05-preview) β€’ 1 concise lay Q
β€’ 1 symptom-narrative Q
β†’ cost $40 for ~150 k pairs
4️⃣ Versioning Weights & Biases Artifacts kb_chunks, qa_wide qa_long
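The chunking stage can be sketched as a greedy, sentence-aware packer. The sketch below substitutes a naive regex sentence splitter and whitespace token counts for spaCy's sentencizer and the GPT-4 tokenizer, so the numbers are illustrative only:

```python
import re

def chunk_sentences(text, max_tokens=400):
    """Greedy sentence-aware chunking: pack whole sentences until ~max_tokens
    words, never splitting mid-sentence. (The real pipeline uses spaCy's
    sentencizer and a tiktoken count; this regex splitter is a stand-in.)"""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current, count = [], [], 0
    for sent in sentences:
        n = len(sent.split())
        if current and count + n > max_tokens:
            chunks.append(" ".join(current))
            current, count = [], 0
        current.append(sent)
        count += n
    if current:
        chunks.append(" ".join(current))
    return chunks

doc = "First sentence here. " * 50
pieces = chunk_sentences(doc, max_tokens=30)
print(len(pieces))  # 5 chunks of 10 three-word sentences each
```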

## 📊 Key Stats

| Metric | Value |
|---|---|
| Total context tokens | ~28 M |
| Avg. tokens / chunk | 390 |
| Unique host domains | 4 |
| QA pairs / chunk (mean) | 2.0 |
| % symptom-narrative Qs | 51 % |
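The per-chunk figures can be recomputed from the splits themselves. A sketch using toy rows (the real card reports ≈390 average tokens and 2.0 QA pairs per chunk):

```python
from collections import Counter
from statistics import mean

# Toy rows standing in for kb_chunks and qa_long.
kb = [{"id": "c1", "n_tokens": 380}, {"id": "c2", "n_tokens": 400}]
qa = [{"chunk_id": "c1"}, {"chunk_id": "c1"},
      {"chunk_id": "c2"}, {"chunk_id": "c2"}]

avg_tokens = mean(c["n_tokens"] for c in kb)          # 390
pairs_per_chunk = Counter(q["chunk_id"] for q in qa)  # QA count per chunk_id
mean_pairs = mean(pairs_per_chunk.values())           # 2.0
```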

## 🧩 Dataset Structure

| Field | Type | Description |
|---|---|---|
| `id` | string | UUID4 per chunk |
| `url` | string | Canonical page URL |
| `title` | string | Page headline |
| `section` | string | `<h2>`/`<h3>` section header |
| `source` | string | Domain slug (`mayo`, `nhs`, `webmd`, `cdc`) |
| `text` | string | ≤ 400-token chunk |
| `n_tokens` | int64 | Token count (tiktoken GPT-4 tokenizer) |
| `qa` | list[dict] | Each dict has `question`, `answer` (`qa_wide` only) |
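A quick way to sanity-check rows against this schema before ingesting them into a retriever; `REQUIRED` and `validate_chunk` are illustrative helpers, not part of the dataset tooling:

```python
# Expected field -> type mapping for a kb_chunks row (field names from the table above).
REQUIRED = {"id": str, "url": str, "title": str, "section": str,
            "source": str, "text": str, "n_tokens": int}

def validate_chunk(row):
    """Return True if every required field is present with the expected type."""
    return all(isinstance(row.get(k), t) for k, t in REQUIRED.items())

good = {"id": "c1", "url": "https://example.org/gout", "title": "Gout",
        "section": "Overview", "source": "mayo",
        "text": "Gout is a form of arthritis.", "n_tokens": 7}
bad = {"id": 42}
print(validate_chunk(good), validate_chunk(bad))  # True False
```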

πŸ—’οΈ Changelog

Date (UTC) Version Notes
2025-05-29 1.1 Multi-source crawl, synthetic QA, PHI scrub, W&B tracking
2024-11-01 1.0 MedQuad + iCliniq seed release

## Contributions

PRs welcome! Please open an issue describing planned changes.
We follow the Hugging Face Datasets community guidelines.

Maintainer: Bernard Kyei-Mensah β€” [email protected]
LinkedIn / GitHub: @dranreb1660

## Acknowledgements

Thanks to the open-source healthcare community, Hugging Face, and contributors who reported issues and suggested improvements.

## 📜 Citation

```bibtex
@misc{KyeiMensah2025MediMavenQA,
  author = {Kyei-Mensah, Bernard},
  title  = {MediMaven-QA: A Citation-Preserving Medical Q\&A Dataset with Symptom Narratives},
  year   = {2025},
  url    = {https://huggingface.co/datasets/dranreb1660/medimaven-qa-data},
  note   = {Version 1.1}
}
```