CK-12 TQA Multimodal: Textbook Question Answering with Images

Dataset Description

Dataset Summary

CK-12 TQA Multimodal is a comprehensive multimodal dataset for science education, containing 26,260 questions and 6,206 images from middle school science textbooks. This dataset is sourced from CK-12 Foundation's open educational resources and includes both text-only questions and diagram-based visual reasoning questions.

This is the complete multimodal version with all images properly organized and mapped to questions, ready for training vision-language models, visual question answering systems, and multimodal educational AI.

Dataset Highlights:

  • 🖼️ 6,206 images across 4 types (question diagrams, labeled diagrams, teaching images, textbook figures)
  • 📊 26,260 questions (12,567 require diagrams, 13,693 are text-only)
  • 📚 1,076 lessons from Life Science, Earth Science, and Physical Science
  • 🎯 Proper train/validation/test splits with images organized by split
  • 🔬 Middle school level (grades 6-8) science content
  • ✅ Images verified and mapped to questions with metadata

Original Source

This dataset builds on the original TQA (Textbook Question Answering) dataset released by the Allen Institute for AI (Kembhavi et al., CVPR 2017), available at allenai.org/data/tqa.

Supported Tasks

  • Visual Question Answering (VQA): Answer questions about scientific diagrams
  • Multimodal Reasoning: Combine text and visual information
  • Question Answering: Text-based comprehension
  • Science Education: Middle school science assessment
  • Diagram Understanding: Interpret charts, graphs, and scientific illustrations

Dataset Structure

Data Organization

ck12_tqa_multimodal/
├── ck12_tqa_train.jsonl          # 15,154 questions
├── ck12_tqa_validation.jsonl     # 5,309 questions
├── ck12_tqa_test.jsonl           # 5,797 questions
├── dataset_manifest.json         # Dataset statistics
└── images/
    ├── train/
    │   ├── question_images/      # 1,077 diagram question images
    │   ├── abc_question_images/  # 422 labeled diagram images
    │   ├── teaching_images/      # 185 instructional diagrams
    │   └── textbook_images/      # 1,972 textbook figures
    ├── validation/
    │   ├── question_images/      # 450 images
    │   ├── abc_question_images/  # 140 images
    │   ├── teaching_images/      # 60 images
    │   └── textbook_images/      # 593 images
    └── test/
        ├── question_images/      # 516 images
        ├── abc_question_images/  # 144 images
        ├── teaching_images/      # 31 images
        └── textbook_images/      # 616 images

Data Format

Each question record carries full metadata; diagram questions additionally include image mappings:

Non-Diagram Question:

{
  "id": "NDQ_000046",
  "source": "TQA-CK12",
  "split": "train",
  "lesson_id": "L_0002",
  "lesson_name": "earth science and its branches",
  "has_diagram": false,
  "instruction": "Answer the following multiple choice question from a science textbook.",
  "input": "Earth science is the study of\n\nOptions:\na) solid Earth.\nb) Earths oceans.\nc) Earths atmosphere.\nd) all of the above",
  "output": "d",
  "question_type": "Multiple Choice",
  "question_subtype": "Multiple Choice",
  "options": ["solid Earth.", "Earths oceans.", "Earths atmosphere.", "all of the above"],
  "option_labels": ["a", "b", "c", "d"]
}

Diagram Question (with image):

{
  "id": "DQ_000001",
  "source": "TQA-CK12",
  "split": "train",
  "lesson_id": "L_0003",
  "lesson_name": "erosion and deposition by flowing water",
  "has_diagram": true,
  "instruction": "Answer the following multiple choice question about the diagram from a science textbook.",
  "input": "How many actions are depicted in the diagram?\n\nOptions:\na) 6\nb) 4\nc) 8\nd) 7",
  "output": "d",
  "question_type": "Diagram Multiple Choice",
  "question_subtype": "",
  "image_path": "question_images/erosion_6843.png",
  "image_name": "erosion_6843.png",
  "options": ["6", "4", "8", "7"],
  "option_labels": ["a", "b", "c", "d"]
}
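
Since output stores only the answer letter, the full answer text can be recovered by indexing options via option_labels. A minimal sketch using the fields shown above:

import json

# Read one record and map its answer letter back to the option text
with open("ck12_tqa_train.jsonl", "r") as f:
    q = json.loads(f.readline())

answer_text = q["options"][q["option_labels"].index(q["output"])]
print(q["output"], "->", answer_text)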

Data Fields

  • id: Unique identifier (DQ_* = diagram question, NDQ_* = non-diagram)
  • source: "TQA-CK12"
  • split: "train", "validation", or "test"
  • lesson_id: Lesson identifier
  • lesson_name: Topic name
  • has_diagram: Boolean indicating if image is required
  • instruction: Model instruction prompt
  • input: Question text with formatted options
  • output: Correct answer (letter: a, b, c, d, etc.)
  • question_type: "Multiple Choice", "True/False", etc.
  • question_subtype: Additional categorization
  • options: List of answer choices
  • option_labels: List of choice labels
  • image_path: Relative path to image (for diagram questions)
  • image_name: Image filename (for diagram questions)

Image Types

1. Question Images (question_images/)

  • Purpose: Diagrams directly referenced in questions
  • Count: 2,043 images total
  • Usage: Required to answer diagram questions
  • Examples: Scientific diagrams, charts, illustrations

2. ABC Question Images (abc_question_images/)

  • Purpose: Labeled versions of diagrams with letter markers
  • Count: 706 images total
  • Usage: Questions about specific labeled parts
  • Examples: Diagrams with A, B, C, D labels

3. Teaching Images (teaching_images/)

  • Purpose: Instructional diagrams with detailed descriptions
  • Count: 276 images total
  • Usage: Supporting visual content
  • Examples: Concept illustrations, process diagrams

4. Textbook Images (textbook_images/)

  • Purpose: Figures from textbook lessons
  • Count: 3,181 images total
  • Usage: Contextual illustrations
  • Examples: Photos, drawings, charts from lessons
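
The per-type counts above can be checked against the image directories; a quick sketch, assuming the images/ tree is downloaded locally:

import os

for split in ["train", "validation", "test"]:
    for kind in ["question_images", "abc_question_images",
                 "teaching_images", "textbook_images"]:
        n = len(os.listdir(f"images/{split}/{kind}"))
        print(f"{split}/{kind}: {n} images")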

Data Splits

Split        Lessons   Questions   Non-Diagram   Diagram   Images
Train            666      15,154         8,653     6,501    3,656
Validation       200       5,309         2,528     2,781    1,243
Test             210       5,797         2,512     3,285    1,307
TOTAL          1,076      26,260        13,693    12,567    6,206

Split Strategy: Splits are created at the lesson level to minimize concept overlap between train/val/test sets. Related lessons are grouped together to prevent data leakage.
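
These counts can be reproduced directly from the JSONL files; a quick sketch, assuming the files are downloaded locally:

import json
from collections import Counter

for split in ["train", "validation", "test"]:
    with open(f"ck12_tqa_{split}.jsonl", "r") as f:
        questions = [json.loads(line) for line in f]
    kinds = Counter("diagram" if q["has_diagram"] else "text" for q in questions)
    lessons = {q["lesson_id"] for q in questions}
    print(f"{split}: {len(lessons)} lessons, {len(questions)} questions "
          f"({kinds['text']} non-diagram, {kinds['diagram']} diagram)")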

Usage

Loading the Dataset

from datasets import load_dataset
from PIL import Image
import json

# Load all three splits from the Hub
dataset = load_dataset(
    "notefill/ck12-tqa-multimodal",
    data_files={
        "train": "ck12_tqa_train.jsonl",
        "validation": "ck12_tqa_validation.jsonl",
        "test": "ck12_tqa_test.jsonl",
    },
)

# Alternatively, read a JSONL file directly (assumes the repo files,
# including the images/ tree, are downloaded locally)
with open("ck12_tqa_train.jsonl", "r") as f:
    for line in f:
        q = json.loads(line)
        if q["has_diagram"]:
            # Image paths are relative to images/<split>/
            img_path = f"images/{q['split']}/{q['image_path']}"
            image = Image.open(img_path)
            print(f"Question: {q['input']}")
            print(f"Answer: {q['output']}")
            image.show()
            break

Filter by Image Type

import json

# Keep only diagram questions, optionally restricted to one image type;
# the image-type directory (e.g. "question_images", "abc_question_images")
# is the first component of each question's image_path.
def filter_diagram_questions(jsonl_file, image_type=None):
    selected = []
    with open(jsonl_file, "r") as f:
        for line in f:
            q = json.loads(line)
            if not q["has_diagram"]:
                continue
            if image_type and not q["image_path"].startswith(image_type + "/"):
                continue
            selected.append(q)
    return selected

diagram_questions = filter_diagram_questions("ck12_tqa_train.jsonl")
print(f"Found {len(diagram_questions)} diagram questions")

Vision-Language Model Training

from transformers import VisionEncoderDecoderModel, AutoTokenizer, AutoFeatureExtractor
from PIL import Image
import json

# Load a vision-language model (placeholder; substitute a real checkpoint
# and the model/processor classes that match it)
model = VisionEncoderDecoderModel.from_pretrained("your-model")
tokenizer = AutoTokenizer.from_pretrained("your-model")
feature_extractor = AutoFeatureExtractor.from_pretrained("your-model")

# Prepare data
def prepare_multimodal_data(jsonl_file, image_dir):
    data = []
    with open(jsonl_file, "r") as f:
        for line in f:
            q = json.loads(line)
            if q["has_diagram"]:
                img_path = f"{image_dir}/{q['split']}/{q['image_path']}"
                image = Image.open(img_path).convert("RGB")
                data.append({
                    "image": image,
                    "question": q["input"],
                    "answer": q["output"],
                    "instruction": q["instruction"]
                })
    return data

train_data = prepare_multimodal_data("ck12_tqa_train.jsonl", "images")
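
prepare_multimodal_data decodes every image up front, which can exhaust memory on the full train split. One possible alternative, a sketch using datasets.Dataset with set_transform so that images are decoded only on access:

from datasets import Dataset
from PIL import Image
import json

with open("ck12_tqa_train.jsonl", "r") as f:
    records = [q for q in map(json.loads, f) if q["has_diagram"]]

ds = Dataset.from_list(records)

def attach_images(batch):
    # Decode images on the fly; all other columns pass through unchanged
    batch["image"] = [
        Image.open(f"images/{s}/{p}").convert("RGB")
        for s, p in zip(batch["split"], batch["image_path"])
    ]
    return batch

ds.set_transform(attach_images)
sample = ds[0]  # the image is decoded here, not at construction time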

Evaluation

from PIL import Image

def evaluate_vqa_model(model, test_data):
    """Evaluate visual question answering accuracy on diagram questions.

    Note: model.predict(image, question) is a placeholder for your own
    model's inference call; adapt it to your framework.
    """
    correct = 0
    total = 0

    for item in test_data:
        if item["has_diagram"]:
            # Resolve the image relative to the dataset root
            img_path = f"images/{item['split']}/{item['image_path']}"
            image = Image.open(img_path).convert("RGB")

            # Predicted answer letter (a, b, c, ...)
            prediction = model.predict(image, item["input"])

            if prediction.strip().lower() == item["output"].strip().lower():
                correct += 1
            total += 1

    return correct / total if total > 0 else 0.0
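
A usage sketch, assuming the test split is downloaded locally and MyVQAModel is a hypothetical wrapper exposing predict(image, question):

import json

# Each line of the test file is one question dict
with open("ck12_tqa_test.jsonl", "r") as f:
    test_data = [json.loads(line) for line in f]

model = MyVQAModel()  # hypothetical: substitute your own model wrapper
accuracy = evaluate_vqa_model(model, test_data)
print(f"Diagram-question accuracy: {accuracy:.1%}")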

Subject Coverage

Life Science (Biology)

  • Cell structure and function
  • Genetics and heredity
  • Ecology and ecosystems
  • Evolution and adaptation
  • Human body systems
  • Plants and animals

Earth Science

  • Geology and rock cycle
  • Plate tectonics
  • Weather and climate
  • Oceans and water cycle
  • Earth's atmosphere
  • Astronomy and solar system

Physical Science

  • Matter and its properties
  • Chemical reactions
  • Forces and motion
  • Energy forms and transfer
  • Waves and sound
  • Electricity and magnetism

Benchmark Performance

This dataset is designed to be a challenging benchmark for multimodal comprehension. Key difficulty factors:

  1. Visual Reasoning: Requires interpreting scientific diagrams
  2. Multi-step Reasoning: Many questions need multiple inferences
  3. Domain Knowledge: Requires middle school science knowledge
  4. Diagram Complexity: Varying levels of visual complexity

Considerations for Using the Data

Recommended Uses

✅ Training vision-language models for education
✅ Evaluating multimodal reasoning capabilities
✅ Building educational AI tutoring systems
✅ Research in visual question answering
✅ Developing diagram understanding systems
✅ Science education assessment tools

Limitations

⚠️ Grade Level: Limited to middle school (ages 11-14)
⚠️ Subject Scope: Only Life, Earth, and Physical Science
⚠️ Language: English only
⚠️ Image Quality: Varies by source
⚠️ Answer Format: Multiple choice only
⚠️ Non-Commercial: License restricts commercial use

Ethical Considerations

  • Dataset designed for research and non-commercial educational purposes
  • Content sourced from CK-12's open educational resources
  • Should be used to improve educational access and equity
  • Human oversight recommended for student-facing applications
  • Consider diverse student needs when deploying AI systems

Citation

Original TQA Dataset

@inproceedings{Kembhavi2017tqa,
  title={Are You Smarter Than A Sixth Grader? Textbook Question Answering for Multimodal Machine Comprehension},
  author={Aniruddha Kembhavi and Minjoon Seo and Dustin Schwenk and Jonghyun Choi and Ali Farhadi and Hannaneh Hajishirzi},
  booktitle={Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2017}
}

This Multimodal Version

@dataset{ck12_tqa_multimodal2025,
  title={CK-12 TQA Multimodal: Textbook Question Answering with Images},
  author={Kuyeso Rogers and Adiza Alhassan and Notefill},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/datasets/notefill/ck12-tqa-multimodal}},
  note={Complete multimodal dataset with images from CK-12 textbooks}
}

Licensing

Creative Commons Attribution-NonCommercial 3.0 Unported (CC BY-NC 3.0)

You are free to:

  • Share: Copy and redistribute the material
  • Adapt: Remix, transform, and build upon the material

Under the following terms:

  • 📝 Attribution: Credit CK-12 Foundation and TQA authors
  • NonCommercial: No commercial use without permission

For commercial licensing, contact CK-12 Foundation.

Additional Resources

Acknowledgments

We gratefully acknowledge:

  • CK-12 Foundation for creating and freely distributing world-class science educational materials under open licenses
  • Original TQA Authors: Aniruddha Kembhavi, Minjoon Seo, Dustin Schwenk, Jonghyun Choi, Ali Farhadi, and Hannaneh Hajishirzi for their groundbreaking work on multimodal machine comprehension
  • Allen Institute for AI (AI2) for hosting and maintaining the original dataset
  • The educators and content creators who contributed to CK-12's science curriculum

About CK-12 Foundation

CK-12 Foundation is a non-profit organization providing free, high-quality K-12 STEM content. Their mission is to reduce the cost of textbook materials and increase access to quality education worldwide through openly-licensed, customizable content.

Dataset Statistics

{
  "total_questions": 26260,
  "total_images": 6206,
  "total_lessons": 1076,
  "diagram_questions": 12567,
  "text_questions": 13693,
  "image_types": 4,
  "subjects": ["Life Science", "Earth Science", "Physical Science"],
  "grade_level": "Middle School (6-8)",
  "languages": ["English"]
}

Contact

For questions about this multimodal version, please open an issue on the dataset repository.

For questions about the original TQA dataset, visit allenai.org/data/tqa.

For CK-12 content and licensing inquiries, visit www.ck12.org.
