---
license: mit
dataset_info:
  features:
    - name: id
      dtype: string
    - name: instruction
      dtype: string
    - name: question
      dtype: string
    - name: option1
      dtype: string
    - name: option2
      dtype: string
    - name: option3
      dtype: string
    - name: option4
      dtype: string
    - name: answer
      dtype: string
    - name: image
      dtype: image
    - name: audio
      dtype: audio
  splits:
    - name: validation
      num_bytes: 873288128
      num_examples: 900
  download_size: 819328629
  dataset_size: 873288128
configs:
  - config_name: default
    data_files:
      - split: validation
        path: data/validation-*
---

# Multi-TW: Traditional Chinese Language Learning Dataset

## Dataset Description

Multi-TW is a Traditional Chinese language learning and assessment dataset containing 900 multiple-choice questions paired with image or audio content. It is designed for evaluating multimodal language models on Traditional Chinese listening and reading comprehension tasks.

## Dataset Structure

The dataset consists of a single validation split with 900 samples, intended for benchmarking.

### Data Fields

- `id`: Unique identifier for each question
- `instruction`: Task instructions in Chinese
- `question`: The question text in Chinese
- `option1`: Multiple-choice option A
- `option2`: Multiple-choice option B
- `option3`: Multiple-choice option C
- `option4`: Multiple-choice option D (may be empty)
- `answer`: Correct answer (A, B, C, or D)
- `image`: PIL Image object (for visual questions)
- `audio`: Audio data with sampling rate (for audio questions)

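These fields correspond to the feature types declared in the metadata block at the top of this card. A minimal sketch for confirming the schema locally with the `datasets` library:

```python
from datasets import load_dataset

# Load the single validation split
ds = load_dataset("ntuai/multi-tw", split="validation")

# Feature types mirror the field list above:
# string columns plus an Image and an Audio feature
print(ds.features)
print(ds.num_rows)  # 900
```
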
### Data Composition

- Total samples: 900
- Samples with images: 450
- Samples with audio: 450
- Answer distribution: A: 249, B: 261, C: 263, D: 127
- Question types: Listening (L): 660, Reading (R): 240

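These figures can be recomputed from the split itself. The sketch below uses `collections.Counter` for the answer distribution and disables media decoding so the presence check does not load every image or audio file; the listening/reading counts are reported as-is above, since question type is not stored as a separate field:

```python
from collections import Counter

from datasets import Audio, Image, load_dataset

ds = load_dataset("ntuai/multi-tw", split="validation")

# Answer distribution over A/B/C/D
print(Counter(ds["answer"]))

# Turn off decoding so the presence check only touches metadata,
# not the underlying image bytes or audio waveforms
meta = ds.cast_column("image", Image(decode=False)).cast_column("audio", Audio(decode=False))
print("with image:", sum(x is not None for x in meta["image"]))
print("with audio:", sum(x is not None for x in meta["audio"]))
```
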
## Usage

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("ntuai/multi-tw")
validation_data = dataset["validation"]

# Access a sample
sample = validation_data[0]
print(f"Question: {sample['question']}")
print(f"Options: {sample['option1']}, {sample['option2']}, {sample['option3']}")
print(f"Answer: {sample['answer']}")

# Check whether the sample carries an image or an audio clip
if sample['image'] is not None:
    image = sample['image']  # PIL.Image.Image

if sample['audio'] is not None:
    audio_array = sample['audio']['array']            # NumPy waveform
    sampling_rate = sample['audio']['sampling_rate']
```
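
For benchmarking, each sample can be rendered as a text prompt and paired with its image or audio input. The helper below is a hypothetical sketch, not an official evaluation script; the prompt layout and the A-D labels are assumptions:

```python
from datasets import load_dataset


def build_prompt(sample):
    """Format one Multi-TW sample as a plain-text multiple-choice prompt (hypothetical template)."""
    labels = ["A", "B", "C", "D"]
    options = [sample[f"option{i}"] for i in range(1, 5)]
    lines = [sample["instruction"], sample["question"]]
    for label, option in zip(labels, options):
        if option:  # option4 may be empty
            lines.append(f"{label}. {option}")
    return "\n".join(lines)


validation_data = load_dataset("ntuai/multi-tw", split="validation")
print(build_prompt(validation_data[0]))
# A model's predicted letter can then be compared with the sample's "answer" field.
```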

## Dataset Statistics

The dataset covers several aspects of Traditional Chinese language learning:

- Visual comprehension: Questions requiring image understanding
- Audio comprehension: Questions requiring audio understanding
- Multiple-choice format: 3-4 options per question
- Answer balance: Choices A-C appear with roughly equal frequency; D appears less often because some questions have only three options

## License

MIT License

## Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{multi_tw_2025,
  title={Multi-TW: Benchmarking Multimodal Models on Traditional Chinese Question Answering in Taiwan},
  author={Yao, Jui-Ming and Xie, Bing-Cheng and Peng, Sheng-Wei and Chen, Hao-Yuan and Zheng, He-Rong and Tan, Bing-Jia and Wang, Peter Shaojui and Su, Shun-Feng},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/ntuai/multi-tw}
}
```