---
license: apache-2.0
task_categories:
  - text-generation
  - image-to-text
language: en
tags:
  - benchmark
  - llm-evaluation
  - spatial-reasoning
  - multimodal
---

# LTD-Bench: Evaluating Large Language Models by Letting Them Draw

This repository contains LTD-Bench, a breakthrough benchmark presented in the paper *LTD-Bench: Evaluating Large Language Models by Letting Them Draw*. LTD-Bench transforms LLM evaluation from abstract scores to directly observable visual outputs by requiring models to generate drawings through dot matrices or executable code, making spatial reasoning limitations immediately apparent.

LTD-Bench implements a comprehensive methodology with complementary generation tasks (testing spatial imagination) and recognition tasks (assessing spatial perception) across three progressively challenging difficulty levels, methodically evaluating both directions of the critical language-spatial mapping.

- **Paper:** LTD-Bench: Evaluating Large Language Models by Letting Them Draw
- **Code:** https://github.com/walktaster/LTD-Bench

## Abstract

Current evaluation paradigms for large language models (LLMs) represent a critical blind spot in AI research--relying on opaque numerical metrics that conceal fundamental limitations in spatial reasoning while providing no intuitive understanding of model capabilities. This deficiency creates a dangerous disconnect between reported performance and practical abilities, particularly for applications requiring physical world understanding. We introduce LTD-Bench, a breakthrough benchmark that transforms LLM evaluation from abstract scores to directly observable visual outputs by requiring models to generate drawings through dot matrices or executable code. This approach makes spatial reasoning limitations immediately apparent even to non-experts, bridging the fundamental gap between statistical performance and intuitive assessment. LTD-Bench implements a comprehensive methodology with complementary generation tasks (testing spatial imagination) and recognition tasks (assessing spatial perception) across three progressively challenging difficulty levels, methodically evaluating both directions of the critical language-spatial mapping. Our extensive experiments with state-of-the-art models expose an alarming capability gap: even LLMs achieving impressive results on traditional benchmarks demonstrate profound deficiencies in establishing bidirectional mappings between language and spatial concepts--a fundamental limitation that undermines their potential as genuine world models. Furthermore, LTD-Bench's visual outputs enable powerful diagnostic analysis, offering a potential approach to investigate model similarity.

## Sample Usage

To get started with LTD-Bench, follow these steps to set up the environment and run the benchmark.

### Setup

Before running LTD-Bench, please make sure that Xvfb is installed in your Linux environment, as it may be required for the Hard-level generation tasks.

You can install it using one of the following sets of commands:

```bash
apt-get install xvfb
apt-get install ghostscript
```

or

```bash
yum install xorg-x11-server-Xvfb
yum install ghostscript
```

Then start Xvfb:

```bash
Xvfb :1 -screen 0 800x600x24 &
```
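
Depending on how the rendering code locates the virtual display, you may also need to point the `DISPLAY` environment variable at the Xvfb instance. This is an assumption, not a documented step; check the repository scripts for the display number they expect:

```bash
# Assumption: downstream rendering reads DISPLAY; match the :1 used when starting Xvfb above.
export DISPLAY=:1
```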

Set up your Python environment:

```bash
pip install -r requirements.txt
```
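
Optionally, you can install the dependencies into an isolated virtual environment first. This is a generic sketch, not a step required by the repository:

```bash
# Optional: create and activate a virtual environment before installing the requirements.
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```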

### Run

Set up the model configuration in the `run.sh` file, including your `model_id`, `API_BASE_URL`, and `API_KEY`.
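
The exact syntax depends on how `run.sh` is structured; a minimal sketch, assuming the script reads plain shell variables with the names above, might look like this:

```bash
# Hypothetical values: adapt to the actual layout of run.sh.
model_id="your-model-name"                # model to evaluate
API_BASE_URL="https://your-endpoint/v1"   # base URL of your OpenAI-compatible API (assumption)
API_KEY="your-api-key"                    # credential for that endpoint
```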

Then you can start running model inference:

```bash
sh run.sh
```

### Evaluation

Set up your GPT-4.1 configuration in the `run_eval.sh` file, including your `OPENAI_BASE_URL` and `OPENAI_KEY`.
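
As with `run.sh`, the exact form depends on the script; a minimal sketch using the variable names above:

```bash
# Hypothetical values: adapt to the actual layout of run_eval.sh.
OPENAI_BASE_URL="https://api.openai.com/v1"   # endpoint serving the GPT-4.1 judge (assumption)
OPENAI_KEY="your-openai-api-key"              # OpenAI API credential
```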

Then you can run the GPT-4.1 automatic evaluation:

```bash
sh run_eval.sh
```