---
license: cc-by-4.0
dataset_info:
  features:
    - name: id
      dtype: string
    - name: base_prompt
      dtype: string
    - name: animal1
      dtype: string
    - name: animal2
      dtype: string
    - name: action
      dtype: string
    - name: rarity_label
      dtype: string
    - name: emotional_valence
      dtype: string
    - name: spatial_topology
      dtype: string
    - name: temporal_extent
      dtype: string
    - name: emotional_prompt
      dtype: string
    - name: spatial_prompt
      dtype: string
    - name: temporal_prompt
      dtype: string
  splits:
    - name: train
      num_bytes: 94793
      num_examples: 125
  download_size: 51438
  dataset_size: 94793
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
task_categories:
  - text-to-image
language:
  - en
tags:
  - T2I
  - Reasoning
  - Action
  - Benchmark
size_categories:
  - n<1K
---

# AcT2I-Prompts

## What is this?

AcT2I-Prompts is the core prompt set from the AcT2I benchmark. It contains 125 base action-centric prompts that describe interactions between two animal agents (e.g. "a goose competing for dominance with a turkey") plus 3 enriched variants per base prompt:

- spatial_prompt
- emotional_prompt
- temporal_prompt

Each base prompt is also labeled along semantic axes:

- rarity_label (how biologically common or rare the interaction is)
- emotional_valence (aggressive / defensive / affiliative / communicative)
- spatial_topology (pursuit vs physical contact vs distant interaction)
- temporal_extent (instantaneous vs extended action)

Total rows: 125 (one per base prompt). Each row includes all 4 textual variants.

This repo intentionally does not include generated images or human study results. Those are released separately.


## Intended use

This dataset is for evaluation / analysis of text-to-image models, not for training.

Typical use:

  1. For each row, take the base_prompt and (optionally) the enriched spatial_prompt, emotional_prompt, and temporal_prompt.
  2. Generate images from your T2I model for each variant.
  3. Measure whether the model's image actually depicts the described interaction and action.

These prompts are meant to stress-test spatial, temporal, and affective reasoning ("who is doing what to whom, in what posture, with what intent, at what moment").
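
A minimal sketch of this loop with the Hugging Face `datasets` library is shown below. The repo id and the image-generation call are assumptions; swap in your own T2I pipeline and output paths.

```python
from datasets import load_dataset

# Repo id assumed from this card; adjust if the dataset is hosted under a different name.
ds = load_dataset("vatsal-malaviya/AcT2I-Prompts", split="train")

PROMPT_COLUMNS = ["base_prompt", "spatial_prompt", "emotional_prompt", "temporal_prompt"]

for row in ds:
    for column in PROMPT_COLUMNS:
        prompt = row[column]
        # Replace this print with your own T2I call, e.g.:
        #   image = my_t2i_model(prompt)   # hypothetical generation function
        #   image.save(f"outputs/{row['id']}_{column}.png")
        print(row["id"], column, prompt)
```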

## Out-of-scope / disallowed use

This dataset is not intended for:

- Training or promoting violent / graphic animal content for shock or harassment.
- Generating deceptive media presented as "real" wildlife attacks or staged cruelty.
- Drawing conclusions about human social behavior, human interpersonal violence, or human identity bias. The benchmark is deliberately animal–animal and two-agent focused.

Do not use this dataset to build abusive content pipelines.


## Data fields

Each row in data/prompts.jsonl represents one base interaction scenario.

- id (str)
- base_prompt (str)
- animal1 (str)
- animal2 (str)
- action (str)
- rarity_label (str: frequent | rare | very_rare)
- emotional_valence (str: aggressive | defensive | affiliative | communicative)
- spatial_topology (str: proximal-contact | pursuit / avoidance | distant interaction)
- temporal_extent (str: instantaneous | extended action)
- spatial_prompt (str)
- emotional_prompt (str)
- temporal_prompt (str)

There are no train/dev/test splits. All 125 rows are considered the official evaluation set.
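
The semantic labels make it easy to slice the prompt set for analysis. A small sketch (repo id again an assumption) that keeps only, say, the rare aggressive scenarios:

```python
from datasets import load_dataset

ds = load_dataset("vatsal-malaviya/AcT2I-Prompts", split="train")  # repo id is an assumption

# Slice the 125 rows along the semantic axes, e.g. rare + aggressive interactions.
subset = ds.filter(
    lambda row: row["rarity_label"] == "rare" and row["emotional_valence"] == "aggressive"
)

print(f"{len(subset)} matching rows")
for row in subset.select(range(min(3, len(subset)))):
    print(row["id"], row["base_prompt"])
```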


## Dataset creation

### Curation rationale

Most existing "compositional" prompts test simple attribute binding ("a blue cat on a skateboard"). AcT2I instead targets interaction semantics: chasing, comforting, retaliating, asserting dominance, surrendering, etc. These require:

- asymmetric roles (one agent acts on the other),
- physically plausible contact / pursuit / restraint poses,
- temporal cues (in the middle of an attack vs after being struck),
- emotional / intent cues (aggressive vs affiliative).

We focus on animal–animal interactions (instead of human–human violence or human identity scenarios) to:

  1. Reduce the ethical and social sensitivity of depicting harm between humans.
  2. Get a clearer signal about action depiction, instead of immediately running into "the model can't draw human hands" failures.

### How prompts were generated

- We defined pairs of animals and an interaction verb (e.g. "competing for dominance with", "comforting", "chasing", "retaliating against").
- We wrote a concise base_prompt for each interaction.
- For each base prompt, we produced three enriched variants:
  - spatial_prompt: adds explicit body orientation / physical layout.
  - emotional_prompt: adds affect / intent wording.
  - temporal_prompt: anchors the scene in a specific moment or phase of action.
- We assigned semantic labels (rarity_label, emotional_valence, spatial_topology, temporal_extent) to each base prompt.
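
For reference, the structured fields recompose into the base prompt in the obvious way. The sketch below just mirrors the goose/turkey example from this card; it is purely illustrative and not the authors' actual generation pipeline.

```python
# Purely illustrative: how base_prompt relates to the structured fields.
# Not the authors' generation pipeline, just the pattern the example
# "a goose competing for dominance with a turkey" follows.

def compose_base_prompt(animal1: str, action: str, animal2: str) -> str:
    """Recompose an action-centric base prompt from its structured fields."""
    return f"a {animal1} {action} a {animal2}"

print(compose_base_prompt("goose", "competing for dominance with", "turkey"))
# -> a goose competing for dominance with a turkey
```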

### Who created the data

All prompts, enriched variants, and semantic labels were authored/verified by the AcT2I team. No personal names, locations, or other PII were included.


## Bias, risks, and limitations

- Violence / aggression content: Many prompts explicitly describe aggression, dominance, pursuit, or threat between animals. This is intentional (models struggle most with these high-contact, asymmetric actions). However, it means the dataset can be used to generate violent-looking content. Please use responsibly.

- Scope limitations: The benchmark is animal–animal only and two-agent only. Results should not be overgeneralized to human social interactions, medical scenarios, multi-agent scenes, tool use, etc.

- Biological plausibility: Some interactions are biologically rare or borderline impossible. That is deliberate: we care about whether the model can depict the requested interaction clearly, not whether the interaction is common in nature.


## Citation

If you use AcT2I-Prompts, please cite:

```bibtex
@article{malaviya2025act2i,
  title={AcT2I: Evaluating and Improving Action Depiction in Text-to-Image Models},
  author={Malaviya, Vatsal and Chatterjee, Agneet and Patel, Maitreya and Yang, Yezhou and Baral, Chitta},
  journal={arXiv preprint arXiv:2509.16141},
  year={2025}
}
```