AgentCoMa Benchmark
Paper | GitHub | Leaderboard
Dataset repository for the paper AgentCoMa: A Compositional Benchmark Mixing Commonsense and Mathematical Reasoning in Real-World Scenarios.
To submit to the Leaderboard, follow the instructions in this README.
AgentCoMa is an Agentic Commonsense and Math benchmark in which each compositional task requires both commonsense and mathematical reasoning to be solved. The tasks are set in five real-world scenarios: house working, web shopping, science experiments, smart assistant, and travel agent. The benchmark is designed to test the mixed-type compositional reasoning abilities of LLMs. Contemporary LLMs perform well on commonsense and math reasoning in isolation, but are far less effective at solving AgentCoMa tasks that require their composition. See some dev set example questions below.
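To experiment with the tasks programmatically, the dataset can be loaded with the Hugging Face `datasets` library. This is a minimal sketch only; the repository ID, split name, and record structure below are assumptions, so check the repository files for the actual configuration:

```python
from datasets import load_dataset

# Hypothetical repository ID -- replace with this repo's actual ID
# (shown at the top of the dataset page).
agentcoma = load_dataset("ORG/AgentCoMa")

# Split and field names are assumptions; inspect `agentcoma` to see
# the real structure.
example = agentcoma["dev"][0]
print(example)
```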
For each compositional task, we also provide its underlying reasoning steps as individual questions. Performance on AgentCoMa is measured as the compositionality gap, i.e., the difference between the accuracy on the compositional tasks and the proportion of samples where all individual reasoning steps are answered correctly in isolation.
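As a concrete illustration of the metric, here is a short sketch of how the compositionality gap could be computed. The per-sample correctness inputs and the sign convention (step accuracy minus compositional accuracy) are assumptions for illustration, not the official evaluation script:

```python
def compositionality_gap(step_correct: list[list[bool]],
                         composition_correct: list[bool]) -> float:
    """Proportion of samples with every isolated step answered correctly,
    minus accuracy on the corresponding compositional tasks."""
    n = len(composition_correct)
    all_steps_acc = sum(all(steps) for steps in step_correct) / n
    comp_acc = sum(composition_correct) / n
    return all_steps_acc - comp_acc

# Toy example: 3 tasks, each with two isolated reasoning steps.
steps = [[True, True], [True, True], [True, False]]  # graded step answers
comp = [True, False, False]                          # graded compositional answers
print(compositionality_gap(steps, comp))  # (2/3) - (1/3) = 0.333...
```

A positive gap means the model solves the individual steps more reliably than their composition, which is the failure mode AgentCoMa is designed to expose.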
Citation
If you use this dataset, please cite our work:
@misc{alazraki2025agentcomacompositionalbenchmarkmixing,
  title={AgentCoMa: A Compositional Benchmark Mixing Commonsense and Mathematical Reasoning in Real-World Scenarios},
  author={Lisa Alazraki and Lihu Chen and Ana Brassard and Joe Stacey and Hossein A. Rahmani and Marek Rei},
  year={2025},
  eprint={2508.19988},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2508.19988},
}