## Model Description
This is the Mini Hanabi specialist model of the MARSHAL framework, initialized from Qwen3-4B. It has been trained via self-play on Mini Hanabi.
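Below is a minimal inference sketch using the standard Hugging Face Transformers API. The repository id and the prompt format are placeholders for illustration; consult the model files for the exact chat template and recommended generation settings.

```python
# Minimal inference sketch. The repo id below is a placeholder, and the
# prompt is only illustrative of passing a game observation to the model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/MARSHAL-MiniHanabi-Qwen3-4B"  # placeholder repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{
    "role": "user",
    "content": "You are playing Mini Hanabi. Observation: <game state>. "
               "Legal actions: <action list>. Choose one action.",
}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```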
## Overview
We introduce MARSHAL, an end-to-end reinforcement learning framework designed to incentivize Multi-Agent Reasoning through Self-play witH strAtegic LLMs in a diverse range of competitive and cooperative games.
MARSHAL addresses the challenge of credit assignment in multi-agent, multi-turn self-play through two core mechanisms (a schematic sketch follows the list):
- Turn-level Advantage Estimator: Enables fine-grained credit assignment, allowing the model to accurately attribute long-term outcomes to individual actions and provide learning signals across multiple turns.
- Agent-specific Advantage Normalization: Stabilizes the training process by calibrating advantage estimates relative to the performance of each agent.
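To make the two mechanisms concrete, here is a schematic sketch based only on the description above; the paper's exact estimator may differ, and the function names, the discounting, and the epsilon are our own assumptions.

```python
import numpy as np

def turn_level_advantages(turn_rewards, gamma=1.0):
    """Assign each turn the (discounted) return from that turn onward, so a
    long-term outcome is attributed to the individual actions that led to it."""
    advantages, running = [], 0.0
    for r in reversed(turn_rewards):
        running = r + gamma * running
        advantages.append(running)
    return advantages[::-1]

def agent_normalized(advantages, agent_ids):
    """Normalize each agent's advantages by that agent's own mean and std,
    calibrating the learning signal to each agent's performance level."""
    adv = np.asarray(advantages, dtype=np.float64)
    ids = np.asarray(agent_ids)
    out = np.empty_like(adv)
    for a in np.unique(ids):
        mask = ids == a
        out[mask] = (adv[mask] - adv[mask].mean()) / (adv[mask].std() + 1e-8)
    return out

# Example: a 4-turn, 2-player cooperative episode with per-turn rewards.
adv = turn_level_advantages([0.0, 0.5, 0.0, 1.0])     # -> [1.5, 1.5, 1.0, 1.0]
print(agent_normalized(adv, agent_ids=[0, 1, 0, 1]))  # -> [ 1.  1. -1. -1.]
```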
## Key Results
By leveraging self-play across strategic games, MARSHAL (based on Qwen3-4B) demonstrates notable generalization capabilities:
- Strategic Games: Achieves up to 28.7% performance improvement on held-out games.
- Reasoning Benchmarks: When integrated into leading multi-agent systems (MASs), MARSHAL yields consistent gains:
  - up to +10.0% on AIME,
  - up to +7.6% on GPQA-Diamond,
  - +3.5% on average across all tested benchmarks.
## Featured Games
- Competitive, perfect-information: Tic-Tac-Toe, Connect Four.
- Competitive, imperfect-information: Kuhn Poker, Leduc Hold'em.
- Cooperative, imperfect-information: Mini Hanabi, Simple Hanabi.
## Method
Figure 1: Overview of MARSHAL. Left: Generating player trajectories via self-play in strategic games. Middle: Naive advantage estimation (e.g., GRPO) often fails in multi-turn settings. Right: MARSHAL's advantage estimation ensures accurate credit assignment for multi-turn, multi-agent interactions.
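As a companion to the left panel, the following sketch shows what one self-play rollout might look like. It assumes an OpenSpiel-style game interface and a hypothetical `policy.act` wrapper around the LLM; neither is guaranteed to match the framework's actual code.

```python
def collect_trajectory(game, policy):
    """Play one self-play episode: the same LLM policy acts for every player,
    and each turn is recorded for later turn-level advantage estimation."""
    state = game.new_initial_state()
    trajectory = []
    while not state.is_terminal():
        player = state.current_player()
        obs = state.observation_string(player)  # player's (possibly partial) view
        action = policy.act(obs, state.legal_actions(player))
        trajectory.append({"player": player, "observation": obs, "action": action})
        state.apply_action(action)              # advance the game by one turn
    # Attach the final returns so per-turn advantages can be computed afterwards.
    for turn in trajectory:
        turn["return"] = state.returns()[turn["player"]]
    return trajectory
```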
## Results
Figure 2: Performance Comparison. Evaluation of MARSHAL against baselines on strategic games and reasoning benchmarks. MARSHAL not only masters strategic games but also generalizes effectively to complex reasoning tasks within multi-agent frameworks like MAD and AutoGen.
## Citation
If you find our work helpful, please cite:
```bibtex
@misc{yuan2025marshal,
      title={MARSHAL: Incentivizing Multi-Agent Reasoning via Self-Play with Strategic LLMs},
      author={Huining Yuan and Zelai Xu and Zheyue Tan and Xiangmin Yi and Mo Guang and Kaiwen Long and Haojia Hui and Boxun Li and Xinlei Chen and Bo Zhao and Xiao-Ping Zhang and Chao Yu and Yu Wang},
      year={2025},
      eprint={2510.15414},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2510.15414},
}
```