
MARSHAL: Incentivizing Multi-Agent Reasoning via Self-Play with Strategic LLMs


๐ŸŒ Project Page | ๐Ÿ“ Paper | ๐Ÿ’ป Code


🤗 Model Description

This is the Mini Hanabi specialist model of the MARSHAL framework, initialized from Qwen3-4B. It has been trained via self-play on Mini Hanabi.
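The model can be loaded with the standard 🤗 Transformers API. The snippet below is a minimal sketch; the observation-prompt format is a hypothetical placeholder, not necessarily the exact game-state serialization used during self-play training (see the paper and code for that).

```python
def generate_move(observation: str,
                  model_id: str = "nics-efc/MARSHAL-Mini-Hanabi-Qwen3-4B") -> str:
    """Generate an action for a text-serialized Mini Hanabi observation.

    The plain-text `observation` format here is illustrative only; consult
    the MARSHAL code for the prompt template used at training time.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )

    # Wrap the observation as a single user turn using the model's chat template.
    messages = [{"role": "user", "content": observation}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=512)

    # Decode only the newly generated tokens (the model's chosen action).
    return tokenizer.decode(
        output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```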

📖 Overview

We introduce MARSHAL, an end-to-end reinforcement learning framework designed to incentivize Multi-Agent Reasoning through Self-play witH strAtegic LLMs in a diverse range of competitive and cooperative games.

MARSHAL addresses the challenge of credit assignment in multi-agent multi-turn self-play through two core mechanisms:

  1. Turn-level Advantage Estimator: Enables fine-grained credit assignment, allowing the model to accurately attribute long-term outcomes to individual actions and provide learning signals across multiple turns.
  2. Agent-specific Advantage Normalization: Stabilizes the training process by calibrating advantage estimates relative to the performance of each agent.

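As a rough illustration of these two mechanisms, the sketch below computes per-turn return-to-go advantages and then standardizes them within each agent. This is a simplified plain-Python sketch, not the paper's exact estimator (MARSHAL's advantages are computed over groups of sampled self-play trajectories, in the spirit of GRPO):

```python
from statistics import mean, pstdev

def turn_level_advantages(turns, gamma=1.0):
    """Turn-level credit assignment (simplified): each turn's advantage is
    its discounted return-to-go. `turns` is a list of (agent_id, reward)
    pairs in play order."""
    advantages = []
    ret = 0.0
    for agent_id, reward in reversed(turns):
        ret = reward + gamma * ret
        advantages.append((agent_id, ret))
    return list(reversed(advantages))

def normalize_per_agent(advantages):
    """Agent-specific normalization: standardize each agent's advantages
    against that agent's own mean and standard deviation, so that agents
    with different reward scales produce comparable learning signals."""
    by_agent = {}
    for agent_id, adv in advantages:
        by_agent.setdefault(agent_id, []).append(adv)
    stats = {a: (mean(v), pstdev(v) or 1.0) for a, v in by_agent.items()}
    return [(a, (adv - stats[a][0]) / stats[a][1]) for a, adv in advantages]
```

For example, in a three-turn cooperative episode where only the last turn is rewarded, every turn receives the full return-to-go, so earlier set-up moves are credited for the eventual payoff rather than only the final move.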
🔥 Key Results

By leveraging self-play across strategic games, MARSHAL (based on Qwen3-4B) demonstrates notable generalization capabilities:

  • Strategic Games: Achieves up to 28.7% performance improvement on held-out games.
  • Reasoning Benchmarks: When integrated into leading multi-agent systems (MASs), MARSHAL yields consistent gains of up to
    • +10.0% on AIME
    • +7.6% on GPQA-Diamond
    • +3.5% on average across all tested benchmarks.

🎮 Featured Games

  • Competitive, perfect-information: Tic-Tac-Toe, Connect Four.
  • Competitive, imperfect-information: Kuhn Poker, Leduc Hold'em.
  • Cooperative, imperfect-information: Mini Hanabi, Simple Hanabi.

🚀 Method


Figure 1: Overview of MARSHAL. Left: Generating player trajectories via self-play in strategic games. Middle: Naive advantage estimation (e.g., GRPO) often fails in multi-turn settings. Right: MARSHAL's advantage estimation ensures accurate credit assignment for multi-turn, multi-agent interactions.

📊 Results


Figure 2: Performance Comparison. Evaluation of MARSHAL against baselines on strategic games and reasoning benchmarks. MARSHAL not only masters strategic games but also generalizes effectively to complex reasoning tasks within multi-agent frameworks like MAD and AutoGen.


๐Ÿ–Š๏ธ Citation

If you find our work helpful, please cite:

@misc{yuan2025marshal,
      title={MARSHAL: Incentivizing Multi-Agent Reasoning via Self-Play with Strategic LLMs},
      author={Huining Yuan and Zelai Xu and Zheyue Tan and Xiangmin Yi and Mo Guang and Kaiwen Long and Haojia Hui and Boxun Li and Xinlei Chen and Bo Zhao and Xiao-Ping Zhang and Chao Yu and Yu Wang},
      year={2025},
      eprint={2510.15414},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2510.15414},
}