Papers
arxiv:2511.03601

Step-Audio-EditX Technical Report

Published on Nov 5
· Submitted by Yang on Nov 5
Authors:

Abstract

Step-Audio-EditX, an open-source LLM-based audio model, excels in expressive and iterative audio editing and zero-shot TTS using large-margin synthetic data.

AI-generated summary

We present Step-Audio-EditX, the first open-source LLM-based audio model excelling at expressive and iterative audio editing, encompassing emotion, speaking style, and paralinguistics, alongside robust zero-shot text-to-speech (TTS) capabilities. Our core innovation lies in leveraging only large-margin synthetic data, which circumvents the need for embedding-based priors or auxiliary modules. This large-margin learning approach enables both iterative control and high expressivity across voices, and represents a fundamental pivot from the conventional focus on representation-level disentanglement. Evaluation results demonstrate that Step-Audio-EditX surpasses both MiniMax-2.6-hd and Doubao-Seed-TTS-2.0 in emotion editing and other fine-grained control tasks.
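To make the "large-margin" idea concrete, here is a minimal, hypothetical sketch of how one might filter synthetic preference pairs by score margin. None of these function names, field names, or thresholds come from the paper; they are illustrative assumptions only.

```python
# Hypothetical sketch of large-margin data selection for preference-style
# training pairs. The paper describes training on large-margin synthetic data;
# the specific scoring scheme and threshold below are invented for illustration.

def select_large_margin_pairs(samples, margin=0.5):
    """Keep only (chosen, rejected) pairs whose attribute scores
    (e.g., emotion intensity from some scorer) differ by at least `margin`."""
    pairs = []
    for chosen, rejected in samples:
        if chosen["score"] - rejected["score"] >= margin:
            pairs.append((chosen, rejected))
    return pairs

samples = [
    ({"audio": "a1.wav", "score": 0.9}, {"audio": "a2.wav", "score": 0.2}),  # margin 0.7, kept
    ({"audio": "a3.wav", "score": 0.6}, {"audio": "a4.wav", "score": 0.5}),  # margin 0.1, dropped
]
print(len(select_large_margin_pairs(samples)))  # 1
```

The intuition: pairs with a small score gap carry a noisy training signal, so discarding them leaves only examples where the desired attribute difference is unambiguous.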

Community


"Bright moonlight before my bed" (opening line of Li Bai's "Quiet Night Thoughts")

Is this multilingual?


Models citing this paper 1

Datasets citing this paper 0

No datasets link this paper.

Cite arxiv.org/abs/2511.03601 in a dataset README.md to link it from this page.

Spaces citing this paper 1

Collections including this paper 3