ACE-Step v1.5 XL Turbo - AIO Merged (1.7B LM)

This repository contains a custom, unified All-In-One (AIO) .safetensors checkpoint for ACE-Step v1.5 XL Turbo.

Instead of managing multiple fragmented components, this model packs the entire high-fidelity audio generation pipeline—including the VAE, Text Encoder, the massive 1.7B Language Model, and the XL Turbo base model—into a single, easy-to-load file (acestep-v1.5-AIO-merged.safetensors).

Created and compiled by Team Emogi.

🧩 Components Included

This AIO checkpoint was compiled using a sequential memory-merging script and contains the following unmodified base weights, properly prefixed for standard generation pipelines:

  • Base Model: ACE-Step v1.5 XL Turbo (Hidden Dimension: 2560)
  • Language Model: 5Hz LM (1.7B-parameter version, for high-quality prompting)
  • Text Encoder: Qwen3-Embedding (0.6B)
  • VAE: Standard ACE-Step Diffusion VAE
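The sequential merging step can be sketched as follows. This is a minimal illustration, not the actual compilation script: the function name `merge_state_dicts` and the placeholder lists standing in for real tensors are assumptions; in practice each component would be loaded from its own checkpoint (e.g. with `safetensors.torch.load_file`) and the result saved with `safetensors.torch.save_file`.

```python
def merge_state_dicts(components):
    """Merge per-component state dicts into one AIO dictionary.

    components: mapping of prefix -> state dict (tensor name -> tensor).
    Each key is re-written as "<prefix>.<original key>".
    """
    merged = {}
    for prefix, state_dict in components.items():
        for key, tensor in state_dict.items():
            merged[f"{prefix}.{key}"] = tensor
    return merged

# Toy example; the lists stand in for real tensors:
aio = merge_state_dicts({
    "model": {"blocks.0.weight": [0.0]},
    "lm": {"embed.weight": [0.0]},
    "text_encoder": {"layer.0.weight": [0.0]},
    "vae": {"decoder.conv.weight": [0.0]},
})
```

Because each component keeps its original key layout under its prefix, the weights remain unmodified and individually recoverable.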

🚀 General Usage

This model is designed for local inference in custom UIs, ComfyUI workflows, or standalone Python scripts that require a single checkpoint file.

Because all components are packed into one dictionary, your loading script simply needs to route the standard prefixes to their respective modules:

  • model.* ➡️ Base Diffusion Model
  • lm.* ➡️ 5Hz Language Model
  • text_encoder.* ➡️ Qwen Text Encoder
  • vae.* ➡️ Variational Autoencoder
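The routing above can be sketched with a small helper. This is an illustrative assumption, not an official loader: `split_by_prefix` is a hypothetical name, and plain Python lists stand in for tensors. In a real script the checkpoint dict would come from `safetensors.torch.load_file("acestep-v1.5-AIO-merged.safetensors")` before being routed to each module's `load_state_dict`.

```python
def split_by_prefix(merged, prefixes=("model", "lm", "text_encoder", "vae")):
    """Route keys like 'vae.decoder.conv.weight' back to per-module dicts."""
    routed = {p: {} for p in prefixes}
    for key, tensor in merged.items():
        prefix, _, rest = key.partition(".")  # split on the first dot only
        if prefix in routed:
            routed[prefix][rest] = tensor
    return routed

# Toy checkpoint dict; real tensors would come from safetensors' load_file:
checkpoint = {
    "model.blocks.0.weight": [0.0],
    "lm.embed.weight": [0.0],
    "text_encoder.layer.0.weight": [0.0],
    "vae.decoder.conv.weight": [0.0],
}
modules = split_by_prefix(checkpoint)
```

Each entry of `modules` is then a standard state dict with the prefix stripped, ready to be fed to the corresponding component.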