PromptEnhancerV2 (32B)

PromptEnhancerV2 is a multimodal language model fine-tuned for text-to-image prompt enhancement and rewriting. It restructures user input prompts while preserving the original intent, producing clearer, layered, and logically consistent prompts suitable for downstream image generation tasks.

Model Details

Model Description

PromptEnhancerV2 is a specialized text-to-image prompt rewriting model that employs chain-of-thought reasoning to enhance user prompts.

  • Model type: Vision-Language Model for Prompt Enhancement
  • Language(s) (NLP): Chinese (zh), English (en)
  • Model size: 33B parameters (BF16, Safetensors)
  • License: Apache-2.0
  • Finetuned from model: Qwen/Qwen2.5-VL-32B-Instruct

How to Get Started with the Model

  • 1. Clone the repository and install dependencies:

    git clone https://github.com/ximinng/PromptEnhancer.git
    cd PromptEnhancer
    pip install -r requirements.txt

  • 2. Download the model weights:

    huggingface-cli download PromptEnhancer/PromptEnhancer-32B --local-dir ./models/promptenhancer-32b

  • 3. Use the model:
from inference.prompt_enhancer_v2 import PromptEnhancerV2

# Initialize the model
models_root_path = "./models/promptenhancer-32b"
enhancer = PromptEnhancerV2(models_root_path=models_root_path, device_map="auto")

# Enhance a prompt (Chinese or English). The sample prompt below is Chinese:
# "Korean-style illustration portrait of a girl, pink-purple short hair with
# a translucent blush, side-light rendering."
user_prompt = "韩系插画风女生头像，粉紫色短发+透明感腮红，侧光渲染。"
enhanced_prompt = enhancer.predict(
    prompt_cot=user_prompt,
    device="cuda"
)

print("Enhanced:", enhanced_prompt)
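`enhancer.predict` takes a single prompt string, so prompts collected from web forms or files are worth tidying first. The whitespace-normalization helper below is a hypothetical pre-processing step, not part of the PromptEnhancer repository:

```python
import re

def normalize_prompt(prompt: str) -> str:
    """Collapse runs of whitespace and trim the prompt before enhancement.

    Hypothetical pre-processing helper; NOT part of the PromptEnhancer repo.
    """
    return re.sub(r"\s+", " ", prompt).strip()

# Tidy raw user input first, then pass it to enhancer.predict(...) as above.
raw = "  a girl with   pink-purple\nshort hair,  side lighting  "
clean = normalize_prompt(raw)
print(clean)  # "a girl with pink-purple short hair, side lighting"
```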

Evaluation

The model is evaluated on the T2I-Keypoints-Eval dataset, which contains diverse text-to-image prompts spanning multiple content categories in both Chinese and English.
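The card does not spell out the scoring procedure. As a rough illustration only, a keypoint-coverage style metric could look like the sketch below; this is hypothetical code, not the official T2I-Keypoints-Eval implementation:

```python
def keypoint_coverage(enhanced_prompt: str, keypoints: list[str]) -> float:
    """Fraction of reference keypoints mentioned in the enhanced prompt.

    Illustrative only; NOT the official T2I-Keypoints-Eval scoring code.
    """
    text = enhanced_prompt.lower()
    hits = sum(1 for kp in keypoints if kp.lower() in text)
    return hits / len(keypoints) if keypoints else 0.0

score = keypoint_coverage(
    "A portrait of a girl with pink-purple short hair and a translucent "
    "blush, rendered with side lighting.",
    ["short hair", "blush", "side lighting"],
)
print(score)  # 1.0
```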

Citation

If you find this model useful, please consider citing:

BibTeX:

@article{promptenhancer,
  title={PromptEnhancer: A Simple Approach to Enhance Text-to-Image Models via Chain-of-Thought Prompt Rewriting},
  author={Wang, Linqing and Xing, Ximing and Cheng, Yiji and Zhao, Zhiyuan and Li, Donghao and Hang, Tiankai and Li, Zhenxi and Tao, Jiale and Wang, Qixun and Li, Ruihuang and Chen, Comi and Li, Xin and Wu, Mingrui and Deng, Xinchi and Gu, Shuyang and Wang, Chunyu and Lu, Qinglin},
  journal={arXiv preprint arXiv:2509.04545},
  year={2025}
}