--- |
|
|
license: apache-2.0 |
|
|
base_model: |
|
|
- Qwen/Qwen-Image-Edit |
|
|
language: |
|
|
- en |
|
|
- zh |
|
|
library_name: diffusers |
|
|
pipeline_tag: image-to-image |
|
|
datasets: |
|
|
- OPPOer/X2Edit-Dataset |
|
|
--- |
|
|
<div align="center"> |
|
|
<h1>Qwen-Image-Edit-Pruning</h1> |
|
|
<a href='https://github.com/OPPO-Mente-Lab/Qwen-Image-Pruning'><img src="https://img.shields.io/badge/GitHub-OPPOer-blue.svg?logo=github" alt="GitHub"></a> |
|
|
</div> |
|
|
|
|
|
## Update |
|
|
- 2025/10/09: We release **[Qwen-Image-Edit-2509-Pruning-13B-4steps](https://huggingface.co/OPPOer/Qwen-Image-Edit-2509-Pruning)** |
|
|
- 2025/09/29: We release **[Qwen-Image-Edit-2509-Pruning-14B](https://huggingface.co/OPPOer/Qwen-Image-Edit-2509-Pruning)** |
|
|
- 2025/09/28: We release **[Qwen-Image-Edit-Pruning-13B-4steps](https://huggingface.co/OPPOer/Qwen-Image-Edit-Pruning)** |
|
|
|
|
|
|
|
|
## Introduction |
|
|
This open-source project builds on Qwen-Image-Edit and applies depth pruning to the transformer: 20 of the 60 layers are removed and the weights of the remaining 40 layers are retained, reducing the model to 13.6B parameters. The pruned versions will continue to be iterated on, so please stay tuned.
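For readers who want to see what this kind of depth pruning looks like, the sketch below drops a subset of transformer blocks from the base model and keeps the rest. It is a minimal illustration, not the exact recipe used to produce this checkpoint: the `QwenImageTransformer2DModel` class and `transformer_blocks` attribute follow the diffusers naming for Qwen-Image, and the layer indices to drop are placeholders.

```python
import torch
from diffusers import QwenImageTransformer2DModel

# Load the full transformer of the base editing model.
transformer = QwenImageTransformer2DModel.from_pretrained(
    "Qwen/Qwen-Image-Edit", subfolder="transformer", torch_dtype=torch.bfloat16
)

# Placeholder choice of 20 layers to remove (the real selection is not this simple).
layers_to_drop = set(range(40, 60))

# Keep the remaining 40 blocks and record the new depth in the model config.
kept_blocks = [
    block for i, block in enumerate(transformer.transformer_blocks)
    if i not in layers_to_drop
]
transformer.transformer_blocks = torch.nn.ModuleList(kept_blocks)
transformer.register_to_config(num_layers=len(kept_blocks))

transformer.save_pretrained("qwen-image-edit-pruned-transformer")
```

In practice, a pruned model of this kind is typically fine-tuned afterwards to recover editing quality.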
|
|
|
|
|
<div align="center"> |
|
|
<img src="bench-2509.png"> |
|
|
</div> |
|
|
|
|
|
## Quick Start |
|
|
|
|
|
Install the latest PyTorch and install diffusers from source, since the Qwen-Image-Edit pipelines are only available in recent versions of the library:
|
|
```bash
|
|
pip install torch |
|
|
pip install git+https://github.com/huggingface/diffusers |
|
|
``` |
|
|
|
|
|
### Qwen-Image-Edit-2509-14B Inference |
|
|
```python |
|
|
import os

import torch
from PIL import Image
from diffusers import QwenImageEditPlusPipeline

# Load the pruned 14B editing pipeline.
model_name = "OPPOer/Qwen-Image-Edit-2509-Pruning"
pipeline = QwenImageEditPlusPipeline.from_pretrained(model_name, torch_dtype=torch.bfloat16)
print("pipeline loaded")
pipeline.to('cuda')
pipeline.set_progress_bar_config(disable=None)

# Two inputs: the scene (input1) and the subject to place in it (input2).
image1 = Image.open("input1.jpg")
image2 = Image.open("input2.jpg")
prompt = "Let the ancient costume beauty in the second picture sit on the sofa in the first picture"

inputs = {
    "image": [image1, image2],
    "prompt": prompt,
    "generator": torch.manual_seed(0),
    "true_cfg_scale": 4.0,
    "negative_prompt": " ",
    "num_inference_steps": 40,
    "guidance_scale": 1.0,
    "num_images_per_prompt": 1,
}

with torch.inference_mode():
    output = pipeline(**inputs)
    output_image = output.images[0]
    output_image.save("output_image_edit_plus.png")
    print("image saved at", os.path.abspath("output_image_edit_plus.png"))
|
|
``` |
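If the full pipeline does not fit in GPU memory, diffusers' standard model offloading can be used instead of moving everything to the GPU at once. This is a general diffusers option rather than anything specific to this checkpoint; call it in place of `pipeline.to('cuda')`:

```python
# Optional: trade some speed for lower VRAM usage by offloading idle submodules to CPU.
pipeline.enable_model_cpu_offload()
```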
|
|
|
|
|
### Qwen-Image-Edit-2509-13B-4steps Inference
|
|
```python |
|
|
import os

import torch
from PIL import Image
from diffusers import QwenImageEditPlusPipeline

# The 4-step 13B checkpoint lives in a subfolder of the repository.
model_name = "OPPOer/Qwen-Image-Edit-2509-Pruning/Qwen-Image-Edit-2509-13B-4steps"
pipeline = QwenImageEditPlusPipeline.from_pretrained(model_name, torch_dtype=torch.bfloat16)
print("pipeline loaded")
pipeline.to('cuda')
pipeline.set_progress_bar_config(disable=None)

# Two inputs: the scene (input1) and the subject to place in it (input2).
image1 = Image.open("input1.jpg")
image2 = Image.open("input2.jpg")
prompt = "Let the ancient costume beauty in the second picture sit on the sofa in the first picture"

# The distilled 4-step variant runs with 4 inference steps and no true CFG.
inputs = {
    "image": [image1, image2],
    "prompt": prompt,
    "generator": torch.manual_seed(0),
    "true_cfg_scale": 1.0,
    "negative_prompt": " ",
    "num_inference_steps": 4,
    "guidance_scale": 1.0,
    "num_images_per_prompt": 1,
}

with torch.inference_mode():
    output = pipeline(**inputs)
    output_image = output.images[0]
    output_image.save("output_image_edit_plus.png")
    print("image saved at", os.path.abspath("output_image_edit_plus.png"))
|
|
``` |
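
If your diffusers version does not resolve the nested repository path above, an alternative is to download the repository once with `huggingface_hub` and load the pipeline from the local subfolder. The sketch below assumes only that the subfolder is named as in the path above:

```python
import os
import torch
from huggingface_hub import snapshot_download
from diffusers import QwenImageEditPlusPipeline

# Download the whole repository, then load the 4-step checkpoint from its subfolder.
local_dir = snapshot_download("OPPOer/Qwen-Image-Edit-2509-Pruning")
pipeline = QwenImageEditPlusPipeline.from_pretrained(
    os.path.join(local_dir, "Qwen-Image-Edit-2509-13B-4steps"),
    torch_dtype=torch.bfloat16,
)
```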