---
license: apache-2.0
base_model:
- Qwen/Qwen-Image-Edit
language:
- en
- zh
library_name: diffusers
pipeline_tag: image-to-image
datasets:
- OPPOer/X2Edit-Dataset
---
<div align="center">
  <h1>Qwen-Image-Edit-Pruning</h1>
<a href='https://github.com/OPPO-Mente-Lab/Qwen-Image-Pruning'><img src="https://img.shields.io/badge/GitHub-OPPOer-blue.svg?logo=github" alt="GitHub"></a>
</div>

## Update
- 2025/10/09: We release **[Qwen-Image-Edit-2509-Pruning-13B-4steps](https://huggingface.co/OPPOer/Qwen-Image-Edit-2509-Pruning)**
- 2025/09/29: We release **[Qwen-Image-Edit-2509-Pruning-14B](https://huggingface.co/OPPOer/Qwen-Image-Edit-2509-Pruning)**
- 2025/09/28: We release **[Qwen-Image-Edit-Pruning-13B-4steps](https://huggingface.co/OPPOer/Qwen-Image-Edit-Pruning)** 


## Introduction
This open-source project is based on Qwen-Image-Edit. We prune the model by removing 20 of its transformer layers and retaining the weights of the remaining 40, reducing the model to 13.6B parameters. The pruned version will continue to be iterated on, so please stay tuned.

<div align="center">
  <img src="bench-2509.png" alt="Benchmark results for the pruned 2509 models">
</div>
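
For reference, the pruning itself amounts to dropping whole transformer blocks from the DiT backbone. The sketch below illustrates the idea on the base model; it assumes the diffusers transformer exposes its blocks as `transformer_blocks`, and the choice of layers to drop is hypothetical. The released checkpoints already contain the pruned, re-trained weights, so none of this is needed for inference.

```python
import torch
from diffusers import QwenImageEditPipeline

# Illustrative sketch only: the published checkpoints are already pruned.
pipe = QwenImageEditPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit", torch_dtype=torch.bfloat16
)

blocks = pipe.transformer.transformer_blocks  # assumed attribute name
drop = set(range(40, 60))                     # hypothetical: remove 20 of the 60 layers
keep = [i for i in range(len(blocks)) if i not in drop]
pipe.transformer.transformer_blocks = torch.nn.ModuleList(blocks[i] for i in keep)
# A full implementation would also update the transformer's config (num_layers)
# and fine-tune the pruned model to recover quality.
```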

## Quick Start

Install the latest versions of PyTorch and diffusers (install diffusers from source, since the Qwen image-editing pipelines are only available in recent builds):
```bash
pip install torch
pip install git+https://github.com/huggingface/diffusers
```

### Qwen-Image-Edit-2509-14B Inference
```python
import os
import torch
from PIL import Image
from diffusers import QwenImageEditPlusPipeline
model_name = f"OPPOer/Qwen-Image-Edit-2509-Pruning"
pipeline = QwenImageEditPlusPipeline.from_pretrained(model_name, torch_dtype=torch.bfloat16)
print("pipeline loaded")
pipeline.to('cuda')
pipeline.set_progress_bar_config(disable=None)
image1 = Image.open("input1.jpg")
image2 = Image.open("input2.jpg")
prompt = "Let the ancient costume beauty in the second picture sit on the sofa in the first picture"
inputs = {
    "image": [image1, image2],
    "prompt": prompt,
    "generator": torch.manual_seed(0),
    "true_cfg_scale": 4.0,
    "negative_prompt": " ",
    "num_inference_steps": 40,
    "guidance_scale": 1.0,
    "num_images_per_prompt": 1,
}
with torch.inference_mode():
    output = pipeline(**inputs)
    output_image = output.images[0]
    output_image.save("output_image_edit_plus.png")
    print("image saved at", os.path.abspath("output_image_edit_plus.png"))
```
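
If the 14B pipeline does not fit on your GPU, diffusers' standard offloading hooks trade speed for memory. When offloading is enabled, skip the `pipeline.to('cuda')` call above:

```python
# Moves each sub-model to the GPU only while it is running
pipeline.enable_model_cpu_offload()

# More aggressive (and slower) alternative: offload layer by layer
# pipeline.enable_sequential_cpu_offload()
```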

### Qwen-Image-Edit-2509-13B Inference
```python
import os
import torch
from PIL import Image
from diffusers import QwenImageEditPlusPipeline
model_name = f"OPPOer/Qwen-Image-Edit-2509-Pruning/Qwen-Image-Edit-2509-13B-4steps"
pipeline = QwenImageEditPlusPipeline.from_pretrained(model_name, torch_dtype=torch.bfloat16)
print("pipeline loaded")
pipeline.to('cuda')
pipeline.set_progress_bar_config(disable=None)
image1 = Image.open("input1.jpg")
image2 = Image.open("input2.jpg")
prompt = "Let the ancient costume beauty in the second picture sit on the sofa in the first picture"
inputs = {
    "image": [image1, image2],
    "prompt": prompt,
    "generator": torch.manual_seed(0),
    "true_cfg_scale": 1.0,
    "negative_prompt": " ",
    "num_inference_steps": 4,
    "guidance_scale": 1.0,
    "num_images_per_prompt": 1,
}
with torch.inference_mode():
    output = pipeline(**inputs)
    output_image = output.images[0]
    output_image.save("output_image_edit_plus.png")
    print("image saved at", os.path.abspath("output_image_edit_plus.png"))
```
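
Because the distilled model needs only 4 denoising steps, it is cheap to generate several candidates and keep the best edit. A small sketch, reusing `pipeline` and `inputs` from the block above:

```python
# Sweep a few seeds; each 4-step run is fast
for seed in range(4):
    inputs["generator"] = torch.manual_seed(seed)
    with torch.inference_mode():
        candidate = pipeline(**inputs).images[0]
    candidate.save(f"output_seed_{seed}.png")
```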