Thanks a lot!

#2
by RuneXX - opened

This model just came out, and QuantStack is already on it ;-)
Thanks a lot for these GGUF models. They're such a lifesaver sometimes, when the VRAM screams OOM ;-)

Just a quick test with Q5

(not sure if my workflow is 100% correct; ComfyUI might add a dedicated node for it in an update, but at least it works ;-))

q5.png

QuantStack org
edited Sep 23

Thanks for sharing, and for the support!!

Will a Q8 come, and what size will it be?

Thx!

Just a quick test with Q5 [...]

Could you share your multi image workflow?

QuantStack org

Will a Q8 come, and what size will it be?

Thx!

Uploaded, sorry for the long wait. I was having issues uploading via the web browser; I switched to uploading via the terminal and had no more issues.
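
(For anyone who hits the same web upload issue: here is a minimal sketch of the terminal route using the huggingface_hub Python API. The file name and repo id below are just placeholders, swap in your own.)

```python
# Minimal sketch: upload a large GGUF from the terminal instead of the browser.
# File name and repo id are placeholders - replace them with your own.
from huggingface_hub import HfApi

api = HfApi()  # uses the token from `huggingface-cli login` or HF_TOKEN
api.upload_file(
    path_or_fileobj="Qwen-Image-Edit-2509-Q8_0.gguf",  # local file to upload
    path_in_repo="Qwen-Image-Edit-2509-Q8_0.gguf",     # destination path in the repo
    repo_id="QuantStack/Qwen-Image-Edit-2509-GGUF",    # placeholder repo id
)
```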

Could you share your multi image workflow?

It was a bit of a Frankenstein mix of stitching images together and sending the result to the text encoder.
It's much easier now: ComfyUI added a new node, TextEncodeQwenImageEditPlus.

With that node it's easy: simply connect up to 3 images to the node's 3 image inputs ;-) Just use the default Qwen Image Edit workflow and replace the text encoder node.
qwenimageeditplus.png
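
In case you drive ComfyUI through its API format instead of the UI, the encoder part looks roughly like this. Treat it as a sketch: the node ids and input names here are from memory, not a real export, so double-check them against your ComfyUI version.

```python
# Rough sketch of the TextEncodeQwenImageEditPlus piece of an API-format workflow.
# Node ids and input names are illustrative, not copied from a real export.
encode_node = {
    "10": {
        "class_type": "TextEncodeQwenImageEditPlus",
        "inputs": {
            "clip": ["2", 0],      # output of the (GGUF) CLIP loader node
            "vae": ["3", 0],       # output of the VAE loader node
            "prompt": "Put the red leather jacket on the girl. Leave everything else unchanged.",
            "image1": ["20", 0],   # up to three LoadImage outputs
            "image2": ["21", 0],
            "image3": ["22", 0],
        },
    },
    # ...the rest of the default Qwen Image Edit workflow stays as-is...
}
```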

Thanks. It worked, but so far the result is quite far away from yours. May I get your prompt? It may also relate to the poor pictures; I just sliced your sample above. I need to find some better source pics, I think. I also used the Q5 GGUF; it doesn't fit into VRAM and takes quite long, however :(

QuantStack org

Thanks. It worked, but so far the result is quite far away from yours. May I get your prompt? It may also relate to the poor pictures; I just sliced your sample above. I need to find some better source pics, I think. I also used the Q5 GGUF; it doesn't fit into VRAM and takes quite long, however :(

You should use the distorch v2 node, that one has nearly no speed penalty with offloading (;

Can this use the "Qwen-Image-Lightning-4steps-V1.0"?

QuantStack org

Can this use the "Qwen-Image-Lightning-4steps-V1.0"?

Yes it can, though it makes more sense to use the v2.0 (;

Can this use the "Qwen-Image-Lightning-4steps-V1.0"?

Yes it can, though it makes more sense to use the v2.0 (;

LOL, of course! Thanks!

@MiAngel
Yes, don't chop up my image above; I downscaled it so as not to spam this thread with huge images ;-) (Qwen images are quite large compared to, say, SDXL, Flux, etc.)
Use high-quality images as the images you want to edit.

Prompting is really where the challenge often is. The new Qwen Image Edit model seems a little more "forgiving" (I haven't had a chance to test it properly yet), but the previous Qwen Image for sure loved very specific words.

"Replace X with Y" to replace something, "Transform X to Y" to completely change say background or camera angle. And then "Preserve XXX, leave everything else unchanged" etc
A good guide here: https://www.reddit.com/r/StableDiffusion/comments/1n1n81o/qwenimageedit_prompt_guide_the_complete_playbook/

And looking at how the Qwen team write their own examples is a really good help:
https://qwen.ai/blog?id=a6f483777144685d33cd3d2af95136fcbeb57652&from=research.research-list (very specific syntax such as "Obtain the front view", etc.)

That being said, the new model seems to be more forgiving about not using its favorite words ;-) For the image I created above, I just used "Put the red leather jacket on the girl. Put the caps on the girl. Leave everything else unchanged". It seemed to work fine.

What do I need to do to my workflow to make this work properly? I swapped out the "TextEncodeQwenImageEdit" with "TextEncodeQwenImageEdit" and the model of course, and I am getting worse results. Maybe someone can share a good workflow? Thanks

QuantStack org

I have one; it is a bit messy, so no guarantees, but it should work if you tweak it a bit (;
https://pastebin.com/TSpdm1sj

QuantStack org

There should be a simpler one on the ComfyUI wiki too, though.

You'll find the basic starter workflow inside the ComfyUI menu settings as well.

workflow.png

(just replace the model loader with the Unet/GGUF loader, and the clip loader with the GGUF clip loader too, should you want)
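
(If it helps, here is a rough sketch of what those two loader nodes look like as an API-format fragment once swapped in. The class and input names come from the ComfyUI-GGUF custom nodes as I remember them, so verify them in your own install.)

```python
# Rough sketch of the swapped-in GGUF loader nodes (API-format fragment).
# Class names / input names are from memory of ComfyUI-GGUF - verify locally.
gguf_loaders = {
    "1": {
        "class_type": "UnetLoaderGGUF",
        "inputs": {"unet_name": "Qwen-Image-Edit-2509-Q5_K_M.gguf"},  # example quant
    },
    "2": {
        "class_type": "CLIPLoaderGGUF",
        "inputs": {
            "clip_name": "Qwen2.5-VL-7B-Instruct-Q4_K_M.gguf",
            "type": "qwen_image",  # CLIP type for Qwen Image, as I recall
        },
    },
}
```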

You'll find the basic starter workflow inside the ComfyUI menu settings as well.

(just replace the model loader with the Unet/GGUF loader, and the clip loader with the GGUF clip loader too, should you want)

Yeah, that is the one I am using, exactly as you describe. I swapped in the UNet loader and activated the Lightning 4-step LoRA (then switched to V2), but my results are somehow worse than when I did the same thing with the older model. Also, do I not need to replace "TextEncodeQwenImageEdit" with "TextEncodeQwenImageEditPlus"? I was told I needed to for 2509.

EDIT: WAIT, I didn't actually see the 2509 one! I was using the old one, let me go check. Maybe I need to update again. Thanks

Thanks!

You'll find the basic starter workflow inside the ComfyUI menu settings as well.

(just replace the model loader with the Unet/GGUF loader, and the clip loader with the GGUF clip loader too, should you want)

OK, I got it working with this. Thank you! I can't get the CLIPLoader (GGUF) to work, though. I am using "Qwen-Image-Edit-2509-Q3_K_M.gguf", and for my CLIP I am using "Qwen2.5-VL-7B-Instruct-Q4_K_M.gguf" (with the CLIPLoader GGUF), but I get this error and I don't know what I am doing wrong:

TextEncodeQwenImageEditPlus

mat1 and mat2 shapes cannot be multiplied (704x1280 and 3840x1280)

Thanks for all the help!

I can't get the CLIPLoader (GGUF) to work, though. [...] I get this error and I don't know what I am doing wrong:

TextEncodeQwenImageEditPlus

mat1 and mat2 shapes cannot be multiplied (704x1280 and 3840x1280)

You'll need the mmproj file. I believe it is also in the same repo as all the quants. Put it in the same folder as the main CLIP model and the node should automatically find it.

You'll need the mmproj file. I believe it is also in the same repo as all the quants. Put it in the same folder as the main CLIP model and the node should automatically find it.

Thanks. I actually do have the "mmproj-BF16.gguf" file in the same folder (text_encoders) when I get that error. I don't understand that error at all.

Then that is a naming issue. I believe the node requires that you name the mmproj file with the base model name included. For this one, I believe you'll at least need to add something like "qwen-2.5-vl-7b" to the mmproj's filename to get it to work. Also, you might want to check whether the node actually found the mmproj file by checking its output in the console. It'll let you know which file it is picking.
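
If it helps, here is a tiny script that does the rename and then lists what the folder actually contains (the paths are just examples for a default ComfyUI layout, adjust them to your install):

```python
# Rename the mmproj so its filename includes the base model name, then
# list the mmproj files the folder contains. Paths are examples only.
from pathlib import Path

enc_dir = Path("ComfyUI/models/text_encoders")
old = enc_dir / "mmproj-BF16.gguf"
new = enc_dir / "Qwen2.5-VL-7B-Instruct-mmproj-BF16.gguf"

if old.exists() and not new.exists():
    old.rename(new)

print("mmproj files found:", [p.name for p in enc_dir.glob("*mmproj*")])
```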

QuantStack org
edited Sep 24

Use this one: Qwen2.5-VL-7B-Instruct-mmproj-BF16.gguf. I have it named like this.

Then that is a naming issue. I believe the node requires that you name the mmproj file with the base model name included. For this one, I believe you'll at least need to add something like "qwen-2.5-vl-7b" to the mmproj's filename to get it to work. Also, you might want to check whether the node actually found the mmproj file by checking its output in the console. It'll let you know which file it is picking.

You are the man!!!

I renamed it to "Qwen2.5-VL-7B-Instruct-mmproj-BF16.gguf" and it found it and worked! I didn't see much improvement in speed after a few renders, though. Maybe I should drop to the Q3 CLIP to match the model.

EDIT:

Use this one: Qwen2.5-VL-7B-Instruct-mmproj-BF16.gguf. I have it named like this.

Thanks dude! I quoted the wrong person last time but huge thanks to both of you for helping.

Does the Lightning LoRA affect character consistency?

YarvixPA changed discussion status to closed
