How about a T2V v1 GGUF?
The V1 version seems to look better. V2 is too cartoonish, with big boobs only.
I can try to do it; let's see if anyone else asks for it in the next few days.
I would love that as well. I agree with tech77 that version 1 is better. Actually, after testing all the T2V models on Civitai, I like this one the most.
I'm now uploading the v1.0
Cheers.
Thank you, BigDannyPt. These GGUFs mean a lot. You're awesome.
A quick question, though: do you know if the one from Civitai has the CLIP embedded? That one is 21 GB in size and your Q8 is 15 GB. Perhaps it includes the 'nsfw_wan_umt5-xxl_fp8_scaled.safetensors'?
I think I somehow messed up the low model.
I got an error when running it through RunPod, and I think I broke it.
Is the high model running correctly? No error if you load only it?
I'll try to run it again today to see if it works correctly
The original model has the CLIP and VAE embedded; this one doesn't, so you would need to add your own. The tests I did were with the normal ones: Q6 for the CLIP and FP32 for the VAE.
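The size gap between the 21 GB Civitai checkpoint and the 15 GB Q8 GGUF is consistent with the CLIP and VAE simply not being in the GGUF. Here is a rough back-of-envelope sketch; all parameter counts and bits-per-weight figures are assumptions for illustration, not measured values.

```python
# Rough estimate of why a transformer-only Q8 GGUF is smaller than a
# checkpoint that also embeds the text encoder and VAE.
# Every number below is an assumption, not taken from either file.

def gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate file size in GB: params * bits / 8, plus ~2% overhead."""
    return n_params * bits_per_weight / 8 / 1e9 * 1.02

dit_params  = 14e9   # diffusion transformer (assumed)
clip_params = 5.7e9  # umt5-xxl text encoder (assumed)
vae_params  = 0.2e9  # video VAE (assumed)

transformer_only = gguf_size_gb(dit_params, 8.5)   # Q8_0 is ~8.5 bits/weight
clip_gb = gguf_size_gb(clip_params, 8.0)           # fp8-scaled encoder (assumed)
vae_gb  = gguf_size_gb(vae_params, 32.0)           # fp32 VAE (assumed)
with_extras = transformer_only + clip_gb + vae_gb

print(f"transformer-only Q8: ~{transformer_only:.1f} GB")
print(f"with fp8 CLIP + fp32 VAE: ~{with_extras:.1f} GB")
```

Under those assumptions the transformer alone lands around 15 GB and the all-in-one checkpoint around 21-22 GB, which matches the two files' sizes reasonably well.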
That's great. Thanks.
Well, mistakes can happen; we're only human. Have a great day.
I'm updating the script at the same time, and the pod got stuck, so I went with a quick way of doing it since I was also running late for a dinner. Now I'm doing it the same way as with the High version.
I'll then download the Q8 High and test all the Low quants.
High seems to work fine. Only Low has problems.
The new ones that are there now are working. I just ran the test and it worked. Sorry about this.
Thank you for doing this for us. Hopefully others will appreciate this as well. :)
Thanks for the fast fix. I tested it now and it works great. And wow, so much motion; I haven't seen a model with more motion than this one. Absolutely amazing.
I've created one video at 560x940, 81 frames at 16 fps, with 8 steps (4+4), and I thought the quality was a little low: some pixelation was visible, and that was with a Q2 CLIP and FP32 VAE.
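The "(4+4)" refers to splitting the 8 sampling steps between the high-noise and low-noise models. A minimal sketch of how such a split can be computed, assuming a simple linear timestep schedule and a hypothetical `boundary` parameter (real samplers use sigma schedules, e.g. beta with shift, so the exact split differs):

```python
# Sketch of a two-expert step split for high/low-noise video models.
# The linear schedule and default boundary here are assumptions for
# illustration; they are not the actual MoE sampler implementation.

def split_steps(total_steps: int, boundary: float = 0.5) -> tuple[int, int]:
    """Return (high_noise_steps, low_noise_steps).

    Timesteps run from t = 1.0 (pure noise) down to t = 0.0; steps whose
    starting t lies above `boundary` go to the high-noise model.
    """
    high = sum(1 for i in range(total_steps)
               if 1.0 - i / total_steps > boundary)
    return high, total_steps - high

print(split_steps(8))   # -> (4, 4), the 4+4 split mentioned above
```

With this toy schedule, 8 total steps and a 0.5 boundary reproduce the 4+4 split; raising the boundary shifts more steps to the low-noise model.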
I personally like to add Candid Photography at a strength around 0.65, maybe a little higher. Right now I'm testing dr34ml4y aio at 0.5 to try and fix up the NSFW portion of this model, since I don't like all of the merged loras included. UltraSharpCC at 0.2, together with Fast Laplacian Sharpen at 0.3 and Fast Film Grain at grain_intensity 0.02, saturation_mix 0.3 and batch_size 8; the last two are from the comfyui-vrgamedevgirl repo. That grain setting is for 640×960. euler/beta, shift 12. I use the 'UNetTemporalAttentionMultiply' node at (from the top going down) 1.00, 1.00, 1.20 and 1.30.

I use a Q6 CLIP; lower than Q4 is usually bad for AI models, though I haven't found any noticeable quality loss using a Q4_K_M CLIP. I'm using the ComfyUI-MultiGPU distorch2 nodes for both models and the CLIP, together with ComfyUI-GGUF to be able to use GGUFs. Wan MoE KSampler (Advanced) lets me use only one ksampler with automatic split steps. Then WanVideo Enhance A Video (native) at 0.5, WanVideoNAG at default, RIFE TensorRT at x4, and Video Combine at 48 fps, because this model is sometimes a little too lively with motion.
Perhaps you already know most of this, but I thought I'd share in case it helps you get a better output. 8 steps with the lightx2v loras embedded in this model gives okay quality. I would say 6 steps for drafts, 8 for possible keepers, and at least 10 steps, if not 12-14, for great quality.
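The RIFE x4 plus 48 fps combination above also changes the clip's playback speed, which is part of why it tames the motion. A quick sanity check, assuming x4 interpolation turns N source frames into (N - 1) * 4 + 1 frames (the exact frame count depends on the interpolation node):

```python
# Duration of a clip after frame interpolation, for the workflow above:
# 81 generated frames at 16 fps, RIFE x4, output combined at 48 fps.
# The (N - 1) * factor + 1 frame-count formula is an assumption.

def output_duration(frames: int, interp_factor: int, out_fps: float) -> float:
    """Playback duration in seconds after interpolation."""
    return ((frames - 1) * interp_factor + 1) / out_fps

print(output_duration(81, 1, 16))   # source clip: ~5.06 s
print(output_duration(81, 4, 48))   # interpolated, played at 48 fps: ~6.69 s
```

Interpolating 16 fps footage by x4 yields 64 fps worth of frames; combining them at 48 fps plays everything at roughly 0.75x speed, which is a slight slow-motion effect that calms the model's very lively motion.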
