ussoewwin committed
Commit cc822ed · verified · 1 Parent(s): a986c94

Update README.md

Files changed (1):
  1. README.md (+15 −80)
README.md CHANGED
@@ -1,81 +1,16 @@
- # Wan2.2 GGUF Quantized Models
-
- **Quantized GGUF versions of Wan2.2-I2V-A14B model**
- This repository contains GGUF-quantized versions of **WAN2.2-I2V-A14B**, converted for use with ComfyUI-GGUF.
-
- ## Model Files
-
- - `wan2.2_i2v_high_noise_14B_fp16.gguf`: High-noise model (used for initial denoising steps)
- - `wan2.2_i2v_low_noise_14B_fp16.gguf`: Low-noise model (used for detail refinement)
-
- ## Requirements
-
- - [ComfyUI](https://github.com/comfyanonymous/ComfyUI)
- - [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF) extension by city96
-
- ## Installation
-
- 1. Download both GGUF files and place them in `ComfyUI/models/unet/`
- 2. Install ComfyUI-GGUF extension
- 3. Restart ComfyUI
-
- ## Usage
-
- 1. Load the workflow file included in this repository (drag and drop into ComfyUI)
- 2. The workflow will automatically use:
-    - High-noise model for initial denoising steps
-    - Low-noise model for final detail refinement
-
- ## Quantization Details
-
- - Base model: [Wan-AI/Wan2.2-I2V-A14B](https://huggingface.co/Wan-AI/Wan2.2-I2V-A14B)
- - Quantization method: GGUF (converted using llama.cpp)
- - Quantization level: [Specify your quantization level, e.g., Q5_K_M]
- - Original model size: ~27B parameters (14B active per step)
- - GGUF file sizes:
-   - high: [Specify size] MB
-   - low: [Specify size] MB
-
- ## Why GGUF?
-
- GGUF format allows for:
- - Reduced memory requirements
- - Faster inference on consumer GPUs
- - Compatibility with ComfyUI-GGUF extension
- - Efficient CPU offloading options
-
- ## Performance Notes
-
- - For best results, use both high and low noise models as intended by the original architecture
- - Generation quality may vary based on quantization level
- - Recommended GPU: RTX 3090 or better for smooth 720P generation
-
- ## Original Model Information
-
- Wan2.2 is an advanced video generative model with:
- - **Effective MoE Architecture**: Separates denoising process with specialized experts
- - **Cinematic-level Aesthetics**: Detailed control over lighting, composition, and color
- - **Complex Motion Generation**: Trained on significantly larger dataset (+83.2% videos)
- - **Efficient High-Definition**: Supports 720P@24fps generation on consumer GPUs
-
- ## Community Works
-
- If you've used these GGUF models in your projects, please let us know so we can feature them!
-
- ## License Agreement
-
- The original Wan2.2 model is licensed under the Apache 2.0 License. These GGUF conversions are provided under the same terms. You are fully accountable for your use of the models, which must comply with the original license restrictions.
-
- ## Acknowledgements
-
- - [Wan-AI](https://huggingface.co/Wan-AI) for the original Wan2.2 model
- - [city96](https://github.com/city96) for ComfyUI-GGUF extension
- - [ggerganov](https://github.com/ggerganov) for llama.cpp
-
- ## Contact
-
- For issues with these GGUF files, please open an issue on [GitHub](https://github.com/yourusername/wan2.2-gguf/issues)
-
  ---
-
- 2-bit | 3-bit | 4-bit | 5-bit | 6-bit | 8-bit
  ---
+ license: apache-2.0
+ tags:
+ - gguf
+ - wan2.2
+ - i2v
+ - video-generation
+ - wan-ai
+ - comfyui
+ language:
+ - en
+ - ja
+ - zh
+ library_name: comfyui
+ pipeline_tag: image-to-video
+ base_model: Wan-AI/Wan2.2-I2V-A14B
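For reference, the installation steps described in the removed README (download both GGUF files, place them in `ComfyUI/models/unet/`, install ComfyUI-GGUF) can be sketched as a shell snippet. The repository ID is not stated on this page, so it is left as a placeholder; the actual `huggingface-cli download` call is shown in a comment rather than executed:

```shell
#!/usr/bin/env sh
set -eu

# ComfyUI install location; adjust COMFYUI_DIR to your setup.
COMFYUI_DIR="${COMFYUI_DIR:-$HOME/ComfyUI}"
MODELS_DIR="$COMFYUI_DIR/models/unet"
mkdir -p "$MODELS_DIR"

for f in wan2.2_i2v_high_noise_14B_fp16.gguf wan2.2_i2v_low_noise_14B_fp16.gguf; do
  # Replace <repo-id> with the Hugging Face repository hosting these files, e.g.:
  #   huggingface-cli download <repo-id> "$f" --local-dir "$MODELS_DIR"
  echo "would download $f into $MODELS_DIR"
done

# Install the ComfyUI-GGUF extension, then restart ComfyUI:
#   git clone https://github.com/city96/ComfyUI-GGUF "$COMFYUI_DIR/custom_nodes/ComfyUI-GGUF"
```

After restarting, the GGUF files appear in ComfyUI's "Unet Loader (GGUF)" node provided by the extension.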