---
license: apache-2.0
tags:
- gguf
- wan2.2
- i2v
- t2v
- video-generation
- wan-ai
- comfyui
- fp16
language:
- en
library_name: comfyui
pipeline_tag: image-to-video
base_model:
- Wan-AI/Wan2.2-I2V-A14B
- Wan-AI/Wan2.2-T2V-A14B
---
## Model Files

- `wan2.2_i2v_high_noise_14B_fp16.gguf`: High-noise model in **FP16 format (not quantized)**
- `wan2.2_i2v_low_noise_14B_fp16.gguf`: Low-noise model in **FP16 format (not quantized)**
- `wan2.2_t2v_high_noise_14B_fp16.gguf`: High-noise model in **FP16 format (not quantized)**
- `wan2.2_t2v_low_noise_14B_fp16.gguf`: Low-noise model in **FP16 format (not quantized)**
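
Individual files can be fetched with `huggingface_hub`. A minimal sketch; the repo id below is a placeholder for this repository's actual id:

```python
from huggingface_hub import hf_hub_download

# Placeholder repo id -- replace with this repository's actual id.
path = hf_hub_download(
    repo_id="your-username/wan2.2-gguf-fp16",
    filename="wan2.2_i2v_high_noise_14B_fp16.gguf",
)
print(path)  # local cache path of the downloaded GGUF file
```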

## Format Details

- **Important**: These are **NOT quantized models** but FP16-precision models in the GGUF container format
- Base model (I2V): [Wan-AI/Wan2.2-I2V-A14B](https://huggingface.co/Wan-AI/Wan2.2-I2V-A14B)
- Base model (T2V): [Wan-AI/Wan2.2-T2V-A14B](https://huggingface.co/Wan-AI/Wan2.2-T2V-A14B)
- Format: GGUF container with **FP16 precision (unquantized)**
- Original model size: ~27B parameters (14B active per step)
- File sizes:
  - High-noise: 28.6 GB (SHA256: 3a7d4e...)
  - Low-noise: 28.6 GB (SHA256: 1b4e28...)
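
To confirm a download is intact, the checksum can be recomputed locally and compared against the published value for that file (only truncated prefixes are shown above). A minimal sketch using Python's standard `hashlib`:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in 1 MiB chunks so a ~28 GB GGUF never sits in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

digest = sha256_of("wan2.2_i2v_high_noise_14B_fp16.gguf")
print(digest)  # compare against the published SHA256 for this file
```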

## Why FP16 in GGUF?

While GGUF is typically used for quantized models, the [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF) extension also supports:
- Loading unquantized FP16 models stored in the GGUF container format
- Drop-in compatibility with existing ComfyUI workflows
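
One way to verify that the tensors really are stored unquantized is to inspect a file with the `gguf` Python package (the reader API from the llama.cpp project). A minimal sketch:

```python
from gguf import GGUFReader

reader = GGUFReader("wan2.2_i2v_high_noise_14B_fp16.gguf")

# Count tensors per dtype; an FP16 (unquantized) file should be
# dominated by F16 tensors rather than quantized block types.
counts = {}
for tensor in reader.tensors:
    dtype = tensor.tensor_type.name  # e.g. "F16", "F32", "Q8_0", ...
    counts[dtype] = counts.get(dtype, 0) + 1

print(counts)
```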