---
license: apache-2.0
tags:
- gguf
- wan2.2
- i2v
- t2v
- video-generation
- wan-ai
- comfyui
- fp16
language:
- en
library_name: comfyui
pipeline_tag: image-to-video
base_model:
- Wan-AI/Wan2.2-I2V-A14B
- Wan-AI/Wan2.2-T2V-A14B
---

## Model Files
- `wan2.2_i2v_high_noise_14B_fp16.gguf`: High-noise I2V model in FP16 format (not quantized)
- `wan2.2_i2v_low_noise_14B_fp16.gguf`: Low-noise I2V model in FP16 format (not quantized)
- `wan2.2_t2v_high_noise_14B_fp16.gguf`: High-noise T2V model in FP16 format (not quantized)
- `wan2.2_t2v_low_noise_14B_fp16.gguf`: Low-noise T2V model in FP16 format (not quantized)
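The files can be fetched with `huggingface_hub`; a minimal sketch, where `REPO_ID` is a placeholder to replace with this repository's actual id:

```python
# Minimal download sketch using huggingface_hub.
from huggingface_hub import hf_hub_download

REPO_ID = "your-username/wan2.2-fp16-gguf"  # placeholder, not the real repo id

for filename in [
    "wan2.2_i2v_high_noise_14B_fp16.gguf",
    "wan2.2_i2v_low_noise_14B_fp16.gguf",
    "wan2.2_t2v_high_noise_14B_fp16.gguf",
    "wan2.2_t2v_low_noise_14B_fp16.gguf",
]:
    # Files land in the local Hugging Face cache; local_dir= can place them
    # directly into ComfyUI/models/unet/, where the ComfyUI-GGUF loader
    # node typically looks for GGUF checkpoints.
    path = hf_hub_download(repo_id=REPO_ID, filename=filename)
    print(path)
```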
## Format Details
- Important: These are NOT quantized models but FP16 precision models in GGUF container format
- Base model: Wan-AI/Wan2.2-I2V-A14B
- Base model: Wan-AI/Wan2.2-T2V-A14B
- Format: GGUF container with FP16 precision (unquantized)
- Original model size: ~27B parameters (14B active per step)
- File sizes:
  - High-noise: 28.6 GB (SHA256: 3a7d4e...)
  - Low-noise: 28.6 GB (SHA256: 1b4e28...)
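To confirm a download is intact, the SHA256 can be recomputed locally and compared against the full hash in this repository's file listing (the values above are truncated). A minimal sketch:

```python
# Recompute a file's SHA256 in chunks (the files are ~28.6 GB,
# so reading them into memory at once is not an option).
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare the printed digest against the full hash from the repo's
# file listing; the hashes shown above are truncated.
print(sha256_of("wan2.2_i2v_high_noise_14B_fp16.gguf"))
```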
## Why FP16 in GGUF?
While GGUF is typically used for quantized models, the ComfyUI-GGUF extension also supports:
- Loading unquantized FP16 models packaged in the GGUF container format
- Using these files directly in existing ComfyUI workflows, which keeps them compatible with the same loader nodes as quantized GGUF checkpoints
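One way to verify the "unquantized" claim yourself is to inspect the tensor storage types recorded in the GGUF header. A minimal sketch using the `GGUFReader` API from the `gguf` Python package (`pip install gguf`):

```python
# Inspect a GGUF file's header and confirm tensors are stored as F16
# rather than one of the quantized GGML types (Q4_K, Q8_0, ...).
from collections import Counter

from gguf import GGUFReader  # pip install gguf

reader = GGUFReader("wan2.2_i2v_high_noise_14B_fp16.gguf")

# Count how many tensors use each storage type; for these files the
# counts should be dominated by F16 entries.
type_counts = Counter(t.tensor_type.name for t in reader.tensors)
print(type_counts)
```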