Update README.md
README.md CHANGED
@@ -18,8 +18,8 @@ base_model: Wan-AI/Wan2.2-I2V-A14B
 ---
 ## Model Files
 
-- `wan2.
-- `wan2.
+- `wan2.2_i2v_high_noise_14B_fp16.gguf`: High-noise model in **FP16 format (not quantized)**
+- `wan2.2_i2v_low_noise_14B_fp16.gguf`: Low-noise model in **FP16 format (not quantized)**
 
 ## Format Details
 
@@ -28,8 +28,8 @@ base_model: Wan-AI/Wan2.2-I2V-A14B
 - Format: GGUF container with **FP16 precision (unquantized)**
 - Original model size: ~27B parameters (14B active per step)
 - File sizes:
-  - high: 26.8 GB for FP16
-  - low: 26.8 GB for FP16
+  - high: 26.8 GB for FP16
+  - low: 26.8 GB for FP16
 
 ## Why FP16 in GGUF?
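The "FP16 precision (unquantized)" claim above can be sanity-checked straight from a file's GGUF header. Below is a minimal sketch, assuming the `gguf` Python reader package from the llama.cpp project (`pip install gguf`) and a local copy of the high-noise file; the same check applies to the low-noise file.

```python
# Minimal sketch: count the tensor types stored in a GGUF file.
# Assumes the `gguf` reader package from the llama.cpp project
# (pip install gguf) and that the file has been downloaded locally.
from collections import Counter

from gguf import GGUFReader

reader = GGUFReader("wan2.2_i2v_high_noise_14B_fp16.gguf")

# Each tensor records its GGMLQuantizationType; an unquantized FP16
# export should report essentially every tensor as F16.
counts = Counter(t.tensor_type.name for t in reader.tensors)
for type_name, n in sorted(counts.items()):
    print(f"{type_name}: {n} tensors")
```

The listed sizes are consistent with this: one ~14B-parameter expert at two bytes per FP16 weight comes to roughly 27 GB, matching the ~26.8 GB reported for each of the high- and low-noise files.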