ussoewwin committed
Commit 505e813 · verified · 1 Parent(s): acf7f7e

Update README.md

Files changed (1): README.md +4 -4
README.md CHANGED
@@ -18,8 +18,8 @@ base_model: Wan-AI/Wan2.2-I2V-A14B
 ---
 ## Model Files
 
-- `wan2.2-i2v-a14b-high-FP16.gguf`: High-noise model in **FP16 format (not quantized)**
-- `wan2.2-i2v-a14b-low-FP16.gguf`: Low-noise model in **FP16 format (not quantized)**
+- `wan2.2_i2v_high_noise_14B_fp16.gguf`: High-noise model in **FP16 format (not quantized)**
+- `wan2.2_i2v_low_noise_14B_fp16`: Low-noise model in **FP16 format (not quantized)**
 
 ## Format Details
 
@@ -28,8 +28,8 @@ base_model: Wan-AI/Wan2.2-I2V-A14B
 - Format: GGUF container with **FP16 precision (unquantized)**
 - Original model size: ~27B parameters (14B active per step)
 - File sizes:
-  - high: 26.8 GB for FP16)
-  - low: 26.8 GB for FP16)
+  - high: 26.8 GB for FP16
+  - low: 26.8 GB for FP16
 
 ## Why FP16 in GGUF?
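The README changed above describes GGUF container files holding unquantized FP16 weights. As a quick sanity check after downloading such a file, the fixed-size GGUF header (magic, version, tensor count, metadata KV count — layout from the public GGUF specification) can be read with only the standard library; this is a minimal sketch, not tied to this repository's tooling:

```python
import struct

def read_gguf_header(path):
    """Read the fixed-size GGUF header: magic, version, tensor count, metadata KV count.

    Per the GGUF spec: 4-byte magic b"GGUF", then uint32 version,
    then uint64 tensor_count and uint64 metadata_kv_count (little-endian).
    """
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file: magic={magic!r}")
        (version,) = struct.unpack("<I", f.read(4))
        tensor_count, metadata_kv_count = struct.unpack("<QQ", f.read(16))
    return {"version": version, "tensors": tensor_count, "metadata_kvs": metadata_kv_count}
```

A truncated or corrupted download typically fails the magic check immediately, which is cheaper than waiting for a loader to reject a ~27 GB file.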