---
license: apache-2.0
language:
- en
- zh
base_model:
- Comfy-Org/Wan_2.1_ComfyUI_repackaged
- Wan-AI/Wan2.1-I2V-14B-480P
- Wan-AI/Wan2.1-I2V-14B-720P
tags:
- merge
pipeline_tag: image-to-video
---
A simple 50/50 merge of the Wan 2.1 480p and 720p I2V models. On their own, each model seems to handle resolutions above and below its nominal resolution fairly well, so maybe the two would do well merged?

I don't have the memory required to merge the full-precision weights, so I used the fp8 weights instead. The code used to make the merge is below.

```py
import gc

import torch
from tqdm import tqdm
from safetensors import safe_open
from safetensors.torch import save_file

model1_path = "wan2.1_i2v_480p_14B_fp8_e4m3fn.safetensors"
model2_path = "wan2.1_i2v_720p_14B_fp8_e4m3fn.safetensors"
output_path = "wan2.1_i2v_480p_720p_14B_fp8_e4m3fn.safetensors"

with (
    safe_open(model1_path, framework="pt", device="cpu") as f_1,
    safe_open(model2_path, framework="pt", device="cpu") as f_2,
):
    # Average the two checkpoints key by key (simple 50/50 merge).
    mixed_tensors = {}
    for key in tqdm(f_1.keys()):
        t_1 = f_1.get_tensor(key)
        t_2 = f_2.get_tensor(key)

        if t_1.dtype == torch.float8_e4m3fn or t_2.dtype == torch.float8_e4m3fn:
            # fp8 weights: upcast to fp32, average, then move to the GPU and cast back to fp8.
            mixed_tensors[key] = (
                t_1.to(torch.float32)
                .add_(t_2.to(torch.float32))
                .mul_(0.5)
                .to("cuda")
                .to(torch.float8_e4m3fn)
            )
        else:
            # Non-fp8 tensors: average in place and move to the GPU.
            mixed_tensors[key] = t_1.add_(t_2).mul_(0.5).to("cuda")

        # Free the source tensors before the next iteration to keep CPU RAM usage down.
        del t_1, t_2
        gc.collect()

save_file(mixed_tensors, output_path)
```
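If you want a quick sanity check on the output, something like the following sketch should work: re-open the source and merged files, confirm the key sets match, and spot-check that a merged tensor is roughly the average of its two parents. The tolerance values are rough guesses on my part, since the fp8 round trip is lossy.

```py
import torch
from safetensors import safe_open

# Sanity-check sketch: the merged file should have the same keys as the sources,
# and each merged tensor should be approximately the average of its parents.
with (
    safe_open("wan2.1_i2v_480p_14B_fp8_e4m3fn.safetensors", framework="pt", device="cpu") as f_a,
    safe_open("wan2.1_i2v_720p_14B_fp8_e4m3fn.safetensors", framework="pt", device="cpu") as f_b,
    safe_open("wan2.1_i2v_480p_720p_14B_fp8_e4m3fn.safetensors", framework="pt", device="cpu") as f_m,
):
    assert set(f_a.keys()) == set(f_m.keys())

    key = next(iter(f_m.keys()))
    a = f_a.get_tensor(key).to(torch.float32)
    b = f_b.get_tensor(key).to(torch.float32)
    m = f_m.get_tensor(key).to(torch.float32)

    # Loose tolerances (rough guesses): fp8 e4m3 only carries a 3-bit mantissa,
    # so the stored values won't match the fp32 average exactly.
    print(key, torch.allclose(m, (a + b) * 0.5, rtol=0.1, atol=0.1))
```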