# LoRA of Togawa Sakiko for Wan2.1

Trained and run for inference with DiffSynth-Studio.

- Base model: Wan-AI/Wan2.1-T2V-1.3B
- Dataset: togawa_sakiko_bangdreamitsmygo (expected layout sketched below)
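
DiffSynth-Studio's Wan LoRA training scripts expect the dataset as a folder of video clips plus a `metadata.csv` that maps each clip to its caption. The sketch below shows one way such a file could be assembled from clips paired with Danbooru-style tag `.txt` files; the folder layout, the `train/` subdirectory, and the `build_metadata` helper are illustrative assumptions, not files shipped in this repository.

```python
import csv
from pathlib import Path

def build_metadata(dataset_dir: str) -> None:
    """Write metadata.csv (columns: file_name,text) for LoRA training.

    Assumes each clip in <dataset_dir>/train has a caption file with the
    same stem, e.g. clip_0001.mp4 + clip_0001.txt with comma-separated tags.
    """
    root = Path(dataset_dir)
    rows = []
    for video in sorted((root / "train").glob("*.mp4")):
        caption_file = video.with_suffix(".txt")
        caption = caption_file.read_text(encoding="utf-8").strip() if caption_file.exists() else ""
        rows.append({"file_name": video.name, "text": caption})

    with open(root / "metadata.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["file_name", "text"])
        writer.writeheader()
        writer.writerows(rows)

# Hypothetical local path for the dataset; adjust to where the clips live.
build_metadata("data/togawa_sakiko_bangdreamitsmygo")
```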
## Inference
```python
import torch
from diffsynth import ModelManager, WanVideoPipeline, save_video, VideoData

model_manager = ModelManager(torch_dtype=torch.bfloat16, device="cpu")
model_manager.load_models([
    "models/Wan-AI/Wan2.1-T2V-1.3B/diffusion_pytorch_model.safetensors",
    "models/Wan-AI/Wan2.1-T2V-1.3B/models_t5_umt5-xxl-enc-bf16.pth",
    "models/Wan-AI/Wan2.1-T2V-1.3B/Wan2.1_VAE.pth",
])
model_manager.load_lora("models/lightning_logs/version_2/checkpoints/epoch=9-step=1250.ckpt", lora_alpha=1.0)
pipe = WanVideoPipeline.from_model_manager(model_manager, device="cuda")
pipe.enable_vram_management(num_persistent_param_in_dit=None)

video = pipe(
    prompt="1girl,solo,long_hair,blush,smile,bangs,shirt,ribbon,closed_mouth,hair_ribbon,yellow_eyes,grey_hair,sidelocks,outdoors,day,blunt_bangs,sailor_collar,blurry,official_alternate_costume,tree,looking_to_the_side,black_ribbon,depth_of_field,blurry_background,looking_away,portrait,light_blush,bush,brown_shirt,official_alternate_hairstyle",
    negative_prompt="bad_quality,low_quality,low_res,low_resolution,low_definition,low_definition_video,low_definition_image,low_definition_video_image,low_definition_video_image_quality,low_definition_video_image_quality_video,low_definition_video_image_quality_video_image,low_definition_video_image_quality_video_image_video,low_definition_video_image_quality_video_image_video_image",
    num_inference_steps=50,
    seed=0, tiled=True,
)
save_video(video, "video.mp4", fps=30, quality=5)
```
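
Because the pipeline object is reusable, a simple loop can render the same prompt with several seeds to judge how consistently the LoRA reproduces the character. The sketch below only relies on the `pipe` and `save_video` calls shown above; the shortened prompt, negative prompt, and seed list are arbitrary choices for illustration.

```python
# Reuse the pipeline built above and sweep a few seeds for comparison.
prompt = "1girl,solo,long_hair,blush,smile,grey_hair,yellow_eyes,outdoors,day,portrait"
for seed in (0, 1, 2, 3):
    video = pipe(
        prompt=prompt,
        negative_prompt="bad_quality,low_quality,low_res",
        num_inference_steps=50,
        seed=seed, tiled=True,
    )
    save_video(video, f"video_seed{seed}.mp4", fps=30, quality=5)
```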