---
title: Wan 2.2 Text + Image to Video
emoji: 🔥
colorFrom: purple
colorTo: indigo
sdk: gradio
sdk_version: 6.1.0
app_file: app.py
pinned: true
license: apache-2.0
short_description: Text + Image to Video generator using Wan 2.2 5B model
---

# Wan 2.2 Text + Image to Video Generator

Generate high-quality videos from text prompts and optional images using the **Wan 2.2 TI2V-5B** model.

## Features

- **Text-to-Video**: Generate videos from text prompts only
- **Image-to-Video**: Animate static images with text prompts
- **Full Parameter Control**: Adjust all generation parameters including:
  - Sampling steps (10-50)
  - Guidance scale (1.0-10.0)
  - Sample shift (1.0-20.0)
  - Solver selection (UniPC or DPM++)
  - Custom resolution (multiples of 32)
  - Duration control (0.3-5.0 seconds)
  - Negative prompts
  - Seed control for reproducibility
- **ZeroGPU Support**: Optimized for Hugging Face Spaces with ZeroGPU hardware

## Model

- **Model**: [Wan-AI/Wan2.2-TI2V-5B](https://huggingface.co/Wan-AI/Wan2.2-TI2V-5B)
- **Paper**: [Wan 2.2 Paper](https://arxiv.org/abs/2503.20314)

## Usage

1. **Text-to-Video**: Leave the image input blank and provide a text prompt
2. **Image-to-Video**: Upload an image and provide a text prompt describing the desired animation
3. Adjust advanced settings as needed:
   - More sampling steps = higher quality, but slower generation
   - Higher guidance scale = stronger adherence to the prompt
   - Lower shift values (e.g., 3.0) are recommended for 480p videos

## Technical Details

- Frame rate: 24 FPS
- Supported resolutions: Multiples of 32 (128-1280)
- Frame count: 8-121 frames (automatically adjusted to the nearest 4n+1 value)
- Default duration: 2.0 seconds (48 frames)
- **Flash Attention**: Optional - automatically uses PyTorch's built-in attention if flash-attn is not available
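The constraints above (resolutions snapped to multiples of 32, durations converted to a 4n+1 frame count at 24 FPS) can be sketched as small helpers. This is a minimal illustration, not the Space's actual code; the function names and the choice of nearest-value rounding are assumptions:

```python
def snap_resolution(value: int, lo: int = 128, hi: int = 1280) -> int:
    """Round a requested dimension to the nearest multiple of 32,
    clamped to the supported 128-1280 range (rounding mode is an assumption)."""
    snapped = round(value / 32) * 32
    return max(lo, min(hi, snapped))

def frames_for_duration(seconds: float, fps: int = 24,
                        lo: int = 8, hi: int = 121) -> int:
    """Convert a duration to a frame count at the given FPS, clamp to the
    supported range, then adjust to the 4n+1 form the model expects."""
    n = max(lo, min(hi, round(seconds * fps)))
    return (n // 4) * 4 + 1

print(frames_for_duration(2.0))  # default 2.0 s at 24 FPS -> 49 frames
```

For example, the default 2.0-second clip maps to 48 raw frames, which the 4n+1 adjustment bumps to 49; the 5.0-second maximum maps to 120 frames, adjusted to the 121-frame cap.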

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference