Yuanshi nielsr HF Staff committed on
Commit 5621cf5 · verified · 1 Parent(s): b5da6d2

Refine model card content and add pipeline tag (#1)


- Refine model card content and add pipeline tag (3c6131e61a638f75af086b6710e350e49ab43896)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1)
  1. README.md +2 -1
README.md CHANGED

@@ -7,6 +7,7 @@ tags:
 - Video Generation
 - Vision Translation
 - Bridge Model
+pipeline_tag: any-to-any
 ---

 # 🎥 ViBT: Vision Bridge Transformer at Scale

@@ -17,4 +18,4 @@ tags:
 <a href="https://github.com/Yuanshi9815/ViBT"><img src="https://img.shields.io/badge/GitHub-Code-blue.svg?logo=github&" alt="GitHub"></a>
 </div>

-We introduce **Vision Bridge Transformer (ViBT)**, a large-scale instantiation of Brownian Bridge Models designed for conditional generation. Unlike traditional diffusion models that transform noise into data, Bridge Models directly model the trajectory between inputs and outputs, creating an efficient data-to-data translation paradigm. By scaling these models to 20B and 1.3B parameters, we demonstrate their effectiveness for image and video translation tasks. To support this scale, we adopt a Transformer architecture and propose a variance-stabilized velocity-matching objective for robust training. Together, these advances highlight the power of scaling Bridge Models for instruction-based image editing and complex video translation.
+This repository introduces **Vision Bridge Transformer (ViBT)**, a large-scale instantiation of Brownian Bridge Models designed for efficient conditional generation. ViBT directly models the trajectory between inputs and outputs, creating an efficient data-to-data translation paradigm. The models demonstrate effectiveness for various image and video translation tasks, including instruction-based image editing and complex video translation.
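The model card above describes ViBT as an instantiation of Brownian Bridge Models that "directly model the trajectory between inputs and outputs". As an illustrative sketch only (not the repository's actual training code), the standard Brownian bridge marginal between a paired input `x0` and target `x1` pins both endpoints and injects noise that vanishes at `t = 0` and `t = 1`:

```python
import numpy as np

def bridge_sample(x0, x1, t, sigma=1.0, rng=None):
    """Sample the Brownian bridge marginal between x0 and x1 at time t in [0, 1].

    Standard form (illustrative, not ViBT's exact formulation):
        x_t = (1 - t) * x0 + t * x1 + sigma * sqrt(t * (1 - t)) * eps,
    where eps is standard Gaussian noise. The noise scale is zero at both
    endpoints, so the trajectory is pinned to the input at t=0 and the
    target at t=1 -- a data-to-data path rather than noise-to-data.
    """
    rng = rng or np.random.default_rng()
    eps = rng.standard_normal(np.shape(x0))
    return (1 - t) * x0 + t * x1 + sigma * np.sqrt(t * (1 - t)) * eps
```

This contrasts with diffusion models, whose forward process ends in pure noise; here the endpoint of the path is the conditioning input itself, which is what makes the translation paradigm "data-to-data".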