
Ming-VideoMAR: Autoregressive Video Generation with Continuous Tokens

🤗 Hugging Face ｜ 📄 Paper (NeurIPS 2025)

🌍 Introduction

  • 🌐 The First NTP MLLM with Continuous Unified Vision Representations: Ming-VideoMAR is a concise and efficient decoder-only autoregressive image-to-video model with continuous tokens, combining frame-by-frame temporal generation with masked spatial generation. It identifies temporal causality and spatial bi-directionality as the first principle of video AR models (see the sketch after this list), and proposes the next-frame diffusion loss to integrate masked generation with video generation.
  • 🖼️ First Zero-shot Resolution Scaling for Video Generation: Ming-VideoMAR replicates the sequence-extrapolation capability of language models in video generation. It supports generating videos at flexible spatial and temporal resolutions far beyond those seen in training, achieved by closing the training-inference gap and adopting 3D rotary embeddings.
  • ⚡ Extremely High Training Efficiency: Ming-VideoMAR proposes temporal short-to-long curriculum learning and spatial progressive-resolution training. It surpasses the previous state of the art (Cosmos I2V) both quantitatively and qualitatively, while requiring significantly fewer parameters (9.3%), less training data (0.5%), and fewer GPU resources (0.2%).
  • ⚡ Extremely High Inference Efficiency: Ming-VideoMAR is inherently efficient thanks to the combination of temporal KV caching and spatial parallel generation, significantly surpassing its NTP counterpart.
  • 🔗 Accumulation Error Solution: Ming-VideoMAR employs a progressive temperature strategy at inference time to mitigate accumulation error.
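
The snippet below is a minimal, illustrative sketch (PyTorch), not code from this repository: it only encodes the attention pattern implied by the principle above, i.e. tokens attend bidirectionally within a frame while attending only to the current and earlier frames across time.

# Minimal sketch of a frame-causal, spatially-bidirectional attention mask.
# This is an illustration of the stated principle, not the released implementation.
import torch

def frame_causal_mask(num_frames: int, tokens_per_frame: int) -> torch.Tensor:
    """Boolean mask of shape [T*S, T*S]; True marks an allowed attention edge."""
    # Frame index of every token in the flattened sequence, e.g. [0,0,...,1,1,...]
    frame_ids = torch.arange(num_frames).repeat_interleave(tokens_per_frame)
    # Query token i may attend to key token j iff j's frame is not later than i's frame.
    return frame_ids.unsqueeze(1) >= frame_ids.unsqueeze(0)

# Example: 3 frames with 4 tokens each -> a 12x12 block lower-triangular mask
mask = frame_causal_mask(num_frames=3, tokens_per_frame=4)
print(mask.int())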

📌 Updates

  • [2025.10.17] 🔥 Code and Checkpoint Release!
    We are thrilled to announce the release of the Ming-VideoMAR code and checkpoint!
  • [2025.09.19] 🎉 Our paper is accepted by NeurIPS 2025.
  • [2025.06.18] 📄 Technical Report Released!
    The full technical report is now available on arXiv:
    👉 VideoMAR: Autoregressive Video Generation with Continuous Tokens

📊 Evaluation

Ming-VideoMAR achieves state-of-the-art autoregressive image-to-video generation performance with extremely low training and inference costs.

Quantitative Comparison

Ming-VideoMAR achieves state-of-the-art performance among token-wise autoregressive video generation models at a significantly lower training cost.

Qualitative Comparison

Ming-VideoMAR achieves better quality and finer details than the Cosmos baseline, even at a lower resolution (Ming-VideoMAR: 480x768 vs. Cosmos: 640x1024).

Resolution Scaling

Ming-VideoMAR first unlocks the resolution scaling ability to flexibly generate higher or lower resolutions beyond the training scope.

📥 Model Downloads

| Model | Hugging Face | ModelScope |
| --- | --- | --- |
| Stage1 (25x256x256) | Download | Download |
| Stage2 (49x480x768) | Download | Download |

🔗 Both models are publicly available for research. Visit the respective pages for model details, inference examples, and integration guides.
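
If you prefer to fetch a checkpoint programmatically, the snippet below is a hedged sketch using huggingface_hub; the repo id and target directory are placeholders only, so take the actual identifiers from the download links above.

# Sketch of downloading a checkpoint with huggingface_hub.
# The repo id below is a placeholder, not a confirmed identifier.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="inclusionAI/Ming-VideoMAR",  # placeholder: use the repo id from the table above
    local_dir="./ckpt",                   # the inference script expects checkpoints under ./ckpt/
)
print("Checkpoint files downloaded to:", local_dir)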

🚀 Example Usage

🔧 Installation

Download the code:

git clone https://github.com/inclusionAI/Ming-VideoMAR.git
cd Ming-VideoMAR

A suitable conda environment named videomar can be created and activated with:

conda env create -f environment.yaml
conda activate videomar

πŸ–ΌοΈ Training

Run the following command, which invokes the training script of VideoMAR.

bash train.sh

Specifically, take the default stage-2 training script as an example:

torchrun --standalone --nnodes 1 --nproc_per_node 8   main_videomar.py    \
--img_size_h 480  --img_size_w 768 --vae_embed_dim 16 --vae_spatial_stride 16 --vae_tempotal_stride 8 --patch_size 1  \
--model videomar --diffloss_d 3 --diffloss_w 1280  --save_last_freq 100  --num_workers 2  --file_type video  \
--epochs 800 --warmup_epochs 200 --batch_size 1 --blr 2.0e-4 --diffusion_batch_mul 4  --ema --ema_rate 0.995  --num_frames 49    \
--online_eval  --eval_freq 100  --eval_bsz 1  --cfg 3.0   --num_iter 32  \
--Cosmos_VAE  --vae_path $Cosmos-Tokenizer-CV8x16x16$ \
--output_dir logs  \
--text_model_path $Qwen2-VL-1.5B-Instruct$ \
--data_path $your_data_path$ \

Note!
This model is trained on our in-house data, so the original dataloader code is tailored to our internal OSS file system. If you want to train the model on your own data, replace Your_DataReader in the excerpt below (line 219 in main_videomar.py) with your own dataloader; a hedged sketch of such a reader follows the excerpt.

######################### Load Dataset #########################
    # DistributedSampler and DataLoader come from torch.utils.data
    dataset_train = Your_DataReader(data_path=args.data_path, img_size=[args.img_size_h, args.img_size_w], num_frames=args.num_frames, file_type=args.file_type)   # Replace this with your data reader
    sampler_train = DistributedSampler(dataset_train, num_replicas=num_tasks, rank=global_rank, shuffle=True)
    data_loader_train = DataLoader(
        dataset_train,
        sampler=sampler_train,
        batch_size=args.batch_size,
        num_workers=args.num_workers,
        pin_memory=args.pin_mem,
        drop_last=True,
    )
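
As a starting point, the following is a minimal, hypothetical replacement for Your_DataReader. It assumes the training loop expects (video, caption) pairs with the video shaped [num_frames, C, H, W] in [0, 1] and that captions can be derived from file names; adjust both assumptions to whatever main_videomar.py actually consumes.

# Hypothetical drop-in dataset; the class name, return format, and caption source
# are assumptions, not part of the released code.
import os
import torch
import torch.nn.functional as F
from torch.utils.data import Dataset
from torchvision.io import read_video

class FolderVideoDataset(Dataset):
    def __init__(self, data_path, img_size, num_frames, file_type="video"):
        self.img_size_h, self.img_size_w = img_size
        self.num_frames = num_frames
        self.files = sorted(
            os.path.join(data_path, f) for f in os.listdir(data_path) if f.endswith(".mp4")
        )

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        path = self.files[idx]
        video, _, _ = read_video(path, pts_unit="sec")            # [T, H, W, C], uint8
        # Keep the first num_frames frames; real code should pad or resample short clips.
        video = video[: self.num_frames].permute(0, 3, 1, 2).float() / 255.0  # [T, C, H, W]
        video = F.interpolate(video, size=(self.img_size_h, self.img_size_w),
                              mode="bilinear", align_corners=False)
        caption = os.path.splitext(os.path.basename(path))[0]     # placeholder caption source
        return video, caption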

πŸ–ΌοΈ Inference

Run the following command, which invokes the inference script of VideoMAR.

bash sample.sh

Specifically, take the default stage-2 inference script as an example:

CUDA_VISIBLE_DEVICES='0'    torchrun --standalone --nnodes 1 --nproc_per_node 1  main_videomar.py  \
--model videomar --diffloss_d 3 --diffloss_w 1280  --eval_bsz 1  --evaluate  \
--img_size_h 480  --img_size_w 768 --vae_embed_dim 16 --vae_spatial_stride 16 --vae_tempotal_stride 8   \
--i2v  --cond_frame 1  --cfg 3.0  --temperature 1.0  --num_frames 49  --num_sampling_steps 100  --num_iter 64  \
--Cosmos_VAE  --vae_path $Cosmos-Tokenizer-CV8x16x16$ \
--output_dir logs  \
--text_model_path $Qwen2-VL-1.5B-Instruct$ \
--resume ./ckpt/checkpoint-736.pth \

📌 Tips:

  • $Cosmos-Tokenizer-CV8x16x16$: download Cosmos-CV8x16x16 and replace this placeholder with your local path.
  • $Qwen2-VL-1.5B-Instruct$: download Qwen2-VL-1.5B and, optionally, load it from a local path.
  • VideoMAR checkpoint: download the checkpoint and place it in ./ckpt/.
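
The Introduction mentions a progressive temperature strategy for mitigating accumulation error, while the default script above fixes --temperature 1.0. The helper below is only an illustrative sketch of a per-frame temperature schedule with placeholder endpoints; the actual schedule used by Ming-VideoMAR is defined in the paper and code.

# Illustrative sketch of a per-frame temperature schedule (not the released implementation).
# temp_start and temp_end are placeholders; linear interpolation is an assumption.
def progressive_temperature(frame_idx: int, num_frames: int,
                            temp_start: float = 1.0, temp_end: float = 0.9) -> float:
    if num_frames <= 1:
        return temp_start
    alpha = frame_idx / (num_frames - 1)
    return temp_start + alpha * (temp_end - temp_start)

# Example: temperatures for an 8-frame rollout
print([round(progressive_temperature(t, 8), 3) for t in range(8)])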

✍️ Citation

If you find our work useful in your research or applications, please consider citing:

@article{yu2025videomar,
  title={VideoMAR: Autoregressive Video Generation with Continuous Tokens},
  author={Yu, Hu and Gong, Biao and Yuan, Hangjie and Zheng, DanDan and Chai, Weilong and Chen, Jingdong and Zheng, Kecheng and Zhao, Feng},
  journal={Advances in Neural Information Processing Systems},
  year={2025}
}