Enhance model card for Human3R: Add metadata, links, description, and usage

#1
by nielsr (HF Staff) - opened
Files changed (1)
  1. README.md +47 -3
README.md CHANGED
@@ -1,3 +1,47 @@
- ---
- license: cc-by-nc-sa-4.0
- ---
+ ---
+ license: cc-by-nc-sa-4.0
+ pipeline_tag: image-to-3d
+ ---
+
+ # Human3R: Everyone Everywhere All at Once
+
+ Human3R is a unified, feed-forward framework for online 4D human-scene reconstruction in the world frame from casually captured monocular videos. It jointly recovers global multi-person SMPL-X bodies ("everyone"), dense 3D scene geometry ("everywhere"), and camera trajectories in a single forward pass ("all at once").
+
+ TL;DR: Inference with One model, One stage; Training in One day using One GPU.
+
+ * **Paper**: [Human3R: Everyone Everywhere All at Once](https://huggingface.co/papers/2510.06219)
+ * **Project Page**: [https://fanegg.github.io/Human3R/](https://fanegg.github.io/Human3R/)
+ * **Code**: [https://github.com/fanegg/Human3R](https://github.com/fanegg/Human3R)
+
+ <div align="center">
+ <img src="https://github.com/user-attachments/assets/47fc7ecf-5235-471c-84b9-ccfeca6d56ea" alt="Human3R Demo" width="100%">
+ </div>
+
+ ## Sample Usage
+
+ To run the inference demo, use the following command (assuming you have followed the installation steps from the [GitHub repository](https://github.com/fanegg/Human3R)):
+
+ ```bash
+ # Input (--seq_path) can be a folder of frames or a video file.
+ # The following command runs inference with Human3R and visualizes the output with viser on port 8080.
+ CUDA_VISIBLE_DEVICES=0 python demo.py --model_path MODEL_PATH --size 512 \
+   --seq_path SEQ_PATH --output_dir OUT_DIR --subsample 1 --use_ttt3r \
+   --vis_threshold 2 --downsample_factor 1 --reset_interval 100
+
+ # Example:
+ CUDA_VISIBLE_DEVICES=0 python demo.py --model_path src/human3r.pth --size 512 \
+   --seq_path examples/GoodMornin1.mp4 --subsample 1 --use_ttt3r --vis_threshold 2 \
+   --downsample_factor 1 --reset_interval 100 --output_dir tmp
+ ```
+ Output results will be saved to `output_dir`.
+
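+ For programmatic use, the snippet below is a minimal sketch that fetches the checkpoint from the Hugging Face Hub with `huggingface_hub` and launches the same demo command. The `repo_id` and `filename` are assumptions, not confirmed values for this repository; adjust them as needed.
+
+ ```python
+ # Minimal sketch: download the checkpoint, then run the demo with the flags shown above.
+ # NOTE: repo_id and filename are assumptions -- replace with this model repo's actual values.
+ import subprocess
+
+ from huggingface_hub import hf_hub_download
+
+ ckpt_path = hf_hub_download(repo_id="fanegg/Human3R", filename="human3r.pth")  # hypothetical
+
+ subprocess.run(
+     [
+         "python", "demo.py",
+         "--model_path", ckpt_path,
+         "--size", "512",
+         "--seq_path", "examples/GoodMornin1.mp4",
+         "--subsample", "1",
+         "--use_ttt3r",
+         "--vis_threshold", "2",
+         "--downsample_factor", "1",
+         "--reset_interval", "100",
+         "--output_dir", "tmp",
+     ],
+     check=True,
+ )
+ ```
+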
+ ## Citation
+
+ If you find our work useful, please cite:
+
+ ```bibtex
+ @article{chen2025human3r,
+   title={Human3R: Everyone Everywhere All at Once},
+   author={Chen, Yue and Chen, Xingyu and Xue, Yuxuan and Chen, Anpei and Xiu, Yuliang and Pons-Moll, Gerard},
+   journal={arXiv preprint arXiv:2510.06219},
+   year={2025}
+ }
+ ```