This repository contains the official released weights for the WACV 2026 Round 1 Early Accept paper (6.4% acceptance rate) - Face-LLaVA: Facial Expression and Attribute Understanding through Instruction Tuning. Please refer to the official GitHub repository for instructions on running inference.
## 🧾 Abstract
The human face plays a central role in social communication, necessitating the use of performant computer vision tools for human-centered applications. We propose Face-LLaVA, a multimodal large language model for face-centered, in-context learning, including facial expression and attribute recognition. Additionally, Face-LLaVA is able to generate natural language descriptions that can be used for reasoning. Leveraging existing visual databases, we first developed FaceInstruct-1M, a face-centered database for instruction tuning MLLMs for face processing. We then developed a novel face-specific visual encoder powered by Face-Region Guided Cross-Attention that integrates face geometry with local visual features. We evaluated the proposed method across nine different datasets and five different face processing tasks, including facial expression recognition, action unit detection, facial attribute detection, age estimation and deepfake detection. Face-LLaVA achieves superior results compared to existing open-source MLLMs and competitive performance compared to commercial solutions. Our model output also receives a higher reasoning rating by GPT under a zero-shot setting across all the tasks. Both our dataset and model will be released at this https URL to support future advancements in social AI and foundational vision-language research.
## 📣 News
- [Oct. 2025] Initial release of the official codebase and model weights. Stay tuned for more details and the dataset.
- [Sept. 2025] FaceLLaVA accepted in the first round of WACV 2026 (6.4% acceptance rate). See you in Tucson!
## 📦 Repository Structure
```
├── cache_dir/    # created automatically to download LanguageBind image and video models from Hugging Face
├── checkpoints/  # create a new folder with this name
├── facellava/    # main source code
└── scripts/      # training scripts for FaceLLaVA
```
## 🔧 Installation
Clone the repository:
```bash
git clone https://github.com/ac-alpha/face-llava.git
cd Face-LLaVA
```
Create a virtual environment (recommended):
```bash
conda create -n facellava python=3.10 -y
conda activate facellava
```
Install torch:
```bash
pip install torch==2.5.1 torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
```
Potential issues:
- You might want to download PyTorch for a different version of CUDA. The command above installs the CUDA 12.1 build, which we have also tested on a machine with CUDA 12.2; adjust the index URL to match your machine (a quick sanity check is shown below).
- Depending on your CUDA version, you might also have to upgrade or downgrade torch.
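If you want to confirm that the installed PyTorch build matches your GPU driver, a minimal check (not part of the repository, just a convenience) is:

```python
import torch

# Report the installed torch build, the CUDA version it was compiled against,
# and whether a GPU is visible to this environment.
print("torch:", torch.__version__)
print("built with CUDA:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```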
Install in editable mode for development:
```bash
pip install -e .
pip install -e ".[train]"  ## if you want to train your own model
```
Install other libraries:
```bash
pip install flash-attn --no-build-isolation  ## recommended but not required
pip install decord opencv-python git+https://github.com/facebookresearch/pytorchvideo.git@28fe037d212663c6a24f373b94cc5d478c8c1a1d
```
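Optionally, you can verify that the extra dependencies resolve. The snippet below is a small convenience check, not part of the repository; flash-attn is treated as optional, matching the note above:

```python
# Optional sanity check for the extra dependencies installed above.
import cv2
import decord
import pytorchvideo

print("opencv:", cv2.__version__)
print("decord:", decord.__version__)
print("pytorchvideo:", pytorchvideo.__version__)

try:
    import flash_attn  # recommended but not required
    print("flash-attn:", flash_attn.__version__)
except ImportError:
    print("flash-attn not installed (optional)")
```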
## 🎯 Inference
Download the model weights from Hugging Face into the `checkpoints/` folder so that the structure becomes `./checkpoints/FaceLLaVA` (a download sketch is shown below).
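If you prefer scripting the download, a minimal sketch using `huggingface_hub` is shown below; it assumes the weights are hosted in the `chaubeyG/FaceLLaVA` repository on this model page.

```python
from huggingface_hub import snapshot_download

# Download the released weights into ./checkpoints/FaceLLaVA so the
# directory layout matches what inference.py expects.
snapshot_download(
    repo_id="chaubeyG/FaceLLaVA",
    local_dir="./checkpoints/FaceLLaVA",
)
```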
Crop the input image/video using `tools/crop_face.py` before further processing. Use the following command to crop an image:
```bash
python crop_face.py \
    --mode image \
    --image_path "/path/to/input.jpg" \
    --output_image_path "/path/to/output_cropped.jpg"
```
Use the following command to crop a video:
```bash
python crop_face.py \
    --mode video \
    --video_path "/path/to/input/video.mp4" \
    --output_video_path "/path/to/output/cropped_video.mp4" \
    --temp_dir "/path/to/temp"
```
Run the following command for inference:
```bash
CUDA_VISIBLE_DEVICES=0 python inference.py --model_path="./checkpoints/FaceLLaVA" \
    --file_path="./assets/demo_inputs/face_attr_example_1.png" \
    --prompt="What are the facial attributes in the given image?"
```
Currently, the following face perception tasks are supported, along with the best-suited input modality for each: Emotion (video), Age (image), Facial Attributes (image), Facial Action Units (image).
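If you want to run the same command over a folder of cropped images, a small wrapper sketch is shown below; the input folder and prompt are placeholders, and it only reuses the CLI flags documented above.

```python
import os
import subprocess
from pathlib import Path

MODEL_PATH = "./checkpoints/FaceLLaVA"
PROMPT = "What are the facial attributes in the given image?"
INPUT_DIR = Path("./assets/demo_inputs")  # folder of cropped images (placeholder)

env = dict(os.environ, CUDA_VISIBLE_DEVICES="0")

# Invoke inference.py once per image, reusing the flags shown above.
for image in sorted(INPUT_DIR.glob("*.png")):
    subprocess.run(
        ["python", "inference.py",
         f"--model_path={MODEL_PATH}",
         f"--file_path={image}",
         f"--prompt={PROMPT}"],
        check=True,
        env=env,
    )
```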
A list of prompts that work well for different tasks is provided in `./assets/good_prompts`.
## ✅ Repository Progress
- Dataset Release
- Training Script
- Inference Code
- Model Weights
## ⚖️ License
This codebase is distributed under the USC Research license. See LICENSE.rst for more details.
## 🙏 Credits
This codebase builds upon the following excellent works: VideoLLaVA, LLaVA and LLaVA-Next. We gratefully acknowledge their contributions to the open-source community.
## 🪶 Citation
```bibtex
@article{chaubey2025face,
  title={Face-LLaVA: Facial Expression and Attribute Understanding through Instruction Tuning},
  author={Chaubey, Ashutosh and Guan, Xulang and Soleymani, Mohammad},
  journal={arXiv preprint arXiv:2504.07198},
  year={2025}
}
```
Base model: lmsys/vicuna-7b-v1.5