QR-LoRA: Residual Weights for Efficient and Disentangled Fine-tuning
This repository contains the flux_res model, which holds the residual matrix weights used by QR-LoRA: Efficient and Disentangled Fine-tuning via QR Decomposition for Customized Generation, a novel fine-tuning framework for text-to-image models.
Paper: https://huggingface.co/papers/2507.04599 Project Page: https://luna-ai-lab.github.io/QR-LoRA/ Code: https://github.com/luna-ai-lab/QR-LoRA
Abstract
We propose QR-LoRA, a novel fine-tuning framework leveraging QR decomposition for structured parameter updates that effectively separate visual attributes. Our key insight is that the orthogonal Q matrix naturally minimizes interference between different visual features, while the upper triangular R matrix efficiently encodes attribute-specific transformations. Our approach fixes both Q and R matrices while only training an additional task-specific $\Delta R$ matrix. This structured design reduces trainable parameters to half of conventional LoRA methods and supports effective merging of multiple adaptations without cross-contamination due to the strong disentanglement properties between $\Delta R$ matrices. Experiments demonstrate that QR-LoRA achieves superior disentanglement in content-style fusion tasks, establishing a new paradigm for parameter-efficient, disentangled fine-tuning in generative models.
Model Description
Meaning of the model weights: FLUX is first decomposed via singular value decomposition (SVD); the core matrix corresponding to the top-rank (e.g., 64) singular values is extracted, and that core matrix is then QR decomposed. Subtracting the QR-reconstructed matrix from the original model weights yields the residual weights stored in this repository.
These residual weights serve as the pre-saved initialization decomposition matrices to reduce inference time overhead in the QR-LoRA workflow.
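The construction described above can be sketched in PyTorch. This is a minimal illustration of our reading of the description, not the repository's exact script; the function name and the default `rank` are our own choices:

```python
import torch

def compute_flux_residual(W: torch.Tensor, rank: int = 64):
    """Sketch: truncated SVD of a weight matrix W, QR decomposition of
    the rank-`rank` core, and residual = W - Q @ R."""
    # Truncated SVD: keep only the top-`rank` singular triplets.
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    core = U[:, :rank] @ torch.diag(S[:rank]) @ Vh[:rank, :]
    # QR decomposition of the low-rank core (Q orthonormal columns,
    # R upper triangular).
    Q, R = torch.linalg.qr(core)
    # The residual absorbs everything the rank-`rank` factors miss,
    # so W is recovered exactly as Q @ R + residual.
    residual = W - Q @ R
    return Q, R, residual
```

At inference time, loading this pre-computed residual avoids redoing the SVD and QR factorizations for every weight matrix.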
How to Use
This model (flux_res) provides the flux_residual_weights.safetensors file. To utilize these weights, you need to follow the instructions provided in the main QR-LoRA GitHub repository.
Example usage in the QR-LoRA workflow:
You can download this model, place flux_residual_weights.safetensors in your flux_dir (as per the QR-LoRA repository structure), and then proceed with inference as described in their Quick Start guide, for example:
```bash
# Download the model weights to flux_dir/flux_residual_weights.safetensors.
# Alternatively, use the script provided in the QR-LoRA repo to save these weights:
bash flux_dir/save_flux_residual.sh 1

# After training your QR-LoRA (delta R) models, use these residual weights for inference:
bash flux_dir/inference_merge.sh 1
```
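Conceptually, inference with these residual weights amounts to adding the frozen QR factors, plus the trained task-specific $\Delta R$, back onto the residual. A hedged sketch of that merge step (the function name and `scale` parameter are hypothetical; see the repository's `inference_merge.sh` for the actual procedure):

```python
import torch

def merge_adapted_weight(residual: torch.Tensor,
                         Q: torch.Tensor,
                         R: torch.Tensor,
                         delta_R: torch.Tensor,
                         scale: float = 1.0) -> torch.Tensor:
    # Reconstruct the adapted weight: the residual plus the frozen QR
    # factors, with the trained delta-R update added onto R.
    return residual + Q @ (R + scale * delta_R)
```

Because only $\Delta R$ differs between adapters, multiple adaptations can be merged by summing their $\Delta R$ matrices before this step, which is where the disentanglement claim of the paper applies.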
For detailed setup, training, and inference procedures, please refer to the comprehensive documentation on the QR-LoRA GitHub page.
Citation
If you find QR-LoRA useful, please cite the paper:
@inproceedings{yang2025qrlora,
title={QR-LoRA: Efficient and Disentangled Fine-tuning via QR Decomposition for Customized Generation},
author={Jiahui Yang and Yongjia Ma and Donglin Di and Hao Li and Wei Chen and Yan Xie and Jianxun Cui and Xun Yang and Wangmeng Zuo},
booktitle={International Conference on Computer Vision},
year={2025}
}