---
license: cc-by-nc-4.0
language:
- en
pipeline_tag: object-detection
library_name: mmdetection
---

## Introduction

After fine-tuning Stable Diffusion, we generate synthetic source-domain data (LINZ) and synthetic target-domain data (UGRC). For each synthetic sample, we extract the cross-attention maps corresponding to the word "car", the foreground learnable token, and the background learnable token during the denoising steps. The enhanced cross-attention map is then obtained by stacking these cross-attention maps.

First, we label the synthetic source-domain data using the [trained detectors](https://huggingface.co/xiaofanghf/Real-LINZ-Detectors) trained on real source-domain data. Next, we train another detector on the synthetic source-domain cross-attention maps and use it to label the synthetic target-domain data. Finally, we train a detector on the synthetic target-domain data and test it on real data to evaluate our pseudo-labels.

## Model Usage

This folder contains four detectors trained on our generated Synthetic UGRC data and tested on Real UGRC data, along with the configuration files we use for training and testing.

## References

➡️ **Paper:** [Adapting Vehicle Detectors for Aerial Imagery to Unseen Domains with Weak Supervision](https://arxiv.org/abs/2507.20976)

➡️ **Project Page:** [Webpage](https://humansensinglab.github.io/AGenDA/)

➡️ **Code:** [AGenDA](https://github.com/humansensinglab/AGenDA/blob/main/data_generation/data_generation.py)

➡️ **Synthetic Data:** [AGenDA](https://github.com/humansensinglab/AGenDA/tree/main/Data)
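
## Example: Stacking Cross-Attention Maps

The enhanced cross-attention map described in the Introduction can be sketched as a channel-wise stack of the per-token attention maps. The snippet below is a minimal illustration, not the released implementation: the map resolution, the normalization choice, and the use of random arrays in place of real attention maps are all assumptions for demonstration.

```python
import numpy as np

# Illustrative resolution; the actual map size depends on the U-Net layer
# from which the cross-attention maps are extracted.
H, W = 64, 64

# Stand-ins for per-token cross-attention maps aggregated over denoising steps.
attn_car = np.random.rand(H, W)  # map for the word "car"
attn_fg = np.random.rand(H, W)   # map for the foreground learnable token
attn_bg = np.random.rand(H, W)   # map for the background learnable token

def enhance(maps):
    """Stack per-token attention maps into one multi-channel map (H, W, C)."""
    # Min-max normalize each map so the channels are on a comparable scale
    # before stacking (a common, but here assumed, preprocessing choice).
    normed = [(m - m.min()) / (m.max() - m.min() + 1e-8) for m in maps]
    return np.stack(normed, axis=-1)

enhanced = enhance([attn_car, attn_fg, attn_bg])
print(enhanced.shape)  # (64, 64, 3)
```

The resulting three-channel map can then be treated like an image when training a detector on it, as described above.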