---
license: apache-2.0
pipeline_tag: image-segmentation
library_name: Pytorch
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- DINOv2
- CLIP
- open-vocabulary segmentation
---
<div align="center">
<h1>
Talking to DINO: Bridging Self-Supervised Vision Backbones with Language for Open-Vocabulary Segmentation (ICCV 2025)
</h1>

<h3>
<a href="https://www.linkedin.com/in/luca-barsellotti/">Luca Barsellotti*</a>&ensp;
<a href="https://www.linkedin.com/in/lorenzo-bianchi-893bb225a/">Lorenzo Bianchi*</a>&ensp;
<a href="https://www.linkedin.com/in/nicola-messina-a33848164/">Nicola Messina</a>&ensp;
<a href="https://www.linkedin.com/in/fabio-carrara-b28a2b111/">Fabio Carrara</a>&ensp;
<a href="https://aimagelab.ing.unimore.it/imagelab/person.asp?idpersona=90">Marcella Cornia</a>&ensp;
<a href="https://www.lorenzobaraldi.com/">Lorenzo Baraldi</a>&ensp;
<a href="https://fabriziofalchi.it">Fabrizio Falchi</a>&ensp;
<a href="https://www.linkedin.com/in/rita-cucchiara-a4653a13/">Rita Cucchiara</a>
</h3>

[Project Page](https://lorebianchi98.github.io/Talk2DINO/) | [Paper](http://arxiv.org/abs/2411.19331) | [Code](https://github.com/lorebianchi98/Talk2DINO)

</div>

<div align="center">
<figure>
  <img alt="Overview of Talk2DINO" src="./assets/overview.png" width="90%">
</figure>
</div>

## About
Open-Vocabulary Segmentation (OVS) aims at segmenting images from free-form textual concepts without predefined training classes. While existing vision-language models such as CLIP can generate segmentation masks by leveraging coarse spatial information from Vision Transformers, they face challenges in spatial localization due to their global alignment of image and text features. Conversely, self-supervised visual models like DINO excel in fine-grained visual encoding but lack integration with language. To bridge this gap, we present Talk2DINO, a novel hybrid approach that combines the spatial accuracy of DINOv2 with the language understanding of CLIP. Our approach aligns the textual embeddings of CLIP to the patch-level features of DINOv2 through a learned mapping function without the need to fine-tune the underlying backbones. At training time, we exploit the attention maps of DINOv2 to selectively align local visual patches with textual embeddings. We show that the powerful semantic and localization abilities of Talk2DINO can enhance the segmentation process, resulting in more natural and less noisy segmentations, and that our approach can also effectively distinguish foreground objects from the background. Experimental results demonstrate that Talk2DINO achieves state-of-the-art performance across several unsupervised OVS benchmarks.
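
To make this concrete, the sketch below illustrates the attention-guided alignment described above. It is a conceptual, hypothetical rendering rather than the authors' training code: the mapping architecture, tensor shapes, and head-selection strategy are illustrative assumptions only.

```python
import torch
import torch.nn as nn

# Illustrative dimensions (not the actual model configuration)
clip_dim, dino_dim = 768, 1024

# Learned mapping from the CLIP text space to the DINOv2 patch space;
# both backbones stay frozen, only this module is trained.
text_to_dino = nn.Sequential(
    nn.Linear(clip_dim, dino_dim),
    nn.Tanh(),
    nn.Linear(dino_dim, dino_dim),
)

def alignment_score(text_embed, patch_feats, attn_maps):
    """text_embed:  (B, clip_dim)             CLIP text features
       patch_feats: (B, n_patches, dino_dim)  DINOv2 patch features
       attn_maps:   (B, n_heads, n_patches)   CLS-to-patch self-attention per head
    """
    t = text_to_dino(text_embed)                                  # (B, dino_dim)
    w = attn_maps.softmax(dim=-1)                                 # normalize attention over patches
    pooled = torch.einsum('bhp,bpd->bhd', w, patch_feats)         # attention-weighted visual summary per head
    sim = torch.cosine_similarity(pooled, t[:, None, :], dim=-1)  # (B, n_heads)
    return sim.max(dim=-1).values                                 # the best-aligned head drives the training signal
```

At inference time, the same learned mapping projects any free-form prompt into the DINOv2 space, where per-patch cosine similarities yield the segmentation, as shown in the usage example below.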

## Sample Usage

### Mapping CLIP Text Embeddings to DINOv2 space with Talk2DINO
We can use Talk2DINO to map CLIP text embeddings into the DINOv2 patch embedding space.
```python
import torch
from transformers import AutoModel
from torchvision.io import read_image

# Device setup
device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Model loading
model = AutoModel.from_pretrained("lorebianchi98/Talk2DINO-ViTL").to(device).eval()

# Load the input image as a (C, H, W) uint8 tensor
# (path is illustrative; see demo.ipynb for the full preprocessing pipeline)
image = read_image("assets/pikachu.png").to(device)

# Embedding generation
with torch.no_grad():
    text_embed = model.encode_text("a pikachu")
    image_embed = model.encode_image(image)

# Normalize the features to compute cosine similarity
text_embed = text_embed / text_embed.norm(dim=-1, keepdim=True)
image_embed = image_embed / image_embed.norm(dim=-1, keepdim=True)

# Per-patch cosine similarity with the text embedding
similarity = (image_embed @ text_embed.T).squeeze(0).squeeze(-1).cpu().numpy()
```
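
The per-patch similarities can be turned into a coarse segmentation map by scoring several prompts and taking the argmax over classes at each patch. The snippet below is a minimal sketch that builds on the variables above; it assumes `encode_text` returns a `(1, dim)` tensor and `encode_image` returns `(1, num_patches, dim)` patch features on a square grid, and it is not the evaluation pipeline used for the benchmark results.

```python
import torch
import torch.nn.functional as F

# Hypothetical class prompts; `model`, `image`, and `image_embed` come from the snippet above.
classes = ["a pikachu", "background"]

with torch.no_grad():
    text_embeds = torch.cat([model.encode_text(c) for c in classes])  # (C, dim), assuming (1, dim) per prompt

text_embeds = F.normalize(text_embeds, dim=-1)
patch_embeds = F.normalize(image_embed.squeeze(0), dim=-1)            # (P, dim)

sims = patch_embeds @ text_embeds.T                                   # (P, C) cosine similarities
side = int(sims.shape[0] ** 0.5)                                      # assumes a square patch grid
seg = sims.argmax(dim=-1).reshape(side, side)                         # class index per patch

# Upsample the patch-level map to the input resolution (nearest-neighbor keeps hard labels)
seg = F.interpolate(seg[None, None].float(), size=image.shape[-2:], mode="nearest")[0, 0].long()
```

Mapping the class indices back to the prompt strings (or a color palette) gives a visualization similar to the one produced in `demo.ipynb`.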

### Demo
In `demo.ipynb` we provide a simple example of how to use Talk2DINO for inference on a given image with custom textual categories.
Result:
<div align="center">
<table><tr><td><figure>
  <img alt="" src="./assets/pikachu.png" width=300>
</figure></td><td><figure>
  <img alt="" src="./assets/pikachu_seg.png" width=300>
</figure></td></tr></table>
</div>

## Installation

To use the **Hugging Face interface** for inference:

```bash
# Clone the repository
git clone https://huggingface.co/lorebianchi98/Talk2DINO-ViTL
cd Talk2DINO-ViTL

# Install dependencies
pip install -r requirements.txt

# Install PyTorch and torchvision with the appropriate CUDA version
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu126
```

> For the **full MMCV interface** to perform evaluation on segmentation benchmarks, please refer to the [original Talk2DINO repository](https://github.com/lorebianchi98/Talk2DINO).



<details>
  <summary>Qualitative Results</summary>

| **Image** | **Ground Truth** | **FreeDA** | **ProxyCLIP** | **CLIP-DINOiser** | **Ours (Talk2DINO)** |
|-----------|------------------|------------|---------------|-------------------|------------------|
| ![Image](assets/qualitatives/voc/2_img.jpg) | ![Ground Truth](assets/qualitatives/voc/2_gt.png) | ![FreeDA](assets/qualitatives/voc/2_freeda.png) | ![ProxyCLIP](assets/qualitatives/voc/2_proxy.png) | ![CLIP-DINOiser](assets/qualitatives/voc/2_clipdinoiser.png) | ![Ours](assets/qualitatives/voc/2_talk2dino.png) |
| ![Image](assets/qualitatives/object/2r_img.png) | ![Ground Truth](assets/qualitatives/object/2r_gt.png) | ![FreeDA](assets/qualitatives/object/2r_freeda.png) | ![ProxyCLIP](assets/qualitatives/object/2r_proxy.png) | ![CLIP-DINOiser](assets/qualitatives/object/2r_clipdinoiser.png) | ![Ours](assets/qualitatives/object/2r_talk2dino.png) |
| ![Image](assets/qualitatives/cityscapes/1r_image.png) | ![Ground Truth](assets/qualitatives/cityscapes/1r_gt.png) | ![FreeDA](assets/qualitatives/cityscapes/1r_freeda.png) | ![ProxyCLIP](assets/qualitatives/cityscapes/1r_proxyclip.png) | ![CLIP-DINOiser](assets/qualitatives/cityscapes/1r_clipdinoiser.png) | ![Ours](assets/qualitatives/cityscapes/1r_talk2dino.png) |
| ![Image](assets/qualitatives/context/1r_img.png) | ![Ground Truth](assets/qualitatives/context/1r_gt.png) | ![FreeDA](assets/qualitatives/context/1r_freeda.png) | ![ProxyCLIP](assets/qualitatives/context/1r_proxy.png) | ![CLIP-DINOiser](assets/qualitatives/context/1r_clipdinoiser.png) | ![Ours](assets/qualitatives/context/1r_talk2dino.png) |
</details>


## Reference
If you find this code useful, please cite the following paper:
```bibtex
@misc{barsellotti2024talkingdinobridgingselfsupervised,
      title={Talking to DINO: Bridging Self-Supervised Vision Backbones with Language for Open-Vocabulary Segmentation}, 
      author={Luca Barsellotti and Lorenzo Bianchi and Nicola Messina and Fabio Carrara and Marcella Cornia and Lorenzo Baraldi and Fabrizio Falchi and Rita Cucchiara},
      year={2024},
      eprint={2411.19331},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2411.19331}, 
}
```