davidmezzetti committed
Commit 64b040a · 1 Parent(s): 3278f6e

Initial model

1_Dense/config.json ADDED
@@ -0,0 +1 @@
+ {"in_features": 50, "out_features": 128, "bias": false, "activation_function": "torch.nn.modules.linear.Identity"}
1_Dense/model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0bce499da1d2b9a1bfdea8171778d4e56cb4afd6fe8fdc72a4937651eb86dc77
+ size 25688
README.md ADDED
@@ -0,0 +1,1090 @@
1
+ ---
2
+ language:
3
+ - en
4
+ tags:
5
+ - ColBERT
6
+ - PyLate
7
+ - sentence-transformers
8
+ - sentence-similarity
9
+ - feature-extraction
10
+ - generated_from_trainer
11
+ - dataset_size:640000
12
+ - loss:Distillation
13
+ datasets:
14
+ - lightonai/ms-marco-en-bge-gemma
15
+ pipeline_tag: sentence-similarity
16
+ library_name: PyLate
17
+ metrics:
18
+ - MaxSim_accuracy@1
19
+ - MaxSim_accuracy@3
20
+ - MaxSim_accuracy@5
21
+ - MaxSim_accuracy@10
22
+ - MaxSim_precision@1
23
+ - MaxSim_precision@3
24
+ - MaxSim_precision@5
25
+ - MaxSim_precision@10
26
+ - MaxSim_recall@1
27
+ - MaxSim_recall@3
28
+ - MaxSim_recall@5
29
+ - MaxSim_recall@10
30
+ - MaxSim_ndcg@10
31
+ - MaxSim_mrr@10
32
+ - MaxSim_map@100
33
+ model-index:
34
+ - name: ColBERT MUVERA Femto
35
+ results:
36
+ - task:
37
+ type: py-late-information-retrieval
38
+ name: Py Late Information Retrieval
39
+ dataset:
40
+ name: NanoClimateFEVER
41
+ type: NanoClimateFEVER
42
+ metrics:
43
+ - type: MaxSim_accuracy@1
44
+ value: 0.14
45
+ name: Maxsim Accuracy@1
46
+ - type: MaxSim_accuracy@3
47
+ value: 0.32
48
+ name: Maxsim Accuracy@3
49
+ - type: MaxSim_accuracy@5
50
+ value: 0.36
51
+ name: Maxsim Accuracy@5
52
+ - type: MaxSim_accuracy@10
53
+ value: 0.52
54
+ name: Maxsim Accuracy@10
55
+ - type: MaxSim_precision@1
56
+ value: 0.14
57
+ name: Maxsim Precision@1
58
+ - type: MaxSim_precision@3
59
+ value: 0.11333333333333333
60
+ name: Maxsim Precision@3
61
+ - type: MaxSim_precision@5
62
+ value: 0.07600000000000001
63
+ name: Maxsim Precision@5
64
+ - type: MaxSim_precision@10
65
+ value: 0.05600000000000001
66
+ name: Maxsim Precision@10
67
+ - type: MaxSim_recall@1
68
+ value: 0.085
69
+ name: Maxsim Recall@1
70
+ - type: MaxSim_recall@3
71
+ value: 0.165
72
+ name: Maxsim Recall@3
73
+ - type: MaxSim_recall@5
74
+ value: 0.19166666666666668
75
+ name: Maxsim Recall@5
76
+ - type: MaxSim_recall@10
77
+ value: 0.25233333333333335
78
+ name: Maxsim Recall@10
79
+ - type: MaxSim_ndcg@10
80
+ value: 0.19115874409066272
81
+ name: Maxsim Ndcg@10
82
+ - type: MaxSim_mrr@10
83
+ value: 0.2408333333333333
84
+ name: Maxsim Mrr@10
85
+ - type: MaxSim_map@100
86
+ value: 0.1462389973257929
87
+ name: Maxsim Map@100
88
+ - task:
89
+ type: py-late-information-retrieval
90
+ name: Py Late Information Retrieval
91
+ dataset:
92
+ name: NanoDBPedia
93
+ type: NanoDBPedia
94
+ metrics:
95
+ - type: MaxSim_accuracy@1
96
+ value: 0.7
97
+ name: Maxsim Accuracy@1
98
+ - type: MaxSim_accuracy@3
99
+ value: 0.82
100
+ name: Maxsim Accuracy@3
101
+ - type: MaxSim_accuracy@5
102
+ value: 0.82
103
+ name: Maxsim Accuracy@5
104
+ - type: MaxSim_accuracy@10
105
+ value: 0.84
106
+ name: Maxsim Accuracy@10
107
+ - type: MaxSim_precision@1
108
+ value: 0.7
109
+ name: Maxsim Precision@1
110
+ - type: MaxSim_precision@3
111
+ value: 0.5933333333333333
112
+ name: Maxsim Precision@3
113
+ - type: MaxSim_precision@5
114
+ value: 0.548
115
+ name: Maxsim Precision@5
116
+ - type: MaxSim_precision@10
117
+ value: 0.456
118
+ name: Maxsim Precision@10
119
+ - type: MaxSim_recall@1
120
+ value: 0.0728506527388449
121
+ name: Maxsim Recall@1
122
+ - type: MaxSim_recall@3
123
+ value: 0.13076941366456654
124
+ name: Maxsim Recall@3
125
+ - type: MaxSim_recall@5
126
+ value: 0.17827350013263704
127
+ name: Maxsim Recall@5
128
+ - type: MaxSim_recall@10
129
+ value: 0.2781635119304686
130
+ name: Maxsim Recall@10
131
+ - type: MaxSim_ndcg@10
132
+ value: 0.5510945084552747
133
+ name: Maxsim Ndcg@10
134
+ - type: MaxSim_mrr@10
135
+ value: 0.7555555555555555
136
+ name: Maxsim Mrr@10
137
+ - type: MaxSim_map@100
138
+ value: 0.39128533545834626
139
+ name: Maxsim Map@100
140
+ - task:
141
+ type: py-late-information-retrieval
142
+ name: Py Late Information Retrieval
143
+ dataset:
144
+ name: NanoFEVER
145
+ type: NanoFEVER
146
+ metrics:
147
+ - type: MaxSim_accuracy@1
148
+ value: 0.62
149
+ name: Maxsim Accuracy@1
150
+ - type: MaxSim_accuracy@3
151
+ value: 0.76
152
+ name: Maxsim Accuracy@3
153
+ - type: MaxSim_accuracy@5
154
+ value: 0.84
155
+ name: Maxsim Accuracy@5
156
+ - type: MaxSim_accuracy@10
157
+ value: 0.86
158
+ name: Maxsim Accuracy@10
159
+ - type: MaxSim_precision@1
160
+ value: 0.62
161
+ name: Maxsim Precision@1
162
+ - type: MaxSim_precision@3
163
+ value: 0.26666666666666666
164
+ name: Maxsim Precision@3
165
+ - type: MaxSim_precision@5
166
+ value: 0.184
167
+ name: Maxsim Precision@5
168
+ - type: MaxSim_precision@10
169
+ value: 0.09399999999999999
170
+ name: Maxsim Precision@10
171
+ - type: MaxSim_recall@1
172
+ value: 0.5766666666666667
173
+ name: Maxsim Recall@1
174
+ - type: MaxSim_recall@3
175
+ value: 0.7366666666666667
176
+ name: Maxsim Recall@3
177
+ - type: MaxSim_recall@5
178
+ value: 0.83
179
+ name: Maxsim Recall@5
180
+ - type: MaxSim_recall@10
181
+ value: 0.85
182
+ name: Maxsim Recall@10
183
+ - type: MaxSim_ndcg@10
184
+ value: 0.7249306483092258
185
+ name: Maxsim Ndcg@10
186
+ - type: MaxSim_mrr@10
187
+ value: 0.6976666666666665
188
+ name: Maxsim Mrr@10
189
+ - type: MaxSim_map@100
190
+ value: 0.679664802101873
191
+ name: Maxsim Map@100
192
+ - task:
193
+ type: py-late-information-retrieval
194
+ name: Py Late Information Retrieval
195
+ dataset:
196
+ name: NanoFiQA2018
197
+ type: NanoFiQA2018
198
+ metrics:
199
+ - type: MaxSim_accuracy@1
200
+ value: 0.28
201
+ name: Maxsim Accuracy@1
202
+ - type: MaxSim_accuracy@3
203
+ value: 0.34
204
+ name: Maxsim Accuracy@3
205
+ - type: MaxSim_accuracy@5
206
+ value: 0.44
207
+ name: Maxsim Accuracy@5
208
+ - type: MaxSim_accuracy@10
209
+ value: 0.48
210
+ name: Maxsim Accuracy@10
211
+ - type: MaxSim_precision@1
212
+ value: 0.28
213
+ name: Maxsim Precision@1
214
+ - type: MaxSim_precision@3
215
+ value: 0.13333333333333333
216
+ name: Maxsim Precision@3
217
+ - type: MaxSim_precision@5
218
+ value: 0.10800000000000001
219
+ name: Maxsim Precision@5
220
+ - type: MaxSim_precision@10
221
+ value: 0.062
222
+ name: Maxsim Precision@10
223
+ - type: MaxSim_recall@1
224
+ value: 0.13555555555555557
225
+ name: Maxsim Recall@1
226
+ - type: MaxSim_recall@3
227
+ value: 0.19755555555555557
228
+ name: Maxsim Recall@3
229
+ - type: MaxSim_recall@5
230
+ value: 0.2666349206349206
231
+ name: Maxsim Recall@5
232
+ - type: MaxSim_recall@10
233
+ value: 0.2994920634920635
234
+ name: Maxsim Recall@10
235
+ - type: MaxSim_ndcg@10
236
+ value: 0.2502784944505909
237
+ name: Maxsim Ndcg@10
238
+ - type: MaxSim_mrr@10
239
+ value: 0.33252380952380944
240
+ name: Maxsim Mrr@10
241
+ - type: MaxSim_map@100
242
+ value: 0.20907273372726215
243
+ name: Maxsim Map@100
244
+ - task:
245
+ type: py-late-information-retrieval
246
+ name: Py Late Information Retrieval
247
+ dataset:
248
+ name: NanoHotpotQA
249
+ type: NanoHotpotQA
250
+ metrics:
251
+ - type: MaxSim_accuracy@1
252
+ value: 0.76
253
+ name: Maxsim Accuracy@1
254
+ - type: MaxSim_accuracy@3
255
+ value: 0.84
256
+ name: Maxsim Accuracy@3
257
+ - type: MaxSim_accuracy@5
258
+ value: 0.9
259
+ name: Maxsim Accuracy@5
260
+ - type: MaxSim_accuracy@10
261
+ value: 0.92
262
+ name: Maxsim Accuracy@10
263
+ - type: MaxSim_precision@1
264
+ value: 0.76
265
+ name: Maxsim Precision@1
266
+ - type: MaxSim_precision@3
267
+ value: 0.36666666666666664
268
+ name: Maxsim Precision@3
269
+ - type: MaxSim_precision@5
270
+ value: 0.252
271
+ name: Maxsim Precision@5
272
+ - type: MaxSim_precision@10
273
+ value: 0.136
274
+ name: Maxsim Precision@10
275
+ - type: MaxSim_recall@1
276
+ value: 0.38
277
+ name: Maxsim Recall@1
278
+ - type: MaxSim_recall@3
279
+ value: 0.55
280
+ name: Maxsim Recall@3
281
+ - type: MaxSim_recall@5
282
+ value: 0.63
283
+ name: Maxsim Recall@5
284
+ - type: MaxSim_recall@10
285
+ value: 0.68
286
+ name: Maxsim Recall@10
287
+ - type: MaxSim_ndcg@10
288
+ value: 0.6514325561331983
289
+ name: Maxsim Ndcg@10
290
+ - type: MaxSim_mrr@10
291
+ value: 0.8098333333333333
292
+ name: Maxsim Mrr@10
293
+ - type: MaxSim_map@100
294
+ value: 0.5738665952275315
295
+ name: Maxsim Map@100
296
+ - task:
297
+ type: py-late-information-retrieval
298
+ name: Py Late Information Retrieval
299
+ dataset:
300
+ name: NanoMSMARCO
301
+ type: NanoMSMARCO
302
+ metrics:
303
+ - type: MaxSim_accuracy@1
304
+ value: 0.32
305
+ name: Maxsim Accuracy@1
306
+ - type: MaxSim_accuracy@3
307
+ value: 0.48
308
+ name: Maxsim Accuracy@3
309
+ - type: MaxSim_accuracy@5
310
+ value: 0.6
311
+ name: Maxsim Accuracy@5
312
+ - type: MaxSim_accuracy@10
313
+ value: 0.7
314
+ name: Maxsim Accuracy@10
315
+ - type: MaxSim_precision@1
316
+ value: 0.32
317
+ name: Maxsim Precision@1
318
+ - type: MaxSim_precision@3
319
+ value: 0.16
320
+ name: Maxsim Precision@3
321
+ - type: MaxSim_precision@5
322
+ value: 0.12000000000000002
323
+ name: Maxsim Precision@5
324
+ - type: MaxSim_precision@10
325
+ value: 0.07
326
+ name: Maxsim Precision@10
327
+ - type: MaxSim_recall@1
328
+ value: 0.32
329
+ name: Maxsim Recall@1
330
+ - type: MaxSim_recall@3
331
+ value: 0.48
332
+ name: Maxsim Recall@3
333
+ - type: MaxSim_recall@5
334
+ value: 0.6
335
+ name: Maxsim Recall@5
336
+ - type: MaxSim_recall@10
337
+ value: 0.7
338
+ name: Maxsim Recall@10
339
+ - type: MaxSim_ndcg@10
340
+ value: 0.4946222844793249
341
+ name: Maxsim Ndcg@10
342
+ - type: MaxSim_mrr@10
343
+ value: 0.43052380952380953
344
+ name: Maxsim Mrr@10
345
+ - type: MaxSim_map@100
346
+ value: 0.4408050908765128
347
+ name: Maxsim Map@100
348
+ - task:
349
+ type: py-late-information-retrieval
350
+ name: Py Late Information Retrieval
351
+ dataset:
352
+ name: NanoNFCorpus
353
+ type: NanoNFCorpus
354
+ metrics:
355
+ - type: MaxSim_accuracy@1
356
+ value: 0.32
357
+ name: Maxsim Accuracy@1
358
+ - type: MaxSim_accuracy@3
359
+ value: 0.42
360
+ name: Maxsim Accuracy@3
361
+ - type: MaxSim_accuracy@5
362
+ value: 0.52
363
+ name: Maxsim Accuracy@5
364
+ - type: MaxSim_accuracy@10
365
+ value: 0.62
366
+ name: Maxsim Accuracy@10
367
+ - type: MaxSim_precision@1
368
+ value: 0.32
369
+ name: Maxsim Precision@1
370
+ - type: MaxSim_precision@3
371
+ value: 0.2866666666666666
372
+ name: Maxsim Precision@3
373
+ - type: MaxSim_precision@5
374
+ value: 0.26799999999999996
375
+ name: Maxsim Precision@5
376
+ - type: MaxSim_precision@10
377
+ value: 0.21000000000000005
378
+ name: Maxsim Precision@10
379
+ - type: MaxSim_recall@1
380
+ value: 0.01921769353070746
381
+ name: Maxsim Recall@1
382
+ - type: MaxSim_recall@3
383
+ value: 0.03782391241260524
384
+ name: Maxsim Recall@3
385
+ - type: MaxSim_recall@5
386
+ value: 0.05411010345369293
387
+ name: Maxsim Recall@5
388
+ - type: MaxSim_recall@10
389
+ value: 0.09349869834347448
390
+ name: Maxsim Recall@10
391
+ - type: MaxSim_ndcg@10
392
+ value: 0.2481257474345093
393
+ name: Maxsim Ndcg@10
394
+ - type: MaxSim_mrr@10
395
+ value: 0.3995793650793651
396
+ name: Maxsim Mrr@10
397
+ - type: MaxSim_map@100
398
+ value: 0.08737709081330662
399
+ name: Maxsim Map@100
400
+ - task:
401
+ type: py-late-information-retrieval
402
+ name: Py Late Information Retrieval
403
+ dataset:
404
+ name: NanoNQ
405
+ type: NanoNQ
406
+ metrics:
407
+ - type: MaxSim_accuracy@1
408
+ value: 0.28
409
+ name: Maxsim Accuracy@1
410
+ - type: MaxSim_accuracy@3
411
+ value: 0.52
412
+ name: Maxsim Accuracy@3
413
+ - type: MaxSim_accuracy@5
414
+ value: 0.58
415
+ name: Maxsim Accuracy@5
416
+ - type: MaxSim_accuracy@10
417
+ value: 0.74
418
+ name: Maxsim Accuracy@10
419
+ - type: MaxSim_precision@1
420
+ value: 0.28
421
+ name: Maxsim Precision@1
422
+ - type: MaxSim_precision@3
423
+ value: 0.1733333333333333
424
+ name: Maxsim Precision@3
425
+ - type: MaxSim_precision@5
426
+ value: 0.12000000000000002
427
+ name: Maxsim Precision@5
428
+ - type: MaxSim_precision@10
429
+ value: 0.07600000000000001
430
+ name: Maxsim Precision@10
431
+ - type: MaxSim_recall@1
432
+ value: 0.26
433
+ name: Maxsim Recall@1
434
+ - type: MaxSim_recall@3
435
+ value: 0.49
436
+ name: Maxsim Recall@3
437
+ - type: MaxSim_recall@5
438
+ value: 0.56
439
+ name: Maxsim Recall@5
440
+ - type: MaxSim_recall@10
441
+ value: 0.7
442
+ name: Maxsim Recall@10
443
+ - type: MaxSim_ndcg@10
444
+ value: 0.4828411530427104
445
+ name: Maxsim Ndcg@10
446
+ - type: MaxSim_mrr@10
447
+ value: 0.4289603174603174
448
+ name: Maxsim Mrr@10
449
+ - type: MaxSim_map@100
450
+ value: 0.41150699780701017
451
+ name: Maxsim Map@100
452
+ - task:
453
+ type: py-late-information-retrieval
454
+ name: Py Late Information Retrieval
455
+ dataset:
456
+ name: NanoQuoraRetrieval
457
+ type: NanoQuoraRetrieval
458
+ metrics:
459
+ - type: MaxSim_accuracy@1
460
+ value: 0.74
461
+ name: Maxsim Accuracy@1
462
+ - type: MaxSim_accuracy@3
463
+ value: 0.84
464
+ name: Maxsim Accuracy@3
465
+ - type: MaxSim_accuracy@5
466
+ value: 0.88
467
+ name: Maxsim Accuracy@5
468
+ - type: MaxSim_accuracy@10
469
+ value: 0.9
470
+ name: Maxsim Accuracy@10
471
+ - type: MaxSim_precision@1
472
+ value: 0.74
473
+ name: Maxsim Precision@1
474
+ - type: MaxSim_precision@3
475
+ value: 0.30666666666666664
476
+ name: Maxsim Precision@3
477
+ - type: MaxSim_precision@5
478
+ value: 0.21199999999999997
479
+ name: Maxsim Precision@5
480
+ - type: MaxSim_precision@10
481
+ value: 0.11599999999999998
482
+ name: Maxsim Precision@10
483
+ - type: MaxSim_recall@1
484
+ value: 0.674
485
+ name: Maxsim Recall@1
486
+ - type: MaxSim_recall@3
487
+ value: 0.784
488
+ name: Maxsim Recall@3
489
+ - type: MaxSim_recall@5
490
+ value: 0.8413333333333333
491
+ name: Maxsim Recall@5
492
+ - type: MaxSim_recall@10
493
+ value: 0.8626666666666667
494
+ name: Maxsim Recall@10
495
+ - type: MaxSim_ndcg@10
496
+ value: 0.8016479127266055
497
+ name: Maxsim Ndcg@10
498
+ - type: MaxSim_mrr@10
499
+ value: 0.7995238095238095
500
+ name: Maxsim Mrr@10
501
+ - type: MaxSim_map@100
502
+ value: 0.7733654571274
503
+ name: Maxsim Map@100
504
+ - task:
505
+ type: py-late-information-retrieval
506
+ name: Py Late Information Retrieval
507
+ dataset:
508
+ name: NanoSCIDOCS
509
+ type: NanoSCIDOCS
510
+ metrics:
511
+ - type: MaxSim_accuracy@1
512
+ value: 0.3
513
+ name: Maxsim Accuracy@1
514
+ - type: MaxSim_accuracy@3
515
+ value: 0.44
516
+ name: Maxsim Accuracy@3
517
+ - type: MaxSim_accuracy@5
518
+ value: 0.52
519
+ name: Maxsim Accuracy@5
520
+ - type: MaxSim_accuracy@10
521
+ value: 0.62
522
+ name: Maxsim Accuracy@10
523
+ - type: MaxSim_precision@1
524
+ value: 0.3
525
+ name: Maxsim Precision@1
526
+ - type: MaxSim_precision@3
527
+ value: 0.18
528
+ name: Maxsim Precision@3
529
+ - type: MaxSim_precision@5
530
+ value: 0.14400000000000002
531
+ name: Maxsim Precision@5
532
+ - type: MaxSim_precision@10
533
+ value: 0.092
534
+ name: Maxsim Precision@10
535
+ - type: MaxSim_recall@1
536
+ value: 0.061000000000000006
537
+ name: Maxsim Recall@1
538
+ - type: MaxSim_recall@3
539
+ value: 0.11100000000000002
540
+ name: Maxsim Recall@3
541
+ - type: MaxSim_recall@5
542
+ value: 0.14700000000000002
543
+ name: Maxsim Recall@5
544
+ - type: MaxSim_recall@10
545
+ value: 0.18799999999999997
546
+ name: Maxsim Recall@10
547
+ - type: MaxSim_ndcg@10
548
+ value: 0.198564235862039
549
+ name: Maxsim Ndcg@10
550
+ - type: MaxSim_mrr@10
551
+ value: 0.3978253968253968
552
+ name: Maxsim Mrr@10
553
+ - type: MaxSim_map@100
554
+ value: 0.13670583023266375
555
+ name: Maxsim Map@100
556
+ - task:
557
+ type: py-late-information-retrieval
558
+ name: Py Late Information Retrieval
559
+ dataset:
560
+ name: NanoArguAna
561
+ type: NanoArguAna
562
+ metrics:
563
+ - type: MaxSim_accuracy@1
564
+ value: 0.14
565
+ name: Maxsim Accuracy@1
566
+ - type: MaxSim_accuracy@3
567
+ value: 0.24
568
+ name: Maxsim Accuracy@3
569
+ - type: MaxSim_accuracy@5
570
+ value: 0.28
571
+ name: Maxsim Accuracy@5
572
+ - type: MaxSim_accuracy@10
573
+ value: 0.36
574
+ name: Maxsim Accuracy@10
575
+ - type: MaxSim_precision@1
576
+ value: 0.14
577
+ name: Maxsim Precision@1
578
+ - type: MaxSim_precision@3
579
+ value: 0.07999999999999999
580
+ name: Maxsim Precision@3
581
+ - type: MaxSim_precision@5
582
+ value: 0.05600000000000001
583
+ name: Maxsim Precision@5
584
+ - type: MaxSim_precision@10
585
+ value: 0.036000000000000004
586
+ name: Maxsim Precision@10
587
+ - type: MaxSim_recall@1
588
+ value: 0.14
589
+ name: Maxsim Recall@1
590
+ - type: MaxSim_recall@3
591
+ value: 0.24
592
+ name: Maxsim Recall@3
593
+ - type: MaxSim_recall@5
594
+ value: 0.28
595
+ name: Maxsim Recall@5
596
+ - type: MaxSim_recall@10
597
+ value: 0.36
598
+ name: Maxsim Recall@10
599
+ - type: MaxSim_ndcg@10
600
+ value: 0.2444065884095295
601
+ name: Maxsim Ndcg@10
602
+ - type: MaxSim_mrr@10
603
+ value: 0.20804761904761904
604
+ name: Maxsim Mrr@10
605
+ - type: MaxSim_map@100
606
+ value: 0.21989999402599436
607
+ name: Maxsim Map@100
608
+ - task:
609
+ type: py-late-information-retrieval
610
+ name: Py Late Information Retrieval
611
+ dataset:
612
+ name: NanoSciFact
613
+ type: NanoSciFact
614
+ metrics:
615
+ - type: MaxSim_accuracy@1
616
+ value: 0.36
617
+ name: Maxsim Accuracy@1
618
+ - type: MaxSim_accuracy@3
619
+ value: 0.5
620
+ name: Maxsim Accuracy@3
621
+ - type: MaxSim_accuracy@5
622
+ value: 0.6
623
+ name: Maxsim Accuracy@5
624
+ - type: MaxSim_accuracy@10
625
+ value: 0.62
626
+ name: Maxsim Accuracy@10
627
+ - type: MaxSim_precision@1
628
+ value: 0.36
629
+ name: Maxsim Precision@1
630
+ - type: MaxSim_precision@3
631
+ value: 0.18666666666666668
632
+ name: Maxsim Precision@3
633
+ - type: MaxSim_precision@5
634
+ value: 0.14
635
+ name: Maxsim Precision@5
636
+ - type: MaxSim_precision@10
637
+ value: 0.07400000000000001
638
+ name: Maxsim Precision@10
639
+ - type: MaxSim_recall@1
640
+ value: 0.325
641
+ name: Maxsim Recall@1
642
+ - type: MaxSim_recall@3
643
+ value: 0.49
644
+ name: Maxsim Recall@3
645
+ - type: MaxSim_recall@5
646
+ value: 0.59
647
+ name: Maxsim Recall@5
648
+ - type: MaxSim_recall@10
649
+ value: 0.62
650
+ name: Maxsim Recall@10
651
+ - type: MaxSim_ndcg@10
652
+ value: 0.4856083424090788
653
+ name: Maxsim Ndcg@10
654
+ - type: MaxSim_mrr@10
655
+ value: 0.44449999999999995
656
+ name: Maxsim Mrr@10
657
+ - type: MaxSim_map@100
658
+ value: 0.44726079800650204
659
+ name: Maxsim Map@100
660
+ - task:
661
+ type: py-late-information-retrieval
662
+ name: Py Late Information Retrieval
663
+ dataset:
664
+ name: NanoTouche2020
665
+ type: NanoTouche2020
666
+ metrics:
667
+ - type: MaxSim_accuracy@1
668
+ value: 0.6530612244897959
669
+ name: Maxsim Accuracy@1
670
+ - type: MaxSim_accuracy@3
671
+ value: 0.9591836734693877
672
+ name: Maxsim Accuracy@3
673
+ - type: MaxSim_accuracy@5
674
+ value: 0.9795918367346939
675
+ name: Maxsim Accuracy@5
676
+ - type: MaxSim_accuracy@10
677
+ value: 1.0
678
+ name: Maxsim Accuracy@10
679
+ - type: MaxSim_precision@1
680
+ value: 0.6530612244897959
681
+ name: Maxsim Precision@1
682
+ - type: MaxSim_precision@3
683
+ value: 0.6054421768707483
684
+ name: Maxsim Precision@3
685
+ - type: MaxSim_precision@5
686
+ value: 0.5673469387755103
687
+ name: Maxsim Precision@5
688
+ - type: MaxSim_precision@10
689
+ value: 0.45918367346938777
690
+ name: Maxsim Precision@10
691
+ - type: MaxSim_recall@1
692
+ value: 0.04308959031413618
693
+ name: Maxsim Recall@1
694
+ - type: MaxSim_recall@3
695
+ value: 0.11831839494199368
696
+ name: Maxsim Recall@3
697
+ - type: MaxSim_recall@5
698
+ value: 0.1804772716223025
699
+ name: Maxsim Recall@5
700
+ - type: MaxSim_recall@10
701
+ value: 0.2842813442856462
702
+ name: Maxsim Recall@10
703
+ - type: MaxSim_ndcg@10
704
+ value: 0.5191595399345652
705
+ name: Maxsim Ndcg@10
706
+ - type: MaxSim_mrr@10
707
+ value: 0.8064625850340136
708
+ name: Maxsim Mrr@10
709
+ - type: MaxSim_map@100
710
+ value: 0.32574548665687825
711
+ name: Maxsim Map@100
712
+ - task:
713
+ type: nano-beir
714
+ name: Nano BEIR
715
+ dataset:
716
+ name: NanoBEIR mean
717
+ type: NanoBEIR_mean
718
+ metrics:
719
+ - type: MaxSim_accuracy@1
720
+ value: 0.4317739403453689
721
+ name: Maxsim Accuracy@1
722
+ - type: MaxSim_accuracy@3
723
+ value: 0.5753218210361067
724
+ name: Maxsim Accuracy@3
725
+ - type: MaxSim_accuracy@5
726
+ value: 0.6399686028257457
727
+ name: Maxsim Accuracy@5
728
+ - type: MaxSim_accuracy@10
729
+ value: 0.7061538461538461
730
+ name: Maxsim Accuracy@10
731
+ - type: MaxSim_precision@1
732
+ value: 0.4317739403453689
733
+ name: Maxsim Precision@1
734
+ - type: MaxSim_precision@3
735
+ value: 0.26554683411826263
736
+ name: Maxsim Precision@3
737
+ - type: MaxSim_precision@5
738
+ value: 0.2150266875981162
739
+ name: Maxsim Precision@5
740
+ - type: MaxSim_precision@10
741
+ value: 0.14901412872841444
742
+ name: Maxsim Precision@10
743
+ - type: MaxSim_recall@1
744
+ value: 0.2378753968312239
745
+ name: Maxsim Recall@1
746
+ - type: MaxSim_recall@3
747
+ value: 0.34854876486472214
748
+ name: Maxsim Recall@3
749
+ - type: MaxSim_recall@5
750
+ value: 0.41149967660335024
751
+ name: Maxsim Recall@5
752
+ - type: MaxSim_recall@10
753
+ value: 0.4744950475424349
754
+ name: Maxsim Recall@10
755
+ - type: MaxSim_ndcg@10
756
+ value: 0.44952851967210117
757
+ name: Maxsim Ndcg@10
758
+ - type: MaxSim_mrr@10
759
+ value: 0.5193719693005406
760
+ name: Maxsim Mrr@10
761
+ - type: MaxSim_map@100
762
+ value: 0.3725227084143902
763
+ name: Maxsim Map@100
764
+ ---
+
+ # ColBERT MUVERA Femto
+
+ This is a [PyLate](https://github.com/lightonai/pylate) model finetuned from [neuml/bert-hash-femto](https://huggingface.co/neuml/bert-hash-femto) on the [msmarco-en-bge-gemma unnormalized split](https://huggingface.co/datasets/lightonai/ms-marco-en-bge-gemma) dataset. It maps sentences & paragraphs to sequences of 50-dimensional dense vectors and can be used for semantic textual similarity using the MaxSim operator.
+
+ This model is trained with un-normalized scores, making it compatible with [MUVERA fixed-dimensional encoding](https://arxiv.org/abs/2405.19504).
+
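+ For intuition, the MaxSim operator scores a query against a document by matching each query token embedding to its best-matching document token embedding and summing those similarities. Below is a minimal, illustrative sketch of that computation (not the PyLate implementation; the tensor shapes are assumptions):
+
+ ```python
+ import torch
+
+ def maxsim(query_embeddings: torch.Tensor, document_embeddings: torch.Tensor) -> torch.Tensor:
+     """Late interaction score for one query/document pair.
+
+     query_embeddings: (num_query_tokens, dim), document_embeddings: (num_document_tokens, dim)
+     """
+     # Dot-product similarity between every query token and every document token
+     similarities = query_embeddings @ document_embeddings.T
+     # Keep each query token's best match, then sum across query tokens
+     return similarities.max(dim=1).values.sum()
+ ```
+
+ Because the scores are un-normalized dot products, they are the kind of quantity MUVERA's single fixed-dimensional vectors are designed to approximate.
+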
+ ## Usage (txtai)
+
+ This model can be used to build embeddings databases with [txtai](https://github.com/neuml/txtai) for semantic search and/or as a knowledge source for retrieval augmented generation (RAG).
+
+ _Note: txtai 9.0+ is required for late interaction model support_
+
+ ```python
+ import txtai
+
+ embeddings = txtai.Embeddings(
+   sparse="neuml/colbert-muvera-femto",
+   content=True
+ )
+ embeddings.index(documents())
+
+ # Run a query
+ embeddings.search("query to run")
+ ```
+
+ Late interaction models excel in reranker pipelines.
+
+ ```python
+ from txtai.pipeline import Reranker, Similarity
+
+ similarity = Similarity(path="neuml/colbert-muvera-femto", lateencode=True)
+ ranker = Reranker(embeddings, similarity)
+ ranker("query to run")
+ ```
+
+ ## Usage (PyLate)
+
+ Alternatively, the model can be loaded with [PyLate](https://github.com/lightonai/pylate).
+
+ ```python
+ from pylate import rank, models
+
+ queries = [
+     "query A",
+     "query B",
+ ]
+
+ documents = [
+     ["document A", "document B"],
+     ["document 1", "document C", "document B"],
+ ]
+
+ documents_ids = [
+     [1, 2],
+     [1, 3, 2],
+ ]
+
+ model = models.ColBERT(
+     model_name_or_path="neuml/colbert-muvera-femto",
+ )
+
+ queries_embeddings = model.encode(
+     queries,
+     is_query=True,
+ )
+
+ documents_embeddings = model.encode(
+     documents,
+     is_query=False,
+ )
+
+ reranked_documents = rank.rerank(
+     documents_ids=documents_ids,
+     queries_embeddings=queries_embeddings,
+     documents_embeddings=documents_embeddings,
+ )
+ ```
+
+ ### Full Model Architecture
+
+ ```
+ ColBERT(
+   (0): Transformer({'max_seq_length': 299, 'do_lower_case': False}) with Transformer model: BertHashModel
+   (1): Dense({'in_features': 50, 'out_features': 50, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
+ )
+ ```
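+
+ To verify this structure locally, the same `models.ColBERT` class from the PyLate example above can print the module layout and show the per-token embedding shape. A small sketch (the exact output shape depends on the loaded configuration):
+
+ ```python
+ from pylate import models
+
+ model = models.ColBERT(model_name_or_path="neuml/colbert-muvera-femto")
+ print(model)  # prints the Transformer + Dense structure shown above
+
+ # Each input text becomes a matrix of per-token embeddings: (num_tokens, output_dim)
+ embeddings = model.encode(["a small example document"], is_query=False)
+ print(embeddings[0].shape)
+ ```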
+
+ ## Evaluation
+
+ ### BEIR Subset
+
+ The following tables show a subset of BEIR scored with the [txtai benchmarks script](https://github.com/neuml/txtai/blob/master/examples/benchmarks.py).
+
+ Scores reported are `ndcg@10`, grouped into the following three categories.
+
+ #### FULL multi-vector maxsim
+
+ | Model | Parameters | NFCorpus | SciDocs | SciFact | Average |
+ |:------------------|:-----------|:---------|:---------|:--------|:--------|
+ | [ColBERT v2](https://huggingface.co/colbert-ir/colbertv2.0) | 110M | 0.3165 | 0.1497 | 0.6456 | 0.3706 |
+ | [**ColBERT MUVERA Femto**](https://huggingface.co/neuml/colbert-muvera-femto) | **0.2M** | **0.2513** | **0.0870** | **0.4710** | **0.2698** |
+ | [ColBERT MUVERA Pico](https://huggingface.co/neuml/colbert-muvera-pico) | 0.4M | 0.3005 | 0.1117 | 0.6452 | 0.3525 |
+ | [ColBERT MUVERA Nano](https://huggingface.co/neuml/colbert-muvera-nano) | 0.9M | 0.3180 | 0.1262 | 0.6576 | 0.3673 |
+ | [ColBERT MUVERA Micro](https://huggingface.co/neuml/colbert-muvera-micro) | 4M | 0.3235 | 0.1244 | 0.6676 | 0.3718 |
+
+ #### MUVERA encoding + maxsim re-ranking of the top 100 results (per the MUVERA paper)
+
+ | Model | Parameters | NFCorpus | SciDocs | SciFact | Average |
+ |:------------------|:-----------|:---------|:---------|:--------|:--------|
+ | [ColBERT v2](https://huggingface.co/colbert-ir/colbertv2.0) | 110M | 0.3025 | 0.1538 | 0.6278 | 0.3614 |
+ | [**ColBERT MUVERA Femto**](https://huggingface.co/neuml/colbert-muvera-femto) | **0.2M** | **0.2316** | **0.0858** | **0.4641** | **0.2605** |
+ | [ColBERT MUVERA Pico](https://huggingface.co/neuml/colbert-muvera-pico) | 0.4M | 0.2821 | 0.1004 | 0.6090 | 0.3305 |
+ | [ColBERT MUVERA Nano](https://huggingface.co/neuml/colbert-muvera-nano) | 0.9M | 0.2996 | 0.1201 | 0.6249 | 0.3482 |
+ | [ColBERT MUVERA Micro](https://huggingface.co/neuml/colbert-muvera-micro) | 4M | 0.3095 | 0.1228 | 0.6464 | 0.3596 |
+
+ #### MUVERA encoding only
+
+ | Model | Parameters | NFCorpus | SciDocs | SciFact | Average |
+ |:------------------|:-----------|:---------|:---------|:--------|:--------|
+ | [ColBERT v2](https://huggingface.co/colbert-ir/colbertv2.0) | 110M | 0.2356 | 0.1229 | 0.5002 | 0.2862 |
+ | [**ColBERT MUVERA Femto**](https://huggingface.co/neuml/colbert-muvera-femto) | **0.2M** | **0.1851** | **0.0411** | **0.3518** | **0.1927** |
+ | [ColBERT MUVERA Pico](https://huggingface.co/neuml/colbert-muvera-pico) | 0.4M | 0.1926 | 0.0564 | 0.4424 | 0.2305 |
+ | [ColBERT MUVERA Nano](https://huggingface.co/neuml/colbert-muvera-nano) | 0.9M | 0.2355 | 0.0807 | 0.4904 | 0.2689 |
+ | [ColBERT MUVERA Micro](https://huggingface.co/neuml/colbert-muvera-micro) | 4M | 0.2348 | 0.0882 | 0.4875 | 0.2702 |
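+
+ The second category above ("MUVERA encoding + maxsim re-ranking") is a two-stage pipeline: candidates are first retrieved with single-vector MUVERA encodings, then the top results are re-scored with exact MaxSim. A rough sketch of that flow is below; how the fixed-dimensional encodings (`query_fde`, `doc_fdes`) are produced is covered in the MUVERA paper and left abstract here, and all names are illustrative:
+
+ ```python
+ import numpy as np
+
+ def muvera_then_maxsim(query_fde, doc_fdes, query_tokens, doc_tokens, k=100):
+     """Illustrative two-stage search: FDE retrieval, then MaxSim re-ranking of the top k."""
+     # Stage 1: one dot product per document between fixed-dimensional encodings
+     candidate_ids = np.argsort(-(doc_fdes @ query_fde))[:k]
+
+     # Stage 2: exact multi-vector MaxSim over the candidates only
+     def maxsim(q, d):
+         return (q @ d.T).max(axis=1).sum()
+
+     return sorted(candidate_ids, key=lambda i: maxsim(query_tokens, doc_tokens[i]), reverse=True)
+ ```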
+
+ _Note: The scores reported here don't match those reported in the respective papers due to different default settings in the txtai benchmark scripts._
+
+ As noted earlier, models trained with min-max score normalization don't perform well with MUVERA encoding. See this [GitHub Issue](https://github.com/lightonai/pylate/issues/142) for more.
+
+ **This model has only 250K parameters and a file size of 950K. Keeping that in mind, it's surprising how decent the scores are!**
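+
+ Most of that small footprint comes from the hashed token embeddings in `modeling_bert_hash.py`, where tokens are first embedded into a narrow `projections`-wide table and then projected up to the hidden size. A quick way to sanity-check the count (a sketch, assuming the repository's custom classes resolve via `trust_remote_code`):
+
+ ```python
+ from transformers import AutoModel
+
+ # Load the base BertHashModel; the custom modeling code ships with this repository
+ model = AutoModel.from_pretrained("neuml/colbert-muvera-femto", trust_remote_code=True)
+
+ total = sum(p.numel() for p in model.parameters())
+ print(f"{total:,} parameters")  # roughly 0.24M; the small Dense head under 1_Dense/ adds the rest
+ ```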
+
+ ### Nano BEIR
+ * Dataset: `NanoBEIR_mean`
+ * Evaluated with <code>pylate.evaluation.nano_beir_evaluator.NanoBEIREvaluator</code>
+
+ | Metric | Value |
+ |:--------------------|:-----------|
+ | MaxSim_accuracy@1 | 0.4318 |
+ | MaxSim_accuracy@3 | 0.5753 |
+ | MaxSim_accuracy@5 | 0.64 |
+ | MaxSim_accuracy@10 | 0.7062 |
+ | MaxSim_precision@1 | 0.4318 |
+ | MaxSim_precision@3 | 0.2655 |
+ | MaxSim_precision@5 | 0.215 |
+ | MaxSim_precision@10 | 0.149 |
+ | MaxSim_recall@1 | 0.2379 |
+ | MaxSim_recall@3 | 0.3485 |
+ | MaxSim_recall@5 | 0.4115 |
+ | MaxSim_recall@10 | 0.4745 |
+ | **MaxSim_ndcg@10** | **0.4495** |
+ | MaxSim_mrr@10 | 0.5194 |
+ | MaxSim_map@100 | 0.3725 |
+
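+ The table above can be reproduced (approximately) with PyLate's NanoBEIR evaluator. The import path follows the evaluator class named above; constructor arguments may differ between PyLate versions, so treat this as a sketch:
+
+ ```python
+ from pylate import models
+ from pylate.evaluation.nano_beir_evaluator import NanoBEIREvaluator
+
+ model = models.ColBERT(model_name_or_path="neuml/colbert-muvera-femto")
+
+ # Scores the model on the NanoBEIR datasets and reports the MaxSim metrics
+ evaluator = NanoBEIREvaluator()
+ print(evaluator(model))
+ ```
+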
+ ## Training Details
+
+ ### Training Hyperparameters
+
+ #### Non-Default Hyperparameters
+
+ - `eval_strategy`: steps
+ - `per_device_train_batch_size`: 32
+ - `learning_rate`: 0.0003
+ - `num_train_epochs`: 1
+ - `warmup_ratio`: 0.05
+ - `fp16`: True
+
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: steps
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 32
+ - `per_device_eval_batch_size`: 8
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 1
+ - `eval_accumulation_steps`: None
+ - `torch_empty_cache_steps`: None
+ - `learning_rate`: 0.0003
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1.0
+ - `num_train_epochs`: 1
+ - `max_steps`: -1
+ - `lr_scheduler_type`: linear
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.05
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `bf16`: False
+ - `fp16`: True
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: False
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `parallelism_config`: None
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch_fused
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `project`: huggingface
+ - `trackio_space_id`: trackio
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: None
+ - `hub_always_push`: False
+ - `hub_revision`: None
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `include_for_metrics`: []
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`: 
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: no
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `eval_on_start`: False
+ - `use_liger_kernel`: False
+ - `liger_kernel_config`: None
+ - `eval_use_gather_object`: False
+ - `average_tokens_across_devices`: True
+ - `prompts`: None
+ - `batch_sampler`: batch_sampler
+ - `multi_dataset_batch_sampler`: proportional
+
+ </details>
+
+ ### Framework Versions
+ - Python: 3.10.18
+ - Sentence Transformers: 4.0.2
+ - PyLate: 1.3.2
+ - Transformers: 4.57.0
+ - PyTorch: 2.8.0+cu128
+ - Accelerate: 1.10.1
+ - Datasets: 4.1.1
+ - Tokenizers: 0.22.1
+
+ ## Citation
+
+ ### BibTeX
+
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+     author = "Reimers, Nils and Gurevych, Iryna",
+     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+     month = "11",
+     year = "2019",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/1908.10084"
+ }
+ ```
+
+ #### PyLate
+ ```bibtex
+ @misc{PyLate,
+     title={PyLate: Flexible Training and Retrieval for Late Interaction Models},
+     author={Chaffin, Antoine and Sourty, Raphaël},
+     url={https://github.com/lightonai/pylate},
+     year={2024}
+ }
+ ```
added_tokens.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "[D] ": 30523,
+   "[Q] ": 30522
+ }
config.json ADDED
@@ -0,0 +1,31 @@
+ {
+   "architectures": [
+     "BertHashModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "auto_map": {
+     "AutoConfig": "configuration_bert_hash.BertHashConfig",
+     "AutoModel": "modeling_bert_hash.BertHashModel",
+     "AutoModelForMaskedLM": "modeling_bert_hash.BertHashForMaskedLM",
+     "AutoModelForSequenceClassification": "modeling_bert_hash.BertHashForSequenceClassification"
+   },
+   "classifier_dropout": null,
+   "dtype": "float32",
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 50,
+   "initializer_range": 0.02,
+   "intermediate_size": 200,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert_hash",
+   "num_attention_heads": 2,
+   "num_hidden_layers": 2,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "projections": 5,
+   "transformers_version": "4.57.0",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30524
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,50 @@
+ {
+   "__version__": {
+     "sentence_transformers": "4.0.2",
+     "transformers": "4.57.0",
+     "pytorch": "2.8.0+cu128"
+   },
+   "prompts": {},
+   "default_prompt_name": null,
+   "similarity_fn_name": "MaxSim",
+   "query_prefix": "[Q] ",
+   "document_prefix": "[D] ",
+   "query_length": 32,
+   "document_length": 300,
+   "attend_to_expansion_tokens": false,
+   "skiplist_words": [
+     "!",
+     "\"",
+     "#",
+     "$",
+     "%",
+     "&",
+     "'",
+     "(",
+     ")",
+     "*",
+     "+",
+     ",",
+     "-",
+     ".",
+     "/",
+     ":",
+     ";",
+     "<",
+     "=",
+     ">",
+     "?",
+     "@",
+     "[",
+     "\\",
+     "]",
+     "^",
+     "_",
+     "`",
+     "{",
+     "|",
+     "}",
+     "~"
+   ],
+   "do_query_expansion": true
+ }
configuration_bert_hash.py ADDED
@@ -0,0 +1,14 @@
+ from transformers.models.bert.configuration_bert import BertConfig
+
+
+ class BertHashConfig(BertConfig):
+     """
+     Extension of Bert configuration to add projections parameter.
+     """
+
+     model_type = "bert_hash"
+
+     def __init__(self, projections=5, **kwargs):
+         super().__init__(**kwargs)
+
+         self.projections = projections
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c394ea7e54b710573bc1d8a8ec83a5689db3d0d7b97c47718da535332eafa642
+ size 974552
modeling_bert_hash.py ADDED
@@ -0,0 +1,519 @@
1
+ from typing import Optional, Union
2
+
3
+ import torch
4
+ from torch import nn
5
+ from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss
6
+
7
+ from transformers.cache_utils import Cache
8
+ from transformers.models.bert.modeling_bert import BertEncoder, BertPooler, BertPreTrainedModel, BertOnlyMLMHead
9
+ from transformers.modeling_attn_mask_utils import _prepare_4d_attention_mask_for_sdpa, _prepare_4d_causal_attention_mask_for_sdpa
10
+ from transformers.modeling_outputs import (
11
+ BaseModelOutputWithPoolingAndCrossAttentions,
12
+ MaskedLMOutput,
13
+ SequenceClassifierOutput,
14
+ )
15
+ from transformers.utils import auto_docstring, logging
16
+
17
+ from .configuration_bert_hash import BertHashConfig
18
+
19
+ logger = logging.get_logger(__name__)
20
+
21
+
22
+ class BertHashTokens(nn.Module):
23
+ """
24
+ Module that embeds token vocabulary to an intermediate embeddings layer then projects those embeddings to the
25
+ hidden size.
26
+
27
+ The number of projections is like a hash. Setting the projections parameter to 5 is like generating a
28
+ 160-bit hash (5 x float32) for each token. That hash is then projected to the hidden size.
29
+
30
+ This significantly reduces the number of parameters necessary for token embeddings.
31
+
32
+ For example:
33
+ Standard token embeddings:
34
+ 30,522 (vocab size) x 768 (hidden size) = 23,440,896 parameters
35
+ 23,440,896 x 4 (float32) = 93,763,584 bytes
36
+
37
+ Hash token embeddings:
38
+ 30,522 (vocab size) x 5 (hash buckets) + 5 x 768 (projection matrix) = 156,450 parameters
39
+ 156,450 x 4 (float32) = 625,800 bytes
40
+ """
41
+
42
+ def __init__(self, config):
43
+ super().__init__()
44
+ self.config = config
45
+
46
+ # Token embeddings
47
+ self.embeddings = nn.Embedding(config.vocab_size, config.projections, padding_idx=config.pad_token_id)
48
+
49
+ # Token embeddings projections
50
+ self.projections = nn.Linear(config.projections, config.hidden_size)
51
+
52
+ def forward(self, input_ids):
53
+ # Project embeddings to hidden size
54
+ return self.projections(self.embeddings(input_ids))
55
+
56
+
57
+ class BertHashEmbeddings(nn.Module):
58
+ """Construct the embeddings from word, position and token_type embeddings."""
59
+
60
+ def __init__(self, config):
61
+ super().__init__()
62
+ self.word_embeddings = BertHashTokens(config)
63
+ self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size)
64
+ self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.hidden_size)
65
+
66
+ # self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load
67
+ # any TensorFlow checkpoint file
68
+ self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
69
+ self.dropout = nn.Dropout(config.hidden_dropout_prob)
70
+ # position_ids (1, len position emb) is contiguous in memory and exported when serialized
71
+ self.position_embedding_type = getattr(config, "position_embedding_type", "absolute")
72
+ self.register_buffer(
73
+ "position_ids", torch.arange(config.max_position_embeddings).expand((1, -1)), persistent=False
74
+ )
75
+ self.register_buffer(
76
+ "token_type_ids", torch.zeros(self.position_ids.size(), dtype=torch.long), persistent=False
77
+ )
78
+
79
+ def forward(
80
+ self,
81
+ input_ids: Optional[torch.LongTensor] = None,
82
+ token_type_ids: Optional[torch.LongTensor] = None,
83
+ position_ids: Optional[torch.LongTensor] = None,
84
+ inputs_embeds: Optional[torch.FloatTensor] = None,
85
+ past_key_values_length: int = 0,
86
+ ) -> torch.Tensor:
87
+ if input_ids is not None:
88
+ input_shape = input_ids.size()
89
+ else:
90
+ input_shape = inputs_embeds.size()[:-1]
91
+
92
+ seq_length = input_shape[1]
93
+
94
+ if position_ids is None:
95
+ position_ids = self.position_ids[:, past_key_values_length : seq_length + past_key_values_length]
96
+
97
+ # Setting the token_type_ids to the registered buffer in constructor where it is all zeros, which usually occurs
98
+ # when its auto-generated, registered buffer helps users when tracing the model without passing token_type_ids, solves
99
+ # issue #5664
100
+ if token_type_ids is None:
101
+ if hasattr(self, "token_type_ids"):
102
+ buffered_token_type_ids = self.token_type_ids[:, :seq_length]
103
+ buffered_token_type_ids_expanded = buffered_token_type_ids.expand(input_shape[0], seq_length)
104
+ token_type_ids = buffered_token_type_ids_expanded
105
+ else:
106
+ token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=self.position_ids.device)
107
+
108
+ if inputs_embeds is None:
109
+ inputs_embeds = self.word_embeddings(input_ids)
110
+ token_type_embeddings = self.token_type_embeddings(token_type_ids)
111
+
112
+ embeddings = inputs_embeds + token_type_embeddings
113
+ if self.position_embedding_type == "absolute":
114
+ position_embeddings = self.position_embeddings(position_ids)
115
+ embeddings += position_embeddings
116
+ embeddings = self.LayerNorm(embeddings)
117
+ embeddings = self.dropout(embeddings)
118
+ return embeddings
119
+
120
+
121
+ @auto_docstring(
122
+ custom_intro="""
123
+ The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
124
+ cross-attention is added between the self-attention layers, following the architecture described in [Attention is
125
+ all you need](https://huggingface.co/papers/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
126
+ Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
127
+
128
+ To behave as a decoder the model needs to be initialized with the `is_decoder` argument of the configuration set
129
+ to `True`. To be used in a Seq2Seq model, the model needs to be initialized with both `is_decoder` argument and
130
+ `add_cross_attention` set to `True`; an `encoder_hidden_states` is then expected as an input to the forward pass.
131
+ """
132
+ )
133
+ class BertHashModel(BertPreTrainedModel):
134
+ config_class = BertHashConfig
135
+
136
+ _no_split_modules = ["BertEmbeddings", "BertLayer"]
137
+
138
+ def __init__(self, config, add_pooling_layer=True):
139
+ r"""
140
+ add_pooling_layer (bool, *optional*, defaults to `True`):
141
+ Whether to add a pooling layer
142
+ """
143
+ super().__init__(config)
144
+ self.config = config
145
+
146
+ self.embeddings = BertHashEmbeddings(config)
147
+ self.encoder = BertEncoder(config)
148
+
149
+ self.pooler = BertPooler(config) if add_pooling_layer else None
150
+
151
+ self.attn_implementation = config._attn_implementation
152
+ self.position_embedding_type = config.position_embedding_type
153
+
154
+ # Initialize weights and apply final processing
155
+ self.post_init()
156
+
157
+ def get_input_embeddings(self):
158
+ return self.embeddings.word_embeddings.embeddings
159
+
160
+ def set_input_embeddings(self, value):
161
+ self.embeddings.word_embeddings.embeddings = value
162
+
163
+ def _prune_heads(self, heads_to_prune):
164
+ """
165
+ Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base
166
+ class PreTrainedModel
167
+ """
168
+ for layer, heads in heads_to_prune.items():
169
+ self.encoder.layer[layer].attention.prune_heads(heads)
170
+
171
+ @auto_docstring
172
+ def forward(
173
+ self,
174
+ input_ids: Optional[torch.Tensor] = None,
175
+ attention_mask: Optional[torch.Tensor] = None,
176
+ token_type_ids: Optional[torch.Tensor] = None,
177
+ position_ids: Optional[torch.Tensor] = None,
178
+ head_mask: Optional[torch.Tensor] = None,
179
+ inputs_embeds: Optional[torch.Tensor] = None,
180
+ encoder_hidden_states: Optional[torch.Tensor] = None,
181
+ encoder_attention_mask: Optional[torch.Tensor] = None,
182
+ past_key_values: Optional[list[torch.FloatTensor]] = None,
183
+ use_cache: Optional[bool] = None,
184
+ output_attentions: Optional[bool] = None,
185
+ output_hidden_states: Optional[bool] = None,
186
+ return_dict: Optional[bool] = None,
187
+ cache_position: Optional[torch.Tensor] = None,
188
+ ) -> Union[tuple[torch.Tensor], BaseModelOutputWithPoolingAndCrossAttentions]:
189
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
190
+ output_hidden_states = (
191
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
192
+ )
193
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
194
+
195
+ if self.config.is_decoder:
196
+ use_cache = use_cache if use_cache is not None else self.config.use_cache
197
+ else:
198
+ use_cache = False
199
+
200
+ if input_ids is not None and inputs_embeds is not None:
201
+ raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
202
+ elif input_ids is not None:
203
+ self.warn_if_padding_and_no_attention_mask(input_ids, attention_mask)
204
+ input_shape = input_ids.size()
205
+ elif inputs_embeds is not None:
206
+ input_shape = inputs_embeds.size()[:-1]
207
+ else:
208
+ raise ValueError("You have to specify either input_ids or inputs_embeds")
209
+
210
+ batch_size, seq_length = input_shape
211
+ device = input_ids.device if input_ids is not None else inputs_embeds.device
212
+
213
+ past_key_values_length = 0
214
+ if past_key_values is not None:
215
+ past_key_values_length = (
216
+ past_key_values[0][0].shape[-2]
217
+ if not isinstance(past_key_values, Cache)
218
+ else past_key_values.get_seq_length()
219
+ )
220
+
221
+ if token_type_ids is None:
222
+ if hasattr(self.embeddings, "token_type_ids"):
223
+ buffered_token_type_ids = self.embeddings.token_type_ids[:, :seq_length]
224
+ buffered_token_type_ids_expanded = buffered_token_type_ids.expand(batch_size, seq_length)
225
+ token_type_ids = buffered_token_type_ids_expanded
226
+ else:
227
+ token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=device)
228
+
229
+ embedding_output = self.embeddings(
230
+ input_ids=input_ids,
231
+ position_ids=position_ids,
232
+ token_type_ids=token_type_ids,
233
+ inputs_embeds=inputs_embeds,
234
+ past_key_values_length=past_key_values_length,
235
+ )
236
+
237
+ if attention_mask is None:
238
+ attention_mask = torch.ones((batch_size, seq_length + past_key_values_length), device=device)
239
+
240
+ use_sdpa_attention_masks = (
241
+ self.attn_implementation == "sdpa"
242
+ and self.position_embedding_type == "absolute"
243
+ and head_mask is None
244
+ and not output_attentions
245
+ )
246
+
247
+ # Expand the attention mask
248
+ if use_sdpa_attention_masks and attention_mask.dim() == 2:
249
+ # Expand the attention mask for SDPA.
250
+ # [bsz, seq_len] -> [bsz, 1, seq_len, seq_len]
251
+ if self.config.is_decoder:
252
+ extended_attention_mask = _prepare_4d_causal_attention_mask_for_sdpa(
253
+ attention_mask,
254
+ input_shape,
255
+ embedding_output,
256
+ past_key_values_length,
257
+ )
258
+ else:
259
+ extended_attention_mask = _prepare_4d_attention_mask_for_sdpa(
260
+ attention_mask, embedding_output.dtype, tgt_len=seq_length
261
+ )
262
+ else:
263
+ # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length]
264
+ # ourselves in which case we just need to make it broadcastable to all heads.
265
+ extended_attention_mask = self.get_extended_attention_mask(attention_mask, input_shape)
266
+
267
+ # If a 2D or 3D attention mask is provided for the cross-attention
268
+ # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length]
269
+ if self.config.is_decoder and encoder_hidden_states is not None:
270
+ encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.size()
271
+ encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length)
272
+ if encoder_attention_mask is None:
273
+ encoder_attention_mask = torch.ones(encoder_hidden_shape, device=device)
274
+
275
+ if use_sdpa_attention_masks and encoder_attention_mask.dim() == 2:
276
+ # Expand the attention mask for SDPA.
277
+ # [bsz, seq_len] -> [bsz, 1, seq_len, seq_len]
278
+ encoder_extended_attention_mask = _prepare_4d_attention_mask_for_sdpa(
279
+ encoder_attention_mask, embedding_output.dtype, tgt_len=seq_length
280
+ )
281
+ else:
282
+ encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask)
283
+ else:
284
+ encoder_extended_attention_mask = None
285
+
286
+ # Prepare head mask if needed
287
+ # 1.0 in head_mask indicate we keep the head
288
+ # attention_probs has shape bsz x n_heads x N x N
289
+ # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads]
290
+ # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length]
291
+ head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
292
+
293
+ encoder_outputs = self.encoder(
294
+ embedding_output,
295
+ attention_mask=extended_attention_mask,
296
+ head_mask=head_mask,
297
+ encoder_hidden_states=encoder_hidden_states,
298
+ encoder_attention_mask=encoder_extended_attention_mask,
299
+ past_key_values=past_key_values,
300
+ use_cache=use_cache,
301
+ output_attentions=output_attentions,
302
+ output_hidden_states=output_hidden_states,
303
+ return_dict=return_dict,
304
+ cache_position=cache_position,
305
+ )
306
+ sequence_output = encoder_outputs[0]
307
+ pooled_output = self.pooler(sequence_output) if self.pooler is not None else None
308
+
309
+ if not return_dict:
310
+ return (sequence_output, pooled_output) + encoder_outputs[1:]
311
+
312
+ return BaseModelOutputWithPoolingAndCrossAttentions(
313
+ last_hidden_state=sequence_output,
314
+ pooler_output=pooled_output,
315
+ past_key_values=encoder_outputs.past_key_values,
316
+ hidden_states=encoder_outputs.hidden_states,
317
+ attentions=encoder_outputs.attentions,
318
+ cross_attentions=encoder_outputs.cross_attentions,
319
+ )
320
+
321
+
322
+ @auto_docstring
323
+ class BertHashForMaskedLM(BertPreTrainedModel):
324
+ _tied_weights_keys = ["predictions.decoder.bias", "cls.predictions.decoder.weight"]
325
+ config_class = BertHashConfig
326
+
327
+ def __init__(self, config):
328
+ super().__init__(config)
329
+
330
+ if config.is_decoder:
331
+ logger.warning(
332
+ "If you want to use `BertForMaskedLM` make sure `config.is_decoder=False` for "
333
+ "bi-directional self-attention."
+             )
+ 
+         self.bert = BertHashModel(config, add_pooling_layer=False)
+         self.cls = BertOnlyMLMHead(config)
+ 
+         # Initialize weights and apply final processing
+         self.post_init()
+ 
+     @auto_docstring
+     def forward(
+         self,
+         input_ids: Optional[torch.Tensor] = None,
+         attention_mask: Optional[torch.Tensor] = None,
+         token_type_ids: Optional[torch.Tensor] = None,
+         position_ids: Optional[torch.Tensor] = None,
+         head_mask: Optional[torch.Tensor] = None,
+         inputs_embeds: Optional[torch.Tensor] = None,
+         encoder_hidden_states: Optional[torch.Tensor] = None,
+         encoder_attention_mask: Optional[torch.Tensor] = None,
+         labels: Optional[torch.Tensor] = None,
+         output_attentions: Optional[bool] = None,
+         output_hidden_states: Optional[bool] = None,
+         return_dict: Optional[bool] = None,
+     ) -> Union[tuple[torch.Tensor], MaskedLMOutput]:
+         r"""
+         labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
+             Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ...,
+             config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are ignored (masked), the
+             loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`
+         """
+ 
+         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+ 
+         outputs = self.bert(
+             input_ids,
+             attention_mask=attention_mask,
+             token_type_ids=token_type_ids,
+             position_ids=position_ids,
+             head_mask=head_mask,
+             inputs_embeds=inputs_embeds,
+             encoder_hidden_states=encoder_hidden_states,
+             encoder_attention_mask=encoder_attention_mask,
+             output_attentions=output_attentions,
+             output_hidden_states=output_hidden_states,
+             return_dict=return_dict,
+         )
+ 
+         sequence_output = outputs[0]
+         prediction_scores = self.cls(sequence_output)
+ 
+         masked_lm_loss = None
+         if labels is not None:
+             loss_fct = CrossEntropyLoss()  # -100 index = padding token
+             masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))
+ 
+         if not return_dict:
+             output = (prediction_scores,) + outputs[2:]
+             return ((masked_lm_loss,) + output) if masked_lm_loss is not None else output
+ 
+         return MaskedLMOutput(
+             loss=masked_lm_loss,
+             logits=prediction_scores,
+             hidden_states=outputs.hidden_states,
+             attentions=outputs.attentions,
+         )
+ 
+     def prepare_inputs_for_generation(self, input_ids, attention_mask=None, **model_kwargs):
+         input_shape = input_ids.shape
+         effective_batch_size = input_shape[0]
+ 
+         # add a dummy token
+         if self.config.pad_token_id is None:
+             raise ValueError("The PAD token should be defined for generation")
+ 
+         attention_mask = torch.cat([attention_mask, attention_mask.new_zeros((attention_mask.shape[0], 1))], dim=-1)
+         dummy_token = torch.full(
+             (effective_batch_size, 1), self.config.pad_token_id, dtype=torch.long, device=input_ids.device
+         )
+         input_ids = torch.cat([input_ids, dummy_token], dim=1)
+ 
+         return {"input_ids": input_ids, "attention_mask": attention_mask}
+ 
+     @classmethod
+     def can_generate(cls) -> bool:
+         """
+         Legacy correction: BertForMaskedLM can't call `generate()` from `GenerationMixin`, even though it has a
+         `prepare_inputs_for_generation` method.
+         """
+         return False
+ 
+ 
+ @auto_docstring(
+     custom_intro="""
+     Bert Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled
+     output) e.g. for GLUE tasks.
+     """
+ )
+ class BertHashForSequenceClassification(BertPreTrainedModel):
+     config_class = BertHashConfig
+ 
+     def __init__(self, config):
+         super().__init__(config)
+         self.num_labels = config.num_labels
+         self.config = config
+ 
+         self.bert = BertHashModel(config)
+         classifier_dropout = (
+             config.classifier_dropout if config.classifier_dropout is not None else config.hidden_dropout_prob
+         )
+         self.dropout = nn.Dropout(classifier_dropout)
+         self.classifier = nn.Linear(config.hidden_size, config.num_labels)
+ 
+         # Initialize weights and apply final processing
+         self.post_init()
+ 
+     @auto_docstring
+     def forward(
+         self,
+         input_ids: Optional[torch.Tensor] = None,
+         attention_mask: Optional[torch.Tensor] = None,
+         token_type_ids: Optional[torch.Tensor] = None,
+         position_ids: Optional[torch.Tensor] = None,
+         head_mask: Optional[torch.Tensor] = None,
+         inputs_embeds: Optional[torch.Tensor] = None,
+         labels: Optional[torch.Tensor] = None,
+         output_attentions: Optional[bool] = None,
+         output_hidden_states: Optional[bool] = None,
+         return_dict: Optional[bool] = None,
+     ) -> Union[tuple[torch.Tensor], SequenceClassifierOutput]:
+         r"""
+         labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
+             Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
+             config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
+             `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
+         """
+         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+ 
+         outputs = self.bert(
+             input_ids,
+             attention_mask=attention_mask,
+             token_type_ids=token_type_ids,
+             position_ids=position_ids,
+             head_mask=head_mask,
+             inputs_embeds=inputs_embeds,
+             output_attentions=output_attentions,
+             output_hidden_states=output_hidden_states,
+             return_dict=return_dict,
+         )
+ 
+         pooled_output = outputs[1]
+ 
+         pooled_output = self.dropout(pooled_output)
+         logits = self.classifier(pooled_output)
+ 
+         loss = None
+         if labels is not None:
+             if self.config.problem_type is None:
+                 if self.num_labels == 1:
+                     self.config.problem_type = "regression"
+                 elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int):
+                     self.config.problem_type = "single_label_classification"
+                 else:
+                     self.config.problem_type = "multi_label_classification"
+ 
+             if self.config.problem_type == "regression":
+                 loss_fct = MSELoss()
+                 if self.num_labels == 1:
+                     loss = loss_fct(logits.squeeze(), labels.squeeze())
+                 else:
+                     loss = loss_fct(logits, labels)
+             elif self.config.problem_type == "single_label_classification":
+                 loss_fct = CrossEntropyLoss()
+                 loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
+             elif self.config.problem_type == "multi_label_classification":
+                 loss_fct = BCEWithLogitsLoss()
+                 loss = loss_fct(logits, labels)
+         if not return_dict:
+             output = (logits,) + outputs[2:]
+             return ((loss,) + output) if loss is not None else output
+ 
+         return SequenceClassifierOutput(
+             loss=loss,
+             logits=logits,
+             hidden_states=outputs.hidden_states,
+             attentions=outputs.attentions,
+         )
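The heads above closely mirror the upstream `transformers` BERT heads, swapped onto the `BertHashModel` backbone and `BertHashConfig`. A minimal sketch of how the backbone could be loaded through the remote-code path follows; the repository id is a hypothetical placeholder and the `auto_map` registration is assumed rather than shown in this commit.

```python
# Minimal sketch, not part of this commit. Assumes the repo's config.json registers
# BertHashConfig/BertHashModel under auto_map; the repo id is a hypothetical placeholder.
from transformers import AutoModel, AutoTokenizer

repo_id = "user/colbert-muvera-femto"  # hypothetical placeholder

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id, trust_remote_code=True)  # should resolve to BertHashModel

inputs = tokenizer("a short passage to encode", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```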
modules.json ADDED
@@ -0,0 +1,14 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Dense",
+     "type": "pylate.models.Dense.Dense"
+   }
+ ]
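`modules.json` declares the two-stage PyLate pipeline: module `0` is the Transformer backbone that produces per-token hidden states, and module `1` is the PyLate `Dense` projection stored in `1_Dense` that maps them down to the ColBERT token-embedding dimension. As a rough sketch (the repo id is a hypothetical placeholder, and `trust_remote_code` is assumed to be needed for the custom backbone), PyLate rebuilds this pipeline at load time:

```python
# Sketch only: loading the composed pipeline with PyLate. The repo id is hypothetical.
from pylate import models

model = models.ColBERT(
    model_name_or_path="user/colbert-muvera-femto",  # hypothetical placeholder
    trust_remote_code=True,  # assumed necessary for the custom BertHash modeling code
)

# Module 0 (Transformer) -> token hidden states; Module 1 (Dense) -> ColBERT embeddings
query_embeddings = model.encode(["what is late interaction retrieval?"], is_query=True)
document_embeddings = model.encode(["ColBERT scores queries against documents token by token."], is_query=False)
```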
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 299,
+   "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,31 @@
+ {
+   "cls_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "[MASK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": "[MASK]",
+   "sep_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
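Note that `pad_token` is set to `[MASK]` rather than `[PAD]`, which follows the ColBERT query-augmentation convention: short queries are padded with `[MASK]` tokens the model can attend to and expand, instead of inert padding. A quick way to see the effect (the repo id is a hypothetical placeholder):

```python
# Sketch: with pad_token = "[MASK]", padding a short query appends [MASK] tokens.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("user/colbert-muvera-femto")  # hypothetical placeholder

encoded = tokenizer("what is colbert?", padding="max_length", max_length=16)
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
# trailing positions should read '[MASK]' rather than '[PAD]'
```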
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,72 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "100": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "30522": {
+       "content": "[Q] ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "30523": {
+       "content": "[D] ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     }
+   },
+   "clean_up_tokenization_spaces": false,
+   "cls_token": "[CLS]",
+   "do_lower_case": true,
+   "extra_special_tokens": {},
+   "mask_token": "[MASK]",
+   "model_max_length": 512,
+   "pad_token": "[MASK]",
+   "sep_token": "[SEP]",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "unk_token": "[UNK]"
+ }
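The two extra vocabulary entries, `[Q] ` (id 30522) and `[D] ` (id 30523), are the ColBERT marker tokens: PyLate prepends them to queries and documents respectively so the encoder can treat the two sides differently. A small check, again with a hypothetical placeholder repo id:

```python
# Sketch: the marker tokens declared in tokenizer_config.json should map to the two new ids.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("user/colbert-muvera-femto")  # hypothetical placeholder

print(tokenizer.convert_tokens_to_ids("[Q] "))  # expected: 30522
print(tokenizer.convert_tokens_to_ids("[D] "))  # expected: 30523
```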
vocab.txt ADDED
The diff for this file is too large to render. See raw diff