Sarvesh2003 committed on
Commit 741e342 · verified · 1 Parent(s): 73f6b5b

Upload model after epoch 8

README.md ADDED
@@ -0,0 +1,29 @@
+ ---
+ tags:
+ - florence-2
+ - vision
+ - price-prediction
+ - lora
+ license: mit
+ ---
+
+ # Florence-2 Price Prediction Model - Epoch 8
+
+ This model is a fine-tuned version of microsoft/Florence-2-base for price prediction tasks.
+
+ ## Training Details
+ - Epoch: 8
+ - Training Loss: 10.0511
+ - SMAPE: N/A
+
+ ## Model Description
+ This is a LoRA fine-tuned Florence-2 model that predicts prices from images and catalog text.
+
+ ## Usage
+ This repository contains a LoRA adapter, so load microsoft/Florence-2-base first and then apply the adapter with PEFT:
+ ```python
+ from transformers import AutoProcessor, AutoModelForCausalLM
+ from peft import PeftModel
+
+ processor = AutoProcessor.from_pretrained("Sarvesh2003/florence2-price-prediction-epoch8", trust_remote_code=True)
+ base_model = AutoModelForCausalLM.from_pretrained("microsoft/Florence-2-base", trust_remote_code=True)
+ model = PeftModel.from_pretrained(base_model, "Sarvesh2003/florence2-price-prediction-epoch8")
+ ```
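+
+ A minimal inference sketch follows. The exact task prompt and output format used during fine-tuning are not shown in this commit, so the prompt below is an assumption; adjust it to match the training setup:
+
+ ```python
+ import requests
+ from PIL import Image
+
+ # Hypothetical example image; replace with your own product image.
+ url = "https://example.com/product.jpg"
+ image = Image.open(requests.get(url, stream=True).raw)
+
+ # Assumption: a caption-style task prompt; swap in the prompt used during training.
+ prompt = "<CAPTION>"
+
+ inputs = processor(text=prompt, images=image, return_tensors="pt")
+ generated_ids = model.generate(
+     input_ids=inputs["input_ids"],
+     pixel_values=inputs["pixel_values"],
+     max_new_tokens=64,
+ )
+ prediction = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
+ print(prediction)  # expected to contain the predicted price as text
+ ```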
adapter_config.json ADDED
@@ -0,0 +1,39 @@
+ {
+ "alpha_pattern": {},
+ "auto_mapping": null,
+ "base_model_name_or_path": "microsoft/Florence-2-base",
+ "bias": "none",
+ "corda_config": null,
+ "eva_config": null,
+ "exclude_modules": null,
+ "fan_in_fan_out": false,
+ "inference_mode": true,
+ "init_lora_weights": true,
+ "layer_replication": null,
+ "layers_pattern": null,
+ "layers_to_transform": null,
+ "loftq_config": {},
+ "lora_alpha": 16,
+ "lora_bias": false,
+ "lora_dropout": 0.05,
+ "megatron_config": null,
+ "megatron_core": "megatron.core",
+ "modules_to_save": null,
+ "peft_type": "LORA",
+ "qalora_group_size": 16,
+ "r": 8,
+ "rank_pattern": {},
+ "revision": null,
+ "target_modules": [
+ "q_proj",
+ "v_proj",
+ "k_proj",
+ "out_proj"
+ ],
+ "target_parameters": null,
+ "task_type": "CAUSAL_LM",
+ "trainable_token_indices": null,
+ "use_dora": false,
+ "use_qalora": false,
+ "use_rslora": false
+ }
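
For reference, the LoRA hyperparameters above correspond roughly to the following PEFT configuration. This is a sketch reconstructed from adapter_config.json; the actual training script is not included in this commit:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Sketch: recreate the adapter setup described in adapter_config.json.
base = AutoModelForCausalLM.from_pretrained("microsoft/Florence-2-base", trust_remote_code=True)
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    target_modules=["q_proj", "v_proj", "k_proj", "out_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```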
adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7ced9221bb31e88f1b4ec64ab8d0441f64bb6d0fe62bc3ec75da755d7a582d27
+ size 3561144
added_tokens.json ADDED
@@ -0,0 +1,1026 @@
1
+ {
2
+ "</cap>": 51270,
3
+ "</dcap>": 51274,
4
+ "</grounding>": 51276,
5
+ "</ncap>": 51272,
6
+ "</ocr>": 50268,
7
+ "</od>": 50266,
8
+ "</poly>": 51287,
9
+ "</proposal>": 51285,
10
+ "</region_cap>": 51281,
11
+ "</region_to_desciption>": 51283,
12
+ "</seg>": 51278,
13
+ "<and>": 51288,
14
+ "<cap>": 51269,
15
+ "<dcap>": 51273,
16
+ "<grounding>": 51275,
17
+ "<loc_0>": 50269,
18
+ "<loc_100>": 50369,
19
+ "<loc_101>": 50370,
20
+ "<loc_102>": 50371,
21
+ "<loc_103>": 50372,
22
+ "<loc_104>": 50373,
23
+ "<loc_105>": 50374,
24
+ "<loc_106>": 50375,
25
+ "<loc_107>": 50376,
26
+ "<loc_108>": 50377,
27
+ "<loc_109>": 50378,
28
+ "<loc_10>": 50279,
29
+ "<loc_110>": 50379,
30
+ "<loc_111>": 50380,
31
+ "<loc_112>": 50381,
32
+ "<loc_113>": 50382,
33
+ "<loc_114>": 50383,
34
+ "<loc_115>": 50384,
35
+ "<loc_116>": 50385,
36
+ "<loc_117>": 50386,
37
+ "<loc_118>": 50387,
38
+ "<loc_119>": 50388,
39
+ "<loc_11>": 50280,
40
+ "<loc_120>": 50389,
41
+ "<loc_121>": 50390,
42
+ "<loc_122>": 50391,
43
+ "<loc_123>": 50392,
44
+ "<loc_124>": 50393,
45
+ "<loc_125>": 50394,
46
+ "<loc_126>": 50395,
47
+ "<loc_127>": 50396,
48
+ "<loc_128>": 50397,
49
+ "<loc_129>": 50398,
50
+ "<loc_12>": 50281,
51
+ "<loc_130>": 50399,
52
+ "<loc_131>": 50400,
53
+ "<loc_132>": 50401,
54
+ "<loc_133>": 50402,
55
+ "<loc_134>": 50403,
56
+ "<loc_135>": 50404,
57
+ "<loc_136>": 50405,
58
+ "<loc_137>": 50406,
59
+ "<loc_138>": 50407,
60
+ "<loc_139>": 50408,
61
+ "<loc_13>": 50282,
62
+ "<loc_140>": 50409,
63
+ "<loc_141>": 50410,
64
+ "<loc_142>": 50411,
65
+ "<loc_143>": 50412,
66
+ "<loc_144>": 50413,
67
+ "<loc_145>": 50414,
68
+ "<loc_146>": 50415,
69
+ "<loc_147>": 50416,
70
+ "<loc_148>": 50417,
71
+ "<loc_149>": 50418,
72
+ "<loc_14>": 50283,
73
+ "<loc_150>": 50419,
74
+ "<loc_151>": 50420,
75
+ "<loc_152>": 50421,
76
+ "<loc_153>": 50422,
77
+ "<loc_154>": 50423,
78
+ "<loc_155>": 50424,
79
+ "<loc_156>": 50425,
80
+ "<loc_157>": 50426,
81
+ "<loc_158>": 50427,
82
+ "<loc_159>": 50428,
83
+ "<loc_15>": 50284,
84
+ "<loc_160>": 50429,
85
+ "<loc_161>": 50430,
86
+ "<loc_162>": 50431,
87
+ "<loc_163>": 50432,
88
+ "<loc_164>": 50433,
89
+ "<loc_165>": 50434,
90
+ "<loc_166>": 50435,
91
+ "<loc_167>": 50436,
92
+ "<loc_168>": 50437,
93
+ "<loc_169>": 50438,
94
+ "<loc_16>": 50285,
95
+ "<loc_170>": 50439,
96
+ "<loc_171>": 50440,
97
+ "<loc_172>": 50441,
98
+ "<loc_173>": 50442,
99
+ "<loc_174>": 50443,
100
+ "<loc_175>": 50444,
101
+ "<loc_176>": 50445,
102
+ "<loc_177>": 50446,
103
+ "<loc_178>": 50447,
104
+ "<loc_179>": 50448,
105
+ "<loc_17>": 50286,
106
+ "<loc_180>": 50449,
107
+ "<loc_181>": 50450,
108
+ "<loc_182>": 50451,
109
+ "<loc_183>": 50452,
110
+ "<loc_184>": 50453,
111
+ "<loc_185>": 50454,
112
+ "<loc_186>": 50455,
113
+ "<loc_187>": 50456,
114
+ "<loc_188>": 50457,
115
+ "<loc_189>": 50458,
116
+ "<loc_18>": 50287,
117
+ "<loc_190>": 50459,
118
+ "<loc_191>": 50460,
119
+ "<loc_192>": 50461,
120
+ "<loc_193>": 50462,
121
+ "<loc_194>": 50463,
122
+ "<loc_195>": 50464,
123
+ "<loc_196>": 50465,
124
+ "<loc_197>": 50466,
125
+ "<loc_198>": 50467,
126
+ "<loc_199>": 50468,
127
+ "<loc_19>": 50288,
128
+ "<loc_1>": 50270,
129
+ "<loc_200>": 50469,
130
+ "<loc_201>": 50470,
131
+ "<loc_202>": 50471,
132
+ "<loc_203>": 50472,
133
+ "<loc_204>": 50473,
134
+ "<loc_205>": 50474,
135
+ "<loc_206>": 50475,
136
+ "<loc_207>": 50476,
137
+ "<loc_208>": 50477,
138
+ "<loc_209>": 50478,
139
+ "<loc_20>": 50289,
140
+ "<loc_210>": 50479,
141
+ "<loc_211>": 50480,
142
+ "<loc_212>": 50481,
143
+ "<loc_213>": 50482,
144
+ "<loc_214>": 50483,
145
+ "<loc_215>": 50484,
146
+ "<loc_216>": 50485,
147
+ "<loc_217>": 50486,
148
+ "<loc_218>": 50487,
149
+ "<loc_219>": 50488,
150
+ "<loc_21>": 50290,
151
+ "<loc_220>": 50489,
152
+ "<loc_221>": 50490,
153
+ "<loc_222>": 50491,
154
+ "<loc_223>": 50492,
155
+ "<loc_224>": 50493,
156
+ "<loc_225>": 50494,
157
+ "<loc_226>": 50495,
158
+ "<loc_227>": 50496,
159
+ "<loc_228>": 50497,
160
+ "<loc_229>": 50498,
161
+ "<loc_22>": 50291,
162
+ "<loc_230>": 50499,
163
+ "<loc_231>": 50500,
164
+ "<loc_232>": 50501,
165
+ "<loc_233>": 50502,
166
+ "<loc_234>": 50503,
167
+ "<loc_235>": 50504,
168
+ "<loc_236>": 50505,
169
+ "<loc_237>": 50506,
170
+ "<loc_238>": 50507,
171
+ "<loc_239>": 50508,
172
+ "<loc_23>": 50292,
173
+ "<loc_240>": 50509,
174
+ "<loc_241>": 50510,
175
+ "<loc_242>": 50511,
176
+ "<loc_243>": 50512,
177
+ "<loc_244>": 50513,
178
+ "<loc_245>": 50514,
179
+ "<loc_246>": 50515,
180
+ "<loc_247>": 50516,
181
+ "<loc_248>": 50517,
182
+ "<loc_249>": 50518,
183
+ "<loc_24>": 50293,
184
+ "<loc_250>": 50519,
185
+ "<loc_251>": 50520,
186
+ "<loc_252>": 50521,
187
+ "<loc_253>": 50522,
188
+ "<loc_254>": 50523,
189
+ "<loc_255>": 50524,
190
+ "<loc_256>": 50525,
191
+ "<loc_257>": 50526,
192
+ "<loc_258>": 50527,
193
+ "<loc_259>": 50528,
194
+ "<loc_25>": 50294,
195
+ "<loc_260>": 50529,
196
+ "<loc_261>": 50530,
197
+ "<loc_262>": 50531,
198
+ "<loc_263>": 50532,
199
+ "<loc_264>": 50533,
200
+ "<loc_265>": 50534,
201
+ "<loc_266>": 50535,
202
+ "<loc_267>": 50536,
203
+ "<loc_268>": 50537,
204
+ "<loc_269>": 50538,
205
+ "<loc_26>": 50295,
206
+ "<loc_270>": 50539,
207
+ "<loc_271>": 50540,
208
+ "<loc_272>": 50541,
209
+ "<loc_273>": 50542,
210
+ "<loc_274>": 50543,
211
+ "<loc_275>": 50544,
212
+ "<loc_276>": 50545,
213
+ "<loc_277>": 50546,
214
+ "<loc_278>": 50547,
215
+ "<loc_279>": 50548,
216
+ "<loc_27>": 50296,
217
+ "<loc_280>": 50549,
218
+ "<loc_281>": 50550,
219
+ "<loc_282>": 50551,
220
+ "<loc_283>": 50552,
221
+ "<loc_284>": 50553,
222
+ "<loc_285>": 50554,
223
+ "<loc_286>": 50555,
224
+ "<loc_287>": 50556,
225
+ "<loc_288>": 50557,
226
+ "<loc_289>": 50558,
227
+ "<loc_28>": 50297,
228
+ "<loc_290>": 50559,
229
+ "<loc_291>": 50560,
230
+ "<loc_292>": 50561,
231
+ "<loc_293>": 50562,
232
+ "<loc_294>": 50563,
233
+ "<loc_295>": 50564,
234
+ "<loc_296>": 50565,
235
+ "<loc_297>": 50566,
236
+ "<loc_298>": 50567,
237
+ "<loc_299>": 50568,
238
+ "<loc_29>": 50298,
239
+ "<loc_2>": 50271,
240
+ "<loc_300>": 50569,
241
+ "<loc_301>": 50570,
242
+ "<loc_302>": 50571,
243
+ "<loc_303>": 50572,
244
+ "<loc_304>": 50573,
245
+ "<loc_305>": 50574,
246
+ "<loc_306>": 50575,
247
+ "<loc_307>": 50576,
248
+ "<loc_308>": 50577,
249
+ "<loc_309>": 50578,
250
+ "<loc_30>": 50299,
251
+ "<loc_310>": 50579,
252
+ "<loc_311>": 50580,
253
+ "<loc_312>": 50581,
254
+ "<loc_313>": 50582,
255
+ "<loc_314>": 50583,
256
+ "<loc_315>": 50584,
257
+ "<loc_316>": 50585,
258
+ "<loc_317>": 50586,
259
+ "<loc_318>": 50587,
260
+ "<loc_319>": 50588,
261
+ "<loc_31>": 50300,
262
+ "<loc_320>": 50589,
263
+ "<loc_321>": 50590,
264
+ "<loc_322>": 50591,
265
+ "<loc_323>": 50592,
266
+ "<loc_324>": 50593,
267
+ "<loc_325>": 50594,
268
+ "<loc_326>": 50595,
269
+ "<loc_327>": 50596,
270
+ "<loc_328>": 50597,
271
+ "<loc_329>": 50598,
272
+ "<loc_32>": 50301,
273
+ "<loc_330>": 50599,
274
+ "<loc_331>": 50600,
275
+ "<loc_332>": 50601,
276
+ "<loc_333>": 50602,
277
+ "<loc_334>": 50603,
278
+ "<loc_335>": 50604,
279
+ "<loc_336>": 50605,
280
+ "<loc_337>": 50606,
281
+ "<loc_338>": 50607,
282
+ "<loc_339>": 50608,
283
+ "<loc_33>": 50302,
284
+ "<loc_340>": 50609,
285
+ "<loc_341>": 50610,
286
+ "<loc_342>": 50611,
287
+ "<loc_343>": 50612,
288
+ "<loc_344>": 50613,
289
+ "<loc_345>": 50614,
290
+ "<loc_346>": 50615,
291
+ "<loc_347>": 50616,
292
+ "<loc_348>": 50617,
293
+ "<loc_349>": 50618,
294
+ "<loc_34>": 50303,
295
+ "<loc_350>": 50619,
296
+ "<loc_351>": 50620,
297
+ "<loc_352>": 50621,
298
+ "<loc_353>": 50622,
299
+ "<loc_354>": 50623,
300
+ "<loc_355>": 50624,
301
+ "<loc_356>": 50625,
302
+ "<loc_357>": 50626,
303
+ "<loc_358>": 50627,
304
+ "<loc_359>": 50628,
305
+ "<loc_35>": 50304,
306
+ "<loc_360>": 50629,
307
+ "<loc_361>": 50630,
308
+ "<loc_362>": 50631,
309
+ "<loc_363>": 50632,
310
+ "<loc_364>": 50633,
311
+ "<loc_365>": 50634,
312
+ "<loc_366>": 50635,
313
+ "<loc_367>": 50636,
314
+ "<loc_368>": 50637,
315
+ "<loc_369>": 50638,
316
+ "<loc_36>": 50305,
317
+ "<loc_370>": 50639,
318
+ "<loc_371>": 50640,
319
+ "<loc_372>": 50641,
320
+ "<loc_373>": 50642,
321
+ "<loc_374>": 50643,
322
+ "<loc_375>": 50644,
323
+ "<loc_376>": 50645,
324
+ "<loc_377>": 50646,
325
+ "<loc_378>": 50647,
326
+ "<loc_379>": 50648,
327
+ "<loc_37>": 50306,
328
+ "<loc_380>": 50649,
329
+ "<loc_381>": 50650,
330
+ "<loc_382>": 50651,
331
+ "<loc_383>": 50652,
332
+ "<loc_384>": 50653,
333
+ "<loc_385>": 50654,
334
+ "<loc_386>": 50655,
335
+ "<loc_387>": 50656,
336
+ "<loc_388>": 50657,
337
+ "<loc_389>": 50658,
338
+ "<loc_38>": 50307,
339
+ "<loc_390>": 50659,
340
+ "<loc_391>": 50660,
341
+ "<loc_392>": 50661,
342
+ "<loc_393>": 50662,
343
+ "<loc_394>": 50663,
344
+ "<loc_395>": 50664,
345
+ "<loc_396>": 50665,
346
+ "<loc_397>": 50666,
347
+ "<loc_398>": 50667,
348
+ "<loc_399>": 50668,
349
+ "<loc_39>": 50308,
350
+ "<loc_3>": 50272,
351
+ "<loc_400>": 50669,
352
+ "<loc_401>": 50670,
353
+ "<loc_402>": 50671,
354
+ "<loc_403>": 50672,
355
+ "<loc_404>": 50673,
356
+ "<loc_405>": 50674,
357
+ "<loc_406>": 50675,
358
+ "<loc_407>": 50676,
359
+ "<loc_408>": 50677,
360
+ "<loc_409>": 50678,
361
+ "<loc_40>": 50309,
362
+ "<loc_410>": 50679,
363
+ "<loc_411>": 50680,
364
+ "<loc_412>": 50681,
365
+ "<loc_413>": 50682,
366
+ "<loc_414>": 50683,
367
+ "<loc_415>": 50684,
368
+ "<loc_416>": 50685,
369
+ "<loc_417>": 50686,
370
+ "<loc_418>": 50687,
371
+ "<loc_419>": 50688,
372
+ "<loc_41>": 50310,
373
+ "<loc_420>": 50689,
374
+ "<loc_421>": 50690,
375
+ "<loc_422>": 50691,
376
+ "<loc_423>": 50692,
377
+ "<loc_424>": 50693,
378
+ "<loc_425>": 50694,
379
+ "<loc_426>": 50695,
380
+ "<loc_427>": 50696,
381
+ "<loc_428>": 50697,
382
+ "<loc_429>": 50698,
383
+ "<loc_42>": 50311,
384
+ "<loc_430>": 50699,
385
+ "<loc_431>": 50700,
386
+ "<loc_432>": 50701,
387
+ "<loc_433>": 50702,
388
+ "<loc_434>": 50703,
389
+ "<loc_435>": 50704,
390
+ "<loc_436>": 50705,
391
+ "<loc_437>": 50706,
392
+ "<loc_438>": 50707,
393
+ "<loc_439>": 50708,
394
+ "<loc_43>": 50312,
395
+ "<loc_440>": 50709,
396
+ "<loc_441>": 50710,
397
+ "<loc_442>": 50711,
398
+ "<loc_443>": 50712,
399
+ "<loc_444>": 50713,
400
+ "<loc_445>": 50714,
401
+ "<loc_446>": 50715,
402
+ "<loc_447>": 50716,
403
+ "<loc_448>": 50717,
404
+ "<loc_449>": 50718,
405
+ "<loc_44>": 50313,
406
+ "<loc_450>": 50719,
407
+ "<loc_451>": 50720,
408
+ "<loc_452>": 50721,
409
+ "<loc_453>": 50722,
410
+ "<loc_454>": 50723,
411
+ "<loc_455>": 50724,
412
+ "<loc_456>": 50725,
413
+ "<loc_457>": 50726,
414
+ "<loc_458>": 50727,
415
+ "<loc_459>": 50728,
416
+ "<loc_45>": 50314,
417
+ "<loc_460>": 50729,
418
+ "<loc_461>": 50730,
419
+ "<loc_462>": 50731,
420
+ "<loc_463>": 50732,
421
+ "<loc_464>": 50733,
422
+ "<loc_465>": 50734,
423
+ "<loc_466>": 50735,
424
+ "<loc_467>": 50736,
425
+ "<loc_468>": 50737,
426
+ "<loc_469>": 50738,
427
+ "<loc_46>": 50315,
428
+ "<loc_470>": 50739,
429
+ "<loc_471>": 50740,
430
+ "<loc_472>": 50741,
431
+ "<loc_473>": 50742,
432
+ "<loc_474>": 50743,
433
+ "<loc_475>": 50744,
434
+ "<loc_476>": 50745,
435
+ "<loc_477>": 50746,
436
+ "<loc_478>": 50747,
437
+ "<loc_479>": 50748,
438
+ "<loc_47>": 50316,
439
+ "<loc_480>": 50749,
440
+ "<loc_481>": 50750,
441
+ "<loc_482>": 50751,
442
+ "<loc_483>": 50752,
443
+ "<loc_484>": 50753,
444
+ "<loc_485>": 50754,
445
+ "<loc_486>": 50755,
446
+ "<loc_487>": 50756,
447
+ "<loc_488>": 50757,
448
+ "<loc_489>": 50758,
449
+ "<loc_48>": 50317,
450
+ "<loc_490>": 50759,
451
+ "<loc_491>": 50760,
452
+ "<loc_492>": 50761,
453
+ "<loc_493>": 50762,
454
+ "<loc_494>": 50763,
455
+ "<loc_495>": 50764,
456
+ "<loc_496>": 50765,
457
+ "<loc_497>": 50766,
458
+ "<loc_498>": 50767,
459
+ "<loc_499>": 50768,
460
+ "<loc_49>": 50318,
461
+ "<loc_4>": 50273,
462
+ "<loc_500>": 50769,
463
+ "<loc_501>": 50770,
464
+ "<loc_502>": 50771,
465
+ "<loc_503>": 50772,
466
+ "<loc_504>": 50773,
467
+ "<loc_505>": 50774,
468
+ "<loc_506>": 50775,
469
+ "<loc_507>": 50776,
470
+ "<loc_508>": 50777,
471
+ "<loc_509>": 50778,
472
+ "<loc_50>": 50319,
473
+ "<loc_510>": 50779,
474
+ "<loc_511>": 50780,
475
+ "<loc_512>": 50781,
476
+ "<loc_513>": 50782,
477
+ "<loc_514>": 50783,
478
+ "<loc_515>": 50784,
479
+ "<loc_516>": 50785,
480
+ "<loc_517>": 50786,
481
+ "<loc_518>": 50787,
482
+ "<loc_519>": 50788,
483
+ "<loc_51>": 50320,
484
+ "<loc_520>": 50789,
485
+ "<loc_521>": 50790,
486
+ "<loc_522>": 50791,
487
+ "<loc_523>": 50792,
488
+ "<loc_524>": 50793,
489
+ "<loc_525>": 50794,
490
+ "<loc_526>": 50795,
491
+ "<loc_527>": 50796,
492
+ "<loc_528>": 50797,
493
+ "<loc_529>": 50798,
494
+ "<loc_52>": 50321,
495
+ "<loc_530>": 50799,
496
+ "<loc_531>": 50800,
497
+ "<loc_532>": 50801,
498
+ "<loc_533>": 50802,
499
+ "<loc_534>": 50803,
500
+ "<loc_535>": 50804,
501
+ "<loc_536>": 50805,
502
+ "<loc_537>": 50806,
503
+ "<loc_538>": 50807,
504
+ "<loc_539>": 50808,
505
+ "<loc_53>": 50322,
506
+ "<loc_540>": 50809,
507
+ "<loc_541>": 50810,
508
+ "<loc_542>": 50811,
509
+ "<loc_543>": 50812,
510
+ "<loc_544>": 50813,
511
+ "<loc_545>": 50814,
512
+ "<loc_546>": 50815,
513
+ "<loc_547>": 50816,
514
+ "<loc_548>": 50817,
515
+ "<loc_549>": 50818,
516
+ "<loc_54>": 50323,
517
+ "<loc_550>": 50819,
518
+ "<loc_551>": 50820,
519
+ "<loc_552>": 50821,
520
+ "<loc_553>": 50822,
521
+ "<loc_554>": 50823,
522
+ "<loc_555>": 50824,
523
+ "<loc_556>": 50825,
524
+ "<loc_557>": 50826,
525
+ "<loc_558>": 50827,
526
+ "<loc_559>": 50828,
527
+ "<loc_55>": 50324,
528
+ "<loc_560>": 50829,
529
+ "<loc_561>": 50830,
530
+ "<loc_562>": 50831,
531
+ "<loc_563>": 50832,
532
+ "<loc_564>": 50833,
533
+ "<loc_565>": 50834,
534
+ "<loc_566>": 50835,
535
+ "<loc_567>": 50836,
536
+ "<loc_568>": 50837,
537
+ "<loc_569>": 50838,
538
+ "<loc_56>": 50325,
539
+ "<loc_570>": 50839,
540
+ "<loc_571>": 50840,
541
+ "<loc_572>": 50841,
542
+ "<loc_573>": 50842,
543
+ "<loc_574>": 50843,
544
+ "<loc_575>": 50844,
545
+ "<loc_576>": 50845,
546
+ "<loc_577>": 50846,
547
+ "<loc_578>": 50847,
548
+ "<loc_579>": 50848,
549
+ "<loc_57>": 50326,
550
+ "<loc_580>": 50849,
551
+ "<loc_581>": 50850,
552
+ "<loc_582>": 50851,
553
+ "<loc_583>": 50852,
554
+ "<loc_584>": 50853,
555
+ "<loc_585>": 50854,
556
+ "<loc_586>": 50855,
557
+ "<loc_587>": 50856,
558
+ "<loc_588>": 50857,
559
+ "<loc_589>": 50858,
560
+ "<loc_58>": 50327,
561
+ "<loc_590>": 50859,
562
+ "<loc_591>": 50860,
563
+ "<loc_592>": 50861,
564
+ "<loc_593>": 50862,
565
+ "<loc_594>": 50863,
566
+ "<loc_595>": 50864,
567
+ "<loc_596>": 50865,
568
+ "<loc_597>": 50866,
569
+ "<loc_598>": 50867,
570
+ "<loc_599>": 50868,
571
+ "<loc_59>": 50328,
572
+ "<loc_5>": 50274,
573
+ "<loc_600>": 50869,
574
+ "<loc_601>": 50870,
575
+ "<loc_602>": 50871,
576
+ "<loc_603>": 50872,
577
+ "<loc_604>": 50873,
578
+ "<loc_605>": 50874,
579
+ "<loc_606>": 50875,
580
+ "<loc_607>": 50876,
581
+ "<loc_608>": 50877,
582
+ "<loc_609>": 50878,
583
+ "<loc_60>": 50329,
584
+ "<loc_610>": 50879,
585
+ "<loc_611>": 50880,
586
+ "<loc_612>": 50881,
587
+ "<loc_613>": 50882,
588
+ "<loc_614>": 50883,
589
+ "<loc_615>": 50884,
590
+ "<loc_616>": 50885,
591
+ "<loc_617>": 50886,
592
+ "<loc_618>": 50887,
593
+ "<loc_619>": 50888,
594
+ "<loc_61>": 50330,
595
+ "<loc_620>": 50889,
596
+ "<loc_621>": 50890,
597
+ "<loc_622>": 50891,
598
+ "<loc_623>": 50892,
599
+ "<loc_624>": 50893,
600
+ "<loc_625>": 50894,
601
+ "<loc_626>": 50895,
602
+ "<loc_627>": 50896,
603
+ "<loc_628>": 50897,
604
+ "<loc_629>": 50898,
605
+ "<loc_62>": 50331,
606
+ "<loc_630>": 50899,
607
+ "<loc_631>": 50900,
608
+ "<loc_632>": 50901,
609
+ "<loc_633>": 50902,
610
+ "<loc_634>": 50903,
611
+ "<loc_635>": 50904,
612
+ "<loc_636>": 50905,
613
+ "<loc_637>": 50906,
614
+ "<loc_638>": 50907,
615
+ "<loc_639>": 50908,
616
+ "<loc_63>": 50332,
617
+ "<loc_640>": 50909,
618
+ "<loc_641>": 50910,
619
+ "<loc_642>": 50911,
620
+ "<loc_643>": 50912,
621
+ "<loc_644>": 50913,
622
+ "<loc_645>": 50914,
623
+ "<loc_646>": 50915,
624
+ "<loc_647>": 50916,
625
+ "<loc_648>": 50917,
626
+ "<loc_649>": 50918,
627
+ "<loc_64>": 50333,
628
+ "<loc_650>": 50919,
629
+ "<loc_651>": 50920,
630
+ "<loc_652>": 50921,
631
+ "<loc_653>": 50922,
632
+ "<loc_654>": 50923,
633
+ "<loc_655>": 50924,
634
+ "<loc_656>": 50925,
635
+ "<loc_657>": 50926,
636
+ "<loc_658>": 50927,
637
+ "<loc_659>": 50928,
638
+ "<loc_65>": 50334,
639
+ "<loc_660>": 50929,
640
+ "<loc_661>": 50930,
641
+ "<loc_662>": 50931,
642
+ "<loc_663>": 50932,
643
+ "<loc_664>": 50933,
644
+ "<loc_665>": 50934,
645
+ "<loc_666>": 50935,
646
+ "<loc_667>": 50936,
647
+ "<loc_668>": 50937,
648
+ "<loc_669>": 50938,
649
+ "<loc_66>": 50335,
650
+ "<loc_670>": 50939,
651
+ "<loc_671>": 50940,
652
+ "<loc_672>": 50941,
653
+ "<loc_673>": 50942,
654
+ "<loc_674>": 50943,
655
+ "<loc_675>": 50944,
656
+ "<loc_676>": 50945,
657
+ "<loc_677>": 50946,
658
+ "<loc_678>": 50947,
659
+ "<loc_679>": 50948,
660
+ "<loc_67>": 50336,
661
+ "<loc_680>": 50949,
662
+ "<loc_681>": 50950,
663
+ "<loc_682>": 50951,
664
+ "<loc_683>": 50952,
665
+ "<loc_684>": 50953,
666
+ "<loc_685>": 50954,
667
+ "<loc_686>": 50955,
668
+ "<loc_687>": 50956,
669
+ "<loc_688>": 50957,
670
+ "<loc_689>": 50958,
671
+ "<loc_68>": 50337,
672
+ "<loc_690>": 50959,
673
+ "<loc_691>": 50960,
674
+ "<loc_692>": 50961,
675
+ "<loc_693>": 50962,
676
+ "<loc_694>": 50963,
677
+ "<loc_695>": 50964,
678
+ "<loc_696>": 50965,
679
+ "<loc_697>": 50966,
680
+ "<loc_698>": 50967,
681
+ "<loc_699>": 50968,
682
+ "<loc_69>": 50338,
683
+ "<loc_6>": 50275,
684
+ "<loc_700>": 50969,
685
+ "<loc_701>": 50970,
686
+ "<loc_702>": 50971,
687
+ "<loc_703>": 50972,
688
+ "<loc_704>": 50973,
689
+ "<loc_705>": 50974,
690
+ "<loc_706>": 50975,
691
+ "<loc_707>": 50976,
692
+ "<loc_708>": 50977,
693
+ "<loc_709>": 50978,
694
+ "<loc_70>": 50339,
695
+ "<loc_710>": 50979,
696
+ "<loc_711>": 50980,
697
+ "<loc_712>": 50981,
698
+ "<loc_713>": 50982,
699
+ "<loc_714>": 50983,
700
+ "<loc_715>": 50984,
701
+ "<loc_716>": 50985,
702
+ "<loc_717>": 50986,
703
+ "<loc_718>": 50987,
704
+ "<loc_719>": 50988,
705
+ "<loc_71>": 50340,
706
+ "<loc_720>": 50989,
707
+ "<loc_721>": 50990,
708
+ "<loc_722>": 50991,
709
+ "<loc_723>": 50992,
710
+ "<loc_724>": 50993,
711
+ "<loc_725>": 50994,
712
+ "<loc_726>": 50995,
713
+ "<loc_727>": 50996,
714
+ "<loc_728>": 50997,
715
+ "<loc_729>": 50998,
716
+ "<loc_72>": 50341,
717
+ "<loc_730>": 50999,
718
+ "<loc_731>": 51000,
719
+ "<loc_732>": 51001,
720
+ "<loc_733>": 51002,
721
+ "<loc_734>": 51003,
722
+ "<loc_735>": 51004,
723
+ "<loc_736>": 51005,
724
+ "<loc_737>": 51006,
725
+ "<loc_738>": 51007,
726
+ "<loc_739>": 51008,
727
+ "<loc_73>": 50342,
728
+ "<loc_740>": 51009,
729
+ "<loc_741>": 51010,
730
+ "<loc_742>": 51011,
731
+ "<loc_743>": 51012,
732
+ "<loc_744>": 51013,
733
+ "<loc_745>": 51014,
734
+ "<loc_746>": 51015,
735
+ "<loc_747>": 51016,
736
+ "<loc_748>": 51017,
737
+ "<loc_749>": 51018,
738
+ "<loc_74>": 50343,
739
+ "<loc_750>": 51019,
740
+ "<loc_751>": 51020,
741
+ "<loc_752>": 51021,
742
+ "<loc_753>": 51022,
743
+ "<loc_754>": 51023,
744
+ "<loc_755>": 51024,
745
+ "<loc_756>": 51025,
746
+ "<loc_757>": 51026,
747
+ "<loc_758>": 51027,
748
+ "<loc_759>": 51028,
749
+ "<loc_75>": 50344,
750
+ "<loc_760>": 51029,
751
+ "<loc_761>": 51030,
752
+ "<loc_762>": 51031,
753
+ "<loc_763>": 51032,
754
+ "<loc_764>": 51033,
755
+ "<loc_765>": 51034,
756
+ "<loc_766>": 51035,
757
+ "<loc_767>": 51036,
758
+ "<loc_768>": 51037,
759
+ "<loc_769>": 51038,
760
+ "<loc_76>": 50345,
761
+ "<loc_770>": 51039,
762
+ "<loc_771>": 51040,
763
+ "<loc_772>": 51041,
764
+ "<loc_773>": 51042,
765
+ "<loc_774>": 51043,
766
+ "<loc_775>": 51044,
767
+ "<loc_776>": 51045,
768
+ "<loc_777>": 51046,
769
+ "<loc_778>": 51047,
770
+ "<loc_779>": 51048,
771
+ "<loc_77>": 50346,
772
+ "<loc_780>": 51049,
773
+ "<loc_781>": 51050,
774
+ "<loc_782>": 51051,
775
+ "<loc_783>": 51052,
776
+ "<loc_784>": 51053,
777
+ "<loc_785>": 51054,
778
+ "<loc_786>": 51055,
779
+ "<loc_787>": 51056,
780
+ "<loc_788>": 51057,
781
+ "<loc_789>": 51058,
782
+ "<loc_78>": 50347,
783
+ "<loc_790>": 51059,
784
+ "<loc_791>": 51060,
785
+ "<loc_792>": 51061,
786
+ "<loc_793>": 51062,
787
+ "<loc_794>": 51063,
788
+ "<loc_795>": 51064,
789
+ "<loc_796>": 51065,
790
+ "<loc_797>": 51066,
791
+ "<loc_798>": 51067,
792
+ "<loc_799>": 51068,
793
+ "<loc_79>": 50348,
794
+ "<loc_7>": 50276,
795
+ "<loc_800>": 51069,
796
+ "<loc_801>": 51070,
797
+ "<loc_802>": 51071,
798
+ "<loc_803>": 51072,
799
+ "<loc_804>": 51073,
800
+ "<loc_805>": 51074,
801
+ "<loc_806>": 51075,
802
+ "<loc_807>": 51076,
803
+ "<loc_808>": 51077,
804
+ "<loc_809>": 51078,
805
+ "<loc_80>": 50349,
806
+ "<loc_810>": 51079,
807
+ "<loc_811>": 51080,
808
+ "<loc_812>": 51081,
809
+ "<loc_813>": 51082,
810
+ "<loc_814>": 51083,
811
+ "<loc_815>": 51084,
812
+ "<loc_816>": 51085,
813
+ "<loc_817>": 51086,
814
+ "<loc_818>": 51087,
815
+ "<loc_819>": 51088,
816
+ "<loc_81>": 50350,
817
+ "<loc_820>": 51089,
818
+ "<loc_821>": 51090,
819
+ "<loc_822>": 51091,
820
+ "<loc_823>": 51092,
821
+ "<loc_824>": 51093,
822
+ "<loc_825>": 51094,
823
+ "<loc_826>": 51095,
824
+ "<loc_827>": 51096,
825
+ "<loc_828>": 51097,
826
+ "<loc_829>": 51098,
827
+ "<loc_82>": 50351,
828
+ "<loc_830>": 51099,
829
+ "<loc_831>": 51100,
830
+ "<loc_832>": 51101,
831
+ "<loc_833>": 51102,
832
+ "<loc_834>": 51103,
833
+ "<loc_835>": 51104,
834
+ "<loc_836>": 51105,
835
+ "<loc_837>": 51106,
836
+ "<loc_838>": 51107,
837
+ "<loc_839>": 51108,
838
+ "<loc_83>": 50352,
839
+ "<loc_840>": 51109,
840
+ "<loc_841>": 51110,
841
+ "<loc_842>": 51111,
842
+ "<loc_843>": 51112,
843
+ "<loc_844>": 51113,
844
+ "<loc_845>": 51114,
845
+ "<loc_846>": 51115,
846
+ "<loc_847>": 51116,
847
+ "<loc_848>": 51117,
848
+ "<loc_849>": 51118,
849
+ "<loc_84>": 50353,
850
+ "<loc_850>": 51119,
851
+ "<loc_851>": 51120,
852
+ "<loc_852>": 51121,
853
+ "<loc_853>": 51122,
854
+ "<loc_854>": 51123,
855
+ "<loc_855>": 51124,
856
+ "<loc_856>": 51125,
857
+ "<loc_857>": 51126,
858
+ "<loc_858>": 51127,
859
+ "<loc_859>": 51128,
860
+ "<loc_85>": 50354,
861
+ "<loc_860>": 51129,
862
+ "<loc_861>": 51130,
863
+ "<loc_862>": 51131,
864
+ "<loc_863>": 51132,
865
+ "<loc_864>": 51133,
866
+ "<loc_865>": 51134,
867
+ "<loc_866>": 51135,
868
+ "<loc_867>": 51136,
869
+ "<loc_868>": 51137,
870
+ "<loc_869>": 51138,
871
+ "<loc_86>": 50355,
872
+ "<loc_870>": 51139,
873
+ "<loc_871>": 51140,
874
+ "<loc_872>": 51141,
875
+ "<loc_873>": 51142,
876
+ "<loc_874>": 51143,
877
+ "<loc_875>": 51144,
878
+ "<loc_876>": 51145,
879
+ "<loc_877>": 51146,
880
+ "<loc_878>": 51147,
881
+ "<loc_879>": 51148,
882
+ "<loc_87>": 50356,
883
+ "<loc_880>": 51149,
884
+ "<loc_881>": 51150,
885
+ "<loc_882>": 51151,
886
+ "<loc_883>": 51152,
887
+ "<loc_884>": 51153,
888
+ "<loc_885>": 51154,
889
+ "<loc_886>": 51155,
890
+ "<loc_887>": 51156,
891
+ "<loc_888>": 51157,
892
+ "<loc_889>": 51158,
893
+ "<loc_88>": 50357,
894
+ "<loc_890>": 51159,
895
+ "<loc_891>": 51160,
896
+ "<loc_892>": 51161,
897
+ "<loc_893>": 51162,
898
+ "<loc_894>": 51163,
899
+ "<loc_895>": 51164,
900
+ "<loc_896>": 51165,
901
+ "<loc_897>": 51166,
902
+ "<loc_898>": 51167,
903
+ "<loc_899>": 51168,
904
+ "<loc_89>": 50358,
905
+ "<loc_8>": 50277,
906
+ "<loc_900>": 51169,
907
+ "<loc_901>": 51170,
908
+ "<loc_902>": 51171,
909
+ "<loc_903>": 51172,
910
+ "<loc_904>": 51173,
911
+ "<loc_905>": 51174,
912
+ "<loc_906>": 51175,
913
+ "<loc_907>": 51176,
914
+ "<loc_908>": 51177,
915
+ "<loc_909>": 51178,
916
+ "<loc_90>": 50359,
917
+ "<loc_910>": 51179,
918
+ "<loc_911>": 51180,
919
+ "<loc_912>": 51181,
920
+ "<loc_913>": 51182,
921
+ "<loc_914>": 51183,
922
+ "<loc_915>": 51184,
923
+ "<loc_916>": 51185,
924
+ "<loc_917>": 51186,
925
+ "<loc_918>": 51187,
926
+ "<loc_919>": 51188,
927
+ "<loc_91>": 50360,
928
+ "<loc_920>": 51189,
929
+ "<loc_921>": 51190,
930
+ "<loc_922>": 51191,
931
+ "<loc_923>": 51192,
932
+ "<loc_924>": 51193,
933
+ "<loc_925>": 51194,
934
+ "<loc_926>": 51195,
935
+ "<loc_927>": 51196,
936
+ "<loc_928>": 51197,
937
+ "<loc_929>": 51198,
938
+ "<loc_92>": 50361,
939
+ "<loc_930>": 51199,
940
+ "<loc_931>": 51200,
941
+ "<loc_932>": 51201,
942
+ "<loc_933>": 51202,
943
+ "<loc_934>": 51203,
944
+ "<loc_935>": 51204,
945
+ "<loc_936>": 51205,
946
+ "<loc_937>": 51206,
947
+ "<loc_938>": 51207,
948
+ "<loc_939>": 51208,
949
+ "<loc_93>": 50362,
950
+ "<loc_940>": 51209,
951
+ "<loc_941>": 51210,
952
+ "<loc_942>": 51211,
953
+ "<loc_943>": 51212,
954
+ "<loc_944>": 51213,
955
+ "<loc_945>": 51214,
956
+ "<loc_946>": 51215,
957
+ "<loc_947>": 51216,
958
+ "<loc_948>": 51217,
959
+ "<loc_949>": 51218,
960
+ "<loc_94>": 50363,
961
+ "<loc_950>": 51219,
962
+ "<loc_951>": 51220,
963
+ "<loc_952>": 51221,
964
+ "<loc_953>": 51222,
965
+ "<loc_954>": 51223,
966
+ "<loc_955>": 51224,
967
+ "<loc_956>": 51225,
968
+ "<loc_957>": 51226,
969
+ "<loc_958>": 51227,
970
+ "<loc_959>": 51228,
971
+ "<loc_95>": 50364,
972
+ "<loc_960>": 51229,
973
+ "<loc_961>": 51230,
974
+ "<loc_962>": 51231,
975
+ "<loc_963>": 51232,
976
+ "<loc_964>": 51233,
977
+ "<loc_965>": 51234,
978
+ "<loc_966>": 51235,
979
+ "<loc_967>": 51236,
980
+ "<loc_968>": 51237,
981
+ "<loc_969>": 51238,
982
+ "<loc_96>": 50365,
983
+ "<loc_970>": 51239,
984
+ "<loc_971>": 51240,
985
+ "<loc_972>": 51241,
986
+ "<loc_973>": 51242,
987
+ "<loc_974>": 51243,
988
+ "<loc_975>": 51244,
989
+ "<loc_976>": 51245,
990
+ "<loc_977>": 51246,
991
+ "<loc_978>": 51247,
992
+ "<loc_979>": 51248,
993
+ "<loc_97>": 50366,
994
+ "<loc_980>": 51249,
995
+ "<loc_981>": 51250,
996
+ "<loc_982>": 51251,
997
+ "<loc_983>": 51252,
998
+ "<loc_984>": 51253,
999
+ "<loc_985>": 51254,
1000
+ "<loc_986>": 51255,
1001
+ "<loc_987>": 51256,
1002
+ "<loc_988>": 51257,
1003
+ "<loc_989>": 51258,
1004
+ "<loc_98>": 50367,
1005
+ "<loc_990>": 51259,
1006
+ "<loc_991>": 51260,
1007
+ "<loc_992>": 51261,
1008
+ "<loc_993>": 51262,
1009
+ "<loc_994>": 51263,
1010
+ "<loc_995>": 51264,
1011
+ "<loc_996>": 51265,
1012
+ "<loc_997>": 51266,
1013
+ "<loc_998>": 51267,
1014
+ "<loc_999>": 51268,
1015
+ "<loc_99>": 50368,
1016
+ "<loc_9>": 50278,
1017
+ "<ncap>": 51271,
1018
+ "<ocr>": 50267,
1019
+ "<od>": 50265,
1020
+ "<poly>": 51286,
1021
+ "<proposal>": 51284,
1022
+ "<region_cap>": 51280,
1023
+ "<region_to_desciption>": 51282,
1024
+ "<seg>": 51277,
1025
+ "<sep>": 51279
1026
+ }
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
preprocessor_config.json ADDED
@@ -0,0 +1,33 @@
+ {
+ "auto_map": {
+ "AutoProcessor": "processing_florence2.Florence2Processor"
+ },
+ "crop_size": {
+ "height": 768,
+ "width": 768
+ },
+ "do_center_crop": false,
+ "do_convert_rgb": null,
+ "do_normalize": true,
+ "do_rescale": true,
+ "do_resize": true,
+ "image_mean": [
+ 0.485,
+ 0.456,
+ 0.406
+ ],
+ "image_processor_type": "CLIPImageProcessor",
+ "image_seq_length": 577,
+ "image_std": [
+ 0.229,
+ 0.224,
+ 0.225
+ ],
+ "processor_class": "Florence2Processor",
+ "resample": 3,
+ "rescale_factor": 0.00392156862745098,
+ "size": {
+ "height": 768,
+ "width": 768
+ }
+ }
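
These settings mirror standard CLIP-style preprocessing: a 768x768 bicubic resize (resample=3), 1/255 rescaling, and ImageNet normalization. A rough torchvision equivalent is sketched below for readers who want to preprocess images manually; this is an approximation only, since AutoProcessor applies these settings automatically:

```python
from torchvision import transforms

# Approximate manual equivalent of the CLIPImageProcessor settings above.
manual_transform = transforms.Compose([
    transforms.Resize((768, 768), interpolation=transforms.InterpolationMode.BICUBIC),  # resample=3 -> bicubic
    transforms.ToTensor(),  # converts to float and rescales by 1/255 (rescale_factor above)
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
```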
processing_florence2.py ADDED
@@ -0,0 +1,1148 @@
1
+ # coding=utf-8
2
+ # Copyright 2024 Microsoft and The HuggingFace Inc. team.
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+ """
16
+ Processor class for Florence-2.
17
+ """
18
+
19
+ import re
20
+ import logging
21
+ from typing import List, Optional, Union
22
+ import numpy as np
23
+ import math
24
+
25
+ import torch
26
+
27
+ from transformers.feature_extraction_utils import BatchFeature
28
+ from transformers.image_utils import ImageInput, is_valid_image
29
+ from transformers.processing_utils import ProcessorMixin
30
+ from transformers.tokenization_utils_base import (
31
+ PaddingStrategy,
32
+ PreTokenizedInput,
33
+ TextInput,
34
+ TruncationStrategy,
35
+ )
36
+ from transformers import BartTokenizer, BartTokenizerFast
37
+ from transformers.utils import TensorType
38
+
39
+
40
+ logger = logging.getLogger(__name__)
41
+
42
+ # Copied from transformers.models.idefics2.processing_idefics2.is_url
43
+ def is_url(val) -> bool:
44
+ return isinstance(val, str) and val.startswith("http")
45
+
46
+ # Copied from transformers.models.idefics2.processing_idefics2.is_image_or_image_url
47
+ def is_image_or_image_url(elem):
48
+ return is_url(elem) or is_valid_image(elem)
49
+
50
+
51
+ def _is_str_or_image(elem):
52
+ return isinstance(elem, (str)) or is_image_or_image_url(elem)
53
+
54
+
55
+ class Florence2Processor(ProcessorMixin):
56
+ r"""
57
+ Constructs a Florence2 processor which wraps a Florence2 image processor and a Florence2 tokenizer into a single processor.
58
+
59
+ [`Florence2Processor`] offers all the functionalities of [`CLIPImageProcessor`] and [`BartTokenizerFast`]. See the
60
+ [`~Florence2Processor.__call__`] and [`~Florence2Processor.decode`] for more information.
61
+
62
+ Args:
63
+ image_processor ([`CLIPImageProcessor`], *optional*):
64
+ The image processor is a required input.
65
+ tokenizer ([`BartTokenizerFast`], *optional*):
66
+ The tokenizer is a required input.
67
+ """
68
+
69
+ attributes = ["image_processor", "tokenizer"]
70
+ image_processor_class = "CLIPImageProcessor"
71
+ tokenizer_class = ("BartTokenizer", "BartTokenizerFast")
72
+
73
+ def __init__(
74
+ self,
75
+ image_processor=None,
76
+ tokenizer=None,
77
+ ):
78
+ if image_processor is None:
79
+ raise ValueError("You need to specify an `image_processor`.")
80
+ if tokenizer is None:
81
+ raise ValueError("You need to specify a `tokenizer`.")
82
+ if not hasattr(image_processor, "image_seq_length"):
83
+ raise ValueError("Image processor is missing an `image_seq_length` attribute.")
84
+
85
+ self.image_seq_length = image_processor.image_seq_length
86
+
87
+ tokens_to_add = {
88
+ 'additional_special_tokens': \
89
+ tokenizer.additional_special_tokens + \
90
+ ['<od>', '</od>', '<ocr>', '</ocr>'] + \
91
+ [f'<loc_{x}>' for x in range(1000)] + \
92
+ ['<cap>', '</cap>', '<ncap>', '</ncap>','<dcap>', '</dcap>', '<grounding>', '</grounding>', '<seg>', '</seg>', '<sep>', '<region_cap>', '</region_cap>', '<region_to_desciption>', '</region_to_desciption>', '<proposal>', '</proposal>', '<poly>', '</poly>', '<and>']
93
+ }
94
+ tokenizer.add_special_tokens(tokens_to_add)
95
+
96
+ self.tasks_answer_post_processing_type = {
97
+ '<OCR>': 'pure_text',
98
+ '<OCR_WITH_REGION>': 'ocr',
99
+ '<CAPTION>': 'pure_text',
100
+ '<DETAILED_CAPTION>': 'pure_text',
101
+ '<MORE_DETAILED_CAPTION>': 'pure_text',
102
+ '<OD>': 'description_with_bboxes',
103
+ '<DENSE_REGION_CAPTION>': 'description_with_bboxes',
104
+ '<CAPTION_TO_PHRASE_GROUNDING>': "phrase_grounding",
105
+ '<REFERRING_EXPRESSION_SEGMENTATION>': 'polygons',
106
+ '<REGION_TO_SEGMENTATION>': 'polygons',
107
+ '<OPEN_VOCABULARY_DETECTION>': 'description_with_bboxes_or_polygons',
108
+ '<REGION_TO_CATEGORY>': 'pure_text',
109
+ '<REGION_TO_DESCRIPTION>': 'pure_text',
110
+ '<REGION_TO_OCR>': 'pure_text',
111
+ '<REGION_PROPOSAL>': 'bboxes'
112
+ }
113
+
114
+ self.task_prompts_without_inputs = {
115
+ '<OCR>': 'What is the text in the image?',
116
+ '<OCR_WITH_REGION>': 'What is the text in the image, with regions?',
117
+ '<CAPTION>': 'What does the image describe?',
118
+ '<DETAILED_CAPTION>': 'Describe in detail what is shown in the image.',
119
+ '<MORE_DETAILED_CAPTION>': 'Describe with a paragraph what is shown in the image.',
120
+ '<OD>': 'Locate the objects with category name in the image.',
121
+ '<DENSE_REGION_CAPTION>': 'Locate the objects in the image, with their descriptions.',
122
+ '<REGION_PROPOSAL>': 'Locate the region proposals in the image.'
123
+ }
124
+
125
+ self.task_prompts_with_input = {
126
+ '<CAPTION_TO_PHRASE_GROUNDING>': "Locate the phrases in the caption: {input}",
127
+ '<REFERRING_EXPRESSION_SEGMENTATION>': 'Locate {input} in the image with mask',
128
+ '<REGION_TO_SEGMENTATION>': 'What is the polygon mask of region {input}',
129
+ '<OPEN_VOCABULARY_DETECTION>': 'Locate {input} in the image.',
130
+ '<REGION_TO_CATEGORY>': 'What is the region {input}?',
131
+ '<REGION_TO_DESCRIPTION>': 'What does the region {input} describe?',
132
+ '<REGION_TO_OCR>': 'What text is in the region {input}?',
133
+ }
134
+
135
+ self.post_processor = Florence2PostProcesser(tokenizer=tokenizer)
136
+
137
+
138
+ super().__init__(image_processor, tokenizer)
139
+
140
+ def _construct_prompts(self, text):
141
+ # replace the task tokens with the task prompts if task token is in the text
142
+ prompts = []
143
+ for _text in text:
144
+ # 1. fixed task prompts without additional inputs
145
+ for task_token, task_prompt in self.task_prompts_without_inputs.items():
146
+ if task_token in _text:
147
+ assert _text == task_token, f"Task token {task_token} should be the only token in the text."
148
+ _text = task_prompt
149
+ break
150
+ # 2. task prompts with additional inputs
151
+ for task_token, task_prompt in self.task_prompts_with_input.items():
152
+ if task_token in _text:
153
+ _text = task_prompt.format(input=_text.replace(task_token, ''))
154
+ break
155
+ prompts.append(_text)
156
+ return prompts
157
+
158
+ def __call__(
159
+ self,
160
+ text: Union[TextInput, PreTokenizedInput, List[TextInput], List[PreTokenizedInput]] = None,
161
+ images: ImageInput = None,
162
+ tokenize_newline_separately: bool = True,
163
+ padding: Union[bool, str, PaddingStrategy] = False,
164
+ truncation: Union[bool, str, TruncationStrategy] = None,
165
+ max_length=None,
166
+ return_tensors: Optional[Union[str, TensorType]] = TensorType.PYTORCH,
167
+ do_resize: bool = None,
168
+ do_normalize: bool = None,
169
+ image_mean: Optional[Union[float, List[float]]] = None,
170
+ image_std: Optional[Union[float, List[float]]] = None,
171
+ data_format: Optional["ChannelDimension"] = "channels_first", # noqa: F821
172
+ input_data_format: Optional[
173
+ Union[str, "ChannelDimension"] # noqa: F821
174
+ ] = None,
175
+ resample: "PILImageResampling" = None, # noqa: F821
176
+ do_convert_rgb: bool = None,
177
+ do_thumbnail: bool = None,
178
+ do_align_long_axis: bool = None,
179
+ do_rescale: bool = None,
180
+ ) -> BatchFeature:
181
+ """
182
+ Main method to prepare for the model one or several sequences(s) and image(s). This method forwards the `text`
183
+ and `kwargs` arguments to BartTokenizerFast's [`~BartTokenizerFast.__call__`] if `text` is not `None` to encode
184
+ the text. To prepare the image(s), this method forwards the `images` and `kwrags` arguments to
185
+ CLIPImageProcessor's [`~CLIPImageProcessor.__call__`] if `images` is not `None`. Please refer to the doctsring
186
+ of the above two methods for more information.
187
+
188
+ Args:
189
+ text (`str`, `List[str]`, `List[List[str]]`):
190
+ The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings
191
+ (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set
192
+ `is_split_into_words=True` (to lift the ambiguity with a batch of sequences).
193
+ images (`PIL.Image.Image`, `np.ndarray`, `torch.Tensor`, `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[torch.Tensor]`):
194
+ The image or batch of images to be prepared. Each image can be a PIL image, NumPy array or PyTorch
195
+ tensor. In case of a NumPy array/PyTorch tensor, each image should be of shape (C, H, W), where C is a
196
+ number of channels, H and W are image height and width.
197
+ tokenize_newline_separately (`bool`, defaults to `True`):
198
+ Adds a separately tokenized '\n' at the end of the prompt.
199
+ padding (`bool`, `str` or [`~utils.PaddingStrategy`], *optional*, defaults to `False`):
200
+ Select a strategy to pad the returned sequences (according to the model's padding side and padding
201
+ index) among:
202
+ - `True` or `'longest'`: Pad to the longest sequence in the batch (or no padding if only a single
203
+ sequence is provided).
204
+ - `'max_length'`: Pad to a maximum length specified with the argument `max_length` or to the maximum
205
+ acceptable input length for the model if that argument is not provided.
206
+ - `False` or `'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of different
207
+ lengths).
208
+ max_length (`int`, *optional*):
209
+ Maximum length of the returned list and optionally padding length (see above).
210
+ truncation (`bool`, *optional*):
211
+ Activates truncation to cut input sequences longer than `max_length` to `max_length`.
212
+ return_tensors (`str` or [`~utils.TensorType`], *optional*):
213
+ If set, will return tensors of a particular framework. Acceptable values are:
214
+
215
+ - `'tf'`: Return TensorFlow `tf.constant` objects.
216
+ - `'pt'`: Return PyTorch `torch.Tensor` objects.
217
+ - `'np'`: Return NumPy `np.ndarray` objects.
218
+ - `'jax'`: Return JAX `jnp.ndarray` objects.
219
+
220
+ Returns:
221
+ [`BatchFeature`]: A [`BatchFeature`] with the following fields:
222
+
223
+ - **input_ids** -- List of token ids to be fed to a model. Returned when `text` is not `None`. If `suffix`
224
+ is provided, the `input_ids` will also contain the suffix input ids.
225
+ - **attention_mask** -- List of indices specifying which tokens should be attended to by the model (when
226
+ `return_attention_mask=True` or if *"attention_mask"* is in `self.model_input_names` and if `text` is not
227
+ `None`).
228
+ - **pixel_values** -- Pixel values to be fed to a model. Returned when `images` is not `None`.
229
+ - **labels** -- Labels compatible with training if `suffix` is not None
230
+ """
231
+
232
+ return_token_type_ids = False
233
+
234
+ if images is None:
235
+ raise ValueError("`images` are expected as arguments to a `Florence2Processor` instance.")
236
+ if text is None:
237
+ logger.warning_once(
238
+ "You are using Florence-2 without a text prompt."
239
+ )
240
+ text = ""
241
+
242
+ if isinstance(text, List) and isinstance(images, List):
243
+ if len(images) < len(text):
244
+ raise ValueError(
245
+ f"Received {len(images)} images for {len(text)} prompts. Each prompt should be associated with an image."
246
+ )
247
+ if _is_str_or_image(text):
248
+ text = [text]
249
+ elif isinstance(text, list) and _is_str_or_image(text[0]):
250
+ pass
251
+
252
+ pixel_values = self.image_processor(
253
+ images,
254
+ do_resize=do_resize,
255
+ do_normalize=do_normalize,
256
+ return_tensors=return_tensors,
257
+ image_mean=image_mean,
258
+ image_std=image_std,
259
+ input_data_format=input_data_format,
260
+ data_format=data_format,
261
+ resample=resample,
262
+ do_convert_rgb=do_convert_rgb,
263
+ )["pixel_values"]
264
+
265
+ if max_length is not None:
266
+ max_length -= self.image_seq_length # max_length has to account for the image tokens
267
+
268
+ text = self._construct_prompts(text)
269
+
270
+ inputs = self.tokenizer(
271
+ text,
272
+ return_tensors=return_tensors,
273
+ padding=padding,
274
+ max_length=max_length,
275
+ truncation=truncation,
276
+ return_token_type_ids=return_token_type_ids,
277
+ )
278
+
279
+ return_data = {**inputs, "pixel_values": pixel_values}
280
+
281
+ if return_token_type_ids:
282
+ labels = inputs["input_ids"].masked_fill(inputs["token_type_ids"] == 0, -100)
283
+ return_data.update({"labels": labels})
284
+ return BatchFeature(data=return_data)
285
+
286
+ # Copied from transformers.models.clip.processing_clip.CLIPProcessor.batch_decode with CLIP->Florence2
287
+ def batch_decode(self, *args, **kwargs):
288
+ """
289
+ This method forwards all its arguments to BartTokenizerFast's [`~PreTrainedTokenizer.batch_decode`]. Please
290
+ refer to the docstring of this method for more information.
291
+ """
292
+ return self.tokenizer.batch_decode(*args, **kwargs)
293
+
294
+ # Copied from transformers.models.clip.processing_clip.CLIPProcessor.decode with CLIP->Florence2
295
+ def decode(self, *args, **kwargs):
296
+ """
297
+ This method forwards all its arguments to BartTokenizerFast's [`~PreTrainedTokenizer.decode`]. Please refer to
298
+ the docstring of this method for more information.
299
+ """
300
+ return self.tokenizer.decode(*args, **kwargs)
301
+
302
+ @property
303
+ # Copied from transformers.models.clip.processing_clip.CLIPProcessor.model_input_names with CLIP->Florence2
304
+ def model_input_names(self):
305
+ tokenizer_input_names = self.tokenizer.model_input_names
306
+ image_processor_input_names = self.image_processor.model_input_names
307
+ return list(dict.fromkeys(tokenizer_input_names + image_processor_input_names))
308
+
309
+ def post_process_generation(self, text=None, sequence=None, transition_beam_score=None, task=None, image_size=None):
310
+ """
311
+ Post-process the output of the model to each of the task outputs.
312
+
313
+ Args:
314
+ text (`str`): The text to post-process.
315
+ task (`str`): The task to post-process the text for.
316
+ image_size (`Tuple[int, int]`): The size of the image. height x width.
317
+ """
318
+
319
+ task_answer_post_processing_type = self.tasks_answer_post_processing_type.get(task, 'pure_text')
320
+ task_answer = self.post_processor(
321
+ text=text,
322
+ sequence=sequence,
323
+ transition_beam_score=transition_beam_score,
324
+ image_size=image_size,
325
+ parse_tasks=task_answer_post_processing_type,
326
+ )[task_answer_post_processing_type]
327
+
328
+ if task_answer_post_processing_type == 'pure_text':
329
+ final_answer = task_answer
330
+ # remove the special tokens
331
+ final_answer = final_answer.replace('<s>', '').replace('</s>', '')
332
+ elif task_answer_post_processing_type in ['od', 'description_with_bboxes', 'bboxes']:
333
+ od_instances = task_answer
334
+ bboxes_od = [_od_instance['bbox'] for _od_instance in od_instances]
335
+ labels_od = [str(_od_instance['cat_name']) for _od_instance in od_instances]
336
+ final_answer = {'bboxes': bboxes_od, 'labels': labels_od}
337
+ if len(od_instances) and 'score' in od_instances[0]:
338
+ scores_od = [_od_instance['score'] for _od_instance in od_instances]
339
+ final_answer['scores'] = scores_od
340
+ elif task_answer_post_processing_type in ['ocr']:
341
+ bboxes = [_od_instance['quad_box'] for _od_instance in task_answer]
342
+ labels = [str(_od_instance['text']) for _od_instance in task_answer]
343
+ final_answer = {'quad_boxes': bboxes, 'labels': labels}
344
+ elif task_answer_post_processing_type in ['phrase_grounding']:
345
+ bboxes = []
346
+ labels = []
347
+ for _grounded_phrase in task_answer:
348
+ for _bbox in _grounded_phrase['bbox']:
349
+ bboxes.append(_bbox)
350
+ labels.append(_grounded_phrase['cat_name'])
351
+ final_answer = {'bboxes': bboxes, 'labels': labels}
352
+ elif task_answer_post_processing_type in ['description_with_polygons', 'polygons']:
353
+ labels = []
354
+ polygons = []
355
+ for result in task_answer:
356
+ label = result['cat_name']
357
+ _polygons = result['polygons']
358
+ labels.append(label)
359
+ polygons.append(_polygons)
360
+ final_answer = {'polygons': polygons, 'labels': labels}
361
+ elif task_answer_post_processing_type in ['description_with_bboxes_or_polygons']:
362
+ bboxes = []
363
+ bboxes_labels = []
364
+ polygons = []
365
+ polygons_labels = []
366
+ for result in task_answer:
367
+ label = result['cat_name']
368
+ if 'polygons' in result:
369
+ _polygons = result['polygons']
370
+ polygons.append(_polygons)
371
+ polygons_labels.append(label)
372
+ else:
373
+ _bbox = result['bbox']
374
+ bboxes.append(_bbox)
375
+ bboxes_labels.append(label)
376
+ final_answer = {'bboxes': bboxes, 'bboxes_labels': bboxes_labels, 'polygons': polygons, 'polygons_labels': polygons_labels}
377
+ else:
378
+ raise ValueError('Unknown task answer post processing type: {}'.format(task_answer_post_processing_type))
379
+
380
+ final_answer = {
381
+ task: final_answer}
382
+ return final_answer
383
+
384
+ class BoxQuantizer(object):
385
+ def __init__(self, mode, bins):
386
+ self.mode = mode
387
+ self.bins = bins
388
+
389
+ def quantize(self, boxes: torch.Tensor, size):
390
+ bins_w, bins_h = self.bins # Quantization bins.
391
+ size_w, size_h = size # Original image size.
392
+ size_per_bin_w = size_w / bins_w
393
+ size_per_bin_h = size_h / bins_h
394
+ xmin, ymin, xmax, ymax = boxes.split(1, dim=-1) # Shape: 4 * [N, 1].
395
+
396
+ if self.mode == 'floor':
397
+ quantized_xmin = (
398
+ xmin / size_per_bin_w).floor().clamp(0, bins_w - 1)
399
+ quantized_ymin = (
400
+ ymin / size_per_bin_h).floor().clamp(0, bins_h - 1)
401
+ quantized_xmax = (
402
+ xmax / size_per_bin_w).floor().clamp(0, bins_w - 1)
403
+ quantized_ymax = (
404
+ ymax / size_per_bin_h).floor().clamp(0, bins_h - 1)
405
+
406
+ elif self.mode == 'round':
407
+ raise NotImplementedError()
408
+
409
+ else:
410
+ raise ValueError('Incorrect quantization type.')
411
+
412
+ quantized_boxes = torch.cat(
413
+ (quantized_xmin, quantized_ymin, quantized_xmax, quantized_ymax), dim=-1
414
+ ).int()
415
+
416
+ return quantized_boxes
417
+
418
+ def dequantize(self, boxes: torch.Tensor, size):
419
+ bins_w, bins_h = self.bins # Quantization bins.
420
+ size_w, size_h = size # Original image size.
421
+ size_per_bin_w = size_w / bins_w
422
+ size_per_bin_h = size_h / bins_h
423
+ xmin, ymin, xmax, ymax = boxes.split(1, dim=-1) # Shape: 4 * [N, 1].
424
+
425
+ if self.mode == 'floor':
426
+ # Add 0.5 to use the center position of the bin as the coordinate.
427
+ dequantized_xmin = (xmin + 0.5) * size_per_bin_w
428
+ dequantized_ymin = (ymin + 0.5) * size_per_bin_h
429
+ dequantized_xmax = (xmax + 0.5) * size_per_bin_w
430
+ dequantized_ymax = (ymax + 0.5) * size_per_bin_h
431
+
432
+ elif self.mode == 'round':
433
+ raise NotImplementedError()
434
+
435
+ else:
436
+ raise ValueError('Incorrect quantization type.')
437
+
438
+ dequantized_boxes = torch.cat(
439
+ (dequantized_xmin, dequantized_ymin,
440
+ dequantized_xmax, dequantized_ymax), dim=-1
441
+ )
442
+
443
+ return dequantized_boxes
444
+
445
+
446
+ class CoordinatesQuantizer(object):
447
+ """
448
+ Quantize coordinates (Nx2)
449
+ """
450
+
451
+ def __init__(self, mode, bins):
452
+ self.mode = mode
453
+ self.bins = bins
454
+
455
+ def quantize(self, coordinates: torch.Tensor, size):
456
+ bins_w, bins_h = self.bins # Quantization bins.
457
+ size_w, size_h = size # Original image size.
458
+ size_per_bin_w = size_w / bins_w
459
+ size_per_bin_h = size_h / bins_h
460
+ assert coordinates.shape[-1] == 2, 'coordinates should be shape (N, 2)'
461
+ x, y = coordinates.split(1, dim=-1) # Shape: 4 * [N, 1].
462
+
463
+ if self.mode == 'floor':
464
+ quantized_x = (x / size_per_bin_w).floor().clamp(0, bins_w - 1)
465
+ quantized_y = (y / size_per_bin_h).floor().clamp(0, bins_h - 1)
466
+
467
+ elif self.mode == 'round':
468
+ raise NotImplementedError()
469
+
470
+ else:
471
+ raise ValueError('Incorrect quantization type.')
472
+
473
+ quantized_coordinates = torch.cat(
474
+ (quantized_x, quantized_y), dim=-1
475
+ ).int()
476
+
477
+ return quantized_coordinates
478
+
479
+ def dequantize(self, coordinates: torch.Tensor, size):
480
+ bins_w, bins_h = self.bins # Quantization bins.
481
+ size_w, size_h = size # Original image size.
482
+ size_per_bin_w = size_w / bins_w
483
+ size_per_bin_h = size_h / bins_h
484
+ assert coordinates.shape[-1] == 2, 'coordinates should be shape (N, 2)'
485
+ x, y = coordinates.split(1, dim=-1) # Shape: 4 * [N, 1].
486
+
487
+ if self.mode == 'floor':
488
+ # Add 0.5 to use the center position of the bin as the coordinate.
489
+ dequantized_x = (x + 0.5) * size_per_bin_w
490
+ dequantized_y = (y + 0.5) * size_per_bin_h
491
+
492
+ elif self.mode == 'round':
493
+ raise NotImplementedError()
494
+
495
+ else:
496
+ raise ValueError('Incorrect quantization type.')
497
+
498
+ dequantized_coordinates = torch.cat(
499
+ (dequantized_x, dequantized_y), dim=-1
500
+ )
501
+
502
+ return dequantized_coordinates
503
+
504
+
505
+ class Florence2PostProcesser(object):
506
+ r"""
507
+ Florence-2 post processor for converting text predictions to various task results.
508
+
509
+ Args:
510
+ config: A dict of configs.
511
+ tokenizer: A tokenizer for decoding text to spans.
512
+ sample config:
513
+ UNIFIED_POST_PROCESS:
514
+ # common configs
515
+ NUM_BBOX_HEIGHT_BINS: 1000
516
+ NUM_BBOX_WIDTH_BINS: 1000
517
+ COORDINATES_HEIGHT_BINS: 1000
518
+ COORDINATES_WIDTH_BINS: 1000
519
+ # task specific configs, override the common configs
520
+ PARSE_TASKS:
521
+ - TASK_NAME: 'video_dense_caption'
522
+ PATTERN: 'r<time_(\d+)><time_(\d+)>([a-zA-Z0-9 ]+)'
523
+ SCORE_MODE: 'avg_cat_name_scores'
524
+ NUM_BINS: 100
525
+ - TASK_NAME: 'od'
526
+ PATTERN: 'r<loc_(\d+)><loc_(\d+)><loc_(\d+)><loc_(\d+)>([a-zA-Z0-9 ]+)'
527
+ SCORE_MODE: 'avg_cat_name_scores'
528
+
529
+ Returns:
530
+ parsed_dict (dict): A dict of parsed results.
531
+ """
532
+ def __init__(
533
+ self,
534
+ tokenizer=None
535
+ ):
536
+ parse_tasks = []
537
+ parse_task_configs = {}
538
+ config = self._create_default_config()
539
+ for task in config['PARSE_TASKS']:
540
+ parse_tasks.append(task['TASK_NAME'])
541
+ parse_task_configs[task['TASK_NAME']] = task
542
+
543
+ self.config = config
544
+ self.parse_tasks = parse_tasks
545
+ self.parse_tasks_configs = parse_task_configs
546
+
547
+ self.tokenizer = tokenizer
548
+ if self.tokenizer is not None:
549
+ self.all_special_tokens = set(self.tokenizer.all_special_tokens)
550
+
551
+ self.init_quantizers()
552
+ self.black_list_of_phrase_grounding = self._create_black_list_of_phrase_grounding()
553
+
554
+ def _create_black_list_of_phrase_grounding(self):
555
+ black_list = {}
556
+
557
+ if 'phrase_grounding' in self.parse_tasks and self.parse_tasks_configs['phrase_grounding']['FILTER_BY_BLACK_LIST']:
558
+ black_list = set(
559
+ ['it', 'I', 'me', 'mine',
560
+ 'you', 'your', 'yours',
561
+ 'he', 'him', 'his',
562
+ 'she', 'her', 'hers',
563
+ 'they', 'them', 'their', 'theirs',
564
+ 'one', 'oneself',
565
+ 'we', 'us', 'our', 'ours',
566
+ 'you', 'your', 'yours',
567
+ 'they', 'them', 'their', 'theirs',
568
+ 'mine', 'yours', 'his', 'hers', 'its',
569
+ 'ours', 'yours', 'theirs',
570
+ 'myself', 'yourself', 'himself', 'herself', 'itself',
571
+ 'ourselves', 'yourselves', 'themselves',
572
+ 'this', 'that',
573
+ 'these', 'those',
574
+ 'who', 'whom', 'whose', 'which', 'what',
575
+ 'who', 'whom', 'whose', 'which', 'that',
576
+ 'all', 'another', 'any', 'anybody', 'anyone', 'anything',
577
+ 'each', 'everybody', 'everyone', 'everything',
578
+ 'few', 'many', 'nobody', 'none', 'one', 'several',
579
+ 'some', 'somebody', 'someone', 'something',
580
+ 'each other', 'one another',
581
+ 'myself', 'yourself', 'himself', 'herself', 'itself',
582
+ 'ourselves', 'yourselves', 'themselves',
583
+ 'the image', 'image', 'images', 'the', 'a', 'an', 'a group',
584
+ 'other objects', 'lots', 'a set',
585
+ ]
586
+ )
587
+
588
+ return black_list
589
+
590
+ def _create_default_config(self):
591
+ config = {
592
+ 'NUM_BBOX_HEIGHT_BINS': 1000,
593
+ 'NUM_BBOX_WIDTH_BINS': 1000,
594
+ 'BOX_QUANTIZATION_MODE': 'floor',
595
+ 'COORDINATES_HEIGHT_BINS': 1000,
596
+ 'COORDINATES_WIDTH_BINS': 1000,
597
+ 'COORDINATES_QUANTIZATION_MODE': 'floor',
598
+ 'PARSE_TASKS': [
599
+ {
600
+ 'TASK_NAME': 'od',
601
+ 'PATTERN': r'([a-zA-Z0-9 ]+)<loc_(\d+)><loc_(\d+)><loc_(\d+)><loc_(\d+)>',
602
+ 'SCORE_MODE': 'avg_loc_scores'
603
+ },
604
+ {
605
+ 'TASK_NAME': 'ocr',
606
+ 'PATTERN': r'(.+?)<loc_(\d+)><loc_(\d+)><loc_(\d+)><loc_(\d+)><loc_(\d+)><loc_(\d+)><loc_(\d+)><loc_(\d+)>',
607
+ 'AREA_THRESHOLD': 0.00
608
+ },
609
+ {
610
+ 'TASK_NAME': 'phrase_grounding',
611
+ 'FILTER_BY_BLACK_LIST': True
612
+ },
613
+ {
614
+ 'TASK_NAME': 'pure_text',
615
+ },
616
+ {
617
+ 'TASK_NAME': 'description_with_bboxes',
618
+ 'SCORE_MODE': 'avg_loc_scores'
619
+ },
620
+ {
621
+ 'TASK_NAME': 'description_with_polygons',
622
+ },
623
+ {
624
+ 'TASK_NAME': 'polygons',
625
+ },
626
+ {
627
+ 'TASK_NAME': 'bboxes',
628
+ },
629
+ {
630
+ 'TASK_NAME': 'description_with_bboxes_or_polygons',
631
+ }
632
+ ]
633
+ }
634
+
635
+ return config
636
+
637
+ def init_quantizers(self):
638
+ # we have box_quantizer (od, grounding) and coordinates_quantizer (ocr, referring_segmentation)
639
+ num_bbox_height_bins = self.config.get('NUM_BBOX_HEIGHT_BINS', 1000)
640
+ num_bbox_width_bins = self.config.get('NUM_BBOX_WIDTH_BINS', 1000)
641
+ box_quantization_mode = self.config.get('BOX_QUANTIZATION_MODE', 'floor')
642
+ self.box_quantizer = BoxQuantizer(
643
+ box_quantization_mode,
644
+ (num_bbox_width_bins, num_bbox_height_bins),
645
+ )
646
+
647
+ num_bbox_height_bins = self.config['COORDINATES_HEIGHT_BINS'] if 'COORDINATES_HEIGHT_BINS' in self.config else self.config.get('NUM_BBOX_HEIGHT_BINS', 1000)
648
+ num_bbox_width_bins = self.config['COORDINATES_WIDTH_BINS'] if 'COORDINATES_WIDTH_BINS' in self.config else self.config.get('NUM_BBOX_WIDTH_BINS', 1000)
649
+ box_quantization_mode = self.config.get('COORDINATES_QUANTIZATION_MODE') if 'COORDINATES_QUANTIZATION_MODE' in self.config else self.config.get('BOX_QUANTIZATION_MODE', 'floor')
650
+ self.coordinates_quantizer = CoordinatesQuantizer(
651
+ box_quantization_mode,
652
+ (num_bbox_width_bins, num_bbox_height_bins),
653
+ )
654
+
655
+ def decode_with_spans(self, tokenizer, token_ids):
656
+ filtered_tokens = tokenizer.convert_ids_to_tokens(
657
+ token_ids, skip_special_tokens=False)
658
+ assert len(filtered_tokens) == len(token_ids)
659
+
660
+ sub_texts = []
661
+ for token in filtered_tokens:
662
+ if token in self.all_special_tokens:
663
+ sub_texts.append(token)
664
+ else:
665
+ if isinstance(tokenizer, (BartTokenizer, BartTokenizerFast)):
666
+ sub_text = tokenizer.convert_tokens_to_string([token])
667
+ else:
668
+ raise ValueError(f'type {type(tokenizer)} not supported')
669
+ sub_texts.append(sub_text)
670
+
671
+ text = ''
672
+ spans = []
673
+ for sub_text in sub_texts:
674
+ span = (len(text), len(text) + len(sub_text)) # [start index, end index).
675
+ text += sub_text
676
+ spans.append(span)
677
+
678
+ return text, spans
679
+
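# --- Editorial sketch (not part of the uploaded file): the span bookkeeping above, shown
# with plain strings instead of a tokenizer. Each token contributes one [start, end)
# character span into the concatenated text; overlapping a character range with these
# spans is how token-level scores are matched back to parsed phrases and boxes below.
sub_texts = ['<s>', 'A', ' cat', '<loc_5>', '</s>']
text, spans = '', []
for sub in sub_texts:
    spans.append((len(text), len(text) + len(sub)))  # span of this token in the final text
    text += sub
# spans[3] now covers the '<loc_5>' substring of text.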
680
+ def parse_od_from_text_and_spans(
681
+ self,
682
+ text,
683
+ pattern,
684
+ image_size,
685
+ phrase_centric=False
686
+ ):
687
+ parsed = list(re.finditer(pattern, text))
688
+
689
+ instances = []
690
+ for i in range(len(parsed)):
691
+ # Prepare instance.
692
+ instance = {}
693
+
694
+ if phrase_centric:
695
+ bbox_bins = [int(parsed[i].group(j)) for j in range(2, 6)]
696
+ else:
697
+ bbox_bins = [int(parsed[i].group(j)) for j in range(1, 5)]
698
+ instance['bbox'] = self.box_quantizer.dequantize(
699
+ boxes=torch.tensor(bbox_bins),
700
+ size=image_size
701
+ ).tolist()
702
+
703
+ if phrase_centric:
704
+ instance['cat_name'] = parsed[i].group(1).lower().strip()
705
+ else:
706
+ instance['cat_name'] = parsed[i].group(5).lower().strip()
707
+ instances.append(instance)
708
+
709
+ return instances
710
+
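# --- Editorial sketch (not part of the uploaded file): the loc-token format this parser
# consumes, using a hypothetical generated string and an explicit pattern with the
# locs-first grouping (bins in groups 1-4, category name in group 5) assumed here.
import re
import torch

sample = '<loc_52><loc_333><loc_932><loc_774>car'
sample_pattern = r'<loc_(\d+)><loc_(\d+)><loc_(\d+)><loc_(\d+)>([a-zA-Z0-9 ]+)'
m = re.search(sample_pattern, sample)
bins = [int(m.group(j)) for j in range(1, 5)]
box = BoxQuantizer('floor', (1000, 1000)).dequantize(
    boxes=torch.tensor(bins), size=(640, 480)   # (width, height) of the original image
).tolist()
# m.group(5) is 'car'; box is an [x1, y1, x2, y2] list in pixel coordinates.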
711
+ def parse_ocr_from_text_and_spans(self,
712
+ text,
713
+ pattern,
714
+ image_size,
715
+ area_threshold=-1.0,
716
+ ):
717
+ bboxes = []
718
+ labels = []
719
+ text = text.replace('<s>', '')
720
+ # ocr with regions
721
+ parsed = re.findall(pattern, text)
722
+ instances = []
723
+ image_width, image_height = image_size
724
+
725
+ for ocr_line in parsed:
726
+ ocr_content = ocr_line[0]
727
+ quad_box = ocr_line[1:]
728
+ quad_box = [int(i) for i in quad_box]
729
+ quad_box = self.coordinates_quantizer.dequantize(
730
+ torch.tensor(np.array(quad_box).reshape(-1, 2)),
731
+ size=image_size
732
+ ).reshape(-1).tolist()
733
+
734
+ if area_threshold > 0:
735
+ x_coords = [i for i in quad_box[0::2]]
736
+ y_coords = [i for i in quad_box[1::2]]
737
+
738
+ # apply the Shoelace formula (note: the wrap-around term between the last and first vertex is omitted here)
739
+ area = 0.5 * abs(sum(x_coords[i] * y_coords[i + 1] - x_coords[i + 1] * y_coords[i] for i in range(4 - 1)))
740
+
741
+ if area < (image_width * image_height) * area_threshold:
742
+ continue
743
+
744
+ bboxes.append(quad_box)
745
+ labels.append(ocr_content)
746
+ instances.append({
747
+ 'quad_box': quad_box,
748
+ 'text': ocr_content,
749
+ })
750
+ return instances
751
+
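# --- Editorial sketch (not part of the uploaded file): the full Shoelace formula for a
# quadrilateral, including the wrap-around term between the last and first vertex that
# the area filter above leaves out, so the filter's area is an approximation.
def quad_area(quad):
    # quad is a flat list [x0, y0, x1, y1, x2, y2, x3, y3] describing a closed polygon.
    xs, ys = quad[0::2], quad[1::2]
    n = len(xs)
    return 0.5 * abs(sum(xs[i] * ys[(i + 1) % n] - xs[(i + 1) % n] * ys[i] for i in range(n)))

assert quad_area([0, 0, 10, 0, 10, 20, 0, 20]) == 200.0  # axis-aligned 10x20 rectangle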
752
+ def parse_phrase_grounding_from_text_and_spans(self, text, pattern, image_size):
753
+ # ignore <s> </s> and <pad>
754
+ cur_span = 0
755
+ if text.startswith('<s>'):
756
+ cur_span += 3
757
+
758
+ text = text.replace('<s>', '')
759
+ text = text.replace('</s>', '')
760
+ text = text.replace('<pad>', '')
761
+
762
+ pattern = r"([^<]+(?:<loc_\d+>){4,})"
763
+ phrases = re.findall(pattern, text)
764
+
765
+ # pattern should be text pattern and od pattern
766
+ pattern = r'^\s*(.*?)(?=<od>|</od>|<box>|</box>|<bbox>|</bbox>|<loc_)'
767
+ box_pattern = r'<loc_(\d+)><loc_(\d+)><loc_(\d+)><loc_(\d+)>'
768
+
769
+ instances = []
770
+ for pharse_text in phrases:
771
+ phrase_text_strip = pharse_text.replace('<ground>', '', 1)
772
+ phrase_text_strip = phrase_text_strip.replace('<obj>', '', 1)
773
+
774
+ if phrase_text_strip == '':
775
+ cur_span += len(pharse_text)
776
+ continue
777
+
778
+ # Prepare instance.
779
+ instance = {}
780
+
781
+ # parse phrase, get string
782
+ phrase = re.search(pattern, phrase_text_strip)
783
+ if phrase is None:
784
+ cur_span += len(pharse_text)
785
+ continue
786
+
787
+ # parse bboxes by box_pattern
788
+ bboxes_parsed = list(re.finditer(box_pattern, pharse_text))
789
+ if len(bboxes_parsed) == 0:
790
+ cur_span += len(pharse_text)
791
+ continue
792
+
793
+ phrase = phrase.group()
794
+ # remove leading and trailing spaces
795
+ phrase = phrase.strip()
796
+
797
+ if phrase in self.black_list_of_phrase_grounding:
798
+ cur_span += len(pharse_text)
799
+ continue
800
+
801
+ # a list of list
802
+ bbox_bins = [[int(_bboxes_parsed.group(j)) for j in range(1, 5)] for _bboxes_parsed in bboxes_parsed]
803
+ instance['bbox'] = self.box_quantizer.dequantize(
804
+ boxes=torch.tensor(bbox_bins),
805
+ size=image_size
806
+ ).tolist()
807
+
808
+ # exclude non-ascii characters
809
+ phrase = phrase.encode('ascii',errors='ignore').decode('ascii')
810
+ instance['cat_name'] = phrase
811
+
812
+ instances.append(instance)
813
+
814
+ return instances
815
+
816
+ def parse_description_with_bboxes_from_text_and_spans(
817
+ self,
818
+ text,
819
+ spans=None,
820
+ scores=None,
821
+ score_mode=None,
822
+ pattern=None,
823
+ image_size=None,
824
+ allow_empty_phrase=False
825
+ ):
826
+ def find_matched_token_indices(cur_span, token_spans):
827
+ inds = []
828
+ for i, token_span in enumerate(token_spans):
829
+ if not (token_span[1] <= cur_span[0] or token_span[0] >= cur_span[1]):
830
+ inds.append(i)
831
+ return inds
832
+
833
+ cur_span = 0
834
+ if text.startswith('<s>'):
835
+ cur_span += 3
836
+
837
+ text = text.replace('<s>', '')
838
+ text = text.replace('</s>', '')
839
+ text = text.replace('<pad>', '')
840
+
841
+ if allow_empty_phrase:
842
+ pattern = rf"(?:(?:<loc_\d+>){{4,}})"
843
+ else:
844
+ pattern = r"([^<]+(?:<loc_\d+>){4,})"
845
+ phrases = re.findall(pattern, text)
846
+
847
+ # pattern should be text pattern and od pattern
848
+ pattern = r'^\s*(.*?)(?=<od>|</od>|<box>|</box>|<bbox>|</bbox>|<loc_)'
849
+ box_pattern = r'<loc_(\d+)><loc_(\d+)><loc_(\d+)><loc_(\d+)>'
850
+
851
+ instances = []
852
+ for pharse_text in phrases:
853
+ phrase_text_strip = pharse_text.replace('<ground>', '', 1)
854
+ phrase_text_strip = phrase_text_strip.replace('<obj>', '', 1)
855
+
856
+ if phrase_text_strip == '' and not allow_empty_phrase:
857
+ cur_span += len(pharse_text)
858
+ continue
859
+
860
+ # parse phrase, get string
861
+ phrase = re.search(pattern, phrase_text_strip)
862
+ if phrase is None:
863
+ cur_span += len(pharse_text)
864
+ continue
865
+
866
+ phrase_span = phrase.span()
867
+ phrase = phrase.group()
868
+ # remove leading and trailing spaces
869
+ phrase = phrase.strip()
870
+
871
+ # parse bboxes by box_pattern
872
+ bboxes_parsed = list(re.finditer(box_pattern, pharse_text))
873
+ if len(bboxes_parsed) == 0:
874
+ cur_span += len(pharse_text)
875
+ continue
876
+
877
+ # a list of list
878
+ bbox_bins = [[int(_bboxes_parsed.group(j)) for j in range(1, 5)] for _bboxes_parsed in bboxes_parsed]
879
+
880
+ bboxes = self.box_quantizer.dequantize(
881
+ boxes=torch.tensor(bbox_bins),
882
+ size=image_size
883
+ ).tolist()
884
+
885
+ if score_mode == 'avg_loc_scores':
886
+ if spans is None or scores is None:
887
+ all_scores = None
888
+ else:
889
+ bbox_end_spans = [_bboxes_parsed.span(0) for _bboxes_parsed in bboxes_parsed]
890
+ all_scores = []
891
+ for _spans in bbox_end_spans:
892
+ token_inds = find_matched_token_indices((_spans[0] + cur_span, _spans[1]+ cur_span), spans)
893
+ loc_scores = [scores[token_i] for token_i in token_inds]
894
+ score = sum(loc_scores) / len(loc_scores)
895
+ all_scores.append(score)
896
+ elif score_mode == 'avg_cat_name_scores':
897
+ if spans is None or scores is None:
898
+ all_scores = None
899
+ else:
900
+ cat_name_token_inds = find_matched_token_indices((phrase_span[0] + cur_span, phrase_span[1]+cur_span), spans)
901
+ cat_name_scores = [scores[token_i] for token_i in cat_name_token_inds]
902
+ score = sum(cat_name_scores) / len(cat_name_scores)
903
+ all_scores = [score] * len(bboxes)
904
+ elif score_mode is None:
905
+ all_scores = None
906
+ else:
907
+ raise ValueError('Unknown score mode: {}'.format(score_mode))
908
+
909
+ phrase = phrase.encode('ascii',errors='ignore').decode('ascii')
910
+ for _idx, _bboxes in enumerate(bboxes):
911
+ # Prepare instance.
912
+ instance = {}
913
+ instance['bbox'] = _bboxes
914
+ # exclude non-ascii characters
915
+ instance['cat_name'] = phrase
916
+ if all_scores is not None:
917
+ instance['score'] = math.exp(all_scores[_idx])
918
+ instances.append(instance)
919
+
920
+ cur_span += len(pharse_text)
921
+
922
+ return instances
923
+
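# --- Editorial sketch (not part of the uploaded file), with made-up numbers: how the
# 'avg_loc_scores' mode above turns per-token transition log-scores into the 'score'
# stored on each instance: average the log-scores of the matched <loc_*> tokens, then
# exponentiate (the math.exp call above).
import math

loc_scores = [-0.05, -0.10, -0.02, -0.08]  # hypothetical log-probabilities of 4 loc tokens
score = math.exp(sum(loc_scores) / len(loc_scores))
# score is roughly 0.94 and becomes the confidence attached to the parsed box.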
924
+ def parse_description_with_polygons_from_text_and_spans(self, text, pattern, image_size,
925
+ allow_empty_phrase=False,
926
+ polygon_sep_token='<sep>',
927
+ polygon_start_token='<poly>',
928
+ polygon_end_token='</poly>',
929
+ with_box_at_start=False,
930
+ ):
931
+
932
+ # ref_seg format: '<expression><x1><y1><x2><y2><><><sep><><><><>'
933
+ # ignore <s> </s> and <pad>
934
+
935
+ text = text.replace('<s>', '')
936
+ text = text.replace('</s>', '')
937
+ text = text.replace('<pad>', '')
938
+
939
+ if allow_empty_phrase:
940
+ pattern = rf"(?:(?:<loc_\d+>|{re.escape(polygon_sep_token)}|{re.escape(polygon_start_token)}|{re.escape(polygon_end_token)}){{4,}})"
941
+ else:
942
+ # [^<]+: This part matches one or more characters that are not the < symbol.
943
+ # The ^ inside the square brackets [] is a negation, meaning it matches anything except <.
944
+ #
945
+ pattern = rf"([^<]+(?:<loc_\d+>|{re.escape(polygon_sep_token)}|{re.escape(polygon_start_token)}|{re.escape(polygon_end_token)}){{4,}})"
946
+ phrases = re.findall(pattern, text)
947
+
948
+ phrase_string_pattern = r'^\s*(.*?)(?=<od>|</od>|<box>|</box>|<bbox>|</bbox>|<loc_|<poly>)'
949
+ box_pattern = rf'((?:<loc_\d+>)+)(?:{re.escape(polygon_sep_token)}|$)'
950
+
951
+ # one polygons instance is separated by polygon_start_token and polygon_end_token
952
+ polygons_instance_pattern = rf'{re.escape(polygon_start_token)}(.*?){re.escape(polygon_end_token)}'
953
+
954
+ instances = []
955
+ for phrase_text in phrases:
956
+
957
+ # exclude loc_\d+>
958
+ # need to get span if want to include category score
959
+ phrase_text_strip = re.sub(r'^loc_\d+>', '', phrase_text, count=1)
960
+
961
+ # phrase = phrase.replace('<poly>', '')
962
+ # phrase = phrase.replace('poly>', '')
963
+
964
+ if phrase_text_strip == '' and not allow_empty_phrase:
965
+ continue
966
+
967
+
968
+ # parse phrase, get string
969
+ phrase = re.search(phrase_string_pattern, phrase_text_strip)
970
+ if phrase is None:
971
+ continue
972
+ phrase = phrase.group()
973
+ # remove leading and trailing spaces
974
+ phrase = phrase.strip()
975
+
976
+ # parse bboxes by box_pattern
977
+
978
+ # split by polygon_start_token and polygon_end_token first using polygons_instance_pattern
979
+ if polygon_start_token in phrase_text and polygon_end_token in phrase_text:
980
+ polygons_instances_parsed = list(re.finditer(polygons_instance_pattern, phrase_text))
981
+ else:
982
+ polygons_instances_parsed = [phrase_text]
983
+
984
+ for _polygons_instances_parsed in polygons_instances_parsed:
985
+ # Prepare instance.
986
+ instance = {}
987
+
988
+ # polygons_parsed= list(re.finditer(box_pattern, phrase_text))
989
+ if isinstance(_polygons_instances_parsed, str):
990
+ polygons_parsed= list(re.finditer(box_pattern, _polygons_instances_parsed))
991
+ else:
992
+ polygons_parsed= list(re.finditer(box_pattern, _polygons_instances_parsed.group(1)))
993
+ if len(polygons_parsed) == 0:
994
+ continue
995
+
996
+ # a list of list (polygon)
997
+ bbox = []
998
+ polygons = []
999
+ for _polygon_parsed in polygons_parsed:
1000
+ # group 1: whole <loc_\d+>...</loc_\d+>
1001
+ _polygon = _polygon_parsed.group(1)
1002
+ # parse into list of int
1003
+ _polygon = [int(_loc_parsed.group(1)) for _loc_parsed in re.finditer(r'<loc_(\d+)>', _polygon)]
1004
+ if with_box_at_start and len(bbox) == 0:
1005
+ if len(_polygon) > 4:
1006
+ # no valid bbox prediction
1007
+ bbox = _polygon[:4]
1008
+ _polygon = _polygon[4:]
1009
+ else:
1010
+ bbox = [0, 0, 0, 0]
1011
+ # abandon last element if is not paired
1012
+ if len(_polygon) % 2 == 1:
1013
+ _polygon = _polygon[:-1]
1014
+
1015
+ # reshape into (n, 2)
1016
+ _polygon = self.coordinates_quantizer.dequantize(
1017
+ torch.tensor(np.array(_polygon).reshape(-1, 2)),
1018
+ size=image_size
1019
+ ).reshape(-1).tolist()
1020
+ # reshape back
1021
+ polygons.append(_polygon)
1022
+
1023
+ instance['cat_name'] = phrase
1024
+ instance['polygons'] = polygons
1025
+ if len(bbox) != 0:
1026
+ instance['bbox'] = self.box_quantizer.dequantize(
1027
+ boxes=torch.tensor([bbox]),
1028
+ size=image_size
1029
+ ).tolist()[0]
1030
+
1031
+ instances.append(instance)
1032
+
1033
+ return instances
1034
+
1035
+ def __call__(
1036
+ self,
1037
+ text=None,
1038
+ sequence=None,
1039
+ transition_beam_score=None,
1040
+ image_size=None,
1041
+ parse_tasks=None,
1042
+ ):
1043
+ """
1044
+ Args:
1045
+ text: model outputs
1046
+ image_size: (width, height)
1047
+ parse_tasks: a list of tasks to parse, if None, parse all tasks.
1048
+ """
1049
+ if parse_tasks is not None:
1050
+ if isinstance(parse_tasks, str):
1051
+ parse_tasks = [parse_tasks]
1052
+ for _parse_task in parse_tasks:
1053
+ assert _parse_task in self.parse_tasks, f'parse task {_parse_task} not supported'
1054
+
1055
+ # sequence or text should be provided
1056
+ assert sequence is not None or text is not None, 'sequence or text should be provided'
1057
+ assert sequence is None or text is None, 'only one of sequence and text should be provided'
1058
+
1059
+ if sequence is not None:
1060
+ sequence = sequence.tolist()[1:]
1061
+ text, spans = self.decode_with_spans(self.tokenizer, sequence)
1062
+ if transition_beam_score is not None:
1063
+ transition_beam_score = transition_beam_score.tolist()
1064
+ assert len(sequence) == len(transition_beam_score)
1065
+ else:
1066
+ spans = None
1067
+ transition_beam_score = None
1068
+
1069
+ parsed_dict = {
1070
+ 'text': text
1071
+ }
1072
+
1073
+ for task in self.parse_tasks:
1074
+ if parse_tasks is not None and task not in parse_tasks:
1075
+ continue
1076
+
1077
+ pattern = self.parse_tasks_configs[task].get('PATTERN', None)
1078
+ score_mode = self.parse_tasks_configs[task].get('SCORE_MODE', None)
1079
+
1080
+ if task == 'ocr':
1081
+ instances = self.parse_ocr_from_text_and_spans(
1082
+ text,
1083
+ pattern=pattern,
1084
+ image_size=image_size,
1085
+ area_threshold=self.parse_tasks_configs[task].get('AREA_THRESHOLD', 0.0),
1086
+ )
1087
+ parsed_dict['ocr'] = instances
1088
+ elif task == 'phrase_grounding':
1089
+ instances = self.parse_phrase_grounding_from_text_and_spans(
1090
+ text,
1091
+ pattern=pattern,
1092
+ image_size=image_size,
1093
+ )
1094
+ parsed_dict['phrase_grounding'] = instances
1095
+ elif task == 'pure_text':
1096
+ parsed_dict['pure_text'] = text
1097
+ elif task == 'description_with_bboxes':
1098
+ instances = self.parse_description_with_bboxes_from_text_and_spans(
1099
+ text,
1100
+ spans=spans,
1101
+ scores=transition_beam_score,
1102
+ score_mode=score_mode,
1103
+ pattern=pattern,
1104
+ image_size=image_size,
1105
+ )
1106
+ parsed_dict['description_with_bboxes'] = instances
1107
+ elif task == 'description_with_polygons':
1108
+ instances = self.parse_description_with_polygons_from_text_and_spans(
1109
+ text,
1110
+ pattern=pattern,
1111
+ image_size=image_size,
1112
+ )
1113
+ parsed_dict['description_with_polygons'] = instances
1114
+ elif task == 'polygons':
1115
+ instances = self.parse_description_with_polygons_from_text_and_spans(
1116
+ text,
1117
+ pattern=pattern,
1118
+ image_size=image_size,
1119
+ allow_empty_phrase=True,
1120
+ )
1121
+ parsed_dict['polygons'] = instances
1122
+ elif task == 'bboxes':
1123
+ instances = self.parse_description_with_bboxes_from_text_and_spans(
1124
+ text,
1125
+ pattern=pattern,
1126
+ image_size=image_size,
1127
+ allow_empty_phrase=True,
1128
+ )
1129
+ parsed_dict['bboxes'] = instances
1130
+ elif task == 'description_with_bboxes_or_polygons':
1131
+ if '<poly>' in text:
1132
+ # only support either polygons or bboxes, not both at the same time
1133
+ instances = self.parse_description_with_polygons_from_text_and_spans(
1134
+ text,
1135
+ pattern=pattern,
1136
+ image_size=image_size,
1137
+ )
1138
+ else:
1139
+ instances = self.parse_description_with_bboxes_from_text_and_spans(
1140
+ text,
1141
+ pattern=pattern,
1142
+ image_size=image_size,
1143
+ )
1144
+ parsed_dict['description_with_bboxes_or_polygons'] = instances
1145
+ else:
1146
+ raise ValueError("task {} is not supported".format(task))
1147
+
1148
+ return parsed_dict
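A hedged usage sketch, not part of the repository: when the decoded text (rather than raw token ids) is already available, the post-processor can be constructed without a tokenizer and called directly; the image size and the generated string below are illustrative.

```python
# Illustrative only: parse a decoded Florence-2 string into labelled boxes.
post_processor = Florence2PostProcesser(tokenizer=None)

generated_text = "A red car<loc_52><loc_333><loc_932><loc_774>"
parsed = post_processor(
    text=generated_text,
    image_size=(640, 480),                     # (width, height) of the original image
    parse_tasks="description_with_bboxes",
)
# parsed["description_with_bboxes"] -> [{'bbox': [x1, y1, x2, y2], 'cat_name': 'A red car'}]
```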
processor_config.json ADDED
@@ -0,0 +1,6 @@
1
+ {
2
+ "auto_map": {
3
+ "AutoProcessor": "processing_florence2.Florence2Processor"
4
+ },
5
+ "processor_class": "Florence2Processor"
6
+ }
special_tokens_map.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
The diff for this file is too large to render. See raw diff
 
vocab.json ADDED
The diff for this file is too large to render. See raw diff