Commit be56b98 (verified) by rwightman · 1 parent: 9624c4f

Update model config and README

Files changed (2):

1. README.md (+3 −2)
2. config.json (+1 −1)
README.md CHANGED
@@ -1,5 +1,6 @@
 ---
 tags:
+- image-feature-extraction
 - timm
 - transformers
 pipeline_tag: image-feature-extraction
@@ -19,7 +20,7 @@ A DINOv3 ViT model image feature encoder. Distilled on LVD-1689M from the DINOv3
 * The original models keep RoPE periods as a persistent `bfloat16` buffer. `timm` generates `float32` periods at init. This results in some numerical differences; however, the `timm` approach should be less problematic when running on devices without bfloat16 support, and appears to work as well, if not slightly better, for fine-tuning. `model.rope.periods = model.rope.periods.to(torch.bfloat16).to(torch.float32)` will truncate the periods to bfloat16 and result in matching outputs.
 
 ## Model Details
-- **Model Type:** Image feature encoder
+- **Model Type:** Image Feature Encoder
 - **Model Stats:**
   - Params (M): 840.5
   - GMACs: 224.9
@@ -190,4 +191,4 @@ See the associated paper for details on the evaluation protocols
   doi = {10.5281/zenodo.4414861},
   howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
 }
-```
+```
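The RoPE note in the README diff lends itself to a short snippet. Below is a minimal sketch, assuming the encoder is loaded through `timm`'s HF Hub integration; the repo id is a placeholder, since the full repo name is truncated on this page. The `model.rope.periods` assignment itself is taken verbatim from the README.

```python
import torch
import timm

# Load the encoder via timm's HF Hub integration; '<this-repo>' is a
# placeholder for the actual repo id, which is truncated on this page.
model = timm.create_model('hf_hub:timm/<this-repo>', pretrained=True)
model.eval()

# timm initializes RoPE periods in float32; the original DINOv3 release keeps
# them as a persistent bfloat16 buffer. Round-tripping the periods through
# bfloat16 truncates them so outputs match the original weights.
model.rope.periods = model.rope.periods.to(torch.bfloat16).to(torch.float32)
```

Without the round-trip, outputs differ only by small numerical deviations, per the README; the truncation is only needed when exact parity with the original DINOv3 release matters.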
config.json CHANGED
@@ -4,7 +4,7 @@
   "num_features": 1280,
   "global_pool": "avg",
   "pretrained_cfg": {
-    "tag": "lvdm_1689m",
+    "tag": "lvd_1689m",
     "custom_load": false,
     "input_size": [
       3,
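For context on why this one-character fix matters: `timm` identifies pretrained weight sets as `<model_name>.<tag>`, so the `tag` in `pretrained_cfg` should match the LVD-1689M naming used in the README (`lvd_1689m`, not `lvdm_1689m`). A minimal sketch of checking the corrected value, assuming `config.json` has been downloaded locally (the path is illustrative):

```python
import json

# Assumes config.json from this repo has been downloaded to the working dir.
with open('config.json') as f:
    cfg = json.load(f)

# The corrected tag matches the LVD-1689M dataset naming in the README.
assert cfg['pretrained_cfg']['tag'] == 'lvd_1689m'
assert cfg['num_features'] == 1280  # unchanged context line in the diff above
```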