---
tags:
- computer-vision
- image-classification
license:
- cc0-1.0
library_name: keras
---

## Image Classification using MobileViT
This repo contains the model and the notebook for [this Keras example on MobileViT](https://keras.io/examples/vision/mobilevit/).
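
As a rough sketch of how the hosted model could be pulled down for inference, assuming it was exported with the Hub's Keras integration (the repo id below is a placeholder, not confirmed by this card):

```python
# A minimal sketch, not this card's official usage snippet.
# Assumes huggingface_hub is installed and the model was pushed
# with its Keras integration; the repo id is a placeholder.
from huggingface_hub import from_pretrained_keras

model = from_pretrained_keras("keras-io/mobile-vit-xxs")  # placeholder repo id
model.summary()
```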

Full credits to: [Sayak Paul](https://twitter.com/RisingSayak)

## Background Information
The MobileViT architecture (Mehta et al.) combines the benefits of Transformers (Vaswani et al.) and convolutions. With Transformers, we can capture long-range dependencies that result in global representations. With convolutions, we can capture spatial relationships that model locality.
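
To make the local/global split concrete, here is a toy Keras sketch of the idea. It is deliberately simplified: the real MobileViT block unfolds the feature map into non-overlapping patches and stacks several Transformer layers, which this sketch does not reproduce.

```python
# A toy illustration of the MobileViT idea, not the paper's exact block:
# convolutions model locality, self-attention models global context.
from tensorflow import keras
from tensorflow.keras import layers

def toy_mobilevit_block(inputs, projection_dim=64, num_heads=2):
    # Local representation: convolutions capture spatial locality.
    local = layers.Conv2D(projection_dim, 3, padding="same", activation="swish")(inputs)
    local = layers.Conv2D(projection_dim, 1, activation="swish")(local)

    # Unfold the feature map into a sequence so attention sees every position.
    h, w = local.shape[1], local.shape[2]
    seq = layers.Reshape((h * w, projection_dim))(local)

    # Global representation: self-attention captures long-range dependencies.
    attn = layers.MultiHeadAttention(num_heads=num_heads, key_dim=projection_dim)(seq, seq)
    seq = layers.LayerNormalization()(seq + attn)

    # Fold the sequence back into a feature map and fuse global context
    # with the local features via a pointwise convolution.
    folded = layers.Reshape((h, w, projection_dim))(seq)
    fused = layers.Concatenate()([local, folded])
    return layers.Conv2D(projection_dim, 1, activation="swish")(fused)

inputs = keras.Input(shape=(32, 32, 3))
model = keras.Model(inputs, toy_mobilevit_block(inputs))
```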

Besides combining the properties of Transformers and convolutions, the authors introduce MobileViT as a general-purpose mobile-friendly backbone for different image recognition tasks. Their findings suggest that, performance-wise, MobileViT is better than other models with the same or higher complexity (MobileNetV3, for example), while being efficient on mobile devices.
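
Since mobile efficiency is the selling point, here is a hedged sketch of how such a Keras model is typically prepared for on-device inference with TensorFlow Lite. The `model` variable (reused from the loading sketch above) and the output filename are assumptions, not part of this card:

```python
# A hedged sketch of post-training conversion to TensorFlow Lite;
# `model` is assumed to be the loaded Keras MobileViT model.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable default quantization
tflite_model = converter.convert()

with open("mobilevit.tflite", "wb") as f:
    f.write(tflite_model)
```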

## Training Data
The model is trained on the [tf_flowers dataset](https://www.tensorflow.org/datasets/catalog/tf_flowers).
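
For reference, a minimal sketch of loading tf_flowers with TensorFlow Datasets; the split ratio, image size, and batch size here are illustrative assumptions, not the training configuration used for this model:

```python
# A minimal sketch, not the exact training pipeline for this model.
import tensorflow as tf
import tensorflow_datasets as tfds

(train_ds, val_ds), info = tfds.load(
    "tf_flowers",
    split=["train[:90%]", "train[90%:]"],  # assumed split; tf_flowers ships a single "train" split
    as_supervised=True,
    with_info=True,
)

def preprocess(image, label):
    # Resize and scale to [0, 1]; 256x256 is an assumed input size.
    image = tf.image.resize(image, (256, 256)) / 255.0
    return image, label

train_ds = train_ds.map(preprocess).batch(32).prefetch(tf.data.AUTOTUNE)
val_ds = val_ds.map(preprocess).batch(32).prefetch(tf.data.AUTOTUNE)
```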