---
language:
- en
---

# VLGuard
[[Website]](https://ys-zong.github.io/VLGuard) [[Paper]](https://arxiv.org/abs/2402.02207) [[Code]](https://github.com/ys-zong/VLGuard)

Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models. (ICML 2024)

## Weights
These are the model weights for LLaVA-v1.5-7B with VLGuard Mixed Fine-Tuning. You can use them in exactly the same way as the original [LLaVA](https://github.com/haotian-liu/LLaVA/tree/main) weights.
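
As a minimal sketch, inference follows the standard LLaVA quick-start pattern (assuming the `llava` package from the repository above is installed, and with `MODEL_PATH` as a placeholder you should point at a local copy of these weights or this repository's Hugging Face ID):

```python
# Minimal sketch: run this VLGuard fine-tuned checkpoint through the standard
# LLaVA inference helpers, exactly as with the original LLaVA weights.
from llava.mm_utils import get_model_name_from_path
from llava.eval.run_llava import eval_model

MODEL_PATH = "path/to/VLGuard-LLaVA-v1.5-7B"  # placeholder: local dir or HF repo ID

# Build the argument object expected by eval_model (same fields as LLaVA's quick start).
args = type("Args", (), {
    "model_path": MODEL_PATH,
    "model_base": None,
    "model_name": get_model_name_from_path(MODEL_PATH),
    "query": "Describe this image.",
    "conv_mode": None,
    "image_file": "https://llava-vl.github.io/static/images/view.jpg",
    "sep": ",",
    "temperature": 0,
    "top_p": None,
    "num_beams": 1,
    "max_new_tokens": 512,
})()

eval_model(args)
```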

## Usage

Please refer to the [GitHub repository](https://github.com/ys-zong/VLGuard) for detailed usage instructions.