update doc
README.md CHANGED
@@ -1,3 +1,24 @@
 ---
-license:
+license: openrail++
+base_model: Lykon/dreamshaper-xl-turbo
+language:
+- en
+tags:
+- stable-diffusion
+- stable-diffusion-xl
+- onnxruntime
+- onnx
+- text-to-image
 ---
+
+
+# DreamShaper XL Turbo for ONNX Runtime CUDA provider
+
+## Introduction
+
+This repository hosts the optimized versions of **DreamShaper XL Turbo** to accelerate inference with the ONNX Runtime CUDA execution provider.
+
+The models are generated by [Olive](https://github.com/microsoft/Olive/tree/main/examples/stable_diffusion) with a command like the following:
+```
+python stable_diffusion_xl.py --provider cuda --optimize --model_id Lykon/dreamshaper-xl-turbo
+```
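
For orientation, here is a minimal inference sketch, assuming the exported models can be loaded through Hugging Face Optimum's `ORTStableDiffusionXLPipeline` with the CUDA execution provider; the model path below is a placeholder, and the Olive example scripts linked above remain the reference path for this repository.

```
# Minimal sketch (assumption, not this repository's documented workflow):
# load SDXL ONNX weights with the ONNX Runtime CUDA execution provider
# via Hugging Face Optimum.
from optimum.onnxruntime import ORTStableDiffusionXLPipeline

pipe = ORTStableDiffusionXLPipeline.from_pretrained(
    "dreamshaper-xl-turbo-onnx",       # placeholder: local folder or hub id
    provider="CUDAExecutionProvider",  # run the ONNX sessions on the GPU
)

# Turbo-distilled checkpoints are tuned for few sampling steps and low guidance.
image = pipe(
    "a cinematic photo of a lighthouse at dawn",
    num_inference_steps=4,
    guidance_scale=2.0,
).images[0]
image.save("lighthouse.png")
```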