Update README.md
README.md (changed)
````diff
@@ -189,13 +189,13 @@ The minimum hardware requirements for deploying Intern-S1 series models are:
 
 You can utilize one of the following LLM inference frameworks to create an OpenAI compatible server:
 
-#### [lmdeploy(>=0.9.2)](https://github.com/InternLM/lmdeploy)
+#### [lmdeploy (>=0.9.2)](https://github.com/InternLM/lmdeploy)
 
 ```bash
 lmdeploy serve api_server internlm/Intern-S1-mini-FP8 --reasoning-parser intern-s1 --tool-call-parser intern-s1
 ```
 
-#### [vllm](https://github.com/vllm-project/vllm)
+#### [vllm (>=0.10.1)](https://github.com/vllm-project/vllm)
 
 ```bash
 vllm serve internlm/Intern-S1-mini-FP8 --trust-remote-code
````
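Either command stands up an OpenAI-compatible HTTP endpoint, so any standard OpenAI client can talk to it. A minimal sketch of the JSON body such a client POSTs to `/v1/chat/completions`; the localhost port is an assumption here (lmdeploy's `api_server` defaults to 23333, vllm to 8000):

```python
import json

# Body for: POST http://localhost:23333/v1/chat/completions
# (23333 is lmdeploy's default port; vllm serves on 8000 by default;
# adjust to wherever your server actually listens).
payload = {
    "model": "internlm/Intern-S1-mini-FP8",  # the checkpoint being served
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": False,  # set True for token-by-token streaming
}
body = json.dumps(payload)
print(body)
```

Reasoning and tool-call output are parsed server-side via the `--reasoning-parser` and `--tool-call-parser` flags shown in the lmdeploy command, so the client payload needs no extra fields for them.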
````diff
@@ -445,6 +445,11 @@ extra_body={
 }
 ```
 
+## Fine-tuning
+
+See this [documentation](https://github.com/InternLM/Intern-S1/blob/main/docs/sft.md) for more details.
+
+
 ## Citation
 
 If you find this work useful, feel free to give us a cite.
````