pandora-s committed
Commit 8146732 · verified · 1 Parent(s): c36c33c
Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -188,7 +188,7 @@ We recommend that you use Mistral-Large-Instruct-2411 in a server/client setting
 vllm serve mistralai/Mistral-Large-Instruct-2411 --tokenizer_mode mistral --config_format mistral --load_format mistral --tensor_parallel_size 8
 ```
 
-**Note:** Running Ministral-8B on GPU requires over 300 GB of GPU RAM.
+**Note:** Running Mistral-Large-Instruct-2411 on GPU requires over 300 GB of GPU RAM.
 
 
 2. To ping the client you can use a simple Python snippet.
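
For context on the step referenced in the diff: the "simple Python snippet" for pinging the server is not shown here. The sketch below is only an illustrative assumption, using the OpenAI-compatible endpoint that vLLM exposes, the default port 8000, a placeholder `api_key`, and the `openai` Python client; it is not the snippet from the README itself.

```python
# Minimal sketch (assumption): query the vLLM server started above through its
# OpenAI-compatible API. The base_url/port and api_key values are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="mistralai/Mistral-Large-Instruct-2411",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```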