Can't run!
Same problem for me, tried in Arch Linux x86, Ubuntu aarch64 and Ubuntu x86.
The correct command is nexa infer NexaAI/DeepSeek-OCR-GGUF
Please use infer, rather than run, thank you!
I have downloaded the model in LM Studio. How do I use it in the CLI with "nexa infer"?
Even after recent updates to the model, using NexaSDK version 0.2.58 and nexa infer (as I always did, btw), the problem still persists.
@Complete123123
@antiloplastico
Sorry for the confusing experience. Please try the infer command with the correct repo name. Run the following commands to make sure you are on the latest SDK version and the latest model:
nexa update
nexa rm NexaAI/DeepSeek-OCR-GGUF
nexa rm NexaAI/DeepSeek-OCR-GGUF-CUDA
Then try the model with nexa infer again: nexa infer NexaAI/DeepSeek-OCR-GGUF, or nexa infer NexaAI/DeepSeek-OCR-GGUF-CUDA if you have a CUDA device and the CUDA runtime properly installed.
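The full reset sequence above can be collected into one snippet (a sketch assuming the nexa CLI from NexaSDK is already installed and on PATH; the guard at the top is just a convenience I added, not part of the tool):

```shell
#!/bin/sh
set -e

# Bail out with a note if the nexa CLI is not installed.
if ! command -v nexa >/dev/null 2>&1; then
    echo "nexa CLI not found; install NexaSDK first"
    exit 0
fi

nexa update                            # upgrade NexaSDK to the latest version
nexa rm NexaAI/DeepSeek-OCR-GGUF       # remove the cached CPU model
nexa rm NexaAI/DeepSeek-OCR-GGUF-CUDA  # remove the cached CUDA model, if present

# Re-download and run the model (use the -CUDA repo on a CUDA-capable machine).
nexa infer NexaAI/DeepSeek-OCR-GGUF
```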
I am on MBP M4 pro chip, ran into the same issue:
MacBook-Pro deepseekOCR % nexa infer NexaAI/DeepSeek-OCR-GGUF
⚠️ Oops. Model failed to load.
Try these:
- Verify your system meets the model's requirements.
- Seek help in our Discord or Slack.
This is after trying nexa update and rm, then redownloading the model:
MacBook-Pro deepseekOCR % nexa list
┌──────────────────────────┬─────────┬────────┐
│ NAME                     │ SIZE    │ QUANTS │
├──────────────────────────┼─────────┼────────┤
│ NexaAI/DeepSeek-OCR-GGUF │ 2.3 GiB │        │
└──────────────────────────┴─────────┴────────┘

