These are quantizations of the model Chandra-OCR.
This is the imatrix version; the text_en_medium calibration dataset was used to create it.
Original model: https://huggingface.co/datalab-to/chandra
Download the latest llama.cpp to use them.
Use the best quality quant you can run.
For the mmproj, prefer the F32 version, as it produces the best results: F32 > BF16 > F16.
Why are there no Q8_0/F16/BF16 quants here? Those do not use an imatrix, so download the regular ones from noctrex/Chandra-OCR-GGUF instead.
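As a rough sketch, a quant plus its mmproj can be run with llama.cpp's multimodal CLI. The GGUF filenames below are assumptions for illustration; match them to the actual files listed in this repository:

```shell
# Download one quant and the mmproj from this repo.
# NOTE: the --include patterns and filenames are assumptions;
# check the real file list in noctrex/Chandra-OCR-i1-GGUF.
huggingface-cli download noctrex/Chandra-OCR-i1-GGUF \
  --include "*Q4_K_M*" "*mmproj*" --local-dir .

# Run OCR on an image with llama.cpp's llama-mtmd-cli.
./llama-mtmd-cli \
  -m Chandra-OCR-Q4_K_M.gguf \
  --mmproj mmproj-F32.gguf \
  --image page.png \
  -p "Transcribe the text in this image."
```

Lower-bit quants trade output quality for memory; the F32 mmproj is preferred because the vision projector is small relative to the language model, so the extra precision costs little.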