These are GGUF quantizations of the model Chandra-OCR.

This is the imatrix version, created using the text_en_medium calibration dataset.

Original model: https://huggingface.co/datalab-to/chandra

Download the latest llama.cpp to use them.

Try to use the best quality quantization you can run.
For the mmproj, use the F32 version if possible, as it produces the best results: F32 > BF16 > F16.
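As a minimal sketch, the quants can be fetched and run with llama.cpp's multimodal CLI; the quant and mmproj filenames below are assumptions, so check the repository file list for the actual names:

```shell
# Download one quant plus the F32 mmproj (filenames are assumptions;
# check the repository file list for the real names).
huggingface-cli download noctrex/Chandra-OCR-i1-GGUF \
  Chandra-OCR.i1-Q4_K_M.gguf mmproj-F32.gguf --local-dir .

# Run OCR on an image with llama.cpp's multimodal CLI,
# pointing --mmproj at the vision projector file.
llama-mtmd-cli -m Chandra-OCR.i1-Q4_K_M.gguf \
  --mmproj mmproj-F32.gguf \
  --image page.png \
  -p "Transcribe the text in this image."
```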

Why are there no Q8_0/F16/BF16 quantizations here? Because those do not use the imatrix; download them instead from noctrex/Chandra-OCR-GGUF.

Format: GGUF
Model size: 8B params
Architecture: qwen3vl

Available quantizations: 3-bit, 4-bit, 5-bit, 6-bit
