Gemma 2 27B, quantized to 8-bit via bitsandbytes
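A minimal loading sketch (not from the original card): since this checkpoint was produced with bitsandbytes 8-bit quantization, it can be loaded with `transformers` using a `BitsAndBytesConfig` with `load_in_8bit=True`. This assumes a CUDA GPU and the `transformers`, `accelerate`, and `bitsandbytes` packages are installed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "benzlokzik/gemma-2-27b-quantized-8bit-uint"

# Mirror the 8-bit bitsandbytes quantization this checkpoint was saved with
quant_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # place layers across available GPUs/CPU automatically
)

# Quick generation smoke test
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that `device_map="auto"` and the prompt are illustrative choices, not requirements of the checkpoint.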

Format: Safetensors
Model size: 27B params
Tensor types: F32, F16, I8

Model tree for benzlokzik/gemma-2-27b-quantized-8bit-uint

Base model: google/gemma-2-27b
This model: quantized variant of the base model