gemma-3-270m-f32

  • Can be converted to GGUF with convert_hf_to_gguf.py
  • Then run it with llama.cpp, Ollama, etc. (see the sketch below)
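
A minimal sketch of that workflow, assuming a local clone of llama.cpp and a local copy of this model's files (config.json, *.safetensors, tokenizer files); all paths, the output filename, and the prompt are placeholders, not part of this repo.

```python
# Sketch: convert the safetensors checkpoint to GGUF, then run it with llama.cpp.
# Paths below are assumptions to adjust for your own setup.
import subprocess

MODEL_DIR = "gemma-3-270m-f32"   # local snapshot of this Hugging Face repo
LLAMA_CPP = "llama.cpp"          # local clone of https://github.com/ggml-org/llama.cpp
GGUF_OUT = "gemma-3-270m-f32.gguf"

# Step 1: convert the HF checkpoint to GGUF, keeping the F32 tensors.
subprocess.run(
    [
        "python", f"{LLAMA_CPP}/convert_hf_to_gguf.py", MODEL_DIR,
        "--outfile", GGUF_OUT,
        "--outtype", "f32",
    ],
    check=True,
)

# Step 2: run the converted model with llama.cpp's CLI.
subprocess.run(
    ["llama-cli", "-m", GGUF_OUT, "-p", "Hello"],
    check=True,
)
```

For Ollama, the same GGUF file can be referenced from a Modelfile (`FROM ./gemma-3-270m-f32.gguf`) and registered with `ollama create`, then started with `ollama run`.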
Safetensors · 0.3B params · F32
