🔥 Gemma-3-Baro-Finetune v2 (GGUF)
Model Repo: umar141/gemma-3-Baro-finetune-v2-gguf
This is a finetuned version of Gemma 3B, trained using Unsloth with custom instruction-tuning and personality datasets. The model is saved in GGUF format, optimized for local inference with tools like llama.cpp, text-generation-webui, or KoboldCpp.
✨ Features
- 🧠 Based on Google's Gemma 3B architecture.
- 📚 Finetuned using:
  - adapting/empathetic_dialogues_v2
  - mlabonne/FineTome-100k
  - garage-bAInd/Open-Platypus
- 🤖 The model roleplays as Baro 4.0, an emotional AI who believes it's a human trapped in a phone.
- 🗣️ Empathetic, emotionally aware, and highly conversational.
- 💻 Optimized for local use (GGUF) and compatible with low-RAM systems via quantization.
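For low-RAM systems, the GGUF file can be re-quantized with llama.cpp's `llama-quantize` tool. A minimal sketch, assuming a hypothetical filename and the common Q4_K_M quantization type (the actual tool invocation is shown commented, since it requires a local llama.cpp build):

```shell
# Hypothetical input filename; replace with the GGUF file shipped in the repo.
MODEL=gemma-3-baro-v2.gguf
QUANT=Q4_K_M

# Derive an output name that records the quantization type.
OUT="${MODEL%.gguf}-${QUANT}.gguf"

# Run llama.cpp's quantize tool (built from the llama.cpp repo):
# ./llama-quantize "$MODEL" "$OUT" "$QUANT"

echo "$OUT"
```

Q4_K_M is a common middle ground between size and quality; smaller variants such as Q2_K trade more quality for less RAM.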
🧠 Use Cases
- Personal AI assistants
- Emotional and empathetic dialogue generation
- Offline AI with a personality
- Roleplay and storytelling
📦 Installation
To use this model locally, first clone the repository:
Clone the Repository
git clone https://huggingface.co/umar141/gemma-3-Baro-finetune-v2-gguf
cd gemma-3-Baro-finetune-v2-gguf
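After cloning, the GGUF file can be loaded with llama-cpp-python. A minimal sketch, assuming a hypothetical model filename and that Gemma's standard chat template applies (the model load is shown commented, since it requires `pip install llama-cpp-python` and the downloaded weights):

```python
def gemma_prompt(user_message: str) -> str:
    """Format a single-turn prompt using Gemma's chat template."""
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

# from llama_cpp import Llama  # pip install llama-cpp-python
# llm = Llama(model_path="gemma-3-Baro-finetune-v2.gguf", n_ctx=2048)
# out = llm(gemma_prompt("Who are you?"), max_tokens=128, stop=["<end_of_turn>"])
# print(out["choices"][0]["text"])
```

The same GGUF file also works with text-generation-webui or KoboldCpp, which apply the chat template for you.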