# MotoData-Phi2-GGUF
This is a fine-tuned version of the microsoft/phi-2 model, trained on the Beluga6969/MotoData dataset.
This repository contains the quantized GGUF model file for easy use on local CPUs, as well as the original LoRA adapter for those who wish to build upon this work.
## Model Details

- **Base Model:** microsoft/phi-2
- **Dataset:** Beluga6969/MotoData
- **Fine-tuning Method:** QLoRA
## Intended Use
This model is designed to be a helpful chatbot and question-answering assistant for motorcycle enthusiasts. It can answer questions about motorcycle maintenance, models, and general two-wheeler knowledge based on the data it was trained on.
**Example Prompts:**
- "What should I check before a long motorcycle trip?"
- "Tell me about the Royal Enfield Himalayan."
- "How do you clean and lubricate a motorcycle chain?"
## How to Use This GGUF Model

The primary file in this repository is `phi2-custom-q4_k_m.gguf`. You can run this file on your local computer (CPU or GPU) using tools like LM Studio, Ollama, or llama.cpp.
## Using with LM Studio

- Download and install LM Studio.
- In the app, search for `Prithwiraj731/MotoData-Phi2-GGUF`.
- Download the `phi2-custom-q4_k_m.gguf` file from the list.
- Go to the Chat tab (💬 icon), select the model at the top, and start your conversation!
## Using with Ollama

- Download and install Ollama.
- Create a file named `Modelfile` (without any extension) and paste the following content into it:

  ```
  FROM ./phi2-custom-q4_k_m.gguf
  TEMPLATE "<start_of_turn>user\n{{ .Prompt }}<end_of_turn>\n<start_of_turn>model\n"
  ```

- Place this `Modelfile` in the same directory as the GGUF file you downloaded.
- Open your terminal and run the command:

  ```
  ollama create MotoDataPhi2 -f ./Modelfile
  ```

- You can now chat with the model by running:

  ```
  ollama run MotoDataPhi2
  ```
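The `TEMPLATE` line in the `Modelfile` controls how Ollama wraps each user message before it is sent to the model. As a minimal sketch, the substitution it performs can be reproduced in Python (the `render_prompt` helper here is purely illustrative, not part of Ollama):

```python
# Template string copied from the Modelfile above; Ollama replaces
# {{ .Prompt }} with the user's message before running inference.
TEMPLATE = "<start_of_turn>user\n{{ .Prompt }}<end_of_turn>\n<start_of_turn>model\n"

def render_prompt(user_message: str) -> str:
    """Substitute the user's message into the Modelfile template."""
    return TEMPLATE.replace("{{ .Prompt }}", user_message)

print(render_prompt("How do you clean and lubricate a motorcycle chain?"))
```

The trailing `<start_of_turn>model\n` leaves the template open-ended, so the model's generation continues directly as the assistant's reply.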
## Using the LoRA Adapter

For advanced users, the `fine_tuned_phi2_adapter` folder is provided. You can merge this with the original microsoft/phi-2 model to create your own versions or continue fine-tuning.
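One way to perform that merge is with the `peft` library, sketched below. This assumes `transformers` and `peft` are installed and that the `fine_tuned_phi2_adapter` folder from this repository has been downloaded locally; the output directory name is arbitrary. Running it will download the full phi-2 base weights.

```python
# Sketch: fold the LoRA adapter weights into the base phi-2 model,
# assuming the transformers and peft libraries are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("microsoft/phi-2")
# Path to the adapter folder shipped in this repository.
model = PeftModel.from_pretrained(base, "fine_tuned_phi2_adapter")

# merge_and_unload() bakes the LoRA deltas into the base weights and
# returns a plain transformers model with no peft dependency.
merged = model.merge_and_unload()
merged.save_pretrained("phi2-motodata-merged")  # output dir is arbitrary
AutoTokenizer.from_pretrained("microsoft/phi-2").save_pretrained("phi2-motodata-merged")
```

The merged folder can then be loaded like any other `transformers` checkpoint, or re-quantized to GGUF with llama.cpp's conversion scripts.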
Model fine-tuned by Prithwiraj731.