MotoData-Phi2-GGUF

This is a fine-tuned version of the microsoft/phi-2 model, trained on the Beluga6969/MotoData dataset.

This repository contains the quantized GGUF model file for easy use on local CPUs, as well as the original LoRA adapter for those who wish to build upon this work.

Model Details

Intended Use

This model is designed to be a helpful chatbot and question-answering assistant for motorcycle enthusiasts. It can answer questions about motorcycle maintenance, models, and general two-wheeler knowledge based on the data it was trained on.

Example Prompts:

  • "What should I check before a long motorcycle trip?"
  • "Tell me about the Royal Enfield Himalayan."
  • "How do you clean and lubricate a motorcycle chain?"

How to Use This GGUF Model

The primary file in this repository is phi2-custom-q4_k_m.gguf. You can run this file on your local computer (CPU or GPU) using tools like LM Studio, Ollama, or llama.cpp.
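Besides the GUI tools, the GGUF file can be loaded programmatically. Below is a minimal sketch using the llama-cpp-python bindings (pip install llama-cpp-python); the model path matches the file in this repository, and the `build_prompt` helper mirrors the chat template from the Ollama Modelfile in the section below, which is an assumption about how the model expects its input.

```python
# Sketch: querying the GGUF locally with llama-cpp-python.
# Assumptions: phi2-custom-q4_k_m.gguf is in the working directory, and the
# turn-token template below matches what the fine-tune was trained with.
def build_prompt(question: str) -> str:
    """Wrap a user question in the Modelfile's chat template."""
    return (
        "<start_of_turn>user\n"
        f"{question}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

if __name__ == "__main__":
    # Imported lazily so build_prompt can be reused without llama-cpp installed.
    from llama_cpp import Llama

    llm = Llama(model_path="phi2-custom-q4_k_m.gguf", n_ctx=2048)
    out = llm(
        build_prompt("What should I check before a long motorcycle trip?"),
        max_tokens=256,
        stop=["<end_of_turn>"],
    )
    print(out["choices"][0]["text"].strip())
```

Pass `n_gpu_layers=-1` to the `Llama` constructor to offload all layers to a GPU if one is available.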

Using with LM Studio

  1. Download and install LM Studio.
  2. In the app, search for Prithwiraj731/MotoData-Phi2-GGUF.
  3. Download the phi2-custom-q4_k_m.gguf file from the list.
  4. Go to the Chat tab (💬 icon), select the model at the top, and start your conversation!


Using with Ollama

  1. Download and install Ollama.
  2. Create a file named Modelfile (without any extension) and paste the following content into it:
    FROM ./phi2-custom-q4_k_m.gguf
    TEMPLATE "<start_of_turn>user\n{{ .Prompt }}<end_of_turn>\n<start_of_turn>model\n"
    
  3. Place this Modelfile in the same directory as the GGUF file you downloaded.
  4. Open your terminal and run the command:
    ollama create MotoDataPhi2 -f ./Modelfile
    
  5. You can now chat with the model by running:
    ollama run MotoDataPhi2
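Once the model is created, Ollama also exposes it over a local HTTP API (default port 11434), which is handy for scripting. The sketch below posts a question with only the standard library; the model name is the one created in step 4, and the endpoint and fields follow Ollama's /api/generate API.

```python
# Sketch: calling the model through Ollama's local REST API.
# Assumes `ollama create MotoDataPhi2 ...` has been run and Ollama is serving
# on the default port.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def make_request_body(prompt: str, model: str = "MotoDataPhi2") -> bytes:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

if __name__ == "__main__":
    req = urllib.request.Request(
        OLLAMA_URL,
        data=make_request_body("Tell me about the Royal Enfield Himalayan."),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # With "stream": false the full answer arrives in one JSON object.
        print(json.loads(resp.read())["response"])
```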
    

Using the LoRA Adapter

For advanced users, the repository also includes the fine_tuned_phi2_adapter folder. You can merge it with the base microsoft/phi-2 model to produce a full-precision model of your own, or use it as a starting point for further fine-tuning.
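A merge along these lines can be done with Hugging Face transformers and peft (pip install transformers peft). This is a sketch, not a tested recipe: the adapter path is the folder name from this repository, and the output directory name is illustrative.

```python
# Sketch: merging the LoRA adapter into the base microsoft/phi-2 weights.
BASE_MODEL = "microsoft/phi-2"
ADAPTER_DIR = "fine_tuned_phi2_adapter"   # folder shipped in this repo
OUTPUT_DIR = "phi2-motodata-merged"       # illustrative output path

if __name__ == "__main__":
    # Heavy imports kept here; downloading phi-2 requires network access.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, torch_dtype="auto")
    # Attach the adapter, then fold its weights into the base model.
    merged = PeftModel.from_pretrained(base, ADAPTER_DIR).merge_and_unload()
    merged.save_pretrained(OUTPUT_DIR)
    AutoTokenizer.from_pretrained(BASE_MODEL).save_pretrained(OUTPUT_DIR)
```

The merged folder can then be loaded like any regular transformers checkpoint, or re-quantized to GGUF with llama.cpp's conversion scripts.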


Model fine-tuned by Prithwiraj731.

Model Specifications

  • Architecture: phi-2
  • Model size: ~3B parameters
  • Quantization: 4-bit (Q4_K_M)
  • Format: GGUF