---
library_name: transformers
tags:
- gguf
- quantized
- llama.cpp
- ollama
base_model: Qwen/Qwen2.5-Coder-3B-Instruct
---

# Yalexis/qwen2.5-coder-3b-b2b-website-gguf

This is a Q4_K_M quantized GGUF version of [Yalexis/qwen2.5-coder-3b-b2b-website](https://huggingface.co/Yalexis/qwen2.5-coder-3b-b2b-website).

## Model Details

- **Base Model:** Qwen/Qwen2.5-Coder-3B-Instruct
- **Fine-tuned Model:** Yalexis/qwen2.5-coder-3b-b2b-website
- **Quantization:** Q4_K_M
- **Format:** GGUF
- **File Size:** 1.80 GB

## Usage

### Ollama

1. Create a Modelfile:

```
FROM ./qwen2.5-coder-3b-b2b-website-q4_k_m.gguf
```

2. Create the model:

```bash
ollama create qwen-b2b-website -f Modelfile
```

3. Run the model:

```bash
ollama run qwen-b2b-website
```

### llama.cpp

```bash
./llama-cli -m qwen2.5-coder-3b-b2b-website-q4_k_m.gguf -p "Your prompt here"
```

### LM Studio

Simply download this model in LM Studio and start chatting!

## Model Information

This model was fine-tuned for B2B website generation with a 10k token context window.
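To make use of the full 10k context through Ollama, the context length can be raised in the Modelfile. The sketch below is illustrative rather than part of this repository: it assumes the GGUF file is in the current directory and uses 10240 as a stand-in for the 10k window; adjust `num_ctx` to fit your available memory.

```
FROM ./qwen2.5-coder-3b-b2b-website-q4_k_m.gguf

# Raise Ollama's default context window to roughly 10k tokens
# (10240 is an assumed value; tune it to your hardware)
PARAMETER num_ctx 10240
```

With llama.cpp, the equivalent is the `--ctx-size` (`-c`) flag, e.g. `./llama-cli -m qwen2.5-coder-3b-b2b-website-q4_k_m.gguf -c 10240 -p "Your prompt here"`.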