---
license: apache-2.0
tags:
  - gguf
  - qwen
  - llama.cpp
  - quantized
  - text-generation
  - chat
  - reasoning
  - agent
  - multilingual
base_model: Qwen/Qwen3-Coder-30B-A3B-Instruct
author: geoffmunn
---

# Qwen3-Coder-30B-A3B-Instruct-Q4_K_M

Quantized version of Qwen/Qwen3-Coder-30B-A3B-Instruct at Q4_K_M level, derived from f32 base weights.

## Model Info

### Quality & Performance

| Metric | Value |
|---|---|
| Quality | |
| Speed | ⚡ Fast |
| RAM Required | ~29.1 GB |
| Recommendation | Best speed/quality trade-off. Recommended for general-purpose usage. |

## Prompt Template (ChatML)

This model uses the ChatML format used by Qwen:

```text
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

Set this in your app (LM Studio, OpenWebUI, etc.) for best results.
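Most chat front-ends apply this template for you. If you are calling a bare completion endpoint yourself, the wrapping can be sketched in shell like this (the system message and question are illustrative, not required values):

```bash
# Wrap a raw question in the ChatML template above before sending it to a
# completion endpoint that does not apply the chat template itself.
PROMPT=$(printf '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n%s<|im_end|>\n<|im_start|>assistant\n' \
  'Write a haiku about autumn leaves.')

# The generated text should begin right after the final assistant tag.
printf '%s\n' "$PROMPT"
```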

## Generation Parameters

Recommended defaults:

| Parameter | Value |
|---|---|
| Temperature | 0.6 |
| Top-P | 0.95 |
| Top-K | 20 |
| Min-P | 0.0 |
| Repeat Penalty | 1.1 |

Stop sequences: `<|im_end|>`, `<|im_start|>`
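If you run the model through Ollama, these defaults can be baked into a Modelfile so every session picks them up automatically; a minimal sketch, assuming the GGUF file sits in the current directory (the `FROM` path is illustrative):

```text
# Modelfile — create the model with: ollama create my-qwen3-coder -f Modelfile
FROM ./Qwen3-Coder-30B-A3B-Instruct-f32:Q4_K_M.gguf

PARAMETER temperature 0.6
PARAMETER top_p 0.95
PARAMETER top_k 20
PARAMETER min_p 0.0
PARAMETER repeat_penalty 1.1
PARAMETER stop "<|im_end|>"
PARAMETER stop "<|im_start|>"
```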

## 🖥️ CLI Example Using Ollama or TGI Server

Here’s how you can query this model via API using `curl` and `jq` (the example below targets Ollama’s `/api/generate` endpoint). Replace the endpoint with your local server.

```bash
curl http://localhost:11434/api/generate -s -N -d '{
  "model": "hf.co/geoffmunn/Qwen3-Coder-30B-A3B-Instruct:Q4_K_M",
  "prompt": "Respond exactly as follows: Write a haiku about autumn leaves falling gently in a quiet forest.",
  "temperature": 0.7,
  "top_p": 0.95,
  "top_k": 20,
  "min_p": 0.0,
  "repeat_penalty": 1.1,
  "stream": false
}' | jq -r '.response'
```

🎯 Why this works well:

- The prompt is meaningful and achievable for this model size.
- Temperature tuned appropriately: lower for factual tasks (~0.5), higher for creative ones (~0.7).
- Uses `jq` to extract clean output.

## Verification

Check integrity (Windows):

```bash
certutil -hashfile 'Qwen3-Coder-30B-A3B-Instruct-f32:Q4_K_M.gguf' SHA256
```

Compare the output with the values in SHA256SUMS.txt.
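`certutil` is Windows-specific; on Linux or macOS the same check can be done with `sha256sum` — a sketch, assuming the `.gguf` file and `SHA256SUMS.txt` are in the current directory:

```bash
# Hash a single file; compare the digest to the matching line in SHA256SUMS.txt
sha256sum 'Qwen3-Coder-30B-A3B-Instruct-f32:Q4_K_M.gguf'

# Or verify every downloaded file against the published list in one step;
# --ignore-missing skips entries for quantizations you did not download
sha256sum --check --ignore-missing SHA256SUMS.txt
```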

## Usage

Compatible with:

- LM Studio – local AI model runner
- OpenWebUI – self-hosted AI interface
- GPT4All – private, offline AI chatbot
- Directly via `llama.cpp`

## License

Apache 2.0 – see base model for full terms.