# Improved Deepseek-Qwen3 Fine-Tuned Model
This repository contains my fine-tuned version of deepseek-ai/DeepSeek-R1-0528-Qwen3-8B.
This model has been converted to GGUF format for easy use with tools like LM Studio, Ollama, and llama.cpp. The original Hugging Face format (float16) is also included for developers.
## Model Description

- **What is this model?**
  - This is a fine-tuned version of the DeepSeek-Qwen model.
- **What is it good at?**
  - This model was trained on a diverse mix of coding and instruction-following datasets, making it a strong general-purpose assistant with a focus on code generation and problem-solving.
- **What are its limitations?**
  - The model is sensitive to its prompt format and inference settings. For best results, users should adhere strictly to the prompt template and settings below.
  - On rare occasions, the model may produce responses that are not relevant to the prompt. Lowering the temperature can often help mitigate this.
  - In some UIs, like LM Studio, if the prompt template is not configured correctly, the model may appear to "think" but not generate a response. This is a settings issue, not a model failure (see "Critical Settings" below).
## How to Use (GGUF for LM Studio, Ollama, etc.)
Download your preferred GGUF file from the "Files and versions" tab of this repository.
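Alternatively, you can fetch a single file from the command line. This is a sketch assuming the Hugging Face CLI is installed (`pip install -U huggingface_hub`); substitute the exact file name from the table below:

```bash
# Download only the recommended Q4_K_M quant into the current directory.
huggingface-cli download Aadeshisdoingsomething/Improved-Deepseek-Qwen3 my-model-Q4_K_M.gguf --local-dir .
```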
### Critical Settings
For this model to work correctly, you MUST configure the following settings in your inference software (LM Studio, Ollama, etc.):
- **Prompt Template:** This model uses the ChatML format. You must select this preset (note: it is often listed under the name "Qwen2"). An example of the raw format is shown after this list.
- **Stop String / Stop Token:** Add `<|im_end|>` as a stop token to prevent the model from rambling.
- **Temperature:** A lower temperature (e.g., `0.7` or below) is recommended to reduce off-topic responses.
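For reference, here is what a ChatML-formatted prompt looks like on the wire; the system and user text are placeholders:

```text
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Explain what a GGUF file is.<|im_end|>
<|im_start|>assistant
```

If your UI is configured correctly it builds this automatically; you should never need to type these markers yourself.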
### Recommended Quantizations
| File Name | Recommended Use |
|---|---|
| `my-model-Q4_K_M.gguf` | Balanced: Good quality and performance. (Recommended) |
| `my-model-Q5_K_M.gguf` | High Quality: Better quality, larger file size. |
| `my-model-Q8_0.gguf` | Best Quality: For users with lots of RAM/VRAM. |
| `my-model-F16.gguf` | Full Precision: No quantization, for testing and research. |
## How to Use (Hugging Face Transformers)
This is for Python developers who want to use the original, full-precision model.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "Aadeshisdoingsomething/Improved-Deepseek-Qwen3"

tokenizer = AutoTokenizer.from_pretrained(model_name)
# trust_remote_code=True prevents a common loading error with this model.
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)
```
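Once loaded, generation follows the standard Transformers pattern. A minimal sketch; the prompt, `max_new_tokens`, and sampling values here are illustrative, not tuned:

```python
messages = [{"role": "user", "content": "Write a Python function that checks if a number is prime."}]

# apply_chat_template inserts the ChatML markers described in "Critical Settings".
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```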
## How to Use with Ollama
This model is fully compatible with Ollama.
### 1. Pull the Model (Easy Method)

This single command downloads the recommended Q4_K_M version of the model and makes it available to use:

```bash
ollama run Aadeshisdoingsomething/Improved-Deepseek-Qwen3
```
### 2. Custom Quant (Hard Method)

The process is essentially the same: edit the Modelfile to point at a different quant, for example:

```
FROM ./my-model-Q5_K_M.gguf
```
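A fuller Modelfile sketch is below. The stop token and temperature mirror the "Critical Settings" above; the template block is an assumption based on the ChatML format this model uses:

```
# Modelfile (assumes the GGUF file sits in the current directory)
FROM ./my-model-Q5_K_M.gguf

# Mirror the recommended inference settings.
PARAMETER stop "<|im_end|>"
PARAMETER temperature 0.7

# ChatML prompt template.
TEMPLATE """<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""
```

Then build and run it (the local model name is arbitrary):

```bash
ollama create improved-deepseek-qwen3 -f Modelfile
ollama run improved-deepseek-qwen3
```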