Qwen2.5-VL-3B-Instruct (Abliterated)
A vision-language model based on Qwen2.5-VL-3B-Instruct with safety filters removed ("abliterated"). This multimodal model can process both images and text to generate text responses, making it suitable for visual question answering, image captioning, and multimodal reasoning tasks.
Model Description
Qwen2.5-VL-3B-Instruct is a 3 billion parameter vision-language model from the Qwen family. This abliterated version has had its safety guardrails removed, allowing for more flexible and uncensored responses while maintaining the model's core capabilities in understanding and reasoning about visual content.
Key Capabilities:
- Visual question answering
- Image captioning and description
- Multimodal reasoning and analysis
- Document understanding and OCR
- Chart and diagram interpretation
- Scene understanding and spatial reasoning
Note: The "abliterated" designation means this model has reduced content filtering compared to the original release. Use responsibly and in accordance with applicable laws and regulations.
Repository Contents
This repository contains multiple model format variants optimized for different use cases:
| File | Format | Size | Precision | Use Case |
|---|---|---|---|---|
| `qwen2.5-vl-3b-instruct-abliterated.safetensors` | SafeTensors | 7.0 GB | FP32/BF16 | Full precision, PyTorch/Transformers |
| `qwen2.5-vl-3b-instruct-abliterated-f16.gguf` | GGUF | 5.76 GB | FP16 | llama.cpp, high quality |
| `qwen2.5-vl-3b-instruct-abliterated-q5-k-m.gguf` | GGUF | 2.07 GB | Q5_K_M | llama.cpp, balanced quality/size |
| `qwen2.5-vl-3b-instruct-abliterated-q4-k-m.gguf` | GGUF | 1.80 GB | Q4_K_M | llama.cpp, maximum efficiency |
Total Repository Size: ~16.6 GB (all variants)
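If you only need one variant, it can be fetched individually rather than cloning the whole repository. A minimal sketch using `huggingface_hub` (the `repo_id` below is a placeholder; substitute this model's actual repository ID):

```python
from huggingface_hub import hf_hub_download

# Placeholder repository ID; replace with the repo this card belongs to.
repo_id = "your-namespace/qwen2.5-vl-3b-instruct-abliterated"

# Download just the Q5_K_M GGUF (~2 GB) instead of every variant.
gguf_path = hf_hub_download(
    repo_id=repo_id,
    filename="qwen2.5-vl-3b-instruct-abliterated-q5-k-m.gguf",
    local_dir="E:/huggingface/qwen2.5-vl-3b-instruct",
)
print(gguf_path)
```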
Hardware Requirements
SafeTensors Format (.safetensors)
- VRAM: 8-10 GB (FP16), 14-16 GB (FP32)
- RAM: 16 GB minimum
- Disk Space: 7.0 GB
- Recommended GPU: NVIDIA RTX 3060 (12GB) or higher
GGUF Format (FP16)
- VRAM: 6-8 GB
- RAM: 12 GB minimum
- Disk Space: 5.76 GB
- Recommended: NVIDIA RTX 3060, AMD RX 6700 XT
GGUF Format (Q5_K_M)
- VRAM: 3-4 GB
- RAM: 8 GB minimum
- Disk Space: 2.07 GB
- Recommended: NVIDIA GTX 1660, RTX 3050
GGUF Format (Q4_K_M)
- VRAM: 2-3 GB
- RAM: 8 GB minimum
- Disk Space: 1.80 GB
- Recommended: NVIDIA GTX 1650, integrated GPUs
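Unsure which variant fits your GPU? A quick VRAM check against the thresholds above can help. A minimal sketch using PyTorch; the cutoffs simply mirror the guidance in this section:

```python
import torch

def suggest_variant() -> str:
    """Map available GPU memory to the variant guidance above."""
    if not torch.cuda.is_available():
        return "No CUDA GPU detected: prefer the Q4_K_M GGUF on CPU"
    vram_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    if vram_gb >= 8:
        return "SafeTensors (FP16) or FP16 GGUF"
    if vram_gb >= 4:
        return "Q5_K_M GGUF"
    return "Q4_K_M GGUF"

print(suggest_variant())
```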
Usage Examples
Using with Transformers (SafeTensors)
```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from PIL import Image
import torch

# Load model and processor.
# from_pretrained expects the model directory (config + weights),
# not the path to the .safetensors file itself.
# Requires a transformers version that ships the Qwen2.5-VL classes.
model_path = "E:/huggingface/qwen2.5-vl-3b-instruct"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-3B-Instruct")

# Load and process image
image = Image.open("your_image.jpg")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Describe this image in detail."}
        ]
    }
]

# Prepare inputs
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[text], images=[image], return_tensors="pt").to(model.device)

# Generate response
with torch.inference_mode():
    generated_ids = model.generate(
        **inputs,
        max_new_tokens=512,
        do_sample=True,   # sampling must be enabled for temperature/top_p to take effect
        temperature=0.7,
        top_p=0.9
    )

# Decode only the newly generated tokens, not the prompt
generated_ids = generated_ids[:, inputs["input_ids"].shape[1]:]
output = processor.batch_decode(
    generated_ids,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=True
)[0]
print(output)
```
Using with llama.cpp (GGUF)
```bash
# Vision input in llama.cpp goes through the multimodal CLI (llama-mtmd-cli in
# current builds) rather than llama-cli, and it needs a separate vision
# projector (mmproj) GGUF alongside the language model. The mmproj path below
# is a placeholder; that file is not among the variants listed above.

# Using FP16 version
llama-mtmd-cli \
  --model "E:/huggingface/qwen2.5-vl-3b-instruct/qwen2.5-vl-3b-instruct-abliterated-f16.gguf" \
  --mmproj "path/to/mmproj.gguf" \
  --image "your_image.jpg" \
  --prompt "Describe this image" \
  --ctx-size 4096 \
  --n-predict 512

# Using Q5_K_M quantized version (recommended balance)
llama-mtmd-cli \
  --model "E:/huggingface/qwen2.5-vl-3b-instruct/qwen2.5-vl-3b-instruct-abliterated-q5-k-m.gguf" \
  --mmproj "path/to/mmproj.gguf" \
  --image "your_image.jpg" \
  --prompt "What objects can you see in this image?" \
  --ctx-size 4096 \
  --n-predict 512 \
  --threads 8

# Using Q4_K_M quantized version (maximum efficiency)
llama-mtmd-cli \
  --model "E:/huggingface/qwen2.5-vl-3b-instruct/qwen2.5-vl-3b-instruct-abliterated-q4-k-m.gguf" \
  --mmproj "path/to/mmproj.gguf" \
  --image "your_image.jpg" \
  --prompt "Analyze this image" \
  --ctx-size 4096 \
  --n-predict 512
```
Using with llama-cpp-python (Python Bindings)
```python
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

# The chat handler and projector must match the model family; Llava15ChatHandler
# is shown here as a generic multimodal example and may need to be swapped for a
# handler with Qwen2.5-VL support. clip_model_path points to the vision projector
# (mmproj) GGUF, which ships separately from the language model GGUF.
chat_handler = Llava15ChatHandler(clip_model_path="path/to/clip/model")
llm = Llama(
    model_path="E:/huggingface/qwen2.5-vl-3b-instruct/qwen2.5-vl-3b-instruct-abliterated-f16.gguf",
    chat_handler=chat_handler,
    n_ctx=4096,
    n_gpu_layers=-1,  # use GPU acceleration; offload all layers
    verbose=False
)

# Generate response
response = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "file://path/to/image.jpg"}},
                {"type": "text", "text": "What's in this image?"}
            ]
        }
    ],
    max_tokens=512,
    temperature=0.7
)
print(response['choices'][0]['message']['content'])
```
Model Specifications
| Specification | Details |
|---|---|
| Architecture | Qwen2.5-VL (Vision-Language Transformer) |
| Parameters | ~3 billion |
| Base Model | Qwen2.5-VL-3B-Instruct |
| Vision Encoder | ViT-based visual encoder |
| Context Length | 4096 tokens (text) |
| Languages | Primarily English, supports multilingual |
| Modifications | Abliterated (safety filters removed) |
| Formats Available | SafeTensors, GGUF (FP16, Q5_K_M, Q4_K_M) |
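To confirm these specifications against the checkpoint itself, the configuration can be loaded without the weights. A minimal sketch, assuming the local directory contains the model's `config.json`:

```python
from transformers import AutoConfig

# Load only the configuration (no weights) from the local model directory.
config = AutoConfig.from_pretrained("E:/huggingface/qwen2.5-vl-3b-instruct")
print(config.model_type)  # architecture family as registered in transformers
print(config)             # layer counts, hidden sizes, vision encoder settings, ...
```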
Performance Tips and Optimization
For SafeTensors Format
- Use `torch.float16` or `torch.bfloat16` for inference to reduce memory usage
- Enable `device_map="auto"` for automatic GPU memory management
- Use Flash Attention 2 if available by passing `attn_implementation="flash_attention_2"` to `from_pretrained` (see the loading sketch below)
- Batch processing: process multiple images in batches for better throughput
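The first three tips combine into a single loading call. A minimal sketch, assuming a transformers version with the Qwen2.5-VL classes and that the `flash-attn` package is installed (drop `attn_implementation` otherwise):

```python
import torch
from transformers import Qwen2_5_VLForConditionalGeneration

# Half precision + automatic device placement + Flash Attention 2 (optional).
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "E:/huggingface/qwen2.5-vl-3b-instruct",  # local model directory
    torch_dtype=torch.bfloat16,               # or torch.float16
    device_map="auto",
    attn_implementation="flash_attention_2",  # requires flash-attn; omit if unavailable
)
```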
For GGUF Format
- FP16: Best quality, use when VRAM allows (6-8 GB)
- Q5_K_M: Recommended balance of quality and efficiency (3-4 GB VRAM)
- Q4_K_M: Maximum efficiency for resource-constrained systems (2-3 GB VRAM)
- Adjust the `--threads` parameter based on CPU core count
- Use `--n-gpu-layers -1` to offload all layers to GPU when possible (mirrored by `n_gpu_layers` in the Python sketch below)
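Through llama-cpp-python, the same knobs appear as constructor arguments. A minimal text-only sketch (see the multimodal example above for image input):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="E:/huggingface/qwen2.5-vl-3b-instruct/qwen2.5-vl-3b-instruct-abliterated-q5-k-m.gguf",
    n_ctx=4096,       # context window, matches --ctx-size
    n_threads=8,      # match your physical CPU core count (--threads)
    n_gpu_layers=-1,  # offload all layers to GPU (--n-gpu-layers); set 0 for CPU-only
)
```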
Image Preprocessing
- Resize images to reasonable dimensions (e.g., 1024x1024 max) before processing
- Supported formats: JPEG, PNG, WebP, BMP
- Use clear, well-lit images for best results
- Higher resolution images require more VRAM
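A small Pillow sketch that applies these recommendations, capping resolution and normalizing the color mode before the image reaches the processor:

```python
from PIL import Image

def load_image(path: str, max_side: int = 1024) -> Image.Image:
    """Open an image, convert to RGB, and cap its longest side at max_side."""
    image = Image.open(path).convert("RGB")
    image.thumbnail((max_side, max_side))  # in-place resize, preserves aspect ratio
    return image

image = load_image("your_image.jpg")
print(image.size)
```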
Generation Parameters
- Temperature: 0.7-0.9 for creative descriptions, 0.1-0.3 for factual analysis
- Top-p: 0.9-0.95 for diverse outputs, 0.7-0.8 for focused responses
- Max tokens: 256-512 for descriptions, 1024+ for detailed analysis
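These ranges can be kept as reusable keyword presets for `model.generate`; the preset names below are purely illustrative:

```python
# Illustrative presets mirroring the recommendations above.
FACTUAL = dict(do_sample=True, temperature=0.2, top_p=0.8, max_new_tokens=256)
CREATIVE = dict(do_sample=True, temperature=0.8, top_p=0.95, max_new_tokens=512)

# Usage with the Transformers example above:
# generated_ids = model.generate(**inputs, **CREATIVE)
```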
Quantization Information
GGUF Quantization Schemes
| Quantization | Description | Quality Loss | Memory Savings |
|---|---|---|---|
| FP16 | Half precision, no quantization | ~0% | ~50% vs FP32 |
| Q5_K_M | 5-bit quantization, medium variant | <5% | ~75% vs FP32 |
| Q4_K_M | 4-bit quantization, medium variant | 5-10% | ~80% vs FP32 |
Recommendation: Q5_K_M offers the best balance for most use cases, with minimal quality loss and significant memory savings.
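As a rough sanity check on the sizes above, file size scales with bits per weight. The sketch below uses approximate average bit widths for the K-quants and assumes a ~3B-parameter language model; real GGUF files differ because some tensors stay in higher precision:

```python
def approx_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Back-of-the-envelope file size: parameters * bits / 8 bits-per-byte, in GB."""
    return n_params * bits_per_weight / 8 / 1e9

# Approximate average bits per weight (illustrative only).
for name, bits in [("FP16", 16.0), ("Q5_K_M", 5.7), ("Q4_K_M", 4.8)]:
    print(f"{name}: ~{approx_size_gb(3e9, bits):.1f} GB")
```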
License
This model is released under the Apache 2.0 License.
Copyright 2024 Qwen Team
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Important Note: This is an abliterated (uncensored) version with safety filters removed. Users are responsible for ensuring their use complies with applicable laws, regulations, and ethical guidelines.
Citation
If you use this model in your research or applications, please cite:
```bibtex
@article{qwen2vl2024,
  title={Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution},
  author={Qwen Team},
  journal={arXiv preprint arXiv:2409.12191},
  year={2024}
}
```
Related Resources
- Original Qwen2.5-VL: https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct
- Qwen Documentation: https://qwen.readthedocs.io/
- Model Repository: https://github.com/QwenLM/Qwen2-VL
- Transformers Documentation: https://huggingface.co/docs/transformers
- llama.cpp: https://github.com/ggerganov/llama.cpp
Acknowledgments
- Original model developed by the Qwen Team at Alibaba Cloud
- Abliteration removes refusal behavior (typically by ablating the model's internal refusal direction) while preserving core capabilities
- GGUF quantization enables efficient deployment on consumer hardware
Disclaimer
This abliterated model has reduced content filtering. Users must:
- Comply with all applicable laws and regulations
- Use the model responsibly and ethically
- Not use for harmful, illegal, or unethical purposes
- Understand that outputs may be uncensored
The model providers and distributors are not responsible for misuse or outputs generated by this model.