Qwen3-VL-2B-Instruct (Abliterated)

Qwen3-VL-2B-Instruct is a vision-language multimodal model capable of understanding both images and text inputs. This abliterated version has had safety guardrails removed for research and unconstrained creative applications.

Model Description

Qwen3-VL-2B-Instruct-Abliterated is a modified version of Alibaba's Qwen3-VL vision-language model with 2 billion parameters. The model combines:

  • Vision Understanding: Advanced image comprehension and analysis
  • Text Generation: High-quality natural language responses
  • Multimodal Reasoning: Ability to reason about visual and textual information together
  • Instruction Following: Fine-tuned to follow user instructions accurately
  • Uncensored Output: Abliteration process removes refusal training for research applications

This model can perform tasks such as:

  • Image captioning and detailed description
  • Visual question answering (VQA)
  • OCR (Optical Character Recognition)
  • Document understanding and analysis
  • Scene understanding and reasoning
  • Creative visual storytelling

Repository Contents

qwen3-vl-2b-instruct/
├── qwen3-vl-2b-instruct-abliterated-f16.gguf      (3.3 GB)
└── qwen3-vl-2b-instruct-abliterated.safetensors   (4.0 GB)

Total Repository Size: ~7.3 GB

File Descriptions

  • qwen3-vl-2b-instruct-abliterated-f16.gguf - FP16 GGUF format for efficient inference with llama.cpp and compatible frameworks
  • qwen3-vl-2b-instruct-abliterated.safetensors - SafeTensors format for use with the transformers library (a download sketch for either file follows below)
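
If you prefer to fetch the files programmatically rather than cloning the whole repository, something like the following works with huggingface_hub. The repo id is taken from this model page and may differ for mirrors or local re-uploads.

from huggingface_hub import hf_hub_download

# Repo id assumed from this page; adjust if the files are hosted elsewhere
repo_id = "wangkanai/qwen3-vl-2b-instruct"

gguf_path = hf_hub_download(
    repo_id=repo_id,
    filename="qwen3-vl-2b-instruct-abliterated-f16.gguf",
)
print("GGUF saved to:", gguf_path)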

Hardware Requirements

Minimum Requirements

  • VRAM: 4-6 GB (FP16 GGUF format)
  • RAM: 8 GB system memory
  • Disk Space: 8 GB free space
  • GPU: CUDA-compatible GPU recommended

Recommended Requirements

  • VRAM: 8 GB+ (SafeTensors format)
  • RAM: 16 GB system memory
  • Disk Space: 10 GB free space
  • GPU: NVIDIA RTX 3060 or better (a quick hardware self-check script follows below)
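
A quick way to compare your machine against these numbers is a short PyTorch script; this is only a convenience sketch, not part of the model.

import shutil
import torch

# Report GPU VRAM (if any) and free disk space for the current directory
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}, VRAM: {props.total_memory / 1e9:.1f} GB")
else:
    print("No CUDA GPU detected; expect slow CPU-only inference.")

free_gb = shutil.disk_usage(".").free / 1e9
print(f"Free disk space: {free_gb:.1f} GB (8-10 GB recommended)")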

Usage Examples

Using SafeTensors with Transformers

from transformers import AutoModelForImageTextToText, AutoProcessor
from PIL import Image

# Load model and processor (the processor, not a bare tokenizer, handles image inputs)
model_path = r"E:\huggingface\qwen3-vl-2b-instruct"
model = AutoModelForImageTextToText.from_pretrained(
    model_path,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True
)
processor = AutoProcessor.from_pretrained(
    model_path,
    trust_remote_code=True
)

# Load image
image = Image.open("example.jpg")

# Build a chat-formatted prompt with an image placeholder
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image in detail."}
    ]}
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)

# Generate response
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
response = processor.decode(outputs[0], skip_special_tokens=True)

print(response)

Using GGUF with llama.cpp

# Run with llama.cpp's multimodal CLI (in recent llama.cpp builds this binary is
# llama-mtmd-cli; vision input also requires the model's mmproj projector file,
# which is not part of this repository - the filename below is illustrative)
./llama-mtmd-cli \
  --model "E:\huggingface\qwen3-vl-2b-instruct\qwen3-vl-2b-instruct-abliterated-f16.gguf" \
  --mmproj qwen3-vl-2b-mmproj-f16.gguf \
  --image example.jpg \
  --prompt "What do you see in this image?" \
  --n-predict 256 \
  --temp 0.7

Using with LM Studio or Text Generation WebUI

  1. LM Studio:

    • Load the GGUF model file
    • Select vision-language mode
    • Upload an image and provide a text prompt (an OpenAI-compatible local-server sketch follows this list)
  2. Text Generation WebUI (oobabooga):

    • Place the full model folder (SafeTensors plus its config and tokenizer files) in the models/ directory
    • Load the model with trust_remote_code=True
    • Use the multimodal extension for image inputs
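
Both tools can also expose the model over an OpenAI-compatible local HTTP API. The sketch below assumes LM Studio's default endpoint (http://localhost:1234/v1) and an illustrative model name; adjust both to whatever your server actually reports.

import base64
from openai import OpenAI

# Point the OpenAI client at the local server instead of the OpenAI cloud
client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

# Send the image inline as a base64 data URL
with open("example.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="qwen3-vl-2b-instruct-abliterated",  # name as listed by your server
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image in detail."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
    max_tokens=256,
)
print(response.choices[0].message.content)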

Python Vision-Language Example

from transformers import AutoModelForImageTextToText, AutoProcessor
from PIL import Image
import torch

# Load model and processor
model_path = r"E:\huggingface\qwen3-vl-2b-instruct"
processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForImageTextToText.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True
)

# Prepare inputs
image = Image.open("scene.jpg")
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "What is happening in this scene?"}
        ]
    }
]

# Process and generate
text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
inputs = processor(text=[text_prompt], images=[image], return_tensors="pt")
inputs = inputs.to(model.device)

# Generate response
output_ids = model.generate(**inputs, max_new_tokens=512)
generated_text = processor.batch_decode(
    output_ids,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=True
)[0]

print(generated_text)

Model Specifications

| Specification | Details |
|---------------|---------|
| Model Type | Vision-Language Model (VLM) |
| Architecture | Qwen3-VL (Transformer-based) |
| Parameters | 2 billion (2B) |
| Precision | FP16 (SafeTensors), FP16 (GGUF) |
| Context Length | 4096 tokens |
| Vision Encoder | Built-in vision transformer |
| Training Data | Multimodal image-text datasets |
| Modification | Abliterated (safety filters removed) |
| Formats | SafeTensors, GGUF |

Supported Image Formats

  • JPEG, PNG, BMP, TIFF
  • Recommended resolution: 224x224 to 1024x1024
  • Automatic resizing and preprocessing by the processor (a manual pre-resize sketch follows below)
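
If you want to stay inside the recommended range before handing an image to the processor, a manual downscale with Pillow is enough; the 1024-pixel cap below simply mirrors the guideline above.

from PIL import Image

def cap_resolution(path, max_side=1024):
    # Downscale so the longest side is at most max_side, preserving aspect ratio
    image = Image.open(path).convert("RGB")
    if max(image.size) > max_side:
        image.thumbnail((max_side, max_side), Image.LANCZOS)
    return image

image = cap_resolution("large_photo.jpg")
print(image.size)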

Performance Tips

Optimization Recommendations

  1. Use the FP16 GGUF Build for Speed:

    • GGUF format provides faster inference in llama.cpp-based runtimes
    • Smaller file and memory footprint (3.3 GB vs 4.0 GB)
    • Minimal quality loss for most tasks
  2. GPU Acceleration:

    • Enable CUDA for 3-5x speedup
    • Use device_map="auto" for automatic GPU utilization
    • Consider FP16 precision: torch_dtype=torch.float16
  3. Batch Processing:

    • Process multiple images in batches for efficiency
    • Use appropriate batch sizes based on VRAM
  4. Image Preprocessing:

    • Resize large images before processing
    • Use JPEG format for faster loading
    • Normalize images using processor's built-in methods
  5. Memory Management:

    • Clear CUDA cache between large operations
    • Use gradient checkpointing if fine-tuning
    • Monitor VRAM usage with torch.cuda.memory_summary() (see the sketch after this list)
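
As a concrete example of tips 3 and 5, a small helper like the one below can be called between batches to release cached GPU memory and print a usage snapshot; it is a sketch, not a required part of the workflow.

import gc
import torch

def free_and_report():
    # Drop Python references, release cached CUDA blocks, then report usage
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
        allocated = torch.cuda.memory_allocated() / 1e9
        reserved = torch.cuda.memory_reserved() / 1e9
        print(f"allocated: {allocated:.2f} GB, reserved: {reserved:.2f} GB")
        # Uncomment for a full breakdown when debugging fragmentation:
        # print(torch.cuda.memory_summary())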

Expected Performance

  • Inference Speed: 20-50 tokens/second (GPU); a simple way to measure this on your own hardware follows below
  • Image Processing: 0.5-2 seconds per image
  • Memory Usage: 4-8 GB VRAM depending on format
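
To see where your setup lands in these ranges, you can time a single generate() call. The helper below assumes the model and inputs built in the transformers examples above.

import time
import torch

def tokens_per_second(model, inputs, max_new_tokens=256):
    # Time one generation call and return decoded tokens per second
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    start = time.perf_counter()
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    elapsed = time.perf_counter() - start
    new_tokens = output_ids.shape[-1] - inputs["input_ids"].shape[-1]
    return new_tokens / elapsed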

Abliteration Process

What is Abliteration?

Abliteration is a technique that removes safety-refusal mechanisms from language models while preserving their core capabilities (a conceptual sketch of the idea follows the list below). This version has been modified to:

  • Remove content policy restrictions
  • Eliminate refusal responses
  • Enable uncensored creative outputs
  • Maintain model quality and coherence
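
Mechanistically, abliteration is usually described as directional ablation: a "refusal direction" is estimated from activation differences between harmful and harmless prompts and then projected out of the model's hidden states (or folded into its weights). The sketch below only illustrates that projection step with random placeholder tensors; it is not the exact procedure used to produce this checkpoint.

import torch

def ablate_direction(hidden: torch.Tensor, refusal_dir: torch.Tensor) -> torch.Tensor:
    # Remove the component of the hidden states along the (unit) refusal direction
    r = refusal_dir / refusal_dir.norm()
    return hidden - (hidden @ r).unsqueeze(-1) * r

# Placeholder shapes: (batch, seq, d_model) hidden states and one direction
h = torch.randn(1, 8, 2048)
r = torch.randn(2048)
h_ablated = ablate_direction(h, r)
print(float((h_ablated @ (r / r.norm())).abs().max()))  # ~0: direction removed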

Use Cases:

  • Research on model behavior and safety mechanisms
  • Creative writing and storytelling without constraints
  • Academic studies on model alignment
  • Personal experimentation and learning

Responsible Use: This uncensored model should be used responsibly and ethically. Users are accountable for ensuring their applications comply with applicable laws and ethical guidelines.

License

This model is released under the Apache 2.0 License.

License Terms

  • ✅ Commercial use allowed
  • ✅ Modification and distribution permitted
  • ✅ Private use allowed
  • ⚠️ Must include license and copyright notice
  • ⚠️ No warranty provided

Base Model License: The original Qwen3-VL model is licensed under Apache 2.0 by Alibaba Cloud.

Modification Notice: This abliterated version is a derivative work with safety mechanisms removed. Use responsibly and in accordance with applicable laws.

Citation

If you use this model in your research or applications, please cite:

@misc{qwen3vl2b-abliterated,
  title={Qwen3-VL-2B-Instruct-Abliterated},
  author={Abliteration Community},
  year={2024},
  howpublished={\url{https://huggingface.co/qwen3-vl-2b-instruct-abliterated}},
  note={Abliterated version of Qwen3-VL-2B-Instruct}
}

@article{qwen3vl,
  title={Qwen3-VL: Vision-Language Models at Scale},
  author={Alibaba Cloud},
  journal={arXiv preprint},
  year={2024}
}

Links and Resources

Official Resources

Community Resources

Support and Discussion

  • Issues: Report problems with the model
  • Discussions: Share use cases and improvements
  • Pull Requests: Contribute documentation updates

Disclaimer

This abliterated model has had safety mechanisms removed and may generate content without restrictions. Users are solely responsible for:

  • Ensuring legal compliance in their jurisdiction
  • Following ethical guidelines for AI usage
  • Not using the model for harmful or illegal purposes
  • Understanding the implications of uncensored AI outputs

No Warranty: This model is provided "as-is" without any guarantees of accuracy, safety, or fitness for any particular purpose.


Model Version: v1.1 | Last Updated: 2025-10-30 | Format Versions: SafeTensors (4.0 GB), GGUF FP16 (3.3 GB)
