Athene-V2-Chat AWQ 4-Bit Quantized Version

This repository provides the AWQ 4-bit quantized version of the Athene-V2-Chat model, originally developed by Nexusflow. Before quantization, the model's weights were padded with zeros so that the sharded weight dimensions satisfy the divisibility constraints of multi-GPU tensor parallelism; without the padding, some layers cannot be split evenly across GPUs. The padding adds negligible computation while enabling efficient scaling across multiple GPUs, as sketched below.
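
For illustration, the sketch below shows the kind of zero-padding described above. It is not the exact script used to produce this checkpoint: the function name, the AWQ group size of 128, and padding along the output dimension are assumptions. As a concrete case, with 8-way tensor parallelism the Qwen2.5-72B intermediate size of 29568 (231 groups of 128) does not split into whole groups per GPU, so it would be padded up to 29696.

```python
import torch

def zero_pad_out_features(weight: torch.Tensor, tp_size: int,
                          group_size: int = 128) -> torch.Tensor:
    """Zero-pad the output dimension of a [out_features, in_features] weight
    so every tensor-parallel shard holds a whole number of AWQ groups."""
    multiple = tp_size * group_size
    out_features = weight.shape[0]
    padded = -(-out_features // multiple) * multiple  # ceil to next multiple
    if padded == out_features:
        return weight  # already divisible; nothing to do
    pad = torch.zeros(padded - out_features, weight.shape[1],
                      dtype=weight.dtype, device=weight.device)
    # Zero rows contribute nothing to the layer's output, so the extra
    # channels are inert and cost only a little memory and compute.
    return torch.cat([weight, pad], dim=0)

# Example: 29568 rows -> 29696 rows for tp_size=8, group_size=128
w = torch.randn(29568, 8192, dtype=torch.float16)
print(zero_pad_out_features(w, tp_size=8).shape)  # torch.Size([29696, 8192])
```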

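The padded checkpoint can then be served with tensor parallelism. Below is a minimal, hypothetical vLLM example (vLLM supports AWQ checkpoints); adjust `tensor_parallel_size` to the number of GPUs you actually have.

```python
from vllm import LLM, SamplingParams

# Shard the AWQ checkpoint across 4 GPUs; the zero-padding described above
# is what lets each shard line up with whole quantization groups.
llm = LLM(
    model="kosbu/Athene-V2-Chat-AWQ",
    quantization="awq",
    tensor_parallel_size=4,
)
params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Briefly explain tensor parallelism."], params)
print(outputs[0].outputs[0].text)
```
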
Safetensors

- Model size: 12B params
- Tensor types: I32 · F16

Model tree for kosbu/Athene-V2-Chat-AWQ

- Base model: Qwen/Qwen2.5-72B
- Quantized: this model