Text Generation · Transformers · Safetensors

Tags: qwen3_moe, quantized, gptq, w4a16, llm-compressor, qwen3, mixture-of-experts, coding, programming, code generation, code, codeqwen, Mixture of Experts, coder, qwen2, chat, qwen, qwen-coder, Qwen3-30B-A3B-Instruct-2507, Qwen3-30B-A3B, mixture of experts, 128 experts, 8 active experts, 256k context, finetune, brainstorm 20x, brainstorm, optional thinking, rocm, amd, r9700, conversational, compressed-tensors
File size: 759 Bytes · commit c93976e
quant_stage:
  quant_modifiers:
    GPTQModifier:
      config_groups:
        group_0:
          targets: [Linear]
          weights:
            num_bits: 4
            type: int
            symmetric: true
            group_size: 128
            strategy: group
            block_structure: null
            dynamic: false
            actorder: !!python/object/apply:compressed_tensors.quantization.quant_args.ActivationOrdering [static]
            observer: minmax
            observer_kwargs: {}
          input_activations: null
          output_activations: null
          format: null
      targets: [Linear]
      ignore: [lm_head]
      block_size: 128
      dampening_frac: 0.01
      actorder: static
      offload_hessians: false
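A recipe like the one above is normally applied through llm-compressor's oneshot entrypoint. The sketch below is illustrative only: the base model ID, calibration dataset, sample count, and sequence length are assumptions, not the exact settings used to produce this checkpoint.

# Minimal sketch, assuming llm-compressor's oneshot API; values marked
# "assumed" are illustrative, not the settings actually used for this repo.
from llmcompressor import oneshot

oneshot(
    model="Qwen/Qwen3-30B-A3B-Instruct-2507",  # base model (assumed)
    recipe="recipe.yaml",                       # the GPTQ W4A16 recipe shown above, saved locally
    dataset="open_platypus",                    # calibration dataset (assumed choice)
    num_calibration_samples=512,                # calibration budget (assumed)
    max_seq_length=2048,                        # calibration sequence length (assumed)
    output_dir="./Qwen3-30B-A3B-Instruct-2507-W4A16-gptq",
)

The resulting checkpoint is written in the compressed-tensors format and can be loaded through Transformers in the usual way.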