Ben Shankles PRO (warshanks)

AI & ML interests: MLX, AWQ, GPTQ

Recent activity:
- New activity, 11 days ago: inclusionAI/Ring-mini-2.0-GGUF ("Official llama.cpp support merged")
- Liked a model, about 1 month ago: unsloth/Apriel-1.5-15b-Thinker-GGUF
- New activity, about 2 months ago: LiquidAI/LFM2-350M-ENJP-MT-GGUF ("Sensitive to Quantization")
Organizations
AWQ Quants
W4A16 quants made with llmcompressor
MedGemma MLX
MedGemma MLX conversions I've done
- mlx-community/medgemma-4b-it-4bit • Image-Text-to-Text • 0.9B • Updated • 55 • 2
- mlx-community/medgemma-4b-it-6bit • Image-Text-to-Text • 1B • Updated • 24 • 1
- mlx-community/medgemma-4b-it-8bit • Image-Text-to-Text • 1B • Updated • 62 • 1
- mlx-community/medgemma-4b-it-bf16 • Image-Text-to-Text • 5B • Updated • 47 • 1
DeepSeek MLX
DeepSeek MLX conversions I've done
- mlx-community/DeepSeek-R1-0528-Qwen3-8B-4bit • Text Generation • 1B • Updated • 688 • 4
- mlx-community/DeepSeek-R1-0528-Qwen3-8B-6bit • Text Generation • 8B • Updated • 22 • 1
- mlx-community/DeepSeek-R1-0528-Qwen3-8B-8bit • Text Generation • 2B • Updated • 44 • 1
- mlx-community/DeepSeek-R1-0528-Qwen3-8B-bf16 • Text Generation • 8B • Updated • 45 • 2
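Since the same model appears here at 4-bit, 6-bit, 8-bit, and bf16 precision, a rough back-of-envelope sketch (my own estimate, ignoring embeddings, KV cache, and quantization metadata) shows how weight precision drives the download and memory footprint of an 8B-parameter model:

```python
def approx_weights_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate weight storage in GB: parameter count times bits per
    weight, converted from bits to bytes (divide by 8)."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# Estimated weight sizes for an 8B-parameter model across the
# quantization levels listed above:
for label, bits in [("4bit", 4), ("6bit", 6), ("8bit", 8), ("bf16", 16)]:
    print(f"{label}: ~{approx_weights_gb(8, bits):.1f} GB")
# 4bit: ~4.0 GB, 6bit: ~6.0 GB, 8bit: ~8.0 GB, bf16: ~16.0 GB
```

This is only the weights; actual repository sizes also include tokenizer files and, for quantized formats, per-group scale factors.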
Nemotron MLX
Nemotron MLX conversions I've done
- mlx-community/AceReason-Nemotron-7B-4bit • Text Generation • 1B • Updated • 61
- mlx-community/AceReason-Nemotron-7B-8bit • Text Generation • 2B • Updated
- mlx-community/AceReason-Nemotron-7B-bf16 • Text Generation • 8B • Updated • 3
- mlx-community/AceReason-Nemotron-1.1-7B-4bit • Text Generation • 1B • Updated • 1
Abliterated MLX
Abliterated model conversions to MLX
- mlx-community/Josiefied-Health-Qwen3-8B-abliterated-v1-4bit • Text Generation • 1B • Updated • 8
- mlx-community/Josiefied-Health-Qwen3-8B-abliterated-v1-8bit • Text Generation • 2B • Updated • 10
- mlx-community/Josiefied-Health-Qwen3-8B-abliterated-v1-bf16 • Text Generation • 8B • Updated • 14 • 1
- mlx-community/Josiefied-DeepSeek-R1-0528-Qwen3-8B-abliterated-v1-4bit • Text Generation • 1B • Updated • 100 • 1
Menlo Research AWQ
Menlo Research Quantizations
Mistral Quants
Mistral Quantizations
Lingshu MLX
Lingshu MLX conversions
- Lingshu: A Generalist Foundation Model for Unified Multimodal Medical Understanding and Reasoning • Paper • 2506.07044 • Published • 113
- mlx-community/Lingshu-7B-4bit • Image-Text-to-Text • 1B • Updated • 6
- mlx-community/Lingshu-7B-6bit • Image-Text-to-Text • Updated • 3
- mlx-community/Lingshu-7B-8bit • Image-Text-to-Text • Updated • 3
Medical MLX
Healthcare-oriented LLM conversions to MLX
- mlx-community/medgemma-4b-it-4bit • Image-Text-to-Text • 0.9B • Updated • 55 • 2
- mlx-community/medgemma-4b-it-6bit • Image-Text-to-Text • 1B • Updated • 24 • 1
- mlx-community/medgemma-4b-it-8bit • Image-Text-to-Text • 1B • Updated • 62 • 1
- mlx-community/medgemma-4b-it-bf16 • Image-Text-to-Text • 5B • Updated • 47 • 1
TheDrummer MLX
TheDrummer MLX model conversions I've done
MiMo-VL MLX-VLM
Qwen 3 AWQ
- warshanks/Qwen3-30B-A3B-Instruct-2507-AWQ • Text Generation • 5B • Updated • 121
- warshanks/Qwen3-16B-A3B-abliterated-AWQ • Text Generation • 3B • Updated • 45
- warshanks/Huihui-Qwen3-14B-abliterated-v2-AWQ • Text Generation • 3B • Updated • 804 • 2
- warshanks/Qwen3-8B-abliterated-AWQ • Text Generation • 2B • Updated • 4