Active filters: Qwen3
nvidia/Qwen3-Next-80B-A3B-Thinking-NVFP4 • Text Generation • Updated • 9.58k downloads • 9 likes
nvidia/Qwen3-Next-80B-A3B-Instruct-NVFP4 • Text Generation • Updated • 4.09k downloads • 4 likes
DavidAU/Qwen3-48B-A4B-Savant-Commander-GATED-12x-Closed-Open-Source-Distill-GGUF • Text Generation • 34B • Updated • 2.24k downloads • 13 likes
nightmedia/Qwen3-30B-A3B-Architect18-qx64-hi-mlx • Text Generation • 31B • Updated • 143 downloads • 3 likes
McG-221/Qwen3-30B-A3B-Architect18-SE_Sarek-Edition-mlx-8Bit • Text Generation • 31B • Updated • 113 downloads • 2 likes
mradermacher/Qwen3-14B-Scientist-BF16-i1-GGUF • 15B • Updated • 2.08k downloads • 2 likes
DavidAU/Qwen3-4B-Gemini-TripleX-High-Reasoning-Thinking-Heretic-Uncensored-GGUF • Text Generation • 4B • Updated • 3.51k downloads • 11 likes
Text Generation • 19B • Updated • 249 downloads • 4 likes
NVFP4/Qwen3-30B-A3B-Thinking-2507-FP4 • Text Generation • 16B • Updated • 248 downloads • 4 likes
JunHowie/Qwen3-4B-Instruct-2507-GPTQ-Int4 • Text Generation • 4B • Updated • 1.77k downloads • 1 like
Text Generation • 5B • Updated • 4.13k downloads • 8 likes
mradermacher/Qwen3-0.6B-Heretic-Heresy-Edition-GGUF • 0.6B • Updated • 41 downloads • 1 like
ChiKoi7/GPT-5-Distill-Qwen3-4B-Instruct-Heretic-GGUF • 4B • Updated • 3.19k downloads • 1 like
mradermacher/Qwen3-30B-A3B-Architect18-i1-GGUF • 31B • Updated • 1.89k downloads • 1 like
mradermacher/Qwen3-14B-Scientist-BF16-GGUF • 15B • Updated • 419 downloads • 1 like
nightmedia/Qwen3-14B-Scientist-BF16 • Text Generation • Updated • 2 downloads • 1 like
McG-221/Qwen3-30B-A3B-Element5-mlx-8Bit • Text Generation • 31B • Updated • 137 downloads • 1 like
DavidAU/Qwen3-48B-A4B-Savant-Commander-Distill-12X-Closed-Open-Heretic-Uncensored-GGUF • Text Generation • 34B • Updated • 4.75k downloads • 18 likes
DavidAU/Qwen3-8B-Hivemind-Instruct-Heretic-Abliterated-Uncensored-NEO-Imatrix-GGUF • Text Generation • 8B • Updated • 10.1k downloads • 9 likes
DavidAU/Qwen3-24B-A4B-Freedom-HQ-Thinking-Abliterated-Heretic-NEOMAX-Imatrix-GGUF • Text Generation • 18B • Updated • 5.8k downloads • 10 likes
DavidAU/Qwen3-4B-Hivemind-Instruct-Heretic-Abliterated-Uncensored-NEO-Imatrix-GGUF • Text Generation • 4B • Updated • 3.25k downloads • 6 likes
DavidAU/Qwen3-24B-A4B-Freedom-Thinking-Abliterated-Heretic-NEO-Imatrix-GGUF • Text Generation • 17B • Updated • 8.28k downloads • 16 likes
JunHowie/Qwen3-0.6B-GPTQ-Int4 • Text Generation • 0.6B • Updated • 433 downloads • 1 like
JunHowie/Qwen3-0.6B-GPTQ-Int8 • Text Generation • 0.6B • Updated • 21
JunHowie/Qwen3-1.7B-GPTQ-Int4 • Text Generation • 2B • Updated • 459 downloads • 1 like
JunHowie/Qwen3-1.7B-GPTQ-Int8 • Text Generation • 2B • Updated • 17
JunHowie/Qwen3-32B-GPTQ-Int4 • Text Generation • 33B • Updated • 800 downloads • 3 likes
JunHowie/Qwen3-32B-GPTQ-Int8 • Text Generation • 33B • Updated • 254 downloads • 3 likes
JunHowie/Qwen3-30B-A3B-GPTQ-Int4 • Text Generation • 5B • Updated • 32 downloads • 1 like
Text Generation • Updated • 15
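
A listing like the one above can also be pulled programmatically from the Hub. Below is a minimal sketch using the huggingface_hub Python client's list_models call; the search term, pipeline tag, sort order, and result limit are assumptions chosen to mirror this filtered view, not the exact query behind the page.

```python
# Minimal sketch: query the Hugging Face Hub for Qwen3 text-generation models,
# ordered by download count, similar to the filtered listing above.
# Assumes the huggingface_hub client is installed (pip install huggingface_hub).
from huggingface_hub import list_models

models = list_models(
    search="Qwen3",                  # free-text filter, as in "Active filters: Qwen3"
    pipeline_tag="text-generation",  # restrict to the Text Generation task
    sort="downloads",                # order by download count
    direction=-1,                    # descending
    limit=30,                        # roughly one page of results
)

for m in models:
    # Each ModelInfo carries the repo id plus download and like counts.
    print(f"{m.id} • {m.downloads} downloads • {m.likes} likes")
```

The same call can be narrowed further (for example by author or library tag) if only a subset of these repositories is of interest.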