Schema: Model (string), Size (string), Precision (string), GPU_Type (string), Num_GPUs (int64), Serving_Engine (string), Concurrency (int64), Tokens_per_sec (float64), TTFT_ms (float64), TPOT_ms (float64), Prompt_Tokens (int64), Output_Tokens (int64), Context_Window (int64), Quantization (string), Source_URL (string), Source_Notes (string). Missing values are recorded as the literal null.
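For working with the data, a minimal loading sketch follows. It assumes the table is exported as CSV under the hypothetical name llm_serving_benchmarks.csv; the pandas-based approach and the file name are illustrative assumptions, not part of any cited benchmark.

```python
import pandas as pd

# Minimal loading sketch. Assumptions: the table is exported as CSV under
# the hypothetical name "llm_serving_benchmarks.csv", missing cells hold the
# literal string "null", and large numbers keep their comma separators.
df = pd.read_csv(
    "llm_serving_benchmarks.csv",
    na_values=["null"],   # map the table's "null" cells to missing values
    thousands=",",        # parse "3,362.71" and "8,192" as numbers
)
df = df.convert_dtypes()  # infer nullable dtypes matching the schema above
                          # (string / Int64 / Float64)

# Example query: measured (non-null) single-GPU throughput, highest first.
single_gpu = df[(df["Num_GPUs"] == 1) & df["Tokens_per_sec"].notna()]
print(
    single_gpu[["Model", "GPU_Type", "Serving_Engine", "Tokens_per_sec"]]
    .sort_values("Tokens_per_sec", ascending=False)
)
```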
| Model | Size | Precision | GPU_Type | Num_GPUs | Serving_Engine | Concurrency | Tokens_per_sec | TTFT_ms | TPOT_ms | Prompt_Tokens | Output_Tokens | Context_Window | Quantization | Source_URL | Source_Notes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| DeepSeek-R1-Distill-Qwen | 7B | FP16 | NVIDIA A100 | 1 | vLLM | 50 | 3,362.71 | 309.36 | 14.96 | null | null | 8,192 | null | null | DeepSeek distill vLLM test on A100; concurrency=50 |
| DeepSeek-R1-Distill-Qwen | 14B | FP16 | NVIDIA A100 | 1 | vLLM | 50 | 3,003.57 | 579.43 | 25.31 | null | null | 8,192 | null | null | DeepSeek distill vLLM test on A100; concurrency=50 |
| DeepSeek-R1-Distill-Qwen | 32B | FP16 | NVIDIA A100 | 1 | vLLM | 50 | 577.17 | 1,299.31 | 52.65 | null | null | 8,192 | null | null | DeepSeek distill vLLM test on A100; concurrency=50 |
| QwQ (Qwen preview) | 32B | FP16 | NVIDIA A100 | 1 | vLLM | 50 | 615.31 | 1,301.37 | 59.92 | null | null | 8,192 | null | null | QwQ preview on vLLM; concurrency=50 |
| Gemma-2 | 9B | FP16 | NVIDIA A100 | 1 | vLLM | 50 | 1,868.44 | 405.48 | 71.98 | null | null | 8,192 | null | null | Gemma-2 9B vLLM; concurrency=50 |
| Gemma-2 | 27B | FP16 | NVIDIA A100 | 1 | vLLM | 50 | 495.93 | 1,109.74 | 45.87 | null | null | 8,192 | null | null | Gemma-2 27B vLLM; concurrency=50 |
| DeepSeek-R1-Distill-Llama | 8B | FP16 | NVIDIA A100 | 1 | vLLM | 50 | 3,003.57 | 327.75 | 17.88 | null | null | 8,192 | null | null | DeepSeek distill Llama-8B vLLM; concurrency=50 |
| Llama-3.1 | 8B | FP8 | NVIDIA B200 | 8 | vLLM | null | 128,794 | null | null | null | null | null | null | null | MLPerf-style server aggregate; engine vLLM. [1] indicates 8xB200 hits ~160k tok/s. |
| Llama-3.1 | 8B | FP8 | NVIDIA H200 | 8 | vLLM | null | 64,915 | null | null | null | null | null | null | null | MLPerf-style server aggregate; engine vLLM. [1] indicates 8xH200 hits ~140k tok/s. |
| Llama-3.1 | 70B | BF16 | Intel Gaudi 3 | 8 | vLLM | null | 21,264 | null | null | null | null | null | null | null | Intel/third-party measurement; open weights |
| Llama-3.1 | 70B | BF16 | NVIDIA H200 | 8 | SGLang | 10 | null | 7.292 | 0.042 | 4,096 | 256 | null | null | [2] | VMware benchmark; E2E latency 18 ms. TPOT is extremely low. |
| Llama-3.1 | 70B | BF16 | AMD MI300X | 8 | vLLM | null | null | null | null | null | null | null | null | [3] | 1.8x higher throughput and 5.1x faster TTFT than TGI at 32 QPS. |
| Llama-3.1 | 70B | FP8 | NVIDIA B200 | 8 | vLLM | null | null | null | null | null | null | null | null | null | Target row; populate once B200 MLPerf v5.1+ data is available. |
| Llama-3.1 | 405B | FP8 | NVIDIA H100 | 8 | vLLM | null | 291.5 | null | null | null | null | null | null | null | Approximate aggregate tok/s reported; low concurrency. |
| Llama-3.1 | 405B | FP8 | AMD MI300X | 8 | vLLM (ROCm) | 256 | 1,846 | null | 138.67 | 128 | 128 | null | null | [4] | E2E latency 17.75 s; 17,750 ms / 128 tokens = 138.67 ms TPOT (see the derivation sketch below the table). |
| Qwen-2.5 | 7B | BF16 | NVIDIA L40S | 1 | vLLM | 32 | null | null | null | null | null | null | AWQ | [5] | Target row. Source [5] inaccessible. |
| Qwen-2.5 | 14B | BF16 | NVIDIA L40S | 1 | vLLM | 32 | null | null | null | null | null | null | AWQ | [5] | Target row. Source [5] inaccessible. |
| Qwen-2.5 | 32B | BF16 | NVIDIA H100 | 1 | vLLM | 64 | null | null | null | null | null | null | AWQ | [5] | Target row. Source [5] inaccessible. |
| Qwen-2.5 | 72B | BF16 | NVIDIA H100 | 8 | SGLang | 128 | null | null | null | null | null | null | null | [6] | Target row. Source [6] confirms vLLM/SGLang tests on 8xH100 but provides no hard numbers. |
| Qwen-3 | 14B | BF16 | NVIDIA H200 | 1 | vLLM | 64 | null | null | null | null | null | null | AWQ | null | Target row for 2025 posts with concurrency curves. |
| Qwen-3 | 32B | BF16 | NVIDIA H200 | 4 | vLLM | 128 | null | null | null | null | null | null | AWQ | null | Target row for 2025 posts with concurrency curves. |
| Qwen-3 | 72B | BF16 | NVIDIA B200 | 8 | SGLang | 128 | null | null | null | null | null | null | null | null | Target row; large-model serving. |
| Qwen-3 | 110B | BF16 | AMD MI300X | 8 | vLLM (ROCm) | 128 | null | null | null | null | null | null | null | [7] | Target row; populate from ROCm case studies. Source [7] confirms support but gives no metrics. |
| Qwen-3 | 235B | BF16 | Intel Gaudi 3 | 8 | SGLang | 64 | null | null | null | null | null | null | null | [8] | Target row; [8] references this but provides no data. |
| Qwen-3 | 235B | BF16 | NVIDIA H200 | 4 | SGLang | 32 | null | null | null | 1,000 | 1,000 | null | FP8 | [9] | SGLang benchmark on H200 (proxy for B200); 45 tok/s *per user*, 1,400 tok/s *total*. |
| DeepSeek-V3-Base | 37B | BF16 | NVIDIA H100 | 1 | vLLM | 32 | null | null | null | null | null | null | null | [10] | Target row. [10] confirms 671B total / 37B active params. |
| DeepSeek-V3 | 37B | BF16 | NVIDIA H100 | 4 | SGLang | 128 | null | null | null | null | null | null | null | [11] | Target row; [11, 12] confirm SGLang support and optimizations. |
| DeepSeek-R1-Distill | 70B | BF16 | NVIDIA H200 | 8 | vLLM | 128 | null | null | null | null | null | null | null | [13] | Target row. [13] lists 8-GPU (latency) and 4-GPU (throughput) optimized configs. |
| DeepSeek-R1-Distill | 70B | BF16 | AMD MI355X | 8 | vLLM (ROCm) | 128 | null | null | null | null | null | null | null | [7] | Target row; [7] confirms platform support, no metrics provided. |
| DeepSeek-R1-Distill | 32B | BF16 | Intel Gaudi 3 | 4 | SGLang | 64 | null | null | null | null | null | null | null | null | Target row. |
| Gemma-3 | 12B | BF16 | NVIDIA H100 | 1 | vLLM | 32 | 477.49 | null | null | null | null | null | null | [14] | Low end of a 50-concurrency benchmark range (477-4,193 tok/s). |
| Gemma-3 | 27B | BF16 | NVIDIA H200 | 1 | vLLM | 64 | null | null | null | null | null | null | null | [15] | Target row. [15] discusses benchmarking but provides no results. |
| Gemma-2 | 9B | BF16 | NVIDIA L40S | 1 | SGLang | 32 | null | null | null | null | null | null | null | null | Target row. |
| Gemma-2 | 27B | BF16 | Intel Gaudi 3 | 2 | vLLM | 32 | null | null | null | null | null | null | null | [16] | Target row. [16] mentions Gaudi 2, not 3. |
| Phi-4 | 14B | BF16 | NVIDIA H100 | 1 | vLLM | 32 | 260.01 | null | null | null | null | null | null | null | Microsoft Phi-4 speed note; [17] confirms benchmarking at 16 RPS. |
| Phi-4-mini | 3.8B | FP16 | NVIDIA A100 | 1 | vLLM | 64 | null | null | null | null | null | null | INT4/FP8 | [18] | Target row. [18] notes tokenizer bugs impacting vLLM use. |
| Yi-1.5 | 9B | FP16 | NVIDIA H100 | 1 | vLLM | 32 | null | null | null | null | null | null | null | null | Target row. |
| Yi-1.5 | 34B | BF16 | NVIDIA H200 | 2 | SGLang | 64 | null | null | null | null | null | null | null | null | Target row. |
| Mixtral | 8x7B MoE | BF16 | NVIDIA H100 | 1 | vLLM | 32 | null | null | null | null | null | null | null | [19] | Target row. [19] confirms vLLM/TP/PP benchmarks exist. 2 active experts. |
| Mixtral | 8x22B MoE | BF16 | NVIDIA H200 | 8 | SGLang | 128 | null | null | null | null | null | null | null | [20] | Target row. [20] notes MoE complexity, no hard numbers. |
| DBRX | 132B | BF16 | NVIDIA H100 | 8 | vLLM | 64 | null | null | null | null | null | null | null | [21] | Target row. 4 active experts. [21] notes 2x+ throughput over a 70B dense model at batch > 32. |
| DBRX | 132B | BF16 | AMD MI300X | 8 | vLLM (ROCm) | 64 | null | null | null | null | null | null | null | null | Target row. |
| Llama-3.2 | 3B | BF16 | NVIDIA L40S | 1 | vLLM | 32 | 95 | null | null | 128 | 2,048 | 131,072 | null | null | Single-GPU L40S example. |
| Hermes-3 (Llama-3.2) | 3B | BF16 | NVIDIA RTX 4090 | 1 | vLLM | null | 60.69 | null | null | null | null | null | null | [22] | SGLang vs vLLM benchmark; RTX 4090 used as a proxy for L40S. |
| Hermes-3 (Llama-3.2) | 3B | BF16 | NVIDIA RTX 4090 | 1 | SGLang | null | 118.34 | null | null | null | null | null | null | [22] | SGLang is ~2x faster than vLLM on this small model. |
| Llama-3.1 | 8B | BF16 | Intel Gaudi 3 | 1 | SGLang | 32 | null | null | null | null | null | null | null | [23] | Target row. [23, 24] confirm SGLang support on Gaudi 3. |
| Llama-3.1 | 8B | BF16 | Intel Gaudi 3 | 1 | vLLM | 1,000 | 9,579.96 | null | null | null | null | null | null | [25] | Added row. Total throughput at 1,000 concurrent requests (27.7 QPS). |
| Llama-3.1 | 8B | BF16 | AMD MI300X | 1 | vLLM (ROCm) | null | 18,752 | null | null | null | null | null | null | [1] | Added row. Single-GPU benchmark; compare to H200 (25k tok/s). |
| Llama-3.1 | 70B | BF16 | NVIDIA L40S | 8 | vLLM | 64 | null | null | null | null | null | null | null | [26] | Target row. [26] notes 8x L40S used for DBRX (132B), so 70B is feasible. |
| Llama-3.1 | 70B | BF16 | Intel Gaudi 3 | 8 | SGLang | 128 | null | null | null | null | null | null | null | [27] | Target row. [27] confirms vLLM FP8 calibration for 70B on Gaudi. |
| Llama-3.1 | 70B | BF16 | Intel Gaudi 3 | 4 | vLLM | 1,000 | 9,072.96 | null | null | null | null | null | null | [25] | Added row. Normalized throughput (per-param basis) at 1,000 requests. |
| Mistral | 7B | BF16 | Intel Gaudi 3 | 1 | vLLM | 1,000 | 10,382.47 | null | 38.54 | null | null | null | null | [25] | Added row. 23.51 QPS; TPOT in ms per token. |
| Qwen-3-Math | 72B | BF16 | NVIDIA H200 | 8 | vLLM | 64 | null | null | null | null | null | null | null | null | Target row. |
| Qwen-2.5-Coder | 32B | BF16 | NVIDIA H100 | 2 | SGLang | 64 | null | null | null | null | null | null | null | [28] | Target row. [28] discusses training, not inference. |
| Phi-4 | 14B | BF16 | Intel Gaudi 3 | 1 | vLLM | 32 | null | null | null | null | null | null | null | [29] | Target row. [29] confirms FP8 support on Gaudi. |
| Gemma-2 | 27B | BF16 | AMD MI355X | 4 | vLLM (ROCm) | 64 | null | null | null | null | null | null | null | [30] | Target row. [30] confirms "Paiton" optimizations for Gemma 2 27B on AMD. |
| Yi-1.5 | 34B | BF16 | Intel Gaudi 3 | 4 | SGLang | 64 | null | null | null | null | null | null | null | null | Target row. |
| Qwen-3 | 110B | BF16 | NVIDIA B200 | 8 | vLLM | 128 | null | null | null | null | null | null | null | null | Target row. |
| Qwen-3 | 235B | BF16 | NVIDIA B200 | 8 | SGLang | 128 | null | null | null | null | null | null | null | null | Target row. |
| Llama-3.1-8B-Instruct | 8B | BF16 | NVIDIA H100 | 1 | vLLM (v0) | 5 | 588.62 | 318 | null | null | null | null | null | [31] | vLLM v0.9.0 benchmark; avg latency 16.98 s. |
| Llama-3.1-8B-Instruct | 8B | BF16 | NVIDIA H100 | 1 | vLLM (v0) | 50 | 2,742.96 | 357 | null | null | null | null | null | [31] | vLLM v0.9.0 benchmark; avg latency 26.18 s. |
| Llama-3.1-8B-Instruct | 8B | BF16 | NVIDIA H100 | 1 | vLLM (v0) | 100 | 2,744.1 | 415 | null | null | null | null | null | [31] | vLLM v0.9.0 benchmark; avg latency 26.16 s. |
| Llama-3.1-8B-Instruct | 8B | BF16 | NVIDIA H100 | 1 | vLLM (v1) | 5 | 634.87 | 276 | null | null | null | null | null | [31] | vLLM v0.9.0 (V1 scheduler) benchmark; avg latency 15.75 s. |
| Llama-3.1-8B-Instruct | 8B | BF16 | NVIDIA H100 | 1 | vLLM (v1) | 50 | 3,141.16 | 348 | null | null | null | null | null | [31] | vLLM v0.9.0 (V1 scheduler) benchmark; avg latency 22.80 s. |
| Llama-3.1-8B-Instruct | 8B | BF16 | NVIDIA H100 | 1 | vLLM (v1) | 100 | 3,036.62 | 373 | null | null | null | null | null | [31] | vLLM v0.9.0 (V1 scheduler) benchmark; avg latency 23.59 s. |
| Llama-3.1-8B-Instruct | 8B | BF16 | NVIDIA H100 | 1 | SGLang | 5 | 666.54 | 136 | null | null | null | null | null | [31] | SGLang v0.4.9 benchmark; avg latency 15.00 s. Note 2x better TTFT vs vLLM. |
| Llama-3.1-8B-Instruct | 8B | BF16 | NVIDIA H100 | 1 | SGLang | 50 | 3,077.68 | 258 | null | null | null | null | null | [31] | SGLang v0.4.9 benchmark; avg latency 23.38 s. |
| Llama-3.1-8B-Instruct | 8B | BF16 | NVIDIA H100 | 1 | SGLang | 100 | 3,088.08 | 254 | null | null | null | null | null | [31] | SGLang v0.4.9 benchmark; avg latency 23.29 s. Note stable TTFT. |
| Llama-3.1 | 70B | FP8 | NVIDIA H100 | 2 | vLLM | 1 | 35 | null | null | null | null | null | null | [32] | Sequential requests. |
| Llama-3.1 | 70B | FP8 | NVIDIA H100 | 2 | SGLang | 1 | 38 | null | null | null | null | null | null | [32] | Sequential requests. |
| Llama-3.1 | 70B | FP8 | NVIDIA H100 | 2 | vLLM | null | null | null | null | null | null | null | null | [32] | Concurrent requests; performance *collapses* by ~50%. |
| Llama-3.1 | 70B | FP8 | NVIDIA H100 | 2 | SGLang | null | null | null | null | null | null | null | null | [32] | Concurrent requests; performance is *stable*. |
| Llama-3.1 | 8B | BF16 | NVIDIA H100 | 1 | vLLM | 1 | 80 | null | null | null | null | null | null | [32] | Sequential requests. |
| Llama-3.1 | 8B | BF16 | NVIDIA H100 | 1 | SGLang | 1 | 91 | null | null | null | null | null | null | [32] | Sequential requests. |
| Llama-3.1 | 8B | BF16 | NVIDIA H100 | 1 | vLLM | null | null | null | null | null | null | null | null | [32] | Concurrent requests; performance *collapses* by >50%. |
| Llama-3.1 | 8B | BF16 | NVIDIA H100 | 1 | SGLang | null | null | null | null | null | null | null | null | [32] | Concurrent requests; performance is *stable*. |
| Qwen-1.5B | 1.5B | null | null | 1 | vLLM | null | 98.27 | null | null | null | null | null | null | null | Latency 0.13 s; precision and hardware not specified. |
| Qwen-1.5B | 1.5B | null | null | 1 | SGLang | null | 210.48 | null | null | null | null | null | null | null | Latency 0.58 s; precision and hardware not specified. |
| Hermes-3 | null | null | null | 1 | vLLM | null | 60.69 | null | null | null | null | null | null | null | Latency 0.21 s; model size, precision, and hardware not specified. Throughput matches the [22] RTX 4090 vLLM row above. |
| Hermes-3 | null | null | null | 1 | SGLang | null | 118.34 | null | null | null | null | null | null | null | Latency 1.03 s; model size, precision, and hardware not specified. Throughput matches the [22] RTX 4090 SGLang row above. |
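Two conventions behind the numbers above are easy to misread: some rows derive TPOT from end-to-end latency rather than measuring decode time directly (e.g. the [4] MI300X 405B row), and throughput may be reported per user or as a total across concurrent requests (e.g. the [9] H200 row). The sketch below is illustrative Python, not code from any cited benchmark; the (E2E - TTFT)/(tokens - 1) variant is the more common decode-only TPOT definition, while the fallback matches the cruder E2E/tokens arithmetic used in the [4] row.

```python
def tpot_ms(e2e_ms: float, output_tokens: int, ttft_ms: float | None = None) -> float:
    """Mean time per output token (TPOT) in milliseconds.

    With TTFT available, exclude it so TPOT reflects decode speed only.
    Without it, fall back to the cruder E2E/tokens ratio used in the
    [4] MI300X 405B row (17,750 ms / 128 tokens = 138.67 ms).
    """
    if ttft_ms is not None and output_tokens > 1:
        return (e2e_ms - ttft_ms) / (output_tokens - 1)
    return e2e_ms / output_tokens


def aggregate_tok_per_sec(per_user_tok_per_sec: float, concurrency: int) -> float:
    """Total serving throughput if every one of `concurrency` users sees the
    same per-user decode rate (an idealization; real engines degrade)."""
    return per_user_tok_per_sec * concurrency


# Reproduce the [4] row's TPOT from its E2E latency and output length.
assert round(tpot_ms(17_750, 128), 2) == 138.67

# The [9] row: 45 tok/s per user at concurrency 32 -> 1,440 total,
# consistent with the ~1,400 tok/s total reported there.
assert aggregate_tok_per_sec(45, 32) == 1_440
```

When both TTFT and TPOT columns are populated, the decode-only variant is the safer way to compare engines, since it does not penalize a run for a long prefill.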