| Model | Size | Precision | GPU_Type | Num_GPUs | Serving_Engine | Concurrency | Tokens_per_sec | TTFT_ms | TPOT_ms | Prompt_Tokens | Output_Tokens | Context_Window | Quantization | Source_URL | Source_Notes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| DeepSeek-R1-Distill-Qwen | 7B | FP16 | NVIDIA A100 | 1 | vLLM | 50 | 3,362.71 | 309.36 | 14.96 | null | null | 8,192 | null | null | DeepSeek distill vLLM test on A100; concurrency=50 |
| DeepSeek-R1-Distill-Qwen | 14B | FP16 | NVIDIA A100 | 1 | vLLM | 50 | 3,003.57 | 579.43 | 25.31 | null | null | 8,192 | null | null | DeepSeek distill vLLM test on A100; concurrency=50 |
| DeepSeek-R1-Distill-Qwen | 32B | FP16 | NVIDIA A100 | 1 | vLLM | 50 | 577.17 | 1,299.31 | 52.65 | null | null | 8,192 | null | null | DeepSeek distill vLLM test on A100; concurrency=50 |
| QwQ (Qwen preview) | 32B | FP16 | NVIDIA A100 | 1 | vLLM | 50 | 615.31 | 1,301.37 | 59.92 | null | null | 8,192 | null | null | QwQ preview on vLLM; concurrency=50 |
| Gemma-2 | 9B | FP16 | NVIDIA A100 | 1 | vLLM | 50 | 1,868.44 | 405.48 | 71.98 | null | null | 8,192 | null | null | Gemma-2 9B vLLM; concurrency=50 |
| Gemma-2 | 27B | FP16 | NVIDIA A100 | 1 | vLLM | 50 | 495.93 | 1,109.74 | 45.87 | null | null | 8,192 | null | null | Gemma-2 27B vLLM; concurrency=50 |
| DeepSeek-R1-Distill-Llama | 8B | FP16 | NVIDIA A100 | 1 | vLLM | 50 | 3,003.57 | 327.75 | 17.88 | null | null | 8,192 | null | null | DeepSeek distill Llama-8B vLLM; concurrency=50 |
| Llama-3.1 | 8B | FP8 | NVIDIA B200 | 8 | vLLM | null | 128,794 | null | null | null | null | null | null | null | MLPerf-style server aggregate; engine vLLM. [1] indicates 8xB200 hits ~160k tok/s. |
| Llama-3.1 | 8B | FP8 | NVIDIA H200 | 8 | vLLM | null | 64,915 | null | null | null | null | null | null | null | MLPerf-style server aggregate; engine vLLM. [1] indicates 8xH200 hits ~140k tok/s. |
| Llama-3.1 | 70B | BF16 | Intel Gaudi 3 | 8 | vLLM | null | 21,264 | null | null | null | null | null | null | null | Intel/third-party measurement; open weights |
| Llama-3.1 | 70B | BF16 | NVIDIA H200 | 8 | SGLang | 10 | null | 7.292 | 0.042 | 4,096 | 256 | null | null | [2] | VMware benchmark; E2E latency 18ms. TPOT is extremely low. |
| Llama-3.1 | 70B | BF16 | AMD MI300X | 8 | vLLM | null | null | null | null | null | null | null | null | [3] | 1.8x higher throughput and 5.1x faster TTFT than TGI at 32 QPS. |
| Llama-3.1 | 70B | FP8 | NVIDIA B200 | 8 | vLLM | null | null | null | null | null | null | null | null | null | Target row; populate once B200 MLPerf v5.1+ data is available. |
| Llama-3.1 | 405B | FP8 | NVIDIA H100 | 8 | vLLM | null | 291.5 | null | null | null | null | null | null | null | Approx. aggregate tok/s reported; low concurrency. |
| Llama-3.1 | 405B | FP8 | AMD MI300X | 8 | vLLM (ROCm) | 256 | 1,846 | null | 138.67 | 128 | 128 | null | null | [4] | E2E latency 17.75 s (17,750 ms / 128 tokens = 138.67 ms TPOT). |
| Qwen-2.5 | 7B | BF16 | NVIDIA L40S | 1 | vLLM | 32 | null | null | null | null | null | null | AWQ | [5] | Target row. Source [5] inaccessible. |
| Qwen-2.5 | 14B | BF16 | NVIDIA L40S | 1 | vLLM | 32 | null | null | null | null | null | null | AWQ | [5] | Target row. Source [5] inaccessible. |
| Qwen-2.5 | 32B | BF16 | NVIDIA H100 | 1 | vLLM | 64 | null | null | null | null | null | null | AWQ | [5] | Target row. Source [5] inaccessible. |
| Qwen-2.5 | 72B | BF16 | NVIDIA H100 | 8 | SGLang | 128 | null | null | null | null | null | null | null | [6] | Target row. Source [6] confirms vLLM/SGLang tests on 8xH100 but provides no hard numbers. |
| Qwen-3 | 14B | BF16 | NVIDIA H200 | 1 | vLLM | 64 | null | null | null | null | null | null | AWQ | null | Target row for 2025 posts with concurrency curves. |
| Qwen-3 | 32B | BF16 | NVIDIA H200 | 4 | vLLM | 128 | null | null | null | null | null | null | AWQ | null | Target row for 2025 posts with concurrency curves. |
| Qwen-3 | 72B | BF16 | NVIDIA B200 | 8 | SGLang | 128 | null | null | null | null | null | null | null | null | Target row; large-model serving. |
| Qwen-3 | 110B | BF16 | AMD MI300X | 8 | vLLM (ROCm) | 128 | null | null | null | null | null | null | null | [7] | Target row; populate from ROCm case studies. Source [7] confirms support but gives no metrics. |
| Qwen-3 | 235B | BF16 | Intel Gaudi 3 | 8 | SGLang | 64 | null | null | null | null | null | null | null | [8] | Target row; [8] references this configuration but provides no data. |
| Qwen-3 | 235B | BF16 | NVIDIA H200 | 4 | SGLang | 32 | null | null | null | 1,000 | 1,000 | null | FP8 | [9] | SGLang benchmark on H200 (proxy for B200). 45 tok/s *per user*; 1,400 tok/s *total*. |
| DeepSeek-V3-Base | 37B | BF16 | NVIDIA H100 | 1 | vLLM | 32 | null | null | null | null | null | null | null | [10] | Target row. [10] confirms 671B total / 37B active params. |
| DeepSeek-V3 | 37B | BF16 | NVIDIA H100 | 4 | SGLang | 128 | null | null | null | null | null | null | null | [11] | Target row; [11, 12] confirm SGLang support and optimizations. |
| DeepSeek-R1-Distill | 70B | BF16 | NVIDIA H200 | 8 | vLLM | 128 | null | null | null | null | null | null | null | [13] | Target row. [13] lists 8-GPU (latency) and 4-GPU (throughput) optimized configs. |
| DeepSeek-R1-Distill | 70B | BF16 | AMD MI355X | 8 | vLLM (ROCm) | 128 | null | null | null | null | null | null | null | [7] | Target row; [7] confirms platform support, no metrics provided. |
| DeepSeek-R1-Distill | 32B | BF16 | Intel Gaudi 3 | 4 | SGLang | 64 | null | null | null | null | null | null | null | null | Target row. |
| Gemma-3 | 12B | BF16 | NVIDIA H100 | 1 | vLLM | 32 | 477.49 | null | null | null | null | null | null | [14] | Low end of a 50-concurrency benchmark range (477-4193 tok/s). |
| Gemma-3 | 27B | BF16 | NVIDIA H200 | 1 | vLLM | 64 | null | null | null | null | null | null | null | [15] | Target row. [15] discusses benchmarking but provides no results. |
| Gemma-2 | 9B | BF16 | NVIDIA L40S | 1 | SGLang | 32 | null | null | null | null | null | null | null | null | Target row. |
| Gemma-2 | 27B | BF16 | Intel Gaudi 3 | 2 | vLLM | 32 | null | null | null | null | null | null | null | [16] | Target row. [16] mentions Gaudi 2, not 3. |
| Phi-4 | 14B | BF16 | NVIDIA H100 | 1 | vLLM | 32 | 260.01 | null | null | null | null | null | null | null | Microsoft Phi-4 speed note; [17] confirms benchmarking at 16 RPS. |
| Phi-4-mini | 3.8B | FP16 | NVIDIA A100 | 1 | vLLM | 64 | null | null | null | null | null | null | INT4/FP8 | [18] | Target row. [18] notes tokenizer bugs impacting vLLM use. |
| Yi-1.5 | 9B | FP16 | NVIDIA H100 | 1 | vLLM | 32 | null | null | null | null | null | null | null | null | Target row. |
| Yi-1.5 | 34B | BF16 | NVIDIA H200 | 2 | SGLang | 64 | null | null | null | null | null | null | null | null | Target row. |
| Mixtral | 8x7B MoE | BF16 | NVIDIA H100 | 1 | vLLM | 32 | null | null | null | null | null | null | null | [19] | Target row. [19] confirms vLLM/TP/PP benchmarks exist. 2 active experts. |
| Mixtral | 8x22B MoE | BF16 | NVIDIA H200 | 8 | SGLang | 128 | null | null | null | null | null | null | null | [20] | Target row. [20] notes MoE complexity, no hard numbers. |
| DBRX | 132B | BF16 | NVIDIA H100 | 8 | vLLM | 64 | null | null | null | null | null | null | null | [21] | Target row. 4 active experts. [21] notes 2x+ throughput over a 70B dense model at batch > 32. |
| DBRX | 132B | BF16 | AMD MI300X | 8 | vLLM (ROCm) | 64 | null | null | null | null | null | null | null | null | Target row. |
| Llama-3.2 | 3B | BF16 | NVIDIA L40S | 1 | vLLM | 32 | 95 | null | null | 128 | 2,048 | 131,072 | null | null | Single-GPU L40S example. |
| Hermes-3 (Llama-3.2) | 3B | BF16 | NVIDIA RTX 4090 | 1 | vLLM | null | 60.69 | null | null | null | null | null | null | [22] | SGLang vs vLLM benchmark. 4090 is a proxy for L40S. |
| Hermes-3 (Llama-3.2) | 3B | BF16 | NVIDIA RTX 4090 | 1 | SGLang | null | 118.34 | null | null | null | null | null | null | [22] | SGLang is ~2x faster than vLLM on this small model. |
| Llama-3.1 | 8B | BF16 | Intel Gaudi 3 | 1 | SGLang | 32 | null | null | null | null | null | null | null | [23] | Target row. [23, 24] confirm SGLang support on Gaudi 3. |
| Llama-3.1 | 8B | BF16 | Intel Gaudi 3 | 1 | vLLM | 1,000 | 9,579.96 | null | null | null | null | null | null | [25] | Added row. Total throughput at 1,000 concurrent requests (27.7 QPS). |
| Llama-3.1 | 8B | BF16 | AMD MI300X | 1 | vLLM (ROCm) | null | 18,752 | null | null | null | null | null | null | [1] | Added row. Single-GPU benchmark. Compare to H200 (25k tok/s). |
| Llama-3.1 | 70B | BF16 | NVIDIA L40S | 8 | vLLM | 64 | null | null | null | null | null | null | null | [26] | Target row. [26] notes 8x L40S used for DBRX (132B), so a 70B model is feasible. |
| Llama-3.1 | 70B | BF16 | Intel Gaudi 3 | 8 | SGLang | 128 | null | null | null | null | null | null | null | [27] | Target row. [27] confirms vLLM FP8 calibration for 70B on Gaudi. |
| Llama-3.1 | 70B | BF16 | Intel Gaudi 3 | 4 | vLLM | 1,000 | 9,072.96 | null | null | null | null | null | null | [25] | Added row. Normalized throughput (per-param basis) at 1,000 requests. |
| Mistral | 7B | BF16 | Intel Gaudi 3 | 1 | vLLM | 1,000 | 10,382.47 | null | 38.54 | null | null | null | null | [25] | Added row. 23.51 QPS. TPOT is ms per token. |
| Qwen-3-Math | 72B | BF16 | NVIDIA H200 | 8 | vLLM | 64 | null | null | null | null | null | null | null | null | Target row. |
| Qwen-2.5-Coder | 32B | BF16 | NVIDIA H100 | 2 | SGLang | 64 | null | null | null | null | null | null | null | [28] | Target row. [28] discusses training, not inference. |
| Phi-4 | 14B | BF16 | Intel Gaudi 3 | 1 | vLLM | 32 | null | null | null | null | null | null | null | [29] | Target row. [29] confirms FP8 support on Gaudi. |
| Gemma-2 | 27B | BF16 | AMD MI355X | 4 | vLLM (ROCm) | 64 | null | null | null | null | null | null | null | [30] | Target row. [30] confirms "Paiton" optimizations for Gemma 2 27B on AMD. |
| Yi-1.5 | 34B | BF16 | Intel Gaudi 3 | 4 | SGLang | 64 | null | null | null | null | null | null | null | null | Target row. |
| Qwen-3 | 110B | BF16 | NVIDIA B200 | 8 | vLLM | 128 | null | null | null | null | null | null | null | null | Target row. |
| Qwen-3 | 235B | BF16 | NVIDIA B200 | 8 | SGLang | 128 | null | null | null | null | null | null | null | null | Target row. |
| Llama-3.1-8B-Instruct | 8B | BF16 | NVIDIA H100 | 1 | vLLM (v0) | 5 | 588.62 | 318 | null | null | null | null | null | [31] | vLLM v0.9.0 benchmark; avg latency 16.98 s |
| Llama-3.1-8B-Instruct | 8B | BF16 | NVIDIA H100 | 1 | vLLM (v0) | 50 | 2,742.96 | 357 | null | null | null | null | null | [31] | vLLM v0.9.0 benchmark; avg latency 26.18 s |
| Llama-3.1-8B-Instruct | 8B | BF16 | NVIDIA H100 | 1 | vLLM (v0) | 100 | 2,744.1 | 415 | null | null | null | null | null | null | vLLM v0.9.0 benchmark; avg latency 26.16 s |
| Llama-3.1-8B-Instruct | 8B | BF16 | NVIDIA H100 | 1 | vLLM (v1) | 5 | 634.87 | 276 | null | null | null | null | null | [31] | vLLM v0.9.0 (V1 scheduler) benchmark; avg latency 15.75 s |
| Llama-3.1-8B-Instruct | 8B | BF16 | NVIDIA H100 | 1 | vLLM (v1) | 50 | 3,141.16 | 348 | null | null | null | null | null | [31] | vLLM v0.9.0 (V1 scheduler) benchmark; avg latency 22.80 s |
| Llama-3.1-8B-Instruct | 8B | BF16 | NVIDIA H100 | 1 | vLLM (v1) | 100 | 3,036.62 | 373 | null | null | null | null | null | [31] | vLLM v0.9.0 (V1 scheduler) benchmark; avg latency 23.59 s |
| Llama-3.1-8B-Instruct | 8B | BF16 | NVIDIA H100 | 1 | SGLang | 5 | 666.54 | 136 | null | null | null | null | null | [31] | SGLang v0.4.9 benchmark; avg latency 15.00 s. Note 2x better TTFT vs vLLM. |
| Llama-3.1-8B-Instruct | 8B | BF16 | NVIDIA H100 | 1 | SGLang | 50 | 3,077.68 | 258 | null | null | null | null | null | [31] | SGLang v0.4.9 benchmark; avg latency 23.38 s. |
| Llama-3.1-8B-Instruct | 8B | BF16 | NVIDIA H100 | 1 | SGLang | 100 | 3,088.08 | 254 | null | null | null | null | null | [31] | SGLang v0.4.9 benchmark; avg latency 23.29 s. Note stable TTFT. |
| Llama-3.1 | 70B | FP8 | NVIDIA H100 | 2 | vLLM | 1 | 35 | null | null | null | null | null | null | [32] | Sequential requests. |
| Llama-3.1 | 70B | FP8 | NVIDIA H100 | 2 | SGLang | 1 | 38 | null | null | null | null | null | null | [32] | Sequential requests. |
| Llama-3.1 | 70B | FP8 | NVIDIA H100 | 2 | vLLM | null | null | null | null | null | null | null | null | [32] | Concurrent requests; performance *collapses* by ~50%. |
| Llama-3.1 | 70B | FP8 | NVIDIA H100 | 2 | SGLang | null | null | null | null | null | null | null | null | [32] | Concurrent requests; performance is *stable*. |
| Llama-3.1 | 8B | BF16 | NVIDIA H100 | 1 | vLLM | 1 | 80 | null | null | null | null | null | null | [32] | Sequential requests. |
| Llama-3.1 | 8B | BF16 | NVIDIA H100 | 1 | SGLang | 1 | 91 | null | null | null | null | null | null | [32] | Sequential requests. |
| Llama-3.1 | 8B | BF16 | NVIDIA H100 | 1 | vLLM | null | null | null | null | null | null | null | null | [32] | Concurrent requests; performance *collapses* by >50%. |
| Llama-3.1 | 8B | BF16 | NVIDIA H100 | 1 | SGLang | null | null | null | null | null | null | null | null | [32] | Concurrent requests; performance is *stable*. |
| Qwen-1.5B | 1.5B | null | null | 1 | vLLM | null | 98.27 | null | null | null | null | null | null | null | Latency 0.13 s; precision and hardware not specified. |
| Qwen-1.5B | 1.5B | null | null | 1 | SGLang | null | 210.48 | null | null | null | null | null | null | null | Latency 0.58 s; precision and hardware not specified. |
| Hermes-3 | null | null | null | 1 | vLLM | null | 60.69 | null | null | null | null | null | null | null | Latency 0.21 s; model size, precision, and hardware not specified. |
| Hermes-3 | null | null | null | 1 | SGLang | null | 118.34 | null | null | null | null | null | null | null | Latency 1.03 s; model size, precision, and hardware not specified. |
Dataset Card: llm-perfdata
Dataset Description
This dataset curates throughput and latency benchmarks for popular large language models across hardware targets. Each row represents an observed configuration—model, precision, serving engine, and load profile—paired with sources that document how the measurement was collected. The goal is to keep a transparent, reproducible ledger that helps compare serving trade-offs without digging through scattered notebooks.
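For orientation, a single record looks like the following Python mapping once loaded through the `datasets` library; metrics a source did not report come back as missing values (shown here as `None`). The values below are copied from the first DeepSeek-R1-Distill-Qwen row in the table above.

```python
# One record from the dataset (values taken from the first benchmark row above).
example = {
    "Model": "DeepSeek-R1-Distill-Qwen",
    "Size": "7B",
    "Precision": "FP16",
    "GPU_Type": "NVIDIA A100",
    "Num_GPUs": 1,
    "Serving_Engine": "vLLM",
    "Concurrency": 50,
    "Tokens_per_sec": 3362.71,
    "TTFT_ms": 309.36,
    "TPOT_ms": 14.96,
    "Prompt_Tokens": None,   # not reported by the source
    "Output_Tokens": None,   # not reported by the source
    "Context_Window": 8192,
    "Quantization": None,
    "Source_URL": None,
    "Source_Notes": "DeepSeek distill vLLM test on A100; concurrency=50",
}
```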
Provenance & Caveats
All entries are derived solely from online, publicly available sources. Because performance numbers depend on external documentation, there may be gaps, inconsistencies, or occasional inaccuracies. Expect the dataset to drift out of date as serving stacks and software releases evolve; refresh measurements regularly when citing results.
Data Schema
The dataset contains the following columns:
- Model (string) — published model identifier (e.g., `DeepSeek-R1-Distill-Qwen`).
- Size (string) — parameter-scale shorthand such as `7B` or `32B`.
- Precision (string) — numeric precision used during serving (`FP16`, `BF16`, `INT4`, etc.).
- GPU_Type (string) — accelerator family (for example `NVIDIA A100`).
- Num_GPUs (int64) — integer count of GPUs participating in the run.
- Serving_Engine (string) — runtime layer (`vLLM`, `TensorRT-LLM`, custom stacks).
- Concurrency (int64) — concurrent request count exercised in the benchmark.
- Tokens_per_sec (float64) — aggregate output throughput.
- TTFT_ms (float64) — time to first token, in milliseconds.
- TPOT_ms (float64) — time per output token, in milliseconds.
- Prompt_Tokens / Output_Tokens (int64) — tokens in the input and in the generated output.
- Context_Window (int64) — maximum supported tokens for the configuration.
- Quantization (string) — applied quantization strategy, if any.
- Source_URL (string) — public link to the benchmark report or raw logs.
- Source_Notes (string) — short free-text context, hardware topology, or caveats.
Leave optional numeric metrics blank when a source does not provide them and describe missing context in Source_Notes.
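The snippet below is a minimal sketch of that convention in practice, assuming the split is loaded with pandas (as in the Usage section below): optional numeric metrics are coerced to numbers, and rows without a reported throughput are dropped before summarizing.

```python
import pandas as pd
from datasets import load_dataset

# Load the train split as a DataFrame and coerce the optional numeric metrics;
# blank or null cells become NaN rather than raising errors.
df = load_dataset("metrum-ai/llm-perfdata")["train"].to_pandas()
metric_cols = ["Tokens_per_sec", "TTFT_ms", "TPOT_ms",
               "Prompt_Tokens", "Output_Tokens", "Context_Window"]
df[metric_cols] = df[metric_cols].apply(pd.to_numeric, errors="coerce")

# Only rows that actually report throughput are meaningful for aggregation.
with_throughput = df.dropna(subset=["Tokens_per_sec"])
print(with_throughput[["Model", "GPU_Type", "Serving_Engine", "Tokens_per_sec"]].head())
```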
Usage
```python
from datasets import load_dataset

dataset = load_dataset("metrum-ai/llm-perfdata")
print(dataset)

# Access the data
for example in dataset['train']:
    print(f"Model: {example['Model']}")
    print(f"Throughput: {example['Tokens_per_sec']} tokens/sec")
    print(f"Source: {example['Source_URL']}")
```
Or load directly as a pandas DataFrame:
```python
import pandas as pd
from datasets import load_dataset

dataset = load_dataset("metrum-ai/llm-perfdata")
df = dataset['train'].to_pandas()

# Filter by model and precision
filtered = df[(df["Model"] == "DeepSeek-R1-Distill-Qwen") & (df["Precision"] == "FP16")]
```
Analysts typically pivot on Serving_Engine and Concurrency to compare throughput scaling. Cite the Source_URL when referencing numbers externally.
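As an illustrative sketch of that workflow, continuing from the DataFrame df loaded above (the choice of peak tokens/sec as the aggregate is an assumption for the example, not a convention of this dataset):

```python
# Peak reported throughput per model, broken out by engine and concurrency level.
# Rows without a Tokens_per_sec value are excluded from the aggregation.
pivot = (
    df.dropna(subset=["Tokens_per_sec"])
      .pivot_table(index="Model",
                   columns=["Serving_Engine", "Concurrency"],
                   values="Tokens_per_sec",
                   aggfunc="max")
)
print(pivot.head())
```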
Attribution
If you use this dataset, you must provide attribution to Metrum AI. Please cite this dataset using the citation format provided below.
License
This dataset is released under the MIT License.
MIT License
Copyright (c) 2025 Metrum AI
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and/or data and associated documentation files (the "Software and/or Data"), to deal in the Software and/or Data without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software and/or Data, and to permit persons to whom the Software and/or Data is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software and/or Data.
THE SOFTWARE AND/OR DATA IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE AND/OR DATA OR THE USE OR OTHER DEALINGS IN THE SOFTWARE AND/OR DATA.
No Warranty and Limitation of Liability
THIS DATASET IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. The data is compiled from publicly available sources and may contain errors, inaccuracies, or become outdated. Metrum AI makes no representations or warranties regarding the accuracy, completeness, reliability, or suitability of this dataset for any purpose.
IN NO EVENT SHALL METRUM AI BE LIABLE FOR ANY CLAIM, DAMAGES, OR OTHER LIABILITY, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with this dataset or the use or other dealings in this dataset. This includes, without limitation, direct, indirect, incidental, special, consequential, or punitive damages, or any loss of profits, revenues, data, use, goodwill, or other intangible losses.
Use of this dataset is at your own risk.
Additional Disclaimers
No Endorsement: The inclusion of any model, hardware, software, or service in this dataset does not constitute an endorsement, recommendation, or approval by Metrum AI. All trademarks, product names, and company names are the property of their respective owners.
Third-Party Sources: This dataset aggregates data from publicly available third-party sources. Metrum AI does not control, verify, or guarantee the accuracy of information from these sources. Users should independently verify any information before relying on it.
No Professional Advice: This dataset is provided for informational and research purposes only. It does not constitute professional, technical, or business advice. Users should consult with qualified professionals for decisions based on this data.
Data Completeness: This dataset may not include all available performance benchmarks. The absence of data for a particular model, hardware configuration, or metric does not imply that such data does not exist or is not relevant.
No Guarantee of Availability: Metrum AI does not guarantee that this dataset will be available at all times or that it will be updated regularly. The dataset may be modified, discontinued, or removed without notice.
Forward-Looking Statements: Any performance metrics or benchmarks in this dataset reflect historical or current conditions and may not be indicative of future performance.
User Responsibility: Users are solely responsible for their use of this dataset, including compliance with applicable laws, regulations, and third-party rights. Users should conduct their own due diligence before making any decisions based on this data.
Citation
```bibtex
@dataset{llm_perfdata,
  title     = {LLM Perfdata},
  author    = {Metrum AI},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/metrum-ai/llm-perfdata}
}
```