jonasluehrs-jaai committed b20f67d (verified) · Parent(s): dd523b1

Update README.md
Files changed (1): README.md (+112, -0)
@@ -19,3 +19,115 @@ configs:
  - split: train
    path: data/train-*
---

# Synthetic Dataset: High Context, Low Generation

## Dataset Description

This is a synthetic benchmark dataset designed to test LLM inference performance in **high-context, low-generation** scenarios. The dataset consists of 2,000 samples of randomly generated tokens that simulate real-world workloads where models process long input contexts but generate relatively short responses.

### Use Cases

This dataset is ideal for benchmarking:
- **Document analysis** with short answers
- **Long-context Q&A** systems
- **Information extraction** from large documents
- **Prompt processing efficiency** and TTFT (Time-to-First-Token) optimization

### Dataset Characteristics

- **Number of Samples**: 2,000
- **Prompt Length Distribution**: Normal distribution
  - Mean: 32,000 tokens
  - Standard deviation: 10,000 tokens
- **Response Length Distribution**: Normal distribution
  - Mean: 200 tokens
  - Standard deviation: 50 tokens
- **Tokenizer**: meta-llama/Llama-3.1-8B-Instruct
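
As a quick sanity check, the empirical length statistics of the published split can be compared against the parameters above. This is a minimal sketch, not part of the dataset tooling; it simply reads the two length columns with the `datasets` library:

```python
from statistics import mean, stdev
from datasets import load_dataset

# Load the train split and read the length columns as plain Python lists.
dataset = load_dataset("jonasluehrs-jaai/synthetic_dataset_high-low", split="train")
prompt_lengths = dataset["prompt_length"]
response_lengths = dataset["response_length"]

# Expected: roughly mean 32,000 / std 10,000 and mean 200 / std 50.
print(f"prompt_length:   mean={mean(prompt_lengths):.0f}, std={stdev(prompt_lengths):.0f}")
print(f"response_length: mean={mean(response_lengths):.0f}, std={stdev(response_lengths):.0f}")
```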

### Dataset Structure

Each sample contains:
- `prompt`: A sequence of randomly generated tokens (high context)
- `prompt_length`: Number of tokens in the prompt
- `response_length`: Target number of tokens to generate for the response

```python
{
    'prompt': str,
    'prompt_length': int,
    'response_length': int
}
```

### Token Generation

- Tokens are randomly sampled from the vocabulary of the Llama-3.1-8B-Instruct tokenizer
- Each sample is generated independently, with lengths drawn from the distributions specified above
- Random sampling keeps the token sequences generic while maintaining controlled length distributions (a sketch of the procedure is shown below)
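
The exact generation script is not included in this card, but a minimal sketch of the procedure described above could look like the following. The `make_sample` helper and the clamping of sampled lengths are illustrative assumptions, not the dataset's actual tooling:

```python
import random
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")

def make_sample(prompt_mean=32_000, prompt_std=10_000, resp_mean=200, resp_std=50):
    # Draw lengths from the normal distributions described above (clamped to stay positive).
    prompt_length = max(1, int(random.gauss(prompt_mean, prompt_std)))
    response_length = max(1, int(random.gauss(resp_mean, resp_std)))

    # Fill the prompt with token ids sampled uniformly from the tokenizer vocabulary.
    # Note: re-encoding the decoded string may not reproduce exactly prompt_length tokens.
    token_ids = [random.randrange(tokenizer.vocab_size) for _ in range(prompt_length)]
    prompt = tokenizer.decode(token_ids)

    return {
        "prompt": prompt,
        "prompt_length": prompt_length,
        "response_length": response_length,
    }
```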

## Related Datasets

This dataset is part of a suite of three synthetic benchmark datasets, each designed for different workload patterns:

1. **🔷 synthetic_dataset_high-low** (this dataset)
   - High context (32k tokens), low generation (200 tokens)
   - Focus: Prompt processing efficiency, TTFT optimization

2. **synthetic_dataset_mid-mid**
   - Medium context (1k tokens), medium generation (1k tokens)
   - Focus: Balanced workload, realistic API scenarios

3. **synthetic_dataset_low-mid**
   - Low context (10-120 tokens), medium generation (1.5k tokens)
   - Focus: Generation throughput, creative writing scenarios

## Benchmarking with vLLM

This dataset is designed for use with the vLLM inference framework. The vLLM engine supports a `min_tokens` parameter, allowing you to pass `min_tokens=max_tokens=response_length` for each prompt. This ensures that the response length follows the defined distribution.

### Setup

First, install vLLM and start the server:

```bash
pip install vllm

# Start the vLLM server
vllm serve meta-llama/Llama-3.1-8B-Instruct
```

### Usage Example

```python
from datasets import load_dataset
from openai import OpenAI

# Load the dataset
dataset = load_dataset("jonasluehrs-jaai/synthetic_dataset_high-low")

# Initialize vLLM client (OpenAI-compatible API)
client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="token-abc123",
)

# Use a sample from the dataset
sample = dataset['train'][0]

# Make a completion request with controlled response length
completion = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": sample['prompt']}],
    max_tokens=sample['response_length'],
    extra_body={"min_tokens": sample['response_length']},
)

print(f"Generated {len(completion.choices[0].message.content)} characters")
```

For more information, see the [vLLM OpenAI-compatible server documentation](https://docs.vllm.ai/en/v0.8.3/serving/openai_compatible_server.html).
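
Since this workload targets prompt-processing efficiency and TTFT, requests can also be streamed to time the first token for each sample. The loop below is a minimal sketch that reuses the `client` and `dataset` objects from the example above; the sample count and timing approach are illustrative, not a full benchmark harness:

```python
import time

# Time-to-first-token (TTFT) per sample, approximated as the time until the
# first streamed chunk arrives. Reuses `client` and `dataset` from above.
for sample in dataset["train"].select(range(5)):
    start = time.perf_counter()
    stream = client.chat.completions.create(
        model="meta-llama/Llama-3.1-8B-Instruct",
        messages=[{"role": "user", "content": sample["prompt"]}],
        max_tokens=sample["response_length"],
        extra_body={"min_tokens": sample["response_length"]},
        stream=True,
    )
    ttft = None
    for chunk in stream:
        if ttft is None:
            ttft = time.perf_counter() - start
    print(f"prompt_length={sample['prompt_length']}, TTFT={ttft:.2f}s")
```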

## License

This dataset is released under the MIT License.