---
dataset_info:
  features:
    - name: messages
      dtype: string
    - name: prompt_length
      dtype: int64
    - name: response_length
      dtype: int64
  splits:
    - name: train
      num_bytes: 423667776
      num_examples: 2000
  download_size: 384009093
  dataset_size: 423667776
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# Synthetic Dataset: High Context, Low Generation

## Dataset Description

This is a synthetic benchmark dataset designed to test LLM inference performance in high-context, low-generation scenarios. It contains 2,000 samples of randomly generated tokens, simulating workloads in which models process long input contexts but generate relatively short responses.

## Use Cases

This dataset is ideal for benchmarking:

- Document analysis with short answers
- Long-context Q&A systems
- Information extraction from large documents
- Prompt processing efficiency and TTFT (Time-to-First-Token) optimization

## Dataset Characteristics

- **Number of Samples:** 2,000
- **Prompt Length Distribution:** Normal distribution
  - Mean: 32,000 tokens
  - Standard deviation: 10,000 tokens
- **Response Length Distribution:** Normal distribution
  - Mean: 200 tokens
  - Standard deviation: 50 tokens
- **Tokenizer:** meta-llama/Llama-3.1-8B-Instruct
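
For illustration, here is a minimal sketch of how a length pair matching these distributions could be drawn. The clipping to a minimum of 1 token is an assumption (a normal distribution can yield non-positive draws); the actual generation script is not part of this card.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Draw one (prompt_length, response_length) pair from the stated
# distributions; clip to >= 1 token to guard against non-positive draws
prompt_length = max(1, int(rng.normal(32_000, 10_000)))
response_length = max(1, int(rng.normal(200, 50)))
```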

## Dataset Structure

Each sample contains:

- `prompt`: a sequence of randomly generated tokens (the high-context input)
- `prompt_length`: the number of tokens in the prompt
- `response_length`: the target response length in tokens, used to control generation at benchmark time

```python
{
    'prompt': str,
    'prompt_length': int,
    'response_length': int
}
```
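
To sanity-check a sample, the stored token count can be compared against a fresh tokenization. This is a sketch: `prompt_length` is computed with the Llama-3.1 tokenizer, and exact agreement may depend on special-token handling.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("jonasluehrs-jaai/synthetic_dataset_high-low", split="train")
# Gated model: requires accepting the Llama license on the Hub
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")

sample = dataset[0]
# Re-tokenize the prompt; add_special_tokens=False avoids counting a BOS token
n_tokens = len(tokenizer(sample["prompt"], add_special_tokens=False)["input_ids"])
print(n_tokens, sample["prompt_length"], sample["response_length"])
```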

## Token Generation

- Tokens are randomly sampled from the vocabulary of the Llama-3.1-8B-Instruct tokenizer
- Each sample is generated independently, with lengths drawn from the distributions above
- The resulting sequences are random rather than natural language, but their length distributions are tightly controlled, which is what matters for inference benchmarking
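
A minimal sketch of this procedure, assuming uniform sampling over token ids (the dataset's actual generation script may differ, e.g. in how special tokens are excluded):

```python
import numpy as np
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
rng = np.random.default_rng()

# Draw a prompt length, then sample that many token ids uniformly from the
# vocabulary and decode them into a prompt string
length = max(1, int(rng.normal(32_000, 10_000)))
token_ids = rng.integers(0, tokenizer.vocab_size, size=length).tolist()
prompt = tokenizer.decode(token_ids, skip_special_tokens=True)
```

Note that decoding random ids and re-tokenizing the result does not always round-trip to exactly the same count, so the stored `prompt_length` should be treated as authoritative.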

## Related Datasets

This dataset is part of a suite of three synthetic benchmark datasets, each designed for different workload patterns:

1. 🔷 **synthetic_dataset_high-low** (this dataset)
   - High context (32k tokens), low generation (200 tokens)
   - Focus: Prompt processing efficiency, TTFT optimization
2. **synthetic_dataset_mid-mid**
   - Medium context (1k tokens), medium generation (1k tokens)
   - Focus: Balanced workload, realistic API scenarios
3. **synthetic_dataset_low-mid**
   - Low context (10-120 tokens), medium generation (1.5k tokens)
   - Focus: Generation throughput, creative writing scenarios

## Benchmarking with vLLM

This dataset is designed for use with the vLLM inference framework. vLLM supports a `min_tokens` sampling parameter, so you can pass `min_tokens = max_tokens = response_length` for each prompt; this pins every response to its sampled length, ensuring that response lengths follow the defined distribution (see the usage example below).

### Setup

First, install vLLM and start the server:

```bash
pip install vllm

# Start the vLLM server
vllm serve meta-llama/Llama-3.1-8B-Instruct
```
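
Prompts drawn from this distribution can exceed 60k tokens (the mean is 32k, and three standard deviations above is 62k), so the server's context window must be large enough to hold the longest prompt plus its response. A sketch using vLLM's `--max-model-len` flag; the value here is an assumption and should be adjusted to your GPU memory and longest sample:

```bash
# Raise the context window if the default does not cover the longest prompts
# (65536 covers roughly mean + 3 std dev plus the response; adjust as needed)
vllm serve meta-llama/Llama-3.1-8B-Instruct --max-model-len 65536
```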

### Usage Example

```python
from datasets import load_dataset
from openai import OpenAI

# Load the dataset
dataset = load_dataset("jonasluehrs-jaai/synthetic_dataset_high-low")

# Initialize the client against vLLM's OpenAI-compatible API
client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="token-abc123",
)

# Use a sample from the dataset
sample = dataset['train'][0]

# Make a completion request with a controlled response length:
# min_tokens == max_tokens pins the output to exactly response_length tokens
completion = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": sample['prompt']}],
    max_tokens=sample['response_length'],
    extra_body={"min_tokens": sample['response_length']},
)

# Note: len() here counts characters of the generated text, not tokens
print(f"Generated {len(completion.choices[0].message.content)} characters")
```

For more information, see the vLLM OpenAI-compatible server documentation.

## License

This dataset is released under the MIT License.