Arc CRM Benchmark Dataset
Dataset Description
The Arc CRM Benchmark is a production-realistic synthetic CRM environment dataset for evaluating LLM agents on state-modifying workflows. This dataset provides a comprehensive testbed for measuring agent performance, reliability, and adaptation through continual learning frameworks.
The dataset contains 1,200 multi-turn conversations covering diverse CRM workflows with varying complexity. Each conversation simulates realistic user interactions with a CRM system, requiring agents to execute tool calls, manage state, and handle cross-turn references.
Dataset Summary
- Total Conversations: 1,200
- Format: JSONL (one conversation per line)
- Complexity Distribution:
  - Simple (1-3 turns): 280 conversations (~23%)
  - Medium (4-6 turns): 625 conversations (~52%)
  - Complex (7-10 turns): 295 conversations (~25%)
Workflow Categories
The dataset spans 9 distinct workflow categories derived from production CRM task definitions, including:
- Opportunity Management: Create, modify, search, view details
- Quote Generation and Management
- Client and Contact Management
- Document Upload and Management
- Contract Creation and Tracking
- Note and Communication Logging
- Cross-entity workflows combining multiple operations
Key Features
- Production-Realistic CRM Schema: Full entity model with strict validation, foreign-key relationships, enum constraints, and business logic guards
- Template References: Conversations use `{{turn_N.field}}` syntax for cross-turn entity references (a resolution sketch follows this list)
- Schema Compliance: All tool arguments validated against the production CRM schema
- Deterministic Generation: Every conversation can be regenerated from seed data and schema definitions
- Initial State: Each conversation includes initial entity state (clients, opportunities, quotes, contracts, documents, notes)
- Expected Responses: Ground-truth assistant responses for LLM judge evaluation
- Success Criteria: Multiple evaluation modes (all_turns, final_state, both)
- Failure Scenarios: Includes conversations with expected failures for robustness testing
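To make the template mechanism concrete, below is a minimal sketch of how `{{turn_N.field}}` references could be resolved against results from earlier turns. The `resolve_templates` helper, the tool-result structure, and the field names are illustrative only and are not part of the benchmark or its harness.

```python
import re

# Illustrative helper (not part of the benchmark): substitute {{turn_N.field}}
# placeholders in expected_args with values produced by earlier turns.
TEMPLATE = re.compile(r"\{\{turn_(\d+)\.(\w+)\}\}")

def resolve_templates(value, turn_results):
    """turn_results maps turn_id -> dict of fields returned by that turn's tool call."""
    if isinstance(value, str):
        def substitute(match):
            turn_id, field = int(match.group(1)), match.group(2)
            return str(turn_results[turn_id][field])
        return TEMPLATE.sub(substitute, value)
    if isinstance(value, dict):
        return {k: resolve_templates(v, turn_results) for k, v in value.items()}
    if isinstance(value, list):
        return [resolve_templates(v, turn_results) for v in value]
    return value

# Hypothetical example: turn 2 references the opportunity created in turn 1.
turn_results = {1: {"opportunity_id": "opp_123"}}
args = {"opportunity_id": "{{turn_1.opportunity_id}}", "stage": "Proposal"}
print(resolve_templates(args, turn_results))
# {'opportunity_id': 'opp_123', 'stage': 'Proposal'}
```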
Dataset Structure
Each conversation contains:
- `conversation_id`: Unique identifier for the conversation
- `workflow_category`: Category of workflow (e.g., "Opportunity Management", "Client Management")
- `complexity_level`: "simple", "medium", or "complex"
- `turns`: List of conversation turns, each containing:
  - `turn_id`: Sequential turn number (1-indexed)
  - `user_utterance`: Natural language user input
  - `expected_tool`: Tool name expected to be called
  - `expected_args`: Dictionary of expected arguments (may contain `{{turn_N.field}}` templates)
  - `references_previous_turns`: List of turn IDs this turn references
  - `expect_success`: Whether this turn is expected to succeed
  - `expected_error_substring`: If `expect_success=False`, substring to match in the error message
  - `failure_category`: Category of failure if this is a failure scenario
  - `expected_response`: Structured description of the expected assistant reply with evaluation criteria
- `initial_entities`: Dictionary of entities that exist before the conversation starts (`seed_data` with Client, Contact, Opportunity, Quote, Contract entities)
- `final_expected_state`: Expected state after all turns complete (for validation)
- `success_criteria`: How to evaluate success ("all_turns", "final_state", or "both")
- `contains_failure`: Whether the conversation contains a failure scenario
- `failure_turn`: Turn number where failure is expected (if `contains_failure=True`)
- `verification_mode`: How to verify conversation success ("database" or "mock")
- `chain_id`: Optional chain identifier if the conversation is part of a workflow chain
- `segment_number`: Optional segment number within a chain (1-indexed)
- `segment_boundaries`: Optional list of turn numbers where segments end (for chained conversations)
- `expected_outcome`: Optional expected outcome description
- `cumulative_context`: Optional dictionary of context accumulated from previous segments (for chains)
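Put together, a single record has roughly the following shape (abbreviated, with hypothetical values and tool names; inspect the dataset itself for the exact contents of each field):

```python
# Abbreviated conversation record with hypothetical values, illustrating the
# fields listed above (not an actual record from the dataset).
example_conversation = {
    "conversation_id": "conv_0001",
    "workflow_category": "Opportunity Management",
    "complexity_level": "simple",
    "turns": [
        {
            "turn_id": 1,
            "user_utterance": "Create an opportunity for Acme Corp worth 50k.",
            "expected_tool": "create_opportunity",
            "expected_args": {"client_name": "Acme Corp", "amount": 50000},
            "references_previous_turns": [],
            "expect_success": True,
            "expected_response": {"text": "Created the opportunity for Acme Corp."},
        },
        {
            "turn_id": 2,
            "user_utterance": "Now move it to the Proposal stage.",
            "expected_tool": "update_opportunity",
            "expected_args": {
                "opportunity_id": "{{turn_1.opportunity_id}}",
                "stage": "Proposal",
            },
            "references_previous_turns": [1],
            "expect_success": True,
        },
    ],
    "initial_entities": {"seed_data": {"Client": {"client_001": {"name": "Acme Corp"}}}},
    "success_criteria": "both",
    "contains_failure": False,
}
```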
Usage
Loading the Dataset
```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("Arc-Intelligence/arc-crm-benchmark", split="train")

# Get the first conversation
conv = dataset[0]
print(f"Conversation ID: {conv['conversation_id']}")
print(f"Complexity: {conv['complexity_level']}")
print(f"Workflow: {conv['workflow_category']}")
print(f"Number of turns: {len(conv['turns'])}")
```
Note: The Hugging Face dataset viewer is not available for this dataset due to the size and complexity of individual conversations (each conversation contains deeply nested structures with multiple turns, initial entities, and expected responses). However, the dataset is fully functional and can be loaded programmatically using the datasets library as shown above.
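Example: Filtering by Complexity or Category
The `complexity_level` and `workflow_category` fields can be used to slice the dataset with the standard `datasets` filtering API, for example:

```python
# Keep only complex conversations (7-10 turns)
complex_convs = dataset.filter(lambda c: c["complexity_level"] == "complex")

# Keep only one workflow category
opportunity_convs = dataset.filter(
    lambda c: c["workflow_category"] == "Opportunity Management"
)

print(len(complex_convs), len(opportunity_convs))
```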
Example: Iterating Through Turns
```python
conversation = dataset[0]

for turn in conversation['turns']:
    print(f"Turn {turn['turn_id']}: {turn['user_utterance']}")
    print(f"  Expected tool: {turn['expected_tool']}")
    print(f"  Expected args: {turn['expected_args']}")
    if turn.get('expected_response'):
        print(f"  Expected response: {turn['expected_response']['text']}")
```
Example: Accessing Initial State
```python
conversation = dataset[0]
initial_entities = conversation['initial_entities']['seed_data']

# Access pre-existing clients
if initial_entities and 'Client' in initial_entities:
    for client_id, client_data in initial_entities['Client'].items():
        print(f"Client: {client_data['name']} ({client_id})")
```
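Example: Locating Failure Scenarios
Conversations with expected failures can be found via the `contains_failure`, `failure_turn`, and `expected_error_substring` fields described above. A small sketch, assuming at least one such conversation exists and the fields load as plain Python values:

```python
# Collect conversations that include an expected failure turn.
failure_convs = [c for c in dataset if c['contains_failure']]

example = failure_convs[0]
failing_turn = example['turns'][example['failure_turn'] - 1]  # failure_turn is 1-indexed
print(f"Conversation: {example['conversation_id']}")
print(f"Failing turn: {example['failure_turn']}")
print(f"Expected error substring: {failing_turn['expected_error_substring']}")
```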
Evaluation
This dataset is designed for evaluating:
- Tool calling accuracy: Correct tool selection and argument parsing
- Multi-turn conversation handling: Maintaining context across turns
- State management: Tracking and modifying CRM entities correctly
- Cross-turn reference resolution: Resolving `{{turn_N.field}}` template references
- Response quality: Natural language communication of results
- Robustness: Handling failure scenarios and error conditions
The dataset is compatible with the Arc CRM Benchmark evaluation harness, which provides comprehensive metrics including tool execution validation, response quality assessment via LLM judge, and token usage tracking.
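As a rough illustration of the first two criteria, the sketch below scores per-turn tool-call accuracy by comparing an agent's tool call against `expected_tool` and the template-resolved `expected_args`. The `run_agent_turn` callable is a placeholder for your own agent, and `resolve_templates` refers to the earlier sketch; the official harness implements evaluation far more thoroughly (LLM-judge response scoring, state verification, token tracking).

```python
def score_tool_calls(conversation, run_agent_turn, resolve_templates):
    """Fraction of turns where both the tool name and the resolved arguments match.

    run_agent_turn(user_utterance) -> (tool_name, tool_args, tool_result) is a
    placeholder for your own agent; this is an illustrative check, not the harness.
    """
    turn_results = {}  # turn_id -> fields produced by that turn's tool call
    correct = 0
    for turn in conversation["turns"]:
        tool_name, tool_args, tool_result = run_agent_turn(turn["user_utterance"])
        turn_results[turn["turn_id"]] = tool_result
        expected_args = resolve_templates(turn["expected_args"], turn_results)
        if tool_name == turn["expected_tool"] and tool_args == expected_args:
            correct += 1
    return correct / len(conversation["turns"])
```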
Related Resources
- Repository: github.com/Arc-Computer/arc-crm-benchmark
- Atlas SDK: github.com/Arc-Computer/atlas-sdk - Runtime adaptive learning framework
- Documentation: docs.arc.computer
Citation
If you use this dataset in your research, please cite:
```bibtex
@software{arc_crm_benchmark,
  title   = {Arc CRM Benchmark: A Synthetic Environment for LLM Agent Evaluation},
  author  = {Arc Intelligence},
  year    = {2025},
  url     = {https://github.com/Arc-Computer/arc-crm-benchmark},
  version = {1.0}
}
```
License
This dataset is released under the MIT License. See the LICENSE file for details.
Acknowledgments
This benchmark was developed in collaboration with the Reply Scale AI research team to provide a production-realistic testbed for evaluating LLM agents on state-modifying workflows. Drawing on their extensive experience with production CRM systems deployed at large organizations, the team contributed critical domain expertise in designing the CRM schema, workflow patterns, and interaction models. As a result, the benchmark reflects the API structures, validation constraints, and operational complexity of enterprise environments, allowing researchers and practitioners to evaluate agent reliability, efficiency, and adaptation under realistic deployment conditions.
Contact
For questions, issues, or collaboration opportunities:
- GitHub Issues: github.com/Arc-Computer/arc-crm-benchmark/issues
- Organization: Arc Intelligence