Agent Benchmarks
SciBench
Note: SciBench is a benchmark of college-level scientific problems sourced from instructional textbooks.
google/frames-benchmark
Note: 824 challenging multi-hop questions, each requiring information from 2-15 Wikipedia articles. Adapting it to a search agent with no RAG component makes it especially interesting; a sketch follows below.
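A minimal sketch of that no-RAG setup, assuming the dataset's "test" split exposes "Prompt" and "Answer" columns (check the viewer) and that `search_agent` is a hypothetical callable answering questions via live search:

```python
# No-RAG search-agent evaluation loop over FRAMES (sketch).
# `search_agent` is hypothetical: question -> answer string, using live
# web/Wikipedia search instead of a pre-built retrieval index.
from datasets import load_dataset

frames = load_dataset("google/frames-benchmark", split="test")

def evaluate(search_agent, limit=50):
    correct = 0
    for row in frames.select(range(limit)):
        # The agent must locate the 2-15 relevant articles on its own.
        prediction = search_agent(row["Prompt"])
        correct += prediction.strip().lower() == row["Answer"].strip().lower()
    return correct / limit
```

Exact match is a crude stand-in for grading free-form answers here; an LLM judge is the more common choice.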
gaia-benchmark/GAIA
HuggingFaceH4/MATH-500
Note: 500 problems; can be nicely adapted to an agentic setting with code execution (see the sketch below).
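As a sketch of one such adaptation: have the model emit a Python snippet that assigns its final result to a variable named `answer`, execute it, and compare with the reference. The "problem" and "answer" columns match the dataset viewer; `write_code` is a hypothetical model call.

```python
# Agentic MATH-500 evaluation via code execution (sketch).
# `write_code` is hypothetical: problem text -> Python source that
# assigns its final result to a variable named `answer`.
from datasets import load_dataset

math500 = load_dataset("HuggingFaceH4/MATH-500", split="test")

def run_snippet(code: str) -> str:
    # Use a real sandbox in practice; exec'ing model output is unsafe.
    scope: dict = {}
    exec(code, scope)
    return str(scope.get("answer", ""))

def evaluate(write_code, limit=20):
    hits = sum(
        run_snippet(write_code(row["problem"])) == row["answer"]
        for row in math500.select(range(limit))
    )
    return hits / limit
```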
smolagents/browse_comp
Note: A simple but challenging benchmark that measures the ability of AI agents to locate hard-to-find information online.
zai-org/ComplexFuncBench
Note: ComplexFuncBench comprises 1,000 complex function-calling samples covering five aspects: (1) multi-step function calling within a single turn; (2) function calling with user-provided constraints; (3) function calling that requires reasoning parameter values from implicit information; (4) function calling with long parameter values exceeding 500 tokens; and (5) function calling with 128k long-context length.
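For intuition, a hypothetical sample shape (not the dataset's actual schema) combining aspects (1) and (3): a single turn that needs two dependent calls, where the second call's argument must be inferred from the first call's result.

```python
# Hypothetical illustration only; not ComplexFuncBench's real schema.
sample = {
    "query": "Book the cheapest flight from Berlin to Rome on 2025-06-13.",
    "expected_calls": [
        {"name": "search_flights",
         "arguments": {"origin": "BER", "destination": "FCO", "date": "2025-06-13"}},
        {"name": "book_flight",
         # This value cannot be filled in up front; it must be reasoned
         # from the output of search_flights.
         "arguments": {"flight_id": "<cheapest id from search_flights>"}},
    ],
}
```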
galileo-ai/agent-leaderboard
Note: Comprehensive evaluation across multiple domains and interaction types, leveraging diverse datasets:
- BFCL: mathematics, entertainment, education, and academic domains
- τ-bench: retail and airline industry scenarios
- xLAM: cross-domain data generation (21 domains)
- ToolACE: API interactions across 390 domains
zai-org/SWE-Dev-train
Note: Training traces (collected with the OpenHands framework) that helped a 32B model reach GPT-4o-level performance on SWE-bench.
SWE-Gym/SWE-Gym