DeepResearchGym: A Free, Transparent, and Reproducible Evaluation Sandbox for Deep Research

Paper · arXiv 2505.19253 · Published May 25, 2025
Agentic Research · Deep Research · Evaluations

Deep research systems represent an emerging class of agentic information retrieval methods that generate comprehensive, well-supported reports in response to complex queries. However, most existing frameworks rely on dynamic commercial search APIs, which pose reproducibility and transparency challenges in addition to their cost. To address these limitations, we introduce DeepResearchGym, an open-source sandbox that combines a reproducible search API with a rigorous evaluation protocol for benchmarking deep research systems. The API indexes large-scale public web corpora, namely ClueWeb22 and FineWeb, using a state-of-the-art dense retriever and approximate nearest neighbor search via DiskANN. It achieves lower latency than popular commercial APIs while ensuring stable document rankings across runs, and is freely available for research use. To evaluate deep research systems’ outputs, we extend the Researchy Questions benchmark with automatic LLM-as-a-judge metrics that measure alignment with users’ information needs, retrieval faithfulness, and report quality. Experimental results show that systems integrated with DeepResearchGym achieve performance comparable to those using commercial APIs, with performance rankings remaining consistent across evaluation metrics. A human evaluation study further confirms that our automatic protocol aligns with human preferences, validating the framework’s ability to support controlled assessment of deep research systems. Our code and API documentation are available at https://www.deepresearchgym.ai.
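The retrieval pipeline the abstract describes can be illustrated with a minimal sketch: embed documents and queries into a shared dense vector space, then return the highest-scoring documents for a query. This is not the DeepResearchGym implementation; the `embed` function below is a hypothetical, deterministic stand-in for a real dense retriever, and the brute-force scan stands in for the approximate nearest neighbor search that DiskANN performs over a disk-resident index.

```python
import hashlib
import numpy as np

def embed(text: str, dim: int = 16) -> np.ndarray:
    """Toy deterministic embedding (hash-seeded), NOT a real dense retriever.

    Seeding from a content hash keeps rankings stable across runs, mirroring
    the reproducibility property the sandbox provides at full scale.
    """
    seed = int(hashlib.md5(text.encode()).hexdigest()[:8], 16)
    v = np.random.default_rng(seed).standard_normal(dim)
    return v / np.linalg.norm(v)  # unit-normalize so dot product = cosine sim

def search(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k corpus documents most similar to the query embedding."""
    q = embed(query)
    doc_matrix = np.stack([embed(d) for d in corpus])
    scores = doc_matrix @ q           # cosine similarities (unit vectors)
    top = np.argsort(-scores)[:k]     # indices of the k best-scoring docs
    return [corpus[i] for i in top]

corpus = ["doc about climate", "doc about llms", "doc about retrieval"]
results = search("dense retrieval", corpus, k=2)
print(results)
```

In the real system, the exact dot-product scan over `doc_matrix` is replaced by an approximate index so that a web-scale corpus can be searched with low latency, at a small cost in recall.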

Building on this foundation, several deep research systems have been optimized for short-form, factoid-style question answering. These include reinforcement learning approaches that enable search agents to autonomously navigate the web, issue iterative queries, and synthesize concise responses [10, 33, 42], as well as prompt-based methods like Search-o1 [14], which equips LLMs with the ability to trigger web searches when encountering knowledge gaps, leveraging the collected evidence to guide synthesis.
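The search-when-uncertain control flow behind methods like Search-o1 can be sketched as a simple controller loop: the model either requests a search or emits a final answer, and retrieved evidence is appended to its context before the next step. This is a conceptual sketch only; `llm_step` and `web_search` are hypothetical stand-ins, not the paper's interfaces.

```python
def run_search_loop(llm_step, web_search, question, max_searches=3):
    """Alternate model steps with searches until the model answers.

    llm_step(context) returns ("search", query) when it hits a knowledge
    gap, or ("answer", text) when it can respond from its context.
    """
    context = [("question", question)]
    for _ in range(max_searches + 1):
        action, payload = llm_step(context)
        if action == "answer":
            return payload
        # Knowledge gap: retrieve evidence and make it visible to the model.
        context.append(("evidence", web_search(payload)))
    return None  # search budget exhausted without a final answer

# Toy stand-ins to exercise the loop (not a real LLM or search API):
def toy_llm(context):
    if any(kind == "evidence" for kind, _ in context):
        return ("answer", "synthesized from: " + context[-1][1])
    return ("search", "deep research evaluation")

def toy_search(query):
    return f"snippet about '{query}'"

answer = run_search_loop(toy_llm, toy_search, "What is DeepResearchGym?")
print(answer)  # one search round, then an answer grounded in the snippet
```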

A complementary line of work has advanced towards comprehensive long-form report generation frameworks. GPTResearcher [5] orchestrates multi-agent workflows to coordinate planning, retrieval, and drafting across hybrid data sources, incorporating techniques such as report planning [38] and query decomposition [4] to enhance long-form synthesis, while maintaining coherence and completeness. Building on these paradigms, other deep research systems emphasize agentic tool use to extend reasoning capabilities beyond pure text-based retrieval. For instance, OpenDeepSearch [1] implements two agentic variants: one that follows an action-observation cycle, allowing the model to iteratively query external resources and refine its reasoning; and another that augments this by generating and executing Python scripts for more complex computational tasks.
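The two OpenDeepSearch variants described above share an action-observation skeleton, which the second variant extends with a code-execution action. The sketch below is illustrative, assuming a generic policy/tools interface rather than OpenDeepSearch's actual API; in practice, executing model-written code would require sandboxing rather than the trusted `eval` used here on a toy expression.

```python
def agent_loop(policy, tools, task, max_steps=5):
    """Alternate policy decisions with tool observations until 'finish'.

    policy(history) returns (action, argument); each non-finish action is
    dispatched to a tool whose output becomes the next observation.
    """
    history = [("task", task)]
    for _ in range(max_steps):
        action, arg = policy(history)
        if action == "finish":
            return arg
        observation = tools[action](arg)       # e.g. "search" or "python"
        history.append((action + "_result", observation))
    return None  # step budget exhausted

tools = {
    "search": lambda q: f"results for '{q}'",
    # Illustrative compute action; real systems must sandbox generated code.
    "python": lambda code: str(eval(code)),
}

def toy_policy(history):
    """Scripted stand-in for a model policy: search, compute, then finish."""
    kinds = [kind for kind, _ in history]
    if "search_result" not in kinds:
        return ("search", "population figures")
    if "python_result" not in kinds:
        return ("python", "2 + 2")
    return ("finish", f"computed {history[-1][1]} using retrieved evidence")

report = agent_loop(toy_policy, tools, "sum two figures")
print(report)
```

The first variant corresponds to running this loop with retrieval tools only; adding the compute action lets the agent handle tasks where pure text retrieval is insufficient.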