Search Arena: Analyzing Search-Augmented LLMs
Search-augmented language models combine web search with Large Language Models (LLMs) to improve the groundedness and freshness of responses. However, analyzing these systems remains challenging: existing datasets are limited in scale and narrow in scope, often constrained to static, single-turn, fact-checking questions. In this work, we introduce Search Arena, a crowd-sourced, large-scale human-preference dataset of over 24,000 paired multi-turn user interactions with search-augmented LLMs. The dataset spans diverse intents and languages and contains full system traces with around 12,000 human preference votes. Our analysis reveals that user preferences are influenced by the number of citations, even when the cited content does not directly support the attributed claims, uncovering a gap between perceived and actual credibility. Furthermore, user preferences vary across cited sources: community-driven platforms are generally preferred, while static encyclopedic sources are not always viewed as appropriate or reliable. To assess performance across settings, we conduct cross-arena analyses, testing search-augmented LLMs in a general-purpose chat environment and conventional LLMs in search-intensive settings. We find that web search does not degrade, and may even improve, performance in non-search settings; in search settings, however, quality suffers significantly when models rely solely on parametric knowledge. We open-source the dataset to support future research in this direction.
Fact-checking accounts for only one-fifth of real-world user queries; the majority of user prompts, such as requests for analyses, recommendations, or problem-solving guidance, require a combination of factual retrieval, reasoning, and open-ended dialogue. User expectations also extend beyond factual correctness: preferences can be shaped by the number, relevance, and credibility of citations, as well as by the presentation style of responses.
We find that reasoning, a larger search context window, and longer responses are positively associated with user preference. Since citations are central to the trustworthiness of web-grounded outputs, we also examine citation features. Our results show that users prefer responses with a higher number of cited sources (Figure 4). In addition, users prefer responses citing tech-related platforms, community blogs, and social networks, but favor Wikipedia less (Figure 6). While correctly attributed citations, as expected, are positively associated with user preference (β=0.285), we also observe a positive association between user preference and the number of irrelevant citations (β=0.273). This finding raises the concern that users may be swayed by the mere presence of citations, even when those citations do not support the associated claims.
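The β coefficients above can be read as effects in a pairwise preference model. As a minimal sketch (using synthetic data and illustrative features, not the paper's actual analysis pipeline), one can fit a logistic model where the probability that response A is preferred over response B depends on the difference in their feature values, e.g., citation counts:

```python
import numpy as np

# Hypothetical sketch: estimate feature effects on pairwise preference
# votes via logistic regression on feature differences (A minus B).
# The features and effect sizes here are illustrative assumptions.
rng = np.random.default_rng(0)

n = 2000
# Columns: [difference in citation count, difference in response length]
X = rng.normal(size=(n, 2))
true_beta = np.array([0.27, 0.5])  # assumed ground-truth effects

# Simulate votes: y = 1 means response A won the comparison
p = 1.0 / (1.0 + np.exp(-X @ true_beta))
y = (rng.random(n) < p).astype(float)

# Fit by gradient ascent on the log-likelihood
beta = np.zeros(2)
lr = 0.1
for _ in range(500):
    preds = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += lr * (X.T @ (y - preds)) / n

# Each coefficient's sign and magnitude indicate how a feature
# difference is associated with winning a preference vote.
print(np.round(beta, 2))
```

A positive coefficient on the citation-count difference would mirror the paper's finding that more citations predict winning a vote, independent of whether those citations support the claims.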