Question Answering and Search
Related topics:
- A Non-Factoid Question-Answering Taxonomy: Proposes question categories including INSTRUCTION, REASON, EVIDENCE-BASED, COMPARISON, EXPERIENCE, and DEBATE. INSTRUCTION: You want to understand the procedure/method of doing/achieving something. Instructions/guidelines provided in a step-…
- ALIGN: Prompt-based Attribute Alignment for Reliable, Responsible, and Personalized LLM-based Decision-Making: Large language models (LLMs) are increasingly being used as decision aids. However, users have diverse values and preferences that can affect their decision-making, which requires novel methods for LL…
- Abg-CoQA: Clarifying Ambiguity in Conversational Question Answering: We introduce Abg-CoQA, a novel crowdsourced dataset for clarifying ambiguities in conversational question answering systems. Our dataset contains 8,615 questions with answers, where 994 questions are a…
- Active Listening: Personalized Question Generation in Open-Domain Social Conversation with User Model Based Prompting: We hypothesize that users of conversational systems want a more personalized experience, and existing work shows that users are highly receptive to personalized questions (PQs). Question Generation ta…
- Aligning Language Models to Explicitly Handle Ambiguity: However, conversational agents built upon even the most recent large language models (LLMs) face challenges in processing ambiguous inputs, primarily due to the following two hurdles: (1) LLMs are not…
- Answer is All You Need: Instruction-following Text Embedding via Answering the Question: This work aims to build a text embedder that can capture characteristics of texts specified by user instructions. Despite its tremendous potential to deploy user-oriented embeddings, none of previous …
- Asking Clarifying Questions Based on Negative Feedback in Conversational Search: Conversational search systems make it possible to improve user satisfaction by asking questions to clarify users’ search intents. This, however, can take significant effort to answer a series of quest…
- Automatic Extraction of Metaphoric Analogies from Literary Texts: Task Formulation, Dataset Construction, and Evaluation: Extracting metaphors and analogies from free text requires high-level reasoning abilities such as abstraction and language understanding. Our study focuses on the extraction of the concepts that form …
- Backtracing: Retrieving the Cause of the Query: While information retrieval (IR) systems may provide answers for such user queries, they do not directly assist content creators—such as lecturers who want to improve their content—in identifying segments t…
- Beyond Accuracy: The Role of Calibration in Self-Improving Large Language Models: Large Language Models (LLMs) have demonstrated remarkable self-improvement capabilities, whereby models iteratively revise their outputs through self-generated feedback. While this reflective mechanis…
- Building and Evaluating Open-Domain Dialogue Corpora with Clarifying Questions: Nass and Moon (2000) conclude that people have similar expectations from talking to bots and humans. This similarity is a possible explanation for why sometimes user requests might be ambiguous and in…
- Can Large Language Models Understand Context? Understanding context is key to understanding human language, an ability which Large Language Models (LLMs) have increasingly been seen to demonstrate to an impressive extent. However, though the eval…
- Chain-of-Questions Training with Latent Answers for Robust Multistep Question Answering: Our framework trains a model to generate sub-questions and their corresponding sub-answers one at a time, as shown in Fig. 1, then aggregates those sub-answers to answer the original question.
- Clarifying the Path to User Satisfaction: An Investigation into Clarification Usefulness: Several models are proposed in the ConvAI3 challenge (Aliannejadi et al., 2020) aiming to incorporate CQs in the ranking process, mostly built on pre-trained language models. Complementing t…
- Cue-CoT: Chain-of-thought Prompting for Responding to In-depth Dialogue Questions with LLMs: However, most previous work prompts the LLMs to directly generate a response based on the dialogue context, overlooking the underlying linguistic cues about the user status exhibited in the con…
- Diplomat: A Dialogue Dataset for Situated PragMATic Reasoning: We introduce a new benchmark, Diplomat, aiming at a unified paradigm for pragmatic reasoning and situated conversational understanding. Compared with previous works that treat different figurative ex…
- Do We Trust ChatGPT as much as Google Search and Wikipedia? Understanding how users perceive content from generative AI tools is crucial because it can help reduce unwarranted trust in inaccurate information and mitigate the spread of misinformation. A focus g…
- Domain-specific Question Answering with Hybrid Search: With the increasing adoption of Large Language Models (LLMs) in enterprise settings, ensuring accurate and reliable question-answering systems remains a critical challenge. Building upon our previous …
- Editing-Based SQL Query Generation for Cross-Domain Context-Dependent Questions: We focus on the cross-domain, context-dependent text-to-SQL generation task. Based on the observation that adjacent natural language questions are often linguistically dependent and their corresponding…
- Enhancing AI-Assisted Group Decision Making through LLM-Powered Devil's Advocate: Group decision making plays a crucial role in our complex and interconnected world. The rise of AI technologies has the potential to provide data-driven insights to facilitate group decision making, a…
- From Passive to Active Reasoning: Can Large Language Models Ask the Right Questions under Incomplete Information? While existing benchmarks probe the reasoning abilities of large language models (LLMs) across diverse domains, they predominantly assess passive reasoning, providing models with all the information n…
- Generator-Retriever-Generator: A Novel Approach to Open-domain Question Answering: Open-domain question answering (QA) tasks usually require the retrieval of relevant information from a large corpus to generate accurate answers. We propose a novel approach called Generator-Retrieve…
- JointLK: Joint Reasoning with Language Models and Knowledge Graphs for Commonsense Question Answering: An extensive research path is to elaborately design graph neural networks (GNNs) (Scarselli et al., 2008) to perform reasoning over explicit structural common sense knowledge from external knowledge …
- Knowledge Graph Prompting for Multi-Document Question Answering: The ‘pre-train, prompt, predict’ paradigm of large language models (LLMs) has achieved remarkable success in open-domain question answering (OD-QA). However, few works explore this paradigm in the sc…
- Knowledge Retrieval Based on Generative AI: This study develops a question-answering system based on Retrieval-Augmented Generation (RAG) using Chinese Wikipedia and Lawbank as retrieval sources. Using TTQA and TMMLU+ as evaluation dat…
- Language Models Are Greedy Reasoners: A Systematic Formal Analysis of Chain-of-Thought: It is unclear how these models obtain the answers and whether they rely on simple heuristics rather than the generated chain-of-thought. To enable systematic exploration of the reasoning ability of LL…
- Large Language Models Know Your Contextual Search Intent: A Prompting Framework for Conversational Search: However, one of the main challenges for this vision is that users’ queries may contain linguistic problems (e.g., omissions and coreference), and it becomes much harder to capture th…
- Large Language Models Meet Knowledge Graphs for Question Answering: Synthesis and Opportunities: However, LLM-based QA struggles with complex QA tasks due to poor reasoning capacity, outdated knowledge, and hallucinations. Several recent works synthesize LLMs and knowledge graphs (KGs) for QA to …
- Learning to Ask Appropriate Questions in Conversational Recommendation: Conversational recommender systems (CRSs) have revolutionized the conventional recommendation paradigm by embracing dialogue agents to dynamically capture fine-grained user preferences. In a typica…
- Learning to Ask Critical Questions for Assisting Product Search: In this paper, we propose a dual-learning model that combines the best of both implicit session feedback and proactive clarification with users on the most critical questions. Hence, there are two br…
- Learning to Select the Relevant History Turns in Conversational Question Answering: The increasing demand for web-based digital assistants has rapidly raised the interest of the Information Retrieval (IR) community in the field of conversational question answering (ConvQ…
- LongRAG: A Dual-Perspective Retrieval-Augmented Generation Paradigm for Long-Context Question Answering: Long-context question answering (LCQA) (Caciularu et al., 2022), which has recently been advanced significantly by LLMs, is a complex task that requires reasoning over a long document or multiple docum…
- Minds versus Machines: Rethinking Entailment Verification with Language Models: Leveraging a comprehensively curated entailment verification benchmark, we evaluate both human and LLM performance across various reasoning categories. Our benchmark includes datasets from three categ…
- Multi-hop Question Answering via Reasoning Chains: Multi-hop question answering requires models to gather information from different parts of a text to answer a question. Most current approaches learn to address this task in an end-to-end way with neu…
- MultiChallenge: A Realistic Multi-Turn Conversation Evaluation Benchmark Challenging to Frontier LLMs: We present MultiChallenge, a pioneering benchmark evaluating large language models (LLMs) on conducting multi-turn conversations with human users, a crucial yet underexamined capability for their appl…
- News Source Citing Patterns in AI Search Systems: We address this gap by analyzing data from the AI Search Arena, a head-to-head evaluation platform for AI search systems. The dataset comprises over 24,000 conversations and 65,000 responses from mode…
- No that's not what I meant: Handling Third Position Repair in Conversational Question Answering: The ability to handle miscommunication is crucial to robust and faithful conversational AI. People usually deal with miscommunication immediately as they detect it, using highly systematic interaction…
- Probing Structured Semantics Understanding and Generation of Language Models via Question Answering: As John McCarthy (McCarthy, 1990, 1959) points out, in order to gain a better understanding of natural language, an intelligent system must understand the “deep structure” (Chomsky, 2011…
- Probing the Multi-turn Planning Capabilities of LLMs via 20 Question Games: Large language models (LLMs) are effective at answering questions that are clearly asked. However, when faced with ambiguous queries they can act unpredictably and produce incorrect outputs. This unde…
- Progressive-Hint Prompting Improves Reasoning in Large Language Models: The performance of Large Language Models (LLMs) in reasoning tasks depends heavily on prompt design, with Chain-of-Thought (CoT) and self-consistency being critical methods that enhance this ability. …
- Query Understanding in the Age of Large Language Models: The central problem of IR systems, also referred to as the “holy grail” of IR, is to overcome the vocabulary mismatch between the user and the system [75]. This leads to the challenge of matching the…
- QuestBench: Can LLMs ask the right question to acquire information in reasoning tasks? Large language models (LLMs) have shown impressive performance on reasoning benchmarks like math and logic. While many works have largely assumed well-defined tasks, real-world queries are often under…
- ReasonVQA: A Multi-hop Reasoning Benchmark with Structural Knowledge for Visual Question Answering: In this paper, we propose a new dataset, ReasonVQA, for the Visual Question Answering (VQA) task. Our dataset is automatically integrated with structured encyclopedic knowledge and constructed using a…
- Reinforcement Learning for Optimizing RAG for Domain Chatbots: Large Language Model (LLM) conversational assistants have become prevalent for domain use cases. LLMs acquire the ability to perform contextual question answering through extensive training, and Retrieval A…
- SEAL: Self-Evolving Agentic Learning for Conversational Question Answering over Knowledge Graphs: Knowledge-based conversational question answering (KBCQA) confronts persistent challenges in resolving coreference, modeling contextual dependencies, and executing complex logical reasoning. Existing …
- Sleep-time Compute: Beyond Inference Scaling at Test-time: Scaling test-time compute has emerged as a key ingredient for enabling large language models (LLMs) to solve difficult problems, but it comes with high latency and inference cost. We introduce sleep-time…
- Stream of Search (SoS): Learning to Search in Language: Language models are rarely shown fruitful mistakes during training. They then struggle to look beyond the next token, suffering from snowballing errors and failing to predict the consequences of…
- Structured and Natural Responses Co-generation for Conversational Search
- Talking About Large Language Models: Third, a great many tasks that demand intelligence in humans can be reduced to next token prediction with a sufficiently performant model. It is the last of these three surprises that is the focus of…
- The Consensus Game: Language Model Generation via Equilibrium Search: When applied to question answering and other text generation tasks, language models (LMs) may be queried generatively (by sampling answers from their output distribution) or discriminatively (by using…
- Think-on-Graph: Deep and Responsible Reasoning of Large Language Model with Knowledge Graph: Large language models (LLMs) have made significant strides in various tasks, yet they often struggle with complex reasoning and exhibit poor performance in scenarios where knowledge traceability, tim…
- Thinking Assistants: LLM-Based Conversational Assistants that Help Users Think By Asking rather than Answering: Complex tasks like research and strategic thinking often benefit from a more comprehensive approach to augmenting the thinking process rather than passively getting information. We introduce the conce…
- Thread: A Logic-Based Data Organization Paradigm for How-To Question Answering with Retrieval Augmented Generation: Recent advances in retrieval-augmented generation have significantly improved the performance of question-answering systems, particularly on factoid ‘5Ws’ questions. However, these systems still face …
- Topic Shift Detection for Mixed Initiative Response: Conversational systems have become part and parcel of our everyday life, and virtual assistants like Amazon's Alexa, Google Home, or Apple's Siri are soon becoming conventional household items (Te…
- Typed-RAG: Type-aware Multi-Aspect Decomposition for Non-Factoid Question Answering: Non-factoid question answering (NFQA) poses a significant challenge due to its open-ended nature, diverse intents, and the need for multi-aspect reasoning, which renders conventional factoid QA approa…
- Uncertainty of Thoughts: Uncertainty-Aware Planning Enhances Information Seeking in Large Language Models: In this work, we introduce Uncertainty of Thoughts (UoT), an algorithm to augment large language models with the ability to actively seek information by asking effective questions. UoT combines 1) an …
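Many of the entries above (Domain-specific Question Answering with Hybrid Search, Knowledge Retrieval Based on Generative AI, LongRAG, Reinforcement Learning for Optimizing RAG for Domain Chatbots, Typed-RAG) share the retrieval-augmented generation pattern: retrieve passages relevant to the question, then condition answer generation on them. A minimal sketch of that loop, with an illustrative keyword-overlap retriever standing in for a real dense or hybrid retriever, and a template string standing in for the LLM call; the corpus and all function names here are invented for illustration, not taken from any paper above:

```python
# Minimal retrieval-augmented QA sketch (toy stand-ins throughout).
# A real system would use BM25 and/or embedding similarity for retrieval
# and an LLM for the final answer.

def tokenize(text: str) -> set[str]:
    """Lowercased bag-of-words; real systems use proper tokenizers."""
    return set(text.lower().split())

def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by keyword overlap with the question, keep top-k."""
    q = tokenize(question)
    ranked = sorted(corpus, key=lambda p: len(q & tokenize(p)), reverse=True)
    return ranked[:k]

def answer(question: str, corpus: list[str]) -> str:
    """Condition the (stand-in) generator on the retrieved context."""
    context = " ".join(retrieve(question, corpus))
    # An LLM call would go here; we simply echo the grounded context.
    return f"Q: {question}\nContext: {context}"

corpus = [
    "RAG systems retrieve passages before generating an answer.",
    "Knowledge graphs encode entities and relations.",
    "Clarifying questions resolve ambiguous user intents.",
]
print(answer("How do RAG systems retrieve an answer?", corpus))
```

The design point shared across the RAG papers listed is that generation quality is bounded by retrieval quality, which is why several entries focus on the retriever (hybrid search, long-context chunking, RL-tuned retrieval) rather than the generator.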