Prompts and Prompting
Related topics:
- 1. **ELI5 (Explain Like I'm 5)**: Let AI explain anything you don’t understand, fast and without complicated prompts. Just type `ELI5: [your topic]` and get a simple, clear explanation.
- 2. **TL;DR (Summarize Long Text)**: Want a quick summary? Just write `TLDR:` and paste in any long text you want condensed. It’s that easy.
- 3. **Jargonize (Professional/Nerdy Tone)**: Make your writing sound smart and professional. Perfect for LinkedIn posts, pitch decks, whitepapers, and emails. Just add `Jargonize:` before your text.
- 4. **Humanize (Sound More Natural)**: Struggling to make AI sound human? No extra tools needed: just type `Humanize:` before your prompt and get natural, conversational responses. Bonus: no more cringe words like “revolutionary,” “ga…
- 5. **Feynman Technique (Deep Understanding)**: Go beyond the basics and really understand complex topics. This 4-step technique breaks things down so you actually get it: · Teach it to a child (ELI5) · Identify knowledge gaps · … (a code sketch wiring shortcuts 1–4 into a single helper follows below)
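Each of shortcuts 1–4 above is just a fixed prefix on a chat message, so they are trivial to wrap in code. A minimal sketch, assuming the official OpenAI Python client; the model name and the `prefix_prompt` helper are illustrative choices, not part of the tips above:

```python
from openai import OpenAI  # official openai>=1.0 client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def prefix_prompt(shortcut: str, text: str, model: str = "gpt-4o-mini") -> str:
    """Send `text` prefixed with one of the shortcut keywords (ELI5, TLDR, ...)."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": f"{shortcut}: {text}"}],
    )
    return response.choices[0].message.content

print(prefix_prompt("ELI5", "quantum entanglement"))
print(prefix_prompt("TLDR", open("long_report.txt").read()))
print(prefix_prompt("Jargonize", "we made the app faster"))
print(prefix_prompt("Humanize", "Our revolutionary solution leverages synergies."))
```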
- **A Survey on Large Language Models for Recommendation**: Likang Wu, Zhi Zheng, Zhaopeng Qiu, Hao Wang, Hongchao Gu, Tingjia Shen, Chuan Qin, Chen Zhu, Hengshu Zhu, Qi Liu, Hui Xiong, Enhong Chen. University of Science and Technology of China, Career Scie…
- **A Survey on Prompt Tuning**: Prompt tuning has emerged as a promising parameter-efficient fine-tuning (PEFT) approach that offers several advantages: (1) parameter efficiency through updating only a small group of continuous vect…
- **A comprehensive taxonomy of hallucinations in Large Language Models**: This report provides a comprehensive taxonomy of LLM hallucinations, beginning with a formal definition and a theoretical framework that posits its inherent inevitability in computable LLMs, irrespect…
- **ALIGN: Prompt-based Attribute Alignment for Reliable, Responsible, and Personalized LLM-based Decision-Making**: Large language models (LLMs) are increasingly being used as decision aids. However, users have diverse values and preferences that can affect their decision-making, which requires novel methods for LL…
- **Ask, and it shall be given: Turing completeness of prompting**: In this work, we show that prompting is in fact Turing-complete: there exists a finite-size Transformer such that for any computable function, there exists a corresponding prompt following which the T…
- **Attribute Controlled Dialogue Prompting**: “Prompt-tuning has become an increasingly popular parameter-efficient method for adapting large pretrained language models to downstream tasks. However, both discrete prompting and continuous promptin…
- **AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts**: we develop AUTOPROMPT, an automated method to create prompts for a diverse set of tasks, based on a gradient-guided search. Using AUTOPROMPT, we show that masked language models (MLMs) have an inheren…
- **Automatic Prompt Augmentation and Selection with Chain-of-Thought from Labeled Data**: “The recent success in large language models (LLMs) has shown that properly prompted LLMs demonstrate emergent capabilities on complex understanding and question-answering tasks (Wei et al., 2022a). E…
- **Automatic Prompt Optimization with "Gradient Descent" and Beam Search**: We propose a simple and nonparametric solution to this problem, Prompt Optimization with Textual Gradients (ProTeGi), which is inspired by numerical gradient descent to automatically improve prompts, …
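The ProTeGi loop sketches out naturally in code: score the current prompt on a labeled minibatch, ask the LLM to criticize its failures (the textual "gradient"), then ask it to rewrite the prompt against that critique, keeping a beam of the best candidates. A simplified sketch; `llm` (prompt string in, completion out) and `score` (accuracy of a prompt on a batch) are assumed placeholders, not the paper's actual interfaces:

```python
def protegi_step(prompt, batch, llm, score, beam_width=4):
    """One simplified ProTeGi-style step: critique, rewrite, select."""
    # 1. Collect failures of the current prompt on the labeled minibatch.
    failures = [(x, y) for x, y in batch
                if llm(f"{prompt}\n\nInput: {x}\nOutput:").strip() != y]
    if not failures:
        return prompt
    # 2. Textual "gradient": a natural-language critique of the prompt.
    critique = llm(
        f"This prompt failed on the examples below.\n"
        f"Prompt: {prompt}\nFailures: {failures}\n"
        f"Briefly explain what is wrong with the prompt."
    )
    # 3. Edit the prompt "against" the gradient, several candidates at once.
    candidates = [prompt] + [
        llm(f"Rewrite the prompt to address the critique.\n"
            f"Prompt: {prompt}\nCritique: {critique}\nRewritten prompt:")
        for _ in range(beam_width)
    ]
    # 4. Beam selection: keep the candidate that scores best on the batch.
    return max(candidates, key=lambda p: score(p, batch))
```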
- **Beyond Prompt-Induced Lies: Investigating LLM Deception on Benign Prompts**: Large Language Models (LLMs) have been widely deployed in reasoning, planning, and decision-making tasks, making their trustworthiness a critical concern. The potential for intentional deception, wher…
- **Boosted Prompt Ensembles for Large Language Models**: we propose a prompt ensembling method for large language models, which uses a small dataset to construct a set of few-shot prompts that together comprise a “boosted prompt ensemble”. The few-shot exam…
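Construction-wise, a boosted prompt ensemble grows one few-shot prompt per round, each built from the training items the current ensemble agrees on least; at inference, the ensemble answers by majority vote. A rough sketch under those assumptions; `llm`, `make_prompt`, and the `(question, answer)` item format are placeholders, not the paper's code:

```python
from collections import Counter

def build_boosted_ensemble(train, llm, make_prompt, rounds=3, shots=4):
    """Grow an ensemble of few-shot prompts, each round focusing on the
    training questions the current ensemble disagrees on most."""
    ensemble = [make_prompt(train[:shots])]  # seed prompt from a few items
    for _ in range(rounds):
        def agreement(item):
            question, _ = item
            votes = Counter(llm(p + "\n" + question) for p in ensemble)
            return votes.most_common(1)[0][1] / len(ensemble)
        hardest = sorted(train, key=agreement)[:shots]  # least agreement first
        ensemble.append(make_prompt(hardest))
    return ensemble

def ensemble_answer(ensemble, question, llm):
    """Majority vote over the ensemble's per-prompt answers."""
    votes = Counter(llm(p + "\n" + question) for p in ensemble)
    return votes.most_common(1)[0][0]
```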
- **Bridging the gulf of envisioning: Cognitive design challenges in LLM interfaces**: Large language models (LLMs) exhibit dynamic capabilities and appear to comprehend complex and ambiguous natural language prompts. However, calibrating LLM interactions is challenging for interface de…
- **CDW-CoT: Clustered Distance-Weighted Chain-of-Thoughts Reasoning**: Large Language Models (LLMs) have recently achieved impressive results in complex reasoning tasks through Chain of Thought (CoT) prompting. However, most existing CoT methods rely on using the same pr…
- **Can AI Have a Personality? Prompt Engineering for AI Personality Simulation: A Chatbot Case Study in Gender-Affirming Voice Therapy Training**: This thesis investigates whether large language models (LLMs) can be guided to simulate a consistent personality through prompt engineering. The study explores this concept within the context…
- **ChatGPT codes**
- **CoT-Self-Instruct: Building high-quality synthetic prompts for reasoning and non-reasoning tasks**: We propose CoT-Self-Instruct, a synthetic data generation method that instructs LLMs to first reason and plan via Chain-of-Thought (CoT) based on the given seed tasks, and then to generate a new synth…
- **CogBench: a large language model walks into a psychology lab**: CogBench is a benchmark that includes ten behavioral metrics derived from seven cognitive psychology experiments. This novel approach offers a toolkit for phenotyping LLMs’ behavior. We apply CogBench t…
- **Complexity-Based Prompting for Multi-Step Reasoning**: which reasoning examples make the most effective prompts? In this work, we propose complexity-based prompting, a simple and effective example selection scheme for multi-step reasoning. We show that pr…
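In practice the selection rule is simple: among candidate chain-of-thought demonstrations, prefer those with the most reasoning steps (and, at inference, keep the majority answer over the most complex sampled chains). A minimal sketch of the selection side; counting steps as non-empty rationale lines is an assumption, not necessarily the paper's exact criterion:

```python
def select_complex_demos(demos, k=8):
    """Pick the k chain-of-thought demos with the most reasoning steps,
    approximating a 'step' as one non-empty line of the rationale."""
    def n_steps(demo):
        return sum(1 for line in demo["rationale"].splitlines() if line.strip())
    return sorted(demos, key=n_steps, reverse=True)[:k]

# Usage: demos is a list of {"question": ..., "rationale": ..., "answer": ...}
# dicts; the selected demos are concatenated into the few-shot prompt.
```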
- **Conversational Prompt Engineering**: Conversational Prompt Engineering (CPE) is a user-friendly tool that helps users create personalized prompts for their specific tasks. CPE uses a chat model to briefly interact with users, helping them …
- **Decomposed Prompting: A Modular Approach for Solving Complex Tasks**: Few-shot prompting is a surprisingly powerful way to use Large Language Models (LLMs) to solve various tasks. However, this approach struggles as the task complexity increases or when the individual r…
- **Deep Language Networks: Joint Prompt Training of Stacked LLMs using Variational Inference**: “Particularly, we view LLMs as language layers in a Deep Language Network (DLN). The learnable parameters of each layer are the associated natural language prompts and the LLM at a given layer receive…
- **Dialogue State Tracking with a Language Model using Schema-Driven Prompting**: Task-oriented conversational systems often use dialogue state tracking to represent the user’s intentions, which involves filling in values of pre-defined slots. Many approaches have been proposed, of…
- **Do Prompt-Based Models Really Understand the Meaning of Their Prompts?**: “While recent years saw a gold rush of papers (summarized in §2) that proposed automatic methods for optimizing prompts, Logan IV et al. (2021) compare a representative sample of these newly proposed me…
- **Dynamic Prompting: A Unified Framework for Prompt Tuning**: the efficacy of employing fixed soft prompts with a predetermined position for concatenation with inputs for all instances, irrespective of their inherent disparities, remains uncertain. Variables suc…
- **EmotionPrompt: Leveraging Psychology for Large Language Models Enhancement via Emotional Stimulus**: “Large language models (LLMs) have achieved significant performance in many fields, such as reasoning, language understanding, and math problem-solving, and are regarded as an important step to artifi…
- **Empowering Psychotherapy with Large Language Models: Cognitive Distortion Detection through Diagnosis of Thought Prompting**: In the era of Large Language Models, we believe it is the right time to develop AI assistance for computational psychotherapy. We study the task of cognitive distortion detection and propose the Diagn…
- **Extrapolation by Association: Length Generalization Transfer in Transformers**: Transformer language models have demonstrated impressive generalization capabilities in natural language domains, yet we lack a fine-grained understanding of how such generalization arises. In this pa…
- **From Prompt Engineering to Prompt Science With Human in the Loop**: Large Language Models (LLMs) have in recent years become sophisticated and capable enough to be applicable in many situations and tasks. These tasks are not limited to information extract…
- **From Sparse to Dense: GPT-4 Summarization with Chain of Density Prompting**: “Selecting the “right” amount of information to include in a summary is a difficult task. A good summary should be detailed and entity-centric without being overly dense and hard to follow. To better …
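Chain of Density is a single prompt that asks for an initial entity-sparse summary followed by several rewrites, each folding in a few missing entities at constant length. A sketch of such a template; the wording below is a paraphrase of the idea, not the paper's verbatim prompt, and `llm` is an assumed completion function:

```python
COD_TEMPLATE = """Article: {article}

You will write increasingly dense summaries of the article above.
Repeat the following two steps 5 times:
Step 1. Identify 1-3 informative entities from the article that are
missing from your previous summary.
Step 2. Write a new summary of identical length that covers everything
in the previous summary plus the missing entities.
Never drop an entity; if space runs out, fuse and compress instead.
Answer with a numbered list of the 5 summaries."""

def chain_of_density(article: str, llm) -> str:
    # `llm`: assumed placeholder taking a prompt string, returning text.
    return llm(COD_TEMPLATE.format(article=article))
```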
- **Generating Proto-Personas through Prompt Engineering: A Case Study on Efficiency, Effectiveness and Empathy**: In this paper, we propose and empirically investigate a prompt engineering-based approach to generate proto-personas with the support of Generative AI (GenAI). Our goal is to evaluate the approach in …
- **Guiding Large Language Models via Directional Stimulus Prompting**: “Since directly optimizing LLMs for specific tasks is either inefficient or infeasible for most users and developers, researchers resort to optimizing prompts instead. Prompt engineering approaches, …
- **How Many Instructions Can LLMs Follow at Once?**: Production-grade LLM systems require robust adherence to dozens or even hundreds of instructions simultaneously. However, the instruction-following capabilities of LLMs at high instruction densities h…
- **Inference-Aware Prompt Optimization for Aligning Black-Box Large Language Models**: Prompt optimization methods have demonstrated significant effectiveness in aligning black-box large language models (LLMs). In parallel, inference scaling strategies such as BEST-OF-N Sampling and MAJ…
- **Instance-adaptive Zero-shot Chain-of-Thought Prompting**: the efficacy of a singular, task-level prompt applied uniformly across all instances is inherently limited, since one prompt cannot be a good partner for all; a more appropriate approach shoul…
- **Instruction Induction: From Few Examples to Natural Language Task Descriptions**: **The Instruction Paradigm** Efrat and Levy [2020] propose to learn new tasks from natural language instructions. Mishra et al. [2022] and Wang et al. [2022b] collect crowdsourcing instructions used t…
- **Investigating task-specific prompts and sparse autoencoders for activation monitoring**: Language models can behave in unexpected and unsafe ways, and so it is valuable to monitor their outputs. Internal activations of language models encode additional information that could be useful for…
- **KiPT: Knowledge-injected Prompt Tuning for Event Detection**: Event detection aims to detect events from the text by identifying and classifying event triggers (the most representative words). Most of the existing works rely heavily on complex downstream network…
- **LLMs as Method Actors: A Model for Prompt Engineering and Architecture**: We introduce “Method Actors” as a mental model for guiding LLM prompt engineering and prompt architecture. Under this mental model, LLMs should be thought of as actors; prompts as scripts and cues; an…
- **Large Language Models Are Human-level Prompt Engineers**: “Prompt Engineering: Prompting offers a natural and intuitive interface for humans to interact with and use generalist models such as LLMs. Due to its flexibility, prompting has been widely used as a g…
- **Learning To Retrieve Prompts for In-Context Learning**: In-context learning is a recent paradigm in natural language understanding, where a large pre-trained language model (LM) observes a test instance and a few training examples as its input, and directl…
- **Leveraging Few-Shot Data Augmentation and Waterfall Prompting for Response Generation**: “Task-Oriented Dialogue (TOD) Systems are traditionally designed to facilitate users in achieving specific objectives, such as looking up train times or booking a flight in a dialogue setting. For the…
- **Metacognitive Prompting Improves Understanding in Large Language Models**: “While previous research primarily focuses on refining the logical progression of responses, the concept of metacognition—often defined as “thinking about thinking”—offers a unique perspective. Origi…
- **PRewrite: Prompt Rewriting with Reinforcement Learning**: “With the wide-scale proliferation of LLMs, prompting LLMs has become critical to achieving desired results on various downstream tasks. With the right prompts, LLMs can show impressive performance on…
- **Plan, Verify and Switch: Integrated Reasoning with Diverse X-of-Thoughts**: As large language models (LLMs) have shown effectiveness with different prompting methods, such as Chain of Thought and Program of Thought, we find that these methods are strongly complementary …
- **Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing**: This paper surveys and organizes research works in a new paradigm in natural language processing, which we dub “prompt-based learning”. Unlike traditional supervised learning, which trains a model to …
- **Prefix-Tuning: Optimizing Continuous Prompts for Generation**: we propose prefix-tuning, a lightweight alternative to fine-tuning for natural language generation tasks, which keeps language model parameters frozen and instead optimizes a sequence of continuous ta…
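Mechanically, prefix-tuning prepends a short block of trainable vectors to the model's input while every language-model weight stays frozen. A minimal sketch, assuming a HuggingFace-style causal LM that accepts `inputs_embeds`; note the original method learns prefix activations at every layer, whereas this sketch only prepends at the embedding layer:

```python
import torch
import torch.nn as nn

class PrefixTunedLM(nn.Module):
    """Simplified prefix-tuning: freeze the LM and learn only `prefix_len`
    continuous vectors prepended to the token embeddings."""
    def __init__(self, lm, prefix_len: int = 10):
        super().__init__()
        self.lm = lm
        for p in self.lm.parameters():
            p.requires_grad = False                      # LM stays frozen
        hidden = lm.get_input_embeddings().embedding_dim
        self.prefix = nn.Parameter(0.02 * torch.randn(prefix_len, hidden))

    def forward(self, input_ids, **kwargs):
        emb = self.lm.get_input_embeddings()(input_ids)  # (batch, seq, hidden)
        prefix = self.prefix.unsqueeze(0).expand(emb.size(0), -1, -1)
        return self.lm(inputs_embeds=torch.cat([prefix, emb], dim=1), **kwargs)
```

Only `self.prefix` receives gradients, so the optimizer is handed a parameter set orders of magnitude smaller than the LM itself.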
- **ProSA: Assessing and Understanding the Prompt Sensitivity of LLMs**: Our extensive study, spanning multiple tasks, uncovers that prompt sensitivity fluctuates across datasets and models, with larger models exhibiting enhanced robustness. We observe that few-shot exampl…
- **Progressive-Hint Prompting Improves Reasoning in Large Language Models**: The performance of Large Language Models (LLMs) in reasoning tasks depends heavily on prompt design, with Chain-of-Thought (CoT) and self-consistency being critical methods that enhance this ability. …
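Progressive-Hint Prompting iterates: get an answer, then re-ask the same question with the previous answers appended as a hint, stopping once two consecutive answers agree. A compact sketch; `llm` and `extract_answer` are assumed placeholders, and the hint phrasing is an approximation of the paper's:

```python
def progressive_hint(question, llm, extract_answer, max_rounds=5):
    """Re-ask with prior answers as hints until the answer stabilizes."""
    hints, previous = [], None
    for _ in range(max_rounds):
        hint = f" (Hint: the answer is near {', '.join(hints)}.)" if hints else ""
        answer = extract_answer(llm(question + hint))
        if answer == previous:       # two consecutive rounds agree: done
            return answer
        previous = answer
        hints.append(str(answer))
    return previous
```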
- **Prompt Architecture Determines Reasoning Quality: A Variable Isolation Study on the Car Wash Problem**: The car wash problem asks a simple question: “I want to wash my car. The car wash is 100 meters away. Should I walk or drive?” Every major LLM tested (Claude, GPT-4, Gemini) recommended walking. The co…
- **Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm**: “Prior to GPT-3, the standard approach to the evaluation and use of such models has involved fine-tuning on a portion of a task dataset [12]. GPT-3 achieved state-of-the-art performance on a wide var…
- **Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution**: Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strateg…
- **Pron vs Prompt: Can Large Language Models already Challenge a World-Class Fiction Author at Creative Text Writing?**: Are LLMs ready to compete in creative writing skills with a top (rather than average) novelist? To provide an initial answer to this question, we have carried out a contest between Patricio Pron (an …
- **RAG-Gym: Systematic Optimization of Language Agents for Retrieval-Augmented Generation**: Retrieval-augmented generation (RAG) has shown great promise for knowledge-intensive tasks and recently advanced with agentic RAG, where language agents engage in multi-round interactions with externa…
- **Re3: Generating Longer Stories With Recursive Reprompting and Revision**: “Of course, recent years have also witnessed a dramatic rise in the capabilities of general-purpose (non-finetuned) large pretrained language models. Of particular note are their strong zero-shot capa…
- **Reasoning Strategies in Large Language Models: Can They Follow, Prefer, and Optimize?**: Human reasoning involves different strategies, each suited to specific problems. Prior work shows that large language models (LLMs) tend to favor a single reasoning strategy, potentially limiting their…
- **Revisiting Prompt Engineering: A Comprehensive Evaluation for LLM-based Personalized Recommendation**: Large language models (LLMs) can perform recommendation tasks by taking prompts written in natural language as input. Compared to traditional methods such as collaborative filtering, LLM-based recomme…
- **Role play with large language models**: Here we advocate two basic metaphors for LLM-based dialogue agents. First, taking a simple and intuitive view, we can see a dialogue agent as role-playing a single character. Second, taking a more nua…
- **RoleLLM: Benchmarking, Eliciting, and Enhancing Role-Playing Abilities of Large Language Models**: However, the closed-source nature of state-of-the-art LLMs and their general-purpose training limit role-playing optimization. In this paper, we introduce RoleLLM, a framework to benchmark, elicit, an…
- **Self-Discover: Large Language Models Self-Compose Reasoning Structures**: Table 2 of the paper lists all 39 reasoning modules, high-level cognitive heuristics for problem-solving adopted from Fernando et al. (2023), beginning with “How could I devise an experim…
- **Skills-in-Context Prompting: Unlocking Compositionality in Large Language Models**: We investigate how to elicit compositional generalization capabilities in large language models (LLMs). Compositional generalization empowers LLMs to solve complex problems by combining foundational s…
- **Style Vectors for Steering Generative Large Language Models**: This research explores strategies for steering the output of large language models (LLMs) towards specific styles, such as sentiment, emotion, or writing style, by adding style vectors to the activati…
- **Systematic synthesis of design prompts for large language models in conceptual design**: Conceptual design can be modeled as a proposition-making process, where designers make logical propositions to communicate and construct intangible concepts. Not only can LLMs interpret designers’ pro…
- **Test-time Prompt Intervention**: Test-time compute has led to remarkable success in the large language model (LLM) community, particularly for complex tasks, where longer chains of thought (CoTs) are generated to enhance reasoning ca…
- **The Prompt Report: A Systematic Survey of Prompting Techniques**: I have the v2, published Dec 30. “We note that the transition dynamics between states depend primarily on the verb used in the action (e.g., take, put, cook, ...). Predicting action-driven transitions…
- **Towards a Deeper Understanding of Reasoning Capabilities in Large Language Models**: While large language models demonstrate impressive performance on static benchmarks, the true potential of large language models as self-learning and reasoning agents in dynamic environments…
- **Triggering Hallucinations in LLMs: A Quantitative Study of Prompt-Induced Hallucination in Large Language Models**: In this study, we propose a class of compact yet effective prompts (~30 tokens in length) that synthetically fuse semantically distant concepts in ways that resist scientific integration—such as combi…
- **UPRISE: Universal Prompt Retrieval for Improving Zero-Shot Evaluation**: We propose UPRISE (Universal Prompt Retrieval for Improving zero-Shot Evaluation), which tunes a lightweight and versatile retriever that automatically retrieves prompts for a given zero-shot task inp…
- **What Makes a Good Natural Language Prompt?**: Despite the importance of understanding natural language prompts, there remains limited consensus on how to quantify them. Current approaches rely predominantly on outcome-centric measurements, such a…
- **When Prompts Go Wrong: Evaluating Code Model Robustness to Ambiguous, Contradictory, and Incomplete Task Descriptions**: Large Language Models (LLMs) have demonstrated impressive performance in code generation tasks under idealized conditions, where task descriptions are clear and precise. However, in practice task desc…