Do prompt techniques work the same across all LLM tiers?
Do chain-of-thought and rephrasing prompts help or hurt recommendation tasks equally across cost-efficient and high-performance models? Understanding tier-dependent effects would let practitioners match prompt style to model tier instead of applying one "best practice" everywhere.
Prompt engineering wisdom from NLP — chain-of-thought, step-by-step reasoning, instruction rephrasing — does not transfer cleanly to recommendation. An anonymized evaluation across 23 prompt types, 8 datasets, and 12 LLMs finds that the optimal prompt depends on the model tier.
For cost-efficient (smaller) LLMs, three prompt families help: those that rephrase instructions, those that supply background knowledge, and those that make reasoning easier to follow. These compensate for limited innate capability by externalizing structure. For high-performance LLMs, simple prompts often outperform complex ones — and reduce inference cost. Step-by-step reasoning prompts and reasoning-style models often produce lower accuracy on recommendation specifically.
The reason is task-specific. Recommendation tasks emphasize the relationship between users and items, which is a relational matching task. Step-by-step deduction prompts evolved to support multi-step inference (math, logic, complex reasoning) that doesn't apply here. Adding chain-of-thought to a recommendation prompt introduces a reasoning bias that distracts from the user-item alignment the task actually rewards.
The implication: import prompt techniques carefully. The "best practice" depends on what the task structurally needs (in recommendation, often nothing more than weighing user history against candidates) and the LLM's native capability tier. Generic NLP prompt patterns can be net-negative when applied to non-NLP tasks.
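The tier-dependent selection rule above can be sketched in code. This is a minimal illustration, not the benchmark's method: the template strings, tier labels, and `select_prompt` function are all hypothetical, standing in for the two prompt families the note describes (rephrased/structured prompts for cost-efficient models, plain prompts for high-performance ones).

```python
# Hypothetical sketch of tier-aware prompt selection for an LLM recommender.
# Template wording and tier names are illustrative assumptions, not from the source.

SIMPLE_TEMPLATE = (
    "User history: {history}\n"
    "Candidates: {candidates}\n"
    "Rank the candidates by how well they match the user's history."
)

# Rephrased/structured variant: restates the task and supplies background,
# externalizing structure that smaller models lack innately.
REPHRASED_TEMPLATE = (
    "You are ranking items for a user.\n"
    "Background: the user's past interactions indicate their tastes.\n"
    "User history: {history}\n"
    "Candidate items: {candidates}\n"
    "Task restated: compare each candidate against the history and "
    "output the candidates ordered from best to worst match."
)

def select_prompt(model_tier: str) -> str:
    """Pick a template by model tier.

    Cost-efficient models benefit from rephrasing and added structure;
    high-performance models do better (and cheaper) with simple prompts.
    Chain-of-thought is deliberately avoided for both tiers, since the
    task rewards user-item matching, not multi-step deduction.
    """
    if model_tier == "cost-efficient":
        return REPHRASED_TEMPLATE
    return SIMPLE_TEMPLATE

prompt = select_prompt("cost-efficient").format(
    history="sci-fi novels, space documentaries",
    candidates="Dune; The Martian; a cookbook",
)
print(prompt)
```

Note the asymmetry in cost: choosing the simple template for strong models is not just accuracy-neutral but cuts inference tokens, so the selection rule pays off twice at the high tier.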
Source: Recommenders Personalized
Related concepts in this collection
- Why does chain-of-thought reasoning fail for personalization?
  Standard reasoning traces produce logically sound but personally irrelevant answers. This explores why generic thinking doesn't anchor to user preferences and what might fix it.
  (extends: reasoning hurting recommendation is a specific case of reasoning hurting personalization-style tasks at high model tiers)
- Does LLM input augmentation beat direct LLM recommendation?
  Can LLMs enrich item descriptions more effectively than making recommendations directly? This explores whether specialized models work better when LLMs focus on what they do best: content understanding rather than ranking.
  (complements: input augmentation and rephrasing are the cheap-model wins this benchmark also documents)
-
Where do recommendation biases come from in language models?
Do LLM-based recommenders inherit systematic biases from pretraining that differ fundamentally from traditional collaborative filtering systems? Understanding these sources matters for building fairer, more accurate recommendations.
complements: prompt selection interacts with pretraining biases differently across tiers — reasoning prompts may amplify pretraining-popularity in stronger models
- Can routers select the right model before generation happens?
  Explores whether LLMs can be matched to queries by estimating difficulty upfront, before any generation begins. This matters because routing could cut costs significantly while preserving response quality.
  (complements: tier-dependent prompt selection is a per-query decision that interacts with model-routing decisions)
Original note title
LLM-based recommender prompt selection depends on model tier — cost-efficient models benefit from rephrasing, high-performance models do worse with reasoning prompts