
How do recommendation feeds shape what people see and believe?

How algorithmic feeds function as persuasion systems that shape content creation, opinion dynamics, and targeted influence at scale.

Topic Hub · 31 linked notes · 7 sections

Feed Mechanisms and Political Effects

4 notes

How do feed ranking weights shape what content gets produced?

Feed-ranking weights are typically treated as neutral tuning parameters, but do they actually function as political levers that reshape producer behavior and the content supply itself?


Do different recommender types shape opinion convergence differently?

Explores whether the mechanism by which products are recommended—buying together versus viewing together—creates distinct patterns in how product ratings converge or diverge across a network.


Can friends with different tastes improve recommendations?

Does incorporating social networks through friends' diverse preferences rather than similar tastes lead to better recommendations? This challenges conventional homophily-based approaches that assume friends like the same things.


Can cross-user behavior reveal news relations that individual histories miss?

When a single user's reading history is too sparse for personalized recommendations, can patterns from many users' collective clicking behavior expose hidden connections between articles that no individual user alone could discover?


Persuasion at Scale and Targeted Ads

3 notes

Can generative AI scale personality-targeted political persuasion?

Does removing the human-writing bottleneck through generative AI make it feasible to target voters at scale based on individual psychological traits? This matters because it could reshape political microtargeting economics and capabilities.


What dominates AI compute in production systems today?

While public discussion centers on large language models, Facebook's infrastructure data reveals a different story about which AI workloads actually consume the most compute cycles in real production environments.


Can bandit algorithms beat collaborative filtering for news?

News recommendation faces constant content churn and cold-start users—settings where traditional collaborative filtering struggles. Can a contextual bandit approach like LinUCB explicitly balance exploration and exploitation better than static methods?

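The LinUCB idea this note refers to can be sketched minimally: each candidate article keeps a ridge-regression estimate of expected reward plus an exploration bonus from the confidence ellipsoid. The function names and the disjoint-model simplification here are illustrative, not taken from any linked note.

```python
import numpy as np

def linucb_choose(arms, alpha=1.0):
    """One round of disjoint LinUCB. `arms` maps arm id -> (A, b, x),
    where A is the d x d design matrix, b the accumulated reward vector,
    and x the current context features. Returns the arm whose upper
    confidence bound (mean estimate + exploration bonus) is highest."""
    best_arm, best_ucb = None, -np.inf
    for arm_id, (A, b, x) in arms.items():
        A_inv = np.linalg.inv(A)
        theta = A_inv @ b                                  # ridge estimate
        ucb = theta @ x + alpha * np.sqrt(x @ A_inv @ x)   # mean + bonus
        if ucb > best_ucb:
            best_arm, best_ucb = arm_id, ucb
    return best_arm

def linucb_update(arms, arm_id, reward):
    """Fold the observed reward into the chosen arm's statistics."""
    A, b, x = arms[arm_id]
    arms[arm_id] = (A + np.outer(x, x), b + reward * x, x)
```

With fresh arms (A = I, b = 0) the bonus term dominates, so the policy explores uncertain articles first—exactly the behavior static collaborative filtering lacks under content churn.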

Bias, Calibration, and the Diversity-Accuracy Tradeoff

3 notes

Why do accuracy-optimized recommenders crowd out minority interests?

Explores why recommendation models that maximize accuracy systematically over-represent a user's dominant interests while suppressing their lesser ones, even when both are measurable and real.

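One way to make "crowding out" measurable, in the spirit of calibrated recommendation, is to compare the genre mix of a user's history against the genre mix of a recommended slate via KL divergence. This is a minimal sketch under assumed single-genre items; the names are hypothetical.

```python
import math
from collections import Counter

def genre_dist(items, genres, smooth=1e-6):
    """Smoothed genre distribution over a list of item ids.
    `genres` maps item id -> genre label (one genre per item, by assumption)."""
    counts = Counter(genres[i] for i in items)
    all_genres = set(genres.values())
    total = sum(counts.values()) + smooth * len(all_genres)
    return {g: (counts.get(g, 0) + smooth) / total for g in all_genres}

def miscalibration(history, recs, genres):
    """KL(p_history || q_recs): how far a recommended slate drifts from
    the user's own interest mix; 0 means perfectly calibrated."""
    p, q = genre_dist(history, genres), genre_dist(recs, genres)
    return sum(p[g] * math.log(p[g] / q[g]) for g in p)
```

A user with a 70/30 drama/jazz history served an all-drama slate scores a large divergence, quantifying how an accuracy-optimized ranker suppresses the measurable minority interest.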

Why do recommender systems struggle to balance accuracy and diversity?

Recommender systems treat accuracy and diversity as competing objectives, requiring separate tuning. But what if the conflict is artificial, stemming from how we measure success rather than a fundamental tension?


Does embedding dimensionality secretly drive popularity bias in recommenders?

Conventional wisdom treats low-dimensional models as overfitting protection. But does this practice inadvertently cause recommenders to systematically favor popular items, reducing diversity and fairness regardless of the optimization metric used?


Rating Behavior and Selection Effects

6 notes

Do online ratings actually reflect independent customer opinions?

How much do previously posted ratings shape the ones that come after, and does this social influence distort what ratings supposedly measure? Understanding this matters for anyone relying on review aggregates to judge product quality.

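The social-influence mechanism this note asks about can be illustrated with a toy simulation: each rater posts a blend of a noisy private signal and the running average displayed so far. The model and its parameters are illustrative assumptions, not results from any linked note.

```python
import random

def simulate_ratings(n_raters, true_quality=3.5, influence=0.4, seed=0):
    """Sequential raters on a 1-5 scale. Each posts
    (1 - influence) * private signal + influence * displayed average;
    influence=0.0 recovers fully independent ratings."""
    rng = random.Random(seed)
    posted = []
    for _ in range(n_raters):
        private = true_quality + rng.gauss(0, 1.0)   # noisy honest opinion
        if posted and influence > 0:
            shown_avg = sum(posted) / len(posted)
            rating = (1 - influence) * private + influence * shown_avg
        else:
            rating = private
        posted.append(min(5.0, max(1.0, rating)))    # clamp to the scale
    return posted
```

Comparing runs with `influence=0.0` and `influence=0.6` shows herded ratings clustering far more tightly around early values, which is why an aggregate can look confident while carrying less independent information than its sample size suggests.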

Do online reviews actually measure product quality or just buyer preferences?

Online reviews come only from customers who already expected to like a product. This self-selection might hide the true quality signal beneath layers of preference bias and writing motivation. What can aggregated ratings actually tell us?


Why do online reviewers publish negative ratings despite positive experiences?

When people post reviews publicly, do they adjust their honest opinions to seem more discerning? Schlosser's experiments test whether audience awareness shifts how people rate products compared to private ratings.


Why do people bother writing online ratings at all?

Rating a product takes effort and earns neither pay nor recognition, yet people do it anyway. Understanding what motivates raters—and how costs affect who rates—reveals why rating distributions may not reflect true customer satisfaction.


Why do LLMs generate polite reviews even when users hated products?

Large language models trained with RLHF develop a politeness bias that overrides negative sentiment in review generation. Understanding this bias and how to counteract it is crucial for creating accurate, user-aligned review systems.


Can user history override an LLM's politeness bias in reviews?

LLMs trained on web text tend to be systematically polite, generating positive reviews even when users are dissatisfied. Can providing a user's prior reviews and ratings as context help the model generate authentically negative reviews that match the user's actual experience?


Filter Bubble Simulation and Aggregate Effects

2 notes

Can LLM agents realistically simulate filter bubble effects in recommendations?

Can generative agents with emotion and memory modules faithfully reproduce how recommendation systems create echo chambers and user fatigue? This matters because real-world A/B testing is expensive and slow.


Can language models discover what users actually want from activity logs?

Users pursue month-long interest journeys that transcend individual item clicks. Can LLMs extract these persistent goals from behavioral patterns, and does this change how we should think about personalization?


User-Modeling Mechanics for Feeds

7 notes

Can conversational recommenders recover lost preference signals from history?

Conversational recommenders abandoned item and user similarity signals when they shifted to dialogue-focused design. Can integrating historical sessions and look-alike users restore these channels without losing dialogue benefits?


How can user vectors capture diverse interests without exploding in size?

Fixed-length user vectors compress all interests into one representation, losing information about varied tastes. Can we represent diverse interests efficiently without expanding dimensionality?


Can attention mechanisms reveal which user taste explains each recommendation?

Single-vector user models collapse diverse tastes into one representation, losing expressiveness. Can weighting multiple personas by item relevance surface the right taste at the right time while making recommendations traceable?

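The "weight multiple personas by item relevance" idea can be sketched as plain softmax attention over persona vectors: the weights both blend the user representation and name which taste explains the recommendation. Function names and the two-persona setup are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())        # shift for numerical stability
    return e / e.sum()

def attend_personas(personas, item, temp=1.0):
    """Score an item against several persona vectors. Returns the
    attention-blended user vector and the weights themselves, so the
    dominant weight identifies which taste drove the recommendation."""
    scores = np.array([p @ item for p in personas]) / temp
    weights = softmax(scores)
    user_vec = (weights[:, None] * np.array(personas)).sum(axis=0)
    return user_vec, weights
```

With orthogonal "sci-fi" and "cooking" personas, a sci-fi item concentrates nearly all attention on the first persona—one vector per taste stays expressive, while the weight vector makes the recommendation traceable.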

Can users steer recommendations with natural language at inference?

Can recommendation systems let users specify their preferences in natural language at inference time, without retraining? This matters because it would let both new and existing users dynamically adjust what they want to see.


Can retrieval enhancement fix explainable recommendations for sparse users?

When users have few historical interactions, embedded recommendation models struggle to generate personalized explanations. Can augmenting sparse histories with retrieved relevant reviews—selected by aspect—overcome this fundamental data limitation?


Do comparisons help users evaluate items better than isolated descriptions?

Can framing product evaluations relationally—by comparing to other items—ground assessment in user reasoning better than absolute descriptions? This matters because recommendation explanations often ask users to do comparison work mentally.


Can user preference guide AI writing tool alignment?

If writers prefer AI-polished text but object to the persona shifts it introduces, does optimizing for preference actually solve the alignment problem or obscure it?
