Do chatbot relationships lose their appeal as novelty wears off?
Explores whether the positive social dynamics observed in one-time chatbot studies persist or fade through repeated interactions. Critical for designing systems intended for sustained engagement over weeks or months.
Evidence from longitudinal studies with the chatbot Mitsuku shows that social processes related to relationship formation decreased throughout interactions, likely due to a novelty effect wearing off. This is a critical knowledge gap: one-shot interaction studies dominate conversational agent research, and their findings may not hold across multiple interactions.
Chatbots are not designed only for short-term purposes; many target medium- and long-term interactions. Health coaching, therapeutic support, and daily functioning screening all require sustained engagement over weeks or months. If the social processes that drive initial engagement decay, the design challenge shifts from "how to make a good first impression" to "how to sustain engagement once novelty fades."
The implication: researchers and designers who extrapolate from one-shot studies to longitudinal products are making an empirically unsupported leap. The positive findings from single-session experiments — increased self-disclosure, anthropomorphism, trust — may be novelty-dependent rather than stable properties of the interaction.
This creates a design requirement: chatbots intended for repeated use need engagement mechanisms that go beyond the initial social impression. Personalization is one approach (see "Does chatbot personalization build trust or expose privacy risks?"), but it comes with its own double-edged dynamics.
Personalization as counterforce: a longitudinal study comparing personalized and non-personalized conversational agents provides evidence that personalization can counteract novelty decay. Each additional interaction means the agent learns more about the user and, at the same time, the user expects more from the agent, creating a dynamic tension. Personalization's effects on perceived anthropomorphism and trust are positive, but they coexist with increased perceived privacy risk. The CASA framework itself needs updating: "the capabilities of the agents and the overall experience of users with technology have evolved since CASA was first proposed." Agents are now more accessible (smartphones, messaging platforms), more data-rich, and more personalized, so the novelty-decay dynamics documented with Mitsuku may operate differently with modern agents that genuinely adapt over time. The open question is whether personalization creates genuine relationship deepening or merely delays the novelty decay curve.
Source: Psychology, Chatbots, Conversation
Related concepts in this collection
- Does chatbot personalization build trust or expose privacy risks?
  Explores whether personalization features that increase user trust and social connection simultaneously heighten privacy concerns and create rising behavioral expectations over time.
  Relation: personalization as the attempted solution to novelty decay.
- Why do static persona descriptions produce repetitive dialogue?
  Does relying on fixed attribute lists to define conversational personas limit dialogue depth and consistency? Research suggests static descriptions may cause repetition and self-contradiction in generated responses.
  Relation: static personas would accelerate novelty decay; dynamic modeling may mitigate it.
- How should chatbot design vary by relationship duration?
  Do chatbots serving one-time users need different design than those supporting long-term relationships? This matters because applying one design to all temporal profiles creates usability mismatches.
  Relation: novelty decay is archetype-specific. Ad-hoc supporters never encounter it (single use); temporary assistants may outrun it (defined duration); persistent companions must design for it explicitly. The temporal taxonomy predicts where novelty decay matters most.
- Do humans apply human-human scripts to AI interactions?
  Does CASA theory correctly explain how people interact with media agents, or have decades of technology use created separate interaction scripts? Understanding which scripts drive behavior matters for AI design.
  Relation: novelty decay may reflect script stabilization. Once users develop media-agent-specific scripts for a chatbot, the interaction becomes routinized and novelty drops; relationship formation processes decrease as the scripts solidify.
Original note title: novelty effects in chatbot relationships decay predictably over repeated interactions — social processes related to relationship formation decrease