How should chatbot design vary by relationship duration?
Do chatbots serving one-time users need different design than those supporting long-term relationships? This matters because applying the same design to all temporal profiles creates usability mismatches.
Although many chatbot characteristics have been investigated for their design implications — general vs. domain-specific, dyadic vs. multiparty — research on design differences contingent on chatbots' temporal profiles remains scarce. Since chatbots are social actors and time is essential to social interaction, the time horizon fundamentally changes what the chatbot IS.
An analysis of 120 chatbots, organized through a taxonomy of 22 design dimensions across temporal profiles, yields three archetypes:
Ad-hoc Supporters — single or few occasional interactions. The chatbot is a "communication medium" (Zhao, 2006). Users seek to get something done quickly. Design priorities: task efficiency, low friction, minimal onboarding.
Temporary Assistants — multiple interactions over a defined period. Typical: an educational chatbot teaching a course's content over one semester. Design priorities: progress tracking, content scaffolding, periodic engagement.
Persistent Companions — long-term or life-long relationships. Users commit to longer personal learning or development processes. The chatbot becomes a "social actor" (Reeves & Nass, 1996). Design priorities: relationship continuity, memory, personalization, incremental self-disclosure.
Four temporal dimensions characterize any chatbot: (D1) time horizon of the relationship, (D2) duration of individual interactions, (D3) frequency of interactions, and (D4) consecutiveness — whether interactions build on each other or are independent.
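The four dimensions can be read together as a temporal profile that maps a chatbot onto one of the three archetypes. A minimal sketch in Python; the field names, thresholds, and classification rules are illustrative assumptions, not values taken from the taxonomy itself:

```python
from dataclasses import dataclass
from enum import Enum


class Archetype(Enum):
    AD_HOC_SUPPORTER = "Ad-hoc Supporter"
    TEMPORARY_ASSISTANT = "Temporary Assistant"
    PERSISTENT_COMPANION = "Persistent Companion"


@dataclass
class TemporalProfile:
    """The four temporal dimensions (D1-D4) of a chatbot relationship."""
    horizon_days: float            # D1: time horizon of the relationship
    interaction_minutes: float     # D2: duration of individual interactions
    interactions_per_week: float   # D3: frequency of interactions
    consecutive: bool              # D4: do interactions build on each other?


def classify(p: TemporalProfile) -> Archetype:
    # Hypothetical thresholds for illustration only.
    if p.horizon_days <= 7 and not p.consecutive:
        # Single or few independent interactions: Ad-hoc Supporter.
        return Archetype.AD_HOC_SUPPORTER
    if p.horizon_days <= 180:
        # Defined period, e.g. a semester-long course: Temporary Assistant.
        return Archetype.TEMPORARY_ASSISTANT
    # Long-term or open-ended relationship: Persistent Companion.
    return Archetype.PERSISTENT_COMPANION
```

A design system could key feature flags (memory, onboarding depth, personalization) off the returned archetype rather than hard-coding them per product.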
The critical insight: the same chatbot DESIGN applied to all three archetypes produces a mismatch. An Ad-hoc Supporter built with relationship-building features wastes user time; a Persistent Companion built without memory or personalization decays into irrelevance. And because chatbot relationships can lose their appeal as novelty wears off, the Persistent Companion archetype faces a specific challenge that Ad-hoc Supporters never encounter.
A complementary 17-dimension taxonomy spanning intelligence, interaction, and context perspectives (Janssen et al., 2020) yields five archetypes: goal-oriented daily, non-goal-oriented daily, utility facilitator, utility expert, and relationship-oriented. The overlap between the temporal and functional taxonomies suggests that time horizon and purpose are the two primary axes of the chatbot design space.
Source: Design Frameworks
Related concepts in this collection
- Do chatbot relationships lose their appeal as novelty wears off? Explores whether the positive social dynamics observed in one-time chatbot studies persist or fade through repeated interactions. Critical for designing systems intended for sustained engagement over weeks or months. Connection: persistent companions face novelty decay.
- How do time gaps shape what people discuss across conversation sessions? Do AI systems account for how elapsed time between conversations changes the way people reference and discuss past events? Current models mostly handle single sessions, but real interactions span days, weeks, and months. Connection: inter-session temporal dynamics add a fifth dimension to temporal design.
- Does chatbot personalization build trust or expose privacy risks? Explores whether personalization features that increase user trust and social connection simultaneously heighten privacy concerns and create rising behavioral expectations over time. Connection: persistent companions require personalization, which triggers these dual dynamics.
- Can AI chatbots create genuine therapeutic bonds with users? Research on Woebot and Wysa found that users reported feeling cared for and formed therapeutic bonds comparable to those in human therapy, despite knowing the agents were not human. This challenges the assumption that such bonds require human relationships. Connection: therapeutic chatbots are paradigmatic persistent companions.
- Can one model compress all conversation memory and eliminate retrieval? Instead of storing and retrieving discrete memories, can a single LLM compress all past conversations into event recaps, user portraits, and relationship dynamics? This explores whether compression-based memory avoids the bottleneck of traditional retrieval systems. Connection: COMEDY's three-dimensional memory (event recaps, user portraits, relationship dynamics) directly serves the Persistent Companion archetype's need for relationship continuity.
- How should agents decide what memories to keep? Agent memory management splits between agents autonomously recognizing important information and programmatic triggers. Understanding this choice reveals why different memory architectures prioritize different information types. Connection: the explicit/implicit memory choice should match the temporal archetype; ad-hoc supporters need minimal memory, persistent companions rich explicit memory.
- Can attachment theory prevent parasocial harm in AI companions? Explores whether psychological frameworks from human relationships, particularly attachment theory, can establish safety boundaries that protect users from unhealthy emotional dependence on AI systems while maintaining therapeutic benefit. Connection: the Persistent Companion archetype faces unique safety challenges that attachment theory addresses; boundary maintenance and emotional regulation prevent the long-term relationship from enabling parasocial manipulation.
Original note title: chatbot temporal design determines relationship type — ad-hoc supporters, temporary assistants, and persistent companions require fundamentally different design