Psychology and Social Cognition Design & LLM Interaction

Does chatbot interaction trade authenticity for better problem-solving?

When students solve problems with AI chatbots instead of peers, do they sacrifice personal voice and subjective expression in exchange for more efficient knowledge exchange and higher task performance?

Note · 2026-02-22 · sourced from Conversation Agents
Why do AI agents fail to take initiative? What kind of thing is an LLM really? How should researchers navigate LLM reasoning research?

An empirical study of creative problem-solving (CPS) in university education reveals a striking trade-off: students working with generative AI (GAI) chatbots achieve higher practical performance but contribute significantly less dialogue, with more knowledge-based exchange and less subjective expression than students working with peers.

The findings span six dialogue dimensions: prior knowledge, subjective expression, elaboration, coordination, speculation, and construction. Across these, chatbot dialogues skewed toward knowledge exchange and away from subjective expression.

Yet students perceived chatbots as more useful and easier to use than peers, showed higher intention to continue using them, and — critically — achieved better practical performance on the CPS task.

This creates a productivity-authenticity tension. The chatbot is a more efficient knowledge partner: it retrieves relevant information quickly, suggests structures, and doesn't require social negotiation. But peer interaction generates the creative tension — disagreement, subjective perspective-taking, unpredictable turns — that is theoretically essential to creative problem-solving.

The reduced subjective expression is particularly notable. Productive dialogue requires free expression of ideas leading to subjective perspectives. When chatbots handle the knowledge dimension so efficiently that students stop articulating their own positions, the dialogue becomes informationally rich but personally thin. The student gets a better answer but develops less of their own voice.

This connects to Why can't conversational AI agents take the initiative? — the chatbot's passivity may actually cause the pattern. Because the chatbot never challenges, disagrees, or redirects, students never need to defend, articulate, or reconsider their positions.


In short: student-chatbot interaction produces more knowledge-based dialogue and less subjective expression than student-peer interaction — chatbots enhance performance while reducing personal voice.