Does chatbot interaction trade authenticity for better problem-solving?
When students solve problems with AI chatbots instead of peers, do they sacrifice personal voice and subjective expression in exchange for more efficient knowledge exchange and higher task performance?
An empirical study of creative problem-solving (CPS) in university education reveals a striking trade-off: students working with generative AI (GAI) chatbots achieve higher practical performance but contribute significantly less dialogue, with more knowledge-based exchange and less subjective expression than students working with peers.
The study coded dialogue along six dimensions (prior knowledge, subjective expression, elaboration, coordination, speculation, construction) and found:
- Chatbot interaction features more knowledge-based dialogue and more elaboration
- Peer interaction features more subjective expression and unpredictable dialogue patterns
- Students contribute significantly less dialogue when interacting with a chatbot
- Dialogic patterns with chatbots are more predictable than with peers
Yet students perceived chatbots as more useful and easier to use than peers, showed higher intention to continue using them, and — critically — achieved better practical performance on the CPS task.
This creates a productivity-authenticity tension. The chatbot is a more efficient knowledge partner: it retrieves relevant information quickly, suggests structures, and doesn't require social negotiation. But peer interaction generates the creative tension — disagreement, subjective perspective-taking, unpredictable turns — that is theoretically essential to creative problem-solving.
The reduced subjective expression is particularly notable. Productive dialogue requires the free expression of ideas, through which subjective perspectives emerge. When chatbots handle the knowledge dimension so efficiently that students stop articulating their own positions, the dialogue becomes informationally rich but personally thin. The student gets a better answer but develops less of their own voice.
This connects to "Why can't conversational AI agents take the initiative?": the chatbot's passivity may itself cause the pattern. Because the chatbot never challenges, disagrees, or redirects, students never need to defend, articulate, or reconsider their positions.
Source: Conversation Agents
Related concepts in this collection
- Why can't conversational AI agents take the initiative?
  Explores whether current LLMs lack the structural ability to lead conversations, set goals, or anticipate user needs, and what architectural changes might enable proactive dialogue.
  Connection: passivity may explain the reduced subjective expression.
- Can LLMs generate more novel ideas than human experts?
  Research shows LLM-generated ideas score higher for novelty than expert-generated ones, yet LLMs avoid the evaluative reasoning that characterizes expert thinking. What explains this apparent contradiction?
  Connection: chatbots provide generation without evaluation; students mirror this by accepting without challenging.
- Does soothing AI empathy actually harm what emotions teach us?
  Explores whether AI designed to reduce negative feelings disrupts the information emotions normally provide about values, social dynamics, and self-knowledge. Questions whether comfort should be the primary design goal.
  Connection: a parallel trade-off structure. The student-chatbot performance-authenticity tension mirrors the emotional pacifier's comfort-information tension: in both cases AI optimizes a measurable positive (knowledge elaboration, emotional comfort) while degrading a harder-to-measure value (subjective voice, the epistemic function of emotions).
- Does empathetic AI that soothes negative emotions help or harm?
  Explores whether AI systems trained to reduce negative emotions actually support wellbeing or destroy valuable emotional information. Matters because the design choice treats emotions as problems rather than functional signals.
  Connection: the student-chatbot result is the cognitive analog of the emotional pacifier. Chatbots optimize knowledge-exchange efficiency while eliminating the subjective expression that makes knowledge personally meaningful, just as AI empathy optimizes comfort while eliminating the negative affect that carries information.
Original note title: student-chatbot interaction produces more knowledge-based dialogue and less subjective expression than student-peer interaction — chatbots enhance performance while reducing personal voice