Why do AI researchers cite only narrow psychology pathways?
LLM research engages psychology through surprisingly limited citation routes—dominated by CBT, stigma theory, and DSM. This note explores what psychology domains are being overlooked and what risks that creates.
An analysis of 1,006 LLM-related papers from premier AI venues (2023-2025) and the 2,544 psychology publications they cite reveals systematic patterns of interdisciplinary engagement. Eight LLM research clusters (Multimodal Learning; Educational Application; Model Adaptation & Efficiency; Bias, Morality & Culture; Advanced Reasoning; Domain Knowledge; Language Ability; Social Intelligence) map onto six psychology clusters (Social-Clinical; Education; Language; Social Cognition; Neural Mechanisms; Psychometrics & Judgment/Decision-Making).
The citation pathways are narrower than the breadth of available psychology would suggest. CBT is the most frequently referenced framework (51 citations), followed by Goffman's theory of stigma (34) and the DSM (33); these three frameworks dominate how LLM researchers engage with psychology. Educational Application draws narrowly on the Education cluster, and Advanced Reasoning favors Neural Mechanisms. Only Social Intelligence and Model Adaptation & Efficiency draw on a broad range of psychology clusters, likely because constructs like "social awareness" and "adaptation" require integrating multiple psychological perspectives.
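The concentration described above can be sketched numerically. This is an illustrative tally, not code or data from the underlying analysis: only the three counts reported here (51, 34, 33) are from the note, and the tallying approach is a hypothetical reconstruction.

```python
from collections import Counter

# Citation counts for the three dominant frameworks, as reported in
# this note. A real tally would include many more frameworks; these
# three are the only figures the note gives.
framework_citations = Counter({
    "CBT": 51,
    "Goffman's Theory of Stigma": 34,
    "DSM": 33,
})

# Combined citations to the top three frameworks.
top3 = sum(count for _, count in framework_citations.most_common(3))
print(top3)  # 118
```

Against 2,544 cited psychology publications, even this partial tally shows how heavily engagement pools around a handful of frameworks.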
The practical consequence: LLM research may be building increasingly sophisticated tools on an increasingly narrow psychological foundation. With 51 citations to CBT across the surveyed papers, the field risks treating CBT as synonymous with psychotherapy, ignoring psychodynamic, humanistic, attachment-based, and other traditions that address different mechanisms of change. Similarly, treating DSM diagnostic categories as ground truth imports the well-known limitations of categorical psychiatric diagnosis into AI systems.
The misapplication patterns are particularly concerning: psychology theories are often operationalized without engaging their theoretical commitments, boundary conditions, or critiques. As the related note "Does medical AI need knowledge or reasoning more?" argues, mental health sits in a uniquely demanding position: it requires both domain knowledge (clinical frameworks) and social reasoning (theory of mind, pragmatic inference), a combination that current LLMs handle worst.
Source: Psychology Therapy Practice
Related concepts in this collection
- Does medical AI need knowledge or reasoning more? Medical and mathematical domains may require fundamentally different AI training priorities. If medical accuracy depends primarily on factual knowledge while math depends on reasoning quality, should we build and evaluate these systems differently? (Link: mental health requires both knowledge and social reasoning.)
- Why doesn't mathematical reasoning transfer to medicine? Can models trained to reason well about math apply those skills to medical domains through fine-tuning? This explores whether reasoning ability is truly domain-agnostic or constrained by domain-specific knowledge requirements. (Link: the narrow citation pathways may reflect and reinforce knowledge gaps.)
- Why do specialized models fail outside their domain? Deep domain optimization creates sharp performance cliffs at domain boundaries. Specialized models generate plausible-sounding but ungrounded responses when queries fall outside their training scope, and often fail to signal their own ignorance. (Link: narrow psychological foundations create a different kind of capability cliff.)
Original note title
AI research engages with psychology through narrow citation pathways: CBT, stigma theory, and DSM dominate, while developmental psychology, neuropsychology, and psycholinguistics remain underexplored