Why do LLMs excel at social norms yet fail at theory of mind?
LLMs present a striking paradox: they predict social norms at superhuman levels, yet they perform worse on theory of mind tasks than older models did. What explains this disconnect, and what does it reveal about how these systems reason about minds versus rules?