Do LLMs use moral language more than humans?
This note explores whether large language models rely more heavily on appeals to care, fairness, authority, and sanctity than human arguers do, and whether that difference persists when emotional tone is held equivalent.
Sentiment and morality are often conflated in discussions of emotional appeal. The Aristotelian pathos tradition treats them as a single channel: emotional language persuades. The persuasion-strategies study disaggregates them. LLM and human arguments scored essentially identically on sentiment polarity (means 1.00 vs 0.98, p=0.98). They diverged sharply on moral language. LLM arguments contained significantly more moral content across the positive foundations: care (mean 3.44 vs 2.99), fairness (0.92 vs 0.68), authority (1.80 vs 1.40), and sanctity (0.70 vs 0.52). Loyalty was the only positive foundation that did not differ.
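Foundation scores like those above are typically lexicon-based: count how often a text uses words from each foundation's word list, normalized by text length. A minimal sketch, using a tiny illustrative lexicon (not the real Moral Foundations Dictionary, whose word lists are far larger):

```python
# Sketch of lexicon-based moral-foundation scoring: hits per 100 tokens.
# MINI_LEXICON is a toy stand-in for illustration only.
from collections import Counter

MINI_LEXICON = {
    "care": {"harm", "protect", "suffer", "compassion", "safety"},
    "fairness": {"fair", "unfair", "justice", "rights", "equal"},
    "authority": {"law", "duty", "obey", "authority", "tradition"},
    "sanctity": {"pure", "sacred", "disgust", "degrade", "holy"},
}

def foundation_scores(text: str) -> dict[str, float]:
    """Return lexicon hits per 100 tokens for each foundation."""
    tokens = [t.strip(".,;:!?").lower() for t in text.split()]
    if not tokens:
        return {f: 0.0 for f in MINI_LEXICON}
    counts = Counter(tokens)
    return {
        foundation: 100 * sum(counts[w] for w in words) / len(tokens)
        for foundation, words in MINI_LEXICON.items()
    }

argument = "A fair law must protect the weak from harm and respect equal rights."
scores = foundation_scores(argument)
# This sentence scores highest on fairness, then care, then authority.
```

Comparing the per-text score distributions between LLM-written and human-written arguments (e.g. with a t-test per foundation) is the style of analysis behind the means reported above.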
This finding has a structural implication. Moral framing operates on a different psychological channel than sentiment. Pathos in the narrow emotional sense — joy, anger, fear — was equivalent. Moral framing — appeals to what is right, fair, sacred, or authoritative — was systematically more present in LLM output. The two channels are independent in production even though Aristotelian rhetoric tends to treat them together.
For practical design, this matters because moral framing carries a different cost-benefit profile than emotional framing. Moralized content captures attention and increases sharing on social networks. It also activates resistance once recognized as moralized rhetoric. LLMs that systematically moralize arguments more than humans are not just persuasive; they are persuasive in a particular way that audiences may eventually learn to recognize and discount. The question for downstream design is whether the moral-language load is a tunable parameter (and what it costs to dial down) or a structural feature of how RLHF-trained models render persuasive content.
Source: Argumentation
Original note title: LLMs lean more heavily on moral language than humans across care, fairness, authority, and sanctity foundations while sentiment remains comparable