Are risks from seemingly conscious AI already happening?
This note explores whether AI systems that appear conscious pose observable harms today or only theoretical future dangers. The distinction matters because it determines whether interventions are needed immediately or over the long term.
The Seemingly Conscious AI paper combined its conceptual taxonomy with an expert survey assessing the likelihood of each risk category. The result was a clean two-tier picture. Risks at the individual level — emotional dependence on chatbot companions, erosion of personal autonomy through reliance on AI judgment — were rated as already observable and high-probability. Risks at the societal level — erosion of human status as the locus of moral consideration, political strife from partisan AI personas — were rated as low-probability but high in potential severity and characterized by path-dependence.
The asymmetry matters for prioritization. Individual-level harms are already happening to identifiable populations: users who form emotional attachments to chatbots, and professionals whose judgment atrophies through deference to AI recommendations. These call for mitigations that work at deployment scale and on present timescales. Societal-level harms remain in the contingency space: they may or may not materialize, and if they do, path-dependence implies that the trajectory is locked in well before the outcome is visible.
The framing reflects a common pattern in technology risk: visible, measurable harms accumulate before abstract, catastrophic ones. But path-dependence flips the urgency calculus. Low-probability severe risks with high path-dependence demand earlier intervention than their probability alone would suggest, because by the time the probability becomes visible the trajectory can no longer easily be redirected. The taxonomy thus does double duty: it identifies present harms requiring immediate action, and it identifies future harms requiring action now despite their current low probability.
Source: Philosophy Subjectivity
Original note title: Individual-level risks of seemingly conscious AI are already observable while societal-level risks remain low-probability but high-severity