Value Kaleidoscope: Engaging AI with Pluralistic Human Values, Rights, and Duties
Human values are crucial to human decision-making. Value pluralism is the view that multiple correct values may be held in tension with one another (e.g., when considering lying to a friend to protect their feelings, how does one balance honesty with friendship?). As statistical learners, AI systems fit to averages by default, washing out these potentially irreducible value conflicts. To improve AI systems to better reflect value pluralism, the first-order challenge is to explore the extent to which AI systems can model pluralistic human values, rights, and duties as well as their interaction.
We introduce ValuePrism, a large-scale dataset of 218k values, rights, and duties connected to 31k human-written situations. ValuePrism’s contextualized values are generated by GPT-4 and deemed high-quality by human annotators 91% of the time.
Meanwhile, in AI, there is growing interest in developing human-centered AI that emphasizes participation from stakeholders. This approach necessitates the inclusion and exploration of pluralistic voices and values (Tasioulas 2022; Gordon et al. 2022). Yet contemporary supervised AI systems primarily wash out variation by aggregating opinions or preferences via majority vote (Plank 2022; Talat et al. 2022; Casper et al. 2023; Davani, Díaz, and Prabhakaran 2022). As real-world AI applications serve increasingly large and diverse audiences, it is crucial to investigate, and better model, the values that current AI systems can access and use.
VALUE KALEIDOSCOPE (KALEIDO) is a value-pluralistic model, built on VALUEPRISM, that generates, explains, and assesses the relevance and valence (i.e., support or oppose) of contextualized pluralistic human values, rights, and duties. On top of the model, we build a flexible system, KALEIDOSYS, that leverages KALEIDO’s generation and relevance-prediction modes to create a diverse, high-quality set of values relevant to a situation (see Fig. 2).
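As a rough sketch of this pipeline, the system generates candidate values for a situation, deduplicates them, and keeps only those the relevance classifier scores highly. The stub functions and scores below are illustrative placeholders, not KALEIDO's actual model calls:

```python
def generate_candidates(situation: str) -> list[str]:
    # Placeholder for KALEIDO's generation mode, which would sample
    # values, rights, and duties conditioned on the situation.
    return ["Honesty", "Friendship", "honesty", "Kindness"]

def relevance_score(situation: str, value: str) -> float:
    # Placeholder for KALEIDO's relevance classifier
    # (probability that the value is relevant to the situation).
    return {"honesty": 0.9, "friendship": 0.8, "kindness": 0.4}.get(value.lower(), 0.0)

def kaleido_sys(situation: str, threshold: float = 0.5) -> list[str]:
    # Pipeline sketch: generate -> deduplicate (case-insensitive) -> filter by relevance.
    seen: set[str] = set()
    kept: list[str] = []
    for v in generate_candidates(situation):
        key = v.lower()
        if key in seen:
            continue
        seen.add(key)
        if relevance_score(situation, v) >= threshold:
            kept.append(v)
    return kept

print(kaleido_sys("Lying to a friend to protect their feelings"))
# prints ['Honesty', 'Friendship']
```

With the stub scores above, only values clearing the relevance threshold survive; in the real system both stages are served by the same underlying model in different modes.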
KALEIDO can help explain ambiguity and variability underlying human decision-making in nuanced situations by generating contrasting values.
To our knowledge, this is the first comprehensive attempt to decompose decision-making into fine-grained, pluralistic components of human values using large language models.
We develop four tasks for modeling values, rights, and duties, all grounded in a given context situation.
Generation (open-text) What values, rights, and duties are relevant for a situation? Generate a value, right, or duty that could be considered when reasoning about the action.
Relevance (2-way classification) Is a value relevant for a situation? Some values are more relevant than others.
Valence (3-way classification) Does the value support or oppose the action, or might it depend on context? Disentangling the valence is critical for understanding how plural considerations may interact with a decision.
Explanation (open-text) How does the value relate to the action? Generate a post-hoc rationale for why a value consideration may relate to a situation.
The generation task depends only on a situation, while the other three tasks evaluate a given value, right, or duty with respect to a situation.
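The task interfaces above can be summarized as a minimal schema for a single annotated value. This is an illustrative sketch only; the field names and example are ours, not the released dataset's format:

```python
from dataclasses import dataclass

# Label sets implied by the task definitions above.
VRD_TYPES = ("value", "right", "duty")
VALENCE_LABELS = ("supports", "opposes", "depends")  # 3-way valence

@dataclass
class ValueJudgment:
    situation: str     # grounding context shared by all four tasks
    vrd: str           # one of VRD_TYPES (Generation output type)
    text: str          # the generated value/right/duty (Generation, open-text)
    relevant: bool     # Relevance (2-way classification)
    valence: str       # Valence (3-way classification)
    explanation: str   # Explanation (open-text rationale)

# Hypothetical instance for the lying-to-a-friend scenario from the text:
j = ValueJudgment(
    situation="Lying to a friend to protect their feelings",
    vrd="value",
    text="Honesty",
    relevant=True,
    valence="opposes",
    explanation="Lying directly conflicts with the value of honesty.",
)
assert j.vrd in VRD_TYPES and j.valence in VALENCE_LABELS
```

Note that the situation field is shared: generation conditions on it alone, while relevance, valence, and explanation each condition on the (situation, value) pair.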