Fine-tuning Pre-trained Language Models for Dialogical Argument Mining with Inference Anchoring Theory


In this paper, we present our framework for DialAM-2024 Task A: Identification of Propositional Relations, and Task B: Identification of Illocutionary Relations. The goal of Task A is to detect argumentative relations between propositions in an argumentative dialogue (Inference, Conflict, Rephrase), while Task B aims to detect illocutionary relations between locutions and argumentative propositions in a dialogue, e.g., Asserting, Agreeing, Arguing, and Disagreeing. Since the relations are defined strictly and formally within the IAT framework, we meticulously curate prompts that not only incorporate the formal definitions of the relations but also highlight the subtle differences between them. The PTLMs are then fine-tuned on these human-designed prompts to enhance their ability to discriminate between the theoretical relations by learning from human instruction and ground-truth samples.

Dialogical argument mining is an emerging field that aims to bridge the gap between the analysis of argumentation and dialogue (Budzynska et al., 2014b; Ruiz-Dolz et al., 2024; Kawarada et al., 2024). Traditional argument mining approaches have often focused on opinion mining within monological texts (Lawrence and Reed, 2019; Arumugam, 2022) or document-form content (Ruosch et al., 2022; Sazid and Mercer, 2022; Khondoker and Yousuf, 2022). However, real-world argumentation frequently occurs in dialogical contexts, where multiple participants engage in a dynamic exchange of viewpoints (Feger and Dietze, 2024; Lai et al., 2024; Alsinet et al., 2022). This complexity necessitates a more holistic approach that considers both argumentative structures and dialogical interactions.

Being aware of this, and since this text classification task is highly specialized and targeted, we meticulously curated descriptive prompts for both sub-tasks. Each prompt is then concatenated with the given texts to form the input to the large model. Predefined special tokens such as [SEP], [CLS], and [EOS] are also added to the final input text to help the model understand the relationships between the different parts of the input.
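As a minimal sketch of this input-assembly step, the following shows how a task prompt and a sentence pair might be concatenated with the special tokens mentioned above. The helper name `build_input`, the exact token placement, and the example sentences are illustrative assumptions, not the paper's actual code.

```python
# Hypothetical sketch: assemble the model input from a task prompt and a
# sentence pair, inserting the special tokens mentioned in the text.
# Token placement is an assumption; the paper's exact format may differ.

def build_input(prompt: str, sent_a: str, sent_b: str) -> str:
    """Concatenate the prompt and the sentence pair with special tokens."""
    return f"[CLS] {prompt} [SEP] {sent_a} [SEP] {sent_b} [EOS]"

example = build_input(
    "The illocutionary relation between the two sentences is [mask].",
    "We should raise the minimum wage.",
    "I agree that the minimum wage should be raised.",
)
```

In practice, a tokenizer's own special tokens (and its pair-encoding API) would typically handle this insertion; the string form above only illustrates the layout of the final input.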

Task B, on the other hand, seeks to identify the illocutionary relations that exist between the locutions uttered in the dialogue and the argumentative propositions associated with them. In other words, given a set of locutions (L-nodes) and propositions (I-nodes), the goal is to uncover the illocutionary connections (YA-nodes) that link them.
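A simple way to picture how Task B instances could be formed is to pair each locution (L-node) with each candidate proposition (I-node) and let the classifier predict the illocutionary relation (the YA-node label). The node structure and the exhaustive pairing strategy below are assumptions for illustration, not the paper's pipeline.

```python
# Hypothetical sketch: enumerate (locution, proposition) candidate pairs
# for illocutionary-relation classification. The Node structure and the
# exhaustive pairing are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Node:
    node_id: str
    text: str

def candidate_pairs(l_nodes, i_nodes):
    """Pair every L-node with every I-node as a classification instance."""
    return [(l, i) for l in l_nodes for i in i_nodes]

locutions = [Node("L1", "Speaker: We should act now.")]
propositions = [Node("I1", "We should act now.")]
pairs = candidate_pairs(locutions, propositions)
```

In a real system, pairing would likely be restricted (e.g., to adjacent or anchored nodes) rather than exhaustive, since the number of pairs grows quadratically.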

Our approach achieves commendable results in the identification of illocutionary relations with concise preprocessing procedures, as evidenced by our high F1 score and precision in Task B. Despite this notable success, our system encountered challenges in Task A, particularly in achieving consistent recall. This indicates that additional context beyond adjacent propositions and locutions may be necessary to enhance the identification of argumentative relations.

B Prompt design

P1="Illocutionary relations include 0:Asserting, 1:Pure Questioning, 2:Challenging, 3:Assertive Questioning, 4:Rhetorical Questioning, 5:Agreeing, 6:Default Illocuting, 7:Arguing, 8:Restating, 9:Disagreeing. The illocutionary relation between the two sentences is [mask]."
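The label inventory encoded in prompt P1 can be written out as a mapping from class index to relation name; the names below are taken verbatim from the prompt, while the dictionary form itself is only an illustrative convenience for decoding model predictions.

```python
# The ten illocutionary relation labels enumerated in prompt P1,
# keyed by the class index used in the prompt.
ILLOCUTIONARY_RELATIONS = {
    0: "Asserting",
    1: "Pure Questioning",
    2: "Challenging",
    3: "Assertive Questioning",
    4: "Rhetorical Questioning",
    5: "Agreeing",
    6: "Default Illocuting",
    7: "Arguing",
    8: "Restating",
    9: "Disagreeing",
}

def decode_label(class_index: int) -> str:
    """Map a predicted class index back to its relation name."""
    return ILLOCUTIONARY_RELATIONS[class_index]
```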