How well can large language models explain business processes?
One such capability is Situation-Aware eXplainability (SAX): the generation of explanations that are causally sound and yet human-interpretable, and that take into account the process context in which the explained condition occurred.
In this paper, we present the SAX4BPM framework developed to generate SAX explanations. The SAX4BPM suite consists of a set of services and a central knowledge repository. These services elicit the various knowledge ingredients that underlie SAX explanations. A key innovative ingredient among them is the causal process execution view. In this work, we integrate the framework with an LLM, leveraging its power to synthesize the various input ingredients into improved SAX explanations.
Since LLMs are prone to hallucination and lack an inherent capacity to reason, there is doubt about their adequacy for SAX; we therefore pursued a methodological evaluation of the quality of the generated explanations. To this aim, we developed a designated scale and conducted a rigorous user study. Our findings show that the knowledge input presented to the LLM helped guard-rail its performance, yielding SAX explanations with better perceived fidelity.
While we acknowledge that the objective quality of explanations is important, in this work we attend to the use of LLMs as a means to synthesize explanations, articulating the various system outputs in textual form, since the eventual explanation presented alters users' perceptions. We hypothesize that the perceived quality of such LLM-generated explanations can be altered, and even improved, by methodically injecting different knowledge articulations as input, leveraging LLM prompt engineering.
The paper's contribution is threefold: a tool that integrates with LLMs to automate the articulation of such explanations in BPs, an examination of how such explanations are shaped by different inputs to the LLM, and a methodological evaluation of the perceived quality of such explanations, yielding a designated scale.
Conventional explanation approaches fall short for BPs, as they generally fail to:
• Express the BP model constraints (i.e., the semantics of the process model),
• Include the richness of contextual situations that affect process outcomes (additional information that affects the outcome but is usually not modeled),
• Reflect the true causal execution dependencies among the activities in the BP, or
• Make sense and be interpretable to process users (explanations are usually not given in a human-interpretable form that eases understanding).
Hypothesis 1 (H1). Explanations generated by LLMs informed by knowledge about business processes will be perceived as having higher fidelity compared to explanations generated by uninformed LLMs.
Hypothesis 2 (H2). Explanations generated by LLMs informed by knowledge about BPs will be perceived as having higher interpretability compared to explanations generated by uninformed LLMs.
• Process view - we infer activities and the directly-follows ordering of activities by aggregation from events [35]. Accordingly, we infer the flows-to relation through process discovery, e.g., [37]. In turn, indirectly-follows is inferred as the transitive closure of directly-follows.
• Causal view - we infer the causes relationship as a causal-execution-dependency as in [10]. Accordingly, we infer the causes relation among the activities as annotated. In turn, indirectly-causes is inferred as the transitive closure of causes.
• XAI view - we capture the set of features and their corresponding importance values in the context of each activity.
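The two relational inferences above (indirectly-follows from directly-follows, and indirectly-causes from causes) are both transitive closures over pairs of activities. The following sketch, using hypothetical function names rather than the actual SAX4BPM API, illustrates how the directly-follows relation can be aggregated from traces and closed transitively:

```python
def directly_follows(traces):
    """Aggregate the directly-follows relation over all traces:
    (a, b) is included if b immediately follows a in some trace."""
    df = set()
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            df.add((a, b))
    return df

def transitive_closure(pairs):
    """Iteratively add (a, d) whenever (a, b) and (b, d) are present;
    applied to directly-follows this yields indirectly-follows, and
    applied to causes it yields indirectly-causes."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

# Toy log: each trace is the ordered activity sequence of one case.
traces = [
    ["Issue Ticket", "Issue Extended Fine", "Tow Truck Arrival"],
    ["Issue Ticket", "Issue Regular Fine"],
]
df = directly_follows(traces)
indirect = transitive_closure(df)
# ("Issue Ticket", "Tow Truck Arrival") appears only in the closure.
```

The same `transitive_closure` helper serves both the process view and the causal view, since both relations are closed in the same way.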
To illustrate the application of the SAX4BPM services, we generated data on a BP of parking fines. In this process (see Figure 5), a parking ticket is given when a vehicle is parked in a prohibited lot and does not possess a disabled permit. In this case, two types of fines can be given, depending on whether the parking place is hazardous (e.g., the vehicle is parked on a sidewalk or a crosswalk) or not. In the case of a hazardous place, an extended fine is issued and a tow truck is called. Note that the arrival time of the tow truck is always later than the time taken to submit the extended fine. We generated the dataset for this example using the open-source BIMP log-simulation tool and stored it in the KG.
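The actual dataset was produced with BIMP; purely for illustration, a minimal hand-rolled simulation of the same process, with invented activity names and timing parameters, can enforce the stated constraint that the tow truck always arrives after the extended fine is submitted:

```python
import random
from datetime import datetime, timedelta

def simulate_case(case_id, start):
    """Generate (case_id, activity, timestamp) events for one
    parking-fine case. Hazardous spots trigger an extended fine
    and a tow truck whose arrival is strictly later than the
    extended-fine submission, mirroring the process constraint."""
    events = [(case_id, "Issue Ticket", start)]
    hazardous = random.random() < 0.3
    if hazardous:
        fine_time = start + timedelta(minutes=random.randint(5, 30))
        events.append((case_id, "Submit Extended Fine", fine_time))
        # Tow truck arrival is offset from the fine submission,
        # so it can never precede it.
        tow_time = fine_time + timedelta(minutes=random.randint(10, 60))
        events.append((case_id, "Tow Truck Arrival", tow_time))
    else:
        events.append((case_id, "Issue Regular Fine",
                       start + timedelta(minutes=random.randint(5, 30))))
    return events

log = []
t0 = datetime(2024, 1, 1, 8, 0)
for i in range(100):
    log.extend(simulate_case(i, t0 + timedelta(hours=i)))
```

A log generated this way exhibits the temporal pattern discussed above: extended-fine submission always directly precedes tow-truck arrival within a case.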