Formalizing Explanatory Dialogues
Abstract
Many works have proposed architectures and models to incorporate explanation within an agent's design for various reasons, e.g. improving human-agent teamwork, training in virtual environments, and belief revision, to name just a few. With these novel architectures a problem emerges: how can these explanations be communicated in a goal-directed and rule-governed dialogue system?
In this paper we formalize Walton's CE dialectical system of explanatory dialogues in Prakken's framework. We extend this formalization in the Extended CE system by generalizing the protocol and incorporating a general account of dialectical shifts. More precisely, we show how a shift to any dialogue type can take place; as an example, we describe a shift to an argumentative dialogue whose goal is to give the explainee the possibility to challenge the explainer's explanations. In addition, we propose the use of commitment and understanding stores to avoid circular and inconsistent explanations and to judge the success of an explanation. We show that, under specific conditions, the dialogue terminates in a finite number of steps and that the space complexity of the stores grows polynomially in the size of the explanatory model.
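To make the role of the two stores concrete, the following is a minimal, hypothetical sketch (not the paper's formal CE system) of how commitment and understanding stores could be tracked in an explanatory dialogue; all names (ExplanatoryDialogue, explain, acknowledge, successful) are assumptions introduced for illustration only.

```python
# Minimal illustrative sketch: commitment and understanding stores in an
# explanatory dialogue.  Circular or already-offered explanations are rejected,
# and success is judged by whether the explainee understands the explanandum.

class ExplanatoryDialogue:
    def __init__(self, explanandum: str):
        self.explanandum = explanandum                      # statement to be explained
        self.commitments = {"explainer": set(), "explainee": set()}
        self.understanding = set()                          # statements the explainee reports understanding
        self.offered = set()                                 # explanations already put forward

    def explain(self, statement: str, explanation: frozenset) -> bool:
        """Explainer offers `explanation` for `statement`; rejected if circular,
        already offered, or (naively) inconsistent with prior commitments."""
        if statement in explanation or explanation in self.offered:
            return False                                     # circular or repeated explanation
        if any(f"not {p}" in self.commitments["explainer"] for p in explanation):
            return False                                     # toy inconsistency check
        self.offered.add(explanation)
        self.commitments["explainer"] |= explanation
        return True

    def acknowledge(self, statement: str) -> None:
        """Explainee signals understanding of `statement`."""
        self.understanding.add(statement)

    def successful(self) -> bool:
        """The dialogue succeeds once the explainee understands the explanandum."""
        return self.explanandum in self.understanding


if __name__ == "__main__":
    d = ExplanatoryDialogue("the engine overheated")
    d.explain("the engine overheated", frozenset({"the coolant leaked"}))
    d.acknowledge("the engine overheated")
    print(d.successful())                                    # True
```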
Domains
Artificial Intelligence [cs.AI]

Origin: Files produced by the author(s)