Abstract
Computer games are increasingly used for training purposes. Such virtual training games are used to train competencies such as leadership, negotiation, and social skills. In virtual training, a human trainee interacts with one or more virtual characters that play the trainee’s team members, colleagues, or opponents. To learn from virtual training, it is important that the virtual characters display realistic human behavior. This can be achieved by human players who control the virtual game characters, or by intelligent software agents that generate the behavior of the virtual characters automatically. Using intelligent agents instead of humans allows trainees to train independently of others, which gives them more training opportunities. A potential problem with intelligent agents is that trainees do not always understand why the agents behave the way they do. For instance, virtual team members (played by intelligent agents) that do not follow the instructions of their leader (a human trainee) may have misunderstood the instructions, or may be disobeying them on purpose. After playing the scenario, the trainee does not know whether he should communicate more clearly, or give better or safer instructions. A solution is to let virtual agents explain the reasons behind their behavior. When trainees can ask their co-players to explain the motivations for their actions, they are given the opportunity to better understand the played scenarios and their own performance.

This thesis proposes an approach to automatically generate explanations of the behavior of virtual agents in training games. Psychological research shows that people usually explain and understand human (or human-like) behavior in terms of mental concepts such as beliefs, goals, and intentions. In the proposed approach, the actions of virtual agents are likewise explained in terms of mental concepts. To generate such explanations efficiently, agents are implemented in a BDI-based (Belief-Desire-Intention) programming language. The behavior of BDI agents is represented by beliefs, goals, plans, and intentions, and their actions are determined by a reasoning process over these mental concepts. Thus, the mental concepts responsible for the generation of an action can be reused to explain that action.

The approach can generate different types of explanations. Empirical studies with instructors, experts, and novices, respectively, showed that people generally prefer explanations that combine the belief that triggered an action with the goal that the action achieves. In a validation study in the domain of virtual negotiation training, subjects indicated that the agent’s explanations increased their understanding of the motivations behind its behavior. In a validation study in the domain of human-agent teamwork, subjects understood the agent’s behavior better and preferred the amount of information the agent provided when it explained its behavior. Finally, the approach was extended to make agents capable of providing explanations that contain predictions about the behavior of other agents. To this end, the explainable agents were equipped with a theory of mind, that is, the ability to attribute mental states such as beliefs and goals to others and, based on those, to predict their behavior.
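The core mechanism can be illustrated with a minimal sketch, written here in Python rather than in an actual BDI programming language and not taken from the thesis itself: an agent records the belief that triggered each action and the goal the action serves, and later reuses these mental concepts to answer a why-question about that action. All class names, rule contents, and the negotiation example below are illustrative assumptions.

    # Minimal sketch: a BDI-style agent reuses the belief and goal that
    # generated an action to explain that action afterwards.
    from dataclasses import dataclass

    @dataclass
    class Rule:
        """Simplified plan rule: if `belief` holds and `goal` is adopted, do `action`."""
        belief: str
        goal: str
        action: str

    class ExplainableBDIAgent:
        def __init__(self, rules):
            self.rules = rules
            self.beliefs = set()
            self.goals = set()
            self.history = {}  # action -> (triggering belief, goal it serves)

        def deliberate(self):
            """Select applicable actions and record the mental concepts behind each."""
            performed = []
            for rule in self.rules:
                if rule.belief in self.beliefs and rule.goal in self.goals:
                    self.history[rule.action] = (rule.belief, rule.goal)
                    performed.append(rule.action)
            return performed

        def explain(self, action):
            """Reuse the recorded belief and goal to explain a past action."""
            if action not in self.history:
                return f"I did not perform '{action}'."
            belief, goal = self.history[action]
            return (f"I performed '{action}' because I believed that {belief} "
                    f"and I wanted to {goal}.")

    # Illustrative negotiation example (contents are assumptions, not thesis data).
    agent = ExplainableBDIAgent([
        Rule(belief="the offer is below my minimum acceptable price",
             goal="reach a profitable agreement",
             action="reject the offer"),
    ])
    agent.beliefs.add("the offer is below my minimum acceptable price")
    agent.goals.add("reach a profitable agreement")
    agent.deliberate()
    print(agent.explain("reject the offer"))
    # -> I performed 'reject the offer' because I believed that the offer is
    #    below my minimum acceptable price and I wanted to reach a profitable agreement.

In this sketch the explanation combines the triggering belief and the goal achieved by the action, the combination that the empirical studies reported above found people generally prefer.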