Abstract
Mental-level modeling of software is a powerful abstraction in artificial intelligence, encountered for example in autonomous BDI-based agents. An agent of this type is equipped with goals that it attempts to achieve by selecting plans that it considers appropriate in light of the particular beliefs it holds. Those goals and beliefs can pertain to its 'physical' environment, but also to the mental state (goals and beliefs) of other agents. Having and maintaining a model of the mental state of others is important for believable social behavior, as is desirable in human-computer interaction, but also, for example, for computer-based characters that interact with each other in software applications such as (serious) games.

This dissertation focuses on the explanation of (partially) observed behavior by means of mental state attribution, adopting the perspective of a virtual beholder-entity that observes the actions of others. When the observed actions are those of a computer-based agent, the rules determining that agent's behavior may also be available to the beholder for use in explanation. Such a scenario arises in applications that demand believable virtual characters, where it is feasible to give a beholder some notion of others' behavior-producing rules but not the full details of their goals and beliefs. In that case, nonmonotonic (abductive) explanatory reasoning can be applied to observed actions in order to obtain a notion of the observed agents' possible mental states. This dissertation formalizes this form of reasoning, presenting both an abductive logical account and a specification for its implementation in terms of answer set programming.

Beyond establishing that an observed agent could have had a particular mental state, reasoning about observed actions involves the notion of 'dynamics': in the abductive account, mental states inferred as explanations for observed behavior should be attributed to the agent in a state preceding the actions it performed. In this dissertation, propositional dynamic logic (PDL) is used as a tool for modeling those dynamics, focusing in part on the case where the actions of computer-based agents are observed. Moreover, PDL is used to formalize first-order 'mindreading', a term typically encountered in the literature as a theory-neutral way of referring to the explanation of behavior in mentalistic terms. Existing psychological models of mindreading are discussed and employed as a basis for determining the logical format of particular patterns of mindreading. Having a formal grasp on this format can be helpful both in eliciting concrete instances of those patterns pertaining to behavior in particular (software) environments and in implementing them as an aspect of AI. The dissertation concludes with a comparison to related (logic-based) approaches to plan/intention recognition and mindreading, of which there are many, pointing out differences and opportunities for crossover.
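To give a rough, concrete impression of the kind of abductive reasoning described above, the following is a minimal sketch in answer set programming (clingo syntax). It is only an illustration under assumed names (goal/1, belief/1, does/1, observed/1, goto_machine); it is not the specification developed in the dissertation.

    % Hypothetical sketch: abducing a mental state that explains an observed action.
    % All predicate and constant names are illustrative.

    % Abducibles: candidate goals and beliefs of the observed agent.
    { goal(have_coffee) ; goal(have_tea) }.
    { belief(machine_works) }.

    % A behavior-producing rule assumed to be known to the beholder:
    % an agent that wants coffee and believes the machine works walks to it.
    does(goto_machine) :- goal(have_coffee), belief(machine_works).

    % The observation made by the beholder.
    observed(goto_machine).

    % Every answer set must account for the observation (abductive constraint).
    :- observed(A), not does(A).

    % Prefer parsimonious explanations.
    #minimize { 1,goal,G : goal(G) ; 1,belief,B : belief(B) }.

    #show goal/1.
    #show belief/1.

Running clingo on this program yields the parsimonious explanation goal(have_coffee) and belief(machine_works): a mental state that, given the beholder's notion of the agent's behavior-producing rule, accounts for the observed action.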
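As a rough illustration of the role of PDL, recall its standard modalities: [α]φ ('after every execution of α, φ holds') and ⟨α⟩φ ('some execution of α ends in a state satisfying φ'). The schema below is an assumed, simplified pattern, not the dissertation's actual formalization: an attributed mental state μ makes the observed action α performable, and abductive explanation runs this implication backwards, from the observed α to a candidate μ holding in the state preceding it.

    % Illustrative schema only; G_j / B_j abbreviate "agent j has goal / belief"
    % and are not the dissertation's actual operators.
    \[
      \underbrace{G_j\,\gamma \;\wedge\; B_j\,\beta}_{\text{attributed mental state } \mu}
      \;\rightarrow\; \langle \alpha \rangle \top
    \]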