Abstract
Reasoning with uncertainty and evidence plays an important role in decision making and problem solving in many domains, including medicine, engineering, forensics, intelligence and law. To aid domain experts in performing their tasks, various tools and techniques exist that allow them to make sense of a problem, including informal graph-based sense-making tools such as mind maps and argument diagrams, which allow for structuring and visualising the problem and the user's reasoning involved in solving it. A limitation of these tools is that they are intended only for visualising the user's reasoning and thinking: they do not allow for automated reasoning or computation with the visualised information. Hence, while these tools are suited to creating an initial sketch of a problem, they do not support experts in formally evaluating that problem.

Formal systems for reasoning about evidence have been proposed in the field of artificial intelligence (AI). In contrast with the aforementioned sense-making tools, these systems are precisely defined in terms of their notation and semantics and allow for automated inference, formal evaluation and computation. Their inner workings are well understood, and conditions can be studied under which instantiations of these systems are guaranteed to be well-behaved and to satisfy desirable properties. However, domain experts typically do not have the expertise to construct formal representations within AI systems. Accordingly, in this thesis I aim to facilitate the construction of instantiations of formal AI systems so that domain experts can formally evaluate their problems. To this end, I study how domain knowledge captured in an initial sketch of a problem, expressed using a sense-making tool, can be exploited to guide the construction of formal representations within AI systems. The construction methods that I propose allow domain experts to analyse their problems in a precise and thorough manner using these formal models.

I focus on two types of formal systems proposed in AI: probabilistic models, more specifically Bayesian networks (BNs), and computational argumentation. Argumentation is particularly suited to adversarial settings such as the legal domain, where arguments for and against claims are constructed from evidence; these arguments can then be formally evaluated on their acceptability. Probabilistic models such as BNs allow for reasoning with numerical uncertainty such as statistical and probabilistic information, thereby allowing experts to evaluate their problem probabilistically by computing probabilities of interest.
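Similarly, "computing probabilities of interest" can be illustrated with a minimal BN of one hypothesis node and one evidence node, queried by enumeration. The variable names and numbers below are hypothetical, chosen purely for illustration.

```python
# A minimal sketch (assumed example, not from the thesis) of BN inference
# by enumeration: hypothesis H with a single piece of evidence E.

P_H = {True: 0.01, False: 0.99}                  # prior P(H)
P_E_given_H = {True: {True: 0.8, False: 0.2},    # likelihood P(E | H)
               False: {True: 0.1, False: 0.9}}

def posterior_H_given_e(e):
    """P(H | E = e) via the chain rule followed by normalisation."""
    joint = {h: P_H[h] * P_E_given_H[h][e] for h in (True, False)}
    z = sum(joint.values())                      # marginal P(E = e)
    return {h: p / z for h, p in joint.items()}

print(posterior_H_given_e(True))  # P(H=True | E=True) ~= 0.075
```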