Abstract
Our aim is to model the decision-making and communication of software agents such that the justifications of their beliefs are preserved. To this end, we address the following issues. We provide a use-semantics for epistemic statements such that we can express what it means for an agent to believe propositions.
This use-semantics allows our agents to believe propositions in conformance with the conventions of the human experts that our agents are said to represent. We use games similar to Wittgenstein's language games to provide the semantics for decision-making and communication. We provide decision games that allow our agents to predicate `to believe' and `to be ignorant' about propositions in accordance with the conventions of their community. Additionally, we provide dialogue games that allow our agents to communicate their beliefs and desires. In these dialogue games we define when agents may utter the speech acts of posing questions and posing requests in accordance with the conventions of their community. Two types of questions are formalised: the question for information that justifies an agent's decision to adopt beliefs, and the question for information that justifies an agent's decision to retract beliefs. Analogously, two types of requests are formalised: the request that another agent decide to adopt beliefs, and the request that another agent decide to retract beliefs.

If an agent has a disagreement with another agent, and the dialogue games cannot justify either agent in deciding to change its beliefs in a way that would resolve the disagreement, then, seen from the agent's perspective, the disagreement is irresolvable. Because communication is governed by dialogue games and decisions to change beliefs are governed by decision games, the agents have an operational definition of when they have run out of options to resolve their disagreements. We provide generic decision rules that resolve disagreements, and decision rules that settle irresolvable disagreements with an agreement to disagree. We argue that an agreement to disagree is tantamount to believing an inconsistent proposition, and that an agreement to disagree between two agents can be a third agent's justification to believe an inconsistent proposition.

Because the meaning of epistemic propositions, such as beliefs, is provided by a use-semantics, our agents can give a sensible meaning to inconsistent propositions, just as they do to any consistent proposition. If we are to enable our agents to believe inconsistent propositions, we need a logic that allows them to hold such propositions consistently. To cater for the additional epistemic modalities that our agents may need, we define a multi-valued logic in which a proposition's truth value is not restricted to truth and falsity. We define a formal method that allows propositions with epistemic modalities such as ignorance, inconsistency and bias.

Our agents' decision processes and communications reflect possible conventions on the prevailing view in the community that our agents are said to represent. We have succeeded in designing a multiagent system in which the beliefs of agents are, and remain, in conformance with the views of their communities.
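As a rough, non-authoritative illustration of the multi-valued idea sketched above, the following Python fragment shows one way truth values beyond truth and falsity could be represented; the value names and the merge rule are assumptions made for this example (the bias modality is omitted), not the logic defined in the thesis.

```python
from enum import Enum

# Hypothetical truth values; the thesis's own value set and ordering may differ.
class Value(Enum):
    TRUE = "true"
    FALSE = "false"
    IGNORANT = "ignorant"          # the agent can predicate neither belief nor disbelief
    INCONSISTENT = "inconsistent"  # e.g. the outcome of an agreement to disagree

def merge(a: Value, b: Value) -> Value:
    """Combine two agents' verdicts on one proposition (illustrative rule only):
    agreement keeps the shared value, ignorance defers to the informed party,
    and a genuine conflict yields the inconsistent value."""
    if a == b:
        return a
    if a == Value.IGNORANT:
        return b
    if b == Value.IGNORANT:
        return a
    return Value.INCONSISTENT

# A third agent observing that two peers agreed to disagree about a proposition:
print(merge(Value.TRUE, Value.FALSE))  # Value.INCONSISTENT
```

On this reading, a third agent that takes an agreement to disagree as justification for its own belief ends up with the inconsistent value for that proposition, which the use-semantics nevertheless renders meaningful.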