
An argumentation-based approach for reasoning about trust in information sources

Abstract

During a dialogue, agents exchange information and thus need to deal with incoming information. For that purpose, they should be able to reason effectively about the trustworthiness of information sources. This paper proposes an argument-based system that allows an agent to reason about its own beliefs and about information received from other sources. An agent's beliefs are of two kinds: beliefs about the environment (like the window is closed) and beliefs about trusting sources (like agent i trusts agent j). Six basic forms of trust are discussed in the paper, including the most common one, sincerity. Starting from a base which contains such information, the system builds two types of arguments: arguments in favour of trusting a given source of information and arguments in favour of believing statements which may be received from other agents. We discuss how the different arguments interact and how an agent may decide to trust another source and thus to accept information coming from that source. The system is then extended in order to deal with graded trust (like agent i trusts agent j to some extent).

1. Introduction

An increasing number of software applications are being conceived, designed, and implemented using the notion of autonomous agents. These applications vary from email filtering (Maes, 1996), through electronic commerce (Rodriguez, Noriega, Sierra, & Padget, 1997; Wellman, 1993), to large industrial applications (Jennings et al., 1996). In all of these disparate cases, the agents are autonomous in the sense that they have the ability to decide for themselves which goals they should adopt and how these goals should be achieved (Wooldridge & Jennings, 1995). In most such applications, the autonomous components need to interact with one another because of the inherent interdependencies between them. They need to communicate in order to resolve differences of opinion and conflicts of interest, to work together to find solutions to dilemmas and to construct proofs that they cannot manage alone, or simply to inform each other of pertinent facts. In other words, they need the ability to engage in dialogues. Consequently, agents should be able to manage and deal with trust in information sources. In negotiation dialogues, for instance, one makes contracts only with trustworthy agents. More generally, agents consider information coming from other sources only if those sources are trustworthy. As a result of this requirement, a substantial amount of work has been done on providing agents with the ability to deal with trust. Two main categories of work can be distinguished:

  • Works on understanding and formalising the notion of trust in information sources. Such works try to answer the question: what does the sentence ‘agent x trusts agent y’ mean? Examples of answers can be found in Castelfranchi (2011), Castelfranchi and Falcone (2000), Falcone, Piunti, Venanzi, and Castelfranchi (2013), Marsh (1994). In Demolombe (1998, 2001), it is argued that trust is generally not absolute but rather concerns some properties of an agent like his or her competence, sincerity, cooperativity … 

  • Works on reasoning about trust. The idea is to decide whether or not to trust a given source of information. Two categories of models have been proposed: (i) statistics-based models (Matt, Morge, & Toni, 2010; Shi, Bochmann, & Adams, 2005), which rely on the past behaviour of a source in order to predict its future behaviour, and (ii) logical models (Demolombe, 2004; Demolombe & Lorini, 2008), which infer trust in some properties from trust in other properties.

Besides, since the seminal book by Walton and Krabbe (1995), in which they distinguished between six types of dialogues, there has been much work on providing agents with the ability to engage in such dialogues. Typically, these works focus on one type of dialogue, like persuasion (Amgoud, Maudet, & Parsons, 2000), inquiry (Black & Hunter, 2009), negotiation (Sycara, 1990) and deliberation (McBurney, Hitchcock, & Parsons, 2007). Furthermore, Walton and Krabbe emphasised the need to argue in dialogues in order to convince other parties to accept opinions or offers. Consequently, in most works on modelling dialogues, agents are equipped with argumentation systems for reasoning about their own beliefs, building arguments and evaluating arguments received from other sources. While this use of argumentation is a common theme in all the work mentioned above, none of those proposals consider trust in information sources when dealing with incoming information or when making deals with other agents. They rather assume that agents are trustworthy and accept any information (respectively, offer) sent by any agent as soon as it does not contradict their own beliefs (respectively, it satisfies their goals). However, agents are neither necessarily sincere nor necessarily reliable, as argued in the large literature on trust in information sources. This means that in existing works, agents may accept claims even if their sources are not trustworthy. They may also make deals with unreliable agents.

This paper fills the gap by proposing an argumentation system that agents may use in dialogues for reasoning about different kinds of beliefs, including beliefs about trust in information sources. The system thus fulfils three tasks. It states whether:

  • to believe in a given statement

  • to trust or not a given source

  • to accept or not an information/offer received from a source.

We consider a fine-grained notion of trust as opposed to absolute trust. Indeed, an agent trusts (or distrusts) another agent with respect to a given property and not in an absolute way. For instance, one may trust someone in his sincerity but not in his competence. In this paper, we focus on the six properties identified by Demolombe (1998, 2004), namely validity, completeness, sincerity, cooperativity, competence and vigilance. In the first part of the paper, trust is considered as a binary notion, i.e. an agent either trusts in a given property of an entity or not. The system starts with a belief base which is encoded in modal logic and which contains formulas expressing information about the environment (e.g. my car is red) and information about trust (e.g. agent i trusts in the sincerity of agent j). It builds arguments in favour of statements and establishes the attacks between them. The arguments are evaluated using Dung's semantics (Dung, 1995), and finally the inferences to be drawn from the base are identified. We show that the system satisfies nice properties, namely the rationality postulates defined in Amgoud (2013) about consistency and closure under the consequence operator. In the second part of the paper, the system is extended in order to deal with graded trust as developed in Demolombe (2009) and in Demolombe and Liau (2001). The logical language that is used for representing beliefs is extended so as to encode certainty degrees of beliefs (such as, agent i has some doubts about climate change) and regularity degrees of relationships between facts (such as, if we are in London, it rains almost every day). From these two kinds of degrees, each argument is assigned an importance level which may not be the same for all arguments. Finally, arguments are evaluated using not only the attack relation but also a preference relation issued from the importance levels of the arguments.

The paper is structured as follows: Section 2 introduces the logical formalism that will be used for representing and reasoning about an agent's beliefs. Section 3 defines the six forms of trust that were initially introduced in Demolombe (2004) and Lorini and Demolombe (2008) in the case of binary trust. Section 4 presents the argumentation system as well as its properties. Section 5 presents the graded version of trust as proposed in Demolombe (2009) and Demolombe and Liau (2001), and an argumentation system that can take into account varying degrees of trust and beliefs. Section 6 compares our model with existing works on argumentation-based trust. The last section concludes.

2. Logical formalism

This section introduces the logical framework (i.e. the logical language L and its axiomatics) that will be used for representing and reasoning about beliefs and trust in information sources. The syntactic primitives of L are the following:

  • ATOM: a set of atomic propositions denoted by p, q, r, …

  • AGENT: a non-empty set of agents denoted by i, j, k, …

The language L is the set of formulas defined by the following BNF:

ϕ ::= p | ¬ϕ | ϕ ∨ ϕ | Beliϕ | Infj,iϕ,
where p ranges over ATOM and i and j range over AGENT. The other logical connectives are defined as usual. The intuitive meaning of the modal operators is:
  • Beliϕ: agent i believes that ϕ holds

  • Infj,iϕ: agent j has informed agent i that ϕ holds

The axiomatics of the logic are the axiomatics of a Propositional Multi Modal Logic (Chellas, 1980). Indeed, in addition to the axiomatics of Classical Propositional Calculus we have the following axiom schemas and inference rules.

  • (K) Beli(ϕ → ψ) → (Beliϕ → Beliψ)

  • (D) ¬(Beliϕ ∧ Beli¬ϕ)

  • (Nec) If ⊢ ϕ, then ⊢ Beliϕ

Roughly speaking, the intuitive meaning of (K) is that agent i can apply the modus ponens rule to derive consequences, (D) means that i’s beliefs are not inconsistent, and (Nec) means that i is not ignorant of logical truths.

The modal operator Infj,i obeys the following axiom schemas:

  • (EQV) If ⊢ ϕ ↔ ψ, then ⊢ Infj,iϕ ↔ Infj,iψ

  • (CONJ) Infj,iϕ ∧ Infj,iψ ↔ Infj,i(ϕ ∧ ψ)

  • (OBS) Infj,iϕ → BeliInfj,iϕ

  • (OBS') ¬Infj,iϕ → Beli¬Infj,iϕ

The intuitive meaning of (EQV) is that informing actions about two logically equivalent formulas have the same effects. For instance, to inform about the fact John is at home and John is working has the same effects as to inform about the fact John is working and John is at home. The meaning of (CONJ) is that to inform about the fact John is at home and to inform about the fact John is working has the same effects as to inform about the fact John is at home and working. The justification of this axiom schema is that informing actions are considered at an abstract level, and two distinct concrete actions may be considered as the ‘same’ action if they produce the same effect on the receiver's beliefs. The axiom schemas (OBS) and (OBS') assume that if an agent j informs (respectively, does not inform) an agent i about φ, then i is aware of this fact. This means that the communication channels are assumed to be perfect.

According to Chellas’ terminology, modalities such as Beli obey a normal system KD and modalities of the kind Infj,i obey a particular kind of classical system. Axiom schemas (OBS) and (OBS’) show how these two kinds of modalities interact.

In the sequel, the symbol ⊢ refers to the consequence operator that is based on the previous axiom schemas. Besides, a belief base is a subset of L which contains the beliefs of a given agent iAGENT.

3. Binary trust in information sources

Throughout this section, we consider two interacting agents i and j and assume that i receives a piece of information ϕL from agent j. An important question is then what is the effect of this action on what the receiver believes? In Demolombe (1998, 2004), it was argued that this depends on the sender's properties the receiver trusts in. Six properties were particularly distinguished and investigated: Trust in sincerity: sincerity is the relationship between what the trustee says and what he believes. For instance, the fact that Juliet trusts Romeo in his sincerity about the fact Juliet is beautiful means that Juliet believes that if Romeo says to Juliet that she is beautiful, then Romeo believes that she is beautiful. The general definition is: the truster believes that if he or she is informed by the trustee about some proposition, then the trustee believes that this proposition is true. Formally:

TrustSinc(i,j,ϕ) =def Beli(Infj,iϕ → Beljϕ).

It is worth mentioning that the fact that an agent i believes in the sincerity of another agent j regarding proposition φ does not mean that i believes φ. The claim may be false and j may not be aware of that. A strong version of sincerity is the property of validity.

Trust in validity: validity is the relationship between what the trustee says and what is true. For instance, the fact that Romeo trusts Juliet in her validity about the fact that Juliet loves Romeo means that Romeo believes that if Juliet says to Romeo that she loves him, then it is true that she loves him. The general definition is: the truster (i) believes that if he or she is informed by the trustee (j) about some proposition, then this proposition is true.

TrustVal(i,j,ϕ) =def Beli(Infj,iϕ → ϕ).

Trust in completeness: completeness is the relationship between what is true and what the trustee says; it is the dual of validity. For instance the fact that Romeo trusts Juliet in her completeness about the fact that Juliet loves Romeo means that Romeo believes that if it is true that Juliet loves him, then Juliet will tell Romeo that she loves him. The general definition is: the truster believes that if some proposition is true, then the truster is informed by the trustee about this proposition.

TrustCmp(i,j,ϕ) =def Beli(ϕ → Infj,iϕ).

Trust in cooperativity: cooperativity is the relationship between what the trustee believes and what he says; it is the dual of sincerity. For instance, the fact that Juliet trusts Romeo in his cooperativity about the fact Juliet is beautiful means that Juliet believes that if Romeo believes that she is beautiful, then Romeo says to her that she is beautiful. The general definition is: the truster believes that if the trustee believes that some proposition is true, then the truster is informed by the trustee about this proposition.

TrustCoop(i,j,ϕ) =def Beli(Beljϕ → Infj,iϕ).

Trust in competence: competence is the relationship between what the trustee believes and what is true. For instance, the fact that Juliet trusts Romeo in his competence about the fact that the door of her house is closed means that Juliet believes that if Romeo believes that the door of her house is closed, then it is true that the door is closed. The general definition is: the truster believes that if the trustee believes that some proposition is true, then this proposition is true.

TrustComp(i,j,ϕ) =def Beli(Beljϕ → ϕ).

Trust in vigilance: vigilance is the relationship between what is true and what the trustee believes; it is the dual of competence. For instance, the fact that Juliet trusts Romeo in his vigilance about the fact that the door of her house is closed means that Juliet believes that if it is true that the door of her house is closed, then Romeo believes that the door of her house is closed. The general definition is: the truster believes that if some proposition is true, then the trustee believes that this proposition is true.

TrustVigi(i,j,ϕ) =def Beli(ϕ → Beljϕ).

In Parsons et al. (2012) other properties, called argument schemes, are discussed like trust in agent's reputation or trust in agent's character. For the purpose of the paper, we only focus on the six above properties and propose a formal framework for reasoning with and about them.

Remarks

It is worth mentioning that the presented definitions of trust are specific to particular propositions. For instance, a patient (p) may trust in the competence of his or her doctor (d) regarding diagnosis g1. This is represented by the formula Belp(Beldg1 → g1). This does not mean that the patient also trusts the doctor on another diagnosis g2. Note also that the six formulas are elements of L.

As said before, completeness is the dual of validity, cooperativity is the dual of sincerity and vigilance is the dual of competence (Figure 1). The dual properties play a significant role. Let us consider the case where the trustee is a guard in charge of informing people living in a building if the elevator fails. If these people trust the guard's completeness, they infer that the elevator is working from the fact they have not received a warning from the guard.

Figure 1. Relationships between believing, informing and truth.

It is also easy to show that the six properties are not independent. Indeed, trust in validity follows from trust in sincerity and trust in competence. Similarly, trust in completeness follows from trust in vigilance and trust in cooperativity. In formal terms we have:

  • (V) TrustSinc(i,j,ϕ) ∧ TrustComp(i,j,ϕ) → TrustVal(i,j,ϕ)

  • (C) TrustVigi(i,j,ϕ) ∧ TrustCoop(i,j,ϕ) → TrustCmp(i,j,ϕ)

The effects of informing actions depending on the different kinds of trust are summarised below:

  • (E1) TrustSinc(i,j,ϕ) → (Infj,iϕ → BeliBeljϕ)

  • (E2) TrustVal(i,j,ϕ) → (Infj,iϕ → Beliϕ)

  • (E3) TrustCoop(i,j,ϕ) → (¬Infj,iϕ → Beli¬Beljϕ)

  • (E4) TrustCmp(i,j,ϕ) → (¬Infj,iϕ → Beli¬ϕ)

Property (E2) (resp. (E4)) gives sufficient conditions on trust that guarantee that performing (resp. not performing) the action Infj,iϕ has the effect that i believes that φ is true (resp. false). Notice that from i’s trust in j’s competence (resp. vigilance), performing (resp. not performing) the action Infj,iϕ does not allow i to infer that φ is true (resp. false). For instance, even if i trusts the doctor j’s competence about cancer diagnosis, i may not trust the doctor's sincerity, and if the doctor tells i that he or she has no cancer, i will not believe this. The reason why i does not trust the doctor's sincerity may be that i believes that the doctor wants to protect i from bad news.

The effects of informing actions can be derived from the different kinds of assumptions about the trust relationships between agents. For instance, if the truster i trusts j’s sincerity about the proposition φ and j informs i about φ, the truster can infer that the trustee believes what s/he has transmitted to him or her (i). If, in addition, the truster trusts j’s competence (i.e. the formula Beli(Beljϕ → ϕ) is in the belief base of agent i), then the truster can infer that φ is true. Notice that this consequence is in the scope of what the truster believes (i.e. what is inferred is Beliϕ and not φ). Let us assume, for instance, that the truster i has some disease, j is a doctor and j tells i that i has a flu. If i trusts the doctor's sincerity about this diagnosis, i can infer that the doctor does believe that i has a flu. If i also trusts the doctor's competence, i can infer that s/he has a flu. Then, the final effect of what the doctor said is that i believes that the doctor believes that i has a flu and also that i believes that s/he has a flu. Notice that, if i trusts the doctor only in his or her validity, the effect of what the doctor said is that i believes that s/he has a flu, but it is not necessarily the case that i believes that the doctor believes that i has a flu (see Demolombe, 2011). Indeed, it could be the case that i believes that the doctor just transmits a diagnosis that has been made by an assistant who is trusted to be sincere and competent, while the doctor is not. From a formal point of view, it is not necessarily the case that the contraposition of property (V) holds.
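The flu scenario above can be written out step by step in the paper's notation (a sketch; the step labels and layout are ours, with flu a propositional atom standing for ‘i has a flu’):

```latex
\begin{align*}
&(1)\ Inf_{j,i}\,flu && \text{the doctor informs } i\\
&(2)\ Bel_i\,Inf_{j,i}\,flu && \text{from (1) by (OBS)}\\
&(3)\ Bel_i(Inf_{j,i}\,flu \rightarrow Bel_j\,flu) && TrustSinc(i,j,flu)\\
&(4)\ Bel_i\,Bel_j\,flu && \text{from (2), (3) by (K)}\\
&(5)\ Bel_i(Bel_j\,flu \rightarrow flu) && TrustComp(i,j,flu)\\
&(6)\ Bel_i\,flu && \text{from (4), (5) by (K)}
\end{align*}
```

Steps (1)–(4) instantiate property (E1); adding trust in competence at step (5) yields the conclusion of property (E2), in line with property (V).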

4. Argumentation-based reasoning system

Argumentation is seen as a reasoning process in which arguments are built and evaluated in order to increase or decrease the acceptability of a given standpoint. The latter may be a belief, an action, a goal, etc. Argumentation has been a key topic in artificial intelligence for the last 20 years. In its essence, argumentation can be seen as a particularly useful and intuitive paradigm for non-monotonic reasoning. Its advantage is that the reasoning process is composed of modular and quite intuitive steps, and thus avoids the monolithic approach of many traditional logics for defeasible reasoning. An argumentation process starts with the construction of a set of arguments from a given knowledge base. As some of these arguments may attack each other, one needs to apply a criterion for determining the sets of arguments that can be regarded as acceptable: the so-called extensions.

In what follows, we propose an argumentation system for reasoning about the different kinds of beliefs an agent i may have, in particular beliefs about trust in information sources. The system instantiates the abstract framework of Dung (1995) and uses one of its semantics in order to evaluate arguments. Before presenting the system, we start by recalling briefly Dung's framework and then show how arguments in favour of beliefs can be built and how these arguments may interact with each other.

4.1. Dung's abstract argumentation framework

The most abstract argumentation framework in the literature was proposed by Dung (1995). It consists of a set of arguments and a binary relation expressing attacks between the arguments. Both notions (i.e. arguments and attacks) are abstract entities and thus their origin and structure are left unspecified.

Definition 4.1

An argumentation framework is a pair (A,R) where A is a set of arguments and R ⊆ A × A is an attack relation.

A pair (a,b) ∈ R means that a attacks b. A set E ⊆ A attacks an argument b iff there exists a ∈ E such that (a,b) ∈ R. We sometimes use the infix notation aRb to denote (a,b) ∈ R.

An argumentation framework (A,R) is seen as a graph whose nodes are the arguments of A and its edges are the attacks in R. The arguments are evaluated using a semantics. In Dung (1995), different semantics were proposed, and some of them were refined, for instance in Baroni, Giacomin, and Guida (2005) and Dung, Mancarella, and Toni (2007). For the purpose of the paper, we only recall stable semantics since our aim is not to discuss the outcomes of our system under all semantics, but rather to show how to build arguments in favour of trust in information sources and how to decide to accept information coming from sources. Thus, we only need one semantics for illustration purposes.

Definition 4.2

Let T=(A,R) be an argumentation framework and E ⊆ A. E is a stable extension iff:

  • there are no a, b ∈ E such that (a,b) ∈ R

  • E attacks every argument in A ∖ E

Ext(T) denotes the set of all stable extensions of T.

It is worth recalling that stable extensions are maximal (for set inclusion) non-conflicting sets of arguments.

Example 4.3

Let us consider the argumentation framework T=(A,R) such that:

  • A={a,b,c,d,e,f,g}

  • R={(c,b),(b,e),(e,c),(d,c),(a,d),(d,a),(a,f),(f,g)}

This framework has five maximal (for set inclusion) non-conflicting sets of arguments:

  • E1={a,c,g},

  • E2={d,e,f},

  • E3={b,d,f},

  • E4={a,e,g}, and

  • E5={a,b,g}.

It has one stable extension E3, i.e. Ext(T)={E3}.
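For finite frameworks, Definition 4.2 can be checked mechanically by enumerating candidate sets. The sketch below (plain Python; the function name is ours) recovers the unique stable extension of Example 4.3:

```python
from itertools import combinations

def stable_extensions(args, attacks):
    """Enumerate the stable extensions of (args, attacks) by brute force:
    a set E is stable iff it is conflict-free and attacks every outsider."""
    exts = []
    args = sorted(args)
    for r in range(len(args) + 1):
        for cand in combinations(args, r):
            e = set(cand)
            # conflict-freeness: no attack between two members of e
            if any(a in e and b in e for (a, b) in attacks):
                continue
            # e must attack every argument outside e
            outside = set(args) - e
            attacked = {b for (a, b) in attacks if a in e}
            if outside <= attacked:
                exts.append(frozenset(e))
    return exts

# Example 4.3
A = {'a', 'b', 'c', 'd', 'e', 'f', 'g'}
R = {('c', 'b'), ('b', 'e'), ('e', 'c'), ('d', 'c'),
     ('a', 'd'), ('d', 'a'), ('a', 'f'), ('f', 'g')}
print(stable_extensions(A, R))  # the only stable extension is {b, d, f}
```

The other four maximal conflict-free sets of the example fail the second test: each leaves some outside argument unattacked (e.g. E1 does not attack e).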

An argumentation framework may be infinite, i.e. its set of arguments may be infinite. Consequently, it may have an infinite number of extensions (under a given semantics).

4.2. Binary trust supported by arguments

This section introduces an argumentation system for reasoning about the different kinds of beliefs an agent i may have. As already said, argumentation is an alternative approach for reasoning with inconsistent information. It follows three main steps: (i) constructing arguments and counterarguments from a logical belief base, (ii) defining the status of each argument, and (iii) specifying the conclusions to be drawn from the base. In what follows, we focus on a given agent i and propose a model for reasoning about his beliefs. The model instantiates Dung's framework by defining all the above items.

Starting from the logic (L, ⊢) described in Section 2 and a possibly inconsistent belief base Ki ⊆ L, the system computes a consistent set of beliefs the agent should rely on. The base Ki can be seen as agent i’s ‘candidate’ beliefs. It may contain trust information as defined in the previous section (e.g. Beli(ϕ → Beljϕ)), beliefs about the environment (e.g. Beliϕ where φ stands for ‘the window is closed’) and beliefs about informing actions received from other agents (e.g. BeliInfj,iϕ). Note that the base Ki = {BeliInfj,iϕ, BeliInfj,i¬ϕ} is not inconsistent. Here agent i believes that he was informed by j that φ holds and that ¬φ holds. However, the base Ki = {Beliϕ, Beli¬ϕ} is inconsistent.

The system is a logical instantiation of the abstract framework proposed by Dung (1995) in his seminal paper. It consists thus of a set of arguments, an attack relation between the arguments and a semantics for evaluating the arguments. The arguments are built from the base Ki. They are logical proofs for formulas in L that satisfy two requirements: consistency and minimality.

Definition 4.4

An argument built from a belief base Ki is a pair (H, h) where:

  • H ⊆ Ki and h ∈ L

  • H is consistent

  • H ⊢ h

  • there is no H′ ⊂ H such that H′ ⊢ h

H is called the support of the argument and h its conclusion. Arg(Ki) is the set of all arguments that can be built from Ki.

Let us illustrate this notion of argument with an example.

Example 4.5

Assume the following belief base of agent i:

Ki = {Beli(δ), Beli(Infj,iϕ), Beli(¬Infk,iφ), Beli(Infj,iϕ → Beljϕ), Beli(φ → Infk,iφ)}.
From Ki, an infinite number of arguments can be built, including the following ones:
  • (1) ({Beli(δ)},Beli(δ))

  • (2) ({Beli(Infj,iϕ)},Beli(Infj,iϕ))

  • (3) ({Beli(¬Infk,iφ)},Beli(¬Infk,iφ))

  • (4) ({Beli(Infj,iϕ → Beljϕ), Beli(Infj,iϕ)}, Beli(Beljϕ))

  • (5) ({Beli(φ → Infk,iφ), Beli(¬Infk,iφ)}, Beli(¬φ))

The previous arguments support various beliefs of agent i. Some of them, like (4) and (5), make use of beliefs on trust in information sources. To put it differently, they rely on agent's trust in order to make inferences. Such arguments are very useful in dialogue systems where agents may receive new information from other entities and should thus decide whether to accept it or not.
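The consistency, entailment and minimality requirements of Definition 4.4 can also be checked mechanically. The sketch below (plain Python; all names are ours) works in a classical propositional fragment, reading each modal subformula of a support as an atom, as in argument (4) above; it illustrates the definition, not the full modal logic of Section 2:

```python
from itertools import combinations, product

# Formulas as nested tuples: ('atom', name), ('not', f), ('imp', f, g).
def atoms(f):
    return {f[1]} if f[0] == 'atom' else set().union(*map(atoms, f[1:]))

def holds(f, v):
    if f[0] == 'atom':
        return v[f[1]]
    if f[0] == 'not':
        return not holds(f[1], v)
    return (not holds(f[1], v)) or holds(f[2], v)  # 'imp'

def entails(premises, conclusion):
    """Truth-table check that the premises classically entail the conclusion."""
    at = sorted(set(atoms(conclusion)).union(*map(atoms, premises)))
    for vals in product([False, True], repeat=len(at)):
        v = dict(zip(at, vals))
        if all(holds(p, v) for p in premises) and not holds(conclusion, v):
            return False
    return True

BOTTOM = ('not', ('imp', ('atom', 'x'), ('atom', 'x')))  # unsatisfiable formula

def is_argument(H, h):
    """Definition 4.4: H consistent, H entails h, and no proper subset of H does."""
    if entails(list(H), BOTTOM):          # H inconsistent
        return False
    if not entails(list(H), h):           # H must entail h
        return False
    return not any(entails(list(sub), h)  # minimality
                   for r in range(len(H))
                   for sub in combinations(H, r))

# Argument (4) of Example 4.5, read propositionally:
inf = ('atom', 'Inf_ji_phi')
bel = ('atom', 'Bel_j_phi')
H = (('imp', inf, bel), inf)
print(is_argument(H, bel))  # True: consistent, entails the conclusion, minimal
```

Dropping the premise Beli(Infj,iϕ) breaks entailment, while adding the conclusion itself to the support breaks minimality, so both variants are rejected.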

Arguments may also support the six forms of trust we discussed in Section 3. They show whether agent i should or should not trust another agent in one of the properties (sincerity, validity, cooperativity, completeness, competence and vigilance). Let us consider the following example.

Example 4.6

Assume the following base:

Ki = {Beli(φ) → TrustSinc(i,j,ϕ), TrustVal(i,k,φ), Beli(Infk,iφ)},
where i is the programme chair of a conference, k is an area chair of the programme committee and j is a reviewer. Assume that ϕ stands for ‘j makes fair reviews’ and φ for ‘j makes a fair review for paper ID x’. Examples of arguments that can be built from this base are the following:
  • (1) ({Beli(Infk,iφ)},Beli(Infk,iφ))

  • (2) ({Beli(Infk,iφ),TrustVal(i,k,φ)},Beliφ)

  • (3) ({Beli(Infk,iφ), TrustVal(i,k,φ), Beli(φ) → TrustSinc(i,j,ϕ)}, TrustSinc(i,j,ϕ))

Note that the argument (3) is in favour of trusting in the sincerity of agent j regarding proposition ϕ.

The second component of an argumentation framework is its attack relation, which expresses conflicts that may arise between arguments. In the argumentation literature, several relations have been proposed (see Gorogiannis and Hunter (2011) for a summary of relations proposed for propositional frameworks). Some of them, like the well-known rebutting, are symmetric. However, it was shown in Amgoud and Besnard (2009) that any argumentation framework which is grounded on a Tarskian logic (Tarski, 1956) and uses a symmetric attack relation may violate the rationality postulates proposed in Caminada and Amgoud (2007), namely the one on consistency. Indeed, such a framework may have an extension which supports inconsistent conclusions. Since modal logic is a particular case of Tarski's logics, the argumentation system we propose here would suffer from the same problem, as shown in the following example.

Example 4.7

Let us consider the following belief base:

Ki = {Beli(ϕ), Beli(¬φ), Beli(ϕ → φ)}.
Let us consider the following arguments:

Figure: the arguments a1, a2 and a3 built from Ki and the rebut attacks between them.

Let R be the rebutting relation defined as follows: (H, h) rebuts (H′, h′) iff h = Beliϕ, h′ = Beliφ and ⊢ ϕ ↔ ¬φ. Note that this relation is symmetric. The attacks among the arguments are as depicted in the figure above. The set {a1, a2, a3} is a stable extension of (Arg(Ki), R). However, {Beli(ϕ), Beli(¬φ), Beli(ϕ → φ)} is inconsistent. This means that the extension supports contradictory conclusions!

In what follows we thus avoid symmetric relations. We next discuss various forms of attack. The first one is the so-called assumption-attack proposed in Elvang-Gøransson, Fox, and Krause (1993). It consists of weakening an argument by undermining one of its premises (i.e. an element of its support).

Definition 4.8

Let (H,h), (H′,h′) be two arguments of Arg(Ki). (H, h) assumption-attacks (H′, h′) iff there exists h″ ∈ H′ such that h″ = Beliϕ and h = Beli¬ϕ.

Let us illustrate this relation on the following example.

Example 4.9

Let us consider the following base:

Ki = {Beli(Infj,iϕ → Beljϕ), Beli(Infj,iϕ → ϕ), Beli(Infj,iϕ), Beli(¬ϕ)}.
The argument ({Beli(Infj,iϕ → ϕ), Beli(¬ϕ)}, Beli(¬Infj,iϕ)) assumption-attacks the argument ({Beli(Infj,iϕ → Beljϕ), Beli(Infj,iϕ)}, Beli(Beljϕ)).

It is worth mentioning that this attack relation concerns all types of arguments that may be built from a beliefs base (i.e. arguments supporting ordinary beliefs and those supporting trust in information sources). The following definition introduces another way for attacking arguments in favour of trust in an agent's sincerity. The basic idea is to show a case where the trusted agent sent an information that s/he does not believe. To put it differently, the attack consists of proving that the trustee may lie.

Definition 4.10

Let (H,h), (H′,h′) be two arguments of Arg(Ki). (H, h) sinc-attacks (H′, h′) iff h = Beli(Infj,iφ ∧ ¬Beljφ) and TrustSinc(i,j,ϕ) ∈ H′.

An argument in favour of trust in validity may also be undermined by an argument whose conclusion is a formula which is sent by the trusted agent and which is invalid (i.e. it does not hold).

Definition 4.11

Let (H,h), (H′,h′) be two arguments of Arg(Ki). (H, h) val-attacks (H′, h′) iff h = Beli(Infj,iφ ∧ ¬φ) and TrustVal(i,j,ϕ) ∈ H′.

Similarly, an argument in favour of trust in completeness may be attacked. Recall that such an argument provides a reason for believing that if a given formula holds, then the truster agent will be informed about it by the trustee. An attacker highlights a formula which holds and for which the trustee does not send any message.

Definition 4.12

Let (H,h), (H′,h′) be two arguments of Arg(Ki). (H, h) com-attacks (H′, h′) iff h = Beli(φ ∧ ¬Infj,iφ) and TrustCmp(i,j,ϕ) ∈ H′.

Recall that trust in the cooperativity of an agent means that if he believes a statement, then he will inform the truster about it. An attack against an argument supporting such information consists of presenting a case where the trustee was not cooperative.

Definition 4.13

Let (H,h), (H′,h′) be two arguments of Arg(Ki). (H, h) coop-attacks (H′, h′) iff h = Beli(Beljφ ∧ ¬Infj,iφ) and TrustCoop(i,j,ϕ) ∈ H′.

An argument in favour of trust in the competence of an agent may be attacked by an argument supporting a statement that is believed by this agent but which is not true.

Definition 4.14

Let (H,h), (H′,h′) be two arguments of Arg(Ki). (H, h) comp-attacks (H′, h′) iff h = Beli(Beljφ ∧ ¬φ) and TrustComp(i,j,ϕ) ∈ H′.

Trust in an agent's vigilance may be attacked by exhibiting a claim which holds but is ignored by the agent.

Definition 4.15

Let (H,h), (H′,h′) be two arguments of Arg(Ki). (H, h) vigi-attacks (H′, h′) iff h = Beli(φ ∧ ¬Beljφ) and TrustVigi(i,j,ϕ) ∈ H′.

Remark

It is worth mentioning that the assumption-attack relation is conflict-dependent, i.e. if (H, h) attacks (H′, h′) then H ∪ H′ is necessarily inconsistent. This is not the case for the six other relations, as shown in the following example.

Example 4.16

Let us consider the following base:

Ki = {Beli(Infj,iϕ → Beljϕ), Beli(Infj,iφ), Beli(¬Beljφ)}.
Assume that φ stands for ‘The weather is cloudy’ and ϕ stands for ‘People pay few taxes’. Note that the base Ki is consistent. However, the argument ({Beli(Infj,iφ), Beli(¬Beljφ)}, Beli(Infj,iφ ∧ ¬Beljφ)) sinc-attacks the argument ({Beli(Infj,iϕ → Beljϕ)}, Beli(Infj,iϕ → Beljϕ)).

The seven forms of attacks are captured by a binary relation on the set of arguments which is denoted by ℜ.

Definition 4.17

Let (H, h) and (H′, h′) be two arguments of Arg(Ki). (H, h) ℜ (H′, h′) iff:

  • (H, h) assumption-attacks (H′, h′), or

  • (H, h) sinc-attacks (H′, h′), or

  • (H, h) val-attacks (H′, h′), or

  • (H, h) com-attacks (H′, h′), or

  • (H, h) coop-attacks (H′, h′), or

  • (H, h) comp-attacks (H′, h′), or

  • (H, h) vigi-attacks (H′, h′).
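To make the attack patterns concrete, here is a small sketch (not code from the paper; the tuple encoding of formulas and the helper names are assumptions of this illustration) that checks two of the seven shapes, assumption-attack and sinc-attack, on a flattened version of the Example 4.16 base:

```python
# A minimal sketch (assumed encoding): formulas are nested tuples,
# arguments are (support, conclusion) pairs.

def neg(f):                      # syntactic negation
    return f[1] if f[0] == 'not' else ('not', f)

def assumption_attacks(a, b):
    # a's conclusion negates some formula in b's support
    _, concl_a = a
    support_b, _ = b
    return any(concl_a == neg(f) for f in support_b)

def sinc_attacks(a, b):
    # a's conclusion has the shape Bel_i(Inf_ji phi  and  not Bel_j phi),
    # and b's support contains some TrustSinc formula (about ANY formula,
    # which is why this attack is not conflict-dependent)
    _, concl_a = a
    support_b, _ = b
    if not (concl_a[0] == 'Bel_i' and concl_a[1][0] == 'and'):
        return False
    left, right = concl_a[1][1], concl_a[1][2]
    ok = left[0] == 'Inf_ji' and right == ('not', ('Bel_j', left[1]))
    return ok and any(f[0] == 'TrustSinc' for f in support_b)

# Example 4.16, flattened: a sincerity counterexample about phi attacks
# trust in j's sincerity about psi, yet the two supports are consistent.
phi, psi = ('atom', 'cloudy'), ('atom', 'taxes')
a = ([('Bel_i', ('Inf_ji', phi)), ('Bel_i', ('not', ('Bel_j', phi)))],
     ('Bel_i', ('and', ('Inf_ji', phi), ('not', ('Bel_j', phi)))))
b = ([('TrustSinc', psi)], ('TrustSinc', psi))
print(sinc_attacks(a, b), assumption_attacks(a, b))  # True False
```

The second printed value illustrates the remark above: the sinc-attack holds even though no support formula is negated, so the union of the two supports stays consistent.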

The following example shows that the attack relation ℜ is not symmetric.

Example 4.16 (Continued)

It is easy to check that there is only one attack between arguments of Arg(Ki): ({Beli(Infj,iφ), Beli(¬Beljφ)}, Beli(Infj,iφ ∧ ¬Beljφ)) ℜ ({Beli(Infj,iϕ → Beljϕ)}, Beli(Infj,iϕ → Beljϕ)). Thus, ℜ is not symmetric.

Next we show that the relation ℜ may admit self-attacking arguments.

Example 4.18

Let us consider the following base:

Ki = {TrustSinc(i,j,ϕ), Beli((Infj,iϕ → Beljϕ) → Beli(¬Beljφ)), Beli(Infj,iφ)}.
The argument ({TrustSinc(i,j,ϕ), Beli(Infj,iφ), Beli((Infj,iϕ → Beljϕ) → Beli(¬Beljφ))}, Beli(Infj,iφ ∧ ¬Beljφ)) sinc-attacks itself.

An argumentation system for reasoning about the beliefs of an agent is defined as follows.

Definition 4.19

An argumentation system built over a belief base Ki is a pair T = (Arg(Ki), ℜ) where ℜ ⊆ Arg(Ki) × Arg(Ki) is as given in Definition 4.17.

Since arguments may be conflicting, it is important to define the acceptable ones. For that purpose, we use the stable semantics proposed in Dung (1995). This semantics partitions the subsets of the set of arguments into two categories: stable extensions and non-extensions. The extensions are used to define the inferences to be drawn from the belief base Ki of agent i. These inferences represent what agent i should believe according to the available information. The idea is that a formula is inferred if it is supported by at least one argument in every extension. Note that the argument need not be the same in all the extensions.

Definition 4.20

Let T = (Arg(Ki), ℜ) be an argumentation system built over a belief base Ki and Ext(T) its set of stable extensions. A formula ϕ ∈ L is inferred from Ki iff for all E ∈ Ext(T), there exists (H, ϕ) ∈ E.

Output(T) denotes the set of all beliefs inferred from Ki using system T.
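For a finite attack graph, Definition 4.20 can be prototyped by brute force. The sketch below (an assumption of this illustration, not code from the paper) enumerates the stable extensions and then keeps only the conclusions supported in every extension:

```python
from itertools import combinations

# Brute-force sketch: a stable extension is a conflict-free set that
# attacks every argument outside it; Output keeps the conclusions that
# are supported by at least one argument in EVERY extension.

def stable_extensions(args, attacks):
    exts = []
    for r in range(len(args) + 1):
        for subset in combinations(args, r):
            E = set(subset)
            conflict_free = not any((a, b) in attacks for a in E for b in E)
            attacks_rest = all(any((a, b) in attacks for a in E)
                               for b in set(args) - E)
            if conflict_free and attacks_rest:
                exts.append(E)
    return exts

def output(args, attacks, conclusion):
    exts = stable_extensions(args, attacks)
    return {conclusion[a] for E in exts for a in E
            if all(any(conclusion[b] == conclusion[a] for b in F)
                   for F in exts)}

# Tiny abstract example: a1 and a2 attack each other, both attack a3.
args = ['a1', 'a2', 'a3']
attacks = {('a1', 'a2'), ('a2', 'a1'), ('a1', 'a3'), ('a2', 'a3')}
conclusion = {'a1': 'p', 'a2': 'not p', 'a3': 'q'}
print([sorted(E) for E in stable_extensions(args, attacks)])  # [['a1'], ['a2']]
```

Here Output is empty: 'p' and 'not p' are each supported in only one of the two extensions, and 'q' in none, mirroring the "agent i ignores the truth value" situation discussed below for Example 4.9.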

Example 4.9 (Continued)

Let us consider the belief base Ki of agent i. The set Arg(Ki) of arguments is infinite. It contains among others the following arguments:

[Table omitted: the eight arguments a1, … , a8 built from Ki]

The following figure summarises the attacks between the eight arguments:

[Figure omitted: attack graph between the eight arguments]

It can be checked that the argumentation system T = (Arg(Ki), ℜ) has three stable extensions. Note that we do not provide the complete result since Arg(Ki) is infinite, but give some insights on the arguments that are included in the extensions. Below, if an argument ai (i = 1 … 8) does not appear in an extension, then it does not belong to that extension. For instance, a1 ∉ E1.

  • E1={a2,a3,a4,a5,a6,}

  • E2={a1,a2,a4,a7,}

  • E3={a1,a3,a4,a5,a8,}.

It is worth noticing that the argument a4 belongs to the three extensions. Thus, Beli(Infj,iϕ → Beljϕ) ∈ Output(T), meaning that according to the available information, agent i believes in the sincerity of agent j regarding ϕ. However, Beli¬ϕ and Beliϕ are supported by arguments only in some extensions. Then, Beli¬ϕ ∉ Output(T) and Beliϕ ∉ Output(T), meaning that agent i ignores ϕ’s truth value.

Example 4.16 (Continued)

The table below shows some arguments that may be built from Ki.

[Table omitted: the four arguments a1, … , a4 built from Ki]

The following figure summarises the attacks between the four arguments:

[Figure omitted: attack graph between the four arguments]

It can be checked that the argumentation system T = (Arg(Ki), ℜ) has one stable extension: E = {a2, a3, a4, …}. Thus, Beli(Infj,iφ) ∈ Output(T), Beli(¬Beljφ) ∈ Output(T) but Beli(Infj,iϕ → Beljϕ) ∉ Output(T). This means that agent i will no longer believe in the sincerity of agent j about ϕ.

4.3.Properties of the system

Remember that the belief base of an agent may be inconsistent. We show that the set of inferences drawn from that base using the argumentation system is consistent. Before giving the formal result, we start with another property which shows that every stable extension of the system supports a consistent set of beliefs. Note that this property corresponds exactly to the rationality postulate on consistency that was proposed in Caminada and Amgoud (2007) for rule-based logics and generalised later in Amgoud (2013) for Tarskian logics.

Proposition 4.21

Let T = (Arg(Ki), ℜ) be an argumentation system built over a belief base Ki and Ext(T) its set of stable extensions. For all E ∈ Ext(T), the following properties hold:

  • The set ⋃(Hk,hk)∈E Hk is consistent.

  • The set {h | (H, h) ∈ E} is consistent.

Proof

Let E be a stable extension of T = (Arg(Ki), ℜ). Assume that the set ⋃(Hk,hk)∈E Hk is inconsistent. Thus, there exists X ⊆ ⋃(Hk,hk)∈E Hk such that X is a minimal (wrt set inclusion) inconsistent set. Since each Hk is consistent, |X| > 1. Thus, for all Bel(x) ∈ X, X∖{Bel(x)} is a minimal set such that X∖{Bel(x)} ⊢ Bel(¬x). Then, (X∖{Bel(x)}, Bel(¬x)) and ({Bel(x)}, Bel(x)) are both arguments. Moreover, (X∖{Bel(x)}, Bel(¬x)) assumption-attacks ({Bel(x)}, Bel(x)). Besides, there exists (H, h) ∈ E such that Bel(x) ∈ H. Thus, (X∖{Bel(x)}, Bel(¬x)) assumption-attacks (H, h). Since E is conflict-free, (X∖{Bel(x)}, Bel(¬x)) ∉ E and there exists (H′, h′) ∈ E such that (H′, h′) ℜ (X∖{Bel(x)}, Bel(¬x)). (1) Assume that (H′, h′) assumption-attacks (X∖{Bel(x)}, Bel(¬x)). Thus, there exists Bel(y) ∈ X∖{Bel(x)} such that H′ ⊢ Bel(¬y). However, Bel(y) ∈ H″ for some (H″, h″) ∈ E. Thus, (H′, h′) assumption-attacks (H″, h″). This contradicts the fact that E is conflict-free. (2) Assume now that (H′, h′) sinc-attacks (X∖{Bel(x)}, Bel(¬x)). Then, h′ = Beli(Infj,iφ ∧ ¬Beljφ) and TrustSinc(i,j,ϕ) ∈ X∖{Bel(x)}. So, there exists (H″, h″) ∈ E such that TrustSinc(i,j,ϕ) ∈ H″. Thus, (H′, h′) sinc-attacks (H″, h″). This contradicts the fact that E is conflict-free. The same reasoning holds for the remaining forms of attacks. Then, ⋃(Hk,hk)∈E Hk is consistent. From the previous result, it follows that the set {h | (H, h) ∈ E} is consistent as well.

It is worth mentioning that the set of formulas used in the arguments of a stable extension is a consistent subbase of the belief base Ki, but not necessarily maximal for set inclusion. This is mainly due to the six attack relations which are not based on inconsistency. Example 4.16 shows a case of a system built over a consistent belief base. The system has one stable extension E, and it can be checked that its corresponding base, i.e. ⋃(Hk,hk)∈E Hk, is different from Ki.

From this property of the system, it follows that the set Output(T) is also consistent.

Proposition 4.22

Let T = (Arg(Ki), ℜ) be an argumentation system built over a belief base Ki. The set Output(T) is consistent.

Proof

From Definition 4.20, it follows that Output(T) ⊆ {h | (H, h) ∈ E} for any E ∈ Ext(T). Since {h | (H, h) ∈ E} is consistent, then so is Output(T).

The next property concerns another rationality postulate in Amgoud (2013) which claims that the extensions should be closed under sub-arguments. The idea is that accepting an argument in a given extension implies accepting all its sub-parts in that extension.

Proposition 4.23

Let T = (Arg(Ki), ℜ) be an argumentation system built over a belief base Ki. For all E ∈ Ext(T), if (H, h) ∈ E then for all (H′, h′) ∈ Arg(Ki) such that H′ ⊆ H, it holds that (H′, h′) ∈ E.

Proof

Let E be a stable extension of T = (Arg(Ki), ℜ). Let (H, h) ∈ E and (H′, h′) ∈ Arg(Ki) such that H′ ⊆ H and (H′, h′) ∉ E. Then, there exists (H″, h″) ∈ E such that (H″, h″) ℜ (H′, h′). (1) Assume that (H″, h″) assumption-attacks (H′, h′). Then, there exists Bel(x) ∈ H′ such that h″ = Bel(¬x). But Bel(x) ∈ H since H′ ⊆ H. So (H″, h″) assumption-attacks (H, h). This contradicts the fact that E is conflict-free. (2) Assume now that (H″, h″) sinc-attacks (H′, h′). Then, h″ = Beli(Infj,iφ ∧ ¬Beljφ) and TrustSinc(i,j,ϕ) ∈ H′. Then TrustSinc(i,j,ϕ) ∈ H. Consequently, (H″, h″) sinc-attacks (H, h). This contradicts the fact that E is conflict-free. The same reasoning holds for the remaining forms of attacks.

The next property concerns the third rationality postulate in Amgoud (2013) which claims that the extensions should be closed under the consequence operator, ⊢ in our case. This property guarantees that the system does not forget intuitive conclusions. Before presenting the formal result, let us first introduce a useful notation.

Notation:

For X ⊆ L, CN(X) = {ϕ ∈ L | X ⊢ ϕ}.

Proposition 4.24

Let T = (Arg(Ki), ℜ) be an argumentation system built over a belief base Ki and Ext(T) its set of stable extensions. For all E ∈ Ext(T), {h | (H, h) ∈ E} = CN({h | (H, h) ∈ E}).

Proof

Let E be a stable extension of the system T = (Arg(Ki), ℜ). Let X = {h | (H, h) ∈ E}. Assume that X ≠ CN(X). Thus, there exists h ∈ CN(X) with h ∉ X. Besides, X ⊆ ⋃(Hk,hk)∈E CN(Hk) ⊆ CN(⋃(Hk,hk)∈E Hk). It follows also that CN(X) ⊆ CN(⋃(Hk,hk)∈E Hk) and thus h ∈ CN(⋃(Hk,hk)∈E Hk). Two possible cases:

  • (1) h ∈ CN(∅). Then (∅, h) ∈ Arg(Ki) but (∅, h) ∉ E. This means that there exists (H′, h′) ∈ E such that (H′, h′) ℜ (∅, h). But each of the seven attack relations requires the support of the attacked argument to contain some formula (a formula Bel(x) to be negated, or a trust formula), while the support of (∅, h) is empty. This is impossible.

  • (2) h ∉ CN(∅) and there exists S ⊆ ⋃(Hk,hk)∈E Hk such that (S, h) ∈ Arg(Ki), since ⋃(Hk,hk)∈E Hk is consistent (see Proposition 4.21). Moreover, (S, h) ∉ E. Hence, there exists (H′, h′) ∈ E such that (H′, h′) ℜ (S, h). Assume that ℜ is assumption-attack. Then, h′ = Bel(¬x) with Bel(x) ∈ S. But this implies that there exists (H″, h″) ∈ E such that Bel(x) ∈ H″, meaning that (H′, h′) ℜ (H″, h″). This contradicts the fact that E is conflict-free. The same reasoning applies for the six remaining relations since they are all based on attacking the support.

We show next that the set Output(T) is closed under ⊢.

Proposition 4.25

Let T = (Arg(Ki), ℜ) be an argumentation system built over a belief base Ki such that Ext(T) ≠ ∅. It holds that Output(T) = CN(Output(T)).

Proof

Let T = (Arg(Ki), ℜ) be a system built over a belief base Ki such that Ext(T) ≠ ∅. It is clear that Output(T) ⊆ CN(Output(T)).

Assume now that h ∈ CN(Output(T)) and h ∉ Output(T). Then, there exist h1, … , hn ∈ Output(T) such that h ∈ CN({h1, … , hn}). Besides, h1, … , hn ∈ ⋂Ek∈Ext(T) {ϕ | (H, ϕ) ∈ Ek}. From the monotonicity of CN, it follows that CN({h1, … , hn}) ⊆ CN(⋂Ek∈Ext(T) {ϕ | (H, ϕ) ∈ Ek}). It holds also that h ∈ CN({ϕ | (H, ϕ) ∈ E1}) ∩ … ∩ CN({ϕ | (H, ϕ) ∈ En}). From Proposition 4.24, h ∈ {ϕ | (H, ϕ) ∈ E1} ∩ … ∩ {ϕ | (H, ϕ) ∈ En}. Consequently, h ∈ Output(T), contradiction.

This means, for instance, that if TrustSinc(i,j,ϕ) ∈ Output(T) and Beli(Infj,iϕ) ∈ Output(T), then Beli(Beljϕ) ∈ Output(T).

5.Graded trust in information sources

In most situations it is an over-simplification to say that an agent i trusts (or does not trust) another agent j. Rather, in informal terms, we may say that i has a limited trust in j, or i’s trust in j is high. We are thus faced with the question: ‘what is the meaning of graded trust?’.

Demolombe (2009) proposed two different answers to this question. The first answer, when trust is represented by a formula of the form Beli(ϕj → ψj), is that i is uncertain about being in a world where the set of ϕj worlds (i.e. the set of worlds where ϕj is true) is included in the set of ψj worlds (the set of worlds where ψj is true). For example, agent i may be uncertain about the fact that agent j is sincere about p, that is, about the fact that in every circumstance where j informs i about p, it is the case that j believes p. Here, graded trust can be defined by the strength level of i’s belief about j’s sincerity. Notice that this uncertainty level refers to i’s beliefs and not to the fact that j is more or less sincere. In more formal terms, according to this interpretation, graded trust can be represented by a formula

Belig(ϕj → ψj)
which is read as follows: the strength level of i’s belief about the fact that ‘ϕj → ψj is true’ is g. In the sequel, Belig denotes a ‘graded belief’ of agent i.

The second answer by Demolombe (2009) is: ‘i believes that the set of ϕj worlds is partially included in the set of ψj worlds’. In such a case, the fact that i’s trust in j’s sincerity is high can be interpreted as: i believes that in almost all circumstances, if j informs i about p, then j believes p. According to this interpretation, the trust level refers to the regularity of the relationship between the fact that ϕj is true and the fact that ψj is true. Graded trust is thus formally represented by the formula:

Beli(ϕj ⇒h ψj),
where h may be a numerical value which represents graded regularity.

For the purpose of our proposal, graded trust may refer to both kinds of levels (uncertainty and regularity). It is thus represented by formulas of the form:

Belig(ϕj ⇒h ψj)
whose intended meaning is that the strength level of i’s belief about the fact that ϕj entails ψj with a regularity level h is g. It is worth pointing out that in general these two levels are independent. It may be the case, for example, that i strongly believes that j’s sincerity is low or that i strongly believes that j’s sincerity is high, and it may also be the case that i has a low level of belief about the fact that j’s sincerity is low.

5.1.Extended logic

In what follows, we extend the logical language of Section 2 for reasoning about graded trust. Let us first recall the intuitive meaning of the new operators:

  • Beligϕ: the strength level of i’s belief about the fact that ϕ is true is (exactly) g.

  • ϕ ⇒h ψ: ϕ entails ψ at level h.

  • □ϕ: ϕ holds in all situations.

The operator □ is introduced for formal purposes that are explained below. We also assume two additional sets that contain levels of beliefs and regularity:

  • GRB: finite set of belief levels.

  • GRR: finite set of regularity levels.

Notice that no particular assumption is made on the nature of the elements of these sets. However, we assume that they are both equipped with a preordering ≤ (i.e. a reflexive and transitive binary relation). For x, y ∈ GRB (respectively, x, y ∈ GRR), x ≤ y means that y is at least as strong as x. The strict relation associated with ≤ is denoted by < and defined as follows: x < y =def (x ≤ y) and not (y ≤ x). Moreover, both sets have a lower and an upper bound denoted, respectively, min and max.2 For every x in GRB or in GRR, min ≤ x ≤ max.

Notations:

Forall(g, cond) F(g) =def ∀g ∈ G, cond(g) → F(g); Exists(g, cond) F(g) =def ∃g ∈ G, cond(g) ∧ F(g); and Truehψ =def ⊤ ⇒h ψ.

The logic associated with the extended language is based on the following inference rules and axiom schemas.3

[Table omitted: inference rules and axiom schemas of the extended logic]

The first rule (SubstBel) states that in Belig(ϕ), ϕ can be substituted by any logically equivalent formula. (Weak) says that if ψ is a logical consequence of ϕ (i.e. ⊢ ϕ → ψ), then, if i has ascribed a strength level to his or her belief about ϕ and to his or her belief about ψ, the level of ψ cannot be lower than the level of ϕ. (ClosDisj) says that if the levels of belief of two formulas ϕ1 and ϕ2 are fixed, then the level of their disjunction is the maximum of these two levels. With the (ClosConj) schema, if the levels of belief of two formulas ϕ1 and ϕ2 are fixed, then the level of their conjunction is the minimum of these two levels. (UnicBel) states that the strength level of i’s belief is unique for every sentence. According to the (Consist) schema, graded beliefs are considered as standard beliefs to which an agent i has assigned a strength level. It may be that i has not assigned a strength level to some belief, for instance because s/he has no argument for assigning it one level or another. According to this axiom schema, Belig(ϕ) can be rephrased as: i believes ϕ and the strength level of this belief is g. The axiom schema (MinBel) states that if ϕ represents the formula which is believed at the minimum level and ψ is believed at some belief level, then ϕ implies ψ. It is worth noticing that this axiom is consistent with (ClosConj). From an intuitive point of view, a formula which is believed at the minimal level denotes a proposition which is more specific than any other formula which is believed at any other level. That means that the set of ϕ worlds is included in the set of ψ worlds. (MaxBel) says that if ϕ represents the formula which is believed at the maximum level and ψ is believed at some belief level, then ψ implies ϕ. This axiom schema is consistent with (ClosDisj).
From an intuitive point of view, a formula which is believed at the maximal level denotes a proposition which is less specific than any other formula which is believed at any other level. This means that the set of ϕ worlds contains the set of ψ worlds. The schema (MaxTau) states that if ϕ is a theorem of the logic, then the belief level of ϕ is max. According to schema (PosInt), if a formula ϕ is believed at level g, then i believes, in the standard sense, that ϕ is believed at level g. This positive introspection axiom schema means that no level is ascribed by i to his or her evaluation of the level of a belief. If such a level were ascribed, one could ask the question: what is i’s evaluation of this ‘second’ order level? We would then be led to an infinite number of introspection levels, which is far from intuitive. (NegInt) says that if formula ϕ is not believed at level g, then i believes, in the standard sense, that ϕ is not believed at level g. Note that the justification of (NegInt) is similar to that of (PosInt). The axiom (SubstReg) concerns the conditional connective ⇒h; it says that in the formula ϕ ⇒h ψ, both ϕ and ψ can be substituted by logically equivalent formulas. (Detach) states that if ϕ entails ψ at level h, then if ϕ holds, ψ holds at level h. Note that ‘ψ holds at level h’ is an abbreviation for ‘Truehψ’. The axiom (Trans) says that there exists a function F such that if n = F(h1, k1, h2, k2), then if ϕ entails ψ at level h1, ϕ∧ψ entails θ at level k1, ϕ entails ¬ψ at level h2 and ϕ∧¬ψ entails θ at level k2, then ϕ entails θ at level n. This axiom seems quite complex but it is mandatory since, in general, from (ϕ ⇒h1 ψ) ∧ (ϕ∧ψ ⇒k1 θ), we cannot infer the value of n such that ϕ ⇒n θ, because there may be ϕ worlds that are θ worlds and which are not ψ worlds. Notice that the axiom schema (Trans) is perfectly compatible with conditional probabilities if we accept some uniform distribution assumptions.
In this case, the form of F is n = (h1 × k1) + (h2 × k2). Even if this is not a sufficient justification, by analogy with conditional probabilities we have adopted the following function F: n = Max{Min{h1, k1}, Min{h2, k2}}. The axiom (UnicReg) states that the regularity level of ‘ϕ entails ψ’ is unique, whereas (MinReg) ensures that ϕ entails ψ at the minimum level iff ϕ implies ¬ψ. The intuitive idea is that ϕ ⇒min ψ holds iff the set of ϕ worlds and the set of ψ worlds are disjoint. The sentence ϕ ⇒min ψ can be interpreted in the context of conditional probabilities as Pr(ψ|ϕ) = 0. According to axiom (MaxReg), a formula ϕ entails ψ at the maximum level iff ϕ implies ψ. The intuitive idea is that ϕ ⇒max ψ holds iff the set of ϕ worlds is included in the set of ψ worlds. Note that the sentence ϕ ⇒max ψ can be interpreted in the context of conditional probabilities as Pr(ψ|ϕ) = 1. The last axiom (DetachBel) follows from the axioms (MaxTau), (ClosConj), (SubstBel) and (Weak).
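The aggregation function adopted for (Trans) is simple to state in code. In the sketch below, the integer scale standing for regularity levels is an assumption of this illustration:

```python
# The function F adopted above for axiom (Trans):
#   n = Max{ Min{h1, k1}, Min{h2, k2} }
# Levels are drawn from a totally ordered finite scale (plain integers here).

def trans_level(h1, k1, h2, k2):
    return max(min(h1, k1), min(h2, k2))

# If phi entails psi strongly (h1) and phi & psi entails theta strongly (k1),
# the derived level of phi => theta stays high even when the phi & not-psi
# branch (h2, k2) is weak.
print(trans_level(4, 3, 1, 2))  # 3
```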

In the sequel, L will denote the extended language and ⊢* the extended logic, i.e. the logic ⊢ extended with the previous axioms.

5.2.Preference-based argumentation for graded trust

There is a clear consensus in the literature that arguments do not necessarily have the same strength. It may be the case that an argument relies on certain information while another argument is built on less certain information, or that an argument promotes an important value while another promotes a weaker one. In both cases, the former argument is clearly stronger than the latter. These differences in arguments’ strengths make it possible to compare them. Consequently, several preference relations between arguments have been defined in the literature (Amgoud, 1999; Benferhat, Dubois, & Prade, 1993; Cayrol, Royer, & Saurel, 1993; Simari & Loui, 1992). There is also a consensus on the fact that preferences should be taken into account in the evaluation of arguments (see Amgoud & Cayrol, 2002; Bench-Capon, 2003; Modgil, 2009; Prakken & Sartor, 1997; Simari & Loui, 1992).

In Amgoud and Cayrol (2002), a first abstract preference-based argumentation framework was proposed. It takes as input a set of arguments, an attack relation, and a preference relation ⪰ between arguments. For two arguments a and b, a ⪰ b means that the argument a is at least as strong as b. The relation ⪰ is abstract and can be instantiated in different ways. However, it is assumed to be a (total or partial) pre-ordering (i.e. reflexive and transitive). The strict version associated with ⪰ is denoted by ≻ and is defined as follows: a ≻ b iff a ⪰ b and not b ⪰ a. Whatever the source of this preference relation is, the idea is to ignore an attack if the attacked argument is stronger than its attacker. Dung's semantics are applied on the remaining attacks. This approach is particularly interesting when the attack relation is symmetric. However, when the attack relation is not symmetric, like the relation given in Definition 4.17, the extensions of the argumentation framework may be conflicting, thus leading to counter-intuitive results. Consequently, Amgoud and Vesic (2009) proposed a new approach which consists of inverting the direction of an attack whenever the attacker is weaker than its target, as follows:

Definition 5.1

Let (A, R, ⪰) be an argumentation framework. For two arguments a, b ∈ A, a defeats b iff:

  • aRb and not (b ≻ a), or

  • bRa and a ≻ b.
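Definition 5.1 can be sketched directly; the pair-set representation of the attack relation and of the strict preference is an assumption of this illustration:

```python
# Sketch of Amgoud & Vesic's inversion: an attack from a to b becomes a
# defeat unless b is strictly preferred to a, in which case the attack is
# inverted (b then defeats a).

def defeats(attacks, stronger):
    """stronger = set of pairs (x, y) meaning x is strictly preferred to y."""
    result = set()
    for (a, b) in attacks:
        if (b, a) not in stronger:   # a R b and not (b > a): defeat kept
            result.add((a, b))
        else:                        # b > a: the attack is inverted
            result.add((b, a))
    return result

attacks = {('a', 'b')}
print(defeats(attacks, stronger={('b', 'a')}))  # {('b', 'a')}: inverted
print(defeats(attacks, stronger=set()))         # {('a', 'b')}: kept
```

Inverting rather than deleting the attack is what keeps the resulting extensions conflict-free even when, as here, the attack relation is not symmetric.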

Dung's semantics are then applied to the new framework (A, defeats) for evaluating the arguments. In what follows, we propose an instantiation of this abstract framework for reasoning about graded trust. As in the binary case, we assume a knowledge base Ki containing the beliefs of an agent i. Formulas of Ki are elements of the extended language L. Arguments are built from Ki following Definition 4.4, but replacing the relation ⊢ by ⊢*.

Definition 5.2

Let Ki be a belief base of agent i. An argument is a pair (H, h) where:

  • H ⊆ Ki and h ∈ L

  • H is consistent

  • H ⊢* h

  • ∄H′ ⊂ H such that H′ ⊢* h

Arguments attack each other as in Definition 4.17, i.e. using the relation of the binary case. However, they may have different strength levels. The strength level of an argument is that of the weakest (i.e. the least certain) formula used in its support.

Definition 5.3

Let (H, h) be an argument such that H = {Belig1ϕ1, … , Belignϕn}. The strength level of (H, h), denoted Level(H, h), is Min{g1, … , gn}.

These strengths are used in order to compare arguments. The idea is to prefer the one with the greatest strength level, i.e. the one whose support is based on more certain information.

Definition 5.4

Let (H, h), (H′, h′) be two arguments. (H, h) ⪰ (H′, h′) iff Level(H, h) ≥ Level(H′, h′).
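Definitions 5.3 and 5.4 amount to a min-then-compare rule. The sketch below uses the grades of Example 5.5 below, with integer stand-ins for g1, … , g5 (an assumption of this illustration):

```python
# Sketch of Definitions 5.3-5.4: an argument's level is the minimum grade
# appearing in its support; (H,h) is at least as strong as (H',h') when its
# level is at least as high.  Supports are maps formula -> grade.

def level(support):
    return min(support.values())

def at_least_as_strong(arg1, arg2):
    return level(arg1) >= level(arg2)

H1 = {'Bel(parking)': 4, 'Bel(parking -> office)': 2}       # a1 in Example 5.5
H3 = {'Bel(meeting)': 3, 'Bel(meeting -> not office)': 4}   # a3 in Example 5.5
print(level(H1), level(H3), at_least_as_strong(H3, H1))  # 2 3 True
```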

The argumentation framework (Arg(Ki), ℜ, ⪰) is used for reasoning about the belief base Ki of agent i. Let us illustrate this framework on a simple example.

Example 5.5

Let us consider the following base:

Ki = {Belig4(parking), Belig2(parking → office), Belig3(meeting), Belig4(meeting → ¬office)},
where it is assumed that the set of strengths of beliefs is {g1, g2, g3, g4, g5} and this set has the structure of a total order. Moreover, the following abbreviations have been adopted: parking: Luis’ car is in the parking lot, office: Luis is at his office, meeting: Luis is attending a meeting, and teaching: Luis is teaching.

The intuitive justification of the belief strength levels is that agent i has observed that Luis’ car is in the parking lot, and i is not strongly convinced that this fact guarantees that Luis is at his office. Moreover, i has been informed that Luis is attending a meeting, and i knows that a meeting cannot happen at Luis’ office.

From Ki, an infinite number of arguments can be built including the following ones:

a1 = (H1, h1), where H1 = {Belig4(parking), Belig2(parking → office)} and h1 = Belig2(parking ∧ office).

From the definition of Level, we have: Level(a1) = min{g2, g4} = g2.

From the inference rules (SubstBel) and (Weak), we also have the argument:

a2 = (H1, h2), where h2 = Belig(office) and g is greater than or equal to g2. We also have: Level(a2) = g2. Notice that the strength of the consequence h2 is not necessarily the same as the strength of its support H1.

We also have the arguments:

a3 = (H3, h3), where H3 = {Belig3(meeting), Belig4(meeting → ¬office)} and h3 = Belig3(meeting ∧ ¬office). We have Level(a3) = g3.

a4 = (H3, h4), where h4 = Belig′(¬office) and g′ is greater than or equal to g3.

In the logic presented in Section 5, graded beliefs are assumed to be standard beliefs (see schema (Consist)), and standard beliefs must be consistent in the sense of schema (D). According to this logic, arguments a2 and a4 lead to an inconsistency. This kind of inconsistency can be removed if it is accepted that Beligϕ means that the strength level of the fact that i believes that ϕ may be true is g (instead of: the strength level of the fact that i believes that ϕ is true is g). This interpretation of Beligϕ can be formally represented by replacing the schema (Consist) with: (Consist′) Beligϕ → ¬Beli¬ϕ.

In the same context we could have the following knowledge base Ki where agent i trusts agent j in his validity about meeting and j has informed i about meeting.

Ki = {Belig4(parking), Belig2(parking → office), Belig4(Infj,imeeting), Belig2(Infj,imeeting → meeting), Belig4(meeting → ¬office)}

In Ki we have the argument a5.

a5 = (H5, h5), where H5 = {Belig4(Infj,imeeting), Belig2(Infj,imeeting → meeting), Belig4(meeting → ¬office)}, h5 = Belig2(Infj,imeeting ∧ meeting ∧ ¬office) and Level(a5) = g2.

We may have a more complex knowledge base Ki, which also represents the fact that if Luis is teaching, he cannot be attending a meeting.

Ki = {Belig4(parking), Belig2(parking → office), Belig4(Infj,imeeting), Belig2(Infj,imeeting → meeting), Belig4(meeting → ¬office), Belig4(teaching), Belig4(teaching → ¬meeting)}

Now, we have the argument a6:

a6 = (H6, h6), where H6 = {Belig4(teaching), Belig4(teaching → ¬meeting)}, h6 = Belig4(teaching ∧ ¬meeting) and Level(a6) = g4.

Since the consequence h6 of a6 is Belig4(teaching ∧ ¬meeting) and the support H3 of a3 contains Belig3(meeting), we can accept, thanks to a limited change in the attack definition, that a6 attacks a3. Moreover, since Level(a6) > Level(a3), we can infer that a6 defeats a3.
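Combining the level computation with the defeat rule of Definition 5.1, the a6/a3 case can be checked mechanically; the integer grades standing in for g1, … , g5 are again an assumption of this sketch:

```python
# a6 attacks a3; since Level(a6) = g4 > Level(a3) = g3, the target is not
# strictly stronger than the attacker, so the attack is not inverted and
# a6 defeats a3.

def level(support):
    return min(support.values())

H3 = {'Bel_g3(meeting)': 3, 'Bel_g4(meeting -> not office)': 4}
H6 = {'Bel_g4(teaching)': 4, 'Bel_g4(teaching -> not meeting)': 4}

attacker_wins = not (level(H3) > level(H6))   # target not strictly preferred
print('a6 defeats a3:', attacker_wins)  # a6 defeats a3: True
```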

6.Related work

Trust modelling has become a hot topic during the last 10 years. More than 20 definitions were proposed for this complex concept. Among others the following one was proposed by Falcone and Castelfranchi (2001):

Trust is a mental state, a complex attitude of an agent i towards another agent j about the behaviour/action a relevant for the goal g.

Gambetta (1990) defines trust as a subjective probability by which an agent i expects that another agent j performs a given action on which its welfare depends. In Liau (2003), trust is represented in terms of an agent's beliefs, and the author focused on trust in validity and its impact on the assimilation of information received from the trustee. The basic idea is the following: if agent i believes that agent j has told him or her the truth about φ and i trusts the judgement of j on φ, then i will also believe φ. Our formalism follows this line of research and considers six forms of trust including validity, sincerity, and competence. It shows how to build arguments in favour of (respectively, against) each form of trust, and how to use beliefs concerning the trustworthiness of the other agents in order to infer new beliefs.

Some attempts at combining argumentation theory and trust have been made in the literature. Based on the representation proposed in Liau (2003), Villata, Boella, Gabbay, and van der Torre (2011) presented an instantiation of the meta-argumentation model of Boella, Gabbay, van der Torre, and Villata (2009) for reasoning about trust in validity. The technique of meta-argumentation applies Dung's theory of abstract argumentation to itself. The instantiation contains arguments built from beliefs as well as meta-arguments. An example of a meta-argument is one of the form ‘Trust i’, meaning that ‘agent i is trustable’. Our formalism is more general since it reasons about more forms of trust. Moreover, it is much simpler since it directly instantiates Dung's framework with a clear and intuitive logical language in which various kinds of beliefs are represented.

An argumentation-based model for reasoning about inconsistent and uncertain information was proposed in Tang, Cai, McBurney, Sklar, and Parsons (2012). It is an instantiation of the preference-based argumentation framework proposed in Amgoud and Cayrol (2002), where arguments do not necessarily have the same strength and are thus compared using a binary relation expressing preferences. The arguments are built from a base which contains beliefs pervaded with degrees of certainty. These degrees are combined for computing the certainty levels of the supports of arguments, which in turn are used for comparing arguments. The particularity of the model is the use of trusted information in order to assign degrees to inferred beliefs. Indeed, the model takes as input a simple network whose nodes are agents and whose edges represent trust relationships between nodes. For instance, an arc from agent i towards agent j means that agent i trusts agent j. Weights are associated with edges and express degrees of trust. Our formalism is based on a richer model of trust. It distinguishes between six forms of trust instead of the absolute trust of Tang et al. (2012). Moreover, our formalism not only uses trusted information in order to infer new beliefs but also reasons about trust itself and infers beliefs about trust.

More recently, in Parsons et al. (2012) the authors focused on identifying 10 sources of trust and presented them in terms of argument schemes, i.e. syllogisms justifying trustworthiness in an agent. Examples of sources are authority, reputation and expert opinion, which our formalism calls competence. Critical questions showing how each argument scheme can be attacked were also proposed. While some of the proposed sources make sense, others are debatable. For instance, trust because of pragmatism says that an agent i may decide to trust another agent j because it serves i’s interests to do so. There is a form of wishful thinking here which is not compatible with the fact that trust is a belief.

Another interesting contribution on the combination of argumentation theory and trust was done in Stranders, de Weerdt, and Witteveen (2007). The focus is on computing to what extent agent i trusts agent j. This is done from statistical data and arguments. The model is an instantiation of the abstract decision model proposed in Amgoud and Prade (2009). Our formalism does not use statistical data. Moreover, it is an inference model and not a decision making one.

Finally, in Matt et al. (2010) the authors proposed a model for evaluating the trust an agent may have in another. For that purpose, arguments in favour of trust are built. They are mainly grounded on statistical data which makes this approach different from the one we followed in the present paper.

7.Conclusion

This paper tackled the important questions of formalising and reasoning about trust in information sources. It proposed a formal model based on the construction and evaluation of arguments. The model presents several advantages: first, it is grounded in an accurate and simple logical language for representing trust in information sources. Indeed, modal logic is used for distinguishing between what is true (respectively, false) and what is believed by an agent. Second, unlike existing works that define absolute trust in an agent, our model uses a fine-grained notion of trust. It distinguishes between six forms of trust, including trust in the sincerity of an agent and trust in his competence. The third feature of our model is that it plays two distinct roles: (i) it shows how to take trust in information sources into account in order to deal with and reason about information coming from those sources; (ii) it shows whether or not to trust a given source of information on the basis of the available beliefs. This makes our model a good candidate for dialogue systems.

There are a number of ways to extend this work. Our future directions include investigating the properties of the model under other semantics, namely preferred semantics. We have shown that the attack relations we have defined are quite special since they are not grounded on inconsistency. Consequently, even though arguments are consistent, self-attacking arguments may exist, thus preventing the existence of stable extensions.

Another interesting future direction consists of refining the logical language by considering the notion of topic. The basic idea is to represent information such as: agent i trusts the competence of agent j in psychology but not in philosophy. Our formal definitions can be extended in this direction thanks to the logic of aboutness developed by Demolombe and Jones (1995). The logical language of this logic contains a predicate A(t, φ) whose intuitive meaning is that formula φ is about topic t. This predicate can be used, for instance, for expressing the fact that i trusts j in his validity for any sentence about a given topic t: ∀x(A(t,x) → TrustVal(i,j,x)). A further direction consists of handling graded trust. In the proposed model, trust is a binary notion: an agent either fully trusts another agent or fully distrusts it. However, in everyday life one may have only limited trust in a person. It is thus important to define to what extent an agent trusts another.
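As a sketch of this topic-relative extension (hypothetical formulas, assuming the aboutness predicate A of Demolombe and Jones (1995) and the TrustVal operator used above), the psychology/philosophy example could be encoded as follows:

```latex
% i trusts j's validity on every sentence about psychology:
\forall x\,\bigl(A(\mathit{psychology},x)\rightarrow \mathit{Trust}_{\mathit{Val}}(i,j,x)\bigr)

% but there is at least one sentence about philosophy
% on which i does not trust j's validity:
\exists x\,\bigl(A(\mathit{philosophy},x)\wedge \neg\mathit{Trust}_{\mathit{Val}}(i,j,x)\bigr)
```

The universally quantified formula captures unconditional topic-wide trust, while the existential one records a topic on which that trust fails.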

Notes

1 Sometimes we abuse notation and write Bel_i(ϕ) instead of Bel_i ϕ.

2 We use the same notations for the minimal element and for the maximal element in GRB and in GRR while they are not necessarily identical. The context allows us to avoid ambiguities.

3 Notice that ϕ⊢ψ (respectively, ϕ⊢¬ψ) does not mean that ϕ→ψ (respectively, ϕ→¬ψ) is a valid formula.

References

1 

Amgoud, L. (1999). Contribution à l'intégration des préférences dans le raisonnement argumentatif (PhD thesis), Université Paul Sabatier, Toulouse, France.

2 

Amgoud, L. (2013). Postulates for logic-based argumentation systems. International Journal of Approximate Reasoning. doi: 10.1016/j.ijar.2013.10.004

3 

Amgoud, L., & Besnard, P. (2009). Bridging the gap between abstract argumentation systems and logic. In Lecture notes in computer science vol. 5785 (pp. 12–27). Berlin/Heidelberg/New York: Springer.

4 

Amgoud, L., & Cayrol, C. (2002). A reasoning model based on the production of acceptable arguments. Annals of Mathematics and Artificial Intelligence, 34, 197–216. doi: 10.1023/A:1014490210693

5 

Amgoud, L., Maudet, N., & Parsons, S. (2000). Modelling dialogues using argumentation. In Proceedings of the 4th International Conference on Multi-Agent Systems (ICMAS’00) (pp. 31–38). Boston, MA: IEEE.

6 

Amgoud, L., & Prade, H. (2009). Using arguments for making and explaining decisions. Artificial Intelligence Journal, 173, 413–436. doi: 10.1016/j.artint.2008.11.006

7 

Amgoud, L., & Vesic, S. (2009). Repairing preference-based argumentation systems. In Proceedings of International Joint Conference on Artificial Intelligence (IJCAI’09) (pp. 665–670). Pasadena, CA: AAAI.

8 

Baroni, P., Giacomin, M., & Guida, G. (2005). SCC-recursiveness: A general schema for argumentation semantics. Artificial Intelligence Journal, 168, 162–210. doi: 10.1016/j.artint.2005.05.006

9 

Bench-Capon, T.J.M. (2003). Persuasion in practical argument using value-based argumentation frameworks. Journal of Logic and Computation, 13(3), 429–448. doi: 10.1093/logcom/13.3.429

10 

Benferhat, S., Dubois, D., & Prade, H. (1993). Argumentative inference in uncertain and inconsistent knowledge bases. In Proceedings of the 9th Conference on Uncertainty in Artificial Intelligence (UAI’93) (pp. 411–419). San Francisco, CA: Morgan Kaufmann.

11 

Black, E., & Hunter, A. (2009). An inquiry dialogue system. Autonomous Agents and Multi-Agent Systems, 19, 173–209. doi: 10.1007/s10458-008-9074-5

12 

Boella, G., Gabbay, D., van der Torre, L., & Villata, S. (2009). Meta-argumentation modelling I: Methodology and techniques. Studia Logica, 93, 297–355. doi: 10.1007/s11225-009-9213-2

13 

Caminada, M., & Amgoud, L. (2007). On the evaluation of argumentation formalisms. Artificial Intelligence Journal, 171(5–6), 286–310. doi: 10.1016/j.artint.2007.02.003

14 

Castelfranchi, C. (2011). Trust: Nature and dynamics. In ACM SIGCHI Italian chapter international conference on computer–human interaction (pp. 13–14). New York, NY: ACM.

15 

Castelfranchi, C., & Falcone, R. (2000). Trust is much more than subjective probability: Mental components and sources of trust. In Proceedings of the 33rd annual Hawaii international conference on system sciences. IEEE.

16 

Cayrol, C., Royer, V., & Saurel, C. (1993). Management of preferences in assumption-based reasoning. Lecture notes in computer science, vol. 682, 13–22. doi: 10.1007/3-540-56735-6_39

17 

Chellas, B. (1980). Modal logic: An introduction. Cambridge: Cambridge University Press.

18 

Demolombe, R. (1998). To trust information sources: A proposal for a modal logical framework. In C. Castelfranchi & Y.-H. Tan (Eds.), Autonomous agents ’98 workshop on ‘Deception, fraud and trust in agent societies’ (pp. 20–34). Minneapolis.

19 

Demolombe, R. (2001). To trust information sources: A proposal for a modal logical framework. In C. Castelfranchi & T. Yao-Hua (Eds.), Trust and deception in virtual societies. Dordrecht: Kluwer.

20 

Demolombe, R. (2004). Reasoning about trust: A formal logical framework. In Lecture notes on computer science vol. 2995 (pp. 291–303). Berlin/Heidelberg/New York: Springer.

21 

Demolombe, R. (2009). Graded trust. In R. Falcone, S. Barber, J. Sabater-Mir, & M. Singh (Eds.), Proceedings of the trust in agent societies workshop at AAMAS 2009. Budapest. Retrieved from http://www.irit.fr/Robert.Demolombe/publications/2009/aamas09.pdf

22 

Demolombe, R. (2011). Transitivity and propagation of trust in information sources: An analysis in modal logic. In Lecture notes in computer science vol. 6814 (pp. 13–28). Berlin/Heidelberg/ New York: Springer.

23 

Demolombe, R., & Jones, A. (1995). Reasoning about topics: Towards a formal theory. American Association for Artificial Intelligence fall symposium. Pasadena, CA: AAAI.

24 

Demolombe, R., & Liau, C.J. (2001). A logic of graded trust and belief fusion. In C. Castelfranchi & R. Falcone (Eds.), Proceedings of 4th workshop on deception, fraud and trust. Retrieved from http://www.irit.fr/Robert.Demolombe/publications/2001/trust01.pdf

25 

Demolombe, R., & Lorini, E. (2008). A logical account of trust in information sources. In Lecture notes on computer science vol. 5396. Berlin/Heidelberg/New York: Springer.

26 

Dung, P.M. (1995). On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence Journal, 77, 321–357. doi: 10.1016/0004-3702(94)00041-X

27 

Dung, P., Mancarella, P., & Toni, F. (2007). Computing ideal skeptical argumentation. Artificial Intelligence Journal, 171, 642–674. doi: 10.1016/j.artint.2007.05.003

28 

Elvang-Gøransson, M., Fox, J., & Krause, P. (1993). Acceptability of arguments as ‘logical uncertainty’. In Lecture notes on computer science vol. 747 (pp. 85–90). Berlin/Heidelberg/New York: Springer.

29 

Falcone, R., & Castelfranchi, C. (2001). Social trust: A cognitive approach. In C. Castelfranchi & T. Yao-Hua (Eds.), Trust and deception in virtual societies (pp. 55–90). Dordrecht: Kluwer.

30 

Falcone, R., Piunti, M., Venanzi, M., & Castelfranchi, C. (2013). From manifesta to krypta: The relevance of categories for trusting others. ACM Transactions on Intelligent Systems and Technology, 4, 27.

31 

Gambetta, D. (1990). Can we trust them? In Trust: Making and breaking cooperative relations (pp. 213–238). Oxford: Basil Blackwell.

32 

Gorogiannis, N., & Hunter, A. (2011). Instantiating abstract argumentation with classical logic arguments: Postulates and properties. Artificial Intelligence Journal, 175(9–10), 1479–1497. doi: 10.1016/j.artint.2010.12.003

33 

Jennings, N.R., Mamdani, E.H., Corera, J., Laresgoiti, I., Perriolat, F., Skarek, P., & Varga, L.Z. (1996). Using ARCHON to develop real-world DAI applications Part 1. IEEE Expert, 11, 64–70. doi: 10.1109/64.546585

34 

Liau, C. (2003). Belief, information acquisition, and trust in multi-agent systems – a modal logic formulation. Artificial Intelligence Journal, 149, 31–60. doi: 10.1016/S0004-3702(03)00063-8

35 

Lorini, E., & Demolombe, R. (2008). From binary trust to graded trust in information sources: A logical perspective. In Lecture notes in computer science vol. 5396 (pp. 205–225). Berlin/Heidelberg/New York: Springer.

36 

Maes, P. (1996). Agents that reduce work and information overload. Communications of the ACM, 37(7), 31–40.

37 

Marsh, S. (1994). Formalising trust as a computational concept. Technical report (PhD thesis), University of Stirling, Stirling.

38 

Matt, P., Morge, M., & Toni, F. (2010). Combining statistics and arguments to compute trust. In 9th international conference on autonomous agents and multiagent systems (pp. 209–216). Toronto: IFAAMAS.

39 

McBurney, P., Hitchcock, D., & Parsons, S. (2007). The eightfold way of deliberation dialogue. International Journal of Intelligent Systems, 22, 95–132. doi: 10.1002/int.20191

40 

Modgil, S. (2009). Reasoning about preferences in argumentation frameworks. Artificial Intelligence Journal, 173(9–10), 901–934. doi: 10.1016/j.artint.2009.02.001

41 

Parsons, S., Atkinson, K., Haigh, K., Levitt, K., McBurney, P., Rowe, J.,  … Sklar, E. (2012). Argument schemes for reasoning about trust. In Computational models of argument, COMMA 2012 (pp. 430–441). IOS Press.

42 

Prakken, H., & Sartor, G. (1997). Argument-based extended logic programming with defeasible priorities. Journal of Applied Non-Classical Logics, 7, 25–75. doi: 10.1080/11663081.1997.10510900

43 

Rodriguez, J.A., Noriega, P., Sierra, C., & Padget, J. (1997). A Java-based electronic auction house. In Proceedings of the 2nd international conference on the practical application of intelligent agents and multi-agent technology (pp. 207–224). London: Practical Application Company.

44 

Shi, J., Bochmann, G., & Adams, C. (2005). A trust model with statistical foundation. IFIP Advances in Information and Communication Technology, 173, 145–158.

45 

Simari, G., & Loui, R. (1992). A mathematical treatment of defeasible reasoning and its implementation. Artificial Intelligence Journal, 53, 125–157. doi: 10.1016/0004-3702(92)90069-A

46 

Stranders, R., de Weerdt, M., & Witteveen, C. (2007). Fuzzy argumentation for trust. In Lecture notes on computer science vol. 5056 (pp. 214–230). Berlin/Heidelberg/New York: Springer.

47 

Sycara, K. (1990). Persuasive argumentation in negotiation. Theory and Decision, 28, 203–242. doi: 10.1007/BF00162699

48 

Tang, Y., Cai, K., McBurney, P., Sklar, E., & Parsons, S. (2012). Using argumentation to reason about trust and belief. Journal of Logic and Computation, 22, 979–1018. doi: 10.1093/logcom/exr038

49 

Tarski, A. (1956). On some fundamental concepts of metamathematics. In J.H. Woodger (Ed.), Logic, semantics, metamathematics. Oxford: Oxford University Press.

50 

Villata, S., Boella, G., Gabbay, D., & van der Torre, L. (2011). Arguing about the trustworthiness of the information sources. In Lecture notes on computer science (pp. 74–85). Berlin/Heidelberg/New York: Springer.

51 

Walton, D.N., & Krabbe, E.C.W. (1995). Commitment in dialogue: Basic concepts of interpersonal reasoning. Albany, NY: State University of New York Press.

52 

Wellman, M.P. (1993). A market-oriented programming environment and its application to distributed multicommodity flow problems. Journal of Artificial Intelligence Research, 1, 1–23.

53 

Wooldridge, M.J., & Jennings, N. (1995). Intelligent agents: Theory and practice. The Knowledge Engineering Review, 10, 115–152. doi: 10.1017/S0269888900008122