
An informant-based approach to argument strength in Defeasible Logic Programming

Abstract

This work formalizes an informant-based structured argumentation approach in a multi-agent setting, where the knowledge base of an agent may include information provided by other agents, and each piece of knowledge comes attached with its informant. In that way, arguments are associated with the set of informants corresponding to the information they are built upon. Our approach proposes an informant-based notion of argument strength, where the strength of an argument is determined by the credibility of its informant agents. Moreover, we consider that the strength of an argument is not absolute, but it is relative to the resolution of the conflicts the argument is involved in. In other words, the strength of an argument may vary from one context to another, as it will be determined by comparison to its attacking arguments (respectively, the arguments it attacks). Finally, we equip agents with the means to express reasons for or against the consideration of any piece of information provided by a given informant agent. Consequently, we allow agents to argue about the arguments’ strength through the construction of arguments that challenge (respectively, defeat) or are in favour of their informant agents.

1. Introduction

Argumentation has proved to be a powerful paradigm for conceptualizing commonsense reasoning and a promising research area within the field of Artificial Intelligence [7,11,14,40]. One of the main issues an argumentation system has to address is the selection of the acceptable arguments, whose conclusions can be considered to be justified. For this purpose, it is necessary to account for the conflicts between the arguments in the system, and how those conflicts are to be resolved. At this point, a crucial notion comes into play: the notion of argument strength. In general, given an attack from an argument A to an argument B, the attack will only succeed if A is stronger than B. However, there is no unique way of establishing the strength of the arguments, as evidenced by the wide variety of approaches existing in the literature of argumentation. For instance, in [3,5] the arguments’ strength is established by a general preference relation. On the other hand, [13] defines the strength of an argument through a formula yielding a numerical value, based on the intuition that the strength of an argument depends on the strength of its attackers. Finally, for instance, approaches to value-based argumentation [10] consider that the strength of an argument depends on the social values it advances, and determining whether the attack of one argument on another succeeds depends on the comparative strength of the values advanced by the arguments concerned. In contrast to these, in this work we will adopt a symbolic informant-based approach to argument strength in structured argumentation, where the strength of an argument is established in terms of the strength or credibility of its informant agents. Moreover, the strength of arguments in our approach will not be absolute and, furthermore, could be challenged.

In this paper we assume a multi-agent setting where agents share domain knowledge with one another. Hence, each deliberative agent may obtain pieces of information from other informant agents, which can have different levels of credibility. In order to reach conclusions to establish their beliefs, agents will build arguments based on the information in their knowledge bases, which include the pieces of information received from a set of their informant agents. In this context, we introduce an informant-based argumentative approach where the credibility of the information sources (i.e., the informant agents) will be used for determining the arguments’ strength. On the one hand, as it is usual in argumentation systems, arguments could be challenged due to the information they use. On the other hand, since the pieces of information used for building the arguments are associated with their informant agents, our approach is such that arguments can also be challenged on their credibility, in other words, on their strength.

The knowledge representation and reasoning capabilities of our approach will take their basis from DeLP [25]. Specifically, our proposal greatly extends and further develops the approach introduced in [21] by providing a formal account of the arguments’ strength based on the credibility of their informant sources. In particular, we extend DeLP’s representational language by including backing and detracting rules, respectively enabling the expression of reasons for and against considering the information provided by given informant agents. Then, these new types of rules will be used for building backing and detracting arguments, making it possible to argue about the arguments’ strength. As a result, in a situation where some informants are challenged whereas others are not, the existence of multiple informants for a given piece of information in an agent’s knowledge base may strengthen its position (and hence that of the arguments using it).

For example, let us consider an agent I1 that has to decide whether or not to buy a used smartphone; this agent receives information from agents I2, I3, I4 and I5. Let us also assume that agent I1 regards I3 as less credible than I2, and I4 as more credible than I5. Now, suppose I2 informs I1 that “the phone is a good option”, whereas I1 receives information from I3 expressing that “the phone is not a good option”. With this information, agent I1 will be able to build arguments for buying the phone and for not buying it, and these two arguments will attack each other. Then, taking the credibility of the informants into account, the former argument will prevail over the latter and agent I1 will arrive at the conclusion that “the phone is a good option”, based on information provided by the more credible agent. However, it could be the case that there exist reasons for or against the consideration of information received from some informant. For instance, suppose agent I5 provides information saying that I2 is an expert on electronic devices. In addition, suppose I4 informs I1 that I2 should not be trusted when it comes to recommending an item it owns. This new information could be encoded through a backing and a detracting rule, respectively, leading to the construction of the homonymous arguments. As a result, the backing and detracting arguments would be in conflict and, taking the credibility of their informants into account, the detracting argument would prevail over the backing argument (because I4 is more credible than I5). In particular, the approach proposed in this paper is able to handle situations like these, where the strength of the argument expressing that the phone is a good option is challenged by the detracting argument.

To summarize, the contribution of this paper is two-fold. On the one hand, we formalize an informant-based structured argumentation approach where the strength of an argument is determined by the strength (i.e., the credibility) of its informant agents. Furthermore, as will become clear later, the strength of an argument in our approach is not absolute, but relative to the resolution of the conflicts the argument is involved in. In other words, the strength of an argument will vary from one context to another, as it will be determined by comparison to its attacking arguments (respectively, the arguments it attacks). As a result, it could be the case that some informant providing information for building an argument is relevant for establishing the argument’s strength in a given context, but not in others. On the other hand, through the incorporation of backing and detracting rules (and their homonymous arguments) we provide the means for reasoning about the arguments’ strength. In particular, the former allow us to express reasons for the consideration of any piece of information provided by a given informant agent; analogously, the latter enable us to express reasons against the consideration of an informant and thus challenge the strength of arguments making use of information provided by that informant.

The rest of this paper is organized as follows. In Section 2 we introduce the elements that will be used to represent the agents’ knowledge. Section 3 shows how an agent can build different kinds of arguments and identify different kinds of attacks between them. Then, in Section 4, we introduce the ways in which the conflicts between arguments are resolved, leading to defeats. Moreover, taking those defeats into account, in Section 5 we formalize the agents’ reasoning mechanism, enabling them to determine their warranted beliefs. Finally, Sections 6 and 7 discuss relevant related work, draw some conclusions and comment on future lines of work.

2. Knowledge representation

In this section we introduce our proposal for representing an agent’s knowledge, which will be defined as an informant-based DeLP program. We assume that agents may share tentative information in the form of defeasible rules, and that each agent can obtain information from other informant agents that have different degrees of credibility. Also, backing and detracting rules will be proposed in our formalization, allowing agents to express reasons for and against the consideration of the informant agents that provide the knowledge used for building arguments.

The assumption of the existence of a total credibility order over informants is not quite realistic in many multi-agent application domains, and a similar observation applies to the existence of a global order shared by all agents. With this observation in mind, the approach to be introduced below will consider that every agent has its own partial order defined over the set of informant agents, representing the credibility it assigns to each informant. Each agent has a knowledge base where every piece of information is attached with an agent identifier representing that the corresponding agent is the source of that piece of information. In addition, agents could communicate with their peers for obtaining new information or for sharing their beliefs. Clearly, as agents may disagree with one another, the beliefs an agent has may be in conflict with another agent’s knowledge. Then, when sharing conflicting information, the credibility order among the informant agents can be used to decide which information prevails. Next, we briefly introduce the notion of credibility order, which will be used throughout the rest of the paper.

We assume a finite set I of identifiers for naming informant agents, shared by all agents. Agent identifiers will be denoted with an uppercase typewriter letter “I” that can have letters and natural numbers as subscripts (i.e., I = {Ia, Ib, …, Iz, I1, I2, …, In, …}), and each identifier is unequivocally associated with a single agent.

Each agent will have its own credibility order, represented by an irreflexive, asymmetric and transitive binary relation over I, denoted <coIx (i.e., <coIx is a strict partial order over I), where Ix ∈ I stands for the agent identifier this order belongs to; for instance, <coIa is the credibility order of the agent identified as Ia. In this paper we will assume that the credibility order relates agents that are sources of information about the same topic; multi-topic or multi-context credibility orders will be considered in future work. The notation Ib <coIa Ic means that agent Ia deems agent Ib as less credible than agent Ic (equivalently, under Ia’s perspective, Ic is more credible than Ib). Then, the notation Ib ≮coIa Ic is used to express that, for agent Ia, Ib is not less credible than Ic. Furthermore, if Ib ≮coIa Ic and Ic ≮coIa Ib, then agent Ia regards Ib and Ic as incomparable. The following example illustrates a credibility order over agents, which will be used as part of the running example throughout the rest of the paper.

Example 1.

Consider the set of informants {I1, I2, I3, I4, I5, I6, I7, I8, I9} ⊆ I. Suppose agent I6 has the following credibility order: <coI6 = {I2 <coI6 I5, I3 <coI6 I5, I5 <coI6 I1, I2 <coI6 I4, I3 <coI6 I4, I4 <coI6 I7, I7 <coI6 I8, I6 <coI6 I9}. Figure 1 shows a graphical representation of the credibility order <coI6. Following this notation, an edge from a node N1 to a node N2 represents that N2 <coI6 N1. For instance, the edge from I7 to I4 in Fig. 1 denotes the credibility relation I4 <coI6 I7.

Fig. 1. Graphical representation of the credibility order <coI6 from Example 1.
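To make the structure of such an order concrete, the following Python sketch (ours, not part of the paper) encodes agent I6’s credibility order from Example 1 as a set of (less credible, more credible) pairs, closes it under transitivity, and answers incomparability queries; all function names are illustrative.

```python
# A sketch of the credibility order of Example 1, encoded as
# (less_credible, more_credible) pairs.
pairs = {("I2", "I5"), ("I3", "I5"), ("I5", "I1"), ("I2", "I4"),
         ("I3", "I4"), ("I4", "I7"), ("I7", "I8"), ("I6", "I9")}

def transitive_closure(rel):
    """Close the relation under transitivity, as required of <co."""
    closure = set(rel)
    while True:
        new = {(a, d) for (a, b) in closure for (c, d) in closure if b == c}
        if new <= closure:
            return closure
        closure |= new

co = transitive_closure(pairs)

def less_credible(a, b):
    """a <co b: agent a is deemed less credible than agent b."""
    return (a, b) in co

def incomparable(a, b):
    return a != b and not less_credible(a, b) and not less_credible(b, a)

assert less_credible("I2", "I1")     # follows from I2 <co I5 and I5 <co I1
assert incomparable("I1", "I8")      # the order is partial
assert all(a != b for (a, b) in co)  # and strict (irreflexive)
```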

It should be noted that agents may change their assessment of one another; as a result, this change would impact their credibility orders, resulting in an update. The dynamic nature of credibility orders is outside the scope of this paper. However, for instance, the formalism proposed in [42], which provides a mechanism for handling the dynamics of a credibility order, can be a complement to our proposal.

As stated before, agents can receive information from different sources that are not equally credible. Thus, the information they provide, which may be contradictory, will be considered to be tentative. In this setting we need a mechanism for dealing with uncertain and conflicting information, able to ultimately determine the beliefs an agent should commit to. In particular, when deciding between contradictory conclusions, the reasoning mechanism will rely on the credibility order the agent has. In our approach, knowledge representation and reasoning are inspired by Defeasible Logic Programming (DeLP) [25]. Our interest in this particular computational tool is that its language provides the declarative capability of representing weak information in the form of defeasible rules and presumptions, and its defeasible argumentation inference mechanism allows warranting conclusions in the presence of contradictory information. Below we introduce the elements that will be used for representing an agent’s knowledge, some of which are taken from DeLP’s representational language.

Let L be a set of ground atoms. A ground literal L (or literal, for short) is an atom A or a negated atom ∼A, where A ∈ L and the symbol “∼” represents strong negation.

Definition 1 (Defeasible rule).

A defeasible rule is an ordered pair, denoted “Head –≺ Body”, where the first element (Head) is a ground literal and the second element (Body) is a finite set of ground literals. A defeasible rule with head L0 and body {L1, …, Ln} (n ≥ 0) will also be written as L0 –≺ L1, …, Ln.

A defeasible rule “Head –≺ Body” expresses that “reasons to believe in the antecedent Body give reasons to believe in the consequent Head”. For instance, the defeasible rule “closed_roads –≺ snowing” represents that “reasons to believe that it is snowing provide reasons to believe that the roads are closed”, whereas “∼closed_roads –≺ snowplows” represents that “reasons to believe that snowplows are working provide reasons to believe that the roads are not closed”. In particular, a defeasible rule with an empty body is noted as “Head –≺” and is called a presumption [37]. Hence, for instance, the presumption “snowplows –≺” expresses that “there are defeasible reasons to believe that snowplows are working”. We associate defeasible rules (including presumptions) with their informant agents; for this, we introduce the notion of defeasible domain object.

Definition 2 (Defeasible domain object).

Let I be a finite set of agent identifiers. A defeasible domain object is a tuple (I: R), where I ∈ I and R is a defeasible rule.

In addition to defeasible rules, we introduce backing rules and detracting rules to express reasons for and against the consideration of informants, respectively. Formally:

Definition 3 (Backing rule).

Let I be a finite set of agent identifiers. A backing rule is an ordered pair, denoted “(Head ⊕ Body)”, where Head ∈ I is the identifier of an informant agent and Body is a finite and non-empty set of literals. A backing rule with head I and body {L1, …, Ln} (n ≥ 1) will also be written as (I ⊕ L1, …, Ln).

Definition 4 (Detracting rule).

Let I be a finite set of agent identifiers. A detracting rule is an ordered pair, denoted “(Head ⊘ Body)”, where Head ∈ I is the identifier of an informant agent and Body is a finite and non-empty set of literals. A detracting rule with head I and body {L1, …, Ln} (n ≥ 1) will also be written as (I ⊘ L1, …, Ln).

Syntactically, the only difference between backing and detracting rules is the use of ⊕ and ⊘; however, these two types of rules are semantically opposite. A backing rule “(I ⊕ Body)” expresses that “the antecedent Body gives reasons for considering information provided by the informant I”. On the contrary, a detracting rule “(I ⊘ Body)” expresses that “the antecedent Body gives reasons against considering information provided by the informant I”. Also, note that backing and detracting rules do not have an associated agent (i.e., their source is not identified), meaning that they correspond to the agent’s own knowledge. Nevertheless, since these rules have a set of literals in their body, it is also possible to argue about the reasons for or against the consideration of the informants in their heads. When convenient, we will refer to backing and detracting rules as informant rules.

The entire knowledge of an agent, which will be used to make inferences and construct arguments, is composed of a set of defeasible domain objects and a set of informant rules. This knowledge base will be called informant-based DeLP program.

Definition 5 (Informant-based DeLP program).

An informant-based DeLP program (IBDP for short) is a pair (Δ,Σ), where Δ is a finite set of defeasible domain objects and Σ is a finite set of informant rules.

The following example introduces an IBDP that will serve as a running example, to illustrate the different notions proposed in this paper.

Example 2.

Let us consider the IBDP PI6=(ΔI6,ΣI6) of an agent I6. Note that PI6 encodes agent I6’s own information as well as information obtained from the following informant agents: I1, I2, I3, I4, I5, I7, I8.

ΔI6 = { (I1: h –≺ a), (I6: c –≺), (I4: p –≺), (I2: a –≺ b, g), (I5: d –≺), (I6: w –≺ x), (I3: b –≺), (I2: g –≺), (I3: q –≺ i), (I4: ∼a –≺ c, d), (I3: i –≺), (I7: j –≺), (I7: z –≺), (I8: z –≺) }

ΣI6 = { (I5 ⊕ p), (I3 ⊕ x), (I5 ⊘ g, q), (I4 ⊘ x), (I5 ⊘ w), (I5 ⊘ j), (I8 ⊘ j, p), (I6 ⊘ j) }
The program PI6 has a set of defeasible domain objects in ΔI6 that includes defeasible rules and presumptions obtained from different informants. It is worth mentioning that two different defeasible domain objects can have the same informant: for instance, (I7: j –≺) and (I7: z –≺). Also, two different defeasible domain objects can have the same defeasible rule with different informant agents: for instance, (I7: z –≺) and (I8: z –≺). On the other hand, the set ΣI6 includes backing and detracting rules. For instance, the detracting rule (I5 ⊘ g, q) expresses that “g and q give reasons against the consideration of any piece of information provided by I5”, and the backing rule (I5 ⊕ p) expresses that “p gives reasons for considering the information provided by I5”.

As evidenced in Example 2, an IBDP can have two or more defeasible domain objects with the same defeasible rule R but different informant agents. This does not mean that the agent’s knowledge base is redundant; rather, it encodes the fact that the same piece of information was received from different sources. This feature can be considered an advantage of our approach since, as mentioned before, the credibility order of an agent may change (even dynamically); hence, at any moment, the more credible informant of R could be considered. Furthermore, given the existence of detracting rules, a specific informant of R could be challenged whereas others are not. Again, a situation like this is illustrated by Example 2, where there exists a detracting rule for the informant I8 of “z –≺”, but not for the informant I7. As a result, the existence of multiple informants may strengthen the position of a defeasible rule, by allowing for different defeasible domain objects within the agent’s knowledge base (and therefore strengthening the arguments using them). Finally, it is important to remark that the existence of backing and detracting rules in an IBDP is not mandatory.
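As an illustration of how such a program can be represented concretely, the following sketch (ours, not the paper’s) encodes the IBDP of Example 2 as plain Python data. The tags "+" and "-" respectively stand for the backing (⊕) and detracting (⊘) operators; the tuple representation is an assumption of ours.

```python
# Defeasible domain objects (I: Head –≺ Body) as (informant, head, body);
# a presumption has an empty body; "~a" stands for the literal ∼a.
Delta_I6 = [
    ("I1", "h", ("a",)),      ("I6", "c", ()),        ("I4", "p", ()),
    ("I2", "a", ("b", "g")),  ("I5", "d", ()),        ("I6", "w", ("x",)),
    ("I3", "b", ()),          ("I2", "g", ()),        ("I3", "q", ("i",)),
    ("I4", "~a", ("c", "d")), ("I3", "i", ()),        ("I7", "j", ()),
    ("I7", "z", ()),          ("I8", "z", ()),
]
# Informant rules: ("+", I, body) encodes (I ⊕ body), and ("-", I, body)
# encodes (I ⊘ body), following the listing of Sigma_I6 above.
Sigma_I6 = [
    ("+", "I5", ("p",)),      ("+", "I3", ("x",)),
    ("-", "I5", ("g", "q")),  ("-", "I4", ("x",)),
    ("-", "I5", ("w",)),      ("-", "I5", ("j",)),
    ("-", "I8", ("j", "p")),  ("-", "I6", ("j",)),
]
program_I6 = (Delta_I6, Sigma_I6)  # the pair (Δ, Σ) of Definition 5
```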

In our approach, an agent will be specified in terms of four components: its own agent identifier, an informant-based DeLP program used to store its knowledge, a credibility order among informants, and an informant-based comparison criterion. The first three elements have been introduced above. On the other hand, the informant-based comparison criterion will make use of the credibility order to establish strict preferences among sets of informants. Briefly, sets of informant agents will be compared as part of the warranting process with the aim of deciding between conflicting arguments, to ultimately determine the accepted arguments and the justified conclusions of the agent. Consequently, since the notion of argument and the way in which arguments come into conflict (attack) have not been formalized yet, we postpone the characterization of informant-based comparison criteria until Section 4; that is, we will formally introduce that notion after providing the formal context in which such criteria will be applied. An agent is defined as follows:

Definition 6 (Agent).

Let I be a finite set of agent identifiers. An agent is a tuple (I, PI, <coI, ≺I), where I ∈ I, PI = (ΔI, ΣI) is an informant-based DeLP program, <coI ⊆ I × I is a credibility order over I, and ≺I ⊆ 2^I × 2^I is an informant-based comparison criterion over sets of informants.

Example 3.

Consider the IBDP PI6 from Example 2 and the credibility order <coI6 presented in Example 1. Then, agent I6 can be specified as (I6, PI6, <coI6, ≺I6). The informant-based comparison criterion ≺I6 will be introduced in Section 4.

Given the characterization of an agent, it can be the case that two different agents (I1, PI1, <coI1, ≺I1) and (I2, PI2, <coI2, ≺I2) have the same IBDP and the same informant-based comparison criterion (thus, PI1 = PI2 and ≺I1 = ≺I2) but a different credibility order. In such a case, even though the two agents share the same knowledge, since the comparison criterion is defined in terms of the credibility order, the conflicts arising from the consideration of inconsistent information may be resolved differently for the two agents. Hence, in such a case, the agents’ inferences (warranted conclusions or beliefs) may differ. On the other hand, even though two agents (I3, PI3, <coI3, ≺I3) and (I4, PI4, <coI4, ≺I4) cannot have the same agent identifier, they can share every other component (i.e., it can be the case that PI3 = PI4, <coI3 = <coI4 and ≺I3 = ≺I4); clearly, in such a case, the two agents will draw the same conclusions.
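For concreteness, the four-component agent specification of Definition 6 can be represented as follows (a sketch of ours; the field names are illustrative):

```python
# An Agent per Definition 6: identifier, IBDP, credibility order, and
# informant-based comparison criterion (a predicate over informant sets).
from typing import Callable, FrozenSet, NamedTuple, Set, Tuple

class Agent(NamedTuple):
    ident: str                                   # I ∈ I
    program: Tuple[list, list]                   # P_I = (Δ_I, Σ_I)
    cred_order: Set[Tuple[str, str]]             # <co^I as (less, more) pairs
    criterion: Callable[[FrozenSet[str], FrozenSet[str]], bool]  # ≺^I
```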

In the following section we will introduce the three kinds of arguments that can be built using the knowledge represented in an IBDP. After that, the different kinds of conflicts that may arise between those arguments (referred to as attacks) will be identified. Then, in Section 4, we will introduce the notion of valid informant-based comparison criterion, used for determining the successful attacks.

3. Arguments and attacks

A central piece of this formalism that will allow agents to handle contradictory domain information is the notion of argument. Intuitively, an argument is a structure whose conclusion is obtained from a set of premises, informed by some agents, through the use of a reasoning mechanism. In particular, the claims of the arguments will be the tentative beliefs of the agents. When analyzing an argument for a particular belief, an agent can find other arguments, referred to as counter-arguments, that are in conflict with it. Specifically, a conflict may arise because the counter-argument contradicts some information (i.e., a premise, an intermediate conclusion or the final claim) in the argument or because it challenges one of its informants. In this situation, it is necessary to have a mechanism for comparing the conflicting arguments to decide which one prevails. This analysis leads to a dialectical process seeking to validate the arguments in conflict. The arguments that survive all possible attacks from their counter-arguments will be said to warrant their conclusions or claims, and these will be the agent’s beliefs.

Next, we will show how an agent can build different types of arguments using the defeasible domain objects and the informant rules stored in its informant-based DeLP program. As a preliminary notion, before formally defining the arguments, we introduce the concept of defeasible derivation.

Definition 7 (Defeasible derivation).

Let P = (Δ, Σ) be an IBDP and L a literal. A defeasible derivation of L from S ⊆ Δ, denoted S ⊢P L, consists of a finite sequence L1, L2, …, Ln = L of literals, where each literal Li (1 ≤ i ≤ n) is in the sequence because there exists a defeasible domain object (I: R) in S such that: R = Li –≺ (a presumption); or R = Li –≺ B1, …, Bk, and every Bt (1 ≤ t ≤ k) is an element Lj of the sequence appearing before Li (j < i).

A derivation for a literal L is called “defeasible” because, as we will show next, there may exist information in contradiction with L or any other literal appearing in the sequence, and under certain conditions this could prevent the acceptance of L as a warranted belief. It should be noted that rules from different informants can be combined to derive a literal. Also, note that the set S contains all the defeasible domain objects available for obtaining the derivation. Finally, it is important to observe that from the same IBDP it is possible to obtain several distinct defeasible derivations for a given literal. Furthermore, as the following example shows, from a given IBDP it is possible to obtain defeasible derivations for complementary literals (w.r.t. the strong negation “∼”).

Example 4.

Let us consider the IBDP PI6 from Example 2. From that program it is possible to obtain the following defeasible derivation for the literal “a”: the sequence ‘b, g, a’, which makes use of the defeasible domain objects (I2: a –≺ b, g), (I3: b –≺) and (I2: g –≺). Also, PI6 allows deriving the literal “∼a” using the defeasible domain objects (I4: ∼a –≺ c, d), (I6: c –≺) and (I5: d –≺) to obtain the sequence ‘c, d, ∼a’. Thus, from PI6 it is possible to obtain defeasible derivations for complementary literals.

As shown in Example 2, an IBDP may contain defeasible domain objects whose heads correspond to complementary literals. This is reasonable because different informants may provide pieces of knowledge giving reasons for or against a given conclusion. Moreover, it could be the case that the same informant, under different conditions, gives reasons for or against a literal. Consequently, as illustrated in Example 4, defeasible derivations for complementary literals can be obtained from the same IBDP. In order to be able to identify coherent sets of elements within an IBDP, the notion of a contradictory set of defeasible domain objects of an IBDP is defined next.

Definition 8 (Contradictory set).

Let P = (Δ, Σ) be an IBDP and S ⊆ Δ. We say that the set S is contradictory if and only if there exist two complementary literals L and ∼L such that S ⊢P L and S ⊢P ∼L.

To illustrate this notion, let PI6 be the IBDP from Example 2. Then, the set Ax = {(I2: a –≺ b, g), (I3: b –≺), (I2: g –≺), (I4: ∼a –≺ c, d), (I6: c –≺), (I5: d –≺)} is contradictory, whereas the set Ay = {(I2: a –≺ b, g), (I4: ∼a –≺ c, d), (I6: c –≺), (I5: d –≺)} is not. As a result, for instance, every superset of Ax will also be contradictory.
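The following sketch (ours) implements defeasible derivation as naive forward chaining, together with the contradiction test of Definition 8, and reproduces the Ax/Ay illustration above; the encoding of domain objects follows the earlier sketch.

```python
def derivable(S):
    """All literals defeasibly derivable from the domain objects in S,
    where each object is (informant, head, body)."""
    derived, changed = set(), True
    while changed:
        changed = False
        for _inf, head, body in S:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return derived

def complement(lit):
    """Complement w.r.t. strong negation: a <-> ~a."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def contradictory(S):
    """Definition 8: S derives two complementary literals."""
    lits = derivable(S)
    return any(complement(l) in lits for l in lits)

Ax = [("I2", "a", ("b", "g")), ("I3", "b", ()), ("I2", "g", ()),
      ("I4", "~a", ("c", "d")), ("I6", "c", ()), ("I5", "d", ())]
Ay = [("I2", "a", ("b", "g")), ("I4", "~a", ("c", "d")),
      ("I6", "c", ()), ("I5", "d", ())]
assert contradictory(Ax) and not contradictory(Ay)
```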

Observe that backing and detracting rules are not used for obtaining defeasible derivations. As will be shown next, they will only be used to build arguments that, respectively, support or challenge the consideration of informant agents. Moreover, as will be formalized in Section 4, these new types of argument will make it possible to argue about the arguments’ strength. The usual definition of argument is then extended to consider backing and detracting rules, when required. As a result, we will distinguish among three different types of arguments: the first type regards arguments that conclude literals, whereas the other two deal with arguments for or against informant agents, respectively.

Definition 9 (Claim argument).

Let P = (Δ, Σ) be an IBDP and L a literal. ⟨A, L⟩ is a claim argument for the literal L, built from P, if the following conditions hold:

  • (1) A ⊆ Δ,

  • (2) A ⊢P L,

  • (3) A is non-contradictory, and

  • (4) A is minimal: there is no B ⊊ A satisfying conditions 2 and 3.

Definition 10 (Backing argument).

Let I be a finite set of agent identifiers, P = (Δ, Σ) an IBDP and I ∈ I. ⟨A, I⟩ is a backing argument for the informant I, built from P, if A = {(I ⊕ L1, …, Ln)} ∪ A′, with A′ ⊆ Δ, and the following conditions hold:

  • (1) A ⊆ (Δ ∪ Σ),

  • (2) A′ ⊢P Li (1 ≤ i ≤ n),

  • (3) there is no (I: R) ∈ A′,

  • (4) A′ is non-contradictory, and

  • (5) A′ is minimal: there is no B ⊊ A′ satisfying conditions 2, 3 and 4.

Definition 11 (Detracting argument).

Let I be a finite set of agent identifiers, P = (Δ, Σ) an IBDP and I ∈ I. ⟨A, I⟩ is a detracting argument for the informant I, built from P, if A = {(I ⊘ L1, …, Ln)} ∪ A′, with A′ ⊆ Δ, and the following conditions hold:

  • (1) A ⊆ (Δ ∪ Σ),

  • (2) A′ ⊢P Li (1 ≤ i ≤ n),

  • (3) there is no (I: R) ∈ A′,

  • (4) A′ is non-contradictory, and

  • (5) A′ is minimal: there is no B ⊊ A′ satisfying conditions 2, 3 and 4.

Briefly, as specified by the last clause in Definitions 9–11, all arguments share the characteristic of having a minimal and non-contradictory set of rules that allows defeasibly deriving a conclusion or the conditions for or against the consideration of an informant agent. In particular, this minimality requirement aligns with the definition of argument structures in [25], and has the aim of avoiding irrelevant information in the arguments (consequently, minimizing the points of attack). Also, the first clause in Definitions 10 and 11 is meant to ensure that the backing and detracting rules (as well as the corresponding defeasible domain objects) used for building the arguments actually belong to the IBDP from which the arguments are built. Another feature shared by backing and detracting arguments is that, when obtaining the required derivations, they cannot make use of defeasible domain objects provided by the informant they support or challenge, respectively. This constraint is meant to maintain a kind of coherence within the arguments that is not captured by the requirement of them being non-contradictory, as discussed below.

On the one hand, the third clause of Definition 10 has the aim of preventing the construction of backing arguments for a given informant that are based on information provided by that same informant. On the other hand, for detracting arguments, this constraint might not seem as intuitive as it is for backing arguments. This is because, in some cases, one might take into account pieces of information provided by an agent I in order to distrust that agent. For instance, suppose the existence of a medical application where knowledge is expressed through an IBDP. Now, consider the existence of a detracting rule (IX ⊘ ∼physician_IX), expressing reasons against the consideration of informant IX if he is not a physician. Then, if agent IX says he is not a physician (for instance, by providing a defeasible domain object (IX: ∼physician_IX –≺)), one could think of using this piece of knowledge for building, together with the previous detracting rule, a detracting argument for agent IX. Nonetheless, it should be noted that such a detracting argument for IX would not only be against other arguments built using the information provided by IX, but would also be challenging itself, since its reasoning is based on information stated by IX. Hence, the third clause in Definition 11 is meant to avoid situations like these, where detracting arguments would end up being self-attacking (see Definition 14 below) and therefore inconsistent.

It is worth mentioning that backing and detracting arguments (also, their homonym rules) resemble the backing and undercutting arguments (respectively, rules) of [19]. However, their intended meanings differ: whereas backing and undercutting arguments in [19] respectively provide reasons for and against the use of specific defeasible rules (with no notion of information source), the aim of backing and detracting arguments here is to allow arguing about the arguments’ strength by providing reasons for and against the consideration of any piece of information provided by some informant agents. Also, differently from [19], in our approach there is no explicit notion of support between arguments, as it is usually done in the literature of bipolar argumentation (see [20] for an overview).

Finally note that, since informant rules are not used for obtaining defeasible derivations, claim arguments will not include informant rules. In contrast, a backing (respectively, detracting) argument will only include one backing (respectively, detracting) rule. Nevertheless, despite their common features, the three argument types are mutually exclusive. When convenient, we will abstract from an argument’s type, referring to it just as an argument. Then, given an argument built from an IBDP, we can define the notion of sub-argument as follows.

Definition 12 (Sub-argument).

Let P = (Δ, Σ) be an IBDP and ⟨A1, L1⟩, ⟨A2, L2⟩ two arguments built from P. We say that ⟨A2, L2⟩ is a sub-argument of ⟨A1, L1⟩ iff A2 ⊆ A1.

It should be noted that every proper sub-argument of a backing or a detracting argument is a claim argument. In general, if ⟨A2, Y⟩ is a proper sub-argument of ⟨A1, X⟩ (i.e., A2 ⊊ A1), then ⟨A2, Y⟩ is a claim argument.

Example 5.

Given the agent’s specification introduced in Example 3, agent I6 will be able to build the following arguments from PI6, among others:

  • ⟨A1, h⟩, with A1 = {(I1: h –≺ a), (I2: a –≺ b, g), (I3: b –≺), (I2: g –≺)}

  • ⟨A2, a⟩, with A2 = {(I2: a –≺ b, g), (I3: b –≺), (I2: g –≺)}

  • ⟨A3, ∼a⟩, with A3 = {(I4: ∼a –≺ c, d), (I6: c –≺), (I5: d –≺)}

  • ⟨A4, I5⟩, with A4 = {(I5 ⊘ g, q), (I2: g –≺), (I3: q –≺ i), (I3: i –≺)}

  • ⟨A5, I5⟩, with A5 = {(I5 ⊕ p), (I4: p –≺)}

  • ⟨A6, I5⟩, with A6 = {(I5 ⊘ j), (I7: j –≺)}

  • ⟨A7, I6⟩, with A7 = {(I6 ⊘ j), (I7: j –≺)}

Note that ⟨A1, h⟩, ⟨A2, a⟩ and ⟨A3, ∼a⟩ are claim arguments. Also, ⟨A5, I5⟩ is a backing argument, whereas ⟨A4, I5⟩, ⟨A6, I5⟩ and ⟨A7, I6⟩ are detracting arguments. In addition, observe that ⟨A2, a⟩ is a proper sub-argument of ⟨A1, h⟩.

The above example illustrates that from a given IBDP it is possible to build arguments that are in conflict with one another, such as the claim arguments ⟨A2, a⟩ and ⟨A3, ∼a⟩. These conflicts become evident since the arguments’ conclusions correspond to complementary literals. On the other hand, the existence of backing and detracting arguments may lead to new kinds of conflict, which will allow arguing about the arguments’ strength in the resolution of other conflicts. The following definitions formalize the different kinds of conflict that may occur between arguments built from an IBDP, from here on referred to as attacks.

The first kind of attack, called conclusion attack (or c-attack), captures the usual conflict in an argumentation system, where the claim of one argument contradicts a conclusion (premise, intermediate or final claim) of another.

Definition 13 (Conclusion attack).

Let P = (Δ, Σ) be an IBDP, ⟨A1, L1⟩ a claim argument built from P, and ⟨A2, L2⟩ any argument built from P. We say that ⟨A1, L1⟩ c-attacks ⟨A2, L2⟩ at the literal L iff there exists a claim sub-argument ⟨A, L⟩ of ⟨A2, L2⟩ such that L1 and L are complementary literals with respect to “∼”. We refer to ⟨A, L⟩ as the disagreement sub-argument in the attack.

The second kind of attack we consider, called strength attack (or s-attack), aims at capturing the intuition that a detracting argument for a given informant challenges any argument that makes use of a defeasible domain object provided by that informant agent. Then, since the strength of an argument is determined in terms of the strength of its informant agents, according to the adopted informant-based comparison criterion, the detracting argument is somehow attacking the other argument’s strength.

Definition 14 (Strength attack).

Let I be a finite set of agent identifiers, I ∈ I, P = (Δ, Σ) an IBDP, ⟨A1, I⟩ a detracting argument built from P, and ⟨A2, L2⟩ any argument built from P. We say that ⟨A1, I⟩ s-attacks ⟨A2, L2⟩ at the informant I iff there exists a defeasible domain object (I: R) ∈ A2.

The next kind of attack we consider, referred to as strength-defense attack (or sd-attack), corresponds to the situation where a backing argument for a given informant exists, and the backing argument attacks a detracting argument for that informant. In such a situation, we consider that the backing argument is defending the strength of the argument being attacked by the detracting argument.

Definition 15 (Strength-defense attack).

Let I be a finite set of agent identifiers, I ∈ I, P = (Δ, Σ) an IBDP, ⟨A1, I⟩ a backing argument built from P, and ⟨A2, I⟩ a detracting argument built from P. We say that ⟨A1, I⟩ sd-attacks ⟨A2, I⟩.

Given the existence of a backing argument for an informant I, which might originate a strength-defense attack, any detracting argument for I can be considered as providing a counter-defense: it counters the defense of the strength of the arguments that rely on information provided by I. This kind of attack, called strength-counter-defense attack (or scd-attack), is defined as follows.

Definition 16 (Strength-counter-defense attack).

Let I be a finite set of agent identifiers, I ∈ I, P = (Δ, Σ) an IBDP, ⟨A1, I⟩ a detracting argument built from P, and ⟨A2, I⟩ a backing argument built from P. We say that ⟨A1, I⟩ scd-attacks ⟨A2, I⟩.

Consequently, whenever an sd-attack exists, an scd-attack will also exist and vice-versa. This is because, as explained above, the latter is meant to provide a counter-defense to the defense provided by a backing argument. More generally, whenever a backing argument and a detracting argument for the same informant I exist, a 2-cycle of attacks will exist between such arguments (respectively, an sd-attack from the backing argument to the detracting argument, and an scd-attack from the detracting argument to the backing argument).

Finally, if an argument ⟨A1, L1⟩ attacks (either c-attacks, s-attacks, sd-attacks or scd-attacks) another argument ⟨A2, L2⟩, we will say that ⟨A1, L1⟩ is a counter-argument of ⟨A2, L2⟩.

Example 6.

Given the arguments listed in Example 5, some attacks are identified next: argument ⟨A3, ∼a⟩ c-attacks arguments ⟨A1, h⟩ and ⟨A2, a⟩ at the literal “a”; the disagreement sub-argument in both cases is ⟨A2, a⟩. Similarly, ⟨A2, a⟩ c-attacks ⟨A3, ∼a⟩ at the literal “∼a”, where the disagreement sub-argument is ⟨A3, ∼a⟩. In addition, the detracting arguments ⟨A4, I5⟩ and ⟨A6, I5⟩ s-attack ⟨A3, ∼a⟩ because (I5: d –≺) ∈ A3. Then, the detracting argument ⟨A7, I6⟩ s-attacks ⟨A3, ∼a⟩ because (I6: c –≺) ∈ A3. On the other hand, for instance, the backing argument ⟨A5, I5⟩ sd-attacks the detracting arguments ⟨A4, I5⟩ and ⟨A6, I5⟩, and both ⟨A4, I5⟩ and ⟨A6, I5⟩ scd-attack argument ⟨A5, I5⟩.

These arguments and attacks are illustrated in Fig. 2. In particular, arguments are depicted with triangles, and attacks are represented with dashed arrows between the arguments. Furthermore, the circles beside the symbol of the defeasible rule “–≺” within the arguments indicate the informant associated with each defeasible domain object in those arguments.

Fig. 2. Arguments and attacks from Example 6.
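To make the attack definitions concrete, the following sketch (ours) mechanically re-derives some of the attacks of Example 6. The argument encoding is illustrative, and the c-attack check only covers attacks on an argument’s final claim (the general sub-argument case is omitted).

```python
# Arguments carry their domain objects (informant, rule) plus, for
# informant arguments, the supported/challenged informant.
A2 = {"kind": "claim", "claim": "a",
      "objs": {("I2", "a –≺ b,g"), ("I3", "b –≺"), ("I2", "g –≺")}}
A3 = {"kind": "claim", "claim": "~a",
      "objs": {("I4", "~a –≺ c,d"), ("I6", "c –≺"), ("I5", "d –≺")}}
A4 = {"kind": "detracting", "informant": "I5",
      "objs": {("I2", "g –≺"), ("I3", "q –≺ i"), ("I3", "i –≺")}}
A5 = {"kind": "backing", "informant": "I5", "objs": {("I4", "p –≺")}}

def complement(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def c_attacks_final(att, tgt):
    """c-attack on the target's final claim (Definition 13, special case)."""
    return att["kind"] == "claim" == tgt["kind"] and \
        att["claim"] == complement(tgt["claim"])

def s_attacks(det, tgt):
    """A detracting argument s-attacks any user of its informant (Def. 14)."""
    return det["kind"] == "detracting" and \
        any(i == det["informant"] for i, _ in tgt["objs"])

def sd_attacks(back, det):
    """Backing vs detracting argument for the same informant (Def. 15)."""
    return back["kind"] == "backing" and det["kind"] == "detracting" \
        and back["informant"] == det["informant"]

assert c_attacks_final(A2, A3) and c_attacks_final(A3, A2)  # symmetric
assert s_attacks(A4, A3)   # (I5: d –≺) ∈ A3
assert sd_attacks(A5, A4)  # and, conversely, A4 scd-attacks A5
```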

As the preceding example shows, conclusion attacks are symmetric in a sense. That is, if ⟨A1, L1⟩ c-attacks ⟨A2, L2⟩, then the disagreement sub-argument ⟨A, L⟩ of ⟨A2, L2⟩ is such that it c-attacks ⟨A1, L1⟩. In particular, if ⟨A1, L1⟩ c-attacks ⟨A2, L2⟩ on its final conclusion L2, then ⟨A2, L2⟩ c-attacks ⟨A1, L1⟩; these situations were illustrated by arguments ⟨A3, ∼a⟩, ⟨A1, h⟩ and ⟨A2, a⟩ in Example 6. In contrast, strength attacks need not be symmetric. This is due to the fact that the attacking argument is a detracting argument, and no particular attacked sub-argument is identified. However, this does not prevent detracting arguments from attacking each other. For instance, it could be the case that ⟨A1, I1⟩ s-attacks ⟨A2, I2⟩ at the informant I1, and ⟨A2, I2⟩ s-attacks ⟨A1, I1⟩ at the informant I2; what would occur in such a case is that there exist two defeasible domain objects (I1: R) and (I2: R′) such that (I1: R) ∈ A2 and (I2: R′) ∈ A1. On the other hand, as mentioned before, whenever a backing argument B sd-attacks a detracting argument D, D scd-attacks B; moreover, any other detracting argument for the same informant will also scd-attack B (and consequently will be sd-attacked by B). That is, the existence of a backing argument and a detracting argument for the same informant leads to a two-way conflict between those arguments (the former being an sd-attack and the latter an scd-attack); again, these situations were illustrated by arguments ⟨A5, I5⟩, ⟨A4, I5⟩ and ⟨A6, I5⟩ in Example 6.

Given that an agent may build multiple arguments, which in turn may have several counter-arguments, in order to determine the agent’s beliefs we need to determine the undefeated arguments. To establish whether an argument ⟨A, L⟩ is undefeated, it is necessary to explicitly account for all its counter-arguments. Let ⟨A1, L1⟩, ⟨A2, L2⟩, …, ⟨Ak, Lk⟩ be the counter-arguments of ⟨A, L⟩. If any counter-argument ⟨Ai, Li⟩ is (according to the informant-based comparison criterion) better than or unrelated to ⟨A, L⟩, then ⟨Ai, Li⟩ is a candidate for defeating ⟨A, L⟩. However, if argument ⟨A, L⟩ is better than ⟨Ai, Li⟩, then this counter-argument will not be taken into consideration as a defeater for ⟨A, L⟩. Therefore, in order to determine the defeaters of an argument, we will make use of the informant-based comparison criterion. Then, once all the successful attackers (i.e., the defeaters) of an argument are identified, we will be able to determine its acceptance status and establish whether both the argument and its conclusion are warranted or not. These issues will be addressed in the following section.

4. Defeats

Agents build arguments from their knowledge bases with the aim of establishing their beliefs. However, as discussed in Section 3, the existence of conflicting information within an agent’s knowledge base leads to the existence of attacks between those arguments. Furthermore, since attacks could succeed or fail, it is necessary to have a comparison criterion to determine whether the attacking argument in a conflict prevails, in which case it becomes a defeater. In general, an attack will be considered to be effective when the set of informants from the attacked argument is not better than that of the attacking argument, with respect to an informant-based comparison criterion.

When comparing sets of informants corresponding to arguments built from an agent’s IBDP, the comparison criterion “≺” will be the one provided in the agent’s specification. In other words, the informant-based comparison criterion is modular. However, in order to be considered valid, the criterion has to satisfy some constraints. The notion of a valid informant-based comparison criterion is formalized below. Following the usual convention, IS2 ≺I IS1 means that the set of informants IS1 is strictly better than (or preferred to) the set IS2, and IS2 ⊀I IS1 means that the set IS1 is not better than (not preferred to) the set IS2.

Definition 17 (Valid informant-based comparison criterion).

Let I be a finite set of agent identifiers and (I, PI, <coI, ≺I) the specification of an agent I ∈ I. We say that ≺I is a valid informant-based comparison criterion iff it holds that: (1) for every IS1, IS2 ⊆ I, if IS2 ≺I IS1, then there exist I1 ∈ IS1 and I2 ∈ IS2 such that I2 <coI I1 (based on <coI); (2) for every IS ⊆ I, IS ⊀I IS (irreflexivity); and (3) for every IS1, IS2 ⊆ I, if IS2 ≺I IS1, then IS1 ⊀I IS2 (asymmetry).

Following the preceding definition, a valid informant-based comparison criterion should be based on the agent’s credibility order, and it should be irreflexive and asymmetric. We do not intend these conditions to be the only ones that can be satisfied by an informant-based comparison criterion; rather, we aim at establishing the minimum requirements surrounding such criteria. Specifically, the first condition is meant to link an informant-based comparison criterion with the credibility order it is based on, so that it actually makes use of the information provided by that order and does not contradict it when considering the corner case of singleton sets. That is, a valid informant-based comparison criterion is such that, when comparing unitary sets of informants {I1} and {I2}, if {I2} ≺I {I1} then I2 <coI I1; in other words, if a unitary set of informants is better than another unitary set of informants, then the informant in the latter set is less credible than the one in the former set (according to the credibility order). On the other hand, by requiring valid criteria to be irreflexive and asymmetric (second and third conditions), we are establishing the characteristic of them being strict. Finally, as stated above, a valid informant-based comparison criterion might impose additional constraints as long as it complies with the conditions established in Definition 17.

Next, we will introduce a valid informant-based comparison criterion that will be used in our examples for determining the successful attacks between the arguments built from an IBDP. This criterion, called single informant credibility criterion, is an adapted version of the single rule criterion from [43], modified to account for sets of informant agents. Intuitively, it prefers a set of informants IS1 over a set of informants IS2 if there is at least one informant in IS1 that is more credible than an informant in IS2, and no informant in IS2 is more credible than an informant in IS1. Formally:

Definition 18 (Single informant credibility criterion).

Let I be a finite set of agent identifiers, <co a credibility order over I and IS1, IS2 ⊆ I. We say that IS1 is preferred to IS2, denoted IS2 ≺S IS1, iff it holds that: (1) there exist Ii ∈ IS1 and Ij ∈ IS2 such that Ij <co Ii; and (2) there is no Ik ∈ IS1 and no It ∈ IS2 such that Ik <co It.

Proposition 1.

Let I be a finite set of agent identifiers and (I, PI, <coI, ≺I) the specification of an agent I ∈ I, where ≺I = ≺S is the single informant credibility criterion. It holds that ≺S is a valid informant-based comparison criterion.

Proof.

We have to show that for every IS1, IS2 ⊆ I it holds that:

  • (1) If IS2 ≺S IS1, then there exist I1 ∈ IS1 and I2 ∈ IS2 such that I2 <coI I1 (based on <coI). This follows directly from the first clause in Definition 18.

  • (2) IS1 ⊀S IS1 (irreflexivity). Suppose, by contradiction, that IS1 ≺S IS1. Then, by the first clause of Definition 18, there would exist Ik, It ∈ IS1 such that Ik <coI It, contradicting the second clause of Definition 18.

  • (3) If IS2 ≺S IS1, then IS1 ⊀S IS2 (asymmetry). Suppose, by contradiction, that IS1 ≺S IS2. Then, by the second clause of Definition 18, it would be the case that there is no Ik ∈ IS2 and no It ∈ IS1 such that Ik <coI It. However, this would violate the first clause of Definition 18 for IS2 ≺S IS1, contradicting that hypothesis.

 □

It is worth remarking that, as specified by Definition 17, a valid informant-based comparison criterion is not required to be transitive. As discussed before, this does not mean that a valid criterion cannot be transitive; thus, transitive as well as non-transitive criteria can be considered as long as they meet the requirements imposed by Definition 17. Then, by not imposing such a constraint on valid criteria, we allow for a wider family of criteria to be considered. Among others, this allows us to consider the single informant credibility criterion which, as introduced in Definition 18, is not transitive. To illustrate the fact that this criterion does not satisfy transitivity, let us consider the following example. Consider the set of agents {I1, I2, I3, I4} and a credibility order <co such that I1 <co I2 and I3 <co I4. By applying the single informant credibility criterion ≺S, it holds that {I1} ≺S {I2, I3} and {I2, I3} ≺S {I4}. However, since I1 and I4 are incomparable w.r.t. <co, it does not hold that {I1} ≺S {I4} and thus, the criterion is not transitive. Notwithstanding this, the non-transitivity of the single informant credibility criterion does not undermine its usefulness. In fact, this criterion is analogous to the rules priorities criterion from the standard literature of structured argumentation (see e.g., [25, Def. 3.7]). On the other hand, as an example of a transitive and valid informant-based comparison criterion, we can consider the max-max lifting criterion introduced in [9] when taking a credibility order as the basis of the ≻ relation (see Section 6 for a detailed discussion).
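A direct implementation of Definition 18 (a sketch of ours) also makes the non-transitivity example above executable:

```python
# Single informant credibility criterion over an explicit credibility
# order, given as a set of (less_credible, more_credible) pairs.
def preferred(is1, is2, co):
    """True iff IS1 is preferred to IS2 (IS2 ≺S IS1) w.r.t. <co."""
    some_better = any((j, i) in co for i in is1 for j in is2)     # clause (1)
    none_worse = not any((k, t) in co for k in is1 for t in is2)  # clause (2)
    return some_better and none_worse

co = {("I1", "I2"), ("I3", "I4")}  # the order used in the text's example
assert preferred({"I2", "I3"}, {"I1"}, co)  # {I1} ≺S {I2, I3}
assert preferred({"I4"}, {"I2", "I3"}, co)  # {I2, I3} ≺S {I4}
assert not preferred({"I4"}, {"I1"}, co)    # not {I1} ≺S {I4}: non-transitive
```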

We will now turn to establish the conditions under which the attacks introduced in Section 3 succeed and become defeats by making use of a valid informant-based comparison criterion. As mentioned before, in general, an attack will succeed if the set of informants from the attacking argument is not worse than that of the attacked argument, with respect to an informant-based comparison criterion. Hence, for each kind of attack, we will establish the sets of informants to be accounted for by the comparison criterion. For this, we need to formally characterize the set of informants associated with an argument.

Definition 19 (Argument informants set).

Let I be a finite set of agent identifiers and ⟨A, L⟩ an argument built from an IBDP. The informants set of ⟨A, L⟩ is Inf(⟨A, L⟩) = {I ∈ I | (I: R) ∈ A}.

Having established a mechanism for identifying the set of informants associated with an argument, we now turn to formalize the different kinds of defeat that may occur between a pair of arguments built from an IBDP. In the following, whenever we want to refer to a generic argument (without caring for its conclusion or the informant it supports or challenges), for convenience we will sometimes write A instead of ⟨A, L⟩ for claim arguments (respectively, A instead of ⟨A, I⟩ for backing and detracting arguments); this is because an argument can be unequivocally identified by its associated set of rules. Then, for instance, Inf(⟨A, L⟩) and Inf(⟨A, I⟩) will sometimes be written as Inf(A).

Definition 20 (Conclusion defeat).

Let I be a finite set of agent identifiers, (I, PI, <coI, ≺I) the specification of an agent I ∈ I, and A1, A2 two arguments built from PI such that A1 c-attacks A2 at the literal L. We say that A1 c-defeats A2 (equivalently, A1 is a c-defeater of A2) iff the disagreement sub-argument ⟨A, L⟩ of A2 is such that Inf(A1) ⊀I Inf(A). In particular, if Inf(A) ≺I Inf(A1), we say that A1 is a proper c-defeater of A2; otherwise, we say that A1 is a blocking c-defeater of A2.

Definition 21 (Strength defeat).

Let I be a finite set of agent identifiers, (I, PI, <coI, ≺I) the specification of an agent I ∈ I, and A1, A2 two arguments built from PI such that A1 s-attacks A2. We say that A1 s-defeats A2 (equivalently, A1 is an s-defeater of A2) and, in particular, A1 is a proper s-defeater of A2.

Definition 22 (Strength-defense defeat).

Let I be a finite set of agent identifiers, (I, PI, <coI, ≺I) the specification of an agent I ∈ I, and A1, A2 two arguments built from PI such that A1 sd-attacks A2. We say that A1 sd-defeats A2 (equivalently, A1 is an sd-defeater of A2) iff Inf(A1) ⊀I Inf(A2). In particular, if Inf(A2) ≺I Inf(A1), we say that A1 is a proper sd-defeater of A2; otherwise, we say that A1 is a blocking sd-defeater of A2.

Definition 23 (Strength-counter-defense defeat).

Let I be a finite set of agent identifiers, (I, PI, <coI, ≺I) the specification of an agent I ∈ I, and A1, A2 two arguments built from PI such that A1 scd-attacks A2. We say that A1 scd-defeats A2 (equivalently, A1 is an scd-defeater of A2) iff Inf(A1) ⊀I Inf(A2). In particular, if Inf(A2) ≺I Inf(A1), we say that A1 is a proper scd-defeater of A2; otherwise, we say that A1 is a blocking scd-defeater of A2.

There exist some differences in the way in which the different types of attack are resolved. Specifically, the differences lie in the sets of informants that are compared in each case. For the resolution of c-attacks, it suffices to compare the set of informants from the attacking argument with the set of informants from the disagreement sub-argument. Note that the resolution of c-attacks in this way is analogous to the standard resolution of rebutting attacks in the literature of structured argumentation (see e.g., [25,36]). When resolving s-attacks, it is important to recall the nature of the attacking argument, which provides reasons against the consideration of a given informant used by the attacked argument. To prevent the success of an s-attack, the attacked argument would have to somehow give reasons for the consideration of that informant, and these would have to be provided by informants that are not worse than the ones associated with the attacking argument. However, since the attacked argument does not give reasons for the consideration of the challenged informant, the s-attack will always succeed (i.e., no comparison between sets of informants is made). Again, the resolution of s-defeats relates to the way in which undercutting attacks are resolved in the literature.

Then, sd-attacks and scd-attacks are handled analogously. In these two cases, there exists a conflict between a backing argument and a detracting argument, where they respectively provide reasons for and against the consideration of a given informant. Therefore, in the resolution of these types of attack, the entire sets of informants associated with the attacking and the attacked argument are compared. Given the resemblance between backing and detracting arguments as proposed here and the backing and undercutting arguments of [19], it can be noted that sd-defeats and scd-defeats somehow relate to the implicit defeats of [19], and are resolved analogously; notwithstanding this, there is a clear difference between them since our approach compares the sets of informant agents of the backing and detracting arguments whereas [19] makes use of a preference relation which might not take into account the information sources of arguments.

Finally note that, in all cases where sets of informants are compared to determine the success of an attack (thus turning it into a defeat), the traditional form of resolution of [3] is adopted. Namely, in our approach, an attack succeeds if the set of informants from the attacked argument is not better (w.r.t. the informant-based comparison criterion) than the set of informants from the attacking argument. In contrast, works like [6,31] consider that preferences play an additional role, serving to repair the attack relation in order to account for conflicts derived from the preferences, even in cases where those conflicts might not have been originally expressed within the attack relation. Among others, they both consider the existence of a defeat from an argument A1 to an argument A2 in a scenario where A2 attacks A1 (and not vice-versa) but A1 is strictly preferred to A2.
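Putting Definitions 18 and 20 together, the following sketch (ours) classifies a c-attack as a proper c-defeat, a blocking c-defeat, or a failed attack; it reproduces the comparison carried out in Example 7 below.

```python
def preferred(is1, is2, co):
    """IS2 ≺S IS1 per Definition 18, over <co given as (less, more) pairs."""
    return any((j, i) in co for i in is1 for j in is2) and \
        not any((k, t) in co for k in is1 for t in is2)

def classify_c_attack(inf_att, inf_dis, co):
    """inf_att: informants of the attacker; inf_dis: informants of the
    disagreement sub-argument (Definition 20)."""
    if preferred(inf_dis, inf_att, co):  # attacked side strictly better
        return "no defeat"
    if preferred(inf_att, inf_dis, co):  # attacking side strictly better
        return "proper c-defeat"
    return "blocking c-defeat"           # neither side prevails

# Example 7: A3 (informants {I4, I5, I6}) against the disagreement
# sub-argument A2 (informants {I2, I3}); the raw pairs of <coI6 suffice here.
co_I6 = {("I2", "I5"), ("I3", "I5"), ("I5", "I1"), ("I2", "I4"),
         ("I3", "I4"), ("I4", "I7"), ("I7", "I8"), ("I6", "I9")}
assert classify_c_attack({"I4", "I5", "I6"}, {"I2", "I3"},
                         co_I6) == "proper c-defeat"
```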

Example 7.

Let us consider (I6, PI6, <coI6, ≺I6), the specification of agent I6 given in Example 3, and the arguments listed in Example 5. Also, suppose that ≺I6 is the single informant credibility criterion from Definition 18. Then, we identify the following defeats (among others): ⟨A3, ∼a⟩ c-defeats ⟨A1, h⟩ and ⟨A2, a⟩. In both cases, the disagreement sub-argument ⟨A2, a⟩ is such that Inf(⟨A2, a⟩) = {I2, I3}. Then, since Inf(⟨A3, ∼a⟩) = {I4, I5, I6}, Inf(⟨A3, ∼a⟩) ⊀I6 Inf(⟨A2, a⟩). In particular, Inf(⟨A2, a⟩) ≺I6 Inf(⟨A3, ∼a⟩), so ⟨A3, ∼a⟩ is a proper c-defeater of ⟨A1, h⟩ and ⟨A2, a⟩. Continuing with the resolution of the attacks from Example 6, the detracting arguments ⟨A4, I5⟩ and ⟨A6, I5⟩ are proper s-defeaters of ⟨A3, ∼a⟩. Similarly, the detracting argument ⟨A7, I6⟩ is a proper s-defeater of ⟨A3, ∼a⟩. On the other hand, the backing argument ⟨A5, I5⟩ properly sd-defeats ⟨A4, I5⟩ since Inf(⟨A5, I5⟩) = {I4}, Inf(⟨A4, I5⟩) = {I2, I3} and Inf(⟨A4, I5⟩) ≺I6 Inf(⟨A5, I5⟩). Finally, the detracting argument ⟨A6, I5⟩ is a proper scd-defeater of ⟨A5, I5⟩; this is because Inf(⟨A6, I5⟩) = {I7}, Inf(⟨A5, I5⟩) = {I4} and Inf(⟨A5, I5⟩) ≺I6 Inf(⟨A6, I5⟩).

The defeats are illustrated in Fig. 3. The notation for the arguments is the same as the one used in Fig. 2; on the other hand, defeats are represented with solid arrows between the arguments.

Fig. 3. Arguments and defeats from Example 7.

5. Warrant

To determine whether an agent I can accept a literal L as a belief, it is necessary to find out if, from its IBDP PI, it is possible to build an argument ⟨A, L⟩ that ends up undefeated after all things considered. Naturally, this will require the consideration of all possible defeaters for ⟨A, L⟩. Then, given a defeater ⟨B, H⟩, defeaters for it will also have to be considered, as well as the defeaters for those defeaters, and so on. This dialectical analysis leads to characterizing the notion of an argumentation line, which constitutes a sequence of arguments where each argument is a defeater of its predecessor.

Definition 24 (Argumentation line).

Let I be a finite set of agent identifiers, (I, PI, <coI, ≻I) the specification of an agent I ∈ I, and A0 an argument built from PI. An argumentation line for A0 is a sequence of arguments built from PI, denoted Λ = [A0, A1, A2, ..., An], where each element Ai of the sequence is a defeater of its predecessor Ai−1 (0 < i ≤ n).

Given an argumentation line Λ for an argument A0, every argument in an odd position of Λ (counting positions from one, e.g., A2 and A4, if they exist) is said to be a supporting argument for A0, since such arguments reinstate A0; the set of supporting arguments will be denoted as ΛS. Analogously, arguments in even positions of Λ are called interfering arguments, and the corresponding set will be denoted as ΛI.
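For illustration, the following Python sketch (ours; argument names are just strings here) computes the partition of an argumentation line into its supporting and interfering sets, counting positions from one as described above.

    def partition_line(line):
        # Arguments in odd positions (counting from one) support the first
        # argument of the line; those in even positions interfere with it.
        supporting = [a for i, a in enumerate(line) if i % 2 == 0]
        interfering = [a for i, a in enumerate(line) if i % 2 == 1]
        return supporting, interfering

    # Line Λ2 from Example 8 below:
    lam2 = ['<A1,h>', '<A3,a>', '<A7,I6>']
    print(partition_line(lam2))
    # (['<A1,h>', '<A7,I6>'], ['<A3,a>'])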

Example 8.

Given the specification of agent I6 introduced in Example 3, the arguments built from its IBDP PI6 (Example 5) and the defeats between them listed in Example 7, we obtain (among others) the following argumentation lines for ⟨A1,h⟩: Λ1 = [⟨A1,h⟩, ⟨A3,a⟩, ⟨A4,I5⟩, ⟨A5,I5⟩, ⟨A6,I5⟩], whose set of supporting arguments is ΛS1 = {⟨A1,h⟩, ⟨A4,I5⟩, ⟨A6,I5⟩} and whose set of interfering arguments is ΛI1 = {⟨A3,a⟩, ⟨A5,I5⟩}; Λ2 = [⟨A1,h⟩, ⟨A3,a⟩, ⟨A7,I6⟩], with ΛS2 = {⟨A1,h⟩, ⟨A7,I6⟩} and ΛI2 = {⟨A3,a⟩}; and Λ3 = [⟨A1,h⟩, ⟨A3,a⟩, ⟨A6,I5⟩], whose set of supporting arguments is ΛS3 = {⟨A1,h⟩, ⟨A6,I5⟩} and whose set of interfering arguments is ΛI3 = {⟨A3,a⟩}.

There may exist argumentation lines that lead to fallacious chains of reasoning. In order to avoid those, we impose some restrictions on argumentation lines, to distinguish the acceptable argumentation lines. The first situation we want to avoid is to have infinite chains of reasoning; this is captured by the first and third clauses in Definition 26, by requiring an acceptable argumentation line to be a finite sequence, and to avoid the introduction of repeated arguments and disagreement sub-arguments. Another constraint imposed on acceptable argumentation lines is a kind of consistency within the sets of supporting and interfering arguments, in order to prevent an argument from being defended by another argument that is in conflict with it; this intuition is modeled in the second clause of Definition 26, requiring the sets of supporting and interfering arguments of an argumentation line to be non-contradictory.

Let us now analyze the inclusion of blocking defeaters in an argumentation line. The existence of a blocking defeat from Ai to Ai−1 in an argumentation line implies that the sets of informants of the two arguments were incomparable under the adopted comparison criterion.6 In addition, if there exists an argument Ai+1 that is a blocking defeater of Ai, this would imply that the comparison criterion could not resolve that conflict either. As a result, if Ai+1 were to be included in the corresponding argumentation line, it would somehow imply that Ai−1 prevails over Ai just because it has argument Ai+1 supporting it (which, in turn, was not better than Ai). To avoid these issues, consecutive blocking defeaters are not allowed in an acceptable argumentation line; this is captured in the fourth clause of Definition 26.

Next, let us consider the inclusion of s-defeats in an argumentation line. As expressed before, s-defeats (corresponding to s-attacks, which always succeed) are aimed at challenging the strength of the defeated argument by targeting one of its informant agents. This is because, as discussed earlier, we consider that the strength of an argument is linked to the credibility of its informant agents. However, it should be noted that the existence of an s-defeat towards an argument is not, on its own, sufficient to establish that the defeat actually took place by challenging the defeated argument's strength. This is due to the fact that not every informant providing information used for building an argument is relevant for determining its strength. For instance, as expressed in Definition 20, the strength of a claim argument A that c-attacks an argument B, with C being the disagreement sub-argument, will be determined by a subset of its informants ISA ⊆ Inf(A) whose comparison against Inf(C) resolves the attack, where Inf(C) is the entire set of informants of argument C; specifically, as shown by Definition 17, some informants might not be accounted for when comparing the two arguments. Consequently, the strength of an argument is determined in the context of the resolution of an attack and, as a result, we consider that the strength of an argument (possibly established by alternative subsets of its informants) is not absolute; rather, it is relative to the resolution of each attack it is involved in, as different sets of agents may be considered in each case to determine the argument's strength.

To capture these intuitions, we next introduce the notion of strength-determining set of informants of an argument in an argumentation line, which corresponds to a set of informants of the argument that provides the necessary strength for its inclusion as a defeater of its predecessor in the line. In particular, as will be shown later in Definition 26, this notion will serve to identify s-defeaters that indeed defeat an argument by challenging its strength, because they provide reasons against one of its strength-determining informants. For instance, consider the argumentation line Λ′1 = [⟨A1,h⟩, ⟨A3,a⟩], which is a sub-sequence of the line Λ1 (also, of the line Λ3) from Example 8, where ⟨A3,a⟩ proper c-defeats argument ⟨A1,h⟩; in this case, the strength-determining sets of informants of ⟨A3,a⟩ would be {I4} and {I5}, since the informants in these sets are individually better than every informant used by the disagreement sub-argument ⟨A2,a⟩. Furthermore, since the strength-determining set of informants of an argument may not be unique, the following definition makes it possible to obtain every set of informants meeting that requirement:

Definition 25 (Strength-determining sets of informants).

Let I be a finite set of agent identifiers, (I, PI, <coI, ≻I) the specification of an agent I ∈ I, A an argument built from PI and Λ an argumentation line ending in A. We define the strength-determining sets of informants of A in Λ as the elements of the set returned by the function StrDetInf defined in Fig. 4.

Fig. 4.

Function StrDetInf characterizing the strength-determining sets of informants.

Note that, for the first argument in an argumentation line, the StrDetInf function returns a singleton set containing the entire set of informant agents of the argument; this is so because that argument was not introduced in the line as a defeater of a predecessor. Hence, since its strength was not accounted for in the resolution of a previous attack, we deem all its informants able to determine its strength. The same situation occurs when the argument under consideration is a detracting argument that originates an s-defeat; in such a case, we consider its entire set of informants to be the only strength-determining set, because no comparison is made for the resolution of the s-attack into an s-defeat. In every other case, each set of informants that is able to make the argument defeat its predecessor (through a proper or a blocking defeat) will be a strength-determining set of informants.
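Since the precise definition of StrDetInf is given in Fig. 4 (not reproduced here), the following Python sketch should be read only as an illustration of the behaviour just described. The callbacks informants, defeat_type and better are assumptions of this sketch, and the subset search may over-approximate the result (e.g., it compares against the whole predecessor rather than the disagreement sub-argument in c-defeats, and makes no attempt to minimize the returned sets); Fig. 4 settles those details.

    from itertools import combinations

    def str_det_inf(line, informants, defeat_type, better):
        # Strength-determining sets for the LAST argument of `line`.
        last = line[-1]
        inf = informants(last)
        # First argument of the line, or a detracting argument originating
        # an s-defeat: the whole informant set is the single
        # strength-determining set, since no comparison was involved.
        if len(line) == 1 or defeat_type(last, line[-2]) == 's':
            return [inf]
        # Otherwise: every subset of informants able to make `last` defeat
        # its predecessor, properly or as a blocking defeater.
        target = informants(line[-2])
        return [set(s)
                for r in range(1, len(inf) + 1)
                for s in combinations(sorted(inf), r)
                if not better(target, set(s))]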

Taking the notion of strength-determining set of informants into account, we will only consider the inclusion of an s-defeater in an acceptable argumentation line in cases where the detracting argument targets an informant belonging to (at least) one set of strength-determining informants of its predecessor argument in the line; this intuition is captured in the last clause of Definition 26. Finally, regarding sd-defeaters and scd-defeaters, if they appear in an argumentation line it is as a consequence of the existence of a previous s-defeater. Therefore, no additional considerations have to be taken into account for their inclusion in an acceptable argumentation line. As a result, the notion of acceptable argumentation line is formalized below:

Definition 26 (Acceptable argumentation line).

Let I be a finite set of agent identifiers, (I, PI, <coI, ≻I) the specification of an agent I ∈ I, A0 an argument built from PI and Λ = [A0, A1, A2, ..., An] an argumentation line for A0. We say that Λ is an acceptable argumentation line iff it holds that:

  • (1) Λ is a finite sequence.

  • (2) The set ΛS of supporting arguments and the set ΛI of interfering arguments are non-contradictory.

  • (3) No argument Ai in Λ is an argument appearing earlier in Λ, nor a disagreement sub-argument of an argument Aj appearing earlier in Λ (j < i).

  • (4) For every argument Ai of Λ such that Ai is a blocking defeater of Ai−1, if there exists Ai+1 in Λ, then Ai+1 is a proper defeater of Ai.

  • (5) For every detracting argument Ai of Λ such that Ai is a proper s-defeater of Ai−1 challenging the informant Ii (hence, Ii ∈ Inf(Ai−1)), there exists ISAi−1 ∈ StrDetInf(I, Ai−1, Λ′) such that Ii ∈ ISAi−1, where Λ′ is the sub-sequence of Λ ending in Ai−1.

Given the argumentation lines Λ1, Λ2 and Λ3 from Example 8, Λ1 and Λ3 are acceptable argumentation lines, whereas Λ2 is not because it does not satisfy condition 5 of Definition 26 (I6 does not belong to any of the strength-determining sets of informants of ⟨A3,a⟩ in Λ′1 = [⟨A1,h⟩, ⟨A3,a⟩]). The set of all acceptable argumentation lines for a given argument A is then gathered to form a tree structure containing all possible lines of reasoning starting with A. Next, we formally define this structure by introducing the notion of dialectical tree.
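Before moving on to dialectical trees, the following Python sketch (ours) gathers the clauses of Definition 26 into a single check. Clause 1 is implicit, since Python lists are finite, and all callbacks are assumptions of this illustration.

    def is_acceptable(line, defeat, disagreement_subargs,
                      contradictory, str_det_inf, target_of):
        # defeat(A, B) -> (mode, kind), e.g. ('proper', 'c'),
        # ('blocking', 'c') or ('proper', 's').
        supporting, interfering = line[0::2], line[1::2]
        # Clause (2): non-contradictory supporting and interfering sets.
        if contradictory(supporting) or contradictory(interfering):
            return False
        for i in range(1, len(line)):
            a, earlier = line[i], line[:i]
            mode, kind = defeat(a, line[i - 1])
            # Clause (3): no repeated arguments, and no disagreement
            # sub-arguments of arguments appearing earlier in the line.
            if a in earlier or any(a in disagreement_subargs(b)
                                   for b in earlier):
                return False
            # Clause (4): a blocking defeater may only be followed by a
            # proper defeater.
            if mode == 'blocking' and i + 1 < len(line) \
                    and defeat(line[i + 1], a)[0] != 'proper':
                return False
            # Clause (5): an s-defeater must challenge an informant in some
            # strength-determining set of its predecessor.
            if kind == 's' and not any(target_of(a) in s
                                       for s in str_det_inf(line[:i])):
                return False
        return True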

Definition 27 (Dialectical tree).

Let I be a finite set of agent identifiers, (I, PI, <coI, ≻I) the specification of an agent I ∈ I and A0 an argument built from PI. A dialectical tree for A0, denoted TA0, is a tree whose nodes are arguments built from PI and whose edges denote defeats between those arguments, such that the following conditions hold:

  • (1) A0 is the root of TA0.

  • (2) If Λ = [A0, A1, ..., An] is an acceptable argumentation line for A0 and there is no argument Am built from PI such that Λ′ = [A0, A1, ..., An, Am] is an acceptable argumentation line for A0, then An is a leaf in TA0 and Λ is the branch of TA0 going from the root A0 down to the leaf An.

Once built, the dialectical tree is marked in order to determine the acceptance status of the root argument. The marking criterion marks the nodes in the tree as undefeated (U) or as defeated (D). Briefly, leaves of the tree are marked U, while an inner node is marked D if it is defeated by a U claim argument, a U backing argument or a U detracting argument originating an scd-defeat, or if it is defeated by U detracting arguments originating s-defeats that target an informant in each of its strength-determining sets of informants.

Definition 28 (Marking criterion).

Let I be a finite set of agent identifiers, (I, PI, <coI, ≻I) the specification of an agent I ∈ I, A0 an argument built from PI and TA0 the dialectical tree for A0. If N is a node in TA0, it will be marked as:

  • D, if N is the root of TA0 or an inner node corresponding to an argument A, and either:

    • - N has a child marked U corresponding to a c-defeater, an sd-defeater or an scd-defeater of A; or

    • - for each ISA ∈ StrDetInf(I, A, Λ), N has a child marked U that corresponds to an s-defeater ⟨B,I⟩ of A such that I ∈ ISA, where Λ is the argumentation line given by the arguments in the branch starting from the root down to the node N.

  • U, otherwise.
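The marking criterion admits a direct recursive reading, sketched below in Python (ours); children, kind_of, str_det_inf_at and target_of are assumptions of this illustration, standing for the tree structure and the notions introduced above.

    def mark(node, children, kind_of, str_det_inf_at, target_of):
        kids = children(node)
        if not kids:
            return 'U'                       # leaves are undefeated
        undefeated = [c for c in kids
                      if mark(c, children, kind_of,
                              str_det_inf_at, target_of) == 'U']
        # Defeated by an undefeated c-, sd- or scd-defeater ...
        if any(kind_of(c) in ('c', 'sd', 'scd') for c in undefeated):
            return 'D'
        # ... or by undefeated s-defeaters collectively hitting every
        # strength-determining set of informants of the node's argument.
        # (Definition 25 guarantees at least one such set.)
        s_targets = {target_of(c) for c in undefeated if kind_of(c) == 's'}
        if all(s_targets & s for s in str_det_inf_at(node)):
            return 'D'
        return 'U'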

Example 9.

Consider the specification of agent I6 introduced in Example 3 and the arguments listed in Example 5. Figure 5 depicts the marked dialectical tree for argument ⟨A1,h⟩.7 This tree has two acceptable argumentation lines, namely Λ1 and Λ3 from Example 8. The notation for arguments and defeats in the figure is the same as in Fig. 3. Also, each argument is marked at its left vertex as defeated or undefeated, with a circle containing a D or a U, respectively. Observe that the nodes corresponding to ⟨A6,I5⟩ are marked U (they are leaves). Then, the node corresponding to ⟨A5,I5⟩ is marked D because it has a child marked U corresponding to an scd-defeater. Consequently, since ⟨A5,I5⟩ is marked D, ⟨A4,I5⟩ is marked U. However, note that ⟨A3,a⟩ is marked U even though it has two children corresponding to s-defeaters (⟨A4,I5⟩ and ⟨A6,I5⟩) that are marked U. This is because there is no s-defeater for ⟨A3,a⟩ in T⟨A1,h⟩ targeting the informant I4 (recall that StrDetInf(I6, ⟨A3,a⟩, Λ′1) = {{I4},{I5}}, with Λ′1 = [⟨A1,h⟩, ⟨A3,a⟩]). Finally, since ⟨A3,a⟩ is marked U, the root of T⟨A1,h⟩ is marked D.

Fig. 5.

Marked dialectical tree T⟨A1,h⟩ from Example 9.

A marked dialectical tree embodies a dialectical analysis considering every possible argument an agent can build for and against the root argument of the tree. Hence, if the root argument is marked as U, it means that the conclusion of that argument is warranted, and the agent can accept it as a belief. Moreover, the existence of one argument ⟨A,L⟩ marked as U in its dialectical tree T⟨A,L⟩ is sufficient for the agent to accept L as a belief. On the contrary, if every argument for L is marked as D in its own tree, then the literal L will not be warranted and thus, the agent will not accept it as a belief. The notions of warranted argument and warranted literal are formalized next:

Definition 29 (Warranted claim argument – warranted literal).

Let I be a finite set of agent identifiers, (I, PI, <coI, ≻I) the specification of an agent I ∈ I and ⟨A,L⟩ a claim argument built from PI. We say that I warrants the argument ⟨A,L⟩ and the literal L iff ⟨A,L⟩ is marked as U in the dialectical tree T⟨A,L⟩.

Example 10.

Consider agent I6 specified in Example 3 and the marked dialectical tree from Example 9. Agent I6 does not warrant the argument ⟨A1,h⟩ since, as depicted in Fig. 5, the root of T⟨A1,h⟩ is marked as D. Consequently, since there are no other claim arguments for “h”, agent I6 does not warrant the literal “h” (hence, it will not accept it as a belief).
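Putting the pieces together, a warrant check along the lines of Definition 29 could be sketched as follows (ours; the three callbacks, which enumerate the claim arguments for a literal, build the dialectical tree of Definition 27 and apply the marking of Definition 28 to its root, are assumptions of this illustration).

    def warrants(literal, claim_arguments_for, build_tree, mark_root):
        # A literal is warranted iff SOME claim argument for it is marked
        # U as the root of its own dialectical tree; if every such root is
        # marked D, the literal is not accepted as a belief.
        return any(mark_root(build_tree(arg)) == 'U'
                   for arg in claim_arguments_for(literal))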

Example 11.

Suppose a new detracting rule for informant I4, with body j, is added to the IBDP PI6 of agent I6, leading to the new agent's specification (I6, (ΔI6, Σ′I6), <coI6, ≻I6), where Σ′I6 extends ΣI6 with the new rule. In this scenario, the dialectical tree T⟨A1,h⟩ will be the one illustrated in Fig. 6. Differently from Example 9, ⟨A3,a⟩ is now marked D because, for each ISA3 ∈ StrDetInf(I6, ⟨A3,a⟩, Λ′1) = {{I4},{I5}}, ⟨A3,a⟩ has a child marked U that corresponds to an s-defeater targeting an informant in ISA3; specifically, the detracting arguments ⟨A4,I5⟩ and ⟨A6,I5⟩ are s-defeaters targeting I5, whereas the detracting argument ⟨A8,I4⟩ (where A8 comprises the new detracting rule together with a rule for j provided by I7) is an s-defeater targeting I4. As a result, the root of T⟨A1,h⟩ is now marked U and thus, agent I6 warrants the argument ⟨A1,h⟩ and the literal “h”, accepting the latter as a belief.

Fig. 6.

Marked dialectical tree T⟨A1,h⟩ from Example 11.

As introduced above, the beliefs an agent has are determined by the set of literals it warrants. Given this, it is to be expected that the set of warranted beliefs from an agent is consistent. In the remainder of this section we will formally show that an agent does not warrant complementary literals, as expressed by Theorem 1. For this purpose, we will first show some intermediate results, given by Lemmas 1 and 2.

Lemma 1.

Let I be a finite set of agent identifiers, (I, PI, <coI, ≻I) the specification of an agent I ∈ I, and ⟨A,L⟩ an argument built from PI. If I warrants ⟨A,L⟩, then for every claim argument ⟨B,H⟩ built from PI such that ⟨A,L⟩ c-defeats ⟨B,H⟩, it holds that the root node of T⟨B,H⟩ has a child corresponding to argument ⟨A,L⟩ that is marked U.

Proof.

Since ⟨A,L⟩ c-defeats ⟨B,H⟩, by Definitions 24, 26 and 27 it follows directly that there exists a child N of the root of T⟨B,H⟩ corresponding to ⟨A,L⟩. Now we have to prove that N is marked as U in T⟨B,H⟩. Given that by hypothesis I warrants ⟨A,L⟩, by Definition 29 the root of the dialectical tree T⟨A,L⟩ is marked as U. By the characterization of dialectical trees, a part of T⟨A,L⟩ can appear as a sub-tree of T⟨B,H⟩, rooted in the node N. Then we have the following cases:

  • T⟨A,L⟩ (the complete tree) is a sub-tree of T⟨B,H⟩, rooted in N. In this case, by Definition 28, the node N will be marked as U in T⟨B,H⟩.

  • A part of T⟨A,L⟩ (not the complete tree) is a sub-tree of T⟨B,H⟩, rooted in N. In this case, there exists a node S in T⟨A,L⟩ that is not a node in the sub-tree of T⟨B,H⟩ rooted in N. If the node S corresponds to an interfering argument in T⟨A,L⟩, then by Definition 28 it holds that not having S in T⟨B,H⟩ does not change the marking of N, as it was marked as U in T⟨A,L⟩. Consequently, N will be marked as U in T⟨B,H⟩. Let us now suppose that the node S corresponds to a supporting argument ⟨C,X⟩ in T⟨A,L⟩. Therefore, there exists an acceptable argumentation line Λ = [⟨A,L⟩, ..., ⟨D,Y⟩, ⟨C,X⟩, ...] corresponding to a branch of T⟨A,L⟩. Then, if the node S corresponding to ⟨C,X⟩ does not appear in the sub-tree of T⟨B,H⟩ rooted in N and the root of T⟨B,H⟩ has a child corresponding to ⟨A,L⟩, it should be the case that the argumentation line Λ′ = [⟨B,H⟩, ⟨A,L⟩, ..., ⟨D,Y⟩, ⟨C,X⟩] (i.e., the argumentation line starting with ⟨B,H⟩ and continuing with the sub-sequence of Λ ending in ⟨C,X⟩) is not acceptable because the inclusion of ⟨C,X⟩ violates some condition of Definition 26. Hence, it must be the case that either:

    • - The set of interfering arguments of Λ′ (i.e., the set containing ⟨C,X⟩ and every other argument in the interfering set of Λ′, such as ⟨A,L⟩) is contradictory. However, such interfering arguments (including ⟨C,X⟩ and ⟨A,L⟩) appear in Λ as supporting arguments. Thus, this would imply that Λ is not an acceptable argumentation line because its set of supporting arguments would be contradictory as well. Contradiction.

    • - ⟨C,X⟩ is an argument appearing earlier in Λ′ or is a disagreement sub-argument of an argument appearing earlier in Λ′. Without loss of generality, these two cases are covered by: ⟨C,X⟩ is a sub-argument of an argument appearing earlier in Λ′. Then, we have to consider the following cases:

      • ⟨C,X⟩ is a sub-argument of an argument different from ⟨B,H⟩ in Λ′. Since the only difference between Λ′ and Λ prior to ⟨C,X⟩ is the inclusion of ⟨B,H⟩ as the first element of the sequence, ⟨C,X⟩ would be a sub-argument of an argument appearing earlier in Λ, leading Λ to be a non-acceptable argumentation line. Contradiction.

      • ⟨C,X⟩ is a sub-argument of ⟨B,H⟩. Since by hypothesis ⟨B,H⟩ is a claim argument, by Definitions 9 and 12, ⟨C,X⟩ is also a claim argument. Then, given that ⟨C,X⟩ appears right after argument ⟨D,Y⟩ in Λ and Λ′, we have that ⟨C,X⟩ c-defeats ⟨D,Y⟩; consequently, the set C ∪ D is contradictory. Moreover, since C ⊆ B, it holds that B ∪ D is also contradictory. Hence, Λ″ = [⟨B,H⟩, ⟨A,L⟩, ..., ⟨D,Y⟩] (i.e., the sub-sequence of Λ′ ending in ⟨D,Y⟩) would not be an acceptable argumentation line, which contradicts the assumption that Λ′ was not acceptable because of the inclusion of ⟨C,X⟩.

    • - ⟨C,X⟩ leads to two consecutive blocking defeats in Λ′. Since ⟨C,X⟩ is a supporting argument in Λ and, as previously shown, is different from ⟨A,L⟩, it holds that ⟨C,X⟩ is at least the third element of the sequence corresponding to Λ. Thus, the consecutive blocking defeats would also occur in Λ, meaning that Λ is not an acceptable argumentation line. Contradiction.

    • - ⟨C,X⟩ is a detracting argument appearing right after argument ⟨D,Y⟩ in Λ′, and there is no set of informants ISD ∈ StrDetInf(I, ⟨D,Y⟩, Λ″) such that X ∈ ISD, where Λ″ = [⟨B,H⟩, ⟨A,L⟩, ..., ⟨D,Y⟩] (i.e., the sub-sequence of Λ′ ending in ⟨D,Y⟩). Since ⟨C,X⟩ is a supporting argument in Λ and, as previously shown, is different from ⟨A,L⟩, it holds that ⟨C,X⟩ is at least the third element of the sequence corresponding to Λ; hence, ⟨D,Y⟩ is at least the second element in Λ. Consequently, ⟨D,Y⟩ is different from ⟨B,H⟩, and the argument preceding ⟨D,Y⟩ in Λ′ (also, in Λ″) will appear preceding ⟨D,Y⟩ in Λ as well. By Definition 25, the strength-determining sets of informants StrDetInf(I, ⟨D,Y⟩, Λ″) will be obtained by comparing the informants of ⟨D,Y⟩ with the informants of the argument preceding it in Λ′ (also, in Λ″), which is the same argument preceding it in Λ. Thus, it holds that StrDetInf(I, ⟨D,Y⟩, Λ″) = StrDetInf(I, ⟨D,Y⟩, Λ‴), where Λ‴ = [⟨A,L⟩, ..., ⟨D,Y⟩] (i.e., the sub-sequence of Λ ending in ⟨D,Y⟩). As a result, if ⟨C,X⟩ was targeting a non-strength-determining informant of ⟨D,Y⟩ in Λ′, it was doing so in Λ as well, meaning that Λ is not an acceptable argumentation line. Contradiction.

 □

Lemma 2.

Let I be a finite set of agent identifiers, (I, PI, <coI, ≻I) the specification of an agent I ∈ I, and ⟨A,L⟩ an argument built from PI. If I warrants ⟨A,L⟩, then for every claim argument ⟨B,H⟩ built from PI that is c-defeated by ⟨A,L⟩ or that c-defeats ⟨A,L⟩, it holds that I does not warrant ⟨B,H⟩.

Proof.

We have to consider two cases:

  • ⟨A,L⟩ c-defeats ⟨B,H⟩. Since by hypothesis I warrants ⟨A,L⟩, by Lemma 1 the root of T⟨B,H⟩ has a child node corresponding to ⟨A,L⟩ that is marked U. Therefore, by Definition 28, the root of T⟨B,H⟩ will be marked as D and thus, by Definition 29, I does not warrant ⟨B,H⟩.

  • ⟨B,H⟩ c-defeats ⟨A,L⟩. Let us suppose, by contradiction, that I warrants ⟨B,H⟩. If that were the case, by Lemma 1 the root of T⟨A,L⟩ would have a child node corresponding to ⟨B,H⟩ that is marked U. Hence, by Definition 28, the root of T⟨A,L⟩ would be marked as D. Consequently, by Definition 29, ⟨A,L⟩ would not be warranted by I, which contradicts our hypothesis.

 □

Theorem 1.

Let I be a finite set of agent identifiers, (I, PI, <coI, ≻I) the specification of an agent I ∈ I, and L, H two complementary literals. If I warrants L, then I does not warrant H.

Proof.

If I warrants L, by Definition 29, there exists a claim argument ⟨A,L⟩ built from PI that is warranted by I. Let us now suppose, by contradiction, that I warrants H. Then, by Definition 29, there would exist a claim argument ⟨B,H⟩ that is warranted by I. Hence, by Definition 13, ⟨A,L⟩ and ⟨B,H⟩ c-attack each other. Moreover, by Definition 20, it will be the case that ⟨A,L⟩ is a proper c-defeater of ⟨B,H⟩, ⟨B,H⟩ is a proper c-defeater of ⟨A,L⟩, or ⟨A,L⟩ and ⟨B,H⟩ are blocking c-defeaters of one another. Then, by Lemma 2, this would imply that I does not warrant ⟨A,L⟩, contradicting the hypothesis that L is warranted by I. □

6.Related work

The argumentation literature offers a wide variety of approaches accounting for the notion of argument strength, some of which will be discussed in this section. For instance, approaches like [3,5] resolve conflicts by identifying the stronger arguments through a general preference relation. On the other hand, works like [13,28] define the strength of an argument through a formula yielding a numerical value. This formula is based on the intuition that the strength of an argument is inversely proportional to the strength of its attackers, with the aim of codifying the likelihood of an argument being ultimately defeated. Similarly, [35] proposed a game-theoretic measure of argument strength, where the strength of each argument is calculated in such a way that if an argument is attacked then its strength falls, but if the attack is in turn attacked, then the strength of the original argument rises. Related to these, approaches to value-based argumentation [10] consider that the strength of an argument depends on the social values it advances, and determining whether the attack of one argument on another succeeds depends on the comparative strength of the values advanced by the arguments concerned. Also, in [41] an approach combining the ASPIC argumentation system and fuzzy set theory is proposed, where argument strength is computed using a t-norm aggregating the importance of the premises and rules involved. Similarly to us, the latter attaches information to the basic elements of its system to compute argument strength, which in turn is used to determine acceptable arguments. However, in that work it is not possible to challenge the strength of an argument: even though undercutting defeaters might be seen as similar to our detracting defeaters, an undercut challenges the applicability of a rule rather than the strength of the argument, as detracting defeaters do.

We can also identify a group of approaches that, explicitly or implicitly, make use of a notion of credibility or trust to account for the arguments’ strength. Given the close relationship between our proposal and these approaches, we devote the rest of this section to discuss the similarities and differences with some of them.

In [38] and [44] an argumentation formalism is proposed which, as part of its reasoning process, uses information about trust to measure the arguments’ strength. This formalism is described as a set of graphs and, to determine an agent’s beliefs, the authors propose a model that accounts for the trust in the information that is used for building the arguments. Like ours, their approach is presented in a multi-agent setting, where informant agents can have different levels of credibility and these credibilities are used to measure the arguments’ strength. In contrast to our proposal, where each agent has its own credibility order (completely independent from the other agents’), they use a centralized notion of trust that is codified in a shared trust network. This global network holds information about how agents trust each other and can be used to obtain an agent-centric trust network that represents the viewpoint of a particular agent. Although from these graphs it is possible to determine a credibility order for each agent, these orderings are strongly dependent on the connections in the global network.

Similarly to our formalism, each piece of information in [38,44] is linked to an agent which establishes how credible that information is, and the strength of an argument is determined by the credibility of the information it is based on. Then, [38] and [44] use an argumentation inference mechanism to deal with a potentially contradictory belief base. In that context, arguments are built to support pieces of information that can be consistently inferred from the belief base, and the strength measures are used to decide between conflicting arguments. In addition to that, our approach allows agents to construct arguments for and against the consideration of informant agents. Therefore, differently from theirs, an agent in our approach is able to reason and argue about the strength of its arguments, in addition to arguing about the domain information stored in its knowledge base.

Another significant difference between [38,44] and the work reported in this paper is that they use numerical values to establish the trust relation among agents, leading to a total order over the set of agent identifiers; in contrast, we use symbolic information in the form of a strict partial order over that set. Such contrast leads to different approaches for measuring the arguments’ strength. On the one hand, they compute the strength of an argument using a formula based on numerical values assigned to the agents that provided the information used for building the argument. On the other hand, the strength of an argument in our approach is not absolute; instead, it is relative to the resolution of the conflicts the argument is involved in, and is established by the sets of strength-determining informants of the argument (according to a valid informant-based comparison criterion). Consequently, in their approach, the central component for determining the arguments’ strength is the formula to be used, whereas in ours it is the adopted informant-based comparison criterion.

The work reported in [17] relates to our proposal in that they also make use of the credibility of informant agents as a source of argument strength. Similarly to us, they associate arguments with their information sources (and multiple sources can be associated with the same argument); however, they abstract away from the origin of such credibility. That is, their credibility function associates an informant with a set of arguments, without specifically attaching the informant to a particular piece of knowledge within the argument. In contrast, we associate each defeasible rule in an argument with its informant agent to form a defeasible domain object. Then, in [18] the authors extend their work to combine credibilities of informants of facts, assumptions and conclusions in order to determine the arguments' strength. Like in our approach, defeats among arguments in [17,18] are defined by accounting for a notion of argument strength based on the credibility of their informants. However, a key aspect that differentiates our approach from theirs is the fact that their credibility function establishes a total order over the set of informants, whereas our credibility orders are assumed to be strict partial orders. Finally, as remarked above for other approaches, another difference between our work and [17,18] is that we provide the means to reason about the arguments' strength by allowing agents to build arguments for and against their different informant agents; the approach of [17,18] does not provide such a possibility.

In this paper we introduced a particular informant-based comparison criterion (see Definition 18) to measure and compare the arguments’ strength. As discussed in Section 4, the single informant credibility criterion resembles the rules priorities criterion (see e.g., [25]) which takes preferences among rules to then define preferences among arguments (which are in turn expressed as sets of rules). That is, the rules priorities criterion lifts preferences over rules to preferences over sets of rules. Similarly, an informant-based comparison criterion makes use of a credibility order (i.e., a relationship over informant agents) to establish preferences over sets of informants; in other words, preferences over informant agents are lifted onto preferences over sets of informants. The literature offers a variety of approaches dealing with the issue of lifting preferences, such as [9,23,33,34] (see also [8] for an extensive overview on the issue of lifting an order relation on a set X to an order on the family of all non-empty subsets of X).

On the one hand, in [34] the authors propose natural extensions of the versions of five axioms introduced in [8] and also state that not every axiom is desirable in every situation. Regarding the axioms or conditions associated with a lifting principle (here, a valid informant-based comparison criterion), we can note the following difference. Whereas in our approach the credibility order (which is then lifted to define an informant-based comparison criterion) should be irreflexive, asymmetric and transitive, [34] assumes an order relation on a set X that has to be reflexive, antisymmetric, transitive and total. For instance, regarding the reflexivity vs. irreflexivity requirement, we can note that a valid informant-based comparison criterion is required to be irreflexive in order to align with the strict nature of the credibility order it is based on. An exhaustive analysis of our approach in the context of their proposed axioms is left as future work, including the study of alternative or additional conditions to be imposed either on credibility orders or on valid informant-based comparison criteria.

In [9] the authors argue that, in the context of structured argumentation, the support provided by an argument for its conclusion is determined by the degree of support of its premises, and by the degree of support provided by the inference rules applied in its construction. Then, among other things, the authors discuss several alternatives for lifting qualitative orderings on premises and for lifting qualitative orderings on defeasible rules. It should be noted that, when accounting for orderings on different types of elements composing the arguments, these orderings should be somehow combined when lifting preferences. Our proposal is such that domain knowledge in an IBDP is only expressed through the defeasible domain objects, which associate defeasible rules with their informant agents. In particular, our lifting criteria (the informant-based comparison criteria) only account for one dimension: the credibility order over the informant agents.

When considering the conditions imposed on valid informant-based comparison criteria and the conditions imposed by the different liftings discussed in [9], we can note the following: our conditions are meant to characterize a family of criteria, whereas they describe specific kinds of liftings on qualitative orderings of elements. However, we can find a relationship between the liftings studied in [9] and the family of valid criteria characterized here, and we plan to explore this relationship as part of future work. For instance, consider the max-max criterion to lift orderings on premises to orderings on sets of premises X and Y: Y ≻ X iff for every x ∈ X there is a y ∈ Y such that y ≻ x (with ≻ meaning strictly preferred to). Here, we can see that the max-max criterion satisfies the condition of being based on the original ordering (i.e., at least one pair of elements of X and Y is related by ≻), as in the first clause of Definition 17. Then, it can be shown that the max-max criterion also satisfies the property of yielding irreflexive orderings when applied on transitive orderings over premises. For this, consider a set X and suppose, by contradiction, that X ≻ X. Hence, for every x ∈ X there exists x′ ∈ X such that x′ ≻ x, and this must also be satisfied for such x′. So, because of the transitivity of the ≻ relation over premises, at some point a cycle will exist, contradicting the fact that it is a strict ordering. Consequently, the max-max lifting criterion satisfies irreflexivity (second clause of Definition 17). Finally, suppose the max-max criterion relates two sets of premises X and Y as follows: Y ≻ X and X ≻ Y. Here, if Y ≻ X, by definition of max-max it holds that for every x ∈ X there exists y ∈ Y such that y ≻ x. Then, if also X ≻ Y, it must be the case that for every y ∈ Y there is x ∈ X such that x ≻ y. So, because of the transitivity of the ≻ relation over pairs of premises, for every y ∈ Y it holds that there exists y′ ∈ Y such that y′ ≻ y (i.e., Y ≻ Y), contradicting the fact that the max-max criterion yields irreflexive orderings on sets of premises. Consequently, the max-max criterion also satisfies the third clause of Definition 17 and thus, it could serve as a valid informant-based comparison criterion when considering a credibility order as the base ≻ relation.
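The behaviour of the max-max lifting just discussed can also be checked mechanically; the following Python sketch (ours) implements the criterion over an explicit strict order given as a set of pairs, and reproduces, on a toy transitive order, the asymmetry and irreflexivity argued above.

    def max_max(succ):
        # succ: set of pairs (a, b) meaning a is strictly preferred to b.
        # Returns the lifted relation on sets: Y over X iff for every x
        # in X there is some y in Y with (y, x) in succ.
        def lifted(Y, X):
            return all(any((y, x) in succ for y in Y) for x in X)
        return lifted

    # Toy transitive strict order: c > b > a (closed under transitivity).
    succ = {('c', 'b'), ('b', 'a'), ('c', 'a')}
    lifted = max_max(succ)

    print(lifted({'c'}, {'a', 'b'}))       # True
    print(lifted({'a', 'b'}, {'c'}))       # False (asymmetry)
    print(lifted({'a', 'b'}, {'a', 'b'}))  # False (irreflexivity)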

Another work that relates to our proposal is [32], where the authors use an argumentation mechanism based on trust as a layer of a belief revision process carried out by agents dealing with (potentially conflicting) opinions about their peers. In their argumentation approach, trust is used for building a preference ordering amongst arguments, thus codifying their strength. For this, they first aggregate the information about different opinions for the same proposition. Then, using these aggregated propositions, they build arguments whose trustworthiness is assessed using a conjunctive fusion operator over the opinions forming the argument. This assessment considers the number of agents and information pieces that were needed for building the argument. Even though [32] does not account for the fact that arguments can be constructed to challenge an informant agent (hence, other arguments' strength), it could be interesting to adapt their ideas to our proposal. For that purpose, we can think of two alternatives: either provide a comparison criterion that encodes the above mentioned strategies, or extend the notion of acceptable argumentation line (Definition 26) to consider preferences codifying those strategies.

A qualitative bipolar argumentative modeling of trust is proposed in [39]. Like ours, their approach is qualitative and assumes only a finite number of levels in the trust scale. On the other hand, in contrast with our proposal, they use a bipolar argumentative approach where trust and distrust can be independently assessed. There, an agent can evaluate its trust in an object X (either a source or another agent) on the basis of two types of information: the observed behavior of X, and the reputation of X according to the other agents. Reputation information is then viewed as an input the agent uses for revising or updating its own trust evaluation, based on its perception. The approach proposed in [39] is such that two kinds of arguments in favor of trusting an agent (either establishing that a good point is reached or that a bad point is avoided), as well as two kinds of arguments against trusting an agent (either indicating that a bad point is satisfied or that a good point is not reached), can be constructed. These four kinds of arguments are based on an inference rule and the trust evaluation of the agent, which is represented with an interval [t−, t+] over a discrete scale, with the intended meaning that the trust is not larger than t+ nor smaller than t−. Clearly, our work relates to [39] in that we also allow building arguments for or against an informant agent; however, our arguments do not encode reasons for trusting or distrusting such an agent, but provide reasons for or against the consideration of information provided by it. Moreover, as another difference, even though [39] indicates some basic mechanisms leading to the revision of trust values, that paper is mainly focused on evaluating trust rather than on integrating the trust values as a measure of argument strength in an argumentation process.

The authors in [29,30] adopt a symbolic approach to model credibility using two global relations: the trust relation and the distrust relation. These relations, together with the set of agents, constitute a trust system where a pair (a,b) in the trust (respectively, distrust) relation expresses that agent a trusts (respectively, distrusts) agent b. Their formalism aims at determining whether an agent trusts another, taking into account the potential conflicts that may appear when the trust and distrust relations are jointly analyzed in the trust system. To that end, they follow an argumentation approach in which arguments represent a position for an agent to either trust or distrust a peer; those arguments are similar to our backing and detracting arguments, respectively. Additionally, when considering an advanced version of their system, each agent is also provided with a partial order defined amongst its peers, and they use this order to codify the efficacy with which an agent trusts its peers (aiming to model a degree of trustworthiness or reputation). Like us, they use these efficacy orders to provide strength to their arguments. However, the goal of the argumentation system in the two approaches differs. Their aim is to decide whether an agent trusts another or not, given the trust and distrust relations. In contrast, we aim at establishing the beliefs an agent has, which correspond to the pieces of information (literals) it warrants. Notwithstanding this, similarly to ours, their proposal seeks to enrich argumentation systems with strength measures that account for (potentially conflicting) information about trust, and uses these measures to decide which arguments and conclusions prevail.

The work reported in [4], similarly to ours, presents an agent-based argumentation approach for reasoning about beliefs and information received from other agents. There, beliefs are also used to represent how trustworthy the information sources are to a given agent. They identify six forms of trust that can appear as part of the formulas in the agent belief base. From their belief bases arguments are built, conflicts among them are identified and then resolved. In particular, besides the usual conflicts in structured argumentation, the authors identify several types of attack that arise from the semantics of the six forms of trust considered. Therefore, like us, they allow the credibility or trustworthiness of an informant to be challenged and supported; nevertheless, there are some differences. In their approach each form of trust is binary: the agent either trusts or does not trust an informant. In contrast, in our approach agents are ordered using a strict partial order and thus, it is possible to establish whether an informant is more credible than another. There are also differences in how trust relates to argument strength. In their approach, trust forms are used for constructing arguments and do not directly affect argument strength. On the other hand, our approach uses the level of credibility as the measure to define the arguments' strength. They consider the notion of strength when they introduce graded beliefs, where a grade is attached to the belief operator. Using these grades they compute the strength of an argument as the weakest link. However, unlike our approach, they do not provide any mechanism to challenge the strength of an argument. Finally, it is worth mentioning that these differences also apply to a comparison between our proposal and the work reported in [48].

7.Conclusion

In this paper we have presented an argumentative reasoning formalism where the credibility of informants plays a central role, as it allows to determine the arguments’ strength. Our formalism was developed in a multi-agent setting where agents share domain knowledge. There, each agent may obtain information from other informant agents and also has an assessment of how credible these informants are. Agents are equipped with the argumentative machinery, allowing them to reason with the potentially conflicting information in their knowledge bases to finally determine their warranted beliefs. In our approach, defeasible rules (which represent domain knowledge) are associated with their informant agents. Also, we introduced two new kinds of rules (backing and detracting rules) in order to be able to argue about the contexts in which the domain knowledge provided by the informant agents should be used or not. In other words, these rules are used to express reasons for and against the consideration of informants, respectively. From all this knowledge, an agent will be able to construct arguments to support its inferences. In addition, each agent has a credibility order among its informant agents and a comparison criterion used to assess the strength of the conflicting arguments built from its knowledge base.

As shown before, our informant-based approach is such that the strength of an argument is determined by the credibility of its informant agents. To that end, the comparison criterion in an agent's specification is based on the agent's credibility order. In particular, we have shown that the strength of an argument in our approach is not absolute, but relative to the resolution of the conflicts the argument is involved in. Then, it could be the case that some informant providing information for building an argument is relevant for establishing the argument's strength in some cases, but not in others. In this context, the incorporation of backing and detracting rules allows agents to argue about the arguments' strength. Specifically, backing rules allow expressing reasons for the consideration of informant agents, whereas detracting rules allow expressing reasons against the consideration of information provided by them. Using these rules we defined new types of argument which, together with the classic arguments supporting conclusions, are considered by the argumentation machinery to establish the beliefs an agent holds. Finally, we have formally shown that the warranting process employed by our argumentative approach is sound, preventing an agent from warranting contradictory conclusions.

It is worth noting that the defeasible domain objects within an informant-based Defeasible Logic Program (IBDP) establish a correspondence between defeasible rules and their informant agents. The fact that, differently from standard DeLP programs, an IBDP does not include strict rules may appear as a limitation of our approach. However, recall that strict rules are provided in DeLP as a representational tool that gives the possibility of expressing the indefeasible nature of the relation between the body and head of such rules, making them indisputable. In contrast, in an IBDP, a domain object is expressed as a pair containing a rule and the informant of that rule. In this context, such rules are always defeasible since they come attached with the credibility of their informant agent. Therefore, our approach only accounts for defeasible rules (hence the name of the defeasible domain objects) since our main focus was on how the arguments’ informants affect and determine the arguments’ strength. Moreover, regarding the absence of strict rules, we would like to remark that a number of different applications of DeLP (e.g., see [1,2,16,24,26,27,47]) have been developed using defeasible logic programs without strict rules. Consequently, we consider that the current characterization of IBDPs without strict rules is not a real limitation for our approach’s expressivity and applicability. Notwithstanding this consideration, and to maintain a general approach, it is possible to extend our approach to account for standard DeLP strict rules and facts, where these elements of the program should not come with an associated informant to reflect their strict and indisputable nature accurately; we plan to do this as part of future work.

Regarding the notion of argument strength accounted for in this work, we can highlight the fact that it is solely based on the credibility of the informant agents. This may lead one to think of this notion of argument strength as too narrow, or too specific. Nevertheless, as discussed in Section 4, an informant-based comparison criterion like the single informant comparison criterion does not stray too far from other existing criteria in the literature of structured argumentation, resembling the rules priorities criterion. At any rate, as part of future work, we will explore a generalization of our notion of argument strength in order to combine different comparison criteria at different levels. For instance, we could adopt an approach similar to [15], where the rules priorities criterion is used first to resolve attacks into defeats and, in case of undecidedness, the generalized specificity criterion is considered later. More generally, we will explore the possibility of using the operators defined in [45], which allow combining multiple argument comparison criteria.

As shown in the existing literature (see [12] for an overview), DeLP is among the four major approaches to structured argumentation. By extending a DeLP program into an IBDP, with the addition of informants for defeasible rules and of informant rules that allow arguing for and against the consideration of information provided by the different informant agents, we believe we are contributing to expanding DeLP's applicability domain. Nevertheless, as part of our future work, we plan to study the possibility of further exploiting these ideas and applying them in the context of other major structured argumentation approaches such as ASPIC+ [36] or ABA [46]. In that regard, we consider that such an exploration could make for exciting advances in the area, and will most definitely bring on new challenges requiring a substantial transformation of those frameworks.

In Section 2 we argued that our approach is restricted to dealing with credibility orders relating agents that are sources of information about the same topic. As future work, we intend to extend our approach to account for multi-topic credibility orders (i.e., handle multiple credibility orders, one for each topic). In order to be able to deal with these, we also plan to extend the knowledge representation and reasoning capabilities of an IBDP by, among other things, expanding defeasible domain objects to state their topic explicitly. In particular, such an extension would allow us to better model scenarios like the one described in the example of the medical domain given after Definition 11. Then, for instance, we could represent the information provided by agent IX about medical treatments as belonging to one topic and, on the other hand, the information provided by IX stating that he is not a physician as belonging to another topic. In that way, detracting rules could also be extended to include the topic under which the information provided by a given agent is challenged; consequently, for the example given in the paper, the detracting rule could target the defeasible domain objects provided by IX which correspond to medical advice, but not the one referring to IX not being a physician.

Finally, we would like to discuss other exciting prospects for future research. On the one hand, we plan to study how our approach could be extended to consider trust and distrust relations as presented in [29,30]; briefly, the idea would be to connect such relations with backing and detracting arguments. On the other hand, we will also study how an agent’s credibility order should be updated when the warranted information is taken into account. For that purpose, backing and detracting arguments can also play a central role. For instance, suppose that the credibility order initially establishes that an informant I1 is less credible than another informant I2. Then, if the argumentative machinery yields an undefeated backing argument for I1 and an undefeated detracting argument for I2, there could be an indication that the credibility order has to be updated so that I1 becomes more credible than I2. Furthermore, as another direction for future work, we intend to formally analyze the conditions characterizing our valid informant-based comparison criteria in the context of various lifting principles such as those addressed in [9], and also study (and redefine in the context of our approach, when required) different preference handling principles like the ones discussed in [22].

Notes

1 Such arguments can be seen as an instance of the fallacy known as begging the question, in which an assertion is supported by reasoning that already assumes that very assertion.

2 We thank one of the reviewers for pointing out this example.

3 Recall that in this work we assume that the knowledge encoded through an IBDP belongs to the same topic. In this example, all the information being modeled, including the claim that IX is not a physician, pertains to the medical domain.

4 As will be shown later in this section, this is a key aspect in the resolution of attacks into defeats.

5 We thank one of the reviewers for suggesting this example.

6 Note that the two sets of informants cannot be equally preferred since valid informant-based comparison criteria are required to be asymmetric, in line with the credibility order they are based on.

7 Note that, to save space, the tree is depicted horizontally.

Acknowledgements

We would like to thank the reviewers for their helpful and insightful comments on the previous versions of this paper. This work has been supported by EU H2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 690974 for the project “MIREL: MIning and REasoning with Legal texts”, by CONICET (grant PIP 112-201701-00871) and by Universidad Nacional del Sur (grants PGI 24/N046 and 24/ZN32).

References

[1] 

R.A. Agis, S. Gottifredi and A.J. García, Acquiring knowledge from expert agents in a structured argumentation setting, Argument & Computation 10: (2) ((2019) ), 149–189. doi:10.3233/AAC-190447.

[2] 

R.A. Agis, S. Gottifredi and A.J. García, An approach for distributed discussion and collaborative knowledge sharing: Theoretical and empirical analysis, Expert Systems with Applications 116: ((2019) ), 377–395. doi:10.1016/j.eswa.2018.09.016.

[3] 

L. Amgoud and C. Cayrol, A reasoning model based on the production of acceptable arguments, Ann. Math. Artif. Intell. 34: (1–3) ((2002) ), 197–215. doi:10.1023/A:1014490210693.

[4] 

L. Amgoud and R. Demolombe, An argumentation-based approach for reasoning about trust in information sources, Argument & Computation 5: (2–3) ((2014) ), 191–215. doi:10.1080/19462166.2014.881417.

[5] 

L. Amgoud and S. Vesic, A new approach for preference-based argumentation frameworks, Ann. Math. Artif. Intell. 63: (2) ((2011) ), 149–183. doi:10.1007/s10472-011-9271-9.

[6] 

L. Amgoud and S. Vesic, Rich preference-based argumentation frameworks, Int. J. Approx. Reasoning 55: (2) ((2014) ), 585–606. doi:10.1016/j.ijar.2013.10.010.

[7] 

K. Atkinson, P. Baroni, M. Giacomin, A. Hunter, H. Prakken, C. Reed, G.R. Simari, M. Thimm and S. Villata, Towards artificial argumentation, AI Magazine 38: (3) ((2017) ), 25–36. doi:10.1609/aimag.v38i3.2704.

[8] 

S. Barberà, W. Bossert and P.K. Pattanaik, Ranking Sets of Objects, Springer, (2004) , pp. 893–977.

[9] 

M. Beirlaen, J. Heyninck, P. Pardo and C. Straßer, Argument strength in formal argumentation, FLAP 5: (3) ((2018) ), 629–676.

[10] 

T.J.M. Bench-Capon, Persuasion in practical argument using value-based argumentation frameworks, J. Log. Comput. 13: (3) ((2003) ), 429–448. doi:10.1093/logcom/13.3.429.

[11] 

T.J.M. Bench-Capon and P.E. Dunne, Argumentation in artificial intelligence, Artif. Intell. 171: (10–15) ((2007) ), 619–641. doi:10.1016/j.artint.2007.05.001.

[12] 

P. Besnard, A.J. García, A. Hunter, S. Modgil, H. Prakken, G.R. Simari and F. Toni, Introduction to structured argumentation, Argument & Computation 5: (1) ((2014) ), 1–4. doi:10.1080/19462166.2013.869764.

[13] 

P. Besnard and A. Hunter, A logic-based theory of deductive arguments, Artif. Intell. 128: (1–2) ((2001) ), 203–235. doi:10.1016/S0004-3702(01)00071-6.

[14] 

P. Besnard and A. Hunter, Elements of Argumentation, MIT Press, (2008) . doi:10.7551/mitpress/9780262026437.001.0001.

[15] 

C.E. Briguez, M.C. Budán, C.A.D. Deagustini, A.G. Maguitman, M. Capobianco and G.R. Simari, Argument-based mixed recommenders and their application to movie suggestion, Expert Syst. Appl. 41: (14) ((2014) ), 6467–6482. doi:10.1016/j.eswa.2014.03.046.

[16] 

M. Capobianco, C.I. Chesñevar and G.R. Simari, Argumentation and the dynamics of warranted beliefs in changing environments, Autonomous Agents and Multi-Agent Systems 11: (2) ((2005) ), 127–151. doi:10.1007/s10458-005-1354-8.

[17] 

C.F. Chang, P. Harvey and A. Ghose, Source sensitive argumentation system, in: ICEIS 2006 – Proceedings of the Eighth International Conference on Enterprise Information Systems: Databases and Information Systems Integration, Paphos, Cyprus, May 23–27, 2006, (2006) , pp. 39–46.

[18] 

C.F. Chang, P. Harvey and A.K. Ghose, Combining credibility in a source sensitive argumentation system, in: Advances in Artificial Intelligence, 4th Helenic Conference on AI, SETN 2006, Heraklion, Crete, Greece, May 18–20, 2006, Proceedings, (2006) , pp. 478–481. doi:10.1007/11752912_49.

[19] 

A. Cohen, A.J. García and G.R. Simari, A structured argumentation system with backing and undercutting, Eng. Appl. of AI 49: ((2016) ), 149–166. doi:10.1016/j.engappai.2015.10.001.

[20] 

A. Cohen, S. Gottifredi, A.J. García and G.R. Simari, A survey of different approaches to support in argumentation systems, Knowledge Eng. Review 29: (5) ((2014) ), 513–550. doi:10.1017/S0269888913000325.

[21] 

A. Cohen, S. Gottifredi, L.H. Tamargo and A.J. García, Extending Defeasible Logic Programming with informant-based argumentation, in: Argumentation-Based Proofs of Endearment. Essays in Honor of Guillermo R. Simari on the Occasion of His 70th Birthday, C.I. Chesñevar, M.A. Falappa, E. Fermé, A.J. García, A.G. Maguitman, D.C. Martínez, M.V. Martínez, R.O. Rodríguez and G.I. Simari, eds, College Publications, London, (2018) , pp. 73–89.

[22] 

K. Cyras and F. Toni, Properties of ABA+ for non-monotonic reasoning, in: NMR 2016 – Proceedings of the 16th International Workshop on Non-Monotonic Reasoning, Cape Town, South Africa, 22–24 April 2016, http://nmr2016.cs.uct.ac.za/proceedings_nmr2016_online.pdf.

[23] 

S.K. Dyrkolbotn, T. Pedersen and J.M. Broersen, On elitist lifting and consistency in structured argumentation, FLAP 5: (3) ((2018) ), 709–746.

[24] 

A.J. García, N.D. Rotstein, M. Tucat and G.R. Simari, An argumentative reasoning service for deliberative agents, in: Knowledge Science, Engineering and Management, Second International Conference, KSEM 2007, Melbourne, Australia, November 28–30, 2007, Proceedings, (2007) , pp. 128–139. doi:10.1007/978-3-540-76719-0_16.

[25] 

A.J. García and G.R. Simari, Defeasible logic programming: An argumentative approach, Theory and Practice of Logic Programming 4: (1–2) ((2004) ), 95–138. doi:10.1017/S1471068403001674.

[26] 

D.R. García, A.J. García and G.R. Simari, Planning and defeasible reasoning, in: 6th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2007), Honolulu, Hawaii, USA, May 14–18, 2007, (2007) , p. 222.

[27] D.R. García, A.J. García and G.R. Simari, Defeasible reasoning and partial order planning, in: Foundations of Information and Knowledge Systems, 5th International Symposium, FoIKS 2008, Pisa, Italy, February 11–15, 2008, Proceedings, (2008), pp. 311–328. doi:10.1007/978-3-540-77684-0_21.

[28] S. Gottifredi, N.D. Rotstein, A.J. García and G.R. Simari, Using argument strength for building dialectical bonsai, Ann. Math. Artif. Intell. 69(1) (2013), 103–129. doi:10.1007/s10472-013-9338-x.

[29] W.T. Harwood, J.A. Clark and J.L. Jacob, Networks of trust and distrust: Towards logical reputation systems, in: Logics in Security, (2010).

[30] W.T. Harwood, J.A. Clark and J.L. Jacob, A perspective on trust, security and autonomous systems, in: New Security Paradigms Workshop, (2010).

[31] S. Kaci, L.W.N. van der Torre and S. Villata, Preference in abstract argumentation, in: Computational Models of Argument – Proceedings of COMMA 2018, Warsaw, Poland, 12–14 September 2018, (2018), pp. 405–412.

[32] A. Koster, A.L.C. Bazzan and M. de Souza, Liar liar, pants on fire; or how to use subjective logic and argumentation to evaluate information from untrustworthy sources, Artif. Intell. Rev. 48(2) (2017), 219–235. doi:10.1007/s10462-016-9499-1.

[33] J. Maly, M. Truszczynski and S. Woltran, Preference orders on families of sets – when can impossibility results be avoided?, in: Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13–19, 2018, Stockholm, Sweden, (2018), pp. 433–439.

[34] J. Maly, M. Truszczynski and S. Woltran, Preference orders on families of sets – when can impossibility results be avoided?, J. Artif. Intell. Res. 66 (2019), 1147–1197. doi:10.1613/jair.1.11879.

[35] P. Matt and F. Toni, A game-theoretic measure of argument strength for abstract argumentation, in: Logics in Artificial Intelligence, 11th European Conference, JELIA 2008, Dresden, Germany, September 28 – October 1, 2008, Proceedings, (2008), pp. 285–297.

[36] S. Modgil and H. Prakken, The ASPIC+ framework for structured argumentation: A tutorial, Argument & Computation 5(1) (2014), 31–62. doi:10.1080/19462166.2013.869766.

[37] D. Nute, Defeasible reasoning: A philosophical analysis in Prolog, in: Aspects of Artificial Intelligence, J.H. Fetzer, ed., Kluwer Academic Pub., (1988), pp. 251–288. doi:10.1007/978-94-009-2699-8_9.

[38] S. Parsons, E. Sklar and P. McBurney, Using argumentation to reason with and about trust, in: Argumentation in Multi-Agent Systems – 8th International Workshop, ArgMAS 2011, Taipei, Taiwan, May 3, 2011, Revised Selected Papers, (2011), pp. 194–212.

[39] H. Prade, A qualitative bipolar argumentative view of trust, in: Scalable Uncertainty Management, First International Conference, SUM 2007, Washington, DC, USA, October 10–12, 2007, Proceedings, (2007), pp. 268–276. doi:10.1007/978-3-540-75410-7_20.

[40] I. Rahwan and G.R. Simari (eds), Argumentation in Artificial Intelligence, Springer, (2009). doi:10.1007/978-0-387-98197-0.

[41] N. Tamani and M. Croitoru, A quantitative preference-based structured argumentation system for decision support, in: IEEE International Conference on Fuzzy Systems, FUZZ-IEEE 2014, Beijing, China, July 6–11, 2014, IEEE, (2014), pp. 1408–1415.

[42] L.H. Tamargo, A.J. García, M.A. Falappa and G.R. Simari, On the revision of informant credibility orders, Artificial Intelligence 212 (2014), 36–58. doi:10.1016/j.artint.2014.03.006.

[43] L.H. Tamargo, S. Gottifredi, A.J. García, M.A. Falappa and G.R. Simari, Deliberative DeLP agents with multiple informants, Inteligencia Artificial, Revista Iberoamericana de Inteligencia Artificial 15(49) (2012), 13–30.

[44] Y. Tang, K. Cai, P. McBurney, E. Sklar and S. Parsons, Using argumentation to reason about trust and belief, J. Log. Comput. 22(5) (2012), 979–1018. doi:10.1093/logcom/exr038.

[45] J.C. Teze, S. Gottifredi, A.J. García and G.R. Simari, An approach to generalizing the handling of preferences in argumentation-based decision-making systems, Knowl.-Based Syst. 189 (2020).

[46] F. Toni, A tutorial on assumption-based argumentation, Argument & Computation 5(1) (2014), 89–117. doi:10.1080/19462166.2013.869878.

[47] M. Tucat, A.J. García and G.R. Simari, Using Defeasible Logic Programming with contextual queries for developing recommender servers, in: The Uses of Computational Argumentation, Papers from the 2009 AAAI Fall Symposium, Arlington, Virginia, USA, November 5–7, 2009, (2009).

[48] S. Villata, G. Boella, D.M. Gabbay and L.W.N. van der Torre, A socio-cognitive model of trust using argumentation theory, Int. J. Approx. Reason. 54(4) (2013), 541–559. doi:10.1016/j.ijar.2012.09.001.