
Assessing communication strategies in argumentation-based negotiation agents equipped with belief revision

Abstract

The importance of negotiation has increased in recent years, as it is a relevant form of interaction for solving conflicts in multi-agent systems. Although there are many different scenarios, a typical negotiating situation involves two cooperative agents that cannot reach their goals by themselves because they lack some of the resources needed to do so. Therefore, a way to improve their mutual benefit is to start a negotiation dialogue, taking into account that they might have incomplete or incorrect beliefs about the other agent's goals and resources. The exchange of arguments during the negotiation gives them information that makes it possible to update their beliefs, and consequently they can offer proposals that are closer to reaching a deal. In order to formalize their proposals in a negotiation setting, the agents must be able to generate, select and evaluate arguments associated with such offers, updating their mental state accordingly. We situate our work in this kind of scenario, with two argumentation-based negotiation agents equipped with belief revision operations for the generation and interpretation of arguments. It has been shown that agents that take advantage of belief revision during the negotiation achieve an overall better performance. Because the belief revision process depends on the information the agents exchange in their utterances, in this paper we focus on the different communication strategies the agents may implement and the impact they have on the negotiation process. For this purpose, we present a negotiation protocol where the messages are extended to include a critique of the last proposal received together with a counterproposal. Also, we define proposals that may be more or less informative, containing different justifications. An intentional agent architecture is proposed and, following this model, different kinds of negotiating agents are created using diverse communication strategies. To assess the impact these strategies have on the negotiation process, several simulations are conducted and the results obtained are analyzed.

1.Introduction

In systems composed of multiple autonomous agents, negotiation has proven to be a relevant form of interaction that enables two or more agents to arrive at a mutual agreement regarding some belief, goal or plan [17]. A typical scenario for negotiation involves two agents who need to collaborate for mutual benefit. Even though there is no agreed approach to characterizing all negotiation frameworks, it has been argued [17] that automated negotiation research can be considered to deal with three broad topics: Negotiation Protocols (the set of rules that govern the interaction); Negotiation Objects (the range of issues over which agreement must be reached); and the characterization of the Agents' Decision Making Model (which accounts for the decision making apparatus the participants employ to act in line with the negotiation protocol in order to achieve their objectives).

Moreover, different approaches can be used to model negotiation in a multiagent (MAS) setting. In particular, three different kinds of approaches are usually distinguished: those which are game-theoretic [28], those which are heuristic-based [13], and finally those based on argumentation (argumentation-based negotiation or ABN for short). In this work we focus on the argumentation-based negotiation approach [4,10,20,25,26,31] that combines in a sound way several relevant aspects associated with representing the agents’ knowledge, assessing the strength and trust of their claims, tracing the exchanges of utterances in a negotiation dialogue, etc. (see e.g. [2,5]). In particular, ABN allows the negotiating agents not only to exchange offers but also reasons that support these offers in order to mutually influence their preference relation on the set of offers, and consequently the outcome of the dialogue. Moreover, as the agents that negotiate usually have incomplete beliefs about the others, the exchange of arguments gives them information that makes it possible to update their beliefs.

As pointed out in [26], in order to formalize their offers in a negotiation setting, ABN agents must be able to generate, select, interpret and evaluate arguments associated with such offers, updating their mental state accordingly. The authors also proposed the following set of principal components for the ABN architecture. The locution interpretation component parses incoming messages; these locutions usually contain a proposal, or an acceptance or rejection of a previous proposal. The proposal evaluation and generation component makes a decision about whether to accept, reject or generate a counterproposal, or even terminate the negotiation. The locution generation component sends the response to the relevant party. The argument interpretation component updates the agent's mental state accordingly. Finally, the argument generation mechanism is responsible for deciding what response to actually send to the counterpart and what (if any) arguments should accompany the response.

Our research is based on an ABN model which involves two cooperative agents. We will assume that each agent is benevolent (he will always try to do what is asked of him if he is able to do so) and truthful (i.e., he will not knowingly communicate false information). Besides, we will assume that the agents cannot reach their respective goals by themselves, so that they have to ask for help from one another. The agents can thus exchange different resources, including the knowledge associated with possible plans to reach their goals. The resulting negotiation dialogue is composed of an exchange of proposals, where every proposal adopts the form of an argument whose claim is a possible exchange (i.e., which resources the agent is asking for and what he is willing to offer in return). As the agents initially may have incomplete or incorrect beliefs about the other agent's goals and resources, during the negotiation process they update their beliefs and, consequently, their mental state, according to the arguments exchanged. Thus, in the context of the ABN framework previously described, we will follow the belief revision approach for both argument interpretation and argument generation proposed in [24], where we analyzed the impact of including belief revision for improving the overall negotiation process.

Besides the importance an agent must give to the information incoming through the received messages in a negotiation process, we want to explore the relevance of the information an agent communicates to his counterpart. In order to do this, in this work we extend the original negotiation model proposed in [24], allowing the agents to exchange more informative messages. An agent's illocutions may now also include a critique (in addition to a proposal), resulting in a more complex argument that can support the proposed exchange (justifying the demand, the offer or both). As a consequence, different kinds of agents may be defined using different communication strategies. These strategies help the agents determine what information to include in their utterances: an agent may be more or less communicative, giving an explanation of why he is not willing to accept a proposal (i.e., a critique) or explaining the reason for the proposed solution. Different simulations are conducted to show the impact that these communication strategies have on the negotiation process. Information transfer efficiency is assessed in terms of overall usefulness, the quantity of information disclosed and the negotiation duration.

Motivational example. For the rest of this article, we will use a slightly modified version of the well-known Home Improvement Agents Problem (HIA) as a motivational example [20]. We will assume that Ag1 and Ag2 are truthful and benevolent agents. Agent Ag1 has the goal of hanging a picture, and he has a screw and a hammer. Also, he knows how a hammer and a nail can be used to hang a picture, and how a screw and a screwdriver can be used to hang a mirror. Ag1 believes that Ag2 has a nail and a screwdriver (a correct, but incomplete, belief) and he believes that Ag2 knows how to repair a desk using a screw and a screwdriver (an incorrect belief). Finally, Ag1 believes that Ag2's goal is to repair a desk (an incorrect belief). On the other hand, agent Ag2 has the goal of hanging a mirror, and he has a nail, a screwdriver and the knowledge of how to hang a mirror using a hammer and a nail. Also, he believes that Ag1 has a screw (a correct belief) and the knowledge of how to hang a picture using a screw and a screwdriver (an incorrect belief). Neither Ag1 nor Ag2 can reach their goals on the basis of their own knowledge and resources. Consequently, they need to perform some exchanges in order to do so.

Our proposal aims at modelling how such exchanges can be determined by combining belief revision and communication strategies in an argumentation-based negotiation approach. In particular, our proposal relies on the characterization of belief revision operations to model the agent’s argument generation, where claims are part of the resources to be exchanged.

The remainder of this paper is structured as follows: in Section 2 we define the negotiation objects and protocol, and in Section 3 the agent architecture is modeled. Then, in Section 4 we formalize the agents' utterances and their components, and we also define the notions of solution and deal. In Section 5 we show how we equipped these negotiating agents with belief revision operators in the principal ABN functions: argument generation and interpretation. We also discuss some theoretical properties of our approach. In Section 5.6 we show how the HIA problem can be solved in the context of our proposal. Then, in Section 6, we present simulations of three types of agents in diverse negotiation scenarios, where different advantages and salient features of agents using belief revision can be assessed. Section 7 discusses related work, and finally, in Section 8, we present the main conclusions obtained and outline some future research topics.

2.Negotiation objects and protocol

In our negotiation scenario, two agents negotiate resources trying to reach a deal towards their goals. The agents alternate moves and, in each one, an agent makes a proposal to his counterpart together with some justification for the proposed exchange. In our approach, from the second move onwards the agents can add to their messages a critique of the last proposal received.

In order to characterize the negotiation elements, we consider a propositional language L, in which the following subsets are distinguished:

  • ObjectsL: a set of atoms representing objects, which are the resources an agent may have (e.g., nail, hammer).

  • GoalL: a set of atoms representing goals (e.g., hangMir represents the goal of hanging a mirror). This set is disjoint from the set of objects (i.e., ObjectsL ∩ GoalL = ∅).

  • PlansL: a set of propositional formulae encoding plans, which may involve objects for achieving a goal (e.g., nail ∧ hammer → hangPict). Formally, a plan pi is defined as pi = o1 ∧ ⋯ ∧ on → g, where oj ∈ ObjectsL, j = 1, …, n, and g ∈ GoalL.

In our approach, as in several areas of computer science, the term resources is considered in a broad sense and can represent anything that is needed to achieve something (e.g., memory, programs, commodities, services, time, money, etc.). Particularly, in this work the set of resources, denoted by ResourceL, will include plans for achieving goals, i.e., ResourceL = ObjectsL ∪ PlansL. The plans represent the agent's knowledge of how to use objects to reach a particular goal. Consequently, a plan will be considered a special kind of resource that the agent can share with others without consuming it. We assume that an agent can have infinite copies of each plan he knows. Given a set X ⊆ ResourceL, we will write Xo and Xp to distinguish the subset of objects and the subset of plans in X, respectively. Formally, Xo =def X ∩ ObjectsL and Xp =def X ∩ PlansL.
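To fix intuitions, here is a minimal Python sketch (our illustration, not part of the paper's formal model or its logic-programming implementation) that encodes objects and goals as strings, a plan o1 ∧ ⋯ ∧ on → g as a (premises, goal) pair, and the Xo/Xp split of a resource set; the same representation is reused in the sketches accompanying the later definitions:

    # Objects and goals are atoms (strings); a plan is a pair
    # (frozenset of premise objects, goal the plan achieves).
    hang_pict_plan = (frozenset({"nail", "hammer"}), "hangPict")

    def objects_of(resources):
        """Xo: the subset of objects in a resource set X."""
        return {r for r in resources if isinstance(r, str)}

    def plans_of(resources):
        """Xp: the subset of plans in a resource set X."""
        return {r for r in resources if isinstance(r, tuple)}

    X = {"hammer", "screw", hang_pict_plan}
    print(objects_of(X))  # {'hammer', 'screw'}
    print(plans_of(X))    # {(frozenset({'nail', 'hammer'}), 'hangPict')}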

Using this language, the agents exchange messages during the negotiation. A dialogue between two agents will be defined as a finite sequence of utterances, where the first one is a proposal (an argument in favor of some particular exchange). Then, the subsequent messages, alternating between the agents involved in the dialogue, will be composed of a possible critique followed by a proposal. The dialogue ends with accept (i.e., a successful negotiation: there is a deal) or withdraw (i.e., the negotiation failed: no deal is possible). The syntactic formalization of utterances and their components is presented in Fig. 2. Next, we define the negotiation dialogue.

Definition 2.1 (Negotiation dialogue).

A dialogue between agents Agi and Agj is a finite sequence of utterances [u1, …, un−1, un] where u1 is a proposal, ur = (critique, proposal) for 1 < r < n, and un ∈ {accept, withdraw}, such that: (1) there are no repeated utterances, i.e., us ≠ ut for s ≠ t, with s, t < n; (2) utterance uk with k > 1 is performed by agent Agi only if utterance uk−1 is performed by agent Agj (i.e., agents alternate moves). A dialogue will be initiated by Agi iff u1 is performed by Agi.

The contents of proposals and critiques will be defined in Section 4. Note that dialogues are guaranteed to be finite, as there is a finite set of possible combinations of proposals and utterance repetition is not allowed. We can see that the dialogue between agents Agi and Agj will be started by one of the agents with a proposal computed by his decision making apparatus (see Definition 3.4 below) using the Init function, followed by a pair (critique, counter-proposal) by the other agent computed by Answer, and so on. Without loss of generality, we assume agent Agi is the one who starts the negotiation dialogue. Figure 1 represents the negotiation dialogue flow initiated by Agi as a finite-state machine.

Fig. 1. Negotiation dialogue flow initiated by Agi.

3.Agent architecture

We model the negotiating agents as intentional ones, following the general architecture presented in [24], but making different improvements in the agents' decision making apparatus to generate, evaluate and interpret more complex utterances. Each agent will have in his mental state knowledge about his own resources (objects and plans) and goals, as well as beliefs about the other agent's resources and goals. In a more dynamic agent model, a planner may be included with the purpose of generating plans in real time according to the agent's goals (see for example [19]). In our approach, the plans are preconfigured as the agent's beliefs, and the agent then selects one of them to reach his current goal. The knowledge and beliefs an agent has about his context are represented using the language L previously defined. From the information available in his mental state, he will decide whether he accepts a received proposal or which pair (critique, proposal) he can offer the other agent in order to reach an agreement; otherwise, he will withdraw from the negotiation.

Definition 3.1 (Agent mental state).

Let two agents Agi, Agj be involved in a negotiation. The mental state (MS) of an agent Agi is a tuple MSi = ⟨Ri, Gi, BiRj, BiGj, Hi⟩, where Ri, BiRj ⊆ ResourceL; Gi, BiGj ⊆ GoalL; and Hi is the history of the negotiation.2

Thus, the mental state of Agi includes a set of available resources (Ri) the agent is willing to negotiate, a set of goals to achieve (Gi),3 as well as belief sets about which resources are available to the opponent agent Agj (BiRj) and which goals he believes agent Agj has (BiGj). The mental state also includes the history of the dialogue (see Definition 2.1) with Agj.

Example 3.2.

Consider the Motivational Example (HIA problem) given in Section 1. In the beginning of the negotiation process, Ag1’s mental state can be represented as MS1=R1,G1,B1R2,B1G2,H1 where:

R1 = {screw, hammer, screw ∧ screwDriver → hangMir, hammer ∧ nail → hangPict},
G1 = {hangPict},
B1R2 = {nail, screwDriver, screw ∧ screwDriver → repairDesk},
B1G2 = {repairDesk},
H1 = [ ].

In our negotiation scenario, the agents may have missing and incorrect beliefs about each other. From a global viewpoint we want to characterize the sets that account for the agent’s correct, incorrect and missing beliefs with respect to his counterpart’s resources. Formally:

Definition 3.3 (Missing, correct and incorrect beliefs).

Let Agi, Agj be two agents, and let Resourcej denote the set of resources Agj actually has. We will write Mi to denote the set of resources that Agi does not know that Agj has; Ti to denote the set of resources that Agi believes that Agj has and this is actually the case, i.e., such beliefs are correct; and Fi to denote the set of resources that Agi believes that Agj has and this is actually not the case, i.e., such beliefs are incorrect. Formally: Mi = Resourcej ∖ BiRj, Ti = BiRj ∩ Resourcej, and Fi = BiRj ∖ Resourcej.
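Under the set representation sketched above, these three sets are plain set differences and intersections, as the following Python illustration (ours, with hypothetical data) shows:

    def missing(BiRj, Rj):
        """Mi: resources Agj actually has that Agi does not know about."""
        return Rj - BiRj

    def correct(BiRj, Rj):
        """Ti: believed resources that Agj actually has."""
        return BiRj & Rj

    def incorrect(BiRj, Rj):
        """Fi: believed resources that Agj does not actually have."""
        return BiRj - Rj

    B1R2 = {"nail", "screwDriver", "planRepairDesk"}  # Ag1's beliefs about Ag2
    R2 = {"nail", "screwDriver", "planHangMir"}       # Ag2's actual resources
    print(missing(B1R2, R2), correct(B1R2, R2), incorrect(B1R2, R2))
    # {'planHangMir'} {'nail', 'screwDriver'} {'planRepairDesk'}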

The decision making apparatus the agents employ to act in order to achieve their objectives depends on their mental states (see Definition 3.1). This apparatus will be in charge of computing those messages the agent will send to the other agent.

As the first dialogue move associated with the initial utterance is a particular one, we will single it out by using an initialization function Init. Further messages are computed by another function Answer. Formally:

Definition 3.4 (Decision making apparatus).

The decision making apparatus of an agent Agi is a pair DMi = ⟨Initi, Answeri⟩, where

  • Initi : MSi → MSi × Utterance;

  • Answeri : MSi × Utterance → MSi × Utterance.

At this stage we purposely leave the actual definitions of Initi and Answeri unspecified; later on, in Section 5, we will provide their specification through high-level algorithms. Thus, in our approach, an ABN agent model will be composed of the agent's mental state and his decision making apparatus. Formally:

Definition 3.5 (Agent model).

An agent Agi is a tuple ⟨MSi, DMi⟩, where MSi is its mental state and DMi its decision making apparatus.

4.The agent’s utterances

Based on their mental states, the agents use their decision making apparatus to generate illocutions that contain proposals towards reaching their goals. In our approach, after the first move a message may also contain a critique of the last received proposal. Besides, a proposal is an argument that includes what the agent wants to receive (Y) and what the agent is willing to give in return (X), together with a possible justification (J) explaining why the agent needs what he is asking for (SY) or the beliefs supporting his offer (SX). In turn, each justification may be empty or a pair composed of a set of resources and a set of goals. The syntax for utterances and their components (i.e., proposal, solution, justification and critique) is shown in Fig. 2.

Fig. 2. Syntax for the agents' utterances.

The well formed proposals, according to the defined syntax (see Fig. 2) may be more or less informative, with the following intended meaning:

  • Proposal with no explanation (SX = SY = ∅):

    I propose that you provide me Y in exchange for X.

  • Proposals with partial justification (explaining the demand SY, or the offer SX):

    I propose that you provide me Y in exchange for X, because if I use EY then I can achieve G.

    I propose that you provide me Y in exchange for X, because I believe that if you use EX then you can achieve G′.

  • Proposals with complete explanation (SX and SY):

    I propose that you provide me Y because with EY I can achieve G; in exchange I offer you X, because I believe that if you use EX you can reach G′.

Note that an agent’s proposal can be thought of as an argument4 whose claim or solution is associated with a possible exchange of resources [[X,Y]]i – where Y represents what the agent needs to achieve his goals and X the resources he offers in exchange – together with its support, i.e., the reasons given for requesting and offering resources. The following definition formalizes this concept.

Definition 4.1 (Proposal).

Let Agi be an agent with mental state MSi = ⟨Ri, Gi, BiRj, BiGj, Hi⟩. A proposal performed by Agi, proposali, is a well formed proposal (i.e., as defined in Fig. 2), namely an argument ⟨J, [[X,Y]]i⟩, where [[X,Y]]i corresponds to the claim or solution of the argument and J = ⟨SX, SY⟩ provides the support associated with the claim. SY = (EY, G) justifies what the agent demands, or is empty, and SX = (EX, G′) justifies what he is offering in exchange, or is possibly empty, and the following conditions hold:

  • 1. X, EY ⊆ Ri; G ⊆ Gi;

  • 2. Y, EX ⊆ BiRj; G′ ⊆ BiGj;

  • 3. Y ∪ EY ⊢ G;

  • 4. X ∪ EX ⊢ G′;

  • 5. EY ⊬ G;

  • 6. EX ⊬ G′;

  • 7. Xo ∩ (Y ∪ EY) = ∅;

  • 8. Yo ∩ (X ∪ EX) = ∅.

Notice that (3) states that EY and Y are needed for the agent to reach the goal G; (5) means the agent cannot reach the goal using only EY; and (7) states that no object of X is needed by the agent to reach G – as it suffices to use Y ∪ EY to reach G, as stated in condition (3). In a similar way, (4), (6) and (8) represent the agent's beliefs with respect to what he is offering: (4) expresses that he believes his counterpart needs EX and X to reach the goal G′ (i.e., what he believes is his opponent's goal); (6) means he believes the other agent cannot reach that goal using only EX; and (8) states that no object of Y is needed by the counterpart to reach G′.5

Also, notice that both explanations may be empty, allowing the agent to decide which support to communicate to his counterpart.

The set of all the proposals an agent can generate is called Proposal.
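As an illustration of Definition 4.1, the following Python sketch (ours; the derives function approximates ⊢ by forward chaining over the plan fragment of L) checks conditions (1)–(8) for a candidate proposal, and verifies the proposal p1 of Example 4.2 below:

    def derives(resources, goals):
        """Approximate entailment: chain the plans in `resources` forward."""
        facts = {r for r in resources if isinstance(r, str)}
        plans = [r for r in resources if isinstance(r, tuple)]
        while True:
            new = {g for prem, g in plans if prem <= facts and g not in facts}
            if not new:
                return goals <= facts
            facts |= new

    def objects_of(resources):
        return {r for r in resources if isinstance(r, str)}

    def well_formed(X, Y, EY, EX, G, Gp, Ri, Gi, BiRj, BiGj):
        """Conditions (1)-(8) of Definition 4.1; Gp stands for G'."""
        return (X | EY <= Ri and G <= Gi             # (1)
                and Y | EX <= BiRj and Gp <= BiGj    # (2)
                and derives(Y | EY, G)               # (3)
                and derives(X | EX, Gp)              # (4)
                and not derives(EY, G)               # (5)
                and not derives(EX, Gp)              # (6)
                and not objects_of(X) & (Y | EY)     # (7)
                and not objects_of(Y) & (X | EX))    # (8)

    plan_pict = (frozenset({"nail", "hammer"}), "hangPict")
    plan_desk = (frozenset({"screw", "screwDriver"}), "repairDesk")
    print(well_formed(X={"screw"}, Y={"nail"},
                      EY={"hammer", plan_pict}, EX={"screwDriver", plan_desk},
                      G={"hangPict"}, Gp={"repairDesk"},
                      Ri={"screw", "hammer", plan_pict}, Gi={"hangPict"},
                      BiRj={"nail", "screwDriver", plan_desk},
                      BiGj={"repairDesk"}))
    # True: this is p1 of Example 4.2 (mental state trimmed to the
    # resources relevant to the proposal)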

Example 4.2 (Example 3.2 continued).

Suppose that in this scenario Ag1 begins the negotiation process by offering Ag2 the following proposal p1:

I propose that you provide me with a nail in exchange for a screw, because if I use a hammer and the knowledge about how to hang a picture using a nail and a hammer, then I can hang a picture. In exchange I offer you a screw because if you use your screwdriver and the knowledge about how to repair a desk with these resources, you can do it.

Then this proposal is denoted by p1 = ⟨(SX, SY), [[{screw}, {nail}]]1⟩, where the justifications associated with the solution are:

SX = ({screwDriver, screw ∧ screwDriver → repairDesk}, {repairDesk}) and
SY = ({hammer, nail ∧ hammer → hangPict}, {hangPict}).

In the first move the agent that starts the negotiation can only make a proposal to his counterpart, but in the following utterances the agents can reply with a critique to the received proposal, together with a counterproposal. The critiques may have different meanings and are defined as follows.

Definition 4.3 (Critique).

Let ⟨J, [[X,Y]]i⟩, where J = ⟨(EX, G′), (EY, G)⟩, be a proposal offered by Agi to Agj. We define a critique Cj expressed by agent Agj as ⟨C1, C2, C3⟩, following the syntax given in Fig. 2, where the following conditions hold:

  • C1: C1 ⊆ Y and C1 ∩ Rj = ∅, representing that the agent lacks the required objects C1.

  • C2: C2 ⊆ EX and C2 ∩ Rj = ∅, expressing that the agent does not have the resources C2 of the believed support EX.

  • C3: C3 ⊆ G′ and C3 ∩ Gj = ∅, communicating that C3 is not part of the agent's goals.

Notice that the critique C1 is oriented towards the original request (Y). On the other hand, C2 and C3 are critiques of the offer (X), due to incorrect beliefs that the proposing agent has. Besides, notice that the different components of a critique may be empty; the agent must then decide which kind of critique to include in his answer to his counterpart.

We also remark that, in an argumentation setting, a critique can be considered an attack on the last received proposal. In our approach we only consider one-level attacks, because there is no place for critiques of critiques (i.e., the agents are truthful and know with certainty which resources and goals they have).6
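A sketch of how the receiving agent could assemble a critique under Definition 4.3 (our Python illustration; an empty component means that kind of critique does not apply):

    def make_critique(Y, EX, Gp, Rj, Gj):
        """Maximal critique of a received proposal, from Agj's viewpoint."""
        C1 = Y - Rj    # requested resources the agent does not have
        C2 = EX - Rj   # believed support the agent does not have
        C3 = Gp - Gj   # believed goals that are not the agent's goals
        return C1, C2, C3

    # Ag2's critique of p1 (cf. Example 4.4 and the dialogue in Section 5.6):
    plan_desk = (frozenset({"screw", "screwDriver"}), "repairDesk")
    plan_mir = (frozenset({"hammer", "nail"}), "hangMir")
    print(make_critique(Y={"nail"}, EX={"screwDriver", plan_desk},
                        Gp={"repairDesk"},
                        Rj={"nail", "screwDriver", plan_mir},
                        Gj={"hangMir"}))
    # (set(), {plan_desk}, {'repairDesk'}): only C2 and C3 apply here,
    # since Ag2 does own the requested nail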

Example 4.4.

Following Example 4.2, Ag2 can answer Ag1 with respect to the received proposal p1 with some of the following critiques:

  • C1: I do not have the requested nail.

  • C2: I do not have the knowledge of how to repair a desk with a screw and a screwdriver (i.e., screw ∧ screwDriver → repairDesk).

  • C3: My goal is not to repair a desk.

4.1.Proposal evaluation: Solutions and deals

As previously mentioned, we assume agents Agi and Agj cannot reach their goals on their own, and therefore the problem each agent faces is to find a suitable exchange of resources in the space of possible exchanges (P(Ri) × P(Rj)) in order to reach his own goal. In this setting, a proposal can be thought of as an argument ⟨J, [[X,Y]]i⟩ supporting an exchange of resources. By definition, the pair of resources [[X,Y]]i provides a solution to reach Agi's goal. We define the function ⊙ that assigns to each proposal ⟨J, [[X,Y]]i⟩ its associated solution.7

Following [26], we assume that in our approach agents have an objective consideration when they evaluate proposals (i.e., they consider a proposal as a tentative proof to reach their goals, and they verify it by examining the validity of its underlying assumptions, such as resource availability). Since each agent is aware of his own resources and goals, he can determine first, in a selfish way, which are the exchanges that provide a solution for his problem. This is formalized in the following definition.

Definition 4.5 (Solution).

Let Agi be an agent involved in a negotiation, whose mental state is MSi = ⟨Ri, Gi, BiRj, BiGj, Hi⟩. A solution for Agi is any pair [[X,Y]]i, X, Y ⊆ ResourceL, such that:

  • 1. X ⊆ Ri;

  • 2. (Ri ∖ Xo) ∪ Y ⊢ Gi.

We will denote by Si the set of all possible solutions for Agi.

Note that X stands for those resources that Agi is willing to give to Agj, whereas Y is the set of resources that are given to Agi to achieve his goal. In a similar way Sj is defined. A deal for Agi and Agj will be a solution which is applicable for both of them, being formally defined as follows.

Definition 4.6 (Deal).

We will say that [[X,Y]]i, where X, Y ⊆ ResourceL, is a deal for Agi and Agj iff [[X,Y]]i ∈ Si ∧ [[Y,X]]j ∈ Sj. We will denote with D the set of all deals between Agi and Agj.

From the definitions presented before, the agents' evaluation process can be defined in a simple way as follows: if prop = ⟨J, [[X,Y]]i⟩ is an Agi proposal, then prop will be accepted by Agj if [[Y,X]]j ∈ Sj. Notice that a proposal prop will be accepted only if it is a deal.
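This evaluation step reduces to the solution test of Definition 4.5 with the roles of X and Y swapped, as in the following Python sketch (ours, reusing the forward-chaining derives of the earlier sketches):

    def derives(resources, goals):
        facts = {r for r in resources if isinstance(r, str)}
        plans = [r for r in resources if isinstance(r, tuple)]
        while True:
            new = {g for prem, g in plans if prem <= facts and g not in facts}
            if not new:
                return goals <= facts
            facts |= new

    def is_solution(give, receive, R, G):
        """Definition 4.5: give only owned resources and reach G with the
        remaining ones (objects given away are consumed, plans are not)."""
        given_objects = {r for r in give if isinstance(r, str)}
        return give <= R and derives((R - given_objects) | receive, G)

    def accepts(X, Y, Rj, Gj):
        """Agj accepts Agi's proposal [[X, Y]]i iff [[Y, X]]j is in Sj."""
        return is_solution(give=Y, receive=X, R=Rj, G=Gj)

    # Ag1 evaluating Ag2's final proposal in the dialogue of Section 5.6:
    plan_pict = (frozenset({"nail", "hammer"}), "hangPict")
    plan_mir = (frozenset({"screw", "screwDriver"}), "hangMir")
    R1 = {"screw", "hammer", plan_mir, plan_pict}
    print(accepts(X={"nail"}, Y={"screw", plan_mir}, Rj=R1, Gj={"hangPict"}))
    # True: giving the screw and the hangMir plan, Ag1 can still hang the picture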

On the other hand, the agent has beliefs about his counterpart's resources and goals that he can use to make exchange proposals. Then, in his proposal he can offer resources that he believes are useful for his opponent, and which are thus closer to reaching a deal. We formalize these ideas as follows.

Definition 4.7 (Belief of solution).

Let Agi and Agj be two agents and X, Y ⊆ ResourceL. We will say that Agi believes [[X,Y]]i is a solution for Agj whenever:

  • 1. Y ⊆ BiRj;

  • 2. (BiRj ∖ Yo) ∪ X ⊢ BiGj.

We will define BiSj = {[[X,Y]]i : Agi believes [[X,Y]]i is a solution for Agj}.

Definition 4.8 (Belief of deal).

Let Agi and Agj be two agents. We will say that Agi believes [[X,Y]]i is a deal iff:

  • 1. X ⊆ Ri;

  • 2. (Ri ∖ Xo) ∪ Y ⊢ Gi;

  • 3. Y ⊆ BiRj;

  • 4. (BiRj ∖ Yo) ∪ X ⊢ BiGj.

We will define BiD = {[[X,Y]]i : Agi believes [[X,Y]]i is a deal}.

Notice that Agi believes [[X,Y]]i is a deal iff it is a solution for him and he believes that it is a solution for his counterpart.

From Definitions 4.7 and 4.8 the following propositions hold:8

Proposition 4.9.

[[X,Y]]i ∈ Si and [[X,Y]]i ∈ BiSj iff [[X,Y]]i ∈ BiD.

Proposition 4.10.

[[X,Y]]i ∈ BiD and [[Y,X]]j ∈ Sj ⇒ [[X,Y]]i ∈ D.

Proposition 4.11.

[[X,Y]]i ∈ BiD and [[Y,X]]j ∈ BjD ⇒ [[X,Y]]i ∈ D.

Proposition 4.9 states that if a pair [[X,Y]]i is a solution for Agi and he believes that it is also a solution for Agj, then Agi believes that [[X,Y]]i is a deal, and the reciprocal also holds. Similarly, Proposition 4.10 asserts that if the agent Agi believes that [[X,Y]]i is a deal and [[Y,X]]j is also a solution for Agj, then [[X,Y]]i is a deal. Finally, Proposition 4.11 states that if Agi believes that [[X,Y]]i is a deal and Agj believes that [[Y,X]]j is a deal, then it holds that [[X,Y]]i is a deal.

Figure 3 shows the set of solutions and beliefs of solutions from the viewpoint of Agi. The dotted line represents the set Sj of solutions of Agj, which the agent does not know with total precision. Because of this, Agi cannot be sure of making a proposal prop such that ⊙(prop) ∈ D. So, in order to entice agent Agj to accept some proposed agreement, Agi must choose a proposal prop such that he believes that its associated solution is a deal, i.e., ⊙(prop) ∈ BiD. The closer the belief set BiSj is to Sj, the closer BiD will be to D.

Fig. 3. Solutions' space from Agi's viewpoint.

5.Agents equipped with belief revision

In this section, following the approach presented in [24], we implement belief revision in ABN agents to improve two important issues in the negotiation: proposal generation and proposal interpretation. All the information contained in an incoming proposal is used by an agent to revise his beliefs about his counterpart; then, by having more accurate beliefs, the agent can make proposals that are more likely to be accepted. It was shown in [24] that negotiating agents that implement complete belief revision in these processes (i.e., proposal interpretation and generation) lead the negotiation to better results. In our current work we have improved this agent model equipped with belief revision so that it is capable of generating and interpreting more complex utterances, and we focus our research on the impact that different communication strategies have on the negotiation process. In order to make our analysis self-contained, we will summarize some notions of belief change theory that we apply in our agent model.

5.1.Belief revision operators

Classic belief change operations introduced in the AGM model [1] are known as expansions, contractions and revisions. An expansion incorporates a new belief without warranting the consistency of the resulting epistemic state. A contraction eliminates a belief α from the epistemic state as well as all those beliefs that make the inference of α possible. Finally, a revision incorporates a new belief α to the epistemic state warranting a consistent result, assuming that α itself is consistent.

As discussed before, in our setting we assume that the agents have their own beliefs about the other agent's resources and goals. It must be noted that the sets of resources and objectives do not change during the negotiation: only if a deal succeeds at the end of the negotiation process will the actual exchange of resources take place and, consequently, the sets X and Y change. In order to model such a negotiation process in terms of belief revision, we will use the notions of Choice Kernel Set and Multiple Kernel contraction [14,16]. These notions will be useful for providing a practical approach to belief revision in our context. We provide below a brief review of the formal definitions involved.

Definition 5.1 (Choice Kernel Set, from [14]).

Let L be a logical language, R and G finite subsets of L, and Cn a consequence operator. Then R ⊥⊥ G is the set of all X ⊆ R such that:

  • 1. G ⊆ Cn(X);

  • 2. if Y ⊂ X then G ⊈ Cn(Y).

The set R ⊥⊥ G is called a Choice Kernel Set, and its elements are called G-kernels of R.

Informally, a Choice Kernel Set collects the minimal belief subsets of the epistemic state from which G can be deduced. An element of R contributes to making R imply G if and only if it is an element of some G-kernel of R. Therefore, by removing at least one element of each G-kernel of R, it is no longer possible to derive G. The function that selects the sentences to be removed will be called an incision function, since it makes an incision into every G-kernel.

Definition 5.2 (Incision function, from [14]).

A function σ is an incision function for R iff it satisfies, for all G:

  • 1. σ(R ⊥⊥ G) ⊆ ⋃(R ⊥⊥ G);

  • 2. if ∅ ≠ X ∈ R ⊥⊥ G, then X ∩ σ(R ⊥⊥ G) ≠ ∅.

The Multiple Kernel contraction operator removes the elements selected by an incision function. Formally:

Definition 5.3 (Multiple Kernel contraction, from [14]).

Let σ be an incision function for R and G a finite subset of L. The multiple kernel contraction ≈ for R is defined as: R ≈ G = R ∖ σ(R ⊥⊥ G).

Next, a revision operator is expressed using two sub-operations: first a contraction and then an expansion (i.e., adding G to the resulting set).

Definition 5.4 (Revision operator, from [16]).

Let ≈ be a multiple kernel contraction. Given a finite set of sentences R, we define for any finite set G the revision operator ∗: R ∗ G = (R ≈ ¬G) ∪ G, where ¬G =def {¬gi : gi ∈ G}.

Contracting by the finite set ¬G amounts to contracting by the set formed by the negations of the elements in G.
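For the finite sets handled in this paper, the kernel operations can be computed by brute-force enumeration. The following Python sketch is our illustration, restricted to the negation-free plan fragment used in the earlier sketches (so revision reduces to expansion) and using an arbitrary incision function, which the formal definitions leave open:

    from itertools import combinations

    def derives(resources, goals):
        facts = {r for r in resources if isinstance(r, str)}
        plans = [r for r in resources if isinstance(r, tuple)]
        while True:
            new = {g for prem, g in plans if prem <= facts and g not in facts}
            if not new:
                return goals <= facts
            facts |= new

    def kernels(R, G):
        """R ⊥⊥ G: all minimal subsets of R from which G can be derived."""
        hits = [frozenset(c) for n in range(len(R) + 1)
                for c in combinations(R, n) if derives(set(c), G)]
        return {X for X in hits if not any(Y < X for Y in hits)}

    def incision(kernel_set):
        """A simple incision function: cut one element from every kernel."""
        return {min(X, key=str) for X in kernel_set if X}

    def contract(R, G):
        """Multiple kernel contraction: R ≈ G = R ∖ σ(R ⊥⊥ G)."""
        return set(R) - incision(kernels(R, G))

    def revise(R, G):
        """Definition 5.4 in this negation-free fragment: there is nothing
        to contract, so revision reduces to adding G (an expansion)."""
        return set(R) | set(G)

    plan = (frozenset({"nail", "hammer"}), "hangPict")
    R = {"nail", "hammer", plan}
    print(contract(R, {"hangPict"}))  # one kernel element removed: G is lost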

5.2.Argument generation

The beliefs a particular agent has about the other agent's resources and goals are significant for proposal generation during negotiation, as they can help in reaching a deal. From this information, an agent can infer which proposals he believes are more suitable for the other agent and, consequently, more likely to be accepted. These notions were formalized through the definitions of solution and belief of solution (Definitions 4.5 and 4.7). To generate the arguments an agent can give to his counterpart, we define the function Gen, which was first introduced in [24] and which we implement here with some necessary adaptations for our negotiation model. This function allows computing the proposals that are solutions for Agi (i.e., ⊙(prop) ∈ Si) and the proposals that are potential solutions for Agj (i.e., ⊙(prop) ∈ BiSj). The Gen function is specified using belief revision operations, and some properties that follow from its specification are given.

Definition 5.5 (Gen).

Let R, R′ ⊆ ResourceL and G ⊆ GoalL. We define a function Gen as follows:

Gen(R, R′, G, i) =def {⟨(EY, G), [[X,Y]]i⟩ : Y ∩ R = ∅, EY ⊆ R, (EY ∪ Y) ∈ (R ∪ R′ ∪ Y) ⊥⊥ G, X ⊆ R ∖ EY}.

The Gen function receives two sets of resources (R and R′) and a set of goals (G).9 As an outcome, it generates a set of proposals propY = ⟨(EY, G), [[X,Y]]i⟩ justifying only what the agent Agi demands (i.e., SY, with SX = ∅), where Y and the first set of resources (R) are disjoint sets but EY is a subset of R. The union of Y and EY is a minimal set from which G can be deduced. The set X corresponds to the resources of R left unused for achieving G.

If the Gen function is executed with appropriate arguments, representing Agi's beliefs with respect to his counterpart's resources and goals, then we can obtain another set of proposals, which are the believed solutions for his opponent and include a justification of what the agent is offering (X). Namely, Gen(BiRj, Ri, BiGj, j) generates a set of proposals propX = ⟨(EX, G′), [[X,Y]]i⟩.
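A simplified Python sketch of Gen (ours): it restricts the demanded set Y to believed counterpart resources (enumerating kernels of R ∪ R′ only, a special case of the definition) and takes the largest admissible offer X = R ∖ EY:

    from itertools import combinations

    def derives(resources, goals):
        facts = {r for r in resources if isinstance(r, str)}
        plans = [r for r in resources if isinstance(r, tuple)]
        while True:
            new = {g for prem, g in plans if prem <= facts and g not in facts}
            if not new:
                return goals <= facts
            facts |= new

    def kernels(R, G):
        hits = [frozenset(c) for n in range(len(R) + 1)
                for c in combinations(R, n) if derives(set(c), G)]
        return {X for X in hits if not any(Y < X for Y in hits)}

    def gen(R, Rp, G):
        """Proposals ((EY, G), [[X, Y]]) built from kernels of R ∪ R′ for G."""
        props = []
        for K in kernels(R | Rp, G):
            EY, Y = K & R, K - R
            if Y:  # the agent must demand something: he cannot reach G alone
                props.append(((frozenset(EY), frozenset(G)),
                              (frozenset(R - EY), frozenset(Y))))
        return props

    plan_pict = (frozenset({"nail", "hammer"}), "hangPict")
    plan_mir = (frozenset({"screw", "screwDriver"}), "hangMir")
    R1 = {"screw", "hammer", plan_mir, plan_pict}
    for p in gen(R1, {"nail", "screwDriver"}, {"hangPict"}):
        print(p)  # EY = {hammer, plan_pict}, Y = {nail}, X = {screw, plan_mir}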

Proposition 5.6.

Given an agent Agi whose mental state is MSi = ⟨Ri, Gi, BiRj, BiGj, Hi⟩, the following holds:

  • (1) If propY ∈ Gen(Ri, BiRj, Gi, i), then propY ∈ Proposal and ⊙(propY) ∈ Si;

  • (2) If propX ∈ Gen(BiRj, Ri, BiGj, j), then propX ∈ Proposal and ⊙(propX) ∈ BiSj.

Condition (1) establishes that the Gen function computes all the minimal proposals that are solutions for Agi from his point of view, namely, using as parameters his resources (Ri), his beliefs about the other agent's resources (BiRj) and his goals (Gi). These proposals carry the justification of what the agent demands. On the other hand, in (2) the Gen function computes the proposals that Agi thinks are solutions for Agj, i.e., using as parameters his beliefs about the other agent's resources (BiRj), his own resources (Ri) and his beliefs about the other agent's goals (BiGj). In this case, the function returns the justification of what the agent is offering. In summary, Proposition 5.6 shows that the possible proposals that can be generated via an implementation of Gen are potential solutions for the negotiation problem between the agents involved. To compute a proposal with a complete justification, the agents properly combine the information obtained in propX and propY when they share a common solution ⊙(prop). This process is implemented through high-level algorithms (see Section 5.5).

From the set of possible proposals obtained by the Gen function, the agent must select one exchange to configure the argument he will communicate to the other agent in his utterance. He must also decide which justification to include and whether or not to make a critique of the last proposal received.

5.3.Utterance selection: Argument and critique

In our approach, after the first illocution the agent's utterances may be composed of a critique and a proposal: (Critique, Proposal). The critique and proposal communicated by Agi are respectively of the form C = ⟨C1, C2, C3⟩i and ⟨J, [[X,Y]]i⟩. The utterance selection mechanism of the negotiating agents will be in charge of the following actions: (i) with respect to the argument or proposal, to select – from a given set of possible exchanges (i.e., pairs [[X,Y]]i) that an agent may send to his counterpart – the one which is most appropriate from his point of view, and to decide whether to include a justification (i.e., SX or SY) in the argument J; and (ii) to decide whether it is convenient for him to make a critique and, if so, which one to choose (i.e., C1, C2 or C3).

Argument selection. Different selection mechanisms may be defined for each negotiating agent; an overview of some relevant approaches can be found in [26]. Reference [30] introduces a negotiation model based on an information-based measure (representing the information gain) and a utility-based function (representing the utility gain). The negotiation strategies are based on two primitive concepts: intimacy (degree of closeness) and balance (degree of fairness). Arguments are selected in order to obtain a successful deal and to reach a target intimacy level.

In our approach, inspired by [30], we propose an agent selection mechanism based on an Information function I : H × Proposal → ℝ (where H stands for the history of the negotiation, see Definition 3.1) and a Utility function U : Proposal → ℝ. Diverse selection mechanisms can be defined by combining these functions to represent different agent behaviors. According to the agent's personality and the social relation he has with his counterpart, the function combining I and U may be defined in a suitable way. For example, we can propose a possible selection function as follows: agents select the proposal prop ∈ Proposal that maximizes a weighted sum λU·U(prop) + λI·I(prop); in case of more than one maximum, the proposal is randomly chosen among them. For simplicity, we can suppose that both agents use the same utility and information functions, but each agent may consider different weights, which stand for different kinds of agents (some possible alternatives are shown in our running example in Section 5.6).

The Utility function for Agi (respectively for Agj) with respect to prop = ⟨(SX, SY), [[X,Y]]i⟩ may be defined as the difference between the cost of the resources to be received and the cost of those offered in exchange:

Ui(prop) = Σr∈Y Cost(r) − Σr∈X Cost(r)

and the agents' Information function is defined as:

Ii(H, prop) = Σr∈Y 1get(H, r) + Σr∈X 1give(H, r) + Σr∈EY 1own(H, r) + Σr∈G 1goal(H, r),

where 1get(H, r) returns 1 if for all ⟨(EX, G′), (EY, G), [[X,Y]]i⟩ ∈ H we have r ∉ Y, and 0 otherwise. In a similar way, 1give, 1own and 1goal are defined. The intuition is that, given a dialogue H, a proposal prop is more informative if its elements were not stated in previous locutions.
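A sketch of this selection rule in Python (ours; a proposal is a 4-tuple (EY, G, X, Y) of sets and H a list of past proposals in the same form, with Cost given as a dictionary like the one in Section 5.6):

    import random

    def utility(prop, cost):
        EY, G, X, Y = prop
        return sum(cost[r] for r in Y) - sum(cost[r] for r in X)

    def information(prop, H):
        EY, G, X, Y = prop
        def fresh(elem, slot):
            # 1 iff elem never occurred in that slot of a past proposal
            return all(elem not in past[slot] for past in H)
        return (sum(fresh(r, 3) for r in Y) + sum(fresh(r, 2) for r in X)
                + sum(fresh(r, 0) for r in EY) + sum(fresh(g, 1) for g in G))

    def select(proposals, H, cost, lam_u, lam_i):
        """Pick a maximizer of the weighted sum, at random among ties."""
        def score(p):
            return lam_u * utility(p, cost) + lam_i * information(p, H)
        best = max(score(p) for p in proposals)
        return random.choice([p for p in proposals if score(p) == best])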

Critique selection. After receiving a proposal in an incoming utterance, an agent can make, in the next move, different kinds of critiques. For example, if agent Agj receives from Agi the proposal prop = ⟨(SX, SY), [[X,Y]]i⟩, Agj can always select critique C1 if the appropriate conditions hold, but he can only select C2 or C3 if the justification SX was given in the last argument proposed by Agi (i.e., SX ≠ ∅).

Different types of agents may be defined considering diverse critique strategies. For instance, more critical agents will communicate all the possible critiques, whereas more reserved ones may communicate only some of them, or none. We show the performance of different kinds of agents using diverse critique strategies in the simulations we have conducted (see Section 6).

5.4.Utterance interpretation

When an agent receives an incoming utterance, an interpretation mechanism must be invoked in order to update the agent's mental state accordingly. As an utterance is composed of a proposal and a possible critique, i.e., ur = (critique, proposal), the agent can take advantage of both parts of the received message to update his beliefs.

Argument interpretation. In our framework, the proposal interpretation is based on the following intuition: since agents are truthful, benevolent and aware of their own resources, when an agent Agj receives a proposal prop = ⟨(EX, G′), (EY, G), [[X,Y]]i⟩ from Agi, then Agj can infer the following information:

  • (1) If Agi asks for Y, then Agj believes Agi does not have Y as a resource.

  • (2) If Agi uses EY, then Agj believes Agi has EY as a resource.

  • (3) If Agi offers X, then Agj believes Agi has X as a resource.

  • (4) If Agi wants to reach G, then Agj believes Agi has G as a goal.

Then, the agents will change their beliefs according to the intuitions presented before, using belief revision operations. Let contract and revise be implementations of the operators ≈ and ∗, respectively (see Definitions 5.3 and 5.4), and prop = ⟨(EX, G′), (EY, G), [[X,Y]]i⟩ an Agi proposal received by Agj. The following steps, which can be seen as variable assignments, implement the agent's interpretation process:

  • (1) BjRi ← contract(BjRi, Y).

  • (2) BjRi ← revise(BjRi, EY).

  • (3) BjRi ← revise(BjRi, X).

  • (4) BjGi ← revise(BjGi, G).

Notice that in our approach the agent's mental state does not represent beliefs about his counterpart's beliefs (e.g., beliefs of the form BiBjβ). Thus, an agent cannot use the support EX of what his counterpart is offering to make a revision of this kind of belief.

Critique interpretation. When an agent Agj receives a critique C = ⟨C1, C2, C3⟩i from Agi to his last proposal, then Agj can infer the following information:

  • (1) If C1 ≠ ∅ (with C1 ∩ Ri = ∅), then Agj believes Agi does not have the resources C1.

  • (2) If C2 ≠ ∅ (with C2 ∩ Ri = ∅), then Agj believes Agi does not have the resources C2.

  • (3) If C3 ≠ ∅ (with C3 ∩ Gi = ∅), then Agj believes that C3 is not part of Agi's goals.

Using belief revision operations, Agj will change his beliefs according to the information received in the critiques. In this case only the contract operator is needed and the agent’s interpretation is as follows:

  • (1) BjRi ← contract(BjRi, C1).

  • (2) BjRi ← contract(BjRi, C2).

  • (3) BjGi ← contract(BjGi, C3).

In this way, an agent that takes full advantage of the utterance interpretation process can bring the computation of the belief set BiSj closer to Sj and, consequently, the resulting set of possible deals BiD closer to D as well (as illustrated in Fig. 3).
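For the resource and goal sets used here, contract and revise reduce to removing and adding elements (see the kernel sketch in Section 5.1 for the general case), so the whole interpretation step can be sketched in Python as follows (our illustration):

    def interpret(BjRi, BjGi, prop, critique):
        """Update Agj's beliefs about Agi from an incoming utterance."""
        EY, G, X, Y = prop
        C1, C2, C3 = critique
        BjRi = (BjRi - Y) | EY | X   # proposal interpretation, steps (1)-(3)
        BjGi = BjGi | G              # proposal interpretation, step (4)
        BjRi = BjRi - C1 - C2        # critique interpretation, steps (1)-(2)
        BjGi = BjGi - C3             # critique interpretation, step (3)
        return BjRi, BjGi

    # Ag2 interpreting p1 of Example 4.2 (no critique in the first move):
    plan_pict = (frozenset({"nail", "hammer"}), "hangPict")
    wrong_plan = (frozenset({"screw", "screwDriver"}), "hangPict")
    print(interpret({"screw", wrong_plan}, set(),
                    ({"hammer", plan_pict}, {"hangPict"}, {"screw"}, {"nail"}),
                    (set(), set(), set())))
    # Ag2 now also believes Ag1 owns the hammer and the hangPict plan,
    # and that Ag1 has hangPict as a goal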

5.5.The agent’s decision model

We have implemented the agents' decision making apparatus (defined in Section 3) by using two algorithms, Init and Answer. The algorithm Init is in charge of starting the negotiation. First, it selects a proposal (including a justification) that the agent Agi believes is a deal (i.e., whose solution is in BiD) and which has not been proposed before. If such a proposal does not exist, it tries to send a proposal associated with his own solutions (Si). If this fails, the agent sends a withdraw message. In turn, Answer receives a proposal and a possible critique. First, the agent's beliefs are revised and it is checked whether the associated solution of the proposal is a solution to the agent's problem, in which case the proposal is accepted. If that is not the case, a critique selection mechanism is computed and Init is invoked to generate a new proposal. High-level algorithms for Initi and Answeri are given next.

Algorithm 1. Init

Algorithm 2. Answer

Algorithm 1: In line 1, the function Gen (i.e., a suitable implementation of the Gen function specified in Definition 5.5) is used to compute the set of proposals propSetY whose associated solutions belong to Si (see Proposition 5.6). Similarly, in line 2, Gen is used to compute the set of proposals propSetX whose associated solutions the agent believes belong to BiSj (see Proposition 5.6). In line 3, the set propSetXY is computed by combining the proposals of propSetX and propSetY whose associated solutions are the same and which are, therefore, potential deals. In line 4, those proposals that have been offered before are discarded. The select function chooses one proposal out of the set propSet of possible candidates. Finally, the selected prop is added to H.

Algorithm 2: In line 1, the history H is updated. Then, in lines 2–8, the agent updates his mental state following the utterance interpretation steps presented in Section 5.4. In lines 9–10, the set propSetY is computed and the agent checks whether the associated solution of the received proposal (prop) is a solution for him. Then, in line 13, the critique is generated. Finally, in line 14, generating a counter-proposal requires executing the same lines of code as in Init (as Init generates proposals); therefore, for the sake of simplicity and in order to avoid repeating code, a call to Init is used.
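Since the actual system is implemented in logic programming, the following compact, runnable Python sketch (ours) only illustrates the control flow of Algorithms 1 and 2: justifications, critiques and the weighted selection function are omitted, and the helper exchanges is a simplified stand-in for Gen:

    from itertools import combinations

    def derives(res, goals):
        facts = {r for r in res if isinstance(r, str)}
        plans = [r for r in res if isinstance(r, tuple)]
        while True:
            new = {g for p, g in plans if p <= facts and g not in facts}
            if not new:
                return goals <= facts
            facts |= new

    def exchanges(R, BRj, G):
        """Candidate [[X, Y]] pairs built from minimal goal-achieving sets."""
        pool = R | BRj
        hits = [frozenset(c) for n in range(len(pool) + 1)
                for c in combinations(pool, n) if derives(set(c), G)]
        minimal = [K for K in hits if not any(J < K for J in hits)]
        return [(frozenset(R - K), frozenset(K - R)) for K in minimal if K - R]

    def init(ms):
        cands = [p for p in exchanges(ms["R"], ms["BRj"], ms["G"])
                 if p not in ms["H"]]
        if not cands:
            return "withdraw"
        prop = cands[0]                      # selection function omitted
        ms["H"].append(prop)
        return prop

    def answer(ms, prop):
        X, Y = prop
        ms["H"].append(prop)
        ms["BRj"] = (ms["BRj"] - Y) | X      # simplified interpretation
        given = {r for r in Y if isinstance(r, str)}
        if Y <= ms["R"] and derives((ms["R"] - given) | X, ms["G"]):
            return "accept"                  # [[Y, X]]j is a solution
        return init(ms)                      # counter-proposal via Init

    plan_pict = (frozenset({"nail", "hammer"}), "hangPict")
    plan_mir1 = (frozenset({"screw", "screwDriver"}), "hangMir")
    plan_mir2 = (frozenset({"nail", "hammer"}), "hangMir")
    ag1 = {"R": {"screw", "hammer", plan_mir1, plan_pict},
           "G": {"hangPict"}, "BRj": {"nail", "screwDriver"}, "H": []}
    ag2 = {"R": {"nail", "screwDriver", plan_mir2},
           "G": {"hangMir"}, "BRj": {"screw"}, "H": []}
    u = init(ag1)
    print("Ag1 proposes:", u)
    print("Ag2 answers:", answer(ag2, u))  # accept: the offer includes plan_mir1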

The proposed argumentation-based negotiation framework for two agents equipped with belief revision has been implemented using logic programming, following the algorithms presented above. Based on such algorithms, concrete negotiating agents can be specified by instantiating their mental state and setting the selection function, in charge of choosing the proposal to negotiate, and the communication strategy, responsible for deciding the justification and the critique to expose.

5.6.Running example: The HIA problem revisited

We consider the modified version of the Home Improvement Agents example [20] presented in Section 1, as a case study of our approach. We will assume that Ag1 and Ag2 are two collaborative agents that negotiate for mutual benefit. In the proposed scenario, Ag1 has the following initial mental state:

R1 = {screw, hammer, screw ∧ screwDriver → hangMir, hammer ∧ nail → hangPict},
G1 = {hangPict},
B1R2 = {nail, screwDriver, screw ∧ screwDriver → repairDesk},
B1G2 = {repairDesk},
H1 = [ ]

and Ag2 has as initial mental state:

R2 = {nail, screwDriver, hammer ∧ nail → hangMir},
G2 = {hangMir},
B2R1 = {screw, screw ∧ screwDriver → hangPict},
B2G1 = { },
H2 = [ ].

In this example, we consider that the agents select the proposal prop ∈ Proposal that maximizes the weighted sum defined in Section 5.3: λU·U(prop) + λI·I(prop).

For Ag1 the weights are λU = 0.25 and λI = 2, prioritizing the proposals that are more informative; for Ag2, λU = 2 and λI = 0.25, preferring the proposals with a higher utility value. We also assume that the different resources they negotiate have the same costs for both agents:

Cost = {(hangMir, 10), (hangPict, 10), (repairDesk, 10), (nail, 2), (screw, 2), (hammer, 4), (screwDriver, 4), (screw ∧ screwDriver → repairDesk, 8), (screw ∧ screwDriver → hangMir, 8), (hammer ∧ nail → hangPict, 8), (screw ∧ screwDriver → hangPict, 8)}.

Both agents are equipped with full belief revision. Regarding the communication strategy, they give complete justifications and include all the possible critiques (C1, C2 and C3) in their utterances. The whole dialogue obtained with the negotiation program for this scenario is the following:

  • Ag1 Proposal: I propose you provide me [nail] because with [hammer, nail ∧ hammer => hangPict] I can achieve [hangPict] in exchange I offer you [screw] because I Believe if you use [screwDriver, screwDriver ∧ screw => repairDesk] you can achieve [repairDesk]

  • Ag2 Critique: I do not have [screwDriver ∧ screw => repairDesk] and my goal is not [repairDesk]

    Proposal: I propose you provide me [hammer] because with [nail, nail ∧ hammer => hangMir] I can achieve [hangMir] in exchange I offer you [screwDriver] because I Believe if you use [screw, screwDriver ∧ screw => hangPict] you can achieve [hangPict]

  • Ag1 Critique: I do not have [screwDriver ∧ screw => hangPict]

    Proposal: I propose you provide me [hangPict] because with [] I can achieve [hangPict] in exchange I offer you [screw, screwDriver ∧ screw => hangMir] because I Believe if you use [screwDriver] you can achieve [hangMir]

  • Ag2 Critique: I do not have the requested [hangPict]

    Proposal: I propose you provide me [screw, screwDriver ∧ screw => hangMir] because with [screwDriver] I can achieve [hangMir] in exchange I offer you [nail] because I Believe if you use [hammer, nail ∧ hammer => hangPict] you can achieve [hangPict]

  • Ag1 Proposal: Accept

6.Simulations to assess different communication strategies

Simulations of bilateral negotiation were carried out considering different scenarios, to assess the benefits of using different communication strategies in agents equipped with full belief revision, i.e., agents using all the information received in the last message to update their mental state.

Generating the scenarios. All the simulations we have conducted are based on 100 randomly generated negotiation scenarios. The process for generating a scenario is based on randomly selecting the goals G1, G2 ⊆ GoalL for each agent, then generating three disjoint sets of resources F, S, T ⊆ ResourceL such that F ⊢ G1, S ⊢ G2 and T ⊢ G1 ∪ G2. Then, the mental states for Ag1 and Ag2 were defined as MS1 = ⟨R1, G1, B1R2, B1G2, H1⟩ and MS2 = ⟨R2, G2, B2R1, B2G1, H2⟩ such that:

  • (1) R1 = F1 ∪ S1 ∪ T1 and R2 = F2 ∪ S2 ∪ T2, where F1, F2 (resp. S1, S2, and T1, T2) are partitions of F (resp. S and T).

  • (2) B1R2 ⊆ R2 ∖ R1, B2R1 ⊆ R1 ∖ R2.

  • (3) B1G2 = G1, B2G1 = G2.

  • (4) H1 = H2 = [ ].

We can see that F = F1 ∪ F2 is a solution for Ag1, S = S1 ∪ S2 is a solution for Ag2, and T = T1 ∪ T2 can be a solution for both agents. With this allocation of resources and agents' beliefs, we ensure that initially no agent can achieve his own goal by himself and that both agents have incomplete and incorrect beliefs about their counterpart.
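A Python sketch of this generation scheme (ours; resources are opaque tokens, and F, S and T are assumed to be pre-built so that they entail the respective goals, that entailment machinery being elided here):

    import random

    def halve(resources):
        """Randomly split a set into the two cells of a partition."""
        items = list(resources)
        random.shuffle(items)
        k = random.randint(0, len(items))
        return set(items[:k]), set(items[k:])

    def make_scenario(F, S, T, G1, G2):
        F1, F2 = halve(F)
        S1, S2 = halve(S)
        T1, T2 = halve(T)
        R1, R2 = F1 | S1 | T1, F2 | S2 | T2          # condition (1)
        B1R2 = set(random.sample(sorted(R2 - R1),    # condition (2):
                                 len(R2 - R1) // 2)) # incomplete beliefs
        B2R1 = set(random.sample(sorted(R1 - R2), len(R1 - R2) // 2))
        ms1 = {"R": R1, "G": G1, "BRj": B1R2, "BGj": set(G1), "H": []}  # (3), (4)
        ms2 = {"R": R2, "G": G2, "BRj": B2R1, "BGj": set(G2), "H": []}
        return ms1, ms2

    ms1, ms2 = make_scenario(F={"f1", "f2"}, S={"s1", "s2"},
                             T={"t1", "t2"}, G1={"g1"}, G2={"g2"})
    print(ms1["R"], ms1["BRj"])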

Simulations were run using two negotiating agents of the same type in the 100 different negotiation scenarios. In all cases both agents used the selection function described in Section 5.3, i.e., the agents select the proposal prop ∈ Proposal that maximizes the weighted sum λU·U(prop) + λI·I(prop), using a balanced approach that equally weighs the informativeness of the proposal and its associated utility, i.e., λU = λI = 0.5. Besides, in all the negotiations it was assumed that Ag1 starts the negotiation dialogue.

In each simulation we analyzed (i) whether there was an agreement in the negotiation (i.e., whether it finished with accept or withdraw) and (ii) the length of the negotiation process (i.e., the number of iterations). Besides, we were interested in evaluating the evolution of each agent's beliefs with respect to his initial mental state. In order to do this, we analyzed two ratios. For each scenario we evaluated the decrease of the agent's missing and incorrect beliefs (see Definition 3.3), computing the ratio of these two kinds of beliefs an agent has at the end of the negotiation with respect to the initial ones he had, as follows:

(|Mend|+|Fend|)/(|Minit|+|Finit|).
Besides, for each case we compute how the correct beliefs increase during the negotiation process. This is computed as:
|Tend|/|Tinit|.
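In code, the two ratios are immediate (our sketch, assuming the M, F and T sets of Definition 3.3 are computed at the start and at the end of a run):

    def remaining_wrong_beliefs(M_init, F_init, M_end, F_end):
        """(|Mend| + |Fend|) / (|Minit| + |Finit|)."""
        return (len(M_end) + len(F_end)) / (len(M_init) + len(F_init))

    def correct_belief_growth(T_init, T_end):
        """|Tend| / |Tinit|."""
        return len(T_end) / len(T_init)

    # e.g. one wrong/missing belief left out of three, correct beliefs doubled:
    print(remaining_wrong_beliefs({"a"}, {"b", "c"}, {"a"}, set()))  # 0.33...
    print(correct_belief_growth({"p"}, {"p", "q"}))                  # 2.0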

6.1.Agents that communicate different justifications

After creating these negotiation scenarios, three different types of agents were distinguished, based on whether the communicated proposals are more or less informative (as detailed in Section 4). In each case their decision making apparatus was adapted to generate the required argument composition.

  • (1) NJ Agents: these agents do not include any justification in their proposals (SX = SY = ∅).

  • (2) PJ Agents: these agents give a partial explanation for their proposals, justifying what they are demanding (SY).

  • (3) CJ Agents: agents that communicate the complete explanation of their proposals, justifying both the offer and the demand (SX and SY).

Note that these different types of agents (NJ, PJ, CJ) share the same underlying structure; the only difference among them is the arguments they give in their utterances. If more information is provided in their messages, the use of belief revision to update their mental states can be increased. Nevertheless, the role of the belief revision process during the negotiation will be the same for PJ Agents and CJ Agents. This is because the only difference between these kinds of agents is that CJ Agents add the support of what they are offering (SX) to their messages, and these beliefs cannot be used to revise the agent's mental state in our approach, since the mental state does not represent beliefs about the counterpart's beliefs. Thus, in this first stage of simulations, to assess the importance of the different explanations an agent can give in his proposals (without including critiques), we ran negotiations using only NJ and PJ Agents, as CJ Agents would behave the same as PJ ones.

6.1.1.Simulations using NJ and PJ agents

The outputs of the negotiations using NJ and PJ Agents on the 100 negotiation scenarios are shown in Fig. 4. We can observe that NJ Agents reached an agreement in 93% of the negotiations, whereas the simulations using PJ Agents yielded a slightly higher percentage: 96% of the cases.

Fig. 4. Output of negotiations with: (a) NJ Agents and (b) PJ Agents.

Concerning the reduction of missing and incorrect beliefs of Ag1 about his counterpart, NJ Agents had a slightly lower average (57.05%) than PJ Agents (60.65%). The average increase of correct beliefs is 178.89% for NJ Agents and 176.48% for PJ Agents (i.e., very similar percentages).

These simulations allow us to assess the impact of communicating more informative proposals, which enables deeper belief revision in the negotiating agents. On the one hand, PJ Agents reached agreements in more cases (96%) than NJ Agents, which do not give explanations (the percentage increased by 3 points). On the other hand, as expected, the negotiation length tends to be shorter for agents that communicate more explanations and take advantage of belief revision (the average number of iterations decreased from 18.67 for NJ Agents to 15.93 for PJ Agents). In these preliminary results, PJ Agents achieved agreements in slightly more negotiation cases, and faster, than NJ Agents. However, they end the negotiation having, on average, slightly more missing and incorrect beliefs and fewer correct beliefs than NJ Agents. Intuitively, PJ Agents are able to reach an agreement under more incomplete or incorrect beliefs. Further experimentation may be conducted to analyze the characteristics of the negotiation cases where the inclusion of justifications in the utterances the agents exchange has more impact.

Notice that this was the first stage of our empirical analysis; given the results obtained for PJ Agents (which apply equally to CJ Agents), we can use agents with full justification to assess different communication strategies that include different critiques.

6.2.Agents that communicate different critiques

In this stage we want to analyze the impact that the introduction of critiques in the agents' utterances has on the negotiation process. For these simulations we use the agents that communicate proposals with complete explanations (i.e., CJ Agents), because the utterances of this type of agent can be answered with different types of critiques (i.e., C1, C2 or C3). We propose three types of CJ Agents, using different critique selection strategies:

  • (1) PC1 Agents: these agents communicate the first possible critique, considering a priority list; in this case we consider the order C1 ≻ C2 ≻ C3.

  • (2) PC2 Agents: for these agents the priority list is C3 ≻ C1 ≻ C2.

  • (3) FC Agents: this type of agent exposes all the critiques that are possible.

Notice that PC1 and PC2 Agents communicate only one critique in each utterance, whereas FC Agents can expose up to three critiques in each message. The outputs of the simulations carried out with the PC2 and FC Agents are shown in Fig. 5.

Fig. 5. Output of negotiations with: (a) PC2 Agents and (b) FC Agents.

Table 1 summarizes the results obtained in all the simulations run using the different types of agents. Note that 100% of agreements are reached in all the simulations with agents that introduce critiques in their utterances, in contrast with the results obtained with the agents that justify their proposals but give no critiques (NJ and PJ Agents), where fewer agreements were reached. Regarding the duration of the negotiation, all the agents using strategies that involve critiques (i.e., PC1, PC2 and FC) have a lower average number of iterations than the agents which do not include critiques. Among them, the agents that implement a full critique (i.e., FC Agents), and thus communicate more information, obtain a much lower average. The results obtained in the simulations with PC1 and PC2 Agents are very similar. We emphasize that the FC Agents reached agreements while increasing their correct beliefs about their counterpart (an average of 185.71% of final beliefs with respect to initial ones, see Fig. 6(b)) but still maintaining incorrect and missing beliefs about them (an average of 59.56%, shown in Fig. 6(a)). Similar results on the belief sets were observed for the other types of critiquing agents.

Table 1

Simulation results

Strategy   Deals (%)   Average iterations   Missing and wrong Bel. (%)   Correct Bel. (%)
PC1        100         13.75                54.86                        185.85
PC2        100         13.40                59.55                        173.05
FC         100          8.57                59.56                        185.71
Fig. 6. FC Agents: (a) Reduction of missing and incorrect beliefs; (b) Acquired knowledge.

Finally, we can observe that there is a considerable difference in the negotiation results (i.e., in the number of deals reached and the average number of iterations) between the agents that incorporate critiques in their utterances and those that do not. This is because the former can strengthen the belief revision process without communicating all the knowledge they have.

7.Discussion. Related work

In this paper we have proposed an argumentation-based negotiation model for two collaborative agents equipped with full belief revision, and we focused on the relevance of the information the agents communicate to their counterpart in the negotiation dialogue. In order to do this, we have extended the argumentation-based negotiation model we proposed in [24]. There, the focus was on the belief revision applied by the agents and how they took advantage of the incoming information in the received messages. In this paper we have improved the negotiation protocol, allowing the agents to exchange more informative messages. Firstly, an agent’s illocutions may now also include a critique (in addition to a proposal), and the agent’s decision mechanism must decide which kind of critique to communicate in each move. Besides, a more complex argument can support the proposal exchanged (justifying the demand, the offer or both). As a consequence, different kinds of agents may be defined using different communication strategies. In our approach we use a logic-based argumentation framework, where arguments are associated with proposals that allow agents to achieve agreements, and attacks correspond to critiques that defeat proposals (in terms of resource availability and possible conflicts in achieving goals). It must be noticed, however, that in our framework agents cannot introduce critiques about critiques (as would be the case with arguments defeating arguments in most argumentation frameworks). We contend that in many negotiation scenarios it might be difficult to identify a critique of a critique (it being advisable to persuade rather than to deepen the confrontation).
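As an illustration of the extended illocutions discussed above, the following sketch (with hypothetical field names, not the formal syntax of our model) captures a move carrying an optional critique of the last proposal received together with a counterproposal whose argument may justify the demand, the offer, or both:

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Proposal:
        demand: str                                 # resource requested from the counterpart
        offer: str                                  # resource given in exchange
        demand_justification: Optional[str] = None  # plan/goal explaining the demand
        offer_justification: Optional[str] = None   # why the offer should suffice

    @dataclass
    class Utterance:
        critiques: List[str] = field(default_factory=list)  # up to three critique types
        proposal: Optional[Proposal] = None                 # None when accepting a deal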

Research focused on providing a suitable model for capturing different negotiation strategies in agent dialogues was previously presented in [22]. In that case the study was carried out on a different negotiation scenario, defining the so-called double knapsack negotiation problem along with a sequential negotiation protocol, and providing different concession information strategies. The inclusion of critiques in the agents’ dialogues has also been explored in the context of recommender systems, where it showed improvements in the recommendations obtained [9].

In contrast with the original argumentative framework to solve the HIA problem in [20], our negotiation model allows the agents to gain and revise their beliefs as the dialogue takes place. Consequently, in our approach an agent does not need to have initial (or correct) beliefs about the other agent involved in the negotiation; furthermore, the utterances the agents exchange are more complex and informative. In [21] a similar scenario is analyzed, but agents are aware of all the agents’ resources, and the agents’ plans (or their knowledge about plans) are not considered negotiable. We think that our proposal is more flexible in this respect, as plans are also negotiation objects in our formalization. There have been previous approaches integrating belief revision and negotiation: in [34] and [33] formal characterizations of negotiation from a belief revision perspective are given, but no implementation issues are considered.

Argument-based negotiation has been quite an active area in the last years. In [11] an excellent survey of recent advances in argument-based negotiation is presented. The authors discuss these contributions in the context of the argument-based reasoning mechanisms the agents use for negotiating, the protocols the agents use for conveying arguments and offers, and the strategies that determine their choices at each step of the negotiation. In the context of this article, a relevant approach to argumentation-based negotiation can be found in [3], where the proposed framework makes it possible to study the outcomes of the negotiation process. In contrast with this approach, our proposal relies on the characterization of belief revision operations to model the generation of the agents’ arguments, whose claims are the resources to be exchanged. Formal models of belief change can be very helpful in providing suitable frameworks for rational agents [6], in which the information from inter-agent dialogues can be better exploited. In [18] the authors present a computational model implemented in an experimental dialogue system (DS). Communication in natural language between two participants A and B is considered, where A has the communicative goal that his/her partner B will make a decision to perform an action D. Agent A argues the usefulness, pleasantness, etc. of D (including its consequences) in order to guide B’s reasoning in a desirable direction. In contrast with our approach, the whole negotiation process is based on natural language, distinguishing persuasion from information-seeking dialogues, rather than on applying belief revision to a knowledge base expressed in a logical language. In [7] the authors propose a model and an algorithm for analyzing tendencies in group decision-making in argument-based negotiation; the proposed model allows the agent to redefine his objectives to maximize both his own and the group’s satisfaction. In contrast with our model, the authors do not rely on belief revision mechanisms for decision making; moreover, our approach is focused on a 2-agent dialogue (proponent and opponent) and does not consider the notion of group decision-making.

Additionally, it must be noted that in our proposal we assume that agents are benevolent. This assumption can also be found in several other frameworks, e.g., [20]. In addition, in our work agents are assumed to be truthful. Recent research has led to considering other situations, such as negotiation among dishonest agents [29], which is an interesting scenario for future work.

8.Conclusions. Future work

In this article we have assessed the relevance of the information exchanged in an argumentation-based negotiation model for two collaborative agents that may have incomplete and possibly incorrect beliefs about their opponents. To take advantage of the incoming information, the agents are equipped with belief revision operators to interpret the received utterances and to generate new proposals.

We have extended the original argumentation-based negotiation model proposed in [24], allowing the agents to exchange more complex and informative messages. An agent’s illocutions may now also include a critique (in addition to a proposal), resulting in a more complete argument that can support the proposal exchanged (justifying the demand, the offer or both). As a consequence, different kinds of agents may be defined using different communication strategies. These strategies help the agents determine what information to include in their utterances: an agent may be more or less communicative, giving an explanation of why he is not willing to accept a proposal (i.e., a critique) or explaining the reason for the proposed solution.

When agents want to achieve their goals, they engage in a benevolent dialogue, exchanging proposals together with possible critiques. During the negotiation, the agents continuously update their mental states to generate new proposals that are more likely to be accepted. As a running example, a revised version of the HIA problem was solved by our negotiation program, showing how the proposed negotiation model can be used to solve this kind of cooperative problem under incomplete and incorrect beliefs, and illustrating the role that information communication plays in the negotiation dialogue.

We have carried out an empirical analysis of our proposal, assessing the impact of considering agents with different communication strategies during the negotiation process. From this analysis we can conclude that the introduction of more informative illocutions has an impact on the overall negotiation process. We obtained 100% of agreements with all the strategies that introduce critiques, showing that better-informed illocutions contribute to the success of the negotiation. Notice that under all the strategies studied, the agents reached these results without knowing all the correct information and while maintaining some incorrect beliefs about their counterpart.

Part of our future work is focused on assessing the different communication strategies from the point of view of the quality of the negotiation results (e.g., using some utility measure associated with the agreed exchange). We are also interested in extending the proposed model to an n-party scenario, where different agents can get involved in dialogues. Clearly, such a scenario would involve additional aspects which deserve further analysis (e.g., satisfaction in group decision making, as discussed in [7]) and which are outside the scope of this article.

Furthermore, we also want to identify different kinds of negotiation problems for which a particular type of agent (i.e., one using a specific communication strategy) is to be preferred, considering the trade-off between negotiation results and computational complexity. In addition, we want to evaluate the role of information exchange and belief revision in other kinds of negotiating agents (e.g., dishonest, less collaborative, etc.) and in different scenarios.

In order to fully instantiate flexible agents in real domains, a more complex agent architecture would be needed, expanding the one presented in this article. Such a model would include a Planner enabling agents to plan dynamically and under real-time constraints (e.g., following [19]), as well as a richer and more expressive representation of the agent’s beliefs. Such beliefs may include grades, i.e., a quantification of uncertainty [8], or different multi-level opponent models [27]. Also, the representation of higher-level beliefs (i.e., beliefs about other agents’ beliefs) may be included using, for instance, dynamic epistemic logic [32], which makes it possible to specify the static and dynamic aspects of multi-agent systems. All these features would increase the expressive power of the negotiation language.

Another interesting topic for future research is the integration of our approach with so-called agent planning programs [15], which suitably mix automated planning with agent-oriented programming. Agent planning programs are finite-state programs, possibly containing loops, whose atomic instructions consist of a guard, a maintenance goal and an achievement goal, which act as precondition-invariance-postcondition assertions in program specification. In this setting, argumentation and belief revision could also be integrated to capture different decision-making capabilities.

We think that deepening the integration of communication strategies and belief revision in the context of ABN agents is a very promising area for future research, paving the way for the deployment of intelligent software systems for solving real-world problems.

Notes

2 In what follows, we will refer to Agi as a generic agent and to Agj as its counterpart.

3 Notice that in the case in which Gi contains more than one goal, the agent will want to achieve all of them; consequently, we follow a conjunctive reading of the set.

4 A full account of argumentation theory and its applications in multiagent systems and belief revision is outside the scope of this article. For further references and insights the reader is referred to [12].

5 We write X ⊢ G whenever G ∈ Cn(X), where Cn is a logical consequence operator.

6 This kind of chained attack can be introduced using a Defeasible Logic Programming framework (see for example [23]). A full account of such attacks is outside the scope of this article.

7 The function ⊙ corresponds to the second component projection.

8 All the propositions and their proofs were formalized in Coq and are available at http://web.cifasis-conicet.gov.ar/~pilotti/Automated_Agent_Negotiation.v.

9 Note that X has to belong to R, as the agent cannot give away something he does not have. However, the definition is broad enough to allow Y to stand for anything that enables an agent to reach his goal.

References

[1] C. Alchourrón, P. Gärdenfors and D. Makinson, On the logic of theory change: Partial meet contraction and revision functions, J. Symb. Log. 50 (1985), 510–530. doi:10.2307/2274239.

[2] L. Amgoud and R. Demolombe, An argumentation-based approach for reasoning about trust in information sources, Argument & Computation 5 (2014), 191–215. doi:10.1080/19462166.2014.881417.

[3] L. Amgoud, Y. Dimopoulos and P. Moraitis, A unified and general framework for argumentation-based negotiation, in: Proc. AAMAS 2007, 2007.

[4] L. Amgoud and S. Vesic, A formal analysis of the outcomes of argumentation-based negotiations, in: AAMAS, L. Sonenberg, P. Stone, K. Tumer and P. Yolum, eds, IFAAMAS, 2011, pp. 1237–1238.

[5] L. Amgoud and S. Vesic, A formal analysis of the role of argumentation in negotiation dialogues, J. Log. Comput. 22 (2012), 957–978. doi:10.1093/logcom/exr037.

[6] G. Bonanno, J. Delgrande, J. Lang and H. Rott, Special issue on formal models of belief change in rational agents, J. Applied Logic 7 (2009), 363. doi:10.1016/j.jal.2009.05.001.

[7] J. Carneiro, D. Martinho, G. Marreiros and P. Novais, The effect of decision satisfaction prediction in argumentation-based negotiation, in: Highlights of Practical Applications of Scalable Multi-Agent Systems. The PAAMS Collection – International Workshops of PAAMS 2016, Proceedings, Sevilla, Spain, June 1–3, 2016, J. Bajo, M.J. Escalona, S. Giroux, P. Hoffa-Dabrowska, V. Julián, P. Novais, N.S. Pi, R. Unland and R.A. Silveira, eds, Communications in Computer and Information Science, Vol. 616, Springer, 2016, pp. 262–273.

[8] A. Casali, L. Godo and C. Sierra, Graded BDI models for agent architectures, in: Computational Logic in Multi-Agent Systems, J. Leite and P. Torroni, eds, Lecture Notes in Computer Science, Vol. 3487, Springer, Berlin, Heidelberg, 2005, pp. 126–143. doi:10.1007/11533092_8.

[9] L. Chen and P. Pu, Critiquing-based recommenders: Survey and emerging trends, User Modeling and User-Adapted Interaction 22 (2012), 125–150. doi:10.1007/s11257-011-9108-6.

[10] Y. Dimopoulos and P. Moraitis, Negotiation and argumentation in multi-agent systems, Chapter 4, in: Advances in Argumentation Based Negotiation, 2011.

[11] Y. Dimopoulos and P. Moraitis, Advances in Argumentation-Based Negotiation, Bentham Science Publishers, 2014, pp. 82–125. doi:10.2174/9781608058242114010006.

[12] M. Falappa, A. Garcia, G. Kern-Isberner and G. Simari, On the evolving relation between belief revision and argumentation, Knowledge Engineering Review 26 (2011), 35–43. doi:10.1017/S0269888910000391.

[13] P. Faratin, C. Sierra and N. Jennings, Using similarity criteria to make negotiation trade-offs, in: Proceedings of the Fourth International Conference on MultiAgent Systems (ICMAS 2000), 2000, pp. 119–126. doi:10.1109/ICMAS.2000.858443.

[14] E. Fermé, K. Saez and P. Sanz, Multiple kernel contraction, Studia Logica 73 (2003), 183–195. doi:10.1023/A:1022927828817.

[15] G.D. Giacomo, A.E. Gerevini, F. Patrizi, A. Saetti and S. Sardina, Agent planning programs, Artificial Intelligence 231 (2016), 64–106. doi:10.1016/j.artint.2015.10.001.

[16] S. Hansson, A Textbook of Belief Dynamics: Theory Change and Database Updating, Applied Logic Series, Kluwer Academic Publishers, 1999.

[17] N.R. Jennings, P. Faratin, A.R. Lomuscio, S. Parsons, C. Sierra and M. Wooldridge, Automated negotiation: Prospects, methods and challenges, International Journal of Group Decision and Negotiation 10 (2001), 199–215. doi:10.1023/A:1008746126376.

[18] M. Koit and H. Õim, A computational model of argumentation in agreement negotiation processes, Argument & Computation 6 (2015), 101–129. doi:10.1080/19462166.2014.915233.

[19] A.R. Panisson, G. Farias, A. Freitas, F. Meneguzzi, R. Vieira and R.H. Bordini, Planning interactions for agents in argumentation-based negotiation, in: Proc. 11th Int. Workshop on Argumentation in Multi-Agent Systems, 2014.

[20] S. Parsons, C. Sierra and N.R. Jennings, Agents that reason and negotiate by arguing, Journal of Logic and Computation 8 (1998), 261–292. doi:10.1093/logcom/8.3.261.

[21] P. Pasquier, R. Hollands, I. Rahwan, F. Dignum and L. Sonenberg, An empirical study of interest-based negotiation, Autonomous Agents and Multi-Agent Systems 22 (2011), 249–288. doi:10.1007/s10458-010-9125-6.

[22] P. Pilotti, A. Casali and C. Chesñevar, The double knapsack negotiation problem: Modeling cooperative agents and experimenting negotiation strategies, in: Advances in Artificial Intelligence – IBERAMIA 2014, A.L. Bazzan and K. Pichara, eds, Lecture Notes in Computer Science, Vol. 8864, Springer, 2014, pp. 548–559.

[23] P. Pilotti, A. Casali and C. Chesñevar, Incorporating object features in collaborative argumentation-based negotiation agents, in: Proc. BRACIS-ENIAC 2014, Sao Carlos, SP, Brazil, 2014.

[24] P. Pilotti, A. Casali and C. Chesñevar, A belief revision approach for argumentation-based negotiation agents, International Journal of Applied Mathematics and Computer Science 25 (2015), 455–470.

[25] I. Rahwan, P. Pasquier, L. Sonenberg and F. Dignum, On the benefits of exploiting underlying goals in argument-based negotiation, in: Twenty-Second Conference on Artificial Intelligence (AAAI), Vancouver, 2007, pp. 116–121.

[26] I. Rahwan, S.D. Ramchurn, N.R. Jennings, P. McBurney, S. Parsons and L. Sonenberg, Argumentation-based negotiation, Knowl. Eng. Rev. 18 (2003), 343–375. doi:10.1017/S0269888904000098.

[27] T. Rienstra, M. Thimm and N. Oren, Opponent models with uncertainty for strategic argumentation, in: IJCAI, 2013.

[28] J.S. Rosenschein and G. Zlotkin, Rules of Encounter – Designing Conventions for Automated Negotiation Among Computers, MIT Press, 1994.

[29] C. Sakama, Dishonest reasoning by abduction, in: IJCAI, T. Walsh, ed., IJCAI/AAAI, 2011, pp. 1063–1064.

[30] C. Sierra and J.K. Debenham, The LOGIC negotiation model, in: AAMAS, E.H. Durfee, M. Yokoo, M.N. Huhns and O. Shehory, eds, IFAAMAS, 2007, p. 243.

[31] C. Sierra, N.R. Jennings, P. Noriega and S. Parsons, A framework for argumentation-based negotiation, in: Proceedings of the 4th International Workshop on Intelligent Agents IV, Agent Theories, Architectures, and Languages, ATAL’97, Springer-Verlag, London, UK, 1998, pp. 177–192. doi:10.1007/BFb0026758.

[32] H. van Ditmarsch, W. van der Hoek and B.P. Kooi, Dynamic Epistemic Logic, Vol. 337, Springer Science & Business Media, 2007.

[33] D. Zhang, A logic-based axiomatic model of bargaining, Artif. Intell. 174 (2010), 1307–1322. doi:10.1016/j.artint.2010.08.003.

[34] D. Zhang, N. Foo, T. Meyer and R. Kwok, Negotiation as mutual belief revision, in: Proceedings of AAAI’04, 2004, pp. 317–322.