
An activity-centric argumentation framework for assistive technology aimed at improving health

Abstract

Tailoring assistive systems for guiding and monitoring an individual in daily living activities is a complex task. This paper presents ALI, an assistive system combining a formal possibilistic argumentation system with an informal model of human activity, Cultural-Historical Activity Theory, to facilitate the delivery of tailored advice to a human actor. We follow an activity-centric approach, taking into consideration the human’s motives, goals and prioritized actions. ALI tracks a person in order to 1) determine which activities were performed over a period of time (activity recognition and tracking), and 2) send personalized notifications suggesting the most suitable activities to perform (decision-making and monitoring). The ALI system was evaluated in a formative pilot study related to promoting social activities and physical exercise.

1.Introduction

Providing tailored and appropriate guidance or recommendations to individuals with the purpose of improving their performance of daily living activities is a complex task. One of the major challenges is motivating an individual to change their behavior to a healthier lifestyle pattern, as is evidenced in numerous approaches [47,50,58].

Different methods have been developed for building behavior change and persuasive systems in order to influence a person’s performance of activities. A number of authors in the Artificial Intelligence (AI) and Computer Science fields have used psychological cues of persuasion, considering different information sources, both human-centric (emotions, preferences, motivations, goals) and environmental (time, place, language, visual interface); see for instance [25,32,53], among others. How to model and use these different information sources in order to provide sound, persuasive and encouraging messages for improving daily activities is, without doubt, a complex task. Moreover, when the notion of activity is more elaborate, understanding human activities as systemic and socially situated phenomena, approaches that consider activities as simple linear sets of atomic evidence lag behind.

Against this background, we introduce an Assisted LIving system (ALI) to address the complex task of providing appropriate guidance and recommendations in the conducting of daily activities, tailored to individuals who may have different needs and desires for support. The ALI concept is based on a novel integration of an argument-based decision-making framework from AI with Activity-Theoretical models for understanding human activity, developed mainly within the fields of social science and psychology [38]. This integration allows us to reason about and infer sets of sound argument-based explanations of human behavior, and to select the best guidance action to take from a human-centric point of view. Argumentation theory provides us with consistent, common-sense reasoning tools for building arguments. In contrast to logical proofs, arguments are defeasible; that is, the validity of their conclusions can be disputed by other arguments [10]. ALI offers personalized recommendations for daily activities based on two different sources: 1) user data: her/his context (temporal and spatial) and her/his preferences (needs, motives, activities, goals); and 2) the domain expert’s information. In our approach, each human activity is composed of a sequence of actions; these actions are directed by goals such as “doing physical exercise”, following the hierarchical activity model of Cultural-Historical Activity Theory (CHAT) [35,44,70]. This scenario is introduced in Fig. 2. With this scenario set, ALI selects an argument which follows the preferences of a person and her therapist’s expertise. Two main tasks are performed by ALI when it is used by an individual: 1) monitoring the activities of a person; and 2) supporting her/him by providing guidance in daily living activities. Moreover, ALI collects information that can be used for assessing patterns, and changes of patterns, of behavior over time.

We summarize the main contributions of this research as follows:

  • 1. A formal integration between an argument-based possibilistic decision-making framework and CHAT in order to recognize, argue, justify and provide argumentative explanations for human activities.

  • 2. Argument extensions are interpreted based on CHAT for selecting and formulating messages.

  • 3. A real time activity-support system that collects uncertain and incomplete observations of the human’s context and implements 1) and 2).

  • 4. A system evaluated in a formative pilot study, which provided results concerning how to increase the persuasiveness of the application.

The rest of the paper is organized as follows. In Section 2 the methods used in our approach are introduced. In Section 3, results regarding our integration between a Social Science approach and an argumentation-based decision-making framework are presented. In Section 4 we present a novel approach for constructing messages from structured arguments. Section 5 presents the prototype architecture of ALI; in Section 6, a pilot study of ALI is introduced. In Section 7 we compare our approach with related literature, discussing our contributions and limitations. In Section 8 conclusions are provided, in addition to future work. In Appendix A the syntax and semantics of the formal language capturing complex activities are described. Appendix B presents a formal description of the possibilistic decision-making framework and the possibilistic logic programs [56] used in our approach.

2.Methods and instruments

In this section we introduce theories used in our approach.

2.1.Theories about human activity and user scenario

Cultural-Historical Activity Theory (CHAT) offers a philosophical and cross-disciplinary perspective for analyzing diverse human practices as development processes in which both individual and social levels are interlinked [35]. With its recent emphasis on Information Systems, CHAT helps in exploring and understanding interactions in their social context, across multiple contexts and cultures, and the dynamics and development of particular activities. We therefore use CHAT to represent and model information about human activities.

CHAT is suitable for describing the dynamics of goal-based human activities such as “maintain physical condition”, which requires the achievement of different goals, e.g., “exercise during the week”, “eat healthy”, “ride a bike instead of using a car”, etc. CHAT considers an activity as a hierarchy of goal-oriented activities and sub-activities [44]. Consequently, a hierarchy of goals and sub-goals can be formed (see Fig. 1). Each action at the lowest level consists of a set of operations, which are not goal-oriented from the perspective of CHAT. We are interested in generating observations of these operations, as shown in Fig. 1. This figure describes the so-called human operations (bottom), which we interpret as observable person-specific information from the context, for instance cues obtained by sensors. In this setting, a person’s goals and context observations can be captured in a knowledge base, for instance, by an intelligent assistive technology (AT) such as ALI.

The structure of a complex activity is dynamic [38]; e.g., driving a car can be an activity for one person, but for an experienced driver it can be a goal, with driving the car as one of the actions that fulfill an activity.

Fig. 1. Representation of a hierarchical goal-based activity using CHAT.

The role of our AT system ALI is 1) to monitor a person’s goal-based activity, and 2) deliver encouraging messages, which may have the potential to affect the person’s activity behavior. A running example describing how we use CHAT to capture an activity is presented as follows:

Example 1 (The Kim scenario: maintaining physical condition).

Kim is a young adult. A therapist has reason to meet Kim and discuss her situation. Kim would like to see some changes in her everyday life, which the therapist supports. Generic patterns of behavior which can be seen as potentially “unhealthy” are identified and focused on, such as Kim’s tendency to avoid leaving the house and to get stuck at the computer without much physical exercise.

In this scenario, Kim and her therapist agreed that maintaining a good physical condition is the most preferred activity to monitor and track during the period of intervention. From Kim’s perspective, maintaining a healthy pattern of physical exercise implies achieving different goals, like regularly accomplishing physical exercise, minimizing inactive behavior such as sitting by the computer and increasing the time spent outside her home. Some observations that would imply the achievement of these goals are detecting that she is jogging or running. Also, for achieving more social contacts and getting out of her home environment, Kim found it desirable to meet her family and friends more often.

A scheme representing the hierarchical goal-based activity is presented in Fig. 2.

Fig. 2. Kim scenario, based on CHAT.

2.2.Supplementary software

The ACKTUS platform (Activity-Centered Modeling of Knowledge and Interaction Tailored to Users) [46] was developed to enable health professionals to model domain knowledge to be used in knowledge-based applications, and to design the interaction content and flow for supporting different types of activities (e.g., diagnosis, risk assessment, support for conducting Activities of Daily Living (ADL)) [45]. ACKTUS contains a number of knowledge bases, assessment applications and dedicated user interfaces for different knowledge domains. In this work, ACKTUS is used for the following purposes: 1) as an instrument for assessing a user’s health status, preferred activities, preferences and goals, through the ACKTUS application I-Help; 2) as an instrument for the users participating in the pilot user study to create their own motivational messages, which they would like to receive; and 3) for storing arguments and notifications generated by ALI (i.e., events) in the actor repository together with other person-specific information. All ACKTUS applications share a common core ontology, which represents knowledge at the levels of activity and actions, in terms of the complexity hierarchy model of human activity provided by CHAT. Consequently, ALI supplements this model by providing interpretations of observed operations, which can be combined with representations of knowledge defined by the ACKTUS core ontology and fused into a richer understanding of human activity and behavior.

2.2.1.User study

A pilot evaluation study of ALI was conducted as a part of a broader study presented in [47]. The study addresses the following research questions: 1) how does information about the context, preferences and personalized suggestions contribute to building arguments?; 2) how is the human–computer interaction performed through a mobile phone?; and 3) how does the user react to positive and encouraging messages? These questions are partially answered, based on the analyses of data obtained by ALI and ACKTUS I-Help and through interviews with the test subjects. The study was formative, aiming to provide results which can be fed into the ongoing development of ALI. The evaluation study focused on the analysis of location and locomotion features obtained from the mobile phone of an individual and building arguments supported by knowledge obtained from the ACKTUS repositories.

Whether or not the behavior of an individual actually changes when personalized suggestions are received through the ALI system is a subject for future work.

2.2.2.Methods, participants and procedure

In this paper, we focus on decreasing well-being among young individuals between the ages of 18 and 24. The two subjects who volunteered to participate in our study were not necessarily suffering from decreased well-being. The test subjects were informed about the purpose of the study and gave informed consent.

The two female adolescents were first interviewed by a therapist who made an initial assessment, in which priorities and goals were identified. The initial assessment was performed using the ACKTUS application I-Help, through which data was captured and stored in an actor repository. This information about the two test subjects was retrieved by ALI and functioned as the source for person-specific information such as preferred goals and prioritized activities. Subject A and Subject B prioritized physical activity as the main activity to be supported, in order to achieve a healthy and regular activity pattern over day and night.

Subjects were also asked to formulate personalized recommendations, or arguments, which they preferred to be given, and under what conditions they should be presented, which were added to their actor repositories. This was in accordance with the purpose of the messages in ALI, in which positive and encouraging feedback messages follow an approach to coping with depression and anxiety, using an introspective natural dialogue (creating messages to oneself), which has been shown to be an important determinant of physical activity in youth [8,74]. In this manner, the personalized view represents a trusted source for recommendations, since listening to or reading recommendations from another source or person requires confidence and trust.

The two subjects agreed to carry a smartphone throughout all their activities over a period of one week, and were informed about what kind of data would be collected. Over the evaluation period, the two subjects were asked to try to keep the phone switched on both day and night.

3.An argumentation-based possibilistic decision-making framework integrating CHAT

Formal argumentation is concerned primarily with reaching conclusions through logical reasoning, that is, claims based on premises. In the past few years, formal models of argumentation have been steadily gaining importance in artificial intelligence, where they have found a wide range of applications in specifying semantics for logic programs, decision-making, generating natural language text and supporting multi-agent dialogue, among others.

Dung made an important contribution to the research field of argumentation in [21] by showing that argumentation can be “viewed” as Logic Programming (LP). Dung provided a meta-schema of such systems, defining a general architecture for meta-interpreters for argumentation systems.

Extending Dung’s approach, we can represent an argumentation system as a “three-step” system, starting with a knowledge base and obtaining argument-based conclusions as output (Fig. 3). This chain resembles an inference process, starting with raw data and ending with a conclusion or a sound set of conclusions. ALI follows this meta-architecture, providing sound proofs of goal-based human activities, using a possibilistic decision-making framework (Step 1 and Step 2) and a human-centric explanation of an activity (Step 3) using CHAT.

Fig. 3. Meta-interpreter for an argumentation system.

The scenario about Kim introduced in Example 1 presents a decision-making process, which deals with uncertainty. ALI has to make argument-based decisions based on Kim’s preferred goals by obtaining sets of possible worlds (or interpretations) of her context. Different approaches based on argumentation theory have been developed dealing with the different forms of information for justifying/explaining rational decisions. A number of approaches based on Logical Argumentation, formalizing argument-based decision-making under uncertainty [41,60] and Possibilistic Logic [1,4,56] have been proposed.

In fact, in common life scenarios, descriptions of uncertain observations such as “I think that…”, “chances are…” or “it seems like…” usually appeal to our experience or our common sense. A possibilistic logic framework based on possibility theory can be used to model these pieces of knowledge, which are pervaded with uncertainty (as in the Kim scenario). Such a framework is also useful for representing preferences expressed as sets of prioritized goals [20]. We argue that a possibilistic logic framework is suitable for representing the exemplified scenario.

In the logic programming literature, different logic programming semantics exist which capture possibilistic logic programs in order to infer information from a given possibilistic logic program [55,59]. Given that ALI is expected to support processes like decision-making and recommendation in real time, time complexity is an important issue. In this setting, the Possibilistic Well-Founded Semantics (P_WFS), which is computable in polynomial time, seems to be a suitable semantics for supporting a real-time inference process in the ALI architecture [59]. Moreover, there is an implementation of the possibilistic well-founded semantics for possibilistic logic programs [69]. We describe the main concepts of a Possibilistic Argument-based Decision Framework1 (PADF) in order to capture Kim’s scenario. The formal qualities of PADF were introduced in [56] and a summary can be found in Appendix B.

A PADF is a tuple ⟨P, D, G⟩ in which:

  • 1. a knowledge base which is defined by a possibilistic normal logic program P;

  • 2. a set of decisions D and

  • 3. a set of goals G,

where D and G are subsets of the signature of the possibilistic normal logic program P. In order to illustrate how the PADF captures the Kim scenario, let us consider an extension of Example 1, as follows:

Example 2 (Monitoring Kim’s physical activity patterns).

Kim agrees with her therapist to use ALI to monitor her physical activity patterns. ALI is set up on her mobile phone. In this setting, ALI obtains a register of her location and locomotion activities over a period of time. More details about how location and locomotion observations are obtained are given in Section 5.

Fig. 4. Sub-scenario for giving Kim encouraging advice.

Following the Kim scenario and its representation in Fig. 2, we obtain an alternative sub-scenario, for example, encouraging Kim to do exercise, as depicted in Fig. 4, where ALI observes that Kim is at rest (Shes_at_rest_o) during a defined time (Trigger_timeout_o). These observations trigger the action of sending an encouraging notification, which is displayed on her mobile. We can capture this sub-scenario with a possibilistic normal logic program, translating each block in Fig. 4 into a clause. In this setting, each clause represents a decision that must be made (preferred action), given a set of observations of the world (goal-related observations), in order to fulfill a goal. Goals, observations and decisions are identified with the sub-indices g, o and d respectively, as follows:

1 : Shes_Exercising_g ← Shes_at_rest_o, Trigger_timeout_o, Encourage_notification_d.

Since the information obtained from the mobile sensors is pervaded by vagueness, each piece of knowledge is attached to a degree of confidence, which expresses the uncertainty degree of each rule from a possibilistic point of view (Greek letters whose numerical values belong to (0, 1]). The Kim scenario captured by a possibilistic decision-making framework PADF_Kim = ⟨P, G, D⟩ is introduced in Table 1.
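To make this encoding concrete before turning to Table 1, the following minimal Python sketch shows one way such weighted clauses could be represented in code. The class and field names (PossClause, degree, and so on) are illustrative assumptions of this sketch, not part of the actual ALI implementation, which encodes rules as Answer Set Programs evaluated by XSB (Section 5).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PossClause:
    """A possibilistic normal clause: head <- body+, not body-,
    with a confidence degree in (0, 1]."""
    degree: float            # possibilistic degree in (0, 1]
    head: str                # e.g. "shes_exercising_g"
    body_pos: tuple = ()     # positive body atoms
    body_neg: tuple = ()     # atoms under negation as failure

    def __post_init__(self):
        assert 0.0 < self.degree <= 1.0, "degree must lie in (0, 1]"

# The clause from the sub-scenario of Fig. 4, held with full certainty (1):
rule = PossClause(
    degree=1.0,
    head="shes_exercising_g",
    body_pos=("shes_at_rest_o", "trigger_timeout_o",
              "encourage_notification_d"),
)
```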

Table 1. Possibilistic decision-making framework PADF_Kim = ⟨P, G, D⟩.

In Table 1, the set of 18 rules of P represents the dependence interactions between observations. The goal 1−ρ : not Shes_Exercising_g defines the possibility of not performing the action that fulfills the goal; in other words, it captures the contrary aim (sleep). This “negative” goal has an uncertainty degree of 1−ρ (the complement).

By using a PADF, arguments can be built which capture the feasibility of reaching a goal by performing an action (or making a decision), given a set of certain observations of the world [56]. Hence, given this framework PADF = ⟨P, G, D⟩ and a function P_WFS(S), which returns the possibilistic well-founded model2 of a given possibilistic logic program S, an argument A is defined by:

(1) A = ⟨S, d, (g, α)⟩,
where:

  • 1. (g, α) ∈ T and g ∈ G, such that P_WFS(S ∪ {1 : d.}) = ⟨T, F⟩, where T and F are the sets of possibilistic atoms from which we can infer conclusions.

  • 2. S ⊆ P, such that S is a minimal set (w.r.t. ⊆) among the subsets of P satisfying condition 1.

The argument definition (1) is illustrated using the sub-scenario introduced in Fig. 4. In this case, S represents all the clauses needed to achieve the goal g given a decision d taken by ALI, and α represents the preference for that specific goal. In this setting, an argument has the following informal reading:

(g, α): She prefers to do exercise to an extent α.
S: There is no evidence that she is driving, so there is a possibility λ that she is running; then, it is possible that she does exercise if
d: ALI sends a message with positive feedback.
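A brute-force reading of this argument definition can be sketched as follows. The sketch assumes an external oracle pwfs(program) returning the pair (T, F) of the possibilistic well-founded model, with T a mapping from atoms to degrees; in the actual system this computation is delegated to XSB (Section 5). Everything else, including the names, is illustrative, and the enumeration is exponential, so it only serves to make the definition operational on toy programs.

```python
from itertools import combinations

def build_arguments(program, decisions, goals, pwfs):
    """Enumerate arguments <S, d, (g, alpha)> following definition (1).

    program   : collection of (hashable) possibilistic clauses P
    decisions : decision atoms D
    goals     : goal atoms G
    pwfs      : oracle mapping a program to (T, F), where T is a
                dict {atom: degree} of derivable possibilistic atoms
    """
    arguments = []
    rules = list(program)
    for d in decisions:
        fact_d = ("fact", 1.0, d)              # encodes the clause "1 : d."
        for size in range(len(rules) + 1):
            for S in combinations(rules, size):
                T, _F = pwfs(set(S) | {fact_d})
                for g in goals:
                    if g not in T:
                        continue
                    # condition 2: S must be minimal among subsets of P
                    minimal = all(
                        g not in pwfs(set(S2) | {fact_d})[0]
                        for k in range(size)
                        for S2 in combinations(S, k)
                    )
                    if minimal:
                        arguments.append((frozenset(S), d, (g, T[g])))
    return arguments
```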

In order to illustrate the process of argument construction, 44 arguments were obtained from PADF_Kim. Table 2 presents a subset of the arguments in the Kim scenario.

Table 2. Subset of the arguments of an extension in the Kim scenario.

Once the arguments are constructed, we compare their strengths. In this setting, one can identify two types of disagreement between arguments, usually called undercut and rebut in the argumentation literature [65]. In order to define these relationships, let A = ⟨S_A, d_A, g_A⟩ and B = ⟨S_B, d_B, g_B⟩ be two arguments, with P_WFS(S_A ∪ {1 : d_A.}) = ⟨T_A, F_A⟩ and P_WFS(S_B ∪ {1 : d_B.}) = ⟨T_B, F_B⟩. We say that argument A attacks B if one of the following conditions holds:

  • 1. Rebut: a ∈ T_A and ¬a ∈ T_B.

  • 2. Undercut: a ∈ T_A and a ∈ F_B.

In other words, a rebut is an attack which contradicts a conclusion of an argument, whereas an undercut is an attack which invalidates an assumption of an argument [56].
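Given the ⟨T, F⟩ model attached to each argument, both attack checks reduce to simple set tests. In the sketch below, atoms are plain strings and strong negation is encoded with a “¬” prefix; this encoding, and the dropping of possibilistic degrees, are simplifications made for this illustration rather than choices made by the paper.

```python
def rebuts(T_a, T_b):
    """Rebut: some atom a holds in A's model while ¬a holds in B's."""
    return any(("¬" + a) in T_b for a in T_a if not a.startswith("¬")) or \
           any(a[1:] in T_b for a in T_a if a.startswith("¬"))

def undercuts(T_a, F_b):
    """Undercut: some conclusion of A is held as false (in F) by B."""
    return any(a in F_b for a in T_a)

def attacks(model_a, model_b):
    """model_x = (T_x, F_x): the P_WFS model computed for argument x."""
    (T_a, _), (T_b, F_b) = model_a, model_b
    return rebuts(T_a, T_b) or undercuts(T_a, F_b)
```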

The attack relationships among the set of arguments obtained from the possibilistic decision-making framework (Table 1) were identified using the WizArg tool [30]. The attack relationships are presented in Fig. 5, where each argument is represented by a node and each attack relation is represented by an edge.

Fig. 5. Argument attack relationships displayed using WizArg [30].

3.1.Argument acceptance analysis

Dung [21] defined the so-called Argumentation Framework (AF), which is of the form AF = ⟨A, attacks⟩, where A is a set of arguments and attacks ⊆ A × A is the set of their attack relationships.

Given an AF, one can look for subsets of arguments which suggest coherent points of view among the disagreements between the arguments. The selection of such sets of arguments is usually supported by so-called argumentation semantics. In the argumentation literature one can find different argumentation semantics; however, the semantics introduced by Dung in [21] are the most widely accepted.

A basic argumentation semantics SEM_Arg of a possibilistic argumentation decision-making framework PADF is a function from PADF to 2^(2^A), where SEM(AF) = {E_1, …, E_n} such that E_i ⊆ A (1 ≤ i ≤ n). Usually, each E_i is called an extension of the argumentation framework AF, representing a set of acceptable arguments. Dung’s semantics represent different selection patterns for acceptable arguments, which in the Kim scenario represent sets of logical and sound arguments explaining or justifying the possibility of achieving a goal, given a set of observations and whether an action is taken.

In order to compute Dung’s argumentation semantics in ALI, we use the WizArg tool within the ALI architecture. Using the possibilistic argumentation framework PADF_Kim and the stable semantics [21], we obtained the following sets:

SEM_stable(PADF) = { {A2, A23, A9, A29, A7, A35}, {A4, A26, A6, A10, A16, A22}, {A24, A7, A3, A20, A13}, {A30, A18, A25, A37, A4, A44}, {A28, A13, A23, A3, A39, A7}, {A40, A8, A23, A7, A33, A14}, {A4, A25, A42, A10, A31, A16, A6}, {A20, A17, A16, A18}, {A32, A38, A5, A12, A25, A4} }.

In the Kim scenario, these nine extensions represent sets of justified and conflict-free arguments, which will be used in integrating assessment information obtained by the therapist.
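ALI delegates this computation to WizArg, but the stable semantics itself is easy to state operationally: a set E is stable iff it is conflict-free and attacks every argument outside it. The following naive sketch (exponential enumeration, adequate only for small frameworks such as the one in Fig. 5) illustrates the semantics itself, not the WizArg implementation.

```python
from itertools import combinations

def stable_extensions(args, attack):
    """All stable extensions of AF = <args, attack>.

    args   : list of (hashable) arguments
    attack : set of ordered pairs (a, b) meaning "a attacks b"
    """
    extensions = []
    for size in range(len(args) + 1):
        for subset in combinations(args, size):
            E = set(subset)
            conflict_free = not any(
                (a, b) in attack for a in E for b in E)
            attacks_all_outside = all(
                any((a, b) in attack for a in E)
                for b in args if b not in E)
            if conflict_free and attacks_all_outside:
                extensions.append(frozenset(E))
    return extensions
```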

We are interested in representing extensions and their arguments in terms of goals, which in our scenario are already defined by Kim and her therapist. Consequently, given an argumentation framework PADF and a set of argument extensions induced by an argumentation semantics, with E ∈ SEM(PADF), we have that E := {A_1, A_2, …, A_m}, in which each argument A_i (1 ≤ i ≤ m) is of the form ⟨S_i, d_i, (g_i, α_i)⟩. Hence ε(E) is defined in terms of the goal sets (g_i, α_i) as follows:

(2) ε(E) := {(g, α) | ⟨S, d, (g, α)⟩ ∈ E}.

Observe that ε(E) basically projects the goals of each argument onto a set of possibilistic atoms. Given a set of possibilistic atoms ε(E) := {(a_1, α_1), …, (a_n, α_n)}, ε*(E) is {a_1, …, a_n}; that is, ε*(E) removes the possibilistic values from ε(E).

An intuitive reading of Eq. (2) in the Kim scenario is that it makes it possible to represent the justified and conflict-free arguments (extensions) w.r.t. the goals of those arguments. These notations are used in the next section, where an interpretation of the extension sets using CHAT in the context of the Kim scenario is introduced.

3.2.Conclusion inference

CHAT is an approach in the social sciences that aims to understand individual human beings in their natural everyday life circumstances, through an analysis of the structure and processes of their activities. The concept of activity is therefore the most fundamental concept in CHAT [38]. The central idea of the interpretation of extensions using CHAT is to maintain a human-centric perspective of the decision-making process in the argument selection. Given the goal-centered analysis of human activities following CHAT, and the context presented in Examples 1 and 2, we define a human activity Act as a finite set of goals g:

(3) Act = {g_1, …, g_n}.

This representation of an activity (3) is consistent with the notion of an extension of a PADF (2): both are defined w.r.t. goals to be achieved. The representation of an activity in terms of goals allows us to integrate a decision-making framework directly into a hierarchy of activities, following the distinctions described in CHAT. In this setting, we can define the set of all the activities that an individual can perform as follows:

Definition 1.

Let G be a finite set of goals g_1, …, g_n. A denotes all the possible activities, in terms of goals, that can be performed with G, where A = 2^G.

Definition 1 describes A as the set of all the activities in terms of goals; in other words, A represents the set:

(4) A = {G_1, …, G_m}.
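Since A = 2^G, enumerating the candidate activities of Definition 1 is just building a powerset; a one-function sketch:

```python
from itertools import chain, combinations

def all_activities(goals):
    """A = 2^G: every subset of the goal set G is a candidate
    activity in the sense of Definition 1."""
    return [frozenset(c) for c in chain.from_iterable(
        combinations(goals, k) for k in range(len(goals) + 1))]

# e.g. all_activities({"exercise_g", "socialize_g"}) yields the
# four subsets of the two goals, including the empty activity.
```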

As discussed in Section 1, ALI is intended to be a complementary tool for a health-care team, providing extra information for assessment and monitoring individuals. In this setting, part of the importance of ALI lies in the method of presenting such notifications with positive feedback or encouraging messages. So far, we have been presenting a logical, sound method for decision-making, which is the “reasoner” component, and which provides a set of argument-based alternative explanations (PADF extensions), solving in a logical, sound manner the question of “when” to guide a person in changing her mental state and beginning an activity.

In the remainder of this section, we introduce two main contributions of ALI, which solve the second research question regarding “how” to provide persuasive notifications. The first contribution is a quantification of the activity performance using an integration of CHAT and PADF, and the second is a method for building persuasive messages using a possibilistic goal-based activity scheme.

3.2.1.Activity completion

With the previously introduced goal-oriented integration, let us define the concept of completion of an activity. Activity completion is determined by a goal-oriented analysis of extension sets, verifying whether an extension contains all, some or none of the goals of a given activity. The concept of completion is important for quantifying the possibility of performing and completing an activity. The quantifiers used are complete, partial and indifferent. In terms of Examples 1 and 2, completion quantifies the activity performed by Kim, describing whether or not she performed the recommended activity in a given time lapse. This is a central contribution of this paper: tracking a human activity based on the status of the activity w.r.t. completion.

Definition 2 (Status of activities).

Let us consider an argumentation framework PADF with an extension E ∈ SEM(PADF), where SEM is an argumentation semantics, which induces a set of goals defined by ε*(E). Let Act ∈ A; the status of the activity is given as follows (a small sketch of this check is given after the list):

  • Complete: iff Act ⊆ ε*(E) for all E ∈ SEM(PADF).

  • Partial: iff there exists E ∈ SEM(PADF) such that Act ⊆ ε*(E), and there exists E′ ∈ SEM(PADF) such that Act ⊄ ε*(E′).

  • Indifferent: iff Act ⊄ ε*(E) for all E ∈ SEM(PADF).
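The check behind Definition 2 can be written down directly: project each extension onto its goals (ε*), then test the three cases. The argument representation below (triples ⟨S, d, (g, α)⟩ as nested tuples) is an assumption of this sketch.

```python
def goals_of(extension):
    """epsilon*(E): project each argument <S, d, (g, alpha)> in E
    onto its goal atom g, dropping the possibilistic value alpha."""
    return {g for (_S, _d, (g, _alpha)) in extension}

def activity_status(act, extensions):
    """Status of activity act (a set of goal atoms) per Definition 2."""
    if not extensions:
        return "indifferent"
    covered = [act <= goals_of(E) for E in extensions]
    if all(covered):
        return "complete"
    if any(covered):
        return "partial"
    return "indifferent"
```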

In order to exemplify Definition 2, let us consider the extensions obtained from the arguments in Example 2 and the scenario in Fig. 6. Kim’s therapist analyzes her activities based on the observations collected by ALI through the mobile phone and the recommendations which were presented to her. The therapist notices that there are goals which were achieved, and others for which ALI has no information. For instance, there are no observations that the action Walk was performed.

Fig. 6. Kim scenario 2, considering sub-activities with multiple goals.

On the other hand, ALI performs the recommendation in real time using the weight of each argument in the set of extensions. The weight is defined by the goal preferences (defined in our scenario by Kim and her therapist) and by the degree of confidence of each rule (possibilistic degree). In ALI, this degree is tied to the fidelity of the sensor embedded in the mobile phone. In the ALI prototype implementation, a high confidence level was pre-defined for accelerometer measures (Kim’s movements) and a low one for sound measures (snoring or breathing sounds); these values depend on the implementation.

In order to exemplify the selection of rules, let us consider Example 2, one of the nine extensions presented in Table 2 and the scenario depicted in Fig. 2, in which ALI detects that Kim is running (argument A2). Given that Kim prefers doing exercise to not doing it (Shes_Exercising_g > not Shes_Exercising_g), the argument A2 is selected and all the generated arguments containing a negative goal are discarded. In this scenario, the set of activities and goals is very limited but, in spite of this, the complexity of the interactions between arguments (attacks) is high. The number of solutions of the decision-making process must be reduced, which is done by selecting those extensions with preferred goals and preferred activities. Consequently, the process of selecting preferred goals and activities follows a human-centric and activity-centric perspective, using the integration with CHAT.
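Under this reading, the recommendation step picks, among the arguments of the acceptable extensions, the one whose goal is most preferred, using the possibilistic degree as a tiebreaker. The sketch below makes that selection explicit; the preference table is an assumed input mirroring Kim’s Shes_Exercising_g > not Shes_Exercising_g, and arguments with unranked (e.g., negative) goals are discarded, as in the example above.

```python
def best_argument(extensions, preference):
    """Select the argument backing the recommendation.

    extensions : iterable of extensions, each a set of arguments
                 of the form <S, d, (g, alpha)>
    preference : dict mapping goal atom -> rank (higher = preferred),
                 e.g. {"shes_exercising_g": 2}
    Assumes at least one argument has a ranked goal.
    """
    candidates = [arg
                  for E in extensions
                  for arg in E
                  if arg[2][0] in preference]     # drop unranked goals
    # rank by goal preference first, then by possibilistic degree alpha
    return max(candidates, key=lambda a: (preference[a[2][0]], a[2][1]))
```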

4.From arguments to text sentences

In this section we present a novel method for building pseudo-natural language from possibilistic argument-based hypotheses. We define an activity–goal scheme in order to build “encouraging” and “positive feedback” messages.

Intuitively, our aim is to use well-formed hypotheses, which explain what a person is currently doing, to create messages from the contained information. Our approach associates a decision/action with a type of scheme3 for delivering a specific kind of message; for instance, in the Kim scenario there are two decision alternatives: send positive feedback or send an encouraging message. In this setting, the intention of the message (encouraging, justification, persuasion, etc.) directs the structure of the message, where the locations of the relevant parts of the text (units in the rhetorical literature), like the goal and the context information, are not the same for all schemes. In our example, Kim initially defined a set of messages that she would like to receive in a given situation, either to encourage herself or as positive feedback. Given that the intention of ALI’s messages is to produce positive feedback, which is perceived as encouraging by Kim, her messages include the goal and the activity (part of the argument) to emphasize the intention.

Let M = {mesg_E, mesg_P} be a set of text messages pre-defined, in this case, by Kim. Each message has a specific intention: either encouraging (mesg_E) or positive feedback (mesg_P). In rhetorical theory, the articulation between units in a text highlights important parts, producing the desired intention. We follow the canonical order for rhetorical units introduced in [49], where the most important units (nuclei) contain the main content of the text and the other units (satellites) provide contextual stress. The canonical order for encouraging text (“enablement” in [49]) follows the form nucleus before satellite and, for positive feedback, the form satellite before nucleus. We define these two activity–goal schemes as follows:

Definition 3.

Let Act be a goal-based activity, Act = {g_1, …, g_n}, let A = ⟨S, d, (g, α)⟩ be an argument about the activity, and let mesg_E be an encouraging message; then an encouraging activity–goal scheme is a tuple of the form ⟨Act, g, mesg_E⟩.

In Definition 3, the interaction between the activity and the goal produces the nucleus of the sentence, and Kim’s message is the satellite. Let us consider the following example:

Example 3.

Let us consider the Kim scenario and the CHAT–PADF integration represented in Fig. 2; we have:

Act = {Maintaining good physical condition},
g = {Doing physical exercise},
mesg_E = “Kom igen, ut och spring din latmask!” (Swedish) / “Come out and run, lazy!” (approx. English).

The activity–goal scheme can be re-written as follows:

(5) “In order to [Act], you might consider [g]. Then, [mesg_E]”

and, using the Kim example, a message will be: “In order to maintain good physical condition, you might consider doing physical exercise. Then, come out and run, lazy!”

Definition 4.

Let Act be a goal-based activity, Act = {g_1, …, g_n}, let A = ⟨S, d, (g, α)⟩ be an argument about the activity, and let mesg_P be a positive feedback message; then a positive feedback activity–goal scheme is a tuple of the form ⟨mesg_P, Act, g⟩.

In a positive feedback activity–goal scheme (Definition 4) we consider satellite units with common complimentary messages such as “Mycket bra jobbat!” (Swedish) / “Very good job!”, integrating these satellites with the activity–goal nucleus. We can exemplify this approach using our running example as follows:

Example 4.

In the Kim scenario, a positive feedback activity–goal scheme has the structure:

(6) “[mesg_P]. You are doing well for [Act], [g].”

An example considering mesg_P = {“Mycket bra jobbat!” (Swedish) / “Very good job!”} would be: “Very good job! You are doing well maintaining good physical condition, doing physical exercise.”
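Both schemes are purely positional, so rendering a message is a matter of slotting the activity, the goal and the pre-defined user message into the nucleus/satellite order. A minimal sketch of schemes (5) and (6); the string templates are this illustration’s choice, and verb inflection is left to the caller:

```python
def encouraging_message(act, goal, mesg_e):
    """Scheme (5): nucleus (activity-goal) before satellite (mesg_E)."""
    return f"In order to {act}, you might consider {goal}. Then, {mesg_e}"

def positive_feedback_message(act, goal, mesg_p):
    """Scheme (6): satellite (mesg_P) before nucleus (activity-goal)."""
    return f"{mesg_p} You are doing well {act}, {goal}."

print(encouraging_message("maintain good physical condition",
                          "doing physical exercise",
                          "come out and run, lazy!"))
print(positive_feedback_message("maintaining good physical condition",
                                "doing physical exercise",
                                "Very good job!"))
```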

The structures described in Definitions 3 and 4 are different: the order of mesg_E and mesg_P follows the idea of nucleus and satellite components in rhetorical theory [49].

5.ALI prototype system

Fig. 7. ALI System Architecture.

In this section we introduce the ALI system architecture. The two main modules of ALI are described (Fig. 7): 1) the ALI mobile application; and 2) the ALI centralized modules. Some relevant functionalities of ACKTUS were introduced in Section 2.2. The ALI application was introduced to therapists as an initial step for validating the approach and testing each functionality.

5.1.ALI mobile application

ALI was implemented as a dual service, running as a data collector and, at the same time, delivering notifications in the mobile module.

Data sensing and notification delivery (on{X} service)

The detection task is accomplished by a mobile application implemented with on{X} technology (https://www.onx.ms), which sends the information collected from sensors in real time to the server via RestWS [23]. on{X} lets us obtain a wide set of mobile sensor features, such as location, mode of transport, light sensors, position of the mobile phone (different from location) and the feature called regions (https://www.onx.ms/#apiPage/regions), which allows for inferring whether a person is entering or leaving a defined place, such as the home environment in the Kim scenario. The coordinates of Kim’s home location were obtained in the meeting with the therapist. Visual and vibrating notifications (Fig. 8) were also implemented using on{X}.

Fig. 8. An example of an encouraging notification sent to Kim’s mobile, which is running the ALI application.

The collected raw data was used to obtain detailed data about the individual’s location, which was correlated with timestamp data from the on{X} service. An example of this correlation is presented in Fig. 9, using a GPS visualizer (http://www.gpsvisualizer.com/) for the data analysis. Fig. 9 (top image) shows the visualization of the person’s location together with the time and type of the delivered notifications. The same figure (bottom image) shows the detailed locations of the tracking measures and the features of each measure (timestamp in the bubble callout).

Fig. 9. Different types of messages delivered to Kim (top). Traces of Kim’s activity (bottom).

5.2.ALI Centralized Modules

The ALI Centralized Modules contain inference and recommendation modules. These are briefly described as follows.

Data collection and storage

Data sent from the mobile phone via HTTP (Hypertext Transfer Protocol) is collected and transformed into Answer Set Programs [27] in the class of normal programs, which is the admitted language for the inference modules in ALI. This module is built in Java and deployed on a GlassFish server (https://glassfish.java.net/). This module also collects all the data from a user and stores it in a MySQL database.
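The transformation step can be pictured as mapping each incoming sensor reading to a weighted fact of the normal program handed to the inference modules. The following Python sketch is a guess at the shape of that mapping, not the actual module (which is a Java service on GlassFish); the payload format and the confidence table are assumptions, the latter mirroring Section 3.2.1 (high confidence for accelerometer readings, low for sound).

```python
import json

# Assumed, illustrative confidence degrees per sensor type.
SENSOR_CONFIDENCE = {"accelerometer": 0.9, "location": 0.8, "sound": 0.4}

def readings_to_facts(payload):
    """Map a JSON batch of sensor readings to possibilistic facts.

    Each reading {"sensor": ..., "atom": ...} becomes a pair
    (degree, "atom.") ready for the Argument Builder."""
    facts = []
    for reading in json.loads(payload):
        degree = SENSOR_CONFIDENCE.get(reading["sensor"], 0.5)
        facts.append((degree, reading["atom"] + "."))
    return facts

facts = readings_to_facts(
    '[{"sensor": "accelerometer", "atom": "shes_at_rest_o"}]')
# -> [(0.9, "shes_at_rest_o.")]
```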

Argument Builder

The XSB system [67] is used for building arguments. The Argument Builder module captures the rules from the Data Collection module and, using the XSB framework, evaluates the rules in the form of dependency graphs. These are evaluated following the Well-Founded Semantics through full SLG resolution with tabling (see further details in [67]). The Argument Builder module is implemented in Java and linked to XSB using InterProlog [15] as middleware.

Argument Evaluation

An extension-based argumentation semantics solver library, WizArg [30], is used for argument evaluation, obtaining sets of argument extensions. The stable semantics option is used; however, it is also possible to choose among the CF2, stable and grounded semantics.

Activity Recommender sub-module

This module obtains the best decision from the arguments and prepares a persuasive notification with the purpose of convincing the individual to perform the action.

Message adaptation

This module obtains the recommendation and transforms it into an HTTP message to be visualized by the phone. This module sends the notification via Web Rest services.

ALI records all the executed arguments (selected arguments for which recommendations were sent) in a MySQL database. ALI quantifies the completion of an activity by analyzing the records and comparing them with the argument extensions. In this way, the tracking analysis is performed.

6.Pilot study results

The results are divided into those related to the argument building process and the generation of tailored messages, and those related to the interaction between the users and the ALI application.

6.1.Building argument-based explanations on different human-centric information sources

The outcomes of the assessment performed by the therapist are described in natural language and follow the topics about which the individual has created arguments. Consequently, there are two main sources of human-centric arguments (the individual as a baseline view, and the therapist, based on their expertise and knowledge of the client), which are supplemented with the current opinions that the individual holds in the particular situation in which an argumentative dialogue is performed. These opinions may not necessarily coincide with the individual’s baseline set of opinions. In our pilot study, we applied only the arguments formulated by the individual.

One example of an encouraging message is presented in Fig. 8. The message is presented when Argument 7 (see Table 2) is triggered. This notification was suggested by Subject A, talking to herself in order to “move” and do any kind of outdoor activity when she had stayed at home for more than two days, which was included as a trigger time observation.

6.2.Interaction between ALI and the study subjects

Interviews were conducted in order to investigate the positive and negative aspects of using a mobile phone to receive notifications, as perceived by the two participants. Subject A pointed out that one of the disadvantages was that she frequently forgot to bring along the charger for the mobile phone, which was one of the causes for only obtaining data on two days of activity. Subject B, on the other hand, had no problems with forgetting the charger, since she was at home most of the time. She also highlighted that she was receiving some notifications about going out and doing exercise while she was sick, and ALI continued sending notifications. Subject B suggested that she was interested in establishing a direct dialogue with the system in order to state that she was unable to do the exercise and had a good argument for not complying with the suggestions.

The question of whether or not the individual changed her behavior as a consequence of using ALI was not a subject of this pilot study. However, the following was observed, which creates a base for future studies. According to the data log, when Subject A received an encouraging message, she left her home, and ALI detected that she was out of town. It was confirmed later that she was visiting relatives, which was considered as complying with the ALI notifications. However, taking into account the location analysis and the number of notifications sent, we can infer that Subject A and Subject B were not attending to all the recommendations immediately.

A different kind of information was obtained using the GPS Visualizer. The plotted images were shown to Subject A, who responded with interest and curiosity, and wanted to see exactly where she had been walking in a nearby forest. Her interest in the potential feedback in the form of a map of her routes offered suggestions for a future improved version of ALI. The top map of Fig. 9 shows when and what type of notification was sent, together with Subject A’s position before and after. The bottom map shows the different locations where Subject B was located in her home.

7.Discussion and related work

In this section we discuss our contributions with respect to other approaches, considering that our focus was the development of new approaches for improving two capabilities of AT systems: rational decision-making under uncertainty and tailored service delivery.

7.1.Integrating a possibilistic argument-based decision-making framework and CHAT

In the argumentation literature, there are different approaches in which a human-centric perspective partially or totally defines the decision-making process, such as [18,32,53], among others. In the informal argumentation branch, the work of Grasso et al. [32] introduces the Daphne system, a system for dialectical argumentation providing healthy nutrition education. Daphne is intended to persuade a person by establishing a dialogue, which is not the case in our approach, which obtains information from the context to automatically infer appropriate advice. In practical argumentation, different approaches have been proposed, such as the PARMA protocol for a multi-agent dialogue game [7], based on Walton’s argument schemes [73]. Comparatively, practical reasoning approaches such as [7,12] have a different perspective from ours regarding the reasoning process, for instance about what is “best” for a particular agent (perhaps a human one); practical reasoning is intrinsically open-ended, creating a challenge for agent design. An analysis regarding issues and solutions for practical reasoning can be found in [6].

In contrast to informal and practical argumentation, other approaches such as [2,31,63] focus on providing sound and consistent argument-based explanations without a human-centric perspective. In this setting, the main general contribution of our approach is the combined focus on the goal-based activities of an individual, which enables a human-centric perspective that includes the driving forces for conducting an activity. This is particularly important when applications are aimed at supporting behavior change, as in our case: changing behavior towards a healthier pattern of behavior.

In the argumentation literature, Abstract Argumentation Frameworks (AAFs) ([13,21], among others) provide a theoretical basis for exploring issues of defeasible reasoning. The ALI approach follows the line of AAFs introduced in [21]. However, it is closer to approaches in which the knowledge is encoded in the structure of arguments and argumentation semantics are used to determine the acceptability of arguments.

Dung, in his seminal work [21], made one of the major contributions to the argumentation field by showing that logic programming can be viewed as a form of argumentation and, at the same time, that argumentation itself can be viewed as logic programming with negation as failure. In this setting, the underlying formalisms for knowledge representation are of particular importance; in other words, the underlying language for capturing a knowledge base is crucial for representing information. Different non-monotonic logics have been proposed for capturing commonsense knowledge [51,52,54,68]. A knowledge base like the one used in the Kim scenario is captured by a possibilistic version of an Extended Logic Program (ELP) [28]. Representing incomplete information as well as exceptions, it allows for the description of scenarios with uncertain information, like the sensor-based information obtained from a mobile phone. Other approaches also handle incomplete information and exceptions [37,48,57]; however, an ELP fits well with the purpose of defining decision-making scenarios with uncertain data. In this setting, the possibilistic decision-making framework used in the ALI approach has a desirable property in comparison with the approaches described in [3,36]: it can deal with reasoning that is at the same time non-monotonic and uncertain. This characteristic makes the PADF particularly suitable for developing real implementations. PADF is based on the Well-Founded Semantics (WFS) [71], which applies a skeptical reasoning approach and is defined for all general logic programs [71]. In contrast to WFS, the stable model semantics [28] does not always generate a model. In other words, WFS guarantees that a model for a given knowledge base can always be obtained (sometimes coinciding with the empty model).

Indeed, some authors in the abstract argumentation branch of AI have used formal models of abstract argumentation as a basis for practical reasoning, such as [66], or have integrated different approaches to reasoning, epistemic and practical, at the same time [64]. Our decision-making approach takes advantage of WFS to create an implementation. Some other approaches are currently unfeasible to implement, such as practical reasoning requiring complex management of natural language.

7.1.1.Using CHAT for argument interpretation

By considering a psychological framework for the interpretation of a sound set of acceptable arguments, a human-centric perspective is developed, contributing a quantification of human activity performance and the possibility of planning analysis as future work. CHAT is a more complex framework for human behavior analysis, involving not only the hierarchy activities–actions–operations, but also defining human needs, which are the ultimate cause behind human activities [43].

Consequently, the integration of CHAT into a formal decision-making process offers two different paths for future work. First, a further deepened human-centric analysis can be integrated. Argument-based explanations can be obtained for conscious and unconscious causes of human activities, which makes it possible to analyze the different aspects of human activity: both the hierarchical characteristics (activities–goals–operations) and motives and needs. It was suggested by Karwowski and coworkers that press can be included in the Activity-theoretical model of activity [39]. Fig. 10 shows how press, as external stimuli, creates a desire to obtain or avoid something.

Second, the goal-centered analysis of human activities, where an activity is based on human-centric goals, Act = {g_1, …, g_n}, resembles a plan. The integration between PADF and CHAT enables the quantification of the performance of activities. Moreover, new plans can be formulated in order to perform similar activities, or the same activity in a different manner.

Fig. 10. Relationship between different activity-theoretical concepts: press, needs, motives, and goals [39].

7.2.Generating persuasive messages by using activity–goal scheme

In the literature, there is a substantial number of persuasive and guiding systems. Persuasive approaches have different perspectives depending on the underlying reasoning approach and the philosophical model adopted. In multi-agent system approaches, the interaction between agents is mainly governed by the belief–desire–intention (BDI) model, which is possibly the best known and most studied model for reasoning agents [29]. In argumentation theory, there are research branches studying persuasion based on informal argumentation, such as [16–18,32], or practical reasoning [7,12], where the reasoning and decision processes are different from those in ALI. Indeed, the generation of messages described in [7,17] and [32] is more complex than in our approach; however, our message generation process relies on a consistent non-monotonic approach for building argument-based explanations, rather than on open-ended methods such as New Rhetoric or Walton’s argumentation schemes [72]. A limitation of our work regarding the “naturalness” of our activity–goal–evidence structure is the lack of flexible archetypes for building natural expressions; in further versions of ALI we will consider natural language approaches without losing consistency in the explanations.

Our approach has some similarities with the work introduced in [12,53,61], where values (in terms of social values) and emotional states are considered. In [53], the BDI model is extended with emotions, feelings and goals that a person pursues, similarly to [32], using the hierarchical classification of values in [61]. These approaches introduce a reasoning process based on schemes; however, the soundness and consistency of the proofs is disregarded.

The generation of persuasive messages in ALI, based on an activity–goal scheme, is inspired by scheme-based reasoning [34,72,73]. The persuasive messages generated by ALI have a different aim than persuasion dialogues in the argumentation literature [11,12,72]. In [72], such differences are established: persuasion is intended to convince another individual to accept some contested proposition that s/he did not previously believe. In our approach, the persuadee, in principle, agrees with and believes in the persuasion method used, which uses her own words and follows the idea of self-motivation. In this setting, the activity–goal scheme follows a dialectical integration of support–conclusion arguments, since it is close to the graph-oriented representation of persuasion reasoning in [53]. In our approach, messages built following the proposed schemes (5) and (6) seem to read naturally in both Swedish and English. This is an aspect to take into consideration, since ALI is intended to be a multi-lingual tool for multi-cultural environments. Considering this, some of the approaches in [18,22,32] are hard to implement in other languages or cultural contexts without substantial changes to the reasoning model, since they are based on characteristics of the English language.

7.3.A modular architecture for recognizing human activity in a non-intrusive manner

The approach presented in this work fulfills three major requirements: 1) a non-intrusive human activity recognition alternative; 2) dealing with uncertain and incomplete information from sensors without data training; and 3) activity recommendations supported and monitored by a health-care team. The prototype was oriented towards modularizing the sensors, in order to be able to integrate others from Ambient Assisted Living systems in the future.

This work differs from simpler approaches to human activity recognition, such as those described in [5,24,40]. There is a significant difference between this implementation and approaches which use sensors placed on different parts of a person’s body in an Ambient Assisted Living environment [26,42], which is sometimes not feasible for practical reasons. Uncertain and incomplete data from sensors have also been analyzed in Ambient Assisted Living contexts (e.g., [5] and [42]). Our approach uses a different alternative, where more than one possible scenario (set of argument extensions) is inferred in real time based on an argumentation semantics.

7.4.A formative pilot evaluation study

In order to test the different parts of the ALI architecture, a pilot study was conducted. Regarding the process of building natural arguments, the architecture obtains from the individual her preferences, feedback and encouraging messages. This information is used for building human-centric arguments, which are employed in an introspective natural dialogue between the human agent and the system agent.

The evaluation study, in which location sensors from the mobile phones were used, showed the following advantages: 1) ALI is a non-intrusive solution; 2) young users are familiar with mobile phones; and 3) the approach is a low-cost alternative. The identified obstacles to using mobile phones for this purpose were: 1) inaccuracy of the location sensors (Fig. 9 shows the inaccuracy of Kim’s location when she was at home); 2) real-time data transmission failed when the user was outside the mobile Internet service coverage area; and 3) battery limitations.

In order to improve the argumentative dialogue between the user and the system (i.e., interaction), future work includes the implementation of a functionality in the next version of the prototype where the user can provide a response in natural language to the arguments provided by the system. In the pilot study, only the arguments formulated by the individual were applied. In future work, there will be three agents involved in the dialogues: 1) ALI as an agent, mediating the user’s baseline view including the arguments created by the user; 2) the therapist as an agent, mediating the domain professional’s view, and 3) the human agent, contributing with her current opinion about her situation at the moment of a dialogue.

8.Conclusions and future work

This paper presents ALI, an assistive technology system using an argument-based reasoning approach. This approach combines a formal argumentation system with an informal model of human activity, facilitating the tailoring of advice to the human actor, taking into consideration the human’s motives, goals and prioritized actions. The contributions of this work are the following:

  • A non-intrusive argument-based approach for tracking and monitoring an individual’s activities.

  • An argument-based framework for decision-making, framed on the Cultural-Historical Activity Theory, a theory for describing human activities.

  • A recommender system architecture, inferring the best decisions for selecting messages that support a human's goal-based activities.

Different perspectives were combined in this interdisciplinary work for the purpose of recognizing activities and providing recommendations tailored to a person's goals and preferences. Several lines of research will be pursued as future work: 1) including a person's motives, needs and aspects of her cultural and historical background when tailoring assistive technology, which involves investigating persuasion and behavior-change approaches in the Health and Artificial Intelligence fields; 2) further improving the interactive dialogue between the human and the ALI system by applying methods inspired by informal argumentation, particularly the New Rhetoric and natural-language approaches; and 3) developing methods for handling changes of preferences and verifying the validity of arguments with respect to time. Furthermore, user studies involving more subjects and a longer test period will be conducted. By allowing the use of assistive technology such as ALI over a longer period of time, we will gain further insight into how this technology affects the user's decision-making and activity performance.

Notes

1 In Appendix B a definition of the Possibilistic Argument-based Decision Framework is introduced.

2 The well-founded model is a three-valued model. In Section B.1.1 a definition of a well-founded model is introduced.

3 Schemes are stereotypical patterns of human reasoning, and there are a considerable number of scheme definitions [33,72].

Appendices

Appendix A.Syntax and semantics of the formal language for activity reasoning

In this section, the syntax of the formal language capturing complex activities is described. We also present the semantics for evaluating this language.

A.1.Syntax

The syntax used in this paper consists of proposition symbols ⊥, ⊤, p0, p1, …; connectives ∧, ←, ¬, not; and auxiliary symbols ( and ), where ∧ and ← are 2-place connectives, ¬ and not are 1-place connectives, and ⊥ and ⊤ are 0-place connectives. The proposition symbols pi, ⊥, and the propositional symbols of the form ¬pi (i ≥ 0) stand for the indecomposable propositions, which we call atoms, or atomic propositions. Atoms negated by ¬ will be called extended atoms. We will use the concept of atom without paying attention to whether it is an extended atom or not. The negation sign ¬ is regarded as the so-called strong negation of the Answer Set Programming (ASP) literature, and the negation not as negation as failure (NAF) [9]. A literal is an atom a (called a positive literal) or the negation of an atom not a (called a negative literal). Given a set of atoms {a1, …, an}, we write not {a1, …, an} to denote the set of literals {not a1, …, not an}.

An extended normal clause, C, is denoted as: a ← a1, …, aj, not aj+1, …, not an, where j + n ≥ 0, a is an atom and each ai is an atom. When j + n = 0, the clause is an abbreviation of a ← ⊤, where ⊤ is the proposition symbol that always evaluates to true.

An extended logic program P is a finite set of extended normal clauses. When n = 0, the clause is called an extended definite clause. An extended definite logic program is a finite set of extended definite clauses. By LP, we denote the set of atoms in the language of P. Let ProgL be the set of all normal programs with atoms from L. We will manage strong negation (¬) in our logic programs as done in ASP [9]: each atom of the form ¬a is replaced by a new atom symbol a′ which does not appear in the language of the program.

Sometimes we denote an extended normal clause C by a ← B+, not B−, where B+ contains all the positive body literals and B− contains all the negative body literals. A possibilistic atom is a pair p = (a, q) ∈ A × Q, where A is a finite set of atoms and (Q, ≤) is a lattice. We apply the projection ∗ over p as follows: p∗ = a. Given a set of possibilistic atoms S, we define the generalization of ∗ over S as follows: S∗ = {p∗ | p ∈ S}. Given a lattice (Q, ≤) and S ⊆ Q, the function LUB(S) denotes the least upper bound of S and the function GLB(S) denotes the greatest lower bound of S. We define the syntax of a valid extended possibilistic normal logic program as follows: let (Q, ≤) be a lattice. An extended possibilistic normal clause r is of the form α : a ← B+, not B−, where α ∈ Q. The projection ∗ over the possibilistic clause r is: r∗ = a ← B+, not B−. n(r) = α is a necessity degree representing the certainty level of the information described by r.
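As a concrete illustration, the following Python sketch (our own encoding, not taken from the paper) represents possibilistic clauses over the lattice ([0, 1], ≤), where LUB = max and GLB = min:

```python
from typing import NamedTuple, FrozenSet

class PClause(NamedTuple):
    """A possibilistic clause alpha : a <- B+, not B-."""
    alpha: float             # necessity degree n(r) in the lattice ([0, 1], <=)
    head: str                # the atom a
    pos: FrozenSet[str]      # positive body atoms B+
    neg: FrozenSet[str]      # atoms occurring under negation as failure, B-

def project(r: PClause):
    """The * projection drops the necessity degree, leaving a <- B+, not B-."""
    return (r.head, r.pos, r.neg)

# 0.8 : relaxed <- walk, not injured
r = PClause(0.8, "relaxed", frozenset({"walk"}), frozenset({"injured"}))
print(project(r))  # -> ('relaxed', frozenset({'walk'}), frozenset({'injured'}))
```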

A.2.Semantics

An interpretation of a propositional signature LP is a function from LP to {false, true}. A partial interpretation, also called a 3-valued interpretation based on a signature LP, is a pair of disjoint sets ⟨I1, I2⟩ such that I1 ∪ I2 ⊆ LP. A partial interpretation is total if I1 ∪ I2 = LP. An interpretation I of a given logic program P is a model of P iff I(C) = true for each clause C ∈ P. I is a minimal model of P if there is no model I′ of P different from I such that I′ ⊆ I.

Extended logic programs (ELP) [28] capture incomplete information as well as exceptions, using strong negation and NAF. ELP is, however, not the only approach for capturing defaults and partial information. Two major semantics have been defined for ELP: 1) the answer set semantics [28], an extension of the stable model semantics [27]; and 2) the Well-Founded Semantics (WFS) [71].

Appendix B.A possibilistic argument-based decision framework

This framework was originally introduced in [59].

B.1.Background

B.1.1.Well-founded semantics

In this section, we present a definition of the well-founded semantics in terms of rewriting systems. We start by presenting a definition of a 3-valued logic semantics.

Definition 5(SEM [19]).

For a normal logic program P, we define HEAD(P) = {a | a ← B+, not B− ∈ P} – the set of all head atoms of P. We also define SEM(P) = ⟨P^true, P^false⟩, where P^true := {p | p ← ⊤ ∈ P} and P^false := {p | p ∈ LP ∖ HEAD(P)}. SEM(P) is also called the model of P.

In order to present a characterization of the well-founded semantics in terms of rewriting systems, we define some basic transformation rules for normal logic programs.

Definition 6(Basic transformation rules [19]).

A transformation rule is a binary relation on ProgL. The following transformation rules are called basic. Let a program P ∈ ProgL be given.

RED+:

This transformation can be applied to P if there is an atom a which does not occur in HEAD(P). RED+ transforms P to the program where all occurrences of not a are removed.

RED−:

This transformation can be applied to P if there is a fact a ← ⊤ ∈ P. RED− transforms P to the program where all clauses that contain not a in their bodies are deleted.

Success:

Suppose that P includes a fact a ← ⊤ and a clause q ← body such that a ∈ body. Then we replace the clause q ← body by q ← body ∖ {a}.

Failure:

Suppose that P contains a clause q ← body such that a ∈ body and a ∉ HEAD(P). Then we erase the given clause.

Loop:

We say that P2 results from P1 by LoopA if, by definition, there is a set A of atoms such that:

  • 1. for each rule a ← body ∈ P1, if a ∈ A, then body ∩ A ≠ ∅,

  • 2. P2 := {a ← body ∈ P1 | body ∩ A = ∅},

  • 3. P1 ≠ P2.

Let CS0 be the rewriting system that contains the transformation rules RED+, RED−, Success, Failure and Loop. We denote the uniquely determined normal form of a program P with respect to the system CS0 by normCS0(P). The system CS0 induces a semantics SEMCS0 as follows:

SEMCS0(P) := SEM(normCS0(P)).

In order to illustrate the basic transformation rules, let us consider the following example.

Example 5.

Let P be the following normal program:

d(b) ← not d(a).  d(c) ← not d(b).  d(c) ← d(a).

Now, let us apply CS0 to P. Since d(a) ∉ HEAD(P), we can apply RED+ to P. Thus we get:

d(b).  d(c) ← not d(b).  d(c) ← d(a).

Notice that now we can apply RED− to the new program; thus we get: d(b).  d(c) ← d(a).

Finally, we can apply Failure to the new program; thus we get: d(b). This last program is the normal form of P w.r.t. CS0, because none of the transformation rules from CS0 can be applied to it.
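The transformations above can be executed mechanically. The following Python sketch is our own compact encoding (clauses as (head, B+, B−) triples; the Loop rule and strong negation are omitted for brevity, and RED+/RED− are applied clause by clause rather than globally); applied to the program P of Example 5, it reproduces the normal form computed above:

```python
from typing import FrozenSet, Optional, Set, Tuple

# A clause a <- B+, not B- is encoded as (head, pos, neg); a fact "a."
# has empty pos and neg. Loop and strong negation are omitted here.
Clause = Tuple[str, FrozenSet[str], FrozenSet[str]]

def heads(p: Set[Clause]) -> Set[str]:
    return {h for h, _, _ in p}

def step(p: Set[Clause]) -> Optional[Set[Clause]]:
    """Apply one basic transformation; return None if p is in normal form."""
    hs = heads(p)
    facts = {h for h, pos, neg in p if not pos and not neg}
    for c in p:
        h, pos, neg = c
        if neg - hs:                 # RED+: some 'not a' where a is never a head
            return (p - {c}) | {(h, pos, frozenset(neg & hs))}
        if neg & facts:              # RED-: 'not a' in the body while "a." holds
            return p - {c}
        if pos & facts:              # Success: remove proven atoms from the body
            return (p - {c}) | {(h, frozenset(pos - facts), neg)}
        if pos - hs:                 # Failure: a positive body atom has no rule
            return p - {c}
    return None

def norm_cs0(p: Set[Clause]) -> Set[Clause]:
    """Compute the normal form by applying transformations until none applies."""
    while (q := step(p)) is not None:
        p = q
    return p

def sem(p: Set[Clause], lang: Set[str]):
    """SEM(P) = <facts of P, atoms of the language that occur in no head>."""
    return {h for h, pos, neg in p if not pos and not neg}, lang - heads(p)

# Example 5: d(b) <- not d(a).  d(c) <- not d(b).  d(c) <- d(a).
P = {("d(b)", frozenset(), frozenset({"d(a)"})),
     ("d(c)", frozenset(), frozenset({"d(b)"})),
     ("d(c)", frozenset({"d(a)"}), frozenset())}
print(sem(norm_cs0(P), {"d(a)", "d(b)", "d(c)"}))
# -> ({'d(b)'}, {'d(a)', 'd(c)'}): d(b) is true, d(a) and d(c) are false
```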

WFS was introduced in [71] and was characterized in terms of rewriting systems in [14]. This characterization is defined as follows:

Lemma 1([14]).

CS0 is a confluent rewriting system. It induces a 3-valued semantics which is the Well-Founded Semantics.

B.1.2.Possibilistic well-founded semantics

In order to define the possibilistic argument-based decision-making framework, a possibilistic version of the well-founded semantics is defined.

Definition 7.

Let P be an extended possibilistic logic program and S be a set of atoms. We define R(P,S) as the extended possibilistic logic program obtained from P by deleting:

  • 1. all the formulae of the form not a in the bodies of the possibilistic clauses such that a ∈ S, and

  • 2. each possibilistic clause that still has a formula of the form not a in its body.

Observe that R(P,S) does not have negative literals. This means that R(P,S) is an extended possibilistic definite logic program.
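A direct Python reading of Definition 7, using a tuple encoding (α, head, B+, B−) of possibilistic clauses, could look as follows (a sketch of our own, not the paper's implementation):

```python
from typing import FrozenSet, Set, Tuple

# A possibilistic clause alpha : a <- B+, not B- as (alpha, head, pos, neg).
PClause = Tuple[float, str, FrozenSet[str], FrozenSet[str]]

def reduct(p: Set[PClause], s: Set[str]) -> Set[PClause]:
    """R(P, S): drop every 'not a' with a in S, then delete any clause that
    still carries a negative literal; the result is a definite program."""
    out: Set[PClause] = set()
    for alpha, head, pos, neg in p:
        if neg - s:        # some 'not a' with a not in S remains -> delete clause
            continue
        out.add((alpha, head, pos, frozenset()))  # all 'not a' removed
    return out
```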

Definition 8(Possibilistic well-founded semantics [59]).

Let P = ⟨(Q, ≤), N⟩ be an extended possibilistic normal logic program, S1 be a set of possibilistic atoms and S2 be a set of atoms such that ⟨S1∗, S2⟩ is the well-founded model of P∗. ⟨S1, S2⟩ is the possibilistic well-founded model of P if and only if S1 = ΠCn(R(P, S2)), where ΠCn(P) is a fix-point operator. By P_WFS(P), we denote the possibilistic well-founded model of P.
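The operator ΠCn is defined formally in [59]; the sketch below gives our reading of it for a definite possibilistic program over the lattice ([0, 1], ≤), where the degree of a derived atom is the LUB (max), over its derivations, of the GLB (min) of the clause's necessity degree and the degrees of its body atoms:

```python
from typing import Dict, FrozenSet, Set, Tuple

PClause = Tuple[float, str, FrozenSet[str], FrozenSet[str]]  # (alpha, head, B+, B-)

def pi_cn(p: Set[PClause]) -> Dict[str, float]:
    """Iterate to the least fix-point assigning necessity degrees to atoms."""
    degrees: Dict[str, float] = {}
    changed = True
    while changed:
        changed = False
        for alpha, head, pos, _ in p:        # definite program: B- is empty
            if all(a in degrees for a in pos):
                # GLB of the clause's degree and its body atoms' degrees ...
                d = min([alpha] + [degrees[a] for a in pos])
                # ... and LUB over all ways of deriving the head
                if d > degrees.get(head, 0.0):
                    degrees[head] = d
                    changed = True
    return degrees
```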

B.2.Possibilistic argumentation-based decision framework

Generally speaking, a possibilistic decision-making problem follows a structure of cognitive states, namely beliefs, desires and intentions. In fact, the beliefs that an agent has about the world are captured by a possibilistic knowledge base, while intentions and goals of the given agent are expressed in terms of a set of decisions and a set of prioritized goals. Therefore, we can define:

Definition 9.

A possibilistic decision-making framework is a tuple ⟨P, D, G⟩ in which:

  • 1. P is a possibilistic normal logic program.

  • 2. D = {d1, …, dn} is a set of decision atoms such that D ⊆ LP; D denotes the set of all possible decisions.

  • 3. G = {(g1, β1), …, (gm, βm)} is a set of possibilistic atoms such that G∗ ⊆ LP; G denotes the set of all possible goals and βj (1 ≤ j ≤ m) represents the priority of the goal gj.

  • 4. D ∩ G∗ = ∅.
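For illustration, here is a tiny instance of Definition 9 in a tuple-based Python encoding; the atoms, degrees and priorities below are invented for the example, not taken from the ALI knowledge base:

```python
# P: 0.9 : tired <- T.    0.8 : relaxed <- walk.
P = {(0.9, "tired", frozenset(), frozenset()),
     (0.8, "relaxed", frozenset({"walk"}), frozenset())}
D = {"walk", "rest"}        # decision atoms, disjoint from the goal atoms
G = {("relaxed", 0.7)}      # goal 'relaxed' with priority 0.7
```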

The possibilistic decision-making framework of Definition 9 is based on a possibilistic theory with negation as failure; indeed, the user is able to express assumptions by means of negation as failure. Since the framework is based on a possibilistic default theory, a possibilistic default-reasoning inference is required for building arguments; for this we use the possibilistic version of the well-founded semantics (Definition 8).

Definition 10.

Let F = ⟨P, D, G⟩ be a possibilistic decision-making framework. An argument on a decision d ∈ D is a tuple A = ⟨S, d, (g, α)⟩ such that:

  • 1. (g, α) ∈ T and g ∈ G∗, where P_WFS(S ∪ {1 : d ← ⊤.}) = ⟨T, F⟩, ⟨T, F⟩ being the sets of possibilistic atoms from which we can infer conclusions.

  • 2. S ⊆ P such that S is a minimal set (w.r.t. ⊆) among the subsets of P satisfying condition 1.
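A naive way to enumerate such arguments is to try subsets of P from smallest to largest, keeping only minimal supports, as in the sketch below. It assumes a function p_wfs mapping a possibilistic program to the pair ⟨T, F⟩ of Definition 8 (not implemented here), and is meant to mirror the definition rather than to be efficient:

```python
from itertools import combinations

def arguments_for(p, d, goals, p_wfs):
    """Yield arguments (S, d, (g, alpha)) for the decision atom d.

    p: set of possibilistic clauses (alpha, head, B+, B-); goals: the goal
    atoms G*; p_wfs: assumed function computing Definition 8's pair (T, F).
    """
    found = []                                    # minimal supports seen so far
    clauses = list(p)
    for k in range(len(clauses) + 1):             # smallest subsets first
        for subset in combinations(clauses, k):
            s = set(subset)
            if any(f <= s for f in found):        # a smaller support suffices,
                continue                          # so s is not minimal
            t, _ = p_wfs(s | {(1.0, d, frozenset(), frozenset())})  # add 1 : d.
            reached = [(g, a) for (g, a) in t if g in goals]
            if reached:
                found.append(s)
                for goal in reached:
                    yield (frozenset(s), d, goal)
```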

Once we have identified the set of arguments of our possibilistic default theory, the relationships between these arguments need to be identified.

Three elements make an argumentation system a framework for defeasible argumentation. The first is the notion of a conflict between arguments (also called 'attack' or 'counter-argument') [65]. Two types of conflicts are established in [62] and defined w.r.t. P_WFS in Definition 3.3 of [56]: let A = ⟨SA, dA, gA⟩ and B = ⟨SB, dB, gB⟩ be two arguments, with P_WFS(SA ∪ {1 : dA ← ⊤.}) = ⟨TA, FA⟩ and P_WFS(SB ∪ {1 : dB ← ⊤.}) = ⟨TB, FB⟩. We say that argument A attacks B if one of the following conditions holds:

  • 1. Rebut: a ∈ TA∗ and ¬a ∈ TB∗.

  • 2. Undercut: a ∈ TA∗ and a ∈ FB.
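Under this encoding, the two conditions reduce to simple set tests. In the sketch below, ta and tb stand for the atoms concluded by A and B (the projections TA∗ and TB∗) and fb for the false atoms FB; strong negation is written as a leading '~', following the ASP-style replacement described in Appendix A.1:

```python
def neg_atom(a: str) -> str:
    """Strong negation, encoded here as a leading '~' on the atom name."""
    return a[1:] if a.startswith("~") else "~" + a

def rebuts(ta: set, tb: set) -> bool:
    """A rebuts B: A concludes an atom whose strong negation B concludes."""
    return any(neg_atom(a) in tb for a in ta)

def undercuts(ta: set, fb: set) -> bool:
    """A undercuts B: A concludes an atom that is false in B's inference."""
    return bool(ta & fb)
```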

By having a set of arguments and their relationships, a possibilistic decision-making framework can be instantiated as a possibilistic argumentation decision-making framework. Since any pair of arguments can be compared according to different criteria (such as the certainty level of the goal reached by the given argument), a possibilistic argumentation decision-making framework is equipped with a partial order relation: given a set of arguments AF, ⪯AF denotes a partial order on AF.

Definition 11.

A possibilistic argumentation decision-making framework is a tuple PF = ⟨F, AF, Att, ⪯AF⟩, where F is a possibilistic decision-making framework, AF is the set of arguments built from F, and Att denotes the binary attack relation over AF, i.e., Att ⊆ AF × AF.

Essentially, a possibilistic argumentation decision-making framework is an extension of a possibilistic decision-making framework.
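The definition leaves the concrete partial order ⪯AF open. One natural instantiation, sketched below as our illustrative choice (not prescribed by the paper), compares arguments first by the priority β of the goal they reach and then by the certainty level α with which they reach it:

```python
def at_least_as_preferred(arg_a, arg_b, priority) -> bool:
    """arg = (S, d, (g, alpha)); priority maps goal atoms to their beta values."""
    (_, _, (g_a, alpha_a)) = arg_a
    (_, _, (g_b, alpha_b)) = arg_b
    return (priority[g_a], alpha_a) >= (priority[g_b], alpha_b)
```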

References

[1] T. Alsinet, C.I. Chesñevar, L. Godo, S. Sandri and G. Simari, Formalizing argumentative reasoning in a possibilistic logic programming setting with fuzzy unification, International Journal of Approximate Reasoning 48(3) (2008), 711–729. doi:10.1016/j.ijar.2007.07.004.

[2] L. Amgoud and F.D. De Saint Cyr, Measures for persuasion dialogs: A preliminary investigation, Frontiers in Artificial Intelligence and Applications 172 (2008), 13–24.

[3] L. Amgoud and H. Prade, Using arguments for making decisions: A possibilistic logic approach, in: Proceedings of the 20th Annual Conference on Uncertainty in Artificial Intelligence (UAI-04), AUAI Press, Arlington, VA, 2004, pp. 10–17.

[4] L. Amgoud and H. Prade, Using arguments for making and explaining decisions, Artificial Intelligence 173(3) (2009), 413–436. doi:10.1016/j.artint.2008.11.006.

[5] M. Amoretti, F. Wientapper and F. Furfari, Sensor data fusion for activity monitoring in ambient assisted living environments, in: ICST 2010, Springer, Pisa, Italy, 2010, pp. 206–221.

[6] K. Atkinson and T. Bench-Capon, Practical reasoning as presumptive argumentation using action based alternating transition systems, Artificial Intelligence 171(10) (2007), 855–874. doi:10.1016/j.artint.2007.04.009.

[7] K. Atkinson, T. Bench-Capon and P. McBurney, Computational representation of practical argument, Synthese 152(2) (2006), 157–206. doi:10.1007/s11229-005-3488-2.

[8] A. Bandura, Self-efficacy: Toward a unifying theory of behavioral change, Psychological Review 84(2) (1977), 191. doi:10.1037/0033-295X.84.2.191.

[9] C. Baral, Knowledge Representation, Reasoning and Declarative Problem Solving, Cambridge University Press, Cambridge, 2003.

[10] P. Baroni, M. Caminada and M. Giacomin, An introduction to argumentation semantics, Knowledge Engineering Review 26(4) (2011), 365–410. doi:10.1017/S0269888911000166.

[11] T. Bench-Capon and H. Prakken, Argumentation, in: Information Technology and Lawyers, Springer, Dordrecht, The Netherlands, 2006, pp. 61–80. doi:10.1007/1-4020-4146-2_3.

[12] T.J. Bench-Capon, Persuasion in practical argument using value-based argumentation frameworks, Journal of Logic and Computation 13(3) (2003), 429–448. doi:10.1093/logcom/13.3.429.

[13] A. Bondarenko, P.M. Dung, R.A. Kowalski and F. Toni, An abstract, argumentation-theoretic approach to default reasoning, Artificial Intelligence 93(1) (1997), 63–101. doi:10.1016/S0004-3702(97)00015-5.

[14] S. Brass, U. Zukowski and B. Freitag, Transformation-based bottom-up computation of the well-founded model, in: Non-Monotonic Extensions of Logic Programming, NMELP '96, J. Dix, L.M. Pereira and T.C. Przymusinski, eds, Lecture Notes in Computer Science, Vol. 1216, Springer, 1997, pp. 171–201. doi:10.1007/BFb0023807.

[15] M. Calejo, InterProlog: Towards a declarative embedding of logic programming in Java, in: Logics in Artificial Intelligence, Springer, 2004, pp. 714–717. doi:10.1007/978-3-540-30227-8_64.

[16] A. Cawsey, F. Grasso and C. Paris, Adaptive information for consumers of healthcare, in: The Adaptive Web, Springer, Berlin, 2007, pp. 465–484. doi:10.1007/978-3-540-72079-9_15.

[17] B. De Carolis, F. de Rosis, F. Grasso, A. Rossiello, D.C. Berry and T. Gillie, Generating recipient-centered explanations about drug prescription, Artificial Intelligence in Medicine 8(2) (1996), 123–145. doi:10.1016/0933-3657(95)00029-1.

[18] F. De Rosis and F. Grasso, Affective natural language generation, in: Affective Interactions, Springer, 2000, pp. 204–218.

[19] J. Dix, M. Osorio and C. Zepeda, A general theory of confluent rewriting systems for logic programming and its applications, Annals of Pure and Applied Logic 108(1–3) (2001), 153–188. doi:10.1016/S0168-0072(00)00044-0.

[20] D. Dubois and H. Prade, Possibilistic logic: A retrospective and prospective view, Fuzzy Sets and Systems 144(1) (2004), 3–23. doi:10.1016/j.fss.2003.10.011.

[21] P.M. Dung, On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games, Artificial Intelligence 77(2) (1995), 321–357. doi:10.1016/0004-3702(94)00041-X.

[22] E. Erriquez and F. Grasso, Generation of personalised advisory messages: An ontology based approach, in: CBMS, 2008, pp. 437–442.

[23] R.T. Fielding and R.N. Taylor, Principled design of the modern Web architecture, ACM Transactions on Internet Technology 2(2) (2002), 115–150. doi:10.1145/514183.514185.

[24] C. Filippaki, G. Antoniou and I. Tsamardinos, Using constraint optimization for conflict resolution and detail control in activity recognition, in: AmI 2011, Springer, 2011, pp. 51–60.

[25] B.J. Fogg, Persuasive technology: Using computers to change what we think and do, Ubiquity 2002 (2002), 5.

[26] J. Foley and G. Churcher, Applying complex event processing and extending sensor Web enablement to a health care sensor network architecture, in: 1st ICST Conference, Springer, 2009, pp. 2–5, Chapter 1.

[27] M. Gelfond and V. Lifschitz, The stable model semantics for logic programming, in: 5th Conference on Logic Programming, R. Kowalski and K. Bowen, eds, MIT Press, 1988, pp. 1070–1080.

[28] M. Gelfond and V. Lifschitz, Classical negation in logic programs and disjunctive databases, New Generation Computing 9(3–4) (1991), 365–385. doi:10.1007/BF03037169.

[29] M. Georgeff, B. Pell, M. Pollack, M. Tambe and M. Wooldridge, The belief–desire–intention model of agency, in: Intelligent Agents V: Agents Theories, Architectures, and Languages, Springer, 1999, pp. 1–10. doi:10.1007/3-540-49057-4_1.

[30] I. Gómez-Sebastià and J.C. Nieves, WizArg: Visual argumentation framework solving wizard, in: Artificial Intelligence Research and Development Conference, IOS Press, Amsterdam, The Netherlands, 2010, pp. 249–258.

[31] T.F. Gordon, The pleadings game, Artificial Intelligence and Law 2(4) (1993), 239–292. doi:10.1007/BF00871972.

[32] F. Grasso, A. Cawsey and R. Jones, Dialectical argumentation to solve conflicts in advice giving: A case study in the promotion of healthy nutrition, International Journal of Human–Computer Studies 53(6) (2000), 1077–1115. doi:10.1006/ijhc.2000.0429.

[33] W. Grennan, Informal Logic: Issues and Techniques, MQUP, 1997.

[34] A. Hastings, A reformulation of the modes of reasoning in argumentation, Northwestern University, 1962.

[35] F.T. Igira and J. Gregory, Cultural historical activity theory, in: Handbook of Research on Contemporary Theoretical Models in Information Systems, 2009, pp. 434–454. doi:10.4018/978-1-60566-659-4.ch025.

[36] A. Kakas and P. Moraitis, Argumentation based decision making for autonomous agents, in: Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems, ACM, 2003, pp. 883–890. doi:10.1145/860575.860717.

[37] A.C. Kakas, R.A. Kowalski and F. Toni, Abductive logic programming, Journal of Logic and Computation 2(6) (1992), 719–770. doi:10.1093/logcom/2.6.719.

[38] V. Kaptelinin and B.A. Nardi, Acting with Technology: Activity Theory and Interaction Design, MIT Press, 2006.

[39] W. Karwowski and G. Bedny, Task concept and its major attributes in ergonomics and psychology, in: Human–Computer Interaction and Operators' Performance: Optimizing Work Design with Activity Theory, 2011, pp. 63–89.

[40] E. Kim and S. Helal, Revisiting human activity frameworks, in: ICST Conference, S-Cube 2010, Springer, Miami, FL, 2010, pp. 219–234.

[41] P. Krause, S. Ambler, M. Elvang-Goransson and J. Fox, A logic of argumentation for reasoning under uncertainty, Computational Intelligence 11(1) (1995), 113–131. doi:10.1111/j.1467-8640.1995.tb00025.x.

[42] H. Lee, J. Choi and R. Elmasri, A dynamic context reasoning based on evidential fusion networks in home-based care, in: Sensor Fusion – Foundation and Applications, C. Thomas, ed., InTech, 2011, pp. 1–26, Chapter 1.

[43] A.N. Leont'ev, Activity, Consciousness, and Personality, Prentice-Hall, Englewood Cliffs, NJ, 1978.

[44] A.N. Leontyev, Activity and Consciousness, Moscow, 1974.

[45] H. Lindgren and I. Nilsson, Towards user-authored agent dialogues for assessment in personalised ambient assisted living, International Journal of Web Engineering and Technology 8(2) (2013), 154–176. doi:10.1504/IJWET.2013.055714.

[46] H. Lindgren and C. Yan, ACKTUS: A platform for developing personalized support systems in the health domain, in: Proceedings of the 5th International Conference on Digital Health 2015, DH '15, ACM, New York, NY, 2015, pp. 135–142. doi:10.1145/2750511.2750526.

[47] H. Lindgren, J. Baskar, E. Guerrero, J.C. Nieves, I. Nilsson and C. Yan, Computer-supported assessment for tailoring assistive technology, in: Proceedings of the 6th International Conference on Digital Health 2016, DH '16, to appear. doi:10.1145/2896338.2896352.

[48] J. Lobo, J. Minker and A. Rajasekar, Foundations of Disjunctive Logic Programming, MIT Press, 1992.

[49] W.C. Mann and S.A. Thompson, Rhetorical structure theory: Toward a functional theory of text organization, Text 8(3) (1988), 243–281.

[50] S.J. Marshall and S.J. Biddle, The transtheoretical model of behavior change: A meta-analysis of applications to physical activity and exercise, Annals of Behavioral Medicine 23(4) (2001), 229–246. doi:10.1207/S15324796ABM2304_2.

[51] J. McCarthy, Circumscription – A form of non-monotonic reasoning, Artificial Intelligence 13(1) (1980), 27–39. doi:10.1016/0004-3702(80)90011-9.

[52] D. McDermott, Nonmonotonic logic II: Nonmonotonic modal theories, Journal of the ACM 29(1) (1982), 33–57. doi:10.1145/322290.322293.

[53] M. Miceli, F.D. Rosis and I. Poggi, Emotional and non-emotional persuasion, Applied Artificial Intelligence 20(10) (2006), 849–879. doi:10.1080/08839510600938193.

[54] R.C. Moore, Semantical considerations on nonmonotonic logic, Artificial Intelligence 25(1) (1985), 75–94. doi:10.1016/0004-3702(85)90042-6.

[55] P. Nicolas, L. Garcia, I. Stéphan and C. Lefèvre, Possibilistic uncertainty handling for answer set programming, Annals of Mathematics and Artificial Intelligence 47(1–2) (2006), 139–181. doi:10.1007/s10472-006-9029-y.

[56] J.C. Nieves and R. Confalonieri, A possibilistic argumentation decision making framework with default reasoning, Fundamenta Informaticae 113 (2011), 41–61.

[57] D. Nute, Defeasible logic, in: Web Knowledge Management and Decision Support, Springer, 2003, pp. 151–169. doi:10.1007/3-540-36524-9_13.

[58] H. op den Akker, V.M. Jones and H.J. Hermens, Tailoring real-time physical activity coaching systems: A literature survey and model, User Modeling and User-Adapted Interaction 24(5) (2014), 351–392.

[59] M. Osorio and J.C. Nieves, Possibilistic well-founded semantics, in: MICAI 2009, Lecture Notes in Artificial Intelligence, Vol. 5845, Springer-Verlag, 2009, pp. 15–26.

[60] S. Parsons and A. Hunter, A review of uncertainty handling formalisms, in: Applications of Uncertainty Formalisms, Springer, 1998, pp. 8–37. doi:10.1007/3-540-49426-X_2.

[61] C. Perelman, The New Rhetoric: A Treatise on Argumentation, 1969.

[62] J.L. Pollock, Defeasible reasoning, Cognitive Science 11(4) (1987), 481–518. doi:10.1207/s15516709cog1104_4.

[63] H. Prakken, Coherence and flexibility in dialogue games for argumentation, Journal of Logic and Computation 15(6) (2005), 1009–1040. doi:10.1093/logcom/exi046.

[64] H. Prakken, Combining sceptical epistemic reasoning with credulous practical reasoning, Frontiers in Artificial Intelligence and Applications 144 (2006), 311–322.

[65] H. Prakken and G. Vreeswijk, Logics for defeasible argumentation, in: Handbook of Philosophical Logic, Kluwer Academic Publishers, 2002, pp. 218–319.

[66] I. Rahwan and L. Amgoud, An argumentation based approach for practical reasoning, in: Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems, ACM, 2006, pp. 347–354. doi:10.1145/1160633.1160696.

[67] P. Rao, K. Sagonas, T. Swift, D.S. Warren and J. Freire, XSB: A system for efficiently computing well-founded semantics, in: Logic Programming and Nonmonotonic Reasoning, Springer, 1997, pp. 430–440. doi:10.1007/3-540-63255-7_33.

[68] R. Reiter, A logic for default reasoning, Artificial Intelligence 13(1) (1980), 81–132. doi:10.1016/0004-3702(80)90014-4.

[69] F. Riguzzi and T. Swift, The PITA system: Tabling and answer subsumption for reasoning under uncertainty, Theory and Practice of Logic Programming 11(4–5) (2011), 433–449. doi:10.1017/S147106841100010X.

[70] W.-M. Roth and Y.-J. Lee, “Vygotsky’s neglected legacy”: Cultural-historical activity theory, Review of Educational Research 77(2) (2007), 186–232. doi:10.3102/0034654306298273.

[71] A. Van Gelder, K.A. Ross and J.S. Schlipf, The well-founded semantics for general logic programs, Journal of the ACM 38(3) (1991), 619–649. doi:10.1145/116825.116838.

[72] D. Walton, C. Reed and F. Macagno, Argumentation Schemes, Cambridge University Press, 2008.

[73] D.N. Walton, Argumentation Schemes for Presumptive Reasoning, Psychology Press, 1996.

[74] D.K. Wilson, J. Williams, A. Evans, G. Mixon and C. Rheaume, Brief report: A qualitative study of gender preferences and motivational factors for physical activity in underserved adolescents, Journal of Pediatric Psychology 30(3) (2005), 293–297. doi:10.1093/jpepsy/jsi039.