
Preface

This issue of JAISE is a regular issue consisting of 10 articles. The review of these articles was supervised by our associate editors Carlos Ramos, Vincent Tam, Andres Ortega, Davy Preuveneers, Jie Yang, Fabio Paterno, Reiner Wichert, Andrea Prati, Aaron Crandall, Gordon Hunter, and Asier Aztiria, whom we thank for their work. The back pages of this issue contain information about upcoming events and other related material. The list of issues for the coming months is included at the end of this preface.

1. This issue

Research in Human Robot Interaction (HRI) with social robots usually gathers observations in order to explore the dynamics of short- and long-term interactions. The most common approaches for analyzing the observed data are based on a small number of behavioural units for which frequency or duration is captured. A consequence of this sampled analysis is that comparing results between studies may be difficult. The paper “New instrumentation for Human Robot Interaction assessment based on observational methods” by Andrés et al. proposes an approach to assess the complete human-robot interaction data. The paper presents experiments with two different robot types, together with general guidelines, learned from the analysis, for assessing the interaction quality between social robots and users.

Many applications in mobile navigation services support users through the determination of their current location. Knowledge of the context of the user can influence the way services are presented. In the paper “A context-aware pedestrian navigation system” by Pouryegan and Malek, pedestrian navigation services are classified into five categories: Location Finding, Optimal Path Finding, Orientation, Positioning and Way-finding. The paper also considers four general categories of context in deciding on the services it renders to the user: mobile, movement, movement environment, and motivation, each with its own subclasses, attributes and methods, which are used to set the service parameters for the targeted pedestrian navigation system.
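
As a rough illustration of this kind of context model (the class names, fields and derived parameters below are hypothetical, not taken from the paper), the four categories could be represented as simple data structures whose values parameterize a way-finding service:

# Illustrative Python sketch, not the authors' implementation.
from dataclasses import dataclass

@dataclass
class MobileContext:          # device-related context (hypothetical fields)
    battery_level: float
    screen_size_inches: float

@dataclass
class MovementContext:        # how the pedestrian is moving
    speed_m_s: float
    heading_deg: float

@dataclass
class EnvironmentContext:     # the movement environment
    indoors: bool
    crowd_density: str        # e.g. "low", "medium", "high"

@dataclass
class MotivationContext:      # why the user is navigating
    purpose: str              # e.g. "commute", "tourism", "emergency"

def service_parameters(mob, mov, env, mot):
    """Derive example parameters for a way-finding service from the context."""
    return {
        "map_detail": "high" if env.indoors else "medium",
        "reroute_threshold_m": 5 if mov.speed_m_s < 1.5 else 15,
        "voice_guidance": mot.purpose == "emergency" or mob.screen_size_inches < 5,
    }

params = service_parameters(
    MobileContext(0.6, 4.7),
    MovementContext(1.2, 90.0),
    EnvironmentContext(True, "low"),
    MotivationContext("tourism"),
)
print(params)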

Many smart environment systems are equipped with a multitude of devices and sensors, the management of which has been a major challenge in the design of these systems. An approach to handling the problem has been to model the goals and intentions of the user, and then use methods of artificial intelligence to control the state of the involved system components. Typically, a sophisticated scheme is required to discover and enforce the respective commands, notifications and their correct sequence on the real devices. The paper “Autonomic Goal-Oriented Device Management for Smart Environments” by Sanaullah et al. proposes a methodology for this process that considers the composite nature of the state of an individual device, and the possible variation of specific commands, notifications and their sequence, depending on the current states of the devices. The methodology works at two levels: design time and runtime. At design time, it constructs the extended data and control flow behavioural graphs of the devices by using the concepts of a model checking approach. Then, at runtime, it uses these graphs to find a reliable evolution through which the desired goal can be fulfilled.
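
A minimal sketch of the runtime idea, under the simplifying assumption that a device's behavioural graph can be reduced to a plain state-transition map (the lamp model below is invented for illustration and omits the paper's composite states and notifications):

# Illustrative Python sketch: breadth-first search for a command sequence
# that drives a device from its current state to the goal state.
from collections import deque

# Hypothetical behavioural graph of a lamp: state -> {command: next_state}
LAMP_GRAPH = {
    "off":    {"power_on": "on"},
    "on":     {"power_off": "off", "dim": "dimmed"},
    "dimmed": {"brighten": "on", "power_off": "off"},
}

def plan_commands(graph, start, goal):
    """Return a command sequence leading from start to goal, or None."""
    queue = deque([(start, [])])
    visited = {start}
    while queue:
        state, commands = queue.popleft()
        if state == goal:
            return commands
        for command, nxt in graph.get(state, {}).items():
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, commands + [command]))
    return None

print(plan_commands(LAMP_GRAPH, "off", "dimmed"))  # ['power_on', 'dim']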

Parents of children with the Prader-Willi Syndrome (PWS) have shown to experience difficulties in interpreting their child’s signals due to a lack of interest from the child, as these children are excessively sleepy, hardly cry and express movement to a lesser extent. This may cause a risk for a disrupted bonding process between the child and his or her parents. The paper “Sense – a biofeedback system to support the interaction between parents and their child with the Prader-Willi syndrome – a pilot study” by Frederiks et al. proposes a concept biofeedback system to stimulate the interaction and bonding process between parents and their child with the PWS condition. Sense measures the child’s reaction to his or her environment and informs the parents about the child’s reaction. In this way the system allows parents to adapt to their context, namely the interaction with their child. The system consists of a galvanic skin response (GSR) sensor that measures the activation of the sympathetic nervous system through the change of conductance over skin of the child’s foot or hand. The signal of this sensor is then transmitted to a movement and color changing “butterfly”, in order for parents to be able to interpret the child’s social interaction signals.
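
As a loose illustration only (the thresholds and feedback cues below are hypothetical and not taken from the pilot study), the mapping from a GSR reading to the butterfly's behaviour could look like this:

# Illustrative Python sketch of the biofeedback mapping.
def butterfly_feedback(gsr_microsiemens, baseline):
    """Translate a GSR reading into simple feedback cues for the parents."""
    arousal = gsr_microsiemens - baseline              # rise above resting level
    if arousal > 2.0:
        return {"colour": "red", "wing_speed": "fast"}     # strong reaction
    if arousal > 0.5:
        return {"colour": "orange", "wing_speed": "medium"}
    return {"colour": "blue", "wing_speed": "slow"}        # calm / little response

print(butterfly_feedback(gsr_microsiemens=4.1, baseline=2.5))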

Much recent attention has been devoted to remote health monitoring of the elderly, aimed at detecting and predicting conditions which may call for support or intervention. The focus of the majority of the existing work on health monitoring is on sensor development, data collection schemes and feature extraction methods. There has been less focus on analyzing multidimensional physiological parameters and activity recognition integrated in a single system. The paper “Sensor based Efficient Decision Making Framework for Remote Healthcare” by Ganapathy et al. proposes an automated sensor-based decision making framework that categorizes activity and vital parameters, and uses an open standard for web enablement of sensor data. Two schemes for abnormality detection in vital parameters and activity recognition are proposed in the paper. The framework combines a generative Hidden Markov Model for activity prediction with a discriminative Adaptive Neuro Fuzzy Inference System for accurate activity classification. These two modules are combined for early detection and accurate activity recognition.
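
A simplified sketch of such a generative/discriminative combination, with invented numbers and a plain weighted fusion standing in for the paper's HMM and ANFIS modules:

# Illustrative Python sketch: an HMM-style transition prior predicts the next
# activity, a stand-in discriminative classifier scores the sensor features,
# and the two probability vectors are fused by simple weighting.
import numpy as np

ACTIVITIES = ["sleeping", "walking", "cooking"]

# Hypothetical transition matrix P(next activity | current activity)
TRANSITIONS = np.array([
    [0.80, 0.15, 0.05],   # from sleeping
    [0.10, 0.60, 0.30],   # from walking
    [0.05, 0.35, 0.60],   # from cooking
])

# Hypothetical classifier weights over two sensor features (motion, heat)
CLASSIFIER_W = np.array([
    [-2.0, -1.0],   # sleeping: low motion, low heat
    [ 2.0, -0.5],   # walking: motion dominates
    [ 0.5,  2.5],   # cooking: heat dominates
])

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def fused_posterior(current_activity, features, alpha=0.5):
    """Weighted fusion of the transition prior and the classifier posterior."""
    prior = TRANSITIONS[ACTIVITIES.index(current_activity)]
    likelihood = softmax(CLASSIFIER_W @ features)
    return alpha * prior + (1 - alpha) * likelihood

post = fused_posterior("walking", np.array([0.2, 0.9]))
print(dict(zip(ACTIVITIES, post.round(2))))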

A large variety of sensor types have been examined in the design of smart environment solutions. Most of these sensors are tasked to measure environmental parameters or detect activities of different users within the application environment. Among these sensor types are capacitive proximity sensors, which use weak electric fields to recognize conductive objects, such as the human body. These sensors can be applied unobtrusively and can even provide information when hidden from view. In recent years, various research groups have used this sensor category to create a variety of applications. However, no rigorous comparative study has been reported to establish when using these sensors has an advantage over other sensor types. The paper “Capacitive proximity sensing in smart environments” by Braun et al. discusses the application of capacitive proximity sensors in smart environments in comparison to other sensor technologies. The paper offers an overview of this sensing technology and identifies specific application domains which are suitable for its utilization. Based on existing systems from the literature and a number of prototypes the authors have created in recent years, the paper specifies the benefits and limitations of this technology and offers a set of guidelines to researchers who consider this technology for their smart environment applications.

Customizing future smart environments according to the preferences of their end users will require highly expressive languages in order to give the user full control of the system via an intuitive interface. The paper “Customizing smart environments: A tabletop approach” by Pons et al. presents a rule editing tool for interactive tabletops aimed at specifying behaviour in reactive smart environments. The behaviour specification in the proposed editor is based on a generic rule model realized with data flow expressions, which allows highly expressive rules to be defined in terms of comprehensible representations. The paper also reports on an experimental study conducted in a smart home setting to evaluate the suitability of this tool for users with different programming backgrounds.
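
For illustration, a reactive rule of the kind such an editor targets can be thought of as an event/condition/action triple; the sketch below is a generic stand-in, not the paper's data-flow rule model:

# Illustrative Python sketch of a reactive rule and a simple dispatcher.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    event: str                              # e.g. a sensor event name
    condition: Callable[[dict], bool]       # predicate over the environment state
    action: Callable[[dict], None]          # effect on the environment

rules = [
    Rule(
        event="motion_detected",
        condition=lambda env: env["lux"] < 50 and env["hour"] >= 20,
        action=lambda env: env.update(lights="on"),
    ),
]

def dispatch(event, env):
    """Fire every rule whose event matches and whose condition holds."""
    for rule in rules:
        if rule.event == event and rule.condition(env):
            rule.action(env)

env = {"lux": 20, "hour": 22}
dispatch("motion_detected", env)
print(env)   # lights switched on after dark when motion is seen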

Along the same line of work on customizing the behaviour of a smart environment, gesture recognition can serve as the basis for personalized interaction of the user with the system, enabling the user to comfortably manage the physical and virtual resources in the environment. The paper “A gesture-based method for natural interaction in smart spaces” by Wang et al. describes a gesture-based interaction method that uses a specific grammar to control and network objects in a smart space. The method’s grammar establishes the identity of an object based on a gesture primitive performed by the user, followed by another gesture primitive which indicates the action to perform. The main component of the system is a gesture recognition module based on an adapted Dynamic Time Warping algorithm. This module works on either accelerometer data from a smartphone held by the user, or spatial position data of the gesture obtained from Kinect-based methods.
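
The core of such a recognition module is template matching under Dynamic Time Warping; the sketch below shows plain DTW on one-dimensional accelerometer magnitudes with invented gesture templates, leaving aside the paper's adapted variant and its gesture grammar:

# Illustrative Python sketch of DTW-based gesture classification.
def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic-programming DTW on scalar sequences."""
    inf = float("inf")
    cost = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    cost[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
    return cost[len(a)][len(b)]

def classify(sample, templates):
    """Return the label of the template with the smallest DTW distance."""
    return min(templates, key=lambda label: dtw_distance(sample, templates[label]))

templates = {                       # hypothetical gesture templates
    "circle": [0, 1, 2, 1, 0, -1, -2, -1, 0],
    "swipe":  [0, 2, 4, 6, 8],
}
print(classify([0, 1, 3, 5, 7, 8], templates))   # -> "swipe"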

Localization systems are becoming more important in pervasive wireless technologies because of their role in location-aware services. For indoor applications, fingerprinting is a localization technique in which a target’s position is estimated on the basis of measurements, by a set of reference nodes, of the received radio signal strength (RSS) from the target node. The paper “A Hybrid Radio/Accelerometric Approach to Arm Posture Recognition” by Giuberti et al. explores the use of existing fingerprinting-based radio localization methods to perform arm posture recognition by estimating the positions of target wireless sensor nodes placed on a user’s arms. The work also explores a data fusion technique that combines the fingerprinting method with data obtained from accelerometers present in the target node, to refine the accuracy of the position estimates. The paper provides an analytical discussion comparing the feasibility and accuracy of the proposed method against other methods such as those based on visual sensing.
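
A toy version of RSS fingerprinting (weighted nearest neighbours over an invented radio map; the paper's method and its accelerometer fusion are more elaborate) illustrates the basic estimation step:

# Illustrative Python sketch (requires Python 3.8+ for math.dist).
import math

# Hypothetical radio map: (x, y) position -> RSS vector seen by 3 reference nodes (dBm)
RADIO_MAP = {
    (0.0, 0.0): [-40, -70, -75],
    (1.0, 0.0): [-55, -60, -72],
    (1.0, 1.0): [-65, -52, -58],
    (0.0, 1.0): [-60, -68, -50],
}

def estimate_position(rss, k=3):
    """Average the k closest fingerprint positions, weighted by 1/distance in RSS space."""
    ranked = sorted(RADIO_MAP.items(), key=lambda item: math.dist(item[1], rss))[:k]
    weights = [1.0 / (math.dist(fp, rss) + 1e-6) for _, fp in ranked]
    total = sum(weights)
    x = sum(w * pos[0] for (pos, _), w in zip(ranked, weights)) / total
    y = sum(w * pos[1] for (pos, _), w in zip(ranked, weights)) / total
    return x, y

print(estimate_position([-58, -59, -70]))   # interpolated position estimate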

The ability to detect, quantify and describe odours is of ever increasing interest in a society with greater awareness of environmental sustainability. Technologies which enable detection and identification of complex gases, the so-called electronic noses, have been on the market for some time, slowly providing affordable and commercial solutions. By modelling high-level knowledge about odours, their causes, and their relations to other phenomena, it is possible to assist the interpretation of the gas sensor signals. To automate this process, the high-level (symbolic) knowledge needs to be seamlessly connected to the lower-level (quantitative) sensor data. The paper “Reasoning for Sensor Data Interpretation: an Application to Air Quality Monitoring” by Alirezaie and Loutfi presents a knowledge-driven approach to reasoning about changes detected over gas sensor signals in a sensor network. The paper uses ideas from Semantic Sensor Networks (SSN) to define an ontology which provides an adaptive way of modelling the domain-related knowledge. The proposed approach is tested in a kitchen environment with several objects monitored by different sensors. The contextual information provided by the sensor network, together with high-level domain knowledge, is used to infer explanations for changes in the ambient air detected by the gas sensors.
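
As a toy illustration of connecting a quantitative signal change to a symbolic explanation (the knowledge table below is invented and far simpler than the paper's SSN-based ontology):

# Illustrative Python sketch of knowledge-driven explanation of a gas-signal change.
KNOWLEDGE = {
    # (gas type, active context object) -> candidate explanation
    ("VOC", "stove"):                   "cooking fumes",
    ("VOC", "cleaning_cupboard_open"):  "cleaning agents",
    ("CO2", "many_people"):             "occupancy increase",
}

def explain(gas_type, delta, context_objects, threshold=0.3):
    """Return symbolic explanations for a significant rise in a gas signal."""
    if delta < threshold:               # no notable change detected
        return []
    return [
        explanation
        for (gas, obj), explanation in KNOWLEDGE.items()
        if gas == gas_type and obj in context_objects
    ]

print(explain("VOC", delta=0.8, context_objects={"stove", "window_closed"}))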

2. Upcoming issues

The following is the list of upcoming issues of JAISE:

  • Sept. 2015: Thematic Issue on Mobility

  • Nov. 2015: Regular Issue

  • Jan. 2016: Thematic Issue on Natural Interaction in Intelligent Environments

  • March 2016: Regular Issue

More information on the call for papers to the future thematic issues is available on the webpage of JAISE at: http://jaise-journal.org/.