
I-feed: A robotic platform of an assistive feeding robot for the disabled elderly population

Abstract

BACKGROUND:

Researchers have developed various types of feeding robots to help patients with hand disabilities. However, most commercially available feeding robots are either functionally limited or expensive.

OBJECTIVE:

The purpose of this study is to develop a low-cost, multi-functional feeding robot with reliable performance that helps disabled elderly people eat independently.

METHODS:

Our feeding robot (called ‘I-feed’) uses human-computer interaction based on voice recognition. The feeding system, built around a four-degree-of-freedom robotic arm, not only completes the two tasks of food selection and feeding through speech recognition, but also meets users’ diverse needs with three bowls. We also designed a height-adjustable U-shaped table for the feeding robot.

RESULTS:

The newly developed feeding robot can not only select bowls containing different foods through efficient voice commands, but also adapt to users of different heights through a height-adjustable U-shaped table.

CONCLUSIONS:

The experimental results show that the speech recognition accuracy is excellent and that the robot arm can perform the corresponding tasks successfully.

1. Introduction

It is well known that many people around the world cannot eat independently because of diseases such as paralysis, Parkinson’s disease, and cerebral palsy [1]. To address this problem, researchers have explored assistive devices in recent years [2, 3, 4]. In particular, feeding robots, devices with which patients interact directly, have attracted wide attention. Several commercial feeding robots have been developed, including the Meal Buddy and Obi. Gai et al. developed a desktop meal-assistance robot whose feeding, rotation, and stop commands are issued with a footswitch [5]. Chen et al. proposed sending patients’ intentions to assistive devices through brain-computer interaction [6]. Schröer et al. devised an autonomous meal-assistance robot based on the interaction of vision and EEG [7]. Nevertheless, these emerging robots generally suffer from feeding-position errors, uncomfortable interaction modes, or high prices. To address these limitations, we propose a low-cost, multi-functional feeding robot based on voice interaction (Fig. 1). The newly developed robot can not only select bowls containing different foods through efficient voice commands, but also adapt to users of different heights through a height-adjustable U-shaped table.

Figure 1.

a. Overall structure of the feeding robot. b. Physical product.

2. Structure and method

The feeding system, built around a four-degree-of-freedom robotic arm, not only completes the two tasks of food selection and feeding through voice interaction, but also meets users’ diverse needs with three bowls.

2.1 Mechanical structure

The mechanical structure of the feeding robot consists of five functional parts: a base, gear mechanisms simulating the shoulder, elbow, and wrist joints, and a height-adjustable U-shaped table that can be pushed directly over the user’s nursing bed (Fig. 2).

The base transmission mechanism comprises an output shaft that transmits power from the base to the shoulder, a base motor serving as the power source, a worm-gear reducer that smooths the arm’s motion, an encoder that feeds back base-motion data, a motor frame for mounting the motor, and a fixed frame that attaches the base structure to the base plate.

The shoulder transmission mechanism is assembled from several essential parts: a large arm connecting the shoulder and the base, a shoulder motor serving as the power source, a gearbox that smooths the arm’s motion, an encoder that feeds back shoulder-movement data, a coil spring that balances the weight of the forearm and the large arm, a motor frame for mounting the motor, and a fixed frame that attaches the shoulder structure to the base.

The elbow driving mechanism includes a forearm connecting the wrist and elbow, a motor serving as the elbow’s power source, a reducer that smooths the forearm’s motion, an encoder that records elbow-movement data, a fixed frame for mounting the motor, and a bracket that fixes the elbow structure in place.

The wrist driving mechanism comprises a spoon for feeding, a wrist motor serving as the power source, a gearbox that smooths the spoon’s motion, a bracket for mounting the motor, and another bracket that fixes the wrist structure to the forearm.

Figure 2.

The internal structures of the wrist joint, elbow joint, shoulder joint, and base joint.

2.2 Control system

On this basis, the control system must complete several tasks: communicating with the voice module, receiving control instructions, and controlling the normal operation of the robot arm.

Considering programming complexity and functional division, we designed the control system as a master-slave structure. The master controller analyzes the data transmitted by the voice module (HBR740), processes the data returned by each slave controller, and sends the corresponding motion instructions.
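
As an illustration of this master-slave message flow, the sketch below shows one way the master controller might forward a joint setpoint to a slave node over the CAN bus using the python-can package. The arbitration IDs, the little-endian float payload, and the SocketCAN channel name are assumptions for illustration; the paper does not specify the frame format.

```python
# Hypothetical CAN frame layout for master-to-slave joint setpoints.
# Arbitration IDs and payload format are assumed, not from the paper.
import struct

import can  # pip install python-can

JOINT_IDS = {"base": 0x101, "shoulder": 0x102,
             "elbow": 0x103, "wrist": 0x104}  # assumed IDs


def send_setpoint(bus: can.BusABC, joint: str, angle_rad: float) -> None:
    """Pack a target joint angle as a 4-byte float and send it."""
    msg = can.Message(arbitration_id=JOINT_IDS[joint],
                      data=struct.pack("<f", angle_rad),
                      is_extended_id=False)
    bus.send(msg)


# Usage on a Linux SocketCAN interface (channel name assumed):
# bus = can.interface.Bus(channel="can0", bustype="socketcan")
# send_setpoint(bus, "shoulder", 0.52)
```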

As the coordination unit, the master controller is responsible for the spatial decoupling of the robotic arm and the closed-loop control of its position, and it sends the execution command of the corresponding joint to each slave controller through the CAN bus.
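
To make the spatial decoupling step concrete, the following is a minimal inverse-kinematics sketch for a four-degree-of-freedom arm of this type: the base yaw is decoupled from a planar two-link shoulder/elbow problem, and the wrist angle is chosen to keep the spoon level. The link lengths and the chosen elbow solution are assumptions; the paper does not give the arm’s dimensions.

```python
# Minimal IK sketch for a base-shoulder-elbow-wrist arm.
# L1, L2 are assumed link lengths; real dimensions are not given.
import math

L1, L2 = 0.30, 0.25  # assumed upper-arm and forearm lengths (m)


def inverse_kinematics(x: float, y: float, z: float):
    """Return (base, shoulder, elbow, wrist) angles in radians for a
    spoon-tip target (x, y, z) expressed in the base frame."""
    base = math.atan2(y, x)        # base yaw decouples from the rest
    r = math.hypot(x, y)           # reach within the arm's plane
    d2 = r * r + z * z
    # Elbow angle from the law of cosines (one of two solutions).
    cos_elbow = (d2 - L1 ** 2 - L2 ** 2) / (2 * L1 * L2)
    if abs(cos_elbow) > 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(cos_elbow)
    shoulder = math.atan2(z, r) - math.atan2(L2 * math.sin(elbow),
                                             L1 + L2 * math.cos(elbow))
    wrist = -(shoulder + elbow)    # keep the spoon horizontal
    return base, shoulder, elbow, wrist
```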

The four slave controllers receive movement instructions from the master controller and regulate the motor at the corresponding joint, adjusting its speed according to the speed feedback. With this control method, the feeding robot can successfully complete a specified feeding command.
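
A slave controller’s speed regulation can be sketched as a simple PI loop on the encoder-derived speed, as below; the gains, sample time, and drive interface are placeholders, since the paper does not describe the actual regulator.

```python
# Sketch of one slave node's speed loop; gains and limits are assumed.
class SpeedLoop:
    def __init__(self, kp: float = 0.8, ki: float = 0.2, dt: float = 0.01):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, target: float, measured: float) -> float:
        """One control tick: return a normalized drive command in
        [-1, 1] from the speed error reported by the joint encoder."""
        error = target - measured
        self.integral += error * self.dt
        out = self.kp * error + self.ki * self.integral
        return max(-1.0, min(1.0, out))  # clamp to the drive's limits
```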

2.3 Experimental setting

To improve the performance of the feeding robot, we combined hardware and software measures to suppress part of the noise introduced during data acquisition. This approach ensures the accuracy of the data acquired by the encoders.
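
The paper does not name the software half of this noise-rejection scheme; a sliding median filter over raw encoder counts, as sketched below, is one plausible choice for suppressing isolated glitches without adding much lag.

```python
# One possible software filter for raw encoder readings (assumed choice).
from collections import deque
from statistics import median


class EncoderFilter:
    def __init__(self, window: int = 5):
        self.samples = deque(maxlen=window)  # sliding window of counts

    def update(self, raw_count: int) -> float:
        """Add one raw reading and return the windowed median."""
        self.samples.append(raw_count)
        return float(median(self.samples))
```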

Table 1

Control command corresponding to each voice command

Number | Voice command (Chinese) | Control command
1 | Dì yī gè | Select the first bowl
2 | Dì yī | Select the first bowl
3 | Dì yī hào | Select the first bowl
4 | Yī hào | Select the first bowl
5 | Dì èr gè | Select the second bowl
6 | Dì èr | Select the second bowl
7 | Dì èr hào | Select the second bowl
8 | Èr hào | Select the second bowl
9 | Dì sān gè | Select the third bowl
10 | Dì sān | Select the third bowl
11 | Dì sān hào | Select the third bowl
12 | Sān hào | Select the third bowl
13 | Zàn tíng | Pause
14 | Tíng xià | Pause
15 | Jì xù | Continue
16 | Kāi shǐ | Continue
17 | Wèi shí | Continue
18 | Tíng zhǐ | Pause
19 | Tíng zhǐ wèi shí | Pause
20 | Fù wèi | Reset
21 | Jī qì fù wèi | Reset

Figure 3.

The experimental results. a–b. Experimenters’ mouths are 5 cm from the microphone. c–d. Experimenters’ mouths are 30 cm from the microphone.

In the experiments, we recorded the voices of different experimenters and adopted a scheme in which one control command corresponds to multiple voice phrases (Table 1). To verify the effect of the speech recognition module in different environments, the experimenters issued control commands in a quiet environment (0–30 dB) and a normal environment (30–60 dB), with the speech recognition module placed at distances of 5 cm and 30 cm from the user’s mouth. Each condition was tested 50 times. In this voice recognition experiment, the control commands were in standard Mandarin Chinese and were issued by one male and one female speaker.
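
The one-command/multi-phrase scheme of Table 1 amounts to a many-to-one lookup from recognized utterances to control commands; a minimal sketch, with the pinyin phrases transcribed from Table 1, is shown below.

```python
# Many-to-one mapping from recognized phrases to control commands,
# transcribed from Table 1.
PHRASE_TO_COMMAND = {
    "dì yī gè": "select bowl 1",  "dì yī": "select bowl 1",
    "dì yī hào": "select bowl 1", "yī hào": "select bowl 1",
    "dì èr gè": "select bowl 2",  "dì èr": "select bowl 2",
    "dì èr hào": "select bowl 2", "èr hào": "select bowl 2",
    "dì sān gè": "select bowl 3", "dì sān": "select bowl 3",
    "dì sān hào": "select bowl 3", "sān hào": "select bowl 3",
    "zàn tíng": "pause", "tíng xià": "pause",
    "tíng zhǐ": "pause", "tíng zhǐ wèi shí": "pause",
    "jì xù": "continue", "kāi shǐ": "continue", "wèi shí": "continue",
    "fù wèi": "reset", "jī qì fù wèi": "reset",
}


def normalize(phrase: str):
    """Map a recognized phrase to its control command, or None."""
    return PHRASE_TO_COMMAND.get(phrase.strip().lower())
```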

3. Results

The experimental results (average accuracy) are shown in Fig. 3.

4. Discussion and conclusion

The experimental results demonstrate that the speech recognition accuracy is excellent and that the robot arm can perform the corresponding tasks successfully. In future work, we will further optimize the feeding robot’s speech recognition rate.

Acknowledgments

This work was sponsored by the Shanghai Pujiang Program, no. 16PJC063.

Conflict of interest

None to report.

References

[1] 

Jacobsson C, Axelsson K, Österlind PO, Norberg A. How people with stroke and healthy older people experience the eating process. Journal of Clinical Nursing. 2000; 9(2): 255–64.

[2] 

Rashidi P, Mihailidis A. A survey on ambient-assisted living tools for older adults. IEEE Journal of Biomedical and Health Informatics. 2012; 17(3): 579–90.

[3] 

Chiò A, Gauthier A, Vignola A, Calvo A, Ghiglione P, Cavallo E, et al. Caregiver time use in ALS. Neurology. 2006; 67(5): 902–4.

[4] 

Lopes P, Lavoie R, Faldu R, Aquino N, Barron J, Kante M, et al. Eye-Controlled Robotic Feeding Arm Technology. 2012.

[5] 

Gai F, Zeng J, eds. Design on a Desktop Meal-assistance Robot. 2016 5th International Conference on Sustainable Energy and Environment Engineering (ICSEEE 2016); 2016: Atlantis Press.

[6] 

Chen S-C, Hsu C-H, Kuo H-C, Zaeni IA, eds. The BCI control applied to the interactive autonomous robot with the function of meal assistance. Proceedings of the 3rd International Conference on Intelligent Technologies and Engineering Systems (ICITES2014); 2016: Springer.

[7] 

Schröer S, Killmann I, Frank B, Völker M, Fiederer L, Ball T, et al., eds. An autonomous robotic assistant for drinking. 2015 IEEE International Conference on Robotics and Automation (ICRA); 2015: IEEE.