RoboDoc: Critical ethical issues to consider for the design and development of a robotic doctor experience
Abstract
To make sense of the robotic revolution quietly occurring in the health sector – in particular, the potential of the robot as a doctor – we need mechanisms in place that enable us to reflect on the ethical values, beliefs, and aspirations that have shaped, and continue to shape, our health services. Robotics has the potential to re-invent our health system, but as designers and developers of the robot doctor we need the means to fully understand the needs of both patients and doctors if the resulting experience is to satisfy them. In particular, we need to identify the strong moral and ethical principles embedded in the health care system in order to truly understand the health care professionals who work within it. This paper explores the critical issues behind the robot-doctor interaction and highlights some of the main ethical considerations involved in the design and development of this medical robotic experience.
1. Introduction
To make sense of the robotic revolution quietly occurring in the health sector – in particular, the potential of the robot as a doctor – we need mechanisms in place that enable us to reflect on the ethical values, beliefs, and aspirations that have shaped, and continue to shape, our health services. Robotics has the potential to re-invent our health system, but as designers and developers of the robot doctor we need the means to fully understand the needs of both patients and doctors. In particular, we need to identify the strong moral and ethical principles embedded in the health care system in order to truly understand the health care professionals who work within it. Moreover, the challenges of understanding the ethics posed by medical robotics are many, especially as the patient-doctor experience is now implicated in layers of IT complexity. In many cases, it is only by delving deep into and recreating the patient-doctor interaction – by directly and indirectly rebuilding events or real scenarios – that a true understanding of the ethical values at stake (i.e. their needs, strengths, and effects) will emerge. The core of our understanding of the future robotic doctor lies in the patient-doctor experience; this paper explores the critical issues behind the robot-doctor interaction and highlights some of the main ethical considerations involved in the design and development of this medical robotic experience.
2. Ethics and human robotic interaction
Human–Robot Interaction (HRI) is the interdisciplinary study of interaction dynamics between humans and robots. As Feil-Seifer & Mataric (2011, p. 2) note, ‘the fundamental goal of HRI is to develop the principles and algorithms for robot systems that make them capable of direct, safe and effective interaction with humans’. It is the ‘effective’ and ‘ethical’ doctor-patient interaction that is of interest to the authors of this paper. In terms of technology, there is mounting evidence that embodied, and in particular humanoid, platforms are significantly more effective than passive devices such as TV sets or tablets at inducing behaviour-moderating effects when used as personal assistants (Wada et al., 2002). In fact, over the last few years, research into the development of the digital healthcare assistant or robot personal health care companion (e.g. Mabu and Pillo) has been undertaken to determine the positive effects it might have on the quality of the healthcare service. Robots as companions have a body and the ‘potential’ senses to interact with the environment and the people around them. For the patient, they offer an ‘embodiment’ that has the potential to make the interaction with technology a more natural, engaging and acceptable experience. However, as Dan Stiehl & Breazeal (2006) point out, robots must start to ‘feature technologies which allow them to sense the world around them that consists of both people and objects and adapt quickly’. More work is needed to examine which features elicit affective responses and connections (Ng-Thow-Hing, 2011). Indeed, ‘Human-like faculties such as emotions, empathy and the capacity to perceive and understand non-verbal social cues would give robots much greater ability to interact with humans’ (Buiu & Popescu, 2011, p. 1097).
In addition, Lin et al. (2014) have suggested that law and ethics are needed to govern the risks of robotic error, and there are still few studies of the emotional and psychological effects of robotic healthcare, especially in the long term. The authors of this paper feel that the designers and developers of these robots have unique opportunities to improve the overall appeal and ethical outlook of the robotic health care companion (in particular the robot doctor) beyond its technical functionalities. As Ross & Wensveen (2010) discuss, the design of products and systems requires an aesthetic that goes beyond traditional static form aspects: ‘It requires a new language of form that incorporates the dynamics of behavior’. They argue that the aesthetics of interactive behaviour can be a powerful design driver that helps connect dynamic form with social and ethical aspects (Ross & Wensveen, 2010). Borenstein & Pearson (2014) describe two camps in the debate over whether a robot’s appearance should be more human-like, noting that designers must strike ‘a balance between repelling human beings and manipulating them’, and that designers may have less control over the emergence of certain ‘quirks’ that humans interpret as indicative of the robot possessing traits characteristic of persons. In terms of human–robot design, the authors envision the aesthetic as a way of enhancing the human–robot social and ethical interaction. This paper focuses on understanding the ethical issues emerging from the doctor-patient interaction, with the aim of enhancing the robotic doctor-patient experience.
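To make the sensing-and-adaptation ideas above more concrete, the following is a minimal, hypothetical sketch of an affect-aware interaction loop. The names (AffectEstimate, estimate_affect, choose_response) are our own illustrative assumptions, not part of any system cited above; the point is only to show where ethically significant choices (which cues are sensed, how they steer behaviour) surface in code.

```python
# Illustrative sketch only: a minimal affect-aware interaction loop for a robot
# companion. All names are hypothetical and do not describe any cited system.
from dataclasses import dataclass

@dataclass
class AffectEstimate:
    valence: float     # -1.0 (distressed) .. +1.0 (content)
    arousal: float     #  0.0 (calm)       ..  1.0 (agitated)
    confidence: float  #  how sure the perception layer is

def estimate_affect(face_frame, audio_chunk) -> AffectEstimate:
    """Fuse non-verbal cues (facial expression, tone of voice) into one estimate.
    A real system would call trained perception models; here we return a
    neutral placeholder so the sketch stays self-contained."""
    return AffectEstimate(valence=0.0, arousal=0.2, confidence=0.4)

def choose_response(utterance: str, affect: AffectEstimate) -> str:
    """Adapt the verbal response to the patient's apparent emotional state.
    The ethically relevant point: which cues are sensed and how they steer
    behaviour are explicit, reviewable design decisions."""
    if affect.confidence < 0.5:
        return f"I want to be sure I understand. You said: '{utterance}'?"
    if affect.valence < -0.3 and affect.arousal > 0.6:
        return "I can see this is upsetting. Would you like to pause, or to speak to a person?"
    return "Thank you. Please tell me more about how you have been feeling."

print(choose_response("I feel dizzy in the mornings", estimate_affect(None, None)))
```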
3. Robots and healthcare
3.1. The doctor and patient perspective
The interaction between patient and doctor has deep historical significance, with accompanying ethical norms, professional guidelines and legal regulation having implications for the wider organisation and delivery of healthcare. Understanding this interaction in depth is critical to shape the artificial systems that are built to replace some or all functions of human doctors. In the context of modern evidence-based medicine, the core relationship between patient and doctor has been modelled in a number of forms (Szasz & Hollender, 1956). The prevalent model has shifted over time – patient-centred care prioritises the individual’s values and involves the patient in decision-making (Kaba & Sooriakumaran, 2007).
Robots have been piloted in healthcare for more than a decade (Garmann-Johnsen et al., 2014; Syrdal et al., 2007). The European Foresight Monitoring Network – EFMN (2008) defines healthcare robots as “systems able to perform coordinated mechatronic actions (force or movement exertions) on the basis of processing of information acquired through sensor technology, with the aim to support the functioning of impaired individuals, medical interventions, care and rehabilitation of patients and also to support individuals in prevention programs”; however, such definitions and the accompanying regulatory policies are not applied consistently across research and practice. Several key features of the current model of patient-centred care require re-evaluation in a robotic/AI healthcare model. Confidentiality of medical information is the norm, with rare scenarios where it is ethical and legal to breach confidentiality (McConnell, 1994). While confidentiality is already facing challenges through the management of electronic healthcare records holding large-scale, granular data, this is poised to explode in a robot-doctor era. AI decision-making may require fine-grained monitoring of consultations through voice recording and facial expression tracking, while inpatient care may require constant monitoring of physiological data (Liu et al., 2017; Comstock, 2018).
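One way to see how the confidentiality norm might be carried into a robot-doctor system is to treat consent, and the rare lawful-breach exceptions, as an explicit and auditable access check. The sketch below is a simplified assumption of ours: the data categories, roles and consent structure are hypothetical and do not reflect any specific legal framework.

```python
# Illustrative sketch only: confidentiality encoded as a deny-by-default,
# auditable access check. Data categories, roles and the breach exception are
# simplified assumptions, not a statement of any legal framework.
from enum import Enum, auto
from typing import Optional

class DataCategory(Enum):
    CONSULTATION_AUDIO = auto()
    FACIAL_EXPRESSION_TRACE = auto()
    PHYSIOLOGICAL_STREAM = auto()
    DIAGNOSIS_RECORD = auto()

# Consent captured per patient, per data category (hypothetical structure).
consent = {
    ("patient-001", DataCategory.DIAGNOSIS_RECORD): True,
    ("patient-001", DataCategory.CONSULTATION_AUDIO): False,
}

def may_access(requester_role: str, patient_id: str, category: DataCategory,
               breach_justification: Optional[str] = None) -> bool:
    """Deny by default; allow only with recorded consent, or with an explicit,
    logged justification for the rare lawful-breach scenarios (McConnell, 1994)."""
    if breach_justification is not None:
        print(f"AUDIT: {requester_role} accessed {category.name} of {patient_id}: {breach_justification}")
        return True
    return consent.get((patient_id, category), False)

print(may_access("robot-doctor", "patient-001", DataCategory.CONSULTATION_AUDIO))  # False
print(may_access("robot-doctor", "patient-001", DataCategory.DIAGNOSIS_RECORD))    # True
```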
Current interactions between patients and clinicians depend on building trust through open and honest communication. Interacting with a robotic doctor may be fundamentally different. Several companies have trialled chatbot-style text interfaces to an AI diagnosis system (Ni et al., 2017; Divya et al., 2018; Razzaki et al., 2018; McCartney, 2018), while others envision a richer interface. The capabilities of the robotic system in generating natural speech, a simulated face, or gestures through a humanoid avatar may all greatly affect its ability to form a rapport. Patients may be less accepting of a nearly-perfect simulation that falls into the Uncanny Valley (Mori, 1970) than of a simpler interface. Introducing a robot surgeon would require holistic safety and ethical measures: in a hypothetical scenario of surgical complications, who is responsible? Is it the designer of the robot, the manufacturer, the human surgeon who recommended the use of the robot, the hospital, the insurer, or some other entity? (Bekey, 2014).
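The chatbot-style interfaces and the responsibility question above can be illustrated together: whatever the interface, each recommendation could carry provenance identifying the model version and the overseeing clinician, so the "who is responsible?" question has at least a documented starting point. The sketch below is a toy, rule-based stand-in of our own devising; no real product's API or behaviour is implied.

```python
# Illustrative sketch only: a chatbot-style triage step that records provenance
# (model version, overseeing clinician) with every recommendation.
# All names are hypothetical; no real product's API is implied.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TriageRecommendation:
    advice: str
    urgency: str                   # e.g. "self-care", "GP-appointment", "emergency"
    model_version: str
    overseeing_clinician: str      # human accountable for deployment settings
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

RED_FLAGS = {"chest pain", "shortness of breath", "severe bleeding"}

def triage(symptom_text: str) -> TriageRecommendation:
    """Toy rule-based stand-in for an AI triage model: escalate on red-flag
    phrases, otherwise suggest a routine appointment."""
    urgent = any(flag in symptom_text.lower() for flag in RED_FLAGS)
    return TriageRecommendation(
        advice="Call emergency services now." if urgent
               else "Book a routine appointment with your GP.",
        urgency="emergency" if urgent else "GP-appointment",
        model_version="triage-rules-0.1",
        overseeing_clinician="Dr. A. Example (hypothetical)",
    )

print(triage("I have had chest pain since this morning"))
```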
More widely, robotic delivery of healthcare creates a host of new medicolegal conundrums. Where does responsibility for medicolegal error lie in a complex system of potentially opaque AI decision-making and no clear legal accountability? Can current professional regulatory bodies (e.g. the General Medical Council in the UK) continue their role of protecting the public by enforcing professional standards, or are new legal frameworks required? What should a robotic doctor do when faced with a patient who lies, has irregular immigration status, or may have been involved in a crime?
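One partial technical answer to the accountability question, offered here only as a hedged sketch, is a tamper-evident decision log that chains entries by hash, so that the record of what an opaque system decided, and on what inputs, can later be verified by a regulator or court. The field names and structure below are our own hypothetical assumptions.

```python
# Illustrative sketch only: a tamper-evident decision log, one way to give
# regulators and courts a verifiable trail of an AI system's decisions.
# Field names are hypothetical.
import hashlib, json

def append_entry(log: list, decision: dict) -> None:
    """Chain each entry to the previous one by hash, so after-the-fact edits
    become detectable when the chain is re-verified."""
    prev_hash = log[-1]["entry_hash"] if log else "genesis"
    body = {"decision": decision, "prev_hash": prev_hash}
    entry_hash = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "entry_hash": entry_hash})

def verify(log: list) -> bool:
    """Recompute the chain; any altered entry breaks the hashes that follow it."""
    prev = "genesis"
    for entry in log:
        body = {"decision": entry["decision"], "prev_hash": prev}
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["entry_hash"]:
            return False
        prev = entry["entry_hash"]
    return True

audit_log: list = []
append_entry(audit_log, {"patient": "patient-001", "inputs": "symptom summary", "output": "GP-appointment"})
print(verify(audit_log))  # True
```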
AI and robotics present the prospect of reconfiguring health systems to deliver care to a higher standard, more cheaply and more accessibly across the world. However, this transition will likely require the evolution of a range of ethical norms, professional structures and legal frameworks that surround medical practice today. Anticipating and planning for these changes is necessary to mitigate the risks (patient disapproval, large-scale harm to patients, corporate malfeasance) involved in this transition.
References
1. Bekey, G.A. (2014). Current trends in robotics: Technology and ethics. In P. Lin, K. Abney & G.A. Bekey (Eds.), Robot Ethics: The Ethical and Social Implications of Robotics. Cambridge, MA: MIT Press.
2. Borenstein, J. & Pearson, Y. (2014). Medicine and care. In P. Lin, K. Abney & G.A. Bekey (Eds.), Robot Ethics: The Ethical and Social Implications of Robotics. Cambridge, MA: MIT Press.
3. Buiu, C. & Popescu, N. (2011). Aesthetic emotions in human–robot interaction: Implications on interaction design of artists. Retrieved from: http://www.ijicic.org/ijicic-09-1104.pdf.
4. Comstock, J. (2018). Babylon’s AI passes mockup of UK’s GP exam, goes head-to-head with doctors. Retrieved from: https://www.mobihealthnews.com/content/babylons-ai-passes-mockup-uks-gp-exam-goes-head-head-doctors.
5. Dan Stiehl, W. & Breazeal, C. (2006). A “sensitive skin” for robotic companions featuring temperature, force, and electric field sensors. Retrieved from: http://robotic.media.mit.edu/pdfs/conferences/StiehlBreazeal-IROS-06.pdf.
6. Divya, S., et al. (2018). A self-diagnosis medical chatbot using artificial intelligence. Journal of Web Development and Web Designing, 3(1).
7. European Foresight Monitoring Network – EFMN (2008). Roadmap robotics for healthcare. Foresight Brief No. 157. Retrieved from: www.foresight-platform.eu/wpcontent/uploads/2011/02/EFMN-Brief-No.-157_Robotics-for-Healthcare.pdf.
8. Feil-Seifer, D. & Mataric, M.J. (2011). Human–robot interaction. Retrieved from: robotics.usc.edu/publications/media/uploads/pubs/585.pdf.
9. Garmann-Johnsen, N., Mettler, T. & Sprenger, M. (2014). Service robotics in healthcare: A perspective for information systems researchers? In Proceedings of the 2014 International Conference on Information Systems – ICIS, Auckland (pp. 1–12).
10. Kaba, R. & Sooriakumaran, P. (2007). The evolution of the doctor–patient relationship. International Journal of Surgery, 5(1), 57–65. doi:10.1016/j.ijsu.2006.01.005.
11. Lin, P., Abney, K. & Bekey, G.A. (Eds.) (2014). Introduction to robot ethics. In Robot Ethics: The Ethical and Social Implications of Robotics. Cambridge, MA: MIT Press.
12. Liu, D., Peng, F., Shea, A., Rudovic, O. & Picard, R. (2017). DeepFaceLIFT: Interpretable personalized models for automatic estimation of self-reported pain. Journal of Machine Learning Research, 66, 1–16. Retrieved from: https://arxiv.org/pdf/1708.04670.pdf.
13. McCartney, M. (2018). Margaret McCartney: AI in medicine must be rigorously tested. BMJ, 361, k175.
14. McConnell, T. (1994). Confidentiality and the law. Journal of Medical Ethics, 20(1), 47–49. doi:10.1136/jme.20.1.47.
15. Mori, M. (1970). The uncanny valley. Energy, 7(4), 33–35.
16. Ng-Thow-Hing, V. (2011). Multimodal approach to affective human–robot interaction design with children. Retrieved from: http://www.academia.edu/1058856/Multimodal_Approach_to_Affective_Human-Robot_Interaction_Design_with_Children.
17. Ni, L., et al. (2017). Mandy: Towards a smart primary care chatbot application. In International Symposium on Knowledge and Systems Sciences. Singapore: Springer.
18. Razzaki, S., et al. (2018). A comparative study of artificial intelligence and human doctors for the purpose of triage and diagnosis. Preprint. Retrieved from: https://arxiv.org/abs/1806.10698.
19. Ross, P. & Wensveen, S. (2010). Designing behavior in interaction: Using aesthetic experience as a mechanism for design. International Journal of Design, 4(2), 3–13. Retrieved from: http://www.ijdesign.org/ojs/index.php/IJDesign/article/view/765/297.
20. Syrdal, D.S., Walters, M.L., Otero, N., Koay, K.L. & Dautenhahn, K. (2007). “He knows when you are sleeping” – Privacy and the personal robot companion. In Proceedings of the 2007 AAAI Workshop on Human Implications of Human–Robot Interaction, Washington, DC (pp. 28–33).
21. Szasz, T.S. & Hollender, M.H. (1956). A contribution to the philosophy of medicine: The basic models of the doctor–patient relationship. AMA Archives of Internal Medicine, 97(5), 585–592. doi:10.1001/archinte.1956.00250230079008.
22. Wada, K., Shibata, T., Saito, T. & Tanie, K. (2002). Analysis of factors that bring mental effects to elderly people in robot assisted activity. In Proceedings of the International Conference on Intelligent Robots and Systems.