We believe AI technologies have a significant role to play in the healthcare domain. Before they can be implemented, however, we must first recognise and address at least two formal deficits in state-of-the-art AI systems: the causality deficit and the care deficit. When making causal inferences and reasoning about causal relationships (on the diagnostic front) and when assisting, monitoring, and providing companionship for patients (on the care-giving front), human-level performance far outstrips AI-level performance (Chen, 2019). An understanding of the current state of global healthcare, a recognition of the need to address both the causality deficit and the care deficit in the design of AI systems, and an appreciation of the value of human-oriented AI will allow us to rise to the challenge of putting the ‘care’ back into ‘healthcare’.
The shallow and deficient state of global healthcare
To understand how best to implement AI technologies in the healthcare domain, we must first come to terms with the state of global healthcare. As a result of declining fertility rates and increasing longevity, the number of people in the world aged 60 years or over is projected to increase from 900 million (12% of the total global population) in 2015 to 2 billion (22% of the total global population) in 2050 (United Nations, 2015). In conjunction with the increasing prevalence of non-communicable diseases such as heart disease, cancer, and diabetes, climate change and other environmental factors are likely to result in the emergence of new disease threats and conditions. With these demographic and epidemiological trends in place, we have good reason to expect an increase in demand for healthcare workers in the coming years. However, it is anticipated that there will be a global shortfall of 18 million healthcare workers in 2030, especially in low- and lower-middle-income countries (World Health Organization, 2016).
These demographic and epidemiological trends, along with the anticipated shortfall of healthcare resources, have already begun to exert pressure on the healthcare domain. Symptoms of this pressure include: physician burnout (Epstein & Privitera, 2016); the growing sense among patients that doctors are rushed, busy, and hurried (Singletary, Patel & Heslin, 2017); the recommendation of unnecessary and overused medical tests and procedures, such as imaging studies for lower back pain and stenting for patients who are unlikely to derive any benefit (Brownlee et al., 2017); and the over-prescription of drugs by doctors who are too busy to listen to their patients, including the related opioid epidemic in the U.S. (Topol, 2019). If our primary concern is with the quality of care received by patients and their health outcomes, then the suboptimal outcomes, sheer waste, and unnecessary harm to which the practice of contemporary medicine leads ought to give us pause. As Francis Peabody (1927) well knew, the secret of caring for the patient is in caring for the patient. This humanistic ideal in medical practice, equally championed by Hippocrates and Sir William Osler, is what we risk losing if we fail to cope with the growing pressure on the healthcare domain.
Using human-oriented AI to put the ‘care’ back into ‘healthcare’
Enter AI. AI systems can handle far more complex datasets than human beings, do not experience fatigue or distraction in the manner that human beings might, possess superior computational bandwidth, and have the potential to add value to the healthcare domain. The burgeoning pattern recognition abilities of machine learning-based AI systems, in particular, have led some to suggest that they could soon replace human radiologists and pathologists in reading and reviewing X-rays, CT scans, and tissue slides (Chockley & Emanuel, 2016).
We believe that underlying the humanistic ideal in medical practice is the need for human-to-human bonding and support, which entails that we must think in terms of AI-human interfaces. It is not that radiologists and pathologists will be replaced by AI systems in medical imaging but rather that they will up-skill, develop more value-added functions, and enhance the quality of healthcare. Patients will benefit from more interactions with radiologists and pathologists, who will become more integrated within the clinical care teams (Recht & Bryan, 2017). Appropriately implemented, AI systems will be able to relieve at least some of the growing pressure on the healthcare domain, free up much-needed time, augment the abilities of both human healthcare professionals and human patients, and improve the quality of care received by the patients and their health outcomes. Symptom checker programs augment the self-triage abilities of patients and the diagnostic abilities of healthcare professionals, telemedicine allows for the overcoming of physical barriers and the provision of clinical healthcare from a distance, and exoskeleton suits allow both human patients and caregivers to exceed their natural physical limitations. Inappropriately implemented, however, AI systems might reinforce the growing pressure on the healthcare domain: think of the increase in burnout that physicians experience from the data entry demands of electronic health record and computerised physician order entry systems (Shanafelt et al., 2016).
AI systems will be appropriately implemented only if they are human-oriented and their designers take into consideration the diverse interests and concerns of the various stakeholders in the healthcare domain, including but not limited to the following: patients, physicians, nurses, pharmacists, and researchers. In all instances, we should proceed with an awareness of both the state-of-the-art capabilities and the formal deficits of AI systems, a full recognition of the humanistic ideal in medical practice and the growing pressure on the healthcare domain, and a hope that appropriately implemented AI systems can help us to put the ‘care’ back into ‘healthcare’.
Dr Melvin Chen & Assoc. Prof. Chew Lock Yue – Philosophy, NTU & Physics, NTU