
Written by: Arina Rashid and Jill Phua*

On 6th February 2020, the SMU Centre for AI & Data Governance and SGInnovate hosted a panel discussion on the ‘Challenges of employing AI in the healthcare sector’. The 90-minute panel was chaired by Ms Sunita Kannan, an expert in data, AI advisory and responsible AI. The other panelists were Professor Dov Greenbaum (Director of the Zvi Meitar Institute for Legal Implications of Emerging Technologies, Interdisciplinary Center, Herzliya, Israel), Dr Tan Jit Seng (Founder and Director of Lotus Eldercare and Vice President of the Asia Pacific Assistive Robotics Association) and Mr Julien Willeme (Legal Director for Medtronic, Asia-Pacific).

I. Introduction

The panel started off by observing that, like many developed Western countries, Singapore is facing a demographic time bomb: a rapidly ageing population that is also expected to live longer. In this regard, AI has been gaining prominence as a way to augment the traditional physician-hospital caregiver role.

II. Data collection

The panel then discussed the foundation of AI: data collection. Through the collection of patient data, companies are able not just to develop useful medical devices, but also to glean clinical insights, which are in turn sold to patients and clinicians.

Singapore has the necessary quantity of data required to support AI development. However, the data may not be legally accessible, and current data and privacy regulations in Singapore might pose a roadblock for companies wishing to develop AI technology.

Furthermore, the data may not always be of usable quality. For AI to be applied effectively in a healthcare setting, clinical data has to be combined with wellness and lifestyle data. Yet, in practice, it has been extremely difficult to obtain both to a high degree of quality. Indeed, few research centres in the world currently possess both.

The next question was: who owns the data? What ownership does an individual retain over his or her data once it is relayed to a healthcare provider? Can the physician sell patients’ data to a third party for profit? Legally, various stakeholders have good arguments for asserting ownership over medical data.

In this regard, it is arguable that traditional labels such as “ownership” are obsolete. Rather, what is needed is a new set of rules and regulations tailored for data collection. While the European Union’s General Data Protection Regulation has been right to propose a new generation of rights relating to data in general, such regulations might not be commercially viable in the specific area of data collection. That being said, the regulation has shown potential by developing a framework of rights and obligations that companies must adhere to with regard to “data sharing”. Overall, then, many legal issues remain to be resolved in the area of data management.

III. Application of AI in the healthcare setting

The panel then discussed AI developments in healthcare. Clinical decision support tools, which capture data to provide physicians with enhanced and detailed information, increasingly use AI. An example is the PillCam, which is being used to replace traditional colonoscopy. Instead of undergoing the uncomfortable procedure, one can simply swallow a pill containing a camera, which then takes images of one’s colon for analysis. Adding an AI algorithm has improved the analysis further: the AI detects abnormalities more accurately, and five times more quickly, than a physician.
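The panel did not go into the PillCam algorithm itself, but the general shape of such a triage pipeline is easy to illustrate. The following is a minimal sketch in Python, assuming a hypothetical `abnormality_score` classifier as a stand-in for the trained model: each frame from the capsule camera is scored, and only frames above a threshold are flagged for the physician to review.

```python
import numpy as np

# Hypothetical stand-in for the trained frame classifier. In a real
# system this would be a CNN returning the probability that a frame
# shows an abnormality; here it returns a deterministic pseudo-random
# score so the sketch runs end to end.
def abnormality_score(frame: np.ndarray) -> float:
    rng = np.random.default_rng(int(frame.sum()) % (2**32))
    return float(rng.random())

def flag_frames(frames, threshold=0.9):
    """Return indices of frames likely to show an abnormality.

    The AI triages the footage; the physician still reviews the
    flagged frames and makes the actual diagnosis.
    """
    return [i for i, frame in enumerate(frames)
            if abnormality_score(frame) >= threshold]

# Toy input: 100 grayscale 64x64 "frames" from the capsule camera.
frames = [np.random.default_rng(i).integers(0, 256, size=(64, 64))
          for i in range(100)]
print(flag_frames(frames))
```

This triage design reflects the panel’s point: the model narrows thousands of frames down to a handful for human review, which is where the speed advantage over manual inspection comes from.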

AI also has applications in geriatric medicine. AI imaging can now be used for diabetic wound management: the AI analyses a photo taken with one’s phone and detects whether a wound is infected or healing, saving manpower and healthcare costs. There are also early diabetic screening services such as retinal imaging, where the AI can accurately identify potential diabetics from an analysis of a photo of the retina.

AI cannot (at present) replace doctors; rather, it serves as an effective tool in helping doctors make efficient and informed decisions.

IV. Reliability of AI and improving the quality of care

The panel noted that AI is only as good as the data collected, and there must be further measures to incentivize people to share their data. This, of course, has to be balanced with factors such as an individual’s right to privacy and the right to delete their data. Furthermore, in Singapore, the government has been one of the biggest holders of medical records. Thus, before allowing other agents to retrieve medical records, legislation is required to regulate access and protect against abuse of information.

Another issue is that the medical industry is still adapting to the digitization of medicine. Many practitioners still use pen and paper, and medical schools may need to teach doctors how to input data. It has also been difficult to translate written records into electronic ones, owing to doctors’ use of short forms, not to mention illegible handwriting. Hence, to improve the quality of their prescriptions and treatment, it is essential for doctors to first have a clear understanding of AI devices and how they can help.

Furthermore, AI is not immune to bias. Because the outputs of AI in healthcare are based on a specific subset of data, it is unclear whether conclusions drawn from one dataset can be transferred to another collected from a different group of people. In addition, there have been healthcare studies based on data collected from devices such as Fitbits and Apple Watches. However, as the data collection relies on the devices’ sensors, it is almost impossible to cross-check the accuracy of the data collected, and the devices themselves are susceptible to damage and faults. Given these challenges, it is unclear whether we can standardize and trust such data.
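The panel’s concern about unverifiable wearable data can be made concrete. The sketch below, with thresholds chosen purely for illustration, shows the kind of plausibility filtering a study might apply to heart-rate samples from a consumer device. Note that it can only reject obviously faulty readings; it cannot verify that a plausible reading is correct, which is precisely the standardization problem the panel identified.

```python
def plausible(bpm: float, lo: float = 30, hi: float = 220) -> bool:
    """Crude physiological plausibility gate for one heart-rate sample."""
    return lo <= bpm <= hi

def clean_series(samples, max_jump: float = 40):
    """Drop implausible values and sudden sensor spikes.

    The thresholds are illustrative assumptions, not clinical
    standards. A faulty sensor reporting a plausible but wrong value
    passes straight through; hence the panel's concern.
    """
    cleaned, prev = [], None
    for bpm in samples:
        if not plausible(bpm):
            continue  # physiologically impossible reading
        if prev is not None and abs(bpm - prev) > max_jump:
            continue  # likely a sensor glitch, not a real change
        cleaned.append(bpm)
        prev = bpm
    return cleaned

print(clean_series([72, 75, 300, 74, 180, 76]))  # [72, 75, 74, 76]
```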

V. Legal Liability for AI

The panel then debated who should be legally liable when AI goes wrong. It considered two scenarios: “human in the loop” and “human out of the loop”.

A. Human in the Loop

How far do we keep the human within the AI equation? For instance, with regard to self-driving cars, when would people be comfortable saying that there need not be a steering wheel in the car, or a human ready to take over when something goes wrong? The irony is that people are often preoccupied with the cases in which AI has resulted in the loss of human lives, while neglecting the fact that human drivers kill orders of magnitude more people through poor driving.

On this point of keeping the human in the loop, is it always the case that the human (here, the doctor) needs to appreciate how the AI came to a decision? At what point do we have to use AI to maintain a standard of care? When do we allow AI to make the decision, and when do we fault the human for allowing AI to make it? The panel concluded that, as with many questions at the intersection of law and technology, these are issues with no ready answers, which will have to be litigated in court.

The panel also observed that, given the limitations on data collection and standards to date, most AI products today would not replace the doctor’s ultimate decision-making (i.e. the human would remain in the loop). While the AI diagnosis is an important factor in the overall patient management plan, it is not the main factor determining that plan’s outcome. For example, AI is currently only able to provide a single-point answer, such as telling a physician that the patient has a tumor; the question of what to do with the tumor is still left up to the doctor. AI is also still unable to collect intangible data such as a person’s will to live, which is particularly pertinent in healthcare for the aged.

B. Human out of the Loop

However, some AI technology consciously keeps the human out of the loop. For instance, every pacemaker today has a built-in algorithm that reads one’s heart rhythm: if it detects an undesirable pattern, the device automatically delivers a shock, with an impact akin to being kicked in the chest by a horse. In the past, when the algorithm malfunctioned, some patients were shocked 60 times in the space of 10 minutes. The newer generation of pacemakers will be AI-enabled with machine-learning capabilities, meaning that the algorithm becomes smarter as the pacemaker operates, a capability beyond what any human could provide.
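The panel described the device only at a high level, but the closed-loop, human-out-of-the-loop logic can be pictured with a toy rule: deliver therapy when the measured heart rate stays above a tachycardia threshold for several consecutive beats. The sketch below is an illustration under assumed numbers, not actual device firmware; real devices use far more elaborate discrimination logic.

```python
def rate_from_rr(rr_ms: float) -> float:
    """Instantaneous heart rate in bpm from one R-R interval in ms."""
    return 60000.0 / rr_ms

def should_shock(rr_intervals_ms, threshold_bpm=180, consecutive=8):
    """Toy therapy rule: fire when the rate exceeds the tachycardia
    threshold for N beats in a row. All numbers are illustrative
    assumptions, not clinical parameters."""
    run = 0
    for rr in rr_intervals_ms:
        if rate_from_rr(rr) >= threshold_bpm:
            run += 1
            if run >= consecutive:
                return True  # the device decides and acts on its own
        else:
            run = 0  # rhythm recovered; reset the counter
    return False

print(should_shock([250] * 10))  # 240 bpm sustained -> True
print(should_shock([800] * 10))  # 75 bpm -> False
```

A malfunction of the kind the panel mentioned is easy to picture in this frame: a miscalibrated threshold or noisy sensing keeps the rule tripping, and the patient is shocked repeatedly with no human positioned to intervene.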

But keeping the human out of the loop raises other questions of legal liability. The US Food & Drug Administration has suggested imposing legal liability on such AI as if it were an actual person. Yet this too raises questions: would the AI then be fully accountable for everything, absolving humans of all related liability? And even if the AI were treated as a legal entity, it could not be put in jail or sentenced to death.

Another question raised was: if the ultimate goal is to improve patient outcomes, would it matter if the AI is no longer explainable? The lawyer’s response would be that one needs to understand how something works before one can pinpoint how it went wrong. Doctors also prefer to take a conservative approach to experimentation; naturally so, as human lives are at stake. While being unable to appreciate how things work does not necessarily mean we are unable to use them, if we do not know how the AI works, we will not be able to learn from it and pass down its knowledge.

VI. Final Thoughts

The seminar provided interesting insights into how AI can help improve the quality of care for Singapore’s ageing population. The absence of a clear trajectory for AI’s development reflects the many variables present in our society, such as government regulation and the receptiveness of the elderly towards AI devices.

While AI will play a bigger role in the medical field in time to come, it is uncertain whether Singapore has the appetite to allow it to develop to the point where the human is out of the loop. But it is clear that lawyers will have to deal head-on with the difficult questions of who owns data and who should be liable when AI goes wrong.



*      Authors: Arina Rashid, (Year 4 LL.B. Undergraduate) and Jill Phua (Year 2 LL.B. Undergraduate), Singapore Management University, School of Law. Edited by Rennie Whang Yixuan (Year 3 J.D. Student).
