Research Article

Liability from the use of medical artificial intelligence: a comparative study of English and Taiwanese tort laws

[version 1; peer review: 2 approved with reservations, 1 not approved]
PUBLISHED 17 Dec 2021

This article is included in the Health Services gateway.

This article is included in the Research Synergy Foundation gateway.

Abstract

Background: Modern artificial intelligence applications are appearing in healthcare and medical practices. Artificial intelligence is used both in medical research and on patients via medical devices. The aim of this paper is to examine and compare English and Taiwanese tort laws in relation to medical artificial intelligence.
Methods: The methodologies employed are legal doctrinal analysis and comparative law analysis.
Results: The investigation finds that English tort law treats a wrong diagnosis or wrong advice as negligent misstatement, and mishaps due to devices as a physical tort under the negligence rule. Negligent misstatement may occur in diagnosis or advisory systems, while a negligent act may occur in products used in the treatment of the patient. Product liability under English common law applies the same rule as negligence. In Taiwan, the general principles of tort law in the Civil Code apply to misstatement and negligent action, whereas the Consumer Protection Act provides additional rules on the product liability of traders.
Conclusions: Safety regulations may be a suitable alternative to tort liability as a means to ensure the safety of medical artificial intelligence systems.

Keywords

Tort liability, Medical artificial intelligence, English law, Taiwanese law, Product liability

Introduction

Modern artificial intelligence applications are appearing in healthcare and medical practices (Nordlinger et al., 2020). As a result, the question of exposure to risk and liability from the use of artificial intelligence in medical practice haunts the medical fraternity.

In this paper, English and Taiwanese tort laws will be examined to identify the similarities and differences in their approaches to determining liability from medical artificial intelligence systems. The two laws also serve as a comparison between a common law jurisdiction and a civil law jurisdiction.

A brief history of artificial intelligence

The field of artificial intelligence began with the appearance of the first computers in the 1950s. Alan Turing (1950) proposed an ‘imitation game’, now known as the Turing test, to determine whether a machine can be said to be ‘intelligent’. The term ‘artificial intelligence’ was coined by John McCarthy in 1956 when he organised the Dartmouth Summer Research Project on Artificial Intelligence to gather the various experts in the emerging computational fields at that time (Moor, 2006).

More recently, advancements using machine learning, deep learning and artificial neural network techniques have made artificial intelligence applications practical and feasible. The massive availability of clinical and imagery data coupled with exponential improvement in processing power have fuelled the growth and development of artificial intelligence in the area of healthcare and medical practice.

Medical artificial intelligence

Medical artificial intelligence technology is used both in biomedical research as well as on patients. Muralli and Sivakumaran’s (2018) survey found artificial intelligence applications in healthcare to include managing medical records and data, doing repetitive jobs, treatment design, digital consultation, drug creation, detecting malignant diseases, detecting mental conditions, recognition of facial symptoms, management of diabetes and robot-assisted surgery. Avanzo et al. (2021) identified clinical applications of artificial intelligence such as imaging, therapy and quality assurance. The benefits of adopting artificial intelligence in healthcare have been identified by Meskó and Görög (2020), which include improving in-person and online consultation, health assistance and medication management, AI-driven diagnostics, mining medical records, precision medicine, designing treatment plans, drug creation and triage tools.

The risks of using medical artificial intelligence may stem from defects in product manufacturing or design, or from insufficiency in the product description or instructions for use. Patients are indirect users of medical artificial intelligence: they do not decide on the selection, quality, operation, and maintenance of medical artificial intelligence devices, but are the beneficiaries of such devices through the actions of medical professionals.

Literature review

The discussion on tort liability of medical artificial intelligence goes back to the early days of rule-based medical expert systems (Adams & Gray, 1987; Cannataci, 1989). More recently, this discussion has continued in both legal and medical literature. The legal discussions focus primarily on American law (Allain, 2013; Pesapane et al., 2018; Price et al., 2019; Sullivan & Schweikart, 2019; 吳采薇, 2020), although investigations from the European (Minssen et al., 2020; Pesapane et al., 2018) and Taiwanese perspectives (吳佳琳, 2020; 陳鋕雄, 2019) have also been made.

One common thread of discussion is on who is liable for injury caused by the failure of an artificial intelligence system (Allain, 2013; Junod, 2019). Another common approach is to consider tort liability as a form of product liability of the manufacturer or software developer (Allain, 2013; Frank, 2019; Jabri, 2020; Molnár-Gábor, 2020). It is thought that one way for addressing the risk is through regulation (Kamalnath, 2018; Minssen et al., 2020; Pesapane et al., 2020; Price, 2017), although a no-fault insurance solution has also been proposed as a way to compensate victims (Smith & Fotheringham, 2020).

Artificial intelligence systems can at times be enigmatic. Techniques such as deep learning perform their designated task without it being clear how the result is reached. Such tools are treated as black boxes, which become problematic when they are evaluated by regulators and used to determine liability (Macrae, 2019; Price, 2017, 2018).

Interestingly, Khan (2016) raises the question of a practitioner’s liability for failure to follow the advice of an artificial intelligence system which later turns out to be the correct advice. To investigate this question, Tobia, Nielsen and Stremitzer (2021) conducted an online experiment with respondents as potential jurors and found that practitioners are less likely to be held liable by the respondents if they followed the advice, provided that the advice is not unusual.

A doomsday scenario was raised by Froomkin, Kerr and Pineau (2019), who warned of a future where artificial intelligence systems outperform doctors, which may lead to the danger of over-reliance on medical artificial intelligence. Finally, a cautionary word was raised by Maliha et al. (2021), who warn not to over-emphasise the risk and liability of artificial intelligence to the extent of hindering the development and progress of useful and novel applications of medical artificial intelligence.

Methods

This paper examines tort law liability arising from artificial intelligence applications in medical practice from the perspectives of English and Taiwanese laws, employing a legal doctrinal methodology. This methodology involves identifying the meaning and principles in statutory law and court decisions using a literal and, at times, teleological interpretation of the legal text (Westerman, 2011; Hutchinson & Duncan, 2012). English law is the origin of the common law family of legal systems, whereas Taiwanese law is an example of a civil law system in an Asian country. The cases studied herein are selected from a standard tort law textbook (Goudkamp & Nolan, 2020; Khong, 2021). Goudkamp & Nolan (2020) was chosen as the reference for English law as it has been the leading textbook on English tort law since its first edition in 1937. The legal doctrinal methodology is used to identify how a common law approach differs from a civil law approach to the same question. The first author analysed English law while the second author did the same for Taiwanese law. This research does not employ any statistical data, and no human respondents were involved.

Ethical approval

Research ethics approval for this research has been granted by the Research Ethics Committee of the Multimedia University, Approval Number EA0692021. No human respondents, and therefore no consent therefrom, are involved in this research.

Results

English tort law

English tort law is not statutory in nature and is continuously being developed through case law. Generally, torts are categorised into informational torts and physical torts. In the context of artificial intelligence, liability arising from an informational tort may be due to an insufficient or outdated knowledge base, defective algorithms or coding, or even the inappropriate use of artificial intelligence tools by the users; while liability from a physical tort may be due to faulty equipment or sensors, or that the algorithm is not fit for its task.

Informational tort

In contemporary English tort law, liabilities from wrongful information are classified as negligent misstatement and deceit.

The principal case for negligent misstatement is Hedley Byrne & Co Ltd v Heller & Partners Ltd, where it was held that “if someone possessed of a special skill undertakes, quite irrespective of contract, to apply that skill for the assistance of another person who relies upon such skill, a duty of care will arise. The fact that the service is to be given by means of or by the instrumentality of words can make no difference.” This is refined in Smith v Bush, where the court, relying on the UK Unfair Contract Terms Act 1977, rejected the use of disclaimers as they are not fair to consumers. The tort of negligent misstatement applies to medical artificial intelligence because information and advice given by artificial intelligence systems are bespoke and customised to each patient based on their needs.

Unlike negligent misstatement, deceit requires proof of ‘fraud’ or ‘recklessness as to the truth’ on the part of the defendant: Derry v Peek. It is very unlikely that the tort of deceit applies to medical artificial intelligence, for there will be a lack of an intention to commit fraud.

Physical tort

Physical torts in English law are classified into intentional torts and unintentional torts. Intentional torts usually come in the form of trespass, and in the case of medical artificial intelligence, battery, while unintentional torts fall under the negligence rule.

In Cole v Turner, Holt CJ declared that “the least touching of another in anger is a battery.” Later, in Collins v Wilcock, Goff LJ provided a more contemporary and general definition of battery: “the physical contact so persisted in has in the circumstances gone beyond generally acceptable standards of conduct.” Two types of exceptions to battery are recognised: lawful control and exigencies of everyday life, and consent.

The most common type of unintentional tort in English law is negligence. Lord Atkin in the House of Lords’ opinion in Donoghue v Stevenson stated that “negligence [is] based upon a general public sentiment of moral wrongdoing for which the offender must pay.” To succeed in negligence, apart from showing that the defendant owed a duty of care to the plaintiff, three other elements have to be satisfied: that the defendant breached his duty, the damage is foreseeable, and the plaintiff suffered damage. For breach of duty, Baron Alderson in Blyth v Birmingham Waterworks Co held that “[n]egligence is the omission to do something which a reasonable man, guided upon those considerations which ordinarily regulate the conduct of human affairs, would do, or doing something which a prudent and reasonable man would not do.” Thus, the standard of care for negligence is the reasonable man standard.

Likewise, the standard of care of a professional is that of a reasonable professional. In the case of medical practitioners, Bolam v Friern Hospital Management Committee established that a doctor “is not guilty of negligence if he has acted in accordance with a practice accepted as proper by a responsible body of medical men skilled in that particular art.” For some time, it was thought that the Bolam principle absolves the doctor if he proves that what he did accords with the practice of “a responsible body of medical men.” However, in Bolitho v City and Hackney Health Authority, the House of Lords opened the opportunity for judges’ intervention “if, in a rare case, it can be demonstrated that the professional opinion is not capable of withstanding logical analysis, the judge is entitled to hold that the body of opinion is not reasonable or responsible.”

Additionally, Donoghue v Stevenson serves to illustrate two other principles. First, it is an example of product liability law, where a consumer can sue a manufacturer outside contract law. Secondly, it supports the concept of corporate liability as Stevenson is the name of the manufacturer appearing on the label. The approach of the English courts is to assume that the defendant as a business entity is responsible or vicariously liable for the negligence of its employees.

On the distinction between intentional tort and negligence, the test is whether the injury was inflicted intentionally, not whether the action of the alleged tortfeasor was intentional: Letang v Cooper.

Taiwanese tort law

Taiwan, being a civil law jurisdiction, has statutes as the basis for tort liability. Article 184 of the Civil Code covers the general scenario: “A person who, intentionally or negligently, has wrongfully damaged the rights of another is bound to compensate him for any injury arising therefrom.”

Under the Civil Code, insufficiency of or error in the information provided by a party which causes injury or damage is considered an infringement of a person’s freedom to make correct decisions. Physical injury to a person, whether inflicted intentionally or negligently, is treated as an infringement of the integrity of his health or body. Damages for infringements of rights can be claimed according to Article 184 of the Civil Code. Damages for pain and suffering can be an additional heading of compensation.

Additional protection is afforded through Article 7 of the Consumer Protection Act: “Traders engaging in designing, producing or manufacturing of goods or in the provisions of services, shall ensure that goods or services provided meet and comply with the contemporary technical and professional standards with reasonably expected safety requirements. … Traders shall be jointly and severally liable in [sic] violating the foregoing paragraphs and thereby causing injury or damage to consumers or third parties, provided that if traders can prove that they have not been negligent, the court may reduce damages.”

According to the Taiwan High Court Civil Judgment, 109 San-Yi No. 139, liability arising from the use of medical devices may be in two forms: product liability for medical devices, and tort liability for medical services. To attribute responsibility for injuries caused by medical devices, the court must judge, on a case-by-case basis, whether it is a product liability, tort liability or both. Damages due to a combination of product liability and tort liability will be shared by the manufacturer of the medical device and the medical service provider.

Discussion

Comparative analysis

Imposing liability on tortfeasors for causing harm serves the purposes of compensating the victim and incentivising potential tortfeasors to take care. Despite this, both English common law and the Taiwanese tort law appear to concentrate primarily on the compensatory aspects. Therefore, safety regulations by the medical authorities are additionally needed to ensure the safety of such devices.

Both informational torts and physical torts are covered by English and Taiwanese tort laws. Harm from medical artificial intelligence is likely to be unintentional and thus come under the negligence rule.

One difference between the laws of the two jurisdictions lies in the treatment of product liability. In English law, if a manufacturer can mount a defence showing that what he has done is reasonable, or, in medical treatment cases, is the same as what other practitioners would have done, then he is absolved of all blame and liability. In Taiwanese law, however, such a defence merely permits the court to reduce the damages payable.

Erroneous information or advice provided by medical artificial intelligence devices is treated as an informational tort, whereas mishaps due to machines or robotic devices are categorised as physical tort. Both are likely to be unintentional torts in English and Taiwanese tort laws.

Conclusion

The aim of this paper is to examine English and Taiwanese tort laws in relation to medical artificial intelligence, and to highlight the similarities and differences in their approaches.

It is found that English tort law treats wrong diagnostic information or advice as negligent misstatement and mishaps due to devices as a form of physical tort under the negligence rule. The negligence rule is applied similarly to product liability under English common law.

On the other hand, the general principles of tort law in Taiwan’s Civil Code apply to misstatement and negligent actions. Additionally, its Consumer Protection Act imposes product liability on traders for liability from defective medical artificial intelligence systems.

In both jurisdictions, tort law liabilities appear to focus on providing compensation to victims. Safety regulation may be a suitable alternative to tort liability as a means to ensure the safety of medical artificial intelligence systems.

Data availability

Figshare: Selected English and Taiwanese Judicial Decisions on Tort Liability. https://doi.org/10.6084/m9.figshare.15162339 (Khong, 2021).

This project contains the following underlying data:

  • Dataset.zip (A collection of 10 English and one Taiwanese court cases on tort liability).

Data are available under the terms of the Creative Commons Zero “No rights reserved” data waiver (CC0 1.0 Public domain dedication).

Comments on this article (0)

How to cite this article
Khong DWK and Yeh WJ. Liability from the use of medical artificial intelligence: a comparative study of English and Taiwanese tort laws [version 1; peer review: 2 approved with reservations, 1 not approved]. F1000Research 2021, 10:1294 (https://doi.org/10.12688/f1000research.73367.1)

Open Peer Review

Key to reviewer statuses:

Approved: The paper is scientifically sound in its current form and only minor, if any, improvements are suggested.
Approved with Reservations: A number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper’s academic merit.
Not Approved: Fundamental flaws in the paper seriously undermine the findings and conclusions.
Reviewer Report 08 Jun 2024
Daria Kim, Max Planck Institute for Innovation and Competition, Munich, Germany 
Not Approved
The paper surveys the application of English and Taiwanese tort laws in the context of medical artificial intelligence applications. While this review might be informative for readers not well-versed in the subject, I believe the addressed legal issues require a …
How to cite this report
Kim D. Reviewer Report For: Liability from the use of medical artificial intelligence: a comparative study of English and Taiwanese tort laws [version 1; peer review: 2 approved with reservations, 1 not approved]. F1000Research 2021, 10:1294 (https://doi.org/10.5256/f1000research.77012.r275629)
Reviewer Report 30 May 2024
Kit Fotheringham, University of Bristol, Bristol, England, UK 
Approved with Reservations
The authors propose that harms occurring from defective AI in medicine could be covered by informational torts such as negligent misstatement. Generally, these torts are restricted to cases of "economic loss", and the cases cited support this. However, the scenario …
How to cite this report
Fotheringham K. Reviewer Report For: Liability from the use of medical artificial intelligence: a comparative study of English and Taiwanese tort laws [version 1; peer review: 2 approved with reservations, 1 not approved]. F1000Research 2021, 10:1294 (https://doi.org/10.5256/f1000research.77012.r254890)
Reviewer Report 17 Jan 2022
Søren Holm, Centre for Social Ethics and Policy, Department of Law, School of Social Sciences, University of Manchester, Manchester, UK;  Center for Medical Ethics, HELSAM, Faculty of Medicine, Oslo University, Oslo, Norway 
Approved with Reservations
The paper by Khong and Yeh compares the legal principles that govern the ascription and allocation of liability to AI systems in clinical medicine in England and in Taiwan. The comparison is interesting, but there seems to be much more …
How to cite this report
Holm S. Reviewer Report For: Liability from the use of medical artificial intelligence: a comparative study of English and Taiwanese tort laws [version 1; peer review: 2 approved with reservations, 1 not approved]. F1000Research 2021, 10:1294 (https://doi.org/10.5256/f1000research.77012.r118385)
