Keywords
Tort liability, Medical artificial intelligence, English law, Taiwanese law, Product liability
Modern artificial intelligence applications are appearing in healthcare and medical practice (Nordlinger et al., 2020). As a result, the question of exposure to risk and liability from the use of artificial intelligence in medical practice haunts the medical fraternity.
In this paper, English and Taiwanese tort laws will be examined to identify the similarities and differences in their approaches, in order to determine the liability from medical artificial intelligence systems. In addition, they serve as a means of comparison between a common law jurisdiction and a civil law jurisdiction.
The field of artificial intelligence began with the appearance of the first computers in the 1950s. Alan Turing (1950) proposed an ‘imitation game’, now known as the Turing test, to determine whether a machine can be said to be ‘intelligent’. The term ‘artificial intelligence’ was coined by John McCarthy in 1956 when he organised the Dartmouth Summer Research Project on Artificial Intelligence to gather the various experts in the emerging computational fields at that time (Moor, 2006).
More recently, advancements using machine learning, deep learning and artificial neural network techniques have made artificial intelligence applications practical and feasible. The massive availability of clinical and imagery data coupled with exponential improvement in processing power have fuelled the growth and development of artificial intelligence in the area of healthcare and medical practice.
Medical artificial intelligence technology is used both in biomedical research as well as on patients. Muralli and Sivakumaran’s (2018) survey found artificial intelligence applications in healthcare to include managing medical records and data, doing repetitive jobs, treatment design, digital consultation, drug creation, detecting malignant diseases, detecting mental conditions, recognition of facial symptoms, management of diabetes and robot-assisted surgery. Avanzo et al. (2021) identified clinical applications of artificial intelligence such as imaging, therapy and quality assurance. The benefits of adopting artificial intelligence in healthcare have been identified by Meskó and Görög (2020), which include improving in-person and online consultation, health assistance and medication management, AI-driven diagnostics, mining medical records, precision medicine, designing treatment plans, drug creation and triage tools.
The risks of using medical artificial intelligence may arise from defects in product manufacturing or design, or from insufficient product descriptions or instructions for use. Patients are indirect users of medical artificial intelligence: they do not decide on the selection, quality, operation, and maintenance of medical artificial intelligence devices, but are the beneficiaries of such devices through the actions of medical professionals.
The discussion on tort liability of medical artificial intelligence goes back to the early days of rule-based medical expert systems (Adams & Gray, 1987; Cannataci, 1989). More recently, this discussion has continued in both legal and medical literature. The legal discussions focus primarily on American law (Allain, 2013; Pesapane et al., 2018; Price et al., 2019; Sullivan & Schweikart, 2019; 吳采薇, 2020), although investigations from the European (Minssen et al., 2020; Pesapane et al., 2018) and Taiwanese perspectives (吳佳琳, 2020; 陳鋕雄, 2019) have also been made.
One common thread of discussion is who is liable for injury caused by the failure of an artificial intelligence system (Allain, 2013; Junod, 2019). Another common approach is to consider tort liability as a form of product liability of the manufacturer or software developer (Allain, 2013; Frank, 2019; Jabri, 2020; Molnár-Gábor, 2020). It is thought that one way of addressing the risk is through regulation (Kamalnath, 2018; Minssen et al., 2020; Pesapane et al., 2020; Price, 2017), although a no-fault insurance solution has also been proposed as a way to compensate victims (Smith & Fotheringham, 2020).
Artificial intelligence systems can at times be enigmatic. Techniques such as deep learning perform their designated task without it being clear as to how it is done. Such tools are treated as black boxes which become problematic when they are being evaluated by regulators and used to determine liability (Macrae, 2019; Price, 2017, 2018).
Interestingly, Khan (2016) raises the question of a practitioner’s liability for failure to follow the advice of an artificial intelligence system which later turns out to be the correct advice. To investigate this question, Tobia, Nielsen and Stremitzer (2021) conducted an online experiment with respondents as potential jurors and found that practitioners are less likely to be held liable by the respondents if they followed the advice, provided that the advice is not unusual.
A doomsday scenario was raised by Froomkin, Kerr and Pineau (2019) who warned of a future where artificial intelligence systems outperform doctors which may lead to the danger of over-reliance on medical artificial intelligence. Finally, a cautionary word was raised by Maliha et al. (2021), who warn not to over-emphasise the risk and liability of artificial intelligence to the extent of hindering the development and progress of useful and novel applications of medical artificial intelligence.
Employing a legal doctrinal methodology, which involves identifying the meaning and principles in statutory law and court decisions by applying a literal and, sometimes, a teleological method of interpretation to the legal text (Westerman, 2011; Hutchinson & Duncan, 2012), this paper examines tort law liability arising from artificial intelligence applications in medical practice from the perspectives of English and Taiwanese laws. English law is the origin of the common law family of legal systems, whereas Taiwanese law is an example of a civil law system in an Asian country. The cases studied herein are selected from a standard tort law textbook (Goudkamp & Nolan, 2020; Khong, 2021). Goudkamp & Nolan (2020) was chosen as the reference for English law as it has been the leading textbook on English tort law since its first edition in 1937. The legal doctrinal methodology is used to identify how a common law approach differs from a civil law approach to the same question. In this research, the first author analysed English law while the second author did the same for Taiwanese law. This research does not employ any statistical data and no human respondents were involved.
English tort law is not statutory in nature and is continuously being developed through case law. Generally, torts are categorised into informational torts and physical torts. In the context of artificial intelligence, liability arising from an informational tort may be due to an insufficient or outdated knowledge base, defective algorithms or coding, or even the inappropriate use of artificial intelligence tools by the users; while liability from a physical tort may be due to faulty equipment or sensors, or that the algorithm is not fit for its task.
Informational tort
In contemporary English tort law, liabilities from wrongful information are classified as negligent misstatement and deceit.
The principal case for negligent misstatement is Hedley Byrne & Co Ltd v Heller & Partners Ltd, where it was held that “if someone possessed of a special skill undertakes, quite irrespective of contract, to apply that skill for the assistance of another person who relies upon such skill, a duty of care will arise. The fact that the service is to be given by means of or by the instrumentality of words can make no difference.” This is refined in Smith v Bush, where the court, relying on the UK Unfair Contract Terms Act 1977, rejected the use of disclaimers as they are not fair to consumers. The tort of negligent misstatement applies to medical artificial intelligence because information and advice given by artificial intelligence systems are bespoke and customised to each patient based on their needs.
Unlike negligent misstatement, deceit requires proof of ‘fraud’ or ‘recklessness as to the truth’ on the part of the defendant: Derry v Peek. It is very unlikely that the tort of deceit applies to medical artificial intelligence, for there will be a lack of an intention to commit fraud.
Physical tort
Physical torts in English law are classified into intentional torts and unintentional torts. Intentional torts usually come in the form of trespass, and in the case of medical artificial intelligence, battery, while unintentional torts fall under the negligence rule.
In Cole v Turner, Holt CJ declared that “the least touching of another in anger is a battery.” Later, in Collins v Wilcock, Goff LJ provided a more contemporary and general definition of battery: “the physical contact so persisted in has in the circumstances gone beyond generally acceptable standards of conduct.” Two types of exceptions to battery are recognised: lawful control and exigencies of everyday life, and consent.
The most common type of unintentional tort in English law is negligence. Lord Atkin in the House of Lords’ opinion in Donoghue v Stevenson stated that “negligence [is] based upon a general public sentiment of moral wrongdoing for which the offender must pay.” To succeed in negligence, apart from showing that the defendant owed a duty of care to the plaintiff, three other elements have to be satisfied: that the defendant breached his duty, the damage is foreseeable, and the plaintiff suffered damage. For breach of duty, Baron Alderson in Blyth v Birmingham Waterworks Co held that “[n]egligence is the omission to do something which a reasonable man, guided upon those considerations which ordinarily regulate the conduct of human affairs, would do, or doing something which a prudent and reasonable man would not do.” Thus, the standard of care for negligence is the reasonable man standard.
Likewise, the standard of care of a professional is that of a reasonable professional. In the case of medical practitioners, Bolam v Friern Hospital Management Committee established that a doctor “is not guilty of negligence if he has acted in accordance with a practice accepted as proper by a responsible body of medical men skilled in that particular art.” For some time, it was thought that the Bolam principle absolves the doctor if he proves that what he did accords with the practice of “a responsible body of medical men.” However, in Bolitho v City and Hackney Health Authority, the House of Lords opened the opportunity for judges’ intervention “if, in a rare case, it can be demonstrated that the professional opinion is not capable of withstanding logical analysis, the judge is entitled to hold that the body of opinion is not reasonable or responsible.”
Additionally, Donoghue v Stevenson serves to illustrate two other principles. First, it is an example of product liability law, where a consumer can sue a manufacturer outside contract law. Secondly, it supports the concept of corporate liability as Stevenson is the name of the manufacturer appearing on the label. The approach of the English courts is to assume that the defendant as a business entity is responsible or vicariously liable for the negligence of its employees.
On the distinction between intentional tort and negligence, the test is whether the injury was inflicted intentionally, not whether the action of the alleged tortfeasor was intentional: Letang v Cooper.
Taiwan, being a civil law jurisdiction, has statutes as the basis for tort liability. Article 184 of the Civil Code covers the general scenario: “A person who, intentionally or negligently, has wrongfully damaged the rights of another is bound to compensate him for any injury arising therefrom.”
Under the Civil Code, insufficiency of or error in the information provided by a party which causes injury or damage is considered an infringement of a person’s freedom to make correct decisions. Physical injury to a person, whether inflicted intentionally or negligently, is treated as an infringement of the integrity of his health or body. Damages for infringements of rights can be claimed according to Article 184 of the Civil Code. Damages for pain and suffering can be an additional heading of compensation.
Additional protection is afforded through Article 7 of the Consumer Protection Act: “Traders engaging in designing, producing or manufacturing of goods or in the provisions of services, shall ensure that goods or services provided meet and comply with the contemporary technical and professional standards with reasonably expected safety requirements. … Traders shall be jointly and severally liable in [sic] violating the foregoing paragraphs and thereby causing injury or damage to consumers or third parties, provided that if traders can prove that they have not been negligent, the court may reduce damages.”
According to the Taiwan High Court Civil Judgment, 109 San-Yi No. 139, liability arising from the use of medical devices may be in two forms: product liability for medical devices, and tort liability for medical services. To attribute responsibility for injuries caused by medical devices, the court must judge, on a case-by-case basis, whether it is a product liability, tort liability or both. Damages due to a combination of product liability and tort liability will be shared by the manufacturer of the medical device and the medical service provider.
Imposing liability on tortfeasors for causing harm serves the purposes of compensating the victim and incentivising potential tortfeasors to take care. Despite this, both English common law and the Taiwanese tort law appear to concentrate primarily on the compensatory aspects. Therefore, safety regulations by the medical authorities are additionally needed to ensure the safety of such devices.
Both informational torts and physical torts are covered by English and Taiwanese tort laws. Harm from medical artificial intelligence is likely to be unintentional and thus come under the negligence rule.
One difference between the laws of the two jurisdictions lies in the treatment of product liability. In English law, if a manufacturer can mount a defence showing that what he has done is reasonable, or in medical treatment cases is the same as what other practitioners would have done, then he is absolved of all blame and liability. In Taiwanese law, however, this kind of defence merely allows the court to reduce the damages payable.
Erroneous information or advice provided by medical artificial intelligence devices is treated as an informational tort, whereas mishaps due to machines or robotic devices are categorised as physical tort. Both are likely to be unintentional torts in English and Taiwanese tort laws.
The aim of this paper is to examine English and Taiwanese tort laws in relation to medical artificial intelligence, and to highlight the similarities and differences in their approaches.
It is found that English tort law treats wrong diagnostic information or advice as negligent misstatement and mishaps due to devices as a form of physical tort under the negligence rule. The negligence rule is applied similarly to product liability under English common law.
On the other hand, the general principles of tort law in Taiwan’s Civil Code apply to misstatement and negligent actions. Additionally, its Consumer Protection Act imposes product liability on traders for liability from defective medical artificial intelligence systems.
In both jurisdictions, tort law liabilities appear to focus on providing compensation to victims. Safety regulation may be a suitable alternative to tort liability as a means to ensure the safety of medical artificial intelligence systems.
Figshare: Selected English and Taiwanese Judicial Decisions on Tort Liability. https://doi.org/10.6084/m9.figshare.15162339 (Khong, 2021).
This project contains the following underlying data:
Data are available under the terms of the Creative Commons Zero “No rights reserved” data waiver (CC0 1.0 Public domain dedication).
An early version of this paper was presented at the 1st International Conference on Law and Digitalisation 2021 on 21 June 2021. Kind comments and feedback from participants are acknowledged.