Correspondence

Unintended consequences of machine learning in medicine?

[version 1; peer review: 2 approved]
PUBLISHED 19 Sep 2017

This article is included in the Artificial Intelligence and Machine Learning gateway.

This article is included in the Machine learning: life sciences collection.

Abstract

Machine learning (ML) has the potential to significantly aid medical practice. However, a recent article highlighted some negative consequences that may arise from using ML decision support in medicine. We argue here that, whilst the concerns raised by the authors may be appropriate, they are not specific to ML, and the article may therefore create an adverse perception of this technique in particular. Whilst ML, like any methodology, is not without its limitations, a balanced view is needed so as not to hamper its use in potentially enabling better patient care.

Keywords

machine learning, healthcare, medicine, artificial intelligence

There is significant interest in the use of machine learning (ML) in medicine. ML techniques can ‘learn’ from the vast amount of healthcare data currently available in order to assist clinical decision making. However, a recent article [1] highlighted a number of consequences that may follow from increased ML use in healthcare, including physician deskilling and the claim that the approach is a ‘black box’, unable to use contextual information during analysis.

Whilst we agree that Cabitza et al.'s concerns are justified [1], we believe that a more balanced discussion could have been provided with regard to ML-based decision support systems (ML-DSS). As it stands, the impression is given that ML itself is flawed, rather than the way in which it is applied. The concerns raised are generally applicable to many analytical approaches, and reflect poor study design and/or a lack of analytical rigour rather than the particular technique being used.

The authors cite two examples to claim that ML-DSS could potentially reduce physician diagnostic accuracy. The mammogram example [2] shows a reduction in sensitivity for 6 of the most discriminating of the 50 radiologists. However, the mammogram ML-DSS referred to is old [2], and it is not clear how the underlying model was trained and evaluated. The model may perform well for some types of cancer but, as a result of the training data, not as well for others. Indeed, updates have been shown to increase detection sensitivity [3]. ML models can be refined by providing more data, and results need to be critically appraised in this context. Additionally, no mention is made of the possible benefits of ML-DSS for less experienced staff. In the mammogram example, an improvement in sensitivity for 44 out of 50 radiologists was seen for easier-to-detect cancers. Overall diagnostic accuracy also increased when ML-DSS was used in the electrocardiogram study [4]. The loss of accuracy for experienced readers using ML-DSS is a valid concern, but it reflects the training required rather than an outcome specific to ML-DSS. A knowledgeable doctor may have no need for an ML-DSS, but the tool could greatly assist less experienced staff.
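
To make the refinement point concrete, the sketch below shows one way a model can be updated as new labelled cases accrue, using scikit-learn's partial_fit on synthetic data. It is illustrative only: the dataset, model choice and batch scheme are our assumptions, not those of the mammography system discussed.

    # Illustrative sketch only: incrementally updating a classifier as new
    # labelled cases arrive (synthetic data; not the cited mammography system).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import SGDClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=5000, n_features=10, random_state=1)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

    # Logistic regression fitted by SGD ("log_loss" in scikit-learn >= 1.1).
    clf = SGDClassifier(loss="log_loss", random_state=1)
    classes = np.unique(y_train)

    # Feed the training data in successive 'batches' of new cases,
    # re-evaluating held-out performance after each update.
    for batch in np.array_split(np.arange(len(X_train)), 10):
        clf.partial_fit(X_train[batch], y_train[batch], classes=classes)
        print(f"after {batch[-1] + 1:4d} cases: "
              f"held-out accuracy = {clf.score(X_test, y_test):.3f}")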

Cabitza et al. also argue that the confounding effect of asthma on the outcome of patients with pneumonia would not have been observed in a neural network model. There are, however, methods to obtain feature importance and the direction of the relationship between predictor variables and the outcome in neural networks [5]. Further, some ML approaches, such as random forests, are more transparent than others, and ML can readily be coupled with clinical expertise to develop risk models that have benefits over traditional statistical modelling [6].
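
As one example of such methods, permutation importance gives a model-agnostic estimate of how strongly each predictor drives a fitted neural network. The following is a minimal sketch using scikit-learn on synthetic data; the simulated dataset is our stand-in for clinical data, not the pneumonia cohort discussed.

    # Illustrative sketch only: estimating predictor importance for a neural
    # network via permutation importance (synthetic data, not clinical data).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = make_classification(n_samples=1000, n_features=8,
                               n_informative=4, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = make_pipeline(StandardScaler(),
                          MLPClassifier(hidden_layer_sizes=(32,),
                                        max_iter=1000, random_state=0))
    model.fit(X_train, y_train)

    # How much does randomly shuffling each predictor degrade held-out
    # performance? Large drops indicate influential predictors.
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=20, random_state=0)
    for i in np.argsort(result.importances_mean)[::-1]:
        print(f"feature {i}: {result.importances_mean[i]:.3f} "
              f"+/- {result.importances_std[i]:.3f}")

Partial dependence plots, available in the same library, can similarly indicate the direction of each predictor's relationship with the outcome.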

The issues highlighted by Cabitza et al. concern the studies themselves rather than an intrinsic flaw in ML methodology. To fully leverage ML, or any other approach, users must have a good understanding of its caveats. In summary, we agree that ML-based approaches are not without their limitations, but the growing application of ML in healthcare has the potential to significantly aid physicians, especially in increasingly resource-constrained environments. Informed, appropriate use of ML-DSS could, therefore, enable better patient care.

How to cite this article
McDonald L, Ramagopalan SV, Cox AP and Oguz M. Unintended consequences of machine learning in medicine? [version 1; peer review: 2 approved]. F1000Research 2017, 6:1707 (https://doi.org/10.12688/f1000research.12693.1)
NOTE: If applicable, it is important to ensure the information in square brackets after the title is included in all citations of this article.

Open Peer Review

Key to reviewer statuses
Approved: The paper is scientifically sound in its current form and only minor, if any, improvements are suggested.
Approved with reservations: A number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper's academic merit.
Not approved: Fundamental flaws in the paper seriously undermine the findings and conclusions.
Version 1 (published 19 Sep 2017)
Reviewer Report 23 Nov 2017
Hugo Schnack, Department of Psychiatry, Brain Center Rudolf Magnus, University Medical Center Utrecht, Utrecht, The Netherlands 
Zimbo Boudewijns, Department of Psychiatry, Brain Center Rudolf Magnus, University Medical Center Utrecht, Utrecht, The Netherlands 
Approved
Machine learning (ML) methods are currently being applied in a wide range of fields. Theoretically, the ability to extract meaningful relations from large datasets holds a great promise for health care and could potentially offer new, unexpected insights into disease ...
How to cite this report
Schnack H and Boudewijns Z. Reviewer Report For: Unintended consequences of machine learning in medicine? [version 1; peer review: 2 approved]. F1000Research 2017, 6:1707 (https://doi.org/10.5256/f1000research.13746.r27518)
Reviewer Report 30 Oct 2017
Arturo Gonzalez-Izquierdo, Institute of Health Informatics, University College London, London, UK 
Maria Pikoula, Institute of Health Informatics, University College London, London, UK 
Spiros Denaxas, Institute of Health Informatics, University College London, London, UK 
Approved
The publication of this letter is both important and timely given the increased interest that statistical learning approaches applied to healthcare data are receiving. 

The original article emphasized a negative perception of potential adverse consequences of machine ...
How to cite this report
Gonzalez-Izquierdo A, Pikoula M and Denaxas S. Reviewer Report For: Unintended consequences of machine learning in medicine? [version 1; peer review: 2 approved]. F1000Research 2017, 6:1707 (https://doi.org/10.5256/f1000research.13746.r26485)
