Keywords
google, Dr.Google, diagnostic skills, residency, nephrology
We agree that atypical presentations of common conditions are more frequent than rare diseases. Both residents and fellows are still learning, and personal knowledge biases cannot be excluded. We feel that this is a limitation of our manuscript and have added a sentence specifically addressing your concern. While 'Googlers' might have reached the level of fellows in rare diseases, we still feel that it is the thinking that matters most, and a physician's knowledge and experience cannot be replaced by a search engine. We mention this toward the end of the discussion.
In medical problem solving and decision-making, experts often use heuristics: methods of problem solving for which no formula exists and which rest instead on informal approaches or experience1. Heuristics help generate accurate decisions economically in both time and cost; in this sense, expert strategies are immensely adaptive1. While invaluable in helping the experienced clinician arrive at a diagnosis faster, heuristics carry the biases inherent in efficient decision-making and can therefore lead to specific patterns of error2. Technology, by contrast, brings an algorithmic rather than heuristic approach to medical problem solving, at speeds far beyond human capacity. Various technologies have been tried in medicine for years; past efforts include computer programs specifically designed to help clinicians make medical decisions and diagnose conditions more efficiently and accurately1,3. Electronic medical records and information technology have improved access to, and ease of use of, patient data. Technology does not merely facilitate or augment decision-making; it reorganizes decision-making practices1.
The most dramatic development in medical decision-making technology has been the advent of the Internet. Social media tools such as Facebook and Twitter allow information to be shared and obtained far faster than previously possible. Search engines have gradually emerged as useful tools for acquiring medical knowledge, and clinicians can use them to aid decision-making. Search engines, the most popular of which is Google3, algorithmically survey all available information in an attempt to deliver the most meaningful and useful results to the end user. It is plausible that search engines could substantially aid the clinician, especially with diagnostic or therapeutic challenges involving great complexity and multiple variables, but their effectiveness as an aid to the clinician remains incompletely defined, as suggested by a recent study by Krause et al.4.
As technology infiltrates everyday medicine, the debate about the appropriate role of information technology within medicine has intensified5,6. Early on, concern was raised about the ability of search engines to direct patients and clinicians to relevant sources7. More recently, there have been mounting anecdotal accounts, some seemingly miraculous, of patients and physicians-in-training “googling” the answer to a medical question that had experts stumped8. Several small studies have examined the ability of doctors at various levels of training and experience to correctly diagnose a disease using Google, based on case presentations from the New England Journal of Medicine (NEJM). Falagas et al. performed a head-to-head comparison of three learners (two medical students and one “trainee doctor”) in which the learners first provided diagnoses for NEJM cases without help, and then repeated the exercise with the help of Google and PubMed9. While the findings did not reach statistical significance, the study suggested that use of Google and PubMed may be helpful in generating a differential diagnosis9. Tang and Ng took 26 cases, also from the NEJM case records series, selected 3–5 search terms for each case, and entered them into Google10; this approach yielded the correct diagnosis in 58% of the cases10. The conclusions of these studies were essentially the same: Google (and probably other search engines and algorithmic technologies) appears to be a viable clinical tool to aid physician diagnosis and learning.
Does “googling” a diagnosis replace an experienced physician’s clinical acumen? “Googling” a clinical question may be especially useful for rare or syndromic diseases, but may be less likely to help in diagnosing more common diseases. To assess this possibility, we evaluated the use of Google as a diagnostic tool in renal disease and compared it with the experience of fellows and attending staff.

A total of 21 participants took part in the study: 7 novices (first- and second-year internal medicine residents), 7 intermediates (nephrology fellows) and 7 experts (nephrology attendings). We created 103 pairings of common and uncommon renal diseases with keywords related to the features of each disease, using a standard renal textbook as a guide (Appendix 1). The diseases were categorized as common or rare by consensus of the investigators; this categorization was not indicated on the worksheets given to the participants. The order of the questions was randomized, and worksheets were created with approximately fifteen keyword groupings per page and space for the participant to record the suspected diagnosis. Experts and intermediates were given the entire list of keywords (one page at a time) and asked to identify the associated diseases without any aid. Novices were given approximately three pages at random and asked to use Google to identify the renal disease associated with the keywords. The novices were given standardized instructions restricting them to the first ten results (the first page of results) returned by a Google search, and then to the first page of each of those ten results; a detailed instruction sheet is attached for reference (Appendix 2). The residents were instructed to use any or all of the keywords as they saw fit, and were allowed to try different iterations of the keywords if their original search did not yield a diagnosis they were satisfied with. The residents were supervised by one of the investigators; questions were limited to explanations of the rules. Participants were asked to complete as many pages as they were willing to complete. In total, the experts answered 229 questions, the intermediates 254, and the novices 230.
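To make the worksheet construction concrete, the sketch below shows one way the randomization and pagination described above could be implemented. The two disease/keyword pairings are illustrative placeholders, not the study’s actual 103 pairings, and the script itself is hypothetical; the original worksheets were prepared manually by the investigators.

```python
# Illustrative sketch of worksheet construction: shuffle the disease/keyword
# pairings and split them into pages of ~15, printing only the keywords with
# a blank line for the suspected diagnosis. Pairings below are placeholders.
import random

pairings = [
    ("minimal change disease", "child, nephrotic syndrome, steroid-responsive"),
    ("Fabry disease", "angiokeratomas, acroparesthesias, alpha-galactosidase A"),
    # ... the study used 103 such pairings drawn from a standard renal textbook
]

random.shuffle(pairings)  # randomize the order of the questions

PER_PAGE = 15  # approximately fifteen keyword groupings per page
pages = [pairings[i:i + PER_PAGE] for i in range(0, len(pairings), PER_PAGE)]

for page_number, page in enumerate(pages, start=1):
    print(f"--- Worksheet page {page_number} ---")
    for _disease, keywords in page:
        # The disease name is withheld; participants see only the keywords.
        print(f"Keywords: {keywords}")
        print("Suspected diagnosis: ____________________")
```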
The percentage of diagnoses correctly identified from the keywords was calculated for each test-taking group, and the groups were compared with each other two at a time. A t-test was performed for each pairing; p-values were calculated using Microsoft Excel. Subgroup analyses were also conducted for common diseases and for rare diseases. Table 1 and Table 2 show examples of the common and rare diseases chosen, and of the keywords and their associated diseases, respectively.
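As an illustration, the pairwise comparison described above amounts to a two-sample t-test between groups. The per-participant accuracies below are hypothetical placeholders (the individual-level data were not published), and scipy is used here in place of the original Microsoft Excel analysis; this is a minimal sketch, not the authors’ actual computation.

```python
# Minimal sketch of the pairwise group comparison described above.
# Per-participant accuracies are hypothetical placeholders.
from scipy import stats

# Hypothetical fraction of keyword groupings correctly diagnosed,
# one value per participant (7 per group).
novices_google = [0.70, 0.75, 0.68, 0.74, 0.71, 0.73, 0.72]   # residents + Google
experts_unaided = [0.86, 0.83, 0.85, 0.88, 0.82, 0.84, 0.85]  # attendings, no aid

# Two-sample t-test comparing one pair of groups at a time.
t_stat, p_value = stats.ttest_ind(novices_google, experts_unaided)
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")
```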
Overall, with the aid of Google, the novices (internal medicine residents) correctly diagnosed renal diseases less often than the experts (nephrology attendings) (72.2% vs. 84.7%, p<0.001), but with the same frequency as the intermediates (nephrology fellows) (72.2% vs. 71.5%, p=0.795). In a subgroup analysis of common diseases, the novices correctly diagnosed renal diseases less often than both the experts (76.6% vs. 90.5%, p<0.001) and the intermediates (76.6% vs. 82.3%, p=0.031). However, in a subgroup analysis of rare diseases, the novices correctly diagnosed renal diseases less often than the experts (65.2% vs. 76.1%, p=0.014), but more often than the intermediates (65.2% vs. 56.2%, p=0.029). This study is unique in that it directly compares heuristic and algorithmic problem solving using the dominant technology of our time: the Internet via Google. It also addresses which types of problems are best solved using the heuristics of an experienced clinician and which benefit most from algorithmic problem solving with the aid of a search engine. Limitations of this short study include its single-center design, potential investigator bias, and the limited number of participants. In addition, residents and fellows are still in training, so pitting a search engine against them introduces a bias of its own: they do not yet have an expert’s experience. While this question will require further study, our findings suggest that for uncommon clinical entities, the use of search engine technology may raise the diagnostic performance of a novice to an intermediate level.
Can the computer really 'out-think' the doctor in making a diagnosis? A recent editorial in The New York Times11 raises this question as well, suggesting that in many instances involving rare diseases, a computer software program would have saved lives. This might be true for rarely encountered conditions, but perhaps not for common diseases. Rare diseases are often not diagnosed at the first encounter with a physician; hence the term “rare”. A computer-based query, as used in Google, might help diagnose a rare illness faster, but it cannot substitute for the heuristic thinking of a physician and the pattern matching made possible by a physician’s experience. Moreover, in many cases the Internet surfaces rare diagnoses that lead to unnecessary testing and anxiety for patient and physician alike. Hence, while search engines and diagnostic programs will likely continue to evolve as diagnostic tools, they can aid, but cannot replace, the thought processes of the experienced clinician.
KDJ and PBS conceived and designed the study. The study was exempt from IRB review at the NSLIJ Health System. KDJ and JM wrote the manuscript.
Part of this work was presented at the American Society of Nephrology Annual Renal Week, November 2011, Philadelphia, PA, USA.
Competing Interests: No competing interests were disclosed.