Keywords
Citation Tracking, Literature Search, Supplementary Search, Methods, Scoping Review, Research Methodology, Survey, Systematic Review
This article is included in the Research on Research, Policy & Culture gateway.
This article is included in the Future of Research (FoR) collection.
We added methodological details of the Delphi study and we now specify the number of rounds we expect to perform, the consensus rate, anonymity of participants, and expected non-response.
Systematic reviews are considered to be of high clinical and methodological importance as they help to derive recommendations for health care practice and future research1–3. A comprehensive literature search that aims to identify the available evidence as completely as possible is the foundation of every systematic review4–6. Due to an ever-growing research volume, lack of universal terminology and indexation, as well as extensive time requirements for identifying studies in a systematic way, efficient search approaches are required5,7,8. According to current recommendations, systematic search approaches should include both electronic database searching and one or several supplementary search methods9. Potential supplementary search methods include citation tracking, contacting study authors or experts, handsearching, trial register searching, and web searching10. In this study, we focus on citation tracking.
Citation tracking is an umbrella term for multiple methods that directly or indirectly collect related references from so-called "seed references". These seed references are usually eligible for inclusion in the review. Some may be known at the beginning of the review, and others may emerge as eligible records following full-text screening10–12. The terminology used to describe the principles of citation tracking is non-uniform and heterogeneous13–16. Citation tracking methods are sub-categorized into direct and indirect citation tracking (Figure 1a). For direct citation tracking, the words "backward" and "forward" denote the directionality of tracking13,17,18. Backward citation tracking is the oldest form of citation tracking. It aims at identifying references cited by a seed reference, which can easily be achieved by checking the reference list. Terms like "footnote chasing" or "reference list searching" are synonyms6,13. In contrast, forward citation tracking or chaining aims at identifying citing references, i.e. references that cite a seed reference19. Indirect citation tracking describes the identification of (i) co-cited references or co-citations (i.e. other references cited by the citing literature of a seed reference) and of (ii) co-citing references (i.e. publications sharing references with a seed reference)11,20. Direct and indirect citation relationships of references based on a seed reference are illustrated in Figure 1b. Both direct and indirect citation tracking may involve one or more layers of iteration, in which researchers use newly retrieved, relevant references as new seed references.
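The four citation relations described above can be made concrete with a small sketch. The following Python snippet is our illustration only, not part of the protocol; the toy graph and all paper names are hypothetical. It computes cited, citing, co-cited, and co-citing references for a seed over a dictionary-based citation graph:

```python
# Toy citation graph: each key maps a paper to the set of papers it cites.
# All paper names ("seed", "A", "B", ...) are hypothetical.
cites = {
    "seed": {"A", "B"},
    "C": {"seed", "D"},
    "E": {"A"},
}

def backward(cites, seed):
    """Direct backward tracking: references cited by the seed."""
    return set(cites.get(seed, set()))

def forward(cites, seed):
    """Direct forward tracking: references that cite the seed."""
    return {p for p, refs in cites.items() if seed in refs}

def co_cited(cites, seed):
    """Indirect tracking: other references cited by literature citing the seed."""
    out = set()
    for citing in forward(cites, seed):
        out |= cites[citing]
    return out - {seed}

def co_citing(cites, seed):
    """Indirect tracking: publications sharing at least one reference with the seed."""
    seed_refs = backward(cites, seed)
    return {p for p, refs in cites.items() if p != seed and refs & seed_refs}

print(sorted(backward(cites, "seed")))   # ['A', 'B']  cited references
print(sorted(forward(cites, "seed")))    # ['C']       citing references
print(sorted(co_cited(cites, "seed")))   # ['D']       co-citations
print(sorted(co_citing(cites, "seed")))  # ['E']       co-citing references
```

In this toy graph, "C" cites the seed, so "D" (also cited by "C") is a co-citation, while "E" shares the reference "A" with the seed and is therefore co-citing.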
1a: Hierarchical illustration of different citation tracking methods; 1b: Direct and indirect citation relationships of references based on a seed reference. A → B denotes A cites B. The horizontal axis denotes time, i.e. the chronology in which references were published relative to the seed reference: “Older” stands for references that were published before the seed reference, “Newer” stands for references that were published after the seed reference.
Direct backward citation tracking of cited references is currently the most common citation tracking method. However, recent guidance suggests that a combination of several methods (e.g., tracking cited, citing, co-cited and co-citing references) may be the most effective way to use citation tracking for systematic reviewing10. The added value of any form of citation tracking is likely not the same for all systematic reviews; rather, it depends on a variety of factors. For instance, citation tracking may be beneficial in research areas that require complex searches, such as reviews of complex interventions, mixed-methods reviews, qualitative evidence syntheses, or reviews on public health topics. Furthermore, research areas without consistent terminology or with vocabulary overlaps with other fields, such as methodological topics, may also benefit from the use of citation tracking20,21. Hence, tailored and evidence-guided recommendations on the use of citation tracking are strongly needed. However, none of the current reviews on this topic has systematically identified the available evidence on the use and benefit of citation tracking in the context of systematic literature searching10.
Therefore, the aim of our study is to develop recommendations for the use of citation tracking in systematic literature searching for health-related topics. The scoping review will be guided by the following three research questions which in turn will inform the Delphi study:
This protocol is reported according to the “Preferred Reporting Items for Systematic review and Meta-Analysis Protocols” (PRISMA-P) checklist22 which we published on the Open Science Framework23. Our study will have two parts: a scoping review and a Delphi study. The scoping review has the objective to map the benefit and the use of citation tracking, or research gaps if the results are not sufficiently informative. The objective of the subsequent Delphi study is to derive consensus recommendations for future practice and research of citation tracking24–26. For the scoping review, we will use the framework by Arksey and O’Malley26 and the “Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews” (PRISMA-ScR)27. For the Delphi study, we will follow the “Guidance on Conducting and REporting DElphi Studies” (CREDES) statement28.
Eligibility criteria. We will include any study with a focus on citation tracking as a means of evidence retrieval which exhibits one of the following criteria: benefit and/or effectiveness of (i) citation tracking in general; (ii) different methods of citation tracking (e.g., backward vs. forward, direct vs. indirect); or (iii) technical uses of citation tracking (e.g., comparing citation indexes and/or tools, such as Scopus vs. Web of Science, Oyster, Voyster). Eligible studies need to have a health-related context. There will be no restrictions on study design, language, or publication date.
We will exclude studies solely using citation tracking for evidence retrieval, e.g., a systematic review applying citation tracking as a supplementary search technique, or studies focussing on citation tracking as a means to explore network or citation impact (i.e. bibliometric analysis). Studies only assessing the benefit of combined search methods in which the isolated benefit of citation tracking cannot be extracted will also be excluded. Furthermore, we will exclude methodological guidelines without empirical investigations and other non-empirical publications like editorials, commentaries, letters and abstract-only publications. Table 1 illustrates our inclusion and exclusion criteria per domain.
Information sources. We will search MEDLINE via Ovid; CINAHL (Cumulative Index to Nursing and Allied Health Literature), LLISFT (Library Literature & Information Science Full Text), and LISTA (Library, Information Science & Technology Abstracts) via EBSCOhost; and the Web of Science Core Collection, using database-specific search strategies. Additionally, we will perform web searching via Google Scholar as well as direct forward and backward citation tracking of included studies. As some evidence suggests that one citation index may not be enough for this29, we will use Scopus, Web of Science, and Google Scholar for forward citation tracking. For backward citation tracking, we will use Scopus and, if seed references are not indexed in Scopus, we will manually extract the seed reference's reference list. We will iteratively repeat direct citation tracking on newly identified eligible references until no further eligible references are identified. We will also contact librarians in the field of health sciences and information specialists through several mailing lists (Canadian Medical Libraries, Expertsearching, MEDIBIB-L/German-speaking medical librarians, and EAHIL-list) to ask for further studies.
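The iterative step described above, i.e. treating newly identified eligible references as new seeds until no further eligible references are found, amounts to a simple fixed-point loop. The sketch below is our illustration, not part of the protocol; `get_related` and `is_eligible` are hypothetical stand-ins for database look-ups and screening decisions:

```python
def iterative_tracking(seeds, get_related, is_eligible):
    """Repeat direct citation tracking on newly identified eligible
    references until no further eligible references are identified."""
    included = set(seeds)
    frontier = set(seeds)
    while frontier:
        candidates = set()
        for seed in frontier:
            candidates |= set(get_related(seed))
        # Keep only eligible references not seen before; these become
        # the seeds of the next iteration.
        frontier = {r for r in candidates if r not in included and is_eligible(r)}
        included |= frontier
    return included

# Toy example: "s1" is a seed; "b" is retrieved but judged ineligible.
related = {"s1": ["a", "b"], "a": ["c"]}
result = iterative_tracking({"s1"}, lambda s: related.get(s, []), lambda r: r != "b")
print(sorted(result))  # ['a', 'c', 's1']
```

Note that ineligible references ("b" above) are dropped and never used as new seeds, matching the protocol's restriction of iteration to eligible references.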
Search strategy. Due to a lack of adequate index terms, our search strategy will be based on text words only. To determine frequently occurring terms for inclusion into the search strategy, we analysed keywords in the titles and abstracts of potentially relevant publications retrieved from preliminary searches and similar articles identified via PubMed by using various text mining tools (PubMed Reminer, AntConc, Yale MeSH analyzer, Voyant, VOSviewer, Termine, Text analyzer)30. We restricted some of our text words to the title field in order to avoid retrieving systematic reviews that used citation tracking.
All authors contributed to the development of search strategies. HE and CAH are information specialists with a professional background in research; JH and TN are researchers experienced in the development of search strategies. HE drafted the search strategy and JH peer-checked it.
Box 1 shows the final search for MEDLINE in Ovid syntax. To use the search in other databases, we will translate it by means of Polyglot Search Translator31. CAH will conduct the searches and eliminate duplicates using the Bramer method32. We will perform web searching in Google Scholar using search terms from our database search. We will document our search strategy according to PRISMA-S33.
(reference list OR reference lists OR ((reference OR references OR citation OR citations OR co-citation OR co-citations) ADJ3 (search OR searches OR searching OR searched OR screen OR screening OR chain OR chains OR chaining OR check OR checking OR checked OR chased OR chasing OR tracking OR tracked OR harvesting OR tool OR tools OR backward OR forward)) OR ((cited OR citing OR cocited OR cociting OR co-cited OR co-citing) ADJ3 (references OR reference)) OR citation discovery tool OR cocitation OR co-citation OR cocitations OR co-citations OR co-cited OR backward chaining OR forward chaining OR snowball sampling OR snowballing OR footnote chasing OR berry picking OR cross references OR cross referencing OR cross-references OR cross-referencing OR citation activity OR citation activities OR citation analysis OR citation analyses OR citation network OR citation networks OR citation relationship OR citation relationships).ti OR (((((strategy OR strategies OR method* OR literature OR evidence OR additional OR complementary OR supplementary) ADJ3 (find OR finding OR search* OR retriev*)) OR (database ADJ2 combin*)).ti) AND ((search OR searches OR searching OR searched).ab))
Data management. A bibliography management tool will be used to manage retrieved references and track their numbers throughout the study selection process. Furthermore, we will use specific tools for study selection that we describe below.
Selection of sources of evidence. After an initial calibration phase, i.e. screening 100 titles and abstracts separately and discussing divergent decisions (TN, JH, HE), two authors (JH, TN) will independently screen titles, abstracts, and full texts using Rayyan34. They will resolve disagreements by third-author arbitration (HE). To screen the results of the citation tracking step, we will consider ASReview, particularly if the number of references exceeds 1000. ASReview combines machine (deep) learning models trained on a set of eligible studies with active learning on manual selections during title-abstract screening to generate a relevancy-ranked abstract list and to save screening time. Should the tool prove beneficial for reducing the screening load, we will consider conducting a more sensitive database search at a later stage and screening the additional results with ASReview.
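The general idea behind screening-prioritisation tools such as ASReview can be illustrated with a deliberately simplified, stdlib-only sketch: learn which words distinguish relevant from irrelevant abstracts in the already-labelled set, then rank unscreened abstracts by that signal. ASReview itself uses trained machine-learning models and active learning; the code below is only an approximation of the principle, and all example texts are invented:

```python
from collections import Counter

def rank_unscreened(relevant, irrelevant, unscreened):
    """Rank unscreened abstracts so that those resembling the
    relevant labelled abstracts come first."""
    rel_words = Counter(w for text in relevant for w in text.lower().split())
    irr_words = Counter(w for text in irrelevant for w in text.lower().split())

    def score(text):
        # Words frequent in relevant abstracts raise the score,
        # words frequent in irrelevant abstracts lower it.
        return sum(rel_words[w] - irr_words[w] for w in text.lower().split())

    return sorted(unscreened, key=score, reverse=True)

ranked = rank_unscreened(
    relevant=["citation tracking search"],
    irrelevant=["mouse model"],
    unscreened=["mouse liver", "citation search methods"],
)
print(ranked)  # ['citation search methods', 'mouse liver']
```

In an active-learning loop, the screener's decisions on the top-ranked abstracts would be fed back into the labelled sets and the ranking recomputed, which is what allows such tools to reduce the screening load.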
Data charting process. We will pilot a prespecified data extraction sheet approved by consensus among the authors. We will extract bibliographic and geographic data, design- and study-specific data, as well as results that answer our research questions. Since we expect heterogeneous studies in terms of aim, design, and methods, we aim for an iterative data extraction process. This will allow a flexible and study-specific data extraction, e.g., by adding previously neglected data extraction items that might contribute to the overall body of knowledge to the data extraction form. In the final publication, we will provide a detailed overview of the extracted data items. One author will extract data and a second author will peer-check the extraction. We will resolve disagreements by third-author arbitration.
Synthesis of results. One author (JH) will narratively summarise study characteristics and results. Depending on the results, we will also chart them graphically.
Design and rationale. A multi-stage online Delphi procedure will be used to derive consensus recommendations for the use of citation tracking in systematic literature searching for health-related topics28,35,36. We chose a Delphi procedure since the method enables us to collect the perspectives of international experts on citation tracking, promote discussion of the topic, and derive consensus recommendations for future practice and research. The Delphi study will entail several Delphi rounds (see below). The results of the scoping review will inform the initial Delphi round (see below for details). To distribute the Delphi rounds to the experts, we will use the web-based tool SosciSurvey37. The Delphi language will be English.
Expert panel. The recruitment of experts will be based on a stepwise approach. First, we will contact authors of pertinent articles identified during the literature search as well as experts from our professional networks. This "person-based" approach will help us to identify experts who authored papers, books, comments, and reviews in the field of citation tracking. We will ask the contacted persons to take part in the Delphi study. Second, we will identify and contact relevant national and international organisations as well as systematic review collaborations (e.g., Cochrane groups, Joanna Briggs Institute (JBI), Campbell Collaboration, National Academy of Medicine (NAM), expert information specialists, Evidence Synthesis International, and PRISMA-S working group). This "organisation-based approach" will allow us to reach experts in the field of literature retrieval methods who are potentially using citation tracking without necessarily being the authors of methodological studies (yet). By using this stepwise approach, we intend to recruit at least 15 experts.
Data collection. In online Delphi rounds, we will seek guidance on various aspects of citation tracking. For example, recommendations on the following aspects could be of particular interest:
Uniform terminology for citation tracking methods
Situations in which citation tracking should be applied
Potential situations in which citation tracking can be used as a sole method of evidence retrieval
Situations in which a particular citation tracking method or a combination thereof is likely to be most effective
Situations in which further layers of iteration of citation tracking should be applied
Necessity to use multiple citation indexes for citation tracking
For indirect citation tracking, whether to screen selected records only, and how to define their ranking and cut-off
Reporting of citation tracking (complementing PRISMA-S33)
Questions on citation tracking that currently cannot be answered and require more research
Based on the results of our scoping review, we will formulate draft recommendations for the first Delphi round. Experts will be invited to rate their agreement with the draft recommendations on a four-point Likert scale (strongly agree – agree – disagree – strongly disagree). If experts vote disagree/strongly disagree, they will be required to comment on their reasons and/or give constructive feedback. We will consider a recommendation consented when at least 75% of the experts vote agree/strongly agree. All other recommendations will be adapted for the next Delphi round. This adaptation will be based on the comments collected from the experts and, if necessary, on discussion via video conference.
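The consensus rule above amounts to a simple proportion check, sketched below for clarity; the function and variable names are ours, not part of the protocol:

```python
# A recommendation is consented when at least 75% of experts vote
# "agree" or "strongly agree" on the four-point Likert scale.
AGREEMENT = {"strongly agree", "agree"}

def has_consensus(votes, threshold=0.75):
    agreeing = sum(1 for v in votes if v in AGREEMENT)
    return agreeing / len(votes) >= threshold

votes = ["strongly agree", "agree", "agree", "disagree"]
print(has_consensus(votes))  # 3/4 = 0.75, so True
```

With four experts, three agreeing votes exactly meet the 75% threshold; two agreeing votes (50%) would send the recommendation into the next round.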
There are items where we will not directly propose recommendations, e.g., if the results of our scoping review do not allow it or if there are several equally valid options (e.g., for terminology). In these cases, we will either ask the Delphi experts for their experiences and perspectives or let them vote on several options. We will use the resulting answers to formulate draft recommendations, which will be entered into the Delphi consensus process (see above). Therefore, our Delphi study may comprise qualitative and quantitative aspects.
We will limit the number of Delphi rounds to a maximum of four rounds. Should there be no consensus for any of the items by the end of the fourth round, we will report the results but not give any recommendations.
Expert assessments will be anonymous among experts but open to the study team. We expect a low non-response rate since experts' participation is indicative of their interest in our study.
To describe experts’ characteristics, we will collect sociodemographic data, i.e. professional education and background, current field of work as well as years of experience in literature searching and citation tracking. We expect that experts will invest around 30 to 90 minutes per Delphi round depending on the underpinning aim of the Delphi round as well as experts’ familiarity and experiences with the topic. For each Delphi round, we will schedule approximately three weeks for participation. Table 2 illustrates our reminder strategy within a Delphi round. We will pilot test and discuss our Delphi items with a person experienced in literature searching who is not an author and not involved in the Delphi study.
Process and time | Person-based approach | Organisation-based approach
---|---|---
Delphi round setup | Invitation | Invitation
One week after | Reminder | -
Two weeks after | Reminder | Reminder
Delphi round closing after three weeks | - | -
Data analysis. We will use descriptive statistics for votes for which results are numeric or can be converted into numbers. For free text answers and statements of experts, we will use thematic categorisation38.
Ethical concerns. The online Delphi study will contain introductory information on our aims, the Delphi procedure itself, and data management and security. We do not expect vulnerability on the part of the experts, and, with regard to the Swiss Human Research Act, our research concerns neither human diseases nor the structure and function of the human body39. We will therefore not apply for ethical approval of the Delphi study. Taking part in the Delphi study will indicate consent to participate. There will be no mandatory participation once an expert has consented to participate. Experts will not receive an incentive for participation and may leave the process at any time.
Our dissemination strategy uses multiple channels to share our study results with academic stakeholders. The final scoping review and Delphi study will each be published in an international open access journal relevant to the field of information retrieval. Additionally, we will discuss our results with experts at national and international conferences (e.g., the conference of the German Network for Evidence-based Medicine (EbM-Netzwerk), the conference of the European Association for Health Information and Libraries (EAHIL), the Cochrane Colloquium, and the Health Technology Assessment International (HTAi) conference). To announce our study results and publications, we will use Twitter, ResearchGate, and mailing lists of relevant stakeholders such as Canadian Medical Libraries, Expertsearching, MEDIBIB-L/German-speaking medical librarians, and EAHIL-list.
We conducted the initial search for the scoping review in November 2020 and expect to complete the Delphi study in 2022.
Current study status: literature searches: yes; piloting of the study selection process: yes; formal screening of search results against eligibility criteria: yes; data extraction: no; data analysis: no.
Missing pertinent evidence might affect the validity of systematic reviews and, consequently, the quality of health care40,41. Therefore, authors of systematic reviews should conduct high-quality literature searches that aim to detect all relevant evidence. Citation tracking may be an effective way to complement electronic database searches and to broaden the scope of possible findings. Our study therefore intends to provide literature- and expert-based recommendations on the use of citation tracking for systematic literature searching. Although we focus solely on a health-related context, some of the recommendations developed during this project may also prove relevant for other academic fields such as the social or environmental sciences9,42. Finally, tailored and evidence-based recommendations on the use of citation tracking for systematic literature searching may guide future steps in semi-automated and automated literature retrieval methods43,44.
Open Science Framework (OSF): PRISMA-P checklist for ‘Using citation tracking for systematic literature searching - study protocol for a scoping review of methodological studies and a Delphi study’, https://doi.org/10.17605/OSF.IO/7ETYD23.
Data are available under the terms of the Creative Commons Zero "No rights reserved" data waiver (CC0 1.0 Public domain dedication).