The TrialsTracker: Automated ongoing monitoring of failure to share clinical trial results by all major companies and research institutions

Background: Failure to publish trial results is a prevalent ethical breach with a negative impact on patient care. Audit is an important tool for quality improvement. We set out to produce an online resource that automatically identifies the sponsors with the best and worst records for sharing trial results.
Methods: A tool was produced that identifies all completed trials from clinicaltrials.gov, searches for results in the clinicaltrials.gov registry and on PubMed, and presents summary statistics for each sponsor online.
Results: The TrialsTracker tool is now available. Results are consistent with previous publication bias cohort studies using manual searches. The prevalence of missing studies is presented for various classes of sponsor. All code and data are shared.
Discussion: We have designed, built, and launched an easily accessible online service, the TrialsTracker, that identifies sponsors who have failed in their duty to make results of clinical trials available, and which can be maintained at low cost. Sponsors who wish to improve their performance metrics in this tool can do so by publishing the results of their trials.

The results of clinical trials are used to make informed choices with patients about medical treatments. However, there is extensive and longstanding evidence that the results of clinical trials are routinely withheld from doctors, researchers, and patients. A recent systematic review of all cohort studies following up registered trials, or trials with ethical approval, shows that approximately half fail to publish their results 1 . Evidence from an earlier review shows that studies with "negative" or non-significant results are twice as likely to be left unpublished 2 . Legislation such as the FDA Amendment Act 2007 (http://www.fda.gov/Regulatory-Information/Legislation/SignificantAmendmentstotheFDCAct/FoodandDrugAdministrationAmendmentsActof2007/default.htm), which requires trials to post summary results on clinicaltrials.gov within 12 months of completion, has been widely ignored, with a compliance rate of one in five 3,4 . The FDA is entitled to impose fines of $10,000 a day on those breaching this law, but has never yet done so 5,6 . This public health problem has also been the subject of extensive campaigning. For example, the AllTrials campaign is currently supported by 89,000 individuals and 700 organisations, including major funders, professional bodies, patient organisations and government bodies (http://www.alltrials.net/).
Previous work suggests that some sponsors, companies, funders, and research sites may perform better than others 5,7 . In any sector, audit of the best and worst performers can be used to improve performance, allowing those with a poor performance to learn from those doing better. To be effective, however, audit should be repeated, and ideally ongoing 8 .
All work on publication bias to date relies on a single sweep of labour-intensive manual searches 9,10 , or a single attempt to automatically match registry entries to published papers using registry identification number 11 . Manual matching comes at high cost and does not give ongoing feedback. We therefore set out to: develop an online tool that automatically identifies trials with unreported results; present and rank the prevalence of publication failure, broken down by sponsor; and maintain the service, updating the data automatically, so that companies and research institutes are motivated to improve their performance.

Methods
The methods used by the online tool are as follows. Raw structured data on all studies in clinicaltrials.gov are downloaded in XML format. Studies are kept if they:
- have a study type of "interventional" (excluding observational studies);
- have a status of "completed";
- have a completion date more than 24 months ago, and after 1 January 2006;
- are phase 2, 3, 4, or "n/a" (generally a device or behavioural intervention);
- have no application on file to delay results posting (ascertained from the firstreceived_results_disposition_date tag); and
- are conducted by a sponsor who has sponsored more than 30 trials (to exclude trials conducted by minor sponsors and make the ranking in the tool more informative).
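For readers who want to see how this step might be reproduced, the sketch below shows one way the eligibility filter could be implemented in Python. It is not the authors' published code; the XML tag paths and date formats are assumptions based on the legacy clinicaltrials.gov export and should be checked against the actual files.

```python
# Minimal sketch of the eligibility filter described above (not the authors' code).
import xml.etree.ElementTree as ET
from collections import Counter
from datetime import datetime, timedelta
from pathlib import Path

EARLIEST_COMPLETION = datetime(2006, 1, 1)
LATEST_COMPLETION = datetime.now() - timedelta(days=730)   # more than 24 months ago
ELIGIBLE_PHASES = {"Phase 2", "Phase 3", "Phase 4", "N/A"}

def parse_trial(path):
    """Pull out just the fields the filter needs from one study record."""
    root = ET.parse(path).getroot()
    text = lambda tag: (root.findtext(tag) or "").strip()
    return {
        "nct_id": text("id_info/nct_id"),
        "study_type": text("study_type"),
        "status": text("overall_status"),
        "phase": text("phase"),
        "sponsor": text("sponsors/lead_sponsor/agency"),
        "completion_date": text("completion_date"),
        # present only if an application to delay results posting was filed
        "results_disposition": text("firstreceived_results_disposition_date"),
    }

def completion_dt(raw):
    """Registry dates look like 'March 2012' or 'March 1, 2012'."""
    for fmt in ("%B %d, %Y", "%B %Y"):
        try:
            return datetime.strptime(raw, fmt)
        except ValueError:
            continue
    return None

def is_eligible(trial):
    completed = completion_dt(trial["completion_date"])
    return (
        trial["study_type"] == "Interventional"
        and trial["status"] == "Completed"
        and completed is not None
        and EARLIEST_COMPLETION <= completed <= LATEST_COMPLETION
        and trial["phase"] in ELIGIBLE_PHASES
        and not trial["results_disposition"]      # no delay application filed
    )

trials = [parse_trial(p) for p in Path("ctgov_xml").glob("*.xml")]
eligible = [t for t in trials if is_eligible(t)]

# Keep only sponsors with more than 30 eligible trials.
per_sponsor = Counter(t["sponsor"] for t in eligible)
eligible = [t for t in eligible if per_sponsor[t["sponsor"]] > 30]
```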
Results are then sought for all included studies, using two methods. First, the tool checks for structured results posted directly in clinicaltrials.gov, ascertained by the presence of the firstreceived_results_date tag. Secondly, the tool searches for the nct_id (registry ID number) of the trial in PubMed's Secondary Source ID field. Since 2005, all trials with a registry ID in the body of the journal article text should have that ID replicated in this field (https://www.nlm.nih.gov/bsd/policy/clin_trials.html). However, since in our experience approximately 1.5% of PubMed records include a valid nct_id in the abstract but not the Secondary Source ID field, our tool additionally searches for this ID in the title or abstract text. We exclude results published before the completion date of the trial, or results that have the words "study protocol" in the title.
A final filter is then applied, with the aim of excluding publications that report protocols, additional analyses, or commentary rather than trial results; we experimented with the standard validated PubMed "therapy" filters (both broad and narrow) and a rudimentary search for "study protocol", and used the former. A comparison of the three methods is reported in the accompanying IPython notebook [https://github.com/ebmdatalab/trialstracker] 12 .
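To illustrate the search logic described in the two paragraphs above, the sketch below builds a PubMed E-utilities query that looks for a trial's NCT number in the Secondary Source ID field or the title/abstract, restricts matches to publications after the trial's completion date, drops protocol papers, and applies a "therapy" clinical-query filter (the broad variant is shown). The field tags and filter name are standard PubMed syntax, but the exact query string and helper function are our own assumption, not the code used in the tool.

```python
# Minimal sketch of the PubMed results search (not the authors' published query).
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_results_ids(nct_id, completion_date):
    """Return PubMed IDs that plausibly report results for this trial.

    completion_date: 'YYYY/MM/DD' string, used as the earliest publication date.
    """
    term = (
        f"({nct_id}[si] OR {nct_id}[tiab])"   # Secondary Source ID or title/abstract
        ' NOT "study protocol"[ti]'           # drop protocol papers
        " AND Therapy/Broad[filter]"          # PubMed clinical-query therapy filter
    )
    params = {
        "db": "pubmed",
        "term": term,
        "datetype": "pdat",          # filter on publication date
        "mindate": completion_date,  # results must post-date trial completion
        "maxdate": "3000",
        "retmode": "json",
    }
    resp = requests.get(ESEARCH, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]

# A trial counts as "results available" if it has summary results on
# clinicaltrials.gov (firstreceived_results_date present) or any PubMed match,
# e.g. pubmed_results_ids("NCT00123565", "2006/01/01").
```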
Accepting that an automated tool cannot produce results with the accuracy of a manual search, we also performed some rudimentary checks of the output of the automated search against existing manual search cohorts. The overall prevalence of unreported studies found by the tool was compared against three previous studies on publication bias. In addition, the tool's adjudications on individual studies found to be unreported were compared against the underlying data from a recent publication bias cohort study conducted using clinicaltrials.gov data.
The output data are then shared through an interactive website at https://trialstracker.ebmdatalab.net, allowing users to rank sponsors by number of trials missing, number of trials conducted, and proportion of trials missing. Users can click on a sponsor name to examine the number and proportion of trials completed and reported in each year for that sponsor. The site URL changes as users focus on each organisation's performance, so that users can easily share insights into the performance of an individual company or institution. By default, sponsors are sorted by the highest number of unreported trials, rather than the highest proportion, in order to focus initially on larger and better-known organisations. The site is designed responsively, so that it is usable on mobile, tablet, or desktop devices.
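The default ranking amounts to a simple aggregation per sponsor. The sketch below (with hypothetical column names, not the published all.csv schema) shows how sponsors might be grouped and sorted by the number of unreported trials, with the proportion available as an alternative sort key.

```python
# Minimal sketch of the default sponsor ranking (hypothetical column names).
import pandas as pd

trials = pd.read_csv("all.csv")  # one row per eligible trial
summary = (
    trials.groupby("sponsor")
    .agg(total=("nct_id", "count"),
         unreported=("has_results", lambda s: int((~s).sum())))
)
summary["pct_unreported"] = (100 * summary["unreported"] / summary["total"]).round(1)

# Default view: largest absolute number of unreported trials first.
ranking = summary.sort_values("unreported", ascending=False)
print(ranking.head(5))
```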
For transparency and replication, all code for the tool, with comments and all data sources, is available as an IPython notebook 12 . All software is shared as open source, under the MIT license. A full CSV is shared containing all data, including all studies before our filters are applied, allowing others to conduct additional analyses or sensitivity analyses with different filtering methods.

Results
The TrialsTracker tool was successfully built and is now running online at https://trialstracker.ebmdatalab.net. Sample screenshots are presented in Figure 1 and Figure 2.
Since Jan 2006, trial sponsors included in our dataset have completed 25,927 eligible trials, of which 11,714 (45.2%) have failed to make results available. Table 1 to Table 4 report the five sponsors with the highest numbers of unreported trials, the highest numbers of eligible trials, the highest proportions of unreported trials, and the lowest proportions of unreported trials. In total, 2390/8799 (27.2%) trials with sponsors classed as "industry" were identified as unreported; 122/470 (26.0%) trials with sponsors classed as "US Fed" were identified as unreported; 361/996 (36.2%) trials with sponsors classed as "NIH" were identified as unreported; and 8841/15662 (56.4%) trials with sponsors classed as "other" were identified as unreported. We find that 8.7 million patients were enrolled in trials that are identified as unreported.
Checks for consistency with previous work
A previous paper automatically matching registry entries to PubMed records and clinicaltrials.gov results found that 55% had no evidence of results 11 , consistent with our overall findings. A previous manual audit (of which BG is a co-author) found that 56% of trials conducted at the University of Oxford reported results; our method also found 56% for the same institution 9 . A previous manual audit examined 4347 trials across 51 academic medical centres 7 . We compared their individual study data against ours and found that 2562 trials (62.6%) in their cohort were also in ours, but note that their study only represented 2% of our total cohort. For studies in both cohorts we found 60% reported results, while they found 66%. Of the studies in both cohorts: 1149 were found "reported" by both; 534 were found "unreported" by both; 497 were found "reported" by their method and "unreported" by ours; and 382 were found "unreported" by theirs and "reported" by ours.
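The individual-study comparison described here amounts to a join on registry ID followed by a cross-tabulation of reported status. A minimal sketch, with hypothetical file and column names, is shown below.

```python
# Minimal sketch of the cohort comparison (hypothetical column names).
import pandas as pd

ours = pd.read_csv("all.csv")              # columns assumed: nct_id, has_results
theirs = pd.read_csv("manual_cohort.csv")  # columns assumed: nct_id, reported

both = ours.merge(theirs, on="nct_id", how="inner")   # trials present in both cohorts
table = pd.crosstab(both["has_results"], both["reported"],
                    rownames=["reported (ours)"], colnames=["reported (theirs)"])
print(len(both), "trials in both cohorts")
print(table)  # diagonal = agreement, off-diagonal = disagreement
```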

Discussion
The tool was successfully built, and is now fully functional online. We found non-publication rates consistent with those from previous work using manual searches, and reasonable consistency with individual study matches from a previous manual cohort. A wide range of publication failure rates was apparent in the data.

Strengths and weaknesses
Our tool is the first to provide live, ongoing, interactive monitoring of failure to publish the results of clinical trials. The method of automatic matching has strengths and weaknesses. It can be run automatically, at a lower unit cost than a manual search, and therefore allows coverage of more trials than any traditional cohort study. It also permits repeated re-analysis at minimal marginal cost compared to a manual search.
At the same time, the efficiency of automatic matching brings challenges around specificity and sensitivity. Firstly, there may be false adjudications of non-publication, for example where a trial's results paper does not include its registry identifier. However, since 2005 all major medical journals (through the International Committee of Medical Journal Editors; http://icmje.org/recommendations/browse/publishing-and-editorial-issues/clinical-trial-registration.html) have required trials to be registered, and all trials should include their registry ID in the text. Therefore, in our view, the responsibility for results being undiscoverable, when the registry ID is not included by the trialists, lies solely with the trialists; research that is hard to discover is not transparently reported. We hope that in the future better methods for probabilistic record linkage will also be available for wider use 13 . Secondly, there may be false positives, where a study identified through ID matching and then filtered is in fact not reporting results. We have used standard filters to account for this, and we are keen to improve our method in the light of concrete constructive feedback. Our checks for consistency against overall prevalence findings and individual study data from previous research largely exclude gross errors in the prevalence figures.
Notably, there are specific additional methods for linking clinicaltrials.gov records to PubMed records that we tried and rejected. Some trials have a link to a PubMed record directly in the clinicaltrials.gov results_reference tag, which the ClinicalTrials.gov documentation (https://prsinfo.clinicaltrials.gov/definitions.html) suggests indicates results from a publication. We found 2263 eligible trials that had such tags but no summary results on ClinicalTrials.gov. However, on manual examination, we found these references are often erroneous, and commonly report results of unrelated studies from several years previously. In discussion, clinicaltrials.gov staff confirmed that this field is neither policed nor subject to substantial editorial control (personal communication with Annice Bergeris).
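A minimal sketch of how this rejected check could be run is shown below; it simply counts eligible records that carry a results_reference element but no firstreceived_results_date, using the tag names mentioned above. The file layout is an assumption, and this is an illustration rather than the code used in the tool.

```python
# Minimal sketch of the rejected results_reference linkage check.
import xml.etree.ElementTree as ET
from pathlib import Path

suspect = []
for path in Path("ctgov_xml").glob("*.xml"):
    root = ET.parse(path).getroot()
    has_reference = root.find("results_reference") is not None
    has_summary = root.findtext("firstreceived_results_date") is not None
    if has_reference and not has_summary:
        suspect.append(root.findtext("id_info/nct_id"))

print(len(suspect), "trials have a results_reference but no summary results")
# Manual checks showed these references often point to unrelated, older papers,
# which is why this field was not used as evidence of reporting.
```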

Context of other findings
Our findings are consistent with previous work on publication bias 1 , finding that approximately half of trials fail to report results. Previous studies have used 2007 as their start date for expecting results to be made available, reflecting the FDA Amendment Act 2007. We did not use this date, as this legislation has been widely ignored 5,6 , and because we regard sharing results as an ethical obligation, not merely a legal one. Our methods accept results posted at any time after study completion, and any sponsor posting results for any trial completed since 2006 will see their figures improve in our live data.

Policy implications
We have previously argued that live ongoing monitoring of trials transparency will help to drive up standards, especially if this information is used by clinicians, policymakers, ethics committees, regulators, patients, patient groups, healthcare payers, and research funders to impose negative consequences on those who engage in the unethical practice of withholding trial results from doctors, researchers, and patients 14 . Recent comments by US Vice President Joe Biden threatened to withhold financial support from publicly funded researchers who fail to report clinical trial results, suggesting that some consequences may arise 6 . We would be happy to collaborate with organisations seeking a better understanding of their own failure to publish and wishing to act on these data.
We have also previously argued that medicine has an "information architecture" problem: all publicly accessible documents and data on all clinical trials should be aggregated and indexed for comparison and gap identification, and good knowledge management and better use of trial identifiers will facilitate this 15 . At present, medicine faces serious shortcomings in this area. With 75 trials and 11 systematic reviews published every day on average 16 , better knowledge management must be a priority.

Future research
We have shared all our underlying data so that others can explore in detail non-publication for specific studies, interventions, companies, funders, sponsors, or institutions that interest them. We believe that research work on research methods and reporting should go beyond identifying the overall prevalence of problems, and identify individual people and organisations who are performing poorly, in order to both support and incentivise them to improve. That is only possible with ongoing monitoring and feedback on individual studies, an approach we have taken on other projects such as COMPare 17,18 . We hope that others will also pursue this model of audit and feedback, and assess its impact on performance.

Conclusions
We have designed, built, and launched an easily accessible online service that identifies sponsors who have failed in their duty to make results of clinical trials available.

Software availability
Website: https://trialstracker.ebmdatalab.net
Source code: https://github.com/ebmdatalab/trialstracker 12

The authors have published an online ranking system which illustrates how the major sponsors share their clinical trial information, in particular through reporting on completed trials. This research offers a new way to automatically identify trials registered on ClinicalTrials.gov and match them with their published results, both in the trial registry itself and in the abstracts or metadata of publications indexed in PubMed. This automated process allows much more frequent updates and provides more precise information to the public, in part by encouraging more accessible reporting.
In this review, we would like to focus our comments on the authors' data processing and software. The authors have provided a code repository containing their website along with some Python code related to the data analysis process. The latter comprises a clear and straightforward IPython notebook detailing all the data analysis steps, including raw data processing, missing trial identification, and validation against other studies. We found this an intuitive way to present work of this scale, although as discussed later we would like to suggest more modularization. In general, the code is understandable and easy to read. Both unit tests and behavioural tests are included to give more confidence in its reliability. We were able to re-run the entire IPython notebook with only some minor modifications.
We do have some minor comments and suggestions regarding the coding quality and reproducibility aspects of this project.
We have noticed that the XML parsing and Pubmed data extraction parts break easily due to variations in the source files or network problems. It would therefore be beneficial to make these two parts into functions with associated unit tests to ensure the correctness and robustness of the code.
Compounding the problem, these parts also take a very long time to compute. We left the program running for several days trying to update the trial-abstract database, only to have it fail part-way through. An incremental updating mechanism would help greatly here, for instance adding an extra column to the database to record the last search date so that recently searched entries are not queried again.
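A sketch of what such an incremental mechanism might look like is given below. This is the reviewers' suggestion only, not the authors' implementation; the file and column names (nct_id, pmids, last_searched) are hypothetical.

```python
# Minimal sketch of incremental updating with a "last searched" column.
import pandas as pd
import requests

RECHECK_DAYS = 30
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def search_pubmed(nct_id):
    """Return PubMed IDs mentioning this NCT number ([si] or [tiab] fields)."""
    params = {"db": "pubmed",
              "term": f"{nct_id}[si] OR {nct_id}[tiab]",
              "retmode": "json"}
    resp = requests.get(ESEARCH, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]

db = pd.read_csv("trial_abstracts.csv", parse_dates=["last_searched"])
stale = db["last_searched"].isna() | (
    db["last_searched"] < pd.Timestamp.now() - pd.Timedelta(days=RECHECK_DAYS)
)

for i in db.index[stale]:
    try:
        pmids = search_pubmed(db.at[i, "nct_id"])
    except requests.RequestException:
        continue  # one network failure does not lose earlier progress
    db.at[i, "pmids"] = ";".join(pmids)
    db.at[i, "last_searched"] = pd.Timestamp.now()
    db.to_csv("trial_abstracts.csv", index=False)  # persist after every trial
```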
One hopes that the 'live' website is updated from time to time with more recent results. It would be helpful to have details on how frequently this happens, and whether it is an automated process.
The current data on GitHub have some small differences compared to the results presented in the paper. We fully understand that the data in the repository should be updated and that development is an ongoing process. However, it would have been good from an auditability point of view to make the data which were used for the paper available. For instance, the specific git commit id used for the paper could be given in the paper itself and in the repository's README.
A requirements.txt is provided in the source code to facilitate installing the project's dependencies; however, not all of the dependencies are on the list. Changes in recent versions of some of these cause the code to break. Please specify all the dependencies (even indirect ones), including the versions used, in the requirements.txt file. We have submitted a pull request with the list we found worked.
Overall, the new tool offered by the authors enables more frequent and larger-scale identification of whether trials have been reported. Their code is clear and reflects the methodology faithfully. This tool will help in the push for improving clinical trial transparency.

We have read this submission. We believe that we have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.
No competing interests were disclosed.

Title and abstract
The title is appropriate and describes the content of the paper in one sentence. The abstract starts generally, drills down into the methods concisely, and appropriately discusses the contribution that this manuscript and software project make to the literature.

Article content
Powell-Smith and Goldacre report on a piece of work which will make a substantial contribution to the clinical trials enterprise.
They have developed an open source web application which automatically takes data from the US based ClinicalTrials.gov registry and searches for results (either summary results on ClinicalTrials.gov or an abstract on PubMed). The software then ranks study sponsors by the proportion of trials which have reported results.
The central contribution is an automated system for determining whether a trial registered on ClinicalTrials.gov has posted summary results on clinicaltrials.gov, or has an abstract indexed on PubMed. Automated systems have been explored in the past, but this approach is novel in its on-line availability of data, which means that the dataset is easily searchable through a web-based application. The work hinges on whether their automated system can in fact do this. The authors make a persuasive case that they are able to find summary results and abstracts where these have been published. They demonstrate what they have said they can do in the on-line Jupyter notebook. Additionally, the open source code in the GitHub repository is straightforward to read, and supports their case. Finally, I downloaded the full dataset and explored it, and in the cases which I looked at their spreadsheet had correctly identified completed trials and the accompanying PubMed abstract.
Therefore, although there may be a few trials which have been misclassified, I think that the methods used appear very robust. Additionally, if trials have been misclassified, the authors give suggestions of how to correct this through changes to the journal entries on PubMed, or through posting summary results on ClinicalTrials.gov.
In the discussion the strengths and limitations of their automated approach are carefully elaborated upon.
The key strength is that a large proportion of the clinical trials landscape is included in their study. The limitation is of course that automated analysis may incorrectly label some trials as unreported when in fact they are reported, but my assessment of their raw data is that this must be infrequent, as I have not been able to identify such a case (in an admittedly unscientific sample obtained by scrolling through the raw data and looking at trials which I am familiar with when I see them).

Conclusions
The authors state that they present this work with the aim of improving the clinical trials landscape in terms of the 'information architecture' of missing results. I believe that we should take this work at face value as a genuine, innovative approach to the problem of non-reporting, providing transparency of reporting at the study sponsor level. It is reported carefully. The data presented back up the case for a clear need for improved trial reporting.

Data
This study is an exemplar of how to publish reproducible research. The data, code, and extensive documentation are freely available to download and explore. My only suggestion is to mirror the repository elsewhere in case GitHub ever disappears.

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.
Competing Interests: I was joint co-author on a paper which was cited by this manuscript. Dr Goldacre has cited my work in a statement given to a House of Commons select committee, and has given a statement to me in support of an application which I made to the University of Nottingham, which supported the impact of my work in this field.

Adam Jacobs, Associate Director, Biostatistics, Premier Research, UK
Since my previous comment, I have looked into the under-estimation of disclosure some more, using a more representative sample.
My estimate is that a little over half of the trials identified as "undisclosed" by the Trials Tracker are in fact disclosed, so that the real proportion of undisclosed trials is not 45%, but 21% (12% for industry trials and 26% for non-industry trials).

A quick update: I also checked AstraZeneca (this time a completely randomly chosen company), and the story seems to be similar. They have their own disclosure site (http://www.astrazenecaclinicaltrials.com/ST/Submission/Search), and of the 68 AstraZeneca trials that TrialsTracker reports as overdue, 38 (55.9%) can be found on that site. (But they weren't even uploaded to clinicaltrials.gov, much less published in a PubMed-indexed journal.) Let me repeat that this represents a very bad practice, as these results have practically zero visibility and likely won't be found by researchers, but it is also not fair to call them "unreported".
So while TrialsTracker's results are invalid for AstraZeneca too, they also draw attention to how poor the indexing and dissemination of these results are. (For reasons that are entirely unclear to me, again, at least as far as the uploading to clinicaltrials.gov is concerned.) The limitations of this remark are the same as those of my earlier one.
One might wonder whether it would make sense to "correct" the results of TrialsTracker (i.e. for Sanofi or AstraZeneca) based on these findings. It likely doesn't, even apart from the fact that what it now measures is meaningful (even if it is not "non-publication"), because that would make different sponsors' results incomparable depending on whether they have been manually corrected or not. In this situation it is better to be uniformly wrong than wrong sometimes and correct other times, which would make comparison impossible.
However, in contrast to Adam Jacobs, I did a comprehensive investigation of this issue: I've written an R script that harvests all trials reported on Sanofi's site, and checks them against the master data file of the TrialsTracker project (by filtering all.csv to those trials that were sponsored by Sanofi and are overdue).
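For illustration, a minimal Python equivalent of that cross-check might look like the sketch below (the commenter's original was an R script; the column names and the harvested-list file are assumptions, not the published all.csv schema).

```python
# Minimal sketch of the Sanofi cross-check (hypothetical column names).
import pandas as pd

all_trials = pd.read_csv("all.csv")
harvested = set(pd.read_csv("sanofi_site_trials.csv")["nct_id"])  # IDs scraped from the sponsor site

sanofi_overdue = all_trials[
    (all_trials["sponsor"] == "Sanofi") & (all_trials["is_overdue"])
]
found = sanofi_overdue["nct_id"].isin(harvested)
print(f"{found.sum()} of {len(sanofi_overdue)} 'overdue' Sanofi trials "
      f"({100 * found.mean():.1f}%) appear on the sponsor's own site")
```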
Let me make one thing clear: what Sanofi is doing represents a very bad practice in my opinion. (And frankly, I have no idea why they're not uploading the results to clinicaltrials.gov. It means minimal work; I can't even think of malicious reasons for not doing it…) But, and here I agree with Adam Jacobs, it is also unfair to call these trials "unreported". They're badly reported, sure, but not unreported.
According to my results, there are 285 Sanofi trials in TrialsTracker's database that are listed as "overdue", and of these, 227 (79.6%) can be found on the above page. In other words, among the negatives for Sanofi, at least 79.6% are false negatives. Unfortunately this pretty much invalidates TrialsTracker's findings (about Sanofi, of course) in my opinion.
This situation may be true for other drug companies as well; I have not yet had time to investigate this issue.
Of course, for a complete picture, we should not forget that not only false negatives but also false positives might arise from TrialsTracker's automated method. So, to be fair, those trials that are reported in TrialsTracker as non-overdue should also be checked more rigorously, because mistakes in them might lead to the opposite error, i.e. an overestimation of the reporting rate.
What would make this paper more convincing would be if the sensitivity and specificity of their method were to be calculated by comparison against a gold standard of a thorough manual search. It does not seem that Powell-Smith and Goldacre have done this. Although they have done what they describe themselves as "rudimentary checks" of the validity of their data, there is no calculation of specificity and sensitivity, and the checks are based on a sample of limited scope. It is not clear why that sample was chosen, and whether it was prospectively chosen or chosen post-hoc.
I was curious to see how well their method performed, so I downloaded their raw data and looked up the first 10 "undisclosed" trials sponsored by Sanofi, as this was the sponsor with the largest number of "undisclosed" trials according to the Trials Tracker website. Those trials had the trial identifiers NCT00069888, NCT00081796, NCT00087802, NCT00087958, NCT00094081, NCT00094965, NCT00103649, NCT00104013, NCT00115570, and NCT00123565.
All except 2 of those trials had their results disclosed on Sanofi's own website. Presumably Powell-Smith and Goldacre's algorithm missed them as it did not check any sources except clinicaltrials.gov and PubMed, so it would miss sponsor websites. Of the remaining 2, one (NCT00094081) was published in a peer-reviewed journal (but without the publication mentioning the clinical trials ID, so it would also be missed by an automated search), and only one (NCT00123565) remained undisclosed after a 5-minute search of Google and PubMed. Trial NCT00123565 was of a drug which was abandoned in clinical development in 2008, so no patient is deprived of information on a drug they are taking by the failure to disclose that study.
I do not know whether those 10 trials I happened to pick are representative. However, if they are, it suggests that Powell-Smith and Goldacre have overestimated the number of undisclosed trials by a factor of 10. This would make their results useless for any practical purpose.
Competing Interests: I have previously written articles (such as