Keywords
beta-blocker, DECREASE, data manipulation, inversion method, misconduct
The effect of beta-blockers on perioperative mortality in non-cardiac surgery has been controversial1 due to concerns regarding the scientific integrity of two related clinical trials2–6. Three meta-analyses that included the trials subject to concerns concluded that beta-blockers decrease perioperative mortality7–9, whereas a meta-analysis that excluded the suspect trials concluded that beta-blockers increase perioperative mortality9. In these studies, perioperative mortality was defined as the death rate of patients in the perioperative setting, including the period of admission, anaesthesia with surgery, and postoperative recovery.
The trials subject to concerns regarding scientific integrity were the Dutch DECREASE-I and DECREASE-IV trials4–6. The committees that investigated the integrity of the DECREASE trials reported that data manipulation was likely, but that the extent of the data manipulation remained unclear4–6. Moreover, the latest guidelines still recommend the use of beta-blockers in the perioperative period in certain cases10,11, and some of these guidelines are based on other work by the principal investigator (PI) of the DECREASE trials. Considering the potentially harmful consequences of guidelines based on work by someone who has been repeatedly investigated for breaching scientific integrity, we aim to estimate the extent of data manipulation in the DECREASE studies2–6 to further stimulate the debate on using beta-blockers in the perioperative period for patients undergoing non-cardiac surgery.
The reports on the integrity of the DECREASE trials primarily focused on the provenance of the raw data but did not investigate the extent to which the DECREASE trials deviated from comparable trials. Provenance is primarily concerned with the origins of the data, verifying things such as (but not limited to) informed consent and whether data correspond to patient files. However, the committee reports did not neglect statistical evaluation; according to the report, a statistical expert assessed the applicability of forensic statistical methods6 to evaluate the results of the trials separately (i.e., DECREASE-I, DECREASE-IV), although the report lacks details on how this evaluation took place. The expert concluded that the methods he had previously applied were not applicable in this case. Nonetheless, in the present study we compare across trials, a method that has previously been used to monitor trial data quality or to test for potential data anomalies12,13. Moreover, comparing across trials instead of evaluating them separately has previously proven effective in detecting data manipulation14. Comparing the DECREASE trials to other published trials studying the effectiveness of beta-blockers with respect to perioperative mortality could prove informative about the potential extent of the manipulation in the DECREASE trials.
The effectiveness of perioperative beta-blockade is obscured by the compromised DECREASE trials, potentially interacting with the type of beta-blocker and the way that beta-blockers were administered (i.e., dose and duration of treatment). In the randomized trials on beta-blockers, patients were administered various types of beta-blockers (e.g., metoprolol, bisoprolol, atenolol) in various ways (e.g., intravenously or orally; half an hour before surgery or multiple days before surgery; with or without titration based on heart rate). Factors such as dosage and duration can affect the pharmacological effectiveness with respect to perioperative mortality. Moreover, the highly discrepant results from the DECREASE trials9 might partly be caused by such differences15 and not purely by data manipulation (given that not all data points can be considered manipulated at this point).
To statistically investigate the evidence for data manipulation in the DECREASE studies2,3, we took three steps. First, we reproduced the findings from the 2014 meta-analysis by Bouri et al.9, which contained sufficient information to estimate the deviation of the DECREASE trials from other published trials on beta-blockers. We also included type of beta-blocker to inspect whether it is predictive of the effect of beta-blockers on perioperative mortality. Second, we evaluated the probability that the DECREASE trials (or more extreme effects) arose from the same effect distribution as the non-DECREASE trials, which are assumed to estimate the true effect of beta-blockers on perioperative mortality in patients undergoing non-cardiac surgery. Third, we estimated how many data points would have to be manipulated in order to reproduce the results of the DECREASE trials if the initial, non-manipulated results arose from the effect estimates obtained from the non-DECREASE trials. Considering that the committees investigating the scientific integrity of the DECREASE trials were unable to assess this, we consider it worthwhile to investigate further.
To ensure that we used similar analysis procedures as in the 2014 meta-analysis9, we initially reproduced Bouri et al.'s estimates. This ensured that (1) their results are reproducible and (2) we are using the correct estimates in subsequent steps of our analyses. Using Figure 2 and Figure 3 from the original paper9, we extracted the raw event data for the 2 (control vs experimental) by 2 (event vs no event) design, which we used to recompute the natural logarithm of the risk ratio and its standard error. The extracted event data are available at osf.io/aykeh and our analysis plan was preregistered at osf.io/vnmzc.
We computed the log risk ratio (i.e., log RR) for each study and pooled these using v2.0.0 of the R package metafor16. We estimated a weighted random-effects model using the restricted maximum-likelihood (REML) estimator17 to estimate the variance of effects, and used the default weighting procedure in the metafor package. We added 0.5 to each cell count, as is common in meta-analyses of risk and odds ratios, in order to prevent computational artefacts18. The 2014 meta-analysis9 did not specify the variance estimator used; hence, (minor) discrepancies between our estimates and the original estimates could be due to differences in the estimation procedure.
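To illustrate this step, a minimal R sketch of the pooling procedure in metafor is shown below. The event counts are placeholders rather than the extracted trial data (available at osf.io/aykeh), and the exact escalc settings are our reading of the description above, not the original analysis script.

```r
# Sketch of the pooling step, assuming 2x2 event counts per trial
# (ai = deaths, bi = survivors in the beta-blocker arm;
#  ci = deaths, di = survivors in the control arm).
# The counts below are placeholders, not the extracted trial data.
library(metafor)

dat <- data.frame(
  trial = c("trial_1", "trial_2", "trial_3"),
  ai = c(12, 5, 3), bi = c(488, 245, 97),
  ci = c(9, 6, 2),  di = c(491, 244, 98)
)

# Log risk ratios and sampling variances, adding 0.5 to every cell
# (to = "all") to prevent computational artefacts with zero cells.
dat <- escalc(measure = "RR", ai = ai, bi = bi, ci = ci, di = di,
              data = dat, add = 0.5, to = "all")

# Weighted random-effects model with the REML heterogeneity estimator.
res <- rma(yi, vi, data = dat, method = "REML")
summary(res)

# Pooled estimate back on the risk-ratio scale.
predict(res, transf = exp)
```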
We were able to closely reproduce the estimates for the different sets of studies (Figure 2 of the 2014 meta-analysis9). Bouri et al. differentiated between the estimates from the non-DECREASE trials (k = 9) and the DECREASE trials (k = 2). We confirmed the effect size estimates and the variance estimates for both the non-DECREASE and the DECREASE trials, except for some discrepancies at the second decimal level for the estimated effect sizes and a somewhat larger difference between the variance estimates of the DECREASE studies. Table 1 depicts the original and reproduced values for both sets of studies.
Table 1. Original and reproduced meta-analytic estimates for the non-DECREASE and DECREASE trials.

| | | Risk ratio | RR confidence interval | τ2 |
|---|---|---|---|---|
| Non-DECREASE (k=9) | Original | 1.27 | [1.01; 1.60] | 0 |
| | Reproduced | 1.27 | [1.01; 1.61] | 0 |
| DECREASE (k=2) | Original | 0.42 | [0.15; 1.23] | 0.29 |
| | Reproduced | 0.47 | [0.19; 1.15] | 0.14 |
Second, we meta-analyzed all studies combined, including a dummy predictor distinguishing the DECREASE from the non-DECREASE studies, to reproduce the results presented in Figure 4 of the 2014 meta-analysis9. Our results showed somewhat more evidence against equal subgroups than the original meta-analysis9 (original: χ2(1) = 3.91, p = 0.05; reproduced: χ2(1) = 6.12, p = 0.013). Additionally, the original analyses showed substantial residual heterogeneity (I2 = 74.4%), whereas we found no residual heterogeneity (I2 = 0%). Different variance estimators (e.g., DerSimonian-Laird instead of REML) did not resolve this difference. We tried to clarify these discrepancies by e-mailing the original authors (including a reminder after several weeks), but did not receive a response. Nonetheless, the broad strokes of the meta-regression confirmed that the DECREASE trials were the determining predictor for the effectiveness of beta-blockers (including DECREASE: RR = 0.509; excluding DECREASE: RR = 1.275).
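A minimal sketch of this subgroup meta-regression in metafor, with placeholder effect sizes and a hypothetical `decrease` dummy rather than the original data:

```r
# Sketch of the subgroup meta-regression, assuming per-trial log RRs (yi),
# sampling variances (vi), and a dummy coding trial family. The values
# below are placeholders, not the extracted trial data.
library(metafor)

dat <- data.frame(
  yi = c(0.21, 0.35, -0.10, -1.44, -0.45),
  vi = c(0.04, 0.09, 0.12, 0.20, 0.05),
  decrease = c(0, 0, 0, 1, 1)
)

# Random-effects meta-regression with the DECREASE dummy as moderator;
# the QM test corresponds to the chi-square test for equal subgroups,
# and the output also reports residual heterogeneity (I^2).
res <- rma(yi, vi, mods = ~ decrease, data = dat, method = "REML")
summary(res)

# Subgroup risk ratios implied by the model coefficients.
exp(coef(res)[1])    # non-DECREASE (reference level)
exp(sum(coef(res)))  # DECREASE
```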
Additionally, and in an exploratory fashion, we evaluated the predictive effect of the type of beta-blocker used in the trials. Descriptively, the DECREASE trials remained predictive of decreased mortality (RR = 0.509), whereas the non-DECREASE trials provide tentative evidence that atenolol results in lower mortality (RR = 0.777). Nonetheless, for the other beta-blockers in the non-DECREASE trials, there is descriptive evidence that beta-blockers could increase mortality (bisoprolol: RR = 2.973; metoprolol: RR = 1.303; propranolol: RR = 1.7). Table 2 shows the meta-regression results in full. We do note that the DECREASE studies used only bisoprolol, so any estimates for other beta-blockers are extrapolations.
Table 2. Meta-regression estimates and 95% confidence intervals (log risk ratio scale).

| | Estimate | 95% CI |
|---|---|---|
| Intercept | -0.252 | [-1.228; 0.724] |
| Non-DECREASE | - | - |
| DECREASE | -1.765 | [-5.05; 1.519] |
| Atenolol | - | - |
| Bisoprolol | 1.342 | [-2.015; 4.698] |
| Metoprolol | 0.517 | [-0.49; 1.523] |
| Propranolol | 0.783 | [-1.5; 3.065] |
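As a quick check on how these coefficients relate to the risk ratios quoted above, exponentiating the intercept plus the relevant coefficient recovers the reported values (the rounding is ours):

```r
# Risk ratios implied by the Table 2 coefficients (log RR scale):
exp(-0.252)           # atenolol, non-DECREASE (reference): ~0.78
exp(-0.252 + 1.342)   # bisoprolol:  ~2.97
exp(-0.252 + 0.517)   # metoprolol:  ~1.30
exp(-0.252 + 0.783)   # propranolol: ~1.70
```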
Based on the effect estimates for the non-DECREASE trials from Step 1, we estimated the probability that the observed effects from the DECREASE studies (or more extreme effects) occurred naturally. We assumed that the non-DECREASE studies estimated the true effect distribution of perioperative beta-blockade on mortality, unperturbed by publication bias due to statistical (non)significance. Publication bias was assumed not to be a problem because a substantial number of nonsignificant effects are included in the dataset (9 of 11 results are nonsignificant). Based on this effect distribution, we estimated the veracity of each DECREASE trial separately, that is, the estimated probability of the observed data (or more extreme data) under a given true effect19.
Based on the estimated effect distribution from the non-DECREASE trials, we calculated the probability of each DECREASE trial result, or a more extreme result. In other words, we computed the two-tailed p-value for the null hypothesis that the DECREASE trials arose from the same effect distribution as the non-DECREASE trials (H0: difference in log RR = 0). To this end, we applied a Welch t-test20. As means, we used the observed log RR for the DECREASE trials (i.e., DECREASE-I: -1.44; DECREASE-IV: -0.452) and the meta-analyzed log RR for the non-DECREASE trials (i.e., 0.243). As standard deviations, we used the standard error for the DECREASE trials (i.e., DECREASE-I: 0.061; DECREASE-IV: 0.018) and the standard error of the estimated log RR for the non-DECREASE trials (i.e., 0.002). We initially preregistered that the DECREASE trials would be regarded as fixed in the computation of the veracity, which was erroneous because these trials also have their own standard error; hence, we applied the Welch test to take into account the uncertainty in the estimates of both the DECREASE and non-DECREASE trials.
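For illustration, a generic Welch test from summary statistics can be sketched as follows. Because the paper does not spell out exactly how the standard errors and group sizes enter the degrees-of-freedom calculation, the inputs below are placeholders and the function is not guaranteed to reproduce the reported t-values.

```r
# Sketch of a two-sided Welch test from summary statistics, assuming
# means m1/m2, standard deviations s1/s2, and group sizes n1/n2 are
# available; the inputs in the example call are placeholders, not the
# trial values reported in the text.
welch_p <- function(m1, s1, n1, m2, s2, n2) {
  se2_1 <- s1^2 / n1
  se2_2 <- s2^2 / n2
  t_stat <- (m1 - m2) / sqrt(se2_1 + se2_2)
  # Welch-Satterthwaite approximation to the degrees of freedom.
  df <- (se2_1 + se2_2)^2 /
        (se2_1^2 / (n1 - 1) + se2_2^2 / (n2 - 1))
  p <- 2 * pt(-abs(t_stat), df)
  c(t = t_stat, df = df, p = p)
}

# Placeholder call: a hypothetical trial estimate versus a pooled estimate.
welch_p(m1 = -1.0, s1 = 0.25, n1 = 2, m2 = 0.25, s2 = 0.35, n2 = 9)
```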
Results indicate that the DECREASE trials are highly unlikely under the estimated effect distribution from the non-DECREASE trials. More specifically, the results from DECREASE-I (or more extreme results) have a probability of approximately 1 in 10 000 (t(8) = –6.75, p = 0.000145) and the results from DECREASE-IV (or more extreme results) have a probability of approximately 1 in 1000 (t(8) = –4.996, p = 0.0010587). This indicates that the DECREASE trial results are unlikely to have come from the same population effect distribution as the non-DECREASE trials. Moreover, observing two such extremely unlikely results jointly, as in the DECREASE trials, would occur in only 2 out of 10 million sets of two trials (i.e., 0.0000002) according to this model. Hence, this result indicates that the DECREASE trials are severely discrepant from the non-DECREASE trials.
Results from Step 1 indicated no between-trial variance of the effects (i.e., homogeneity; τ2 = 0); given the small number of trials included (i.e., 9), however, this estimate is highly uncertain. The total N across the non-DECREASE trials was 10 529. We conducted sensitivity analyses to see how dependent the results are on the heterogeneity estimate (not preregistered; osf.io/vnmzc). Fixing the variance estimate τ2 to .5 indicates that the probability of observing the DECREASE trials jointly is approximately 1.2 out of 100 000 (see Figure 1). To put these numbers into context, a variance of 0.25 would suggest that results of perioperative beta-blockade vary substantially due to contextual circumstances of the study, even if perioperative beta-blockade has no effect whatsoever (RRs between 0.779 and 1.284 in ~64% of the cases).
We estimated the number of data points that would need to be manipulated to arrive at the estimates from the DECREASE trials, given that the non-DECREASE trials represent the true effect of perioperative beta-blockade. In contrast to Step 2, which assumes that no data manipulation occurred and that the DECREASE trials arose naturally from the same effect distribution as the non-DECREASE trials, Step 3 assumes that the DECREASE trials might in fact contain manipulated data. The estimates from Step 3 provide an indication of the extent of potential data manipulation in the DECREASE studies4–6,9.
In order to estimate the number of manipulated data points, we first estimated the probability of perioperative mortality (in log odds) in each trial arm for each trial stratum. As such, we estimated mortality odds four times: once per condition (beta-blocker or control) per trial stratum (DECREASE and non-DECREASE trials). For all four combinations of condition and trial type, we ran a meta-analysis applying methods similar to those used in Step 1, resulting in four meta-analytic absolute mortality estimates with corresponding effect variances. Throughout the simulations, we used the point estimates (i.e., fixed effect) to simulate genuine and manipulated data, but supplemented this with distribution estimates (i.e., random effects) as sensitivity analyses.
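A minimal sketch of one such per-arm meta-analysis is shown below, assuming logit-transformed proportions (measure = "PLO" in metafor) as the log-odds outcome; the counts are placeholders and the exact outcome measure used in the paper is our assumption.

```r
# Sketch of the per-arm mortality meta-analysis, assuming deaths (xi) and
# arm sizes (ni) per trial for one condition within one trial stratum.
# measure = "PLO" yields logit-transformed proportions, i.e., log odds of
# mortality; this is one plausible implementation, not necessarily the
# one used in the paper.
library(metafor)

arm <- data.frame(xi = c(10, 4, 2), ni = c(500, 250, 100))
arm <- escalc(measure = "PLO", xi = xi, ni = ni, data = arm)

res <- rma(yi, vi, data = arm, method = "REML")
res$b           # meta-analytic log odds (point estimate)
sqrt(res$tau2)  # between-trial SD, used for the distribution estimate
```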
We applied the inversion method to estimate the number of manipulated data points in the DECREASE trials21. We assumed that if data are manipulated, each data point is manipulated in the same way and to the same extent. The inversion method iteratively hypothesizes that X out of N data points were manipulated (i.e., X = 0, 1, ..., N), assuming they were manipulated in the same way. For each combination of X and trial, we simulated 10 000 datasets. Each simulated dataset contained X manipulated data points and N − X genuine data points. For each simulated dataset (exact simulation procedure in the next paragraph), we determined the likelihood of the results with

L = πE^n11 × (1 − πE)^n12 × πC^n21 × (1 − πC)^n22,     (Equation 1)

where πE indicates the mortality rate in the beta-blocker condition as drawn from the meta-analytic effect distribution and πC indicates the mortality rate in the control condition. We estimated those parameters using the meta-analytic procedure described in the previous paragraph, resulting in the estimates depicted in Table 3. The likelihood was computed under both the estimates from the (allegedly) manipulated trials (i.e., Lmanipulated) and the estimates from the genuine trials (i.e., Lgenuine). Table 4 indicates which cells the various nXX refer to within the (simulated) data. After computing the likelihoods, we compared them to determine whether the simulated data were more likely to have arisen from the genuine trials (Lgenuine > Lmanipulated) or from the manipulated trials (Lmanipulated > Lgenuine). Note that comparing the likelihoods is a minor deviation from the preregistration, where we initially planned on using p-value comparisons (osf.io/vnmzc).
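In code, Equation 1 amounts to the following (computed on the log scale for numerical stability); the function name and placeholder cell counts are ours, while the rates in the example call are the Table 3 point estimates.

```r
# Log-likelihood of a 2x2 table (Table 4) under given mortality rates,
# following Equation 1; pi_e / pi_c are the mortality rates in the
# beta-blocker / control condition.
loglik_2x2 <- function(n11, n12, n21, n22, pi_e, pi_c) {
  n11 * log(pi_e) + n12 * log(1 - pi_e) +
  n21 * log(pi_c) + n22 * log(1 - pi_c)
}

# A simulated table counts as "more likely manipulated" when the
# log-likelihood under the DECREASE estimates exceeds that under the
# non-DECREASE estimates (cell counts here are placeholders).
ll_manip   <- loglik_2x2(5, 50, 10, 47, pi_e = plogis(-3.629), pi_c = plogis(-2.498))
ll_genuine <- loglik_2x2(5, 50, 10, 47, pi_e = plogis(-3.034), pi_c = plogis(-3.208))
ll_manip > ll_genuine
```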
Table 3. Meta-analytic log odds of perioperative mortality and τ per condition and trial stratum. We used these parameters to estimate the number of manipulated data points with the inversion method.

| | | log(odds) | τ |
|---|---|---|---|
| DECREASE (k=2) | Beta-blocker | -3.629 | 0.588 |
| | Control | -2.498 | 1.644 |
| Non-DECREASE (k=9) | Beta-blocker | -3.034 | 0.712 |
| | Control | -3.208 | 1.001 |
Table 4. Cell notation for the (simulated) 2 × 2 mortality data.

| | Dead | Alive |
|---|---|---|
| Beta-blockers | n11 | n12 |
| Control | n21 | n22 |
For each hypothesis of X out of N manipulated data points, we computed the probability that the manipulated data are more likely than the genuine data (pM = P(Lmanipulated > Lgenuine)). Based on pM, we computed the confidence interval for the estimated number of manipulated data points X (i.e., [XLB; XUB]). For a 95% confidence interval, the lower bound XLB is the X for which pM is closest to .025, whereas the upper bound XUB is the X for which pM is closest to .975.
We computed pM for all X out of N manipulated data points in 10 000 randomly generated datasets, which were generated in three steps. For each dataset we:
1. Sampled (across conditions, without replacement) X fictitious participants that would be the result of data manipulation.
2. Determined the population mortality rate for each condition (i.e., for each cell, based on the estimates from Table 3). Either the meta-analytic point estimate was used, or a population effect was randomly drawn from the meta-analytic effect distribution.
3. Simulated the number of deaths for the different conditions using a binomial distribution based on the mortality rate as determined in step 2, resulting in the cell counts as in Table 4.
Based on the meta-analytic effect from step 2 and the cell sizes from step 3, we computed the likelihoods Lmanipulated and Lgenuine using Equation 1. As before, we computed pM, the probability that the data are more likely under the estimates resulting from the (allegedly) manipulated data (i.e., the DECREASE trials) than under the estimates resulting from the genuine data (i.e., the non-DECREASE trials; pM = P(Lmanipulated > Lgenuine)).
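To make the procedure concrete, below is a condensed R sketch of the inversion method for a single trial, assuming equal arm sizes, the Table 3 point estimates, and 2 000 rather than 10 000 replicates per hypothesized X. The arm split, helper names, and bookkeeping are our assumptions and simplifications, not the preregistered analysis code (osf.io/vnmzc).

```r
# Condensed sketch of the inversion method for one trial, assuming equal
# arm sizes and the Table 3 point estimates.
set.seed(1)

n_total <- 112             # e.g., DECREASE-I sample size
n_bb    <- n_total %/% 2   # assumed beta-blocker arm size
n_ctrl  <- n_total - n_bb  # assumed control arm size

# Mortality rates implied by the Table 3 log odds (point estimates).
p_genuine <- c(bb = plogis(-3.034), ctrl = plogis(-3.208))  # non-DECREASE
p_manip   <- c(bb = plogis(-3.629), ctrl = plogis(-2.498))  # DECREASE

# Log-likelihood of the simulated deaths under a given pair of rates;
# binomial constants cancel when comparing the two likelihoods.
loglik <- function(deaths_bb, deaths_ctrl, p) {
  dbinom(deaths_bb,   n_bb,   p["bb"],   log = TRUE) +
  dbinom(deaths_ctrl, n_ctrl, p["ctrl"], log = TRUE)
}

p_m <- sapply(0:n_total, function(x) {
  hits <- replicate(2000, {  # 10 000 replicates in the paper; fewer here
    # 1. sample X participants (across arms, without replacement) as manipulated
    manip      <- sample(c(rep(TRUE, x), rep(FALSE, n_total - x)))
    manip_bb   <- sum(manip[seq_len(n_bb)])
    manip_ctrl <- x - manip_bb
    # 2./3. simulate deaths: manipulated points follow the DECREASE rates,
    #       genuine points follow the non-DECREASE rates
    deaths_bb   <- rbinom(1, manip_bb,   p_manip["bb"]) +
                   rbinom(1, n_bb - manip_bb, p_genuine["bb"])
    deaths_ctrl <- rbinom(1, manip_ctrl, p_manip["ctrl"]) +
                   rbinom(1, n_ctrl - manip_ctrl, p_genuine["ctrl"])
    # compare likelihoods under the manipulated vs genuine estimates
    loglik(deaths_bb, deaths_ctrl, p_manip) > loglik(deaths_bb, deaths_ctrl, p_genuine)
  })
  mean(hits)  # pM = P(L_manipulated > L_genuine)
})

# Confidence-interval bounds: the X whose pM is closest to .025 and .975.
c(lower = which.min(abs(p_m - 0.025)) - 1,
  upper = which.min(abs(p_m - 0.975)) - 1)
```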
For DECREASE-I (N = 112), the 95% confidence interval for the estimated number of manipulated data points is [0 – 112] when based on a point estimate and also [0 – 112] when based on a more uncertain distribution estimate. The left column of Figure 2 depicts pM per X manipulated data points (top panel) and the bounds of the confidence interval when the degree of confidence is altered (lower panel). Because pM stays clearly between the dotted lines in the top panel, which depict the 95% CI (top: .975; bottom: .025), it becomes apparent that the degree of uncertainty is too high to make any reasonable estimates about the number of manipulated data points with sufficient confidence. This is partly due to the small sample size of the DECREASE-I trial (i.e., N = 112) and the availability of only summary results. Only when the degree of confidence is lowered to around 75% does the interval not span the entire sample size. As such, based on the summary results, little can be said about the extent of the data manipulation that occurred in the DECREASE-I trial, affirming the conclusions of the original committee report6.
Figure 2. The top row panels indicate pM (y-axis) for all X out of N manipulated data points (x-axis). The bottom row indicates the estimated interval of manipulated data points (y-axis) when varying the degree of confidence (x-axis). Dotted lines indicate the bounds of a 95 percent CI. The online version of this figure is interactive.
For DECREASE-IV (N = 1066), the 95% confidence interval for the estimated number of manipulated data points is [3 – 1066] or [10 – 1066] when based on a point estimate or a more uncertain distribution estimate, respectively. The relatively minor difference between these estimates indicates a high degree of confidence that data manipulation did occur, based on the difference in the trial results alone. Nonetheless, the range of potentially manipulated data points still spans approximately 1000 data points, which indicates that the summary results are insufficient to provide more than an estimated lower bound. In other words, it is possible that not all data were manipulated (i.e., N = 1066), but at least some were, increasing the importance of well-documented data provenance to discern genuine from falsified data.
The effect of beta-blockade on perioperative mortality was already unclear based on the investigations regarding scientific integrity; our results strongly affirm that the empirical evidence from the DECREASE trials is highly discrepant from other trials supposedly studying the same effect (i.e., the effectiveness of beta-blockers in decreasing perioperative mortality). Our results indicate that the results from the DECREASE trials are nearly impossible to have arisen from the same effect inspected by the non-DECREASE trials, unless we assume that at least some of the data were manipulated. As such, if the DECREASE-I and DECREASE-IV trials truly investigate the same effects as the non-DECREASE trials, as is often assumed9, their scientific validity should be regarded as highly problematic and untrustworthy when assessing the effectiveness of beta-blockade on perioperative mortality. Nonetheless, the original papers that presented these trial results have not yet been retracted2,3, despite the integrity reports4–6.
Our approach to estimating the number of manipulated data points has one major limitation that we would like to highlight: multiplicity. For each estimated proportion of manipulated data points, there is another smaller (or larger) proportion with more (or less) extremely manipulated data points. This problem is similar to how various samples can give rise to the same mean while containing vastly different individual scores (e.g., -2.5 and +2.5 versus -100 and +100; both give a mean of zero). Nonetheless, this limitation still allows us to estimate whether any data manipulation occurred, because there is no multiplicity in not manipulating data.
The ESC/ESA and ACC/AHA guidelines11,22 on perioperative beta-blockade already excluded the DECREASE trials from their assessment and explicitly state that other trials by Poldermans are excluded as well. However, upon close inspection of the reference lists, the ACC/AHA guidelines still cite four trials with Poldermans as an author as evidence for the guidelines2,23–25, of which two were already inspected by the scientific integrity committees of Erasmus MC2,23. In the ACC/AHA guidelines, the following is said about studies conducted by Poldermans:
"If nonretracted DECREASE publications and/or other derivative studies by Poldermans are relevant to the topic, they can only be cited in the text with a comment about the finding compared with the current recommendation but should not form the basis of that recommendation or be used as a reference for the recommendation."11
Nonetheless, references are made without clear comments. Given that our results confirm problems in the DECREASE-I and DECREASE-IV trials, there is reason to distrust trials by Poldermans. For the integrity of the guidelines and the safety of patients, we propose that investigations be initiated into works in which Poldermans was involved and which were not cleared by the scientific committees of Erasmus MC in their misconduct investigations. In particular, the papers cited as evidence in the ACC/AHA guidelines should be investigated, considering that they directly affect patients and their treatment.
Previously, further investigation of trials by Poldermans was deemed unfeasible due to the lack of raw data; here we indicate methods that do make it feasible. Based on just event-count data and trials that supposedly investigate the same effect, we were able to estimate whether part of the data were in fact manipulated and whether the results were consistent with trials investigating the same effect. The results clearly indicated that they were not.
The results of our analyses also highlight that, despite the lack of raw data, summary results from larger samples allow for more precise estimates of the number of manipulated data points when similar trials are available. Moreover, when using the inversion method, larger trials (e.g., DECREASE-IV) result in relatively more certainty about the estimated number of manipulated data points than smaller trials (e.g., DECREASE-I). This increased certainty is due to decreased standard errors of the estimated effects, resulting in higher sensitivity to data anomalies. Nonetheless, much residual uncertainty remains: summary results simply contain less information than raw data. As such, raw data availability would improve the options for detecting potential anomalies (note: raw data are available for DECREASE-VI, but upon a freedom of information request by the first author, Erasmus MC refused to share these data; see osf.io/zv953/ for the original Dutch correspondence). The results also highlight that a manipulator wishing to remain undetected would have an interest in fabricating small and imprecise studies, which ultimately detracts from the scientific value of such a study and hence, hopefully, from the individual reward for manipulation through reduced impact.
With respect to clinical practice, the results provide some tentative evidence that the type of beta-blockade can severely influence perioperative mortality. Our reanalysis of the Bouri et al.9 data indicates that the type of beta-blockade can reverse the effect on perioperative mortality, even after taking into account whether a study belongs to the DECREASE family. As such, atenolol tentatively seems to decrease perioperative mortality, whereas the others (metoprolol, propranolol, bisoprolol) seem to increase it. However, there seems to be covariation with respect to treatment administration, duration, and dose, which further confounds whether the treatment effect is due to the type of beta-blocker or to one of these other parameters. There are too few studies (k = 11) to properly discern the various treatments from each other, requiring a new randomized trial with high statistical power to determine moderating factors (if any). This affirms the statement from the ESC/ESA guidelines that "high priority needs to be given to new randomized clinical trials to better identify which patients derive benefit from beta-blocker therapy in the perioperative setting, and to determine the optimal method of beta-blockade"22.
Moreover, the DECREASE and non-DECREASE trials seem to apply beta-blockade from different conceptual viewpoints, which could confound the effectiveness of beta-blockade. The non-DECREASE trials seem to focus purely on the application of beta-blockers in itself, whereas the DECREASE trials use beta-blockade as a proxy to decrease resting heart rate2,3. As such, the DECREASE studies applied beta-blockade at least a week in advance, specifically in order to lower patients' resting heart rate to <70 bpm and potentially habituate the patient to the effects of the beta-blockade. Other studies applied the beta-blockade just prior to the surgery (at most one day prior), and therefore seem to target the treatment itself rather than the proxy of a lowered heart rate. As such, the differences between the DECREASE and non-DECREASE trials might also partly be a consequence of the different approaches in the various trials. Whether these differences matter for treatment decisions is worthy of further research in a clinical trial with high statistical power to detect such differences.
In summary, our research indicates that the DECREASE trial results are nearly impossible if we assume they investigate exactly the same effect as the non-DECREASE trials and, under that assumption, our results provide some evidence that at least some data points were manipulated. However, these differences might also be due to different conceptual approaches as to how beta-blockade might prevent mortality in non-cardiac surgery. Given these findings, we recommend renewed investigations into Poldermans' work, especially those works still referenced without proper notice by guidelines on the use of beta-blockers. Moreover, it remains unclear whether beta-blockers are effective in preventing mortality in non-cardiac surgery patients. Considering this, we recommend new and more extensively controlled, confirmatory trials to determine whether there is any use in administering beta-blockers to decrease perioperative mortality; at the moment, there is insufficient evidence to determine any positive effect of beta-blockers on mortality rates.
All manuscript materials are available at https://github.com/chartgerink/2015poldermans and are preserved at Zenodo (doi.org/10.5281/zenodo.845354)26.
CHJH was funded by the Office of Research Integrity during part of this project (HHS-ORI; ORIIR160019).
The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.