Considerations for clinical read alignment and mutational profiling using next-generation sequencing [version 1; peer review: 2 approved, 1 not approved]

Next-generation sequencing technologies are increasingly being applied in clinical settings; however, the data are characterized by a range of platform-specific artifacts, making downstream analysis problematic and error prone. One major application of NGS is the profiling of clinically relevant mutations, whereby sequences are aligned to a reference genome and potential mutations assessed and scored. Accurate sequence alignment is pivotal to reliable assessment of potential mutations, yet selection of appropriate alignment tools is a non-trivial task, complicated by the availability of multiple solutions each with its own performance characteristics. Using BRCA1 as an example, we have simulated and mutated a test dataset based on Illumina sequencing technology. Our findings reveal key differences in the performance of a range of common commercial and open source tools and will be of importance to anyone using NGS to profile mutations in clinical or basic research.


Introduction
Since their emergence in 2005, next-generation sequencing (NGS) technologies have proven prolific tools in the research setting, permeating a variety of scientific disciplines and demonstrating a range of applications that seems to be limited only by the imagination of the sequencing community. The technology continues to develop at a rapid pace, with established instrument manufacturers regularly augmenting their product portfolios and an increasing number of start-up companies promising to disrupt the market. Beyond basic research applications, NGS technologies are now increasingly being applied in the clinical environment, driven partly by their rapid maturation and the arrival to market of smaller, cheaper sequencing platforms.
The potential clinical application of NGS has a broad scope ranging from full human genome profiling 1 to investigation of the microbiome 2 and includes applications such as biomarker discovery, patient diagnosis, prediction of drug response and patient stratification for clinical trials. Such applications often involve the targeted profiling of genes known to be of clinical relevance. These genes harbor diagnostically relevant variants including single nucleotide polymorphisms (SNPs), and small insertions and deletions (INDELs). Individual genes have previously been interrogated in clinical testing using traditional techniques such as Sanger sequencing; however, NGS technologies have already begun to supplant the previous tools of choice in these areas, offering increased speed and throughput with reduced running costs.
Despite many successes and increasing uptake, the data generated by NGS analyzers are not perfect, with each platform yielding characteristic errors and biases. Furthermore, NGS technologies produce reads that are much shorter than those traditionally produced by Sanger sequencing methods, and this can complicate matters further, especially in genomes containing a large proportion of repetitive elements 3 . The effect of these problems is most visible in large scale studies such as genome-wide sequencing, where a recent study reported a 1 million variant platform-based discrepancy for a single genome 4 . This fact bestows responsibility on algorithm and software developers and on downstream users to develop a deep understanding of the various data types and their idiosyncrasies and to apply this appreciation in their analysis and interpretation, in order to correct or compensate for potential errors. Despite forming an area of active research, data interpretation remains an issue and is no doubt a factor feeding the inertia of many clinical facilities that are reluctant to adopt the new technologies 5 .
Two major computational steps in variant detection from NGS data are read alignment, whereby the data are mapped to corresponding locations on a target genome, and mutation calling, whereby nucleotides differing from the target genome are assessed and scored on their likelihood of representing a genuine mutation versus an error. While these two stages of analysis may be supplemented with various pre- or post-processing techniques, they represent the most crucial steps and therefore the area of most active software and algorithm development.
Aligners of choice have begun to emerge 6,7 ; however, their strengths are often application-specific and different tools are recommended depending on the sequencing platform and individual study goals. Aligners generally fall into the categories of being based on either hash tables or suffix trees 8 . Suffix tree-based aligners such as Heng Li's BWA 9 are characterized by their speed and memory-efficiency whilst generally achieving lower sensitivity than hash-based counterparts such as Stampy 10 . There is thus a necessary trade-off between speed and sensitivity at the read alignment stage, with speed often being prioritized due to the volumes of data produced by NGS technologies and the corresponding time required for analysis. Aligners can be further classified as gapped or ungapped based on their ability to produce successful alignments in the presence of small INDELs. Aligners including BWA and Stampy have been shown to produce alignments with a fair degree of success for reads containing a range of INDEL sizes 10 ; however, such abilities will vary based on a range of complicating factors including the size and location of the INDEL. As well as generating continuous controversy 11 , the BRCA1 gene presents a particularly interesting set of alignment challenges due to a disproportionately high concentration of INDELs greater than eight nucleotides in length. In fact, 3% of known deleterious mutations in the Breast Cancer Information Core (BIC) database 12 fall into this category, representing a significant overrepresentation when compared to a healthy genome. Furthermore, the BRCA1 gene contains multiple areas of high shared identity in the form of tandem repeats, posing another difficulty in achieving accurate read mapping. Challenges like this pose particular difficulties in the clinical setting, where errors have the potential to translate to misdiagnosis or mistreatment, directly affecting and endangering the lives of patients.
No gold-standard clinical alignment tools yet exist, and numerous publicized examples of early translational work appear to base their choice of tool on user-friendliness or availability of a graphical user interface rather than assessments of performance. We have investigated the performance of a range of popular alignment tools and assessed their ability to accurately detect known mutations. Several of these tools are already being used as components of diagnostic workflows in the clinical setting. Here we present data generated using simulated reads derived from the human BRCA1 gene. Our findings demonstrate the widely varying abilities of common read alignment tools and their impact on downstream variant calling. Furthermore, the results suggest a need for careful and thorough evaluation of all tools being used in a particular analysis pipeline through simulation and analysis of data of known constitution.

Aligners
A range of open source and commercial alignment tools were selected for assessment based on their reported ability to facilitate detection of both SNPs and INDELs as well as frequency of citation in both scientific and commercial literature. The aligners included in the comparison were: BWA (0.5.9-r16), Bfast (0.7.0a) 13 , Bowtie 2, Stampy, Novoalign, Smalt, Mosaik, CLC Bio's aligners, Omixon's Variant Toolkit and the Softgenetics Nextgene (2.2) aligner.

Read simulation
Stampy was used to simulate sixty-seven groups of 200,000 90bp paired-end FASTQ Illumina reads from the human BRCA1 gene (hg19) with an appropriate error profile. Each sequence grouping was mutated in-silico with custom scripts used to introduce a combination of 20 SNPs and 13 INDELs. These were selected from a test set of 2211 (1299 unique) known BRCA1 variants containing 1340 SNPs, 320 insertions and 551 deletions.
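The custom mutation scripts are not reproduced in the text; as a minimal sketch of the in-silico mutation step (the position and allele encodings here are illustrative assumptions, not the study's actual scripts), SNPs and INDELs can be spliced into a reference sequence as follows:

```python
def mutate_sequence(seq, snps, indels):
    """Introduce SNPs and INDELs into a reference sequence.

    snps:   list of (0-based position, alt_base) tuples.
    indels: list of (0-based position, change) tuples, where an int change
            means "delete that many bases" and a str means "insert this".
    Positions refer to the original coordinates; edits are applied right
    to left so earlier coordinates remain valid.
    """
    bases = list(seq)
    edits = [(pos, ("snp", alt)) for pos, alt in snps]
    edits += [(pos, ("indel", change)) for pos, change in indels]
    for pos, (kind, change) in sorted(edits, key=lambda e: e[0], reverse=True):
        if kind == "snp":
            bases[pos] = change              # substitute a single base
        elif isinstance(change, int):
            del bases[pos:pos + change]      # deletion of `change` bases
        else:
            bases[pos:pos] = list(change)    # insertion of a string
    return "".join(bases)
```

Applying edits right to left keeps the panel's reference coordinates valid throughout, mirroring how variant panels are specified against the reference.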

Read alignment
Reads were aligned to hg19 chromosome 17 on a HP DL585 G6 server with 4 six-core AMD Opteron 2.8GHz processors and 256GB of RAM. Multi-threading with the maximum number of threads supported by the aligner was utilized. Each aligner was run with parameters as close as possible to default. Each aligner was run in both single-end and paired-end alignment mode, with half of the paired-end reads being used to simulate a single-end read dataset. Run-times were benchmarked based on the wall-clock time taken to produce a SAM format alignment for all 67 sets of FASTQ reads.
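Wall-clock benchmarking of this kind can be sketched with a small helper; the argv passed in is caller-supplied, since the exact command lines used in the study were not published:

```python
import subprocess
import time

def wall_clock_seconds(argv):
    """Run one aligner command (argv as a list of strings) and return the
    elapsed wall-clock time in seconds. Output is discarded; any failure
    (non-zero exit) raises rather than being silently timed."""
    start = time.perf_counter()
    subprocess.run(argv, check=True,
                   stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return time.perf_counter() - start
```

Timing the full invocation this way captures everything the user actually waits for, including program start-up and any I/O, which is the relevant quantity for pipeline planning.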

Calling of SNPs and INDELs
Each SAM formatted file was converted to BAM format and processed to ensure downstream compatibility with GATK 16 using a combination of tools from the Picard collection (SamFormatConverter, AddOrReplaceReadGroups, SortSam and BuildBamIndex). BAM files were then processed in a GATK-based pipeline. The pipeline consisted of local realignment around INDELs (RealignerTargetCreator and IndelRealigner), quality score recalibration (CountCovariates and TableRecalibrator) and finally variant calling (UnifiedGenotyper). The wrapper scripts sam2bam.sh and gatk.sh are provided and can be used to recreate the processing steps from alignment files (SAM format) to variant call (VCF format).
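The step ordering described above can be captured programmatically; this is a sketch of step order only (flags and jar invocations omitted), with the wrapper scripts remaining the authoritative record of the exact calls:

```python
def pipeline_steps():
    """Ordered (toolkit, tool) pairs for the SAM-to-VCF pipeline, matching
    the sequence described in the text. Command-line flags are omitted."""
    return [
        ("Picard", "SamFormatConverter"),      # SAM -> BAM
        ("Picard", "AddOrReplaceReadGroups"),  # attach read-group metadata
        ("Picard", "SortSam"),                 # coordinate-sort the BAM
        ("Picard", "BuildBamIndex"),           # index for random access
        ("GATK", "RealignerTargetCreator"),    # find intervals around INDELs
        ("GATK", "IndelRealigner"),            # local realignment
        ("GATK", "CountCovariates"),           # model base-quality covariates
        ("GATK", "TableRecalibrator"),         # apply quality recalibration
        ("GATK", "UnifiedGenotyper"),          # call SNPs and INDELs -> VCF
    ]
```

Encoding the ordering once makes it easy to drive all 67 sample sets through an identical sequence of steps.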

Mutation panel
The reads and mutation panel utilized here represent a challenging and multi-functional test set with widely varying INDEL sizes (Table 1) and an extensive range of SNPs providing a useful means of assessing the aligners' single and paired-end modes. The reads were created to contain only known mutations from the human BRCA1 gene thus facilitating downstream assessment of mutation profiling accuracy whilst remaining comparable to real-world data. Reads were simulated to closely match the error profile of Illumina's sequencing technology enabling a further level of realism to be captured in the simulated test-set. Only homozygous variants at relatively high levels of sequence coverage (70-140x) were included in the test set to ensure testing of the alignment tools' abilities rather than the quality of the downstream variant calling methods.

Run-times
Run-times varied widely from seconds to hours and are shown in Table 2. Novocraft's Novoalign software performed fastest and was closely followed by BWA and Bowtie 2 in both single and paired-end mode, whilst Bfast's paired-end mode represented the slowest run-time by almost half a day. Nextgene was excluded from this comparison because it is Windows-based software and it was not possible to assess run-times on the same hardware as for the other aligners. The criticality of program speed is largely dependent on the end-application. For example, targeted sequencing experiments will generally involve much smaller data volumes than whole human genome sequencing. It should also be borne in mind that read generation by the sequencing machine itself is a rate-limiting step, and the sequence generation and alignment steps can be run in parallel. With the exception of the paired-end run with Bfast, most alignment times recorded here might be considered manageable for most purposes.

Sensitivity of detection
The greatest overall sensitivities of detection were achieved by Novoalign and Omixon's Variant Toolkit in paired-end and single-end mode respectively (Figure 1, Table 3). Stampy also enabled highly sensitive mutation detection, with Bfast performing least favorably. Sensitivities were also assessed based on the category of mutation (Table 4).
Some aligners such as Smalt, Bowtie 2 and CLC were clearly seen to perform more strongly on detection of SNPs than INDELs in general. Nextgene performed similarly for SNPs and deletions but had lower sensitivity for insertion detection. BWA showed obvious decreases in ability to detect INDELs when shifting from paired-end to single-end mode. In contrast, Novoalign, Omixon and Stampy performed well regardless of run-mode or mutation type. Overall, Novoalign was the best performer in paired-end mode while Omixon achieved the highest sensitivities in single-end mode. In a clinical environment, sensitivity will likely represent the most important metric in evaluating alignment software; however, other factors may also be of importance, depending on the test in question. Specificity values are not included here as they were non-discriminatory in this context due to low numbers of false positives relative to the high number of true negatives.
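Per-category sensitivity of the kind reported in Table 4 can be computed by partitioning the truth set by mutation type; the (position, ref, alt) variant encoding below is an illustrative assumption:

```python
def per_category_sensitivity(truth, called):
    """Fraction of simulated variants recovered, split by mutation type.

    truth, called: sets of (position, ref_allele, alt_allele) tuples; the
    category is inferred from the relative allele lengths."""
    def category(ref, alt):
        if len(ref) == len(alt) == 1:
            return "SNP"
        return "insertion" if len(alt) > len(ref) else "deletion"

    totals, hits = {}, {}
    for variant in truth:
        cat = category(variant[1], variant[2])
        totals[cat] = totals.get(cat, 0) + 1
        if variant in called:                  # exact-match recovery
            hits[cat] = hits.get(cat, 0) + 1
    return {cat: hits.get(cat, 0) / n for cat, n in totals.items()}
```

Exact-match comparison is the simplest possible criterion; a production evaluation would also normalize variant representation before comparing.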

Incorrect identification of mutations
Controlling the rate of false positive results is clinically important in avoiding unnecessary treatment, expense and patient anxiety. The number of incorrectly identified mutations varied widely depending on aligner and run-mode (Figure 2, Table 5). The highest rates were obtained using Bfast, followed by CLC Bio's Beta aligner, Nextgene, Mosaik, Stampy, and Bowtie 2, while Novoalign and Smalt were the strongest performers, followed closely by Omixon and CLC. The number of false positives alone is of limited utility in assessing aligner performance; however, positive predictive value (PPV) provides a useful metric which combines counts of both true and false positives in a single value (Table 6). PPV was calculated based on the equation:

PPV = True Positives / (True Positives + False Positives)
No inferences were made about prevalence in calculating the values. Novoalign had the highest PPV in both single and paired-end mode. Smalt, CLC and Omixon's Variant Toolkit also performed strongly on this metric. Notably Stampy performed relatively poorly in contrast to its high sensitivity.
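The equation above translates directly into code; the zero-denominator guard is a defensive addition, not part of the published formula:

```python
def ppv(true_positives, false_positives):
    """Positive predictive value: the fraction of called mutations that are
    genuine. No prevalence adjustment is applied, matching the text."""
    called = true_positives + false_positives
    return true_positives / called if called else 0.0
```

Because PPV conditions on the set of calls actually made, it complements sensitivity: an aligner can recover every true variant yet still score poorly here if it also produces many spurious calls.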

Paired-end vs. single-end reads
Notably, Bfast, Nextgene, Stampy, Mosaik and Bowtie 2 all showed obvious increases in the number of false positives detected in paired-end mode relative to single-end mode. This strong performance in single-end mode is relevant not only from a diagnostic standpoint, but also from a clinical cost-saving perspective, as paired-end protocols ultimately incur extra costs per run versus single-end protocols. While paired-end reads generally represent a saving in terms of cost per megabase, they effectively double sequencing output and this may not be a cost-effective option depending on the logistics of the individual run. Furthermore, researchers who outsource sequencing will often see the available protocol for their relatively small sequencing project dictated by the larger projects they are multiplexed alongside.

Effect of INDEL size on detection
Not unexpectedly, there appeared to be a general trend of INDEL size affecting most aligners' ability to facilitate downstream calling (Figure 3). The size of the effect varied by aligner, with BWA and Novoalign showing good detection rates for all but the largest deletions, while others such as Bowtie 2 and the CLC aligners were not successful far beyond a 10bp INDEL size. Nextgene showed better sensitivity for insertions than deletions. Stampy and Omixon's Variant Toolkit were the only two aligners to detect the largest deletions. This highlights a need for those involved with testing and analysis to develop an appreciation of the various mutations that might exist in their target genes and to select their analysis tools appropriately. INDELs in the range represented by the BRCA1 mutation panel have real-world relevance in genetic disorders, and strong aligner performance on larger INDELs appears to be the exception rather than the rule.
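Assessing whether a simulated INDEL was recovered requires matching by approximate position as well as by size, since aligners may legitimately place a gap anywhere within a repeat; the tolerance value below is an assumption for illustration:

```python
def matched_indel(truth_pos, truth_len, calls, pos_tol=5):
    """Return True if any called INDEL lies within pos_tol bases of the
    simulated one and has the same length change (negative = deletion,
    positive = insertion). pos_tol allows for left/right gap shifting."""
    return any(abs(pos - truth_pos) <= pos_tol and length == truth_len
               for pos, length in calls)
```

Requiring an exact length match but tolerating small positional shifts is a common compromise; a stricter evaluation would normalize both call sets to a canonical (e.g. left-aligned) representation first.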

Summary and conclusions
Using a simulated, targeted sequencing scenario with Illumina read data, the work presented here highlights several important considerations regarding aligner choice in studies involving profiling of mutations. Furthermore, the data presented go some way to characterizing the performance of a comprehensive selection of commonly used aligners and should represent a useful resource for anyone focused on similar scientific studies.
Whilst the dataset used in this study was engineered to include a challenging range of mutations and efforts were made to simulate the error profile of Illumina sequencing technology, it is nevertheless a simplified representation of real-world data. Experimental artifacts such as PCR stutter have the potential to present further challenges to alignment algorithms and there is no consideration of such issues here. Furthermore, the test dataset used in this study produced a uniform, high coverage of the target gene and only homozygous variants were simulated. Finally, the use of only a single variant-caller in the study means that some of the errors encountered may not be due to alignment issues. The aim is to follow up the current study by focusing on an expanded gene-set, alternative variant-callers, homo- and heterozygous mutations and different sequence formats. Nonetheless, this focused study demonstrates the utility of simulated data in assessing program performance.
With the exception of Bfast, all aligners performed relatively well on the BRCA1 dataset. Clinical applications necessitate the use of the most highly accurate solutions, however. Only Novoalign, Omixon's Variant Toolkit and Stampy achieved 99% or greater sensitivity in both paired-end and single-end mode. Omixon and Stampy were the only two aligners to detect the longest deletions in the dataset however Stampy's performance was let down by a false positive rate which would be considered unacceptably high for most applications. While Novoalign did not detect the largest deletions, it was the fastest aligner and the most sensitive in paired-end mode. Nevertheless, assuming the longer run-times are not an issue, Omixon's superior sensitivity in single-end read mode likely makes it the best option when paired-end protocols are not possible. It should also be mentioned that as a freely available, open-source tool, BWA's paired-end performance is laudable and goes some way to justifying its widespread use and popularity.
While the tests here produce some clear winners, they also serve to highlight that program performance can vary widely based on the fine details of a particular run. Even the strongest overall performer can be found lacking in some respects, and this means that researchers should be vigilant in their selection of tools for a particular application. In certain instances it may even be necessary to combine two or more approaches to ensure that all relevant aspects of a given dataset are sufficiently characterized, and any approach will still require some level of visual inspection and quality control in a clinical setting. The data presented here should facilitate and expedite selection of the correct aligner for a particular task, but they do not obviate the requirement for careful consideration nor further testing and analysis on the part of the end-user.

Mihaela Pertea
McKusick-Nathans Institute of Genetic Medicine, Johns Hopkins University School of Medicine, Baltimore, MD, USA

Steven L Salzberg
McKusick-Nathans Institute of Genetic Medicine, Johns Hopkins University School of Medicine, Baltimore, MD, USA

In this paper the author sets out to investigate the performance of several alignment tools and to assess their ability to accurately detect known mutations when used in a variant calling pipeline. This is an important issue to address before designing a particular analysis pipeline for variant detection. However, this paper makes multiple very strong claims about the superiority of various alignment algorithms based on highly flawed computational experiments. Overall the results are at best misleading, and many of the conclusions are simply wrong.
Our concerns are related to the following issues:

Concerns about the experimental design:
The experiment claims to measure the accuracy, and in particular the sensitivity and FDR, of many sequence aligners. Unfortunately, it simply doesn't measure anything of the sort. Instead, it measures the sensitivity and FDR of the GATK SNP pipeline, a complex series of programs with many, many parameters, with different aligners fed into the very first step of GATK. GATK is exquisitely sensitive to these parameters; in our experience we can easily increase the number of SNPs 5-fold simply by varying its parameters, REGARDLESS of the alignments provided at the front end. Unless the author optimizes GATK for each aligner - which he explicitly did not do - these results are simply invalid. Thus the whole experiment is deeply flawed.
It is not sufficient, in a benchmarking test like this one, to use only default running parameters (as the author says he did), and to make no effort at careful evaluation of what would be the best parameters to use for each aligner in that specific experiment. If the author wishes to compare aligners as part of a complex pipeline (GATK), he needs to do much more work than the simple push button runs he did here.
The whole point of simulated data ought to be that one can check each read and see if it was aligned to the correct place. This should be easy to do as all the reads are simulated and therefore their location is known a priori. If (and only if) the author compared the alignments to the true alignment, then he could report valid findings about the sensitivity at finding SNPs, indels, etc. He did not do this, which is somewhat astonishing. As it stands, the main results - including Tables 3 and 4 and Figure 1 - are simply wrong.
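This read-level check can be sketched as follows; the 'chrom_pos' read-naming scheme is an assumption for illustration, since simulators encode the true origin differently:

```python
def placement_correct(read_name, aligned_chrom, aligned_pos, tolerance=0):
    """Compare one alignment against the truth encoded in a simulated
    read's name (assumed here to end in '_<true position>'). A tolerance
    of a few bases accommodates soft-clipping at read ends."""
    true_chrom, true_pos = read_name.rsplit("_", 1)
    return (aligned_chrom == true_chrom
            and abs(aligned_pos - int(true_pos)) <= tolerance)
```

Aggregating this predicate over all reads gives an aligner-only accuracy figure that is independent of any downstream variant caller, which is precisely the measurement the review argues is missing.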
Also, in order to make his results reproducible, the author should provide the alignment results for all programs, as well as the exact command lines used for each aligner. Just specifying that he ran the aligners with parameters "as close as possible" to defaults is not enough.

Running time evaluations:
Another major conclusion of the paper concerns run times, which the author reports in a separate section (3.2). An obvious flaw here is that running the aligners on such small datasets (each only 200,000 reads) cannot properly differentiate the relative running times of the different programs, especially the faster ones. Exome sequencing, a very common experiment today, generates roughly 100 million reads per experiment -500 times larger than each sample data used here. Whole-genome data sets are much larger. To provide any realistic run time findings, the author needs to load at least an exome-sized data set and run it. He doesn't need to use simulated reads -many exomes are publicly available. Since he is only measuring run time, he doesn't need to worry about the sensitivity of these alignments, just speed.
If the author wants to report findings about run-time, he needs to scrap this experiment and run a more realistic data set. If 100 million reads, not large by today's standards, swamps the ability of any aligner to handle it, then he can report that.
Other comments in the alignment section are not justified. For example, claiming that "most alignment times recorded here might be considered manageable for most purposes" seems to be little more than the author's unsupported opinion, based on a relatively tiny number of reads.

Other significant concerns:
a). The author used the Stampy package to simulate the reads from the BRCA1 region. What was the reason that this particular read simulator was used, and not another one that is independent from all aligners involved? E.g., the Mason simulator is considered to be relatively realistic. The Stampy simulator might give an unfair advantage to the Stampy aligner.

b). Why did the author align the simulated reads only to chromosome 17? If this is supposed to simulate a targeted sequencing experiment, why not just align to the BRCA1 region, which is far, far smaller than the entire chromosome? A much more realistic design would be to align to the whole human genome, which is normally done for real data where contamination from other parts of the genome is common. The author should also specify how he obtained the index required by the different aligners, and how long it took to create such an index (from the running times of the programs presented in the paper we assume this time was not included).

c). The way the programs were run is completely unclear, since no command-line options are provided. Besides a step required to create an index (see above), some of the aligners require two steps to be run (e.g. BWA requires both an 'aln' and a 'samse/sampe' command to be run; Stampy can be run in a hybrid version with a BWA option first). Were both of these steps included in the running times presented? Most of these programs have many options that can increase their sensitivity at the cost (sometimes small, sometimes not) of increased run time.

d). The author makes a technical error in classifying aligners into two categories, "based on either hash tables or suffix trees." The Burrows-Wheeler Transform (the basis of Bowtie, BWA, and SOAP2) is simply not a suffix tree. Further, it is not only simplistic but incorrect to state that hash-based programs are generally more sensitive, while the ones based on suffix trees are faster. That is wrong in multiple ways; there are many examples of hash-based approaches that are fast but not sensitive, and suffix-tree approaches don't have to be faster. These features (speed/sensitivity) depend much more on the numerous implementation details, of which the author appears to be unaware.

e). The two wrapper scripts (sam2bam.sh and gatk.sh) that the author mentions he made available do not seem to be present.

f). Each of the 67 data sets presented in the paper includes 20 SNPs and 13 indels. Why use 67 data sets? And why have exactly the same number of SNPs and indels in each one? What criteria were used to include these particular numbers of SNPs and indels? Since each data set is representative of only one variant of the BRCA1 gene, how likely is it that in real data these 20 SNPs and 13 indels will appear at the same time in the gene? This is an unrealistic data set that has a bizarrely skewed bias.

g). The author states - when referring to Figure 3 - that the size of the indels influences their detection rates. He specifically says that the "size of the effect varied by aligner with BWA and Novoalign showing good detection rates for all but the largest deletions." This statement is simply not correct: BWA cannot find large deletions (by design). Neither can Bowtie. However, GATK can find larger deletions in some cases, even if the input alignments don't detect them. There are also entirely separate programs (e.g., Pindel) designed to find larger indels, and researchers looking for large indels know about these programs (and use them). This whole discussion again reflects the fundamental flaw in the experimental design: the author is measuring GATK's performance, not the performance of the aligners. In addition, the author's interpretation of Figure 3 seems biased, and is not supported by the data in the figure itself.

Minor concerns:
a). PPV is defined differently in the main body of the paper and in Table 6's caption.

b). The author needs to include citations or at least web addresses for all the aligners presented in the paper.

c). We assume GLG in Table 5 is in fact CLC.

d). Where did the author collect the known mutations for the BRCA1 gene from? He needs to provide citations.

Competing Interests:
No competing interests were disclosed.
We confirm that we have read this submission and believe that we have an appropriate level of expertise to state that we do not consider it to be of an acceptable scientific standard, for reasons outlined above.