Introduction
DNA variant analysis of complete genome or exome data has typically relied on filtering alleles by population frequency and by predicted alterations to the encoded amino acid sequence. Numerous variants of unknown significance (VUS) in both coding and non-coding gene regions cannot be categorized with these approaches. To address these limitations, in silico methods that predict the biological impact of individual sequence variants on protein coding and gene expression have been developed, which exhibit varying degrees of sensitivity and specificity1. These approaches have generally not been capable of objective, efficient variant analysis on a genome scale.
Splicing variants, in particular, are known to be a significant cause of human disease2–5 and indeed have even been hypothesized to be the most frequent cause of hereditary disease6. Computational identification of mRNA splicing mutations within DNA sequencing (DNA-Seq) data has been implemented with varying degrees of sensitivity, with most software evaluating conservation only at the intronic dinucleotides adjacent to the junction (i.e.7). Other approaches are capable of detecting significant mutations at other positions within constitutive, and in certain instances, cryptic, splice sites5,8,9, which can result in aberrations in mRNA splicing. Presently, only information theory-based mRNA splicing mutation analysis has been implemented on a genome scale10. Splicing mutations can abrogate recognition of natural, constitutive splice sites (inactivating mutation), weaken their binding affinity (leaky mutation), or alter splicing regulatory protein binding sites that participate in exon definition. The abnormal molecular phenotypes of these mutations comprise: (a) complete exon skipping, (b) reduced efficiency of splicing, (c) failure to remove introns (also termed intron retention or intron inclusion), or (d) cryptic splice site activation, which may define abnormal exon boundaries in transcripts using non-constitutive, proximate sequences, extending or truncating the exon. Some mutations may result in combinations of these molecular phenotypes. Nevertheless, novel or strengthened cryptic sites can be activated independently of any direct effect on the corresponding natural splice site. The prevalence of these splicing events has been determined by ourselves and others5,11–13. The diversity of possible molecular phenotypes makes such aberrant splicing challenging to corroborate at the scale required for complete genome (or exome) analyses.
This has motivated the development of statistically robust algorithms and software to comprehensively validate the predicted outcomes of splicing mutation analysis.
Putative splicing variants require empirical confirmation based on expression studies from appropriate tissues carrying the mutation, compared with control samples lacking the mutation. For mutations identified from complete genome or exome sequences, corresponding transcriptome analysis based on RNA sequencing (RNA-Seq) is performed to corroborate variants predicted to alter splicing. Manually inspecting a large set of splicing variants of interest against the experimental samples' RNA-Seq data in a program like the Integrative Genomics Viewer (IGV)14, or simply performing database searches to find existing evidence, would be time-consuming for large-scale analyses. Checking control samples would be required to ensure that an observed splicing change is not merely alternative splicing, but is causally linked to the variant of interest. Manual inspection of the number of control samples required for statistical power, to verify that each displays normal splicing, would be laborious and does not easily lend itself to statistical analyses. This may lead either to missing contradictory evidence or to discarding a variant because of altered splicing perceived in control samples that is not, in fact, statistically significant. In addition, a list of putative splicing variants returned by variant prediction software can often be extremely large. The validation of such a large quantity of variants may not be feasible, for example, in certain types of cancer where the genomic mutational load is high and only manual annotation is performed. We have therefore developed Veridical, a software program that automatically searches all given experimental and control RNA-Seq data to validate DNA-derived splicing variants. When adequate expression data are available at the locus carrying the mutation, this approach reveals a comprehensive set of genes exhibiting mRNA splicing defects in complete genomes and exomes.
Veridical and its associated software programs are available at: www.veridical.org.
Methods
The program Veridical was developed to allow high-throughput validation of predicted splicing mutations using RNA sequencing data. Veridical requires at least three files to operate: a DNA variant file containing putative mRNA splicing mutations, a file listing the corresponding transcriptome (RNA-Seq) BAM files, and a file annotating exome structure. A separate file listing RNA-Seq BAM files for control samples (i.e. normal tissue) can also be provided. Here, we demonstrate the capabilities of the software for mutations predicted in a set of breast tumours. Veridical compares RNA-Seq data from the same tumours with RNA-Seq data from control samples lacking the predicted mutation. However, in principle, potential splicing mutations for any disease state with available RNA-Seq data can be investigated. In each tumour, every variant is analyzed by checking the informative sequencing reads from the corresponding RNA-Seq experiment for non-constitutive splice isoforms, and comparing these results with the same type of data from all other tumour and normal samples that do not carry the variant in their exomes.
Veridical concomitantly evaluates control samples, providing for an unbiased assessment of splicing variants of potentially diverse phenotypic consequences. Note that control samples include all non-variant-containing files (i.e. RNA-Seq files for those tumours without the variant of interest), as well as any normal samples provided. Increasing the number of control samples, while computationally more expensive, increases the statistical robustness of the results obtained.
For each variant, Veridical directly analyzes sequence reads aligned to the exons and introns that are predicted to be affected by the genomic variant. We elected to avoid indirect measures of exon skipping, such as loss of heterozygosity in the transcript, because of the possibility of confusion with other molecular etiologies (i.e. deletion or gene conversion) unrelated to the splicing mutations. The nearest natural site is found using the exome annotation file provided, based upon the directionality of the variant, as defined within Table 1. The genomic coordinates of the neighboring exon boundaries are then found and the program proceeds, iterating over all known transcript variants for the given gene. A diagram of this procedure is provided in Figure 1. The variant location, C, refers to the variant itself. JC refers to the variant-induced location of the predicted mRNA splice site, which is often proximate to, but distinct from, the coordinate of the actual genomic mutation.
Table 1. Definitions used within Veridical to determine the direction in which reads are checked.
A and B represent natural site positions, defined in Figure 1(B).
A and B together specify the pertinent splice site:

| A | B | Strand | Direction |
|---|---|---|---|
| Exonic | Donorα | + | → |
| Exonic | Donorα | - | ← |
| Intronic | Acceptorβ | + | ← |
| Intronic | Acceptorβ | - | → |
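The lookup in Table 1 amounts to a small mapping from the variant's location (exonic or intronic, which determines the pertinent splice site type) and the gene's strand to a read-check direction. The following function is purely illustrative and is not part of Veridical:

```python
def read_check_direction(variant_location: str, strand: str) -> str:
    """Return the direction in which reads are checked, per Table 1.

    variant_location: 'exonic' (pertinent site is a donor) or
                      'intronic' (pertinent site is an acceptor).
    strand: '+' or '-'.
    """
    table = {
        ("exonic", "+"): "->",    # exonic variant, donor site, + strand
        ("exonic", "-"): "<-",    # exonic variant, donor site, - strand
        ("intronic", "+"): "<-",  # intronic variant, acceptor site, + strand
        ("intronic", "-"): "->",  # intronic variant, acceptor site, - strand
    }
    return table[(variant_location, strand)]
```

For example, an intronic variant on the minus strand is checked rightward, matching the last row of Table 1.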

Figure 1.
Diagram portraying the definitions used within Veridical to specify genic variant position and read coordinates. We employ the same conventions as IGV14. Blue lines denote genes, wherein thick lines represent exons and thin lines represent introns.
The program uses the BamTools API15 to iterate over all of the reads within a given genomic region across experimental and control samples. Individual reads are then assessed for their corroborating value towards the analysis of the variant being processed, as outlined in the flowchart in Figure 3. Validating reads are categorized by whether they demonstrate an altered splice junction location (i.e. junction-spanning) or altered transcript abundance, particularly in intronic regions (i.e. read-abundance). Junction-spanning reads contain DNA sequences from two adjacent exons or are reads that extend into the intron (Equation 1(e)). These reads directly show whether the intronic sequence is removed or retained by the spliceosome, respectively. Read-abundance validating reads comprise sequences predicted to be found in the mutated transcript, in comparison with sequences that are expected to be excised from the mature transcript in the absence of a mutation (Equation 1(f)). Both types of reads can be used to validate cryptic splicing, exon skipping, or intron inclusion. A read is said to corroborate cryptic splicing if and only if the variant under consideration is expected to activate cryptic splicing. Junction-spanning cryptic splicing reads are those in which a read is exactly split from the cryptic splice site to the adjacent exon junction (Equation 1(a)). For read-abundance cryptic splicing, we define the concept of a read fraction: the ratio of the number of reads corroborating the cryptically spliced isoform to the number of reads that do not support the use of the cryptic splice site (i.e. non-cryptic corroborating) in the same genomic region of a sample. Cryptic corroborating reads are those which occur within the expected region where cryptic splicing occurs (i.e. spliced-in regions). This region is bounded by the variant splice site location and the adjacent (direction-dependent) splice junction (Equation 1(a)).
Non-cryptic corroborating reads, which we also term “anti-cryptic” reads, are those that do not lie within this region, but would still be retained within the portion that would be excised, had cryptic splicing occurred (Equation 1(b)). To identify instances of exon skipping, Veridical only employs junction-spanning reads. A read is considered to corroborate exon skipping if its connecting segments are split such that the read connects two exon boundaries, skipping the exon in between (Equation 1(c)). A read is considered to corroborate intron inclusion when the read is continuous and either overlaps with the intron-exon boundary (and is then said to be junction-spanning) or lies within an intron (and is then said to be based upon read-abundance). We only consider an intron inclusion read to be junction-spanning if it spans the relevant splice junction, A. Equation 1(d) formalizes this concept. We occasionally use the term “total intron inclusion” to denote that any such count of intron inclusion reads includes both those containing and not containing the mutation itself. Graphical examples of some of these validation events, with a defined variant location, are provided in Figure 2.

Figure 2.
Illustrative examples of aberrant splicing detection. Grey lines denote reads, wherein thick lines denote a read mapping to genomic sequence and thin lines represent connecting segments of reads split across spliced-in regions (i.e. exons or included introns). Dotted blue rectangles denote portions of genes which are spliced out in a mutant transcript, but are otherwise present in a normal transcript. Mutant reads are purple if they are junction-spanning and green if they are read-abundance based. Start and end coordinates of reads with two portions are denoted by (rs1, re1) and (rs2, re2), while coordinates of those with only a single portion are denoted by (rs, re). Refer to the caption of Figure 1 for additional graphical element descriptions.

Figure 3.
The algorithm employed by Veridical to validate variants. Refer to Table 1 for definitions concerning direction and Figure 1 for variable depictions. B is defined as follows: B site left (←) of A ⇒ B := D; B site right (→) of A ⇒ B := E.
We proceed to formalize the above descriptions as follows. A given read is denoted by r, with start and end coordinates (rs, re), if the read is continuous, or otherwise, with start and end coordinate pairs, (rs1, re1) and (rs2, re2), as diagrammed within Figure 2. Let ℓ be the length of the read. The set ζ denotes the totality of validating reads. The criterion for r ∈ ζ is detailed below. It is important to note that validating reads are necessary but not sufficient to validate a variant. Sufficiency is achieved only if the number of validating reads is statistically significant relative to those present in control samples. ζ itself is partitioned into three sets: ζc, ζe, and ζi for evidence of cryptic splicing, exon skipping, and intron inclusion, respectively. We allow partitions to be empty. Let JC denote the variant-induced splice site, A the adjacent natural splice junction, and B the downstream natural site, as defined by Figure 1(B) and Table 1. Without loss of generality, we consider only the red (i.e. direction is right) set of labels within Figure 1(B), as further typified by Figure 2. Then the (splice consequence) partitions of ζ are given by:
r ∈ ζc ⇔ variant is cryptic ∧ ((rs2 − re1 = B − JC) ∨ (rs > JC ∧ re < A)) (1a)
r ∉ ζc ∧ variant is cryptic ∧ ¬(rs2 − re1 = B − JC) ⇒ r ∈ anti-cryptic (1b)
r ∈ ζe ⇔ re1 < D ∧ rs2 > A (1c)
r ∈ ζi ⇔ (A ∈ [rs, re]) ∨ (A ∉ [rs, re] ∧ rs > A − ℓ ∧ re < B) (1d)
We separately partition ζ by its evidence type, into the set of junction-spanning reads, δ, and the set of read-abundance reads, α:
r ∈ δ ⇔ (A ∈ [rs, re]) ∨ (r ∈ ζc ∧ rs2 − re1 = B − JC) (1e)
r ∈ α ⇔ r ∉ δ (1f)
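As an illustration of these membership tests, the direction-right case can be sketched in a few lines of Python. The function `classify_read`, its argument layout, and the simplified wholly-intronic test are hypothetical (not Veridical's API), and exon-skipping classification, which additionally requires the flanking exon junction, is omitted for brevity:

```python
def classify_read(read, A, B, JC, variant_is_cryptic):
    """Classify a read against a variant (direction-right case).

    read: (rs, re) for a continuous read, or
          ((rs1, re1), (rs2, re2)) for a read split across a junction.
    A: adjacent natural splice junction; B: downstream natural site;
    JC: variant-induced splice site location.
    Returns a set of (consequence, evidence type) labels.
    """
    labels = set()
    is_split = isinstance(read[0], tuple)
    if is_split:
        (rs1, re1), (rs2, re2) = read
        rs, re = rs1, re2          # overall span of the split read
        gap = rs2 - re1            # genomic distance spliced out
    else:
        rs, re = read

    if variant_is_cryptic:
        if is_split and gap == B - JC:             # junction-spanning cryptic
            labels.add(("cryptic", "junction-spanning"))
        elif not is_split and rs > JC and re < A:  # read in spliced-in region
            labels.add(("cryptic", "read-abundance"))

    if not is_split:  # intron inclusion reads are continuous
        if rs <= A <= re:                          # spans the junction A
            labels.add(("intron inclusion", "junction-spanning"))
        elif A < rs and re < B:                    # wholly intronic (simplified)
            labels.add(("intron inclusion", "read-abundance"))
    return labels
```

For example, with A = 100, B = 200, and JC = 80, a split read ((50, 79), (199, 230)) has a gap of 120 = B − JC and would be labelled as junction-spanning cryptic.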
Once all validating reads are tallied for both the experimental and control samples, a p-value is computed. This is determined by computing a z-score upon Yeo-Johnson (YJ)16 transformed data. This transformation, shown in Equation 2, ensures that the data are sufficiently normally distributed to be amenable to parametric testing.

ψ(λ, y) = ((y + 1)^λ − 1)/λ, if λ ≠ 0, y ≥ 0
ψ(λ, y) = ln(y + 1), if λ = 0, y ≥ 0
ψ(λ, y) = −[(−y + 1)^(2−λ) − 1]/(2 − λ), if λ ≠ 2, y < 0
ψ(λ, y) = −ln(−y + 1), if λ = 2, y < 0 (2)
The transform is similar to the Box-Cox power transformation, but obviates the requirement of inputting strictly positive values and has more desirable statistical properties. Furthermore, this transformation allowed us to avoid the use of non-parametric testing, which has its own pitfalls regarding assumptions of the underlying data distribution17. We selected λ = , because Veridical’s untransformed output is skewed left, due to there being, in general, fewer validating reads in control samples and the fact that there are, by design, vastly more control samples than experimental samples. We found that this value for λ generally made the distribution much more normal. A comparison of the distributions of untransformed and transformed data is provided in Figure S1. We were not concerned about small departures from normality, as a z-test with a large number of samples is robust to such deviations18.
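The Yeo-Johnson transformation can be transcribed directly; the following is a stdlib-only Python sketch, independent of Veridical's own implementation:

```python
import math

def yeo_johnson(y: float, lam: float) -> float:
    """Yeo-Johnson power transformation of a single observation y."""
    if y >= 0:
        if lam != 0:
            return ((y + 1.0) ** lam - 1.0) / lam
        return math.log(y + 1.0)        # lambda = 0 case
    if lam != 2:
        return -(((-y + 1.0) ** (2.0 - lam)) - 1.0) / (2.0 - lam)
    return -math.log(-y + 1.0)          # lambda = 2 case
```

At λ = 1 the transform reduces to the identity, which is a convenient sanity check.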
Thus, we can compute a p-value for each pairwise intersection of the two partitionings of ζ, except the empty ζe ∩ α = ∅. We only provide p-values for these pairwise intersections and do not attempt to provide p-values for the partitions for the different consequences of the mutations on splicing. While such values would be useful, we do not currently have a robust means to compute them. Our previous work provides guidance on interpretation of splicing mutation outcomes3–5,10. Thus, for ζx ∈ {ζc, ζe, ζi}, let ΦZ(z) represent the cumulative distribution function of the one-sided (right-tailed — i.e. P[X > x]) standard normal distribution. Let N represent the total number of samples and let V represent the set of all ζx validations, across all samples. Then:

p = ΦZ((ψ(λ, vE) − μψ(λ,V)) / σψ(λ,V)) (3)

where vE is the number of ζx-validating reads in the experimental sample, and μψ(λ,V) and σψ(λ,V) are the mean and standard deviation of the YJ-transformed validation counts across the N samples in V.
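This computation can be sketched as follows. This is a stdlib-only illustration rather than Veridical's code; the function names are hypothetical, the default λ is a placeholder (the paper selects its value empirically), and since read counts are non-negative only the y ≥ 0 branch of the Yeo-Johnson transform is needed:

```python
import math
import statistics

def right_tail_p(z: float) -> float:
    """P[X > z] for a standard normal variable X, via the error function."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def splice_p_value(experimental_count, control_counts, lam=0.5):
    """z-test of an experimental validating-read count against the
    Yeo-Johnson-transformed control counts (lam is a placeholder)."""
    def yj(y):  # y >= 0 branch of the Yeo-Johnson transform
        return ((y + 1.0) ** lam - 1.0) / lam if lam != 0 else math.log(y + 1.0)
    transformed = [yj(c) for c in control_counts]
    mu = statistics.mean(transformed)
    sd = statistics.stdev(transformed)
    z = (yj(experimental_count) - mu) / sd
    return right_tail_p(z)
```

With, say, 14 experimental reads against controls mostly in the 0-4 range, the returned p-value falls well below 0.05.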
The program outputs two tables, along with summaries thereof. The first table lists all validated read counts across all categories for experimental samples, while the second table does the same for the control samples. P-values are shown in parentheses within the experimental table, which refer to the column-dependent (i.e. the read type is given in the column header) p-value for that read type with respect to that same read type in control samples. The program produces three files: a log file containing all details regarding validated variants, an output file with the program’s progress reports and summaries, and a filtered validated variant file. The filtered file contains all validated variants of statistical significance (set as p < 0.05, by default), defined as variants with one or more validating reads achieving statistical significance in a strongly corroborating read type. These categories are limited to all junction-spanning based splicing consequences and read-abundance total intron inclusion. For example, a cryptic variant for which p = 0.04 in the junction-spanning cryptic column would meet this criterion, assuming the default significance threshold.
The p-values given by Veridical are more robust when the program is provided with a large number of samples. The minimum sample size depends upon the desired power, the α value, and the effect size (ES). The minimum sample size can be computed as follows: n = ((z · σ)/ES)², where z = z1−α + z1−β. For α = 0.05 and β = 0.2 (for a power of 0.8): z = 1.6449 + 0.8416 = 2.4865 for the one-tailed test. Then, n = (2.4865 · σ/ES)² ≈ 6.18 (σ/ES)². Ideally, Veridical would first be run with a trial number of samples. Then, one would compute effect sizes from Veridical’s output. The standard deviation in the above formula can also be estimated from one’s data, although the data should be Yeo-Johnson transformed (such as via an appropriate R package) before computing this estimate.
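This calculation is easy to script. The sketch below assumes the standard normal-approximation formula n = ((z1−α + z1−β) · σ/ES)² for a one-tailed z-test, with defaults matching α = 0.05 and power 0.8:

```python
import math

def min_sample_size(effect_size: float, sigma: float = 1.0,
                    z_alpha: float = 1.6449, z_beta: float = 0.8416) -> int:
    """Minimum n for a one-tailed z-test: n = ((z_alpha + z_beta) * sigma / ES)^2,
    rounded up. Defaults give z_alpha + z_beta = 2.4865 (alpha = 0.05, power = 0.8)."""
    z = z_alpha + z_beta
    return math.ceil((z * sigma / effect_size) ** 2)
```

For instance, detecting an effect of half a (transformed) standard deviation requires 25 samples under these defaults.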
We elected to use RefSeq19 genes for the exome annotation, as opposed to the more permissive exome annotation sets, UCSC Known Genes20 or Ensembl21. The large number of transcript variants within Ensembl, in particular, caused many spurious intron inclusion validation events. This occurred because reads were found to be intronic in many cases, when in actuality they were exonic with respect to the more common transcript variant. In addition, the inclusion of the large number of rare transcripts in Ensembl significantly increased program run-time and made validation events much more challenging to interpret unequivocally. The use of RefSeq, which is a conservative annotation of the human exome, resolves these issues. It is possible that some subset of unknown or Ensembl-annotated intronic transcripts could be sufficiently prevalent to merit inclusion in our analysis. We do not attempt to perform the difficult task of deciding which of these transcripts would be worth using. Indeed, the task of confirming and annotating such transcripts is already performed by the more conservative annotation we employ.
We also provide an R program22 which produces publication-quality histograms displaying embedded Q-Q plots and p-values, to evaluate the normality of the read distribution and statistical significance, respectively. The R program performs the YJ transformation as implemented in the car package23. The histograms generated by the program use the Freedman–Diaconis24 rule for break determination, and the Q-Q plots use algorithm Type 8 for their quantile function, as recommended by Hyndman and Fan25. This program is embedded within a Perl script, for better integration into our workflow. Lastly, a Perl program was implemented to automatically retrieve and correctly format an exome annotation file from the UCSC database20 for use in Veridical. All data use hg19/GRCh37; however, when new versions of the genome become available, this program can be used to update the annotation file.
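For reference, the Freedman-Diaconis bin width (2 · IQR · n^(−1/3)) can be sketched in a few lines. This Python sketch stands in for the R implementation; for brevity its quantile helper uses simple linear interpolation (R's quantile type 7) rather than the type 8 estimator the R program uses for its Q-Q plots:

```python
def fd_bin_width(data):
    """Freedman-Diaconis histogram bin width: 2 * IQR * n^(-1/3)."""
    xs = sorted(data)
    n = len(xs)
    def quantile(p):
        # Linear interpolation between order statistics (R quantile type 7);
        # type 8, used by the R program described above, differs slightly.
        h = (n - 1) * p
        lo = int(h)
        hi = min(lo + 1, n - 1)
        return xs[lo] + (h - lo) * (xs[hi] - xs[lo])
    iqr = quantile(0.75) - quantile(0.25)
    return 2.0 * iqr * n ** (-1.0 / 3.0)
```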
Results
Veridical validates predicted mRNA splicing mutations using high-throughput RNA sequencing data. We demonstrate how Veridical and its associated R program are used to validate predicted splicing mutations in somatic breast cancer. Each example depicts a particular variant-induced splicing consequence, analyzed by Veridical, with its corresponding significance level. The relevant primary RNA-Seq data are displayed in IGV, along with histograms and Q-Q plots showing the read distributions for each example. The source data are obtained from controlled-access breast carcinoma data from The Cancer Genome Atlas (TCGA)26. Tumour-normal matched DNA sequencing data from the TCGA consortium were used to predict a set of splicing mutations, and a subset of corresponding RNA sequencing data was analyzed to confirm these predictions with Veridical. Overall, 442 tumour samples and 106 normal samples were analyzed. Briefly, all variants used as examples in this manuscript came from running the matched TCGA exome files (to which the RNA-Seq data correspond) through SomaticSniper27 and Strelka28 to call somatic mutations, followed by the Shannon Human Splicing Pipeline10 to find splicing mutations, which served as the input to Veridical. Details of the RNA-Seq data can be found within the supplementary methods of the TCGA paper26. Accordingly, the following examples demonstrate the utility of Veridical in identifying potentially pathogenic mutations from a much larger set of predicted variants.
Leaky Mutations
Mutations that reduce, but do not abolish, the spliceosome’s ability to recognize the intron/exon boundary are termed leaky3. This can lead to the mis-splicing (intron inclusion and/or exon skipping) of many, but not all, transcripts. An example, provided in Figure 4, displays a predicted leaky mutation (chr5:162905690G>T) in the HMMR gene in which both junction-spanning exon skipping (p < 0.01) and read-abundance-based intron inclusion (p = 0.04) are observed. We predict this mutation to be leaky because its final Ri exceeds 1.6 bits — the minimal individual information required to recognize a splice site and produce correctly spliced mRNA4. Indeed, the natural site, while weakened by 2.16 bits, remains strong — 10.67 bits. This prediction is validated by the variant-containing sample’s RNA-Seq data (Figure 4), in which both exon skipping (5 reads) and intron inclusion (14 reads, 12 of which are shown, versus an average of 4.051 such reads per control sample) are observed, along with 70 reads portraying wild-type splicing. Only a single normally spliced read contains the G→T mutation. These results are consistent with an imbalance of expression of the two alleles, as expected for a leaky variant. Figure 5 shows that the read-abundance-based intron inclusion read count is marginally statistically significant (p = 0.04) relative to the control distribution.

Figure 4.
IGV images depicting a predicted leaky mutation (chr5:162905690G>T) within the natural acceptor site of exon 12 (162905689–162905806) of HMMR. This gene has four transcript variants and the given exon number pertains to isoforms a and b (reference sequences NM_001142556 and NM_012484). RNA-Seq reads are shown in the centre panel. The bottom blue track depicts RefSeq genes, wherein each blue rectangle denotes an exon and blue connecting lines denote introns. In the middle panel, each rectangle (grey by default) denotes an aligned read, while thin lines are segments of reads split across exons. Red and blue coloured rectangles in the middle panel denote aligned reads of inserts that are larger or smaller than expected, respectively. Reads are highlighted by their splicing consequence, as follows: cryptic splicing (green), exon skipping (purple), junction-spanning intron inclusion (dark green), and read-abundance intron inclusion (cyan). (A) depicts a genomic region of chromosome 5: 162902054–162909787. The variant occurs in the middle exon. Intron inclusion can be seen in this image, represented by the reads between the first and middle exon (since the direction is left, as described within Table 1). These 14 reads are read-abundance-based, since they do not span the intron-exon junction. (B) depicts a closer view of the region shown in (A) — 162905660–162905719. The dotted vertical black lines are centred upon the first base of the variant-containing exon. The thin lines in the middle panel that span the entire exon fragment are evidence of exon skipping. These 5 reads are split across the exon before and after the variant-containing exon, as seen in (A).

Figure 5.
Histogram of read-abundance-based intron inclusion with embedded Q-Q plots of the predicted leaky mutation (chr5:162905690G>T) within HMMR, as shown in Figure 4. The arrowhead denotes the number of reads (14 in this case) in the variant-containing file, which is more than observed in the control samples (p = 0.04).
Inactivating Mutations
Variants that inactivate splice sites have negative final Ri values3 with only rare exceptions4, indicating that splice site recognition is essentially abolished in these cases. We present the analysis of two inactivating mutations within the PTEN and TMTC2 genes from different tumour exomes, namely: chr10:89711873A>G and chr12:83359523G>A, respectively. The PTEN variant displays junction-spanning exon skipping events (p < 0.01), while the TMTC2 variant displays both junction-spanning and read-abundance-based intron inclusion (both splicing consequences with p < 0.01). In addition, all intron inclusion reads in the experimental sample contain the mutation itself, while only one such read exists across all control samples analyzed (p < 0.01). The PTEN variant contains numerous exon skipping reads (32 versus an average of 2.466 such reads per control sample). The TMTC2 variant contains many junction-spanning intron inclusion reads with the G→A mutation (all 22 of its junction-spanning intron inclusion reads, versus an average of 0.002 such reads per control sample). IGV screenshots for these variants are provided within Figure 6. This figure also shows an example of a junction-spanning cryptic splice site activation caused by the mutation (chr1:985377C>T) within the AGRN gene. The concordance between the splicing outcomes generated by these mutations and the Veridical results indicates that the proposed method detects both mutations that inactivate splice sites and cryptic splice site activation.

Figure 6.
(A) depicts an inactivating mutation (chr10:89711873A>G) within the natural acceptor site of exon 6 (89711874–89712016) of PTEN. The dotted vertical black line denotes the location of the relevant splice site. The region displayed is 89711004–89712744 on chromosome 10. Many of the 32 exon skipping reads are evident, typified by the thin lines in the middle panel that span the entire exon. There is also a substantial amount of read-abundance-based intron inclusion, shown by the reads to the left of the dotted vertical line. Exon skipping was statistically significant (p < 0.01), while read-abundance-based intron inclusion was not (p = 0.53). Panels (B) and (C) depict an inactivating mutation (chr12:83359523G>A) within the natural donor site of exon 6 (83359338–83359523) of TMTC2. (B) depicts a closer view (83359501–83359544) of the region shown in (C) and only shows exon 6. Some of the 22 junction-spanning intron inclusion reads can be seen. In this case, all of these reads contain the mutation, shown by the green adenine base in each read, between the two vertical dotted lines. (C) depicts a genomic region of chromosome 12: 83359221–83360885, TMTC2 exons 6–7. The variant occurs in the left exon. 65 read-abundance-based intron inclusion reads can be seen in this image, represented by the reads between the two exons. Panel (D) depicts a mutation (chr1:985377C>T) causing a cryptic donor to be activated within exon 27 (the second from left, 985282–985417) of AGRN. The region displayed is 984876–985876 on chromosome 1 (exons 26–29 are visible). Some of the 34 cryptic (junction-spanning) reads are portrayed. The dotted black vertical line denotes the cryptic splice site, at which cryptic reads end. The read-abundance-based intron inclusion, of which two reads are visible, was not statistically significant (p = 0.68). Refer to the caption of Figure 4 for IGV graphical element descriptions.
Cryptic Mutations
Recurrent genetic mutations in some oncogenes have been reported among tumours within the same, or different, tissues of origin. Veridical recognizes common recurrent mutations present in multiple abnormal samples; it avoids including any variant-containing sample in the control group and outputs the results for all of the variant-containing samples. A relevant example is shown in Figure 7. The mutation (chr1:46726876G>T) causes activation of a cryptic splice site within RAD54L in multiple tumours. Upon computation of the p-values for each of the variant-containing tumours, relative to all non-variant-containing tumours and normal controls, not all variant-containing tumours displayed splicing abnormalities at statistically significant levels. Of the six variant-containing tumours, two had significant levels of junction-spanning intron inclusion, and one showed statistically significant read-abundance-based intron inclusion. Details for all of the aforementioned variants, including a summary of read counts pertaining to each relevant splicing consequence, for experimental versus control samples, are provided in Table 2.

Figure 7.
IGV images and their corresponding histograms with embedded Q-Q plots depicting all six variant-containing files with a mutation (chr1:46726876G>T) which, in some cases, causes a cryptic donor to be activated within the intron between exons 7 and 8 of RAD54L. This results in the extension of the downstream natural donor (the 5′ end of exon 8). This gene has two transcript variants and the given exon numbers pertain to isoform a (reference sequence NM_003579). Only samples IV and V have statistically significant intron inclusion relative to controls. Read-abundance-based intron inclusion can be seen in (A), between the two exons. The region displayed is on chromosome 1: 46726639–46726976. (B) depicts the corresponding histogram for the 15 read-abundance-based intron inclusion reads (p = 0.05) that are present in sample IV. The intron-exon boundary on the right is the downstream natural donor. (C) typifies some of the 13 junction-spanning intron inclusion reads that are a direct result of the intronic cryptic site’s activation. In these instances, reads extending past the intron-exon boundary are being spliced at the cryptic site, instead of the natural donor. In particular, samples IV and V both have statistically significant numbers of such reads, 7 (p = 0.01) and 5 (p = 0.04), respectively. This is further typified by the corresponding histogram in (D). (C) focuses upon exon 8 from (A) and displays the genomic positions 46726908–46726957. Refer to the caption of Figure 4 for IGV graphical element descriptions. In the histograms, arrowheads denote numbers of reads in the variant-containing files. The bottom of the plots provides p-values for each respective arrowhead. Statistically significant p-values and their corresponding arrowheads are denoted in red.
Table 2. Examples of variants validated by Veridical and their selected read types.
Header abbreviations Chr, Cv, Cs, #, SC, and ET, denote chromosome, variant coordinate, splice site coordinate, sample number (where applicable), splicing consequence, and evidence type, respectively. Headers containing R with some subscript denote numbers of validated reads for the specified variant’s splicing consequence(s) and evidence type(s). RE denotes reads within variant-containing tumour samples. RT and RN denote control samples, for tumours and normal cells, respectively. Rμ is the per sample mean of RT and RN. Splicing consequences: CS denotes cryptic splicing, ES denotes exon skipping, and II denotes intron inclusion. Evidence types: JS denotes junction-spanning, JSwM denotes junction-spanning reads containing the mutation itself, and RA denotes read-abundance.
| Gene | Chr | Cv | Cs | Variant | Type | Initial Ri | Final Ri | ΔRi | # | SC | ET | p-value | RE | RT | RN | Rμ | Figure |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| HMMR | chr5 | 162905690 | 162905689 | G/T | Leaky | 12.83 | 10.67 | -2.16 | | ES | JS | < 0.01 | 5 | 11 | 0 | 0.020 | 4, 5 |
| | | | | | | | | | | II | RA | 0.04 | 14 | 2133 | 103 | 4.051 | |
| PTEN | chr10 | 89711873 | 89711874 | A/G | Inactivating | 12.09 | -2.62 | -14.71 | | ES | JS | < 0.01 | 32 | 975 | 386 | 2.466 | 6(A) |
| TMTC2 | chr12 | 83359523 | 83359524 | G/A | Inactivating | 1.74 | -1.27 | -3.01 | | II | JS | < 0.01 | 22 | 2241 | 383 | 4.754 | 6(B) |
| | | | | | | | | | | II | JSwM | < 0.01 | 22 | 0 | 1 | 0.002 | |
| | | | | | | | | | | II | RA | < 0.01 | 65 | 7293 | 1395 | 15.739 | 6(C) |
| AGRN | chr1 | 985377 | 985376 | C/T | Cryptic | -2.24 | 4.79 | 7.03 | | CS | JS | < 0.01 | 34 | 97 | 23 | 0.217 | 6(D) |
| RAD54L | chr1 | 46726876 | 46726895 | G/T | Cryptic | 13.4 | 14.84 | 1.44 | I | II | JS | N/A | 0 | 645 | 58 | 1.274 | 7 |
| | | | | | | | | | I | II | RA | 0.54 | 3 | 2171 | 290 | 4.458 | |
| | | | | | | | | | II | II | JS | 0.51 | 1 | 645 | 58 | 1.274 | |
| | | | | | | | | | II | II | RA | 0.33 | 6 | 2171 | 290 | 4.458 | |
| | | | | | | | | | III | II | JS | N/A | 0 | 645 | 58 | 1.274 | |
| | | | | | | | | | III | II | RA | 0.33 | 6 | 2171 | 290 | 4.458 | |
| | | | | | | | | | IV | II | JS | 0.01 | 7 | 645 | 58 | 1.274 | |
| | | | | | | | | | IV | II | RA | 0.05 | 15 | 2171 | 290 | 4.458 | |
| | | | | | | | | | V | II | JS | 0.04 | 5 | 645 | 58 | 1.274 | |
| | | | | | | | | | V | II | RA | N/A | 0 | 2171 | 290 | 4.458 | |
| | | | | | | | | | VI | II | JS | N/A | 0 | 645 | 58 | 1.274 | |
| | | | | | | | | | VI | II | RA | N/A | 0 | 2171 | 290 | 4.458 | |
Performance
The performance of the software depends on the number of predicted splicing mutations, the number of mutation-containing and control samples, and the corresponding RNA-Seq data for each sample type. Veridical can analyze approximately 3000 variants in approximately 4 hours, assuming an input of 100 BAM files of RNA-Seq data. The relationship between runtime and the numbers of BAM files and variants is plotted in Figure 8 for a 2.27 GHz processor. Veridical uses memory in linear proportion to the number and size of the input BAM files. In our tests, using RNA-Seq BAM files averaging approximately 6 GB, Veridical used approximately 0.7 GB of memory for ten files and 1 GB for 100 files.

Figure 8.
Profiling data for Veridical runtime. Tests were conducted on an Intel Xeon @ 2.27 GHz. Visualizations were generated with R22 using Lattice30 and Effects31. A surface plot of time vs. numbers of BAM files and variants is provided in (A). Effect plots are given in (B) and demonstrate the effects of the numbers of BAM files and variants on runtime. The effect plots were generated using a linear regression model (R2 = 0.7525).
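As a rough guide to the figures quoted above, the scaling behaviour can be sketched as follows. This is a back-of-envelope estimator, not the published regression model: the coefficients are assumptions anchored only to the runtime and memory data points stated in the text.

```python
# Back-of-envelope estimators for Veridical's runtime and memory use,
# anchored ONLY to the figures quoted in the text (~3000 variants with
# 100 BAM files in ~4 h; ~0.7 GB RAM for 10 files, ~1 GB for 100 files).
# These are illustrative assumptions, not the published regression model.

def estimate_runtime_hours(n_variants: float, n_bam_files: float) -> float:
    # Assume runtime scales roughly with (variants x BAM files),
    # anchored at 3000 variants x 100 files = 4 hours.
    return 4.0 * (n_variants / 3000.0) * (n_bam_files / 100.0)

def estimate_memory_gb(n_bam_files: float) -> float:
    # Linear interpolation between the two quoted memory figures.
    slope = (1.0 - 0.7) / (100 - 10)  # GB per additional BAM file
    return 0.7 + slope * (n_bam_files - 10)

print(round(estimate_runtime_hours(1500, 50), 2))  # 1.0
print(round(estimate_memory_gb(100), 2))           # 1.0
```

A user planning a run against, say, 500 control BAM files could plug their own anchor points into the same linear form.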
Discussion
We have implemented Veridical, a software program that automates confirmation of mRNA splicing mutations by comparing read-mapped expression data from samples containing variants predicted to cause defective splicing with control samples lacking these mutations. The program objectively evaluates each mutation with statistical tests that assess, and can exclude, the likelihood that the observed reads arise from normal splicing. To our knowledge, no other software currently validates splicing mutations with RNA-Seq data on a genome-wide scale, although many applications can accurately detect conventional alternative splice isoforms (e.g.29). Veridical is intended for use with large data sets derived from many samples, each containing several hundred variants previously prioritized as likely splicing mutations, regardless of how the candidate mutations are selected. Because of the extensive computations required for statistical validation, it is not practical to analyze all variants present in an exome or genome, only a filtered subset. As such, Veridical is a key component of an end-to-end, hypothesis-based splicing mutation analysis framework that also includes the Shannon splicing mutation pipeline10 and the Automated Splice Site Analysis and Exon Definition server5. There is a trade-off between lengthy run-times and statistical robustness, especially when there are either a large number of variants or a large number of RNA-Seq files. As with most statistical methods, those employed here are not amenable to small sample sets, but become quite powerful when a large number of controls are employed. To ensure that mutations can be validated, we recommend an excess of control transcriptome data relative to data from mutation-containing samples (> 5 : 1), guided by the power analysis described in Methods. We do not recommend using only one or a few control samples to corroborate a putative mutation.
Not surprisingly, we have found that junction-spanning reads have the greatest value for corroborating cryptic splicing and exon skipping. Even a single such read is almost always sufficient to merit validation of a variant, provided that sufficient control samples are used. For intron inclusion, both junction-spanning and read-abundance-based reads are useful, and a variant can readily be validated with either, provided that the variant-containing experimental sample(s) show a statistically significant increase in either form of intron-inclusion-corroborating reads.
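The kind of comparison described above, in which a count of corroborating reads in a variant-containing sample is judged against the distribution of counts in many control samples, can be illustrated with a one-sided empirical p-value. This is a minimal sketch under that framing, not Veridical's actual statistical procedure (which is described in Methods):

```python
# Minimal sketch (NOT Veridical's actual statistical procedure) of testing
# a variant-containing sample's corroborating-read count against the
# distribution of counts observed across many control samples.

def empirical_p_value(sample_count, control_counts):
    """P(control count >= sample count), with add-one smoothing so p > 0."""
    at_least = sum(1 for c in control_counts if c >= sample_count)
    return (at_least + 1) / (len(control_counts) + 1)

# Hypothetical junction-spanning read counts in 20 control samples.
controls = [0, 0, 1, 0, 2, 0, 0, 1, 0, 0, 0, 3, 0, 0, 0, 0, 1, 0, 0, 0]
print(round(empirical_p_value(7, controls), 3))  # 0.048 -- no control reaches 7 reads
```

The sketch also makes the sample-size point concrete: with only a handful of controls, the smallest achievable p-value is too large to reach significance, which is why a large control set is recommended.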
Veridical automatically processes variants from multiple experimental samples, and groups the variant information when a given mutation is present in more than one sample. The use of a large sample size allows robust statistical analyses to be performed, which significantly aid the interpretation of results. The main utility of Veridical is to filter large data sets of predicted splicing mutations in order to prioritize variants, helping to predict which will have a deleterious effect upon the protein product. Veridical avoids reporting naturally occurring splicing changes by checking all variant-containing and non-containing control samples for the predicted splicing consequence. In addition, running multiple tumour samples at once allows manual inspection to discover samples that contain the alternative splicing pattern, and consequently permits the identification of DNA mutations at the same location that went undetected during genome sequencing.
The statistical power of Veridical depends upon the quality of the RNA-Seq data used to validate putative variants. In particular, insufficient coverage at a particular locus will prevent Veridical from reporting any significant results. A coverage of at least 20 reads should be sufficient; this estimate is based upon alternative splicing analyses in which this threshold was found to imply concordance with microarray and RT-PCR measurements32–35. There are many legitimate reasons why a mutation may not be validated: (a) a lack of gene expression in the variant-containing tumour sample, (b) nonsense-mediated decay may result in a loss of expression of the entire transcript, (c) the gene itself may have multiple paralogs, so reads may not be unambiguously mapped, (d) other non-splicing mutations could account for a loss of expression, and (e) confounding natural alternative splicing isoforms may result in a loss of statistical significance during read mapping of the control samples. The prevalence of loci with insufficient data depends upon the coverage of the sequencing technology used. As sequencing technologies improve, the proportion of validated mutations is expected to increase, mirroring the observed increase in the prevalence of alternative splicing events36. In addition, mutated splicing factors can disrupt splicing fidelity and exon definition37; this could decrease Veridical’s ability to validate splicing mutations affected by a disruption of the definition of the pertinent exon. Veridical does not currently form any equivalence between distinct variants affecting the same splice site; such variants are analyzed independently.
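The 20-read coverage threshold mentioned above amounts to a simple pre-filter on loci. The function and its inputs below are illustrative assumptions for demonstration, not part of Veridical's interface:

```python
# Illustrative pre-filter implementing the 20-read minimum coverage
# threshold discussed above. The function and its inputs are assumptions
# for demonstration; they are not part of Veridical's interface.

MIN_COVERAGE = 20  # reads; threshold supported by refs. 32-35 in the text

def loci_with_sufficient_coverage(coverage_by_locus):
    """Return the loci whose read depth meets the minimum threshold."""
    return [locus for locus, depth in coverage_by_locus.items()
            if depth >= MIN_COVERAGE]

depths = {"chr1:46726876": 35, "chr10:89711873": 12, "chr12:83359523": 20}
print(loci_with_sufficient_coverage(depths))  # ['chr1:46726876', 'chr12:83359523']
```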
Veridical is intended for RNA-Seq data that corresponds to matched DNA-Seq data, and only for sets of samples sequenced with comparable protocols, since the non-normalized comparisons it performs rely on batch effects being evened out by a substantial number of control samples. It is important to note that acceptance of the null hypothesis, due to an absence of the evidence required to disprove it, does not imply that the underlying prediction of a mutation at a particular locus is incorrect, but merely that the current empirical methods were insufficient to corroborate it.
“Validate,” in the present context, refers to the condition where sufficient statistical evidence has been marshaled in support of a variant. However, the threshold for significance can vary, so these analyses can also be thought of as strongly corroborating variants. Recent studies in Bayesian statistics suggest that a p-value threshold of 0.05 does not correspond to strong support of the alternative hypothesis; accordingly, Johnson38 recommends tests at the 0.005 or 0.001 level of significance.
We consider alternative splicing to be a different problem. Veridical does not aim to identify putatively pathogenic variants, but rather to confirm existing in silico predictions thereof. We do infer exon skipping events (i.e. alternative splicing) de novo, but only to catalog dysregulated splicing “phenotypes” due to genomic sequence variants. This is not the first study to use a large control dataset: the Variant Annotation, Analysis & Search Tool (VAAST)39 does so to search for disease-causing (non-splicing) variants, and the Multivariate Analysis of Transcript Splicing (MATS)29 tool (among others) can be used for the discovery of alternative splicing events. In our case, however, the distribution of reads in a single sample is in most instances compared to the distributions of reads in the control set, as opposed to a likelihood framework-based approach. We suggest that our approach be coupled to existing approaches to act as an a posteriori, hypothesis-driven check on the veridicality of specific variants.
While there is considerable prior evidence for splicing mutations that alter natural and cryptic splice site recognition, we were somewhat surprised at the apparent high frequency of statistically significant intron inclusion revealed by Veridical. In fact, evidence indicates that a significant portion of the genome is transcribed36, and it is estimated that 95% of known genes are alternatively spliced32. Defective mRNA splicing can lead to multiple alternative transcripts including those with retained introns, cassette exons, alternate promoters/terminators, extended or truncated exons, and reduced exons40. In breast cancer, exon skipping and intron retention were observed to be the most common forms of alternative splicing in triple negative, non-triple negative, and HER2 positive breast cancer41. In normal tissue, intron retention and exon skipping have been predicted to affect 2572 exons in 2127 genes and 50 633 exons in 12 797 genes, respectively42. In addition, previous studies suggest that the order of intron removal can influence the final mRNA transcript composition of exons and introns43. Intron inclusion observed in normal tissue may result from those introns that are removed from the transcript at the end of mRNA splicing. Given that these splicing events are relatively common in normal tissues, it becomes all the more important to distinguish expression patterns that are clearly due to the effects of splicing mutations, one of the guiding principles of the Veridical method.
Veridical is an important analytical resource for unsupervised, thorough validation of splicing mutations through the use of companion RNA-Seq data from the same samples. The approach will be broadly applicable for many types of genetic abnormalities, and should reveal numerous, previously unrecognized, mRNA splicing mutations in exome and complete genome sequences.
Open Peer Review
(doi: 10.5256/f1000research.4148.r4404)
https://f1000research.com/articles/3-8/v2#referee-response-4404
Competing Interests: No competing interests were disclosed.
(doi: 10.5256/f1000research.4148.r4407)
https://f1000research.com/articles/3-8/v2#referee-response-4407
Competing Interests: No competing interests were disclosed.
(doi: 10.5256/f1000research.4148.r4405)
https://f1000research.com/articles/3-8/v2#referee-response-4405
Competing Interests: No competing interests were disclosed.
(doi: 10.5256/f1000research.3444.r3121)
https://f1000research.com/articles/3-8/v1#referee-response-3121
The program fulfils an important need in the community; the results appear promising and will be of special interest to groups performing RNA-Seq analysis in medical settings. I have only some minor suggestions that the authors may like to consider.
Suggestions:
Also, the legend text is overly complicated: “if D > E, swap D and E”. While aficionados of first-order logic will follow without problems, I would suggest that it would be better for didactic purposes to delete this and to implicitly assume that D < E for the sake of this figure. Figure 1B is confusing at this point in the manuscript because the motivation for switching the variables A, B, D, and E is not yet clear. On the other hand, panel C and panel D are trivial and do not add anything. I would suggest using Figure 1 to provide one concrete example at a simple level, and stating in the text that the variables are to be switched if the candidate mutation is located on the other side of the exon.
Also, the explanations of the method that are couched in first order logic-like notation are difficult to follow, because it is not stated whether the variant C can precede the start of the read (in which case C-S would be negative). The subscripts for r in turn have the subscript s1 but the variable S in the formula does not.
Although in the end, I think I follow the overall method, the reader is forced to make arbitrary assumptions in order to interpret the formulae being used to explain the method. A similar comment pertains to the flow chart in Figure 2.
Therefore, I would suggest the authors take some pains to improve the clarity of the explanation of the method. I would suggest that they show one or two concrete examples and provide English-language specifications of the FOL-like formulae that describe the partitioning of reads.
Competing Interests: No competing interests were disclosed.
All data points in Figure 3 (a) consist of data from actual runs. We did not conduct a sufficient number of large-scale runs to accurately determine performance and believe that the regression model provides information for users who wish to use the software in that context.
This would become feasible if the community were to agree upon a standard definition of custom fields; the concept could be generalized to other scenarios in which both a variant and one (or more) other associated coordinates are involved (in this case, a splice site; elsewhere, perhaps the site of some specific interaction). This process would ultimately require an explicit definition of the content and schema for such additional fields, and would need to solicit views from the community to ensure widespread adoption. We would be interested in contributing to this endeavor.
"It would be interesting to see a plot on the relationship of the p-values called by Veridical and the sequencing depth covered."
We have conducted a preliminary comparison of p-values for specific evidence types and splicing consequences vs. coverage per exonic base (the coverage of the gene divided by the length of the pertinent exon) and found that the results were quite difficult to interpret. We suspect that this could be explained by the fact that, due to significant intron inclusion, normalizing by exonic length is not really appropriate. Cryptic sites and alternative splicing make direct comparison of these values difficult to compute.
Veridical does not currently address cases in which multiple independent variants pertain to the same splice site. In such a case, each variant is analyzed independently. Based on Mendelian disorders, we have rarely observed multiple independent variants alter the strength of the same splice site, and in any case, the phase of these variants is unknown, so they could reside on different chromosomes.
Control samples consist of all non-variant containing (NVC) tumour samples (relative to the variant being analyzed) and normal samples. In the examples shown in the paper, we maximized the number of control samples used (106 normal samples and, in general, over 400 NVC samples). Control samples do indeed contain splicing mutations, and this is a strength of our statistical paradigm. We term them control samples because we consider their read distribution to correspond to that expected under the null hypothesis, as they do not contain the specific variant in question. The “noise” introduced by these splicing variants in control samples ensures that we are testing the variant-containing samples against samples with many, distinct, splicing mutations. Thus, the greater the number of samples, the smaller the batch effects, which makes the method more reliable. While the normal and tumour DNA sequencing data were paired to call somatic variants, we do not require paired controls for the RNA-Seq data. In fact, TCGA does not provide such data. Since we initially designed the software for the task of analyzing their dataset, we did not wish to assume that paired RNA-Seq data was available.
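The control-set definition described in this response can be sketched as follows; the sample identifiers and the `carriers` mapping are hypothetical, purely to illustrate the NVC-tumour-plus-normal construction:

```python
# Sketch of the control-set definition given above: for a variant, the
# controls are all non-variant-containing (NVC) tumour samples plus all
# normal samples. Sample IDs and the `carriers` mapping are hypothetical.

def control_set(variant, tumour_samples, normal_samples, carriers):
    """carriers maps each variant to the set of tumour samples bearing it."""
    nvc_tumours = [s for s in tumour_samples
                   if s not in carriers.get(variant, set())]
    return nvc_tumours + list(normal_samples)

tumours = ["T1", "T2", "T3", "T4"]
normals = ["N1", "N2"]
carriers = {"chr1:46726876G>T": {"T2"}}
print(control_set("chr1:46726876G>T", tumours, normals, carriers))
# ['T1', 'T3', 'T4', 'N1', 'N2']
```

Note that the control set is variant-specific: a tumour sample excluded for one variant re-enters the controls for every variant it does not carry.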
We do not have access to the TCGA tumour samples used. While TCGA performed Sanger validation of a small subset of their variants, none of the variants validated by Veridical had any associated Sanger sequencing data.
The variants predicted by the Shannon splicing mutation analysis pipeline exhibit changes in information content (ΔRi). The change in information content of a splice site is directly related to the thermodynamics of the binding event (Schneider, 1997) and therefore the strength of the splice site (Rogan et al., 1998; Rogan et al., 2003). Therefore, one would expect ΔRi to be highly predictive of valid splicing variants, where cryptic sites are expected to result in increased splice site strength, and natural sites would be expected to be weakened.
The median ΔRi for cryptic variants in the set of pre-Veridical variants (those variants called as somatic mutations and predicted by the Shannon Pipeline to affect mRNA splicing) was 2.6 bits, while the median for validated variants was 2.38 bits. For non-cryptic variants the pre- and post-validation median ΔRi values were -2.52 and -3.18 bits, respectively.
The information content of cryptic variants actually decreased slightly in the validated set. This is likely related to other factors, such as the initial strength of the natural site and the exon length, which are not accounted for by this analysis. However, the average information content of natural sites did decrease, as expected. Again, the modest size of the decrease is likely due to the involvement of other factors, such as the distance of the variant to the natural site, which impacts its Ri value. We have refrained from including this discussion in the manuscript, since there is no requirement per se for Veridical to be used with the Shannon Pipeline. An explicit definition of the required input format is provided in the paper.
All data points in Figure 3 (a) consist of data from actual runs. We did not conduct a sufficient number of large-scale runs to accurately determine performance and believe that the regression model provides information for users who wish to use the software in that context.
This would become feasible if the community were to agree upon a standard definition of custom fields; this concept could be generalized to other scenarios in which both a variant and one (or more) other associated coordinates (in this case, a splice site, but elsewhere, perhaps the site of some specific interaction). This process would ultimately require an explicit definition of the content and schema for such additional fields, and would need to solicit views from the community to ensure widespread adoption. We would be interested in contributing to this endeavor.
"It would be interesting to see a plot on the relationship of the p-values called by Veridical and the sequencing depth covered."
We have conducted a preliminary comparison of p-values for specific evidence types and splicing consequences vs. coverage per exonic base (the coverage of the gene divided by the length of the pertinent exon) and found that the results were quite difficult to interpret. We suspect that this could be explained by the fact that, due to significant intron inclusion, normalizing by exonic length is not really appropriate. Cryptic sites and alternative splicing make direct comparison of these values difficult to compute.
Veridical does not currently address cases in which multiple independent variants pertain to the same splice site. In such a case, each variant is analyzed independently. Based on Mendelian disorders, we have rarely observed multiple independent variants alter the strength of the same splice site, and in any case, the phase of these variants is unknown, so they could reside on different chromosomes.
Control samples consist of all non-variant containing (NVC) tumour samples (relative to the variant being analyzed) and normal samples. In the examples shown in the paper, we maximized the number control samples used (106 normal samples and, in general, over 400 NVC samples). Control samples do indeed contain splicing mutations and this is a strength of our statistical paradigm. We term them control samples, because we consider their read distribution to correspond to that expected from the null hypothesis, as they do not contain the specific variant in question. The “noise” introduced by these splicing variants in control samples ensures that we are testing the variant containing samples against samples with many, distinct, splicing mutations. Thus, the greater the number of samples, the smaller the batch effects, which makes the method more reliable. While the normal and tumor DNA sequencing data was paired to call somatic variants, we do not require paired controls for the RNA-Seq data. In fact, TCGA does not provide such data. Since we initially designed the software for the task of analyzing their dataset, we did not wish to assume that paired RNA-Seq data was available.
We do not have access to the TCGA tumour samples used. While TCGA performed Sanger validation of a small subset of their variants, none of the variants validated by Veridical had any associated Sanger sequencing data.
The variants predicted by the Shannon splicing mutation analysis pipeline exhibit changes in information content (ΔRi). The change in information content of a splice site is directly related to the thermodynamics of the binding event (Schneider, 1997) and therefore the strength of the splice site (Rogan et al., 1998; Rogan et al., 2003). Therefore, one would expect ΔRi to be highly predictive of valid splicing variants, where cryptic sites are expected to result in increased splice site strength, and natural sites would be expected to be weakened.
The median ΔRi for cryptic variants in the set of pre-Veridical variants (those variants called as somatic mutations and predicted by the Shannon Pipeline to affect mRNA splicing) was 2.6 bits, while the median for validated variants was 2.38 bits. For non-cryptic variants the pre- and post-validation median ΔRi values were -2.52 and -3.18 bits, respectively.
The information content of cryptic variants actually decreased slightly in the validated set. This likely is related to other factors, such as the initial strength of the natural site and the exon length, which are not accounted for by this analysis. However, the average information content of natural sites did decrease, as expected. The reason for the decrease not being more substantial, again, is likely due to the involvement of other factors, such as the distance of the variant to the natural site – which impacts its Ri value. We have refrained from including this discussion in the manuscript, since there is no requirement per se for Veridical to be used with the Shannon Pipeline. An explicit definition of the required input format is provided in the paper.
All data points in Figure 3 (a) consist of data from actual runs. We did not conduct a sufficient number of large-scale runs to accurately determine performance and believe that the regression model provides information for users who wish to use the software in that context.
This would become feasible if the community were to agree upon a standard definition of custom fields; this concept could be generalized to other scenarios in which both a variant and one (or more) other associated coordinates (in this case, a splice site, but elsewhere, perhaps the site of some specific interaction). This process would ultimately require an explicit definition of the content and schema for such additional fields, and would need to solicit views from the community to ensure widespread adoption. We would be interested in contributing to this endeavor.
"It would be interesting to see a plot on the relationship of the p-values called by Veridical and the sequencing depth covered."
We have conducted a preliminary comparison of p-values for specific evidence types and splicing consequences vs. coverage per exonic base (the coverage of the gene divided by the length of the pertinent exon) and found that the results were quite difficult to interpret. We suspect that this could be explained by the fact that, due to significant intron inclusion, normalizing by exonic length is not really appropriate. Cryptic sites and alternative splicing make direct comparison of these values difficult to compute.
Veridical does not currently address cases in which multiple independent variants pertain to the same splice site. In such a case, each variant is analyzed independently. Based on Mendelian disorders, we have rarely observed multiple independent variants alter the strength of the same splice site, and in any case, the phase of these variants is unknown, so they could reside on different chromosomes.
Control samples consist of all non-variant containing (NVC) tumour samples (relative to the variant being analyzed) and normal samples. In the examples shown in the paper, we maximized the number control samples used (106 normal samples and, in general, over 400 NVC samples). Control samples do indeed contain splicing mutations and this is a strength of our statistical paradigm. We term them control samples, because we consider their read distribution to correspond to that expected from the null hypothesis, as they do not contain the specific variant in question. The “noise” introduced by these splicing variants in control samples ensures that we are testing the variant containing samples against samples with many, distinct, splicing mutations. Thus, the greater the number of samples, the smaller the batch effects, which makes the method more reliable. While the normal and tumor DNA sequencing data was paired to call somatic variants, we do not require paired controls for the RNA-Seq data. In fact, TCGA does not provide such data. Since we initially designed the software for the task of analyzing their dataset, we did not wish to assume that paired RNA-Seq data was available.
We do not have access to the TCGA tumour samples used. While TCGA performed Sanger validation of a small subset of their variants, none of the variants validated by Veridical had any associated Sanger sequencing data.
The variants predicted by the Shannon splicing mutation analysis pipeline exhibit changes in information content (ΔRi). The change in information content of a splice site is directly related to the thermodynamics of the binding event (Schneider, 1997) and therefore the strength of the splice site (Rogan et al., 1998; Rogan et al., 2003). Therefore, one would expect ΔRi to be highly predictive of valid splicing variants, where cryptic sites are expected to result in increased splice site strength, and natural sites would be expected to be weakened.
The median ΔRi for cryptic variants in the set of pre-Veridical variants (those variants called as somatic mutations and predicted by the Shannon Pipeline to affect mRNA splicing) was 2.6 bits, while the median for validated variants was 2.38 bits. For non-cryptic variants the pre- and post-validation median ΔRi values were -2.52 and -3.18 bits, respectively.
The information content of cryptic variants actually decreased slightly in the validated set. This likely is related to other factors, such as the initial strength of the natural site and the exon length, which are not accounted for by this analysis. However, the average information content of natural sites did decrease, as expected. The reason for the decrease not being more substantial, again, is likely due to the involvement of other factors, such as the distance of the variant to the natural site – which impacts its Ri value. We have refrained from including this discussion in the manuscript, since there is no requirement per se for Veridical to be used with the Shannon Pipeline. An explicit definition of the required input format is provided in the paper.
All data points in Figure 3 (a) consist of data from actual runs. We did not conduct a sufficient number of large-scale runs to accurately determine performance and believe that the regression model provides information for users who wish to use the software in that context.
This would become feasible if the community were to agree upon a standard definition of custom fields; this concept could be generalized to other scenarios in which both a variant and one (or more) other associated coordinates (in this case, a splice site, but elsewhere, perhaps the site of some specific interaction). This process would ultimately require an explicit definition of the content and schema for such additional fields, and would need to solicit views from the community to ensure widespread adoption. We would be interested in contributing to this endeavor.
"It would be interesting to see a plot on the relationship of the p-values called by Veridical and the sequencing depth covered."
We have conducted a preliminary comparison of p-values for specific evidence types and splicing consequences vs. coverage per exonic base (the coverage of the gene divided by the length of the pertinent exon) and found that the results were quite difficult to interpret. We suspect that this could be explained by the fact that, due to significant intron inclusion, normalizing by exonic length is not really appropriate. Cryptic sites and alternative splicing make direct comparison of these values difficult to compute.
Veridical does not currently address cases in which multiple independent variants pertain to the same splice site. In such a case, each variant is analyzed independently. Based on Mendelian disorders, we have rarely observed multiple independent variants alter the strength of the same splice site, and in any case, the phase of these variants is unknown, so they could reside on different chromosomes.
Control samples consist of all non-variant containing (NVC) tumour samples (relative to the variant being analyzed) and normal samples. In the examples shown in the paper, we maximized the number control samples used (106 normal samples and, in general, over 400 NVC samples). Control samples do indeed contain splicing mutations and this is a strength of our statistical paradigm. We term them control samples, because we consider their read distribution to correspond to that expected from the null hypothesis, as they do not contain the specific variant in question. The “noise” introduced by these splicing variants in control samples ensures that we are testing the variant containing samples against samples with many, distinct, splicing mutations. Thus, the greater the number of samples, the smaller the batch effects, which makes the method more reliable. While the normal and tumor DNA sequencing data was paired to call somatic variants, we do not require paired controls for the RNA-Seq data. In fact, TCGA does not provide such data. Since we initially designed the software for the task of analyzing their dataset, we did not wish to assume that paired RNA-Seq data was available.
We do not have access to the TCGA tumour samples used. While TCGA performed Sanger validation of a small subset of their variants, none of the variants validated by Veridical had any associated Sanger sequencing data.
The variants predicted by the Shannon splicing mutation analysis pipeline exhibit changes in information content (ΔRi). The change in information content of a splice site is directly related to the thermodynamics of the binding event (Schneider, 1997) and therefore the strength of the splice site (Rogan et al., 1998; Rogan et al., 2003). Therefore, one would expect ΔRi to be highly predictive of valid splicing variants, where cryptic sites are expected to result in increased splice site strength, and natural sites would be expected to be weakened.
The median ΔRi for cryptic variants in the set of pre-Veridical variants (those variants called as somatic mutations and predicted by the Shannon Pipeline to affect mRNA splicing) was 2.6 bits, while the median for validated variants was 2.38 bits. For non-cryptic variants the pre- and post-validation median ΔRi values were -2.52 and -3.18 bits, respectively.
The information content of cryptic variants actually decreased slightly in the validated set. This is likely related to other factors, such as the initial strength of the natural site and the exon length, which are not accounted for by this analysis. However, the average information content of natural sites did decrease, as expected. The reason the decrease was not more substantial, again, is likely the involvement of other factors, such as the distance of the variant from the natural site, which impacts its Ri value. We have refrained from including this discussion in the manuscript, since there is no requirement per se for Veridical to be used with the Shannon Pipeline. An explicit definition of the required input format is provided in the paper.
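The quantity discussed above can be illustrated with a minimal sketch. The frequency matrix, sequences, and function names below are hypothetical, the small-sample correction described by Schneider (1997) is omitted, and this is not the Shannon Pipeline's implementation; it merely illustrates how a variant's ΔRi reflects a change in splice site strength, with positive values strengthening a site (e.g. cryptic activation) and negative values weakening it.

```python
import math

# Toy position frequency matrix for a 4-position motif (hypothetical values,
# NOT the real donor/acceptor models used by the Shannon Pipeline).
freq = [
    {"A": 0.60, "C": 0.10, "G": 0.20, "T": 0.10},
    {"A": 0.05, "C": 0.05, "G": 0.85, "T": 0.05},
    {"A": 0.05, "C": 0.05, "G": 0.05, "T": 0.85},
    {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25},
]

def ri(seq):
    """Individual information (bits): Ri = sum over positions l of 2 + log2 f(b, l).
    Omits the sampling correction of the full treatment."""
    return sum(2 + math.log2(freq[l][b]) for l, b in enumerate(seq))

def delta_ri(ref, alt):
    """Change in site strength caused by a variant (bits):
    positive strengthens the site, negative weakens it."""
    return ri(alt) - ri(ref)

# A T>G change at the highly conserved third position weakens this toy site.
change = delta_ri("AGTA", "AGGA")
```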
(doi: 10.5256/f1000research.3444.r3524)
https://f1000research.com/articles/3-8/v1#referee-response-3524
The idea is ingenious and novel as applied to mutations affecting splicing, although not in general (see VAAST [Yandell et al. 2011], which exploits the availability of large numbers of samples to identify likely deleterious variants; it is also the premise for the HapMap and 1000 Genomes projects). The software is fast and practical, being able to test thousands of variants in hundreds of samples within hours. This is the first software of its kind, and if accurate it will be a very valuable resource for clinical genomics.
That being said, while the article provides proof-of-concept and clearly demonstrates the potential of the tool with specific examples, there are several missing pieces that are needed to provide the readers with a view of its overall performance and limitations and to help them use it effectively.
Major comments:
Also, several other tools (e.g., MATS, MISO, SpliceTrap) have been developed for the related problem of discovering alternative splicing events and comparing them among samples. MATS, in particular, allows differential splicing analyses with multiple replicates. Ideally the paper would include a comparison with MATS on the data set analyzed; this comparison is informative even if MATS is used with only a subset of the samples.
In particular, the 14 supporting reads in the left intron in Figure 4 seem hardly sufficient to indicate an II event, all the more so as there is a larger number of reads in the neighboring intron (not predicted to be II). The authors should provide other types of evidence for this event.
Minor comments:
This is a potentially very powerful and useful tool. I gave the article an 'Approved with Reservations' because it is critical to include results in the aggregate to complement the showcased examples, as well as to discuss its limitations. I will gladly change my assessment once these few issues are addressed.
Competing Interests: No competing interests were disclosed.
We greatly appreciate your remarks and useful suggestions. All references to figures pertain to the first version of the manuscript.
First, we address your comments regarding VAAST and de novo exome annotation software packages. VAAST uses a feature-based, maximum-likelihood approach to identify variants suspected to be pathogenic. While Yandell et al. present a general, and indeed very useful, method, it differs from Veridical in two key respects. Most importantly, Veridical does not aim to identify putatively pathogenic variants, but rather to confirm existing in silico predictions thereof. Although it is not designed for this purpose, one could envision using the likelihood approach of VAAST to extract solely splicing variants, annotating them, and then confirming these variants with Veridical using corresponding RNA-Seq data (assuming such data exist). Veridical is the only software available for statistical validation of splicing mutations. While both programs prioritize variants, their goals are rather dichotomous. Similarly, Veridical depends upon the use of an (ideally conservative) exome annotation. We do infer exon skipping events (i.e. alternative splicing) de novo, but only to catalog dysregulated splicing “phenotypes” due to genomic sequence variants. Since we rely upon existing annotations, improvements in this area will benefit Veridical and complement it well. In many ways, our parsimonious approach was possible precisely because the problem we address is considerably easier than de novo alternative splicing inference. We are not suggesting that Veridical be used to analyze all variants resulting from exome or whole-genome sequencing experiments. The Shannon Pipeline, which predicts candidate variants, contains several default or modifiable filters that can limit the number of candidates input to Veridical. We suggest that our approach be coupled to existing approaches to act as an a posteriori, hypothesis-driven check on the veridicality of specific variants.
We consider alternative splicing to be a different problem, for the following reasons. The authors state: “MATS calculates a P-value for each exon isoform by comparing the observed posterior probability with a set of simulated posterior probabilities from the null hypothesis.” We do not simulate posterior probabilities; we calculate them directly from the distribution of control samples, which represents the null hypothesis. Further, MATS requires the user to set uniform thresholds for all alternative splicing events, whereas appropriate thresholds in our setting may vary between variants (e.g. inactivating versus leaky mutations). Finally, neither MATS, MISO, nor SpliceTrap provides any means of analyzing novel isoforms created by the activation of cryptic splice sites.
We appreciate your insightful summary of our underlying statistical methodology and acknowledge the limitations of our statistical inference method. It is certainly the case that the use of different RNA-Seq protocols could prove problematic. Veridical is intended to be used with RNA-Seq data from the same individual as matched DNA-Seq data. We assume that the control RNA-Seq data are generated with comparable sequencing protocols. This has been articulated in the Discussion.
We agree that levels of intron inclusion and alternative splicing can be highly variable across samples, due to factors extrinsic to the mutation itself. Such variability is a significant contributory factor to the production of transcriptomic “noise”. Given that statistically significant levels of intron inclusion were observed across all control samples, the prevalence of adjacent intron inclusion (which is not uncommon, per se) does not weaken the inference that the predicted splicing mutation increases the level of intron inclusion. However, in such cases, the statistical threshold to achieve significance would be higher. A recent analysis indicates that p-value thresholds of 0.005 to 0.001 are appropriate (Johnson, 2013). Our statistical approach tacitly accounts for this, and indeed greater levels of such “noise” could actually increase the statistical utility of our computation. Because of the wide variation of intron inclusion in both normal and tumour transcriptomes, we deliberately elected against normalizing gene-level expression in the program.
Regarding the specific example cited, the 14 intronic reads in the variant-containing sample in the read-abundance category (Figure 4(A)) are contrasted with a per-sample mean of 4.051 reads in this intron across all non-variant-containing tumours and normal samples (i.e. the totality of control samples), which results in a p-value of 0.04. We agree that this is not a strong result, and it would not permit us to reject the null hypothesis had the threshold significance level been reduced to ≤ 0.01 (which the user is free to stipulate). In this particular case, the variant is still strongly validated, due to the presence of junction-spanning, exon-skipping reads (which are absent from all of the control samples). We do not stipulate that the predicted mutation have only one abnormal read type; rather, the test assesses each consequence separately. We suggest that the corroborating evidence be taken together in support of variant-induced missplicing. Furthermore, junction-spanning exon skipping is, in general, rarer than intron inclusion in these data. Nevertheless, we do show examples of read-abundance tests for intron inclusion that achieve a significance level of < 0.01, such as within TMTC2 in Figure 6(C). As you point out, intron inclusion patterns also vary by intron, and there are a large number of such intron inclusion reads in the downstream intron in Figure 4(A). However, in our entire set of predicted splicing mutations, we did not find any pertaining to the intron between exons 12 and 13. The observed increase in intron inclusion between exons 12 and 13 may be related to the weak (5.2 bit) donor (5′) splice site of exon 12. In fact, substantial intron inclusion was expected in intron 11, due to the minimal strength of the exon 11 donor of 2.1 bits.
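The comparison of a variant-containing sample against the control distribution can be sketched as a one-sided empirical test. The read counts below are hypothetical, and this is not necessarily Veridical's exact statistic; it merely illustrates computing a p-value directly from control samples, which embody the null hypothesis, rather than from simulated posteriors.

```python
# Hypothetical intron-read counts: one variant-containing sample with 14 reads,
# versus counts observed in control (non-variant-containing) samples.
variant_reads = 14
control_reads = [4, 0, 2, 7, 3, 5, 1, 6, 0, 2, 9, 4, 3, 5, 2, 6, 1, 4, 8, 3]

def empirical_p(observed, controls):
    """One-sided empirical p-value: the probability, under the null hypothesis
    embodied by the control distribution, of a count at least as extreme as
    the observed one. The +1 terms give the standard add-one correction so
    the p-value is never exactly zero."""
    exceed = sum(1 for c in controls if c >= observed)
    return (exceed + 1) / (len(controls) + 1)

p = empirical_p(variant_reads, control_reads)
```

With hundreds of control samples, as used in the paper, such an estimate becomes correspondingly more precise, which is why a large control set strengthens the inference.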
Regarding the comment concerning cryptic sites, we always mean non-canonical splice sites activated by sequence-level variation, which are distinctly different from tissue-related alternatively spliced isoforms lacking one or more constitutive exons. We and others have reported numerous examples of common SNPs inducing cryptic splice site activation, which could result in the annotation of accordingly altered transcripts.
It is unfortunate that Veridical had to be categorized as a “web tool”. We would have preferred to indicate that the program is a standalone tool, but F1000Research does not provide this category, even though other standalone programs have been published by the journal with the Web Tool designation. In the Web Tools category, the journal guidelines require that we provide a set of examples (complete with input and output files) representing the range of results obtained with the software. Therefore, providing a full analysis of the complete TCGA breast cancer dataset is beyond the scope of this paper. It will be presented elsewhere.
We believe that the statements in the manuscript's conflict of interests section are sufficient to describe the terms of use. We have added the requested details on Veridical.org, both indicating its duration and instructions for obtaining longer-term access to this software.
First, we address your comments regarding VAAST and de novo exome annotation software packages. VAAST uses a feature-based, maximal likelihood, approach to identify variants suspected to be pathogenic. While Yandell et al. present a general, and indeed very useful method, it is very different from Veridical in two key respects. Most importantly, Veridical does not aim to identify putatively pathogenic variants, but rather, to confirm existing in silico predictions thereof. Although it is not designed for this purpose, one could envision using the likelihood approach used by VAAST to extract solely splicing variants, annotating them, and then confirming these variants using Veridical with corresponding RNA-Seq data (assuming such data exists). Veridical is the only software available for statistical validation of splicing mutations. While both programs prioritize variants, their goals are rather dichotomous. Similarly, Veridical depends upon the use of an (ideally conservative) exome annotation. We do infer exon skipping events (i.e. alternative splicing) de novo, but only to catalog dysregulated splicing “phenotypes” due to genomic sequence variants. Since we rely upon existing annotations, improvements in this area will serve to benefit Veridical and complement it well. In many ways, our parsimonious approach was possible precisely because the problem we address is quite a lot easier to address than de novo alternative splicing inference. We are not suggesting that Veridical be used to analyze all variants, resulting from exome or whole-genome sequencing experiments. The Shannon Pipeline, which predicts candidate variants, contains several default or modifiable filters which can limit the number of candidates input to Veridical. We are suggesting that our approach be coupled to existing approaches to act as an a posteriori, hypothesis-driven, check on the veridicality of specific variants.
We consider alternative splicing to be a different problem, for the following reasons. The authors state: “MATS calculates a P-value for each exon isoform by comparing the observed posterior probability with a set of simulated posterior probabilities from the null hypothesis.” We do not simulate posterior probabilities; we calculate them directly from the distribution of control samples, which represent the null hypothesis. Further, MATS requires the user to set uniform thresholds for all alternative splicing events, which in our case, may vary between variants (i.e. inactivating versus leaky mutations). Finally, neither MATS, nor Miso or SpliceTrap provide any means of analyzing novel isoforms created by the activation of cryptic splice sites.
We appreciate your insightful summary of our underlying statistical methodology and appreciate the limitations of our statistical inference method. It is certainly the case that the use of different RNA-Seq protocols could prove problematic. Veridical is intended to be used with RNA-Seq data from the same individual as matched DNA-Seq data. We assume that the control RNA-Seq data are generated with comparable sequencing protocols. This has been articulated in the Discussion.
We agree that levels of intron inclusion and alternative splicing can be highly variable across samples, due to factors extrinsic to the mutation itself. Such variability is a significant contributory factor to the production of transcriptomic “noise”. Given that statistically significant levels of intron inclusion were observed across all control samples, the prevalence of adjacent intron inclusions (which is not uncommon, per se), does not weaken the inference that the predicted splicing mutation increases the levels of intron inclusion. However, in such cases, the statistical threshold to achieve significance would be higher. A recent analysis indicates that p-value thresholds of 0.005 to 0.001 are appropriate (Johnson, 2013). Our statistical approach tacitly accounts for this and indeed greater levels of such “noise” could actually increase the statistical utility of our computation. Because of a wide degree of variation of intron inclusion in both normal and tumor transcriptomes, we deliberately elected against normalizing gene level expression in the program.
Regarding the specific example cited, the 14 intronic reads in the sample containing the variant in the read-abundance category (Figure 4 (A)) are contrasted with the per sample mean of 4.051 reads across all non-variant-containing tumours and normal samples in this intron (i.e. the totality of control samples, which results in a p-value of 0.04. We agree that this is not a strong result, and would not permit us to reject the null hypothesis had the threshold significance level been reduced to ≤ 0.01 (which the user is free to stipulate).In this particular case, the variant is still strongly validated, due to the presence of junction-spanning, exon skipping reads (that are absent from all of the control samples). We do not stipulate that the predicted mutation have only one abnormal read type, but rather, that the test assesses each consequence separately. We suggest that the corroborating evidence be taken together in support of variant-induced missplicing. Furthermore, junction-spanning exon skipping is, in general, rarer than intron inclusion in these data. Nevertheless, we do show examples of read-abundance tests for intron inclusion that achieve a significance level of < 0.01, such as within TMTC2 in Figure 6(C). As you point out, intron inclusion patterns also vary by intron and there are a large number of such intron inclusion reads in the downstream intron in Figure 4(A). However, in our entire set of predicted splicing mutations, we did not find any pertaining to the intron between exons 12 and 13. The observed increased intron inclusion between exons 12 and 13 may be related to the weak (5.2 bit) donor (5’) splice site in exon 12. In fact, substantial intron inclusion was expected in intron 11, due to the minimal strength of the exon 11 donor of 2.1 bits.
Regarding the comment concerning cryptic sites, we always mean non-canonical splice sites activated by sequence-level variation, which are distinctly different from tissue-related alternatively spliced isoforms lacking one or more constitutive exons. We and others have reported numerous examples of common SNPs inducing cryptic splice site activation, which could result in the annotation of accordingly altered transcripts.
It is unfortunate that Veridical had to be categorized as a “web tool”. We would have preferred to indicate that the program is a standalone-tool. F1000Research does not provide this category, even though other standalone programs have been published by the journal with the Web Tool designation. In the Web Tools category, the journal guidelines require that we provide a set of examples (complete with input and output files) representing the range of results that are obtained with the software. Therefore, providing a full analysis of the complete TCGA breast cancer dataset is beyond the scope of this paper. It will be presented elsewhere.
We believe that the statements in the manuscript's conflict of interests section are sufficient to describe the terms of use. We have added the requested details on Veridical.org, both indicating its duration and instructions for obtaining longer-term access to this software.
First, we address your comments regarding VAAST and de novo exome ... Continue reading We greatly appreciate your remarks and useful suggestions. All references to figures pertain to the first version of the manuscript.
First, we address your comments regarding VAAST and de novo exome annotation software packages. VAAST uses a feature-based, maximal likelihood, approach to identify variants suspected to be pathogenic. While Yandell et al. present a general, and indeed very useful method, it is very different from Veridical in two key respects. Most importantly, Veridical does not aim to identify putatively pathogenic variants, but rather, to confirm existing in silico predictions thereof. Although it is not designed for this purpose, one could envision using the likelihood approach used by VAAST to extract solely splicing variants, annotating them, and then confirming these variants using Veridical with corresponding RNA-Seq data (assuming such data exists). Veridical is the only software available for statistical validation of splicing mutations. While both programs prioritize variants, their goals are rather dichotomous. Similarly, Veridical depends upon the use of an (ideally conservative) exome annotation. We do infer exon skipping events (i.e. alternative splicing) de novo, but only to catalog dysregulated splicing “phenotypes” due to genomic sequence variants. Since we rely upon existing annotations, improvements in this area will serve to benefit Veridical and complement it well. In many ways, our parsimonious approach was possible precisely because the problem we address is quite a lot easier to address than de novo alternative splicing inference. We are not suggesting that Veridical be used to analyze all variants, resulting from exome or whole-genome sequencing experiments. The Shannon Pipeline, which predicts candidate variants, contains several default or modifiable filters which can limit the number of candidates input to Veridical. We are suggesting that our approach be coupled to existing approaches to act as an a posteriori, hypothesis-driven, check on the veridicality of specific variants.
We consider alternative splicing to be a different problem, for the following reasons. The authors state: “MATS calculates a P-value for each exon isoform by comparing the observed posterior probability with a set of simulated posterior probabilities from the null hypothesis.” We do not simulate posterior probabilities; we calculate them directly from the distribution of control samples, which represent the null hypothesis. Further, MATS requires the user to set uniform thresholds for all alternative splicing events, which in our case, may vary between variants (i.e. inactivating versus leaky mutations). Finally, neither MATS, nor Miso or SpliceTrap provide any means of analyzing novel isoforms created by the activation of cryptic splice sites.
We appreciate your insightful summary of our underlying statistical methodology and appreciate the limitations of our statistical inference method. It is certainly the case that the use of different RNA-Seq protocols could prove problematic. Veridical is intended to be used with RNA-Seq data from the same individual as matched DNA-Seq data. We assume that the control RNA-Seq data are generated with comparable sequencing protocols. This has been articulated in the Discussion.
We agree that levels of intron inclusion and alternative splicing can be highly variable across samples, due to factors extrinsic to the mutation itself. Such variability is a significant contributory factor to the production of transcriptomic “noise”. Given that statistically significant levels of intron inclusion were observed across all control samples, the prevalence of adjacent intron inclusions (which is not uncommon, per se), does not weaken the inference that the predicted splicing mutation increases the levels of intron inclusion. However, in such cases, the statistical threshold to achieve significance would be higher. A recent analysis indicates that p-value thresholds of 0.005 to 0.001 are appropriate (Johnson, 2013). Our statistical approach tacitly accounts for this and indeed greater levels of such “noise” could actually increase the statistical utility of our computation. Because of a wide degree of variation of intron inclusion in both normal and tumor transcriptomes, we deliberately elected against normalizing gene level expression in the program.
Regarding the specific example cited, the 14 intronic reads in the sample containing the variant in the read-abundance category (Figure 4 (A)) are contrasted with the per sample mean of 4.051 reads across all non-variant-containing tumours and normal samples in this intron (i.e. the totality of control samples, which results in a p-value of 0.04. We agree that this is not a strong result, and would not permit us to reject the null hypothesis had the threshold significance level been reduced to ≤ 0.01 (which the user is free to stipulate).In this particular case, the variant is still strongly validated, due to the presence of junction-spanning, exon skipping reads (that are absent from all of the control samples). We do not stipulate that the predicted mutation have only one abnormal read type, but rather, that the test assesses each consequence separately. We suggest that the corroborating evidence be taken together in support of variant-induced missplicing. Furthermore, junction-spanning exon skipping is, in general, rarer than intron inclusion in these data. Nevertheless, we do show examples of read-abundance tests for intron inclusion that achieve a significance level of < 0.01, such as within TMTC2 in Figure 6(C). As you point out, intron inclusion patterns also vary by intron and there are a large number of such intron inclusion reads in the downstream intron in Figure 4(A). However, in our entire set of predicted splicing mutations, we did not find any pertaining to the intron between exons 12 and 13. The observed increased intron inclusion between exons 12 and 13 may be related to the weak (5.2 bit) donor (5’) splice site in exon 12. In fact, substantial intron inclusion was expected in intron 11, due to the minimal strength of the exon 11 donor of 2.1 bits.
Regarding the comment concerning cryptic sites, we always mean non-canonical splice sites activated by sequence-level variation, which are distinctly different from tissue-related alternatively spliced isoforms lacking one or more constitutive exons. We and others have reported numerous examples of common SNPs inducing cryptic splice site activation, which could result in the annotation of accordingly altered transcripts.
It is unfortunate that Veridical had to be categorized as a “web tool”. We would have preferred to indicate that the program is a standalone-tool. F1000Research does not provide this category, even though other standalone programs have been published by the journal with the Web Tool designation. In the Web Tools category, the journal guidelines require that we provide a set of examples (complete with input and output files) representing the range of results that are obtained with the software. Therefore, providing a full analysis of the complete TCGA breast cancer dataset is beyond the scope of this paper. It will be presented elsewhere.
We believe that the statements in the manuscript's conflict of interests section are sufficient to describe the terms of use. We have added the requested details on Veridical.org, both indicating its duration and instructions for obtaining longer-term access to this software.
First, we address your comments regarding VAAST and de novo exome annotation software packages. VAAST uses a feature-based, maximal likelihood, approach to identify variants suspected to be pathogenic. While Yandell et al. present a general, and indeed very useful method, it is very different from Veridical in two key respects. Most importantly, Veridical does not aim to identify putatively pathogenic variants, but rather, to confirm existing in silico predictions thereof. Although it is not designed for this purpose, one could envision using the likelihood approach used by VAAST to extract solely splicing variants, annotating them, and then confirming these variants using Veridical with corresponding RNA-Seq data (assuming such data exists). Veridical is the only software available for statistical validation of splicing mutations. While both programs prioritize variants, their goals are rather dichotomous. Similarly, Veridical depends upon the use of an (ideally conservative) exome annotation. We do infer exon skipping events (i.e. alternative splicing) de novo, but only to catalog dysregulated splicing “phenotypes” due to genomic sequence variants. Since we rely upon existing annotations, improvements in this area will serve to benefit Veridical and complement it well. In many ways, our parsimonious approach was possible precisely because the problem we address is quite a lot easier to address than de novo alternative splicing inference. We are not suggesting that Veridical be used to analyze all variants, resulting from exome or whole-genome sequencing experiments. The Shannon Pipeline, which predicts candidate variants, contains several default or modifiable filters which can limit the number of candidates input to Veridical. We are suggesting that our approach be coupled to existing approaches to act as an a posteriori, hypothesis-driven, check on the veridicality of specific variants.
We consider alternative splicing to be a different problem, for the following reasons. The authors state: “MATS calculates a P-value for each exon isoform by comparing the observed posterior probability with a set of simulated posterior probabilities from the null hypothesis.” We do not simulate posterior probabilities; we calculate them directly from the distribution of control samples, which represent the null hypothesis. Further, MATS requires the user to set uniform thresholds for all alternative splicing events, which in our case, may vary between variants (i.e. inactivating versus leaky mutations). Finally, neither MATS, nor Miso or SpliceTrap provide any means of analyzing novel isoforms created by the activation of cryptic splice sites.
We appreciate your insightful summary of our underlying statistical methodology and appreciate the limitations of our statistical inference method. It is certainly the case that the use of different RNA-Seq protocols could prove problematic. Veridical is intended to be used with RNA-Seq data from the same individual as matched DNA-Seq data. We assume that the control RNA-Seq data are generated with comparable sequencing protocols. This has been articulated in the Discussion.
We agree that levels of intron inclusion and alternative splicing can be highly variable across samples, due to factors extrinsic to the mutation itself. Such variability is a significant contributory factor to the production of transcriptomic “noise”. Given that statistically significant levels of intron inclusion were observed across all control samples, the prevalence of adjacent intron inclusions (which is not uncommon, per se), does not weaken the inference that the predicted splicing mutation increases the levels of intron inclusion. However, in such cases, the statistical threshold to achieve significance would be higher. A recent analysis indicates that p-value thresholds of 0.005 to 0.001 are appropriate (Johnson, 2013). Our statistical approach tacitly accounts for this and indeed greater levels of such “noise” could actually increase the statistical utility of our computation. Because of a wide degree of variation of intron inclusion in both normal and tumor transcriptomes, we deliberately elected against normalizing gene level expression in the program.
Regarding the specific example cited, the 14 intronic reads in the sample containing the variant in the read-abundance category (Figure 4 (A)) are contrasted with the per sample mean of 4.051 reads across all non-variant-containing tumours and normal samples in this intron (i.e. the totality of control samples, which results in a p-value of 0.04. We agree that this is not a strong result, and would not permit us to reject the null hypothesis had the threshold significance level been reduced to ≤ 0.01 (which the user is free to stipulate).In this particular case, the variant is still strongly validated, due to the presence of junction-spanning, exon skipping reads (that are absent from all of the control samples). We do not stipulate that the predicted mutation have only one abnormal read type, but rather, that the test assesses each consequence separately. We suggest that the corroborating evidence be taken together in support of variant-induced missplicing. Furthermore, junction-spanning exon skipping is, in general, rarer than intron inclusion in these data. Nevertheless, we do show examples of read-abundance tests for intron inclusion that achieve a significance level of < 0.01, such as within TMTC2 in Figure 6(C). As you point out, intron inclusion patterns also vary by intron and there are a large number of such intron inclusion reads in the downstream intron in Figure 4(A). However, in our entire set of predicted splicing mutations, we did not find any pertaining to the intron between exons 12 and 13. The observed increased intron inclusion between exons 12 and 13 may be related to the weak (5.2 bit) donor (5’) splice site in exon 12. In fact, substantial intron inclusion was expected in intron 11, due to the minimal strength of the exon 11 donor of 2.1 bits.
Regarding the comment concerning cryptic sites, we always mean non-canonical splice sites activated by sequence-level variation, which are distinct from tissue-related alternatively spliced isoforms lacking one or more constitutive exons. We and others have reported numerous examples of common SNPs inducing cryptic splice site activation, which could result in the annotation of correspondingly altered transcripts.
It is unfortunate that Veridical had to be categorized as a “web tool”. We would have preferred to indicate that the program is a standalone tool, but F1000Research does not provide this category, even though other standalone programs have been published by the journal with the Web Tool designation. In the Web Tools category, the journal guidelines require that we provide a set of examples (complete with input and output files) representing the range of results obtained with the software. Providing a full analysis of the complete TCGA breast cancer dataset is therefore beyond the scope of this paper; it will be presented elsewhere.
We believe that the statements in the manuscript's conflict of interests section are sufficient to describe the terms of use. We have added the requested details on Veridical.org, indicating both the trial duration and instructions for obtaining longer-term access to this software.
(doi: 10.5256/f1000research.3444.r3119)
https://f1000research.com/articles/3-8/v1#referee-response-3119
Competing Interests: No competing interests were disclosed.
We greatly appreciate your review of this paper.
Regarding our use of the term “validation”, we understand that in silico validation is not comparable to in vitro or in vivo validation assays. That said, we would like to elaborate. The DNA and RNA samples were matched from the same individual, which is not the case for many in vitro assays, e.g., mini-gene expression analysis by reverse transcription followed by PCR. The very large set of controls used in our analysis is also atypical in experimental validation of proposed mutations. When results are obtained that are statistically significant, it is conventional to refer to them as “validating”. Even if DNA-Seq and RNA-Seq data were not matched from the same individual, Veridical would still determine whether known splicing mutations were expressed in a known subset of tumours; however, we would instead refer to this as “corroborating”.
The comparison with GWAS is inappropriate: we are not comparing the means of distinct case and control distributions; rather, we compute the read distribution from the control samples and determine the probability that the mutation-bearing sample falls within that distribution. Additionally, the initial hypothesis in a GWAS is quite vague, so the resultant associations are not of the same category as those provided by Veridical. While Veridical does indeed validate the splicing consequence observed, when we say that it validates the mutation we mean only that it strongly corroborates the mutation as a causative agent of the splicing consequence. The responsibility for deciding whether the p-value reported by the program is sufficient is left to the user, who should avoid incorrect post hoc ergo propter hoc arguments.
Our reference to prioritization of variants for subsequent verification is based upon the results of Veridical's statistical tests. We explicitly mention that the software is not well suited to the analysis of raw output from genome-scale analyses, and that filtering should be performed a priori, as we did with a separate Perl script, which is available with Veridical.
(doi: 10.5256/f1000research.3444.r3120)
https://f1000research.com/articles/3-8/v1#referee-response-3120
Indeed, the interpretation of genetic variants is a difficult and key issue in current research.
The integration of genomic and transcriptomic data, namely the use of RNA-Seq-based transcriptome characterization as a “molecular phenotype” of cells, is useful and meaningful.
The software is standalone (not a web tool) and is complemented by Perl scripts that facilitate data management.
The manuscript declares that “Veridical and its associated software programs are available at: www.veridical.org”.
In fact, Veridical is commercially available to the scientific community. A trial version lasting 30 days can be downloaded from the website, but in order to obtain the binaries, the website requires registration with an institutional email address; the developers reserve the right to deny access to users who register with third-party mail servers (Gmail, Yahoo, Hotmail, etc.).
No pricing information is included in the manuscript or, more importantly, in the webpages from which the software is downloaded, either before or after registration.
After downloading the software, I was not able to find the R scripts that can be used to generate some of the plots, as indicated in the manuscript.
That said, the paper is written in clear language and is quite complete.
I propose a few revisions that, in my opinion, can improve the manuscript's readability and clarity.
Introduction
Methods
Results
I have two important criticisms about the Results section:
Competing Interests: No competing interests were disclosed.
The requirement to provide an institutional email address is quite ubiquitous across a number of academic disciplines, including many fields of informatics. In bioinformatics, for example, ANNOVAR requires non-profit users to register an address, and HGMD does as well. ChemAxon, software for cheminformatics, requires this. In astroinformatics, the Sloan Digital Sky Survey III (SDSS-III) also requires an institutional email. These examples represent a fraction of the numerous scientific websites that request such emails. We consider it a professional courtesy to provide legitimate email information, and we reserve the right to disapprove addresses of individuals who seek to mask their identity, which may enable them to avoid acknowledging the provenance of the software.
We specifically address each of your other comments below. All references to figures pertain to the first version of the manuscript.
López-Bigas et al. (2005) conducted a general analysis using the complete sets of SwissProt genes and OMIM known disease genes. Their mathematical model does not depend upon any specific hereditary disease. Nevertheless, certain genes have been demonstrated to exhibit very high frequencies of splicing mutations (ATM, NF1).
This has been corrected.
This has been revised accordingly.
In general, we provide exon numbers for reference and to allow for future exon identification. While we specifically describe in the captions which exon contains the variant and relate this to the exon number, we agree that graphical indications of key figure elements within IGV screenshots could improve the clarity. These figures have been revised accordingly.
In this sentence, we are not referring to the ubiquitous assumption of an underlying normal distribution, which is indeed not required for non-parametric tests. Instead, the reference is to other, lesser-known assumptions that are tacit in most non-parametric methods. The citation we provide (Johnson, 1995) refutes the commonly held notion that non-parametric tests make no assumptions about the underlying data distribution, and it describes numerous pitfalls that can occur when using non-parametric methods. For example, the author articulates a particular assumption implicit in a comparison of means via the Mann-Whitney test, which actually requires that “the two distributions are identical, in shape and scale, differing only in their means. This assumption can be harder to justify than the asymptotic normality demanded by the t test, and is rarely evaluated.”
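The pitfall Johnson describes can be demonstrated numerically. The sketch below is our own illustration (not taken from the cited paper): two hypothetical samples with identical means but very different shapes produce a rank-sum statistic far from its null expectation, showing that the Mann-Whitney test is sensitive to distributional shape and scale, not purely to location.

```python
import math

def mann_whitney_u(a, b):
    """U statistic for sample a versus b. Values are assumed distinct
    across a + b, so no tie correction is needed for this illustration."""
    ranks = {v: i + 1 for i, v in enumerate(sorted(a + b))}
    rank_sum_a = sum(ranks[v] for v in a)
    return rank_sum_a - len(a) * (len(a) + 1) / 2

# Hypothetical samples with identical means (10.0) but different shapes:
# a is symmetric and tight; b is strongly skewed by a single outlier.
a = [9.5, 9.6, 9.7, 9.8, 9.9, 10.1, 10.2, 10.3, 10.4, 10.5]
b = [7.5, 7.6, 7.7, 7.8, 7.9, 8.0, 8.1, 8.2, 8.3, 28.9]

u = mann_whitney_u(a, b)
n1, n2 = len(a), len(b)
mu = n1 * n2 / 2                                   # null expectation of U
sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)    # null standard deviation
z = (u - mu) / sigma
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # normal approx.
print(f"means: {sum(a) / n1:.1f} vs {sum(b) / n2:.1f}")
print(f"U = {u}, null mean = {mu}, z = {z:.2f}, p = {p:.4f}")
```

Despite equal means, the test emphatically rejects, because the distributions differ in shape; treating it as a pure test of means would therefore be misleading, which is precisely the tacit assumption at issue.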
The revised manuscript describes the procedure and criteria for determining the number of control samples needed.
The profiling information and example results are more clearly demarcated in the revision.
We now elaborate upon the experimental protocols, processing, and data provenance. The details of the RNA-Seq data and BAM file generation can be obtained from the TCGA consortium, which generated them, specifically in the supplementary methods of Koboldt et al. (2012). All of the data input directly into Veridical are available, and even the program used to generate the histograms is provided. This ensures that all of our results are reproducible.
The input data were obtained by analyzing the original TCGA dataset with the Shannon pipeline, as the paper describes. The pipeline has been published in a peer-reviewed context and is also available on a trial basis. The full TCGA analytical results obtained with the Shannon pipeline are considerable in length; they are beyond the scope of this paper and will be described elsewhere.
The journal guidelines concerning “web tools” state: “The article should provide examples of suitable input data sets and include an example of the output that can be expected from the tool and how this output should be interpreted.” The examples shown are representative of the splicing analysis outcomes present in the full TCGA analysis. The examples provided in this paper were designed to illustrate the capabilities of Veridical and are not an attempt to draw biological conclusions from this large dataset.
We provide in Table 2 the read counts for the non-variant-containing tumour samples and for the variant-containing samples. We acknowledge that cancer gene expression results in gross dysregulation of mRNA splicing, thereby altering the presentation of the transcriptome. This was a motivating factor behind our choice of statistical method.
If such exons exist that are annotated as introns by RefSeq and are actively transcribed in normal or breast cancer tissues, the large number of control samples will reflect this, and such events will accordingly not trigger statistically significant intron inclusion calls. We have added a comment to the Discussion reflecting this.
This has been addressed in the revision.