CusVarDB: A tool for building customized sample-specific variant protein database from next-generation sequencing datasets

Cancer genome sequencing studies have revealed a number of variants in coding regions of several genes. Some of these coding variants play an important role in activating specific pathways that drive proliferation. Coding variants presented on cancer cell surfaces by the major histocompatibility complex serve as neo-antigens and result in immune activation. The success of immune therapy in patients is attributed to the neo-antigen load on cancer cell surfaces. However, which coding variants are expressed at the protein level cannot be predicted from genomic data alone. Complementing genomic data with proteomic data can potentially reveal coding variants that are expressed at the protein level. However, identification of variant peptides from mass spectrometry data remains challenging due to the lack of an appropriate tool that integrates genomic and proteomic data analysis pipelines. To overcome this problem, and for the ease of biologists, we have developed a graphical user interface (GUI)-based tool called CusVarDB. We integrated a variant calling pipeline to generate sample-specific variant protein databases from next-generation sequencing datasets. We validated the tool with triple-negative breast cancer cell line datasets and identified 423, 408, 386 and 361 variant peptides from the BT474, MDMAB157, MFM223 and HCC38 datasets, respectively.

Introduction
Cancer genomes harbor mutations in the coding regions of several genes; for example, the BRAF V600E mutation is known to drive some melanomas (Chapman et al., 2011). Some of these mutant proteins are proteolytically processed in cancer cells, resulting in major histocompatibility complex (MHC) presentation of mutant peptides. These mutant peptides serve as neo-antigens that recruit T cells and result in immune activation (Kreiter et al., 2015). However, not all coding mutations are expressed at the protein level. Therefore, it is important to identify the mutant proteins expressed by cancer cells. Yet there are no easy-to-use pipelines for biologists to identify such coding variants, which alter protein sequences and may play an important role in tumorigenesis. Detection of cancer-specific proteoforms has been studied by several research teams using proteogenomics methods. This approach integrates proteomics data with genome sequence data to identify the protein complement of genomic variants. Public protein databases such as UniProt (Apweiler et al., 2004) can be used to identify variant peptides (Alfaro et al., 2017), but the millions of protein variants in these databases might increase the probability of identifying false positives. Therefore, there is a need for an improved method to identify sample-specific sequence variations at the proteomic level. Tools such as CustomProDB (Wang & Zhang, 2013) and MZVar (https://bitbucket.org/sib-pig/mzvar-public/src/master/) have been developed for generating variant protein databases. These tools require processed files such as a VCF or BED file as input. Executing the preprocessing steps requires knowledge of computation and also requires multiple tools to generate the output. Hence, we developed CusVarDB, with a built-in genomics pipeline to identify variants and create a custom variant protein database.

Implementation
CusVarDB is available at https://sourceforge.net/projects/cusvardb/ and http://bioinfo-tools.com/Downloads/CusVarDB/V1.0.0/. The tool provides a graphical user interface (GUI) for easy execution of next-generation sequencing (NGS) pipelines. The GUI was developed using Microsoft Visual Studio Community edition 2017. Linux commands are executed through the Python program (terminal.py) developed by Linwei, available on GitHub. A portable version of Perl is used to execute the ANNOVAR scripts. A Python script generates the custom variant database; the scripts were made portable using PyInstaller. A dry-run concept is implemented in the tool so that commands can be customized to the user's needs and run in batches.
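The dry-run idea can be sketched in a few lines of Python; the function and the example command strings below are illustrative assumptions, not CusVarDB's actual code:

```python
import subprocess

def run_pipeline(commands, dry_run=True):
    """Run a batch of shell commands, or just preview them (dry run).

    `commands` is a list of command strings; in dry-run mode the
    commands are returned for the user to inspect and customize
    before a real execution.
    """
    if dry_run:
        return commands  # preview only; nothing is executed
    for cmd in commands:
        subprocess.run(cmd, shell=True, check=True)
    return commands

# Illustrative command strings only; real invocations depend on the
# installed tools and file paths.
batch = [
    "bwa index reference.fa",
    "bwa mem reference.fa sample_R1.fastq sample_R2.fastq > sample.sam",
]
preview = run_pipeline(batch, dry_run=True)
```

In dry-run mode nothing is executed; the user inspects or edits the previewed commands and then re-runs the batch with `dry_run=False`.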
CusVarDB provides different NGS pipelines for genome, RNA-seq and exome-seq datasets. The tool performs the following steps: i) alignment of sequencing reads to a reference genome; ii) variant calling; iii) variant annotation; and iv) generation of a variant protein database.

Operation
CusVarDB runs on Windows 10. It requires the Linux Bash Shell to be enabled and the ANNOVAR tool to be installed by the user. Minimum system requirements are an Intel i5 or i7 processor with at least four cores, 8 GB of RAM and a 1 TB hard drive. Higher-performance processors such as the Intel i9 or Xeon, together with more RAM, enable faster execution.
The tool requires FASTA, FASTQ and dbSNP database files as input. The FASTA file is loaded in the configuration panel and indexed using the bwa index or hisat2-build command. The quality control panel loads the FASTQ file and produces a quality report using the FastQC tool. The third panel asks the user to load the FASTQ file(s) and to set the number of threads and the amount of RAM used for the alignment and variant calling steps. This step produces the raw VCF file, which is then used in the annotation tab. The annotation panel annotates the variants and incorporates them into the protein database to create a custom variant protein database. Detailed information on tool installation and on running a test dataset is available in the manual (Manual.pdf), available to download here and on Zenodo (Kasaragod et al., 2020b).

Amendments from Version 1
The modifications in this version are as follows:
1. We removed some sentences to avoid exceeding the word limit.
2. As per reviewer 1 and 2 comments, we modified the contents of the Introduction and Use cases sections.
3. Supplementary Table 3
Proteomic searches were carried out using Proteome Discoverer 2.3 (Thermo Scientific, Bremen, Germany). Searches could also be carried out using freely available open-source alternatives such as MaxQuant, MSFragger, MS-GF+, MyriMatch or OMSSA. Mass spectrometry raw files were obtained from the PRIDE archive (PXD008222) and searched against the customized variant protein database using the SEQUEST-HT search algorithm. Trypsin and LysC were set as proteolytic enzymes with a maximum of one missed cleavage. Carbamidomethylation of cysteine was set as a fixed modification, and acetylation of the protein N-terminus and oxidation of methionine were set as variable modifications, with a minimum peptide length of seven amino acids. The precursor mass tolerance was set to 10 ppm and the fragment ion tolerance to 0.05 Da. Mass spectrometry data were searched against a decoy database with a 1% false discovery rate cut-off at the peptide level.
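As background, here is a minimal sketch (ours, not Proteome Discoverer's implementation) of what two of these settings mean in practice: a 10 ppm precursor tolerance defines an acceptance window around each observed m/z, and a decoy-based 1% cut-off keeps the highest-scoring matches for as long as the simple decoys/targets estimate stays at or below 0.01:

```python
def ppm_window(mz, ppm=10.0):
    """Precursor window: accept candidates within +/- `ppm` of the observed m/z."""
    delta = mz * ppm / 1e6
    return mz - delta, mz + delta

def filter_at_fdr(psms, threshold=0.01):
    """Keep the highest-scoring matches while the simple target-decoy
    FDR estimate (decoys / targets) stays at or below `threshold`.
    `psms` is a list of (score, is_decoy) tuples; decoys are discarded
    from the returned list."""
    ranked = sorted(psms, key=lambda p: -p[0])
    decoys, targets, cut = 0, 0, 0
    for i, (score, is_decoy) in enumerate(ranked, start=1):
        decoys += is_decoy
        targets += not is_decoy
        if decoys / max(targets, 1) <= threshold:
            cut = i  # deepest rank at which the FDR estimate is acceptable
    return [p for p in ranked[:cut] if not p[1]]

lo, hi = ppm_window(500.0)  # +/- 0.005 Da at m/z 500
hits = filter_at_fdr([(5.0, False), (4.0, False), (3.0, True), (2.0, False)])
```

With the toy scores above, the first decoy appears at rank 3, so only the two top-scoring target matches survive the cut-off.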
We have provided a dataset for users to test the tool. The test dataset was derived from run SRR7418758, archived in the NCBI Sequence Read Archive (SRA), and aligned to human reference genome version 19 (hg19). Using the samtools view command, reads mapped to chromosome 22 were extracted and converted to paired-end FASTQ files. We have also provided the chromosome 22 nucleotide sequence from hg19 and the corresponding variant information from the dbSNP database.
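The extracted reads are plain paired-end FASTQ, whose fourth line per record encodes per-base Phred quality scores. As background for the Phred 20 trimming threshold used in the use case methods, a small sketch of the standard Phred+33 decoding (not part of CusVarDB itself):

```python
def mean_phred(quality_line):
    """Mean Phred score of a FASTQ quality string, assuming Phred+33
    encoding (the standard for current Illumina data)."""
    scores = [ord(c) - 33 for c in quality_line]
    return sum(scores) / len(scores)

# 'I' encodes Phred 40 (high quality); '#' encodes Phred 2 (very poor).
q_high = mean_phred("IIII")
q_poor = mean_phred("####")
```

A read whose mean (or windowed) score falls below 20, i.e. above a 1% expected base-call error rate, would be trimmed or discarded at the quality control step.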

Use cases
As a test case, we analyzed exome (Daemen et al., 2013) and proteome datasets (Lawrence et al., 2015) from breast cancer cell lines. We identified 12,429; 13,923; 12,386 and 11,600 non-synonymous SNPs from BT474 (accession number SRR925752), MDMAB157 (accession number SRR925788), MFM223 (accession number SRR925796) and HCC38 (accession number SRR925778) cells, respectively (Figure 2A). These non-synonymous SNPs were incorporated into the reference protein database (Human RefSeq release 93) to create customized variant protein databases. Mass spectrometry raw files were searched against their respective custom variant protein databases, resulting in the identification of 423, 408, 386 and 361 variant peptides from the BT474, MDMAB157, MFM223 and HCC38 cell line datasets, respectively (Figure 2B). Interestingly, we observed mutant protein expression of Replication Timing Regulatory Factor 1 (RIF1) and Torsin-1A-interacting protein 1 (TOR1AIP1) across all four breast cancer cell lines. Mutant plectin (PLEC), marker of proliferation Ki-67 (MKI67), HEAT repeat containing 1 (HEATR1) and AHNAK nucleoprotein (AHNAK) were detected in three of the four breast cancer cell lines. These coding mutations have also been reported in other cancers. The resultant variant peptide lists are available as Underlying data (Kasaragod et al., 2020a).
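The cross-cell-line overlaps reported here amount to a simple tally of how many samples each mutant protein appears in; the short gene lists below are toy stand-ins for the real identification lists:

```python
from collections import Counter

def shared_variant_proteins(per_sample_hits, min_samples):
    """Return, sorted, the variant proteins detected in at least
    `min_samples` of the per-sample identification lists."""
    counts = Counter(
        gene
        for hits in per_sample_hits.values()
        for gene in set(hits)  # count each gene once per sample
    )
    return sorted(g for g, n in counts.items() if n >= min_samples)

# Toy gene lists standing in for the per-cell-line variant peptide results.
hits = {
    "BT474":    ["RIF1", "TOR1AIP1", "PLEC"],
    "MDMAB157": ["RIF1", "TOR1AIP1", "PLEC", "MKI67"],
    "MFM223":   ["RIF1", "TOR1AIP1", "MKI67"],
    "HCC38":    ["RIF1", "TOR1AIP1", "PLEC", "MKI67"],
}
in_all_four = shared_variant_proteins(hits, 4)
in_three_plus = shared_variant_proteins(hits, 3)
```

With these toy inputs, RIF1 and TOR1AIP1 are recovered as present in all four samples, mirroring the pattern described above.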

Use cases methods
The genomics datasets downloaded from the SRA repository were subjected to quality checks using the FastQC tool. Poor-quality reads with a Phred score below 20 were trimmed using fastx_trimmer. Trimmed reads were mapped to human reference genome version 19 (hg19). The AddOrReplaceReadGroups and MarkDuplicates operations from GATK were performed to add read-group information and to remove polymerase chain reaction (PCR) artefacts. Further post-alignment processing steps, IndelRealigner and BaseRecalibrator, were performed to correct mapping errors made by the alignment tool and increase variant calling accuracy. Variant calling was performed using HaplotypeCaller. Finally, raw variants were annotated using ANNOVAR, and the annotated variants were incorporated into the protein database to create the custom variant protein database. The data generated by the tool demonstrate the usefulness and ease of detecting variant peptides in an integrated omics analysis.
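The final incorporation step can be sketched as follows; the two-field annotation records and the FASTA header scheme are simplified assumptions for illustration, not ANNOVAR's actual output format or CusVarDB's code:

```python
def build_variant_fasta(reference, annotations):
    """Build variant FASTA entries from a reference protein dict and
    simplified non-synonymous annotations of the form
    (protein_id, 'T3S'), meaning residue 3 (1-based) changes T -> S.
    """
    entries = []
    for protein_id, change in annotations:
        seq = reference[protein_id]
        ref_aa, pos, alt_aa = change[0], int(change[1:-1]), change[-1]
        if seq[pos - 1] != ref_aa:
            continue  # annotation does not match this reference sequence
        mutant = seq[:pos - 1] + alt_aa + seq[pos:]
        entries.append(f">{protein_id}_{change}\n{mutant}")
    return entries

# Toy reference entry; real input would be the RefSeq protein FASTA.
ref = {"NP_000001": "MKTAYIAK"}
db = build_variant_fasta(ref, [("NP_000001", "T3S")])
skipped = build_variant_fasta(ref, [("NP_000001", "A3S")])  # mismatch, dropped
```

Appending such entries to the reference database yields the sample-specific search space; mismatching annotations are skipped rather than incorporated.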

Conclusions
CusVarDB generates customized variant protein databases from NGS datasets. To our knowledge, this is the first tool to use the Linux Bash Shell to execute NGS tools on a Windows OS. The tool's additional dry-run feature makes the commands highly customizable. The variant protein database generated by the tool is well suited for use as a reference protein database for analysis of variants at the proteomic level. CusVarDB currently allows incorporation of non-synonymous mutations only; it does not incorporate protein variations resulting from indels and frame-shifts. We developed a flexible tool with the intention of updating it in the future.

Open Peer Review
The manuscript claims that CusVarDB is a useful tool in onco-proteogenomics and that, unlike other tools that require BED or VCF files as input, it works with FASTA and FASTQ files. As a use case, it provides examples of its use on genomics and proteomics data from four cancer cell lines.
Unfortunately, in my opinion there are several technical and conceptual issues with this work. Regarding decisions on the technological architecture, it is not clear what the advantage is of building an application on technology that only runs on Windows in order to run programs that mostly run on Linux. As a consequence, I could not install and execute the software successfully. The manuscript claims that VCF files are difficult to generate because "Executing the preprocessing steps requires knowledge of computation and also requires multiple tools to generate the output." However, there are different commercial and open-source solutions to analyze sequencing reads and generate VCF files. As an example, my research group maintains the Next-Generation Sequencing Experience Platform (NGSEP), which can perform this analysis with a graphical interface on any operating system. Other well-known web-based solutions are Galaxy and CyVerse. For variant interpretation, the functionality of CusVarDB should be compared with that offered by well-known platforms such as VEP.

1. One common issue with wrapping tools for other software such as GATK and bwa is that the software tools are evolving and hence the wrapping software quickly becomes outdated.

2. There are different commercial and open-source solutions to analyze sequencing reads and generate VCF files; NGSEP also has a graphical interface and can run on any operating system.

Reviewer Expertise: Bioinformatics, applied algorithms, software development, population genomics
We confirm that we have read this submission and believe that we have an appropriate level of expertise to state that we do not consider it to be of an acceptable scientific standard, for reasons outlined above.
I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.