pysradb: A Python package to query next-generation sequencing metadata and data from NCBI Sequence Read Archive

The NCBI Sequence Read Archive (SRA) is the primary archive of next-generation sequencing datasets. SRA makes metadata and raw sequencing data available to the research community to encourage reproducibility and to provide avenues for testing novel hypotheses on publicly available data. However, methods to programmatically access this data are limited. We introduce the Python package, pysradb, which provides a collection of command line methods to query and download metadata and data from SRA, utilizing the curated metadata database available through the SRAdb project. We demonstrate the utility of pysradb on multiple use cases for searching and downloading SRA datasets. It is available freely at https://github.com/saketkc/pysradb.


Introduction
Several projects have made efforts to analyze and publish summaries of DNA- 1 and RNA-seq 2,3 datasets. Obtaining metadata and raw data from the NCBI Sequence Read Archive (SRA) 4 is often the first step towards reanalyzing public next-generation sequencing datasets, whether to compare them to private data or to test a novel hypothesis. The NCBI SRA toolkit 5 provides utility methods to download raw sequencing data, while the metadata can be obtained by querying the website or through the Entrez efetch command line utility 6. Most workflows that analyze public data rely on first searching the metadata for relevant keywords, either through the command line utility or the website, then gathering the relevant sample(s) of interest, and finally downloading them. A more streamlined workflow would enable all of these steps to be performed at once.
In order to make querying both metadata and data more precise and robust, the SRAdb 7 project provides a frequently updated SQLite database containing all the metadata parsed from SRA. SRAdb tracks the five main data objects in SRA's metadata: submission, study, sample, experiment and run. These are mapped to five different relational database tables that are made available in the SQLite file. The metadata semantics in the file remain as they are on SRA. The accompanying package, SRAdb 8 , made available in the R programming language 9 , provides a convenient framework to handle metadata queries and raw data downloads by utilizing the SQLite database. Though powerful, SRAdb requires the end user to be familiar with the R programming language and does not provide a command-line interface for querying or downloading operations.
The pysradb package 10 builds upon the principles of SRAdb, providing a simple and user-friendly command-line interface for querying metadata and downloading datasets from SRA. It obviates the need for the user to be familiar with any programming language when querying and downloading datasets from SRA. Additionally, it provides utility functions that further help a user perform more granular queries, which are often required when dealing with multiple datasets on a large scale. By enabling both metadata search and download operations at the command line, pysradb aims to bridge the gap in seamlessly retrieving public sequencing datasets and their associated metadata. pysradb 10 is written in Python 11 and is currently developed on GitHub under the open-source BSD 3-Clause License. To simplify the installation procedure for the end-user, it is also available for download through PyPI and bioconda 12.

Methods
Implementation
pysradb 10 is implemented in Python and uses pandas 13 for data frame-based operations. Since downloading datasets can often take a long time, pysradb displays progress for long-running tasks using tqdm 14. The metadata is read from an SQLite 15 database, made available by the SRAdb 7 project.
Each sub-command of pysradb contains a self-contained help string that describes its purpose and a usage example. The help text can be accessed by passing the '--help' flag. Additional documentation for the sub-commands is available on the project's website. We also provide example Jupyter 16 notebooks that demonstrate the functionality of the Python API.
pysradb's development occurs primarily on GitHub, and the code is tested continuously using a Travis CI webhook, which monitors all incoming pull requests and commits to the master branch. Testing is performed on Python versions 3.5, 3.6, and 3.7 on an Ubuntu 16.04 LTS virtual machine, while testing webhooks on the bioconda channel provide additional testing on Mac-based systems. Nevertheless, pysradb should run on most Unix derivatives.
Operation
pysradb 10 can be run on either Linux- or Mac-based operating systems. It supports Python 3.5, 3.6 and 3.7. Requiring just two additional dependencies, pysradb can be easily installed using either pip or, via the bioconda 12 channel, conda.
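A minimal sketch of the two installation routes described above (package names as published on PyPI and bioconda):

$ pip install pysradb
$ conda install -c bioconda pysradb

Either route pulls in the two runtime dependencies (pandas and tqdm) automatically.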
An earlier version of this article can be found on bioRxiv: https://doi.org/10.1101/578500

Use cases
pysradb 10 provides a chain of sub-commands for retrieving metadata, converting one accession type to another, and downloading data. Each sub-command is designed to perform a single operation by default, while additional operations can be performed by passing additional flags. In the following sections we demonstrate some use cases of these sub-commands.
pysradb uses SRAmetadb.sqlite, a SQLite file produced and made available by the SRAdb 7 project. The file itself can be downloaded using pysradb.
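A minimal sketch, using the srametadb sub-command from the SQLite-based releases of pysradb (verify the sub-command name against 'pysradb --help' for your version, since later releases dropped the SQLite workflow; the download is roughly 2 GB compressed and considerably larger once decompressed):

$ pysradb srametadb

By default the file is saved to the current working directory, and subsequent sub-commands can be pointed at it explicitly with the '--db' flag, e.g. 'pysradb metadata --db SRAmetadb.sqlite SRP000941'.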

Search
Consider a case where a user is looking for public Ribo-seq 17 datasets on SRA. These datasets will often have 'ribosome profiling' appearing in the abstract or sample description. We can search for such projects using the search sub-command; the results list all relevant 'ribosome profiling' projects.
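For example (a sketch; the projects returned will depend on the SRAmetadb.sqlite snapshot in use):

$ pysradb search '"ribosome profiling"' | head

The inner double quotes make the two words be matched as a single phrase rather than as independent keywords.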
Getting metadata for an SRA project
Each SRA project (accession prefix 'SRP') consists of one or more experiments (accession prefix 'SRX'), which are sequenced as one or more runs (accession prefix 'SRR'). Each experiment is carried out on an individual biological sample (accession prefix 'SRS').
pysradb metadata can be used to obtain all the experiment, sample, and run accessions associated with an SRA project. However, this information by itself is often incomplete. We require detailed metadata associated with each sample, for example the assays used for different samples and the corresponding treatment conditions, in order to perform any downstream analysis. This can be obtained by supplying the '--desc' flag.

Getting SRP from GSE
The NCBI Gene Expression Omnibus (GEO) accepts array- and sequence-based data from gene profiling experiments. For sequence-based data, the corresponding raw files are deposited to the SRA. GEO assigns a dataset accession (accession prefix 'GSE') that is linked to the corresponding accession on the SRA (accession prefix 'SRP'). It is often necessary to convert between the two accessions. The gse-to-srp sub-command allows converting GSE to SRP:

$ pysradb gse-to-srp GSE24355 GSE25842
study_alias  study_accession
GSE24355     SRP003870
GSE25842     SRP005378

It can be further expanded to obtain the corresponding experiment and run accessions:

$ pysradb gse-to-srp --detailed --expand GSE100007 | head
study_alias  study_accession  experiment_accession  sample_accession  experiment_alias  sample_alias

Getting a list of GEO experiments for a GEO study
Any GEO study (accession prefix 'GSE') will involve a collection of experiments (accession prefix 'GSM'). We can obtain the entire list of experiments corresponding to the study using the gse-to-gsm sub-command from pysradb:

$ pysradb gse-to-gsm GSE41637 | head

However, a list of GSM accessions alone is not useful for downstream analysis, which requires more detailed metadata about each experiment. The relevant metadata associated with each sample can be obtained by providing gse-to-gsm additional flags; the metadata information can then be parsed from the sample_attribute column.
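The metadata queries described above can be sketched as follows, using SRP000941 (the project revisited in the download example later); outputs are omitted, and the column set will vary with the database snapshot:

$ pysradb metadata SRP000941
$ pysradb metadata SRP000941 --desc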
To obtain more structured metadata, we can use the additional flag '--expand':

$ pysradb gse-to-gsm --desc --expand GSE41637 | head
study_alias  experiment_alias  source_name   strain  tissue
GSE41637     GSM1020640_1      mouse_brain   dba/2j  brain
GSE41637     GSM1020641_1      mouse_colon   dba/2j  colon
GSE41637     GSM1020642_1      mouse_heart   dba/2j  heart
GSE41637     GSM1020643_1      mouse_kidney  dba/2j  kidney
GSE41637     GSM1020644_1      mouse_liver   dba/2j  liver
GSE41637     GSM1020645_1      mouse_lung    dba/2j  lung
GSE41637     GSM1020646_1      mouse_skm     dba/2j  skeletal muscle

Getting SRR from GSM
gsm-to-srr allows conversion from GEO experiments (accession prefix 'GSM') to SRA runs (accession prefix 'SRR'):

$ pysradb gsm-to-srr GSM1020640 GSM1020646
experiment_alias  run_accession
GSM1020640_1      SRR594393
GSM1020646_1      SRR594399

Downloading SRA datasets
pysradb enables seamless downloads from SRA. It organizes the downloaded data following the NCBI hierarchy 'SRP => SRX => SRR' of storing data. Each 'SRP' (project) has multiple 'SRX' (experiments), and each 'SRX' in turn has multiple 'SRR' (runs). Multiple projects can be downloaded at once using the download sub-command.

download also accepts Unix pipe-based input. Consider our previous example of the project SRP000941 with its different assays. If we want to download only the 'RNA-seq' samples, we can subset the metadata output for those samples:

$ pysradb metadata SRP000941 --assay | grep 'study\|RNA-Seq' | pysradb download

This will download only the 'RNA-seq' samples from the project.
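The basic, non-piped form of download can be sketched as follows; '-p' takes one or more project accessions and '--out-dir' sets the base download directory (flag names as in this release of pysradb; verify against 'pysradb download --help'):

$ pysradb download --out-dir ./pysradb_downloads -p SRP003870 SRP005378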
Summary
pysradb 10 provides a command-line interface to query metadata and download sequencing datasets from the SRA. It enables seamless retrieval of metadata and conversion between different accessions. pysradb is written in Python 3 and is available on Linux and Mac OS. The source code is hosted on GitHub and licensed under the BSD 3-Clause License. It is available for installation through PyPI and bioconda.

Grant information
The author(s) declared that no grants were involved in supporting this work.
Open peer review
The tool requires an initial download of the SQLite database from the SRAdb project. Whilst I can see how this then makes all subsequent operations quick, it does mean that you have to download a >2GB file (which expands to >30GB), taking 30+ minutes before you can do anything with the program. It presumably also means that you need to re-download this file every time there is an update to the data in GEO, otherwise your searches are likely to be out of date. On our site at least, people are often getting data for papers which have just been released, so this is going to entail a lot of waiting for this file to download. It would be great if there was a way to point to a publicly accessible SQL server to do queries without having to do the local download, and then provide the option of pulling a local copy if you need greater performance. Also, having a way to do incremental updates to this file, instead of re-downloading the whole thing, would be nice. Neither of these is a deal breaker, but they mightn't be too hard to implement?
The individual tools all worked as described, with the exception of the issues listed at the bottom, and the experience was generally very good with the tool.
One frustrating limitation is that the piping support is not universal throughout the tool. You can pipe into the download command, but not, for example, into the metadata command. Being able to chain operations such as:

pysradb gse-to-srp GSE24355 | pysradb metadata | pysradb download

or:

pysradb search '"oocyte development"' | head | pysradb metadata

would be really nice and presumably not too hard to support?
The downloading side of the tool is very useful and probably the part which is hardest to achieve on the main SRA site. Whilst this worked as described, there are some aspects of the way it works which make it a little frustrating. Firstly, it downloads SRA files, which hardly anyone wants; having a way to get the fastq files directly would be a really useful addition, rather than having to run fastq-dump manually afterwards. It also downloads into a structured set of folders, which makes sense, but for large downloads it means your files are scattered through multiple folders, which makes life harder when you want to process them. Even the --out-dir option doesn't mean the files end up in that directory, just that it's used as a basename. For the names of the files, it would be nicer to have a name which incorporates the relevant SRR/SRX ids, and maybe the user-submitted sample name, so that you can actually have a meaningful and complete name for each file. For example, the types of filenames generated by SRA explorer (https://ewels.github.io/sra-explorer/) are a nice compromise between being predictable, unique and yet informative at the same time.
If I'm being really picky, I'd also quibble a bit at the choice of some of the defaults in the API. For example, I can't see why the --desc and --expand options aren't the default for the metadata sub-program; give me everything in a nice format and let me cut that down if I don't need everything.
Overall this tool is really nice and will be useful for a lot of people. With a small amount of refinement this is likely to become part of our standard toolbox.