Predicting ionizing radiation exposure using biochemically-inspired genomic machine learning [version 2; peer review: 3 approved]

Background: Gene signatures derived from transcriptomic data using machine learning methods have shown promise for biodosimetry testing. These signatures may not be sufficiently robust for large-scale testing, as their performance has not been adequately validated on external, independent datasets. The present study develops human and murine signatures with biochemically-inspired machine learning that are strictly validated using k-fold and traditional approaches. Methods: Gene Expression Omnibus (GEO) datasets of exposed human and murine lymphocytes were preprocessed via nearest neighbor imputation, and expression of genes implicated in the literature to be responsive to radiation exposure (n=998) was then ranked by Minimum Redundancy Maximum Relevance (mRMR). Optimal signatures were derived by backward, complete, and forward sequential feature selection using Support Vector Machines (SVM), and validated using k-fold or traditional validation on independent datasets. Results: The best human signatures we derived exhibit k-fold validation accuracies of up to 98% (DDB2, PRKDC, TPP2, PTPRE, and GADD45A) when validated over 209 samples and traditional validation accuracies of up to 92% (DDB2, CD8A, TALDO1, PCNA, EIF4G2, LCN2, CDKN1A, PRKCH, ENO1, and PPM1D) when validated over 85 samples. Some human signatures are specific enough to differentiate between chemotherapy and radiotherapy. Certain multi-class murine signatures have sufficient granularity in dose estimation to inform eligibility for cytokine therapy (assuming these signatures could be translated to humans). We compiled a list of the most frequently appearing genes in the top 20 human and mouse signatures. Genes that appear more frequently among an ensemble of signatures may be consistently predictive of radiation exposure rather than artifacts of noise in individual datasets.


Introduction
Potential radiation exposures from industrial nuclear accidents, military incidents, or terrorism are threats to public health 1. There is a need for large-scale biodosimetry testing, which requires efficient screening techniques to differentiate exposed individuals from non-exposed individuals and to determine the severity of exposure 2. Current diagnostic techniques, including the cytogenetic gold standard 3-6, may require several days to provide accurate dose estimates 1,7 for large cohorts. To address the need for faster diagnostic techniques that accurately measure radiation exposures, gene signatures based on transcriptomic data have been introduced 7-10. Probit regression models of radiation response using 25 probes on peripheral blood samples achieved up to 90% accuracy for distinguishing between irradiated blood samples and unirradiated controls 9. A 74-gene classifier based on nearest centroid expression levels was 98% accurate in distinguishing four levels of irradiation from controls 10. This level of performance implies that samples exposed to different levels of radiation may be distinguishable based on mRNA expression levels of different genes. While this suggests the feasibility of transcriptional modeling of radiation responses, validation with external datasets is required to establish its reliability for rapid diagnostics. A caveat of these signatures is that they have not all been externally validated on datasets independent of the source data used for model development. A 29-gene signature modelled using a support vector machine (SVM) was externally validated on such a dataset, resulting in 80% accuracy in distinguishing higher (≥8Gy) from lower dose (≤2Gy) radiation exposure in novel samples 7. Previous studies have identified biomarkers that distinguish irradiated (ex vivo) from unirradiated blood samples with high accuracies 11-15.
The present study derives signatures with improved performance on externally validated samples by employing a different selection of modelling techniques. The machine learning pipeline used here addresses some of the previous limitations through a more rigorous feature selection process and stricter validation procedures.
Previously, the Student's t-test 7, the F-test 10, and correlation coefficients 9 were used to identify potential radiation biomarker genes. Although statistical criteria can distinguish genes that are differentially expressed upon radiation exposure, they do not eliminate expressed genes with redundant responses to radiation exposure. Redundancy increases the possibility of overfitting, thereby reducing the generalizability of these models for predicting responses in independent datasets. We address this limitation with the information theory-based criterion for gene selection known as minimum redundancy maximum relevance (mRMR) 16-18, which ranks genes according to shared mutual information between expression levels and radiation dose (relevance) and by minimizing mutual information shared by expression values of these and other genes (redundancy) 17,18. mRMR outperforms ranking criteria based solely on maximizing relevance 17. In contrast with heuristic approaches like differential expression, we only consider genes with evidence of a relationship to radiation response, which significantly limits the number of model features. Biochemically-inspired genomic machine learning (ML) has been used to derive high-performing gene signatures that predict chemotherapy and hormone therapy responses 18-20. From an initial set of mRMR-derived, biochemically relevant genes, wrapper approaches for feature selection 21 are used to find an optimal set of genes that predict exposure to radiation.
It can be challenging to obtain highly accurate models that perform well on externally validated samples for several reasons. Aside from biases in training data, batch effects and lack of reproducibility may introduce systematic and random sources of variability into gene expression microarray data. Different source datasets can impact data normalization, reducing model performance. We utilize two validation procedures. The first is a signature-centric approach that mirrors external k-fold validation 7 . The limitation of signature-centric validation is that, while signatures allow for the identification of important genes associated with radiation response, a tangible model is required to generate actual diagnostic predictions. To address this limitation, we also use a second model-centric approach, which we term "traditional validation". This procedure applies quantile normalization to training and test data before a model is fitted to the training data. This quantile method has been shown to be more effective than scaling, loess, contrast, and non-linear methods in reducing variation between microarray data 22 . Model validation was not expected to perform as well as signature validation, because quantile normalization is not always successful in eliminating variation between microarray datasets, whereas k-fold validation is independent of this source of variation. This study shows that robust model validation is a critical step in reproducibly predicting which individuals have been exposed to significant levels of radiation.

Datasets
Murine gene expression datasets 23 were obtained from peripheral blood (PB) mononuclear cell samples of ten-week-old C57BL/6 mice that either received total body radiation at 50 cGy, 200 cGy, or 1000 cGy or were not exposed. Total RNA was isolated 6 hours post-exposure and expression was determined by microarray analysis using Operon Mouse V3.0.1 (Gene Expression Omnibus (GEO): GPL4783 from GSE10640[GPL4783]) 24 and Operon Mouse V4.0 arrays (GEO: GPL6524 from GSE10640[GPL6524]) 24. Similar analyses were performed with human expression microarrays 18, including datasets GEO:

Amendments from Version 1
In this revision, we have summarized additional studies that apply machine learning to identifying biomarkers of radiation exposure (requested by Drs. Quintens and Mysara). We corrected the text to address their comment that Glipr2 did not occur more frequently than Ms4a1 in the murine gene signatures (this was an oversight, since the original Figure 3 was correct). For clarity, we have highlighted Eif2ak4 and Ccng1, rather than Glipr2. Based on a reader's suggestion, we have also determined the accuracy of the human signatures we derived for detection of partial body irradiation exposures. The human signatures have been validated on a partial body radiation gene expression dataset from an experimental baboon primate model (GEO: GSE77254). The revised paper includes a description of this dataset and the results of this analysis.

Figure 1. Flow chart of the biochemically inspired machine learning pipeline used to derive gene signatures. In panel (v), k-fold validation splits data into k sections, where each section acts as a test set in turn while the remaining sections act as a training set. Panel (v) depicts k-fold validation for k = 3. Coloured circles represent the samples in a dataset, where different colours represent different radiation doses. In panel (vi), quantile normalization forces data into the same distribution. To demonstrate this, thirty random genes were chosen to form a signature. The histograms on the left represent the distributions of expression levels of these genes in the pre-normalized datasets GSE1725 and GSE6874[GPL4782]. The histograms on the right represent the distribution of expression levels of the same genes post-normalization.

GSE6874[GPL4782] 9, GSE10640[GPL6522] 24, GSE1725 25, and GSE701 26. GSE6874 and GSE10640 consist of PB samples collected 6 hours post-exposure from healthy donors and patients undergoing total body irradiation at 150-200 cGy, analyzed with Operon Human V3.0.2 (GEO: GPL4782) and Operon Human V4.0 (GEO: GPL6522) microarrays. GSE10640[GPL6522] also includes 32 patients treated with alkylator-based chemotherapy without radiation. GSE1725 contains lymphoblastoid cell line samples derived from 57 subjects treated with 500 cGy; RNA was extracted 4 hours after exposure and expression was measured using the Affymetrix Human Genome U95 Version 2 Array (GEO: GPL8300). GSE701 contains lymphoblastoid cell lines from the Fondation Jean Dausset-CEPH, which were irradiated at 300 cGy or 1000 cGy and extracted 1-24 hours after exposure; expression was measured using the Affymetrix Human Genome U95A Array (GEO: GPL91). The GSE77254 dataset 27 was also used to validate our human signatures. This dataset consists of blood samples collected from baboons exposed to total body or partial body irradiation with Cobalt-60 at either 2.5 or 5 Gy. Expression for each subject was measured 1 to 2 days after exposure and was related to their hematologic acute radiation syndrome (HARS) scores.
Preprocessing (Figure 1, panel i)
Rows and columns of microarray data that were less than 95% complete were removed, and any remaining missing values were imputed using the nearest-neighbor algorithm. Only genes common across all datasets were retained. Expression values of each probe were transformed to z-scores, and the mean expression value of probes mapping to the same gene was assigned as the expression of that gene. Human and murine signatures were derived separately.
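The preprocessing steps above can be sketched as follows. This is a minimal Python illustration, not the pipeline code used in the study; the simple distance-based nearest-sample imputation is an assumption about how the nearest-neighbor algorithm operates.

```python
# Sketch of preprocessing: drop rows/columns <95% complete, impute the
# remaining missing values from nearest-neighbour samples, then z-score.
import numpy as np

def preprocess(X, completeness=0.95, k=3):
    """X: samples x probes matrix with np.nan marking missing values."""
    # Drop columns (probes), then rows (samples), below the completeness threshold.
    col_ok = np.mean(~np.isnan(X), axis=0) >= completeness
    X = X[:, col_ok]
    row_ok = np.mean(~np.isnan(X), axis=1) >= completeness
    X = X[row_ok, :]
    # Nearest-neighbour imputation: fill each missing entry with the mean of
    # that probe over the k samples closest in the observed probes.
    for i in range(X.shape[0]):
        miss = np.isnan(X[i])
        if not miss.any():
            continue
        obs = ~miss
        d = np.nansum((X[:, obs] - X[i, obs]) ** 2, axis=1)
        d[i] = np.inf  # exclude the sample itself
        nbrs = np.argsort(d)[:k]
        X[i, miss] = np.nanmean(X[nbrs][:, miss], axis=0)
    # z-score each probe across samples.
    return (X - X.mean(axis=0)) / X.std(axis=0)
```

Probe-level z-scores for the same gene would then be averaged to give one expression value per gene.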
Biochemically-inspired gene selection 18-20 (Figure 1, panel ii)
A literature search was conducted to identify genes implicated in radiation response using the search queries "radiation genes," "radiation response genes," and "radiation signatures" on PubMed. Cited genes comprise those differentially expressed after radiation exposure, genes present in DNA repair databases and other radiation signatures, and evolutionarily conserved genes that were highly expressed in radio-resistant species. A list of 998 genes was compiled 28-41 (Supplementary File X) for deriving signatures.
Minimum Redundancy Maximum Relevance (mRMR) gene ranking 11,12 (Figure 1, panel iii)
Rank is assigned by incremental selection of genes based on the mutual information difference (MID) criterion 16,17. Highly ranked genes have expression information that shares mutual information with radiation exposure and shares little information with expression of other genes. The MID criterion used to select the next ranked gene is

    max over i ∈ (Ω \ S) of [ I(i, h) − (1/|S|) Σ_{j ∈ S} I(i, j) ]

where i is a gene selected from Ω, the total gene space; S is the set of genes selected before i; |S| is the number of genes selected before i; I(i, h) is the mutual information between expression of gene i and radiation dose (h); and I(i, j) is the mutual information between expression of gene i and expression of gene j.

Support Vector Machine (SVM) learning
SVM models are classifiers that use hyperplane boundaries to separate samples into exposure classes by maximizing the distance between the separating hyperplanes and the samples of each class. The fitcecoc function of MATLAB 2017a's Statistics and Machine Learning Toolbox 42 with an SVM template was used to fit SVM models to training data. The fitcecoc function was used because it allows the fitting of multi-class models, which was required for analysis of murine samples that were irradiated at four different exposure levels. The SVM models use the Gaussian radial basis function kernel and a range of selected box-constraint and kernel-scale parameters. The box-constraint, denoted by the variable C, determines how severely misclassifications are penalized during training. The kernel-scale, denoted σ, represents the width of the Gaussian radial basis function. These parameters collectively control the tradeoff between underfitting and overfitting 43. After feature selection, a grid search is performed to determine the optimal (C, σ) combination for values of C and σ between 1 and 100000 (inclusive) by powers of 10, such that C ≥ σ.
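The incremental MID selection can be illustrated with the following sketch. It assumes expression values have already been discretized into bins so that discrete mutual information can be computed; the binning and the actual mRMR implementation used in the study may differ.

```python
# Sketch of mRMR ranking with the mutual information difference (MID)
# criterion: maximize I(i, dose) minus the mean I(i, j) over selected genes j.
import numpy as np
from collections import Counter

def mutual_info(x, y):
    """Discrete mutual information I(X;Y) in nats."""
    n = len(x)
    pxy, px, py = Counter(zip(x, y)), Counter(x), Counter(y)
    return sum((c / n) * np.log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def mrmr_mid(X_binned, dose, n_select):
    """X_binned: samples x genes matrix of discretized expression levels."""
    genes = list(range(X_binned.shape[1]))
    relevance = {i: mutual_info(X_binned[:, i], dose) for i in genes}
    # First pick: the most relevant gene (S is empty, so no redundancy term).
    selected = [max(genes, key=relevance.get)]
    while len(selected) < n_select:
        rest = [i for i in genes if i not in selected]
        def mid(i):
            redundancy = np.mean([mutual_info(X_binned[:, i], X_binned[:, j])
                                  for j in selected])
            return relevance[i] - redundancy
        selected.append(max(rest, key=mid))
    return selected
```

A duplicate of an already-selected gene scores poorly here even if it is highly relevant, which is the behaviour motivating mRMR over relevance-only ranking.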
Feature selection (FS) (Figure 1, panel iv) 21
Greedy feature selection was used to derive signatures. Complete sequential feature selection (CSFS) sequentially adds genes to an initially empty base set. The added gene is the highest mRMR-ranked gene that is not already included. This is repeated until all genes have been evaluated and the best performing subset of genes is identified. Forward sequential feature selection (FSFS) sequentially adds genes from the top 50 mRMR-ranked genes to an initially empty base set. The added gene is the one whose addition improves the model by the greatest margin. Backward sequential feature selection (BSFS) sequentially removes genes from the top 30 mRMR-ranked genes. The gene removed is the one whose removal causes the greatest improvement in the model. For BSFS and FSFS, we measure model improvement using misclassification or log loss during k-fold validation (see Performance metrics section below). Genes are added or removed until model performance plateaus. During feature selection, C and σ parameters need to be chosen for SVM learning (see SVM learning section above). Thus, each signature is characterized by the feature selection algorithm used, the dataset used to derive it, and the C-σ combination used for its SVM models during feature selection. This leads to a large number of candidate signatures.

Validating signatures (Figure 1, panel v)
Stratified k-fold validation was used to validate signatures. Samples of the validation dataset were partitioned into k sets, each comprising an approximately equal distribution of radiation levels. For validation, each set was used to test a model trained on the remaining sets, resulting in predictions for all samples in the dataset. Advantages of this approach are that variation between datasets is not pertinent and that signatures can be validated on differently labeled datasets (with samples irradiated at different levels).
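The FSFS wrapper with stratified k-fold misclassification as the selection criterion can be sketched as below. scikit-learn's SVC stands in for MATLAB's fitcecoc, and the C and gamma values shown are illustrative placeholders for the paper's box-constraint C and kernel scale σ, not the values used in the study.

```python
# Sketch of forward sequential feature selection (FSFS) around an RBF-kernel
# SVM, scored by stratified k-fold misclassification.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

def fsfs(X, y, candidates, C=10.0, gamma=0.1, k=3):
    """Greedily add the candidate gene that most reduces k-fold error."""
    cv = StratifiedKFold(n_splits=k, shuffle=True, random_state=0)
    selected, best_err = [], 1.0
    improved = True
    while improved and len(selected) < len(candidates):
        improved = False
        errs = {}
        for g in candidates:
            if g in selected:
                continue
            clf = SVC(C=C, gamma=gamma, kernel="rbf")
            acc = cross_val_score(clf, X[:, selected + [g]], y, cv=cv).mean()
            errs[g] = 1.0 - acc
        g_best = min(errs, key=errs.get)
        if errs[g_best] < best_err:  # stop once performance plateaus
            selected.append(g_best)
            best_err = errs[g_best]
            improved = True
    return selected, best_err
```

BSFS is the mirror image: start from the full top-30 set and repeatedly drop the gene whose removal most improves the same criterion.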
Validating models (Figure 1, panel vi)
Model validation requires separate training and test datasets (the training set is often the one used for FS). Genes from the signature are extracted from the training and test sets and their expression values are quantile normalized by sample. An important distinction between our approach and a previous study 7 is that quantile normalization is applied immediately before validation, so only the expression of genes present in the signature being validated is normalized. By contrast, previous approaches perform quantile normalization over entire datasets; while this reduces variability in expression values within datasets, it also suppresses the dynamic range, with potential consequences for the prognostic value of the expression data. After normalization, an SVM model was fit to the training dataset and used to generate predictions from the test dataset.
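Per-sample quantile normalization over the signature genes can be written compactly. This is a generic sketch of the standard procedure, assuming the signature genes have already been extracted into a samples x genes matrix; it is not the study's own code.

```python
# Sketch of quantile normalization: every sample's expression values are
# mapped onto the mean sorted profile, so all samples share one distribution.
import numpy as np

def quantile_normalize(X):
    """X: samples x genes. Returns X with identical per-sample distributions."""
    order = np.argsort(X, axis=1)
    ranks = np.argsort(order, axis=1)            # rank of each gene per sample
    mean_profile = np.sort(X, axis=1).mean(axis=0)  # reference distribution
    return mean_profile[ranks]                   # substitute values by rank
```

In the model-validation setting described above, training and test samples would be stacked, normalized together over only the signature genes, and then split back apart before fitting the SVM.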
Performance metrics (Figure 1, panel vii)
Performance was determined by comparing predicted radiation doses with the actual radiation exposures of each sample. Metrics included misclassification rate, goodness-of-fit, and multi-class log loss. Misclassification is the percentage of samples that were incorrectly classified, and goodness-of-fit is the average absolute difference between predicted and actual radiation exposure. Multi-class log loss is

    log loss = −(1/N) Σ_{i=1..N} Σ_{j=1..M} y_ij log(p_ij)

where N is the number of samples, M is the number of class labels, p_ij is the predicted probability that observation i is in class j, and y_ij is an indicator variable equal to 1 if sample i is in class j and 0 otherwise.
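The three metrics can be written out directly from their definitions. In this sketch, class probabilities would come from the fitted model and doses are in cGy.

```python
# The three performance metrics, implemented from their definitions.
import numpy as np

def misclassification(y_true, y_pred):
    """Fraction of samples assigned to the wrong class."""
    return np.mean(np.asarray(y_true) != np.asarray(y_pred))

def goodness_of_fit(dose_true, dose_pred):
    """Mean absolute difference between predicted and actual dose (cGy)."""
    return np.mean(np.abs(np.asarray(dose_true) - np.asarray(dose_pred)))

def multiclass_log_loss(y_true, p, eps=1e-15):
    """y_true: integer class indices; p: N x M predicted probabilities."""
    p = np.clip(p, eps, 1 - eps)  # guard against log(0)
    n = len(y_true)
    # Only the probability of each sample's true class contributes (y_ij = 1).
    return -np.sum(np.log(p[np.arange(n), y_true])) / n
```

Misclassification and log loss apply to class predictions, while goodness-of-fit reports dose error in the original units.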

Results
We discovered radiation gene signatures using the microarray data of human and mouse peripheral blood samples and human lymphoblastoid cell lines, which were validated either according to signature (Figure 1, panel v) or with the respective model (Figure 1, panel vi). The murine data were obtained from a wider range of radiation exposure levels (0 cGy, 50 cGy, 200 cGy, 1000 cGy) than the human whole body radiation datasets, which were binary comparisons of radiation effects (0 cGy vs. 150-200 cGy, 0 cGy vs. 500 cGy, or 300 cGy vs. 1000 cGy). This made possible the discovery of murine gene signatures with finer granularity for discriminating individuals exposed to different exposure levels, which is not currently feasible with the human samples. Table 1 displays the murine signatures derived using our pipeline that had the best performance metrics during k-fold validation on an independent dataset. In addition to the signature information, we report the feature selection algorithm (FS Algorithm) used to discover each signature and the internal validation performance metrics (FS Misclassification fraction and FS Log Loss).

A list of the most consistently appearing genes in the best performing signatures was obtained by pooling the top 20 murine signatures (assessed by validation log loss) from GSE6874[GPL4783] and GSE10640[GPL6524], and respectively collating the top 17 and 19 most frequent genes. The union of these two sets comprises 33 genes displayed in a heat map based on the frequencies of each gene (Figure 2). Surprisingly, the compositions of signatures derived from the two datasets are not as similar as one might expect. The genes that appear more frequently in signatures derived from one dataset infrequently appear in the other, even though both datasets consist of the same types of samples irradiated at the same exposure levels.

Murine gene signatures
The shared mutual information of these expressed genes with radiation dose (Figure 3) indicates whether only high mutual information genes appear in the best signatures or whether some lower mutual information genes may also be selected by our feature selection algorithms. The frequency of each gene among these signatures (represented by the diameter of the circle) correlates with the mutual information between expression and radiation dose (ρ = 0.8016). However, it would be an oversimplification to create signatures based solely upon mutual information, since some genes in lower performing signatures exhibit higher mutual information content. Development of accurate signatures requires more than a collection of gene features whose individual expression values share information with radiation dose, since many of these genes may reveal similar information and thus become redundant machine learning model features. For instance, Bax and Blnk are both common among the best murine signatures, even though Blnk shares much more mutual information with radiation dose than Bax. Since Blnk and Bax are involved in completely different pathways (Bax is an inducer of apoptosis 44, whereas Blnk is involved in a B-cell antigen receptor signaling pathway required for optimal B-cell development 45), they provide different types of information to the overall model. Conversely, we also observe that genes with high information content, such as Ms4a1, may appear less frequently than genes with lower information content, such as Eif2ak4 or Ccng1.
Although mRMR prioritizes genes with non-redundant, complementary contributions, subsequent wrapper steps of forward and backward sequential feature selection occur independently of the mRMR ranking. mRMR reduces the list of features considered by these algorithms, but it is possible for only high mutual information genes to be selected for the final signature. Thus, the inclusion of lower mutual information genes, such as Ube2v1 and Urod, reinforces the effectiveness of the mRMR method.
The cellular roles of these protein products are diverse (Figure 2). The confusion matrix for the best multi-class murine signature (Figure 5) shows that the predictions for the small fraction of misclassified samples deviate from the actual exposures by no more than a single adjacent exposure level. Although the predictions presented in the confusion matrix come from a single iteration of k-fold validation, the standard error associated with misclassification for this signature is extremely low (0.0013), so this confusion matrix is representative of nearly all possible iterations of k-fold validation.

Human gene signatures
The best performing signatures obtained from each human dataset, assessed by k-fold validation, are presented in Table 2. Although four human radiation datasets were available, GSE701 contained only 10 samples, which was insufficient for derivation of a unique gene signature. While k-fold validation removes the requirement for inter-dataset normalization, it assesses the ability of signatures (genes) to predict radiation exposure without tying the signatures to corresponding models. Each signature is characterized by the feature selection algorithm and its validation statistics, which have been averaged over the 3 independent datasets that were excluded from the original data used to derive the signature.
Since traditional validation typically requires separate training and test sets that feature samples irradiated at the same exposure levels, only signatures derived from GEO: GSE6874[GPL4782] and GEO: GSE10640[GPL6522] could be analyzed. Table 3 presents the best human signatures according to this validation approach. This type of external validation is the most challenging due to the variability associated with different microarray experiments and batch effects of different platforms. This potentially explains the lower performance obtained by traditional validation (Table 3) compared with k-fold validation on the same datasets ( Table 2). The remaining human signatures are described in Supplementary Files Y1-Y5.
To determine which human genes are most consistently selected, the most frequently appearing genes (11 or 12, depending on the number of equally prevalent genes in different signatures) were compiled from the top 20 human signatures (assessed by lowest average log loss during k-fold validation) from GSE10640[GPL6522], GSE6874[GPL4782], and GSE1725. The union of these three lists indicates the relative frequencies of each gene (Figure 6). The contribution of individual genes was assessed by removing each gene from the signatures containing it and re-validating (Table 4). The difference between the validation metrics preceding and following removal of a gene represents the weight of the gene within a signature. ΔMC, ΔLL, and ΔGoF represent the changes in misclassification, log loss, and goodness of fit, respectively.
GADD45A appears in 14 of the top 20 signatures derived from GSE1725. Of the 14 signatures, 10 were single-gene signatures, as GADD45A alone was expected to sufficiently distinguish irradiated from unirradiated samples. In these cases, it was assumed that a null signature would perform as well as a predictor that randomly draws predictions from a uniform distribution of doses. Removal of GADD45A from these 14 signatures results in average increases in misclassification, log loss, and goodness of fit of 0.319, 0.368, and 109 cGy, respectively (Table 4a).
In contrast, elimination of BAX, which appears in only 2 of the top 20 signatures derived from GSE1725, results in average increases in misclassification, log loss, and goodness of fit of just 0.018, 0.147, and 2.95 cGy, respectively (Table 4e). Comparing the effects of removing DDB2 (Table 4c) and PRKAB1 (Table 4f) from the top 20 GSE10640[GPL6522] signatures confirms the impact of genes that frequently occur within the most accurate gene signatures.
However, the diagnostic contributions of GADD45A and DDB2 expression are confounded with the radiation levels at which the samples were exposed (500 cGy and 150-200 cGy, respectively). As was the case with murine signatures, genes appearing in the best human signatures do not necessarily share high mutual information with radiation dose. The most frequent genes include those involved in DNA repair and in p38 signalling (GADD45A 33 and PPM1D 33). The four common genes belong to the DNA repair and JNK-p38-regulating (MAPK14) pathways, which may imply particular significance of these functions in the human response to radiation exposure. Interestingly, GADD45A and PPM1D are antagonistic; that is, GADD45A activates p38 while PPM1D inhibits it.

Validating gene signatures on partial body irradiated samples
We also evaluated the total body irradiation human signatures with expression data from baboons (GSE77254) that were exposed to partial body irradiation. All genes of the signatures derived from human samples (see Supplementary Files Y4 and Y5) were present in this dataset, so all signatures were eligible for validation. The dataset contained complete expression values for these genes, circumventing the need to perform nearest neighbour imputation. Baboon genes orthologous to those used to derive human signatures were cross-referenced, and expression values of multiple probes within the same gene were averaged.
Signatures were used to differentiate between various combinations of exposure and HARS labels. Multiple Y4 signatures achieved 0% misclassification in distinguishing unirradiated samples from irradiated samples (label combinations 1, 2, 4, and 5), and multiple Y5 signatures achieved 0% misclassification in label combinations 1, 2, and 5. However, the best performing signatures on this dataset were not the best performing signatures obtained during validation on GSE6874 (Y4) and GSE10640 (Y5). We speculate that technical factors involved in the study design explain why the signatures performed differently. For example, the human signatures were derived from blood samples that were collected 6-24 hours after exposure, whereas the baboon blood samples were obtained 24-48 hours after exposure. Also, a different microarray platform was used to obtain expression values for the baboon samples.
We also investigated the total body radiation signatures for predicting exposures with two other sources of partial body irradiation expression data: GSE66372 48 and GSE84898 49. These murine and baboon datasets lacked several genes present in the signatures we derived. None of the Y4 and Y5 signatures was completely contained in GSE66372; the PSMD9 single-gene signature was the only human signature completely contained in GSE84898. However, the PSMD9 signature performs poorly among Y5 signatures based on its log loss metric on GSE6874.

Discussion
Biochemically inspired genomic signatures of human and murine radiation response exhibit high accuracies when validated on independent datasets. Some of the best performing signatures consisted of one to three gene features (for example, the first signature in Table 2). None of the samples exposed to ≥200 cGy were misclassified below this radiation dose by the multi-class murine signatures (Figure 5). In the future, similar analyses could be performed in clinical studies of human subjects exposed to different radiation levels, which might prove useful for determining treatment eligibility after exposure to high levels of myelosuppressive radiation 51.
A comparison of the most frequently appearing genes in the optimal human (Figure 6) and mouse signatures (Figure 2) with signatures previously derived in other studies reveals little overlap (Table 5). The compositional differences can be attributed to the types of samples used for model training, the microarray platforms used, and the feature selection techniques used in deriving signatures. However, genes consistently selected in optimized signatures in at least three independent studies include BAX, DDB2, GADD45A, LY9, and TRIM22. Expression of these genes is evidently predictive of radiation dose and not a result of noise in individual datasets. An ensemble signature consisting of these genes achieves up to 92.3% accuracy in k-fold validation over 277 samples and up to 81.2% accuracy in traditional validation over 78 samples. The quality of a gene signature is largely determined by the quality and amount of training data used to fit the SVM model, so this level of accuracy is not an upper bound on achievable performance. Ensemble models should be considered that combine genes discovered in different well-performing signatures. Although the most frequently represented human and murine genes were compiled, genes common to one dataset did not appear equally frequently in signatures from the other. This discordance may result from noise in the different datasets, or perhaps from intrinsic differences between them. Compilation of frequently appearing genes in different datasets may be useful for discovery of consistently represented genes that can be incorporated into high-performance signatures.
The types of data available for this study and the analytical approaches we used potentially limit the interpretation of these gene signatures. Blood samples of the mouse and human datasets were all collected within 24 hours of exposure. Thus, signatures derived from these datasets may only be valid for white blood cells within a limited time window (<24 hours). Additionally, one of the datasets we used to derive signatures, GSE6874, appears to have been particularly noisy, based on the average feature selection misclassification rates for GSE10640, GSE1725, and GSE6874 of 0.03, 0.02, and 0.11, respectively. Assuming that it is possible to differentiate samples irradiated at different levels of exposure using expression data, the feature selection misclassification metric estimates the theoretical limit of how well differentially irradiated samples can be separated based on expression.
The surprisingly high feature selection misclassification values obtained from GSE6874 may therefore be indicative of greater levels of noise in the data. Lastly, the greedy feature selection algorithms used to derive signatures cannot guarantee optimal results, that is, we cannot confirm that we have found the best possible signatures from each dataset for predicting radiation exposure. This potentially explains the discordance in gene composition between murine datasets (Figure 2).
Nevertheless, the validation performance of the radiation signatures derived here is significantly improved over previously published signatures (Table 5). The signatures that were externally k-fold validated achieved nearly 100% accuracy. Some of our human signature models are also externally validated in the traditional sense (i.e. using a single model). This validation method, which is representative of an actual deployment scenario, achieves >90% accuracy and is directly relevant to creating a routine, efficient, and highly accurate expression-based radiation prognostic assay.

Data availability
All data underlying the results are available as part of the article and no additional source data are required. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Supplementary material
Supplementary File X: This spreadsheet lists all the genes found from our literature search (see Methods) that were considered during feature selection. For each gene, we report the reason for inclusion and a link to the paper containing the supporting evidence.
Click here to access the data.

Supplementary Files Z1-Z2: These files contain results concerning traditional validation of the Y4 and Y5 human signatures on partial-body radiation-exposed primates. The different comparison groups described in the text are indicated in separate tabs in each file.

The latter study also showed for the first time the suitability of exon signatures as sensitive radiation biomarkers, and highlights the importance of prior knowledge at the exon level for subsequent primer- or probe-based assays (e.g. qRT-PCR). This may be discussed.

Methods
○ In the data pre-processing section: "Rows and columns of microarray data that are less than 95% complete were removed and any remaining missing values were imputed using the nearest-neighbor algorithm." How many rows and columns were removed, and on what basis was the 95% threshold selected? Also, what is the effect of the nearest-neighbor algorithm on over-fitting of the data? Is it possible to perform PCA on the data after removing any rows/columns with less than 100% completeness and compare it to the currently presented approach (removal at 95% and imputation of the remaining missing data)? This would allow visualization of the effect of the proposed methodology on the segregation between the various records.
○ Only genes common to all datasets were retained. Does that mean common between mouse and human datasets? How were aliases identified?
○ The second step in the process is the selection of genes based on a non-exhaustive list of publications. Why was this necessary if the mRMR method for feature selection was applied?
○ I particularly like the idea of performing quantile normalization after feature selection. Is this something that has been published before? Can the authors speculate (or maybe even compare) about the performance of their method on pre-normalized datasets?
○ Concerning the method used for the "Validation of models", I would think this approach would be more vulnerable to the choice of test/training dataset. What would happen to the accuracy when doing the normalization over all of the data? Would the accuracy change drastically? Is it possible to extend the testing to cover additional data?
○ Many datasets exist on human PBMCs/whole blood irradiated with a range of doses. Why were these datasets not considered for this study while lymphoblastoid cell lines were?
○ It would be helpful to have a comparison of model performance with that of "traditional" machine learning methods, as used in some of the indicated references.

Results
"We discovered radiation gene signatures using the microarray data of human and mouse peripheral blood samples and human lymphoblastoid cell lines, which were validated either according to signature." Were the human lymphoblastoid cell line and peripheral blood samples grouped together in one model? If so, would it be possible to visualize the expression data of the shortlisted genes for each data type separately (using PCA, for example)?
○ Can the authors comment on their observation that signatures derived from the two murine datasets are not very similar? Apart from "noise, or intrinsic differences in the datasets", could it possibly also be a consequence of the method used, i.e. mRMR, in which low mutual information genes are selected? Based on Figures 2 and 6 it seems that genes with higher mutual information generally have higher frequencies, which seems logical.

○ The authors state that Ms4a1 appears less frequently than Glipr2. However, from the sizes of the circles, Ms4a1 seems to appear more frequently than Glipr2. Please verify this statement.

None of these genes are among the 33 most frequently appearing genes of our murine signatures (see Figure 2). (Recall that GSE10640[GPL4783] is a murine dataset.) Additionally, over-fitting is mitigated by validation using independent datasets; the nearest-neighbor imputation of the source data occurs before model development, so the model is not fit at that point.
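The ordering of pre-processing relative to model fitting can be illustrated with a small sketch using scikit-learn's `KNNImputer` (the matrix sizes, missingness rates, and the 95% threshold application are illustrative assumptions, not the authors' exact pipeline):

```python
import numpy as np
from sklearn.impute import KNNImputer

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 50))                 # 30 samples x 50 genes (toy data)
X[rng.random(X.shape) < 0.01] = np.nan        # scatter ~1% missing values
X[0, :40] = np.nan                            # one badly incomplete row

# Drop rows, then columns, below the 95% completeness threshold.
X = X[np.mean(~np.isnan(X), axis=1) >= 0.95]
X = X[:, np.mean(~np.isnan(X), axis=0) >= 0.95]

# Impute the remaining gaps from the nearest neighbours; note that no
# class labels are involved here, so no predictive model is being fit.
X_complete = KNNImputer(n_neighbors=5).fit_transform(X)
print(X_complete.shape, np.isnan(X_complete).any())
```

Because the imputation is unsupervised and precedes model development, it cannot leak label information into the SVM training step.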

If applicable, is the statistical analysis and its interpretation appropriate? Partly
Regarding the questions about genes common to all datasets being retained: 1) Mouse and human signatures were derived separately, so only genes common to all human datasets were available for selection in deriving human signatures, and only genes common to all mouse datasets were available for selection in deriving mouse signatures. 2) Our scripts did not take gene name aliases into account; it is therefore possible that some genes were left out of the analysis because they were indicated by different names in different datasets.

○
The initial list of publications we consulted may not have been complete, but it was the result of an extensive search. We were concerned that the mRMR method, applied to all genes without independent experimental support, would result in Type II errors. We wanted to identify the key genes from the large volume of peer-reviewed work implicating various genes in radiation response. This hypothesis-based study was not designed to discover novel genes whose putative role in radiation response was unproven or unknown. We could have automated the initial feature selection procedures using expression of all (including non-coding) genes, but this is beyond the scope of our efforts. Such a discovery strategy would require statistical correction for the likelihood of incorrectly rejecting a null hypothesis due to multiple comparison testing.

○
Regarding quantile normalization after feature selection, this is the first time that we have used this approach. We attempted it because our initial efforts to derive signatures from pre-normalized data were unsuccessful (poor performance). Since we do not use the expression values of the majority of genes, there was no compelling reason to normalize across the entire set of expressed genes. We are not aware of previous efforts, but cannot exclude this possibility; we did not perform an exhaustive literature search for the method that we employed, since the universe of potential applications is nearly limitless.

○
We speculate that if quantile normalization is done before feature selection, the reduction in dynamic range of the transformed data may lead to the derivation of poorer signatures. To give a more quantitative answer, we took the top-performing signature in the Y5 Supplementary file with respect to log loss (DDB2, GTF3A, TNFRSF10B) and re-validated it with normalization over all data instead of over only the genes in the signature; misclassification error was 2% higher and the goodness of fit was 4 cGy higher (log loss remained approximately the same). Interestingly, normalization performed over all of the data does not appear to have significant effects on the performance of our signatures. To "extend the testing to cover additional data", it would be necessary to renormalize the initial set of genes to include these additional genes. If no other genes are to be included, only additional samples, then only the expression values of the additional samples would need to be renormalized.
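Quantile normalization restricted to signature genes can be sketched as follows. The helper is a generic textbook implementation and the toy matrix is synthetic; only the signature gene names (DDB2, GTF3A, TNFRSF10B) come from the text above, and the non-signature names are placeholders:

```python
import numpy as np

def quantile_normalize(X):
    """Quantile-normalize the columns (samples) of a genes x samples matrix:
    each sample's values are replaced by the mean sorted profile at their rank."""
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)
    mean_sorted = np.sort(X, axis=0).mean(axis=1)
    return mean_sorted[ranks]

rng = np.random.default_rng(2)
genes = ["DDB2", "GTF3A", "TNFRSF10B", "GADD45A", "CDKN1A"]
X = rng.lognormal(size=(5, 8))               # 5 genes x 8 samples (toy data)

signature = ["DDB2", "GTF3A", "TNFRSF10B"]
idx = [genes.index(g) for g in signature]

# Normalize over the signature genes only, rather than the whole array.
X_sig = quantile_normalize(X[idx, :])

# After normalization, every sample shares the same value distribution.
assert np.allclose(np.sort(X_sig, axis=0),
                   np.sort(X_sig, axis=0)[:, [0]].repeat(8, axis=1))
```

Restricting normalization to the few signature genes means new samples can be renormalized over just those genes, consistent with the renormalization note above.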

○
Regarding other human datasets of irradiated PBMCs: in the feature selection stage, we specifically required datasets with large numbers of samples. The three human datasets we used for feature selection (GSE6874, GSE10640, GSE1725) were the largest ones we found deposited in GEO. In particular, GSE1725, the dataset with cell lines derived from patients, was the largest dataset available to us, containing 110 samples (171 samples in total, but 61 are UV-irradiated). At the earliest stage of our project, we specifically chose datasets that maximized the number of common genes represented among them. This requirement enabled us to validate our signatures by either traditional or k-fold approaches; genes available for feature selection must be present in all datasets in order for validation to work. We were surprised to find many datasets in which key genes in our models were missing from the expression data (see the last section on partial body irradiation data analysis in the results section of the revised version 2 of this paper). Regarding comparison of model performance with other traditional machine learning methods, we have used these methods (e.g. random forest, SVMs, decision trees) in previous gene expression studies (see reference 13 of version 1, which is reference 18 of version 2). The improved performance we describe here cannot be attributed to the specific model building approaches.

○
Regarding grouping of lymphoblastoid and PBL samples, models were always trained on one dataset at a time; they were not combined.

○
The dissimilarity of the murine datasets may be related to their use of different microarray platforms and to their collection at different times. It is difficult to tease apart the intrinsic differences of the datasets from the performance of the methods.

○
Regarding Ms4a1 and Glipr2, we have verified the reviewer's observation. We corrected the density plot during preparation of the paper, but inadvertently neglected to make the corresponding change to the text. We have replaced Glipr2 with Ccng1 or Eif2ak4.
○ Regarding ranking of genes in Tables 1 and 2, signatures derived using BSFS and CSFS list the genes according to mRMR rank. For signatures derived using FSFS, the genes would be listed according to the order in which they were selected. So in this sense, they are indeed ranked according to importance.

○
Regarding the low frequencies of human vs murine signature genes, in the descriptions of these figures, we mention that "Frequencies are first scaled within and then between datasets to ensure values between 0 and 1." Human signature gene frequencies appear suppressed because DDB2 and GADD45A in particular were represented more frequently than any other gene by a large margin.

○
Regarding the numbers of genes with low mutual information in the human vs murine signatures, it may be relevant that expression data are not adjusted for either white blood cell count or body mass. We speculate that the gene expression response in the human samples reflects lower numbers of radiation exposed cells. This could dampen the signals and mutual information with radiation dose compared to the murine response.

○
Regarding the idea that it would be counterintuitive to have genes with low mutual information as important for dose prediction, it is true that signatures are dominated by genes whose expression values share high mutual information with dose. The purpose of mRMR is to ensure that we do not overlook genes whose expression values may encode information that is not present in the genes with high mutual information with dose, which is why genes with lower mutual information may sometimes appear as frequently as, or even slightly more frequently than, genes with higher mutual information.
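This relevance/redundancy trade-off can be sketched with a toy greedy mRMR built on scikit-learn's mutual-information estimators. This is a generic reimplementation of the mRMR idea, not the authors' software, and the gene variables and dose values are synthetic:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def mrmr_rank(X, y, k):
    """Greedy mRMR: at each step pick the feature maximizing
    relevance to y minus mean redundancy with already-chosen features."""
    relevance = mutual_info_regression(X, y, random_state=0)
    chosen, remaining = [], list(range(X.shape[1]))
    while len(chosen) < k:
        scores = []
        for j in remaining:
            red = (np.mean([mutual_info_regression(X[:, [c]], X[:, j],
                                                   random_state=0)[0]
                            for c in chosen])
                   if chosen else 0.0)
            scores.append(relevance[j] - red)
        best = remaining[int(np.argmax(scores))]
        chosen.append(best)
        remaining.remove(best)
    return chosen

rng = np.random.default_rng(3)
dose = rng.uniform(0, 8, 120)                    # toy radiation dose (Gy)
g1 = dose + rng.normal(scale=0.5, size=120)      # strongly dose-responsive gene
g2 = g1 + rng.normal(scale=0.1, size=120)        # near-duplicate of g1 (redundant)
g3 = rng.normal(size=120)                        # low mutual information with dose
X = np.column_stack([g1, g2, g3])

selected = mrmr_rank(X, dose, 2)
print(selected)
```

Here the redundancy penalty makes the second pick the low-information gene rather than the near-duplicate, which is exactly why lower-mutual-information genes can appear in mRMR-derived signatures.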
Regarding the single gene signature results, we agree that single gene signatures are more susceptible to extrinsic sources of variation unrelated to radiation exposure. However, simpler signatures may be necessary under laboratory conditions that limit the amount or complexity of testing, e.g. space radiation assays performed by astronauts.
ionizing radiation exposure. Good discussion of the identified genes' functions and roles in radiation response. Conclusions are well supported by the results. Hopefully the analysis and genes identified in this study will be incorporated and/or validated in future studies examining prediction of radiation exposure.

Is the work clearly and accurately presented and does it cite the current literature? Yes
Is the study design appropriate and is the work technically sound? Yes

If applicable, is the statistical analysis and its interpretation appropriate? Yes
Are all the source data underlying the results available to ensure full reproducibility? Yes

Are the conclusions drawn adequately supported by the results? Yes
Competing Interests: No competing interests were disclosed.
I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

Author Response 09 Jun 2018
Peter Rogan, Western University, London, Canada Thank you for your kind comments. We agree that the approach and software should be useful for future studies of human radiation exposures. We are particularly motivated to apply multiclass SVMs, which were highly accurate in the murine dataset, to the analysis of a large set of radiation oncology patients exposed to different radiation doses.

Competing Interests:
No competing interests were disclosed.

Comments on this article Version 2
Author Response 06 Apr 2021