Method Article

Signal-processing tools for core-collection selection from genetic-resource collections

[version 1; peer review: peer review discontinued]
PUBLISHED 23 Apr 2015


Abstract

Selecting a representative core collection (CC) is a proven and effective strategy for overcoming the expenses and difficulties of managing genetic resources in gene banks around the globe. Because of the diverse applications available for these sub-collections, several algorithms have been successfully implemented to construct them based on genotypic, phenotypic, passport or geographic data (either by individual datasets or by consensus). However, to the best of our knowledge, no single comprehensive dataset has been properly explored to date.
Thus, researchers evaluate multiple datasets in order to construct representative CCs; this can be quite difficult, but one feasible solution for such an evaluation is to manage all available data as one discrete signal, which allows signal processing tools (SPTs) to be implemented during data analysis.
In this research, we present a proof-of-concept study that shows the possibility of mapping to a discrete signal any type of data available from genetic resource collections in order to take advantage of SPTs for the construction of CCs that adequately represent the diversity of two crops. This method is referred to as 'SPT selection.'
All available information for each element of the tested collections was analysed from this perspective and compared, when possible, with one of the most widely used algorithms for CC selection.
Genotype-only SPT selection did not prove as effective as standard CC selection algorithms; however, the SPT approach can consider genotype alongside other types of information, which results in well-represented CCs that reflect both the genotypic and agromorphological diversities present in the original collections. Furthermore, SPT-based analysis can evaluate all available data both in a comprehensive manner and under different perspectives, and despite its limitations, the analysis renders satisfactory results. Thus, SPT-based algorithms for CC selection can be valuable in the field of genetic resources research, management and exploitation.

Keywords

Core collection, Fast Fourier Transform, Genetic resource management, Rice, Foxtail millet

Background

One of the most promising techniques for conserving the diversity of genetic resources is ex situ genebank germplasm collection. A significant effort has been made on a global scale to preserve, characterize, distribute and utilise genetic resources in order to understand their biological phenomena and to confront the vulnerable situation regarding the sustainability of future human development1,2. As the size of germplasm collections increases, it becomes difficult to manage and extensively evaluate them appropriately3; thus, the core collection (CC) concept4 has become a fundamental genetic resource management approach that exploits the potential of a complete collection at manageable data-management and monetary costs5–8.

Different CCs have different purposes, characteristics and evaluation criteria7,9–11; thus, several algorithms and informatics tools have been developed and implemented12–15 with different approaches to satisfy the particular needs of each CC. Because these CCs are constructed mainly on the basis of genotypic, phenotypic, passport or geographic data (either from individual datasets or by consensus)16, there is a lack of all-inclusive datasets; this limits the possibility of generating a CC that may satisfy most basic and applied genetic resource research programs. To the best of our knowledge, no single comprehensive dataset has been properly explored to date. One possible way to create a comprehensive dataset is to represent all available data as numerical values. Several methods exist that represent genomic information as numerical values17 and agromorphological traits (ATs) as scores18. This mapping process makes it possible to treat each data vector as a discrete signal that can, in turn, be analysed by signal processing tools (SPTs), thus providing an effective tool for a comprehensive evaluation of datasets. We present a proof-of-concept study that shows the possibility of mapping any type of data available from genetic resource collections to a discrete signal in order to take advantage of SPTs for CC selection; this provides new decision-making criteria for genetic resource management and research.

Methods

Mapping data

Each input datum must be mapped to a numerical value. This is a fundamental step of the algorithm because it enables different datasets to be analysed together, regardless of their nature. In this manner, dissimilar passport data, single nucleotide polymorphisms (SNPs), restriction fragment length polymorphisms (RFLPs), geographic information and phenotypic traits can be included in one comprehensive dataset. To represent each data type consistently, reference tables are implemented according to the nature of each particular data type: genetic information (originally represented as character elements) is represented by a numerical vector, whereas trait variation, simple sequence repeat (SSR) molecular markers and passport data can be represented as either binary or normalized data, depending on the quantitative/qualitative nature of the data. The original data and reference tables for this study are available in supplementary material 1. Data transformation for this study rendered a matrix containing the representation of MC samples (i1, i2, i3, … in) with (j1, j2, j3, … jm) elements each, where n is the total number of samples and m is the number of characteristics included for each sample, each represented by a numerical value data(i, j).
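
To make this mapping step concrete, the sketch below (in Python, not the authors' FreeMat implementation) shows how one accession's mixed records might be turned into a single numeric vector; the substitution values and trait categories are illustrative placeholders, not the actual reference tables from supplementary material 1.

```python
import numpy as np

# Illustrative substitution tables (hypothetical values, not the study's reference tables).
SNP_MAP = {"A": 1.0, "G": 2.0, "C": 3.0, "T": 4.0, "N": 2.5}               # genotype call -> number
FLOWERING_MAP = {"early": [1, 0, 0], "mid": [0, 1, 0], "late": [0, 0, 1]}  # qualitative trait -> binary

def map_sample(snps, flowering_class, plant_height_cm, height_range=(50.0, 200.0)):
    """Map one accession's heterogeneous data to a single numeric vector data(i, j)."""
    genotype = [SNP_MAP[s] for s in snps]               # character elements -> numerical vector
    trait_bits = FLOWERING_MAP[flowering_class]         # qualitative trait -> binary code
    lo, hi = height_range
    height_norm = (plant_height_cm - lo) / (hi - lo)    # quantitative trait -> normalized value
    return np.array(genotype + trait_bits + [height_norm])

# Two toy accessions stacked into the n x m matrix used downstream.
data = np.vstack([map_sample("AGCTN", "early", 120.0),
                  map_sample("AGTTA", "late", 95.0)])
print(data.shape)   # (n samples, m characteristics)
```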

Signal construction

The numerical representations of the jth data elements can be treated as signal values over m time points, in such a manner that each ith sample is treated as a discrete signal; signal i thus describes the information behaviour of that sample. This perspective enables the implementation of SPTs such as the discrete Fourier transform and power spectrum comparison. Although SPTs can be applied to all data available for each sample, not all data elements are equally informative for discriminating between samples. To overcome this difference in informativeness among the j elements, a principal component analysis (PCA) can be performed to rearrange the data into a new matrix that places the most informative elements at the beginning, arranges subsequent elements according to their informativeness and discards those whose variance equals 0. This process renders two matrices: the original matrix of mapped characteristic vectors (x) and the rearranged variance-value matrix (X). Matrix X therefore contains n samples, each formed by a numerical vector of length m′ = m − (number of non-informative characteristics).
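
The following is a minimal sketch of this rearrangement (assuming numpy and a mapped matrix x of shape n × m; it mirrors the described PCA step but is not the authors' FreeMat code):

```python
import numpy as np

def pca_rearrange(x):
    """Rearrange the mapped matrix x (n samples x m characteristics) so that the most
    informative components come first; characteristics with zero variance are discarded."""
    x = x[:, x.var(axis=0) > 0]                        # drop non-informative (zero-variance) columns
    xc = x - x.mean(axis=0)                            # centre each characteristic
    _, _, vt = np.linalg.svd(xc, full_matrices=False)  # principal axes, ordered by explained variance
    return xc @ vt.T                                   # matrix X: per-sample scores, most informative first
```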

Fast Fourier transform

The main objective of the Fourier transform is the decomposition of a signal into a complex spectrum of frequencies. The signal is then represented as a vectorial function whose angle and magnitude determine a sampled point in the signal19. The original Fourier model is expressed as follows:

$\hat{f}(\xi) = \int_{-\infty}^{\infty} f(x)\, e^{-2\pi i x \xi}\, dx$     (1)

where x is the temporal variable, ξ is the frequency variable, i is the square root of −1 and e is the base of the natural exponential. From equation 1, the integrand for any point ξ sampled in the signal can be expanded, via Euler's formula, as:

$f(x)\left[\cos(2\pi x \xi) - i\,\sin(2\pi x \xi)\right]$     (2)

The Fourier transform can be applied to any complex numerical series, but in practice the computational cost of the direct transform grows quadratically with signal length. Thus, the fast Fourier transform (FFT), which requires only O(N log N) operations, is more often implemented and can be defined according to the Cooley–Tukey algorithm20 as follows:

$X_k = \sum_{n=0}^{N-1} x_n\, e^{-2\pi i k n / N}$     (3)

where N is the signal length, x_n are the sampled temporal values, i is the square root of −1 and e is the base of the natural exponential; in this manner, every point of the signal has a Euclidean (complex-plane) representation whose magnitude and phase correspond to its position in the signal.

Therefore, any signal can be mapped into a vectorial representation that contains information from every point of the original signal. From this complex vector, useful data can be retrieved to establish comparisons that indirectly represent the juxtaposition of the original signals21.
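
For illustration, the power spectrum of a single rearranged signal can be obtained directly from its FFT; the sketch below assumes numpy (the study itself used FreeMat's built-in FFT):

```python
import numpy as np

def power_spectrum(signal):
    """Frequency representation of one accession's discrete signal."""
    F = np.fft.fft(signal)       # complex coefficients: one magnitude and phase per frequency
    return np.abs(F) ** 2        # power spectrum used later for pairwise comparison
```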

Distance matrix computation

Inspired by the genomic signal processing alignment-free distance (GAFD) model22, each signal $\hat{S}_i$ corresponding to the PCA-mapped accession data in the set was converted into its frequency representation by applying the discrete Fourier transform, and its power spectrum $\hat{F}_i$ was then computed. Subsequently, the distance D(i, j) for a given pair of comprehensive data signals was calculated as the mean squared error (MSE) of their respective power spectra:

$D(i,j) = \sum_{x}\left(\hat{F}_i(x) - \hat{F}_j(x)\right)^2$     (4)

Finally, a distance matrix (DM) was created by performing a pairwise comparison of all sequences in the set.

In parallel, we constructed a point-to-point (RAW) DM on the basis of the MSE between each pair of signals prior to the PCA analysis.
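
A sketch of both DM constructions under these definitions (MSE of power spectra for the FFT-based DM, point-to-point MSE for the RAW DM), assuming numpy; the function name is illustrative:

```python
import numpy as np

def distance_matrix(signals, use_fft=True):
    """Pairwise distance matrix for a set of signals (one row per accession).
    use_fft=True : MSE between power spectra (equation 4), applied to the PCA-mapped signals.
    use_fft=False: point-to-point (RAW) MSE between the signals themselves."""
    if use_fft:
        signals = np.abs(np.fft.fft(signals, axis=1)) ** 2   # power spectrum of each signal
    n = signals.shape[0]
    dm = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dm[i, j] = dm[j, i] = np.mean((signals[i] - signals[j]) ** 2)
    return dm
```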

Core collection selection

Selecting a CC by this method requires the generation of a DM covering every sample of the MC; this provides the interrelations among samples and enables an adequate selection. A schematic of the complete workflow is presented in Figure 1.


Figure 1. General workflow of the FFT-based core collection selection algorithm.

PCA: Principal Component Analysis; FFT: Fast Fourier Transform; CC: Core Collection.

In the past, several methodological procedures have been implemented to select K elements from an MC on the basis of the information provided by its DM; among these, the most frequently used is the hierarchical clustering method11. However, the current algorithm does not rely on hierarchical clustering for CC selection; instead, similarly to the least distance stepwise sampling method23, CC elements are selected by an iterative process in which r samples are selected at each iteration by different criteria (which may be implemented individually).

The selection criteria (based on the DM, without hierarchical clustering) for the current algorithm are as follows:

  • a) The ith sample with the largest number of low distance values among the jth elements.

  • b) The ith sample with the largest number of high distance values among the jth elements.

  • c) The ith sample with the lowest distance average.

  • d) The ith sample with the highest distance average.

  • e) The ith sample with the lowest overall distance.

  • f) The ith sample with the highest overall distance.

In cases where multiple samples share the same selection value, order of appearance is used to complete the criteria.

An example of the selection process is presented in Figure 2, and its final result is presented in Figure 3.


Figure 2.

Distribution of Rdata along the first three principal components (a); the methodology's first (b), second (c) and third (d) iterations; the final K=72 distribution is presented in (e).


Figure 3. Distribution along the first three principal components of the K=72 CC selection (X) from the Rdata MC.

Once the selected samples (r) are included in the future CC, they (along with any samples identical to them (s)) are removed from X for the next iteration; a new distance matrix DM2 with n2 = n − r − s elements is then calculated. This process continues for Z iterations until R ≥ K, where R = (r1 + r2 + … + rZ) and K is the predefined number of desired CC elements.
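
A minimal sketch of this iterative loop (assuming the `distance_matrix` sketch above is in scope and X is the PCA-rearranged matrix as a numpy array); for brevity, only criteria (c) and (d), the lowest and highest average distance, are applied at each iteration:

```python
import numpy as np

def select_core(X, K):
    """Iteratively pick CC members until at least K samples have been selected."""
    remaining = list(range(len(X)))
    selected = []
    while len(selected) < K and remaining:
        dm = distance_matrix(X[remaining])                    # DM recomputed on the reduced set
        avg = dm.sum(axis=1) / max(len(remaining) - 1, 1)     # average distance per sample
        picks = {int(np.argmin(avg)), int(np.argmax(avg))}    # the r samples of this iteration
        chosen = [remaining[p] for p in picks]
        selected.extend(chosen)
        # remove the chosen samples and any identical duplicates (s) before the next iteration
        drop = set(chosen) | {j for j in remaining
                              for c in chosen if np.array_equal(X[j], X[c])}
        remaining = [j for j in remaining if j not in drop]
    return selected[:K]
```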

Evaluation of the selected core collection

As discussed previously, the best way to evaluate a CC depends on the purpose of that CC, and even though a CC can be evaluated with the same dataset from which it was constructed, evaluating it with a different dataset7 is desirable. In this study, we use other datasets for our evaluation whenever possible. The list given below provides the evaluation parameters implemented in this study; an illustrative sketch of two of them follows the list.

  • a) The average distance between each MC sample and the nearest CC sample (ANE) can be calculated using the equation as follows:

    $\mathrm{ANE}_{tot} = \frac{1}{L}\sum_{k=1}^{K}\sum_{j=1}^{J} D(k - c_{MC_j})$     (5)

    where K is all CC elements, k is each CC element and D is the distance between k and each jth cMC element whose closest CC element is k, including itself, thus rendering L total comparisons. The ideal ANE value is 0, where each sample of the CC represents itself and those similar to it. This parameter evaluates the homogeneity of the represented MC diversity.

  • b) The average distance between each CC sample and the nearest CC sample (ENE) can be calculated using the equation as follows:

    $\mathrm{ENE}_{tot} = \frac{1}{L}\sum_{k=1}^{K} D(k - c_{CC})$     (6)

    where K is all CC elements, k is each CC element and D is the distance between k and its closest CC element cCC, excluding itself, in L total comparisons. With such an evaluation parameter, higher dispersion renders higher scores with the aim of evaluating the dispersion among selected CC elements.

  • c) The average distance between CC samples (E) can be calculated using the equation as follows:

    $E_{tot} = \frac{1}{L}\sum_{k=1}^{K}\sum_{j=1}^{J} D(k - c_{CC_j})$     (7)

    where K is all CC elements, k is each CC element and D is the distance between k and all other jth CC elements cCC, excluding itself, in L total comparisons. This evaluation parameter indicates higher scores when CC elements have greater distances between themselves.

    While the previous evaluation parameters are useful for data dispersion analysis, they do not evaluate how well the distribution of the MC is represented in the CC; therefore, the distribution comparison tests included are as follows:

  • d) The homogeneity test (F-test for variances and t-test for means; α = 0.05) between the CC and MC for each trait, reported as the percentage of traits that are statistically different (MD for means and VT for variances)9.

  • e) The coincidence rate (CR) can be calculated using the equation as follows:

    $\mathrm{CR} = \frac{1}{M}\sum_{m=1}^{M}\frac{R_{CC}}{R_{MC}} \times 100$     (8)

    where R is the range of each m trait, and M represents the number of traits.

  • f) The variable rate (CV) can be calculated using the equation as follows:

    $\mathrm{CV} = \frac{1}{M}\sum_{m=1}^{M}\frac{CV_{CC}}{CV_{MC}} \times 100$     (9)

    where CV is the coefficient of the variation of each m trait in the CC and MC, and M is the number of traits.

    According to Hu et al.10, a valid CC has CR > 80 and MD < 20, which are the limits for the ideal representation of the identity and distribution of the MC.

  • g) The alleles coverage (CA) can be calculated using the equation as follows:

    $\mathrm{CA} = \left[\,\left|1 - \frac{|1 - A_{CC}|}{A_{MC}}\right|\,\right] \times 100$     (10)

    where A_CC is the set of alleles present in the CC, and A_MC is the set of alleles present in the MC; CA measures the percentage of alleles from the MC that are present in the CC12.
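
As an illustration, the sketch below computes two of these parameters, ANE (equation 5) and CR (equation 8), from a precomputed DM and a trait matrix; the function names and inputs are hypothetical, not part of the original implementation:

```python
import numpy as np

def ane(dm, cc_idx):
    """ANE (equation 5): average distance from every MC sample to its nearest CC sample."""
    d_to_cc = dm[:, cc_idx]                    # distances from all MC samples to the CC members
    return float(d_to_cc.min(axis=1).mean())   # 0 when every sample is perfectly represented

def coincidence_rate(mc_traits, cc_idx):
    """CR (equation 8): mean ratio of CC trait ranges to MC trait ranges, in percent."""
    cc_traits = mc_traits[cc_idx]
    r_mc = mc_traits.max(axis=0) - mc_traits.min(axis=0)
    r_cc = cc_traits.max(axis=0) - cc_traits.min(axis=0)
    return float(np.mean(r_cc / r_mc) * 100)   # CR > 80 is the acceptance threshold cited above
```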

To compare the obtained CCs with an established methodology, we implemented Core Hunter 2 (CH)13 as a reference, using the program's default parameters on the agromorphological and genomic datasets.

Experimental datasets

To determine the efficiency of analysing data behaviour relative to a direct point-to-point comparison, a synthetic dataset of binary data (Sdata) with manageable n and m elements was constructed (supplementary material 1).

To test the algorithm in real biological-context scenarios, the CCs from different MCs were constructed and evaluated.

To compare the scores of the algorithm's CCs with those of the MCs, data for 780 rice (Oryza sativa L.) accessions and 423 foxtail millet (Setaria italica subsp. italica (L.) P. Beauv.) accessions were retrieved from the National Institute of Agrobiological Sciences (NIAS) http://www.gene.affrc.go.jp/databases_en.php.

According to the available data, different datasets were assembled. The 762 SNPs from the 780 rice accessions retrieved from the NIAS database (Rdata) were divided arbitrarily into two subsets of 331 SNPs each for constructing two smaller datasets (RdataI and RdataIII). In addition, ATs were categorized and mapped into the binary data for 273 of the 780 accessions, resulting in 38 variables (RdataII). The variables from the 423 foxtail millet genotypes with transposon displays24 were used as a single dataset (Fdata). For a subset of 141 accessions (FdataI), 9 ATs were categorized and mapped into binary data, resulting in 28 variables (FdataII). The substitution tables used during this mapping are presented as supplementary material 1.

Implementation

All methodological procedures (except for CH, which was implemented according to the software's default parameters and is available for download at www.corehunter.org) were performed using FreeMat v4.2 (www.freemat.sourceforge.net). All original code is available as supplementary material 1.

Results and discussion

Selection and evaluation

The selection criteria were chosen to seek the best possible distribution of the selected CC elements within the DM. Although hierarchical clustering has proven to be an effective method for determining collection structure and sampling CCs25, has been implemented in different crops26,27 and has been included in various selection algorithms11, hierarchical reconstruction presents the challenge of selecting an appropriate model for biological interpretation, ranging from unweighted pair-group averages to Markov models in Bayesian estimations28. To avoid this challenge, we decided to work strictly with the DM. With the selection criteria described in this methodology, we aimed to retrieve representative elements from the distributions of the collections; however, because of its iterative nature, this methodology may render high redundancy under certain data distributions. Despite this limitation, it has proven capable of selecting elements representative of the MC's diversity.

Evaluation criteria were applied according to Odong et al.7 without excluding the classic criteria used in 9,10. The selected CCs render adequate results in general terms. As expected, the selected CCs did not always reach optimal values for MD and CR; this is because the aim of the selection method is not to render a CC with a distribution similar to that of the MC, but to include as much diversity as possible.

We believe that scoring the CC sets obtained with these methodologies will enable genetic resource banks to describe clearly the strengths and limitations of their CCs with respect to the MC from which they are drawn, and will provide adequate tools for determining the possible purposes of the selected CCs.

Mapping

Although several representations of genotypic characteristics (particularly those involving DNA sequences29–31) have been proposed, real-number-based mappings have not been discarded; indeed, this type of mapping has been widely studied for signal analysis even though such mappings share two principal problems: the preferential magnitude of some nucleotides and the non-equidistance of all nucleotides32,33. The arbitrary values selected for the numerical representation of SNP genotypes aim to maintain equidistant relations among purines and among pyrimidines, in such a manner that the same distance is also preserved between at least one of them and the undetermined values. ATs are represented as binary data. This representation may prove useful for discrete data but requires a clustering procedure for continuous data. In this study, we arbitrarily generated clusters for the latter and then represented them as the former. Although this implementation may not be the most accurate in terms of biological or agronomical significance, it serves as a first approach for testing the feasibility of using signal processing techniques when merging several datasets to construct one CC.

RAW versus FFT

The RAW comparison establishes a distance value on the basis of the average point-to-point distance between the mapped values of two elements, whereas the FFT power-spectrum implementation compares the signals in the frequency domain. Using the FFT, it was possible to establish a DM based on how the data 'shift' rather than on average point-to-point comparisons. The FFT approach provides a different DM, in which compared elements are clustered on the basis of the similarity of the shift among data, regardless of whether the shift is in the opposite phase. We believe that this procedure may reveal additional information about the relations not only between elements but also between the individual components within each element.

FFT comparisons of signals without PCA are a good approach for CC selection. Nevertheless, implementing PCA enables us to avoid being misled by particular data arrangements, for example palindromic data that would result in the same power spectra. Moreover, through PCA we could organize the data according to their level of impact on the differences between accessions, which, once their magnitudes were obtained, inherently rendered a representation of the informativeness relations among values. This 'data behaviour' was used as the element for pairwise comparisons, and although this approach clusters differently from RAW comparisons, we believe that it will provide a new perspective for CC selection and open the possibility of further data exploration.

Our first approach was to measure the comparisons under different K values, comparing the RAW signals with the PCA-FFT-treated signals. Results from Sdata, Fdata and Rdata are presented in Table 1–Table 3. As expected, most evaluation criteria improved as K increased.

Table 1. ∆ K selected CC scores from MC Sdata Raw and PCA Signal evaluated with Sdata.

| Metric | PCA K=12 | PCA K=18 | PCA K=24 | RAW K=12 | RAW K=18 | RAW K=24 |
|---|---|---|---|---|---|---|
| ANE | 0.2348 | 0.2311 | 0.2164 | 0.2697 | 0.2287 | 0.2164 |
| ENE | 0.339 | 0.3386 | 0.3401 | 0.3696 | 0.3228 | 0.3214 |
| E | 0.5562 | 0.5622 | 0.5547 | 0.5558 | 0.5333 | 0.5299 |
| MD | 0 | 0 | 0 | 0 | 0 | 0 |
| VT | 41.6667 | 50 | 41.6667 | 33.3333 | 58.3333 | 41.6667 |
| CR | 64.8403 | 71.6918 | 73.7154 | 60.6447 | 75.2465 | 80.4716 |
| CV | 9080.7986 | 1.2074 | 86.0876 | 136.6446 | 139.1418 | 280.8481 |
| AR | 74.3363 | 81.4159 | 89.3805 | 61.9469 | 77.8761 | 80.531 |

Table 2. ∆ K selected CC scores from MC Fdata Raw and PCA Signal evaluated with Fdata.

| Metric | PCA K=48 | PCA K=72 | PCA K=96 | RAW K=48 | RAW K=72 | RAW K=96 |
|---|---|---|---|---|---|---|
| ANE | 0.6454 | 0.6423 | 0.6407 | 0.6489 | 0.6431 | 0.643 |
| ENE | 0.646 | 0.6472 | 0.6474 | 0.65 | 0.6448 | 0.6452 |
| E | 0.7297 | 0.7301 | 0.7304 | 0.7231 | 0.7236 | 0.7239 |
| MD | 1.1799 | 0.59 | 0.59 | 1.7699 | 1.4749 | 1.4749 |
| VT | 50.4425 | 53.6873 | 56.6372 | 50.7375 | 56.0472 | 55.1622 |
| CR | 83.6883 | 87.0605 | 88.9709 | 83.5334 | 86.9308 | 87.7461 |
| CV | 0.8494 | 0.419 | 0.7357 | 1.1037 | 4.74 | 0.7361 |
| VA | 96.3945 | 97.7652 | 98.5995 | 95.3516 | 97.497 | 97.4374 |

Table 3. ∆ K selected CC scores from MC Rdata Raw and PCA Signal evaluated with Rdata.

| Metric | PCA K=48 | PCA K=96 | PCA K=156 | RAW K=48 | RAW K=96 | RAW K=156 |
|---|---|---|---|---|---|---|
| ANE | 0.6013 | 0.5966 | 0.5942 | 0.6118 | 0.6052 | 0.6042 |
| ENE | 0.5939 | 0.5944 | 0.5981 | 0.6106 | 0.6085 | 0.609 |
| E | 0.7105 | 0.7074 | 0.7051 | 0.703 | 0.7038 | 0.7054 |
| MD | 9.1146 | 5.9896 | 3.9062 | 10.1562 | 5.4688 | 4.4271 |
| VT | 42.4479 | 48.6979 | 58.0729 | 57.5521 | 72.9167 | 70.0521 |
| CR | 70.5716 | 78.477 | 83.2957 | 69.9022 | 78.1045 | 80.0167 |
| CV | 1.0171 | 0.4343 | 0.3137 | 7.9407 | 0.4375 | 1.1344 |
| VA | 92.6758 | 96.8992 | 98.5298 | 93.9856 | 98.1823 | 98.5031 |

The use of FFT signals renders better overall scores than the use of RAW signals in Sdata and Fdata; however, this advantage diminishes in Rdata. We speculate that this difference can be explained by the mapping procedures used; further research regarding this matter is encouraged.

Using the K values rendered by CH, we generated CCs with both CH and FFT on the above datasets. The evaluation of these CCs is summarized in Table 4 and in Figure 4 and Figure 5. Both methodologies rendered similar results, yet PCA rendered better results on the parameters representing MC distribution; this could be an effect of the selection method's intrinsic redundancy.

Table 4. CCs selected from MC Sdata, Fdata and Rdata using PCA signals and Core Hunter, each evaluated against its respective source data.

| Metric | Sdata PCA (K=12) | Sdata CH (K=12) | Fdata PCA (K=84) | Fdata CH (K=84) | Rdata PCA (K=156) | Rdata CH (K=156) |
|---|---|---|---|---|---|---|
| ANE | 0.2348 | 0.2314 | 0.6407 | 0.6392 | 0.5942 | 0.5952 |
| ENE | 0.339 | 0.3906 | 0.6474 | 0.6386 | 0.5981 | 0.6047 |
| E | 0.5562 | 0.563 | 0.7304 | 0.7176 | 0.7051 | 0.7017 |
| MD | 0 | 0 | 0.59 | 1.1799 | 3.9062 | 5.4688 |
| VT | 41.6667 | 58.3333 | 56.6372 | 66.6667 | 58.0729 | 86.7188 |
| CR | 65.6045 | 76.1001 | 88.9709 | 93.0119 | 83.2957 | 89.6723 |
| CV | 9080.9781 | 32.6078 | 0.7357 | 0.429 | 0.3137 | 0.4001 |
| AR | 74.3363 | 76.9912 | 98.5995 | 98.4803 | 98.5298 | 99.3852 |

Figure 4.

Distribution along the first two principal components of the k=11 CC (orange) selected by CH (a), PCA (b) and RAW (c) within the Sdata distribution (blue).


Figure 5.

Distribution along the first two principal components of the k=84 CC (orange) selected by CH (a) and PCA (b) within the Fdata distribution (blue).

Thus far, the proposed CC selection method and algorithm appear worthy of further exploration. We are aware of two fundamental elements that require immediate attention. First, a better mapping solution for both genotypic and AT numerical representation needs to be determined. Second, the selection system we developed is based directly on the DM and is prone to high redundancy under some data distributions. As discussed earlier, this selection system was chosen in order to avoid the problems associated with hierarchical clustering and further allocation selections13,34. Both issues should be addressed in the near future.

Comprehensive data analysis

To demonstrate that FFT-based CC selection can include and analyse data regardless of its origin, we concatenated corresponding signals from FdataI with FdataII as well as RdataI and RdataIII with RdataII to construct MFdata, MRdataI and MRdataIII. The comprehensive sets were used to construct CCs; the sets were then compared with both their original genotype and phenotype MCs. These comparisons are shown in Table 5–Table 8, and their distributions are represented in Figure 6–Figure 9.
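
A sketch of how such comprehensive signals might be assembled before selection (reusing the `pca_rearrange` and `select_core` sketches above; the matrices here are random placeholders standing in for the mapped FdataI and FdataII values, not the actual NIAS data):

```python
import numpy as np

# Placeholder mapped matrices for the same 141 foxtail millet accessions.
fdata1 = np.random.rand(141, 100)      # genotype-derived signal values (FdataI stand-in)
fdata2 = np.random.rand(141, 28)       # binary/normalized agromorphological traits (FdataII stand-in)

mfdata = np.hstack([fdata1, fdata2])   # one comprehensive signal per accession (MFdata)
cc = select_core(pca_rearrange(mfdata), K=24)   # K=24 comprehensive core collection
```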

Table 5. CCs selected from MC FdataI and MC MFdata PCA signals and evaluated with FdataI and FdataII.

| Metric | FdataI CC vs FdataI | MFdata CC vs FdataI | FdataII CC vs FdataII | MFdata CC vs FdataII |
|---|---|---|---|---|
| K | 24 | 24 | 24 | 24 |
| ANE | 0.6333 | 0.6356 | 0.4049 | 0.4093 |
| ENE | 0.6413 | 0.6423 | 0.4374 | 0.4351 |
| E | 0.7194 | 0.7113 | 0.623 | 0.5914 |
| MD | 1.7668 | 2.4735 | 0 | 0 |
| VT | 66.0777 | 33.9223 | 46.42 | 64.2857 |
| CR | 89.4908 | 89.8198 | 80.677 | 82.1913 |
| CV | 45.7033 | 35.6847 | 21.8658 | 132.1517 |
| AR | 91.7647 | 92.7206 | 97.5904 | 94.3775 |

Table 6. CCs selected from MC RdataI, MRdataI, RdataIII and MRdataIII PCA signals and evaluated with RdataI.

| Metric | RdataI CC | MRdataI CC | RdataIII CC | MRdataIII CC |
|---|---|---|---|---|
| K | 24 | 24 | 24 | 24 |
| ANE | 0.6148 | 0.6156 | 0.6251 | 0.6169 |
| ENE | 0.5989 | 0.6107 | 0.621 | 0.6194 |
| E | 0.6962 | 0.6909 | 0.6985 | 0.6934 |
| MD | 8.8542 | 8.5938 | 7.2917 | 6.7708 |
| VT | 52.0833 | 63.5417 | 52.0833 | 53.3854 |
| CR | 80.7367 | 83.768 | 81.7278 | 81.8623 |
| CV | 56.3949 | 59.6279 | 45.6875 | 199.9377 |
| AR | 86.5097 | 88.144 | 86.5651 | 90.7202 |

Table 7. CCs selected from MC RdataI, MRdataI, RdataIII and MRdataIII PCA signals and evaluated with RdataIII.

| Metric | RdataI CC | MRdataI CC | RdataIII CC | MRdataIII CC |
|---|---|---|---|---|
| K | 24 | 24 | 24 | 24 |
| ANE | 0.6285 | 0.6276 | 0.6314 | 0.623 |
| ENE | 0.6273 | 0.6294 | 0.6368 | 0.6267 |
| E | 0.7036 | 0.7054 | 0.7226 | 0.7056 |
| MD | 8.0729 | 7.5521 | 7.2917 | 10.4167 |
| VT | 52.8646 | 60.6771 | 51.5625 | 46.875 |
| CR | 79.5995 | 81.0356 | 79.6809 | 84.53 |
| CV | 28.3673 | 56.3689 | 90.0475 | 60.7279 |
| AR | 88.9071 | 88.7705 | 87.5956 | 93.0471 |

Table 8. CCs selected from MC RdataI, MRdataI, RdataIII and MRdataIII PCA signals and evaluated with RdataII.

| Metric | RdataII CC | MRdataI CC | MRdataIII CC |
|---|---|---|---|
| K | 24 | 24 | 24 |
| ANE | 0.4594 | 0.4652 | 0.4618 |
| ENE | 0.4796 | 0.4896 | 0.4742 |
| E | 0.6402 | 0.6205 | 0.6169 |
| MD | 0 | 5.2632 | 0 |
| VT | 39.4737 | 42.1053 | 60.5263 |
| CR | 63.8082 | 61.8988 | 68.2437 |
| CV | 3.8262 | 2.2285 | 4.1332 |
| AR | 95.4268 | 98.7805 | 98.7805 |

Figure 6.

Distribution along the first two principal components of the k=24 CC (orange) selected by PCA from FdataI (a) and MFdata (b) within the FdataI distribution (blue).


Figure 7.

Distribution along the first two principal components of the k=24 CC (orange) selected by PCA from FdataII (a) and MFdata (b) within the FdataII distribution (blue).


Figure 8.

Distribution along the first two principal components of the k=24 CC (orange) selected by PCA from RdataIII (a) and MRdataIII (b) within the RdataI distribution (blue).


Figure 9.

Distribution along the first two principal components of the k=24 CC (orange) selected by PCA from RdataII (a) and MRdataI (b) within the RdataII distribution (blue).

These comprehensive CCs showed better overall scores than genotype-only CCs when compared against genotype-only data. Conversely, phenotype-only CCs showed better overall scores when compared against phenotype-only data. In the latter case, it should be kept in mind that the comprehensive data also include genotypic data; genotypic variation may reduce the weight of some phenotypic traits in the PCA, which could explain why better selections are made when only phenotypic data are considered.

The generation of a DM based on signal comparisons originating from mixed data construction enables us to explore one of the most interesting applications of this algorithm. By mapping genotypic and AT data, constructing a single signal with all data available for a particular accession is possible. The possibility of including genotypic data with phenotypic traits, geographical locations, climates, habitats, nutritional requirements, symbiotic relationships and so forth provides an opportunity for determining the best information to be included in the selection process in order to cope with the particular objectives for which that CC is being selected. This concept, in addition to adequate scoring systems, may prove useful in designing tailored CCs that comply with specific research/breeding objectives.

Conclusions

The use of SPTs in CC selection, as presented in this algorithm, enables us to analyse all available data comprehensively and from different perspectives. Despite its limitations, this signal construction makes it possible to analyse all available data regarding each accession in CC selection with good results. The efficiency of SPTs in CC selection suggests that the use of these tools in MC analysis may provide useful information not only for CC but also for other purposes. The implementation of current and other SPTs in all-inclusive MC-mapped signals is worth further exploration, and we believe that it will be an important asset to genetic resource management and exploitation.

How to cite this article
Borrayo E and Takeya M. Signal-processing tools for core-collection selection from genetic-resource collections [version 1; peer review: peer review discontinued]. F1000Research 2015, 4:97 (https://doi.org/10.12688/f1000research.6391.1)
Peer review discontinued

At the request of the author(s), this article is no longer under peer review.
