Method Article

Formal definition of the MARS method for quantifying the unique target class discoveries of selected machine classifiers

[version 2; peer review: 2 approved]
PUBLISHED 01 Jul 2022

This article is included in the Artificial Intelligence and Machine Learning gateway.

Abstract

Conventional binary classification performance metrics evaluate either general measures (accuracy, F score) or specific aspects (precision, recall) of a model’s classifying ability. As such, these metrics, derived from the model’s confusion matrix, provide crucial insight regarding classifier-data interactions. However, modern-day computational capabilities have allowed for the creation of increasingly complex models that share nearly identical classification performance. While traditional performance metrics remain essential indicators of a classifier’s individual capabilities, their ability to differentiate between models is limited. In this paper, we present the methodology for MARS (Method for Assessing Relative Sensitivity/Specificity) ShineThrough and MARS Occlusion scores, two novel binary classification performance metrics designed to quantify the distinctiveness of a classifier’s predictive successes and failures relative to alternative classifiers. Being able to quantitatively express classifier uniqueness adds a novel classifier-classifier layer to the process of model evaluation and could improve ensemble model-selection decision making. By calculating both conventional performance measures and the proposed MARS metrics for a simple classifier prediction dataset, we demonstrate that the proposed metrics’ informational strengths synergize well with those of traditional metrics, delivering insight complementary to that of conventional metrics.

Keywords

Machine learning, Binary classification, Classifier performance evaluation, Classifier selection optimization, Classifier comparative uniqueness

Revised Amendments from Version 1

We incorporated a new figure, MARS ShineThrough bar chart (MARS charts section), which allows for the prompt visualization of the classifiers’ individual ETP but provides no information about combined classifier target-class discovery efforts. The discussion section was extended to explain the circumstances under which MARS metric usage is ideal and to further emphasize that for applications with different tradeoffs, an inverted MARS evaluation method, aimed at maximizing true negatives, may be preferable.

See the authors' detailed response to the review by Timothy A. Warner
See the authors' detailed response to the review by Samir Chatterjee

Introduction

Traditionally, binary classification performance has been assessed using a combination of statistical measures derived from the classifier’s confusion matrix (accuracy, precision, recall/sensitivity, specificity, F score), or from the classifier’s various confusion matrices in the case of classifications at different cut-off thresholds (ROC curve, AUC metric). Accuracy is defined as the percentage of correct predictions out of all predictions. Precision is the percentage of predicted positives that are true. Recall (sensitivity) is the percentage of actual positives that are correctly predicted. Specificity is the percentage of actual negatives that are correctly predicted. F scores (in variants such as F1 and F2) combine precision and recall, weighting each equally or unequally, to account for different misclassification costs. Finally, for binary classifiers that assign a probability or score to predictions, ROC curves and AUC metrics account for these ranked predictions, allowing sensitivity and specificity to be observed at different cut-off thresholds. To plot the ROC curve and assess AUC, sensitivity and specificity are measured @k, where k is the number of top-ranked predictions and increases from 1 to the total number of observations in the dataset. Effective classifiers demonstrate a “bulge” in their ROC curves, and a concomitant AUC close to 1, indicating that they discover far more true positives in the top-ranked k items than would be expected in a random selection of k items. Notably, none of these conventional metrics assess the distinctiveness (uniqueness) of the classifier’s predictions relative to other classifiers. In other words, conventional metrics are unable to assess what percentage of true positives (‘hits’) are found only by the current algorithm but not by alternatives, nor what percentage of false negatives (‘misses’) were missed by the current algorithm but not by alternatives.

Prior to modern-day computational capabilities, the inability to quantify classifier uniqueness had not been seen as a significant limitation, as available computing power did not allow for the use of big data or complex classifiers, resulting in a low-diversity classifier prediction sample space for most applications. However, within the context of modern-day computational power, which allows high-volume data to be used to train complex ML classifiers for tasks beyond traditional classification/regression (e.g., discovery-driven tasks, such as flagging potentially hazardous products via online reviews, discussed below), the inability to quantify how many, and what proportion, of a classifier’s correct (and incorrect) predictions are exclusive to that classifier is a significant limitation. This is especially true considering that complex models may often report equal accuracy (or precision, or recall, or AUC) yet have fundamentally different decision boundaries, resulting in a high-diversity prediction sample space; the classifiers may each have the unique ability to identify distinct observations from the target class, and this classifier uniqueness ought to be assessable.

Such assessments of classifier uniqueness have been made possible through the use of the novel MARS (Method for Assessing Relative Sensitivity) ShineThrough and MARS Occlusion scores, whose software-level implementation was recently described in Ref. 19. However, since Ref. 19 focuses solely on the usage and interpretation of the software artifact’s outputs, it does not outline the methodological framework used to generate ShineThrough and Occlusion scores. Thus, in this paper, we present the mathematical foundations behind the MARS metrics and their corresponding software artifact. Furthermore, we also provide step-by-step sample calculations that illustrate the inner workings of ShineThrough and Occlusion scores for a simple dataset. Being able to quantitatively assess classifier uniqueness has multiple benefits: better decisions could be made about combining complementary classifiers (vs duplicative classifiers), and improved characterizations could be made of where particular classifiers ‘shine through’ (spot true positives that no other classifiers spot) or ‘occlude’ (hide or miss observations in the target class, by mistakenly classifying those observations as false negatives, when all other classifiers were able to spot those observations as true positives).

As an example of the problematic omission of exclusivity metrics in the evaluation and comparison of classifiers, consider the following cases. Recently, Ref. 1 evaluated the generalized, binary predictive ability of eight classifiers across ten datasets. ROC curve values for the top-ranked classifiers revealed that Support Vector Machine (SVM), Artificial Neural Network (ANN), and Partial Least Squares Regression (PLS) classifier performances were nearly identical across all datasets. Ref. 2 compared the performance of several classifiers, namely Random Forest (RF), Decision Tree (DT), and k-nearest neighbors (kNN), using binary classification schemes for variable stars. Similar to Ref. 1, the precision, recall, and F1 scores in Ref. 2 indicated that all three classifiers performed nearly identically. References 3-5 reported similar outcomes, with virtually equal performance metric values across the top n-ranked classifiers. In all these cases, while the performance of the classifiers is nearly identical according to conventional classifier evaluation metrics, the classifiers clearly made different false positive and false negative errors, and thus triumphed, or failed, relative to other classifiers on particular observations. Clearly, the scope of traditional statistical performance measures is too narrow to provide the insight required to distinguish between the top n-ranked classifiers based on their respective exclusive hits or misses. Novel classifier exclusivity metrics are needed to illustrate the success or failure of classifiers on particular observations, relative to their competing classifiers. These exclusivity metrics should reflect the extent to which a classifier exclusively finds (“shines through”) observations in the target class (that are not spotted by competing classifiers), or exclusively misses (“occludes”) observations in the target class that are spotted by competing classifiers.

Consider a classification task where the data scientist is attempting to identify safety concerns expressed by consumers in millions of online product reviews (e.g., see Refs. 6-9), using alternative candidate classifiers C1 and C2. The classification task is critical: missed safety concerns are unaddressed product hazards that could injure current or future product users. Assume the two competing classifiers, C1 and C2, both have precision of 80% and recall of 80%, superficially (i.e., prima facie) indicating that the classifiers have similar performance. However, if we are able to take into consideration the exclusivity of the classifiers’ predictions (“shine through” and “occlusion”), we may find that C1 finds a significant proportion of the target class (safety concerns, in this case) that C2 misses (“occludes”). Assessing classifier exclusivity is thus essential to revealing that two classifiers with 80% precision are by no means identical in their target-observation discovery ability, and may be complementary, rather than simply competing. This realization allows the data scientist to discover more safety concerns through intelligent classifier combination (e.g., taking true positives from both classifiers), rather than simply eliminating a superficially comparable classifier (when regarding conventional classifier performance metrics only prima facie).

Hence, while traditional performance metrics are highly efficient at identifying elite models, they tend to fall short when the task at hand requires that these (elite) models be differentiated, particularly so if the source data is of high volume.

In this paper, we present the methodology for MARS (“Method for Assessing Relative Sensitivity”), a novel approach that evaluates the comparative uniqueness of a classifier’s predictions, relative to other classifiers.19 By mathematically defining MARS ‘ShineThrough’ and ‘Occlusion’ scores, we demonstrate how these metrics assess model performance as a function of the model’s ability to exclusively capture unique true positives not found by the other classifiers (‘ShineThrough’) and the model’s inability to capture true positives found by the other classifiers (‘Occlusion’). These metrics, designed to complement widely used traditional and alternative measures, add another layer to classifier assessment, provide crucial insight that helps better distinguish and explain the behavior of the top n-ranked classifiers, and can be further extended to find optimal complementary classifier combinations for target-class discovery.

Related work

Binary classification Machine Learning (ML) performance metrics provide quantitative insight pertaining to different facets of a classifier’s true behavior, i.e., its performance on unseen data. For example, while precision is defined as the proportion of predicted positives that are actually positives, recall (sensitivity) is the overall proportion of actual positives that were correctly labelled as such.10 These metrics, derived from the classifier’s confusion matrix (Figure 1), offer complementary assessments concerning the classifier’s ability to detect and correctly label true positives, as evidenced by their mathematical definitions:

$\text{Precision} = \frac{TP}{TP + FP}$


Figure 1. Format of a conventional classifier confusion matrix.

Abbreviations used: TP = True Positives, FP = False Positives.

$\text{Recall (Sensitivity)} = \frac{TP}{TP + FN}$

Abbreviations used: FN = False Negatives.

Similar to sensitivity, which calculates the model’s true positive rate, specificity evaluates the overall proportion of negatives that were correctly labelled by the classifier (true negative rate).11 Consequently, it follows a similar formulation:

$\text{Specificity} = \frac{TN}{TN + FP}$

Abbreviations used: TN = True Negatives.

These metrics (precision, recall, specificity) provide crucial insight relating to classifier-class interactions. Other measures, such as accuracy and F score,12 provide a more generalized interpretation of model behavior. F score, defined as the harmonic mean of precision and recall, evaluates the classifier’s performance across three confusion matrix components: TP, FP, FN, and can be defined as follows:

$F_{\beta} = \frac{(1 + \beta^{2}) \cdot \text{precision} \cdot \text{recall}}{\beta^{2} \cdot \text{precision} + \text{recall}}$

where β is arbitrarily chosen such that recall is β times as important as precision. The two most commonly used implementations are the F1 and F2 scores.13-15

Overall accuracy, unlike the aforementioned metrics, incorporates all four confusion matrix components into its calculations:

$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$
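To make the definitions above concrete, the short Python sketch below computes these conventional metrics from 0/1 label vectors. It is purely illustrative: the function names and the use of NumPy are our own choices and are not part of the MARS software artifact described in Ref. 19.

```python
import numpy as np

def confusion_counts(y_true, y_pred):
    """Return (TP, FP, TN, FN) for binary labels encoded as 0/1."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    tn = int(np.sum((y_pred == 0) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    return tp, fp, tn, fn

def conventional_metrics(y_true, y_pred, beta=1.0):
    """Accuracy, precision, recall, specificity and F-beta, guarding against zero denominators."""
    tp, fp, tn, fn = confusion_counts(y_true, y_pred)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0          # sensitivity
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    f_beta = ((1 + beta**2) * precision * recall / (beta**2 * precision + recall)
              if (precision + recall) else 0.0)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return {"accuracy": accuracy, "precision": precision, "recall": recall,
            "specificity": specificity, "f_beta": f_beta}
```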

As for visual metrics and evaluation of a classifier over multiple classification cut-off thresholds (ranked predictions), Receiver Operating Characteristics (ROC) curves16,17 and Precision-Recall (PR) curves are generally considered to be the standard. ROC curves display what proportion of the total target class items were found by the classifier (sensitivity) in the x top-ranked target class predictions (x-axis).

Precision-Recall (PR) curves are sometimes used as an alternative to ROC curves,18 to illustrate fluctuations in hit and miss rates as increasing numbers of top-ranked observations are considered by a classifier. Notably, neither ROC nor PR curves indicate how many of the true positives in the top-ranked predictions are exclusive to the current classifier (i.e., were target-class items not found by any other classifier), nor how many of the false negatives are exclusive to the current classifier (i.e., were target-class items correctly found by all the other classifiers). The use of the MARS software artifact, proposed in Ref. 19, has been suggested as a way to overcome this limitation, which we further validate in this paper by presenting the mathematical foundations behind the software-level implementation of the MARS metrics.

Methods

We assess overall classifier uniqueness across two separate dimensions: MARS ShineThrough and MARS Occlusion scores. These performance measures are briefly defined in Ref. 19 as:

  • 1. MARS ShineThrough Score: The proportion of exclusive true positives discovered only by the classifier under consideration, relative to the total number of unique true positives (i.e., counting each target-class observation once only, if it is found by any classifier) discovered across all classifiers.

  • 2. MARS Occlusion Score: The classifier’s proportion of exclusive false negatives (missed only by the current classifier) that were correctly labelled by all the other classifiers relative to the total number of unique true positives discovered across all classifiers (i.e., counting each target-class observation once only, if it is found by any classifier).

These performance measures are rigorously analyzed and mathematically anatomized in the subsections MARS ShineThrough scores and MARS Occlusion scores below. Note that the approach described in the following sections can easily be adapted to true negatives and false positives, instead of true positives and false negatives; this variant is omitted for brevity, as the calculations are identical.

Notation reference

Table 1 provides a quick-reference glossary of the symbols used in our definitions.

Table 1. Glossary of symbols used.

Symbol | Definition
i | Observation number
j | Classifier number
n | Total number of observations
y_i,Cj | Predicted class label for observation i, predicted by classifier j
t_i | True class label for observation i
J | Set of classifiers
C_w | Classifier of interest
C_j | Classifier j
Z_i | Constant defined in (2.1) for observation i
R_i | Constant defined in (4.1) for observation i
TTP_all | Total number of unique true positives across all classifiers
ETP_Cj | Exclusive true positives found by classifier j
EFN_Cj | Exclusive false negatives for classifier j

MARS ShineThrough scores

Let n be the number of observations in a given dataset and J the set of classifiers under consideration. Similarly, let y_i,Cj be classifier Cj's predicted class label and t_i the true class label (0 or 1) for observation i.

Then, we can define the total number of unique true positives (TTP_all) as the sum, over the n observations, of the maximum value of the product between predicted and true class labels across all classifiers in J:

(1)
$TTP_{all} = \sum_{i=1}^{n} \max_{C_j \in J} \left( y_{i,C_j} \cdot t_i \right)$

To determine the total number of exclusive true positives (ETP_Cw) discovered by the classifier of interest, C_w, i.e., target class observations found only by the current classifier and not found by the other classifiers, we use:

(2)
$ETP_{C_w} = \sum_{i=1}^{n} \left[ y_{i,C_w} \cdot t_i - \max_{C_j \in J \setminus \{C_w\}} \left( y_{i,C_j} \cdot t_i \right) \cdot Z_i \right]$

where we sum (over the n observations) the difference between the product of predicted and actual class labels and the maximum value of the same product across the remaining classifiers ($C_j \in J \setminus \{C_w\}$). Additionally, we multiply the latter term by the constant Z_i, defined as:

(2.1)
$Z_i = \begin{cases} 1, & y_{i,C_w} = 1 \text{ and } t_i = 1 \\ 0, & \text{otherwise} \end{cases}$

Consequently, the sum at observation i will have a non-zero value if and only if the classifier’s predicted and actual labels belong to the target class.

Then, using (1) and (2), we calculate the ShineThrough score for the classifier of interest, C_w, as follows:

(3)
$\text{ShineThrough}_{C_w} = \frac{ETP_{C_w}}{TTP_{all}}$

Hence, MARS ShineThrough provides a much-needed numerical interpretation of the classifier’s comparative uniqueness, i.e., what proportion of the total number of true positives were exclusively identified by the classifier under consideration, relative to the competing classifiers. Occlusion scores, on the other hand, provide insight relating to the classifier’s comparative weaknesses.
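As a minimal illustration of Eqs. (1)-(3), the Python sketch below computes TTP_all, ETP, and the ShineThrough score from a dictionary of 0/1 prediction vectors (one per classifier) and the true labels. The dictionary representation and the function names (ttp_all, etp, shine_through) are our own illustrative assumptions, not the interface of the published MARS artifact (Ref. 19).

```python
import numpy as np

def ttp_all(preds, t):
    """Eq. (1): unique true positives, i.e., target-class observations found by at least one classifier."""
    P = np.vstack([np.asarray(p) for p in preds.values()])
    return int(np.sum(np.max(P * np.asarray(t), axis=0)))

def etp(preds, t, w):
    """Eq. (2): exclusive true positives, found only by classifier w."""
    t = np.asarray(t)
    own = np.asarray(preds[w]) * t                      # y_{i,w} * t_i; this also equals Z_i
    others = np.vstack([np.asarray(p) for name, p in preds.items() if name != w])
    best_other = np.max(others * t, axis=0)             # max over J \ {w} of y_{i,j} * t_i
    return int(np.sum(own - best_other * own))

def shine_through(preds, t, w):
    """Eq. (3): ShineThrough score for classifier w."""
    return etp(preds, t, w) / ttp_all(preds, t)
```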

MARS Occlusion scores

We define the total number of exclusive false negatives (EFN_Cw), i.e., target class observations mislabelled by the classifier of interest, C_w, but correctly labelled by all of the remaining classifiers, as:

(4)
$EFN_{C_w} = \sum_{i=1}^{n} \min_{C_j \in J \setminus \{C_w\}} \left( y_{i,C_j} \cdot t_i \right) \cdot R_i$

where we find the minimum value of $y_{i,C_j} \cdot t_i$ across the remaining classifiers ($C_j \in J \setminus \{C_w\}$) and multiply the result by the binary constant R_i, defined as:

(4.1)
$R_i = \begin{cases} 1, & y_{i,C_w} = 0 \text{ and } t_i = 1 \\ 0, & \text{otherwise} \end{cases}$

Thus, the summation will have a non-zero value at observation i if and only if the classifier under consideration incorrectly labelled the target class. Using (1) and (4), we then define the MARS Occlusion score for Cw as:

(5)
$\text{Occlusion}_{C_w} = \frac{EFN_{C_w}}{TTP_{all}}$

where we divide EFN_Cw by TTP_all to determine what proportion of the unique true positives discovered across all classifiers were missed only by the classifier of interest (while being found by every other classifier), therefore quantitatively assessing the classifier's comparative weaknesses.
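Eqs. (4) and (5) can be sketched in the same illustrative style, continuing the helper functions above (again, the names are our own assumptions rather than the published implementation):

```python
import numpy as np

def efn(preds, t, w):
    """Eq. (4): exclusive false negatives, missed by classifier w but found by every other classifier."""
    t = np.asarray(t)
    r = ((np.asarray(preds[w]) == 0) & (t == 1)).astype(int)    # R_i
    others = np.vstack([np.asarray(p) for name, p in preds.items() if name != w])
    found_by_all_others = np.min(others * t, axis=0)            # min over J \ {w} of y_{i,j} * t_i
    return int(np.sum(found_by_all_others * r))

def occlusion(preds, t, w):
    """Eq. (5): Occlusion score for classifier w."""
    return efn(preds, t, w) / ttp_all(preds, t)
```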

Use cases

For the purposes of illustration, in the following subsections, we provide a stylized dataset and step-by-step, worked examples showing the computation of the MARS ShineThrough and MARS Occlusion scores, as well as the plotting of multiple MARS scores visually, in MARS charts.20

While we provide an arbitrary, stylized dataset in this paper (to facilitate the understanding of the step-by-step examples), MARS metric performance on a real dataset can be found in Ref. 19. However, the latter does not provide any worked-out examples or rigorous mathematical explanations beyond the software-artifact’s outputs.

Dataset

We created a simple, binary classification dataset with ten observations, each assigned an artificially generated “true” class label, for illustrative purposes. We also generated (predicted) labels for arbitrary classifiers: J = {C1, C2, C3, C4}. Actual (true) and classifier (predicted) labels can be seen in Table 2.

Table 2. Sample classifier prediction matrix.

Observation ID (i) | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
Predicted class, C1 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0
Predicted class, C2 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0
Predicted class, C3 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0
Predicted class, C4 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 1
Actual class | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 1
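For readers who wish to reproduce the calculations below programmatically, Table 2 can be encoded as follows; the variable names preds and t are our own and are reused in the later sketches.

```python
import numpy as np

# Table 2 encoded as 0/1 vectors, in observation order i = 1..10.
preds = {
    "C1": np.array([1, 0, 0, 0, 1, 1, 1, 1, 0, 0]),
    "C2": np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 0]),
    "C3": np.array([0, 1, 0, 0, 1, 0, 0, 0, 1, 0]),
    "C4": np.array([0, 1, 1, 1, 0, 0, 1, 0, 0, 1]),
}
t = np.array([0, 1, 0, 1, 0, 1, 1, 1, 0, 1])  # actual (true) class labels
```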

MARS ShineThrough score metric: example computation

In order to calculate MARS scores, we first determine the total number of true positives discovered across all four classifiers using Eq. (1), that is:

$TTP_{all} = \sum_{i=1}^{10} \max_{C_j \in J} \left( y_{i,C_j} \cdot t_i \right)$

We illustrate the sum’s inner calculations for the first two observations below:

@ i = 1, true class = 0: max(1 × 0, 1 × 0, 0 × 0, 0 × 0) = 0
@ i = 2, true class = 1: max(0 × 1, 1 × 1, 1 × 1, 1 × 1) = 1

Thus, the sum at i = 10 would be:

$TTP_{all} = 0 + 1 + 0 + 1 + 0 + 1 + 1 + 1 + 0 + 1 = 6$

Summing over all ten observations yields the value of 6, indicating that every target-class observation was correctly labelled by at least one classifier. This can be double-checked by looking at the classifiers’ target class predictions in Table 2 (i = 2,4,6,7,8,10).
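Using the illustrative ttp_all helper and the preds/t arrays sketched earlier, the same total can be checked in one line:

```python
# Eq. (1) over the sample data: the per-observation maximum is non-zero at
# i = 2, 4, 6, 7, 8 and 10, so the total is 6.
print(ttp_all(preds, t))   # -> 6
```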

To calculate individual ShineThrough scores for the classifier under consideration, we divide the total number of exclusive true positives found by Cw by the total number of unique true positives (i.e., correctly classified observations in the target-class) across all classifiers (Eq. (3)). We demonstrate the ETP calculation procedure for C1 in Table 3.

Table 3. Sample ShineThrough calculations for C1. Zi, constant defined for observation i.

Observation (i) | Pred. class (y_i,C1) | True class (t_i) | Z_i | Inner sum, Eq. (2)
1 | 1 | 0 | 0 | (1 × 0) − max(1 × 0, 0 × 0, 0 × 0) × 0 = 0
2 | 0 | 1 | 0 | (0 × 1) − max(1 × 1, 1 × 1, 1 × 1) × 0 = 0
3 | 0 | 0 | 0 | (0 × 0) − max(1 × 0, 0 × 0, 1 × 0) × 0 = 0
4 | 0 | 1 | 0 | (0 × 1) − max(1 × 1, 0 × 1, 1 × 1) × 0 = 0
5 | 1 | 0 | 0 | (1 × 0) − max(0 × 0, 1 × 0, 0 × 0) × 0 = 0
6 | 1 | 1 | 1 | (1 × 1) − max(0 × 1, 0 × 1, 0 × 1) × 1 = 1
7 | 1 | 1 | 1 | (1 × 1) − max(0 × 1, 0 × 1, 1 × 1) × 1 = 0
8 | 1 | 1 | 1 | (1 × 1) − max(0 × 1, 0 × 1, 0 × 1) × 1 = 1
9 | 0 | 0 | 0 | (0 × 0) − max(1 × 0, 1 × 0, 0 × 0) × 0 = 0
10 | 0 | 1 | 0 | (0 × 1) − max(0 × 1, 0 × 1, 1 × 1) × 0 = 0

Finally, we use Eq. (3) to obtain the C1 ShineThrough score:

$\text{ShineThrough}_{C_1} = \frac{2}{6}$

This reveals that C1 alone accounts for one third of the discovered target class observations, suggesting its behavior is fairly unique amongst its peers. The calculations can be easily verified by looking at observations i = 6 and i = 8 in Table 2. Additionally, we can also calculate combined ShineThrough scores for two or more classifiers by summing the number of unique TPs discovered by the models, i.e., their combined ETP.

For example, using Table 2, we can obtain the combined ShineThrough score for C1 and C4 using Eq. (1), (2), and (3), as follows:

$ETP_{C_{1,4}} = \sum_{i=1}^{10} \left[ y_{i,C_{1,4}} \cdot t_i - \max_{C_j \in J \setminus \{C_1, C_4\}} \left( y_{i,C_j} \cdot t_i \right) \cdot Z_i \right]$

At i = 10, the completed sum is:

$ETP_{C_{1,4}} = 0 + 0 + 0 + 0 + 0 + 1 + 1 + 1 + 0 + 1 = 4$

$\text{ShineThrough}_{C_{1,4}} = \frac{4}{6} = \frac{2}{3}$

This combined ShineThrough score indicates that two-thirds of the total target class observations were exclusively discovered by classifiers C1 and C4, revealing that, when combined, the classifiers are highly capable of target-class discovery relative to the remaining classifiers. Note that originally (prior to combining classifiers), the observation at i = 7 was not considered exclusive for any of the classifiers; however, once C1 and C4 had their predictions combined, it became exclusive for C1,4.
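With the helper functions and sample arrays above, both the individual and the combined ShineThrough results can be reproduced. Treating a classifier combination as the element-wise union (logical OR) of its members' predictions, evaluated against the remaining classifiers, reproduces the inner-sum values listed above; the combine helper below is our own illustrative construction.

```python
import numpy as np

def combine(preds, members, name):
    """Replace the member classifiers with a single combined classifier whose
    prediction is the element-wise OR (max) of the members' predictions."""
    merged = {n: p for n, p in preds.items() if n not in members}
    merged[name] = np.max(np.vstack([preds[m] for m in members]), axis=0)
    return merged

print(shine_through(preds, t, "C1"))         # -> 2/6, approx. 0.33
c14 = combine(preds, ["C1", "C4"], "C1,4")
print(shine_through(c14, t, "C1,4"))         # -> 4/6, approx. 0.67
```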

MARS Occlusion score metric: example computation

As for Occlusion scores, we can calculate the total number of exclusive false negatives (missed only by the current classifier) that were correctly classified by all the other classifiers following Eq. (4):

$EFN_{C_w} = \sum_{i=1}^{n} \min_{C_j \in J \setminus \{C_w\}} \left( y_{i,C_j} \cdot t_i \right) \cdot R_i$

In the case of C1, the first two iterations of the sum are as follows:

@ i = 1: y_1,C1 = 1, t_1 = 0, R_1 = 0:
min(1 × 0, 0 × 0, 0 × 0) × 0 = 0
@ i = 2: y_2,C1 = 0, t_2 = 1, R_2 = 1:
min(1 × 1, 1 × 1, 1 × 1) × 1 = 1

Following the same procedure, the final sum at i = 10 would be:

$EFN_{C_1} = 0 + 1 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 = 1$

Then, we calculate the Occlusion score for classifier C1 using Eq. (5):

$\text{Occlusion}_{C_1} = \frac{EFN_{C_1}}{TTP_{all}} = \frac{1}{6}$

Unlike ShineThrough scores (where higher scores suggest better performance), with Occlusion scores lower values suggest better performance. In the case of C1, its Occlusion score reveals that approximately 16% (one sixth) of the target class observations discovered across all classifiers were misclassified only by C1, despite being correctly labelled by every other competing classifier. Similar to ShineThrough scores, we can also sum classifiers' exclusive FN predictions to calculate combined Occlusion scores. For example, for C1 and C3, whose combined predictions have only one false negative that is correctly labelled by both of the remaining classifiers (C2 and C4), at observation i = 4 (Table 2), we can calculate the combined score, Occlusion_C1,3, as follows:

@ i = 4: y_4,C1,3 = 0, t_4 = 1, R_4 = 1:
min(1 × 1, 1 × 1) × 1 = 1

Then,

$\text{Occlusion}_{C_{1,3}} = \frac{1}{6}$

This Occlusion score for the combined classifier, C1,3, indicates that one sixth of the uniquely discovered target class observations were misclassified by the combination of classifier C1 and classifier C3, but correctly labelled by all of the remaining classifiers.
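The Occlusion results can be checked in the same way, reusing the occlusion and combine sketches introduced earlier:

```python
print(occlusion(preds, t, "C1"))             # -> 1/6 (exclusive miss at i = 2)
c13 = combine(preds, ["C1", "C3"], "C1,3")
print(occlusion(c13, t, "C1,3"))             # -> 1/6 (exclusive miss at i = 4)
```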

MARS charts

MARS ShineThrough and Occlusion scores can also be visualized, allowing for the rapid depiction of the classifiers’ relative uniqueness. For our example dataset and classifiers above, the MARS metrics can be transformed from proportions (of total true positives) to counts (of unique hits or misses), and visualized, across individual and combined classifiers, as seen in Figures 2-4, using a bubble-chart style format. Figure 2 is the MARS ShineThrough chart for classifiers C1-4; the radius of the yellow circle represents the number (count) of exclusive true positives found by the classifier on the y-axis. The radius of the orange circle represents the number of exclusive true positives found by the combination of the classifiers on the y-axis and x-axis, i.e., combined ShineThrough. Figure 3 is the MARS Occlusion chart: the radius of the red circle represents the classifier of interest’s (y-axis) number of exclusive false negatives (correctly labelled by the other classifiers), and the radius of the orange circle represents the combined number of exclusive false negatives labelled by the classifiers on the x- and y-axes (correctly labelled by the remaining classifiers).


Figure 2. MARS ShineThrough Chart, comparing count (represented by bubble radius) of target-class observations (True Positives) exclusively spotted by classifiers C1-4 and the pairwise classifier combinations.

Bubble size is proportional to ShineThrough score: the larger the bubble, the higher the classifier(s) ShineThrough score.


Figure 3. MARS Occlusion Chart, comparing count (represented by bubble radius) of target-class observations (False Negatives) exclusively missed by classifiers C1-4 and the pairwise classifier combinations.

Bubble size is proportional to Occlusion score: the larger the bubble, the higher the classifier(s) Occlusion score.


Figure 4. MARS ShineThrough Bar Chart, comparing count of target-class observations exclusively found by classifiers C1-4.

Note that orange circles can only be as small as their respective yellow or red counterparts, which in turn may be as small as zero (indicating that the classifier found no exclusive true positives or false negatives).

Individual classifier ETP counts can also be displayed via bar chart (Figure 4), allowing for prompt visual analysis of the classifiers’ individual capabilities, but providing no information about combined classifier target-class discovery efforts.
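One possible way to reproduce a bubble-style MARS ShineThrough chart with matplotlib is sketched below, computing the counts with the illustrative helpers defined earlier; the layout and styling are our own assumptions and will not exactly match the published figures.

```python
import itertools
import matplotlib.pyplot as plt

names = list(preds)
fig, ax = plt.subplots()

# Diagonal: exclusive true positives of each individual classifier.
for x, a in enumerate(names):
    ax.scatter(x, x, s=400 * etp(preds, t, a) + 20, c="gold", alpha=0.8)

# Off-diagonal: exclusive true positives of each pairwise combination.
for (x, a), (y, b) in itertools.combinations(enumerate(names), 2):
    merged = combine(preds, [a, b], f"{a},{b}")
    ax.scatter(x, y, s=400 * etp(merged, t, f"{a},{b}") + 20, c="orange", alpha=0.6)

ax.set_xticks(range(len(names)), labels=names)
ax.set_yticks(range(len(names)), labels=names)
ax.set_title("MARS ShineThrough chart (counts of exclusive true positives)")
plt.show()
```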

Discussion

Conventional metrics (Table 4; columns 2-4) immediately identify C4 as the unquestionably strongest classifier, due to its high accuracy (column 2), precision (column 3), and recall (column 4) values. However, notice that the information presented in these columns (2-4) does not go beyond identifying the individually strongest classifier; there is no insight relating to the classifiers’ decision boundaries or prediction uniqueness. On the other hand, while MARS metrics (Table 4; columns 5-6) do not provide a clear-cut answer as to which classifier is individually strongest, they do bring forth valuable insight about the models’ decision boundaries and possible synergies.

Table 4. Traditional vs MARS Metrics for the worked example.

Classifier | Accuracy | Precision | Recall | ST | OCC
C1 | 0.50 | 0.60 | 0.50 | 0.33 | 0.16
C2 | 0.20 | 0.40 | 0.33 | 0.0 | 0.0
C3 | 0.30 | 0.33 | 0.16 | 0.0 | 0.0
C4 | 0.70 | 0.80 | 0.66 | 0.16 | 0.0

MARS ShineThrough (ST) and Occlusion (OCC) scores (Table 4; columns 5 and 6, respectively) and MARS charts (Figures 2-4) reveal that C1 is uniquely adept at spotting one third (0.33) of the target class items, and that, while C4 performs reasonably well on its own (Table 4; row 4), it could be used alongside C1 to further optimize target-class item discovery. Occlusion scores further validate the combination of C1 and C4, as C1 is the only classifier that has an Occlusion score > 0 (Figure 3), indicating that it has a unique target-class prediction error (@ i = 2, Table 2) that may be best handled by a secondary model (C4 in this case, as it has the second highest ST score after C1).
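As a small illustration of such intelligent classifier combination (using the preds/t arrays from the Use cases section; the union rule shown is our own illustrative choice), the snippet below takes the union of C1's and C4's predicted positives and recomputes recall; on this worked example the pair recovers every target-class observation.

```python
import numpy as np

# Union ensemble: flag an observation as positive if either C1 or C4 flags it.
union_pred = np.maximum(preds["C1"], preds["C4"])
tp = int(np.sum((union_pred == 1) & (t == 1)))
fn = int(np.sum((union_pred == 0) & (t == 1)))
print(tp / (tp + fn))   # -> 1.0 on the worked example (6 of 6 target-class items)
```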

While some classifier combinations may improve overall target-class discovery performance, the opposite is also possible. For example, Figure 2 shows that the combination of C3 and C4 produces MARS ShineThrough scores identical to those of C4 alone, indicating that it is a weak combination and should, therefore, be avoided. Thus, while traditional performance metrics gauge individual classifier capabilities by quantitatively interpreting classifier-data interactions, MARS scores and charts examine classifier uniqueness and target-class discovery power by simultaneously interpreting both classifier-data and classifier-classifier interactions.

Note that the MARS evaluation mechanism was developed for a prototypical application of maximizing the volume of safety concerns found in online reviews, while constraining the close-reading verification effort required to determine whether predicted positives are true positives. That is, the MARS method assists with elevating binary classifier yield: increasing verified true positives per unit of effort spent reviewing predicted positives. The MARS evaluation mechanism is best suited to applications where the false positive cost is low, such as our prototypical application of discovering safety concerns in online reviews: a true positive (an online review that contains a safety concern) is valuable, while a false positive (an online review that does not contain a safety concern) has low cost, as each false positive wastes only a little reading effort, especially when only a few online reviews (predicted positives) are shortlisted by the ML algorithm(s) for escalated attention by a human reviewer. For other applications, such as disease discovery, where false positives and false negatives have differing trade-offs, the MARS evaluation method presented here may not be appropriate, and an inverted MARS evaluation method, aimed at maximizing true negatives, may be preferable.

Conclusions

In this paper, we presented the mathematical background and interpretation for two novel binary classification performance metrics, MARS ShineThrough and MARS Occlusion scores, whose software-level implementation, in the Python language, was recently described in Ref. 19. The formal definition of the MARS method, provided in this paper, will allow the research community to verify the correctness of the MARS method (through peer review), accurately implement the MARS method in other programming languages (such as JavaScript, PHP, and R), and develop novel alternatives and enhancements to the MARS method (such as visualizations that chart MARS metrics across multiple classifier cut-off thresholds instead of the single classifier cut-off threshold illustrated here). The stylized dataset and worked sample calculations provided in the Use cases section of this paper are usable by the research community as a test case, to validate the correctness of each computational step of future software implementations. MARS metrics and MARS charts add yet another layer to the process of classifier assessment, providing crucial insight about each classifier's behavior relative to that of its peers. ShineThrough scores evaluate the comparative unique strengths of the classifier, by determining the proportion of total true positives that were exclusively found by the classifier. On the other hand, Occlusion scores measure the proportion of observations that were correctly labelled by the other classifiers but misclassified by the current classifier, i.e., the classifier's comparative unique weaknesses.

Naturally, the metrics synergize well with conventional measures, as the latter are constrained to the individual classifier's confusion matrix, while the former make use of the entire observation sample space, thus evaluating classifier behavior from a previously unseen standpoint: the relative number of target class observations spotted or missed only (i.e., exclusively) by one classifier. This was demonstrated throughout the worked examples provided above, which calculated ShineThrough and Occlusion scores for our stylized dataset (Table 2), and in Ref. 19 with a real dataset, albeit without the comprehensive mathematical explanation and examples presented in this paper. As a result, the MARS methodological framework adds a new classifier-comparison dimension, exclusive hits and misses, that is not captured by conventional classifier evaluation methods.

Data availability

All data underlying the results are available as part of the article and no additional source data are required.

Software availability

Webapp: https://mars-classifier-evaluation.herokuapp.com

Source code available from: https://github.com/SoftwareImpacts/SIMPAC-2021-191

Archived source code at time of publication: https://doi.org/10.24433/CO.2485385.v120

License: MIT
