Keywords
Customer churn prediction, Data sampling techniques, Algorithmic fairness, Class imbalance problem
Customer churn, the phenomenon in which customers shift to rival companies because of dissatisfaction with existing services or for other unavoidable reasons,1 is a common issue in every customer-oriented sector, including telecommunications. Customer churn prediction (CCP) is a supervised binary classification procedure that detects potential churners before they churn. Since there are no standardized principles for collecting data for CCP tasks, the class distribution varies from one dataset to another, and one class may be severely underrepresented compared with the other. In CCP, the target class indicates whether a customer churns or not. Churn is almost always the minority class, whereas non-churn instances usually come in large numbers; churn is therefore considered a rare event2 in service-based domains, including telecom. As a result, telecom datasets typically suffer from the class imbalance problem (CIP), which leads to a situation in which minority instances remain unlearned.
Advanced machine learning techniques can be applied to predict potential churners. Consider a dataset of 10,000 instances with 10% churn samples, i.e., 1,000 churners and 9,000 non-churners. Even if a carefully built model predicts 90% of the minority class correctly, 100 customers are still misclassified. If, say, 60 churners are misclassified as non-churners (false negatives), the company loses a substantial amount of revenue, since acquiring new customers is more expensive than retaining existing ones.3 The ultimate goal in the telecom sector is thus to increase profit by decreasing customer churn, and CIP is an obstacle to this goal because it degrades classification accuracy. Algorithmic fairness has become a very active research topic since ProPublica observed that algorithms can yield discriminative outcomes that affect a minority group in real life.4
Algorithmic fairness is monitored with respect to the protected features, or sensitive variables, in the dataset. Sensitive attributes typically include, but are not limited to, gender, race, age group, and religion. Algorithmic fairness is achieved if the decisions generated by a model favor no individual or group more or less than others.5 The less bias there is in the training data, the greater the chance of achieving algorithmic fairness. However, it is almost impossible to train a zero-bias model, since historical data may contain bias for many reasons.6 Common sources of bias in training data include the compounding of initial bias over time, the use of proxy variables, and unbalanced sample sizes between minority and majority groups.7
In the CCP process, customer behavior is analyzed within specific time windows, for example one month.8 Once a prediction is made, the outcomes are reused as training data for the next prediction. There is therefore a high chance that bias is repeated in the historical data without being noticed. One solution for CIP is to apply data sampling techniques (DSTs) to the training data. Because the main function of DSTs is to increase or decrease the number of sample instances to balance the majority and minority classes, they change the number of samples in different groups in the dataset. The main goal of this study is to explore and identify the impact of applying DSTs to training data on algorithmic fairness in the CCP process. To the best of our knowledge, there is very little research concerning algorithmic fairness in the CCP process, and we believe the findings of this study provide valuable insights for future CCP research.
Ethical Approval Number: EA1742021
Ethical Approval Body: Research Ethics Committee 2021, Multimedia University
In this study, the original dataset was used to prepare three unbalanced versions, with churn rates of 5%, 15%, and 30%. Four DSTs were applied to each version, and the results were compared with the unsampled original dataset to evaluate classification performance and the impact on algorithmic fairness. The step-by-step methods used to conduct the study are presented in Figure 1.
A real-world telecom dataset was provided by one of Malaysia’s leading telecom companies (see Underlying data for details on access to this dataset). The original dataset contains 1,265,535 customer records collected from January 2011 to December 2011. Since the original dataset is huge in volume, we randomly selected 100,000 records for this study. We included demographics, call information, network usage, billing information, and customer satisfaction data, since these are considered influential factors in the CCP process.9,10 A total of 22 features were extracted after careful aggregation, i.e., new features were created from the original data and unnecessary features were deleted; the features are listed in Table 1.
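As an illustration only (the actual selection script is provided in Extended data, and the file name below is a placeholder since the data are access-restricted), the 100,000-record working sample could be drawn with Pandas as follows:

```python
# Minimal sketch: draw the 100,000-record working sample from the full dataset.
# "tm_churn_2011.csv" is a placeholder file name, not the real data file.
import pandas as pd

full = pd.read_csv("tm_churn_2011.csv")            # 1,265,535 customer records
working = full.sample(n=100_000, random_state=42)  # random subset used for the study
working.to_csv("telecom_churn_sample.csv", index=False)
```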
The final dataset was prepared with three different unbalance rates: 5%, 15%, and 30%. We created a Python script (see Extended data) that used the Pandas library to prepare the three versions of the dataset. We chose these specific rates because we wanted to experiment with cases ranging from extremely unbalanced to intermediate levels.
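A minimal sketch of how such versions could be prepared with Pandas is shown below. The column name churn (1 = churner) and the downsampling of churners to reach each target rate are our assumptions for illustration; the exact logic of the archived script may differ.

```python
# Sketch: build a dataset version with a target churn rate by downsampling churners.
# The "churn" column name (1 = churner) is an assumption.
import pandas as pd

def make_unbalanced(df: pd.DataFrame, churn_rate: float, seed: int = 42) -> pd.DataFrame:
    churners = df[df["churn"] == 1]
    non_churners = df[df["churn"] == 0]
    # churners / (churners + non-churners) = churn_rate  =>  n_churn below
    n_churn = int(len(non_churners) * churn_rate / (1 - churn_rate))
    kept_churners = churners.sample(n=min(n_churn, len(churners)), random_state=seed)
    return pd.concat([non_churners, kept_churners]).sample(frac=1, random_state=seed)

df = pd.read_csv("telecom_churn_sample.csv")
versions = {rate: make_unbalanced(df, rate) for rate in (0.05, 0.15, 0.30)}
```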
In the data preprocessing stage, we excluded all null values. Since we found only a few outliers in the selected dataset, we removed them manually without using any specific procedure. We applied four DSTs to the data: Random Over Sampler (ROS), Random Under Sampler (RUS),11 Synthetic Minority Oversampling Technique (SMOTE),12 and Adaptive Synthetic Oversampling Technique (ADASYN).13 These DSTs were selected based on their popularity and on our aim of examining the impact of each on algorithmic fairness in the CCP process.
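As a sketch of this step, all four techniques are available in the imbalanced-learn library; the snippet below assumes X_train and y_train hold the training features and churn labels, and the archived script may use different parameters.

```python
# Sketch: apply the four DSTs with imbalanced-learn. X_train / y_train are
# assumed to be the training features and churn labels from the previous step.
from imblearn.over_sampling import RandomOverSampler, SMOTE, ADASYN
from imblearn.under_sampling import RandomUnderSampler

samplers = {
    "ROS": RandomOverSampler(random_state=42),
    "RUS": RandomUnderSampler(random_state=42),
    "SMOTE": SMOTE(random_state=42),
    "ADASYN": ADASYN(random_state=42),
}

resampled = {}
for name, sampler in samplers.items():
    X_res, y_res = sampler.fit_resample(X_train, y_train)
    resampled[name] = (X_res, y_res)
```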
We applied six popular classifiers: Random Forest (RF), Decision Tree (DT), LightGBM (LGBM), Gradient Boosting (GB), Logistic Regression (LR), and XGBoost.14 We created our own Python script (see Extended data) using the Scikit-learn machine learning library to perform this step. After careful exploratory data analysis, we dropped Customer ID, Avrg local amt, Avrg std amt, Avrg idd amt, and Avrg dialup amt from the predictor variable list, since they were weakly correlated with the target variable.
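A minimal sketch of this step is given below; the default hyper-parameters are our assumption and are not necessarily those used in the archived script.

```python
# Sketch: the six classifiers used in the study, with default settings.
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from lightgbm import LGBMClassifier
from xgboost import XGBClassifier

classifiers = {
    "RF": RandomForestClassifier(random_state=42),
    "DT": DecisionTreeClassifier(random_state=42),
    "LGBM": LGBMClassifier(random_state=42),
    "GB": GradientBoostingClassifier(random_state=42),
    "LR": LogisticRegression(max_iter=1000),
    "XGBoost": XGBClassifier(random_state=42, eval_metric="logloss"),
}

for name, clf in classifiers.items():
    clf.fit(X_train, y_train)  # X_train / y_train: (resampled) training split
```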
We performed two types of evaluation: classification performance measures15 and algorithmic fairness metrics.16
Performance measures
To measure classifier performance, we applied standard measures commonly used in machine learning classification tasks, including precision, recall, and accuracy. We also applied the F1 and AUC-ROC scores, since accuracy alone is not enough to evaluate the actual performance of the classifiers. We created our own script (see Extended data) using Scikit-learn, a free machine learning library for the Python programming language. The performance of each classifier was evaluated with these measures, as illustrated below.
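A minimal sketch of this evaluation, assuming a fitted model and a held-out test split:

```python
# Sketch: standard performance metrics computed with Scikit-learn for a
# fitted binary classifier on a held-out test split.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

def evaluate(model, X_test, y_test):
    y_pred = model.predict(X_test)
    y_prob = model.predict_proba(X_test)[:, 1]  # churn probability for AUC-ROC
    return {
        "accuracy": accuracy_score(y_test, y_pred),
        "precision": precision_score(y_test, y_pred),
        "recall": recall_score(y_test, y_pred),
        "f1": f1_score(y_test, y_pred),
        "auc_roc": roc_auc_score(y_test, y_prob),
    }
```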
Algorithmic fairness metrics
We focused on assessing whether the classifier discriminates between women, the protected group, and men, the non-protected group. We applied two well-known fairness definitions and used the popular AI Fairness 360 toolkit to calculate algorithmic fairness.16
Statistical parity (SP): Also known as the equal acceptance rate. SP is achieved if women have the same probability of being predicted in the positive class, i.e., the churn class, as men.17
The SP difference (SPD) measures the difference in a specific outcome between the protected group (female) and the non-protected group (male). The smaller the SP difference between the two groups, the more statistically similar the model’s treatment of the protected and non-protected groups.
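Formally, writing Ŷ for the predicted label (1 = churn) and A for the sensitive attribute, the definition above corresponds to:

```latex
\mathrm{SPD} = P(\hat{Y} = 1 \mid A = \text{female}) - P(\hat{Y} = 1 \mid A = \text{male})
```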
Disparate impact (DI): Also known as indirect discrimination, in which no protected variables are used directly, but biased outcomes are still produced through variables correlated with the protected variables.18 The standard threshold for DI is 0.8, meaning that a group whose DI value is below 0.8 is considered to be discriminated against by the classifier.
The 80% threshold is advised by the US Equal Employment Opportunity Commission.19 A model can be considered free of disparate impact when the DI value is larger than 80% but lower than 125%.20
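DI is the corresponding ratio, P(Ŷ = 1 | A = female) / P(Ŷ = 1 | A = male). A minimal sketch of how both metrics could be computed with AI Fairness 360 is shown below; it assumes the predictions are collected in a DataFrame pred_df with a binary gender column (1 = male, 0 = female) and a predicted churn label, which may differ from the structure of the archived script.

```python
# Sketch: SPD and DI computed with the AI Fairness 360 toolkit.
# pred_df, the "gender" encoding, and the "churn" label name are assumptions.
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

dataset = BinaryLabelDataset(
    df=pred_df,
    label_names=["churn"],
    protected_attribute_names=["gender"],
    favorable_label=1,
    unfavorable_label=0,
)
metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"gender": 1}],    # male (non-protected)
    unprivileged_groups=[{"gender": 0}],  # female (protected)
)
spd = metric.statistical_parity_difference()
di = metric.disparate_impact()
print(f"SPD = {spd:.3f}, DI = {di:.3f}  (DI within [0.8, 1.25] under the 80% rule)")
```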
The preliminary classification results for the datasets with different unbalance rates, using the four DSTs, are shown in Tables 2–4. Table 2 shows the classification performance obtained on the 5% unbalanced dataset for the chosen classifiers and the four DSTs.
Table 3 shows the detailed classification performance obtained on the 15% unbalanced dataset for the chosen classifiers and the four DSTs.
Table 4 shows the detailed classification performance obtained on the 30% unbalanced dataset for the chosen classifiers and the four DSTs.
In our study, we observed that the variable is-senior remained unbalanced even after applying the DSTs. The algorithmic fairness scores for each group at the different unbalance rates are reported in Tables 5–7. Table 5 shows the comparative SP difference and DI scores calculated on the 5% unbalanced dataset and the original dataset.
Table 6 shows the comparative SP difference and DI scores calculated on the 15% unbalanced dataset and the original dataset.
Table 7 shows the comparative SP difference and DI scores calculated on the 30% unbalanced dataset and the original dataset.
Recent work on algorithmic fairness in machine learning applications can be broadly organized into three main trends. Some studies emphasize enhancing or proposing better fairness notions and evaluation metrics for the domains concerned,17,21 some focus on ways to mitigate bias in the classification process (further divided into pre-, in-, and post-processing techniques),22-25 while the last trend addresses how to maintain ethical AI standards and policies when practicing machine learning in different sectors.26,27
Despite some previous empirical studies on the impact of preprocessing techniques on algorithmic fairness, earlier work could not pinpoint the direct impact of DSTs on algorithmic fairness. Lourenc and Antunes,28 the closest work to our research, examine the effect of data preparation on algorithmic fairness. However, their work was tested on only two small datasets and provides general results for random under- and oversampling; importantly, it does not cover the widely applied DSTs SMOTE and ADASYN. In contrast, we use real-world business data and show how different DSTs behave at different levels of imbalance.
In the classification task, RF appears to be the best classifier, since it yielded the best results among the six models, while LR gave the worst scores for almost all metrics. We observed that RUS worked better in the extremely unbalanced situation than at the 15% and 30% imbalance rates. The best outcomes were obtained with ROS, SMOTE, and ADASYN at all unbalance rates; thus, oversampling techniques appear to provide more promising prediction results than undersampling. This might be because undersampling modifies the data by discarding majority-class instances, leaving the dataset with less useful information to learn from.
For all three unbalance rates, the original dataset always gave smaller statistical parity differences (SPD) than the sampled datasets created with the four DSTs, while the datasets produced by RUS and ROS yielded slightly larger SPD but showed no disparate impact. However, we can hypothetically consider that bias may still be present, since both RUS and ROS have limitations. With RUS, important data may have been removed, so the classifier can produce biased results because there is less information to learn from; with ROS, prediction performance can be biased by overfitting. In this sense, it is advisable to apply different fairness measures and compare the fairness scores. For the DI scores, a DI below 0.8 indicates indirect discrimination against the unprotected group. The mathematical formulation of DI suggests equalizing outcomes between protected and unprotected groups; in practice, however, the context of interest may justify allowing some disparity for a specific group up to a certain percentage. For example, in telecom CCP, the number of female customers can be much smaller in the dataset, since a male household member often registers the network plan on behalf of the whole household. Therefore, we consider applying DI with the 80% rule to be reasonable.
On the 5% unbalanced original dataset, LGBM, LR, and XGBoost produced DI values of 0.79, 0.64, and 0.78, respectively, whereas there was no disparate impact on the original datasets at the 15% and 30% rates. This suggests that more discrimination can occur on a more unbalanced dataset. The analysis of all datasets resampled with SMOTE and ADASYN reveals alarming discrimination by the classifiers against the unprotected group. The 30% unbalanced dataset yields the most unfair results, with the highest SPD values between the female and male groups obtained with LR, at 0.38 and 0.43. Overall, among all DSTs, ADASYN and SMOTE tend to produce more unfair outcomes than the other DSTs, yet both provide better classification performance than RUS and ROS. There is no large difference among the three data unbalance levels. Note, however, that in this study we experimented only with the gender attribute as the sensitive variable.
Owing to the nature of the CCP process and the rarity issue, training datasets are highly likely to contain compounded bias and to suffer from imbalance not only in the target class but also in other attributes, including sensitive variables. We noticed that one variable remained unbalanced even after applying the DSTs; in such cases, data attributes should be selected carefully to avoid selection bias.
As the quality of training data is important, we suggest enhanced data-repairing mechanisms to prevent bias in the training data. Furthermore, the algorithmic fairness problem mostly concerns societal discrimination. For example, in a scholarship selection process, if a classifier favors male applicants over equally qualified female applicants, the latter’s chances of receiving a scholarship decrease. In a profit-centered industry such as telecom, one might think that customers lose nothing when one group is favored more or less than another; nevertheless, it is important to consider the impact of biased decisions for the sake of the company’s reputation, the importance of treating customers equally, and the practice of ethical AI policies.
In this paper, we experimented on three unbalanced versions of a real-world telecom dataset to assess the impact of four types of DSTs on algorithmic fairness in the CCP process, and compared the results with the unsampled original dataset. Classification performance and algorithmic fairness were evaluated with well-known metrics. The outcomes indicate that RF provides the best classification results, while SMOTE and ADASYN yield larger SPD between the male and female groups as well as a disparate impact on the female group relative to the male group. Previous work has emphasized fairness mainly in contexts such as selecting scholarship candidates, releasing prisoners on parole, and selecting credit candidates. Since machine learning applications will be applied to almost every sector in the near future, the practice of using fairer or unbiased systems is essential. Our study highlights the importance of paying attention to algorithmic fairness in machine-driven decision making in profit-centered and customer-oriented sectors, where very little research has been done. In particular, our findings highlight that DSTs must be chosen carefully to achieve unbiased prediction results. In future work, we would like to test the same procedure on a larger dataset, measure additional algorithmic fairness metrics to identify the most suitable measures for the CCP task, and test more sensitive variables beyond gender.
The real-world telecom dataset was obtained from the Business Intelligence and Analytics department of Telekom Malaysia Bhd. The authors were required to go through a strict approval process following an established data governance framework. Interested readers/reviewers may contact the Business Intelligence and Analytics department to request the data (technicalsuport@tm.com.my). The decision whether or not to grant access to the data is at the discretion of Telekom Malaysia Bhd.
As most telco companies own similar customer data, other customer churn datasets that are representative of the data being used in this research can be found as follows:
Analysis code available from: https://github.com/mawmaw/fairness_churn.
Archived analysis code as at time of publication: https://doi.org/10.5281/zenodo.5516218.29
License: MIT License.