Research Article

Estimating the Reliability Function (2+1) Cascade Model for Inverse Chen Distribution

[version 1; peer review: awaiting peer review]
PUBLISHED 19 Jan 2026

This article is included in the Fallujah Multidisciplinary Science and Innovation gateway.

Abstract

Background

A cascaded model in machine learning denotes a system in which numerous models are successively organized, with the result of one model functioning as the input for the subsequent model. This method enables the strengths of individual models to be exploited to enhance overall performance, especially in intricate tasks such as image production or named entity recognition. This research examines the reliability R of a certain (2+1) cascade stress-strength model for the Inverse Chen distribution, where the strength and stress follow Inverse Chen random variables with unknown shape parameter α and known parameter λ.

Methods

Five distinct methods are employed to estimate the specified reliability: the Maximum Likelihood Estimator (ML), the shrinkage estimation method with a shrinkage weight factor estimator (sh1) and a trigonometric shrinkage weight function (sh2), the Least Squares estimation method (LS), and the Weighted Least Squares estimation method (WLS). To compare them, a numerical simulation is conducted using the MATLAB (2023b) program: random samples of sizes 25, 50, 75, and 100 are generated from the Inverse Chen distribution, and the Mean Square Error (MSE) criterion is applied for comparison.

Results

The reliability results estimated by the five methods were recorded in tables, and the numerical simulation results were recorded in tables and graphs illustrating the suitability of data with respect to all the distributions under study.

Conclusions

The inverse Chen distribution is compared with the log-logistic, uniform, exponential, Rayleigh, chi-square, half-normal, Gumbel, and Cauchy distributions using real data on diamonds from a major mining zone in Southwest Africa; the inverse Chen distribution proves to be a more competitive model for the diamond data than the other distributions. The optimal estimation approach for the reliability, in terms of MSE, is the trigonometric shrinkage weight function estimator (sh2).

Keywords

Cascade system, Inverse Chen distribution, Maximum likelihood method, Shrinkage technique, Least squares estimation method, Weighted least squares, Mean squared error criterion

1. Introduction

A cascaded model in machine learning refers to a system where multiple models are sequentially arranged, with the output of one model serving as the input for the next. This approach leverages the strengths of various models to improve overall performance, particularly in complex tasks like image generation or named entity recognition. The cascaded model has numerous practical uses, such as the localization of different diseases from chest X-ray images.1

Stress-strength reliability estimation assesses the likelihood that a system’s strength (its resistance to failure) surpasses the stress it endures. This is frequently represented as R = P(χ > γ), where χ denotes strength and γ signifies stress. This notion is essential in multiple domains such as engineering, quality control, and economics, facilitating the evaluation of a system’s probability of success or failure under specified conditions.2–11 A large number of academics study stress-strength reliability to estimate the parameters of probability distributions. In 2021, the reliability of the stress-strength type for the Exponentiated Inverted Weibull, Lomax, Polo, and Gompertz Fréchet distributions was discussed.2–5 In 2022 and 2023, the reliability under consideration was developed by Refs. 6–11 for the Generalised Exponential-Poisson, Power Rayleigh, and Inverse Chen distributions, respectively.

The cascade reliability model is a specific variant of the strength-stress model that addresses a certain (2+1) cascade configuration, featuring components A and B in operation, while component C is designated as a standby element. Let χ1 and χ2 represent the strengths of components A and B, respectively, while γ1 and γ2 denote the corresponding stress responses. If either active component fails, component C becomes active, with χ3 representing its strength and γ3 denoting the load applied to it, where χ3 = mχ1 (or mχ2) and γ3 = kγ1 (or kγ2), with 0 < m < 1 and k > 1.12

A multitude of academics are investigating the cascade reliability model. Mutkekar and Munoli (2016) investigated a (1+1) cascade model related to the exponential distribution, focusing on reliability function estimators utilizing the Maximum Likelihood Estimator (ML) to estimate the parameters, in addition to the Uniformly Minimum Variance Unbiased Estimator (UMVUE); they concluded that the UMVUE outperformed the ML.13 In 2018, Nada and Ahmed assessed the reliability of a specific stress-strength (2+1) cascade model with respect to the Weibull distribution, characterized by unknown scale parameters when the shape parameter is known; four distinct approaches were employed to assess reliability, with ML identified as the superior estimator.14 In 2019, the reliability of the (2+1) cascade was examined utilizing the Generalized Inverse Rayleigh and Inverse Weibull distributions, respectively; through various methodologies, Maximum Likelihood Estimation was demonstrated to be the superior estimation approach.15,16 In 2021, the reliability was estimated for each of the (1+1) and (2+2) models.17–19 In 2022 and 2025, (3+1) cascade models for different distributions were derived, respectively.20,21

This paper aims to analyze and estimate the reliability R of the (2+1) cascade stress-strength model, where both stress and strength adhere to the Inverse Chen distribution with unknown shape parameters αi, i = 1,2,3 and a known parameter λ. Various methodologies are employed, including Maximum Likelihood Estimation (ML), shrinkage estimation with a shrinkage weight factor estimator (sh1) and a trigonometric shrinkage weight function (sh2), least squares estimation (LS), and weighted least squares estimation (WLS). A comparative analysis of these five methods is conducted using the mean squared error criterion, derived from simulation studies, to evaluate the estimation results across the different methodologies.

2. Methods

2.1 Inverse Chen distribution

The Inverse Chen distribution is a versatile tool for modeling non-monotonic hazard functions and positively skewed data. It is well-suited for time-to-failure data in industrial operations, biomedical research, and engineering systems. The distribution can handle light-tailed and heavy-tailed phenomena due to its shape and scale parameters. Its strong right-skewness and leptokurtic features make it a strong substitute for traditional models. The Inverse Chen distribution has been used in real-world applications, such as stress-strength reliability under Bayesian frameworks, survival probability in medical research, and simulation of electronic components in geological and environmental research. Its empirical adaptability and theoretical stability make it a valuable addition to lifetime distributions.2225

The Inverse Chen distribution is a two-parameter continuous probability distribution, with a shape parameter and a scale parameter, designed to model asymmetric lifetime data, especially in reliability analysis.22,23,25 The probability density functions (PDF) are given in Eq. (1) and Eq. (3) for χ and γ, respectively, and the cumulative distribution functions (CDF) in Eq. (2) and Eq. (4). The Inverse Chen distribution, a variant of the original Chen distribution introduced by Chen (2000), is used to describe lifetime data with hazard rates that increase over time. It accommodates a wide range of hazard shapes and is defined on the positive real line. The distribution is well suited to modelling asymmetric data, such as failure times and stress-strength reliability data, and has been utilized in reliability engineering, biomedical research, stress-strength modelling, and geological and environmental studies, handling skewed and heavy-tailed data in many settings.22,23,25

2.2 The mathematical formula of reliability

The cascade reliability model is a particular form of the strength-stress model that pertains to a certain (2+1) cascade arrangement, incorporating components A and B in operation, while component C serves as a standby element. If either active component fails, component C becomes active, with χ3 reflecting its strength and γ3 denoting the force exerted on it, where χ3 = mχ1 (or mχ2) and γ3 = kγ1 (or kγ2), with 0 < m < 1 and k > 1.12

Let χi and γj be independent and identically distributed Inverse Chen random variables with unknown shape parameters αi and βj and a known value λ = 3, where i = 1,2,3 and j = 1,2,3. Let χi indicate the strength with parameter αi, and γj the stress with parameter βj; then the probability density functions are f(χi), g(γj) and the cumulative distribution functions are F(χi), G(γj), respectively9:

(1)
$$f(\chi_i)=\alpha_i\lambda\,\chi_i^{-(\lambda+1)}\,e^{\chi_i^{-\lambda}}\,e^{\alpha_i\left(1-e^{\chi_i^{-\lambda}}\right)};\qquad \chi_i>0;\ \alpha_i,\lambda>0$$
(2)
$$F(\chi_i)=e^{\alpha_i\left(1-e^{\chi_i^{-\lambda}}\right)};\qquad \chi_i>0;\ \alpha_i,\lambda>0$$
(3)
$$g(\gamma_j)=\beta_j\lambda\,\gamma_j^{-(\lambda+1)}\,e^{\gamma_j^{-\lambda}}\,e^{\beta_j\left(1-e^{\gamma_j^{-\lambda}}\right)};\qquad \gamma_j>0;\ \beta_j,\lambda>0$$
(4)
$$G(\gamma_j)=e^{\beta_j\left(1-e^{\gamma_j^{-\lambda}}\right)};\qquad \gamma_j>0;\ \beta_j,\lambda>0$$
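As a reading aid, the density and distribution functions in Eqs. (1)–(2) can be sketched in Python (the paper's own computations use MATLAB; the names `ic_pdf` and `ic_cdf` are illustrative, and λ is fixed at its known value 3 as in the paper):

```python
import math

LAM = 3.0  # known shape parameter lambda, fixed at 3 throughout the paper

def ic_pdf(x, alpha, lam=LAM):
    """PDF of the Inverse Chen distribution, Eq. (1)."""
    return (alpha * lam * x ** (-(lam + 1)) * math.exp(x ** (-lam))
            * math.exp(alpha * (1.0 - math.exp(x ** (-lam)))))

def ic_cdf(x, alpha, lam=LAM):
    """CDF of the Inverse Chen distribution, Eq. (2)."""
    return math.exp(alpha * (1.0 - math.exp(x ** (-lam))))
```

A quick sanity check is that `ic_cdf` rises from 0 to 1 on the positive half-line and that its numerical derivative matches `ic_pdf`.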

The (2+1) cascade model’s reliability function is provided by13,14:

(5)
$$R=P[\chi_1\ge\gamma_1,\chi_2\ge\gamma_2]+P[\chi_1<\gamma_1,\chi_2\ge\gamma_2,\chi_3\ge\gamma_3]+P[\chi_1\ge\gamma_1,\chi_2<\gamma_2,\chi_3\ge\gamma_3]=R_1+R_2+R_3$$
$$R_1=P[\chi_1\ge\gamma_1,\chi_2\ge\gamma_2]=\int_0^\infty \bar F_{\chi_1}(\gamma_1)\,g(\gamma_1)\,d\gamma_1\int_0^\infty \bar F_{\chi_2}(\gamma_2)\,g(\gamma_2)\,d\gamma_2$$
$$=\int_0^\infty\left(1-e^{\alpha_1\left(1-e^{\gamma_1^{-\lambda}}\right)}\right)\lambda\beta_1\gamma_1^{-(\lambda+1)}e^{\gamma_1^{-\lambda}}e^{\beta_1\left(1-e^{\gamma_1^{-\lambda}}\right)}d\gamma_1\int_0^\infty\left(1-e^{\alpha_2\left(1-e^{\gamma_2^{-\lambda}}\right)}\right)\lambda\beta_2\gamma_2^{-(\lambda+1)}e^{\gamma_2^{-\lambda}}e^{\beta_2\left(1-e^{\gamma_2^{-\lambda}}\right)}d\gamma_2$$

Evaluating the integrals gives:

(6)
$$R_1=\left(\frac{\alpha_1}{\alpha_1+\beta_1}\right)\left(\frac{\alpha_2}{\alpha_2+\beta_2}\right)$$
(7)
$$R_2=P[\chi_1<\gamma_1,\chi_2\ge\gamma_2,\chi_3\ge\gamma_3]=P[\chi_1<\gamma_1,\ m\chi_1\ge k\gamma_1]\,P[\chi_2\ge\gamma_2],$$
when χ3 = mχ1 (or mχ2) and γ3 = kγ1 (or kγ2), where 0 < m < 1 and k > 1, and
(8)
$$P[\chi_2\ge\gamma_2]=\frac{\alpha_2}{\alpha_2+\beta_2}$$
$$P[\chi_1<\gamma_1,\ m\chi_1\ge k\gamma_1]=\int_0^\infty F_{\chi_1}(\gamma_1)\,\bar F_{\chi_1}\!\left(\tfrac{k}{m}\gamma_1\right)g(\gamma_1)\,d\gamma_1=\int_0^\infty e^{\alpha_1\left(1-e^{\gamma_1^{-\lambda}}\right)}\left(1-e^{\alpha_1\left(1-e^{\left(\frac{k}{m}\gamma_1\right)^{-\lambda}}\right)}\right)\lambda\beta_1\gamma_1^{-(\lambda+1)}e^{\gamma_1^{-\lambda}}e^{\beta_1\left(1-e^{\gamma_1^{-\lambda}}\right)}d\gamma_1$$

Upon solving the integral, one obtains:

(9)
$$P\left[\chi_1<\gamma_1,\ \chi_1\ge\tfrac{k}{m}\gamma_1\right]=\frac{\beta_1}{\alpha_1+\beta_1}-\frac{\beta_1}{\alpha_1+\beta_1+\alpha_1\left(\frac{k}{m}\right)^{-\lambda}}$$

Substituting Eqs. (8) and (9) into Eq. (7), R2 is obtained as follows:

(10)
$$R_2=\left(\frac{\alpha_2}{\alpha_2+\beta_2}\right)\frac{\beta_1\alpha_1\left(\frac{k}{m}\right)^{-\lambda}}{(\alpha_1+\beta_1)\left(\alpha_1+\beta_1+\alpha_1\left(\frac{k}{m}\right)^{-\lambda}\right)}$$
(11)
$$R_3=P[\chi_1\ge\gamma_1,\chi_2<\gamma_2,\chi_3\ge\gamma_3]=P[\chi_1\ge\gamma_1]\,P\left[\chi_2<\gamma_2,\ \chi_2\ge\tfrac{k}{m}\gamma_2\right]=\left(\frac{\alpha_1}{\alpha_1+\beta_1}\right)\int_0^\infty e^{\alpha_2\left(1-e^{\gamma_2^{-\lambda}}\right)}\left(1-e^{\alpha_2\left(1-e^{\left(\frac{k}{m}\gamma_2\right)^{-\lambda}}\right)}\right)\lambda\beta_2\gamma_2^{-(\lambda+1)}e^{\gamma_2^{-\lambda}}e^{\beta_2\left(1-e^{\gamma_2^{-\lambda}}\right)}d\gamma_2=\left(\frac{\alpha_1}{\alpha_1+\beta_1}\right)\frac{\beta_2\alpha_2\left(\frac{k}{m}\right)^{-\lambda}}{(\alpha_2+\beta_2)\left(\alpha_2+\beta_2+\alpha_2\left(\frac{k}{m}\right)^{-\lambda}\right)}$$

Substituting Eqs. (6), (10), and (11) into Eq. (5) yields R:

(12)
$$R=\left(\frac{\alpha_1}{\alpha_1+\beta_1}\right)\left(\frac{\alpha_2}{\alpha_2+\beta_2}\right)+\left(\frac{\alpha_2}{\alpha_2+\beta_2}\right)\frac{\beta_1\alpha_1\left(\frac{k}{m}\right)^{-\lambda}}{(\alpha_1+\beta_1)\left(\alpha_1+\beta_1+\alpha_1\left(\frac{k}{m}\right)^{-\lambda}\right)}+\left(\frac{\alpha_1}{\alpha_1+\beta_1}\right)\frac{\beta_2\alpha_2\left(\frac{k}{m}\right)^{-\lambda}}{(\alpha_2+\beta_2)\left(\alpha_2+\beta_2+\alpha_2\left(\frac{k}{m}\right)^{-\lambda}\right)}$$
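A minimal Python sketch of Eq. (12) follows, assuming the factor written here as (k/m)^(−λ) (the exponents are reconstructed from the extraction-damaged source) and using the illustrative name `cascade_reliability`:

```python
def cascade_reliability(a1, a2, b1, b2, k, m, lam=3.0):
    """(2+1) cascade stress-strength reliability, Eq. (12).
    a1, a2: strength shape parameters; b1, b2: stress shape parameters;
    k > 1 and 0 < m < 1 relate standby component C to A and B."""
    c = (k / m) ** (-lam)  # assumed reading of the (k/m)^(-lambda) factor
    r1 = (a1 / (a1 + b1)) * (a2 / (a2 + b2))
    r2 = (a2 / (a2 + b2)) * (b1 * a1 * c) / ((a1 + b1) * (a1 + b1 + a1 * c))
    r3 = (a1 / (a1 + b1)) * (b2 * a2 * c) / ((a2 + b2) * (a2 + b2 + a2 * c))
    return r1 + r2 + r3
```

Note that as k/m grows, the standby contributions R2 and R3 vanish and R collapses to R1, as Eq. (12) suggests.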

2.3 Model reliability estimation ( R̂ )

2.3.1 Maximum Likelihood Estimator (MLE)

Let χi for i = 1,2,…,n represent a strength random sample of size n drawn from an IC(α, λ) distribution, where α is an unknown shape parameter and λ is a known parameter.

Taking the logarithm of the likelihood function L = f(χi, i = 1,2,…,n; α, λ), which serves as the joint probability function, differentiating with respect to α, and equating the result to zero yields the maximum likelihood estimator of α (α̂mle), which has the following general form9:

(13)
$$\hat\alpha_{mle}=\frac{n}{\sum_{i=1}^{n}e^{\chi_i^{-\lambda}}-n}$$

Let χ1i, i = 1,2,…,n1; χ2i, i = 1,2,…,n2; and χ3i, i = 1,2,…,n3 be strength random samples from IC(α1, λ), IC(α2, λ), and IC(α3, λ), with sample sizes n1, n2, and n3, respectively:

(14)
$$\hat\alpha_{\xi(mle)}=\frac{n_\xi}{\sum_{i=1}^{n_\xi}e^{\chi_{\xi i}^{-\lambda}}-n_\xi},\qquad\xi=1,2,3$$

For the stress random variables, let γ1j, j = 1,2,…,m1; γ2j, j = 1,2,…,m2; and γ3j, j = 1,2,…,m3 be stress random samples from IC(β1, λ), IC(β2, λ), and IC(β3, λ), with sample sizes m1, m2, and m3, respectively; the ML estimators for the unknown parameters β1, β2, and β3 are:

(15)
$$\hat\beta_{\xi(mle)}=\frac{m_\xi}{\sum_{j=1}^{m_\xi}e^{\gamma_{\xi j}^{-\lambda}}-m_\xi},\qquad\xi=1,2,3$$

By substituting α̂ξ(mle) and β̂ξ(mle) into Eq. (12), the estimated reliability R via the MLE can be summarized as follows:

(16)
$$\hat R_{(mle)}=\left(\frac{\hat\alpha_{1(mle)}}{\hat\alpha_{1(mle)}+\hat\beta_{1(mle)}}\right)\left(\frac{\hat\alpha_{2(mle)}}{\hat\alpha_{2(mle)}+\hat\beta_{2(mle)}}\right)+\left(\frac{\hat\alpha_{2(mle)}}{\hat\alpha_{2(mle)}+\hat\beta_{2(mle)}}\right)\frac{\hat\beta_{1(mle)}\hat\alpha_{1(mle)}\left(\frac{k}{m}\right)^{-\lambda}}{\left(\hat\alpha_{1(mle)}+\hat\beta_{1(mle)}\right)\left(\hat\alpha_{1(mle)}+\hat\beta_{1(mle)}+\hat\alpha_{1(mle)}\left(\frac{k}{m}\right)^{-\lambda}\right)}+\left(\frac{\hat\alpha_{1(mle)}}{\hat\alpha_{1(mle)}+\hat\beta_{1(mle)}}\right)\frac{\hat\beta_{2(mle)}\hat\alpha_{2(mle)}\left(\frac{k}{m}\right)^{-\lambda}}{\left(\hat\alpha_{2(mle)}+\hat\beta_{2(mle)}\right)\left(\hat\alpha_{2(mle)}+\hat\beta_{2(mle)}+\hat\alpha_{2(mle)}\left(\frac{k}{m}\right)^{-\lambda}\right)}$$
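The closed form of Eq. (13) (and, applied per sample, Eqs. (14)–(15)) can be sketched in Python; `ic_alpha_mle` is an illustrative name for this hypothetical helper:

```python
import math

def ic_alpha_mle(sample, lam=3.0):
    """Closed-form ML estimate of the IC shape parameter, Eq. (13):
    alpha_hat = n / (sum_i exp(x_i^(-lam)) - n)."""
    n = len(sample)
    return n / (sum(math.exp(x ** (-lam)) for x in sample) - n)
```

The same routine estimates each β̂ξ when fed the corresponding stress sample.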

2.3.2 Shrinkage Estimation Method

Let α̂sh be a classical shrinkage estimator of the parameter α, which shrinks the ML estimator α̂mle toward a prior guess α0 through a shrinkage weight factor k(α̂), where 0 ≤ k(α̂) ≤ 1, under the supposition that α0 is close to the genuine value of α. Accordingly, the shrinkage estimator introduced by Thompson takes the form9:

(17)
$$\hat\alpha_{sh}=k(\hat\alpha)\,\hat\alpha_{mle}+\left(1-k(\hat\alpha)\right)\alpha_0$$

2.3.2.1 The Shrinking Weight Factor Estimators (sh1)

The shrinkage weight factor is proposed as a function of the sample sizes nξ and mξ, respectively, assumed to have the following form:

$$k_\xi(\hat\alpha_\xi)=\frac{n_\xi}{n_\xi+100},\qquad k_\xi(\hat\beta_\xi)=\frac{m_\xi}{m_\xi+100},\qquad\xi=1,2,3,$$

which leads to the resulting shrinkage estimators:

(18)
$$\hat\alpha_{\xi(sh1)}=k_\xi(\hat\alpha_\xi)\,\hat\alpha_{\xi(mle)}+\left(1-k_\xi(\hat\alpha_\xi)\right)\alpha_{0\xi}$$
(19)
$$\hat\beta_{\xi(sh1)}=k_\xi(\hat\beta_\xi)\,\hat\beta_{\xi(mle)}+\left(1-k_\xi(\hat\beta_\xi)\right)\beta_{0\xi}$$

By substituting Eqs. (18) and (19) into Eq. (12), the reliability estimation can be derived approximately as follows:

(20)
$$\hat R_{(sh1)}=\left(\frac{\hat\alpha_{1(sh1)}}{\hat\alpha_{1(sh1)}+\hat\beta_{1(sh1)}}\right)\left(\frac{\hat\alpha_{2(sh1)}}{\hat\alpha_{2(sh1)}+\hat\beta_{2(sh1)}}\right)+\left(\frac{\hat\alpha_{2(sh1)}}{\hat\alpha_{2(sh1)}+\hat\beta_{2(sh1)}}\right)\frac{\hat\beta_{1(sh1)}\hat\alpha_{1(sh1)}\left(\frac{k}{m}\right)^{-\lambda}}{\left(\hat\alpha_{1(sh1)}+\hat\beta_{1(sh1)}\right)\left(\hat\alpha_{1(sh1)}+\hat\beta_{1(sh1)}+\hat\alpha_{1(sh1)}\left(\frac{k}{m}\right)^{-\lambda}\right)}+\left(\frac{\hat\alpha_{1(sh1)}}{\hat\alpha_{1(sh1)}+\hat\beta_{1(sh1)}}\right)\frac{\hat\beta_{2(sh1)}\hat\alpha_{2(sh1)}\left(\frac{k}{m}\right)^{-\lambda}}{\left(\hat\alpha_{2(sh1)}+\hat\beta_{2(sh1)}\right)\left(\hat\alpha_{2(sh1)}+\hat\beta_{2(sh1)}+\hat\alpha_{2(sh1)}\left(\frac{k}{m}\right)^{-\lambda}\right)}$$
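Eqs. (18)–(19) amount to a convex combination of the MLE and the prior guess; a one-line Python sketch (illustrative name `shrink_sh1`) is:

```python
def shrink_sh1(theta_mle, theta0, n):
    """Shrinkage estimator of Eqs. (18)-(19) with weight k = n/(n+100):
    small samples lean on the prior guess theta0, large samples on the MLE."""
    k = n / (n + 100)
    return k * theta_mle + (1 - k) * theta0
```

At n = 100 the weight is exactly 1/2, splitting the estimate evenly between the MLE and α0.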

2.3.2.2 The Trigonometric Shrinkage Weight Function (sh2)

The weight functions

$$k_\xi(\hat\alpha_\xi)=\left|\frac{\sin(n_\xi)}{n_\xi}\right|,\qquad k_\xi(\hat\beta_\xi)=\left|\frac{\sin(m_\xi)}{m_\xi}\right|,\qquad\xi=1,2,3,$$

imply the following shrinkage estimators:

(21)
$$\hat\alpha_{\xi(sh2)}=k_\xi(\hat\alpha_\xi)\,\hat\alpha_{\xi(mle)}+\left(1-k_\xi(\hat\alpha_\xi)\right)\alpha_{0\xi}$$
(22)
$$\hat\beta_{\xi(sh2)}=k_\xi(\hat\beta_\xi)\,\hat\beta_{\xi(mle)}+\left(1-k_\xi(\hat\beta_\xi)\right)\beta_{0\xi}$$

Substituting Eqs. (21) and (22) into Eq. (12), the reliability estimation with the Trigonometric Shrinkage Weight Function estimator can be articulated approximately as follows:

(23)
$$\hat R_{(sh2)}=\left(\frac{\hat\alpha_{1(sh2)}}{\hat\alpha_{1(sh2)}+\hat\beta_{1(sh2)}}\right)\left(\frac{\hat\alpha_{2(sh2)}}{\hat\alpha_{2(sh2)}+\hat\beta_{2(sh2)}}\right)+\left(\frac{\hat\alpha_{2(sh2)}}{\hat\alpha_{2(sh2)}+\hat\beta_{2(sh2)}}\right)\frac{\hat\beta_{1(sh2)}\hat\alpha_{1(sh2)}\left(\frac{k}{m}\right)^{-\lambda}}{\left(\hat\alpha_{1(sh2)}+\hat\beta_{1(sh2)}\right)\left(\hat\alpha_{1(sh2)}+\hat\beta_{1(sh2)}+\hat\alpha_{1(sh2)}\left(\frac{k}{m}\right)^{-\lambda}\right)}+\left(\frac{\hat\alpha_{1(sh2)}}{\hat\alpha_{1(sh2)}+\hat\beta_{1(sh2)}}\right)\frac{\hat\beta_{2(sh2)}\hat\alpha_{2(sh2)}\left(\frac{k}{m}\right)^{-\lambda}}{\left(\hat\alpha_{2(sh2)}+\hat\beta_{2(sh2)}\right)\left(\hat\alpha_{2(sh2)}+\hat\beta_{2(sh2)}+\hat\alpha_{2(sh2)}\left(\frac{k}{m}\right)^{-\lambda}\right)}$$
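The trigonometric weight of Eqs. (21)–(22) can be sketched the same way (illustrative name `shrink_sh2`; the sample size n is treated in radians, so the weight on the MLE decays roughly like 1/n):

```python
import math

def shrink_sh2(theta_mle, theta0, n):
    """Trigonometric shrinkage of Eqs. (21)-(22) with k = |sin(n)/n|:
    the MLE's weight shrinks toward zero as the sample size grows."""
    k = abs(math.sin(n) / n)
    return k * theta_mle + (1 - k) * theta0
```

Because |sin(n)/n| ≤ 1/n for n ≥ 1, this estimator stays very close to the prior guess for the sample sizes used in the simulation (25–100).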

2.3.3 Least Square Estimation Method (LS)

The method of least squares estimates the parameter by minimizing14,16:

$$LS=\sum_{i=1}^{n}\left[F(\chi_i)-E\left(F(\chi_i)\right)\right]^2,$$

such that E(F(χi)) is equivalent to the plotting position Pi = i/(n+1). Let χi, i = 1,2,…,n, follow IC(α, λ); then

(24)
$$LS=\sum_{i=1}^{n}\left[\alpha\left(1-e^{\chi_i^{-\lambda}}\right)-\ln P_i\right]^2$$

Differentiating Eq. (24) with respect to α and equating the result to zero gives α̂LS:

$$\hat\alpha_{LS}=\frac{\sum_{i=1}^{n}\ln P_i\left(1-e^{\chi_i^{-\lambda}}\right)}{\sum_{i=1}^{n}\left(1-e^{\chi_i^{-\lambda}}\right)^2}$$

The least squares estimators α̂ξ(LS) and β̂ξ(LS), where ξ = 1,2,3 and Pj = j/(m+1), are:

(25)
$$\hat\alpha_{\xi(LS)}=\frac{\sum_{i=1}^{n_\xi}\ln P_i\left(1-e^{\chi_{\xi i}^{-\lambda}}\right)}{\sum_{i=1}^{n_\xi}\left(1-e^{\chi_{\xi i}^{-\lambda}}\right)^2}$$
(26)
$$\hat\beta_{\xi(LS)}=\frac{\sum_{j=1}^{m_\xi}\ln P_j\left(1-e^{\gamma_{\xi j}^{-\lambda}}\right)}{\sum_{j=1}^{m_\xi}\left(1-e^{\gamma_{\xi j}^{-\lambda}}\right)^2}$$

Substituting Eqs. (25) and (26) into Eq. (12), the reliability estimation obtained approximately by the least square approach is as follows:

(27)
$$\hat R_{(LS)}=\left(\frac{\hat\alpha_{1(LS)}}{\hat\alpha_{1(LS)}+\hat\beta_{1(LS)}}\right)\left(\frac{\hat\alpha_{2(LS)}}{\hat\alpha_{2(LS)}+\hat\beta_{2(LS)}}\right)+\left(\frac{\hat\alpha_{2(LS)}}{\hat\alpha_{2(LS)}+\hat\beta_{2(LS)}}\right)\frac{\hat\beta_{1(LS)}\hat\alpha_{1(LS)}\left(\frac{k}{m}\right)^{-\lambda}}{\left(\hat\alpha_{1(LS)}+\hat\beta_{1(LS)}\right)\left(\hat\alpha_{1(LS)}+\hat\beta_{1(LS)}+\hat\alpha_{1(LS)}\left(\frac{k}{m}\right)^{-\lambda}\right)}+\left(\frac{\hat\alpha_{1(LS)}}{\hat\alpha_{1(LS)}+\hat\beta_{1(LS)}}\right)\frac{\hat\beta_{2(LS)}\hat\alpha_{2(LS)}\left(\frac{k}{m}\right)^{-\lambda}}{\left(\hat\alpha_{2(LS)}+\hat\beta_{2(LS)}\right)\left(\hat\alpha_{2(LS)}+\hat\beta_{2(LS)}+\hat\alpha_{2(LS)}\left(\frac{k}{m}\right)^{-\lambda}\right)}$$
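Eq. (25) is a regression through the origin of ln Pi on (1 − e^(χ(i)^(−λ))) over the order statistics; a Python sketch (illustrative name `ic_alpha_ls`) is:

```python
import math

def ic_alpha_ls(sample, lam=3.0):
    """Least-squares estimate of alpha, Eq. (25): a regression through the
    origin of ln(P_i) on (1 - exp(x_(i)^(-lam))) over the order statistics."""
    xs = sorted(sample)                  # order statistics x_(1) <= ... <= x_(n)
    n = len(xs)
    num = den = 0.0
    for i, x in enumerate(xs, start=1):
        p = i / (n + 1)                  # plotting position P_i
        t = 1.0 - math.exp(x ** (-lam))
        num += math.log(p) * t
        den += t * t
    return num / den
```

If the sample is placed exactly at the IC quantiles of the plotting positions, the estimator recovers the true α exactly, which makes a convenient check.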

2.3.4 Weighted Least Square Estimation Method (WLS)

The method of weighted least squares estimates the parameter for the Inverse Chen distribution by minimizing the following equation:

$$S=\sum_{i=1}^{n}w_i\left[F(\chi_i)-E\left(F(\chi_i)\right)\right]^2,$$

where $w_i=\frac{1}{\mathrm{Var}\left(F(\chi_i)\right)}=\frac{(n+1)^2(n+2)}{i\,(n-i+1)},\ i=1,2,\dots,n$.14,16

From Eq. (24), one obtains:

(28)
$$S=\sum_{i=1}^{n}w_i\left[\alpha\left(1-e^{\chi_i^{-\lambda}}\right)-\ln P_i\right]^2$$

Differentiating Eq. (28) and equating the result to zero, one obtains the weighted least squares estimators α̂ξ(WLS) and β̂ξ(WLS), where ξ = 1,2,3:

(29)
$$\hat\alpha_{\xi(WLS)}=\frac{\sum_{i=1}^{n_\xi}w_i\ln P_i\left(1-e^{\chi_{\xi i}^{-\lambda}}\right)}{\sum_{i=1}^{n_\xi}w_i\left(1-e^{\chi_{\xi i}^{-\lambda}}\right)^2}$$
and
(30)
$$\hat\beta_{\xi(WLS)}=\frac{\sum_{j=1}^{m_\xi}w_j\ln P_j\left(1-e^{\gamma_{\xi j}^{-\lambda}}\right)}{\sum_{j=1}^{m_\xi}w_j\left(1-e^{\gamma_{\xi j}^{-\lambda}}\right)^2}$$

Substituting Eqs. (29) and (30) into Eq. (12), the reliability estimation obtained approximately by the weighted least squares method is as follows:

(31)
$$\hat R_{(WLS)}=\left(\frac{\hat\alpha_{1(WLS)}}{\hat\alpha_{1(WLS)}+\hat\beta_{1(WLS)}}\right)\left(\frac{\hat\alpha_{2(WLS)}}{\hat\alpha_{2(WLS)}+\hat\beta_{2(WLS)}}\right)+\left(\frac{\hat\alpha_{2(WLS)}}{\hat\alpha_{2(WLS)}+\hat\beta_{2(WLS)}}\right)\frac{\hat\beta_{1(WLS)}\hat\alpha_{1(WLS)}\left(\frac{k}{m}\right)^{-\lambda}}{\left(\hat\alpha_{1(WLS)}+\hat\beta_{1(WLS)}\right)\left(\hat\alpha_{1(WLS)}+\hat\beta_{1(WLS)}+\hat\alpha_{1(WLS)}\left(\frac{k}{m}\right)^{-\lambda}\right)}+\left(\frac{\hat\alpha_{1(WLS)}}{\hat\alpha_{1(WLS)}+\hat\beta_{1(WLS)}}\right)\frac{\hat\beta_{2(WLS)}\hat\alpha_{2(WLS)}\left(\frac{k}{m}\right)^{-\lambda}}{\left(\hat\alpha_{2(WLS)}+\hat\beta_{2(WLS)}\right)\left(\hat\alpha_{2(WLS)}+\hat\beta_{2(WLS)}+\hat\alpha_{2(WLS)}\left(\frac{k}{m}\right)^{-\lambda}\right)}$$
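Eq. (29) extends the least squares sketch with the weights wi = (n+1)²(n+2)/(i(n−i+1)); the name `ic_alpha_wls` is illustrative:

```python
import math

def ic_alpha_wls(sample, lam=3.0):
    """Weighted least-squares estimate of alpha, Eq. (29), with
    w_i = (n+1)^2 (n+2) / (i (n - i + 1))."""
    xs = sorted(sample)
    n = len(xs)
    num = den = 0.0
    for i, x in enumerate(xs, start=1):
        w = (n + 1) ** 2 * (n + 2) / (i * (n - i + 1))  # weight w_i
        p = i / (n + 1)                                  # plotting position P_i
        t = 1.0 - math.exp(x ** (-lam))
        num += w * math.log(p) * t
        den += w * t * t
    return num / den
```

As with the unweighted version, feeding the exact IC quantiles of the plotting positions returns the true α.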

2.4 Estimators comparison

The behaviour of the estimated R can be studied via a simulation approach using the different methods. A statistical criterion, the mean squared error (MSE), has been employed to compare the results:

$$MSE=\frac{1}{L}\sum_{i=1}^{L}\left(\hat R_i-R\right)^2$$

The simulation is replicated L = 1000 times to obtain independent samples of different sizes.26
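The comparison criterion itself is a one-liner; a Python sketch (the paper uses MATLAB; `mse` is an illustrative name) is:

```python
def mse(estimates, true_r):
    """Mean squared error of L simulated reliability estimates, Sec. 2.4."""
    return sum((r - true_r) ** 2 for r in estimates) / len(estimates)
```

The method whose L replicate estimates give the smallest value of this criterion is declared best.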

2.4.1 Random Sample Generated for Inverse Chen Distribution

Assume a random variable U with a standard uniform distribution. IC data can be generated by inverting the CDF: if

$$F(\chi_i)=e^{\alpha\left(1-e^{\chi_i^{-\lambda}}\right)}=U_i,$$

then

$$\chi_i=\left[\ln\left(1-\frac{\ln U_i}{\alpha}\right)\right]^{-1/\lambda},\qquad i=1,2,\dots,n_\xi,$$

and likewise

$$\gamma_j=\left[\ln\left(1-\frac{\ln U_j}{\beta}\right)\right]^{-1/\lambda},\qquad j=1,2,\dots,m_\xi.$$
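The inverse-transform step above can be sketched in Python (the paper generates samples in MATLAB; `ic_sample` is an illustrative name):

```python
import math
import random

def ic_sample(alpha, lam=3.0, size=25, rng=random):
    """Draw Inverse Chen variates by inverting the CDF:
    x = [ln(1 - ln(U)/alpha)]^(-1/lam), U ~ Uniform(0, 1)."""
    return [(math.log(1.0 - math.log(rng.random()) / alpha)) ** (-1.0 / lam)
            for _ in range(size)]
```

Mapping the draws back through the CDF should give approximately uniform values, which is an easy sanity check on the generator.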

2.4.2 Simulation analysis

The simulation program is developed using MATLAB 2023b software to facilitate the comparison of reliability estimators, which may be outlined by the following steps:

Step 1: Generate random samples χ11, χ12, …, χ1n1; χ21, χ22, …, χ2n2; and γ11, γ12, …, γ1m1; γ21, γ22, …, γ2m2 of sizes (n1, n2, m1, m2), denoted a, b, c, and d, where a = (25,25,25,25), b = (50,50,50,50), c = (75,75,75,75), and d = (100,100,100,100), from the Inverse Chen distribution.

Step 2: Calculate R from Eq. (12) and its estimates by all the suggested methods in Eqs. (16), (20), (23), (27), and (31), correspondingly. The true parameter values (α1, α2, β1, β2) for the four experiments are given in Table 1.

Table 1. k, m and real parameter values of the IC distribution.

Exp.   k   m     α1   α2   β1   β2
1      3   0.1   2    2    3    3
2      3   0.1   3    2    2    3
3      2   0.4   2    3    3    2
4      2   0.4   3    3    3    3

Step 3: Calculate the MSE for all proposed estimators using L = 1000 replicates.

Step 4: Compare the simulation results; the best estimation method for the reliability is the one that estimates R with the smallest MSE value.

2.5 Application of real dataset

An analysis based on a real dataset is carried out in this section. According to Ref. 27, the application consists of 25 observations of the size distribution of diamonds taken from a significant mining zone in South-West Africa. The diamond sizes are: 7.5, 7, 2.5, 4.5, 2, 2, 3, 2, 1, 1.5, 5, 7, 3, 1, 2, 39, 358, 257.5, 137, 69.5, 40.5, 28, 20.5, 16.5, and 9. The aim is to demonstrate that the inverse Chen distribution is a more competitive model for these data than the log-logistic, uniform, exponential, Rayleigh, chi-square, half-normal, Gumbel, and Cauchy distributions. Table 6 presents descriptive statistics for this dataset of 25 diamond observations.

Table 2. Values of the R̂ and MSE for Experiment (1), when R = 0.96011.

Sample size        ML            Sh1           Sh2       LS            WLS
a    R̂            1.26039       0.92617       0.95935   1.16112       1.22696
     MSE           9.016528E-2   1.15214E-3    5.8E-7    4.040396E-2   7.121048E-2
b    R̂            1.24362       0.88761       0.95939   1.26681       1.41269
     MSE           8.037718E-2   5.25699E-3    5.0E-7    9.406369E-2   0.20482938
c    R̂            1.24648       0.83616       0.95939   1.28012       1.83854
     MSE           8.2005E-2     1.536436E-2   5.1E-7    0.10240512    0.77163248
d    R̂            1.25577       0.76014       0.95939   1.30166       1.45258
     MSE           8.741526E-2   3.998972E-2   5.1E-7    0.11665712    0.24252224

Table 3. Values of the R̂ and MSE for Experiment (2), when R = 1.44008.

Sample size        ML            Sh1           Sh2       LS            WLS
a    R̂            1.24486       1.46160       1.44056   1.27689       1.40496
     MSE           3.810948E-2   4.6323E-4     2.3E-7    2.662804E-2   1.23329E-3
b    R̂            1.25531       1.48725       1.44055   1.25138       1.19484
     MSE           3.414171E-2   2.22541E-3    2.2E-7    3.560821E-2   6.014402E-2
c    R̂            1.24082       1.52711       1.44057   1.28743       1.30456
     MSE           3.970369E-2   7.574951E-3   2.4E-7    2.3302723E-2  1.8364757E-2
d    R̂            1.24533       1.57281       1.44056   1.27888       1.45241
     MSE           3.792602E-2   1.761808E-2   2.3E-7    2.598545E-2   1.5191E-4

Table 4. Values of the R̂ and MSE for Experiment (3), when R = 1.05717.

Sample size        ML            Sh1           Sh2       LS            WLS
a    R̂            1.26727       1.03014       1.05657   1.28113       1.18178
     MSE           4.413994E-2   7.3069E-4     3.6E-7    5.015671E-2   1.552754E-2
b    R̂            1.25284       1.00129       1.05662   1.22050       1.88866
     MSE           3.828831E-2   3.12220E-3    2.9E-7    0.02667746    0.69137484
c    R̂            1.26396       0.95666       1.05661   1.37144       1.44999
     MSE           4.276126E-2   1.010137E-2   3.1E-7    9.876488E-2   0.15430941
d    R̂            1.27645       0.88943       1.05659   1.26386       1.59378
     MSE           4.808246E-2   2.813524E-2   3.3E-7    4.271981E-2   0.28795376

Table 5. Values of the R̂ and MSE for Experiment (4), when R = 1.26600.

Sample size        ML            Sh1           Sh2        LS              WLS
a    R̂            1.26647       1.26596       1.266003   1.25637         1.31417
     MSE           2.15339E-7    2.0027E-9     8E-13      9.2846902E-5    2.320036058E-3
b    R̂            1.26574       1.26606       1.26600    1.29128         1.93136
     MSE           7.1085E-8     3.136E-9      2E-13      6.38688948E-4   0.442693260462
c    R̂            1.26350       1.26693       1.26601    1.25548         1.67768
     MSE           6.282474E-6   8.56552E-7    2.2E-11    1.10788375E-4   0.169480978596
d    R̂            1.27142       1.26304       1.26599    1.32320         1.21980
     MSE           2.9293665E-5  8.775271E-6   9.5E-11    3.271346043E-3  2.134521184E-3

Table 6. Descriptive statistics.

Sample size   Min   Max   Range   Mean   Median   Mode   Std. Dev.   Variance   Q1   Q3   IQR
25            1     358   357     47.3   5        2      90.6        8208.6     2    28   26

3. Results analysis

Four experiments have been carried out by the simulation technique in the current study. The descriptions of these experiments are given in Table 1.

Experiment 1 is shown in Table 2, which displays the values of the R̂ and MSE when R=0.96011 .

Likewise, Experiment 2 is displayed in Table 3, which displays the values of the R̂ and MSE when R=1.44008 . As well, Experiment 3 is shown in Table 4, which displays the values of the R̂ and MSE when R=1.05717 . Finally, Experiment 4 appears in Table 5, which displays the values of the R̂ and MSE when R=1.26600 .

Table 6 shows the descriptive statistics for the diamond dataset of sample size 25. The mean (47.3) is much higher than the median (5), which means that the dataset is positively skewed (a few very large values pull the average up). The mode (2) and Q1 (2) show that small values occur frequently. The dataset is very spread out, as both the standard deviation (90.6) and variance (8208.6) are very high. The IQR (26) shows that most of the data lie fairly close together (between 2 and 28), but the extreme maximum (358) produces a large range.

To ascertain the suitability of the inverse Chen distribution as a model for the dataset, the Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC), and Kolmogorov-Smirnov (K-S) distance, along with P-values, are employed, in addition to comparing the cumulative distribution function (CDF) of the specified distribution with the empirical CDF. Table 7 indicates that the Inverse Chen distribution has lower values for both AIC and BIC. This indicates that the Inverse Chen distribution may be the best fit for the dataset.
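The AIC and BIC criteria are straightforward to reproduce; a Python sketch (illustrative name `aic_bic`, assuming two fitted parameters and n = 25 for the Inverse Chen row of Table 7) is:

```python
import math

def aic_bic(log_lik, n_params, n_obs):
    """AIC = 2k - 2 ln L and BIC = k ln(n) - 2 ln L for a fitted model."""
    return (2 * n_params - 2 * log_lik,
            n_params * math.log(n_obs) - 2 * log_lik)
```

Plugging in the Inverse Chen log-likelihood reported in Table 7 (−111.8641) with k = 2 and n = 25 reproduces that row's AIC and BIC up to rounding.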

Table 7. Parameter estimates with different criteria.

Distribution   Log likelihood   AIC          BIC          Parameters
Inverse Chen   -111.8641        227.7281     230.1659     alpha = 1.000, lambda = 1.000
Log-logistic   -120.3073        244.6145     247.0523     a = 1.000, b = 1.000
Uniform        -146.9434        297.8868     300.3245     a = 1.000, b = 358.000
Exponential    -117.8759        237.7517     238.9706     mu = 41.060
Rayleigh       -180.4535        362.9069     364.1258     sigma = 66.788
Chi-square     -825.2605        1.6525e+03   1.6537e+03   v = 41.060
Half-Normal    -132.0354        266.0708     267.2897     sigma = 86.815
Gumbel         -143.1424        290.2848     292.7226     mu = 41.060, beta = 86.815

The empirical and fitted cumulative distribution functions of the Inverse Chen and the other distributions considered (log-logistic, uniform, exponential, Rayleigh, chi-square, half-normal, Gumbel, and Cauchy), along with the histograms of the K-S statistic and P-value, are illustrated in Figures 1 and 2.


Figure 1. Empirical and fitted distributions.


Figure 2. Histogram for K-S and P-value of the proposed distributions.

4. Discussion

Based on the simulation technique, we draw the following results:

4.1 Experiment 1

4.1.1 Within estimation method

For the maximum likelihood estimation (MLE): the mean squared error (MSE) is lower in sample size b and higher in sample size a. Generally, the MSE decreases from sample size a to b and thereafter increases from b to d.

  • For sh1: lower mean squared error (MSE) is observed in sample size a, whereas higher MSE is noted in d. Generally, MSE increases from a to d.

  • For sh2: lower mean squared error (MSE) is observed in d, whereas higher MSE is noted in a. Generally, MSE decreases with an increasing sample size from a to d.

  • For LS: lower mean squared error (MSE) is observed in a, while higher MSE is noted in d. Generally, MSE increases with the sample size from a to d.

  • For WLS: The mean squared error (MSE) is lower in a and higher in c. Generally, MSE increases with sample size from a to c and subsequently decreases from c to d.

  • It is shown that sample size a achieves the lowest mean squared error for 60% of the estimation methods, while b and d each account for 20%.

4.1.2 Between estimation techniques

The estimate technique sh2 exhibits the lowest mean squared error (MSE), indicating it is superior to the others. sh1 ranks second across all sample sizes, while the ranks of LS, WLS, and MLE fluctuate between third and fifth, depending on the sample size.

4.2 Experiment 2

4.2.1 Within estimating technique

For the maximum likelihood estimation (MLE): the mean squared error (MSE) is lower for sample size b and higher for sample size c. Generally, the MSE decreases from sample size a to b, increases from b to c, and decreases again from c to d.

  • For sh1: For sample size a, the mean squared error (MSE) is lower, but for sample size d, the MSE is higher. Generally, the MSE increases as the sample size is augmented from a to d.

  • For sh2: lower mean squared error (MSE) is observed in b, whereas higher MSE is noted in c. Generally, MSE decreases with increasing sample size from a to b and then increases; also, MSE decreases with the transition from c to d.

  • For LS: The mean squared error (MSE) is lower in sample size c and higher in sample size b. Generally, MSE increases with larger sample sizes from a to b, and likewise rises from c to d.

  • For WLS: The mean squared error (MSE) is lower in sample size d and higher in sample size b. Generally, MSE increases with sample size from a to b, whereas it decreases from c to d.

  • It is indicated that sample sizes a, c, and d each achieve the minimum mean squared error for 20% of the estimation methods, whereas sample size b accounts for 40%.

4.2.2 Comparison of estimate methodologies

The estimate technique sh2 exhibits the lowest mean squared error (MSE), indicating it is superior to the others. sh1 ranks second across all sample sizes, while the ranks of LS, WLS, and MLE fluctuate between third and fifth, depending on the sample size.

4.3 Experiment 3

4.3.1 Within estimation method

For the maximum likelihood estimation (MLE): the mean squared error (MSE) is lower in sample size b and higher in sample size d. Generally, the MSE decreases from sample size a to b and thereafter increases from b to d.

  • For sh1: lower mean squared error (MSE) is observed with sample size a, but higher MSE is noted with sample size d. Generally, the mean squared error (MSE) increases with the augmentation in sample size from a to d.

  • For sh2: lower mean squared error (MSE) is observed in b, but higher MSE is noted in a. Generally, the mean squared error (MSE) diminishes as the sample size increases from a to b, while the MSE escalates as it increases from b to d.

  • For LS: lower mean squared error (MSE) is observed in b, but higher MSE is noted in c. Generally, the mean squared error (MSE) diminishes as the sample size increases from a to b, and similarly, the MSE decreases as it increases from c to d.

  • For WLS: The mean squared error (MSE) is lower in sample size a and higher in sample size b, and MSE increases from sample size c to d. Generally, MSE rises with an increase in sample size from a to b.

  • It is shown that sample size a achieves the minimum mean squared error for 40% of the estimation methods, b for 40%, and c for 20%.

4.3.2 Comparison of estimate methodologies

The estimate technique sh2 exhibits the lowest mean squared error (MSE), indicating it is superior to the others. Subsequently, sh1 ranks second throughout all sample sizes, while the ranks of LS, WLS, and MLE fluctuate between third and fifth, depending on the sample size.

4.4 Experiment 4

4.4.1 Within estimation method

For the maximum likelihood estimation (MLE): the mean squared error (MSE) is lower in sample size b and higher in sample size d. Generally, the MSE decreases from sample size a to b and thereafter increases from b to d.

  • For sh1: For sample size a, the mean squared error (MSE) is lower, whereas for sample size d, the MSE is higher. Generally, the MSE increases as the sample size progresses from a to d.

  • For sh2: lower MSE is observed in b, whereas higher MSE is noted in d. Generally, MSE decreases with an increasing sample size from a to b, and conversely, MSE increases from b to d.

  • For LS: lower mean squared error (MSE) is observed in a, while higher MSE is noted in d. Generally, the mean squared error (MSE) increases with an expanding sample size from a to b, as well as from c to d.

  • For WLS: The mean squared error (MSE) is lower in sample size d and higher in sample size b. Generally, MSE increases with larger sample sizes from a to b, and also rises from c to d.

  • It is indicated that sample sizes a and b each achieve the minimum mean squared error for 40% of the estimation methods, while d accounts for 20%.

4.4.2 Between estimation methodologies

The estimate technique sh2 exhibits the lowest mean squared error, indicating it is superior to the others. Following sh2, sh1 ranks second across all sample sizes, while maximum likelihood estimation (MLE) ranks third, least squares (LS) ranks fourth, and weighted least squares (WLS) ranks fifth. Consequently, we assert that the trigonometric shrinkage estimation approach outperformed the other presented estimation methods according to the minimal mean squared error criterion.

5. Conclusions

In this research, the cascaded model is used. In statistics, cascaded models are essential for decomposing complex problems, enhancing precision, and guaranteeing resilience; they are extensively used in data science, machine learning, and reliability research. The cascaded model is applied to the proposed distribution (the inverse Chen distribution) because of its importance as a statistical model for reliability research, lifetime data modeling, and stress-strength studies, particularly when working with asymmetric or complex data. Several methods to estimate reliability are utilized, and the Monte Carlo simulation method is then employed to compare the reliability estimated by these methods via the mean square error (MSE). Four cases are examined for different sample sizes, and the shrinkage method is found to be the best estimation method. Real data on diamonds are then fitted to the proposed distribution, and its fit is compared with that of the other distributions mentioned.

By the fit criteria, the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC), the Inverse Chen distribution has lower values for both criteria and is the most appropriate. This suggests that the Inverse Chen distribution is the best-suited model for this dataset.

AI disclosure statement

Without the use of AI technology, the authors independently produced all scientific components of this study, including statistical and mathematical derivations, statistical analyses, and findings. In compliance with open science and academic publishing standards, Microsoft Copilot (October 2025 edition) was used only to improve language clarity, formatting, and structural organization.

How to cite this article: Abduljabbar Mohammed M, Ahmed Abdulateef E and Najim Salman A. Estimating the Reliability Function (2+1) Cascade Model for Inverse Chen Distribution [version 1; peer review: awaiting peer review]. F1000Research 2026, 15:79 (https://doi.org/10.12688/f1000research.174489.1)