Keywords
SKF, review, population-based, single-agent, continuous problems, discrete problems, single objective optimization, multiobjective optimization
Optimization in terms of time and complexity is often needed in this technological era, as it helps to minimize cost and maximize profit. The complexity of traditional exact optimization methods leads to an enormous amount of computational work, which is impractical. Therefore, approximate methods, especially metaheuristics, are becoming popular. These general-purpose algorithms have lower complexity and offer good solutions.
Studies of the well-known Kalman filter1 have contributed to the development of many algorithms in estimation as well as in other fields, such as optimization. In optimization, two optimizers have been developed based on the Kalman filter: the Heuristic Kalman Algorithm (HKA)2 and the Simulated Kalman Filter (SKF).3 This paper emphasizes the fundamental advancements and applications of SKF, which was originally proposed by Ibrahim et al. in 2015. The intention of this paper is to gather and briefly review all papers related to SKF for the benefit of students and researchers who would like to venture into this field.
The SKF, which is an estimation-based metaheuristic algorithm,4 was first introduced as a solution to unimodal optimization problems.3 A year later, it was tested on various optimization problems and found to be a promising optimizer.5
In principle, SKF tries to solve an optimization problem by estimating the optimum solution. Taking its inspiration from the Kalman filter algorithm, each agent in the SKF goes through the same three-step process found in the Kalman filter, consisting of prediction, measurement, and estimation. The SKF uses the same prediction and estimation equations as the Kalman filter. However, since the measurement in Kalman filter estimation comes from sensors, the measurement is simulated in the optimization algorithm; hence the algorithm is named Simulated Kalman Filter (SKF).
Figure 1 shows the flowchart of the SKF algorithm. The optimization process starts with the initialization of solutions, followed by the fitness evaluation of each solution and the generation of new solutions for the next iteration. This process stops when the stopping criterion is met. To further understand how the SKF works, a tutorial published in 2019 can be referred to.6 In addition, further studies of the SKF algorithm revealed that the P, Q and R parameters can be replaced with random numbers without affecting the performance of the algorithm.7-9
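As an illustration, the following minimal sketch outlines this loop in Python. The scalar forms of the prediction, simulated measurement (with its sine term), and estimation updates are assumptions based on the description in this review, not the authors' reference implementation.

```python
import numpy as np

# Minimal SKF sketch (assumed scalar forms of the prediction/measurement/
# estimation updates; illustrative only).
def skf(fitness, dim, bounds, n_agents=100, max_iter=1000, P=1000.0, Q=0.5, R=0.5):
    lo, hi = bounds
    X = np.random.uniform(lo, hi, (n_agents, dim))       # initial solutions
    f = np.apply_along_axis(fitness, 1, X)                # fitness evaluation
    best, best_f = X[np.argmin(f)].copy(), f.min()        # best-so-far solution
    for _ in range(max_iter):
        # Prediction: state carried over, error covariance grows by Q
        P_pred = P + Q
        # Simulated measurement: the sine term gives a 50-50 chance of moving
        # towards or away from the best-so-far solution (exploration)
        Z = X + np.sin(2 * np.pi * np.random.rand(n_agents, dim)) * np.abs(X - best)
        # Estimation: the Kalman gain pulls the prediction towards the measurement
        K = P_pred / (P_pred + R)
        X = X + K * (Z - X)
        P = (1 - K) * P_pred
        # Fitness evaluation and best-so-far update
        f = np.apply_along_axis(fitness, 1, X)
        if f.min() < best_f:
            best, best_f = X[np.argmin(f)].copy(), f.min()
    return best, best_f

# Example: minimize the sphere function in 10 dimensions
best, best_f = skf(lambda x: np.sum(x**2), dim=10, bounds=(-100, 100))
```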
The exploration in the SKF algorithm depends solely on the measurement equation. The sine operator in the measurement equation gives a 50-50 chance for exploration to occur in the measurement phase. In the estimation phase, exploitation takes place: the predicted value is updated by the product of two small numbers (the Kalman gain and the distance between the predicted and measured values). This may cause the SKF algorithm to converge prematurely.
In 2018, Ibrahim et al. introduced the Opposition-Based Learning (OBL) concept in SKF to overcome the problem of premature convergence. In SKF with an oppositional-learning prediction operator (SKF-OPO), the opposition population is generated around the best-so-far solution identified in the prediction phase.10 If the opposite prediction has a better fitness value than the original prediction (which is the previous estimated solution), the opposite prediction is used as the predicted solution. This method shows that a proper prediction operator helps SKF escape from premature convergence. Further experiments were conducted to observe the impact of opposition population generation in SKF-OPO with the introduction of a jumping rate.11 The jumping rate is compared to a random number between 0 and 1; if the random number is smaller than the jumping rate, the opposition population is generated in the prediction phase. Results show that the higher the jumping rate, the better the performance of the algorithm. Another variation of OBL implementation in SKF was introduced in 2019 by Mohd Azmi et al., known as the Current Optimum Opposition-Based Learning SKF (COOB-SKF). In this algorithm, the formation of the opposite population uses the best-so-far solution as the center between the estimate population and the opposition estimate population.12
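The following sketch illustrates how an opposition-based prediction step with a jumping rate could be structured. Mirroring the population through the best-so-far solution and the acceptance rule are assumptions in the spirit of SKF-OPO and COOB-SKF, not the published code.

```python
import numpy as np

# Illustrative opposition-based prediction step with a jumping rate
# (assumed formulation; not the authors' implementation).
def opposition_prediction(X, fitness_vals, best, fitness, jumping_rate=0.9):
    if np.random.rand() < jumping_rate:
        # Opposite predictions mirrored through the best-so-far solution
        X_opp = 2.0 * best - X
        f_opp = np.apply_along_axis(fitness, 1, X_opp)
        # Keep whichever of the original or opposite prediction is fitter
        better = f_opp < fitness_vals
        X = np.where(better[:, None], X_opp, X)
        fitness_vals = np.where(better, f_opp, fitness_vals)
    return X, fitness_vals
```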
The original population-based SKF algorithm uses a synchronous update mechanism, in which all agents have to go through all optimization steps before the best-so-far solution is updated. However, in 2018, Ab. Aziz et al. found that when the update strategy of SKF is made individual-oriented (asynchronous), the results are better.
The researchers later explored the possibility of using both mechanisms in SKF. There are three variants of SKF adaptive switching algorithms: the fitness-based adaptive switching synchronous-asynchronous SKF (ASSA-SKF), the fitness-evaluated adaptive switching SKF with randomness (ASSKFR), and the diversity-based adaptive switching synchronous-asynchronous SKF (DASSA-SKF). In ASSA-SKF, the SKF algorithm starts with a synchronous update and later changes its update mechanism when the fitness is found to be static for a number of fitness evaluations.13 In ASSKFR, the switching happens when the switching counter is greater than a random number, which is chosen every time a switch occurs.14 One may choose to start with either a synchronous or an asynchronous update. Lastly, in DASSA-SKF, instead of using fitness as the switching indicator, the decision to switch depends on the diversity of the population.15 Whenever the diversity of the population does not change for a certain number of iterations, the iteration strategy switches from synchronous to asynchronous update and vice versa. All findings have shown that the algorithms benefit when more switching occurs, as this encourages more exploration.
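The difference between the two update mechanisms can be sketched as follows, where `update_agent` stands for one complete prediction-measurement-estimation pass of a single agent; the switching rule (fitness stagnation, a random counter, or population diversity) would simply decide which branch to call in the next iteration. This is a conceptual sketch, not the published ASSA-SKF/ASSKFR/DASSA-SKF code.

```python
import numpy as np

# Conceptual sketch of synchronous vs. asynchronous best-so-far updates.
def iterate(X, f, best, best_f, fitness, update_agent, synchronous=True):
    if synchronous:
        # Synchronous: all agents are updated against the same best-so-far,
        # which is refreshed only once per iteration
        X = np.array([update_agent(x, best) for x in X])
        f = np.array([fitness(x) for x in X])
        if f.min() < best_f:
            best, best_f = X[np.argmin(f)].copy(), f.min()
    else:
        # Asynchronous: best-so-far is refreshed immediately after each agent
        for i in range(len(X)):
            X[i] = update_agent(X[i], best)
            f[i] = fitness(X[i])
            if f[i] < best_f:
                best, best_f = X[i].copy(), f[i]
    return X, f, best, best_f
```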
Hybridization between algorithms can be used to improve an algorithm’s performance. SKF has been subjected to hybridization with three algorithms: the Particle Swarm Optimization (PSO),16 the Gravitational Search Algorithm (GSA),17 and the Sine Cosine Algorithm (SCA).18
Muhammad et al. proposed four ways in which GSA19 and PSO20 can be hybridized with SKF during its prediction step, since SKF lacks a prediction operator (a structural sketch follows the list):
• Model 1: GSA/PSO as prediction operator
• Model 2: GSA/PSO as prediction operator when a better solution is found
• Model 3: GSA/PSO as prediction operator with jumping rate
• Model 4: GSA/PSO as prediction operator with jumping rate and when a better solution is found
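The structural idea behind Models 3 and 4 can be sketched as follows, where `pso_step` is a hypothetical placeholder for one GSA or PSO update of the whole population; the jumping-rate test and the accept-only-if-better rule are assumptions drawn from the model descriptions above.

```python
import numpy as np

# Sketch of a GSA/PSO step serving as the SKF prediction operator
# (Models 3 and 4 above); illustrative only.
def hybrid_prediction(X, f, best, fitness, pso_step, jumping_rate=0.5,
                      accept_only_if_better=True):
    if np.random.rand() < jumping_rate:               # Model 3: probabilistic use
        X_pred = pso_step(X, best)                    # candidate predicted positions
        f_pred = np.apply_along_axis(fitness, 1, X_pred)
        if accept_only_if_better:                     # Model 4: keep only improvements
            better = f_pred < f
            X = np.where(better[:, None], X_pred, X)
            f = np.where(better, f_pred, f)
        else:
            X, f = X_pred, f_pred
    return X, f                                       # then proceed to measurement/estimation
```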
In another paper, although the fourth model performed better than the original SKF algorithm, it was found not to be the best hybrid method for SKF-GSA and SKF-PSO.21
A fairly recent SKF hybrid is the hybridization between SKF and SCA in 2018.18 The SCA algorithm is formulated using mathematical sine and cosine terms. In the hybrid version of SKF and SCA, namely the Kalman Filter-based Sine Cosine Algorithm (KFSCA), the prediction and estimation phases of SKF are implemented in SCA. Instead of using the simulated measurement equation of SKF during the measurement phase, the SCA equations are used to update each individual agent's position. Five unimodal benchmark functions were used to compare the performance of the hybrid KFSCA algorithm with the original SKF and SCA algorithms. The statistical results showed that KFSCA performed significantly better than the original SKF and SCA algorithms, and had a higher convergence rate compared to the original SKF algorithm.
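A sketch of how an SCA-style position update can stand in for SKF's simulated measurement is given below. The sine/cosine update follows the standard SCA formulation; its use here as the measurement step is an assumption about KFSCA's structure based on the description above.

```python
import numpy as np

# SCA-style position update used in place of SKF's simulated measurement
# (illustrative; constants follow the standard SCA formulation).
def sca_measurement(X, best, t, max_iter, a=2.0):
    n, d = X.shape
    r1 = a - t * (a / max_iter)                  # amplitude shrinks over time
    r2 = 2 * np.pi * np.random.rand(n, d)
    r3 = 2 * np.random.rand(n, d)
    r4 = np.random.rand(n, d)
    step = np.abs(r3 * best - X)
    Z = np.where(r4 < 0.5,
                 X + r1 * np.sin(r2) * step,     # sine branch
                 X + r1 * np.cos(r2) * step)     # cosine branch
    return Z                                     # fed into the SKF estimation phase
```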
A high convergence rate is a signature of all Kalman filter-based algorithms. While most researchers have tried many ways to introduce more exploration to prevent the SKF algorithm from converging prematurely due to its fast convergence speed, Mat Jusof et al. opined that a faster convergence speed is favorable, especially in solving unimodal problems.22 In their paper, Simulated Kalman Filter with Improved Accuracy, a new exponential-based SKF named SKFIA was introduced. The SKFIA algorithm uses modified equations in the estimation phase, where an exponential term is introduced in the calculation of the Kalman gain and the corresponding error covariance. Instead of using the suggested constant for the measurement noise, an exponential term is added in the calculation of the measurement noise, which is made dependent on the predicted error covariance. This enables a large step size at the beginning of the search and a smaller step size towards the end. A performance evaluation comparing SKFIA with the original SKF using the first four benchmark functions of CEC2014 (three unimodal and one multimodal function) shows that SKFIA is able to find better results than the original SKF.
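Purely as an illustration of this idea, the sketch below ties an assumed exponential measurement-noise term to the predicted error covariance so that the Kalman gain, and hence the step size, starts large and shrinks as the search settles; the actual SKFIA equations are not reproduced here.

```python
import numpy as np

# Illustration only: an assumed exponential measurement-noise term tied to
# the predicted error covariance (not the exact SKFIA equations).
def skfia_gain(P_pred, alpha=1.0):
    # Small noise while the covariance is large -> gain near 1 -> big steps;
    # larger relative noise as the covariance shrinks -> gain drops -> fine steps.
    R = np.exp(-alpha * P_pred)
    K = P_pred / (P_pred + R)
    return K
```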
Besides being subjected to various modifications to improve its performance, SKF has also been modified to extend its usage. Three main extensions of SKF are the SKF algorithms for discrete problems, the single-solution version of the SKF algorithm, and the SKF for multi-objective problems.
The first extension of the SKF algorithm enables it to solve discrete problems. Three approaches were proposed by Md. Yusof et al. for solving combinatorial optimization problems using SKF, giving birth to three new modified algorithms: the binary SKF (BSKF), the angle modulated SKF (AMSKF), and the distance evaluated SKF (DESKF).23
Right after the introduction of SKF, Md. Yusof et al. published the Binary SKF (BSKF) algorithm to enable SKF to operate in a binary search space.24 This was followed by the publication of DESKF and AMSKF a year later.25,26 Other variants of DESKF are the local optimum DESKF algorithm27 and the DESKF with state encoding.28 Most of these discrete SKF algorithms are meant to solve binary problems; the state-encoded DESKF (SEDESKF), however, was introduced in 2018 to solve combinatorial optimization problems that are not in a binary search space.
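As a rough illustration of how a discrete variant can reuse the continuous SKF machinery, the sketch below maps continuous estimates to bits through a sigmoid, in the style commonly used by binary variants of continuous metaheuristics; the exact mapping rules of BSKF, AMSKF, and DESKF are not reproduced here.

```python
import numpy as np

# Generic continuous-to-binary mapping of SKF estimates (illustrative only;
# not the precise BSKF/AMSKF/DESKF rules).
def to_binary(X_est):
    prob = 1.0 / (1.0 + np.exp(-X_est))                 # squash each dimension to (0, 1)
    return (np.random.rand(*X_est.shape) < prob).astype(int)

# The binary vectors are what the fitness function evaluates; the SKF
# prediction/measurement/estimation updates still run in continuous space.
bits = to_binary(np.random.uniform(-4, 4, (5, 8)))
```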
In an analysis of the effect of the number of agents on the performance of the SKF optimizer, the results showed that the SKF algorithm performed best with a surprisingly large population of 700 agents.29 For this reason, another variant of the SKF algorithm was introduced in 2018: a single-agent version of the SKF algorithm (ssSKF).30 In order to use the same equations as the SKF algorithm, a prediction operator needed to be introduced during the prediction phase. Thus, a simple local adaptive neighborhood method centered around the best-so-far solution is used to predict the location of the optimum solution. However, this new adaptive prediction operator introduces another parameter. The ssSKF algorithm with an adaptive coefficient value of 5 was shown to be significantly better than the original SKF with 100 agents. Another tutorial published in 2019 can be referred to in order to understand the difference between the population-based and single-solution algorithms.31 A parameter tuning analysis published last year revealed that ssSKF performed best when the adaptive coefficient was set to 10.32
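A possible form of such a local adaptive neighborhood prediction is sketched below; the shrinking radius, its exponential schedule, and the role of the adaptive coefficient are assumptions, not the exact ssSKF formulation.

```python
import numpy as np

# Assumed local adaptive neighborhood prediction for a single-agent SKF:
# the predicted solution is drawn around the best-so-far point within a
# radius that shrinks as the search progresses.
def adaptive_prediction(best, t, max_iter, bounds, coeff=5.0):
    lo, hi = bounds
    radius = (hi - lo) * np.exp(-coeff * t / max_iter) / 2.0   # shrinking neighborhood
    X_pred = best + np.random.uniform(-radius, radius, size=best.shape)
    return np.clip(X_pred, lo, hi)                             # stay inside the search space
```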
The last but quite significant extension of SKF is the introduction of the multi-objective SKF (MOSKF).33 Research on multi-objective algorithms is gaining attention worldwide because it deals with problems with two or more objectives that may conflict with one another. The SKF is transformed into MOSKF by applying the non-dominated sorting approach. Each agent is associated with a cost function value and a diversity spacing parameter. In addition, mutation and crossover operators are adopted in MOSKF. Only a limited number of agents undergo mutation and crossover to reduce the complexity of the algorithm while at the same time promoting randomness. The experimental results on three multi-objective benchmark functions showed that MOSKF performed better than the Non-dominated Sorting Genetic Algorithm II. However, MOSKF has not yet been applied to any real-world application or to more complex problems.
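The non-dominated sorting step that ranks agents in a multi-objective setting can be sketched as follows (NSGA-II-style); this is illustrative only with respect to the exact MOSKF implementation.

```python
import numpy as np

# NSGA-II-style Pareto dominance and first-front extraction (illustrative).
def dominates(a, b):
    # a dominates b if it is no worse in every objective and better in at least one
    return np.all(a <= b) and np.any(a < b)

def nondominated_front(costs):
    # Return the indices of agents that no other agent dominates
    front = []
    for i, ci in enumerate(costs):
        if not any(dominates(cj, ci) for j, cj in enumerate(costs) if j != i):
            front.append(i)
    return front

# Example: two objectives for four agents
costs = np.array([[1.0, 4.0], [2.0, 3.0], [3.0, 2.5], [2.5, 3.5]])
print(nondominated_front(costs))   # -> [0, 1, 2]; agent 3 is dominated by agent 1
```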
Applications of the SKF algorithm and its variants can be categorized into nine different areas of application:
A. Drill path optimization
B. Airport gate allocation problem (AGAP)
C. Assembly sequence planning (ASP)
D. System identification
E. Feature selection
F. Image template matching
G. Controller tuning
H. Wireless sensor network
I. Engineering design problem
Table 1 summarizes the applications of the SKF algorithm and its variants in the nine fields mentioned above.
In this paper, we provided a brief review of SKF, its improvements, extensions, and applications. It does not include any comparative study between SKF and other algorithms. Since the introduction of SKF at the end of 2015, more than 57 papers have been published. The majority of them focused on methods to improve SKF performance and on its applications in various fields, owing to its simple implementation and fast convergence behavior. However, additional work is needed to test the algorithm's performance on more complex applications and large-scale problems, especially for multi-objective optimization. Another interesting area to explore is improving the exploration of the single-agent version of SKF, since no such work has been carried out yet for the ssSKF algorithm.
NHAA & NAAA: wrote most of the manuscript.
ZI & MSM: revised the manuscript.
MRK: compiled all SKF related papers.
The authors would like to acknowledge all parties directly or indirectly involved in the preparation of this review paper.