Research Article
Revised

A novel hybrid genetic algorithm and Nelder-Mead approach and its application for parameter estimation

[version 3; peer review: 2 approved, 1 approved with reservations]
PUBLISHED 07 Apr 2025
This article is included in the Artificial Intelligence and Machine Learning gateway.

Abstract

Background

Traditional optimization methods often struggle to balance global exploration and local refinement, particularly in complex real-world problems. To address this challenge, we introduce a novel hybrid optimization strategy that integrates the Nelder-Mead (NM) technique and the Genetic Algorithm (GA), named the Genetic and Nelder-Mead Algorithm (GANMA). This hybrid approach aims to enhance performance across various benchmark functions and parameter estimation tasks.

Methods

GANMA combines the global search capabilities of GA with the local refinement strength of NM. It is first tested on 15 benchmark functions commonly used to evaluate optimization strategies. The effectiveness of GANMA is also demonstrated through its application to parameter estimation problems, showcasing its practical utility in real-world scenarios.

Results

GANMA outperforms traditional optimization methods in terms of robustness, convergence speed, and solution quality. The hybrid algorithm excels across different function landscapes, including those with high dimensionality and multimodality, which are often encountered in real-world optimization issues. Additionally, GANMA improves model accuracy and interpretability in parameter estimation tasks, enhancing both model fitting and prediction.

Conclusions

GANMA proves to be a flexible and powerful optimization method suitable for both benchmark optimization and real-world parameter estimation challenges. Its capability to efficiently explore parameter spaces and refine solutions makes it a promising tool for scientific, engineering, and economic applications. GANMA offers a valuable solution for improving model performance and effectively handling complex optimization problems.

Keywords

Genetic Algorithm, Maximum Likelihood Estimation, Nelder-Mead algorithm, Power Density, Weibull Distribution, Wind speed analysis

Revised Amendments from Version 2

We have included some references as per the suggestions of reviewers 1 and 2, in the reference section.

See the authors' detailed response to the review by El-ghalia Boudissa and HABBI FATIHA
See the authors' detailed response to the review by Olympia Roeva

1. Introduction

In the continuous pursuit of optimization, where achieving the best possible outcomes with the utmost efficiency and accuracy is crucial, the fusion of diverse methodologies frequently yields superior solutions. Optimization researchers are always looking for ways to improve efficiency and robustness, encouraging professionals and scholars to investigate novel ideas, many of which are inspired by nature or mathematical concepts. Hybridization of optimization algorithms has garnered significant attention in recent years, offering a potent means to enhance efficacy and efficiency. Among these methodologies, Genetic Algorithms (GA) and the Nelder-Mead Simplex Algorithm (NM) emerge as prominent contenders, each boasting distinct advantages and applications. The fusion of these methods has recently proven to be an enticing strategy for enhancing optimization capabilities across various domains.

Inspired by evolution and natural selection, genetic algorithms operate by iteratively evolving a population of potential solutions over a series of generations. The concepts of genetic recombination and survival of the fittest are mirrored by the selection, crossover, and mutation operators involved in this evolutionary process. GAs are a popular choice in various industries, including engineering, finance, and biology, because of their effectiveness in solving complicated, high-dimensional optimization problems with non-linear and multimodal objective functions.

The Nelder-Mead Simplex Algorithm, on the other hand, is grounded in mathematical optimization and provides a geometric method for repeatedly refining a simplex (a geometric shape with n + 1 vertices in n dimensions) in the direction of the optimal solution. In contrast to GAs, which rely on a population-based approach, the Nelder-Mead algorithm operates on a single point or simplex at each iteration, which makes it especially well suited to problems with few variables or smooth objective functions. Its ease of use, simplicity, and speedy convergence to local optima have earned it a deserved place in the optimization toolbox.

Genetic Algorithm (GA) based hybrids have emerged as powerful tools for optimization, combining GA’s global search capabilities with the local refinement strengths of other algorithms. These hybrids balance exploration and exploitation, allowing for efficient navigation of complex, high-dimensional search spaces. However, they are not without limitations. Challenges such as slow convergence rates, parameter sensitivity, and computational overhead remain prevalent. Furthermore, many existing studies lack comprehensive comparisons of hybrid methodologies and fail to explore their scalability and adaptability to diverse optimization problems. Gaps also persist in understanding the interplay between exploration and exploitation in these hybrids, leaving room for novel approaches that address these issues.

Overview of Recent GA Hybrids

Genetic Algorithm (GA) based hybrid approaches have become a cornerstone in modern optimization research, combining GA’s global exploration abilities with various techniques to enhance local refinement. The following highlights some key advancements:

  • 1. GA-Nelder–Mead (GA-NM):1

    • Methodology: Integrates GA for its broad search capabilities with the Nelder–Mead (NM) simplex algorithm, which excels in refining solutions locally.

    • Strengths: Offers improved convergence speeds and precision in parameter estimation, effectively balancing exploration and exploitation.

    • Weaknesses: Struggles with scalability in higher dimensions and demands careful tuning of parameters for optimal performance.

  • 2. GA-Harris Hawks Optimization (GA-HHO):2

    • Methodology: Combines GA’s exploration strength with the Harris Hawks Optimization (HHO) method for exploiting promising regions.

    • Strengths: Demonstrates exceptional performance in handling complex, multimodal optimization problems.

    • Weaknesses: Computational demands increase significantly, and parameter sensitivity can affect robustness.

  • 3. Real-Value Genetic Algorithm and Extended Nelder–Mead (RVGA- ENM):3

    • Methodology: Employs RVGA for global searches and the Extended Nelder–Mead (ENM) algorithm for refining solutions, specifically applied to energy demand forecasting.

    • Strengths: Achieves superior accuracy in predictions and effective refinement of solutions.

    • Weaknesses: Highly reliant on the quality of the initial population and available training data.

  • 4. GA-Tabu Search (GA-TS):4

    • Methodology: Utilizes GA for broad search capabilities and Tabu Search (TS) for local optimization, designed for maintenance scheduling in cogeneration plants.

    • Strengths: Efficiently handles scheduling challenges in complex systems.

    • Weaknesses: Suffers from significant computational overhead as the problem size grows.

  • 5. GA-Machine Learning (GA-ML):5

    • Methodology: Integrates GA with machine learning (ML) models to optimize graph-related problems.

    • Strengths: Provides adaptability and enhanced performance through insights derived from ML techniques.

    • Weaknesses: Complexity increases due to the integration of ML, leading to greater computational requirements.

  • 6. Harris Hawks-Nelder–Mead (HH-NM):6

    • Methodology: Combines HHO and NM for tackling optimization in design and manufacturing scenarios.

    • Strengths: Demonstrates strong convergence performance and resilience in solving intricate problems.

    • Weaknesses: Requires fine-tuned parameter adjustments to maintain consistency.

  • 7. GA-Artificial Neural Network (GA-ANN):7

    • Methodology: Couples GA with Artificial Neural Networks (ANNs) for optimizing process parameters, particularly in plastic injection molding.

    • Strengths: Effectively enhances manufacturing quality and process efficiency.

    • Weaknesses: Dependence on ANN training data can limit its applicability to diverse scenarios.

  • 8. GA-Simulated Annealing (GA-SA):8

    • Methodology: Merges GA’s exploratory capabilities with Simulated Annealing’s (SA) temperature-based refinement strategy.

    • Strengths: Efficiently escapes local optima and maintains diversity in the search process.

    • Weaknesses: Computational costs are high, with slower convergence for high-dimensional tasks.

  • 9. GA-Particle Swarm Optimization (GA-PSO):9

    • Methodology: Combines GA’s global exploration with Particle Swarm Optimization (PSO) for exploiting solutions.

    • Strengths: Performs exceptionally well in multimodal optimization landscapes.

    • Weaknesses: Risks stagnation in local optima if not equipped with adaptive mechanisms.

  • 10. GA-Nelder-Mead (GA-NM):10

    • Methodology: Utilizes the NM simplex method within GA to enhance solution precision in smooth, low-dimensional problems.

    • Strengths: Improves optimization precision through effective local refinement.

    • Weaknesses: Faces challenges in scalability and requires precise parameter settings.

These developments illustrate the versatility and potential of GA hybrids in addressing a range of optimization challenges while emphasizing the need for careful parameter tuning and scalability enhancements. GANMA builds on this foundation by offering a structured, robust framework that addresses existing limitations.

Despite advancements in hybrid optimization algorithms, several key challenges persist. Many studies lack comprehensive comparisons, failing to evaluate scalability, convergence, and adaptability across diverse tasks. Additionally, the balance between global exploration and local exploitation remains under-explored, limiting efficiency in finding optimal solutions. Scalability issues are prominent, as many hybrids falter in high-dimensional problems, highlighting the need for robust methods capable of maintaining performance in complex spaces. Parameter sensitivity is another hurdle, with insufficient adaptive tuning mechanisms leading to inconsistent results. Furthermore, validation is often restricted to benchmark functions, offering limited insight into real-world applicability where constraints and objectives are more complex. These gaps emphasize the need for innovative hybrids that address these issues while ensuring efficiency, scalability, and practical relevance.

Individually, both GA and NM have strengths and limits that make them appropriate for specific optimization scenarios. GA excels in global exploration, utilizing population variety to explore large solution spaces and avoid local optima. On the other hand, NM excels at local refinement, expertly traversing convex and smooth terrain to locate specific optima. The hybridization of GA with NMA addresses the limitation of GA in fine-tuning solutions near optima, at which NMA excels. This synergy improves the algorithm’s convergence speed and solution quality. Other researchers have primarily focused on individual optimization methods or hybridizations excluding GA and NMA, leaving a gap in fully exploiting the complementary strengths of these methods.

The GANMA method effectively addresses these gaps through its innovative design and balanced approach. By seamlessly integrating Genetic Algorithm (GA) and Nelder-Mead Algorithm (NM), GANMA achieves a robust balance between global exploration and local exploitation, enhancing its efficiency in diverse optimization tasks. Its structured framework allows for improved scalability, maintaining performance even in high-dimensional problem spaces. Additionally, GANMA incorporates adaptive mechanisms for parameter tuning, reducing sensitivity and ensuring consistent results across various scenarios. Unlike many existing hybrids, GANMA has been rigorously tested on both benchmark functions and real-world parameter estimation tasks, demonstrating its adaptability and robustness. These features position GANMA as a superior hybrid optimization method, addressing the limitations of existing approaches while offering practical solutions for complex, multidimensional challenges.

Many industries are interested in using the Nelder-Mead Simplex Algorithm (NM) together with Genetic Algorithms (GA), including bio-informatics,11 finance,12,13 and engineering.14,15 Combining these methods provides a potent means of resolving challenging optimization issues in engineering, where designs are complicated and requirements are demanding. Combining GA with NM helps improve portfolio management and risk assessment in the financial industry, where timely and accurate decisions are essential. Similarly, hybrid algorithms speed up tasks like genomic analysis and drug discovery16,17 in bio-informatics, where understanding biology relies on smart computational methods. This article explores how combining NM and GA enhances both, highlighting how they work together to solve real-world issues. This paper tests the GANMA algorithm on fifteen benchmark problems at three dimensionalities (n = 10, 20, and 30). According to the experimental results, the proposed GANMA algorithm is a promising one that can quickly find the best or a near-best solution for most of the functions examined.

The remaining portion of the research study is structured as follows: Section 2 provides the fundamentals of the Genetic Algorithm and the Nelder-Mead simplex search. Section 3 discusses the proposed hybridized method, benchmark functions, and an alternative hybridization approach. The parameter setup for all methods and computational configurations are detailed in Section 4. Section 5 presents the results and discussion for benchmark functions, while Section 6 focuses on the Weibull distribution. Parameter estimation methods are described in Section 7, and Section 8 provides an analysis of Monte Carlo simulations and results. Two real-world wind speed datasets are analyzed in Section 9 to demonstrate the effectiveness of the proposed technique. Finally, Section 10 concludes the study with key observations.

2. Overview of GA and NM

A brief overview of GA and NM is given below.

2.1 Real-Coded Genetic Algorithm (GA)

Metaheuristic optimization methods like GA are higher-level frameworks designed to guide heuristic or local search procedures. In contrast, heuristic searches are problem-specific strategies for exploring the solution space. GA leverages metaheuristic principles to perform heuristic searches iteratively, balancing exploration and exploitation, so GA can be regarded as an approach to heuristic search. Its inspiration comes from the ideas of the biological evolution of species. In contrast to traditional optimization methods, GA11,18 starts with a collection of starting solutions known as chromosomes.

Genetic algorithms (GAs) work by continually improving solutions based on their fitness, which measures how well they solve a problem. Unlike some traditional methods, GAs make no assumptions about the problem, such as whether it is smooth or has just one best solution. Instead, they explore different possibilities to find good solutions, even in complex situations where there might be many equally good answers. GAs have been used successfully in many difficult optimization problems. They often work better than traditional methods, especially when there are multiple equally good solutions. This flexibility and ability to handle complex situations make GA a valuable tool for solving optimization problems in various fields.

Following is a summary of the GA stages in this study:

  • I. Initialization:

    • First, create a vector of real values for each variable between predefined ranges. This vector will represent the initial population of individuals.

  • II. Evaluation:

    • Evaluate the fitness of each individual using an objective function.

  • III. Selection:

    • Select individuals from the population to create a mating pool based on their fitness values.

  • IV. Crossover (Recombination):

    • Pair selected individuals and perform crossover to create offspring by blending or combining their real values.

  • V. Mutation:

    • Introduce random changes to the real values of offspring to promote exploration of the search space.

  • VI. Combining Populations:

    • Combine the offspring generated from crossover and mutation with the initial population.

  • VII. Sorting:

    • The combined population is sorted based on fitness, with the fittest individuals having the lowest fitness values (the objective is minimized).

  • VIII. Elitism:

    • Keep only the top half of the sorted population, discarding the bottom half. This ensures that the best-performing individuals from the previous generation are preserved for the next generation.

  • IX. Termination:

    • Repeat Steps II through VIII for the designated number of generations or until a termination criterion is satisfied, such as reaching a maximum number of iterations or a target fitness level.

This approach with elitism helps maintain diversity in the population while ensuring that the best individuals are preserved across generations, ultimately leading to the discovery of better solutions in the optimization process. Real-coded genetic algorithms are suitable for optimization problems with continuous decision variables and offer advantages such as direct representation of real-valued solutions, robustness, and ability to handle high-dimensional search spaces.
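To make steps I through IX concrete, the following is a minimal sketch of a real-coded GA with tournament selection, arithmetic crossover, uniform mutation, and keep-top-half elitism; the objective function, bounds, and parameter values are illustrative and not the exact settings used in the experiments.

import numpy as np

def real_coded_ga(obj, bounds, pop_size=100, generations=300,
                  crossover_rate=0.8, mutation_rate=0.05, tournament_k=5, seed=0):
    """Minimal real-coded GA: tournament selection, arithmetic crossover,
    uniform mutation, and keep-top-half elitism (steps I-IX above)."""
    rng = np.random.default_rng(seed)
    low, high = bounds[:, 0], bounds[:, 1]
    dim = len(low)
    pop = rng.uniform(low, high, size=(pop_size, dim))          # I. initialization

    for _ in range(generations):
        fit = np.apply_along_axis(obj, 1, pop)                  # II. evaluation
        # III. tournament selection builds the mating pool
        idx = rng.integers(0, pop_size, size=(pop_size, tournament_k))
        parents = pop[idx[np.arange(pop_size), np.argmin(fit[idx], axis=1)]]
        # IV. arithmetic crossover on consecutive pairs
        offspring = parents.copy()
        for i in range(0, pop_size - 1, 2):
            if rng.random() < crossover_rate:
                w = rng.random()
                offspring[i]     = w * parents[i] + (1 - w) * parents[i + 1]
                offspring[i + 1] = w * parents[i + 1] + (1 - w) * parents[i]
        # V. uniform mutation within the variable bounds
        mask = rng.random(offspring.shape) < mutation_rate
        offspring[mask] = rng.uniform(np.broadcast_to(low, offspring.shape)[mask],
                                      np.broadcast_to(high, offspring.shape)[mask])
        # VI-VIII. combine, sort by fitness, keep the best half (elitism)
        combined = np.vstack([pop, offspring])
        comb_fit = np.apply_along_axis(obj, 1, combined)
        pop = combined[np.argsort(comb_fit)][:pop_size]

    fit = np.apply_along_axis(obj, 1, pop)
    return pop[np.argmin(fit)], fit.min()

# Example: 10-dimensional Sphere function
sphere = lambda x: np.sum(x ** 2)
best_x, best_f = real_coded_ga(sphere, np.array([[-100.0, 100.0]] * 10))
print(best_f)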

2.2 Nelder–Mead Simplex Search Method (NM)

The simplex search technique has been widely used for basic unconstrained minimization problems, such as nonlinear least squares, nonlinear simultaneous equations, and general function minimization.19 Originally proposed by Spendley, Hext, and Himsworth (1962),20 the method was later refined by Nelder and Mead (1965)21 to improve its efficiency and applicability.

The Nelder-Mead Algorithm (NMA) is selected for its simplicity and effectiveness in local solution refinement, making it a strong complement to the Genetic Algorithm’s (GA) global search capabilities. While a variety of optimization algorithms exist, NMA’s low computational overhead and reliability in small-dimensional spaces make it an efficient and practical choice for hybridization.

However, NMA’s reliance on simplex geometry and localized operations restricts its exploratory capacity, often causing it to converge prematurely to local optima in complex, multi-modal landscapes. Preliminary experiments (to be included) under these limitations highlight the necessity of GA’s global search to overcome such challenges.

The steps of the Nelder-Mead21,22 algorithm are summarized as follows:

  • I. Initialization:

    • A simplex is a collection of n + 1 vertices in an n-dimensional space. These vertices can be deliberately selected or created at random.

    • Evaluate the objective function at every simplex vertex.

  • II. Ordering:

    • Order the vertices based on their corresponding function values.

    • Let $x_1, x_2, \ldots, x_{n+1}$ denote the vertices such that $f(x_1) \le f(x_2) \le \cdots \le f(x_{n+1})$.

  • III. Centroid:

    • Calculate the centroid of all vertices except the worst (highest) one:

      (1)
      $x_{\text{centroid}} = \frac{1}{n}\sum_{i=1}^{n} x_i$

  • IV. Reflection:

    • Reflect the worst vertex (highest) through the centroid to obtain a trial point

      (2)
      $x_r = x_{\text{centroid}} + \alpha\,(x_{\text{centroid}} - x_{n+1})$

      where α is the reflection coefficient, typically set to 1.

    • Evaluate the objective function at $x_r$.

  • V. Expansion:

    • If the reflected point $x_r$ is better than the best vertex, consider expanding further:

      (3)
      $x_e = x_{\text{centroid}} + \gamma\,(x_r - x_{\text{centroid}})$

      where γ is an expansion coefficient, usually γ > 1.

    • Evaluate the objective function at $x_e$.

  • VI. Contraction:

    • If the reflected point $x_r$ is no better than the second-worst vertex, perform a contraction:

      • Outside contraction (when $x_r$ is still better than the worst vertex):

        (4)
        $x_c = x_{\text{centroid}} + \rho\,(x_r - x_{\text{centroid}})$

      • Inside contraction (when $x_r$ is worse than the worst vertex):

        (5)
        $x_c = x_{\text{centroid}} + \sigma\,(x_{\text{centroid}} - x_r)$

        where ρ and σ are contraction coefficients with 0 < ρ, σ < 1 (typically 0.5).

    • Evaluate the objective function at $x_c$.

  • VII. Update simplex:

    • Replace the worst vertex with the new trial point if it improves the function value.

  • VIII. Termination:

    • Repeat the steps described above until a termination criterion is satisfied, such as reaching a maximum number of iterations, a small change in step size, or a small change in the function value.

The algorithm converges when the simplex becomes sufficiently small or when the function values at the vertices are close to each other. The choice of parameters α, γ, and ρ can significantly affect the performance of the algorithm and may need to be tuned based on the problem characteristics.
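For readers who want to experiment with the simplex update rules above without re-implementing them, SciPy ships a Nelder-Mead implementation; note that the reflection, expansion, and contraction coefficients are handled internally by SciPy, so the sketch below only exposes the initial simplex and the termination tolerances, and the test function and tolerance values are illustrative.

import numpy as np
from scipy.optimize import minimize

# Rosenbrock-type test function (2-D here for readability)
def rosenbrock(x):
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

x0 = np.array([-1.2, 1.0])                      # starting vertex
# Optional explicit initial simplex: x0 plus a small step along each axis
initial_simplex = np.vstack([x0, x0 + np.eye(2) * 0.5])

result = minimize(
    rosenbrock,
    x0,
    method="Nelder-Mead",
    options={
        "initial_simplex": initial_simplex,     # n + 1 vertices in n dimensions
        "xatol": 1e-6,                          # stop when the simplex is small
        "fatol": 1e-6,                          # ...or function values are close
        "maxiter": 2000,
    },
)
print(result.x, result.fun)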

3. Methods

3.1 Motivation

The combination of Genetic Algorithms (GA) with the Nelder-Mead simplex algorithm (NM) is driven by their complementary characteristics in global exploration and local exploitation. GA is a population-based technique that effectively explores diverse regions of the search space, although it can struggle to fine-tune solutions near local optima. NM, in contrast, refines solutions efficiently at the local level but lacks the capacity for global exploration. Combining both methods is intended to take advantage of the strengths of each algorithm, resulting in a more balanced and efficient optimization process. This hybridization has the potential to improve convergence rates, solution quality, and robustness, making it a compelling choice for handling complicated optimization problems across several domains.

3.2 Genetic algorithm with Nelder-Mead Simplex Search (GANMA)

The stages of the proposed algorithm (GANMA) are summarized as follows:

  • I. Initialization:

    • Generate an initial population of solutions for the GA.

  • II. Evaluation:

    • Evaluate the objective function for each solution in the population.

  • III. Genetic Algorithm (GA) Cycle:

    • Selection: Select parents from the current population. Commonly used selection techniques include rank-based, roulette wheel, and tournament selection.

    • Crossover: Perform crossover to create offspring solutions. Since this is a real coded GA, a common method is the arithmetic crossover or simulated binary crossover.

    • Mutation: Apply mutation operators to the offspring solutions. Here is where the Nelder-Mead simplex algorithm comes into play. After mutation, the simplex is formed around the mutated solutions.

    • Elitism: Combine the initial population with the offspring obtained after mutation and evaluate the combined population. Sort the combined population according to fitness and keep the first half, rejecting the other half.

    • Replacement: Replace the initial population with the best half from the previous step.

  • IV. Nelder-Mead Simplex Algorithm:

    • Define the simplex for the NM algorithm. This can be done by selecting a set of initial points around the best solution found by the GA so far. (The simplex in NMA is defined around the best GA solution to ensure the refinement starts near a promising region. This choice leverages GA’s exploration strength, as demonstrated in our results section.)

    • Reflection: Take the centroid of the remaining points and reflect the worst point of the simplex.

    • Expansion: If the reflected point is the best point so far, attempt to extend the simplex further in that direction.

      • Contraction: If neither reflection nor expansion produces a better point, contract the simplex towards the best point.

      • Update the simplex based on the chosen operation (reflection, expansion, contraction, or shrinkage).

      • Repeat the above steps until convergence criteria are met.

  • V. Termination:

    • Repeat the GA cycle and NM algorithm until a termination criterion is satisfied. This could be a maximum number of iterations, reaching a specific fitness threshold, or convergence of the simplex.

  • VI. Output:

    • The best solution found after the termination criterion is met.

    • Apply the Nelder-Mead simplex algorithm to the best solution.

  • VII. Optimal:

    • The best solution found after the NM algorithm.

NMA is applied to the best solution after reproduction and mutation in each iteration, not just to the final solution. This strategy allows continuous refinement throughout the optimization process. By combining GA with NM in this way, the algorithm leverages GA's global exploration capability together with NM's local refinement ability, potentially leading to improved convergence and robustness in optimization tasks.

GANMA stands out as a versatile hybrid algorithm capable of addressing a wide range of optimization problems, transcending the domain-specific focus of many existing hybrids. Its well-balanced framework effectively combines the global search power of Genetic Algorithms (GA) with the local refinement precision of the Nelder-Mead Algorithm (NMA), ensuring scalability, robustness, and efficiency. This synergy enables GANMA to overcome common challenges, such as parameter sensitivity and poor performance in high-dimensional or multimodal landscapes. Furthermore, GANMA’s structured approach is rigorously validated, making it a reliable solution for both theoretical benchmark functions and complex real-world applications.

The pseudo-code for the hybridization of the GA and Nelder-Mead simplex algorithm is presented in Algorithm 1.

Algorithm 1. Combination of GA and Nelder-Mead.

1: Initialize GA parameters (size of population, rate of mutation, rate of crossover, number of generations)

2: Initial population

3: while termination condition is not met do

4:   Evaluate each individual’s current level of fitness

5:   Select parents (using tournament selection) for crossover

6:   for each pair of parents do

7:       Apply one-point crossover

8:       Apply uniform mutation

9:    end for

10:   Combine initial population with offspring

11:   Evaluate the fitness of the combined population

12:   Sort the combined population by fitness

13:   Keep the top half of the sorted population

14:   Create a simplex from the best individuals (e.g., top 2)

15:   Perform Nelder-Mead steps on the simplex:

16:     - Reflection

17:     - Expansion

18:     - Contraction

19:     - Shrink

20:   Update the simplex

21:   Replace the worst individuals with the simplex’s best individuals

22:   Evaluate the fitness of the updated population

23: end while

24: From the final population, choose the best solution

25: Perform Nelder-Mead steps on the best solution

26: Find the optimal solution
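A compact sketch of the loop in Algorithm 1 is given below, assuming the real-coded GA operators of Section 2.1 and SciPy's Nelder-Mead as the local refiner applied to the current best individual each generation and once more at the end; the population size, rates, tolerances, and test function are illustrative rather than the exact experimental settings.

import numpy as np
from scipy.optimize import minimize

def ganma(obj, bounds, pop_size=100, generations=300, seed=0):
    """Sketch of Algorithm 1: a GA generation followed by Nelder-Mead
    refinement of the current best individual, and a final NM polish."""
    rng = np.random.default_rng(seed)
    low, high = bounds[:, 0], bounds[:, 1]
    dim = len(low)
    pop = rng.uniform(low, high, size=(pop_size, dim))

    for _ in range(generations):
        fit = np.apply_along_axis(obj, 1, pop)
        # tournament selection (size 5) and one-point crossover
        idx = rng.integers(0, pop_size, size=(pop_size, 5))
        parents = pop[idx[np.arange(pop_size), np.argmin(fit[idx], axis=1)]]
        offspring = parents.copy()
        for i in range(0, pop_size - 1, 2):
            cut = rng.integers(1, dim) if dim > 1 else 0
            offspring[i, cut:], offspring[i + 1, cut:] = (
                parents[i + 1, cut:].copy(), parents[i, cut:].copy())
        # uniform mutation
        mask = rng.random(offspring.shape) < 0.05
        offspring[mask] = rng.uniform(np.broadcast_to(low, offspring.shape)[mask],
                                      np.broadcast_to(high, offspring.shape)[mask])
        # elitism: keep the best half of the combined population
        combined = np.vstack([pop, offspring])
        comb_fit = np.apply_along_axis(obj, 1, combined)
        pop = combined[np.argsort(comb_fit)][:pop_size]
        # Nelder-Mead refinement of the current best individual
        res = minimize(obj, pop[0], method="Nelder-Mead",
                       options={"maxiter": 50 * dim, "xatol": 1e-8, "fatol": 1e-8})
        pop[0] = np.clip(res.x, low, high)

    # final NM polish of the best solution found by the hybrid loop
    best = pop[np.argmin(np.apply_along_axis(obj, 1, pop))]
    res = minimize(obj, best, method="Nelder-Mead", options={"xatol": 1e-8})
    return res.x, res.fun

# Example on the 10-D Rastrigin function
rastrigin = lambda x: 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))
x_best, f_best = ganma(rastrigin, np.array([[-5.0, 5.0]] * 10), generations=100)
print(f_best)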

The detailed flow diagram of the proposed algorithm is shown in Figure 1.

5a573b27-bb41-4d55-9194-12266f20ea50_figure1.gif

Figure 1. Flow-chart of GANMA.

3.3 Hybrid Genetic Algorithm with Nelder-Mead (GA-NMA)

Here is an algorithm that combines a Genetic Algorithm (GA) with the Nelder-Mead Algorithm (NMA), where the GA first locates the interval containing the global minimum, and NMA refines the solution:

  • I. Initialize GA Population Generate an initial population of candidate solutions. Define the fitness function for evaluation.

  • II. Apply GA Operations Selection: Choose individuals based on their fitness. Crossover: Combine pairs of individuals to produce offspring. Mutation: Introduce random variations to maintain diversity.

  • III. Evaluate the Population Compute the fitness of each individual.

  • IV. Iterate GA Process Repeat the selection, crossover, mutation, and evaluation steps for a predefined number of generations or until convergence criteria are met.

  • V. Identify Promising Interval Extract the best individual(s) from the final GA population. Define the search interval around the best individual to locate the global minimum.

  • VI. Initialize NMA Use the best solution from GA as the starting point for NMA. Construct an initial simplex based on the chosen starting point.

  • VII. Apply NMA Iteratively refine the solution using simplex operations (reflection, expansion, contraction, and shrinkage). Stop when the termination criteria (e.g., small simplex size or convergence) are met.

  • VIII. Output Final Solution Return the refined solution as the global minimum estimate.

The pseudo-code for the hybridization of the GA and Nelder-Mead simplex algorithm is presented in Algorithm 2.

Algorithm 2. Hybrid GA-NMA Algorithm.

1: Initialize GA Population: Generate an initial population of candidate solutions.

Define the fitness function for evaluation.

2: while stopping criteria are not met do

3:   Selection: Choose individuals based on their fitness.

4:   Crossover: Combine pairs of individuals to produce offspring.

5:   Mutation: Introduce random variations to maintain diversity.

6:   Evaluate Population: Compute the fitness of each individual.

7: end while

8: Identify Promising Interval: Extract the best individual(s) from the final GA population and define the search interval around the best individual.

9: Initialize NMA: Use the best solution from GA as the starting point. Construct an initial simplex based on this starting point.

10: repeat

11:   Apply Simplex Operations: Perform reflection, expansion, contraction, and shrinkage steps.

12: until termination criteria are met (e.g., small simplex size or convergence)

13: Output Final Solution: Return the refined solution as the global minimum estimate.
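The two-phase structure of Algorithm 2 (a complete global evolutionary phase followed by a single Nelder-Mead refinement) can be prototyped with off-the-shelf SciPy components; in the sketch below, differential evolution stands in for the GA phase purely to keep the code short, so this illustrates the structure rather than the exact GA-NMA implementation.

import numpy as np
from scipy.optimize import differential_evolution, minimize

def ga_nma_like(obj, bounds_list, seed=0):
    """Structure of Algorithm 2: a global evolutionary phase followed by one
    Nelder-Mead refinement of its best solution. SciPy's differential evolution
    stands in for the GA phase here purely for brevity."""
    # Global phase: locate the promising region
    global_res = differential_evolution(obj, bounds_list, maxiter=200,
                                        popsize=20, seed=seed, polish=False)
    # Local phase: construct a simplex around the best point and refine it
    local_res = minimize(obj, global_res.x, method="Nelder-Mead",
                         options={"xatol": 1e-8, "fatol": 1e-8})
    return local_res.x, local_res.fun

sphere = lambda x: float(np.sum(x ** 2))
x_opt, f_opt = ga_nma_like(sphere, [(-100.0, 100.0)] * 10)
print(f_opt)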

3.4 Benchmark functions

This study uses 15 benchmark test functions in simulation tests to fully investigate the feasibility and effectiveness of GANMA. The 15 benchmark test functions (denoted f1 to f15) cover different types. The first category comprises the unimodal functions f1 through f4 (a unimodal function has a single peak or trough, making it straightforward to locate the global optimum). The second category comprises the multimodal functions f5 through f9 (a multimodal function has multiple peaks or troughs, making the global optimum harder to find because of local optima). The third category comprises the shifted unimodal and multimodal functions f10 to f15 (a shifted function has its peaks or troughs relocated to different positions in the search space, adding complexity by altering the relative positions of local and global optima). Table 1 displays the expressions, ranges, and global minimum values of the 15 test functions. The functions are evaluated in dimensions (n) of 10, 20, and 30.

Table 1. Benchmark test functions.

No | Function name | Formulation | Range | $f_{\min}$
f1 | Sphere | $\sum_{i=1}^{n} x_i^2$ | [-100, 100] | 0
f2 | Rosenbrock | $\sum_{i=1}^{n-1}\left(100\,(x_{i+1}-x_i^2)^2+(1-x_i)^2\right)$ | [-2, 2] | 0
f3 | Rotated high-conditioned elliptic | $\sum_{i=1}^{n}(10^6)^{\frac{i-1}{n-1}}\,x_i^2$ | [-100, 100] | 0
f4 | Ellipsoid | $\sum_{i=1}^{n}(i\,x_i)^2$ | [-100, 100] | 0
f5 | Ackley | $-20\exp\!\left(-0.2\sqrt{\tfrac{1}{n}\sum_{i=1}^{n}x_i^2}\right)-\exp\!\left(\tfrac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right)+20+e$ | [-30, 30] | 0
f6 | Griewank | $\sum_{i=1}^{n}x_i^2/4000-\prod_{i=1}^{n}\cos\!\left(\tfrac{x_i}{\sqrt{i}}\right)+1$ | [-600, 600] | 0
f7 | Rastrigin | $10n+\sum_{i=1}^{n}\left(x_i^2-10\cos(2\pi x_i)\right)$ | [-5, 5] | 0
f8 | Schwefel | $418.9829\,n-\sum_{i=1}^{n}x_i\sin\!\left(\sqrt{|x_i|}\right)$ | [-500, 500] | 0
f9 | Schwefel 1.2 | $\sum_{i=1}^{n}|x_i|$ | [-5, 5] | 0
f10 | Shifted Sphere | $\sum_{i=1}^{n}(x_i-o_i)^2$ | [-100, 100] | 0
f11 | Shifted Elliptic | $\sum_{i=1}^{n}(10^6)^{\frac{i-1}{n-1}}(x_i-o_i)^2$ | [-100, 100] | 0
f12 | Shifted Rosenbrock | $\sum_{i=1}^{n-1}\left(100\,(x_{i+1}-x_i^2)^2+(x_i-o_i)^2\right)$ | [-30, 30] | 0
f13 | Shifted Rastrigin | $\sum_{i=1}^{n}\left[(x_i-o_i)^2-10\cos\!\left(2\pi(x_i-o_i)\right)\right]$ | [-5, 5] | 0
f14 | Shifted Griewank | $\tfrac{1}{4000}\sum_{i=1}^{n}(x_i-o_i)^2-\prod_{i=1}^{n}\cos\!\left(\tfrac{x_i-o_i}{\sqrt{i}}\right)+1$ | [-600, 600] | 0
f15 | Shifted Ackley | $-20\exp\!\left(-0.2\sqrt{\tfrac{1}{n}\sum_{i=1}^{n}(x_i-o_i)^2}\right)-\exp\!\left(\tfrac{1}{n}\sum_{i=1}^{n}\cos\!\left(2\pi(x_i-o_i)\right)\right)+20+e$ | [-30, 30] | 0

Here $o = (o_1, \ldots, o_n)$ denotes the shift vector.
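For reference, the benchmark functions in Table 1 translate directly into NumPy; a few representatives and a generic shift wrapper are sketched below (the shift vector o is generated at random here purely for illustration and is not the shift used in the experiments).

import numpy as np

def sphere(x):
    return np.sum(x ** 2)

def rosenbrock(x):
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

def rastrigin(x):
    return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

def ackley(x):
    n = x.size
    return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20 + np.e)

def griewank(x):
    i = np.arange(1, x.size + 1)
    return np.sum(x ** 2) / 4000 - np.prod(np.cos(x / np.sqrt(i))) + 1

def shifted(f, o):
    """Wrap a base function f into its shifted variant f(x - o)."""
    return lambda x: f(x - o)

rng = np.random.default_rng(0)
o = rng.uniform(-50, 50, size=10)                # illustrative shift vector
shifted_sphere = shifted(sphere, o)              # f10 in Table 1
print(sphere(np.zeros(10)), shifted_sphere(o))   # both evaluate to 0.0 at their optima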

4. Parameter setup

4.1 Genetic Algorithm (GA) Parameters

For problem dimensions 10, 20, and 30, the Genetic Algorithm (GA) was run for 300, 400, and 600 generations, respectively, starting with a population of 100 individuals. One-point crossover was applied to 80 percent of the population (a crossover rate of 0.8), and random mutation was employed with a mutation rate of 0.05. Parents were selected by tournament selection with a tournament size of five, and the top 10 percent of each generation was preserved through an elitism technique.

4.2 Nelder-Mead Algorithm (NM) Parameters

The Nelder-Mead (NM) algorithm was initialized using the solutions provided by the GA. Standard transformation coefficients were applied, including a reflection coefficient (α) of 1, an expansion coefficient (γ) of 1.5, and both contraction (ρ) and shrinkage (σ) coefficients set to 0.5. The step size was maintained at 1.0. The simplex-shrinking process concluded when the convergence tolerance reached 10⁻⁶.

4.3 Hybrid Algorithm (GANMA) Settings

The hybrid process iterated through GA and NMA stages for 300, 400, and 600 generations for 10, 20, and 30 dimensions, respectively. The stopping criteria were based on either the maximum number of iterations or fitness convergence, defined by a fitness tolerance of ϵ = 10⁻⁵, ensuring early detection of optimal solutions. Table 1 displays the expressions, dimensions, ranges, and global minimum values of the fifteen benchmark test functions (denoted f1 - f15).

4.4 Computational Environment

The experiments were conducted in a consistent computational environment using Python 3.11. The hybrid GANMA algorithm was implemented from scratch, leveraging key Python libraries. NumPy handled arrays and matrix operations, Matplotlib was used for visualizing convergence and results, and SciPy supported NMA-based optimization. All tests were executed in a Jupyter Notebook environment to allow for easy experimentation and tuning. Each experiment was repeated 50 times to ensure statistical reliability.

Table 2 demonstrates how the performance of the GANMA, GA, and NM algorithms for dimensions (n) of 10, 20, and 30 has been evaluated by comparing the mean value (Mean), standard deviation (Std), and best value (Best) of the final solutions for each benchmark function over 30 trials. The algorithm with the smallest standard deviation and with best and average values closest to the theoretical ideal value achieves the best optimization performance. Any mean, standard deviation, or best value less than 10⁻⁶ is regarded as zero. The ideal experimental outcomes are truncated.
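The Best, Mean, and Std entries reported in Table 2 are plain summaries over repeated independent runs; the sketch below shows how such a summary can be produced for any optimizer, with a random-search placeholder standing in for GANMA and the run count and zeroing threshold mirroring the description above.

import numpy as np

def summarize_runs(optimizer, n_runs=30, threshold=1e-6, seed0=0):
    """Run an optimizer repeatedly and report Best, Mean, and Std of the final
    fitness values, zeroing values below the reporting threshold as in Table 2."""
    finals = np.array([optimizer(seed=seed0 + r) for r in range(n_runs)])
    finals = np.where(finals < threshold, 0.0, finals)
    return finals.min(), finals.mean(), finals.std()

# Placeholder optimizer: random search on the 10-D Sphere function
def random_search(seed, budget=5000, dim=10):
    rng = np.random.default_rng(seed)
    samples = rng.uniform(-100, 100, size=(budget, dim))
    return np.min(np.sum(samples ** 2, axis=1))

best, mean, std = summarize_runs(random_search)
print(best, mean, std)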

Table 2. For n = 10, 20, and 30, the best, mean, and standard deviation of the GANMA, GA, and NM solutions.

Fun Method n = 10 Best Mean Std n = 20 Best Mean Std n = 30 Best Mean Std
f 1GANMA3.92E-2781.01E-2350.00E+004.62E-521.10E-453.30E-451.09E-204.10E-171.10E-16
GA-NMA 1.24E-2011.95E-1840.00E+004.24E-492.97E-436.64E-433.85E-141.53E-124.03E-12
GA2.96E-018.90E-011.39E-021.24E+003.03E+001.45E+001.5E+022.8E+021.0E+02
NM3.96E-1836.63E-1660.00E+005.39E-372.03E-325.48E-322.51E-187.47E-131.11E-12
f 2GANMA5.22E-291.44E-287.60E-293.69E-276.94E-269.75E-266.49E-181.80E-141.85E-14
GA-NMA 6.18E294.06E-276.02E-272.22E-268.78E-252.11E-241.02E-248.93E-241.49E-23
GA5.67E+006.90E+008.78E-011.64E+011.74E+014.81E-012.62E+014.07E+011.63E+01
NM5.91E-292.38E-272.31E-271.89E-262.30E-253.20E-259.10E-161.43E-112.60E-11
f 3GANMA7.88E-2434.78E-2320.00E+006.93E-493.97E-451.12E-441.58E-167.55E-141.39E-13
GA-NMA 1.17E-1951.91E-1790.00E+001.00E-497.69E-461.57E-451.76E-272.32E-254.67E-25
GA1.36E+035.08E+041.03E+049.85E+035.22E+042.93E+047.98E+032.41E+041.00E+04
NM9.32E-1864.62E-1650.00E+001.16E+011.44E+041.14E+048.87E+049.68E+053.88E+05
f 4GANMA7.43E-2506.45E-2350.00E+002.09E-505.42E-441.57E-431.44E-163.78E-154.98E-15
GA-NMA 9.26E-1991.10E-1860.00E+004.03E-495.47E-458.47E-459.03E-277.78E-232.02E-22
GA8.60E-025.56E-013.81E-011.11E+007.66E+006.05E+001.38E+042.15E+046.00E+03
NM6.30E-1844.58E-1710.00E+004.59E-391.00E-322.53E-322.31E-153.89E-111.05E-10
f 5GANMA2.88E-143.18E-131.62E-134.01E-131.11E-111.45E-117.09E-104.39E-111.29E-11
GA-NMA 2.17E-141.22E-012.09E-013.77E-131.22E-018.16E-011.52E-112.70E+001.58E+00
GA7.17E-029.74E-016.37E-014.40E-018.93E-012.78E-012.23E+003.21E+005.99E-01
NM1.86E+011.93E+013.00E-011.90E+011.94E+012.17E-011.84E+011.89E+012.53E-01
f 6GANMA7.13E-021.46E-015.22E-021.66E-012.52E-012.17E-019.41E-011.66E+005.25E-01
GA-NMA 1.23E-021.40E-019.68E-025.21E-012.50E-011.74E-011.01E+002.133E+002.20E+00
GA1.86E-013.56E-011.11E-011.00E+001.02E+001.39E-022.47E+003.53E+009.43E-01
NM1.42E+002.57E+011.34E+017.39E-025.71E+007.57E+009.09E-132.02E+001.65E+00
f 7GANMA1.96E-052.00E-042.00E-034.00E-048.90E-037.70E-031.19E+002.84E+001.16E+00
GA-NMA 5.65E-021.75E-011.24E-016.95E-011.18E+003.71E-018.78E+001.36E+013.28E+00
GA6.29E-052.20E-023.40E-021.00E-031.46E-021.56E-022.47E+003.58E+001.80E+00
NM8.30E+011.06E+021.74E+011.19E+021.71E+025.18E+011.88E+022.89E+025.49E+01
f 8GANMA-1.97E+01-1.57E+017.89E+00-3.94E+01-1.18E+011.57E+01-9.86E+01-4.73E+012.95E+01
GA-NMA 1.27E-041.27E-043.03E-102.54E-042.54E-043.66E-083.81E-043.81E-042.56E-08
GA3.45E-027.88E-024.12E-026.72E+008.25E+001.10E+001.09E+021.54E+023.0E+01
NM6.31E+021.37E+043.81E+022.32E+032.84E+033.82E+024.06E+035.23E+031.04E+03
f 9GANMA5.11E-094.88E-064.95E-064.77E-061.56E-058.25E-068.83E-062.30E-051.05E-05
GA-NMA 1.86E-051.96E-031.80E-036.16E-044.73E-033.95E-031.14E-026.55E-026.38E-02
GA8.00E-058.50E-027.60E-022.56E-015.47E-011.80E-012.38E+003.89E+009.46E-01
NM6.98E-011.21E+004.94E-011.87E+006.43E+005.55E+002.25E+007.34E+004.84E+00
f 10GANMA2.46E-318.02E-315.62E-311.01E-294.13E-292.00E-299.12E-161.05E-163.02E-16
GA-NMA 1.23E-313.50E-297.57E-296.69E-299.97E-281.63E-271.75E-276.74E-251.64E-24
GA1.18E-053.14E-026.74E-029.86E+011.37E+023.49E+011.89E+033.65E+031.11E+03
NM1.78E-307.60E-306.09E-305.08E-291.96E-281.51E-281.84E-151.13E-093.00E-09
f 11GANMA2.90E-265.12E-255.85E-243.16E-247.45E-243.25E-241.46E-158.71E-141.69E-13
GA-NMA 4.39E-261.00E-242.12E-247.13E-244.67E-234.92E-233.03E-231.09E-211.60E-21
GA1.01E+003.49E+025.78E+021.00E+051.62E+058.76E+035.63E+051.11E+063.41E+06
NM3.68E-267.21E-251.13E-242.93E+031.98E+041.55E+041.29E+042.49E+051.51E+05
f 12GANMA8.13E-296.12E-284.43E-281.59E-261.45E-251.92E-251.14E-182.20E-154.29E-15
GA-NMA 9.81E-284.29E+001.87E+015.26E-265.97E-011.42E+002.37E-241.59E+001.95E+00
GA9.03E+004.87E+013.27E+011.46E+022.11E+027.77E+017.88E+032.52E+041.12E+04
NM1.15E-271.59E+001.95E+003.40E-257.97E-011.59E+002.94E-143.18E+001.59E+00
f 13GANMA1.44E-022.13E-021.02E-022.82E+003.74E+001.71E+009.01E+001.05E+011.43E+00
GA-NMA 3.61E-021.73E-017.70E-025.12E-019.97E-013.23E-018.86E+001.52E+012.74E+00
GA4.60E-021.49E-011.19E-018.14E+009.42E+001.12E+001.58E+011.98E+013.71E+00
NM6.40E+018.76E+012.60E+011.70E+022.07E+022.36E+012.08E+023.03E+025.42E+01
f 14GANMA1.42E-024.42E-022.85E-028.11E-021.21E-017.64E-021.21E-134.90E-024.10E-02
GA-NMA 2.33E-151.01E-017.00E-026.77E-159.34E-031.28E-021.57E-131.26E-021.29E-02
GA2.10E-014.81E-011.79E-011.83E+002.32E+002.97E-012.73E+003.75E+005.10E-01
NM5.36E+003.23E+014.09E+012.40E-011.34E+019.05E+007.30E-027.99E-019.70E-01
f 15GANMA5.01E-141.98E-123.10E-123.25E-121.43E+001.77E+012.22E+004.27E+003.10E+00
GA-NMA 3.24E-148.55E-021.31E-011.82E-124.49E+011.20E+011.48E-112.45E+021.66E+02
GA1.10E-014.17E-011.20E-012.91E+003.85E+006.97E-019.29E+001.04E+011.23E+00
NM1.85E+011.89E+012.95E-011.89E+011.93E+012.33E-011.92E+011.93E+011.31E-01

5. Experimental results and analysis

The statistical results of GANMA's performance on 15 benchmark functions with dimensions (n) of 10, 20, and 30 are shown in Table 2. It also contains the best (Best), mean (Mean), and standard deviation (Std) of the final solutions over 30 runs for each benchmark function. All the unimodal functions (f1 - f4) have been solved in all three dimensions (10, 20, and 30). For the multimodal functions (f5 - f9), f5 and f9 are solved in 10, 20, and 30 dimensions, whereas the solutions for f6 and f7 in 10 and 20 dimensions are almost optimal. In the 10, 20, and 30 dimensions, the standard deviations range over 1.62E − 13 ∼ 7.89E + 00, 1.45E − 11 ∼ 1.57E + 01, and 1.29E − 11 ∼ 2.95E + 01, respectively, while the mean values range over 3.18E − 13 ∼ 1.46E − 01, 1.11E − 11 ∼ 2.52E − 01, and 4.39E − 11 ∼ 2.84E + 00.

Six shifted test functions have been chosen for this study to validate the performance of GANMA: three shifted multimodal test functions, denoted f13 to f15, and three shifted unimodal test functions, denoted f10 to f12 (Sphere, Elliptic, and Rosenbrock). On functions f10, f11, f12 (in 10, 20, and 30 dimensions) and f15 (in 10 dimensions), GANMA achieved optimum solutions; on functions f13 (in 10 dimensions) and f14 (in 10, 20, and 30 dimensions), the solutions are nearly optimal. Although GANMA outperforms GA on f13 and f15 (in 20 and 30 dimensions), those solutions remain far from the optimal ones. Furthermore, the Std that GANMA attains on five test functions is not too high, suggesting that GANMA's performance on shifted test functions is steady.

Therefore, for all unimodal functions (in 10, 20, and 30 dimensions), GANMA obtains the global optimum. For multimodal functions, GANMA identifies outcomes with negligible deviations from the global optimal value. Apart from f5 and f9, which are solved in dimensions 10, 20, and 30, the results of f6 and f7 in dimensions 10 and 20 are quite near the optimal value. The outcomes produced by GANMA for shifted unimodal and multimodal functions are optimal or extremely near-optimal in all three dimensions, except for f13 and f15 (in 20 and 30). The benefits of the GANMA algorithm include excellent robustness, high convergence accuracy, and steady performance in all scenarios, whether they involve unimodal functions, multimodal functions, or shifted unimodal and multimodal functions. This is shown in Table 2 under the corresponding numbers of iterations, which are 300, 400, and 600 for dimensions 10, 20, and 30, respectively.

GANMA consistently outperforms GA-NMA, shown in Table 2, across various function categories, particularly in unimodal functions. For example, in f1 and f4, GANMA achieves near-zero fitness values across all dimensions, demonstrating its ability to efficiently refine solutions in smooth landscapes. Its lower standard deviations further indicate robust and stable convergence compared to GA-NMA, which struggles to maintain similar precision. In multimodal functions like f8, GANMA excels by navigating complex landscapes with multiple local optima, achieving superior results in higher dimensions (e.g., n = 30). Its hybrid structure effectively balances global exploration and local exploitation, reducing the risk of premature convergence. In contrast, GA-NMA often stagnates in local optima due to less dynamic exploration capabilities, leading to higher fitness values and greater variability.

For shifted unimodal functions such as f10, GANMA demonstrates its adaptability by achieving significantly lower best and mean fitness values, overcoming the challenges introduced by displaced optima. Similarly, in shifted multimodal functions such as f13 and f15, GANMA showcases its robustness by effectively handling complex, displaced landscapes. By using Nelder-Mead for local refinement, GANMA achieves accurate and dependable convergence, fine-tuning solutions even in challenging environments. GA-NMA, however, struggles with the combined challenges of shifting and multimodal complexity, resulting in higher fitness values and inconsistent performance. Overall, GANMA's adaptability and superior optimization capabilities make it a robust choice for diverse and challenging optimization problems.

To help further investigate the evolutionary behavior of various methods, the convergence curves of GANMA and GA for a few chosen benchmark functions are displayed in Figure 2, Figure 3, and Figure 4 for dimensions (n) = 10, 20, and 30, respectively. These graphs demonstrate the convergence behavior of methods that can help to analyze the evolutionary behavior of various algorithms. The y- and x-axes, respectively, represent the values of the fitness function and the number of iterations. The blue solid line shows the genetic algorithm (GA), while the suggested method GANMA is shown by the solid orange line.

5a573b27-bb41-4d55-9194-12266f20ea50_figure2a.gif5a573b27-bb41-4d55-9194-12266f20ea50_figure2b.gif

Figure 2. Convergence graphs of functions for n = 10.

5a573b27-bb41-4d55-9194-12266f20ea50_figure3a.gif5a573b27-bb41-4d55-9194-12266f20ea50_figure3b.gif

Figure 3. Convergence graphs of functions for n = 20.

5a573b27-bb41-4d55-9194-12266f20ea50_figure4a.gif5a573b27-bb41-4d55-9194-12266f20ea50_figure4b.gif

Figure 4. Convergence graphs of functions for n = 30.

For unimodal functions such as f1, f2, f3, and f4, GA shows a gradually decreasing trend until its best solution is reached, whereas GANMA's curve flattens almost immediately for all three dimensions (n = 10, 20, and 30). Similarly, for the multimodal functions, the GANMA optimal solution lies close to the global optimum for f5 and f7 (in 10 and 20 dimensions) and for f6 and f8 (in 10, 20, and 30 dimensions), the exceptions being f5 and f7 in 30 dimensions. As a result, of these two algorithms, GANMA finds the lowest optimum and exhibits the fastest rate of convergence. The curves for the shifted functions, except f15 (in 20 and 30 dimensions), demonstrate how well the proposed method obtained the ideal solution for functions such as f10, f11, and f12 (in 10, 20, and 30 dimensions).

The dynamic character of the algorithm during its exploitation phases is reflected in the zigzag behavior shown in the figures. The main cause of this pattern is the localized refinement inherent in the optimization algorithms. These variations are caused by local search methods such as the Nelder-Mead algorithm, which concentrates on enhancing solutions within a limited area of the search space. Furthermore, mutation processes in Genetic Algorithms (GA) add variation by slightly altering individual solutions. The observed zigzag patterns can result from these alterations, which can lead to brief departures from a smooth convergence trajectory. Even though these variations can seem erratic, they highlight how the exploration and exploitation stages actively interact, demonstrating the algorithm's attempts to improve solutions and converge to the best result. Of these two methods, GANMA yields the lowest optimum solution and converges faster than GA on both the multimodal and shifted functions.

Analysis of the convergence curves and experimental findings demonstrates that GANMA typically exhibits remarkable performance on the 15 test functions, converging properly to the global optimal solution in close to 90% of cases. In terms of exploration and exploitation, GANMA performs better than GA and the NM algorithm. Consequently, GANMA achieves lower fitness values, less variability, and steadier convergence than GA-NMA, GA, and NM. GANMA is a flexible and dependable hybrid algorithm because of its capacity to adjust to optimization problems ranging from simple unimodal functions to intricate shifted multimodal ones. This robustness highlights its advantage in solving diverse real-world optimization problems.

6. Application of proposed algorithm (GANMA) for Weibull-Parameter Estimation

The Weibull distribution is a probability distribution that is often used in reliability and survival research. Weibull et al.23 showed that the Weibull distribution fits many different datasets and offers satisfactory results, even for small samples. The Weibull distribution, known for its flexibility in modeling various failure and survival scenarios, is defined by two parameters: the shape (β) and scale (η) parameters. In some cases, a location (α) parameter is added to create a three-parameter Weibull distribution, allowing for greater flexibility in fitting data with location shifts. The three-parameter probability density function (pdf) reduces to the two-parameter form24 when the location parameter (α) is equal to zero. Since no failure can occur before time zero, the two-parameter Weibull distribution is frequently utilized in failure analysis.25

Weibull parameter estimation employs a variety of methods. The method of moments (MoM), the maximum likelihood (ML) approach, and the modified maximum likelihood (MML) method were all used by Seguro and Lambert.26 They found that the ML approach is better suited to time-series data sets and advised using the MML technique for data sets formatted as frequency distributions. The least squares approach, the ML method, and the MML method were compared by Akgül et al.27 ML was shown to be the most effective approach overall, but they also noted that MML and ML are equally effective for large data sets, despite MML's lower computational complexity. The ML technique was used in the studies of Kollu et al.28 and Akpınar and Akpınar29 to estimate the Weibull parameters. Teimouri et al.30 investigated the MoM using their proposed L-moment estimator, the ML approach, the logarithmic moment method, and the percentile method. They found that the ML method and their suggested approach are the most effective estimators. The power density approach was proposed by Akdağ and Dinler.31 They concluded that it outperformed popular techniques such as the MoM and ML methods. After evaluating five different methods for approximating the Weibull distribution, Saleh et al.32 recommended the mean wind speed methodology and the ML method. Azad and colleagues33 found that the MoM and ML techniques were more effective than other approaches.

Because the Weibull distribution has a nonlinear log-likelihood function that is compatible with numerical optimization techniques such as Newton-Raphson (NR) and Nelder-Mead (NM), previous studies have often used MLE approaches for parameterizing the Weibull distribution.34,35 However, the effectiveness of these iterative methods relies heavily on the initial value chosen.36 In a departure from traditional approaches, this study employs Genetic Algorithms (GAs) as a heuristic search method, considering a set of solutions within the search space rather than individual points, to address the initial value problem in Weibull parameter Maximum Likelihood Estimation.37,38 GAs have been successfully applied in various optimization contexts, ranging from optimizing mixing parameters for high-performance concrete to signal control optimization.39 In previous works, they have also been applied to the parameterization of distributions such as the skew-normal distribution,40 to nonlinear regression,41 and to the negative binomial-gamma mixed distribution.42

Notably, Thomas et al.43 pioneered the use of GA for Weibull distribution parameter estimation in the context of breakdown periods of insulating fluid data, achieving performance comparable to traditional methods based on maximizing the log-likelihood function. Furthermore, hybrid approaches combining GA with other methods, such as the improved Nelder-Mead algorithm for controlling synchronous generator output voltage,36 and memetic algorithms applied to parameter identification in electrical engineering,44 underscore the versatility of heuristic and hybrid optimization techniques in solving complex problems. In addition, improved Nelder-Mead techniques have been used for synchronous generator output voltage control, as in the efforts of Boudissa et al.45 and Fatiha et al.46 In reliability analysis, Weibull parameter estimation is an important problem, with recent developments employing successive approximation47 and techniques specific to zero-failure data situations,48 enhancing estimation efficiency in small sample situations.

6.1 Weibull distribution

A versatile continuous probability distribution, the Weibull distribution is frequently used in survival analysis and reliability engineering. It is characterized by its ability to model the distribution of time until an event occurs. Named after Waloddi Weibull, who described it in the 1950s, the distribution is flexible and can take different shapes depending on its parameters. The shape parameter determines the form of the Weibull distribution curve, which reduces to a Rayleigh distribution (β = 2), an exponential distribution (β = 1), or takes another shape. The scale parameter determines the distribution's scale or size. Together, these parameters enable the Weibull distribution to model a wide range of events with varying shapes and sizes.

The following is the Weibull two-parameter distribution’s probability density function (PDF):

(6)
$f(x;\beta,\eta)=\begin{cases}\dfrac{\beta}{\eta}\left(\dfrac{x}{\eta}\right)^{\beta-1}e^{-(x/\eta)^{\beta}}, & x \ge 0\\[4pt] 0, & x < 0\end{cases}$
where:
  • x is the random variable,

  • β is the shape parameter,

  • η is the scale parameter.

The following represents the Weibull distribution’s cumulative distribution function (or CDF):

(7)
$F(x;\beta,\eta)=\begin{cases}1-e^{-(x/\eta)^{\beta}}, & x \ge 0\\ 0, & x < 0\end{cases}$

Probability density and cumulative distribution plots for some different parameter values are given in Figure 5.

5a573b27-bb41-4d55-9194-12266f20ea50_figure5.gif

Figure 5. Different values of (a) shape parameter β and (b) scale parameter η are plotted in Weibull PDF (solid line) and CDF (dashed line) plots.

The two-parameter Weibull distribution is commonly applied in reliability engineering for modeling the time until failure of components, whereas the three-parameter Weibull distribution is useful in scenarios where the event cannot initiate at time zero, such as analyzing the time until an event occurs after a certain threshold.
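The two-parameter density in Equation (6) and the CDF in Equation (7) correspond to scipy.stats.weibull_min with the location fixed at zero; the parameter values in the sketch below are illustrative.

import numpy as np
from scipy.stats import weibull_min

beta, eta = 2.0, 1.5          # shape and scale (illustrative values)
x = np.linspace(0, 5, 6)

# SciPy parameterization: first shape argument = beta, scale = eta, loc = 0
# for the two-parameter form used in Equations (6) and (7)
pdf_vals = weibull_min.pdf(x, beta, scale=eta)
cdf_vals = weibull_min.cdf(x, beta, scale=eta)

# Direct evaluation of Equations (6) and (7) for comparison
pdf_direct = (beta / eta) * (x / eta) ** (beta - 1) * np.exp(-(x / eta) ** beta)
cdf_direct = 1 - np.exp(-(x / eta) ** beta)

print(np.allclose(pdf_vals, pdf_direct), np.allclose(cdf_vals, cdf_direct))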

7. Methods for estimating parameters

Estimating the parameters of the Weibull distribution poses a significant challenge due to the intricacies involved in utilizing sample data for accurate estimation. Parameter estimation involves the process of determining the distribution’s parameters using available sample data, aiming to derive optimal values that provide meaningful insights into the underlying data. Making incorrect parameter choices can lead to misleading results, underscoring the importance of analyzing and selecting appropriate estimation techniques for accurate modelling. Therefore, a thorough evaluation of estimation methods is essential to determine the most suitable approach for a given dataset and analysis context.

7.1 Maximum Likelihood Estimation (MLE)

The statistical method known as Maximum Likelihood Estimation (MLE) is used to estimate Weibull parameters by maximizing the likelihood function, which determines how well the distribution fits the observed data. MLE is known for its efficiency, but its optimization can be complex due to non-linear equations and numerical stability issues. The PDF of the Weibull distribution is given by Equation (6). Given a sample x1, x2, … xn from a Weibull distribution, the likelihood function is given by:

(8)
$L(\beta,\eta)=\prod_{i=1}^{n} f(x_i;\beta,\eta)$
where, f (x; β, η) is the probability density function of the Weibull distribution.

The Weibull distribution’s log-likelihood function is as follows:

(9)
$\ln L(\beta,\eta)=\sum_{i=1}^{n}\left[\ln\!\left(\frac{\beta}{\eta}\right)+(\beta-1)\ln\!\left(\frac{x_i}{\eta}\right)-\left(\frac{x_i}{\eta}\right)^{\beta}\right]$
(10)
$\ln L(\beta,\eta)=n\ln\beta-n\beta\ln\eta+(\beta-1)\sum_{i=1}^{n}\ln x_i-\eta^{-\beta}\sum_{i=1}^{n}x_i^{\beta}$

The log-likelihood function is differentiated with respect to β and η, the derivatives are set to zero, and the resulting system of equations is solved to obtain the MLE.

(11)
$\frac{\partial \ln L}{\partial \eta}=-\frac{n\beta}{\eta}+\frac{\beta}{\eta^{\beta+1}}\sum_{i=1}^{n}x_i^{\beta}=0$
(12)
$\frac{\partial \ln L}{\partial \beta}=\frac{n}{\beta}-n\ln\eta-\frac{1}{\eta^{\beta}}\sum_{i=1}^{n}x_i^{\beta}\ln\!\left(\frac{x_i}{\eta}\right)+\sum_{i=1}^{n}\ln x_i=0$

By eliminating η from the above equations and simplifying, we get

(13)
$\hat{\eta}=\left(\frac{1}{n}\sum_{i=1}^{n}x_i^{\beta}\right)^{1/\beta}$
(14)
$\frac{1}{\beta}-\frac{\sum_{i=1}^{n}x_i^{\beta}\ln x_i}{\sum_{i=1}^{n}x_i^{\beta}}+\frac{1}{n}\sum_{i=1}^{n}\ln x_i=0$

Eqn. (13) may be used to calculate the estimate η̂. However, because Eqn. (14) does not admit an analytical solution, the estimate β̂ must be computed numerically, which is possible using an optimization strategy. The nonlinear equation defining the ML estimator of the shape parameter β can be solved with the Nelder-Mead, Newton-Raphson, simulated annealing, or GA algorithms. In this study, the proposed method, GA, and NM were all used to optimize the log-likelihood function. Nelder-Mead is a powerful algorithm that converges quickly, but its performance depends on the initial guess. We therefore also considered the GA when maximizing the Weibull distribution's log-likelihood function. Eqn. (10) is used as the fitness function for the GA and NM methods.
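As a purely numerical illustration of how Equations (13) and (14) are used (independent of the GA/NM machinery), the shape equation can be solved with a one-dimensional root finder and the scale then follows in closed form; the bracketing interval and the simulated sample below are assumptions for the sketch.

import numpy as np
from scipy.optimize import brentq

def mle_weibull(x):
    """Solve Eq. (14) for beta with a 1-D root finder, then get eta from Eq. (13)."""
    log_x = np.log(x)

    def shape_equation(beta):                       # left-hand side of Eq. (14)
        xb = x ** beta
        return 1.0 / beta - np.sum(xb * log_x) / np.sum(xb) + log_x.mean()

    beta_hat = brentq(shape_equation, 0.01, 50.0)   # bracketing interval is an assumption
    eta_hat = (np.mean(x ** beta_hat)) ** (1.0 / beta_hat)   # Eq. (13)
    return beta_hat, eta_hat

# Simulated sample with known parameters (beta = 3, eta = 1) as a sanity check
rng = np.random.default_rng(1)
sample = rng.weibull(3.0, size=500)
print(mle_weibull(sample))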

The proposed method for the MLE of the Weibull distribution is briefly described below.

7.1.1 Proposed method

(Genetic and Nelder-Mead Algorithm (GANMA))

To improve the precision and reliability of parameter estimation, we proposed a hybrid approach GANMA that integrates the GA and the NM method with MLE for two-parameter Weibull distributions. The GA aids in exploring the parameter space globally, generating diverse candidate solutions, while the NM fine-tunes these solutions through local search, aiming for optimal parameter estimates. To the best of our knowledge, this is the first instance where the GANMA is being utilized to estimate the Weibull distribution’s parameters.

The steps of the proposed method in this study are summarized as follows (a minimal code sketch is given after Step 6):

Step 1: Problem Formulation - We aim to find the MLE parameters β (shape) and η (scale) for a Weibull distribution.

Step 2: Genetic Algorithm (GA) Phase -

  • Generate an initial population (P) of candidate solutions. For the Weibull distribution, each solution represents a pair of parameters (β, η).

  • Define the fitness function f(β, η) that measures the goodness of fit between the observed data and the Weibull distribution with the given parameters. A suitable fitness function could be the log-likelihood shown in Equation 10.

  • Select individuals within the population according to their fitness by using a selection process (tournament selection). Higher fitness levels increase the probability of selection.

  • Apply crossover operations (one-point crossover) to pairs of selected individuals to create new candidate solutions.

  • Introduce small random changes (mutations) to the parameters of some individuals to add diversity to the population.

Step 3: Nelder-Mead Algorithm (NM) Phase -

  • Take the best individual from the final population of the GA as an initial guess for the parameters (β1, η1).

  • Define the log-likelihood function ln L(β, η) for the Weibull distribution shown in Equation (10).

  • Apply the Nelder-Mead operations (reflection, expansion, contraction, and shrinkage) to maximize the log-likelihood function (equivalently, minimize its negative) and improve the parameter estimates.

  • Repeat the iterations until convergence criteria are met (e.g., small changes in parameters or a maximum number of iterations).

Step 4: Repeat the selection, crossover, and mutation steps for several generations until convergence is reached (i.e., the end of the GA phase).

Step 5: After the GA phase, apply the NM method once more to the best GA solution.

Step 6: Result - The final parameters ( β̂ , η̂ ) obtained from the Nelder-Mead optimization represent the Maximum Likelihood Estimates (MLE) for the Weibull distribution.
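The following sketch shows one way the GANMA steps above could be realized in Python. It should be read as an illustration rather than the authors' implementation: the parameter bounds, tournament size, mutation scale, generation count, and default operator rates are our own assumptions; only the overall structure (GA exploration with tournament selection, one-point crossover, and mutation on the (β, η) chromosome, followed by Nelder-Mead refinement of the best individual via SciPy) follows the steps described here. The fitness is the log-likelihood of Equation (10).

```python
# Illustrative GANMA sketch for two-parameter Weibull MLE.
# Assumptions (not from the paper): parameter bounds, tournament size 2,
# Gaussian mutation scale, generation count, and the default operator rates.
import numpy as np
from scipy.optimize import minimize

def log_likelihood(params, x):
    """Eq. (10): log-likelihood of the two-parameter Weibull distribution."""
    beta, eta = params
    if beta <= 0 or eta <= 0:
        return -np.inf
    n = x.size
    return (n * np.log(beta) - n * beta * np.log(eta)
            + (beta - 1.0) * np.log(x).sum()
            - (x ** beta).sum() / eta ** beta)

def ganma(x, pop_size=100, generations=200, p_cross=0.8, p_mut=0.1, seed=0):
    rng = np.random.default_rng(seed)
    # --- GA phase: global exploration of the (beta, eta) space ---
    pop = rng.uniform([0.1, 0.1], [10.0, 10.0], size=(pop_size, 2))
    for _ in range(generations):
        fit = np.array([log_likelihood(ind, x) for ind in pop])
        # Tournament selection of size 2: the fitter of two random individuals survives
        a, b = rng.integers(pop_size, size=(2, pop_size))
        parents = np.where((fit[a] > fit[b])[:, None], pop[a], pop[b])
        children = parents.copy()
        # One-point crossover on the two-gene chromosome: swap the eta gene of a pair
        for i in range(0, pop_size - 1, 2):
            if rng.random() < p_cross:
                children[i, 1], children[i + 1, 1] = parents[i + 1, 1], parents[i, 1]
        # Mutation: small Gaussian perturbation, kept strictly positive
        mask = rng.random(children.shape) < p_mut
        children = np.clip(children + mask * rng.normal(0.0, 0.1, children.shape), 1e-3, None)
        pop = children
    fit = np.array([log_likelihood(ind, x) for ind in pop])
    best = pop[np.argmax(fit)]
    # --- NM phase: local refinement of the best GA individual ---
    res = minimize(lambda p: -log_likelihood(p, x), best, method="Nelder-Mead")
    return res.x  # (beta_hat, eta_hat)

# Usage: recover the parameters of a synthetic Weibull(beta=3, eta=1) sample
rng = np.random.default_rng(42)
print(ganma(rng.weibull(3.0, size=500)))
```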

8. Monte Carlo simulations

The parameter estimation methods for the two-parameter Weibull distribution were investigated using a Monte Carlo simulation. The scale parameter was set to 1, while the shape parameter was set to 0.5, 1, 3, and 6. The simulation was repeated 1000 times for sample sizes of 20, 100, and 500. The GA and GANMA use a population size of 100 with crossover and mutation rates of 0.1 and 0.8. The criteria used to compare the goodness of fit of the different parameter estimation methods are the mean absolute error (MAE) and bias. For the parameters β (shape) and η (scale), the MAE and bias are computed as follows:

(For shape parameter)

(15)
$$\mathrm{MAE}(\hat{\beta})=\frac{1}{n}\sum_{i=1}^{n}\left|\hat{\beta}_i-\beta_i\right|$$
(16)
$$\mathrm{bias}(\hat{\beta})=\frac{1}{n}\sum_{i=1}^{n}\left(\hat{\beta}_i-\beta_i\right)$$

(For scale parameter)

(17)
$$\mathrm{MAE}(\hat{\eta})=\frac{1}{n}\sum_{i=1}^{n}\left|\hat{\eta}_i-\eta_i\right|$$
(18)
$$\mathrm{bias}(\hat{\eta})=\frac{1}{n}\sum_{i=1}^{n}\left(\hat{\eta}_i-\eta_i\right)$$

Greater efficiency is implied by lower absolute values of the bias and MAE. For various data sizes and shape parameters, Tables 3-5 display the parameter estimates, bias, and MAE for each parameter estimation method. The results of the simulation demonstrate that the GANMA approach performed better than NM and GA when estimating shape and scale parameters based on MAE and bias criteria. The best results are highlighted in bold.
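The MAE and bias in Tables 3-5 are accumulated over the simulation replicates; a compact sketch of that bookkeeping (our own code, assuming an estimator such as the ganma function sketched in Section 7.1.1 that returns (β̂, η̂)) is given below.

```python
# Sketch of the Monte Carlo comparison: draw `reps` Weibull samples, estimate
# (beta, eta) each time, and summarize with MAE (Eqs. 15, 17) and bias (Eqs. 16, 18).
# `estimator` is any function mapping a sample to (beta_hat, eta_hat).
import numpy as np

def mc_compare(estimator, beta_true, eta_true=1.0, n=20, reps=1000, seed=0):
    rng = np.random.default_rng(seed)
    estimates = np.empty((reps, 2))
    for r in range(reps):
        x = eta_true * rng.weibull(beta_true, size=n)   # Weibull(beta_true, eta_true) sample
        estimates[r] = estimator(x)
    err = estimates - np.array([beta_true, eta_true])
    mae = np.abs(err).mean(axis=0)    # (MAE of beta_hat, MAE of eta_hat)
    bias = err.mean(axis=0)           # (bias of beta_hat, bias of eta_hat)
    return mae, bias

# e.g. mae, bias = mc_compare(ganma, beta_true=3.0, n=20)
```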

Table 3. Parameter estimates, MAE, and bias values for several simulation scenarios of the two-parameter Weibull distribution with n = 20 and β = 0.5, 1, 3, and 6.

n | β | Method | β̂ Mean | β̂ MAE | β̂ Bias | η̂ Mean | η̂ MAE | η̂ Bias
20 | 0.5 | NM | 0.62394 | 0.12395 | 0.12393 | 2.19877 | 1.19876 | 1.19871
20 | 0.5 | GA | 0.60901 | 0.13641 | 0.10901 | 1.76523 | 0.81895 | 0.76523
20 | 0.5 | GANMA | 0.60514 | 0.10514 | 0.10514 | 1.50806 | 0.50806 | 0.50806
20 | 1 | NM | 1.21029 | 0.21028 | 0.21029 | 1.22803 | 0.22810 | 0.22808
20 | 1 | GA | 1.23962 | 0.29000 | 0.23961 | 1.380531 | 0.41456 | 0.38053
20 | 1 | GANMA | 1.21029 | 0.21029 | 0.21009 | 1.22803 | 0.22803 | 0.22805
20 | 3 | NM | 2.04136 | 0.95863 | -0.95863 | 1.95863 | 0.14163 | -0.14163
20 | 3 | GA | 3.63089 | 0.70843 | 0.41377 | 1.07488 | 0.09595 | 0.074881
20 | 3 | GANMA | 3.41374 | 0.63089 | 0.63088 | 1.07087 | 0.070870 | 0.070870
20 | 6 | NM | 3.08815 | 2.91184 | -2.91146 | 1.91184 | 0.08815 | -0.08813
20 | 6 | GA | 5.3846 | 1.0844 | -0.61433 | 1.05115 | 0.07196 | 0.05115
20 | 6 | GANMA | 6.26177 | 1.06178 | 1.26178 | 1.03482 | 0.03482 | 0.034828

Table 4. Parameter estimates, MAE, and bias values for several simulation scenarios of the two-parameter Weibull distribution with n = 100 and β = 0.5, 1, 3, and 6.

n | β | Method | β̂ Mean | β̂ MAE | β̂ Bias | η̂ Mean | η̂ MAE | η̂ Bias
100 | 0.5 | NM | 0.50000 | 2.66453 | 2.88657 | 1.24999 | 0.24999 | 0.24989
100 | 0.5 | GA | 0.58267 | 0.08833 | -0.04039 | 1.74933 | 0.92293 | 0.67225
100 | 0.5 | GANMA | 0.49477 | 0.01522 | -0.0152 | 0.08211 | 0.17880 | -0.17868
100 | 1 | NM | 1.00000 | 1.7763 | 1.7322 | 1.0000 | 0.0000 | 0.0000
100 | 1 | GA | 0.76607 | 0.17435 | -0.01683 | 0.83393 | 0.28349 | 0.11227
100 | 1 | GANMA | 0.97954 | 0.03045 | -0.03045 | 0.90620 | 0.09379 | -0.09386
100 | 3 | NM | 2.10606 | 0.89393 | -0.89396 | 0.89393 | 0.10606 | -0.10606
100 | 3 | GA | 2.90103 | 0.45456 | -0.01648 | 1.07781 | 0.09780 | -0.08386
100 | 3 | GANMA | 2.91863 | 0.09136 | -0.09135 | 0.96769 | 0.03230 | -0.03231
100 | 6 | NM | 2.12499 | 3.87500 | -3.87501 | 0.87500 | 0.12499 | -0.12497
100 | 6 | GA | 4.48285 | 1.10891 | -0.93864 | 0.98534 | 0.05621 | -0.01811
100 | 6 | GANMA | 5.51726 | 0.18273 | -0.18273 | 0.98371 | 0.01628 | -0.01625

Table 5. Parameter estimates, MAE, and bias values for several simulation scenarios of the two-parameter Weibull distribution with n = 500 and β = 0.5, 1, 3, and 6.

n | β | Method | β̂ Mean | β̂ MAE | β̂ Bias | η̂ Mean | η̂ MAE | η̂ Bias
500 | 0.5 | NM | 0.50392 | 0.00392 | 0.00391 | 1.24934 | 0.249345 | 0.249344
500 | 0.5 | GA | 0.50555 | 0.0863 | 0.0055 | 1.8336 | 0.9586 | 0.8336
500 | 0.5 | GANMA | 0.49826 | 0.00677 | -0.00671 | 1.0054 | 0.00545 | -0.00544
500 | 1 | NM | 1.0000 | 0.0000 | 0.0000 | 1.0000 | 0.000 | 0.000
500 | 1 | GA | 0.9676 | 0.1473 | -0.0323 | 1.1998 | 0.29807 | 0.19986
500 | 1 | GANMA | 0.98653 | 0.0134 | -0.0133 | 1.00272 | 0.00272 | -0.00277
500 | 3 | NM | 2.0825 | 0.91747 | -0.91746 | 0.91747 | 0.08252 | -0.08251
500 | 3 | GA | 3.2000 | 0.52541 | 0.20000 | 1.0404 | 0.08547 | 0.0404
500 | 3 | GANMA | 2.95959 | 0.04040 | -0.04040 | 1.00090 | 0.00090 | -0.00080
500 | 6 | NM | 2.1166 | 3.88335 | -3.88334 | 0.88335 | 0.11664 | -0.11667
500 | 6 | GA | 5.2108 | 1.0329 | -0.7891 | 1.0060 | 0.06851 | 0.00605
500 | 6 | GANMA | 5.6191 | 0.08080 | -0.080801 | 1.0004 | 0.00045 | -0.00043

8.1 Result analysis

Figures 6-8 illustrate the outcomes across various shape parameters, with the scale parameter held constant, and across various data sizes by plotting the true Weibull PDF together with the PDFs obtained from the MLE of the parameters using NM, GA, and GANMA. The solid black line depicts the PDF with the parameters (β, η), the solid green line the standard genetic algorithm, the solid yellow line the Weibull PDF using NM, and the solid red line the suggested method GANMA. Parameter estimation using the suggested technique is found to converge to the original PDF as the shape parameter and data size increase. GANMA, the suggested algorithm, performs better than GA and NM in all of the situations considered.


Figure 6. Histogram and MLE PDF of the two-parameter Weibull distribution for β = 0.5, 1, 3, and 6 with n = 20.


Figure 7. Histogram and MLE PDF of the two-parameter Weibull distribution for β = 0.5, 1, 3, and 6 with n = 100.


Figure 8. Histogram and MLE PDF of the two-parameter Weibull distribution for β = 0.5, 1, 3, and 6 with n = 500.

Based on the MAE and bias criteria, the simulation results demonstrate that the GANMA technique outperformed NM and GA in the estimation of the shape and scale parameters. In each simulated scenario, the GANMA technique yielded the best shape-parameter efficiency in terms of bias and MAE for sample sizes of 20, 100, and 500.

In almost every simulated scenario, GANMA achieved the highest efficiency in the estimation of the scale parameter for sample sizes of 20, 100, and 500, based on at least one decision criterion. Analyzing the MAE and bias for each simulation scenario, GANMA proved to be the most effective approach for a data size of 20. Overall, GANMA is an effective strategy for small, moderate, and large sample sizes. The absolute values of the biases and the MAE are additionally shown in Figures 9-12.


Figure 9. Comparison of parameter estimation approaches for β using the MAE criterion.


Figure 10. Comparison of parameter estimation approaches for η using the MAE criterion.


Figure 11. Comparison of parameter estimation approaches for β using the bias criterion.


Figure 12. Comparison of parameter estimation approaches for η using the bias criterion.

The MAE values for the shape parameter β are shown in Figure 9. In every simulated scenario, GANMA outperformed NM and GA in terms of efficiency, with NM the second-best approach. An increase in sample size resulted in lower MAE values; on the other hand, MAE values increased as the shape parameter value increased.

The scale parameter η’s MAE values are displayed in Figure 10. For sample sizes of 20, 100, and 500, GANMA proved to be the most effective approach. When the shape parameter is set to a higher value, the MAE values drop. Likewise, as the sample size is raised, the MAE values drop.

The shape parameter β’s absolute bias value is displayed in Figure 11. The most efficient results were obtained using GANMA. NM outperformed GA on some occasions. As with MAE values, larger sample sizes resulted in lower absolute bias levels. Increasing the parameter value resulted in higher absolute bias levels.

The absolute bias for the scale parameter η is shown in Figure 12. Most of the time, GANMA outperformed other methods in terms of efficiency. The second-best approach is NM. Increasing the shape parameter and sample size leads to lower absolute bias levels.

9. Estimation of Weibull parameters in wind speed analysis

The decrease in fossil fuel supplies and their lack of reliability in meeting future energy demands have made renewable energy a hot topic for academics. Wind is one of the main sources of renewable energy, and wind speed modeling has been studied in great detail. In wind power applications, the two-parameter Weibull distribution is the most popular. This PDF has been found to be appropriate for the majority of wind regimes observed in nature, is easy to use, and is adaptable. Several studies have noted that wind speed data cannot be adequately represented for specific applications, including those with bimodal distributions, short time horizons, low and high wind speeds, and a high frequency of nulls.49-51 The probability density function may be determined from the following equation.

(19)
$$f(v)=\frac{\beta}{\eta}\left(\frac{v}{\eta}\right)^{\beta-1}e^{-(v/\eta)^{\beta}}$$
where v is wind speed.

Power density

Power density in wind speed analysis refers to the amount of power that can be obtained from the wind per unit area. This statistic is critical when evaluating the feasibility and potential viability of wind energy projects since it quantifies the energy available from the wind at a given place. The power density (PD) may be easily calculated using the following equation once β and η have been established.

(20)
$$PD=\frac{\rho_a\,\eta^{3}}{2}\cdot\frac{3}{\beta}\,\Gamma\!\left(\frac{3}{\beta}\right)$$
where ρa is the air density and the symbol Γ denotes the gamma function. The standard value of the air density is taken as ρa = 1.225 kg/m³.
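As a quick worked check of Equation (20), the snippet below (ours; math.gamma is the standard-library gamma function, and the example reuses the GANMA estimates reported for data set 2 in Table 7) reproduces the corresponding power density of approximately 1431.6 W/m².

```python
# Power density from fitted Weibull parameters, Eq. (20), with rho_a = 1.225 kg/m^3.
# Note that (3/beta) * Gamma(3/beta) equals Gamma(1 + 3/beta).
import math

def power_density(beta_hat, eta_hat, rho_a=1.225):
    return rho_a * eta_hat ** 3 / 2.0 * (3.0 / beta_hat) * math.gamma(3.0 / beta_hat)

print(power_density(1.35925, 9.85611))   # GANMA estimates for data set 2 -> ~1431.6 W/m^2 (cf. Table 7)
```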

9.1 Results and discussion

In this study, two real-world data sets have been used to examine wind speed analysis. The first data set came from the seas surrounding the Maluku Islands and Sulawesi. The data under analysis were gathered by the Quikscat satellite, which measured the ocean wind 10 meters above sea level using a scatterometer. The measurement has a horizontal and vertical spatial resolution of 0.25° on the Earth grid. The accessible data include the information from the January measurement point at latitude 116° and longitude 85.5°.52

The second data set contains wind speeds recorded at Tarama Island and Iriomote Island, which are close to northern Taiwan. The maximum daily wind speed and direction were recorded at the Iriomotejima Meteorological Station in March 2012.53

The Kolmogorov-Smirnov (K-S) test is a nonparametric statistical test used to compare two distributions. The K-S test calculates the maximum absolute difference between the empirical cumulative distribution functions (ECDFs) of the distributions being compared, providing a test statistic (D). A p-value derived from this statistic indicates the significance of the difference, helping in goodness-of-fit testing, comparing sample distributions, and model validation without assuming any specific distribution for the data.

Statistical confirmation that the monthly data sets come from the Weibull distribution can be obtained by performing the K-S test separately for each data set. The K-S test statistic is the largest difference between the theoretical distribution, SN(x), and the observed distribution, F0(x).54

(21)
$$D=\max\left|F_0(x)-S_N(x)\right|$$

Monthly distributions from the Weibull distribution are selected for further investigation following the K-S test (p-value > 0.05), where the p-value indicates the probability of observing a discrepancy as large as the one computed if the two distributions were the same.
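A brief sketch of this goodness-of-fit check (ours, assuming SciPy's one-sample kstest against the fitted Weibull CDF; variable names are illustrative) is given below.

```python
# One-sample K-S test of wind-speed data against the fitted Weibull CDF (Eq. 21).
from scipy.stats import kstest, weibull_min

def ks_check(wind_speed, beta_hat, eta_hat):
    # args = (shape c, loc, scale) passed to weibull_min.cdf
    result = kstest(wind_speed, weibull_min.cdf, args=(beta_hat, 0.0, eta_hat))
    return result.statistic, result.pvalue   # retain the fit when the p-value exceeds 0.05

# e.g. d, p = ks_check(v, *ganma(v))   # using the GANMA sketch from Section 7.1.1
```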

Results across the shape and scale parameters were obtained by plotting the PDF and CDF of the MLE parameters using NM, GA, and GANMA, as shown in Figures 13 and 14. The solid and dotted green lines represent the PDF and CDF from the standard genetic algorithm, the solid and dotted yellow lines the Weibull PDF and CDF using NM, and the solid and dotted red lines the PDF and CDF from the suggested method, respectively. Figure 13 illustrates that the PDF and CDF curves for GANMA and NM converge onto the same line.


Figure 13. Histogram, MLE PDF and CDF using GA, NM, and GANMA for data set 1.


Figure 14. Histogram, MLE PDF and CDF using GA, NM, and GANMA for data set 2.

Tables 6 and 7 present the shape and scale parameters, K-S statistic, p-value, and power density for the first and second data sets, respectively, for all three estimation techniques. Of the three estimation techniques, the suggested approach (GANMA) produces the highest p-value and the lowest K-S statistic for both data sets. A p-value exceeding the selected significance threshold (e.g., 0.05) indicates that the Weibull distribution and the actual wind speed data are similar; in other words, the data are well fitted by the Weibull distribution. Based on the K-S test findings, the parameters estimated using GANMA are considered the best fit for describing the wind speed data, as evidenced by the low K-S statistic and high p-value.

Table 6. Parameter estimates for data set 1.

Method | β̂ | η̂ | K-S value | p-value | PD (watt/m²)
NM | 9.52382 | 6.44868 | 0.13685 | 0.56069 | 147.05921
GA | 3.48312 | 4.97007 | 0.67756 | 1.274E-14 | 71.36770
GANMA | 9.52340 | 6.44863 | 0.13682 | 0.560978 | 147.05541

Table 7. Parameter estimates for data set 2.

Method | β̂ | η̂ | K-S value | p-value | PD (watt/m²)
NM | 2.0 | 1.0 | 0.99999 | 2.63E-285 | 0.81422
GA | 1.07418 | 4.97152 | 0.62512 | 3.105E-12 | 350.27224
GANMA | 1.35925 | 9.85611 | 0.35982 | 0.00042 | 1431.63678

As shown in Table 6, the maximum power density is obtained with the parameters estimated through MLE using NM, suggesting greater absolute performance in terms of power generation. Although the power density obtained with the parameters estimated by MLE using GANMA is slightly lower than with NM, they are nevertheless selected as the best fit since they have the highest p-value and the lowest K-S statistic. This suggests that, for wind speed data set 1, the parameters calculated by MLE using GANMA offer the best match.

The parameters that are estimated by MLE using GANMA are found to provide the best fit in Table 7, as shown by their lowest K-S statistic and highest p-value. Additionally, superior performance in terms of power generation is indicated by the higher power density value associated with these parameters.

10. Conclusion

To improve the exploitation capabilities of GA, this study presents a unique hybridized approach called the Genetic and Nelder-Mead Algorithm (GANMA), in which NM is embedded within GA. GANMA has been applied to fifteen benchmark problems in three separate dimensions to verify the robustness and efficiency of the suggested technique. Owing to its high level of accuracy and stability, GANMA performs very well on unimodal, multimodal, and shifted unimodal/multimodal functions, as shown by the test-function comparison tables. According to the testing results, the suggested method is robust and can solve benchmark problems more quickly than the other two algorithms in the majority of situations.

Furthermore, in estimating the Weibull distribution's scale (η) and shape (β) parameters, this study assesses the efficacy of three estimation methods: ML estimators employing GA, NM, and GANMA. The MAE and bias criteria are used to assess the efficiency of the parameter estimation techniques. Based on the conclusions drawn from the Monte Carlo simulation and the examination of real-world wind speed data, the ML estimator using GANMA performs better in Weibull parameter estimation than the ML estimators using NM and GA. We used the K-S test to compare the three sets of parameters obtained by fitting a Weibull distribution to two wind speed data sets and selected the set of parameters that minimized the K-S statistic and maximized the associated p-value, indicating the best fit. Moreover, the two data sets were collected in two different geographic locations with different meteorological conditions, and across this variety of meteorological situations GANMA demonstrated its superiority.

Compliance with ethical standards

Disclosures & disclaimer

We certify that the submitted manuscript is our original work and is not currently under consideration elsewhere. The work is unfunded and independent.

Ethical approval

This article does not include any research that any of the authors conducted using humans or animals.
