
Development of optimization methods to deal with current challenges in engineering design optimization


Presented herein is a summary of the author's PhD thesis, entitled “Development of optimization methods to deal with current challenges in engineering design optimization”. The thesis is available online.

1. Background and motivation

In engineering design, optimization is customary, and often indispensable. Typical cases include minimization of drag for vehicles, minimization of weight for structures like buildings and bridges, maximization of power and lift for aircraft and rockets, minimization of fuel consumption for engines, etc. Therefore, it comes as no surprise that the development of fast and efficient optimization algorithms for engineering design is an actively pursued research area.

In recent decades, metaheuristic algorithms have proven to be efficient, robust and versatile methods for numerical optimization. However, they usually need to evaluate a large number of candidate designs to find near-optimum solutions. This becomes prohibitive for engineering optimization problems in which each design evaluation may require computationally expensive analysis; consequently, the optimization process may take far longer than is affordable.

Considering that the number of design evaluations is a critical factor in overall optimization time, it is imperative to develop techniques that search for the optimum design using the fewest evaluations possible. With this goal, this thesis investigates a range of domains in which existing metaheuristic optimization approaches can be improved.

Engineering problems are often highly non-linear, discontinuous and non-differentiable, which rules out (or restricts) the applicability of analytical techniques for solving them. However, they also exhibit additional attributes that prove challenging even to existing metaheuristic techniques, making the search difficult and, consequently, demanding large numbers of evaluations. These include:

  • (a) Constraints – constraints render a fraction (often a large fraction) of the search space infeasible, making it hard to find the optimum, and at times even a feasible design.

  • (b) Large number of objectives – Pareto-dominance sorting, a technique commonly used in multi-objective optimization algorithms, is inadequate for problems with large numbers of objectives, a fact well reported in the literature.

  • (c) Large number of variables – the search space grows exponentially with the number of variables, with a corresponding increase in computational effort.

  • (d) Multiple models – for certain problems, there may be multiple candidate models from which to choose a solution, with none of them an obvious preference. In such cases, each model may need to be explored to find the global best.

In this thesis, studies are conducted on each of these domains individually. Shortcomings of the existing methods are analyzed, and novel techniques are developed for efficiently handling constraints, large numbers of objectives and variables, and multiple models.

2. Research and outcomes

The contributions of this thesis can be grouped into four broad areas.

2.1. Constraint handling

Two algorithms are proposed to effectively deal with constrained optimization problems:

  • (1) Infeasibility Driven Evolutionary Algorithm [2,8]: A novel approach is proposed which explicitly prefers marginally infeasible solutions over feasible solutions during evolution. This preference translates into an active search through both feasible and infeasible regions of the search space, leading to faster convergence towards the optimum (which often lies on the constraint boundary).

  • (2) Constrained Pareto Simulated Annealing [6]: The conventional simulated annealing algorithm is extended to deal with constrained, multi-objective optimization problems.

The above two algorithms have been further enhanced by incorporating local search and surrogate modeling, respectively.
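The core preference in IDEA [2] can be sketched as a selection scheme that retains a fraction of the least-violating infeasible solutions instead of discarding them. The sketch below is illustrative only; the function names and the infeasible-fraction parameter are assumptions, not the thesis's implementation.

```python
def constraint_violation(x, constraints):
    """Sum of violations of inequality constraints g(x) <= 0 (0 when feasible)."""
    return sum(max(0.0, g(x)) for g in constraints)

def select_population(candidates, f, constraints, pop_size, infeasible_frac=0.25):
    """Keep the best feasible solutions, plus a fraction of marginally
    infeasible ones (lowest violation first), so the search proceeds along
    both sides of the constraint boundary."""
    feasible = [x for x in candidates if constraint_violation(x, constraints) == 0.0]
    infeasible = [x for x in candidates if constraint_violation(x, constraints) > 0.0]
    feasible.sort(key=f)  # minimization
    infeasible.sort(key=lambda x: (constraint_violation(x, constraints), f(x)))
    n_inf = min(len(infeasible), int(infeasible_frac * pop_size))
    return feasible[:pop_size - n_inf] + infeasible[:n_inf]
```

With the objective f(x) = x and the constraint x ≥ 1, a solution at x = 0.9 (slightly infeasible) would survive selection alongside the best feasible designs, keeping the search pressed against the constraint boundary where the optimum lies.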

2.2. Large scale optimization (many objectives)

Two new techniques are developed for handling problems with large numbers of objectives:

  • (1) Secondary ranking: As reported in a number of studies in the literature, Pareto-dominance is an inadequate strategy for problems with many objectives (typically more than three). Two new secondary ranking methods are proposed in this thesis, namely Cluster-sort [4] and Modified-ε-dom [5], which improve convergence and diversity for many-objective optimization problems.

  • (2) Dimensionality reduction: A novel technique for dimensionality reduction is proposed in which the true dimensionality of a problem is identified using a key set of solutions (corners) on the Pareto front. An algorithm to identify the set of corners, the Pareto Corner Search Evolutionary Algorithm (PCSEA) [1], is proposed, followed by an analysis of the obtained corner solutions in which each objective is omitted sequentially. The proposed method is able to estimate the dimensionality using merely a small fraction of the evaluations required by other contemporary techniques.
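The sequential-omission analysis on the corner set can be illustrated roughly as follows: drop one objective at a time, and if the corners remain mutually non-dominated without it, treat that objective as non-essential. This is a hedged sketch under assumed names; the actual redundancy test in PCSEA [1] differs in detail.

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective, better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def reduce_objectives(corners):
    """corners: objective vectors of the corner solutions.
    Omit each objective in turn; keep the omission only if the corners
    stay mutually non-dominated in the reduced objective space."""
    m = len(corners[0])
    kept = list(range(m))
    for j in range(m):
        trial = [k for k in kept if k != j]
        if not trial:
            break
        reduced = [tuple(c[k] for k in trial) for c in corners]
        if not any(dominates(a, b) for a in reduced for b in reduced if a != b):
            kept = trial  # objective j was not needed to separate the corners
    return kept
```

For example, on a three-objective problem where the first and third objectives duplicate each other, the check identifies that only two objectives are essential.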

2.3. Large scale optimization (many variables)

A novel partitioning strategy, based on correlations among the variables, is proposed for Cooperative Coevolutionary Algorithms (CCEAs) to handle problems with large numbers of variables. The resulting algorithm, CCEA with Adaptive Variable Partitioning (CCEA-AVP) [7], is able to solve a broad class of separable and non-separable problems more efficiently than conventional EA and CCEA.
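The partitioning idea can be sketched as follows: periodically estimate pairwise correlations among variables from a set of good solutions, and place strongly correlated variables in the same subpopulation so they are co-evolved together. The grouping heuristic and threshold below are assumptions for illustration, not the published CCEA-AVP procedure.

```python
import statistics

def correlation(xs, ys):
    """Pearson correlation of two variable columns."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx > 0 and sy > 0 else 0.0

def partition_variables(population, threshold=0.6):
    """population: list of decision vectors (e.g. current top-ranked solutions).
    Returns groups of variable indices; correlated variables share a group."""
    n_vars = len(population[0])
    cols = [[ind[i] for ind in population] for i in range(n_vars)]
    groups = []
    for i in range(n_vars):
        for g in groups:
            if any(abs(correlation(cols[i], cols[j])) >= threshold for j in g):
                g.append(i)
                break
        else:
            groups.append([i])  # no correlated group found: start a new one
    return groups
```

Each group then becomes one subpopulation of the cooperative coevolutionary search, and the partitioning can be recomputed adaptively as the population evolves.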

2.4. Trans-dimensional optimization

A Simulated Annealing based Trans-Dimensional Optimization (SA-TDO) algorithm [3] is proposed to deal with optimization problems with multiple candidate models. This algorithm searches through the model and variable spaces simultaneously, resulting in better quality solutions than when the same number of function evaluations is distributed equally among the models for their individual optimization.
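A single step of such a trans-dimensional annealing search might look like the sketch below: with some probability the move jumps to a different candidate model (drawing a fresh variable vector of that model's dimensionality), otherwise it perturbs the variables of the current model, and the proposal is accepted by the usual Metropolis rule. The structure and parameter names are assumptions for illustration, not the published SA-TDO code [3].

```python
import math
import random

def sa_tdo_step(state, models, temperature, p_model_jump=0.2, step=0.1):
    """state = (model_index, variables); models = list of (dimensionality, objective).
    Returns the next state, searching model and variable spaces in one run."""
    m, x = state
    if random.random() < p_model_jump:
        m_new = random.randrange(len(models))              # jump to a candidate model
        dim, _ = models[m_new]
        x_new = [random.uniform(0.0, 1.0) for _ in range(dim)]
    else:
        m_new = m                                          # stay, perturb variables
        x_new = [xi + random.gauss(0.0, step) for xi in x]
    _, f_old = models[m]
    _, f_new = models[m_new]
    delta = f_new(x_new) - f_old(x)
    # Metropolis acceptance: always accept improvements, sometimes accept worse moves
    if delta <= 0 or random.random() < math.exp(-delta / temperature):
        return (m_new, x_new)
    return state
```

Because model jumps and variable moves compete within one cooling schedule, evaluations flow towards the more promising models rather than being split evenly among all of them.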

The proposed methods are able to achieve competitive results using markedly fewer design evaluations than conventional optimizers, which is demonstrated through rigorous numerical experiments on benchmark test problems and engineering design problems. The work has resulted in a total of 16 peer-reviewed publications (3 journal articles, 2 book chapters and 11 conference papers). A selected few are listed in the References.



[1] H.K. Singh, A. Isaacs and T. Ray, A Pareto corner search evolutionary algorithm and dimensionality reduction in many-objective optimization problems, IEEE Transactions on Evolutionary Computation 15(4) (2011), 539–556.

[2] H.K. Singh, A. Isaacs, T. Ray and W. Smith, Infeasibility Driven Evolutionary Algorithm (IDEA) for engineering design optimization, in: Proceedings of the Australasian Joint Conference on Artificial Intelligence (AI), 2008, pp. 104–115.

[3] H.K. Singh, A. Isaacs, T. Ray and W. Smith, A simulated annealing algorithm for single objective trans-dimensional optimization problems, in: Proceedings of Hybrid Intelligent Systems (HIS), 2008, pp. 19–24.

[4] H.K. Singh, A. Isaacs, T. Ray and W. Smith, A study on the performance of substitute distance based approaches for evolutionary many-objective optimization, in: Proceedings of Simulated Evolution and Learning (SEAL), 2008, pp. 401–410.

[5] H.K. Singh, A. Isaacs, T. Ray and W. Smith, An improved secondary ranking for many objective optimization problems, in: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), 2009.

[6] H.K. Singh and T. Ray, C-PSA: Constrained Pareto Simulated Annealing for constrained multi-objective optimization, Information Sciences 180(13) (2010), 2499–2513.

[7] H.K. Singh and T. Ray, Divide and conquer in coevolution: A difficult balancing act, in: Agent Based Evolutionary Search, Adaptation, Learning, and Optimization, Springer-Verlag, Berlin/Heidelberg, 2010, pp. 117–138.

[8] H.K. Singh, T. Ray and R. Sarker, Optimum oil production planning using infeasibility driven evolutionary algorithm, Evolutionary Computation 21(1) (2013), 65–82.