ISSN (Online): 2319-8753, ISSN (Print): 2347-6710
P. Sabarinath1, R. Hariharasudhan1, M. R. Thansekhar1, R. Saravanan2
This work presents an application of an improved elitist Non-dominated Sorting Genetic Algorithm version II (NSGA-II) to the multi-objective disc brake optimization problem. The disc brake optimization problem is considered as a two-objective problem: the first objective is the minimization of the mass of the brake and the second is the minimization of the stopping time. The disc brake optimization model has four design variables and five inequality constraints. To improve the performance of NSGA-II, two modifications are proposed: the incorporation of a Virtual Mapping Procedure (VMP) and the introduction of controlled elitism in NSGA-II. The main objective of this work is to apply the NSGA-II algorithm to optimize the design of a disc brake for minimum brake mass and stopping time, and to compare the results obtained by NSGA-II with the results already published for a genetic algorithm.
Keywords
Combinatorial optimization, Disc brake, Multi-objective, Non-dominated Sorting Genetic Algorithm (NSGA-II)
I. INTRODUCTION
The present scenario in industry is to design and manufacture high quality products that not only satisfy customer demands but also withstand the heavy competition in the market. The design and manufacturing techniques currently available in industry must be improved in order to meet these challenges and to remain competitive. In recent years, design and manufacturing companies have been forced to make radical changes in their design and manufacturing strategies so as to face fierce market competition. The crucial task nowadays is to find the optimal design and machining parameters so as to minimize production costs. Optimal design problems consist of the selection of optimal design variables. Even though these problems have been extensively researched, the complexity of the optimal design of machine elements creates the requirement for increasingly effective algorithms.
Various solution approaches such as gradient-based methods, the sequential unconstrained minimization technique and dynamic programming have been used to optimize the design of machine elements. Gradient-based methods differ in their reliability, efficiency and sensitivity to the initial solution. Furthermore, they are inclined to converge to a local optimum, and they are limited because they are not well suited to non-convex problems.
Convergence speed to the global optimum and solution accuracy are important factors in the development of optimization methods. Since the convergence speed of evolutionary algorithms to globally optimal results is better than that of conventional techniques, non-traditional optimization methods such as the genetic algorithm (GA), simulated annealing, particle swarm optimization, differential evolution and the ant colony algorithm have been widely preferred for solving optimization problems from the design industry.
Optimization algorithms can be divided into two categories. The first category, known as single objective optimization, consists of finding the global maximum or minimum of an objective function of the design variables. A number of benchmark problems on single objective design optimization, such as the design of a pressure vessel, welded beam, tension-compression spring, gear train, speed reducer, hydrostatic thrust bearing, step cone pulley, Belleville spring and rolling element bearing, have been reported in the literature. These single objective benchmark problems have been solved by both traditional and non-traditional optimization techniques.
Over recent years, genetic algorithms (GAs) have received a lot of attention as an optimization method in the design of basic machine elements such as springs, gears, pulleys and shafts. GAs work on the natural phenomena of genetics, such as selection, reproduction, crossover and mutation, to find the global best solution. The application of GAs in the design optimization of machine elements such as a spring, a hollow shaft and a belt pulley system is reported in [1]. He et al. [2] used improved particle swarm optimization for solving some benchmark problems on single objective design optimization. Rao et al. [3] recently proposed teaching learning based optimization (TLBO) for solving the single objective design optimization of the above mentioned benchmark problems. The second category is multi objective optimization, which involves the simultaneous optimization of multiple, often conflicting objectives. Instead of finding a single optimal solution, a set of optimal non-dominated solutions is generated; this set is referred to as the Pareto domain. A solution P is said to dominate a solution Q when P is not worse than Q in any of its objective function values and is better with respect to at least one objective; this relation is illustrated in the short sketch below. In general, most real-world problems are multi objective in nature, so in many cases solving a single-objective problem alone does not provide an effective solution. Thus, in this work, we deal with a multi-objective design optimization problem. As mentioned earlier, a number of research studies are observed on single objective design optimization problems, but there are only a few studies on design optimization problems dealing with the multi-objective scenario, and even fewer applying evolutionary strategies. A multi-objective optimization problem gives rise to a set of optimal solutions commonly known as Pareto optimal solutions [4]. Thus, instead of getting a single solution, we get a set of solutions because of the presence of multiple objectives. The decision maker then chooses one or more solutions from among the Pareto optimal solutions.
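As a simple illustration of the dominance relation defined above, the following minimal Python sketch checks whether one objective vector dominates another, assuming both objectives are to be minimized as in this paper; the function name and the list-of-floats representation are assumptions made here for clarity and are not taken from the original study.

def dominates(p, q):
    """Return True if solution p dominates solution q.

    p and q are sequences of objective values, all to be minimized:
    p must be no worse in every objective and strictly better in at least one.
    """
    no_worse = all(pi <= qi for pi, qi in zip(p, q))
    strictly_better = any(pi < qi for pi, qi in zip(p, q))
    return no_worse and strictly_better

# Example with (mass, stopping time) pairs:
print(dominates([2.0, 10.0], [2.5, 12.0]))  # True: better in both objectives
print(dominates([2.0, 12.0], [2.5, 10.0]))  # False: the two solutions are mutually non-dominated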
A multi-objective optimization problem can be solved mathematically or by applying heuristics or metaheuristics. Goal programming is a mathematical technique for solving multi-objective optimization problems [5]. Another way is to solve such problems with a multi-objective evolutionary algorithm (MOEA) [6]. Because of the increasing attention toward machine learning procedures, MOEA techniques have gone through considerable evolution. MOEA techniques can be classified into (1) a priori techniques, where the decision maker assigns relative importance to the objectives prior to the application of the MOEA; (2) progressive techniques, which are interactive search methods; and (3) a posteriori techniques, where the decision maker chooses the solution after completion of the search. A number of MOEA algorithms are observed in the existing literature [6]. Some of them are: the vector evaluated genetic algorithm [7], weight-based genetic algorithm (GA) [8], multiple objective GA [9], vector optimized evolution strategy [10], strength Pareto evolutionary algorithm (SPEA, SPEA2) [11, 12], Pareto archived evolution strategy [13], Pareto envelope-based selection algorithm (PESA, PESA-II) [14, 15], niched Pareto GA (NPGA, NPGA 2) [16, 17], multi objective particle swarm optimization (MOPSO), multi objective differential evolution (MODE) and the non-dominated sorting GA (NSGA, NSGA-II) [4, 18]. Salim Fettaka et al. [19] used NSGA-II for the multi objective design optimization of a shell-and-tube heat exchanger; the two objective functions considered were the heat transfer area and the pumping power, and nine decision variables were used to obtain the Pareto-optimal solutions that capture the trade-off between the two objectives. Murugan et al. [20] applied NSGA-II to the multi objective generation expansion planning (GEP) problem and introduced two modifications, a Virtual Mapping Procedure (VMP) and controlled elitism, into NSGA-II. Ramesh et al. [21] used a modified version of NSGA-II for multi objective reactive power planning and applied the concept of TOPSIS for finding the best solution from the Pareto front obtained using MNSGA-II. In this work, we apply the NSGA-II algorithm to solve the disc brake design optimization problem with the two conflicting objectives of minimizing the mass of the brake and the stopping time of the brake. The disc brake optimization model has four design variables and five inequality constraints.
II. DISC BRAKES |
Modern motor cars are fitted with disc brakes instead of conventional drum type brakes. In the Santro and Maruti 800 cars, the front wheels are provided with disc brakes whereas the rear wheels are provided with drum brakes. A disc brake consists of a rotating disc and two friction pads which are actuated by a hydraulic braking system. The friction pads remain free on each side of the disc when the brakes are not applied; they rub against the disc when the brakes are applied to stop the vehicle. These brakes are applied in the same manner as hydraulic drum brakes, but the mechanism of stopping the vehicle is different from that of drum brakes.
Advantages of Disc Brakes:
(a) The main advantage of disc brakes is their resistance to wear, as the discs remain cool even after repeated brake applications.
(b) Brake pads are easily replaceable. |
(c) The condition of the brake pads can be checked without much dismantling of the brake system.
Disadvantages of Disc Brakes:
(a) More force needs to be applied, as disc brakes are not self-energizing.
(b) Pad wear is higher.
(c) Hand brakes are not effective if disc brakes are also used on the rear wheels (hand brakes work better with mechanical drum brakes).
III. OPTIMIZATION |
Optimization deals with the study of practical problems in which one has to minimize or maximize one or more objectives that are functions of some real or integer variables. This is carried out in a systematic way by choosing appropriate values of the real or integer variables within an allowed set. Given a defined domain, the main goal of optimization is to study the means of obtaining the best value of some objective function.
Single Objective Optimization: |
Single objective optimization is the study of practical problems in which one has to minimize or maximize a single objective function of some real or integer variables. The different optimization techniques found in the literature can be broadly classified into three categories:
• Calculus-based techniques. |
• Enumerative techniques. |
• Random techniques. |
Numerical methods, also called calculus-based methods, use a set of necessary and sufficient conditions that must be satisfied by the solution of the optimization problem. |
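As a minimal illustration of a calculus-based approach, the following Python sketch applies gradient descent to a hypothetical single-variable objective; the objective function, step size and iteration count are illustrative assumptions and are not part of the disc brake model.

# Gradient descent on an assumed objective f(x) = (x - 3)^2.
# The stationarity condition f'(x*) = 0 gives the minimizer x* = 3.
def f(x):
    return (x - 3.0) ** 2

def df(x):
    return 2.0 * (x - 3.0)    # analytical derivative

x = 0.0                       # initial guess; gradient methods are sensitive to this choice
for _ in range(100):
    x -= 0.1 * df(x)          # step against the gradient

print(round(x, 4))            # converges towards x* = 3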
Multi Objective Optimization: |
Multi objective optimization is the study of practical problems in which one has to minimize or maximize more than one objective function of some real or integer variables.
Multi objective optimization (MOO) recognizes that most practical problems invariably require a number of design criteria to be satisfied simultaneously. For such problems there is, in general, no single ideal "optimal" solution, but rather a set of Pareto-optimal solutions for which an improvement in one of the design objectives leads to a degradation in one or more of the remaining objectives. Such solutions are also known as noninferior or nondominated solutions to the MOO problem.
The concept of Pareto-optimality in the two-objective case is illustrated in Fig. Here, points M and N are two examples of nondominated solutions on the Pareto front; neither is preferred to the other. Point M has a smaller value of f2 than point N, but a larger value of f1.
Nondominated Sorting Genetic Algorithm (NSGA-II): |
The primary strength of NSGA-II lies in its ease of use and in its elitism, nondominated ranking and crowding distance mechanisms, which lead to rapid convergence to very high quality solutions. NSGA-II was proposed as a modification of NSGA to alleviate the three main difficulties associated with NSGA. It incorporates elitism, a fast nondominated sorting approach and a crowding distance operator that maintains diversity along the Pareto-optimal front. Elitism preserves the knowledge acquired during the execution of the algorithm by carrying the individuals with the best fitness forward in the population.
Initially, a random parent population Ppop0 is created. The population is sorted based on nondomination, and each solution is assigned a fitness equal to its nondomination level (1 is the best level); thus, minimization of fitness is assumed. Tournament selection, recombination and mutation operators are used to create an offspring population Off0 of size N. Further, the NSGA-II procedure can be outlined as in the sketch below.
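The following is a minimal self-contained Python sketch of the standard NSGA-II generational machinery described above: fast nondominated sorting, crowding distance and elitist truncation of the combined parent and offspring population. The toy random objective vectors and perturbation used at the end stand in for the real decoded designs and for the tournament selection, crossover and mutation operators; none of this is the authors' own implementation.

import random

def dominates(p, q):
    # p dominates q: no worse in every objective, strictly better in at least one.
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def fast_non_dominated_sort(pop):
    # Split population indices into fronts; front 0 holds the nondominated solutions.
    fronts, dominated_by, count = [[]], [[] for _ in pop], [0] * len(pop)
    for i, p in enumerate(pop):
        for j, q in enumerate(pop):
            if dominates(p, q):
                dominated_by[i].append(j)
            elif dominates(q, p):
                count[i] += 1
        if count[i] == 0:
            fronts[0].append(i)
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominated_by[i]:
                count[j] -= 1
                if count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]

def crowding_distance(front, pop):
    # Larger distance means a less crowded solution; boundary solutions get infinite distance.
    dist = {i: 0.0 for i in front}
    for m in range(len(pop[front[0]])):
        order = sorted(front, key=lambda i: pop[i][m])
        dist[order[0]] = dist[order[-1]] = float("inf")
        span = (pop[order[-1]][m] - pop[order[0]][m]) or 1.0
        for k in range(1, len(order) - 1):
            dist[order[k]] += (pop[order[k + 1]][m] - pop[order[k - 1]][m]) / span
    return dist

def survive(parents, offspring, pop_size):
    # Elitist environmental selection: keep the best pop_size members of parents + offspring.
    combined = parents + offspring
    chosen = []
    for front in fast_non_dominated_sort(combined):
        if len(chosen) + len(front) <= pop_size:
            chosen.extend(front)          # the whole front fits
        else:
            dist = crowding_distance(front, combined)
            front.sort(key=dist.__getitem__, reverse=True)
            chosen.extend(front[:pop_size - len(chosen)])
            break
    return [combined[i] for i in chosen]

# One toy generation on random (f1, f2) objective vectors (both minimized).
random.seed(0)
parents = [[random.random(), random.random()] for _ in range(10)]
offspring = [[abs(f1 + random.gauss(0, 0.1)), abs(f2 + random.gauss(0, 0.1))] for f1, f2 in parents]
parents = survive(parents, offspring, pop_size=10)
print(parents)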
IV. PROBLEM DESCRIPTION
Multi-objective disc-brake optimization problem: |
The multi-objective disc-brake optimization problem was solved by Osyczka and Kundu [22] using a plain stochastic method and a genetic algorithm, and they showed that the genetic algorithm gave better results than the plain stochastic method. The objectives of the problem are to minimize the mass of the brake and to minimize the stopping time. The disc brake optimization model has four design variables:
x1- inner radius of the discs, in mm |
x2- outer radius of the discs, in mm |
x3- engaging force, in N and |
x4- number of the friction surfaces (integer) |
The objective functions and constraints of the disc brake design optimization model provided by Osyczka and Kundu [22] are defined as follows:
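A Python sketch of this model, in the form in which the Osyczka-Kundu disc brake problem commonly appears in the benchmark literature; the numerical coefficients and the variable bounds given in the comments are assumptions drawn from that literature and should be checked against [22] before use.

# Design vector x = (x1, x2, x3, x4):
#   x1 - inner radius of the discs (mm), x2 - outer radius of the discs (mm),
#   x3 - engaging force (N), x4 - number of friction surfaces (integer).
# Assumed bounds: 55 <= x1 <= 80, 75 <= x2 <= 110, 1000 <= x3 <= 3000, 2 <= x4 <= 20.

def objectives(x):
    x1, x2, x3, x4 = x
    mass = 4.9e-5 * (x2**2 - x1**2) * (x4 - 1.0)                         # f1: brake mass
    stop_time = 9.82e6 * (x2**2 - x1**2) / (x3 * x4 * (x2**3 - x1**3))   # f2: stopping time
    return mass, stop_time

def constraints(x):
    # Five inequality constraints written as g(x) >= 0 (feasible when all are non-negative).
    x1, x2, x3, x4 = x
    g1 = (x2 - x1) - 20.0
    g2 = 30.0 - 2.5 * (x4 + 1.0)
    g3 = 0.4 - x3 / (3.14 * (x2**2 - x1**2))
    g4 = 1.0 - 2.22e-3 * x3 * (x2**3 - x1**3) / (x2**2 - x1**2) ** 2
    g5 = 2.66e-2 * x3 * x4 * (x2**3 - x1**3) / (x2**2 - x1**2) - 900.0
    return [g1, g2, g3, g4, g5]

x = (60.0, 90.0, 2000.0, 10.0)   # an arbitrary feasible trial design, not an optimum
print(objectives(x), all(g >= 0.0 for g in constraints(x)))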
V. RESULTS AND DISCUSSION |
For demonstration purposes, the NSGA-II approach is applied to the constrained optimization problem consisting of two objectives: the first is the minimization of the mass of the brake and the second is the minimization of the stopping time. The best results obtained by NSGA-II over 10 trials are tabulated. The Matlab 7.9 software package was used on a Pentium IV PC with a 3.2 GHz processor and 3 GB of RAM.
Optimal parameter combinations for NSGA-II were determined experimentally by conducting a series of runs with different parameter settings before the actual runs. The optimal parameters are given in Table. The population size is set to 50 individuals and the maximum number of generations (used as the stopping condition) is set to 2000, so a total of 100000 fitness function evaluations were made in each run.
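For readers who wish to reproduce a comparable run, the sketch below sets the same population size and generation count using the open-source pymoo library; this is an assumption-laden substitute for the authors' Matlab implementation (module paths and the n_ieq_constr argument follow pymoo 0.6, the model coefficients repeat the commonly cited benchmark form used above, and the integer nature of x4 is ignored here for simplicity).

import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize

class DiscBrake(ElementwiseProblem):
    # Commonly cited benchmark form of the disc brake model; pymoo expects constraints as g(x) <= 0.
    def __init__(self):
        super().__init__(n_var=4, n_obj=2, n_ieq_constr=5,
                         xl=np.array([55.0, 75.0, 1000.0, 2.0]),
                         xu=np.array([80.0, 110.0, 3000.0, 20.0]))

    def _evaluate(self, x, out, *args, **kwargs):
        x1, x2, x3, x4 = x
        f1 = 4.9e-5 * (x2**2 - x1**2) * (x4 - 1.0)
        f2 = 9.82e6 * (x2**2 - x1**2) / (x3 * x4 * (x2**3 - x1**3))
        out["F"] = [f1, f2]
        out["G"] = [20.0 - (x2 - x1),
                    2.5 * (x4 + 1.0) - 30.0,
                    x3 / (3.14 * (x2**2 - x1**2)) - 0.4,
                    2.22e-3 * x3 * (x2**3 - x1**3) / (x2**2 - x1**2) ** 2 - 1.0,
                    900.0 - 2.66e-2 * x3 * x4 * (x2**3 - x1**3) / (x2**2 - x1**2)]

# Population of 50 evolved for 2000 generations, i.e. 100000 evaluations per run.
res = minimize(DiscBrake(), NSGA2(pop_size=50), ("n_gen", 2000), seed=1, verbose=False)
print(res.F[:5])   # a few nondominated (mass, stopping time) pairs from the obtained front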
The figure shows that the applied MOEA is able to generate the Pareto front in a single simulation run. The extreme points of the Pareto front obtained by NSGA-II are given in Table.
VI. CONCLUSIONS
The bi-objective design optimization of the disc brake problem was solved using the fast, elitist NSGA-II algorithm by considering two objective functions. The 50 Pareto-front solutions obtained after 2000 generations are plotted in a graph, which shows good optimal solutions. The smooth curve in the graph indicates that the results are better than those obtained by other optimization techniques.
References |
|