ABSTRACT

Title of Dissertation: PARAMETER SENSITIVITY MEASURES FOR SINGLE OBJECTIVE, MULTI-OBJECTIVE, AND FEASIBILITY ROBUST DESIGN OPTIMIZATION

Subroto Gunawan, Doctor of Philosophy, 2004

Dissertation directed by: Shapour Azarm, Professor, Department of Mechanical Engineering

Uncontrollable variations are unavoidable in engineering design. If ignored, such variations can seriously deteriorate the performance of an optimum design. Robust optimization is an approach that optimizes the performance of a design and at the same time reduces its sensitivity to variations. The literature reports numerous robust optimization techniques. In general, these techniques have three main shortcomings: (i) they presume probability distributions for parameter variations, which might be invalid, (ii) they limit parameter variations to a small (linear) range, and (iii) they use gradient information of objective/constraint functions. These shortcomings severely restrict the applications of the techniques reported in the literature.

The objective of this dissertation is to present a robust optimization method that addresses all of the above-mentioned shortcomings. In addition to being efficient, the robust optimization method of this dissertation is applicable to both single- and multi-objective optimization problems.

There are two steps in our robust optimization method. In the first step, the method measures the robustness of a design alternative. The robustness measure is developed based on the concept that associated with each design alternative there is a sensitivity region in parameter variation space that determines how much variation the design alternative can absorb. The larger the size of this region, the more robust the design. The size of the sensitivity region is estimated by a hyper-sphere, using a worst-case approach. The radius of this hyper-sphere is obtained by solving an inner optimization problem.
By comparing this radius to the actual range of parameter variations, it is determined whether or not a design alternative is robust. This comparison is added, in the second step, as an additional constraint to the original optimization problem. An optimization technique is then used to solve this problem and find a robust optimum design solution.

As a demonstration, the robust optimization method is applied to numerous numerical and engineering examples. The results obtained are numerically analyzed and compared to nominal optimum designs, and to optimum designs obtained by a few well-known methods from the literature. The comparison study verifies that the solutions obtained by our method are indeed robust, and that the method is efficient.

PARAMETER SENSITIVITY MEASURES FOR SINGLE OBJECTIVE, MULTI-OBJECTIVE, AND FEASIBILITY ROBUST DESIGN OPTIMIZATION

by Subroto Gunawan

Dissertation submitted to the Faculty of the Graduate School of the University of Maryland, College Park in partial fulfillment of the requirements for the degree of Doctor of Philosophy, 2004

Advisory Committee:
Professor Shapour Azarm, Advisor/Chair
Professor Amr Baz
Professor Roberto Celi
Assistant Professor Steven A. Gabriel
Associate Professor Jeffrey Herrmann
Associate Professor Peter Sandborn

© Copyright by Subroto Gunawan 2004

ACKNOWLEDGEMENTS

First of all, I would like to thank my advisor, Dr. Shapour Azarm, for his guidance and support throughout my graduate study. I also would like to thank the committee members, Dr. Baz, Dr. Celi, Dr. Gabriel, Dr. Herrmann, and Dr. Sandborn, for their input and comments. Secondly, I would like to thank our sponsor, Dr. Ng from ONR, whose generous support made this research possible. Special thanks to Mr. Art Boyars from NSWC Indian Head, who on numerous occasions helped and provided us with the information necessary to formulate the payload optimization problem. His comments and suggestions throughout the research have been invaluable as well.
Thanks also to Dr. Magrab, who provided support during the first year of my graduate study. The research work presented in this thesis was also supported in part by the National Science Foundation through Grant DMI 0200029. Such support does not constitute an endorsement by the funding agency of the opinions expressed in the paper. Thirdly, I want to thank my colleagues, past and present, from the Design Decision Support Laboratory: Mr. Kumar Maddulapalli, Mr. Babak Besharati, Mr. Genzi Li, Dr. Ali Farhang-Mehr, Dr. Jin Wu, and others who have provided many valuable inputs and suggestions. I also want to thank my parents and my family for their continuous support during my study. Very special thanks to my girlfriend, Agnes Susianti, whose love and understanding helped me carry on during difficult times. Lastly, I wish to thank all my friends, Koswara, Franky, Andre, Steven, Ronald, and many others, for their support and encouragement.

TABLE OF CONTENTS

List of Figures
List of Tables
Nomenclature

CHAPTER 1: INTRODUCTION
1.1. MOTIVATION AND OBJECTIVE
1.2. RESEARCH COMPONENTS
1.2.1. Research Component 1: Single Objective Robust Optimization
1.2.2. Research Component 2: Multi-Objective Robust Optimization
1.2.3. Research Component 3: Feasibility Robust Optimization
1.3. ASSUMPTIONS
1.4. ORGANIZATION OF DISSERTATION

CHAPTER 2: DEFINITIONS AND PREVIOUS WORK
2.1. INTRODUCTION
2.2. DEFINITIONS AND TERMINOLOGIES
2.3. OVERVIEW OF PREVIOUS WORK
2.3.1. Single Objective Robust Optimization
2.3.2. Multi-Objective Robust Optimization
2.3.3. Feasibility Robust Optimization

CHAPTER 3: SINGLE OBJECTIVE ROBUST OPTIMIZATION
3.1. INTRODUCTION
3.2. TWO-SIDED SENSITIVITY MEASURE
3.2.1. Sensitivity Set
3.2.2. Sensitivity Region
3.2.3. Directional Sensitivity
3.2.4. Worst Case Sensitivity Region
3.2.5. Normalization
3.3. ROBUST OPTIMIZATION
3.3.1. Robustness Index
3.3.2. Constrained Robustness Approach
3.4. COMPARISON STUDY
3.4.1. Wine-Bottle Function
3.4.2. Design of a Three-Bar Truss
3.4.3. Design of a Welded Beam
3.4.4. Design of a Compression Spring
3.5. SUMMARY

CHAPTER 4: MULTI-OBJECTIVE ROBUST OPTIMIZATION
4.1. INTRODUCTION
4.2. BASIC CONCEPTS
4.2.1. Multi-Objective Robustness
4.2.2. Multi-Objective Robust Optimality
4.3. TWO-SIDED SENSITIVITY OF MULTIPLE FUNCTIONS
4.3.1. Generalized Sensitivity Set and Sensitivity Region
4.3.2. Generalized Worst Case Sensitivity Region
4.3.3. Normalization
4.4. ROBUST OPTIMIZATION
4.5. COMPARISON STUDY
4.5.1. Numerical Example
4.5.2. Design of a Vibrating Platform
4.5.3. Design of a Speed Reducer
4.5.4. Design of a Power Electronic Module
4.6. SUMMARY

CHAPTER 5: FEASIBILITY ROBUST OPTIMIZATION
5.1. INTRODUCTION
5.2. ONE-SIDED SENSITIVITY MEASURE
5.2.1. Single Constraint
5.2.2. Multiple Constraints
5.3. ROBUST OPTIMIZATION
5.3.1. Determination of Increment Limit
5.3.2. Normalization and Feasibility Robustness Index
5.4. COMPARISON STUDY
5.4.1. Numerical Example
5.4.2. Design of an Explosive Actuated Cylinder
5.4.3. Design of a Belleville Spring
5.4.4. Design of a Control Valve Actuator Linkage
5.5. SUMMARY

CHAPTER 6: DISCUSSIONS
6.1. INTRODUCTION
6.2. OBJECTIVE AND FEASIBILITY ROBUST OPTIMIZATION
6.2.1. Design of a Payload for an Undersea Autonomous Vehicle (UAV)
6.3. ONE-SIDED SENSITIVITY FOR OBJECTIVE ROBUSTNESS
6.4. ASYMMETRICAL TWO-SIDED SENSITIVITY MEASURE
6.5. ASYMMETRICAL PARAMETER VARIATIONS
6.6. ROBUSTNESS INDEX VS. ROBUSTNESS PROBABILITY
6.7. SUMMARY

CHAPTER 7: CONCLUSIONS
7.1. CONCLUDING REMARKS
7.1.1. Verification
7.1.2. Computational Efficiency
7.1.3. Advantages and Disadvantages
7.2. CONTRIBUTIONS
7.3. FUTURE RESEARCH DIRECTIONS

REFERENCES
LIST OF FIGURES

Figure 1.1: Organization of dissertation.
Figure 3.1: Sensitivity region of a design alternative.
Figure 3.2: Geometric condition for a boundary point.
Figure 3.3: One-dimensional SR example of a discontinuous function.
Figure 3.4: One-dimensional example of equality inner point condition.
Figure 3.5: Boundedness of a sensitivity region.
Figure 3.6: Example of a condition that causes SR to be unbounded.
Figure 3.7: Example of a disjointed sensitivity region.
Figure 3.8: (a) Sensitivity region of the piston pin. (b) Surface plot of ΔW(Δρ, Δr).
Figure 3.9: Example of a directional sensitivity.
Figure 3.10: SR of the aluminum and stainless steel pins.
Figure 3.11: Most and least sensitive directions of a SR.
Figure 3.12: An example where best-case direction has no meaning.
Figure 3.13: Worst Case Sensitivity Region.
Figure 3.14: SR and WCSR of the (a) steel pin, and (b) aluminum pin.
Figure 3.15: Mathematical WCSR of the (a) steel pin, and (b) aluminum pin.
Figure 3.16: Equal-scaled SR and WCSR of (a) steel and (b) aluminum pins.
Figure 3.17: Normalized SR and WCSR of (a) steel pin, and (b) aluminum pin.
Figure 3.18: Normalized Δp_0 ranges.
Figure 3.19: Comparison between η and Δp_0.
Figure 3.20: Effect of adding robustness constraint.
Figure 3.21: Surface plot of the wine-bottle function.
Figure 3.22: Distributions of robust optima on a contour plot.
Figure 3.23: Distributions of robust optima on a cross-section plot.
Figure 3.24: A three-bar truss.
Figure 3.25: Sensitivity analysis of the three-bar truss optima.
Figure 3.26: SR and WCSR of the nominal optimum.
Figure 3.27: SR and WCSR of the robust optimum.
Figure 3.28: SR and WCSR of the mean optimum.
Figure 3.29: SR and WCSR of the mean+std optimum.
Figure 3.30: A welded beam assembly.
Figure 3.31: SR and WCSR of the optima: (a) nominal, (b) robust.
Figure 3.32: Sensitivity analysis of the welded beam optima.
Figure 3.33: Sensitivity analysis of the welded beam optima.
Figure 3.34: A compression spring.
Figure 3.35: Sensitivity analysis of the nominal optimum.
Figure 3.36: Sensitivity analysis of the robust optimum.
Figure 3.37: Sensitivity analysis of the gradient optimum.
Figure 4.1: Graphical illustration of multi-objective robustness.
Figure 4.2: Comparison between point and set robustness.
Figure 4.3: Comparison between nominal and robust Pareto set.
Figure 4.4: Graphical definition of the generalized SR.
Figure 4.5: (a) Generalized SR, and (b) its corresponding Δf_0 ranges.
Figure 4.6: Generalized WCSR.
Figure 4.7: Graphical interpretation of the simplified constraint.
Figure 4.8: A piston-pin-arm assembly.
Figure 4.9: Overall SR and WCSR of the stainless steel pin.
Figure 4.10: Constrained MORO approach.
Figure 4.11: Nominal and robust Pareto set of the numerical example.
Figure 4.12: Behavior of robust Pareto set as constraint is relaxed.
Figure 4.13: Robust solutions using Monte Carlo simulations.
Figure 4.14: Sensitivity analysis of the robust 1 design.
Figure 4.15: Sensitivity analysis of the robust 2 design.
Figure 4.16: Sensitivity analysis of the nominal 1 design.
Figure 4.17: Sensitivity analysis of the nominal 2 design.
Figure 4.18: Sensitivity analysis of the nominal 3 design.
Figure 4.19: A pinned-pinned vibrating platform.
Figure 4.20: Pareto sets of the vibrating platform example.
Figure 4.21: Sensitivity of: (a) nominal, (b) robust, and (c) probabilistic designs.
Figure 4.22: SR and WCSR of: (a) nominal, (b) robust, and (c) probabilistic designs.
Figure 4.23: A speed reducer.
Figure 4.24: Nominal and robust Pareto solutions of the speed reducer problem.
Figure 4.25: Sensitivity analysis of the robust-1 (left) and robust-2 (right) designs.
Figure 4.26: Sensitivity analysis of the nominal-1 (left) and nominal-2 (right) designs.
Figure 4.27(a): SR and WCSR of the nominal-1 design.
Figure 4.27(b): SR and WCSR of the robust-1 design.
Figure 4.28: A power electronic module.
Figure 4.29: Nominal and robust Pareto solutions of the power electronic problem.
Figure 4.30: Sensitivity analysis of the nominal and robust designs.
Figure 5.1: Feasibility Sensitivity Region for the inequality constraint.
Figure 5.2: Feasibility WCSR for the inequality constraint.
Figure 5.3: Overall FSR and FWCSR for the two constraints.
Figure 5.4: Graphical derivation of the simplified constraint.
Figure 5.5: Objective and constraint contours of the numerical example.
Figure 5.6: Sensitivity analysis of the optimum designs.
Figure 5.7: Plot of Δx_0 vs. Prob of the FWCSR optima.
Figure 5.8: Plot of Δx_0 vs. Prob of the gradient and moment matching optima.
Figure 5.9: Plot of Δx_0 vs. f* of the four robust optimization methods.
Figure 5.10: An explosive actuated cylinder.
Figure 5.11: Sensitivity analysis of the nominal optimum.
Figure 5.12: Sensitivity analysis of the probabilistic optima.
Figure 5.13: Sensitivity analysis of the robust FWCSR optima.
Figure 5.14: A Belleville spring.
Figure 5.15: Plot of f vs. η_g of the Belleville spring.
Figure 5.16: Sensitivity analysis of the nominal and min-max optima.
Figure 5.17: Sensitivity analysis of the robust (0.3) and robust (0.6) optima.
Figure 5.18: Sensitivity analysis of the robust (0.8) and robust (1.0) optima.
Figure 5.19: A control valve actuator linkage.
Figure 5.20: Forces acting on the linkage.
Figure 5.21: Trade-off frontier of the actuator linkage.
Figure 5.22: FSR and FWCSR of nominal and robust optima.
Figure 6.1: Intersection of SR and FSR.
Figure 6.2: Pareto sets of the payload problem.
Figure 6.3: Sensitivity analysis of the nominal optimum payload.
Figure 6.4: Sensitivity analysis of the robust optimum payload.
Figure 6.5: Sensitivity analysis of the probabilistic optimum payload.
Figure 6.6: Comparison between the estimated and actual P_s.

LIST OF TABLES

Table 3.1: Optimum designs of the three-bar truss.
Table 3.2: Optimum designs of the welded beam.
Table 3.3: Constraints of the robust optimum design.
Table 3.4: Optimum designs of the compression spring.
Table 4.1: Material properties.
Table 4.2: Sensitivity of the constraints.
Table 4.3: Nominal and robust designs to be analyzed.
Table 5.1: Optimum designs of the numerical example.
Table 5.2: Optimum designs of the explosive actuated cylinder.
Table 5.3: Constraints of the optimum designs.
Table 5.4: Probability of constraint satisfaction of the optima.
Table 5.5: Optimum designs of the Belleville spring.
Table 5.6: Constraint values of the Belleville spring optima.
Table 5.7: Optimum designs of the control valve actuator linkage.
Table 5.8: Sensitivity analysis of the optima.
Table 5.9: Constraint values of the nominal and robust optima.
Table 6.1: Objective and constraint values of the optima.
Table 6.2: Design variables of the optima.
Table 6.3: Probability values for uniform distribution.
Table 6.4: Probability values for normal distribution.
Table 7.1: Summary of sensitivity analysis results.
Table 7.2: Summary of average number of function evaluations.
Table 7.3: Summary of computational time.

NOMENCLATURE

FSR   Feasibility Sensitivity Region
FWCSR   Feasibility Worst Case Sensitivity Region
GA   Genetic Algorithm
MC   Monte Carlo simulation
MOGA   Multi Objective Genetic Algorithm
MORO   Multi Objective Robust Optimization
SR   Sensitivity Region
WCSR   Worst Case Sensitivity Region
Δf   variations in objectives
Δf_0   maximum acceptable variations in the objectives
Δg   variations in constraints
Δg_0   maximum acceptable increment in the constraints
Δp   variations in design parameters
Δp_0   known ranges of parameter variations
Δp_0^-   lower bound of known parameter variation ranges
Δp_0^+   upper bound of known parameter variation ranges
p̄   normalized design parameters
f   design objectives
g   inequality design constraints
G   number of design parameters
h   equality design constraints
η^{+/-}   asymmetric robustness index
η_0^{+/-}   desired level of asymmetric objective robustness
η_0^+   desired level of one-sided objective robustness
η^+   one-sided robustness index
η   robustness index
η_0   desired level of objective robustness specified by designer
η_g   feasibility robustness index
η_{g,0}   desired level of feasibility robustness specified by designer
η_min   overall robustness index
J   number of inequality constraints
K   number of equality constraints
M   number of design objectives
N   number of design variables
p   design parameters
p_0   nominal value of design parameters (values used in the optimization)
P_s   probability of constraint satisfaction
R̄_f   normalized WCSR radius
R_f^{+/-}   radius of asymmetrical WCSR
R_f^+   radius of one-sided WCSR
R   radius of the worst-case estimate of the SR and FSR intersection
R_f   radius of WCSR
R_g   radius of FWCSR
S_f   sensitivity set
S_g   feasibility sensitivity set
x   design variables
x_0   a particular instance of design variables

CHAPTER 1

INTRODUCTION

1.1. MOTIVATION AND OBJECTIVE

Back in the late 1970s, there was a tile manufacturer in Japan called Ina Tile Company. One day the company discovered that an uneven temperature profile of its kilns was causing unacceptable variations in the size of its manufactured tiles. An obvious way to solve the problem was to modify the kilns by adding thermocouples and temperature controllers to monitor and correct the malfunction. However, this modification would have been very expensive. Instead, the company chose to make an inexpensive modification to its tile design to reduce the sensitivity of the manufactured tiles to temperature variations. Using statistically designed experiments, they found that increasing the lime content of their clay mix from 1% to 5% reduced the variations in their tile size by a factor of 10 (Leon et al., 1987).

Uncontrollable variations and noise are unavoidable in engineering design.
Temperature variations, deviation of material properties from specifications, and dimensional tolerances of a design are just a few examples of uncontrollable parameter variations. When designing a system, these variations cannot and should not be ignored because they can seriously affect the performance of a design. As in the Ina Tile Company example above, one way to counter the effects of these variations is to try to reduce or eliminate the parameter variations themselves. However, this approach is usually very difficult to undertake and/or expensive to implement. Furthermore, it is quite possible that such variations will re-appear at some other time in the future. A better approach is to try to reduce the sensitivity of the design to the variations so that deteriorations caused by these variations are kept within an acceptable level.

Dr. Genichi Taguchi from Japan is commonly credited with introducing the idea of reducing the sensitivity of a design, a process he called parameter design. Since then, this "least-sensitive design" idea has been developed much further, and later the term "robust design" was coined to refer to a design alternative that is insensitive to parameter variations. With the introduction of design optimization into system design, it was not long before the idea of a robust and optimum design surfaced, and the concept of robust optimization became popular among researchers in the field. Following conventional terminology, Parkinson et al. (1993) later introduced the terms "objective robustness" and "feasibility robustness" to refer to robustness with respect to objective and constraint functions in an optimization problem, respectively.

Many robust optimization methods have been developed in the literature, as will be reviewed in detail in Chapter 2. However, the applicability of these methods is limited to optimization problems with small variations, and continuous and/or differentiable objective and constraint functions.
In addition, these methods typically presume a certain form of probability distribution function for the uncertain parameters, and are applicable to single objective optimization problems only. The computational cost of these methods also often limits their application to relatively simple optimization problems.

In practice, real-world optimization problems rarely exhibit the properties mentioned above. The functions involved in real-world optimization problems are typically non-differentiable. The parameter variations of interest are very often large, beyond the validity of gradient estimation. The probability distribution of the uncertain parameters is also generally not known, or is difficult and expensive to estimate accurately. Many problems have multiple objectives that need to be considered simultaneously. In our case study, for example, the design of a payload for an Undersea Autonomous Vehicle (Chapter 6), the objective and constraint functions are discontinuous, the parameter variations are large, and the probability distribution of parameter variations is unknown. This problem also has multiple objectives, instead of just one, for which we want to find the robust optimum solutions. To make matters worse, in reality it is computationally expensive to compute the functions involved in the problem, so computational efficiency of the method used is important.

The overall objective of this dissertation is to develop an efficient robust optimization method for both single- and multi-objective design optimization problems to obtain optimum designs that are robust with respect to both objectives and constraints, without having to: (1) presume a probability distribution of the parameter variations, (2) limit parameter variations to a small (linear) range, and (3) use the gradient information of objective/constraint functions.
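The gradient-free, worst-case flavor of this objective can be illustrated with a small sketch: estimate the smallest parameter perturbation magnitude ||Δp|| at which the objective change first exceeds an acceptable limit Δf_0, using only function evaluations (random directions plus bisection). The routine `wcsr_radius`, its sampling scheme, and the quadratic test function are illustrative assumptions of ours, not the actual inner-optimization method developed later in the dissertation.

```python
import numpy as np

def wcsr_radius(f, p0, delta_f0, r_max=1.0, n_dirs=200, tol=1e-4, seed=0):
    """Estimate the smallest ||dp|| at which |f(p0+dp) - f(p0)| first
    exceeds delta_f0, without gradients: sample random unit directions
    and bisect along each for the first violating radius."""
    rng = np.random.default_rng(seed)
    f0 = f(p0)
    best = r_max  # if no direction violates within r_max, the design absorbs the full range
    for _ in range(n_dirs):
        u = rng.standard_normal(len(p0))
        u /= np.linalg.norm(u)          # random unit direction in parameter space
        if abs(f(p0 + r_max * u) - f0) < delta_f0:
            continue                    # variation stays acceptable out to r_max here
        lo, hi = 0.0, r_max
        while hi - lo > tol:            # bisect for the first violating radius along u
            mid = 0.5 * (lo + hi)
            if abs(f(p0 + mid * u) - f0) >= delta_f0:
                hi = mid
            else:
                lo = mid
        best = min(best, hi)
    return best

# Toy usage: f(p) = p1^2 + p2^2 around p0 = (1, 1) with delta_f0 = 0.5.
radius = wcsr_radius(lambda p: p[0]**2 + p[1]**2, np.array([1.0, 1.0]), 0.5)
```

For this toy function the worst direction is along (1, 1)/sqrt(2), where the exact threshold radius is about 0.167, so the sampled estimate lands close to that value; a larger radius means a more robust design point.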
Before we continue, it is important to distinguish between the concept of robust optimization and Post Optimality Sensitivity Analysis (POSA) (e.g., Fiacco, 1983). Both concepts deal with the uncertainties and variabilities that exist in an optimization problem. However, POSA is an a posteriori approach: it determines the sensitivity and stability of the solution due to variability after the optimization process is completed. The goal of POSA is to provide the designer with information regarding the behavior of the optimum solution and the active constraints if some of the parameters vary. It is hoped that, based on this information, the designer can then take appropriate measures to maintain the performance of the optimum design. It is essentially a passive approach to optimization under uncertainty. In contrast, robust optimization is an active approach to dealing with uncertainty, in that the sensitivity of the design is considered during the optimization process. The goal of robust optimization is not to inform the designer of how to guard the optimum design against variations, but rather to reduce the sensitivity of the optimum design obtained so that there is little need for the designer to devise corrective measures when variability arises.

1.2. RESEARCH COMPONENTS

To achieve the overall objective, we developed a step-by-step approach for the research in this dissertation. We first developed three research components for different types of robustness. These research components are: (1) single objective robustness, (2) multi-objective robustness, and (3) feasibility robustness. Next, we combined these different types of robustness to obtain a robust optimization method that accounts for both objective and feasibility robustness. In the next three sub-sections, an overview and objective of each research component is given.

1.2.1.
Research Component 1: Single Objective Robust Optimization

The first research component is concerned with variations in the objective value of an optimum design due to uncontrollable variations in the parameters. This so-called "objective robustness" of an optimum design is important because if its objective value changes significantly, then the performance of the design can degrade so much that it may be deemed unsatisfactory. Objective robustness of an optimum design is especially important if the design is part of a larger system, because deviations in the design's performance could affect the rest of the system. The objective of the first research component is to develop a novel robust optimization method for single objective optimization problems that can obtain an optimum design solution that is robust in terms of the objective function.

1.2.2. Research Component 2: Multi-Objective Robust Optimization

The second research component is concerned with performance variations of an optimum design when there are multiple objectives involved. Similar to the first research component, here we also look into the changes in the objective values of a design due to variations in the parameters. However, since there are now multiple objectives, we have to examine the performance variation of the design with respect to each objective and then, based on it, determine the overall robustness of the design. The objective of the second research component is twofold. First, since the notion of a design that is optimum and robust for multiple objectives has not yet been defined in the literature, this research component aims to introduce and develop the concepts of "multi-objective robustness" and "multi-objective robust optimality" of a design alternative. Second, this research component seeks to develop a novel method for robust optimization of a design in multi-objective optimization problems.

1.2.3.
Research Component 3: Feasibility Robust Optimization

The third research component is concerned with the feasibility of an optimum design under uncontrollable variations in parameters. Typically, an optimum solution to an engineering optimization problem is a boundary optimum, i.e., at the optimum at least one of the constraints is active (Papalambros and Wilde, 2000). Because of this, if some of the problem's parameters vary, the optimum design may no longer be feasible. A design that is always feasible even if there are parameter variations is called "feasibly robust," and a method to obtain a feasibly robust solution is called a "feasibility robust optimization" method. The objective of the third research component is to develop a novel and efficient feasibility robust optimization method to obtain an optimum design that is always feasible regardless of parameter variations.

1.3. ASSUMPTIONS

In developing our robust optimization method, we make the following assumptions:

- For objective robust optimization, we assume that there exists a trade-off between the objective values of a design and its robustness. If such a trade-off does not exist, then an optimum design is also a robust design, and there is no need to conduct robust optimization.
- For feasibility robust optimization, we assume that the optimum is on (or near) the boundary of the feasible domain. If the optimum is well inside the feasible domain, then most likely that optimum is already feasibly robust, and we do not need to conduct robust optimization.
- We assume that the ranges of parameter variations are known a priori, and that they are symmetric. (Robust optimization with asymmetric parameter variations will be discussed briefly in Chapter 6.)

1.4. ORGANIZATION OF DISSERTATION

The rest of the dissertation is organized as follows.
Chapter 2 gives the definitions of concepts and terminologies used throughout the dissertation, as well as a comprehensive review of related previous work in the literature. We develop the method for objective robust optimization of a single objective optimization problem in Chapter 3 (Research Component 1), and extend it to multi-objective problems in Chapter 4 (Research Component 2). In Chapter 5, we develop a method for feasibility robust optimization (Research Component 3). Chapter 6 presents our combined objective and feasibility robust optimization method. To demonstrate the applications of our method, several numerical and engineering examples are given in Chapters 3 through 6. Chapter 7 concludes the dissertation with some remarks as well as a discussion of the contributions of the dissertation and potential future research directions. After reading this chapter and the next, we recommend that readers continue with Chapter 3 first, because it contains the bulk of our research results, which become the foundation of Chapters 4, 5, and 6. Chapters 4 and 5 may be read separately; however, Chapter 6 should be read after Chapters 3, 4, and 5. Figure 1.1 shows the organization and flow of information in this dissertation.

Figure 1.1: Organization of dissertation (Chapter 1: motivation and overall objective, research components, assumptions; Chapter 2: definitions and terminologies, review of related works; Chapter 3: single objective robust optimization; Chapter 4: multi-objective robust optimization; Chapter 5: feasibility robust optimization; Chapter 6: combined objective and feasibility robust optimization; Chapter 7: conclusions, contributions, future research directions).

CHAPTER 2
DEFINITIONS AND PREVIOUS WORK

2.1. INTRODUCTION

In this chapter, we provide several definitions and terminologies that will be used throughout the dissertation.
In addition, we give a comprehensive review of previous work in the literature related to single objective, multi-objective, and feasibility robust optimization concepts and methods.

2.2. DEFINITIONS AND TERMINOLOGIES

A general single objective optimization problem can be formulated as shown in Eq. (2.1) below.

    minimize_x   f(x, p)
    subject to   g_j(x, p) ≤ 0,   j = 1, ..., J
                 h_k(x, p) = 0,   k = 1, ..., K        (2.1)

Here, f is the objective function to be optimized, x = [x_1, ..., x_N]^t is the design variable vector (with the superscript "t" referring to the transpose of the row vector), and p = [p_1, ..., p_G]^t is the set of parameters. For practical reasons, x and p are restricted to real values. The problem has J inequality constraints, g_j, j = 1, ..., J, and K equality constraints, h_k, k = 1, ..., K.

In this dissertation, the notation p represents problem factors that have variability, including design variables. Following this notation, if there are variations in some of the design variables, then a subset of x belongs to p, i.e., these are the design factors that we control during the optimization of f but that have variability. Some researchers prefer to differentiate between variations in x and variations in p, the so-called type-1 and type-2 variations (Chen et al., 1996; Kalsi et al., 2001). For simplicity, we do not make that distinction. Also, the parameter value p = p_0 that we use to optimize a design is called the nominal parameter value. Because of noise and uncertainty, parameter values vary by some amount, Δp = (Δp_1, ..., Δp_G)^t, and in turn these variations affect the objective and constraint values of a design.

The goal of objective robust optimization is to obtain a design variable vector x_R whose objective value f(x_R, p) is not only minimum but also remains within an acceptable bound when p varies. In other words, the objective value of the design is insensitive to variations in p.
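To make the formulation in Eq. (2.1) and the notion of objective insensitivity concrete, here is a minimal sketch with a made-up two-variable objective, one inequality constraint, and a nominal parameter vector p_0. All functions and numbers below are our own illustrations, not from the dissertation.

```python
# Hypothetical instance of Eq. (2.1): two design variables, two parameters,
# one inequality constraint. All functions and numbers are illustrative.
def f(x, p):
    # objective f(x, p): design variables x, parameters p
    return (x[0] - p[0]) ** 2 + (x[1] - p[1]) ** 2

def g1(x, p):
    # inequality constraint, feasible when g1(x, p) <= 0
    return x[0] + x[1] - p[0] - p[1]

def is_feasible(x, p, constraints, tol=1e-9):
    """A design x is feasible if every g_j(x, p) <= 0."""
    return all(g(x, p) <= tol for g in constraints)

def objective_change(x, p0, dp):
    """f(x, p0 + dp) - f(x, p0): the quantity objective robustness bounds."""
    p = tuple(p0i + dpi for p0i, dpi in zip(p0, dp))
    return f(x, p) - f(x, p0)

p0 = (1.0, 2.0)          # nominal parameter values p = p0
x = (0.5, 1.5)
print(is_feasible(x, p0, [g1]))                          # True
print(round(objective_change(x, p0, (0.1, 0.0)), 4))     # 0.11
```

Objective robust optimization, in these terms, asks that `objective_change` stay within a designer-specified bound for all admissible dp.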
The goal of feasibility robust optimization is to obtain a design variable vector x_R whose inequality constraint values g_j(x_R, p), j = 1, ..., J, are always feasible regardless of Δp variations. Feasibility robust optimization is concerned only with inequality constraints (i.e., equality constraints are not considered). This is because equality constraints are "hard" to consider, i.e., unless the Δp variations are such that h_k(x, p + Δp) = 0 for all k = 1, ..., K, there is no way to guarantee that these equality constraints will always be satisfied.

When there is more than one objective function to minimize, the problem becomes a multi-objective optimization problem, as shown in Eq. (2.2).

    minimize_x   f(x, p) = [f_1, f_2, ..., f_M]
    subject to   g_j(x, p) ≤ 0,   j = 1, ..., J
                 h_k(x, p) = 0,   k = 1, ..., K        (2.2)

The formulation in Eq. (2.2) is the same as that of a single objective optimization problem, except that now we have M > 1 objectives to minimize simultaneously. Here, we assume that the number of objectives is finite (i.e., M < ∞). We also assume that at least two of the objectives are conflicting, i.e., for a given design, as we decrease (improve) the value of one objective, the value of at least one other objective increases (worsens). Because of this trade-off among the objectives, there is generally a set of optimum solutions to the problem in Eq. (2.2). This set is called a Pareto set (see definitions below), and the design solutions in the set are called Pareto designs. The Pareto set is a trade-off set, meaning that no design in the set dominates, or is better than, the other designs in the set. Extensive reviews of multi-objective optimization concepts and methods are given by Miettinen (1999), and of evolutionary multi-objective optimization by Deb (2001) and Coello Coello et al. (2002). When there are variations in design parameters (i.e., Δp), some or all of the M objectives will be affected.
The goal of multi-objective robust optimization is to obtain a design variable vector x_R whose objective values f(x_R, p) are Pareto optimum and, at the same time, are insensitive to these Δp variations for all objectives. Next, we provide several definitions and terminologies used in this dissertation.

Objective space (f-space): An M-dimensional space in which the coordinate axes are the objective values.

Parameter variation space (Δp-space): A G-dimensional space in which the coordinate axes are the parameter variation (Δp) values.

Normalized parameter variation space (Δp̄-space): A G-dimensional Δp-space in which all the entries are normalized by the known ranges of parameter variations (Δp_0), i.e., Δp̄_i = Δp_i/Δp_{i,0} for all i = 1, ..., G.

Maximum acceptable performance variation (Δf_0 = [Δf_{1,0}, ..., Δf_{M,0}]^t): The maximum acceptable change in the objective function values f. These values are determined by the designer. The maximum acceptable change may also be given as a percentage value.

Inferiority, non-inferiority, and dominance: In multi-objective minimization, a feasible design point x_a is said to be inferior with respect to (w.r.t.) another feasible design point x_b if f_i(x_b) ≤ f_i(x_a) for all i = 1, ..., M, with strict inequality for at least one i. Correspondingly, the design point x_b is said to dominate x_a. If x_a neither dominates nor is inferior to x_b, then x_a and x_b are said to be non-inferior w.r.t. each other.

Inferior, non-inferior, and dominant regions: In multi-objective minimization, the inferior region of a design x_a is defined to be the region in the objective space where the design points are dominated by x_a. Similarly, the regions in the objective space where the design points are non-inferior to and dominate x_a are called the non-inferior and dominant regions of x_a, respectively.
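The inferiority and dominance definitions above translate directly into code. The following sketch is our own illustration (the objective vectors are hypothetical) of the dominance and non-inferiority checks for a minimization problem:

```python
def dominates(fa, fb):
    """Design a (objectives fa) dominates design b (objectives fb), in
    minimization, if fa is no worse in every objective and strictly better
    in at least one; equivalently, b is inferior w.r.t. a."""
    return (all(a <= b for a, b in zip(fa, fb))
            and any(a < b for a, b in zip(fa, fb)))

def non_inferior(fa, fb):
    """a and b are non-inferior w.r.t. each other if neither dominates."""
    return not dominates(fa, fb) and not dominates(fb, fa)

# Hypothetical objective vectors for three designs (both objectives minimized):
f_a = (1.0, 4.0)
f_b = (2.0, 5.0)   # worse in both objectives than a
f_c = (3.0, 2.0)   # trades off against a
print(dominates(f_a, f_b))      # True
print(non_inferior(f_a, f_c))   # True
```

A Pareto set is then exactly the subset of feasible designs for which `dominates` returns False against every other feasible design.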
Trade-off set: A set of design points is a trade-off set if all points in the set are non-inferior with respect to each other (although there might be design points outside the set that dominate points in the set).

Pareto optimality, Pareto set, and Pareto frontier: In multi-objective optimization, a feasible design point x_p is Pareto optimum if it is not inferior w.r.t. any other feasible design point. The set of all Pareto optimum points is called the Pareto set. The plot of the Pareto set in the objective space is called the Pareto frontier.

Nominal Pareto set: The Pareto set of a multi-objective optimization problem without robustness consideration.

Robust Pareto set: A trade-off set whose elements are both multi-objectively robust and Pareto optimum (see Chapter 4 for further explanations).

Vector operator ⊗: Let a and b be n×1 vectors. We define the operation c = a ⊗ b, where c is an n×1 vector whose elements are c = (a_1 b_1, a_2 b_2, ..., a_n b_n)^t.

Ternary vector operator ⟨·,·,·⟩: Let a, b, and c be n×1 vectors. We define the operation d = ⟨a, b, c⟩, where d is an n×1 vector whose i-th element is

    d_i = a_i b_i   if a_i ≤ 0
    d_i = a_i c_i   if a_i > 0

for all i = 1, ..., n.

2.3. OVERVIEW OF PREVIOUS WORK

Most of the robust optimization methods in the literature are developed to account for both objective and feasibility robustness of an optimum design. However, some methods account for objective robustness only, while others address feasibility robustness only. There are also related works that develop methods for robustness measurement of a design only, without optimization. In general, these methods can be categorized into three main groups: (i) experiment-based methods, (ii) deterministic methods, and (iii) probabilistic methods. Experiment-based methods perform local sampling around the nominal value of a design to probe its behavior under parameter variations.
Taguchi's orthogonal array (Taguchi and Phadke, 1984; Kackar, 1985; Phadke, 1989) and simple random sampling (Branke, 1998, 2001) are examples of experiment-based methods. Deterministic methods are those that do not use statistical measures in calculating a design's robustness; rather, they use deterministic measures. Gradient minimization (Belegundu and Zhang, 1992), worst-case analysis (Parkinson et al., 1993), and the mini-max method (Hirokawa and Fujita, 2002) are examples of deterministic methods. Probabilistic methods are those that use statistical measures, such as mean and variance, to calculate a design's robustness. These methods often use a Taylor series expansion to estimate these statistics. Examples of probabilistic methods include Yu and Ishii's variation pattern method (1994, 1998), the reliability index method (Tu et al., 1999; Youn et al., 2003), and the most probable point method (Du and Chen, 2000). In addition to these three main groups, there are other, more specialized robust optimization methods such as the fuzzy robustness method (Otto and Antonsson, 1993; Arakawa and Yamakawa, 1998), the tolerance maximization method (Balling et al., 1998), and Zhu and Ting's performance sensitivity distribution method (2001). There are also methods that combine several of the methods from the above three main groups.

Experiment-based methods are generally simple and straightforward, but their computational effort grows rapidly and eventually becomes impractical for problems with many parameters, because they are essentially based on exhaustive permutations of all possible parameter variations. The computational cost of these methods becomes even more prohibitive as the number of discrete variation levels to be analyzed grows large. Even for methods that use only partial permutations (Branke, 2001), the computational cost is still very high.
Because of this computational issue, these methods often require preliminary experiments to eliminate those parameters that are statistically insignificant and to determine the levels of variation to be analyzed. These preliminary experiments are often costly and difficult.

Many deterministic methods (Belegundu and Zhang, 1992; Parkinson et al., 1993) try to reduce the computational effort by using gradient information to estimate a design's robustness. Estimating the gradient of a function is indeed computationally more efficient than exhaustive permutation. However, since these methods need gradient information, they are only applicable to optimization problems whose functions are differentiable. These methods cannot solve robust optimization problems having non-smooth objective and/or constraint functions (e.g., a step function). Moreover, as parameter variations grow large (beyond the range in which a linear approximation is valid), gradient estimation ceases to be valid. Some deterministic methods use worst-case analysis to calculate a design's robustness (Badhrinath and Rao, 1994; Hirokawa and Fujita, 2002). These methods are also computationally more efficient than experiment-based methods. However, the results obtained are typically conservative because they use the worst possible instance of a design's performance as its robustness measure.

Probabilistic methods (Yu and Ishii, 1994, 1998; Tu et al., 1999) extend the experiment-based methods by calculating probability information of a design based on a probability distribution of the parameters. For objective robustness, they calculate the mean and variance of a design's performance. For feasibility robustness, they calculate the probability of constraint satisfaction for a design.
Clearly, these methods require that the probability distribution of the parameters be known a priori (which often is not the case), and the results obtained from these methods depend on the validity of the assumed distribution. The computational cost of these methods is also more prohibitive than that of experiment-based methods, because they need to calculate probability information. Even for those probabilistic methods that claim to be efficient (Du and Chen, 2000), the number of function evaluations needed is still very high.

Almost all of the robust optimization methods in the literature are applicable only to single objective optimization problems. It is widely acknowledged, however, that an engineering design problem generally has multiple conflicting objectives. Very few papers address the issue of multiple objectives (sometimes also called multi-criteria): Rao (1984), Pignatiello (1993), and Ramakrishnan and Rao (1996). However, these methods essentially convert a multi-objective robust optimization problem into a single-objective one by aggregating the performance variations, and they do not take into account trade-offs among the solutions. To our knowledge, there is no reported work in the literature that has formulated and defined the concept of a robust and multi-objectively optimum design as will be developed in this dissertation.

A more detailed discussion of related works in the literature is given in the next three sections. For clarity, we divide our discussion according to our research components: (i) single objective robust optimization, (ii) multi-objective robust optimization, and (iii) feasibility robust optimization.

2.3.1. Single Objective Robust Optimization

One of the earliest works in objective robustness is the parameter design method of Taguchi (Taguchi, 1978; Taguchi and Phadke, 1984; Kackar, 1985; Phadke, 1989).
This method is an experiment-based method that uses full-factorial experiments to obtain the responses of a design, calculates the mean and variance of the responses, and then, based on these values, minimizes a quantity called the signal-to-noise (S/N) ratio. Taguchi's method has been widely used to obtain a robust design, e.g., Pignatiello and Ramberg (1985), Wang et al. (1999), and Hwang et al. (2001). However, it has also received much criticism for its use of the S/N ratio, because this ratio could result in a design with very low performance or very high variance. Leon et al. (1987) proposed an alternative to the S/N ratio and developed a robustness measure called PerMIA (Performance Measure Independent of Adjustment). They showed that PerMIA is a quality loss measure that Taguchi originally proposed in his parameter design concept but did not use in his formulation. They also showed that PerMIA is a more reliable measure than the S/N ratio in terms of solution quality, and that for certain special cases PerMIA simplifies to the S/N ratio. The use of PerMIA in factorial experiments is also proposed and discussed in Pignatiello and Ramberg (1987) and Box (1988).

In a different approach from Taguchi's experiment-based method, some methods calculate a design's robustness deterministically. Many methods (e.g., Belegundu and Zhang, 1992) use the gradient of the objective function as a robustness measure and minimize a weighted sum (or some other combination) of the objective and gradient values. However, Badhrinath and Rao (1994) showed that in general this weighted-sum approach is not reliable because it could lead to a local maximum solution. Instead, they proposed a worst-case approach in which they minimize the maximum objective value within the given parameter range. Parkinson (1998, 2000) and Hirokawa and Fujita (2002) also developed methods based on this worst-case min-max strategy.
However, this min-max approach is often computationally expensive because it calculates the maximum objective value every time a new feasible solution is obtained. To reduce computational cost, Sundaresan et al. (1992, 1993) proposed to simply calculate objective values at the "corners" of the parameter range and use their average as a robustness measure. Balling et al. (1986) proposed an interesting deterministic method for robust optimization. Unlike other methods, their method works "backward" in that they first specify an acceptable variation in objective value and then maximize the parameter range. Zhu and Ting (2001) also proposed a similar "backward" approach in which they define the relative robustness of a design based on the relationship between the variation in the objective value and the parameter range corresponding to this variation.

A large portion of the literature uses probabilistic measures to determine robustness, most commonly the mean and variance of the objective value. The simplest of these methods is the sampling approach, where the mean and variance are estimated by local sampling around the nominal objective value: Tsutsui and Ghosh (1997), Branke (1998, 2001), Tsutsui (1999). Many probabilistic methods estimate the mean and variance using a Taylor series expansion and then minimize a weighted sum of the two values: Yu and Ishii (1994, 1998), Du and Chen (2000), Jung and Lee (2002). Many other methods propose to treat mean and variance as two conflicting objectives and use multi-objective optimization to optimize them simultaneously: Chen et al. (1996), Simpson et al. (1997), Chen and Yuan (1999), Chen et al. (2000), Kalsi et al. (2001).

2.3.2. Multi-Objective Robust Optimization

Only a few methods in the literature are applicable to multi-objective robust optimization problems, and most of them are generalizations of Taguchi's loss function (i.e., the S/N ratio).
The first work to extend the loss function to multiple responses is by Pignatiello (1993) for the nominal-the-best case. Tsui (1999) later expanded the work to include the smaller-the-better and larger-the-better cases. Not wanting to use the S/N ratio in their robust design, Elsayed and Chen (1993) developed a quantity called PerMQ (Performance Measure on Quality), a generalization of the PerMIA measure of Leon et al. (1987) for single-response problems. Other non-Taguchi-based methods have been developed as well: Rao (1984), Ramakrishnan and Rao (1996), Lee and Lee (2001), Messac and Yahaya (2002), and Shelokar et al. (2002); however, these works take disparate approaches and do not follow a single unified framework. None of the methods mentioned above takes into account the trade-off that exists among the multiple objectives. A few researchers have attempted to include Pareto dominance when searching for a robust solution (Kunjur and Krishnamurty, 1997; Fernandez et al., 2001). However, a true multi-objective robust optimization method that accounts for both multi-response robustness and Pareto optimality, as presented in this dissertation, has not yet been developed.

2.3.3. Feasibility Robust Optimization

Many deterministic methods have been developed for feasibility robust optimization, the simplest being the use of a safety factor. Several methods propose a worst-case approach in which they constrain the largest constraint variation instead of the original constraint value: Parkinson et al. (1993), Yu and Ishii (1994, 1998), Hirokawa and Fujita (2002). Other methods propose non-mathematical "procedures" to guarantee a design's robustness: the "closest feasible point" method (Balling et al., 1986) and the "corner space" method (Sundaresan et al., 1992, 1993). Yet other methods use fuzzy logic rules to determine whether a design is feasibly robust: Arakawa and Yamakawa (1998), Rao and Cao (2002). The most popular of all feasibility robustness approaches is the probabilistic approach.
Some methods calculate the statistical variance of the constraint value using a Taylor series expansion and then add this variance to the original constraint, a so-called "moment matching" approach: Wu et al. (1990), Parkinson et al. (1993), Chen et al. (1996), and Ramakrishnan and Rao (1996). Other methods replace the original constraints with a constraint on the probability of constraint satisfaction of the optimum design. This "chance-constrained" approach has been well developed for linear problems (Charnes and Cooper, 1959; Charnes and Cooper, 1963), as well as non-linear ones (Du and Chen, 2000; Jung and Lee, 2002). The chance-constrained approach has been much expanded and developed further using reliability analysis techniques, where a design's probability of constraint satisfaction is calculated via a so-called "reliability index": Chandu and Grandhi (1995), Choi et al. (2001), Choi and Youn (2002), Youn et al. (2003). Tu et al. (1999) generalize this approach into two approaches, the conventional Reliability Index Approach (RIA) and the Performance Measure Approach (PMA), and show that in most cases PMA is better than RIA. Du and Chen (2000) later propose to perform a local sampling around the so-called "most probable point" to get a more accurate estimate of the probability of constraint satisfaction, as well as to improve computational efficiency.

CHAPTER 3
SINGLE OBJECTIVE ROBUST OPTIMIZATION

3.1. INTRODUCTION

When optimizing a design or a system, it is important to ensure that its performance at the optimum does not change significantly if some parameters vary uncontrollably. In designing a racecar, for example, it is often desirable that its weight be minimized. At the same time, it is also necessary to guarantee that a change in its weight (due to a change in some uncontrollable parameters) is limited. If the weight of the vehicle increases significantly, the vehicle speed may decrease considerably.
If the weight decreases significantly, the lift that the vehicle experiences might result in a loss of traction. This so-called performance robustness (or objective robustness) of a design is especially critical if the design is part of a larger system. A robotic arm for a cutting tool, for instance, is often optimized for maximum movement speed. However, once an optimum speed is decided, it must be maintained so that it is compatible with other parts of the robotic system that are designed for that speed.

The purpose of this chapter is to present a robust optimization method that can obtain a design solution that is objectively robust for a single objective optimization problem. A design is objectively robust if the variation in its objective function (i.e., the variation in its performance) is small, within an acceptable range specified by the designer. We begin this chapter by presenting a method to measure the sensitivity (or, inversely, the robustness) of a design using a sensitivity region concept. We then present an approach that uses such a measure to obtain a robust optimum design. Several numerical and engineering comparison studies are given in this chapter to demonstrate the applications of the method. Note that in this chapter we consider objective robustness only, and assume that the robust optimum obtained remains feasible when parameter variations occur. We will present our feasibility robustness approach later, in Chapter 5.

3.2. TWO-SIDED SENSITIVITY MEASURE

We start by developing a method to measure the two-sided sensitivity of a design. This sensitivity measure is based on the notion that for each design alternative there is a region in Δp-space that can be used to evaluate that design's sensitivity. This measure is "two-sided" because we limit both the increase and the decrease of the design performance (unlike feasibility robustness in Chapter 5, which is "one-sided").

3.2.1.
Sensitivity Set

Let x_0 be a design alternative whose sensitivity we want to measure, and let p_0 be the nominal parameter values for which the objective value of that design is defined, i.e., f(x_0, p_0). If a subset of x belongs to p, then the p_0 values of this subset are its x_0 values. Also, let Δf_0 ≥ 0 be the maximum acceptable change in the design performance, as determined by the designer. For this design and the given Δf_0, there is a set of Δp values such that the changes in f(x_0, p_0) due to these Δp's are less than or equal to Δf_0. This set is called the "sensitivity set" (S_f) of the design, and mathematically it is defined by Eq. (3.1). (We use the square of Δf to account for negative values.)

    S_f(x_0) = { Δp ∈ R^G : [Δf(Δp)]^2 ≤ [Δf_0]^2 },
    where Δf(Δp) = f(x_0, p_0 + Δp) − f(x_0, p_0)        (3.1)

Let us take a moment to discuss the importance of this set. What exactly is a sensitivity set? As shown in Eq. (3.1), S_f is the set of Δp's that can be allowed to happen if we want the variation in f(x_0, p_0) to be within Δf_0. So, a sensitivity set is essentially the collection of parameter changes Δp that a design can "absorb" before it violates the acceptable performance variation limit Δf_0. Clearly, a design that can absorb a large amount of Δp is less sensitive (more robust) than a design that can absorb only a small amount. This observation implies that S_f is an indicator of a design's sensitivity. As the number of elements in the set S_f increases, the design can allow more changes in p. This in turn brings up two key observations:

1. Given two designs, the design with a larger S_f is less sensitive (more robust) to changes in p than the design with a smaller S_f.
2. If we can control Δp such that it is always a member of S_f, then we can guarantee that the Δf_0 limit will always be satisfied.

Provided f(x_0, p_0) is defined and thus exists, S_f is guaranteed to exist and is always unique. The existence of S_f is easy to see.
Because f is defined for the pair (x_0, p_0), the smallest set possible is S_f = {0}. An empty S_f would imply that f(x_0, p_0) does not exist, a contradiction to our assumption. The uniqueness of S_f is also straightforward. Suppose x_0 had J non-unique sensitivity sets that satisfy Eq. (3.1): S_f,1, ..., S_f,J. Then the union of all these sets must necessarily also satisfy Eq. (3.1), and this superset becomes the unique sensitivity set of x_0: S_f = {S_f,1 ∪ ... ∪ S_f,J}. Let us demonstrate with an example how to use the sensitivity set of a design to measure its robustness.

Example 3.1

Consider a cylindrical piston pin made of stainless steel (density ρ = 8.0 gr·cm^-3) whose height and radius are h = 5 cm and r = 2 cm, respectively. The nominal weight of the pin is W = 502.6 gr, but suppose that, due to variations in ρ and r, the actual value of the pin's weight varies. If we want the weight to vary by at most ΔW_0 = 10 gr, determine the Δρ and Δr values that can be allowed to occur.

Solution

The weight of the pin is given by W = ρπr^2 h. When there are variations Δρ and Δr, the weight becomes W' = (ρ + Δρ)π(r + Δr)^2 h. Setting the square of the weight difference (with and without the variations), ΔW^2 = (W' − W)^2, to be less than (ΔW_0)^2, we obtain a quadratic inequality:

    t^2 − (2ρr^2) t + ρ^2 r^4 − (ΔW_0/(πh))^2 ≤ 0        (3.2)

where t = uv^2, u = (ρ + Δρ), and v = (r + Δr). Solving the inequality in Eq. (3.2) and using backward substitution, we obtain the sensitivity set of the pin design:

    S_f = { (Δρ, Δr) :  (ρr^2 − ΔW_0/(πh))/(r + Δr)^2 − ρ  ≤  Δρ  ≤  (ρr^2 + ΔW_0/(πh))/(r + Δr)^2 − ρ }        (3.3)

As long as the condition in Eq. (3.3) is satisfied, the pin weight will always remain within ±10 gr. For a simple verification, if (Δρ, Δr) = (−0.4, 0.05), then the pin weight is 501.7 gr and the condition in Eq. (3.3) is satisfied. If (Δρ, Δr) = (1.0, −0.02), then the pin weight is 554.2 gr, and the condition in Eq. (3.3) is violated. If (Δρ, Δr) = (0.495, −0.04), the
condition is actively (as an equality) satisfied, and the pin weight is 512.6 gr (exactly equal to the nominal weight plus ΔW_0). The reader may have observed that there is an apparent relationship between strict satisfaction of Eq. (3.3) and the amount of the ΔW_0 limit being used up. This observation is not a coincidence. In fact, it is an important property that will later become the basis for all of our robust optimization methods. □

3.2.2. Sensitivity Region

Because we are dealing with continuous Δp, mathematically the size of S_f is infinite, and as such we cannot use it explicitly as a measure of a design's robustness. However, if we plot S_f in the Δp-space, we obtain a region surrounding the origin that we call the "sensitivity region" (SR) of the design. The size of this region corresponds explicitly to the size of S_f, and therefore the size of a SR is also a measure of a design's sensitivity: the larger the region, the less sensitive (more robust) the design. Figure 3.1 shows a typical example of a SR (for a two-parameter function).

Figure 3.1: Typical example of a sensitivity region in the (Δp_1, Δp_2)-space, with point A on the boundary ([Δf(Δp_A)]^2 = [Δf_0]^2), point B inside ([Δf(Δp_B)]^2 < [Δf_0]^2), and point C outside ([Δf(Δp_C)]^2 > [Δf_0]^2).

A point outside a SR (e.g., point C in Figure 3.1) violates the performance limit, i.e., [Δf(Δp)]^2 > [Δf_0]^2. What about points inside and on the boundary of a SR? It is not obvious from Eq. (3.1) what the conditions for these points are. As it turns out, a point inside a SR (e.g., point B) satisfies [Δf(Δp)]^2 < [Δf_0]^2, while a point on the boundary of a SR satisfies Eq. (3.1) with an equality: [Δf(Δp)]^2 = [Δf_0]^2. Let us discuss the boundary point condition first. The argument for this condition is as follows. Suppose a point is on the boundary of a SR but Eq. (3.1) is satisfied as an inequality. Then the inequality implies that Δp can change by an infinitesimal amount in any direction and the condition [Δf(Δp)]^2 ≤ [Δf_0]^2 is still satisfied.
But geometrically, if the point is on the boundary of a SR, then there is at least one direction along which a change in Δp, no matter how small, will push the point outside the SR, i.e., [Δf(Δp)]^2 > [Δf_0]^2 (Figure 3.2). This is a contradiction, so we conclude that if a point is on the boundary of a SR, then Eq. (3.1) must necessarily be satisfied with an equality.

Figure 3.2: Geometric condition for a boundary point (a point A on the SR boundary in the (Δp_1, Δp_2)-space and its cone of out-of-bound directions).

It should be noted that the above argument is valid only if f(x, p) is continuous with respect to p (but not necessarily differentiable). If f(x, p), and hence Δf(Δp), is discontinuous, the condition [Δf(Δp)]^2 = [Δf_0]^2 is still valid as long as the discontinuity does not occur on the SR boundary (i.e., as long as it is an inner discontinuity). However, even if the discontinuity does occur on the boundary, the condition can still be valid provided the discontinuity is such that [Δf(Δp)]^2 = [Δf_0]^2 as the discontinuity is approached from the left or from the right (Figure 3.3). Most engineering functions are continuous, so we rarely have to deal with this situation (recall that we are not dealing with discrete Δp). However, if the function of interest happens to be discontinuous, then we assume that the discontinuity observes one of the two conditions described above.

Figure 3.3: One-dimensional SR example of a discontinuous function (an inner discontinuity leaves the boundary condition valid; a boundary discontinuity leaves it valid only if [Δf(Δp)]^2 = [Δf_0]^2 as the discontinuity is approached from one side).

The argument for the condition of a point inside a SR is similar to that for the boundary point condition, but there is a small complication that must be addressed. Strictly speaking, the inner point condition should be [Δf(Δp)]^2 ≤
[Δf₀]² because it is possible that the equality is satisfied by inner points also (as shown in Figure 3.4). However, since we are using the size of the SR to measure the robustness of a design, we are interested primarily in the SR boundary. So, to avoid computational problems it is necessary to make a clear distinction between inner and boundary point conditions. To make this distinction, we modify the given Δf₀ to Δf₀^ε = Δf₀ + ε, where ε is an infinitesimal positive number. Using Δf₀^ε, the inner point condition becomes [Δf(Δp)]² < [Δf₀^ε]² and the boundary point condition becomes [Δf(Δp)]² = [Δf₀^ε]².

Figure 3.19: Comparison between η and Δp₀.

3.3.2. Constrained Robustness Approach

In robust optimization, we want to simultaneously optimize performance of a design and maximize its robustness, where we use the objective function f(x, p₀) to measure the design performance and η to measure its robustness. By doing so, we have essentially converted the original single objective optimization problem into a two-objective one. Because of the trade-off between performance and robustness of a design, in general there is a set of optimum solutions to this two-objective problem. It is then up to the designer to select the single most preferred design from this set by trading off the two objectives. However, the η measure that we use here does not have a physical association with the design itself, and as such it would be difficult for the designer to do such a trade-off analysis. This difficulty is compounded further by the fact that η has a semi-cardinal scale. To avoid these difficulties, in this dissertation we use a constrained approach to robustness. Instead of maximizing η, we constrain it to be greater than or equal to 1 to make sure that the WCSR of the optimum design at least encloses the Δp₀ ranges. The formulation for our robust optimization problem is then as follows:

minimize f(x, p₀)
subject to: g_j(x, p₀) ≤ 0, j = 1, ..., J
h_k(x, p₀) = 0, k = 1, ..., K
1 − η ≤ 0    (3.16)

where the last constraint is the robustness constraint, η = (G)^(-1/2) R_f, and R_f is calculated by solving the inner optimization problem shown in Eq. (3.12) (re-shown here for convenience):

minimize R_f(Δp̄) = [ Σ_{i=1}^{G} Δp̄_i² ]^(1/2), where Δp̄_i = Δp_i/Δp₀,i
subject to: 1 − [Δf(Δp)]²/[Δf₀]² = 0    (3.17)

Notice that although not explicitly shown in Eq. (3.16), η is a function of x, the design variable, because R_f is a function of x. In Eq. (3.17), Δf(Δp) = f(x, p₀ + Δp) − f(x, p₀) is a function of x. Using Eq. (3.16), our search for a robust optimum design is performed in the N-dimensional x-space, and during this search we run another search, Eq. (3.17), in the G-dimensional Δp̄-space. Although our approach involves two optimization problems, this is not a multi-disciplinary optimization approach. Rather, Eq. (3.17) is simply a tool to obtain the robustness information needed by Eq. (3.16). As such, convergence of this approach depends entirely on the convergence of Eq. (3.16), and has nothing to do with the links between the two problems. If Eq. (3.16) has an optimum solution before the robustness constraint is added, but does not have a feasible solution after it is added, it simply means that there is no design that satisfies our robustness requirement (i.e., the addition of the robustness constraint causes the feasible domain of Eq. (3.16) to become empty). Eq. (3.17) is a simple single objective optimization problem with one constraint. If the function f(x,p) is simple enough, Eq. (3.17) can be solved analytically, as in our piston pin examples. Otherwise, traditional gradient-based optimization methods, such as quasi-Newton methods or the Generalized Reduced Gradient method, can be used to solve it. The choice of an optimizer for Eq. (3.16) depends on the complexity of the problem. Nevertheless, because of the nature of the robustness constraint, we recommend using global optimization methods to solve Eq. (3.16).
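The inner problem of Eq. (3.17) can also be illustrated with a crude derivative-free search, which is convenient when gradients of Δf are unavailable. The sketch below is only illustrative (the gradient-based solvers named above would be used in practice): it scans directions in the normalized Δp̄-space and bisects along each ray for the boundary [Δf]² = [Δf₀]², keeping the smallest radius. The function passed in is a hypothetical linear Δf chosen for the example, not one of the dissertation's problems.

```python
import math

# Derivative-free sketch of the inner problem, Eq. (3.17), for G = 2:
# find the smallest ||normalized dp|| at which |df| reaches df0 by scanning
# directions and bisecting along each ray. Illustrative only.
def wcsr_radius(dfun, df0, n_dirs=720, r_max=10.0, iters=60):
    best = float("inf")
    for k in range(n_dirs):
        a = 2.0 * math.pi * k / n_dirs
        u = (math.cos(a), math.sin(a))
        # If the boundary is not reached within r_max along this ray, skip it;
        # such a direction cannot produce the minimum radius here.
        if abs(dfun(r_max * u[0], r_max * u[1])) < df0:
            continue
        lo, hi = 0.0, r_max
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            if abs(dfun(mid * u[0], mid * u[1])) < df0:
                lo = mid
            else:
                hi = mid
        best = min(best, hi)
    return best

# Hypothetical linear variation df = 3*dp1 + 4*dp2 with df0 = 1; the exact
# worst-case radius is df0/||(3, 4)|| = 0.2.
R_f = wcsr_radius(lambda d1, d2: 3.0 * d1 + 4.0 * d2, 1.0)
eta = R_f / math.sqrt(2)  # G = 2, so eta = R_f / sqrt(G)
print(round(R_f, 3))  # prints 0.2
```

For this linear Δf the boundary is a pair of parallel lines, and the direction search recovers the perpendicular distance 0.2; with G = 2 the corresponding robustness measure is η = R_f/√2 ≈ 0.141, well short of the η ≥ 1 requirement for this Δf₀.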
Unlike conventional constraints, the robustness constraint in Eq. (3.16) may not necessarily divide the search space into two well-defined feasible and infeasible half-spaces. Rather, it may carve many disjoint infeasible domains out of the feasible domain, in effect adding "holes" to the original feasible domain (Figure 3.20). Because of this effect, direction-based optimizers may have difficulty converging.

Figure 3.20: Effect of adding the robustness constraint: (a) feasible domain in the (x₁, x₂)-space under conventional constraints; (b) feasible domain after the robustness constraint is added.

Sometimes the constraint η ≥ 1 is too strict, that is, no design has that much robustness. In this case, the right-hand-side value of 1 needs to be lowered. To give the designer the flexibility to change the desired level of robustness, we use a quantity η₀ to replace this value of 1, where η₀ is determined by the designer. The larger η₀, the more robust the designer wants the optimum design to be, and vice versa. Using η₀ in the robustness constraint, our robust optimization formulation becomes:

minimize f(x, p₀)
subject to: g_j(x, p₀) ≤ 0, j = 1, ..., J
h_k(x, p₀) = 0, k = 1, ..., K
η₀ − η ≤ 0    (3.18)

where again η is calculated by solving Eq. (3.17).

3.4. COMPARISON STUDY

To demonstrate our robust optimization method, we applied it to four examples: one numerical example and three engineering examples. The purpose of the numerical example is to provide a graphical verification of the results obtained by our method. The purpose of the three engineering examples is as follows. The welded beam example demonstrates an application of our method to a non-differentiable objective function. The three-bar truss and compression spring examples compare our method to mean-based and worst-case analysis-based robust optimization methods, respectively.

3.4.1. Wine-Bottle Function

This example was originally formulated by Van Veldhuizen and Lamont (1998) as a multi-objective optimization problem.
We converted it into a single objective problem by significantly modifying and optimizing one of the original objectives. The problem has two variables, x₁ and x₂, both continuous, and the objective is to minimize the "wine-bottle" function f(x₁, x₂). There are variations in the variables (Δx₁ and Δx₂), and we want to minimize the sensitivity of the optimum solution with respect to these variations. Because the variability occurs only in the design variables (there are no other noisy factors), in this problem p = x and Δp = Δx. The mathematical formulation of the problem is as follows:

minimize f(x₁, x₂) = (C + 1)/9.32
where: C = (A² + B² + 3)/2 + sin(A² + B² + 2)
A = 6x₁ − 3; B = 6x₂ − 3
0 ≤ (x₁, x₂) ≤ 1    (3.19)

Figure 3.21 shows a three-dimensional surface mesh of the objective function. As shown in this figure, the function is axially symmetric and has a dome-like region in the center whose shape resembles the bottom of a wine bottle. The function has an infinite number of global minima circularly located around the center point (x₁, x₂) = (0.5, 0.5), and they are defined by a circle equation: (x₁ − 0.5)² + (x₂ − 0.5)² = (0.246)². The objective value at the global minima is f* = 0.292. Points in the dome region of the function have a slightly higher objective value (f_R = 0.365) compared to the global minima. However, this region of the function is flat, and thus the points in this region are insensitive to the variable variations.

Figure 3.21: Surface plot of the wine-bottle function.

The maximum allowable variation in f is Δf₀ = 0.01, and the variation ranges are known to be Δp₀ = (0.05, 0.05). We added the robustness constraint into Eq. (3.19) and then solved it using a Genetic Algorithm (GA) for the outer optimization problem and MATLAB's fmincon function for the inner optimization problem (Eq. (3.17)). MATLAB's fmincon function is an optimizer based on the Sequential Quadratic Programming (SQP) method with the BFGS formula for the Hessian estimation.
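The landscape values quoted above can be checked directly against the formulation of Eq. (3.19) as reconstructed here. A minimal sketch:

```python
import math

# Numerical check of the wine-bottle function, Eq. (3.19) as reconstructed
# here: f = (C + 1)/9.32, C = (A^2 + B^2 + 3)/2 + sin(A^2 + B^2 + 2),
# A = 6*x1 - 3, B = 6*x2 - 3.
def wine_bottle(x1, x2):
    A, B = 6.0 * x1 - 3.0, 6.0 * x2 - 3.0
    u = A * A + B * B
    C = (u + 3.0) / 2.0 + math.sin(u + 2.0)
    return (C + 1.0) / 9.32

# Center of the flat dome region (robust plateau, ~0.365):
f_dome = wine_bottle(0.5, 0.5)

# A point on the circle of global minima, radius ~0.246 about (0.5, 0.5)
# (global minimum value, ~0.292):
f_min = wine_bottle(0.5 + 0.246, 0.5)

print(round(f_dome, 3), round(f_min, 3))  # prints 0.366 0.293
```

Up to rounding, these reproduce the reported f_R = 0.365 on the dome and f* = 0.292 on the circle of minima, and the radius 0.246 follows from setting the derivative of C to zero: A² + B² = 4π/3 − 2, i.e., a radius of √(4π/3 − 2)/6 ≈ 0.2466 in x-space.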
We used a GA for the outer optimization because the objective function has multiple local minima (see Figure 3.21), so direction-based optimizers may not find the global minima. Because the GA is a stochastic method, the robust optimum obtained might differ from one run to another. To account for this, we solved the problem 50 times (using a different initial population each time). Figures 3.22 and 3.23 show the robust optima obtained from solving the problem 50 times each for three different values of η₀. Figure 3.22 shows the distributions of the optima on the contour plot of the objective function, while Figure 3.23 shows those points of the distributions that lie on the cross-section of the function at x₂ = 0.5.

Figure 3.22: Distributions of robust optima on a contour plot of f in the (x₁, x₂)-space, for η₀ = 0.35, η₀ = (2)^(-1/2) = 0.707, and η₀ = 1.

Figure 3.23: Distributions of robust optima on the cross-section plot at x₂ = 0.5, for η₀ = 0.35, η₀ = (2)^(-1/2) = 0.707, and η₀ = 1.

We observe in Figures 3.22 and 3.23 that for η₀ = 1, the optimum obtained is located in the dome region of the function for all 50 GA runs. Because this region is insensitive to the variations, these results confirm that solving the problem using our robust optimization method results in a robust optimum. We also observe from Figures 3.22 and 3.23 that as η₀ is reduced (relaxing the robustness constraint), the objective value decreases (gets better). The optimum objective value for η₀ = 1 is f_1 = 0.365, while f_0.707 = 0.361 and f_0.35 = 0.292. When η₀ = 0.35, the robust optimum obtained is the global optimum. We expect to observe these phenomena because in general there are trade-offs between performance and robustness.

3.4.2. Design of a Three-Bar Truss

This example was first formulated by Schmit (1960), and has been thoroughly studied in (Sun et al., 1975) and (Haug and Arora, 1979).
Here, we modified the problem by substituting the original objective with one of the structural constraints and by adding some variations into the problem. In this problem, we are designing a three-bar truss with a constant force P = 100 kN acting at an angle θ at the end of the truss, as shown in Figure 3.24 (l = 1.0 m). The truss is symmetric (member 1 and member 3 are identical) and is to be designed for minimum stress in members 1 and 3. The variables in this problem are the cross-section area of members 1 and 3 (A₁) and the angle θ.

Figure 3.24: A three-bar truss (members 1, 2, and 3 joined at node 4, with load P applied at angle θ and deflections u and v at node 4).

To prevent failure, the design is subjected to six structural constraints. The horizontal and vertical deflections at node 4 (u and v) must be less than u_a = 0.5 cm and v_a = 0.5 cm, respectively. The stress in member 2 (σ₂) must be less than σ_a = 140 MPa. The buckling load on all members must satisfy the buckling constraint. In addition, the total volume of the members is constrained to be less than V_a = 2000 cm³. The mathematical formulation of the problem is given in Eq. (3.20):

minimize over (A₁, θ): σ = (P/√2) [cos(θ)/A₁ + sin(θ)/(A₁ + √2 A₂)]
subject to: g₁ = u/u_a − 1 ≤ 0; g₂ = v/v_a − 1 ≤ 0
g₃ = σ₂/σ_a − 1 ≤ 0; g₄ = F₁/F_b1 − 1 ≤ 0
g₅ = F₂/F_b2 − 1 ≤ 0; g₆ = F₃/F_b1 − 1 ≤ 0
g₇ = V/V_a − 1 ≤ 0
30° ≤ θ ≤ 40°; 2 cm² ≤ A₁ ≤ 10 cm²    (3.20)

The constraints are calculated by the following formulas. In calculating the buckling load, the members are considered to be columns with pins at both ends.

u = √2 P l cos(θ)/(A₁ E); v = √2 P l sin(θ)/[(A₁ + √2 A₂) E]    (3.21)
σ₂ = √2 P sin(θ)/(A₁ + √2 A₂); F₁ = −(P/√2) [cos(θ)/A₁ + sin(θ)/(A₁ + √2 A₂)] A₁    (3.22)
F_b1 = β π² E A₁²/(2 l²); F₂ = −√2 P A₂ sin(θ)/(A₁ + √2 A₂)    (3.23)
F_b2 = β π² E A₂²/l²; F₃ = (P/√2) [cos(θ)/A₁ − sin(θ)/(A₁ + √2 A₂)] A₁    (3.24)
V = l (2√2 A₁ + A₂)    (3.25)

where:
E = Young's modulus (= 70 GPa)
A₂ = cross-section area of member 2 (= 2 cm²)
β = non-dimensional constant (= 1.0)

Using MATLAB's fmincon to solve Eq.
(3.20), we obtained the nominal optimum design (A₁*, θ*) = (6.36, 40°) with σ* = 134.56 MPa in just 3 iterations. The constraint values at this nominal optimum are g = (−0.5133, −0.7173, −0.2934, −1.6128, −1.716, −0.8375, 0.0), where g₇ is active. The two design variables are known to vary by (ΔA₁, Δθ) = (0.1 cm², 5°), and we would like to minimize the sensitivity of the optimum design with respect to these variations. The allowable variation in the objective is Δσ₀ = 2.75 MPa. We added the robustness constraint (with η₀ = 1) into Eq. (3.20) and then solved it using fmincon for both the inner and outer optimization problems. On average (we ran the algorithm from many different initial points), the outer optimization converges in 9 iterations and the inner optimization converges in 13 iterations. The robust optimum design obtained is shown in Table 3.1. For comparison, we also solved Eq. (3.20) for the robust optimum using two conventional robust optimization approaches: (i) minimizing the mean value of the objective function, and (ii) minimizing the sum of the mean and the standard deviation. The mean and standard deviation are calculated by performing 10,000 Monte Carlo simulations around the design following a uniform pdf over the ranges (ΔA₁, Δθ) = (0.1 cm², 5°). The optimum designs obtained using these two methods are also shown in Table 3.1.

Table 3.1: Optimum designs of the three-bar truss.

              Nominal Optimum   Robust Optimum   Mean Optimum   Mean+Std Optimum
σ (MPa)       134.56            135.08           134.69         135.06
η             0.826             1.0              0.832          0.885
A₁ (cm²)      6.364             6.364            6.36           6.35
θ (degree)    40                36.3             30             38.6
F_call        N/A               40               10000          10000

In Table 3.1, the quantity F_call is the number of function evaluations needed per design to obtain its robustness information. For our method, this quantity is the number of function evaluations performed in solving the inner optimization. For the mean-std methods, this quantity is the number of samples. In this table we have also shown the η
value of each optimum design (recall that each design has a corresponding η value even though it is not obtained by our robust optimization method). We see in Table 3.1 that the nominal optimum has the lowest σ value (the objective function), but it also has the lowest η value. In contrast, our robust optimum has the highest η value of the four optima, but it has the highest σ. The σ and η values of the other two optima are somewhere in between. This observation is just what we expected: the larger η, the more robust the design, but at the expense of performance degradation (i.e., the stress increases). We also see in this table that, of the four optima, only the η value of our robust design is equal to 1. So, according to our theoretical development, when the variations occur, our robust design should be the only one that satisfies the requirement Δσ ≤ 2.75 MPa. Let us verify this claim. We randomly perturbed (A₁, θ) 20 times around their nominal values within the given ranges, and calculated the new σ value of the design using these perturbed values. The difference between this new σ value and the previous value is the Δσ. We did the same analysis for all four optima, using the same (ΔA₁, Δθ) for each. Figure 3.25 shows the graphs of the Δσ of the optima for the 20 random perturbations. In this figure the dashed lines are the ±Δσ₀ limit (2.75 MPa).

Figure 3.25: Sensitivity analysis of the three-bar truss optima: Δσ over the 20 perturbation cases for (a) the nominal optimum, (b) the robust optimum, (c) the mean optimum, and (d) the mean+std optimum.
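The reported constraint vector at the nominal optimum provides a useful numerical check of Eqs. (3.20) through (3.25) as reconstructed above. A sketch in kN-cm units:

```python
import math

# Numerical check of the three-bar truss responses, Eqs. (3.21)-(3.25) as
# reconstructed here, at the reported nominal optimum (A1, theta) =
# (6.36 cm^2, 40 deg). Units: kN and cm (70 GPa = 7000 kN/cm^2;
# 1 kN/cm^2 = 10 MPa).
P, l, E, A2, beta = 100.0, 100.0, 7000.0, 2.0, 1.0
ua = va = 0.5          # allowable deflections (cm)
sigma_a = 14.0         # allowable stress in member 2 (kN/cm^2 = 140 MPa)
Va = 2000.0            # allowable volume (cm^3)

def truss(A1, theta_deg):
    th = math.radians(theta_deg)
    c, s = math.cos(th), math.sin(th)
    Aeff = A1 + math.sqrt(2.0) * A2
    sigma1 = (P / math.sqrt(2.0)) * (c / A1 + s / Aeff)       # objective
    u = math.sqrt(2.0) * P * l * c / (A1 * E)                 # Eq. (3.21)
    v = math.sqrt(2.0) * P * l * s / (Aeff * E)
    sigma2 = math.sqrt(2.0) * P * s / Aeff                    # Eq. (3.22)
    F1 = -sigma1 * A1
    Fb1 = beta * math.pi ** 2 * E * A1 ** 2 / (2.0 * l * l)   # Eq. (3.23)
    F2 = -math.sqrt(2.0) * P * A2 * s / Aeff
    Fb2 = beta * math.pi ** 2 * E * A2 ** 2 / (l * l)         # Eq. (3.24)
    F3 = (P / math.sqrt(2.0)) * (c / A1 - s / Aeff) * A1
    V = l * (2.0 * math.sqrt(2.0) * A1 + A2)                  # Eq. (3.25)
    g = (u / ua - 1, v / va - 1, sigma2 / sigma_a - 1,
         F1 / Fb1 - 1, F2 / Fb2 - 1, F3 / Fb1 - 1, V / Va - 1)
    return sigma1, g

sigma1, g = truss(6.36, 40.0)
print(round(10 * sigma1, 1), [round(gi, 4) for gi in g])
```

The computed stress (about 134.6 MPa) and constraint vector reproduce the reported g = (-0.5133, -0.7173, -0.2934, -1.6128, -1.716, -0.8375, 0.0) to within rounding of the tabulated design variables, with the volume constraint g₇ active.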
We observe in Figure 3.25 that when the variations occur, the nominal optimum design as well as the mean and mean+std optima violate the allowable Δσ limit. Only our robust optimum design stays within the limit, thus showing that this design satisfies our robustness requirement. To further validate the robustness information provided by the η value, we calculated the SR and WCSR of each of the optima and compared them to the known (ΔA₁,₀, Δθ₀) range. Since the objective function involves trigonometric expressions, analytical derivation of the SR is impossible, so we derived them numerically instead. We constructed an orthogonal grid over the range (ΔA₁, Δθ) = (±0.2 cm², ±10°) and calculated the Δσ of the optimum designs at each junction. The SR is obtained by determining whether (Δσ)² ≤ (Δσ₀)² at these junctions, and the WCSR is obtained by finding the point on the SR boundary closest to the origin. The SR and WCSR of the four optima are shown in Figure 3.26 through Figure 3.29. In these figures, the dashed rectangles are the known (ΔA₁,₀, Δθ₀) range.

Figure 3.26: SR and WCSR of the nominal optimum (R_f = 1.16); a region of failure exists inside the (ΔA₁,₀, Δθ₀) range.

Figure 3.27: SR and WCSR of the robust optimum (R_f = 1.414); there is no region of failure.

Figure 3.28: SR and WCSR of the mean optimum (R_f = 1.17); a region of failure exists.

Figure 3.29: SR and WCSR of the mean+std optimum (R_f = 1.25); a region of failure exists.

As we can see in the above figures, only the SR of our robust optimum design fully encloses the (ΔA₁,₀, Δθ₀) range. The other designs have small regions in which the requirement (Δσ)² ≤ (Δσ₀)² is not satisfied. The WCSRs of the designs also reflect this observation. Only the WCSR of our robust optimum encloses the (ΔA₁,₀, Δθ₀) range completely.

3.4.3.
Design of a Welded Beam

This example is the well-known welded beam problem originally formulated by Ragsdell and Phillips (1976) and Reklaitis et al. (1983). We slightly modified this problem by adding variations to two of the parameters, and by making the objective function discontinuous with respect to one of these parameters. In this problem, a beam A is to be welded to a rigid support member B. The beam has a rectangular cross-section and is to be made of steel. The beam is designed to support a force F = 6000 lbf acting at the tip of the beam, and there are constraints on the shear stress, normal stress, deflection, and buckling load of the beam. The problem has four continuous design variables: the thickness of the weld (h), the length of the weld (l), the thickness of the beam (t), and the width of the beam (b). All variables are in inches. The objective of the problem is to minimize the total cost of making such an assembly. Figure 3.30 shows a diagram of the welded beam assembly.

Figure 3.30: A welded beam assembly (a weld of size h and length l joining beam A, of cross-section t by b, to support B, with force F applied at distance L from the weld).

The complete formulation of the problem is shown in Eq. (3.26):

minimize f_cost = (1 + c₃) h² l + c₄ t b (L + l)
subject to: g₁ = τ/τ_d − 1 ≤ 0; g₂ = σ/σ_d − 1 ≤ 0
g₃ = δ/0.25 − 1 ≤ 0; g₄ = F/P_c − 1 ≤ 0
g₅ = h/b − 1 ≤ 0; g₆ = 0.125/h − 1 ≤ 0
0.1 ≤ h ≤ 2.0; 0.1 ≤ l ≤ 10.0
0.1 ≤ t ≤ 10.0; 0.1 ≤ b ≤ 2.0    (3.26)

where:
c₃ = cost of weld material ($0.1047/inch³)
c₄ = cost of beam material ($0.0481/inch³)
τ = maximum shear stress in the weld (psi)
τ_d = allowable shear stress of the weld (13,600 psi)
σ = maximum normal stress in the beam (psi)
σ_d = allowable normal stress in the beam (30,000 psi)
δ = deflection at the beam end (inch)
P_c = allowable buckling load (lbf)
L = length of the unwelded beam (14 inch)

The quantities τ, σ, δ, and P_c are calculated as shown in Eq. (3.27) through Eq. (3.33). To calculate τ, it is assumed that the beam is a cantilever beam of length L; for steel, G = 12 × 10⁶ psi and E = 30 × 10⁶ psi.
τ = [τ'² + 2 τ' τ'' cos(φ) + τ''²]^(1/2)    (3.27)
τ' = F/(√2 h l); τ'' = M R/J; cos(φ) = l/(2R)    (3.28)
M = F (L + l/2); R = [l²/4 + ((h + t)/2)²]^(1/2)    (3.29)
J = 2 {0.707 h l [l²/12 + ((h + t)/2)²]}    (3.30)
σ = 6 F L/(t² b); δ = 4 F L³/(E t³ b)    (3.31)
P_c = [4.013 (E I α)^(1/2)/L²] [1 − (t/(2L)) (E I/α)^(1/2)]    (3.32)
I = t b³/12; α = G t b³/3    (3.33)

The parameters L and c₃ vary by some amounts ΔL and Δc₃, and they affect the total cost of the welded beam assembly as shown in Eq. (3.34) (the notation ⌈·⌉ denotes rounding up to the nearest integer). Notice that Eq. (3.34) reduces to the original f_cost in Eq. (3.26) when ΔL = 0 and Δc₃ = 0.

f_cost = (1 + c₃ + Δc₃) h² l + c₄ t b (L + ΔL + l) + c₅ l,
where c₅ = $0.01 ⌈|ΔL|/0.05⌉    (3.34)

We want to find a design that minimizes f_cost and is insensitive to ΔL and Δc₃. We added the robustness constraint into Eq. (3.26), and we set Δf₀ = $0.30 and Δp₀ = (0.05, 0.25). In this problem, p = (c₃, L)ᵗ and p₀ = (0.1047, 14)ᵗ. We solved the problem using a GA for both the outer and inner problems (with η₀ = 1), and obtained a robust optimum value of f_R = $2.49. For comparison, we solved Eq. (3.26) using the same GA, and obtained a nominal (non-robust) optimum of f_cost = $2.39 (which is very close to that reported by Ragsdell and Phillips (1976), f_cost = $2.38, and by Deb (1991), f_cost = $2.43). For further comparison, we also solved Eq. (3.26) for a robust optimum by minimizing the mean value of the design. The mean value was calculated using Monte Carlo simulation assuming uniform and normal probability distributions for the parameters. For the uniform assumption, the Δc₃ distribution is modeled between [−0.05, 0.05] while the ΔL distribution is modeled between [−0.25, 0.25]. For the normal assumption, the Δc₃ distribution is modeled with a mean and standard deviation of [0, 0.05/3] while the ΔL distribution is modeled by [0, 0.25/3]. Table 3.2 shows a list of the optimum designs obtained (for the Monte Carlo optima, the f_cost shown are the mean values).
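The mean-based comparison just described can be sketched numerically. The following Monte Carlo estimate of the mean f_cost under the uniform pdf assumes the step-cost term c₅ = $0.01 ⌈|ΔL|/0.05⌉ as reconstructed in Eq. (3.34), evaluated at the reported Monte Carlo (Uniform) design (h, l, t, b) = (0.257, 5.8, 8.267, 0.257) from Table 3.2:

```python
import math
import random

# Sketch of the mean-based comparison: Monte Carlo estimate of the mean
# f_cost under the uniform pdf, using Eq. (3.34) as reconstructed here.
c3, c4, L = 0.1047, 0.0481, 14.0

def f_cost(h, l, t, b, dc3=0.0, dL=0.0):
    c5 = 0.01 * math.ceil(abs(dL) / 0.05)   # step cost; 0 when dL = 0
    return (1.0 + c3 + dc3) * h * h * l + c4 * t * b * (L + dL + l) + c5 * l

random.seed(0)
h, l, t, b = 0.257, 5.8, 8.267, 0.257       # reported MC (Uniform) optimum
n = 100_000
mean = sum(f_cost(h, l, t, b,
                  random.uniform(-0.05, 0.05),
                  random.uniform(-0.25, 0.25)) for _ in range(n)) / n
print(round(mean, 2))
```

The estimate lands close to the tabulated mean of 2.63 (the residual gap is sampling and GA noise), and it sits well above the unperturbed cost of the same design, since the c₅ step term only ever adds cost.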
The quantity F call is the number of function 67 evaluations needed per design to obtain the sensitivity information. As shown in Table 3.2, making different assumptions about the pdf of the uncertain parameters can lead to significantly different results. We also observe in Table 3.2 that the solution obtained by our method is not the same as the one obtained by the Monte Carlo method for a uniform pdf. This shows that although our method uses a range of parameter variations, it is not the same as assuming a uniform pdf for the parameters. This in fact is an advantage of our method in that it does not require a presumed pdf of the uncertain parameters. In Chapter 7 we will show how to use the probability distribution information, if they are available, with our robust optimization method. Table 3.2: Optimum designs of the welded beam. Nominal Optimum Robust Optimum Monte Carlo (Uniform) Monte Carlo (Normal) f cost 2.39 2.49 2.63 2.91 h 0.241 0.246 0.257 0.337 l 6.158 5.461 5.8 5.054 t 8.5 9.138 8.267 7.058 b 0.243 0.248 0.257 0.337 F call N/A 250 100000 100000 Figure 3.31 shows the SR and WCSR of the nominal optimum and the robust optimum obtained using our method. In this figure, the shaded region bounded between the straight line on the left and the step function on the right is the SR, the ellipse is the WCSR, and the dashed rectangle is the ?p 0 ranges. We obtain the SR?s analytically by substituting the values in Table 3.2 into the f cost function in Eq. (3.34) and obtain the ?f cost as a function of ?c 3 and ?L. Then using the given ?f 0 value, we solve for ?c 3 at the six discrete ?L values. Mathematically, the left boundary of the sensitivity region is also a step function similar to that of the right boundary. However, the parameter c 3 represents a 68 material cost, so in reality it cannot be negative. For this reason we restrict the ?c 3 value to be ?0.10471 at the lowest (the straight line left boundary). 
We obtain the WCSRs by solving the inner optimization problem using the values in Table 3.2. The WCSRs shown are ellipses (instead of circles) because Figure 3.31 shows the regions in the non-normalized Δp-space (for clarity of presentation).

Figure 3.31: SR and WCSR of the optima in the (Δc₃, ΔL)-space: (a) the nominal optimum, whose SR leaves a region of failure inside the dashed Δp₀ rectangle; (b) the robust optimum, with no region of failure.

Notice that the SRs are unbounded. This is because, for the range of Δc₃ values where the regions are unbounded, the value c₅ = $0.05 (|ΔL| > 0.20) still leaves the inequality (Δf)² ≤ (Δf₀)² satisfied. Notice also that the SR boundary is discontinuous because f_cost is a step function. However, at the discontinuity, (Δf)² = (Δf₀)² is satisfied either from the left or from the right, so our SR boundary condition (Δf)² = (Δf₀)² is valid (recall Figure 3.3). We observe from Figure 3.31 that the SR of the nominal optimum does not fully enclose the Δp₀ range while that of the robust optimum does. This remains true when using the WCSRs as estimates of the SRs. We conclude that, in the worst-case sense, the robust optimum is less sensitive to p = (c₃, L)ᵗ than the nominal optimum. To further assess the sensitivity of the robust optimum design obtained, we calculated the f_cost value of the design for 15 perturbed values of the two uncertain parameters (c₃ and L), and determined how much the f_cost differed from the unperturbed f_cost value (Table 3.2). For comparison, we performed the same analysis (using the same Δc₃ and ΔL values) to calculate the Δf_cost of the nominal and the Monte Carlo optima. In this analysis, the values of Δc₃ and ΔL were randomly sampled from the Δp₀ range. Figure 3.32 and Figure 3.33 show the graphs of the Δf_cost of the optimum designs for the 15 cases.
The dashed lines in these figures are the Δf₀ limit. We observe in Figure 3.32 that in all 15 cases the Δf_cost of the robust optimum is less than that of the nominal optimum, i.e., Δf_R < Δf_N. This shows that the robust optimum obtained by our method is less sensitive to Δp than the nominal optimum. In addition, we also observe that in all 15 cases the Δf_cost of the robust optimum stays within the acceptable bound, while the nominal optimum violates the bound in cases 6 and 15. In Figure 3.33 we observe that both Monte Carlo optima also violate the bound in cases 6 and 15.

Figure 3.32: Sensitivity analysis of the welded beam optima: Δf_cost of the nominal and robust optima over the 15 cases.

Figure 3.33: Sensitivity analysis of the welded beam optima: Δf_cost of the two Monte Carlo optima over the 15 cases.

In this chapter, our robust optimization method looks only at the objective robustness of a design, and we implicitly assume that the robust optimum design remains feasible when the variations occur. To verify the validity of this assumption, we calculated the constraints of the robust optimum for the 15 random cases. The results are shown in Table 3.3. In this table we calculated only the constraints g₁, g₂, g₃, and g₄, because g₅ and g₆ are independent of c₃ and L. As shown in Table 3.3, in all 15 cases the constraints of the robust optimum are still satisfied (g ≤ 0), so our feasibility assumption holds.

Table 3.3: Constraints of the robust optimum design.
Case #   g₁       g₂       g₃       g₄
1        -0.016   -0.204   -0.956   -0.134
2        -0.004   -0.190   -0.954   -0.109
3        -0.004   -0.190   -0.954   -0.109
4        -0.014   -0.201   -0.955   -0.128
5        -0.010   -0.196   -0.955   -0.120
6        -0.011   -0.198   -0.955   -0.124
7        -0.008   -0.194   -0.954   -0.116
8        -0.014   -0.201   -0.955   -0.128
9        -0.002   -0.188   -0.953   -0.105
10       -0.017   -0.205   -0.956   -0.135
11       -0.009   -0.195   -0.955   -0.119
12       -0.004   -0.190   -0.954   -0.109
13       -0.016   -0.204   -0.956   -0.134
14       -0.014   -0.201   -0.955   -0.129
15       -0.015   -0.202   -0.956   -0.131

Interestingly, if we look into the constraints of the nominal optimum, we find that constraints g₁, g₂, g₃, and g₄ of this design are active or nearly active, and when c₃ and L change, g₁ and g₄ are violated. So, in some sense the robust optimum provides design robustness not only in terms of the objective, but in terms of feasibility as well, although obviously this observation is not general and is valid for this particular example only. We will discuss our feasibility robustness approach in Chapter 5.

3.4.4. Design of a Compression Spring

This example is from Arora (2001) and has been modified to demonstrate the application of our robust optimization method. Consider a coil spring loaded in compression, as shown in Figure 3.34. Before the force P = 10 lbf is applied, the spring is at its free length. The spring is to be used as an energy-storing device, and we are designing it to have as large a restoring force as possible by maximizing the axial deflection δ. There are three variables that affect the design: the wire diameter (d), the mean coil diameter (D), and the number of active coils (N). The wire diameter and the mean coil diameter are measured in inches, and the number of active coils must be an integer between 2 and 15.

Figure 3.34: A compression spring (wire diameter d, mean coil diameter D, load P, deflection δ).

To guarantee a proper design, five constraints are imposed.
The wires of a spring under compression experience twisting, so to prevent shear failure, the shear stress due to this twist is constrained to be less than τ_A = 80 ksi. In case of a dynamic load P, we would like to avoid resonance by requiring that the surge wave frequency of the spring be greater than ω_A = 100 Hz. The outer diameter of the spring is constrained to be less than OD_A = 1.5 in, and the total mass of the spring is constrained to be less than M_A = 2.309 × 10⁻⁵ lbm. To prevent an unrealistic design, the spring deflection is constrained to be less than δ_A = 0.75 in (objective constraint). In addition, there are lower and upper bound constraints on the design variables. The mathematical formulation of the problem is shown in Eq. (3.34). In this formulation, G = 1.15 × 10⁷ psi is the shear modulus, Q = 2 is the number of inactive coils, and ρ = 7.383 × 10⁻⁴ lbm/in³ is the mass density. For a more detailed analysis of a spring design problem, consult Spotts (1953) or Wahl (1963).

maximize over (d, D, N): δ = 8 P D³ N/(G d⁴)
subject to: g₁ = τ/τ_A − 1 ≤ 0; g₂ = 1 − ω/ω_A ≤ 0
g₃ = (D + d)/OD_A − 1 ≤ 0; g₄ = M/M_A − 1 ≤ 0
g₅ = δ/δ_A − 1 ≤ 0
0.05 ≤ d ≤ 0.20; 0.25 ≤ D ≤ 1.30; 2 ≤ N ≤ 15    (3.34)

where:
τ = [8 P D/(π d³)] [(4D − d)/(4D − 4d) + 0.615 d/D]    (3.35)
ω = [d/(2 π N D²)] [G/(2 ρ)]^(1/2)    (3.36)
M = (π² D d² ρ/4) (N + Q)    (3.37)

We use a GA (Goldberg, 1989) to solve Eq. (3.34) and obtain the nominal optimum design (d, D, N)* = (0.0519 in, 0.3616 in, 11) for a maximum deflection of δ* = 0.4985 in. At the nominal optimum, constraints g₁ and g₄ are active. The lower bound of the variable d is also nearly active. In our implementation we use a GA to solve this problem because one of the variables is an integer. However, other mixed-integer programming methods, such as the Branch and Bound algorithm (Belegundu and Chandrupatla, 1999), may also be used.
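The reported nominal optimum gives a direct numerical check of Eqs. (3.35) through (3.37) as reconstructed above. A sketch:

```python
import math

# Numerical check of the spring model, Eqs. (3.35)-(3.37) as reconstructed
# here, at the reported nominal optimum (d, D, N) = (0.0519 in, 0.3616 in, 11):
# g1 (shear stress) and g4 (mass) should be active, and delta ~ 0.4985 in.
P, G, Q, rho = 10.0, 1.15e7, 2, 7.383e-4
tau_A, M_A, omega_A = 8.0e4, 2.309e-5, 100.0

def spring(d, D, N):
    delta = 8.0 * P * D ** 3 * N / (G * d ** 4)                 # objective
    tau = (8.0 * P * D / (math.pi * d ** 3)) * (
        (4.0 * D - d) / (4.0 * D - 4.0 * d) + 0.615 * d / D)    # Eq. (3.35)
    omega = (d / (2.0 * math.pi * N * D * D)) * math.sqrt(
        G / (2.0 * rho))                                        # Eq. (3.36)
    M = (math.pi ** 2 * D * d * d * rho / 4.0) * (N + Q)        # Eq. (3.37)
    return delta, tau, omega, M

delta, tau, omega, M = spring(0.0519, 0.3616, 11)
print(round(delta, 4), round(tau / tau_A, 3), round(M / M_A, 3))
```

The computed deflection reproduces δ* = 0.4985 in to within rounding of the tabulated variables, and both τ/τ_A and M/M_A come out essentially equal to 1, confirming that g₁ and g₄ are the active constraints; the surge frequency ω is several times ω_A, so g₂ is slack.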
For comparison, if we relax the variable N to be continuous, the nominal optimum becomes (d, D, N)* = (0.0517 in, 0.3569 in, 11.293) for a maximum deflection of δ* = 0.50 in. Three of the problem's parameters have variability in them: the wire diameter d varies by ±Δd₀ = 0.001 in, the mean coil diameter D varies by ±ΔD₀ = 0.01 in, and the compression force P varies by ±ΔP₀ = 0.1 lbf. We want to minimize the sensitivity of the optimum design with respect to these variations. Using Δδ₀ = 0.075 in, we added our robustness constraint to Eq. (3.34) and solved it using a GA for the inner and outer optimization problems (η₀ = 1). The robust optimum obtained is (d, D, N)_R* = (0.0548 in, 0.4219 in, 8) for a deflection of δ* = 0.4633 in (a 0.0352 in difference from the nominal optimum). For comparison, we also solved the problem using the constrained worst-case gradient approach, where we added the gradient robustness constraint

Σ_{i=1}^{3} |∂δ/∂p_i| Δp₀,ᵢ ≤ Δδ₀

to Eq. (3.34). The optimum obtained using this worst-case gradient approach is (d, D, N)_G* = (0.0560 in, 0.3319 in, 9) for a deflection of δ* = 0.2321 in. The values of the nominal, robust, and gradient optima are shown in Table 3.4.

Table 3.4: Optimum designs of the compression spring.

          Nominal Optimum   Robust Optimum   Gradient Optimum
δ (in)    0.4985            0.4633           0.2321
η         0.8248            0.9942           1.9109
d (in)    0.0519            0.0548           0.056
D (in)    0.3616            0.4219           0.3319
N         11                8                9
F_call    N/A               250              N/A

We see in Table 3.4 that the nominal optimum has the largest deflection, as expected, followed by the robust optimum and then the gradient optimum. If we look at the η value, our robust optimum satisfies the η ≥ 1 requirement with an equality (rounded up), the nominal optimum does not satisfy the requirement, while the gradient optimum over-satisfies it. Although the gradient optimum more than satisfies our robustness requirement, it comes with a significant reduction in performance (less than half of the nominal value).
This observation brings up a very important fact regarding our robust optimization method. Although we label our WCSR measure as a "worst-case" approach, it is not really worst-case per se; at least not in the traditional sense. Traditionally, the term "worst-case" refers to a situation where all the worst possible variabilities occur simultaneously. Our WCSR measure is different. Instead of blindly using the worst variability for each parameter, it implicitly takes into account the fact that some of these variations might cancel out. This is the reason why our robust optimization method can obtain a design with such high performance while still satisfying our robustness requirement. This capability comes with a price, however: the method needs to perform some function evaluations to gather the robustness information (~250 function calls per design in this example). To validate the robustness of the optimum designs shown in Table 3.4, we perturbed (d, D, P) 20 times within the (Δd₀, ΔD₀, ΔP₀) range, and each time calculated the deviation of the δ value of each design from its original value. The results of our analysis are shown in Figures 3.35 through 3.37. In these figures the dashed lines are the ±Δδ₀ limits. We can make a couple of observations from these figures. We see in Figure 3.35 that the nominal optimum violates the ±Δδ₀ requirement in case 11, while the robust and the gradient optima never violate ±Δδ₀ (Figure 3.36 and Figure 3.37, respectively). We also observe in these figures that on average the Δδ of the nominal optimum is largest, that of the robust optimum is less, and that of the gradient optimum is least. This observation verifies the information provided by their η values.

Figure 3.35: Sensitivity analysis of the nominal optimum.
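Because δ is monotone in each of d, D, and P, the extreme deviations over the variation box occur at its corners, so the perturbation study above can be bounded by checking all 2³ corners instead of random samples. The sketch below does so, again assuming P = 10 lbf (an assumption of this sketch). Note that the robust design's index is η = 0.9942, marginally below 1, so its extreme-corner deviation sits essentially at the Δδ₀ limit; the assertions therefore check only the nominal violation and the relative ordering.

```python
from itertools import product

DP0 = (0.001, 0.01, 0.1)  # variation ranges (dd0, dD0, dP0)

def delta_fn(d, D, N, P):
    # Eq. (3.34) deflection; nominal P = 10 lbf is an assumption here.
    return 8.0 * P * D**3 * N / (1.15e7 * d**4)

def max_corner_deviation(d, D, N, P=10.0):
    """Largest |delta(perturbed) - delta(nominal)| over the 8 corners of the
    (dd0, dD0, dP0) variation box. A valid worst case here because delta is
    monotone in each parameter."""
    base = delta_fn(d, D, N, P)
    worst = 0.0
    for s1, s2, s3 in product((-1, 1), repeat=3):
        val = delta_fn(d + s1*DP0[0], D + s2*DP0[1], N, P + s3*DP0[2])
        worst = max(worst, abs(val - base))
    return worst

dev_nominal = max_corner_deviation(0.0519, 0.3616, 11)  # exceeds 0.075
dev_robust  = max_corner_deviation(0.0548, 0.4219, 8)   # near the 0.075 limit
```

The nominal optimum exceeds the ±0.075 in limit at its worst corner, consistent with the violation seen in Figure 3.35, while the robust optimum's worst corner sits at about the limit, consistent with its η ≈ 1.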
Figure 3.36: Sensitivity analysis of the robust optimum.

Figure 3.37: Sensitivity analysis of the gradient optimum.

3.5. SUMMARY

• For each design alternative, there is a unique set of Δp that indicates how much variation the design can absorb before the ±Δf₀ limit is violated. This set is called the sensitivity set (S_f) of that design.

• The size of S_f is a measure of how robust a design is: the larger S_f, the more robust the design. In addition, since S_f measures how much variation a design can absorb, if we can control the actual variations such that they are always within S_f, then we are guaranteed that the performance of the design is always within ±Δf₀.

• The plot of S_f in the Δp-space is called the Sensitivity Region (SR) of the design. The Δp points inside, outside, and on the SR boundary satisfy [Δf(Δp)]² < [Δf₀]², [Δf(Δp)]² > [Δf₀]², and [Δf(Δp)]² = [Δf₀]², respectively.

• The size of an SR is a measure of a design's robustness: the larger the SR, the more robust the design. However, since an SR is typically asymmetric, we have to account for the directional sensitivity of the design as well.

• We use the most sensitive direction in approximating the SR of a design to account for the worst-case situation. This worst-case approximation to an SR is called the Worst Case Sensitivity Region (WCSR), and is defined as the smallest hyper-sphere inside the SR that touches the SR boundary at at least one point.

• The size of a WCSR is a worst-case measure of a design's robustness: the larger the WCSR, the more robust the design. The radius of a WCSR (R_f) can be calculated by solving the optimization problem shown in Eq. (3.4). Since WCSR size has a semi-cardinal scale, it is not necessary to calculate its volume. The radius value is sufficient to compare the robustness of two or more designs.
• If the magnitudes of the Δp_i are significantly different, then we need to normalize the optimization problem used to calculate R_f. We use a single-valued normalization with Δp₀ as the reference point.

• To determine whether or not a design meets our robustness requirement, we use the robustness index $\eta = (G)^{-1/2}\,R_f$. A design is robust if η ≥ 1, and is not robust if η < 1.

• In robust optimization we want to simultaneously maximize performance and robustness. These objectives are often conflicting, so to avoid having to make a trade-off, we use a constrained robustness approach instead.

• Using the constrained robustness approach, our robust optimization method searches for the robust optimum design by solving two optimization problems: an inner and an outer optimization. The inner optimization is used to calculate a design's robustness, which is then used in the outer optimization.

CHAPTER 4

MULTI-OBJECTIVE ROBUST OPTIMIZATION

4.1. INTRODUCTION

When an optimization problem has only one objective, the notion of a robust optimum is straightforward because the only trade-off present is between the objective and the robustness of the design. When a problem has multiple objectives, however, it is not quite so, because the same variability might affect some or all of the objectives, and in turn, a design will have different robustness behavior with respect to different objectives. If we follow the conventional definition of a robust optimum design in the context of multi-objective optimization, we might end up making trade-offs between the performance of the design in terms of one objective and its robustness in terms of another objective. Clearly, not only is this difficult to do, it is not very useful either. The purpose of this chapter is to present the concept of multi-objective robustness, a method to measure such robustness, and an optimization scheme that uses this measure to obtain a design that is optimum and robust multi-objectively.
The method to measure the multi-objective robustness of a design presented in this chapter is a generalization of the WCSR measure we presented in Chapter 3, so there will be similarities, but with important differences. As in Chapter 3, here we look into the robustness of a design in terms of its objectives only. Feasibility robustness will be covered in Chapter 5. Several examples are given at the end of the chapter to demonstrate applications of the method.

4.2. BASIC CONCEPTS

Before we present the method to obtain robust solutions of a multi-objective problem, we need to first discuss what it means for a design to be multi-objectively robust and optimum.

4.2.1. Multi-Objective Robustness

When there is only one objective, a design is termed "robust" if its objective value is insensitive to parameter variations. When there are multiple objectives, a design has different robustness behaviors depending on the objective in question. So by direct extension, a design is termed "robust" multi-objectively if each of its objective values is insensitive to parameter variations. In other words, a design is robust (or insensitive) if Δf_i is small for all i = 1,…,M. In the f-space, the robustness of a design is depicted graphically as the hyper-rectangle constructed from the Δf_i ranges (Figure 4.1); the smaller this hyper-rectangle (i.e., a rectangle in two dimensions, as shown in Figure 4.1), the more robust the design. In Figure 4.1, the solid triangles denote the nominal f values of the designs A, B, C, and D shown.

Figure 4.1: Graphical illustration of multi-objective robustness.

As shown in Figure 4.1, it is very often the case that a design is less sensitive in one objective than in the other objective(s) (design B). Also, the negative and positive ranges of Δf_i do not necessarily have to be equal (designs A and D). It should be noted that the term "insensitive" is subjective and depends on the preferences of the decision maker.
If the decision maker is risk averse, then for a design to be insensitive, its Δf_i must be very small for all objectives i = 1,…,M. On the other hand, if the decision maker is risk prone, then even a rather large Δf_i is acceptable. In this chapter, we deem a design multi-objectively robust if |Δf_i| ≤ |Δf_{i,0}| for all i = 1,…,M, where Δf₀ = [Δf_{1,0}, …, Δf_{M,0}] is the vector of acceptable f variation ranges determined by the decision maker. The multi-objective robustness described previously is defined for one design only. If we have a set of designs, the overall robustness of the set is determined by looking at the robustness of each and every design, one at a time. In other words, our definition of multi-objective robustness is for point robustness. Another type of multi-objective robustness worth mentioning is that of set robustness. Unlike point robustness, set robustness is not defined for one design, but rather for a set of designs, and this set must be a trade-off set. A trade-off set is a set of designs in which all designs in the set are non-inferior with respect to each other (recall from Section 2.2 that this set is different from a Pareto set). We define a set of trade-off designs to observe set robustness if the set remains a trade-off set when p varies. Figure 4.2 shows a comparison between a set of trade-off designs that observes point robustness and a set that observes set robustness. We see in Figure 4.2(a) that the Δf of each design in the set is small, but for some of the Δf rectangles, there are portions that lie in either the dominant or inferior region of another Δf rectangle. What this means is that when p varies, it is possible that these points are no longer trade-off points (i.e., one dominates or is dominated by others). In contrast, the Δf of each design in the set shown in Figure 4.2(b) is rather large (i.e., not point robustness), but all the Δf rectangles lie in the non-inferior region of each other.
This means that no matter how the points change, these points will always remain a trade-off set. It can be readily seen in Figure 4.2 that as Δf becomes infinitely small, point robustness implies set robustness and vice versa.

Figure 4.2: Comparison between point and set robustness. (a) Point robustness: 'a' can be inferior to 'b'; 'b' can dominate 'c'. (b) Set robustness: 'a' is always non-inferior with 'b'; 'c' is always non-inferior with 'b'.

In multi-objective robust optimization, we are interested in the point robustness of a design only, and not its set robustness. This is because, practically, the set robustness of a design is not important. We will discuss this issue further in the next section.

4.2.2. Multi-Objective Robust Optimality

The goal of Multi-Objective Robust Optimization (MORO) is to obtain a set of design alternatives that are: (i) multi-objectively robust (point robustness), and (ii) Pareto optimum for the nominal p. Such a set of design alternatives is termed multi-objectively robust and optimum, and we refer to them as robust Pareto solutions. The set of all robust Pareto solutions is called the robust Pareto set. Ideally, one wants the robust Pareto set to be the same as the nominal Pareto set. However, the robust Pareto set is generally inferior to the nominal Pareto set, as shown in Figure 4.3(a). It is also common for the robust Pareto set and the nominal Pareto set to overlap, as in Figure 4.3(b), or for the robust Pareto set to be a subset of the nominal Pareto set, as in Figure 4.3(c). Obviously, the robust Pareto set cannot be superior to, or be a superset of, the nominal Pareto set. One might argue that in the case of Figure 4.3(b), the overlap portion of the robust Pareto frontier is better than the non-overlap portion because not only is this portion robust, but it also belongs to the nominal Pareto frontier. In some sense, it is.
However, objective-wise this overlap portion does not dominate the non-overlap portion, so multi-objectively the overlap and non-overlap portions are equally optimum (i.e., incomparable).

Figure 4.3: Comparison between nominal and robust Pareto sets.

When p varies, f(x,p) deviates from its nominal value. Because the robust Pareto set is obtained based on point robustness, it is possible that when p changes the robust Pareto set is no longer a trade-off set. However, although the solution to a multi-objective optimization problem is a set of trade-off designs, ultimately the designer will choose a single design to implement. The purpose of including a design's robustness as an additional optimization criterion is to ensure that the one design selected is robust with respect to variations in p. As such, it does not matter whether the robust Pareto designs as a set are insensitive to p. In other words, the set robustness of the robust Pareto set is of no real-world interest beyond our mathematical curiosity.

4.3. TWO-SIDED SENSITIVITY OF MULTIPLE FUNCTIONS

We are now ready to present our method for measuring the multi-objective robustness of a design. The method presented here is a generalization of the WCSR method presented in Chapter 3 for single objective robustness. As before, the robustness measured here is a two-sided robustness, i.e., we account for both the increase and decrease in f.

4.3.1. Generalized Sensitivity Set and Sensitivity Region

Let x₀ be the design alternative whose sensitivity we want to measure, and let p₀ be the nominal parameter value with respect to which the objective values of this design are defined, i.e., f(x₀,p₀).
Given the acceptable variation ranges Δf₀ = [Δf_{1,0}, Δf_{2,0}, …, Δf_{M,0}], there is a set of Δp values such that the Δf due to these Δp values falls within the ranges ±Δf_{i,0} for all i = 1,…,M. This set is the generalized sensitivity set S_f, and it is mathematically defined as shown in Eq. (4.1). As before, we use the square of each Δf_i value to account for negative values, i.e., a two-sided sensitivity. We do not make a distinction between the single and multiple objective S_f because the single objective S_f is simply the multi-objective version of S_f for M = 1.

$$S_f(\mathbf{x}_0) = \left\{ \Delta\mathbf{p} \in \mathbb{R}^G : [\Delta f_i]^2 \le [\Delta f_{i,0}]^2,\; i = 1,\ldots,M \right\},\quad \text{where } \Delta f_i = f_i(\mathbf{x}_0, \mathbf{p}_0 + \Delta\mathbf{p}) - f_i(\mathbf{x}_0, \mathbf{p}_0) \qquad (4.1)$$

Similar to the single objective case (recall Chapter 3), S_f determines how much Δp a design can absorb for the given Δf₀. So, the larger S_f, the more robust the design, and if we can make sure that Δp is always a member of S_f, then ±Δf₀ will always be satisfied. The plot of the generalized S_f in the Δp-space is the generalized Sensitivity Region (SR) of the design. If we let the notation S_{f,i} be the set of Δp's such that [Δf_i]² ≤ [Δf_{i,0}]², i.e., the sensitivity set with respect to the i-th objective, then it is easily seen that S_f is really just the intersection of all the S_{f,i}'s:

$$S_f = S_{f,1} \cap \cdots \cap S_{f,M} \qquad (4.2)$$

The above observation implies that the generalized SR corresponding to S_f is simply the intersection of all the SRs of the S_{f,i}'s (Figure 4.4). The existence and uniqueness of S_f also follow directly from the existence and uniqueness of each S_{f,i}. The fact that the generalized SR is an intersection of each objective's SR has another important consequence: it defines the requirements that must be met for a point in the Δp-space to be inside, outside, or on the boundary of the generalized SR.
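The membership test of Eq. (4.1) and the intersection property of Eq. (4.2) can be sketched directly. The two objective functions below are hypothetical, chosen only to illustrate the definitions, not taken from the dissertation's examples.

```python
# Membership test for the generalized sensitivity set, Eq. (4.1), and a check
# of the intersection property, Eq. (4.2). The two objectives are hypothetical.
def f1(p): return p[0]**2 + p[1]          # hypothetical objective 1
def f2(p): return p[0] - 2.0 * p[1]       # hypothetical objective 2

P0 = (1.0, 1.0)                           # nominal parameter vector
DF0 = (1.0, 1.5)                          # acceptable ranges (df_1,0, df_2,0)
OBJS = (f1, f2)

def in_Sfi(dp, i):
    """dp is in S_f,i  iff  [f_i(p0 + dp) - f_i(p0)]^2 <= [df_i,0]^2."""
    p = tuple(P0[k] + dp[k] for k in range(2))
    return (OBJS[i](p) - OBJS[i](P0))**2 <= DF0[i]**2

def in_Sf(dp):
    """dp is in the generalized S_f iff every objective's bound holds."""
    return all(in_Sfi(dp, i) for i in range(len(OBJS)))

# Eq. (4.2): membership in S_f equals membership in every S_f,i -- check on a grid.
grid = [(0.1*a, 0.1*b) for a in range(-10, 11) for b in range(-10, 11)]
intersection_holds = all(
    in_Sf(dp) == (in_Sfi(dp, 0) and in_Sfi(dp, 1)) for dp in grid)
```

For instance, Δp = (0, 0) is trivially inside S_f, while Δp = (1, 0) falls outside because it violates the first objective's bound (Δf₁ = 3 > 1).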
A point inside the region (the shaded region in Figure 4.4) implies that it belongs to all S_{f,i}'s, but it is not on the boundary of any S_{f,i}; hence it must satisfy [Δf_i]² < [Δf_{i,0}]² for all i = 1,…,M. A point outside the region implies that it does not belong to at least one S_{f,i}; thus for this point there must be at least one i such that [Δf_i]² > [Δf_{i,0}]². A point on the boundary of the region implies that not only does it belong to all S_{f,i}'s, but it is also on the boundary of at least one S_{f,i}; therefore it must satisfy [Δf_i]² ≤ [Δf_{i,0}]² with a strict equality for at least one i.

Figure 4.4: Graphical definition of the generalized SR (the intersection of SR₁ and SR₂ in the (Δp₁,Δp₂)-space is the overall SR).

Graphically, the points inside, outside, and on the perimeter of the generalized SR correspond to points inside, outside, and on the boundary of the hyper-rectangle formed by the ±Δf₀ ranges, respectively (Figure 4.5).

Figure 4.5: Correspondence between points in the Δp-space and the hyper-rectangle formed by the ±Δf₀ ranges in the f-space.

• A point inside the generalized SR satisfies [Δf_i]² < [Δf_{i,0}]² for all i = 1,…,M; a point outside satisfies [Δf_i]² > [Δf_{i,0}]² for at least one i. A point on the boundary satisfies [Δf_i]² ≤ [Δf_{i,0}]² with a strict equality for at least one i.

• The generalized WCSR is a hyper-sphere inside the generalized SR that touches the SR boundary at the closest point from the origin. The radius of the WCSR can be calculated by solving a single-objective optimization problem with an equality constraint. If the magnitudes of Δp are different, this optimization problem needs to be normalized to account for scale importance.

• To avoid difficulties in making trade-offs between the objectives, we use a constrained approach to MORO, where we treat the robustness of a design as a constraint instead of an objective.

• Using the constrained approach, our MORO method obtains the robust Pareto set by first eliminating all non-robust designs and then selecting the set of non-inferior designs from those designs that are not eliminated.
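The two-step selection in the last bullet, discarding non-robust designs and then keeping the non-inferior ones, can be sketched as follows. The candidate list and its η values are hypothetical, invented only to exercise the filter; objectives are minimized.

```python
# Sketch of the constrained MORO selection: discard designs whose robustness
# index eta < 1, then keep the non-inferior (non-dominated) designs among the
# survivors. The candidate data below is hypothetical.
def dominates(fa, fb):
    """fa dominates fb if it is no worse in every objective, better in one."""
    return all(a <= b for a, b in zip(fa, fb)) and \
           any(a < b for a, b in zip(fa, fb))

def robust_pareto(candidates, eta_min=1.0):
    robust = [c for c in candidates if c["eta"] >= eta_min]
    return [c for c in robust
            if not any(dominates(o["f"], c["f"]) for o in robust if o is not c)]

# Hypothetical candidates: name, objective vector (f1, f2), robustness index.
designs = [
    {"name": "A", "f": (1.0, 5.0), "eta": 1.2},
    {"name": "B", "f": (2.0, 2.0), "eta": 0.8},   # good f, but not robust
    {"name": "C", "f": (3.0, 3.0), "eta": 1.5},
    {"name": "D", "f": (2.0, 4.0), "eta": 1.0},
    {"name": "E", "f": (4.0, 4.0), "eta": 2.0},   # robust, but dominated by C
]
survivors = sorted(d["name"] for d in robust_pareto(designs))
```

Design B is removed in the robustness step despite its good objective values, and design E is removed in the non-dominance step, leaving the robust Pareto set {A, C, D}.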
CHAPTER 5

FEASIBILITY ROBUST OPTIMIZATION

5.1. INTRODUCTION

In the previous two chapters we presented a method to measure the objective robustness of a design, and a scheme to use that measure to obtain a robust optimum design. In those chapters, we implicitly assumed that the design will always remain feasible when the variations occur. Obviously, this assumption may not be valid in general. Feasibility robustness of a design must be explicitly accounted for and enforced if we want the design to remain feasible even as the uncontrollable parameters vary. The purpose of this chapter is to develop a method to measure the feasibility robustness of a design, and to develop an optimization scheme that uses this measure to guarantee the feasibility of a design. Similar to Chapters 3 and 4, the robustness measure presented here is also based on the sensitivity region concept, but with a significant difference. Unlike objective robustness, which was "two-sided", the feasibility robustness of a design with respect to constraints is "one-sided". This one-sided feasibility robustness implies that we are interested in limiting the constraint deviation along one direction only, either increase or decrease but not both. In our case, we limit the increase in the constraints because we use the notation g(x,p) ≤ 0 for constraints. We will point out this important difference in more detail in this chapter. For clarity and simplicity, in this chapter we will focus entirely on the feasibility robustness of a design, and we will not account for objective robustness. We will discuss the combined objective and feasibility robust optimization approach in the next chapter. In the next few sections we develop the concept of the one-sided sensitivity region for single and multiple constraints, and then present an approach to use it in an optimization routine. At the end of the chapter we demonstrate applications of our method to numerical and engineering examples.

5.2.
ONE-SIDED SENSITIVITY MEASURE

We begin our discussion by presenting a method to measure the sensitivity (robustness) of a design in terms of constraint functions using the sensitivity region concept presented in previous chapters. We present our approach for the single constraint case first and then extend it to the more general case of multiple constraints.

5.2.1. Single Constraint

Let x₀ be the design alternative whose sensitivity we want to measure, and let g(x₀,p) be the constraint whose change in value is of interest. Constraint g(x₀,p) depends on two factors, the design x₀ itself and the parameter p. Our goal is to get a measure of how g changes when p varies by some Δp. Suppose the value of g(x₀,p) is allowed to decrease indefinitely, but is allowed to increase only by some non-negative amount Δg₀ (i.e., a one-sided sensitivity). (We choose to limit the increase in g, instead of the decrease, to be consistent with our constraint notation "≤".) For this Δg₀ increase, there is a set of Δp's such that:

$$S_g(\mathbf{x}_0) = \left\{ \Delta\mathbf{p} \in \mathbb{R}^G : g(\mathbf{x}_0, \mathbf{p} + \Delta\mathbf{p}) \le g(\mathbf{x}_0, \mathbf{p}) + \Delta g_0 \right\} \qquad (5.1)$$

This set of Δp's is called the "feasibility sensitivity set" (S_g) of design x₀. Notice that the feasibility sensitivity set S_g does not have squared terms as in the objective sensitivity set S_f that was presented in Chapters 3 and 4. This is because the feasibility sensitivity set is a one-sided sensitivity measure. Rearranging the inequality in Eq. (5.1), we obtain:

$$g(\mathbf{x}_0, \mathbf{p} + \Delta\mathbf{p}) - g(\mathbf{x}_0, \mathbf{p}) \le \Delta g_0 \qquad (5.2)$$

$$\Delta g(\Delta\mathbf{p}) \le \Delta g_0 \qquad (5.3)$$

Eq. (5.3) shows an important property of S_g: for all Δp's in S_g, the change in g due to Δp is always less than or equal to the allowable increase Δg₀. This implies that S_g is an indicator of how much Δp design x₀ can "absorb" while remaining within the allowable limit. As the number of elements in S_g increases, the design can allow more changes in p.
So, like the objective S_f, the larger S_g, the more feasibly robust the design; and if we can make sure that Δp is always a member of S_g, then we are guaranteed that Δg₀ will always be satisfied. Plotting S_g in the Δp-space, we obtain the Feasibility Sensitivity Region (FSR) of the design. The size of the FSR corresponds to the size of S_g, and therefore FSR size is also a measure of a design's robustness: the larger it is, the more feasibly robust the design. Based on Eq. (5.3), a Δp point inside, outside, and on the boundary of the FSR must satisfy Δg(Δp) < Δg₀, Δg(Δp) > Δg₀, and Δg(Δp) = Δg₀, respectively.

Example 5.1

Let us demonstrate an application of the above concept with a simple constraint function. Suppose we have the inequality constraint $g(\mathbf{x}, \mathbf{p}) = x_1^{10 p_1} + p_2^3 - x_2/3 \le 0$, where the parameters are p = [0, −0.5] and the allowable increase for the constraint is Δg₀ = 1.125. If the design of interest is x₀ = [1.1, 3.0], determine the feasibility sensitivity set S_g for this design, and show its FSR in the (Δp₁,Δp₂)-space.

Solution

The nominal value of the constraint is g(x₀,p) = −0.125. Using Δg₀ = 1.125, we obtain:

$$S_g(\mathbf{x}_0) = \left\{ (\Delta p_1, \Delta p_2) : (1.1)^{10\Delta p_1} + (\Delta p_2 - 0.5)^3 \le 2.0 \right\}$$

Plotting this inequality in the (Δp₁,Δp₂)-space, we obtain the FSR of the design as shown in Figure 5.1. As a quick verification, for (Δp₁,Δp₂) = (0,0), inside the FSR, the inequality gives 0.875 < 2.0. For (Δp₁,Δp₂) = (1.0,1.0), outside the FSR, the inequality gives 2.718 > 2.0. For (Δp₁,Δp₂) = (0,1.5), on the FSR boundary, the inequality gives 2.0 = 2.0.

Figure 5.1: Feasibility Sensitivity Region for the inequality constraint.

Notice that the FSR in Figure 5.1 is unbounded and only has one boundary. This is in contrast to the objective SR which, even when unbounded, still has two boundaries (recall Chapters 3 and 4).
Again, the reason for this difference is because the FSR is a one-sided sensitivity measure. The FSR boundary corresponds to the Δg₀ allowable increase, but since there is no limit on the decrease, there is no other boundary. ∎

As seen in Figure 5.1, like the objective SR, the FSR also has the drawback of being asymmetric. Because of this asymmetry, the directional sensitivity of a design becomes an important issue. To account for this directional sensitivity, we once again use the worst-case representation of the FSR, hereafter called the "Feasibility Worst Case Sensitivity Region" (FWCSR), as a measure of a design's feasibility robustness. Graphically, the FWCSR of a design is a hyper-sphere inside the FSR of that design that touches the boundary at the closest point from the origin in the Δp-space. The FWCSR is typically, but not necessarily, tangent to the FSR at the point of contact (e.g., it is not tangent when there is a cusp at the point of contact).

Example 5.2

Using the FSR obtained in Example 5.1, determine the FWCSR of the design x₀ = [1.1, 3.0] for the constraint $g(\mathbf{x}, \mathbf{p}) = x_1^{10 p_1} + p_2^3 - x_2/3 \le 0$.

Solution

The FSR of the design is shown in Figure 5.1. Its FWCSR is simply the smallest hyper-sphere inside it that touches it at the point closest to the origin, as shown in Figure 5.2. In this example, the FWCSR is derived from the non-normalized (Δp₁,Δp₂)-space, but it is valid because the scales of Δp₁ and Δp₂ are the same. We will discuss normalization in Section 5.3. ∎

Figure 5.2: Feasibility WCSR for the inequality constraint.

The point of contact between the FWCSR and the FSR is the point on the FSR boundary that is closest to the origin. Since the point of contact satisfies Δg(Δp) = Δg₀, the FWCSR radius can be calculated by solving the following optimization problem:

$$\begin{aligned}
\text{minimize} \quad & R_g = \left[\sum_{i=1}^{G} (\Delta p_i)^2\right]^{1/2} \\
\text{subject to} \quad & g(\mathbf{x}_0, \mathbf{p} + \Delta\mathbf{p}) = g(\mathbf{x}_0, \mathbf{p}) + \Delta g_0
\end{aligned} \qquad (5.4)$$
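Eq. (5.4) can also be solved by brute force, which is a useful cross-check on the analytic solution: sample directions in the (Δp₁,Δp₂)-space, march along each ray to the first crossing of the FSR boundary (1.1)^{10Δp₁} + (Δp₂ − 0.5)³ = 2.0 from Example 5.1, refine by bisection, and take the smallest crossing distance. This is a sketch, not the dissertation's inner-optimization solver (which uses a GA).

```python
import math

def boundary_excess(dp1, dp2):
    """g(x0, p + dp) - (g(x0, p) + dg0) for Example 5.1: positive outside
    the FSR, negative inside, zero on the boundary."""
    return 1.1 ** (10.0 * dp1) + (dp2 - 0.5) ** 3 - 2.0

def fwcsr_radius(n_dirs=3600, t_max=5.0, dt=0.01):
    """Brute-force version of Eq. (5.4): for each direction, find the first
    point where the FSR boundary is crossed; return the smallest such
    distance and the corresponding point of contact."""
    best_r, best_pt = float('inf'), None
    for k in range(n_dirs):
        th = 2.0 * math.pi * k / n_dirs
        c, s = math.cos(th), math.sin(th)
        t = dt
        while t <= t_max and boundary_excess(t * c, t * s) < 0.0:
            t += dt
        if t > t_max:
            continue                       # unbounded direction: no boundary
        lo, hi = t - dt, t                 # refine the crossing by bisection
        for _ in range(50):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if boundary_excess(mid * c, mid * s) < 0.0 \
                     else (lo, mid)
        if hi < best_r:
            best_r, best_pt = hi, (hi * c, hi * s)
    return best_r, best_pt

R_g, contact = fwcsr_radius()
```

The search recovers R_g ≈ 0.763 with a point of contact near (0.749, 0.147), in agreement with the analytic solution of Example 5.3 below.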
Using the radius calculated from Eq. (5.4), we could then calculate the size of the FWCSR. However, as in the objective robustness case, there is really no need to perform this calculation. The R_g value alone is sufficient for measuring the feasibility robustness of the design.

Example 5.3

Use Eq. (5.4) to calculate R_g in Example 5.2. Compare this value to the FWCSR graph shown in Figure 5.2.

Solution

The problem is simple enough to be solved analytically. For ease of differentiation, we use $R_g^2$ instead of $R_g$ as the objective. Taking the first derivative of the Lagrangian with respect to Δp₁ and Δp₂, we obtain the following equations (λ is the Lagrange multiplier):

$$2\Delta p_1 + \lambda\,(0.953)(1.1)^{10\Delta p_1} = 0$$

$$2\Delta p_2 + \lambda\,3(\Delta p_2 - 0.5)^2 = 0$$

Combined with the equality constraint $(1.1)^{10\Delta p_1} + (\Delta p_2 - 0.5)^3 - 2.0 = 0$, we have three independent equations with three unknowns (Δp₁, Δp₂, λ). Solving these equations, we obtain the point of contact (Δp₁,Δp₂) = (0.749, 0.147) for a FWCSR radius of R_g = 0.763 (the Lagrange multiplier is λ = −0.786). The Hessian of the Lagrangian at this (Δp₁,Δp₂) point is positive definite, so the sufficiency condition is satisfied. Graphically (see Figure 5.2), the point of contact is approximately at (Δp₁,Δp₂) = (0.75, 0.15) for a FWCSR radius of R_g = 0.765. This value is in agreement with the analytic value, thus confirming that solving Eq. (5.4) does indeed give us the FWCSR radius that we are looking for. ∎

5.2.2. Multiple Constraints

We can easily extend the FWCSR concept described in the last section to measure the robustness of a design when there are multiple constraints. Let g(x,p) = [g₁,…,g_J] be the constraint functions of interest, and Δg₀ = [Δg_{1,0},…,Δg_{J,0}] ≥ 0 be the vector of acceptable increments. Then there is a set of Δp's such that:

$$S_g(\mathbf{x}_0) = \left\{ \Delta\mathbf{p} \in \mathbb{R}^G : g_j(\mathbf{x}_0, \mathbf{p} + \Delta\mathbf{p}) \le g_j(\mathbf{x}_0, \mathbf{p}) + \Delta g_{j,0},\; j = 1,\ldots,J \right\} \qquad (5.5)$$

$$S_g(\mathbf{x}_0) = \left\{ \Delta\mathbf{p} \in \mathbb{R}^G : \Delta g_j(\Delta\mathbf{p}) \le \Delta g_{j,0},\; j = 1,\ldots,J \right\} \qquad (5.6)$$
This set is the generalized feasibility sensitivity set S_g. Notice how Eq. (5.6) collapses to Eq. (5.1) when J = 1. Recall from Chapter 4 that the generalized S_f is really just the intersection of all the S_{f,i}'s. The same property also holds for the generalized S_g. If we let the notation S_{g,j} be the set of Δp's such that Δg_j(Δp) ≤ Δg_{j,0}, then it is easily seen that the overall S_g is simply the intersection of all the S_{g,j}'s:

$$S_g = S_{g,1} \cap \cdots \cap S_{g,J} \qquad (5.7)$$

Consequently, the overall FSR of a design is formed by the intersection of the FSRs of each constraint, and the overall FWCSR is then defined for this overall FSR. Utilizing the fact that the overall FSR is an intersection of all the constraints' FSRs, we can define the requirements for a Δp point to be inside, outside, or on the boundary of the overall FSR. Similar to the generalized SR, a point inside the overall FSR satisfies Δg_j(Δp) < Δg_{j,0} for all j = 1,…,J. For a point to be outside of the FSR, there must exist at least one j such that Δg_j(Δp) > Δg_{j,0}. A point on the boundary of the FSR satisfies Δg_j(Δp) ≤ Δg_{j,0} with a strict equality for at least one j.

Example 5.4

Going back to our inequality constraint from Examples 5.1-5.3, suppose there is a second constraint $g_2(\mathbf{x}, \mathbf{p}) = (2.718)^{(p_2 + 0.5)/2} - p_1 - x_2/3 \le 0$ with the same parameter values p = [0, −0.5]. If the increment limit for this new constraint is Δg_{2,0} = 1.0, determine the overall FSR and FWCSR of the design x₀ = [1.1, 3.0].
The nominal value of the second constraint is g 2 (x 0 ,p) = 0, so the feasibility sensitivity set of the second constraint is { }02)p()718.2(:)p,p()( 1 2/p 210g,2 2 ?+????? ? xS . Plotting these inequalities in the (?p 1 ,?p 2 )-space, we obtain the overall FSR of the design as shown in Figure 5.3. ?p 1 ?p 2 FSR of g 2 FSR of g 1 Overall FSR FWCSR -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 -2 -1.5 -1 -0.5 0.5 1 1.5 2 Figure 5.3: Overall FSR and FWCSR for the two constraints. As a verification, the point (?p 1 ,?p 2 ) = (0,0) is inside the overall FSR, and it satisfies both S g,1 inequality (0.875 < 2.0) and S g,2 inequality (-1 < 0). The point (?p 1 ,?p 2 ) = (1,1) is outside the overall FSR. It satisfies S g,2 inequality (-1.35 < 0), but not S g,1 inequality 136 (2.718 > 2.0). The point (?p 1 ,?p 2 ) = (0,2) is also outside the overall FSR, but it does not satisfy either S g,1 inequality (4.375 > 2.0) or S g,2 inequality (0.718 > 0). The point (?p 1 ,?p 2 ) = (-1,0) is on the boundary of the overall FSR. It satisfies S g,1 inequality (0.26 < 2.0), and satisfies S g,2 inequality with a strict equality (0 = 0). The overall FWCSR of the design is also shown in Figure 5.3. We see in this figure that the overall FWCSR is the same as the FWCSR of the first constraint. This is because the worst-case scenario for this particular design is governed by the first constraint. ? Using the requirement for a point to be on the overall FSR boundary, the FWCSR radius of a multiple-constraint design can be calculated by solving the optimization problem shown below (notice again how Eq. (5.8) collapses into Eq. (5.4) when J=1): ,000 ,000 2 1 G 2 g g),(g),(g: J,...,1 g),(g),(g:subject to p)(Rminimize jjj jjj i i j j ?=??+? = ????+ ? ? ? ? ? ? ?=? ? ? ppp ppp p p xx xx (5.8) Notice in Eq. (5.8) that at least one of the J inequality constraints must be active. Based on this information, we can simplify these constraints. 
If we rearrange each inequality constraint, we obtain the following:

$$\frac{g_j(\mathbf{x}_0, \mathbf{p} + \Delta\mathbf{p}) - g_j(\mathbf{x}_0, \mathbf{p})}{\Delta g_{j,0}} - 1 \le 0, \quad j = 1,\ldots,J \qquad (5.9)$$

For a feasible Δp point, the maximum of each modified constraint is 0, and for a constraint to be active, it has to be at its maximum. Therefore, for a feasible Δp point with at least one active constraint, Eq. (5.10) below must be satisfied:

$$\max_{j=1,\ldots,J}\left[\frac{g_j(\mathbf{x}_0, \mathbf{p} + \Delta\mathbf{p}) - g_j(\mathbf{x}_0, \mathbf{p})}{\Delta g_{j,0}}\right] - 1 = 0 \qquad (5.10)$$

Substituting Eq. (5.10) into Eq. (5.8), the optimization problem to calculate the FWCSR radius of a multiple-constraint design becomes as shown in Eq. (5.11). (Notice again how Eq. (5.11) simplifies to Eq. (5.4) when J = 1.)

$$\begin{aligned}
\text{minimize} \quad & R_g = \left[\sum_{i=1}^{G} (\Delta p_i)^2\right]^{1/2} \\
\text{subject to} \quad & \max_{j=1,\ldots,J}\left[\frac{\Delta g_j(\Delta\mathbf{p})}{\Delta g_{j,0}}\right] - 1 = 0 \\
\text{where} \quad & \Delta g_j(\Delta\mathbf{p}) = g_j(\mathbf{x}_0, \mathbf{p} + \Delta\mathbf{p}) - g_j(\mathbf{x}_0, \mathbf{p})
\end{aligned} \qquad (5.11)$$

The constraint in Eq. (5.11) is identical to the constraint used in calculating the WCSR radius of the objective SR (Eq. (4.5)), except that this time we do not take the absolute value of the numerator, the Δg_j(Δp) term. This is again because this constraint is for a one-sided sensitivity measure. In the objective WCSR, the absolute value is necessary to guarantee that the Δf value is bounded in both the increasing and decreasing directions. In the feasibility WCSR, we only need to guarantee that the Δg value is bounded in the increasing direction, so there is no need for the absolute value. Alternatively, we can also justify the simplified constraint in Eq. (5.11) graphically. In the g-space, the quantity Δg(Δp) is represented as a vector from the point g(x₀,p) to the point g(x₀,p+Δp). Based on the definition of S_g, if a point Δp is on the FSR boundary, then the vector Δg(Δp) must touch the boundary of Δg₀, which implies that at least one of its component vectors must touch this boundary. Coupled with the fact that Δg₀ ≥ 0, this implies that

$$\max_{j=1,\ldots,J}\left[\frac{\Delta g_j(\Delta\mathbf{p})}{\Delta g_{j,0}}\right] = 1$$
= j j j p , which is the constraint in Eq. (5.11). Figure 5.4 shows an illustration of this derivation. 0 g 1 g 2 ?g 0 boundary ?g 2,0 ?g 1,0 g(x 0 ,p) ?g 2 ?g 1 1 g g g g , g g max 0,1 1 0,2 2 0,1 1 = ? ? = ? ? ? ? ? ? ? ? ? ? ? ? Figure 5.4: Graphical derivation of the simplified constraint. Since the single-constraint FSR and FWCSR is just a simplified version of the multiple-constraint case, from hereon our discussion will focus only on the multiple- constraint case. 5.3. ROBUST OPTIMIZATION We will now show how to use the sensitivity measure presented earlier in a feasibility robust optimization scheme. 5.3.1. Determination of Increment Limit In developing the FWCSR concept, we have assumed there exists a quantity ?g 0 that defines how much g(x 0 ,p) is allowed to increase, but we have not yet discussed how this 139 quantity is determined. Unlike the objective ?f 0 limit, the ?g 0 limit is not determined by the designer. Rather it is determined by the position of the design relative to the constraints boundary. Suppose a design x 0 is feasible, i.e., g(x 0 ,p) ? 0. Then the value |g(x 0 ,p)| shows how much g(x 0 ,p) must increase for it to become active. In other words, the quantity |g(x 0 ,p)| represents the maximum allowable increment in g(x 0 ,p) for the design to remain feasible, i.e., ?g 0 = |g(x 0 ,p)|. If x 0 is infeasible, then there is no need to calculate the feasibility robustness of this design (feasibility cannot be guaranteed). Here, we must use the absolute value because we use the convention g(x,p) ? 0, i.e., a feasible x has a negative g value. If we use the convention g(x,p) ? 0 for a feasible design, then the absolute value is not necessary (but then we have to limit the decrease in g). Substituting ?g 0 = |g(x 0 ,p)| into the constraint in Eq. (5.11), we obtain: 01 ),(g ),(g),(g max 0 00 J,...,1 =? ? ? ? ? ? ? ? ? ??+ = p ppp x xx j jj j (5.12) Since for a feasible design, we have g j (x 0 ,p) ? 0 and |g j (x 0 ,p)| ? 
0 for all j: 0 ),(g ),(g max 0 0 J,...,1 = ? ? ? ? ? ? ? ? ?+ = p pp x x j j j (5.13) Let us assume for a moment that |g(x 0 ,p)| > 0, i.e., when x = x 0 no constraint is active. Then Eq. (5.13) simplifies into: 0)],(g[max 0 J,...,1 =?+ = ppx j j (5.14) Recall that the constraint in Eq. (5.11) is the requirement for a point ?p to be on the FSR boundary. Eq. (5.14) further simplifies this requirement. Eq. (5.14) states that if a point 140 ?p causes any g j (x 0 ,p+?p) to be active, then it is on the FSR boundary. This simplified requirement is intuitively obvious. Because ?g j,0 = |g j (x 0 ,p)|, if ?g j (?p) = ?g j,0 , then g j (x 0 ,p+?p) = 0. This FSR boundary condition is still valid even when |g(x 0 ,p)| = 0. So, Eq. (5.14) is valid for |g(x 0 ,p)| ? 0. Substituting Eq. (5.14) into Eq. (5.11), the optimization problem to calculate the FWCSR radius of a design becomes: []0),(gmax:subject to p)(Rminimize 0 J,...,1 2 1 G 1 2 g =?+ ? ? ? ? ? ? ?=? = = ? ? pp p p x j j i i (5.15) 5.3.2. Normalization and Feasibility Robustness Index To account for the scale difference among the parameters, Eq. (5.15) needs to be normalized. Even if the scale of the parameters is the same, we still recommend normalizing Eq. (5.15) to help with convergence. As in objective robustness, we use the known ranges of variations ?p 0 to normalize Eq. (5.15). Since 0, p p p i i i ? ? =? for all i=1,?,G, then 0, ppp iii ???=? ; or in vector notation 0 ppp ???=? . Substituting this equality into Eq. (5.15), the normalized optimization problem to calculate the FWCSR radius becomes as follows (for simplicity, we have forgone using the g R notation). []0),(gmax:subject to p)(Rminimize 00 J,...,1 2 1 G 1 2 g =???+ ? ? ? ? ? ? ?=? = = ? ? ppp p p x j j i i (5.16) 141 If we define g 21 g R)G( ? =? to be the feasibility robustness index of the design, then the value ? g ? 1 implies that the design is always feasible even when the variations occur, i.e., it is feasibly robust. In contrast, the value ? 
g < 1 implies that there will be instances during the changes in parameters when the design will become infeasible. Adding the robustness constraint ? g ? 1 to an optimization problem guarantees that the optimum design is feasibly robust. Eq. (5.17) shows the overall feasibility robust optimization problem. 0 ? )(? 1 J,...,10),(g:subject to ])(,...,)([)(minimize g,0 g M1 ?? =? = x x xxxf x j ff j p (5.17) Here ? g (x) is the feasibility robustness index calculated from Eq. (5.16), and ? g,0 is the desired level of robustness as determined by the designer. There are a few important things needs to be pointed out about Eq. (5.17). First, the robustness constraint guarantees feasibility robustness of an optimum design with respect to inequality constraints only. Equality constraints are hard constraints in the sense that unless ?p variations are such that h(x,p+?p) = 0, there is no way to guarantee these constraints will always be satisfied. Second, although we have presented our feasibility robust optimization for a multi-objective optimization problem, our approach is also applicable to single objective problems. The feasibility robustness constraint in Eq. (5.17) does not depend on the number of objectives. Third, implementation-wise, solving Eq. (5.17) as it is will give us a feasibly robust optimum design(s). However, we can improve the efficiency of the approach, i.e., reduce the number of evaluations, simply by not calculating the robustness constraint when the other constraints (both equalities and 142 inequalities) are not satisfied. A design can be guaranteed to be always feasible (when parameter variations occur) only if it is feasible in the first place (i.e., as a nominal design). If a nominal design is infeasible, then it does not even meet our design requirements, so there is no need to calculate its robustness. 5.4. 
COMPARISON STUDY

As a demonstration of our feasibility robust optimization method, we applied it to one numerical and three engineering examples. In the numerical example, we show the applicability of our method to problems whose parameter variations are large. In the explosive actuator and control valve linkage examples, we provide a comparison between our method and a probabilistic method. In the Belleville spring example, we compare our method to the min-max method developed by Hirokawa and Fujita (2002).

5.4.1. Numerical Example

For our first example, we solve a two-dimensional numerical example presented in Hirokawa and Fujita (2002). The problem is a single objective optimization problem with two inequality constraints. The mathematical formulation of the problem is as follows:

minimize    f(x),   x = [x_1, x_2]
subject to  g_1(x) ≤ 0,  g_2(x) ≤ 0    (5.18)

where f is a multi-modal nonlinear objective (containing polynomial terms and a sin(4x_1) term), g_1 contains a sine term, and g_2 contains logarithmic and exponential terms. There are no restrictions on the design variables x = [x_1, x_2], but they are subject to variations Δx_0 = [0.4, 0.4]. Our goal is to obtain a feasibly robust optimum design that will always remain feasible when the variations in x occur.

Figure 5.5 shows a contour graph of the objective and constraint functions of the problem. The two solid lines are the constraint boundaries, with the feasible directions indicated by the arrows. The dashed ellipses are the contours of the objective function, and the arrow on the objective contours indicates the decreasing direction. From this graph we can easily see that the nominal optimum of this problem occurs at the point where the g_2 constraint boundary is tangent to the objective contour (indicated in the graph).

Figure 5.5: Objective and constraint contours of the numerical example.
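The two-step scheme of Section 5.3 — an inner optimization for the normalized FWCSR radius (Eq. (5.16)), then the index η_g = R_g/√G — can be sketched in Python with SciPy's SLSQP solver, a rough analog of fmincon's SQP. The constraint functions and the helper name `robustness_index` below are hypothetical stand-ins, not the g_1, g_2 of Eq. (5.18):

```python
import numpy as np
from scipy.optimize import minimize

def robustness_index(g, x0, p0, dp0):
    """eta_g: solve the inner problem of Eq. (5.16) for the normalized
    FWCSR radius, then divide by sqrt(G)."""
    G = len(p0)

    # FSR boundary condition (Eq. (5.14)): some constraint becomes active.
    def worst_g(dpb):                       # dpb = normalized variation
        return np.max(g(x0, p0 + dpb * dp0))

    best = np.inf
    for start in ([0.5] * G, [-0.5] * G):   # a couple of starting points
        res = minimize(lambda dpb: float(np.sum(dpb**2)), start,
                       constraints=[{'type': 'eq', 'fun': worst_g}])
        if res.success:
            best = min(best, np.sqrt(res.fun))
    return best / np.sqrt(G)

# Hypothetical two-constraint design, nominal p0 = (0, 0), dp0 = (0.5, 0.5):
g = lambda x, p: np.array([p[0] + p[1] - 2.0, -p[0] - 3.0])
eta = robustness_index(g, None, np.array([0.0, 0.0]), np.array([0.5, 0.5]))
# Nearest boundary point is dpb = (2, 2) on the first constraint, so
# R_g = sqrt(8) and eta_g = sqrt(8)/sqrt(2) = 2 (>= 1: feasibly robust).
```

The max() makes the equality constraint nonsmooth, which is one reason the text later resorts to a GA for a hard instance; multiple SQP starts are a cheap partial remedy.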
Solving the problem numerically using MATLAB's fmincon function, the nominal optimum is found to be at x* = [−1.825, 0.741], and the optimum objective is f* = −3.287. The constraint value of the nominal optimum is g* = [−5.919, 0]. This solution is obtained in 8 iterations. We see that, numerically, at the nominal optimum the constraint g_2 is active, just as observed in Figure 5.5.

Since the constraint g_2 is active, the nominal optimum is obviously not feasibly robust: when the variables [x_1, x_2] vary, the g_2 constraint can be violated. Adding the feasibility robustness constraint 1 − η_g(x)/η_{g,0} ≤ 0 to Eq. (5.18) with η_{g,0} = 1.0, and then solving it using fmincon, we obtain the robust optimum design x_R* = [−1.394, 0.272], f_R* = −1.552, and g_R* = [−6.089, −1.374]. This robust optimum is obtained in 7 iterations. The inner optimization problem used to calculate η_g is also solved using fmincon and converges, on average, in 6.28 iterations.

For comparison, let's also find the feasibly robust optimum design using conventional robustness methods from the literature. More specifically, let us solve the problem for a robust optimum using: (1) the worst-case gradient method, where the constraint g_j(x) ≤ 0 is replaced by

g_j(x) + Σ_{i=1}^{N} |∂g_j/∂x_i| Δx_i ≤ 0

and (2) the moment matching method, where the analytic constraint is replaced by the probabilistic constraint g_j(x) + k σ_{g_j} ≤ 0, in which the standard deviation of g_j is estimated using a Taylor series expansion:

σ_{g_j}² = Σ_{i=1}^{N} (∂g_j/∂x_i)² σ_{x_i}²

where σ_{x_i}² is the variance of x_i. The factor k in the probabilistic constraint is specified to be 3.0; according to Parkinson et al. (1993), a factor k = 3.0 will provide more than 0.99 probability of constraint satisfaction. For the probabilistic constraint, the Δx variation is assumed to be normally distributed with [μ_x, σ_x] = [0, 0.4/3].
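The two surrogate constraints just described can be sketched as follows. The gradients are estimated by forward finite differences, and the constraint `g` is a hypothetical linear one chosen so that the two formulas are easy to check by hand (it is not the g_2 of Eq. (5.18)):

```python
import numpy as np

def worst_case_gradient(g, x, dx0, h=1e-6):
    """Linearized worst-case constraint: g(x) + sum_i |dg/dx_i| * dx_i0."""
    g0 = g(x)
    grad = np.array([(g(x + h * e) - g0) / h for e in np.eye(len(x))])
    return g0 + np.abs(grad) @ np.asarray(dx0)

def moment_matching(g, x, dx0, k=3.0, h=1e-6):
    """g(x) + k*sigma_g, with sigma_g from a first-order Taylor expansion
    and sigma_x = dx0/3 (the 3-sigma reading of the variation ranges)."""
    g0 = g(x)
    grad = np.array([(g(x + h * e) - g0) / h for e in np.eye(len(x))])
    sigma_g = np.sqrt(grad**2 @ (np.asarray(dx0) / 3.0) ** 2)
    return g0 + k * sigma_g

# Hypothetical linear constraint (so the derivatives are exact):
g = lambda x: x[0] + 2.0 * x[1] - 4.0
x = np.array([1.0, 1.0])
wc = worst_case_gradient(g, x, [0.4, 0.4])  # -1 + (1*0.4 + 2*0.4) = 0.2
mm = moment_matching(g, x, [0.4, 0.4])      # -1 + sqrt(5)*0.4 ~ -0.106
```

Both surrogates rely on the first-order expansion being accurate over the whole Δx_0 box, which is exactly the assumption that fails for large variations.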
The robust optimum designs obtained using the two methods are shown in Table 5.1. For comparison, we have also re-listed the nominal optimum and the robust optimum obtained by our method.

Table 5.1: Optimum designs of the numerical example.

        Nominal    Robust (FWCSR)    Robust (Gradient)    Robust (Moment)
x_1     -1.825     -1.394            -1.557               -1.599
x_2      0.741      0.272             0.477                0.517
f       -3.287     -1.552            -2.266               -2.419
g_1     -5.919     -6.089            -6.077               -6.071
g_2      0         -1.374            -0.98                -0.874

If we consider Table 5.1, we see that the nominal optimum has the smallest f value, while our robust optimum has the largest. The other two robust optima have roughly the same f value, somewhere between the nominal and our robust optimum. However, if we look at the value of the g_2 constraint, we also see that our robust optimum provides the largest amount of "cushion" for the constraint to vary. The other two robust optima provide some amount of variation cushion, while the nominal optimum provides no room for variation at all (g_2 = 0). At first glance, it might seem that our robust optimum is too conservative and is an overly non-optimal design. However, let's verify the robustness of each of the optimum designs in Table 5.1. The variations occur in the design variables, so we can use the feasible region graph in Figure 5.5 to see what happens when x varies. For each optimum design, we add a rectangle of ±0.4 around it to indicate the Δx variation. An optimum design is truly feasibly robust only if the entire rectangle is inside the feasible region. Figure 5.6 shows the plots of the robustness of each optimum design.
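The rectangle test just described can be automated by gridding the variation box and checking every sample against the constraints. This is a minimal sketch with a hypothetical constraint (not Eq. (5.18)'s); a grid check like this is only conclusive when the constraints are well behaved over the box, otherwise it is a heuristic:

```python
import itertools
import numpy as np

def box_feasible(g, x, dx0, n=5):
    """Worst-case check: grid the variation box x +/- dx0 and require
    every sampled point to satisfy all constraints g(.) <= 0."""
    axes = [np.linspace(-d, d, n) for d in dx0]
    return all(np.all(g(x + np.array(c)) <= 0.0)
               for c in itertools.product(*axes))

# Hypothetical constraint g(x) = x1^2 + x2 - 4 <= 0:
g = lambda x: np.array([x[0] ** 2 + x[1] - 4.0])
tight = box_feasible(g, np.array([0.0, 3.9]), [0.4, 0.4])  # False: a corner violates g
safe = box_feasible(g, np.array([0.0, 3.0]), [0.4, 0.4])   # True: whole box feasible
```

The design at (0, 3.9) is nominally feasible but fails the box test, which is precisely the situation of the gradient and moment matching optima in Figure 5.6.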
Figure 5.6: Sensitivity analysis of the optimum designs (nominal, FWCSR, gradient, and moment matching optima, with the ±Δx_0 rectangle drawn around each).

We see in Figure 5.6 that, of the four optimum designs, only our robust optimum can completely absorb the Δx_0 variation, i.e., its dashed rectangle is fully inside the feasible region. For the other three optima, there are Δx variations that cause the design to fall into the infeasible region. This shows that it is not that our robust optimum is too conservative; rather, it is the other optima that are not robust enough. It also shows that the gradient and moment matching methods do not guarantee a feasibly robust optimum.

One reason the gradient and moment matching methods do not result in a feasibly robust optimum in this problem is that the Δx_0 variation is too large for the Taylor series expansion to remain valid. This highlights an advantage of our robust optimization method: it is applicable to problems where the variations are large. Let us verify this claim. We re-solve the same problem using four methods: (1) our FWCSR method with η_{g,0} = 1.0, (2) our FWCSR method with η_{g,0} = 0.8, (3) the gradient method, and (4) the moment matching method. We solve the problem using these methods for six increasing amounts of Δx_0 variation (the same for both Δx_1 and Δx_2): [0.01, 0.1, 0.2, 0.4, 0.8, 1.0]. For each optimum obtained, we calculate the probability of constraint satisfaction (P_s) of the design by performing 1000 runs of Monte Carlo simulation, assuming a normal pdf for the variations. The plots of Δx_0 vs. P_s for each method are shown in Figures 5.7 and 5.8.
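The P_s estimator used in this verification can be sketched as below. The constraint and the function name `prob_satisfaction` are illustrative assumptions; the σ = Δx_0/3 convention is the one stated in the text:

```python
import numpy as np

def prob_satisfaction(g, x, dx0, runs=1000, seed=0):
    """Monte Carlo estimate of P_s, assuming normal variations with
    sigma = dx0/3 (so +/- dx0 is a 3-sigma range), as in the text."""
    rng = np.random.default_rng(seed)
    dx = rng.normal(0.0, np.asarray(dx0) / 3.0, size=(runs, len(dx0)))
    ok = np.all(g(x + dx) <= 0.0, axis=1)   # all constraints satisfied?
    return ok.mean()

# Hypothetical linear constraint x1 + x2 - 4 <= 0 (vectorized over rows):
g = lambda X: X[:, :1] + X[:, 1:2] - 4.0
p_interior = prob_satisfaction(g, np.array([1.5, 1.5]), [0.4, 0.4])  # ~1.0
p_active = prob_satisfaction(g, np.array([2.0, 2.0]), [0.4, 0.4])    # ~0.5
```

A design sitting on a constraint boundary has P_s ≈ 0.5 per active constraint under symmetric variations, which is why the nominal optima in this chapter score so poorly.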
Figure 5.7: Plot of Δx_0 vs. P_s of the FWCSR optima (η_{g,0} = 1.0 and η_{g,0} = 0.8).

Figure 5.8: Plot of Δx_0 vs. P_s of the gradient and moment matching optima.

We see in Figure 5.7 that for the FWCSR method with η_{g,0} = 1.0, P_s is always 1.0 regardless of how large Δx_0 is. This shows that our method will always obtain a feasibly robust optimum even when the variations are large. Similarly, for the FWCSR method with η_{g,0} = 0.8, P_s stays essentially constant (between 0.9995 and 1.0) even as Δx_0 increases. This again indicates that our method will still obtain a feasibly robust optimum even as the variations grow beyond the linear range (the range in which a Taylor series expansion is valid). In contrast, as seen in Figure 5.8, the P_s of the gradient method is very high for small values of Δx_0 but decreases as Δx_0 increases. This indicates that the method fails to obtain a robust optimum as the variations grow large. Even worse behavior is observed for the moment matching method: when Δx_0 is small, P_s is high, but it decreases dramatically (to as low as 0.954) as Δx_0 increases. This also indicates that the method is good only for problems with small variations.

If we plot the optimum f* value obtained by each method vs. the increasing Δx_0, we obtain the graph shown in Figure 5.9. We see once again in this figure that for small values of Δx_0 the optimum f* of all methods is relatively the same, but the values diverge as Δx_0 increases. The FWCSR method with η_{g,0} = 1.0 is the most conservative, but as shown in Figure 5.7 this method guarantees P_s = 1.0. The FWCSR method with η_{g,0} = 0.8 guarantees a P_s of at least 0.999, and its f* value is better than when η_{g,0} = 1.0. The f* values of the gradient and moment matching methods are better than those of the FWCSR method, but this is misleading, since the P_s of these two methods drops significantly as Δx_0 increases.

Figure 5.9: Plot of Δx_0 vs. f* of the four robust optimization methods (FWCSR (1.0), FWCSR (0.8), gradient, moment matching, and the nominal optimum).

5.4.2. Design of an Explosive Actuated Cylinder

For our second example, we solve the optimization problem of designing an explosive actuated cylinder as formulated by Papalambros and Wilde (1980). In this problem, we are interested in optimizing the actuated cylinder shown in Figure 5.10 for minimum total length by controlling 5 design variables: the unswept cylinder length (x_1, inch), the working stroke of the piston (x_2, inch), the outside diameter of the cylinder (x_3, inch), the initial pressure of the combustion (x_4, ksi), and the piston diameter (x_5, inch). All design variables are restricted to be positive.

Figure 5.10: An explosive actuated cylinder.

In addition, there are constraints on the kinetic energy produced, the maximum piston force, and the stress on the cylinder wall, as well as geometric constraints. The mathematical formulation of the problem is as follows:

minimize    f(x) = x_1 + x_2
subject to
g_1(x) = 1 − [ 10³ x_4 v_1 / (W_min (γ − 1)) ] [ 1 − (v_1/v_2)^{γ−1} ] ≤ 0
g_2(x) = 10³ π x_4 x_5² / (4 F_max) − 1 ≤ 0
g_3(x) = S_t(x) / (S_y/3) − 1 ≤ 0
g_4(x) = x_3 / D_max − 1 ≤ 0
g_5(x) = (x_1 + x_2) / L_max − 1 ≤ 0
g_6(x) = (x_5 + 0.001) / x_3 − 1 ≤ 0    (5.19)

where:

v_1 = v_c + (π/4) x_5² x_1 ;   v_2 = v_1 + (π/4) x_5² x_2    (5.20)

S_t = ( σ_1² − σ_1 σ_2 + σ_2² )^{1/2}    (5.21)

σ_1 = x_4 (x_3² + x_5²) / (x_3² − x_5²) ;   σ_2 = −x_4    (5.22)

The constants used in the problem are as follows:

D_max = maximum allowable cylinder outside diameter (1.0 in)
F_max = maximum piston force (700 lb)
L_max = maximum cylinder total length (2.0 in)
S_y = cylinder material yield stress (125 ksi)
v_c = fixed chamber volume (0.084 in³)
W_min = minimum kinetic energy for satisfactory performance (600 lb-in)
γ = specific heat ratio (1.2)

Solving Eq. (5.19) using MATLAB's fmincon, we obtain the nominal optimum x* = [0, 1.042, 1.0, 23.12, 0.196] and f* = 1.042. For this nominal optimum, the constraints g_1, g_2, g_3, and g_4 are active. This optimum value is very close to that reported in Papalambros and Wilde (1980), f* = 1.036, obtained using monotonicity analysis (actually, their optimum point is slightly infeasible: when f* = 1.036, [g_1, g_2, g_3] = [0.0026, 0.0017, 0.0049] > 0).

There are variations in three of the design variables: [Δx_{3,0}, Δx_{4,0}, Δx_{5,0}] = [0.01, 1.0, 0.01]. We need to guarantee that the optimum design is feasibly robust with respect to these variations. Adding the robustness constraint 1 − η_g(x)/η_{g,0} ≤ 0 to Eq. (5.19) with η_{g,0} = 1.0, and then solving it using fmincon, we obtain the robust optimum design x_R* = [0.0097, 1.669, 0.823, 17.8, 0.201] and f_R* = 1.679. For this robust optimum, all constraints are inactive. The inactivity of the constraints is one indication of the feasibility robustness of this optimum: if the constraints were active, there would be no "cushion" for them to vary as [x_3, x_4, x_5] vary. By moving slightly inside the feasible region, our robust optimum design provides a safety margin to absorb the Δx. The inner optimization problem used to calculate the robustness of a design is also solved using fmincon and, on average, converges in 15 iterations.

For comparison, we also solved the problem using a probabilistic method (Du and Chen, 2000).
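As a consistency check on the formulation of Eqs. (5.19)–(5.22), evaluating the constraints at the nominal optimum reproduces the active set {g_1, …, g_4} and the tabulated g_5 = −0.479, g_6 = −0.803, up to the rounding of x* as printed; a sketch:

```python
import numpy as np

# Nominal optimum from the text: x* = [0, 1.042, 1.0, 23.12, 0.196].
x1, x2, x3, x4, x5 = 0.0, 1.042, 1.0, 23.12, 0.196
gamma, vc = 1.2, 0.084
Wmin, Fmax, Sy, Dmax, Lmax = 600.0, 700.0, 125.0, 1.0, 2.0

v1 = vc + np.pi / 4 * x5**2 * x1                        # Eq. (5.20)
v2 = v1 + np.pi / 4 * x5**2 * x2
W = 1e3 * x4 * v1 / (gamma - 1) * (1 - (v1 / v2)**(gamma - 1))  # lb-in
F = 1e3 * np.pi / 4 * x4 * x5**2                        # piston force, lb
s1 = x4 * (x3**2 + x5**2) / (x3**2 - x5**2)             # Eq. (5.22)
s2 = -x4
St = np.sqrt(s1**2 - s1 * s2 + s2**2)                   # Eq. (5.21)

g = [1 - W / Wmin, F / Fmax - 1, St / (Sy / 3) - 1,
     x3 / Dmax - 1, (x1 + x2) / Lmax - 1, (x5 + 0.001) / x3 - 1]
print(np.round(g, 3))   # g1..g4 near zero; g5, g6 match Table 5.3
```

This kind of spot check is cheap insurance before running the (much more expensive) robust optimization.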
We replaced all of the constraints in the problem with a single probabilistic constraint, P[ g(x) ≤ 0 ] ≥ 0.99, i.e., the probability of satisfying all constraints is required to exceed a predetermined value (0.99 in this case). The probability is calculated using 100,000 runs of the Monte Carlo method, assuming two different pdf models for the variations: uniform and normal. For the uniform model, the pdf is assumed to be uniformly distributed between [−Δx_{i,0}, Δx_{i,0}]. For the normal model, the pdf is assumed to be normally distributed with a mean and standard deviation of [0, Δx_{i,0}/3]. Table 5.2 shows the robust optimum designs obtained using the probabilistic method. For ease of comparison, we have again listed the values of the nominal optimum and of our robust optima with η_{g,0} = 1.0 and η_{g,0} = 0.8. Table 5.3 shows the constraint values of these optimum designs.

Table 5.2: Optimum designs of the explosive actuated cylinder.

       Nominal   Robust (Normal)   Robust (Uniform)   Robust (η_g = 0.8)   Robust (η_g = 1.0)
x_1    0         0.025             0.003              0                    0.010
x_2    1.042     1.283             1.501              1.580                1.669
x_3    1.000     0.686             0.534              0.507                0.823
x_4    23.123    20.381            16.882             15.963               17.805
x_5    0.196     0.201             0.214              0.218                0.202
f      1.042     1.307             1.505              1.580                1.679

Table 5.3: Constraints of the optimum designs.

       Nominal   Robust (Normal)   Robust (Uniform)   Robust (η_g = 0.8)   Robust (η_g = 1.0)
g_1    0.0       -0.087            -0.114             -0.126               -0.167
g_2    0.0       -0.075            -0.136             -0.150               -0.188
g_3    0.0       -0.057            -0.147             -0.168               -0.199
g_4    0.0       -0.314            -0.466             -0.493               -0.177
g_5    -0.479    -0.346            -0.248             -0.210               -0.160
g_6    -0.803    -0.705            -0.598             -0.569               -0.754

We see in Table 5.2 that the nominal optimum has the lowest f value, while our robust optimum with η_{g,0} = 1.0 has the highest. The f values of the probabilistic optima and of our η_{g,0} = 0.8 optimum lie between these two. However, we observe in Table 5.3 that the constraints of the nominal optimum are closest to the feasible region boundary (i.e., its constraint values are largest on average), while those of our robust η_{g,0} = 1.0 optimum are the furthest; the constraints of the other optima are somewhere in between. These constraint values indicate that our robust η_{g,0} = 1.0 optimum can absorb the most Δx variation, while the nominal optimum can absorb the least.

Let us numerically verify the robustness of each of the optimum designs. We perturbed [x_3, x_4, x_5] of each optimum 20 times by adding a Δx value randomly sampled from the Δx_0 range, and then calculated the new g(x) values of the design. A design is feasibly robust if all the new constraint values are still feasible, i.e., g(x) ≤ 0. We performed the analysis only for the g_1, g_2, g_3, and g_4 constraints, because constraint g_5 is independent of [x_3, x_4, x_5], and it is easily observed from Table 5.2 that constraint g_6 will always be satisfied for all optima. Figures 5.11 – 5.13 show the graphs of the g_1, g_2, g_3, and g_4 constraints of each optimum under perturbation. For clarity, but without loss of information, the graphs show only max[g_1, g_2, g_3, g_4] for each perturbation. In these graphs, a design is feasibly robust if all the bars are below the horizontal axis, i.e., max[g_j(x)] ≤ 0.

Figure 5.11: Sensitivity analysis of the nominal optimum.

Figure 5.12: Sensitivity analysis of the probabilistic optima (normal and uniform).

Figure 5.13: Sensitivity analysis of the robust optima (η_g = 0.8 and η_g = 1.0).

We see in Figures 5.11 and 5.12 that the nominal optimum becomes infeasible in all 20 cases, while the probabilistic optima become infeasible in five cases for the robust (normal) optimum and in two cases for the robust (uniform) optimum. Our robust η_{g,0} = 0.8 optimum becomes infeasible in one case, while the robust η_{g,0} = 1.0 optimum is always feasible. These observations verify that our robust η_{g,0} = 1.0 optimum is the only design that is feasibly robust. The robust (uniform) and robust η_{g,0} = 0.8 optima are close, but not quite feasibly robust.

To further verify the robustness of the optima, we calculated their probability of constraint satisfaction using 100,000 runs of Monte Carlo simulation. The simulations were performed using the uniform and normal Δx pdf models as before. The P_s value of each design is shown in Table 5.4.

Table 5.4: Probability of constraint satisfaction of the optima.

                      P_s (Uniform pdf)   P_s (Normal pdf)
Nominal               0                   0
Robust (Normal)       0.815               0.982
Robust (Uniform)      0.983               0.9997
Robust (η_g = 0.8)    0.996               0.9999
Robust (η_g = 1.0)    1.0                 1.0

We see in Table 5.4 that the nominal optimum has P_s = 0, which implies that it has no robustness towards variations at all. In contrast, the P_s of our robust η_{g,0} = 1.0 optimum is 1.0, confirming that this optimum design is indeed feasibly robust. The P_s values of the other optima confirm the robustness information shown in Figures 5.12 and 5.13 as well. If we compare Table 5.4 with Table 5.2, we also see that P_s increases as f increases (i.e., the performance vs. robustness trade-off).

One last item of interest is the computational efficiency of each method. For the nominal optimum, we did not perform any calculation of a design's robustness at all. For our FWCSR method, we solved an inner optimization using fmincon to calculate a design's robustness; on average, the optimization converged in about 15 iterations. Because fmincon performs additional function calls to estimate gradients, the actual number of function calls used by our method is close to 50. The probabilistic method calculated a design's robustness by performing 100,000 runs of Monte Carlo simulation. So we see that our robust optimization method is far more efficient in terms of the number of function calls.

5.4.3. Design of a Belleville Spring

Our third example is the problem of optimizing a Belleville spring, originally formulated by Siddall (1982) and later modified by Hirokawa and Fujita (2002) into a robust optimization problem. The problem presented in this chapter is the one formulated by Hirokawa and Fujita (2002).

Figure 5.14: A Belleville spring.

In this problem, we want to optimize the Belleville spring (made of steel) shown in Figure 5.14 for maximum rated load P. The design variables of the problem are the external diameter (d_e), the internal diameter (d_i), the thickness (t), and the free height (h). The variables are all continuous, and their units are meters. The optimization is subject to two design constraints, allowable stress and maximum mass, and five geometric constraints. The mathematical formulation of the problem is as follows:

maximize    f(x) = P(x),   x = [d_e, d_i, t, h]
subject to
g_1(x) = σ_max/σ_aw − 1 ≤ 0
g_2(x) = m/m_max − 1 ≤ 0
g_3(x) = 1 − h/h_min ≤ 0
g_4(x) = (h + t)/l − 1 ≤ 0
g_5(x) = d_e/d_max − 1 ≤ 0
g_6(x) = 1.25 d_i/d_e − 1 ≤ 0
g_7(x) = 1 − 0.3 (d_e − d_i)/h ≤ 0    (5.23)

Here P is the rated load (N), σ_max is the maximal stress (Pa), and m is the spring mass (kg). The quantities P, σ_max, and m are calculated using the following equations:

P = [ 4 E δ_max / ((1 − ν²) α d_e²) ] [ (h − δ_max/2)(h − δ_max) t + t³ ]    (5.24)

σ_max = [ 4 E δ_max / ((1 − ν²) α d_e²) ] [ β (h − δ_max/2) + γ t ]    (5.25)

m = (π/4) (d_e² − d_i²) t ρ    (5.26)

where:

α = [6 / (π ln K)] [(K − 1)/K]² ;   β = [6 / (π ln K)] [(K − 1)/ln K − 1]    (5.27)

γ = [6 / (π ln K)] [(K − 1)/2] ;   K = d_e/d_i    (5.28)

The constants for this problem are:

d_max = maximum allowable diameter (0.3 m)
E = Young's modulus (210 GPa)
h_min = minimum height (0.005 m)
l = maximum allowable total height including t (0.02 m)
m_max = maximum spring mass (2.0 kg)
δ_max = maximum allowable deflection (= h)
ν = Poisson's ratio (0.3)
ρ = mass density (7850 kg/m³)
σ_aw = allowable stress (1200 MPa)

The nominal optimum of the problem obtained using fmincon is [d_e, d_i, t, h]* = [0.3, 0.211, 7.273, 5.0] (the values of [t, h] are in mm), and the objective value is f* = 42.106 kN. The constraints of this optimum are g* = [0, 0, 0, −0.386, 0, −0.112, −4.214]. This nominal optimum f value is very close to that obtained by Hirokawa and Fujita (2002), f* = 41.9 kN.

Due to manufacturing errors, all design variables [d_e, d_i, t, h] are subject to variations. In addition, two of the material properties, [σ_aw, E], are also subject to variations. So there are six uncontrollable parameters in this problem: p = [d_e, d_i, t, h, σ_aw, E]. The variation ranges of the parameters are Δp_0 = [8.67 × 10⁻⁵, 7.67 × 10⁻⁵, 3.33 × 10⁻⁵, 3.33 × 10⁻⁵, 4.0 × 10⁵, 6.67 × 10⁷] (as given in Hirokawa and Fujita, 2002). The Δp_0 values of the variables [d_e, d_i, t, h] are in m; those of the material properties [σ_aw, E] are in Pa.

To obtain a robust optimum, we added the robustness constraint to Eq. (5.23). The addition of this constraint, however, causes the fmincon algorithm to fail to obtain a solution. One reason for this failure is that the feasible region of the problem is already very small as it is; adding the robustness constraint reduces the feasible region even further, making it very hard for an SQP algorithm to converge. Because fmincon failed to obtain a solution, we used a Genetic Algorithm (GA) (Goldberg, 1989) instead. For implementation reasons, we used the same GA to solve the inner optimization problem; solving the inner optimization using the GA takes roughly 300 function calls. The robust optimum obtained (η_{g,0} = 1.0) is shown in Table 5.5. For comparison purposes, we have also shown the robust optima of the problem for η_{g,0} = 0.8, 0.6, and 0.3. Table 5.5 also shows the robust optimum reported by Hirokawa and Fujita (2002).
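As a sanity check on Eqs. (5.24)–(5.28), evaluating them at the nominal optimum reproduces the reported behavior: a rated load near f* = 42.106 kN, a maximum stress near the 1200 MPa limit, and a mass near the 2.0 kg limit (consistent with g_1 and g_2 being active), up to the rounding of the printed x*. A sketch:

```python
import numpy as np

# Nominal optimum from the text (t, h given in mm, converted to m);
# the deflection is taken at its maximum, delta_max = h.
de, di, t, h = 0.300, 0.211, 7.273e-3, 5.0e-3
E, nu, rho = 210e9, 0.3, 7850.0
delta = h

K = de / di                                  # Eq. (5.28)
c = 6.0 / (np.pi * np.log(K))
alpha = c * ((K - 1) / K) ** 2               # Eq. (5.27)
beta = c * ((K - 1) / np.log(K) - 1)
gam = c * (K - 1) / 2

pre = 4 * E * delta / ((1 - nu**2) * alpha * de**2)
P = pre * ((h - delta / 2) * (h - delta) * t + t**3)   # rated load, N
smax = pre * (beta * (h - delta / 2) + gam * t)        # max stress, Pa
m = np.pi / 4 * (de**2 - di**2) * t * rho              # mass, kg
# P ~ 41.3 kN, smax ~ 1.18 GPa (limit 1.2 GPa), m ~ 2.04 kg (limit 2.0 kg)
```

Note that with δ_max = h the (h − δ_max) factor vanishes, so at full deflection the load is carried by the t³ term alone.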
Their robust optimum is obtained by a min-max strategy in which the maximum of the constraints within a so-called "variation pattern" of the parameters is used as the constraints of the problem. The constraint values of these optima are shown in Table 5.6.

Table 5.5: Optimum designs of the Belleville spring.

          Nominal   Min-Max   Robust (0.3)   Robust (0.6)   Robust (0.8)   Robust (1.0)
d_e (m)   0.300     0.300     0.299          0.285          0.298          0.292
d_i (m)   0.213     0.212     0.207          0.187          0.208          0.196
t (mm)    7.273     7.083     6.994          6.938          6.738          6.780
h (mm)    5.0       5.1       5.219          5.063          5.187          5.152
f (kN)    42.106    37.190    37.662         35.677         34.253         33.488
η_g       0.075     0.032     0.248          0.601          0.824          1.430

Table 5.6: Constraint values of the Belleville spring optima.

      Nominal   Min-Max   Robust (0.3)   Robust (0.6)   Robust (0.8)   Robust (1.0)
g_1   0         0         -0.002         -0.017         -0.018         -0.036
g_2   0         0         -0.008         -0.006         -0.060         -0.018
g_3   0         -0.020    -0.044         -0.013         -0.038         -0.030
g_4   -0.386    -0.391    -0.389         -0.400         -0.404         -0.403
g_5   0         0         -0.003         -0.049         -0.006         -0.025
g_6   -0.112    -0.116    -0.132         -0.181         -0.124         -0.160
g_7   -4.214    -4.176    -4.256         -4.834         -4.163         -4.592

We see in Table 5.5 that the nominal optimum has a very low η_g but the highest f value (recall that this is a maximization problem). The value of η_g progressively increases as f decreases (the performance vs. robustness trade-off). This progressive increase in robustness can also be observed in the decrease of the constraint values of the optima in Table 5.6 (i.e., the optima move further away from the constraint boundary). One important thing to notice in Table 5.5 is that the min-max optimum of Hirokawa and Fujita is inferior to the nominal and robust (0.3) optima (i.e., in terms of η_g and f, it is dominated). If we plot f vs. η_g, this inferiority is immediately apparent (Figure 5.15); the performance vs. robustness trade-off of the optima can also be observed in this figure.

Figure 5.15: Plot of f vs. η_g of the Belleville spring.

To verify the robustness of the optima, we performed a numerical sensitivity analysis by perturbing the parameter values within the given ranges. A design is feasibly robust if all the constraints remain feasible. Figures 5.16, 5.17, and 5.18 show the graphs of max[g(x, p)] of each optimum. In these graphs, a design is feasibly robust if all the bars are below the horizontal axis.

Figure 5.16: Sensitivity analysis of the nominal and min-max optima.

Figure 5.17: Sensitivity analysis of the robust (0.3) and robust (0.6) optima.

Figure 5.18: Sensitivity analysis of the robust (0.8) and robust (1.0) optima.

We see in Figures 5.16 – 5.18 that the nominal, min-max, and robust (0.3) optima are not feasibly robust. The robust (0.6) optimum is almost feasibly robust, except for one case in which it becomes infeasible. The robust (0.8) and robust (1.0) optima are always feasible regardless of the perturbations, and thus they are feasibly robust. These observations confirm the η_g values of each design shown in Table 5.5.

5.4.4. Design of a Control Valve Actuator Linkage

For our last example, we applied our feasibility robust optimization method to the engineering design of a control valve actuator linkage. This example is adapted, with some modifications, from the example in Balling et al. (1986). The control valve actuator linkage to be designed is shown in Figure 5.19.

Figure 5.19: A control valve actuator linkage: (a) the crank initially at 55° from the vertical axis; (b) the dimensions of the linkage (d_c, d_r, d, h_0, s).

The linkage mechanism has two members: a crank and a rod that are connected by a pin joint.
The other end of the crank is held stationary, while the other end of the rod is pinned to a slider. Initially the crank is at a 55° angle from the vertical axis, as shown in Figure 5.19(a). The dimensions of the linkage are shown in Figure 5.19(b).

There is a constant force F = 1425.5 lbs (6340 N) acting at the end of the rod that causes the mechanism to turn (see Figure 5.20). The design objective is to maximize the torque (T) at the end of the crank as it turns from θ = 0° to θ = 90°, averaged over 10° intervals (Figure 5.20). The weights of the crank and rod are assumed negligible.

Figure 5.20: Forces acting on the linkage.

The design variables in this problem are the crank length (L_c), the rod length (L_r), and the center distance (d). The crank length and the rod length are constrained to be between 0 and 10 inch (0 and 25.4 cm), while the center distance is constrained to be between 5 and 7 inch (12.7 and 17.78 cm). The problem has three design constraints: (1) the vertical position of the slider when θ = 0° (h_0) must be less than 6.5 inch (16.51 cm), (2) the movement of the slider (s) must be less than 4.5 inch (11.43 cm), and (3) the side force (F_x), averaged over 10° intervals, must be less than 800 lbs (3558 N). In addition, the problem has two geometric constraints: (1) the horizontal length of the crank (d_c) must not be greater than the center distance (d), and (2) the difference between d and d_c must be less than or equal to the rod length (L_r). The mathematical formulation of the problem is shown below (in English units):

maximize  f(L_c, L_r, d) = (1/9) Σ_{θ=0°}^{90°} T(θ)
subject to:  g_1 = h_0/6.5 - 1 ≤ 0 ;  g_2 = s/4.5 - 1 ≤ 0
             g_3 = (1/9) Σ_{θ=0°}^{90°} F_x(θ)/800 - 1 ≤ 0
             g_4 = d_c/d - 1 ≤ 0 ;  g_5 = d_r/L_r - 1 ≤ 0
             0 ≤ L_c, L_r ≤ 10 ;  5 ≤ d ≤ 7                              (5.29)

The quantities T(θ), F_x(θ), s, h_0, d_c, and d_r are calculated as follows:

d_c = L_c cos(θ - 45°) ;  d_r = d - d_c                                  (5.30)

h_0 = h(θ = 0°) ;  h(θ) = L_r sin φ - L_c sin(θ - 45°),
      where φ = cos⁻¹(d_r/L_r)  (φ in radians)                           (5.31)

s = h(θ = 0°) - h(θ = 90°)                                               (5.32)

F_x(θ) = F cos φ                                                         (5.33)

F_y = F sin φ ;  T(θ) = d·F_y - h(θ)·F_x(θ)                              (5.34)

Because of manufacturing tolerances, L_c and L_r vary by ±0.1 inch (±0.254 cm), and we need to guarantee the feasibility of the optimum design under these variations. We add the sensitivity constraint 1 - η_g/η_g,0 ≤ 0 to Eq. (5.29) and then optimize it. In this problem, Δp_0 = [0.1, 0.1] and η_g,0 = 1.0. For comparison, we also optimize the original problem (nominal optimum), and Eq. (5.29) with its constraints replaced by the probabilistic constraints P(g_j ≤ 0) ≥ 0.99, j = 1,…,5. The probabilistic constraints are calculated using Monte Carlo simulation assuming two probability distribution models: uniform and normal. For the uniform model, the lower and upper bounds of the distribution are specified to be -0.1 and 0.1, respectively. For the normal model, the distribution is specified to have a mean of 0 and a standard deviation of 0.033 (= 0.1/3). The results obtained from this comparison study are shown in Table 5.7. In this table, the quantity F_call is the number of function evaluations needed per design to calculate its feasibility robustness. Table 5.7 also shows the η_g value of each optimum design.

Table 5.7: Optimum designs of the control valve actuator linkage.
                Nominal   Robust   Monte Carlo (Uniform)   Monte Carlo (Normal)
Torque (in.lb)  3592      3363     3429                    3453
L_c (in)        3.182     3.03     3.078                   3.093
L_r (in)        5.061     5.009    4.994                   4.999
d (in)          5.0       5.0      5.0                     5.0
F_call          N/A       250      100,000                 100,000
η_g             0.005     1.02     0.73                    0.62

We observe from Table 5.7 that the nominal optimum has the highest torque but the lowest η_g. In contrast, the robust optimum obtained by our method has the lowest torque but the highest η_g. The Monte Carlo optima lie somewhere in between. This observation is expected because generally we have to sacrifice some performance to gain an increase in robustness.
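As a numerical check, the linkage relations of Eqs. (5.30)-(5.34) can be evaluated directly. The sketch below is our Python reading of those equations (the function names are ours, and the exact forms of h(θ) and s are our reconstruction, not code from the dissertation):

```python
import math

F = 1425.5  # applied force, lb

def linkage(Lc, Lr, d, theta_deg):
    """Evaluate the linkage at crank angle theta (degrees), per Eqs. (5.30)-(5.34)."""
    t = math.radians(theta_deg - 45.0)
    dc = Lc * math.cos(t)                        # horizontal crank length, Eq. (5.30)
    dr = d - dc
    phi = math.acos(dr / Lr)                     # rod angle in radians, Eq. (5.31)
    h = Lr * math.sin(phi) - Lc * math.sin(t)    # slider height
    Fx = F * math.cos(phi)                       # side force, Eq. (5.33)
    Fy = F * math.sin(phi)
    T = d * Fy - h * Fx                          # crank torque, Eq. (5.34)
    return T, Fx, h

def average_torque(Lc, Lr, d):
    # Objective of Eq. (5.29): (1/9) times the sum of T(theta) over theta = 0, 10, ..., 90 deg.
    return sum(linkage(Lc, Lr, d, th)[0] for th in range(0, 91, 10)) / 9.0
```

For the nominal design of Table 5.7 (L_c = 3.182, L_r = 5.061, d = 5.0), this evaluation puts h_0 and s essentially at their limits of 6.5 and 4.5 inch, consistent with the near-zero g_1 and g_2 values of the nominal optimum reported in Table 5.9.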
In fact, if we solve the optimization problem as a two-objective problem in which we maximize both torque and η_g, we obtain a trade-off frontier as shown in Figure 5.21. Notice in this figure how torque decreases as η_g increases. Points corresponding to the optima in Table 5.7 are also shown.

Figure 5.21: Trade-off frontier of the linkage problem.

Note also in Table 5.7 that although the two Monte Carlo optima were obtained by enforcing a 0.99 probability of constraint satisfaction, their η_g values are less than 1.0. In fact, the η_g values of the two optima are quite different. This is because the robustness of the optimum design is sensitive to the assumed probability distributions, and the 0.99 constraint satisfaction probability is valid only if the assumed distribution is valid. We will further discuss this important issue next.

Table 5.7 (and Figure 5.21) also shows that the robust optimum is different from the Monte Carlo optimum obtained using a uniform distribution. This shows that our method does not presume a uniform probability distribution of the parameters (although it may seem so). In fact, our method does not presume any distribution at all, and this is reflected in the fact that we cannot provide probability information for the obtained optimum; we lack the information to do so. Our robustness constraint guarantees that the optimum design will remain feasible if the parameter variations are as specified. If the parameter distribution changes, the probability of constraint satisfaction will change, but this guarantee still holds.

We also observe in Table 5.7 that the Monte Carlo method requires 100,000 function evaluations to calculate a design's robustness. In contrast, our method requires only about 250 evaluations, comparable to the more efficient probabilistic methods (MPP, for instance; Du and Chen, 2000). It should be noted, however, that this number (i.e., 250) is an upper bound, because we used a GA to solve the inner optimization problem. A GA is an optimizer that needs many function evaluations, but it is applicable to a wide range of optimization problems and does not require gradient information. If gradient information is available, we can use a more efficient optimizer to solve our inner optimization problem, and the number of function evaluations our method needs would be much lower (on the order of 10^1).

To validate the results in Table 5.7, we conducted a sensitivity analysis on each optimum design. We performed 100,000 Monte Carlo simulations on each design and, based on the results, calculated its probability of constraint satisfaction. The simulations were performed using two probability distribution models of the parameters: uniform and normal. In the uniform distribution model, ΔL_c and ΔL_r are each uniformly distributed in the interval [-0.1, 0.1]. In the normal distribution model, ΔL_c and ΔL_r are bi-normally distributed with a mean of 0 and a standard deviation of 0.033. In both models, ΔL_c and ΔL_r are assumed independent. The results of this sensitivity study are shown in Table 5.8. For comparison purposes, we have also re-listed the η_g value of each optimum.

Table 5.8: Sensitivity analysis of the optima.
                        Probability of constraint satisfaction
                        Uniform model   Normal model      η_g
Nominal                 0.376           0.381             0.005
Robust                  1.0             1.0               1.02
Monte Carlo (Uniform)   0.989           0.999             0.73
Monte Carlo (Normal)    0.925           0.996             0.62

We observe in Table 5.8 that the nominal optimum has a poor probability of constraint satisfaction in both models. This is not surprising, since this optimum is obtained by strict optimization of the linkage's torque, neglecting the variations in L_c and L_r. The robust optimum obtained by our method, on the other hand, has a 1.0 probability of constraint satisfaction in both models, making it much more robust than the nominal optimum.
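The Monte Carlo portion of this sensitivity study is straightforward to sketch. In the snippet below, `constraints` is a placeholder for the design's constraint functions evaluated at a perturbation (ΔL_c, ΔL_r); the two sampling models match the uniform and normal distribution models described above:

```python
import random

def prob_constraint_satisfaction(constraints, n=100_000, model="uniform", seed=0):
    """Estimate P(all g_j <= 0) when dLc and dLr are perturbed independently.

    `constraints` maps a perturbation (dLc, dLr) to a list of g_j values.
    Uniform model: dL ~ U(-0.1, 0.1); normal model: dL ~ N(0, 0.033).
    """
    rng = random.Random(seed)
    ok = 0
    for _ in range(n):
        if model == "uniform":
            dLc, dLr = rng.uniform(-0.1, 0.1), rng.uniform(-0.1, 0.1)
        else:
            dLc, dLr = rng.gauss(0.0, 0.033), rng.gauss(0.0, 0.033)
        if all(g <= 0.0 for g in constraints(dLc, dLr)):
            ok += 1
    return ok / n
```

For a design whose constraints stay feasible over the whole ±0.1 box, the estimate is 1.0 under either model, matching the robust column of Table 5.8.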
This observation confirms the information provided by the η_g values of the two optima: the robust optimum has a much larger η_g value (1.02) than the nominal optimum (0.005).

The Monte Carlo (uniform) optimum has a 0.989 probability of constraint satisfaction under the uniform distribution model (the same value as imposed by the probabilistic constraint), but this probability increases to 0.999 under the normal model. The same pattern is observed for the Monte Carlo (normal) optimum: it has a 0.996 probability of constraint satisfaction under the normal model (the same as imposed by the constraint), but this value drops to 0.925 under the uniform model. This observation shows that the information provided by the probabilistic constraints depends on the accuracy of the assumed distribution model. As such, unless the distribution of the uncertain parameters is known with relative certainty, such probability information must be used with utmost caution, since it can be misleading.

To further validate our method, we calculated and compared the FSR and FWCSR of the optima in Table 5.7. For simplicity, we only show the comparison for the nominal and robust optima. We form the FSR (S_g) of each design by first forming the FSR of each constraint (S_g,j) and then forming their intersection. S_g,j is obtained by constructing the difference function Δg_j(ΔL_c, ΔL_r) = g_j(L_c + ΔL_c, L_r + ΔL_r) - g_j(L_c, L_r) from Eq. (5.29), substituting the L_c, L_r, and d values of the design into this function, and then imposing Δg_j ≤ Δg_j,0. The limit Δg_j,0 is obtained by taking the absolute value of the j-th constraint of the design. The constraint values of the nominal and robust optima are shown in Table 5.9.

Table 5.9: Constraint values of the nominal and robust optima.
      Nominal      Robust
g_1   -0.00026     -0.037
g_2   -0.000031    -0.048
g_3   -0.173       -0.169
g_4   -0.366       -0.396
g_5   -0.457       -0.429

Observe in Table 5.9 that only g_1 and g_2 are active (or nearly active) for the nominal optimum, while g_3, g_4, and g_5 are relatively the same for the two optima. This implies that the Δg_1 and Δg_2 functions are critical components of S_g, while Δg_3, Δg_4, and Δg_5 are not. So, in constructing the FWCSR, using only Δg_1 and Δg_2 suffices. From Eq. (5.29), we obtain the Δg_1 and Δg_2 difference functions:

Δg_1 = [ (L_r + ΔL_r) sin φ' - L_r sin φ + 0.707 ΔL_c ] / 6.5
Δg_2 = 0.314 ΔL_c
where  φ' = cos⁻¹[ (d - 0.707(L_c + ΔL_c)) / (L_r + ΔL_r) ]  (in radians)   (5.35)

Figure 5.22 shows the FSR and FWCSR of the nominal and robust optima. In this figure, only the critical components, i.e., Δg_1 and Δg_2, are shown. Note that S_g,1 is not a linear function (although it may seem so from the figure): S_g,2 is a linear function, while S_g,1 is a trigonometric function, as shown in Eq. (5.35). Note also that Figure 5.22 shows the regions in the non-normalized space (but with the same scale on both axes); we do so to better relate to the actual values of L_c and L_r.

Figure 5.22: FSR and FWCSR of the nominal (a) and robust (b) optima. Each panel shows S_g,1, S_g,2, their intersection S_g = S_g,1 ∩ S_g,2, and the FWCSR in the (ΔL_c, ΔL_r) plane.

We observe in Figure 5.22 that the FWCSR of the nominal optimum is very small, with close to a zero radius. This is because g_1 and g_2 are almost active for this optimum, and as such there is very little "cushion" for L_c and L_r variations (in the worst-case sense). In contrast, the FWCSR of the robust optimum is much larger (R_g = 1.44), and it allows for more variations in L_c and L_r. This is also reflected by the larger (in magnitude) g_1 and g_2 values of the robust optimum in Table 5.9.
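The FWCSR radius itself comes from the inner optimization. The dissertation solves it with a GA, but for a two-parameter problem a simple directional bisection gives a comparable worst-case radius. The sketch below is our simplified stand-in, not the method's actual implementation; it assumes each difference function grows monotonically along a ray from the origin:

```python
import math

def fwcsr_radius(dg_funcs, dg0, n_dirs=360, r_max=10.0, tol=1e-4):
    """Estimate the FWCSR radius: the smallest normalized ||dp|| at which
    max_j dg_j / dg_{j,0} reaches 1, scanned over many directions.

    dg_funcs: difference functions of the normalized (dp1, dp2).
    dg0: allowable increments (here, |g_j| of the design).
    """
    def on_or_past_boundary(dp1, dp2):
        return max(dg(dp1, dp2) / d0 for dg, d0 in zip(dg_funcs, dg0)) >= 1.0

    best = r_max
    for k in range(n_dirs):
        a = 2.0 * math.pi * k / n_dirs
        if not on_or_past_boundary(r_max * math.cos(a), r_max * math.sin(a)):
            continue  # the FSR boundary is not reached in this direction
        lo, hi = 0.0, r_max
        while hi - lo > tol:  # bisection for the boundary crossing
            mid = 0.5 * (lo + hi)
            if on_or_past_boundary(mid * math.cos(a), mid * math.sin(a)):
                hi = mid
            else:
                lo = mid
        best = min(best, hi)
    return best
```

For a simple linear case with Δg_1 = Δp̄_1, Δg_2 = Δp̄_2, and unit increments, the nearest boundary point lies at distance 1, and the routine returns a radius of about 1.0.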
Observe also that the FWCSR of the robust optimum covers the area bounded by [-0.1, 0.1], as specified.

5.5. SUMMARY

• The feasibility robustness of a design is indicated by its feasibility sensitivity set (S_g). Like the objective sensitivity set (S_f), the feasibility sensitivity set S_g shows how much Δp a design can absorb before it violates a prescribed limit. Thus, the larger S_g, the more robust the design.
• However, unlike S_f, S_g is a one-sided sensitivity measure, because we only need to limit the increase, but not the decrease, in the constraints.
• The plot of S_g in the Δp-space is the Feasibility Sensitivity Region (FSR) of a design. To account for directional sensitivity, we use the worst-case estimate of the FSR, the Feasibility Worst-Case Sensitivity Region (FWCSR), as a robustness measure. The FWCSR radius (R_g) can be calculated by solving a single-objective optimization problem with one equality constraint.
• Unlike objective robustness, the increment limits for the constraints are determined by how far a design is from the constraint boundary, and not by the designer.
• In addition, a design has to be nominally feasible for the FWCSR measure to make sense. If a design is infeasible, then feasibility robustness cannot be guaranteed.
• The inner optimization to calculate R_g must be normalized if the scales of the Δp_i differ. After the normalization, a design is guaranteed to be feasibly robust if its feasibility robustness index, η_g = (G)^{-1/2} R_g, is greater than or equal to 1.0.

CHAPTER 6
DISCUSSIONS

6.1. INTRODUCTION

In the last three chapters, we presented methods for objective robust optimization and feasibility robust optimization separately. The purpose of this chapter is to show how to combine these methods for simultaneous objective and feasibility robust optimization. In addition, this chapter also aims to address those issues in our robust optimization methods that we have not yet addressed, or so far have only briefly discussed.
More specifically, this chapter will discuss issues regarding: (i) the use of a one-sided sensitivity measure for objective robustness, (ii) an asymmetrical two-sided sensitivity measure, (iii) asymmetrical parameter variations, and (iv) a comparison between the robustness index and robustness probability.

6.2. OBJECTIVE AND FEASIBILITY ROBUST OPTIMIZATION

In Chapters 3 and 4, we introduced an index η to measure the objective robustness of a design, where η = (G)^{-1/2} R_f, and R_f is calculated by solving the optimization problem in Eq. (4.6), restated here for convenience:

minimize  R_f²(Δp̄) = Σ_{i=1}^{G} (Δp̄_i)²
subject to:  max_{i=1,…,M} [ Δf_i(Δp) / Δf_i,0 ]² - 1 = 0                 (6.1)

where Δp̄_i = Δp_i/Δp_0,i is the normalized parameter variation. Similarly, in Chapter 5 we introduced an index η_g to measure the feasibility robustness of a design, where η_g = (G)^{-1/2} R_g, and R_g is calculated by Eq. (5.16), restated below:

minimize  R_g²(Δp̄) = Σ_{i=1}^{G} (Δp̄_i)²
subject to:  max_{j=1,…,J} [ g_j(x_0, p_0 + Δp) ] = 0                     (6.2)

Adding the constraint 1 - η/η_0 ≤ 0 to an optimization problem guarantees the objective robustness of an optimum design, while adding the constraint 1 - η_g/η_g,0 ≤ 0 guarantees its feasibility robustness. Accordingly, adding both of these robustness constraints to an optimization problem will guarantee both the objective and feasibility robustness of an optimum design. Eq. (6.3) shows the overall formulation of a general robust optimization problem, where η and η_g are calculated by Eqs. (6.1) and (6.2), respectively:

minimize  [ f_1(x, p_0), …, f_M(x, p_0) ]
subject to:  g_j(x, p_0) ≤ 0,  j = 1,…,J
             h_k(x, p_0) = 0,  k = 1,…,K
             1 - η/η_0 ≤ 0        (objective robustness constraint)
             1 - η_g/η_g,0 ≤ 0    (feasibility robustness constraint)     (6.3)

Notice in Eqs. (6.1) and (6.2) that R_f and R_g are defined in the same space (the Δp̄-space) and are of the same scale (normalized by Δp_0). Based on this observation, the two robustness constraints in Eq. (6.3) will be satisfied if the larger of the two is satisfied, i.e., if max[1 - η/η_0, 1 - η_g/η_g,0] ≤ 0. Using this fact, the robustness constraints in Eq. (6.3) can be stated more compactly as: 1 - min[η/η_0, η_g/η_g,0] ≤ 0.

The advantage of keeping the objective and feasibility robustness constraints separate, as shown in Eq. (6.3) (they are still separate in the compact form as well), is that it provides flexibility for a designer to specify his/her preference between the two types of robustness. By setting different values for η_0 and η_g,0, a designer can specify that one type of robustness (i.e., objective robustness or feasibility robustness) is considered more important than the other. For example, if a designer specifies η_0 = 0.8 but η_g,0 = 1.0, this implies that the feasibility robustness of an optimum design is considered more important than its objective robustness. If a designer is indifferent between objective and feasibility robustness (i.e., η_0 = η_g,0), then the robustness constraint 1 - min[η/η_0, η_g/η_g,0] ≤ 0 simplifies to:

1 - min[R_f, R_g] / (η_0 G^{1/2}) ≤ 0.

This last inequality is of particular interest to us. In this inequality, we only need to determine the smaller of R_f and R_g, but not both. Let's discuss the meaning of min[R_f, R_g] in more detail next.

Recall from the previous chapters that conceptually R_f is the normalized WCSR radius of a design, where the WCSR is the worst-case estimate of the corresponding SR. Likewise, R_g is the normalized FWCSR radius of the design, where the FWCSR is the worst-case estimate of the FSR. Since the SR and FSR are defined in the same Δp̄-space and are of the same scale, min[R_f, R_g] implies that we are looking for the radius of the worst-case estimate of the intersection of the SR and FSR, as shown in Figure 6.1.
In Figure 6.1, the region between the two solid lines is the SR, and the region bounded by the dashed line is the FSR. The R_f and R_g of the SR and FSR are also shown in Figure 6.1. The shaded region is the intersection of the SR and FSR.

Figure 6.1: Intersection of SR and FSR.

Mathematically, a Δp point is inside the intersection region if it satisfies all of the following inequalities: [Δf_i]² ≤ [Δf_i,0]² for i = 1,…,M and Δg_j ≤ Δg_j,0 for j = 1,…,J. A Δp point is outside the intersection region if either [Δf_i]² > [Δf_i,0]² or Δg_j > Δg_j,0 is true for at least one i = 1,…,M or j = 1,…,J, respectively. For a Δp point to be on the boundary of the intersection region, it needs to satisfy [Δf_i]² ≤ [Δf_i,0]² and Δg_j ≤ Δg_j,0, with at least one of these holding with equality (for one i or j from i = 1,…,M or j = 1,…,J, respectively). Using the simplified conditions for the SR and FSR boundaries developed previously (recall Sections 4.3.2 and 5.2.2), the condition for a Δp point to be on the boundary of the intersection region can be simplified into:

max{ max_{i=1,…,M} [ |Δf_i| / Δf_i,0 ], max_{j=1,…,J} [ Δg_j / |g_j(x_0, p)| ] } - 1 = 0,

where we have used the fact that Δg_j,0 = |g_j(x_0, p)|. Notice also that in this simplified condition we have used the ratio Δg_j/|g_j(x_0, p)| instead of g_j(x_0, p) to define the FSR boundary (recall Section 5.3). Using the ratio Δg_j/|g_j(x_0, p)| is recommended because we are comparing it to a normalized Δf_i value; using g_j(x_0, p) in the formulation may create difficulty in terms of numerical comparison.

Using the above mathematical definitions, the radius (R) of the worst-case estimate of the intersection region can be calculated by solving the following (normalized) optimization problem:

minimize  R²(Δp̄) = Σ_{i=1}^{G} (Δp̄_i)²
subject to:
max{ max_{i=1,…,M} [ |Δf_i(Δp)| / Δf_i,0 ], max_{j=1,…,J} [ Δg_j(Δp) / |g_j(x_0, p_0)| ] } - 1 = 0   (6.4)

Recall from Figure 6.1 that this radius value (R) is equal to min[R_f, R_g]. So, Eq. (6.4) shows that we can find min[R_f, R_g] by solving just one inner optimization problem instead of two (i.e., Eqs. (6.1) and (6.2)). If we define the quantity η_min = (G)^{-1/2} min[R_f, R_g], then the overall objective and feasibility robust optimization problem becomes as shown in Eq. (6.5), in which min[R_f, R_g] is calculated by solving Eq. (6.4). Here, we call η_min the overall robustness index. We have used η_0 for the robustness constraint in Eq. (6.5), but since η_0 = η_g,0, we could have used η_g,0 as well.

minimize  [ f_1(x, p_0), …, f_M(x, p_0) ]
subject to:  g_j(x, p_0) ≤ 0,  j = 1,…,J
             1 - η_min/η_0 ≤ 0    (overall robustness constraint)         (6.5)

One advantage of having to solve just one inner optimization problem is that, practically, the optimization algorithm for solving Eq. (6.5) (the outer and inner problems and the interface between them) is much easier to implement. In addition, it helps the outer problem converge faster because it has one fewer constraint to satisfy. Solving only one inner problem also helps reduce the numerical errors transmitted from the inner problem to the outer problem. One potential disadvantage of combining Eqs. (6.1) and (6.2) into a single inner problem is that solving Eq. (6.4) might be computationally less efficient than solving Eqs. (6.1) and (6.2) separately. For instance, if Eq. (6.1) and Eq. (6.2) can each be solved in, say, T iterations, it is possible that Eq. (6.4) may need more than T iterations to converge because its constraint is more complex.

Before we continue further, it is important to point out again that Eqs. (6.4) and (6.5) are valid only if the designer is indifferent between objective and feasibility robustness (i.e., η_0 = η_g,0). If one type of robustness is preferred over the other, then we must solve Eqs. (6.1) and (6.2) separately.

6.2.1. Design of a Payload for an Undersea Autonomous Vehicle (UAV)

To demonstrate our combined objective and feasibility robust optimization method, we apply it to an engineering example: the design of a UAV payload. The description of the problem is as follows.

Typically, the payload of a UAV must be effective in several different uses, called "scenarios." Effectiveness in a scenario is measured by the probability of success, P_S, of payload delivery in that scenario. The design goal is to simultaneously maximize the individual P_S's for all scenarios. The payload design is constrained by upper limits on the weight of the payload and on the radiated noise generated by the payload.

There are six design variables: the payload length (PL), the hull diameter (DH), the material of the hull (HM), the payload type (PT), the first inner material type (I1), and the second inner material type (I2). Four of the variables are discrete: HM, PT, I1, and I2. The choices for HM, PT, and I1 are [6061AL, 7075AL], [BULK, MULTI_MISS], and [TYPE_1A, TYPE_1B], respectively. For the discrete variable I2, the options available are [TYPE_2A, TYPE_2B, TYPE_1B], but I2 can be TYPE_1B only if the variable I1 is TYPE_1B as well. The other two variables are continuous, and they are bounded as: 6.0 ≤ DH ≤ 12.75 and 1.0(DH) ≤ PL ≤ 5.0(DH). In addition to the six design variables, there is a fixed continuous design parameter, the maximum depth (= 3000 ft) at which the payload operates.

Unlike our other design examples, there are no closed-form relationships to map the design variables to the constraints and to the P_S's. Rather, we are provided with a design analyzer (a computer program) that maps the design variables to the payload weight, the radiated noise, and the P_S's for the scenarios. In this example, we address a two-objective payload design optimization with two constraints. The two objectives are to maximize P_S1 and P_S2 for two different scenarios.
The two constraints are an 85 lb upper bound on the payload weight and a 0.16 W/m² upper bound on the radiated noise generated. The problem is mathematically formulated as follows:

maximize  P_S1(PL, DH, HM, PT, I1, I2)
maximize  P_S2(PL, DH, HM, PT, I1, I2)
subject to:  Weight(PL, DH, HM, PT, I1, I2) - 85 ≤ 0
             Noise(PL, DH, HM, PT, I1, I2) - 0.16 ≤ 0                     (6.6)

There are some uncertainties in the formulation of the problem. They are modeled by assuming that a parameter (internal to the design analyzer), C_eq, which is used in calculating the weight of the payload, has an uncontrollable variation. Since the value of C_eq depends on the discrete combination of [PT, I1, I2], its variation is taken to be a percentage of the actual value: ΔC_eq,0 = (0.10)C_eq. In addition, we also assume that two internal parameters [A_1, A_2] used in calculating P_S vary by 0.01 each. The problem we are solving, Eq. (6.6), has two P_S objectives, so we have a pair of [A_1, A_2] variations in calculating the P_S's. It is also assumed that two of the design variables have uncontrollable variations as well: [ΔPL_0, ΔDH_0] = [0.01, 0.01]. In total, there are 7 uncertain parameters in this problem: [C_eq, A_1,s1, A_2,s1, A_1,s2, A_2,s2, PL, DH]. The maximum allowable variations in the P_S's are specified to be [ΔP_S1,0, ΔP_S2,0] = [0.025, 0.025]. For the designer, the objective and feasibility robustness of the payload designs are equally important, and the desired robustness is specified to be η_0 = η_g,0 = 1.0.

Since the designer is indifferent between the two types of robustness, we can use the combined inner problem, Eq. (6.4), to search for the robust Pareto optima of this problem. Adding our overall robustness constraint 1 - η_min/η_0 ≤ 0 to Eq. (6.6) and solving it, we obtain the robust Pareto optima of the problem as shown in Figure 6.2. For comparison, Figure 6.2 also shows the nominal Pareto optima of the problem (i.e., Eq. (6.6) without the robustness constraint), and the Pareto optima obtained from solving the problem using a probabilistic approach. In the probabilistic approach, we optimize the worst-case P_S of each objective, in the form of the sum of the mean and standard deviation of the P_S's. The mean and standard deviation values are calculated by running 10,000 Monte Carlo simulations assuming uniform probability distributions.

Figure 6.2: Pareto sets of the payload problem (the nominal, robust, and probabilistic optima selected for analysis are marked).

We see in Figure 6.2 that overall the robust Pareto optima are inferior to the nominal Pareto optima (this is a maximization problem), although there is some overlap between the two Pareto frontiers. This observation is expected because there is a trade-off between the performance of an optimum and its robustness. We also see that the probabilistic Pareto solutions are very close to the nominal Pareto frontier, suggesting that these points do not meet our robustness requirement.

To verify the robustness of the designs, we performed a sensitivity analysis on three of the Pareto optimum designs obtained, one each (randomly selected) from the nominal, robust, and probabilistic Pareto sets. The objective and constraint values of the three designs are shown in Table 6.1; Table 6.2 shows their design variable values.

Table 6.1: Objective and constraint values of the optima.
              Nominal   Robust   Probabilistic
P_S1          0.067     0.295    0.514
P_S2          0.695     0.295    0.135
Weight (lb)   85.000    84.433   84.95
Noise (W/m²)  0.158     0.158    0.157

Table 6.2: Design variables of the optima.
           Nominal   Robust       Probabilistic
PL (inch)  19.679    24.262       24.237
DH (inch)  9.683     9.048        10.216
HM         7075AL    7075AL       7075AL
PT         BULK      MULTI_MISS   MULTI_MISS
I1         TYPE_1B   TYPE_1B      TYPE_1B
I2         TYPE_2A   TYPE_1B      TYPE_2B

We perform the sensitivity analysis by perturbing the 7 uncertain parameters [C_eq, A_1,s1, A_2,s1, A_1,s2, A_2,s2, PL, DH] around their original values, and then observing the changes in the objective and constraint values of the designs. The perturbation values used in this analysis are randomly sampled from the given ranges of the parameter variations. A design meets the robustness criterion if these changes are within the specified limits. The results of the analysis are shown in Figures 6.3, 6.4, and 6.5 for the nominal, robust, and probabilistic optimum, respectively. In these figures, the dashed lines are the variation limits, and a design is robust if the bars do not cross these lines.

Figure 6.3: Sensitivity analysis of the nominal optimum payload (ΔP_S1, ΔP_S2, weight, and noise for each perturbation case).
Figure 6.4: Sensitivity analysis of the robust optimum payload.
Figure 6.5: Sensitivity analysis of the probabilistic optimum payload.

We observe in Figure 6.3 that the nominal design satisfies both of the objective variation limits and the noise constraint.
However, it does not satisfy the weight constraint. Similarly, the probabilistic design also satisfies the objective and noise robustness requirements, but not the weight constraint robustness (Figure 6.5). In contrast, we observe in Figure 6.4 that the robust design satisfies the variation limits for both objectives and constraints. This shows that the nominal and probabilistic designs do not meet the specified robustness criteria, while the robust design does. In turn, these observations verify that using our robust optimization method to solve Eq. (6.6) indeed results in optimum designs that are robust with respect to both objectives and constraints.

6.3. ONE-SIDED SENSITIVITY FOR OBJECTIVE ROBUSTNESS

Up to this point, we have used a two-sided sensitivity measure to calculate the objective robustness of a design. The comparison studies presented in Chapters 3 and 4 showed applications of this two-sided objective robustness measure to several examples. However, in some cases it may be more appropriate to use a one-sided sensitivity measure to calculate the objective robustness of a design. For instance, if the objective of an optimization problem is to minimize the total cost of a design, then we are only interested in preventing a cost increase, but not a decrease, due to parameter variations. For this type of optimization problem, we should use a one-sided sensitivity measure, and not a two-sided one, to calculate the objective robustness of a design alternative.

A one-sided sensitivity measure for objective robustness is essentially the same as the one-sided sensitivity measure for feasibility robustness (recall Chapter 5), except that now we are looking at objective functions instead of constraint functions. Suppose Δf_0 = [Δf_1,0, Δf_2,0, …, Δf_M,0] is the maximum allowable increase in the objective values of a design x_0 due to parameter variations.
The one-sided S_f of x_0 is then as follows (notice the absence of the square terms):

    S_f(x_0) = { Δp ∈ R^G : Δf_i(Δp) ≤ Δf_{i,0}, i = 1, ..., M },
    where Δf_i(Δp) = f_i(x_0, p_0 + Δp) − f_i(x_0, p_0)    (6.7)

In Eq. (6.7) we have constrained the increase in Δf_i and not the decrease because we are minimizing f_i. Since a smaller f_i is better, it is only logical that an increase in f_i is undesirable. The plot of this S_f in the Δp-space is the one-sided SR of x_0, and the worst-case estimate of this SR is the one-sided WCSR of x_0. The radius of this one-sided WCSR (R_f^+) is obtained by solving the inner optimization problem shown in Eq. (6.8). Notice that Eq. (6.8) is the same as the inner problem used to calculate feasibility robustness (Eq. (5.11)), except that now the equality constraint involves f_i instead of g_j.

    minimize    R_f^+ = ( Σ_{i=1}^{G} Δp_i^2 )^{1/2}
    subject to: max_{i=1,...,M} [ Δf_i(Δp) / Δf_{i,0} ] − 1 = 0    (6.8)

Eq. (6.9) shows the normalized version of Eq. (6.8):

    minimize    R_f^+ = ( Σ_{i=1}^{G} Δp̄_i^2 )^{1/2}
    subject to: max_{i=1,...,M} [ Δf_i(Δp) / Δf_{i,0} ] − 1 = 0,  with Δp_i = Δp̄_i Δp_{0,i}    (6.9)

Using the R_f^+ value obtained from Eq. (6.9), the one-sided objective robustness of x_0 is then indicated by the one-sided robustness index η^+ = (G)^{-1/2} R_f^+, where a value η^+ ≥ 1.0 shows that x_0 is robust, while a value η^+ < 1.0 shows that x_0 is not robust. Formulating the one-sided objective robustness constraint η^+ ≥ η_0^+, where 0 ≤ η_0^+ ≤ 1 is the desired level of robustness, the optimization problem for obtaining an optimum design that is one-sidedly objective robust is as shown in Eq. (6.10):

    minimize    f_1(x, p_0), ..., f_M(x, p_0)
    subject to: g_j(x, p_0) ≤ 0,  j = 1, ..., J
                h_k(x, p_0) = 0,  k = 1, ..., K
                η^+ ≥ η_0^+  (one-sided objective robustness)    (6.10)

6.4. ASYMMETRICAL TWO-SIDED SENSITIVITY MEASURE

In calculating the two-sided sensitivity of a design's objectives, we have used a single positive value Δf_{i,0} to limit both the increase and decrease in f_i, i.e., a symmetrical two-sided sensitivity. Sometimes, however, it is desired to have different limits for the increase and decrease in f_i, i.e., an asymmetrical two-sided sensitivity. The calculation of the asymmetrical robustness index of a design is a straightforward extension of the symmetrical one. Let Δf_0^− = [Δf_{1,0}^−, ..., Δf_{M,0}^−] and Δf_0^+ = [Δf_{1,0}^+, ..., Δf_{M,0}^+] be the maximum allowable decrease and increase in f, respectively. Here we assume that Δf_0^− ≠ Δf_0^+ and Δf_{i,0}^−, Δf_{i,0}^+ > 0 for all i = 1, ..., M. If Δf_0^− = Δf_0^+ then we are essentially looking at a symmetrical two-sided sensitivity (recall Chapter 4). Using these two allowable limits, the asymmetrical S_f of a design x_0 is then as follows:

    S_f(x_0) = { Δp ∈ R^G : −Δf_{i,0}^− ≤ Δf_i(Δp) ≤ Δf_{i,0}^+, i = 1, ..., M },
    where Δf_i(Δp) = f_i(x_0, p_0 + Δp) − f_i(x_0, p_0)    (6.11)

Notice in Eq. (6.11) that the square terms are now replaced by two inequalities. The plot of this S_f in the Δp-space is the asymmetrical SR of x_0, while the worst-case estimate of this SR is the asymmetrical WCSR. The inner optimization problem to calculate the radius of the asymmetrical WCSR (R_f^{−/+}) is shown in Eq. (6.12) (this is the normalized version):

    minimize    R_f^{−/+} = ( Σ_{i=1}^{G} Δp̄_i^2 )^{1/2}
    subject to: max[ max_{i=1,...,M}( −Δf_i(Δp) / Δf_{i,0}^− ), max_{i=1,...,M}( Δf_i(Δp) / Δf_{i,0}^+ ) ] − 1 = 0,
                with Δp_i = Δp̄_i Δp_{0,i}    (6.12)

The equality constraint in Eq. (6.12) shows the asymmetry of this two-sided measure. The first and second terms in the equality constraint in Eq. (6.12) correspond to the decrement and increment limits in f_i, respectively. The asymmetric robustness index of x_0 is then calculated as η^{−/+} = (G)^{-1/2} R_f^{−/+}, where R_f^{−/+} is obtained by solving Eq. (6.12).
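The inner problems above all have the same shape: find the smallest ||Δp̄|| at which the worst-case normalized objective change first reaches 1. As a rough numerical illustration (not the dissertation's GA or fmincon implementation), one can probe random directions in the normalized Δp-space and bisect along each direction for the point where the equality constraint is met, keeping the smallest radius found. The objectives, limits, and probe counts below are assumptions for the sketch.

```python
import math
import random

def wcsr_radius(f_list, p0, dp0, df0_minus, df0_plus, n_dirs=400, r_max=5.0, seed=1):
    """Estimate the (asymmetric two-sided) WCSR radius in normalized dp-space.

    Constraint value at a normalized point dpb:
        c(dpb) = max_i max( -df_i / df0_minus[i], df_i / df0_plus[i] )
    with df_i = f_i(p0 + dpb*dp0) - f_i(p0).  The WCSR radius is the smallest
    ||dpb|| with c(dpb) = 1; here it is approximated by random directions
    plus bisection on the scalar radius along each direction.
    """
    rng = random.Random(seed)
    G = len(p0)

    def c(dpb):
        p = [p0[k] + dpb[k] * dp0[k] for k in range(G)]
        worst = -math.inf
        for f, lo, hi in zip(f_list, df0_minus, df0_plus):
            df = f(p) - f(p0)
            worst = max(worst, -df / lo, df / hi)
        return worst

    best = math.inf
    for _ in range(n_dirs):
        u = [rng.gauss(0.0, 1.0) for _ in range(G)]
        norm = math.sqrt(sum(x * x for x in u))
        u = [x / norm for x in u]
        if c([r_max * x for x in u]) < 1.0:
            continue  # limit never reached along this direction within r_max
        lo_r, hi_r = 0.0, r_max
        for _ in range(50):  # bisection on the radius along direction u
            mid = 0.5 * (lo_r + hi_r)
            if c([mid * x for x in u]) < 1.0:
                lo_r = mid
            else:
                hi_r = mid
        best = min(best, hi_r)
    return best

# Hypothetical check: f(p) = p1^2 + p2^2 around p0 = (0, 0), symmetric limits
# df0 = 1, unit parameter scaling.  The sensitivity region is then the disk
# p1^2 + p2^2 <= 1, so the estimated WCSR radius should be close to 1.
f = lambda p: p[0] ** 2 + p[1] ** 2
R = wcsr_radius([f], [0.0, 0.0], [1.0, 1.0], [1.0], [1.0])
print(round(R, 3))  # -> 1.0
```

Setting `df0_minus` to very large values makes the decrease constraint non-binding, which recovers the one-sided measure of Eq. (6.8); the corresponding robustness index is then R divided by sqrt(G).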
As before, a value η^{−/+} ≥ 1.0 shows that design x_0 is robust, while η^{−/+} < 1.0 shows that it is not robust. To obtain an optimum design that is asymmetrically objective robust, we only need to add the robustness constraint η^{−/+} ≥ η_0^{−/+} to the optimization problem of interest (η_0^{−/+} is the desired level of robustness, 0 ≤ η_0^{−/+} ≤ 1, and is specified by the designer).

6.5. ASYMMETRICAL PARAMETER VARIATIONS

One of the assumptions of our robust optimization method is that the parameter variation ranges are symmetric: −Δp_{0,i} ≤ Δp_i ≤ Δp_{0,i}, i = 1, ..., G (Δp_{0,i} > 0). This assumption is necessary because we are using a single-point normalization to account for the scale differences of the Δp_i when measuring the robustness of a design. Sometimes, however, the variation ranges are not symmetric, and when this occurs, our robustness measure must be modified accordingly to account for this asymmetry. Let Δp_0^− = [Δp_{0,1}^−, ..., Δp_{0,G}^−] and Δp_0^+ = [Δp_{0,1}^+, ..., Δp_{0,G}^+] be the lower and upper bounds of the parameter variation ranges, respectively. We assume that Δp_{0,i}^− ≠ Δp_{0,i}^+ and Δp_{0,i}^−, Δp_{0,i}^+ > 0 for all i = 1, ..., G (otherwise it is a symmetric range).

The purpose of normalizing Δp_i is to map it to the [−1, 1] range. When the parameter variation range (Δp_0) is symmetric, the normalization Δp̄_i = Δp_i / Δp_{0,i} provides such a mapping. When Δp_0 is not symmetric, however, there are two cases to consider: negative and positive Δp_i. If Δp_i is negative, the normalization must be performed with respect to Δp_0^−. If Δp_i is positive, the normalization must be performed with respect to Δp_0^+. Based on this fact, the normalization for the asymmetric Δp is then:

    Δp̄_i = Δp_i / Δp_{0,i}^−  if Δp_i < 0
    Δp̄_i = Δp_i / Δp_{0,i}^+  if Δp_i > 0

or, inversely,

    Δp_i = Δp̄_i Δp_{0,i}^−  if Δp̄_i < 0
    Δp_i = Δp̄_i Δp_{0,i}^+  if Δp̄_i > 0.

Using the ternary operator defined in Chapter 2, this normalization can be written in the compact form Δp = ⟨Δp̄, Δp_0^−, Δp_0^+⟩.

Because the Δp_0 ranges have now been normalized to [−1, 1], the radius of the exterior hyper-sphere of these ranges is still (G)^{1/2}. So, the calculation of the objective and feasibility robustness indices remains the same as before, i.e., η = (G)^{-1/2} R_f and η_g = (G)^{-1/2} R_g, respectively. However, the calculation of the radii of the WCSR and FWCSR has to be modified as shown in Eqs. (6.13) and (6.14), respectively:

    minimize    R_f = ( Σ_{i=1}^{G} Δp̄_i^2 )^{1/2}
    subject to: max_{i=1,...,M} [ Δf_i(⟨Δp̄, Δp_0^−, Δp_0^+⟩) / Δf_{i,0} ] − 1 = 0    (6.13)

    minimize    R_g = ( Σ_{i=1}^{G} Δp̄_i^2 )^{1/2}
    subject to: max_{j=1,...,J} [ g_j(x_0, p_0 + ⟨Δp̄, Δp_0^−, Δp_0^+⟩) ] = 0    (6.14)

6.6. ROBUSTNESS INDEX VS. ROBUSTNESS PROBABILITY

Our robust optimization method measures the robustness of a design based on the values of its robustness indices: η and η_g for objective and feasibility robustness, respectively. The magnitude of a robustness index tells us the degree of robustness of a design: the larger η (and/or η_g), the more robust the design. However, since we are only provided with ranges of Δp_0, η (and/or η_g) does not tell us the actual robustness probability of the design. (For objective robustness, the robustness probability is the probability that the objective values of the design stay within the acceptable limits. For feasibility robustness, it is the probability of constraint satisfaction of the design.) Nevertheless, if we know the probability distribution of Δp_0, we can use the value of η and/or η_g to calculate a lower bound on the robustness probability of the design. The procedure to calculate the robustness probability is described next. Here we only show the calculation for independent uniform and normal distributions. Probability calculations for other distributions, and for cases with correlations, will be somewhat more involved, but the basic procedure is the same.
Uniform Distribution

Suppose Δp is distributed uniformly within [−1, +1] and there is no correlation among its Δp_i components. If the distribution is not within [−1, +1], it can be normalized to fall into this range (it has to be normalized anyway since we are going to compare it with the normalized robustness index). The probability density function of this distribution is f_X(Δp) = (2)^{−G}, and the probability that a random variable X is between [Δp_1, Δp_2] is:

    P[Δp_1 ≤ X ≤ Δp_2] = ∫_{Δp_{1,G}}^{Δp_{2,G}} ... ∫_{Δp_{1,1}}^{Δp_{2,1}} f_X(Δp) d(Δp_1) ... d(Δp_G)    (6.15)

Substituting f_X(Δp) = (2)^{−G} into Eq. (6.15), P[Δp_1 ≤ X ≤ Δp_2] = (2)^{−G} Π_{i=1}^{G} (Δp_{2,i} − Δp_{1,i}).

A value of η = 1.0 indicates that the WCSR of a design encloses the [−1, +1] range in Δp-space. (In our discussion we use η, but it is applicable to η_g as well.) More generally, a robustness index of η tells us that the WCSR of the design encloses the [−η, +η] range in Δp-space. Recall that the Δp range defined by a WCSR tells us the Δp that must occur if we want the Δf_0 limit to be satisfied, while the Δp defined by the probability distribution are the Δp that actually do occur. In other words, the probability that the design will satisfy the Δf_0 limit is equal to the probability that a random Δp falls in the [−η, +η] range, i.e., P[−η ≤ X ≤ +η] = (2)^{−G} Π_{i=1}^{G} (η − (−η)) = η^G. Table 6.3 shows the probability values for several instances of η and G.

Table 6.3: Probability values for uniform distribution.

      η      G=1     G=2     G=3     G=4
    0.01    0.01    0       0       0
    0.1     0.10    0.01    0.001   0
    0.5     0.5     0.25    0.125   0.063
    0.8     0.8     0.64    0.512   0.410
    0.9     0.9     0.81    0.729   0.656
    1.0     1.0     1.0     1.0     1.0

It is very important to point out that the probability calculation above is valid only if each Δp_i is uniformly distributed and they are independent. In addition, the formula P[·] = η^G is only a lower bound (worst case) on the actual robustness probability of the design. This is because we are comparing the uniform distribution (which is a hyper-cube) with a WCSR (which is a hyper-sphere). So, there are some Δp values that are part of the WCSR (and hence η), but are not included in the probability calculation. Another reason our probability calculation is only a lower bound is that the WCSR is a worst-case estimate of the actual SR, so again, there are some Δp values that in reality the design can absorb, but that are not included in our calculation.

Normal Distribution

Suppose Δp is multi-variate normally and independently distributed with mean μ = 0 and covariance matrix Σ = diag(1/9, ..., 1/9). We use a standard deviation of σ_ii = 1/3 so that the range [−1, 1] covers 3σ_ii of the distribution. The probability density function of this distribution is

    f_X(Δp) = (2π)^{−G/2} |Σ|^{−1/2} exp( −(1/2) (Δp − μ)^T Σ^{−1} (Δp − μ) )

where |Σ| and Σ^{−1} are the determinant and inverse of Σ, respectively. The probability that a random variable X is between [Δp_1, Δp_2] is obtained by substituting this density function into Eq. (6.15) and then performing the integration. There is no closed-form solution to this integration, so we must calculate it numerically. As with the uniform distribution, given the robustness index η, the robustness probability of a design is equal to the probability that a random Δp falls into the [−η, +η] range: P[−η ≤ X ≤ +η]. Table 6.4 shows the probability values for several instances of η and G (these values were obtained numerically).

Table 6.4: Probability values for normal distribution.

      η      G=1     G=2     G=3     G=4
    0.01    0.024   0       0       0
    0.1     0.236   0.056   0.013   0.004
    0.5     0.866   0.742   0.639   0.551
    0.8     0.984   0.950   0.926   0.903
    0.9     0.993   0.968   0.952   0.936
    1.0     0.997   0.975   0.963   0.951

Keep in mind again that the values in Table 6.4 are valid only if the Δp distribution is normal.
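Lower bounds of this kind can be reproduced from the robustness index alone. For the uniform case, P = η^G exactly (Table 6.3). For the independent normal case with σ = 1/3, the probability that a single component falls in [−η, +η] is erf(3η/√2), and the product over G components gives a comparable bound; note that the multi-dimensional entries of Table 6.4 come from the dissertation's own numerical integration, so they differ slightly from this simple product form. A sketch:

```python
import math

def p_uniform(eta, G):
    """Lower-bound robustness probability for independent uniform dp on [-1, 1]:
    P[-eta <= X <= +eta] = eta**G  (Table 6.3)."""
    return eta ** G

def p_normal(eta, G, sigma=1.0 / 3.0):
    """Probability that G independent N(0, sigma^2) components all fall in
    [-eta, +eta].  For one component this is erf(eta / (sigma*sqrt(2))).
    The multi-dimensional values in Table 6.4 were computed by the
    dissertation's own numerical integration and differ slightly from
    this product form."""
    one_dim = math.erf(eta / (sigma * math.sqrt(2.0)))
    return one_dim ** G

print(p_uniform(0.5, 3))           # -> 0.125, matching Table 6.3
print(round(p_normal(0.5, 1), 3))  # -> 0.866 for G = 1, matching Table 6.4
print(round(p_normal(1.0, 1), 3))  # -> 0.997 for G = 1
```

For η = 1 and G = 1, the normal bound is the familiar 3σ probability of about 0.997, which is why the [−1, 1] normalization was paired with σ = 1/3 in the first place.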
Also, these values are lower bounds on the actual robustness probability of the design. Unlike the uniform distribution, however, the lower bound for η = 1.0 is not 1.0; rather, it decreases as G increases. The reason for this is that we only use σ_ii = 1/3 for the distribution. As G increases, the "tail" region of the normal distribution becomes larger, and this region is not included in our probability calculation.

Example 6.1

Let us revisit the control valve actuator linkage example from Chapter 5. Previously, we obtained a set of optimum designs for various values of η (Figure 5.21). Using the probability values in Tables 6.3 and 6.4, estimate the probability of constraint satisfaction (P_s) of each design assuming uniform and normal Δp distributions. Calculate the actual P_s of the designs, and compare them to the estimated values.

Solution

We can easily estimate the P_s of the designs using the strategy explained previously. To calculate the actual P_s of the designs, we performed 100,000 runs of Monte Carlo simulation for each design, for each distribution model. Figures 6.6(a) and (b) show the plots of the estimated and actual P_s of the designs for the uniform and normal distribution models, respectively. We see in both Figures 6.6(a) and (b) that the estimated P_s values are always lower than the actual P_s values. This is because the estimated values are only a lower bound on the actual values. Figure 6.6 also shows that the difference between the estimated and actual P_s values decreases as η increases. Intuitively, this observation is expected. The discrepancy between the estimated and the actual values is mainly caused by the fact that we have used a worst-case estimate of the FSR to measure a design's robustness. As the actual P_s approaches 1.0, η increases, and the FWCSR of the design will become more and more like the actual Δp distribution. Therefore, as η
increases, the lower bound P s will approach that of the actual P s . ? 0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1 ? P s Actual Estimate (a) uniform 0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1 ? P s Actual Estimate (b) normal P s P s P s P s Figure 6.6: Comparison between the estimate and actual P s . 194 6.7. SUMMARY ? Adding objective and feasibility robustness constraints to an optimization problem results in optimum designs that are robust both objectively and feasibly. ? The advantage of keeping the objective and feasibility robustness separate is that it provides the designer the flexibility to state his/her preference towards either of the two types of robustness (i.e., by changing ? 0 and ? g,0 ). ? When a designer is indifferent towards the two types of robustness, the overall robustness of the design can be calculated by solving just one inner optimization problem instead of two. ? Practically, it is easier to solve just one inner optimization problem (to calculate the design?s robustness) than to solve two inner problems. Also, it helps the outer problem converges faster as well as reduces the error transmitted from the inner to the outer problem. However, solving two inner problems might be computationally more efficient. ? Sometimes it is more appropriate to use a one-sided sensitivity measure for objective robustness. This one-sided measure is the same as the one-sided measure used in feasibility robustness, except that in this case we use objective functions instead of constraint functions in the formulation. ? For a two-sided objective robustness, sometimes it is desired to have different allowable limits for the decrease and increase in f i , i.e., an asymmetric two-sided sensitivity. This asymmetry can be easily incorporated into our robustness formulation by changing the square terms in S f by two inequalities. 195 ? In our robust optimization method, we had made a necessary assumption that parameter variations are symmetric. 
However, in the event they are not symmetric, our symmetric normalization can easily be modified to account for the asymmetry.
• Since our method does not use a presumed probability density function (pdf), the value of the robustness index does not provide probability information about the degree of robustness of the design. However, if the pdf of Δp is known, we can use the η value to calculate a lower bound on the robustness probability of the design.

CHAPTER 7

CONCLUSIONS

7.1. CONCLUDING REMARKS

In this dissertation, we have presented a step-by-step development of a novel method for robust design optimization. After presenting our research objective and a review of previous work in Chapters 1 and 2, we developed our method for objective robust optimization of a single objective problem in Chapter 3, and then extended it to multi-objective problems in Chapter 4. In Chapter 5, we developed a method for feasibility robust optimization of a design. Chapter 6 presented our combined objective and feasibility robust optimization method.

The essence of our robust optimization method is the robustness measurement of a design alternative using a sensitivity region concept. A sensitivity region is an inherent property of a design that shows how much parameter variation the design can absorb given a limit on its performance variation. The more parameter variation a design can absorb (i.e., the more we can allow the parameters to vary), the more robust the design is. In the method, we use the worst-case estimate of the sensitivity region as a measure of a design's robustness. Based on this worst-case estimate, we calculate a robustness index for the design, which we then constrain and add to the original optimization problem to guarantee the robustness of the optimum design solution obtained. In Chapters 3-6, we demonstrated the application of our robust optimization method to several numerical and engineering examples.
For comparison, we also solved some of the problems using several other well-known robust optimization methods. We showed in these examples that our method indeed obtains a design that is robust and optimum, and that our method is computationally efficient. In the next few subsections, we provide some additional concluding remarks regarding the results of our research.

7.1.1. Verification

In Chapters 3 through 6 we solved several numerical and engineering examples and analyzed the results obtained to verify our robust optimization methods. In the wine-bottle function example (Section 3.4.1), we can graphically verify the validity of our robust optimization method. The function has a unique property in that there is a flat region around the middle at which the function is insensitive to the variable variations, i.e., a robust region. A robust optimization method is valid if its solutions are within this flat region. We observe that the solutions from our method indeed fall into the flat region, thus verifying that our method is a valid robust optimization method for this example.

For the other comparison studies, graphical verification is difficult or impossible to do. So instead, we performed a sensitivity analysis on the results obtained to see if they are indeed robust. The results of our sensitivity analysis are summarized in Table 7.1. In this table, the symbol "✓" means that the optimum designs obtained are robust, while the symbol "✗" means that they are not. We show in Table 7.1 four categories of robust optimization methods. The "Nominal" method refers to a regular optimization method without robustness consideration. The "Robust" method refers to our robust optimization method with η_0 = η_g,0 = 1.0. The "Sampling" method refers to those methods that perform a localized sampling around the nominal parameter values (e.g., Monte Carlo). The "Gradient" method refers to those methods that use the gradient of the functions in calculating a design's robustness (e.g., worst-case gradient, moment matching).

Table 7.1: Summary of sensitivity analysis results.

    Example                          Nominal   Robust   Sampling   Gradient
    Three-bar truss                  ✗         ✓        ✗
    Welded beam                      ✗         ✓        ✗
    Compression spring               ✗         ✓                   ✓
    Numerical (multi-objective)      ✗         ✓        ✗
    Vibrating platform               ✗         ✓        ✗
    Speed reducer                    ✗         ✓        ✗
    Power electronic module          ✗         ✓        ✗
    Numerical (feasibility)          ✗         ✓                   ✗
    Explosive actuated cylinder      ✗         ✓        ✗
    Belleville spring                ✗         ✓                   ✗
    Control valve actuator linkage   ✗         ✓        ✗
    Payload for an UAV               ✗         ✓        ✗

We see in Table 7.1 that optimum designs obtained by our robust optimization method always satisfy the robustness requirements. This observation verifies that our method is indeed a valid robust optimization method. In contrast, optimum designs obtained by a regular optimization (the "Nominal" method) are not robust for these examples. This is expected since this method does not account for a design's robustness. The optima of the "Sampling" method also do not satisfy the robustness requirement. This is because the method uses probability to measure a design's robustness, so there is a chance (albeit small) that the optimum design will violate the requirement. In the sensitivity analysis, a design is termed robust only if it never violates the requirement. The optimum of the "Gradient" method is robust in the compression spring example, but it has a very poor objective value (recall Section 3.4.4). In the numerical example in Chapter 5 and the Belleville spring design example, the "Gradient" method fails to obtain a robust optimum design.

7.1.2. Computational Efficiency

Our robust optimization method calculates the robustness of a design by solving an inner optimization problem, which is a single objective problem with an equality constraint.
When the objective/constraint functions of the outer problem (the original optimization problem) are simple enough, analytic solutions to the inner problem may be possible, in which case our method does not need to perform any function evaluations. When analytic solutions are not possible, the inner problem may be solved by a gradient-based optimization algorithm such as Sequential Quadratic Programming. The computational cost of such algorithms is generally on the order of 10^1 function evaluations. When gradient-based algorithms are not applicable (e.g., when the functions are non-differentiable), stochastic algorithms such as a GA may be used instead. For high solution accuracy, the computational cost of stochastic algorithms is generally on the order of more than 10^3. However, throughout our comparison studies, we found that the GA can solve the inner optimization problem using only ~200-300 function evaluations. This is because the inner problem is not too difficult an optimization problem to solve: its objective function is convex and unimodal, and the search space is not large. Table 7.2 shows a summary of the number of function evaluations (F_call) performed by the four methods to calculate the robustness of one design alternative.

Table 7.2: Summary of average number of function evaluations.

    Example                          Nominal   Robust   Sampling   Gradient
    Wine-bottle                      0         30       N/A        N/A
    Three-bar truss                  0         39       10000      N/A
    Welded beam                      0         250      100000     N/A
    Compression spring               0         250      N/A        0
    Numerical (multi-objective)      0         250      100000     N/A
    Vibrating platform               0         250      100000     N/A
    Speed reducer                    0         300      100000     N/A
    Power electronic module          0         300      100000     N/A
    Numerical (feasibility)          0         24       N/A        0
    Explosive actuated cylinder      0         45       100000     N/A
    Belleville spring                0         300      N/A        0
    Control valve actuator linkage   0         250      100000     N/A
    Payload for an UAV               0         300      10000      N/A

In Table 7.2, the F_call for the "Nominal" method is 0 because this method does not calculate the design's robustness. The F_call for the "Gradient" method is also 0, but this is because the gradients of the functions in these examples are known in closed form. If the gradient had to be numerically estimated, the F_call would be non-zero, depending on the dimension of the inner problem. The F_call of our robust optimization method varies from 24 to 300. The F_call is either 250 or 300 when we used the GA to solve the inner problem, and it is less than 50 when we used fmincon. Overall, the F_call of our method is much lower than that of the "Sampling" method, whose F_call ranges from 10,000 to 100,000.

Table 7.3 shows a summary of the computational cost of the four methods in terms of absolute time ("hr" for hours, "m" for minutes, and "s" for seconds), and the optimization algorithm used in each problem. These time values were obtained using a Pentium III 866 MHz computer with 384 MB of RAM.

Table 7.3: Summary of computational time.

    Example                          Nominal   Robust      Sampling    Gradient   Algorithm
    Wine-bottle                      2 s       30 m 43 s   N/A         N/A        GA
    Three-bar truss                  0.25 s    3 s         3 m 17 s    N/A        fmincon
    Welded beam                      18 s      8 m 25 s    1 hr 15 m   N/A        GA
    Compression spring               3 s       8 m 14 s    N/A         3 s        GA
    Numerical (multi-objective)      4 s       18 m 3 s    2 hr 3 m    N/A        NSGA
    Vibrating platform               4 s       19 m 8 s    1 hr 32 m   N/A        MOGA
    Speed reducer                    5 s       12 m 58 s   1 hr 33 m   N/A        NSGA
    Power electronic module          5 s       33 m 6 s    7 hr 42 m   N/A        NSGA
    Numerical (feasibility)          0.22 s    2.5 s       N/A         4 s        fmincon
    Explosive actuated cylinder      1 s       18 s        1 m 28 s    N/A        fmincon
    Belleville spring                4 s       12 m 47 s   N/A         N/A        GA
    Control valve actuator linkage   11 s      16 m 17 s   2 hr 28 m   N/A        GA
    Payload for an UAV               5 s       22 m 29 s   1 hr 52 m   N/A        NSGA

The values in Table 7.3 confirm the data shown in Table 7.2. Overall, the "Nominal" method is the fastest, requiring at most 18 seconds to solve a problem. This is not surprising since this method does not perform any additional function evaluations. The "Gradient" method is also very fast since the gradient information is available analytically. The "Sampling" method is the slowest and computationally very expensive.
For the power electronic module example, it took more than 7 hours to complete the optimization process. In contrast, the computation time of our robust optimization method is much lower. It ranges from a few seconds to several minutes, depending on the algorithm used to solve the inner optimization problem (fmincon and GA, respectively).

7.1.3. Advantages and Disadvantages

The main difference between our robust optimization method and the other methods is that our method calculates the robustness of a design in a "reverse" mode. That is, instead of calculating Δf (or Δg for constraints) for a given Δp, our method calculates Δp for a given Δf (or Δg). The advantage of working in this reverse mode is that the robustness information provided by our method does not depend on the actual Δp. Should the actual uncontrollable Δp change, the sensitivity region of the design will not change, so we can still use this information to determine the design's robustness with respect to the new Δp. In contrast, for the conventional "forward" methods, if Δp changes, then the design's robustness as previously calculated is no longer valid, and we have to re-evaluate it.

Because our robustness calculation does not depend on the actual Δp, our method is independent of the probability distribution of Δp. As long as Δp falls within the sensitivity region of the design, the design is guaranteed to satisfy the Δf_0 (or Δg_0) requirement. If the probability distribution of Δp changes, the robustness probability of the design will change, but this guarantee stays the same.

Another advantage of our method is that it does not use the gradient information of the objective/constraint functions. As a result, our robustness calculation is valid even if the Δp variations are large, beyond the linear range in which gradient estimation is valid. This is in contrast with those methods that use gradient calculations, such as a Taylor series expansion, to calculate a design's robustness.
The numerical example in Chapter 5 (Section 5.4.1) showed how gradient-based robustness methods fail when Δp becomes large, while our method remains valid. Since our method does not use gradient information, it is also applicable to optimization problems whose objective/constraint functions are not differentiable everywhere with respect to Δp. This is demonstrated in the welded beam example in Section 3.4.3, where the objective function of the problem is a step function with respect to Δp.

Our method is also computationally efficient. As shown experimentally, the upper bound on the number of function evaluations needed by our method is on the order of 10^2 when stochastic algorithms are necessary to solve the inner problem. Our method uses more F_call than the gradient-based methods; however, the applications of the gradient-based methods are limited to small ranges of Δp variations. Our method is more efficient than sampling-based methods, whose F_call is on the order of 10^3 or more. Even for the more efficient sampling-based methods (such as MPP (Du and Chen, 2000)), the F_call is still on the order of 10^2 or above. Moreover, our method does not need a presumed probability distribution to calculate a design's robustness.

One shortcoming of our method is that it is conservative, because it uses only the worst-case estimate of the sensitivity region to determine a design's robustness. So, there are some Δp variations that in reality the design can absorb, but they are not included in the calculations. Our method also does not provide probability information regarding a design's robustness. However, this is not because we cannot calculate the probability, but rather because we do not assume a probability distribution of the Δp. If the pdf of Δp is known, we can numerically calculate a lower bound on the probability (recall Section 6.6). If the actual probability of the design is needed, it may be interpolated experimentally.
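The "actual" robustness probability referred to here (and used in Example 6.1) can be estimated by Monte Carlo sampling of Δp: draw many perturbations from the assumed distribution and count how often all constraints remain satisfied. A minimal sketch; the constraint, nominal point, distribution, and sample count are all made-up assumptions for illustration.

```python
import random

def monte_carlo_ps(g_list, p0, sampler, n=100_000, seed=7):
    """Estimate the probability of constraint satisfaction P_s:
    the fraction of sampled perturbations dp for which all
    constraints g_j(p0 + dp) <= 0 hold."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(n):
        dp = sampler(rng)
        p = [a + b for a, b in zip(p0, dp)]
        if all(g(p) <= 0.0 for g in g_list):
            ok += 1
    return ok / n

# Hypothetical example: one constraint g(p) = p1 + p2 - 1 <= 0 at
# p0 = (0.25, 0.25), with dp uniform on [-0.5, 0.5]^2.  The constraint
# fails when dp1 + dp2 > 0.5; by geometry the exact P_s is 0.875.
g = lambda p: p[0] + p[1] - 1.0
uniform = lambda rng: [rng.uniform(-0.5, 0.5) for _ in range(2)]
ps = monte_carlo_ps([g], [0.25, 0.25], uniform)
print(round(ps, 3))
```

With 100,000 samples the estimate lands within about one percentage point of the exact value; this is the kind of brute-force check against which the index-based lower bounds of Section 6.6 were compared in Figure 6.6.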
We first solve the problem for the robust optimum using several values of η_0, and then calculate the actual probability of these optima. The probability of an optimum for other values of η_0 can then be interpolated from the results (recall Figure 6.5).

7.2. CONTRIBUTIONS

In this dissertation, we have introduced and developed several new and innovative concepts for robust optimization of a design. The contributions of the research presented in this dissertation are summarized below.

• Introduced and developed the notion of a "reverse" robustness measure of a design alternative. This robustness measure does not require a presumed probability distribution of parameter variations. It also does not use gradient information, so it is valid for large variations of parameters and is applicable to non-differentiable objective/constraint functions.
• Introduced and developed the concepts of "one-sided" and "two-sided" sensitivity sets and sensitivity regions of a design alternative for single and multiple objective/constraint functions. The concept of an asymmetrical two-sided sensitivity of a design has also been introduced and developed.
• Introduced the notion of directional sensitivity of a design, and developed an approach to account for it using the worst-case estimation of the sensitivity region.
• Developed a mathematical formulation to calculate the radius of the worst-case sensitivity region, and an approach to normalize the formulation to account for the scale importance of parameters.
• Developed an approach to calculate the lower bound of the robustness probability of a design when the probability distribution of the uncertain parameters is known.
• Developed an efficient constraint-based robust optimization method using the sensitivity region concept. The method is applicable to both single and multi-objective optimization problems, and can account for both objective and feasibility robustness of an optimum design.
•
Introduced and developed the concept of a robustness index for a design alternative, which is a measure of robustness calculated based on the radius of the worst-case sensitivity region.
• Developed an inner-outer optimization framework to efficiently search for design alternatives that are optimum and robust.
• Introduced and developed the concepts of multi-objective robustness and multi-objective robust optimality of a design alternative.

7.3. FUTURE RESEARCH DIRECTIONS

The robust optimization method presented in this dissertation addresses many of the shortcomings of previous work in robust optimization. However, there are still many important research issues left unresolved. In this last section we briefly discuss some of these issues and provide some general research directions to address them. Some of the discussion presented here is based on our experience during the development of this dissertation; some is based on the inputs and comments of colleagues and active researchers at other institutions.

• One very important issue that has received little attention so far is determining whether robust optimization is needed in the first place. We have assumed in our research that there is a trade-off between performance and robustness of a design. However, it is not uncommon that the robustness of a design increases as its performance increases. In such a situation, robust optimization is not needed, since the optimum design is already guaranteed to also be the most robust. To avoid wasting time and resources, we need some sort of indicator that can tell us from the beginning whether the performance vs. robustness trade-off exists. The gradient of a function may be such an indicator. If the gradient of a function is monotonically decreasing with the function's value, then as the function is minimized the gradient is minimized as well, so there is no performance-robustness trade-off.
Other inherent properties, such as the concavity or modality of the function, may also indicate such a trade-off.
• One shortcoming of our robust optimization method is that it is conservative. This is because we have used the worst-case estimate of the sensitivity region of a design as a measure of the overall robustness of the design. If we can incorporate those portions of the sensitivity region that are not included in the worst-case estimate into our robustness calculations, we can obtain a more accurate description of the design's robustness. An experiment-based regression analysis could potentially be used to numerically approximate the sensitivity region of a design, so that we have the entire region as a robustness measure rather than just the worst-case region.
• In this dissertation, we assume that parameter variations are continuous. In many engineering design problems, parameters such as temperature variations or dimensional tolerances are continuous. However, in some cases the variations might be discrete, and one wants to find a design alternative that is optimum and robust over a range of discrete scenarios. Examples of discrete variations include changes in material type, or the number of teeth in a gear for a power tool. Theoretically, the sensitivity set concept should still be applicable to the discrete variations case; however, the notion of a sensitivity region may no longer apply. The possibility that the parameter variations have both continuous and discrete elements should also be investigated.
• An important issue that has not been addressed in this dissertation is the fact that the notion of a robust design is a subjective matter. A design that is considered robust by one designer may not satisfy the robustness preferences or requirements of another designer. A design that is considered robust by each designer in a group may not be robust enough for the group collectively.
The topic of preferences and decision-making is a very active area of research in its own right. Nevertheless, if we are able to incorporate some of the existing preference-capturing methods into our robust optimization method, the method will become more practical.
• Practically speaking, design optimization should fit within an iterative and collaborative process involving various disciplines, in which information about the design is constantly updated and improved after each iteration. The robust optimization method presented in this dissertation does not account for this collaborative model. Integrating our method with systematic design techniques might improve its applicability to practical, real-world problems.