Engineering Optimization Theory and Practice

The purpose of this textbook is to present the techniques and applications of engineering optimization in a comprehensive manner.

Singiresu S. Rao

819 Pages





  • 12 Feb 2015
  • Page - 1

    Engineering Optimization: Theory and Practice, Fourth Edition. Singiresu S. Rao. Copyright © 2009 by John Wiley & Sons, Inc.

  • Page - 2

    Engineering Optimization: Theory and Practice, Fourth Edition. Singiresu S. Rao. John Wiley & Sons, Inc.

  • Page - 3

    This book is printed on acid-free paper. Copyright © 2009 by John Wiley & Sons, Inc. All rights reserved. Published by John Wiley & Sons, Inc., Hoboken, New Jersey. Published simultaneously in Canada. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the …


  • Page - 5

    Contents
    Preface xvii
    1 Introduction to Optimization 1
    1.1 Introduction 1
    1.2 Historical Development 3
    1.3 Engineering Applications of Optimization 5
    1.4 Statement of an Optimization Problem 6
    1.4.1 Design Vector 6
    1.4.2 Design Constraints 7
    1.4.3 Constraint Surface 8
    1.4.4 Objective Function 9
    1.4.5 Objective Function Surfaces 9
    1.5 Classification of Optimization Problems 14
    1.5.1 Classification Based on the Existence of Constraints 14
    1.5.2 Classification Based on the Nature of the Design Variables 15
    1.5.3 Classification Based on …

  • Page - 6

    2.5 Multivariable Optimization with Inequality Constraints 93
    2.5.1 Kuhn–Tucker Conditions 98
    2.5.2 Constraint Qualification 98
    2.6 Convex Programming Problem 104
    References and Bibliography 105
    Review Questions 105
    Problems 106
    3 Linear Programming I: Simplex Method 119
    3.1 Introduction 119
    3.2 Applications of Linear Programming 120
    3.3 Standard Form of a Linear Programming Problem 122
    3.4 Geometry of Linear Programming Problems 124
    3.5 Definitions and Theorems 127
    3.6 Solution of a System of Linear Simultaneous …

  • Page - 7

    4.7 Karmarkar’s Interior Method 222
    4.7.1 Statement of the Problem 223
    4.7.2 Conversion of an LP Problem into the Required Form 224
    4.7.3 Algorithm 226
    4.8 Quadratic Programming 229
    4.9 MATLAB Solutions 235
    References and Bibliography 237
    Review Questions 239
    Problems 239
    5 Nonlinear Programming I: One-Dimensional Minimization Methods 248
    5.1 Introduction 248
    5.2 Unimodal Function 253
    ELIMINATION METHODS 254
    5.3 Unrestricted Search 254
    5.3.1 Search with Fixed Step Size 254
    5.3.2 Search with Accelerated Step …

  • Page - 8

    6 Nonlinear Programming II: Unconstrained Optimization Techniques 301
    6.1 Introduction 301
    6.1.1 Classification of Unconstrained Minimization Methods 304
    6.1.2 General Approach 305
    6.1.3 Rate of Convergence 305
    6.1.4 Scaling of Design Variables 305
    DIRECT SEARCH METHODS 309
    6.2 Random Search Methods 309
    6.2.1 Random Jumping Method 311
    6.2.2 Random Walk Method 312
    6.2.3 Random Walk Method with Direction Exploitation 313
    6.2.4 Advantages of Random Search Methods 314
    6.3 Grid Search Method 314
    6.4 Univariate …

  • Page - 9

    7 Nonlinear Programming III: Constrained Optimization Techniques 380
    7.1 Introduction 380
    7.2 Characteristics of a Constrained Problem 380
    DIRECT METHODS 383
    7.3 Random Search Methods 383
    7.4 Complex Method 384
    7.5 Sequential Linear Programming 387
    7.6 Basic Approach in the Methods of Feasible Directions 393
    7.7 Zoutendijk’s Method of Feasible Directions 394
    7.7.1 Direction-Finding Problem 395
    7.7.2 Determination of Step Length 398
    7.7.3 Termination Criteria 401
    7.8 Rosen’s Gradient Projection …

  • Page - 10

    7.21 Checking the Convergence of Constrained Optimization Problems 464
    7.21.1 Perturbing the Design Vector 465
    7.21.2 Testing the Kuhn–Tucker Conditions 465
    7.22 Test Problems 467
    7.22.1 Design of a Three-Bar Truss 467
    7.22.2 Design of a Twenty-Five-Bar Space Truss 468
    7.22.3 Welded Beam Design 470
    7.22.4 Speed Reducer (Gear Train) Design 472
    7.22.5 Heat Exchanger Design 473
    7.23 MATLAB Solution of Constrained Optimization Problems 474
    References and Bibliography 476
    Review …

  • Page - 11

    9.5 Example Illustrating the Calculus Method of Solution 555
    9.6 Example Illustrating the Tabular Method of Solution 560
    9.7 Conversion of a Final Value Problem into an Initial Value Problem 566
    9.8 Linear Programming as a Case of Dynamic Programming 569
    9.9 Continuous Dynamic Programming 573
    9.10 Additional Applications 576
    9.10.1 Design of Continuous Beams 576
    9.10.2 Optimal Layout (Geometry) of a Truss 577
    9.10.3 Optimal Design of a Gear Train 579
    9.10.4 Design of a Minimum-Cost Drainage …

  • Page - 12

    11.2.2 Random Variables and Probability Density Functions 633
    11.2.3 Mean and Standard Deviation 635
    11.2.4 Function of a Random Variable 638
    11.2.5 Jointly Distributed Random Variables 639
    11.2.6 Covariance and Correlation 640
    11.2.7 Functions of Several Random Variables 640
    11.2.8 Probability Distributions 643
    11.2.9 Central Limit Theorem 647
    11.3 Stochastic Linear Programming 647
    11.4 Stochastic Nonlinear Programming 652
    11.4.1 Objective Function 652
    11.4.2 Constraints 653
    11.5 Stochastic Geometric …

  • Page - 13

    13.2.6 Numerical Results 702
    13.3 Simulated Annealing 702
    13.3.1 Introduction 702
    13.3.2 Procedure 703
    13.3.3 Algorithm 704
    13.3.4 Features of the Method 705
    13.3.5 Numerical Results 705
    13.4 Particle Swarm Optimization 708
    13.4.1 Introduction 708
    13.4.2 Computational Implementation of PSO 709
    13.4.3 Improvement to the Particle Swarm Optimization Method 710
    13.4.4 Solution of the Constrained Optimization Problem 711
    13.5 Ant Colony Optimization 714
    13.5.1 Basic Concept 714
    13.5.2 Ant Searching Behavior 715
    13.5.3 Path …

  • Page - 14

    14.7.2 Sensitivity Equations Using the Concept of Feasible Direction 754
    14.8 Multilevel Optimization 755
    14.8.1 Basic Idea 755
    14.8.2 Method 756
    14.9 Parallel Processing 760
    14.10 Multiobjective Optimization 761
    14.10.1 Utility Function Method 763
    14.10.2 Inverted Utility Function Method 764
    14.10.3 Global Criterion Method 764
    14.10.4 Bounded Objective Function Method 764
    14.10.5 Lexicographic Method 765
    14.10.6 Goal Programming Method 765
    14.10.7 Goal Attainment Method 766
    14.11 Solution of Multiobjective …

  • Page - 15

    Preface
    The ever-increasing demand on engineers to lower production costs to withstand global competition has prompted engineers to look for rigorous methods of decision making, such as optimization methods, to design and produce products and systems both economically and efficiently. Optimization techniques, having reached a degree of maturity in recent years, are being used in a wide spectrum of industries, including aerospace, automotive, chemical, electrical, construction, and manufacturing …

  • Page - 16

    … from several fields of engineering to make the subject appealing to all branches of engineering. A large number of solved examples, review questions, problems, project-type problems, figures, and references are included to enhance the presentation of the material. Specific features of the book include:
    • More than 130 illustrative examples accompanying most topics.
    • More than 480 references to the literature of engineering optimization theory and applications.
    • More than 460 review …

  • Page - 17

    Chapters 3 and 4 deal with the solution of linear programming problems. The characteristics of a general linear programming problem and the development of the simplex method of solution are given in Chapter 3. Some advanced topics in linear programming, such as the revised simplex method, duality theory, the decomposition principle, and post-optimality analysis, are discussed in Chapter 4. The extension of linear programming to solve quadratic programming problems is also considered …

  • Page - 18

    1 Introduction to Optimization
    1.1 INTRODUCTION
    Optimization is the act of obtaining the best result under given circumstances. In design, construction, and maintenance of any engineering system, engineers have to take many technological and managerial decisions at several stages. The ultimate goal of all such decisions is either to minimize the effort required or to maximize the desired benefit. Since the effort required or the benefit desired in any practical situation can be expressed as a function …

  • Page - 19

    Figure 1.1: Minimum of f(x) is the same as maximum of −f(x). Figure 1.2: Optimum solution of cf(x) or c + f(x) is the same as that of f(x).
    Table 1.1 lists various mathematical programming techniques together with other well-defined areas of operations research. The classification given in Table 1.1 is not unique; it is given mainly for convenience. Mathematical programming techniques are useful in finding the minimum of …
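
The invariances illustrated in Figs. 1.1 and 1.2 are easy to check numerically. A minimal sketch (the quadratic function, the constant c, and the grid are arbitrary illustrative choices, not from the book):

```python
# Numerical check that the minimizer x* of f(x) is unchanged when f is
# multiplied by a positive constant c or shifted by a constant c, and that
# the minimum of f(x) occurs where -f(x) is maximized.

def argmin_on_grid(fn, xs):
    """Return the grid point where fn is smallest."""
    return min(xs, key=fn)

f = lambda x: (x - 2.0) ** 2 + 1.0          # minimum at x* = 2
c = 3.0
xs = [i / 100.0 for i in range(-500, 501)]  # grid on [-5, 5]

x_star = argmin_on_grid(f, xs)
assert x_star == argmin_on_grid(lambda x: c * f(x), xs)  # same x* for c*f(x)
assert x_star == argmin_on_grid(lambda x: c + f(x), xs)  # same x* for c + f(x)
assert x_star == max(xs, key=lambda x: -f(x))            # min f at max of -f
print(x_star)  # 2.0
```

This is why maximization problems can always be treated as minimization problems (and vice versa), as the text states.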

  • Page - 20

    Table 1.1 Methods of Operations Research
    Mathematical programming or optimization techniques: calculus methods, calculus of variations, nonlinear programming, geometric programming, quadratic programming, linear programming, …
    Stochastic process techniques: statistical decision theory, Markov processes, queueing theory, renewal theory, simulation methods, reliability …
    Statistical methods: regression analysis, cluster analysis, pattern recognition, design of experiments, discriminate analysis (factor analysis) …

  • Page - 21

    … sufficiency conditions for the optimal solution of programming problems laid the foundations for a great deal of later research in nonlinear programming. The contributions of Zoutendijk and Rosen to nonlinear programming during the early 1960s have been significant. Although no single technique has been found to be universally applicable for nonlinear programming problems, the work of Carroll and Fiacco and McCormick allowed many difficult problems to be solved by …

  • Page - 22

    1.3 ENGINEERING APPLICATIONS OF OPTIMIZATION
    Optimization, in its broadest sense, can be applied to solve any engineering problem. Some typical applications from different engineering disciplines indicate the wide scope of the subject:
    1. Design of aircraft and aerospace structures for minimum weight
    2. Finding the optimal trajectories of space vehicles
    3. Design of civil engineering structures such as frames, foundations, bridges, towers, chimneys, and dams for …

  • Page - 23

    1.4 STATEMENT OF AN OPTIMIZATION PROBLEM
    An optimization or a mathematical programming problem can be stated as follows.
    Find X = {x1, x2, …, xn}T which minimizes f(X)
    subject to the constraints
    gj(X) ≤ 0,  j = 1, 2, …, m
    lj(X) = 0,  j = 1, 2, …, p   (1.1)
    where X is an n-dimensional vector called the design vector, f(X) is termed the objective function, and gj(X) and lj(X) are known as inequality and …
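
In code, the statement (1.1) amounts to an objective f(X) plus two lists of constraint functions. A minimal sketch of a feasibility check under this convention (the two-variable example problem is hypothetical, not from the book):

```python
# A design vector X is feasible for problem (1.1) when every inequality
# constraint satisfies g_j(X) <= 0 and every equality constraint l_j(X) = 0.

def is_feasible(x, ineq, eq, tol=1e-9):
    """Check g_j(x) <= 0 for all j, and |l_j(x)| <= tol for all j."""
    return all(g(x) <= tol for g in ineq) and all(abs(l(x)) <= tol for l in eq)

# Hypothetical problem: minimize f subject to g1(X) <= 0 and l1(X) = 0.
f = lambda x: x[0] ** 2 + x[1] ** 2
ineq = [lambda x: 1.0 - x[0]]         # g1(X) = 1 - x1 <= 0, i.e. x1 >= 1
eq = [lambda x: x[0] + x[1] - 3.0]    # l1(X) = x1 + x2 - 3 = 0

print(is_feasible([1.0, 2.0], ineq, eq))  # True: x1 >= 1 and x1 + x2 = 3
print(is_feasible([0.5, 2.5], ineq, eq))  # False: violates x1 >= 1
```

Every classification discussed later in this chapter (linear, quadratic, geometric, integer, stochastic, …) is a special case of this general form, distinguished by the structure of f, gj, and lj.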

  • Page - 24

    Figure 1.3: Gear pair in mesh.
    … the design variable space or simply design space. Each point in the n-dimensional design space is called a design point and represents either a possible or an impossible solution to the design problem. In the case of the design of a gear pair, the design point {1.0, 20, 40}T, for example, represents a possible solution, whereas the design point {1.0, −20, 40.5}T represents an impossible solution since it is not possible to have …

  • Page - 25

    1.4.3 Constraint Surface
    For illustration, consider an optimization problem with only inequality constraints gj(X) ≤ 0. The set of values of X that satisfy the equation gj(X) = 0 forms a hypersurface in the design space and is called a constraint surface. Note that this is an (n − 1)-dimensional subspace, where n is the number of design variables. The constraint surface divides the design space into two regions: one in which gj(X) < 0 and the other in which gj(X) …

  • Page - 26

    1.4.4 Objective Function
    The conventional design procedures aim at finding an acceptable or adequate design that merely satisfies the functional and other requirements of the problem. In general, there will be more than one acceptable design, and the purpose of optimization is to choose the best one of the many acceptable designs available. Thus a criterion has to be chosen for comparing the different alternative acceptable designs and for selecting the best …

  • Page - 27

    Figure 1.5: Contours of the objective function.
    … has to be solved purely as a mathematical problem. The following example illustrates the graphical optimization procedure.
    Example 1.1 Design a uniform column of tubular section, with hinge joints at both ends (Fig. 1.6), to carry a compressive load P = 2500 kgf for minimum cost. The column is made of a material that has a yield stress (σy) of 500 kgf/cm², modulus of elasticity (E) of 0.85 × 10⁶ kgf/cm², and weight density …

  • Page - 28

    Figure 1.6: Tubular column under compression.
    The behavior constraints can be expressed as
    stress induced ≤ yield stress
    stress induced ≤ buckling stress
    The induced stress is given by
    induced stress = σi = P/(π d t) = 2500/(π x1 x2)   (E3)
    The buckling stress for a pin-connected column is given by
    buckling stress = σb = Euler buckling load / cross-sectional area = [π²EI/l²] · 1/(π d t)   (E4)
    where
    I = second moment of area of the cross section of the column
      = (π/64)(do⁴ − di⁴) = (π/64)(do² + …

  • Page - 29

    Thus the behavior constraints can be restated as
    g1(X) = 2500/(π x1 x2) − 500 ≤ 0   (E6)
    g2(X) = 2500/(π x1 x2) − π²(0.85 × 10⁶)(x1² + x2²)/[8(250)²] ≤ 0   (E7)
    The side constraints are given by
    2 ≤ d ≤ 14
    0.2 ≤ t ≤ 0.8
    which can be expressed in standard form as
    g3(X) = −x1 + 2.0 ≤ 0   (E8)
    g4(X) = x1 − 14.0 ≤ 0   (E9)
    g5(X) = −x2 + 0.2 ≤ 0   (E10)
    g6(X) = x2 − 0.8 ≤ 0   (E11)
    Since there are only two design variables, the problem can be solved graphically as shown …
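
With the constraints in the algebraic form (E6)–(E11), Example 1.1 can also be attacked by a brute-force grid search over the two design variables. This is a numerical sketch, not the book's graphical method; the grid resolution is an arbitrary choice:

```python
import math

# Brute-force grid search for Example 1.1: minimize f = 9.82*x1*x2 + 2*x1
# over the side-constraint box, keeping only points with g1..g6 <= 0.
# x1 = mean diameter d (cm), x2 = wall thickness t (cm).

def constraints(x1, x2):
    g1 = 2500.0 / (math.pi * x1 * x2) - 500.0                    # (E6) yield
    g2 = (2500.0 / (math.pi * x1 * x2)
          - math.pi ** 2 * 0.85e6 * (x1 ** 2 + x2 ** 2)
          / (8 * 250.0 ** 2))                                    # (E7) buckling
    return (g1, g2, -x1 + 2.0, x1 - 14.0, -x2 + 0.2, x2 - 0.8)   # (E8)-(E11)

best = (float("inf"), None)
for i in range(200, 1401):        # x1 = 2.00, 2.01, ..., 14.00
    x1 = i / 100.0
    for j in range(100, 401):     # x2 = 0.200, 0.202, ..., 0.800
        x2 = j / 500.0
        if all(g <= 0.0 for g in constraints(x1, x2)):
            f = 9.82 * x1 * x2 + 2.0 * x1
            if f < best[0]:
                best = (f, (x1, x2))

print(best)  # lands near the optimum f* = 26.53 at (5.45, 0.292) found later
```

A finer grid tightens the answer toward the graphical solution obtained below; the MATLAB `fmincon` solution at the end of the chapter gives the same optimum.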

  • Page - 30

    Figure 1.7: Graphical optimization of Example 1.1.
    x1: 2, 4, 6, 8, 10, 12, 14
    x2: 2.41, 0.716, 0.219, 0.0926, 0.0473, 0.0274, 0.0172
    These points are plotted as curve P2Q2, the feasible region is identified, and the infeasible region is shown by hatched lines as in Fig. 1.7. The plotting of side constraints is very simple since they represent straight lines. After plotting all the six constraints, the feasible region can be seen to be given by the bounded area ABCDEA. …

  • Page - 31

    Next, the contours of the objective function are to be plotted before finding the optimum point. For this, we plot the curves given by
    f(X) = 9.82 x1 x2 + 2 x1 = c = constant
    for a series of values of c. By giving different values to c, the contours of f can be plotted with the help of the following points.
    For 9.82 x1 x2 + 2 x1 = 50.0:
    x2: 0.1, 0.2, …
    x1: 16.77, 12.62, 10.…
    For 9.82 x1 x2 + 2 x1 = …
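
The contour points tabulated above can be regenerated directly, since 9.82 x1 x2 + 2 x1 = c gives x1 explicitly as x1 = c/(9.82 x2 + 2). A quick sketch (small last-digit rounding differences from the book's table are possible):

```python
# Points on a contour of the objective f(X) = 9.82*x1*x2 + 2*x1 = c are
# obtained by solving for x1:  x1 = c / (9.82*x2 + 2).

def contour_x1(c, x2):
    return c / (9.82 * x2 + 2.0)

for x2 in (0.1, 0.2, 0.3):
    print(round(contour_x1(50.0, x2), 2))  # 16.77, 12.61, 10.11
```

Repeating this for a series of c values traces out the family of contours; the optimum lies where the smallest-c contour still touches the feasible region ABCDEA.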

  • Page - 32

    1.5.2 Classification Based on the Nature of the Design Variables
    Based on the nature of design variables encountered, optimization problems can be classified into two broad categories. In the first category, the problem is to find values for a set of design parameters that make some prescribed function of these parameters minimum subject to certain constraints. For example, the problem of minimum-weight design of a prismatic beam shown in Fig. 1.8a …

  • Page - 33

    Here the design variables are functions of the length parameter t. This type of problem, where each design variable is a function of one or more parameters, is known as a trajectory or dynamic optimization problem [1.55].
    1.5.3 Classification Based on the Physical Structure of the Problem
    Depending on the physical structure of the problem, optimization problems can be classified as optimal control and nonoptimal control problems.
    Optimal Control Problem. An optimal …

  • Page - 34

    Figure 1.9: Control points in the path of the rocket.
    SOLUTION Let points (or control points) on the path at which the thrusts of the rocket are changed be numbered as 1, 2, 3, …, 13 (Fig. 1.9). Denoting xi as the thrust, vi the velocity, ai the acceleration, and mi the mass of the rocket at point i, Newton’s second law of motion can be applied as
    net force on the rocket = mass × acceleration
    This can be written as
    thrust − gravitational force − air …

  • Page - 35

    or
    xi − mi g − k1 vi = mi ai   (E1)
    where the mass mi can be expressed as
    mi = mi−1 − k2 s   (E2)
    and k1 and k2 are constants. Equation (E1) can be used to express the acceleration, ai, as
    ai = xi/mi − g − k1 vi/mi   (E3)
    If ti denotes the time taken by the rocket to travel from point i to point i + 1, the distance traveled between the points i and i + 1 can be expressed as
    s = vi ti + (1/2) ai ti²
    or
    (1/2) ti² (xi/mi − g − k1 vi/mi) + ti vi − s = 0   (E4)
    from which ti can be determined as
    ti = [−vi …
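
Equation (E4) is a quadratic in ti, so the travel time between control points follows from the usual root formula, ti = [−vi + √(vi² + 2 s ai)]/ai. A sketch with hypothetical values for the thrust, mass, velocity, and constants (none of these numbers are data from the book):

```python
import math

G = 9.81    # gravitational acceleration g (illustrative units)
K1 = 0.05   # drag constant k1 (hypothetical)
S = 50.0    # distance s between control points (hypothetical)

def accel(x_i, m_i, v_i):
    """a_i = x_i/m_i - g - k1*v_i/m_i, from Eq. (E3)."""
    return x_i / m_i - G - K1 * v_i / m_i

def travel_time(x_i, m_i, v_i):
    """Positive root of (1/2)*a_i*t^2 + v_i*t - s = 0, from Eq. (E4)."""
    a = accel(x_i, m_i, v_i)
    return (-v_i + math.sqrt(v_i ** 2 + 2.0 * S * a)) / a

t = travel_time(x_i=2000.0, m_i=100.0, v_i=10.0)
a = accel(2000.0, 100.0, 10.0)
# Verify the root: the distance covered in time t equals s
assert abs(10.0 * t + 0.5 * a * t ** 2 - S) < 1e-9
print(round(t, 3))
```

Summing these ti over all twelve segments gives the objective of the optimal control formulation stated next.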

  • Page - 36

    Thus the problem can be stated as an OC problem as
    Find X = {x1, x2, …, x12}T which minimizes
    f(X) = Σ(i=1 to 12) ti = Σ(i=1 to 12) [−vi + √(vi² + 2s(xi/mi − g − k1 vi/mi))] / (xi/mi − g − k1 vi/mi)
    subject to
    mi+1 = mi − k2 s,  i = 1, 2, …, 12
    vi+1 = √(vi² + 2s(xi/mi − g − k1 vi/mi)),  i = 1, 2, …, 12
    |xi| ≤ Fi,  i = 1, 2, …, 12
    v1 = v13 = 0
    1.5.4 Classification Based on …

  • Page - 37

    Figure 1.10: Step-cone pulley.
    SOLUTION The design vector can be taken as
    X = {d1, d2, d3, d4, w}T
    where di is the diameter of the ith step on the output pulley and w is the width of the belt and the steps. The objective function is the weight of the step-cone pulley system:
    f(X) = ρw(π/4)(d1² + d2² + d3² + d4² + d1′² + d2′² + d3′² + d4′²)
         = ρw(π/4){d1²[1 + (750/350)²] + d2²[1 + (450/350)²] + d3²[1 + (250/350)²] + d4²[1 + …

  • Page - 38

    To have the belt equally tight on each pair of opposite steps, the total length of the belt must be kept constant for all the output speeds. This can be ensured by satisfying the following equality constraints:
    C1 − C2 = 0   (E2)
    C1 − C3 = 0   (E3)
    C1 − C4 = 0   (E4)
    where Ci denotes the length of the belt needed to obtain output speed Ni (i = 1, 2, 3, 4) and is given by [1.116, 1.117]:
    Ci ≃ (π di/2)(1 + Ni/N) + (Ni/N − 1)² di²/(4a) + 2a
    where N is the speed of the input shaft and …

  • Page - 39

    Finally, the lower bounds on the design variables can be taken as
    w ≥ 0   (E8)
    di ≥ 0,  i = 1, 2, 3, 4   (E9)
    As the objective function, (E1), and most of the constraints, (E2) to (E9), are nonlinear functions of the design variables d1, d2, d3, d4, and w, this problem is a nonlinear programming problem.
    Geometric Programming Problem.
    Definition A function h(X) is called a posynomial if h can be expressed as the sum of power terms each of the form
    ci x1^ai1 x2^ai2 ··· xn^ain
    where ci …
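
The defining form ci x1^ai1 x2^ai2 ··· xn^ain, with every ci positive, is straightforward to capture in code. A minimal sketch of a posynomial evaluator (the example coefficients and exponents are made up for illustration):

```python
import math

# A posynomial h(X) = sum_i c_i * x1**a_i1 * ... * xn**a_in, with every
# coefficient c_i > 0 and every x_j > 0; the exponents a_ij may be any reals.

def posynomial(terms):
    """terms: list of (c_i, [a_i1, ..., a_in]) pairs, each c_i > 0."""
    assert all(c > 0 for c, _ in terms), "posynomial coefficients must be positive"
    def h(x):
        return sum(c * math.prod(xj ** a for xj, a in zip(x, exps))
                   for c, exps in terms)
    return h

# Hypothetical posynomial (not from the book): h(x1, x2) = 2*x1*x2**2 + 0.5/x1
h = posynomial([(2.0, [1, 2]), (0.5, [-1, 0])])
print(h([1.0, 1.0]))  # 2.0*1*1 + 0.5*1 = 2.5
```

Negative exponents are allowed, as in the 0.5/x1 term; what distinguishes a posynomial from a general polynomial is the positivity of the coefficients and of the variables, which is what geometric programming exploits.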

  • Page - 40

    Figure 1.11: Helical spring.
    where G is the shear modulus, F the compressive load on the spring, w the weight of the spring, ρ the weight density of the spring, and Ks the shear stress correction factor. Assume that the material is spring steel with G = 12 × 10⁶ psi and ρ = 0.3 lb/in³, and the shear stress correction factor is Ks ≈ 1.05.
    SOLUTION The design vector is given by
    X = {x1, x2, x3}T = {d, D, N}T
    and the objective function by
    f(X) …

  • Page - 41

    that is,
    g2(X) = 1250 π d³/(Ks F D) > 1   (E3)
    natural frequency = √(Gg) d/(2√(2ρ) π D² N) ≥ 100
    that is,
    g3(X) = √(Gg) d/(200 √(2ρ) π D² N) > 1   (E4)
    Since the equality sign is not included (along with the inequality symbol, >) in the constraints of Eqs. (E2) to (E4), the design variables are to be restricted to positive values as
    d > 0,  D > 0,  N > 0   (E5)
    By substituting the known data, F = weight of the milling machine/4 = 1250 lb, ρ = 0.3 lb/in³, G = 12 × 10⁶ psi, and Ks = 1.05, Eqs. …

  • Page - 42

    Example 1.5 A manufacturing firm produces two products, A and B, using two limited resources. The maximum amounts of resources 1 and 2 available per day are 1000 and 250 units, respectively. The production of 1 unit of product A requires 1 unit of resource 1 and 0.2 unit of resource 2, and the production of 1 unit of product B requires 0.5 unit of resource 1 and 0.5 unit of resource 2. The unit costs of resources 1 and 2 are given by the relations (0.375 − …

  • Page - 43

    As the objective function [Eq. (E5)] is quadratic and the constraints [Eqs. (E1) to (E4)] are linear, the problem is a quadratic programming problem.
    Linear Programming Problem. If the objective function and all the constraints in Eq. (1.1) are linear functions of the design variables, the mathematical programming problem is called a linear programming (LP) problem. A linear programming problem is often stated in the following standard form:
    Find X …

  • Page - 44

    … load (x1 + x2 + x3) that can be supported by the system. Assume that the weights of the beams 1, 2, and 3 are w1, w2, and w3, respectively, and the weights of the ropes are negligible.
    SOLUTION Assuming that the weights of the beams act through their respective middle points, the equations of equilibrium for vertical forces and moments for each of the three beams can be written as
    For beam 3:
    TE + TF = x3 + w3
    x3(3l) + w3(2l) − TF(4l) = 0
    For beam 2:
    TC + TD …

  • Page - 45

    TD ≤ W2   (E5)
    TE ≤ W3   (E6)
    TF ≤ W3   (E7)
    Finally, the nonnegativity requirement of the design variables can be expressed as
    x1 ≥ 0,  x2 ≥ 0,  x3 ≥ 0   (E8)
    Since all the equations of the problem, (E1) to (E8), are linear functions of x1, x2, and x3, the problem is a linear programming problem.
    1.5.5 Classification Based on the Permissible Values of the Design Variables
    Depending on the values permitted for the design variables, optimization problems can be classified as …

  • Page - 46

    and the constraints by
    4x1 + 8x2 + 2x3 + 5x4 + 3x5 ≤ 2000   (E2)
    9x1 + 7x2 + 4x3 + 3x4 + 8x5 ≤ 2500   (E3)
    xi ≥ 0 and integral,  i = 1, 2, …, 5   (E4)
    Since the xi are constrained to be integers, the problem is an integer programming problem.
    1.5.6 Classification Based on the Deterministic Nature of the Variables
    Based on the deterministic nature of the variables involved, optimization problems can be classified as deterministic and stochastic programming …

  • Page - 47

    Figure 1.13: Cross section of a reinforced concrete beam.
    and the constraint on the bending moment can be expressed as [1.120]
    P[MR − M ≥ 0] = P[As fs (d − 0.59 As fs/(fc b)) − M ≥ 0] ≥ 0.95   (E2)
    where P[···] indicates the probability of occurrence of the event [···]. To ensure that the beam remains underreinforced,† the area of steel is bounded by the balanced steel area As(b) as
    As ≤ As(b)   (E3)
    where
    As(b) = (0.542)(fc/fs) b d [600/(600 + fs)]
    Since the design variables cannot …

  • Page - 48

    Separable Programming Problem.
    Definition A function f(X) is said to be separable if it can be expressed as the sum of n single-variable functions, f1(x1), f2(x2), …, fn(xn), that is,
    f(X) = Σ(i=1 to n) fi(xi)   (1.11)
    A separable programming problem is one in which the objective function and the constraints are separable and can be expressed in standard form as
    Find X which minimizes f(X) = Σ(i=1 to n) fi(xi)   (1.12)
    subject to
    gj(X) = Σ(i=1 to n) gij(xi) ≤ bj,  j = 1, 2, …

  • Page - 49

    … cj xj/2. Thus the objective function (cost of ordering plus storing) can be expressed as
    f(X) = (a1 d1/x1 + q1 c1 x1/2) + (a2 d2/x2 + q2 c2 x2/2) + (a3 d3/x3 + q3 c3 x3/2)   (E1)
    where the design vector X is given by
    X = {x1, x2, x3}T   (E2)
    The constraint on the worth of inventory can be stated as
    c1 x1 + c2 x2 + c3 x3 ≤ 45,000   (E3)
    The limitation on the storage area is given by
    s1 x1 + s2 x2 + s3 x3 ≤ 90   (E4)
    Since the design variables cannot be negative, we have
    xj ≥ 0,  j = 1, 2, …

  • Page - 50

    Multiobjective Programming Problem. A multiobjective programming problem can be stated as follows:
    Find X which minimizes f1(X), f2(X), …, fk(X)
    subject to
    gj(X) ≤ 0,  j = 1, 2, …, m   (1.13)
    where f1, f2, …, fk denote the objective functions to be minimized simultaneously.
    Example 1.10 A uniform column of rectangular cross section is to be constructed for supporting a water tank of mass M (Fig. 1.14). It is required (1) to minimize the mass of the …

  • Page - 51

    where E is Young’s modulus and I is the area moment of inertia of the column, given by
    I = (1/12) b d³   (E3)
    The natural frequency of the water tank can be maximized by minimizing −ω. With the help of Eqs. (E1) and (E3), Eq. (E2) can be rewritten as
    ω = {E x1 x2³ / [4 l³ (M + (33/140) ρ l x1 x2)]}^(1/2)   (E4)
    The direct compressive stress (σc) in the column due to the weight of the water tank is given by
    σc = Mg/(b d) = Mg/(x1 x2)   (E5)
    and the buckling stress for a fixed-free column (σb) is given by …

  • Page - 52

    1.6 OPTIMIZATION TECHNIQUES
    The various techniques available for the solution of different types of optimization problems are given under the heading of mathematical programming techniques in Table 1.1. The classical methods of differential calculus can be used to find the unconstrained maxima and minima of a function of several variables. These methods assume that the function is differentiable twice with respect to the design variables and the derivatives …

  • Page - 53

    The most widely circulated journals that publish papers related to engineering optimization are Engineering Optimization, ASME Journal of Mechanical Design, AIAA Journal, ASCE Journal of Structural Engineering, Computers and Structures, International Journal for Numerical Methods in Engineering, Structural Optimization, Journal of Optimization Theory and Applications, Computers and Operations Research, Operations Research, Management Science, Evolutionary …

  • Page - 54

    Table 1.2 MATLAB Programs or Functions for Solving Optimization Problems
    Type of optimization problem — Standard form for solution by MATLAB — MATLAB program or function
    Function of one variable or scalar minimization — Find x to minimize f(x) with x1 < x < x2 — fminbnd
    Unconstrained minimization of function of several variables — Find x to minimize f(x) — fminunc or fminsearch
    Linear programming problem — Find x to minimize fᵀx subject to [A]x ≤ …

  • Page - 55

    SOLUTION
    Step 1: Write an M-file probofminobj.m for the objective function.
        function f = probofminobj(x)
        f = 9.82*x(1)*x(2) + 2*x(1);
    Step 2: Write an M-file conprobformin.m for the constraints.
        function [c, ceq] = conprobformin(x)
        % Nonlinear inequality constraints
        c = [2500/(pi*x(1)*x(2))-500;
             2500/(pi*x(1)*x(2))-(pi^2*(x(1)^2+x(2)^2))/0.5882;
             -x(1)+2;
             x(1)-14;
             -x(2)+0.2;
             x(2)-0.8];
        % Nonlinear equality constraints
        ceq = [];
    Step 3: Invoke the constrained optimization program (write …

  • Page - 56

    … than options.TolFun, and maximum constraint violation is less than options.TolCon.
    Active inequalities (to within options.TolCon = 1e-006): ineqnonlin 1, 2
    x = 5.4510  0.2920
    fval = 26.5310
    The values of the constraints at the optimum solution:
    c = -0.0000  -0.0000  -3.4510  -8.5490  -0.0920  -0.5080
    ceq = []
    REFERENCES AND BIBLIOGRAPHY
    Structural Optimization
    1.1 K. I. Majid, Optimum Design of Structures, Wiley, New York, 1974.
    1.2 D. G. Carmichael, Structural Modelling and Optimization, …

  • Page - 57

    1.13 Y. Jaluria, Design and Optimization of Thermal Systems, 2nd ed., CRC Press, Boca Raton, FL, 2007.
    Chemical and Metallurgical Process Optimization
    1.14 W. H. Ray and J. Szekely, Process Optimization with Applications to Metallurgy and Chemical Engineering, Wiley, New York, 1973.
    1.15 T. F. Edgar and D. M. Himmelblau, Optimization of Chemical Processes, McGraw-Hill, New York, 1988.
    1.16 R. Aris, The Optimal Design of Chemical Reactors, a Study in Dynamic …

  • Page - 58

    General Nonlinear Programming Theory
    1.33 S. L. S. Jacoby, J. S. Kowalik, and J. T. Pizzo, Iterative Methods for Nonlinear Optimization Problems, Prentice-Hall, Englewood Cliffs, NJ, 1972.
    1.34 L. C. W. Dixon, Nonlinear Optimization: Theory and Algorithms, Birkhauser, Boston, 1980.
    1.35 G. S. G. Beveridge and R. S. Schechter, Optimization: Theory and Practice, McGraw-Hill, New York, 1970.
    1.36 B. S. Gottfried and J. Weisman, Introduction to Optimization Theory, …

  • Page - 59

    Optimal Control
    1.55 D. E. Kirk, Optimal Control Theory: An Introduction, Prentice-Hall, Englewood Cliffs, NJ, 1970.
    1.56 A. P. Sage and C. C. White III, Optimum Systems Control, 2nd ed., Prentice-Hall, Englewood Cliffs, NJ, 1977.
    1.57 B. D. O. Anderson and J. B. Moore, Linear Optimal Control, Prentice-Hall, Englewood Cliffs, NJ, 1971.
    1.58 A. E. Bryson and Y. C. Ho, Applied Optimal Control: Optimization, Estimation, and Control, Blaisdell, Waltham, MA, 1969.
    Geometric …

  • Page - 60

    1.76 J. K. Karlof (Ed.), Integer Programming: Theory and Practice, CRC Press, Boca Raton, FL, 2006.
    1.77 L. A. Wolsey, Integer Programming, Wiley, New York, 1998.
    Dynamic Programming
    1.78 R. Bellman, Dynamic Programming, Princeton University Press, Princeton, NJ, 1957.
    1.79 R. Bellman and S. E. Dreyfus, Applied Dynamic Programming, Princeton University Press, Princeton, NJ, 1962.
    1.80 G. L. Nemhauser, Introduction to Dynamic Programming, Wiley, New York, 1966.
    1.81 L. Cooper and …

  • Page - 61

    Nontraditional Optimization Techniques
    1.98 M. Mitchell, An Introduction to Genetic Algorithms, MIT Press, Cambridge, MA, 1998.
    1.99 D. B. Fogel, Evolutionary Computation: Toward a New Philosophy of Machine Intelligence, 3rd ed., IEEE Press, Piscataway, NJ, 2006.
    1.100 K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms, Wiley, Chichester, England, 2001.
    1.101 C. A. Coello Coello, D. A. van Veldhuizen, and G. B. Lamont, Evolutionary Algorithms for Solving …

  • Page - 62

    1.121 N. H. Cook, Mechanics and Materials for Design, McGraw-Hill, New York, 1984.
    1.122 R. Ramarathnam and B. G. Desai, Optimization of polyphase induction motor design: a nonlinear programming approach, IEEE Transactions on Power Apparatus and Systems, Vol. PAS-90, No. 2, pp. 570–578, 1971.
    1.123 R. M. Stark and R. L. Nicholls, Mathematical Foundations for Design: Civil Engineering Systems, McGraw-Hill, New York, 1972.
    1.124 T. F. Coleman, M. A. Branch, and A. Grace, Optimization …

  • Page - 63

    1.6 State the linear programming problem in standard form.
    1.7 Define an OC problem and give an engineering example.
    1.8 What is the difference between linear and nonlinear programming problems?
    1.9 What is the difference between design variables and preassigned parameters?
    1.10 What is a design space?
    1.11 What is the difference between a constraint surface and a composite constraint surface?
    1.12 What is the difference between a bound point and a free point in the design …

  • Page - 64

    Figure 1.15: Two-bar truss.
    1.2 The two-bar truss shown in Fig. 1.15 is symmetric about the y axis. The nondimensional area of cross section of the members, A/Aref, and the nondimensional position of joints 1 and 2, x/h, are treated as the design variables x1 and x2, respectively, where Aref is the reference value of the area (A) and h is the height of the truss. The coordinates of joint 3 are held constant. The weight of the truss (f1) and the total displacement of joint 3 under the given load …


ρ = 0.283 lb/in³, P = 10,000 lb, σ0 = 20,000 psi, h = 100 in., Aref = 1 in², x1min = 0.1, x2min = 0.1, x1max = 2.0, and x2max = 2.5.

1.3 Ten jobs are to be performed in an automobile assembly line as noted in the following table:

    Job number | Time required to complete the job (min) | Jobs that must be completed before starting this job
    1          | 4 | None
    2          | 8 | None
    3          | 7 | None
    4          | 6 | None
    5          | 3 | 1, 3
    6          | 5 | 2, 3, 4
    7          | 1 | 5, 6
    8          | 9 | 6
    9          | 2 | 7, 8
    10         | 8 | 9

It is required to set up a suitable number of workstations, with one worker assigned to each …


to avoid severe jerks. Formulate the problem of finding the elevation of the track to minimize the construction costs as an OC problem. Assume the construction costs to be proportional to the amount of dirt added or removed. The elevation of the track is equal to a and b at x = 0 and x = L, respectively.

1.5 A manufacturer of a particular product produces x1 units in the first week and x2 units in the second week. The number of units produced in the first and second weeks must be at least …


Figure 1.18: Locations of circular disks in a rectangular plate. Figure 1.19: Cone clutch.

(b) What is the solution if the constraint R1 ≥ 2R2 is changed to R1 ≤ 2R2?
(c) Find the solution of the problem stated in part (a) by assuming a uniform wear condition between the cup and the cone. The torque transmitted (T) under the uniform wear condition is given by

    T = (π f p R2 / sin α)(R1² − R2²)

Note: Use graphical optimization for the solutions.


    Problems511.10A hollow circular shaft is to be designed for minimum weight to achieve a minimumreliability of 0.99 when subjected to a random torque of (T , σT ) = (106, 104) lb-in.,where Tis the mean torque and σT is the standard deviation of the torque, T. Thepermissible shear stress, τ0, of the material is given by (τ 0, στ0) = (50,000, 5000) psi,where τ 0 is the mean value and στ0 is the standard deviation of τ0. The maximuminduced stress (τ ) in the shaft is given byτ =T read more..


Figure 1.20: Shell-and-tube heat exchanger. Figure 1.21: Electrical bridge network.

1.13 The bridge network shown in Fig. 1.21 consists of five resistors Ri (i = 1, 2, . . . , 5). If Ii is the current flowing through the resistance Ri, the problem is to find the resistances R1, R2, . . . , R5 so that the total power dissipated by the network is a minimum. The current Ii can vary between the lower and upper limits Ii,min and Ii,max, and the voltage drop, Vi = Ri Ii, must be …


irrigation canal cannot supply more than 4 × 10⁵ m³ of water. Formulate the problem of finding the planting schedule that maximizes the expected returns of the farmer.

1.16 There are two different sites, each with four possible targets (or depths) for drilling an oil well. The preparation cost for each site and the cost of drilling at site i to target j are given below:

    Site i | Drilling cost to target j: 1, 2, 3, 4 | Preparation cost
    1      | 4, 1, 9, 7                            | 11
    2      | 7, 9, 5, 2                            | 13

Formulate the problem of determining the best site for each …


    54Introduction to OptimizationThe air gap is to be less than k1√x2 + 7.5 where k1 is a constant. The temperature ofthe external surface of the motor cannot exceedTabove the ambient temperature.Assuming that the heat can be dissipated only by radiation, formulate the problem formaximizing the power of the motor [1.59]. Hints:1.The heat generated due to current flow is given by k2x1x−12 x−14 x25 , where k2 is aconstant. The heat radiated from the external surface for a temperature read more..


    Problems55Figure 1.24Beam-column.Formulate the problem of minimizing the cost of the pipeline.1.19A beam-column of rectangular cross section is required to carry an axial load of 25 lband a transverse load of 10 lb, as shown in Fig. 1.24. It is to be designed to avoid thepossibility of yielding and buckling and for minimum weight. Formulate the optimizationproblem by assuming that the beam-column can bend only in the vertical (xy) plane.Assume the material to be steel with a specific weight of read more..


Figure 1.26: Processing plant layout (coordinates in ft).

1.21 Consider the problem of determining the economic lot sizes for four different items. Assume that the demand occurs at a constant rate over time. The stock for the ith item is replenished instantaneously upon request in lots of size Qi. The total storage space available is A, whereas each unit of item i occupies an area di. The objective is to find the values of Qi that optimize the per unit cost of holding …


Composition by weight:

    Alloy | Copper | Zinc | Lead | Tin
    A     | 80     | 10   | 6    | 4
    B     | 60     | 20   | 18   | 2
    C     | ≥ 75   | ≥ 15 | ≥ 16 | ≥ 3

If alloy B costs twice as much as alloy A, formulate the problem of determining the amounts of A and B to be mixed to produce alloy C at a minimum cost.

1.24 An oil refinery produces four grades of motor oil in three process plants. The refinery incurs a penalty for not meeting the demand of any particular grade of motor oil. The capacities of the plants, the production costs, the demands of the various grades …


    58Introduction to OptimizationFigure 1.27Scaffolding system.Figure 1.28Power screw.(h), and screw length (s)as design variables. Consider the following constraints in theformulation:1.The screw should be self-locking [1.117].2.The shear stress in the screw should not exceed the yield strength of the material inshear. Assume the shear strength in shear (according to distortion energy theory), tobe 0.577σy, where σy is the yield strength of the material.3.The bearing stress in the threads should read more..


    Problems59Figure 1.29Simply supported beam under loads.not exceed 0.5 in. The beam should not buckle either in the yzor the xzplane underthe axial load. Assuming the ends of the beam to be pin ended, formulate the opti-mization problem using xi, i = 1, 2, 3, 4 as design variables for the following data:Fy = 300 lb, P = 40,000 lb, l = 120 in., E = 30 × 106 psi, ρ = 0.284 lb/in3, lowerbound on x1 and x2 = 0.125 in, upper bound on x1, and x2 = 4 in.(b)Formulate the problem stated in part (a) read more..


    60Introduction to OptimizationFigure 1.31Crane hook carrying a load.where Sis the yield strength, ethe joint efficiency, pthe pressure, and Rthe radius.Formulate the design problem for minimum structural volume using xi, i = 1, 2, 3, 4, asdesign variables. Assume the following data: S = 30,000 psi and e = crane hook is to be designed to carry a load Fas shown in Fig. 1.31. The hook canbe modeled as a three-quarter circular ring with a rectangular cross section. The stressesinduced at read more..


Figure 1.32: Four-bar truss.

… cross section x2 and length √3 l. The truss is made of a lightweight material for which Young's modulus and the weight density are 30 × 10⁶ psi and 0.03333 lb/in³, respectively. The truss is subject to the loads P1 = 10,000 lb and P2 = 20,000 lb. The weight of the truss per unit value of l can be expressed as

    f = 3x1(1)(0.03333) + x2 √3 (0.03333) = 0.1 x1 + 0.05773 x2

The vertical deflection of joint A can be expressed as

    δA = 0.6/x1 + 0.3464/x2

and the …


    62Introduction to Optimizationx2x1p0 per unit lengthCross-sectionPLL2Figure 1.33A simply supported beam subjected to concentrated and distributed loads.(a)Formulate the problem as a mathematical programming problem assuming thatthe cross-sectional dimensions of the beam are restricted as x1 ≤ x2, 0.04m ≤ x1≤ 0.12m, and 0.06m ≤ x2 ≤ 0.20 m.(b)Find the solution of the problem formulated in part (a) using MATLAB.(c)Find the solution of the problem formulated in part (a) read more..


2 Classical Optimization Techniques

2.1 INTRODUCTION

The classical methods of optimization are useful in finding the optimum solution of continuous and differentiable functions. These methods are analytical and make use of the techniques of differential calculus in locating the optimum points. Since some of the practical problems involve objective functions that are not continuous and/or differentiable, the classical optimization techniques have limited scope in practical applications. However, a study …


Figure 2.1: Relative and global minima.

… exists as a definite number, which we want to prove to be zero. Since x* is a relative minimum, we have

    f(x*) ≤ f(x* + h)

for all values of h sufficiently close to zero. Hence

    [f(x* + h) − f(x*)]/h ≥ 0  if h > 0
    [f(x* + h) − f(x*)]/h ≤ 0  if h < 0

Thus Eq. (2.1) gives the limit as h tends to zero through positive values as

    f′(x*) ≥ 0    (2.2)

while it gives the limit as h tends to zero through negative values as

    f′(x*) ≤ 0    (2.3)

The only way to satisfy both Eqs. (2.2) and (2.3) is to have f′(x*) = 0.


Figure 2.2: Derivative undefined at x*.

3. The theorem does not say what happens if a minimum or maximum occurs at an endpoint of the interval of definition of the function. In this case

    lim(h→0) [f(x* + h) − f(x*)]/h

exists for positive values of h only or for negative values of h only, and hence the derivative is not defined at the endpoints.

4. The theorem does not say that the function necessarily will have a minimum or maximum at every point where the derivative …


Theorem 2.2 (Sufficient Condition). Let f′(x*) = f′′(x*) = ··· = f^(n−1)(x*) = 0, but f^(n)(x*) ≠ 0. Then f(x*) is (i) a minimum value of f(x) if f^(n)(x*) > 0 and n is even; (ii) a maximum value of f(x) if f^(n)(x*) < 0 and n is even; (iii) neither a maximum nor a minimum if n is odd.

Proof: Applying Taylor's theorem with remainder after n terms, we have

    f(x* + h) = f(x*) + h f′(x*) + (h²/2!) f′′(x*) + ··· + (h^(n−1)/(n−1)!) f^(n−1)(x*) + …
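Theorem 2.2 can be exercised mechanically for polynomials, where every derivative is available exactly. The sketch below (function names are my own) differentiates a coefficient list until the first nonvanishing derivative at x0 is found and applies the even/odd rule:

```python
def poly_deriv(c):
    # differentiate a polynomial given as coefficients c[k] of x**k
    return [k * c[k] for k in range(1, len(c))]

def poly_eval(c, x):
    return sum(ck * x ** k for k, ck in enumerate(c))

def classify_stationary_point(c, x0, tol=1e-12):
    """First-nonvanishing-derivative test of Theorem 2.2 (polynomials only)."""
    d = poly_deriv(c)
    n = 1
    while d:
        v = poly_eval(d, x0)
        if abs(v) > tol:
            if n == 1:
                return "not a stationary point"
            if n % 2:                      # n odd -> inflection
                return "neither"
            return "minimum" if v > 0 else "maximum"
        d = poly_deriv(d)
        n += 1
    return "constant"

# f(x) = x**4: f' = f'' = f''' = 0 at x = 0, f''''(0) = 24 > 0, n = 4 even
print(classify_stationary_point([0, 0, 0, 0, 1], 0.0))   # minimum
# f(x) = x**3: first nonzero derivative at 0 is f'''(0) = 6, n = 3 odd
print(classify_stationary_point([0, 0, 0, 1], 0.0))      # neither
```

The same classifier reproduces case (ii) on f(x) = −x², whose second derivative is −2 at the origin.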


Example 2.2. In a two-stage compressor, the working gas leaving the first stage of compression is cooled (by passing it through a heat exchanger) before it enters the second stage of compression to increase the efficiency [2.13]. The total work input to the compressor (W) for an ideal gas, for isentropic compression, is given by

    W = cp T1 [ (p2/p1)^((k−1)/k) + (p3/p2)^((k−1)/k) − 2 ]

where cp is the specific heat of the gas at constant pressure, k is the ratio of specific heat at …
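The stationary point of W with respect to the interstage pressure works out to p2 = (p1 p3)^(1/2), the standard result for this example. A quick numerical check (the values of k, p1, p3 and the normalization cp T1 = 1 are my own choices):

```python
import math

# W(p2) from Example 2.2, with cp*T1 = 1, k = 1.4, p1 = 1, p3 = 9 (assumed data)
k, p1, p3 = 1.4, 1.0, 9.0
e = (k - 1) / k

def W(p2):
    # total isentropic work input, normalized by cp*T1
    return (p2 / p1) ** e + (p3 / p2) ** e - 2

p2_star = math.sqrt(p1 * p3)     # dW/dp2 = 0 gives p2 = sqrt(p1*p3) = 3.0 here
print(p2_star, W(p2_star))

# nearby interstage pressures all require more work
for d in (0.1, -0.1, 1.0, -1.0):
    assert W(p2_star + d) > W(p2_star)
```

Differentiating W and setting dW/dp2 = 0 gives p2^(2e) = (p1 p3)^e, which is where the square root comes from.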


2.3 MULTIVARIABLE OPTIMIZATION WITH NO CONSTRAINTS

In this section we consider the necessary and sufficient conditions for the minimum or maximum of an unconstrained function of several variables. Before seeing these conditions, we consider the Taylor's series expansion of a multivariable function.

Definition: rth differential of f. If all partial derivatives of the function f through order r ≥ 1 exist and are continuous at a point X*, the polynomial

    d^r f(X*) = Σi Σj ··· Σk hi hj ··· hk ∂^r f(X*) / (∂xi ∂xj ··· ∂xk)    (r summations) …


where

    f|(1,0,−2) = e^(−2)

    df|(1,0,−2) = h1 ∂f/∂x1 + h2 ∂f/∂x2 + h3 ∂f/∂x3 |(1,0,−2)
                = [h1 e^(x3) + h2 (2 x2 x3) + h3 x2² + h3 x1 e^(x3)](1,0,−2)
                = h1 e^(−2) + h3 e^(−2)

    d²f|(1,0,−2) = Σ(i=1..3) Σ(j=1..3) hi hj ∂²f/∂xi ∂xj |(1,0,−2)
                 = h1² ∂²f/∂x1² + h2² ∂²f/∂x2² + h3² ∂²f/∂x3²
                   + 2 h1 h2 ∂²f/∂x1∂x2 + 2 h2 h3 ∂²f/∂x2∂x3 + 2 h1 h3 ∂²f/∂x1∂x3 |(1,0,−2)
                 = [h1²(0) + h2²(2x3) + …
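The closed-form first differential above can be cross-checked with finite differences. The sketch below (helper names are mine) evaluates df at (1, 0, −2) for an arbitrary small increment vector h, assuming the function of this example is f = x1 e^(x3) + x2² x3:

```python
import math

# f(x1, x2, x3) = x1*exp(x3) + x2**2 * x3, the function of the example above
def f(x):
    return x[0] * math.exp(x[2]) + x[1] ** 2 * x[2]

def grad(fun, x, eps=1e-6):
    # central-difference gradient
    g = []
    for i in range(len(x)):
        xp = list(x); xm = list(x)
        xp[i] += eps; xm[i] -= eps
        g.append((fun(xp) - fun(xm)) / (2 * eps))
    return g

X = (1.0, 0.0, -2.0)
h = (0.1, 0.2, 0.3)                       # an arbitrary increment vector
df = sum(hi * gi for hi, gi in zip(h, grad(f, X)))

# closed form from the text: df = (h1 + h3) * exp(-2)
print(df, (h[0] + h[2]) * math.exp(-2))
```

Both numbers agree to about six decimal places, which is the accuracy of the central-difference step used.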


that is,

    f(X* + h) − f(X*) = hk ∂f/∂xk (X*) + (1/2!) d²f(X* + θh),  0 < θ < 1

Since d²f(X* + θh) is of order hi², the terms of order h will dominate the higher-order terms for small h. Thus the sign of f(X* + h) − f(X*) is decided by the sign of hk ∂f(X*)/∂xk. Suppose that ∂f(X*)/∂xk > 0. Then the sign of f(X* + h) − f(X*) will be positive for hk > 0 and negative for hk < 0. This means that X* cannot be an …


will have the same sign as (∂²f/∂xi ∂xj)|X=X* for all sufficiently small h. Thus f(X* + h) − f(X*) will be positive, and hence X* will be a relative minimum, if

    Q = Σ(i=1..n) Σ(j=1..n) hi hj (∂²f/∂xi ∂xj)|X=X*    (2.11)

is positive. This quantity Q is a quadratic form and can be written in matrix form as

    Q = h^T J h|X=X*    (2.12)

where

    J|X=X* = [∂²f/∂xi ∂xj]|X=X*    (2.13)

is the matrix of second partial derivatives and is called the Hessian matrix of …
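Definiteness of the Hessian can be tested through the signs of its leading principal minors J1, J2, ..., Jn (Sylvester's criterion), which is the test the book applies in the examples that follow. A minimal sketch with my own helper names:

```python
def det(m):
    # Laplace expansion along the first row; fine for small Hessians
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(n))

def classify_hessian(J):
    """Sylvester's criterion on the leading principal minors J1, ..., Jn."""
    minors = [det([row[:k] for row in J[:k]]) for k in range(1, len(J) + 1)]
    if all(mk > 0 for mk in minors):
        return "positive definite (relative minimum)"
    if all((mk > 0) if (k % 2 == 0) else (mk < 0)
           for k, mk in enumerate(minors, 1)):
        return "negative definite (relative maximum)"
    return "indefinite or semidefinite"

# Hessian of f(x, y) = x**2 - y**2 at the origin (the saddle-point example)
print(classify_hessian([[2, 0], [0, -2]]))   # indefinite or semidefinite
# Hessian of f(x, y) = x**2 + y**2
print(classify_hessian([[2, 0], [0, 2]]))    # positive definite (relative minimum)
```

For a negative definite matrix the minors must alternate in sign starting with J1 < 0, which the second branch checks.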


Figure 2.4: Spring-cart system.

SOLUTION According to the principle of minimum potential energy, the system will be in equilibrium under the load P if the potential energy is a minimum. The potential energy of the system is given by

    potential energy (U) = strain energy of springs − work done by external forces
                         = [½ k2 x1² + ½ k3 (x2 − x1)² + ½ k1 x2²] − P x2

The necessary conditions for the minimum of U are

    ∂U/∂x1 = k2 x1 − k3 (x2 − x1) = 0    (E1)
    ∂U/∂x2 = k3 (x2 − x1) + k1 x2 − P = 0    (E2)


The determinants of the square submatrices of J are

    J1 = |k2 + k3| = k2 + k3 > 0

    J2 = | k2 + k3    −k3     |
         |  −k3     k1 + k3   | = k1 k2 + k1 k3 + k2 k3 > 0

since the spring constants are always positive. Thus the matrix J is positive definite and hence (x1*, x2*) corresponds to the minimum of potential energy.

2.3.1 Semidefinite Case

We now consider the problem of determining the sufficient conditions for the case when the Hessian matrix of the given function is …


As an example, consider the function f(x, y) = x² − y². For this function,

    ∂f/∂x = 2x  and  ∂f/∂y = −2y

These first derivatives are zero at x* = 0 and y* = 0. The Hessian matrix of f at (x*, y*) is given by

    J = | 2    0 |
        | 0   −2 |

Since this matrix is neither positive definite nor negative definite, the point (x* = 0, y* = 0) is a saddle point. The function is shown graphically in Fig. 2.5. It can be seen that f(x, y*) = f(x, 0) has a relative minimum and f …


These equations are satisfied at the points

    (0, 0), (0, −8/3), (−4/3, 0), and (−4/3, −8/3)

To find the nature of these extreme points, we have to use the sufficiency conditions. The second-order partial derivatives of f are given by

    ∂²f/∂x1² = 6x1 + 4,  ∂²f/∂x2² = 6x2 + 8,  ∂²f/∂x1∂x2 = 0

The Hessian matrix of f is given by

    J = | 6x1 + 4      0    |
        |    0     6x2 + 8  |

If J1 = |6x1 + 4| and J2 = det J, the values of J1 and J2 and the nature of the extreme …


    76Classical Optimization TechniquesHere mis less than or equal to n; otherwise (if m > n), the problem becomes overdefinedand, in general, there will be no solution. There are several methods available for thesolution of this problem. The methods of direct substitution, constrained variation, andLagrange multipliers are discussed in the following sections.2.4.1Solution by Direct SubstitutionFor a problem with nvariables and mequality constraints, it is theoretically possibleto solve read more..


The necessary conditions for the maximum of f give

    ∂f/∂x1 = 8 [ x2 (1 − x1² − x2²)^(1/2) − x1² x2 / (1 − x1² − x2²)^(1/2) ] = 0    (E5)
    ∂f/∂x2 = 8 [ x1 (1 − x1² − x2²)^(1/2) − x1 x2² / (1 − x1² − x2²)^(1/2) ] = 0    (E6)

Equations (E5) and (E6) can be simplified to obtain

    1 − 2x1² − x2² = 0
    1 − x1² − 2x2² = 0

from which it follows that x1* = x2* = 1/√3 and hence x3* = 1/√3. This solution gives the maximum volume of the box as

    fmax = 8/(3√3)

To find whether …
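Since the constraint has been substituted out, f = 8 x1 x2 (1 − x1² − x2²)^(1/2) is an unconstrained function of two variables and the stationary value can be sanity-checked by sampling. A small sketch (names are mine):

```python
import math, random

def volume(x1, x2):
    # box volume after direct substitution of x3 = sqrt(1 - x1**2 - x2**2)
    s = 1.0 - x1 * x1 - x2 * x2
    return 8.0 * x1 * x2 * math.sqrt(s) if s > 0 else float("-inf")

x_star = 1 / math.sqrt(3)
f_star = volume(x_star, x_star)          # should equal 8 / (3*sqrt(3))
print(f_star, 8 / (3 * math.sqrt(3)))

random.seed(0)
# random points in the feasible square never beat the stationary point
assert all(volume(random.random(), random.random()) <= f_star + 1e-12
           for _ in range(10000))
```

The sampling check is not a proof, but it agrees with the sufficiency analysis that follows in the text.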


    78Classical Optimization Techniqueswe indicate its salient features through the following simple problem with n= 2 andm= 1:Minimize f (x1, x2)(2.17)subject tog(x1, x2)= 0(2.18)A necessary condition for fto have a minimum at some point (x∗1 , x∗2 ) is that the totalderivative of f (x1, x2) with respect to x1 must be zero at (x∗1 , x∗2 ). By setting the totaldifferential of f (x1, x2) equal to zero, we obtaindf=∂f∂x1dx1+∂f∂x2dx2= 0(2.19)Since g(x∗1 , x∗2 )= 0 at the minimum read more..


    2.4Multivariable Optimization with Equality Constraints79lie on the constraint curve, g(x1, x2)= 0. Thus any set of variations (dx1, dx2) thatdoes not satisfy Eq. (2.22) leads to points such as D, which do not satisfy constraintEq. (2.18).Assuming that ∂g/∂x2= 0, Eq. (2.22) can be rewritten asdx2= −∂g/∂x1∂g/∂x2(x∗1 , x∗2 )dx1(2.23)This relation indicates that once the variation in x1(dx1) is chosen arbitrarily, thevariation in x2 (dx2) is decided automatically in order to have read more..


    80Classical Optimization TechniquesFigure 2.7Cross section of the log.This problem has two variables and one constraint; hence Eq. (2.25) can be appliedfor finding the optimum solution. Sincef= kx−1y−2(E1)g= x2 + y2 − a2(E2)we have∂f∂x= −kx−2y−2∂f∂y= −2kx−1y−3∂g∂x= 2x∂g∂y= 2yEquation (2.25) gives−kx−2y−2(2y) + 2kx−1y−3(2x) = 0at(x∗, y∗)that is,y∗ =√2x∗(E3)Thus the beam of maximum tensile stress carrying capacity has a depth of √2 timesits read more..


    2.4Multivariable Optimization with Equality Constraints81Necessary Conditions for a General Problem.The procedure indicated above canbe generalized to the case of a problem in nvariables with mconstraints. In this case,each constraint equation gj (X) = 0, j= 1, 2, . . . , m, gives rise to a linear equation inthe variations dxi, i= 1, 2, . . . , n. Thus there will be in all mlinear equations in nvariations. Hence any mvariations can be expressed in terms of the remaining n− mvariations. These read more..


    82Classical Optimization TechniquesIn terms of the notation of our equations, let us take the independent variables asx3= y3andx4= y4so thatx1= y1andx2= y2Then the Jacobian of Eq. (2.27) becomesJg1, g2x1, x2=∂g1∂y1∂g1∂y2∂g2∂y1∂g2∂y2=1 21 2= 0and hence the necessary conditions of Eqs. (2.26) cannot be applied.Next, let us take the independent variables as x3= y2 and x4= y4 so that x1= y1and x2= y3. Then the Jacobian of Eq. (2.27) becomesJg1, g2x1, read more..


    2.4Multivariable Optimization with Equality Constraints83=y4 y1 y3513615= y4(5− 3) − y1(25− 18) + y3(5− 6)= 2y4 − 7y1 − y3= 0(E5)Equations (E4) and (E5) give the necessary conditions for the minimum or the maxi-mum of fasy1=12 y2y3= 2y4 − 7y1 = 2y4 −72 y2(E6)When Eqs. (E6) are substituted, Eqs. (E2) and (E3) take the form−8y2 + 11y4 = 10−15y2 + 16y4 = 15from which the desired optimum solution can be obtained asy∗1= −574y∗2= −537y∗3=15574y∗4=3037Sufficiency read more..


    84Classical Optimization TechniquesAs an example, consider the problem of minimizingf (X)= f (x1, x2, x3)subject to the only constraintg1(X)= x21+ x22+ x23− 8 = 0Since n= 3 and m= 1 in this problem, one can think of any of the mvariables,say x1, to be dependent and the remaining n− mvariables, namely x2 and x3, to beindependent. Here the constrained partial derivative (∂f/∂x2)g, for example, meansthe rate of change of fwith respect to x2 (holding the other independent variable read more..


    2.4Multivariable Optimization with Equality Constraints85difficult task and may be prohibitive for problems with more than three constraints.Thus the method of constrained variation, although it appears to be simple in theory, isvery difficult to apply since the necessary conditions themselves involve evaluation ofdeterminants of order m+ 1. This is the reason that the method of Lagrange multipliers,discussed in the following section, is more commonly used to solve a multivariableoptimization read more..


    86Classical Optimization Techniqueschoose to express dx1 in terms of dx2, we would have obtained the requirement that(∂g/∂x1)|(x∗1,x∗2) be nonzero to define λ. Thus the derivation of the necessary conditionsby the method of Lagrange multipliers requires that at least one of the partial derivativesof g(x1, x2) be nonzero at an extreme point.The necessary conditions given by Eqs. (2.34) to (2.36) are more commonly gen-erated by constructing a function L, known as the Lagrange function, read more..


Necessary Conditions for a General Problem. The equations derived above can be extended to the case of a general problem with n variables and m equality constraints:

    Minimize f(X)    (2.39)
    subject to gj(X) = 0,  j = 1, 2, . . . , m

The Lagrange function L in this case is defined by introducing one Lagrange multiplier λj for each constraint gj(X) as

    L(x1, x2, . . . , xn, λ1, λ2, . . . , λm) = f(X) + λ1 g1(X) + λ2 g2(X) + ··· + λm gm(X)    (2.40)

By …
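Setting all partial derivatives of L to zero gives n + m equations in the n + m unknowns (X, λ). For a quadratic objective with linear constraints this stationarity system is linear and can be solved directly. A minimal sketch on a toy problem of my own choosing (the solver is a plain Gaussian elimination, names are mine):

```python
def solve_linear(A, b):
    # Gaussian elimination with partial pivoting, for small dense systems
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            k = M[r][c] / M[c][c]
            M[r] = [mr - k * mc for mr, mc in zip(M[r], M[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Minimize f = x1**2 + x2**2 subject to g = x1 + x2 - 1 = 0.
# Stationarity of L = f + lam*g gives the linear system:
#   2*x1 + lam = 0,  2*x2 + lam = 0,  x1 + x2 = 1
A = [[2, 0, 1],
     [0, 2, 1],
     [1, 1, 0]]
b = [0, 0, 1]
x1, x2, lam = solve_linear(A, b)
print(x1, x2, lam)   # 0.5 0.5 -1.0
```

The solution x1 = x2 = 1/2 with λ = −1 satisfies all three stationarity equations, which is exactly the structure of Eqs. (2.39) and (2.40).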


    88Classical Optimization TechniquesProof: The proof is similar to that of Theorem 2.4.Notes:1.IfQ=ni=1nj=1∂2L∂xi ∂xj(X∗, λ∗)dxi dxjis negative for all choices of the admissible variations dxi, X∗ will be a con-strained maximum of f (X).2.It has been shown by Hancock [2.1] that a necessary condition for the quadraticform Q, defined by Eq. (2.43), to be positive (negative) definite for all admissi-ble variations dXis that each root of the polynomial zi, defined by the read more..


subject to

    2π x1² + 2π x1 x2 = A0 = 24π

The Lagrange function is

    L(x1, x2, λ) = π x1² x2 + λ(2π x1² + 2π x1 x2 − A0)

and the necessary conditions for the maximum of f give

    ∂L/∂x1 = 2π x1 x2 + 4π λ x1 + 2π λ x2 = 0    (E1)
    ∂L/∂x2 = π x1² + 2π λ x1 = 0    (E2)
    ∂L/∂λ = 2π x1² + 2π x1 x2 − A0 = 0    (E3)

Equations (E1) and (E2) lead to

    λ = −x1 x2 / (2x1 + x2) = −x1/2

that is,

    x1 = ½ x2    (E4)

and Eqs. (E3) and (E4) give the desired solution as

    x1* = (A0/6π)^(1/2), …
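For A0 = 24π the formula gives x1* = (24π/6π)^(1/2) = 2 and, from (E4), x2* = 4. A quick check (eliminating x2 through the area constraint, helper names are mine):

```python
import math

A0 = 24 * math.pi

def volume(x1):
    # x2 follows from the area constraint 2*pi*x1**2 + 2*pi*x1*x2 = A0
    x2 = (A0 - 2 * math.pi * x1 ** 2) / (2 * math.pi * x1)
    return math.pi * x1 ** 2 * x2

x1_star = math.sqrt(A0 / (6 * math.pi))     # = 2.0 for A0 = 24*pi
print(x1_star, volume(x1_star))             # 2.0 and 16*pi

# the stationary design beats nearby feasible designs
for d in (0.05, -0.05, 0.2, -0.2):
    assert volume(x1_star + d) < volume(x1_star)
```

After substitution the volume reduces to (A0 x1 − 2π x1³)/2, whose derivative vanishes exactly at x1 = (A0/6π)^(1/2), consistent with the Lagrange solution.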


    90Classical Optimization TechniquesL22=∂2L∂x22 (X∗,λ∗)= 0g11=∂g1∂x1 (X∗,λ∗) = 4π x∗1+ 2π x∗2= 16πg12=∂g1∂x2 (X∗,λ∗) = 2π x∗1= 4πThus Eq. (2.44) becomes4π − z2π16π2π0 − z4π16π4π0= 0that is,272π2z + 192π3 = 0This givesz= −1217 πSince the value of zis negative, the point (x∗1 , x∗2 )corresponds to the maximum of f.Interpretation of the Lagrange Multipliers.To find the physical meaning of theLagrange multipliers, consider the following read more..


or

    db = dg̃ = Σ(i=1..n) (∂g̃/∂xi) dxi    (2.51)

Equation (2.49) can be rewritten as

    ∂f/∂xi + λ ∂g/∂xi = ∂f/∂xi − λ ∂g̃/∂xi = 0    (2.52)

or

    ∂g̃/∂xi = (∂f/∂xi)/λ,  i = 1, 2, . . . , n    (2.53)

Substituting Eq. (2.53) into Eq. (2.51), we obtain

    db = Σ(i=1..n) (1/λ)(∂f/∂xi) dxi = df/λ    (2.54)

since

    df = Σ(i=1..n) (∂f/∂xi) dxi    (2.55)

Equation (2.54) gives

    λ = df/db  or  λ* = df*/db    (2.56)

or

    df* = λ* db    (2.57)

Thus λ* denotes the sensitivity (or rate of change) of f with respect …


    92Classical Optimization Techniques3. λ∗ = 0. In this case, any incremental change in bhas absolutely no effect on theoptimum value of fand hence the constraint will not be binding. This meansthat the optimization of fsubject to g= 0 leads to the same optimum pointX∗ as with the unconstrained optimization of f.In economics and operations research, Lagrange multipliers are known as shadow pricesof the constraints since they indicate the changes in optimal value of the objectivefunction per read more..


One procedure for finding the effect on f* of changes in the value of b (the right-hand side of the constraint) would be to solve the problem all over with the new value of b. Another procedure would involve the use of the value of λ*. When the original constraint is tightened by 1 unit (i.e., db = −1), Eq. (2.57) gives

    df* = λ* db = 2(−1) = −2

Thus the new value of f* is f* + df* = 14.07. On the other hand, if we relax …
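The sensitivity relation λ* = df*/db of Eq. (2.56) can be verified numerically on a toy problem of my own choosing, minimize f = x1² + x2² subject to b − x1 − x2 = 0, for which the Lagrange conditions give x1 = x2 = b/2 and λ* = b:

```python
# closed-form optimum of: minimize x1**2 + x2**2 subject to b - x1 - x2 = 0
def f_star(b):
    x = b / 2.0          # optimal point from the Lagrange conditions
    return 2 * x * x     # = b**2 / 2

b = 3.0
lam_star = b             # lambda* = b for this problem (L = f + lam*(b - x1 - x2))
db = 1e-6
sensitivity = (f_star(b + db) - f_star(b)) / db
print(lam_star, sensitivity)   # both close to 3.0
```

The finite-difference slope of f* with respect to b matches the multiplier, which is the "shadow price" interpretation discussed below.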


(necessary conditions):

    ∂L/∂xi (X, Y, λ) = ∂f/∂xi (X) + Σ(j=1..m) λj ∂gj/∂xi (X) = 0,  i = 1, 2, . . . , n    (2.62)
    ∂L/∂λj (X, Y, λ) = Gj(X, Y) = gj(X) + yj² = 0,  j = 1, 2, . . . , m    (2.63)
    ∂L/∂yj (X, Y, λ) = 2 λj yj = 0,  j = 1, 2, . . . , m    (2.64)

It can be seen that Eqs. (2.62) to (2.64) represent (n + 2m) equations in the (n + 2m) unknowns X, λ, and Y. The solution of Eqs. (2.62) to (2.64) thus gives the optimum solution vector X*, the Lagrange multiplier vector λ*, and the slack variable vector Y*.


    2.5Multivariable Optimization with Inequality Constraints95where ∇f and ∇gj are the gradients of the objective function and the jth constraint,respectively:∇f =∂f /∂x1∂f /∂x2...∂f /∂xnand∇gj =∂gj /∂x1∂gj /∂x2...∂gj /∂xnEquation (2.69) indicates that the negative of the gradient of the objective function canbe expressed as a linear combination of the read more..


    96Classical Optimization TechniquesFigure 2.8Feasible direction S.Example 2.12Consider the following optimization problem:Minimizef (x1, x2)= x21+ x22subject tox1+ 2x2 ≤ 151 ≤ xi≤ 10; i= 1, 2Derive the conditions to be satisfied at the point X1= {1, 7}T by the search directionS= {s1, s2}T if it is a (a) usable direction, and (b) feasible direction.SOLUTIONThe objective function and the constraints can be stated asf (x1, x2)= x21+ x22g1(X)= x1+ 2x2 ≤ 15 read more..


    2.5Multivariable Optimization with Inequality Constraints97g2(X)= 1 − x1≤ 0g3(X)= 1 − x2≤ 0g4(X)= x1− 10 ≤ 0g5(X)= x2− 10 ≤ 0At the given point X1= {1, 7}T, all the constraints can be seen to be satisfied with g1and g2 being active. The gradients of the objective and active constraint functions atpoint X1= {1, 7}T are given by∇f =∂f∂x1∂f∂x2X1= 2x12x2X1=214∇g1 read more..


2.5.1 Kuhn-Tucker Conditions

As shown above, the conditions to be satisfied at a constrained minimum point X* of the problem stated in Eq. (2.58) can be expressed as

    ∂f/∂xi + Σ(j∈J1) λj ∂gj/∂xi = 0,  i = 1, 2, . . . , n    (2.73)
    λj > 0,  j ∈ J1    (2.74)

These are called Kuhn-Tucker conditions after the mathematicians who derived them as the necessary conditions to be satisfied at a relative minimum of f(X) [2.8]. These conditions are, in general, not sufficient …
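Equation (2.73) can be checked numerically at a candidate point. The sketch below uses the objective of Example 2.12 at the point (1, 1), where the two lower bounds g2 = 1 − x1 ≤ 0 and g3 = 1 − x2 ≤ 0 are active and the multipliers work out to λ2 = λ3 = 2 (my own calculation; the helper names are mine):

```python
def num_grad(fun, x, eps=1e-6):
    # central-difference gradient
    g = []
    for i in range(len(x)):
        xp = list(x); xm = list(x)
        xp[i] += eps; xm[i] -= eps
        g.append((fun(xp) - fun(xm)) / (2 * eps))
    return g

def kkt_residual(f, gs, x, lams):
    """Max-norm of grad f + sum(lam_j * grad g_j) over the active set."""
    r = num_grad(f, x)
    for gj, lj in zip(gs, lams):
        gg = num_grad(gj, x)
        r = [a + lj * b for a, b in zip(r, gg)]
    return max(abs(v) for v in r)

f = lambda x: x[0] ** 2 + x[1] ** 2        # objective of Example 2.12
g2 = lambda x: 1 - x[0]                    # active bound constraints at (1, 1)
g3 = lambda x: 1 - x[1]

r = kkt_residual(f, [g2, g3], [1.0, 1.0], [2.0, 2.0])
print("KKT residual:", r)
```

A residual near zero together with positive multipliers is exactly the statement of Eqs. (2.73) and (2.74) for this point.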


    2.5Multivariable Optimization with Inequality Constraints99gj≤ 0,j= 1, 2, . . . , mhk= 0,k= 1, 2, . . . , pλj≥ 0,j= 1, 2, . . . , m(2.77)where λj and βk denote the Lagrange multipliers associated with the constraintsgj≤ 0 and hk= 0, respectively. Although we found qualitatively that theKuhn–Tucker conditions represent the necessary conditions of optimality, thefollowing theorem gives the precise conditions of optimality.Theorem 2.7Let X∗ be a feasible solution to the problem of Eqs. read more..


    100Classical Optimization TechniquesFigure 2.9Feasible region and contours of the objective function.It is clear that ∇g1(X∗) and ∇g2(X∗) are not linearly independent. Hence the constraintqualification is not satisfied at the optimum point. Noting that∇f (X∗) =2(x1 − 1)2x2(0, 0) = −20the Kuhn–Tucker conditions can be written, using Eqs. (2.73) and (2.74), as−2 + λ1(0)+ λ2(0)= 0(E4)0 + λ1(−2) + λ2(2)= 0(E5)λ1 >0(E6)λ2 >0(E7)Since Eq. (E4) is not satisfied and read more..


Example 2.14. A manufacturing firm producing small refrigerators has entered into a contract to supply 50 refrigerators at the end of the first month, 50 at the end of the second month, and 50 at the end of the third. The cost of producing x refrigerators in any month is given by $(x² + 1000). The firm can produce more refrigerators in any month and carry them to a subsequent month. However, it costs $20 per unit for any refrigerator carried over from one month to the next.


that is,

    x1 − 50 ≥ 0    (E7)
    x1 + x2 − 100 ≥ 0    (E8)
    x1 + x2 + x3 − 150 ≥ 0    (E9)

    λj ≤ 0,  j = 1, 2, 3

that is,

    λ1 ≤ 0    (E10)
    λ2 ≤ 0    (E11)
    λ3 ≤ 0    (E12)

The solution of Eqs. (E1) to (E12) can be found in several ways. We proceed to solve these equations by first noting that either λ1 = 0 or x1 = 50 according to Eq. (E4). Using this information, we investigate the following cases to identify the optimum solution of the problem.

Case 1: λ1 = 0. Equations (E1) to (E3) give

    x3 = …


    2.5Multivariable Optimization with Inequality Constraints103This solution can be seen to satisfy Eqs. (E10) to (E12) but violate Eqs. (E7)and (E9).3. λ2= 0, λ3= 0. Equations (E13) givex1= −20, x2= −10, x3= 0This solution satisfies Eqs. (E10) to (E12) but violates the constraints, Eqs. (E7)to (E9).4.−130 − λ2− λ3= 0, −180 − λ2−32 λ3= 0. The solution of these equationsand Eqs. (E13) yieldsλ2= −30, λ3= −100, x1= 45,x2= 55,x3= 50This solution satisfies Eqs. (E10) to read more..


    104Classical Optimization Techniques4. x1+ x2− 100 = 0, x1+ x2+ x3− 150 = 0: The solution of these equationsyieldsx1= 50,x2= 50,x3= 50This solution can be seen to satisfy all the constraint Eqs. (E7) to (E9). Thevalues of λ1, λ2, and λ3 corresponding to this solution can be obtained fromEqs. (E15) asλ1= −20, λ2= −20, λ3= −100Since these values of λi satisfy the requirements [Eqs. (E10) to (E12)], thissolution can be identified as the optimum solution. Thusx∗1= 50,x∗2= read more..


REFERENCES AND BIBLIOGRAPHY

2.1 H. Hancock, Theory of Maxima and Minima, Dover, New York, 1960.
2.2 M. E. Levenson, Maxima and Minima, Macmillan, New York, 1967.
2.3 G. B. Thomas, Jr., Calculus and Analytic Geometry, Addison-Wesley, Reading, MA, 1967.
2.4 A. E. Richmond, Calculus for Electronics, McGraw-Hill, New York, 1972.
2.5 B. Kolman and W. F. Trench, Elementary Multivariable Calculus, Academic Press, New York, 1971.
2.6 G. S. G. Beveridge and R. S. Schechter, Optimization: Theory and Practice, McGraw-Hill, New York, 1970.


2.12 What is the significance of Lagrange multipliers?
2.13 Convert an inequality constrained problem into an equivalent unconstrained problem.
2.14 State the Kuhn-Tucker conditions.
2.15 What is an active constraint?
2.16 Define a usable feasible direction.
2.17 What is a convex programming problem? What is its significance?
2.18 Answer whether each of the following quadratic forms is positive definite, negative definite, or neither:
(a) f = x1² − x2²
(b) f = …


    Problems107Figure 2.10Electric generator with load.2.3Find the maxima and minima, if any, of the functionf (x)= 4x3 − 18x2 + 27x − 72.4The efficiency of a screw jack is given byη=tan αtan(α + φ)where αis the lead angle and φis a constant. Prove that the efficiency of the screw jackwill be maximum when α= 45◦ − φ/2 with ηmax= (1 − sin φ)/(1 + sin φ).2.5Find the minimum of the functionf (x)= 10x6 − 48x5 + 15x4 + 200x3 − 120x2 − 480x + 1002.6Find the angular orientation read more..


    108Classical Optimization Techniques2.11If a crank is at an angle θfrom dead center with θ= ωt, where ωis the angular velocityand tis time, the distance of the piston from the end of its stroke (x)is given byx= r(1 − cos θ )+r24l(1 − cos 2θ )where ris the length of the crank and lis the length of the connecting rod. For r= 1and l= 5, find (a) the angular position of the crank at which the piston moves withmaximum velocity, and (b) the distance of the piston from the end of its stroke read more..


    Problems1092.20Determine whether the following matrix is positive definite:[A] =−14 3 03 −1 404 22.21The potential energy of the two-bar truss shown in Fig. 2.11 is given byf (x1, x2)=EAs12s2x21+EAshs2x22− P x1 cos θ− P x2 sin θwhere Eis Young’s modulus, Athe cross-sectional area of each member, lthe span ofthe truss, sthe length of each member, hthe height of the truss, Pthe applied load,θthe angle at which the load is applied, and x1 and x2 are, respectively, the read more..


    110Classical Optimization Techniques2.24Find the second-order Taylor’s series approximation of the functionf (x1, x2)= (x1− 1)2ex2 + x1at the points (a) (0,0) and (b) (1,1).2.25Find the third-order Taylor’s series approximation of the functionf (x1, x2, x3)= x22 x3+ x1ex3at point (1, 0, −2).2.26The volume of sales (f ) of a product is found to be a function of the number of newspaperadvertisements (x) and the number of minutes of television time (y) asf= 12xy − x2 − 3y2Each newspaper read more..


    Problems1112.33Find the admissible and constrained variations at the point X= {0,4}T for the followingproblem:Minimize f= x21+ (x2− 1)2subject to−2x21+ x2= 42.34Find the diameter of an open cylindrical can that will have the maximum volume for agiven surface area, S.2.35A rectangular beam is to be cut from a circular log of radius r. Find the cross-sectionaldimensions of the beam to (a) maximize the cross-sectional area of the beam, and (b)maximize the perimeter of the beam section.2.36Find read more..


    112Classical Optimization Techniques2.42Find the dimensions of an open rectangular box of volume Vfor which the amount ofmaterial required for manufacture (surface area) is a minimum.2.43A rectangular sheet of metal with sides aand bhas four equal square portions (of side d)removed at the corners, and the sides are then turned up so as to form an open rectangularbox. Find the depth of the box that maximizes the volume.2.44Show that the cone of the greatest volume that can be inscribed in a given read more..


    Problems1132.52A department store plans to construct a one-story building with a rectangular planform.The building is required to have a floor area of 22,500 ft2 and a height of 18 ft. It isproposed to use brick walls on three sides and a glass wall on the fourth side. Find thedimensions of the building to minimize the cost of construction of the walls and the roofassuming that the glass wall costs twice as much as that of the brick wall and the roofcosts three times as much as that of the read more..


Figure 2.13  Brinell hardness test: a spherical (ball) indenter of diameter D under load P produces an indentation (crater) of diameter d and depth h.

2.60  A manufacturer produces small refrigerators at a cost of $60 per unit and sells them to a retailer in a lot consisting of a minimum of 100 units. The selling price is set at $80 per unit if the retailer buys 100 units at a time. If the retailer buys more than 100 units at a time, the manufacturer agrees to reduce the price of all


2.63  Consider the following optimization problem:

    Maximize f = −x1 − x2
    subject to
    x1² + x2 ≥ 2
    4 ≤ x1 + 3x2
    x1 + x2⁴ ≤ 30

(a) Find whether the design vector X = {1, 1}ᵀ satisfies the Kuhn–Tucker conditions for a constrained optimum.
(b) What are the values of the Lagrange multipliers at the given design vector?

2.64  Consider the following problem:

    Maximize f(X) = x1² + x2² + x3²
    subject to
    x1 + x2 + x3 ≥ 5
    2 − x2 x3 ≤ 0
    x1 ≥ 0,  x2 ≥ 0,  x3 ≥ 2

Determine whether the Kuhn–Tucker conditions are


    116Classical Optimization TechniquesDetermine whether the following search direction is usable, feasible, or both at the designvector X=51 :S=01,S= −11,S=10,S= −122.67Consider the following problem:Minimize f= x31− 6x21+ 11x1 + x3subject tox21+ x22− x23≤ 04 − x21− x22− x23≤ 0xi≥ 0,i= 1, 2, 3,x3≤ 5Determine whether the following vector represents an optimum solution:X= 0√2√22.68Minimize f= x21+ 2x22+ 3x23subject to the constraintsg1= x1− read more..


2.71  Consider the following problem:

    Maximize f(x) = (x − 1)²
    subject to
    −2 ≤ x ≤ 4

Determine whether the constraint qualification and Kuhn–Tucker conditions are satisfied at the optimum point.

2.72  Consider the following problem:

    Minimize f = (x1 − 1)² + (x2 − 1)²
    subject to
    2x2 − (1 − x1)³ ≤ 0
    x1 ≥ 0
    x2 ≥ 0

Determine whether the constraint qualification and the Kuhn–Tucker conditions are satisfied at the optimum point.

2.73  Verify whether the following problem is


    118Classical Optimization Techniquessubject tox22− x1≥ 0x21− x2≥ 0−12≤ x1≤12 ,x2≤ 1X1=00,X2=0−1,X3= −12142.76Consider the following optimization problem:Minimize f= −x21− x22+ x1x2+ 7x1 + 4x2subject to2x1 + 3x2 ≤ 24−5x1 + 12x2 ≤ 24x1≥ 0,x2≥ 0,x2≤ 4Find a usable feasible direction at each of the following design vectors:X1=11,X2=64 read more..


    3Linear Programming I:Simplex Method3.1INTRODUCTIONLinear programmingis an optimization method applicable for the solution of prob-lems in which the objective function and the constraints appear as linear functionsof the decision variables. The constraint equations in a linear programming problemmay be in the form of equalities or inequalities. The linear programming type of opti-mization problem was first recognized in the 1930s by economists while developingmethods for the optimal allocation read more..


    120Linear Programming I: Simplex Methodtheory, decomposition method, postoptimality analysis, and Karmarkar’s method, areconsidered in Chapter 4.3.2APPLICATIONS OF LINEAR PROGRAMMINGThe number of applications of linear programming has been so large that it is notpossible to describe all of them here. Only the early applications are mentioned hereand the exercises at the end of this chapter give additional example applications oflinear programming. One of the early industrial applications of read more..


Figure 3.1  Rigid frame.

Figure 3.2  Collapse mechanisms of the frame. Mb, moment carrying capacity of beam; Mc, moment carrying capacity of column [3.9].

of the plastic moment capacities, find the values of the ultimate moment capacities Mb and Mc for minimum weight. Assume that the two columns are identical and that P1 = 3, P2 = 1, h = 8, and l = 10.

SOLUTION  The objective function can be expressed as

    f(Mb, Mc) = weight of beam + weight of columns
              = α(2lMb + 2hMc)


where α is a constant indicating the weight per unit length of the member with a unit plastic moment capacity. Since a constant multiplication factor does not affect the result, f can be taken as

    f = 2lMb + 2hMc = 20Mb + 16Mc                    (E1)

The constraints (U ≥ E) from the four collapse mechanisms can be expressed as

    Mc ≥ 6
    Mb ≥ 2.5
    2Mb + Mc ≥ 17
    Mb + Mc ≥ 12                                     (E2)

3.3 STANDARD FORM OF A LINEAR PROGRAMMING PROBLEM

The general linear programming problem can be stated in the following


where

    X = (x1, x2, . . . , xn)ᵀ,   b = (b1, b2, . . . , bm)ᵀ,   c = (c1, c2, . . . , cn)ᵀ

    a = | a11  a12  · · ·  a1n |
        | a21  a22  · · ·  a2n |
        | ...                  |
        | am1  am2  · · ·  amn |

The characteristics of a linear programming problem, stated in standard form, are:

1. The objective function is of the minimization type.
2. All the


Similarly, if the constraint is in the form of a "greater than or equal to" type of inequality as

    ak1 x1 + ak2 x2 + · · · + akn xn ≥ bk

it can be converted into the equality form by subtracting a variable as

    ak1 x1 + ak2 x2 + · · · + akn xn − xn+1 = bk

where xn+1 is a nonnegative variable known as a surplus variable.

It can be seen that there are m equations in n decision variables in a linear programming problem. We can assume that m < n; for if m > n,
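The slack/surplus bookkeeping described above is mechanical and can be sketched in code. An illustrative Python helper (not part of the book) that appends a +1 slack column for each "≤" row and a −1 surplus column for each "≥" row:

```python
def to_standard_form(A, b, senses):
    """Convert constraints  a_i . x  (<=, >=, =)  b_i  into equalities by
    appending one slack (+1) or surplus (-1) column per inequality row.
    `senses` holds '<=', '>=' or '=' for each row; returns (A_eq, b_eq)."""
    n_ineq = sum(s != '=' for s in senses)
    A_eq, col = [], 0
    for row, s in zip(A, senses):
        extra = [0.0] * n_ineq
        if s != '=':
            extra[col] = 1.0 if s == '<=' else -1.0
            col += 1
        A_eq.append(list(row) + extra)
    return A_eq, list(b)

# A "<=" row gains a slack variable, a ">=" row a surplus variable:
A_eq, b_eq = to_standard_form([[1, 2], [3, 4]], [5, 6], ['<=', '>='])
print(A_eq)   # [[1, 2, 1.0, 0.0], [3, 4, 0.0, -1.0]]
```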


    3.4Geometry of Linear Programming Problems125on the various machines are given by10x +5y ≤2500(E1)4x +10y ≤2000(E2)x +1.5y ≤450(E3)Since the variables xand ycannot take negative values, we havex ≥0y ≥0(E4)The total profit is given byf (x, y) =50x +100y(E5)Thus the problem is to determine the nonnegative values of xand ythat satisfy theconstraints stated in Eqs. (E1) to (E3) and maximize the objective function given byEq. (E5). The inequalities (E1) to (E4) can be plotted in the read more..


    126Linear Programming I: Simplex MethodFigure 3.4Contours of objective function.In some cases, the optimum solution may not be unique. For example, if theprofit rates for the machine parts I and II are $40 and $100 instead of $50 and $100,respectively, the contours of the profit function will be parallel to side CGof thefeasible region as shown in Fig. 3.5. In this case, line P ′′Q′′, which coincides with theboundary line CG, will correspond to the maximum (feasible) profit. Thus read more..


    3.5Definitions and Theorems127Figure 3.6Unbounded solution.can be taken as an optimum solution with a profit value of $20,000. There are threeother possibilities. In some problems, the feasible region may not be a closed convexpolygon. In such a case, it may happen that the profit level can be increased to aninfinitely large value without leaving the feasible region, as shown in Fig. 3.6. In thiscase the solution of the linear programming problem is said to be unbounded. On theother extreme, read more..


Definitions

1. Point in n-dimensional space. A point X in an n-dimensional space is characterized by an ordered set of n values or coordinates (x1, x2, . . . , xn). The coordinates of X are also called the components of X.

2. Line segment in n dimensions (L). If the coordinates of two points A and B are given by xj(1) and xj(2) (j = 1, 2, . . . , n), the line segment (L) joining these points is the collection of points X(λ) whose coordinates are given by

    xj = λ xj(1) + (1 − λ) xj(2)


    3.5Definitions and Theorems129Figure 3.8Hyperplane in two dimensions.4. Convex set.A convex set is a collection of points such that if X(1)and X(2)areany two points in the collection, the line segment joining them is also in thecollection. A convex set, S, can be defined mathematically as follows:If X(1), X(2) ∈S,thenX ∈SwhereX =λX(1) +(1 −λ)X(2),0 ≤λ ≤1A set containing only one point is always considered to be convex. Someexamples of convex sets in two dimensions are shown shaded read more..


    130Linear Programming I: Simplex MethodFigure 3.11Convex polytopes in two and three dimensions (a, b)and convex polyhedra intwo and three dimensions (c, d).can be seen that a convex polygon, shown in Fig. 3.11a and c, can be consideredas the intersection of one or more half-planes.6. Vertex or extreme point.This is a point in the convex set that does not lie on aline segment joining two other points of the set. For example, every point onthe circumference of a circle and each corner point of a read more..


    3.5Definitions and Theorems13110. Basic feasible solution.This is a basic solution that satisfies the nonnegativityconditions of Eq. (3.3).11. Nondegenerate basic feasible solution.This is a basic feasible solution that hasgot exactly mpositive xi.12. Optimal solution.A feasible solution that optimizes the objective function iscalled an optimal solution.13. Optimal basic solution.This is a basic feasible solution for which the objectivefunction is optimal.Theorems.The basic theorems of linear read more..


    132Linear Programming I: Simplex MethodProof: The feasible region Sof a standard linear programming problem is defined asS = {X |aX =b, X ≥0}(3.11)Let the points X1 and X2 belong to the feasible set Sso thataX1 =b,X1 ≥0(3.12)aX2 =b,X2 ≥0(3.13)Multiply Eq. (3.12) by λand Eq. (3.13) by (1 −λ)and add them to obtaina[λX1 +(1 −λ)X2] =λb +(1 −λ)b =bthat is,aXλ =bwhereXλ =λX1 +(1 −λ)X2Thus the point Xλ satisfies the constraints and if0 ≤λ ≤1,Xλ ≥0Hence the theorem is read more..


    3.6Solution of a System of Linear Simultaneous Equations133a feasible solution and fC =λfA +(1 −λ)fB . In this case, the value of fdecreasesuniformly from fA to fB , and thus all points on the line segment between Aand B(including those in the neighborhood of A) have fvalues less than fA and correspondto feasible solutions. Hence it is not possible to have a local minimum at Aand at thesame time another point Bsuch that fA > fB . This means that for all B, fA ≤fB , sothat fA is the read more..


the equation kEr, where k is a nonzero constant, and (2) any equation Er is replaced by the equation Er + kEs, where Es is any other equation of the system. By making use of these elementary operations, the system of Eqs. (3.14) can be reduced to a convenient equivalent form as follows. Let us select some variable xi and try to eliminate it from all the equations except the jth one (for which aji is nonzero). This can be accomplished by dividing the jth equation
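The pivot operation built from these elementary operations is easy to express in code. A minimal illustrative Python sketch (function name is my own): normalize the pivot row, then eliminate the pivot column from every other row.

```python
def pivot(a, b, r, s):
    """One pivot operation on row r, column s of the system a x = b,
    done in place: divide row r by a[r][s] so the pivot becomes 1,
    then subtract multiples of row r from every other row so that
    column s is eliminated everywhere else."""
    p = a[r][s]
    a[r] = [v / p for v in a[r]]
    b[r] = b[r] / p
    for i in range(len(a)):
        if i != r:
            k = a[i][s]
            a[i] = [a[i][j] - k * a[r][j] for j in range(len(a[i]))]
            b[i] = b[i] - k * b[r]

# Reduce a 2 x 2 system to canonical form with x1, x2 as basic variables:
a = [[2.0, 1.0], [1.0, 3.0]]
b = [5.0, 10.0]
pivot(a, b, 0, 0)
pivot(a, b, 1, 1)
print(b)   # the right-hand side now holds the solution [x1, x2] = [1.0, 3.0]
```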


3.7 Pivotal Reduction of a General System of Equations 135

    0x1 + 0x2 + 1x3 + · · · + 0xn = b3″
    ...
    0x1 + 0x2 + 0x3 + · · · + 1xn = bn″                      (3.16)

This system of Eqs. (3.16) is said to be in canonical form and has been obtained after carrying out n pivot operations. From the canonical form, the solution vector can be directly obtained as

    xi = bi″,    i = 1, 2, . . . , n                          (3.17)

Since the set of Eqs. (3.16) has been obtained from Eqs. (3.14) only through elementary operations, the system of Eqs. (3.16) is


One special solution that can always be deduced from the system of Eqs. (3.19) is

    xi = bi″,   i = 1, 2, . . . , m
    xi = 0,     i = m + 1, m + 2, . . . , n                   (3.20)

This solution is called a basic solution since the solution vector contains no more than m nonzero terms. The pivotal variables xi, i = 1, 2, . . . , m, are called the basic variables and the other variables xi, i = m + 1, m + 2, . . . , n, are called the nonbasic variables. Of course, this is not the only solution, but it


    3.7Pivotal Reduction of a General System of Equations137Finally we pivot on a′33 to obtain the required canonical form asx1+x4=2I3 =I2 −5III3x2−x4=1II3 =II2 +4III3x3 +3x4 =3III3 = −18 III2From this canonical form, we can readily write the solution of x1, x2, and x3 in termsof the other variable x4 asx1 =2 −x4x2 =1 +x4x3 =3 −3x4If Eqs. (I0), (II0), and (III0) are the constraints of a linear programming problem, thesolution obtained by setting the independent variable equal to zero is read more..


    138Linear Programming I: Simplex Methodinto the current basis in place of x2. Thus we have to pivot a′′23 in Eq. (II4). This leadsto the following canonical system:x1+x2=3I5 =I4 +13 II5x3+3x2 =6II5 =3II4x4 −x2= −1III5 =III4 −13 II5The solution for x1, x3, and x4 is given byx1 =3 −x2x3 =6 −3x2x4 = −1 +x2from which the basic solution can be obtained asx1 =3,x3 =6,x4 = −1(basic variables)x2 =0(nonbasic variable)Since all the xj are not nonnegative, this basic solution is not read more..


3.9 Simplex Algorithm 139

solutions and pick the one that is feasible and corresponds to the optimal value of the objective function. This can be done because the optimal solution, if one exists, always occurs at an extreme point or vertex of the feasible domain. If there are m equality constraints in n variables with n ≥ m, a basic solution can be obtained by setting any n − m of the variables equal to zero. The number of basic solutions to be inspected is thus equal to the number of ways in which
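This brute-force enumeration — inspect all C(n, m) = n!/(m!(n − m)!) ways of choosing m basic variables — can be sketched directly. An illustrative Python helper (names are my own) that solves each m × m subsystem by Gaussian elimination, skipping singular choices:

```python
from itertools import combinations

def basic_solutions(A, b):
    """Enumerate the basic solutions of A x = b (m equations, n
    unknowns): for every choice of m columns as basic variables,
    solve the resulting m x m system; singular choices are skipped.
    Yields full-length solution vectors with nonbasic entries zero."""
    m, n = len(A), len(A[0])
    for cols in combinations(range(n), m):
        # augmented m x (m+1) system restricted to the chosen columns
        M = [[A[i][j] for j in cols] + [b[i]] for i in range(m)]
        singular = False
        for k in range(m):                       # Gauss-Jordan elimination
            p = max(range(k, m), key=lambda r: abs(M[r][k]))
            if abs(M[p][k]) < 1e-12:
                singular = True
                break
            M[k], M[p] = M[p], M[k]
            piv = M[k][k]
            M[k] = [v / piv for v in M[k]]
            for r in range(m):
                if r != k:
                    f = M[r][k]
                    M[r] = [M[r][j] - f * M[k][j] for j in range(m + 1)]
        if singular:
            continue
        x = [0.0] * n
        for i, j in enumerate(cols):
            x[j] = M[i][m]
        yield x

# x1 + x2 + x3 = 3, x2 + 2x3 = 2  ->  C(3, 2) = 3 candidate solutions
for x in basic_solutions([[1, 1, 1], [0, 1, 2]], [3, 2]):
    print(x)
```

The feasible candidates (all components nonnegative) are then compared on the objective, exactly as the text describes.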


minimizes the function f(X) and satisfies the equations:

    1x1 + 0x2 + · · · + 0xm + a″1,m+1 xm+1 + · · · + a″1n xn = b1″
    0x1 + 1x2 + · · · + 0xm + a″2,m+1 xm+1 + · · · + a″2n xn = b2″
    ...
    0x1 + 0x2 + · · · + 1xm + a″m,m+1 xm+1 + · · · + a″mn xn = bm″
    0x1 + 0x2 + · · · + 0xm − f + c″m+1 xm+1 + · · · + c″n xn = −f0″    (3.21)

where a″ij, c″j, b″i, and f0″ are constants. Notice that (−f


    3.9Simplex Algorithm141Since the variables xm+1, xm+2, . . . , xn are presently zero and are constrained to benonnegative, the only way any one of them can change is to become positive. But ifc′′i >0 for i =m +1, m +2, . . . , n, then increasing any xi cannot decrease the valueof the objective function f. Since no change in the nonbasic variables can cause ftodecrease, the present solution must be optimal with the optimal value of fequal to f ′′0 .A glance over c′′ican also tell read more..


    xm = bm″ − a″ms xs,    bm″ ≥ 0
    f = f0″ + c″s xs,      c″s < 0                            (3.28)

Since c″s < 0, Eq. (3.28) suggests that the value of xs should be made as large as possible in order to reduce the value of f as much as possible. However, in the process of increasing the value of xs, some of the variables xi (i = 1, 2, . . . , m) in Eqs. (3.27) may become negative. It can be seen that if all the coefficients a″is ≤ 0, i = 1, 2, . . . , m, then xs


Figure 3.14  Flowchart for finding the optimal solution by the simplex algorithm.


Example 3.4

    Maximize F = x1 + 2x2 + x3

subject to

    2x1 + x2 − x3 ≤ 2
    −2x1 + x2 − 5x3 ≥ −6
    4x1 + x2 + x3 ≤ 6
    xi ≥ 0,    i = 1, 2, 3

SOLUTION  We first change the sign of the objective function to convert it to a minimization problem and the signs of the inequalities (where necessary) so as to obtain nonnegative values of bi (to see whether an initial basic feasible solution can be obtained readily). The resulting problem can be stated as

    Minimize f = −x1 − 2x2 − x3


    3.9Simplex Algorithm145Thus x2 enters the next basic set. To obtain the new canonical form, we select the pivotelement a′′rs such thatb′′ra′′rs=mina′′is >0b′′ia′′isIn the present case, s =2 and a′′12 and a′′32 are ≥0. Since b′′1 /a′′12 =2/1 and b′′3 /a′′32 =6/1, xr =x1. By pivoting an a′′12, the new system of equations can be obtained as2x1 +1x2 −x3 +x4=24x1 +0x2 +4x3 +x4 +x5=82x1 +0x2 +2x3 −x4 +x6=43x1 +0x2 −3x3 +2x4−f =4(E3)The read more..


    146Linear Programming I: Simplex Methodform as shown below:BasicVariablesb′′i /a′′is forvariablesx1x2x3x4x5x6−fb′′ia′′is >0x421Pivotelement−1100022 ←Smaller one(x4 drops fromnext basis)x52−1501006x6411001066−f−1−2−100010↑Most negative c′′i (x2 enters next basis)Result of pivoting:x221−110002x5404Pivotelement110082 (Select thisarbitrarily. x5drops from nextbasis)x6202−101042−f30−320014↑Most negative c′′i (x3 enters the next basis)Result of read more..
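The tableau calculations above can be carried out mechanically. Below is a minimal Python sketch of the tableau simplex for problems of the form min cᵀx subject to Ax ≤ b, x ≥ 0 with b ≥ 0, so the slack variables supply the starting basis (an illustrative teaching sketch with no anticycling safeguards, not the book's code). Applied to the minimization form of Example 3.4 it reproduces the optimum f_min = −10 at (0, 4, 2).

```python
def simplex(c, A, b):
    """Tableau simplex for  min c.x  s.t.  A x <= b, x >= 0, b >= 0.
    Returns (optimal value, solution).  Entering variable: most
    negative relative cost; leaving variable: minimum-ratio rule."""
    m, n = len(A), len(c)
    # tableau rows [A | I | b]; objective row [c | 0 | 0]
    T = [A[i] + [1.0 if j == i else 0.0 for j in range(m)] + [b[i]]
         for i in range(m)]
    z = c + [0.0] * (m + 1)
    basis = list(range(n, n + m))          # slacks start in the basis
    while True:
        s = min(range(n + m), key=lambda j: z[j])
        if z[s] >= -1e-9:
            break                          # all c'' >= 0: optimal
        ratios = [(T[i][-1] / T[i][s], i) for i in range(m) if T[i][s] > 1e-9]
        if not ratios:
            raise ValueError("unbounded")
        _, r = min(ratios)
        piv = T[r][s]
        T[r] = [v / piv for v in T[r]]
        for i in range(m):
            if i != r:
                k = T[i][s]
                T[i] = [T[i][j] - k * T[r][j] for j in range(n + m + 1)]
        k = z[s]
        z = [z[j] - k * T[r][j] for j in range(n + m + 1)]
        basis[r] = s
    x = [0.0] * n
    for i, j in enumerate(basis):
        if j < n:
            x[j] = T[i][-1]
    return -z[-1], x

# Example 3.4 in minimization form with nonnegative right-hand sides:
val, x = simplex([-1.0, -2.0, -1.0],
                 [[2.0, 1.0, -1.0], [2.0, -1.0, 5.0], [4.0, 1.0, 1.0]],
                 [2.0, 6.0, 6.0])
print(val, x)   # -10.0, [0.0, 4.0, 2.0]
```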


    3.9Simplex Algorithm147SOLUTIONIntroducing the slack variables x3 ≥0 and x4 ≥0, the given system ofequations can be written in canonical form asx1 −x2+x3=13x1 −2x2+x4=6−3x1 −2x2−f =0(E1)The basic feasible solution corresponding to this canonical form is given byx3 =1,x4 =6(basic variables)x1 =x2 =0(nonbasic variables)(E2)f =0Since the cost coefficients corresponding to the nonbasic variables are negative, thesolution given by Eq. (E2) is not optimum. Hence the simplex procedure is read more..


    148Linear Programming I: Simplex MethodAt this stage we notice that x3 has the most negative cost coefficient and henceit should be brought into the next basis. However, since all the coefficients a′′i3 arenegative, the value of fcan be decreased indefinitely without violating any of theconstraints if we bring x3 into the basis. Hence the problem has no bounded solution.In general, if all the coefficients of the entering variable xs(a′′is )have negative orzero values at any read more..


    3.9Simplex Algorithm149Result of pivoting:x3801−12001,500x24101011000200x581000−31010300−f000100120,000Since all c′′i ≥0, the present solution is optimum. The optimum values aregiven byx2 =200,x3 =1500,x5 =300(basic variables)x1 =x4 =0(nonbasic variables)fmin = −20,000Important note:It can be observed from the last row of the preceding tableau thatthe cost coefficient corresponding to the nonbasic variable x1(c′′1 )is zero. This is anindication that an alternative solution read more..


    150Linear Programming I: Simplex Methodare known, an infinite number of nonbasic (optimal) feasible solutions can be obtainedby taking any weighted average of the two solutions asX∗ =λX1 +(1 −λ)X2X∗ =x∗1x∗2x∗3x∗4x∗5=(1 −λ)15008200λ +(1 −λ)1251500λ0300λ +(1 read more..
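The weighted-average construction X* = λX1 + (1 − λ)X2 described above is easy to verify numerically: every convex combination of two optimal solutions is feasible and attains the same objective value. A small illustrative sketch using an assumed toy LP (not Example 3.6's data):

```python
def convex_combination(x1, x2, lam):
    """Point lam*x1 + (1 - lam)*x2 for 0 <= lam <= 1."""
    return [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]

# Toy LP with alternative optima:  min f = -x - y  s.t.  x + y <= 1,
# x, y >= 0.  Both X1 = (1, 0) and X2 = (0, 1) are optimal with f = -1.
f = lambda x: -x[0] - x[1]
X1, X2 = [1.0, 0.0], [0.0, 1.0]
for lam in (0.0, 0.25, 0.5, 1.0):
    X = convex_combination(X1, X2, lam)
    print(X, f(X))   # every combination stays on x + y = 1 with f = -1
```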


    3.10Two Phases of the Simplex Method1511.Arrange the original system of Eqs. (3.32) so that all constant terms bi arepositive or zero by changing, where necessary, the signs on both sides of anyof the equations.2.Introduce to this system a set of artificial variables y1, y2, . . . , ym (which serveas basic variables in phase I), where each yi ≥0, so that it becomesa11x1 +a12x2 + · · · +a1nxn +y1=b1a21x1 +a22x2 + · · · +a2nxn+y2=b2...am1x1 +am2x2 + · · · +amnxn+ym =bmbi ≥0(3.34)Note read more..


where

    di = −(a1i + a2i + · · · + ami),    i = 1, 2, . . . , n           (3.39)
    −w0 = −(b1 + b2 + · · · + bm)                                     (3.40)

Equations (3.38) provide the initial basic feasible solution that is necessary for starting phase I.

4. In Eq. (3.37), the expression of w in terms of the artificial variables y1, y2, . . . , ym is known as the infeasibility form. w has the property that if, as a result of phase I, a minimum of w > 0 is obtained, no feasible solution exists for the original linear programming problem.
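Steps 1–3 of the phase I setup can be sketched in code. An illustrative Python helper (the function name is my own) that flips signs to make the constants nonnegative, appends the artificial-variable identity columns, and forms the w-row of Eqs. (3.39)–(3.40); for the constraint data of the two-phase example worked below it reproduces the canonical w-row −4x1 + 2x2 − 5x3 − 5x4 + 0x5 − w = −2.

```python
def phase_one_setup(A, b):
    """Build the phase I system: make all b_i >= 0 (step 1), append
    one artificial variable per row (step 2), and form the w-row
    coefficients d_i = -(a_1i + ... + a_mi) and -w0 = -(b1 + ... + bm)
    (step 3).  Returns (A_with_artificials, b, d, w0)."""
    m, n = len(A), len(A[0])
    A2, b2 = [], []
    for row, bi in zip(A, b):
        if bi < 0:                       # step 1: nonnegative constants
            row, bi = [-v for v in row], -bi
        A2.append(list(row))
        b2.append(bi)
    # step 2: artificial variables y1..ym as identity columns
    A_art = [A2[i] + [1.0 if j == i else 0.0 for j in range(m)]
             for i in range(m)]
    # step 3: w-row in canonical form
    d = [-sum(A2[i][j] for i in range(m)) for j in range(n)]
    w0 = -sum(b2)
    return A_art, b2, d, w0

# Constraint data of the two-phase example below:
A = [[3, -3, 4, 2, -1], [1, 1, 1, 3, 1]]
b = [0, 2]
A_art, b2, d, w0 = phase_one_setup(A, b)
print(d, w0)   # [-4, 2, -5, -5, 0], -2
```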


Figure 3.15  Flowchart for the two-phase simplex method.


Figure 3.15  (continued)

the complete array of equations can be written as

    3x1 − 3x2 + 4x3 + 2x4 − x5 + y1      = 0
     x1 +  x2 +  x3 + 3x4 + x5      + y2 = 2
    2x1 + 3x2 + 2x3 −  x4 + x5 − f       = 0
    y1 + y2 − w                          = 0            (E2)


This array can be rewritten as a canonical system with basic variables y1, y2, −f, and −w by subtracting the sum of the first two equations of (E2) from the last equation of (E2). Thus the last equation of (E2) becomes

    −4x1 + 2x2 − 5x3 − 5x4 + 0x5 − w = −2               (E3)

Since this canonical system [first three equations of (E2), and (E3)] provides an initial basic feasible solution, phase I of the simplex method can be started. The phase I computations are


    156Linear Programming I: Simplex MethodStep 4At this stage we notice that the present basic feasible solution does not containany of the artificial variables y1 and y2, and also the value of wis reduced to0. This indicates that phase I is completed.Step 5Now we start phase II computations by dropping the wrow from furtherconsideration. The results of phase II are again shown in tableau form:BasicOriginal variablesConstantValue of b′′i /a′′is forvariablesx1x2x3x4x5b′′ia′′is read more..


3.11 MATLAB Solution of LP Problems

    4x1 + x2 + x3 ≤ 6
    xi ≥ 0,    i = 1, 2, 3

SOLUTION

Step 1: Express the objective function in the form f(x) = fᵀx and identify the vectors x and f as

    x = (x1, x2, x3)ᵀ   and   f = (−1, −2, −1)ᵀ

Express the constraints in the form Ax ≤ b and identify the matrix A and the vector b as

    A = | 2   1  −1 |        b = | 2 |
        | 2  −1   5 |            | 6 |
        | 4   1   1 |            | 6 |

Step 2: Use the command for executing the linear programming program using the simplex method as indicated below:

clc
clear


    158Linear Programming I: Simplex Methodcgiterations: []message: 'Optimization terminated.'REFERENCES AND BIBLIOGRAPHY3.1G. B. Dantzig, Linear Programming and Extensions, Princeton University Press,Princeton, NJ, 1963.3.2W. J. Adams, A. Gewirtz, and L. V. Quintas, Elements of Linear Programming, VanNostrand Reinhold, New York, 1969.3.3W.W. Garvin, Introduction to Linear Programming, McGraw-Hill, New York, 1960.3.4S. I. Gass, Linear Programming: Methods and Applications, 5th ed., McGraw-Hill, read more..


    Review Questions1593.8What is a basis?3.9What is a pivot operation?3.10What is the difference between a convex polyhedron and a convex polytope?3.11What is a basic degenerate solution?3.12What is the difference between the simplex algorithm and the simplex method?3.13How do you identify the optimum solution in the simplex method?3.14Define the infeasibility form.3.15What is the difference between a slack and a surplus variable?3.16Can a slack variable be part of the basis at the optimum read more..


    160Linear Programming I: Simplex Method(u)The optimum solution of an LP problem cannot contain slack variables in the basis.(v)If the infeasibility form has a nonzero value at the end of phase I, it indicates anunbounded solution to the LP problem.(w)The solution of an LP problem can be a local optimum.(x)In a standard LP problem, all the cost coefficients will be positive.(y)In a standard LP problem, all the right-hand-side constants will be positive.(z)In a LP problem, the number of read more..


    Problems161Figure 3.16Reservoir in an irrigation district.wet and dry seasons, respectively. Of the total amount of water released to the irrigationdistrict per year (x2), 30% is to be released during the wet season and 70% during thedry season. The yearly cost of diverting the required amount of water from the mainstream to the irrigation district is given by 18(0.3x2) +12(0.7x2). The cost of buildingand maintaining the reservoir, reduced to an yearly basis, is given by 25x1. Determinethe read more..


    162Linear Programming I: Simplex Method3.6What elementary operations can be used to transform2x1 +x2 +x3 =9x1 +x2 +x3 =62x1 +3x2 +x3 =13intox1 =3x2 =2x1 +3x2 +x3 =10Find the solution of this system by reducing into canonical form.3.7Find the solution of the following LP problem graphically:Maximize f =2x1 +6x2subject to−x1 +x2 ≤12x1 +x2 ≤2x1 ≥0,x2 ≥03.8Find the solution of the following LP problem graphically:Minimize f = −3x1 +2x2subject to0 ≤x1 ≤41 ≤x2 ≤6x1 +x2 ≤53.9Find read more..


    Problems1633.10Find the solution of the following problem by the graphical method:Minimize f =x21 x22subject tox31 x22 ≥e3x1x42 ≥e4x21 x32 ≤ex1 ≥0,x2 ≥0where eis the base of natural logarithms.3.11Prove Theorem 3.6.For Problems 3.12 to 3.42, use a graphical procedure to identify (a) the feasible region,(b) the region where the slack (or surplus) variables are zero, and (c) the optimumsolution.3.12Maximize f =6x +7ysubject to7x +6y ≤425x +9y ≤45x −y ≤4x ≥0,y ≥03.13Rework read more..


    164Linear Programming I: Simplex Method−x +6y ≥125x +2y ≤68x ≤10x ≥0,y ≥03.17Rework Problem 3.16 by changing the objective to Minimize f =x −y.3.18Maximize f =x +2ysubject tox −y ≥ −85x −y ≥0x +y ≥8−x +6y ≥125x +2y ≥68x ≤10x ≥0,y ≥03.19Rework Problem 3.18 by changing the objective to Minimize f =x −y.3.20Maximize f =x +3ysubject to−4x +3y ≤12x +y ≤7x −4y ≤2x ≥0,y ≥03.21Minimize f =x +3ysubject to−4x +3y ≤12x +y ≤7x −4y ≤2xand yare read more..


    Problems1653.23Maximize f =x +3ysubject to−4x +3y ≤12x +y ≤7x −4y ≥2x ≥0,y ≥03.24Minimize f =x −8ysubject to3x +2y ≥6x −y ≤69x +7y ≤1083x +7y ≤702x −5y ≥ −35x ≥0, y ≥03.25Rework Problem 3.24 by changing the objective to Maximize f =x −8y.3.26Maximize f =x −8ysubject to3x +2y ≥6x −y ≤69x +7y ≤1083x +7y ≤702x −5y ≥ −35x ≥0, yis unrestricted in sign3.27Maximize f =5x −2ysubject to3x +2y ≥6x −y ≤6 read more..


    166Linear Programming I: Simplex Method9x +7y ≤1083x +7y ≤702x −5y ≥ −35x ≥0,y ≥03.28Minimize f =x −4ysubject tox −y ≥ −44x +5y ≤455x −2y ≤205x +2y ≤10x ≥0,y ≥03.29Maximize f =x −4ysubject tox −y ≥ −44x +5y ≤455x −2y ≤205x +2y ≥10x ≥0, yis unrestricted in sign3.30Minimize f =x−4ysubject tox −y ≥ −44x +5y ≤455x −2y ≤205x +2y ≥10x ≥0,y ≥03.31Rework Problem 3.30 by changing the objective to Maximize f =x −4y.3.32Minimize f read more..


    Problems167subject to10x +y ≥105x +4y ≥203x +7y ≥21x +12y ≥12x ≥0,y ≥03.33Rework Problem 3.32 by changing the objective to Maximize f =4x +5y.3.34Rework Problem 3.32 by changing the objective to Minimize f =6x +2y.3.35Minimize f =6x +2ysubject to10x +y ≥105x +4y ≥203x +7y ≥21x +12y ≥12xand yare unrestricted in sign3.36Minimize f =5x +2ysubject to3x +4y ≤24x −y ≤3x +4y ≥43x +y ≥3x ≥0, y ≥03.37Rework Problem 3.36 by changing the objective to Maximize f =5x read more..


    168Linear Programming I: Simplex Methodsubject to3x +4y ≤24x −y ≤3x +4y ≤43x +y ≥3x ≥0,y ≥03.40Maximize f =3x +2ysubject to9x +10y ≤33021x −4y ≥ −36x +2y ≥66x −y ≤723x +y ≤54x ≥0, y ≥03.41Rework Problem 3.40 by changing the constraint x +2y ≥6 to x +2y ≤6.3.42Maximize f =3x +2ysubject to9x +10y ≤33021x −4y ≥ −36x +2y ≤66x −y ≤723x +y ≥54x ≥0, y ≥03.43Maximize f =3x +2ysubject to21x −4y ≥ −36x +2y ≥66x −y ≤72x ≥0, y ≥0 read more..


    Problems1693.44Reduce the system of equations2x1 +3x2 −2x3 −7x4 =2x1 +x2 −x3 +3x4 =12x1 −x2 +x3 +5x4 =8into a canonical system with x1, x2, and x3 as basic variables. From this derive all othercanonical forms.3.45Maximize f =240x1 +104x2 +60x3 +19x4subject to20x1 +9x2 +6x3 +x4 ≤2010x1 +4x2 +2x3 +x4 ≤10xi ≥0,i =1 to 4Find all the basic feasible solutions of the problem and identify the optimal solution.3.46A progressive university has decided to keep its library open round the clock read more..


    170Linear Programming I: Simplex Methodlimit on the lengths of the standard rolls, find the cutting pattern that minimizes the trimlosses while satisfying the order above.3.48Solve the LP problem stated in Example 1.6 for the following data: l =2 m,W1 =3000 N, W2 =2000 N, W3 =1000 N, and w1 =w2 =w3 =200 N.3.49Find the solution of Problem 1.1 using the simplex method.3.50Find the solution of Problem 1.15 using the simplex method.3.51Find the solution of Example 3.1 using (a) the graphical method read more..


    Problems171Solve Problems 3.54–3.90 by the simplex method.3.54Problem 1.223.55Problem 1.233.56Problem 1.243.57Problem 1.253.58Problem 3.73.59Problem 3.123.60Problem 3.133.61Problem 3.143.62Problem 3.153.63Problem 3.163.64Problem 3.173.65Problem 3.183.66Problem 3.193.67Problem 3.203.68Problem 3.213.69Problem 3.223.70Problem 3.233.71Problem 3.243.72Problem 3.253.73Problem 3.263.74Problem 3.273.75Problem 3.283.76Problem 3.293.77Problem 3.303.78Problem 3.313.79Problem 3.323.80Problem read more..


    172Linear Programming I: Simplex Method3.85Problem 3.383.86Problem 3.393.87Problem 3.403.88Problem 3.413.89Problem 3.423.90Problem 3.433.91The temperatures measured at various points inside a heated wall are given below:Distance from the heated surface as apercentage of wall thickness, xi020406080100Temperature, ti (◦C)40035025017510050It is decided to use a linear model to approximate the measured values ast =a +bx(1)where tis the temperature, xthe percentage of wall thickness, and aand bthe read more..


    Problems173The times available on machines A1 and A2 per day are 1200 and 1000 minutes, respec-tively. The profits per unit of B1, B2, and B3 are $4, $2, and $3, respectively. Themaximum number of units the company can sell are 500, 400, and 600 for B1, B2, andB3, respectively. Formulate and solve the problem for maximizing the profit.3.94Two types of printed circuit boards Aand Bare produced in a computer manufacturingcompany. The component placement time, soldering time, and inspection time read more..


    174Linear Programming I: Simplex Method3.97A bank offers four different types of certificates of deposits (CDs) as indicated below:CD typeDuration (yr)Total interest at maturity (%)10.5521.0732.01044.015If a customer wants to invest $50,000 in various types of CDs, determine the plan thatyields the maximum return at the end of the fourth year.3.98The production of two machine parts Aand Brequires operations on a lathe (L), ashaper (S), a drilling machine (D), a milling machine (M), and a read more..


    Problems175Quantity of coalrequired to generate 1PollutionCost of coalMWh at the powercaused atat powerplant (tons)power plantplantCoal typeABABABC12. the problem of determining the amounts of different grades of coal to be usedat each power plant to minimize (a) the total pollution level, and (b) the total cost ofoperation.3.101A grocery store wants to buy five different types of vegetables from four farms in amonth. The prices of the read more..


    176Linear Programming I: Simplex MethodAssuming that a particular process can be employed for any number of days in a30-day month, determine the operating schedule of the plant for maximizing the profit.3.103Solve Example 3.7 using MATLAB (simplex method).3.104Solve Problem 3.12 using MATLAB (simplex method).3.105Solve Problem 3.24 using MATLAB (simplex method).3.106Find the optimal solution of the LP problem stated in Problem 3.45 using MATLAB(simplex method).3.107Find the optimal solution of read more..


    4Linear Programming II:Additional Topics and Extensions4.1INTRODUCTIONIf a LP problem involving several variables and constraints is to be solved by using thesimplex method described in Chapter 3, it requires a large amount of computer storageand time. Some techniques, which require less computational time and storage spacecompared to the original simplex method, have been developed. Among these tech-niques, the revised simplex method is very popular. The principal difference betweenthe original read more..

1. The relative cost coefficients c̄j, needed to compute

       c̄s = min(c̄j)                                                    (4.1)

   c̄s determines the variable xs that has to be brought into the basis in the next iteration.

2. Assuming that c̄s < 0, the elements of the updated column

       Ās = (ā1s, ā2s, ..., āms)ᵀ

   and the values of the basic variables XB have to be calculated. With this information, …

subject to

    AX = A1x1 + A2x2 + ··· + Anxn = b                                  (4.4)
    X ≥ 0                                                              (4.5)

where the jth column of the m × n coefficient matrix A is Aj = (a1j, a2j, ..., amj)ᵀ. Assuming that the linear programming problem has a solution, let

    B = [Aj1  Aj2  ···  Ajm]

be a basis matrix with XB = (xj1, xj2, ..., xjm)ᵀ and cB = …

Definition. The row vector

    πᵀ = cBᵀ B⁻¹ = (π1, π2, ..., πm)                                    (4.7)

is called the vector of simplex multipliers relative to the f equation. If the computations correspond to phase I, two vectors of simplex multipliers, one relative to the f equation and the other relative to the w equation, are to be defined as

    πᵀ = cBᵀ B⁻¹ = (π1, π2, ..., πm)
    σᵀ = …

and the modified cost coefficient c̄j as

    c̄j = cj − πᵀAj                                                     (4.10)

Equations (4.9) and (4.10) can be used to perform a simplex iteration by generating Āj and c̄j from the original problem data, Aj and cj. Once Āj and c̄j are computed, the pivot element ārs can be identified by using Eqs. (4.1) and (4.2). In the next step, Ps is introduced into the basis and Pjr is removed. This amounts to generating the inverse of the new basis matrix. The computational procedure can be seen …

only to illustrate the transformation, and it can be dropped in actual computations. Thus in practice, we write the (m + 1) × (m + 2) matrix [D⁻¹ | ās], where the appended column is (ā1s, ā2s, ..., āms, c̄s)ᵀ, and carry out a pivot operation on ārs. The first m + 1 columns of the resulting matrix give the desired matrix D⁻¹new.

Procedure. The detailed iterative procedure of the revised simplex method to …

Table 4.1  Original System of Equations
(The detailed layout is not recoverable from the extraction. The table lists the admissible (original) variables x1, ..., xn with columns A1, ..., An, the artificial variables xn+1, ..., xn+m whose unit columns form the initial basis, the objective variables −f and −w, the cost rows cj and dj, and the constants b1, ..., bm.)

Table 4.2  Tableau at the Beginning of Cycle 0
(Basic variables xn+1, ..., xn+m; the columns of the canonical form hold the identity matrix, which is the inverse of the basis; the values of the basic variables are b1, ..., bm; the −f row carries the value 0 and the −w row carries −w0 = −Σᵢ bᵢ. The xs column is blank at the beginning of cycle 0 and is filled up only at the end of cycle 0.)

seen to be an …

Table 4.3  Relative Cost Factor d̄j or c̄j
(For cycles 0, 1, ..., l of phase I, use the values of σi of the current cycle and compute

    d̄j = dj − (σ1a1j + σ2a2j + ··· + σmamj)

For cycles l + 1, l + 2, ... of phase II, use the πi of the current cycle and compute

    c̄j = cj − (π1a1j + π2a2j + ··· + πmamj)

Enter d̄j or c̄j in the row corresponding to the current cycle and choose the …)

the basic set in the next cycle in place of the rth basic variable (r to be found later), such that

    c̄s = min(c̄j < 0)

5. Compute the elements of the xs column from Eq. (4.9) as

    Ās = B⁻¹As = [βij]As

that is,

    ā1s = β11a1s + β12a2s + ··· + β1mams
    ā2s = β21a1s + β22a2s + ··· + β2mams
    ...
    āms = βm1a1s + βm2a2s + ··· + βmmams

and enter them in the last column of Table 4.2 (if cycle 0) or Table 4.4 (if cycle k).

6. Inspect the signs of …

Table 4.5  Tableau at the Beginning of Cycle k + 1
(Each entry βij of the old inverse of the basis is updated by the pivot on ārs: the pivot row becomes β*ri = βri/ārs with value b̄*r = b̄r/ārs, the remaining rows become βij − āisβ*rj with values b̄i − āisb̄*r, the −f row becomes −πi − c̄sβ*ri with value −f0 − c̄sb̄*r, and the −w row becomes −σi − d̄sβ*ri with value −w0 − d̄sb̄*r. The xs column is blank at the beginning of cycle k + 1.)
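The bookkeeping of Tables 4.2 to 4.5 amounts to carrying the inverse of the basis along and regenerating columns from the original data. A minimal Python sketch of the revised simplex iteration, using the data of the example that follows (for simplicity it recomputes B⁻¹ with NumPy instead of pivot-updating it as the tables do):

```python
import numpy as np

# Example data: minimize c^T x subject to Ax = b, x >= 0,
# with slacks x4, x5, x6 forming the initial basis.
A = np.array([[2., 1, -1, 1, 0, 0],
              [2., -1, 5, 0, 1, 0],
              [4., 1, 1, 0, 0, 1]])
b = np.array([2., 6, 6])
c = np.array([-1., -2, -1, 0, 0, 0])
basis = [3, 4, 5]                       # indices of x4, x5, x6

while True:
    Binv = np.linalg.inv(A[:, basis])   # sketch: recompute instead of pivot-update
    pi = c[basis] @ Binv                # simplex multipliers, Eq. (4.7)
    cbar = c - pi @ A                   # relative cost coefficients, Eq. (4.10)
    if (cbar >= -1e-9).all():           # optimality test
        break
    s = int(np.argmin(cbar))            # entering variable, Eq. (4.1)
    d = Binv @ A[:, s]                  # updated column A-bar_s, Eq. (4.9)
    xB = Binv @ b
    if (d <= 1e-9).all():
        raise ValueError("unbounded")
    ratios = np.where(d > 1e-9, xB / np.where(d > 1e-9, d, 1.0), np.inf)
    r = int(np.argmin(ratios))          # leaving variable (minimum-ratio rule)
    basis[r] = s

x = np.zeros(A.shape[1])
x[basis] = np.linalg.inv(A[:, basis]) @ b
print(x[:3], c @ x)                     # optimum (x1, x2, x3) = (0, 4, 2), fmin = -10
```

For this data the sketch terminates in two pivots (x2 enters, then x3), matching the cycles traced in the tableaus below.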

Table 4.6  Detached Coefficients of the Original System

    x1    x2    x3    x4    x5    x6    −f    Constants
     2     1    −1     1     0     0     0       2
     2    −1     5     0     1     0     0       6
     4     1     1     0     0     1     0       6
    −1    −2    −1     0     0     0     1       0

Table 4.7  Tableau at the Beginning of Cycle 0
(Basic variables x4, x5, x6; the inverse of the basis [βij] is the identity matrix; the values of the basic variables are 2, 6, 6. The xs column, entered at the end of step 5, contains ā42 = 1 (pivot element), ā52 = −1, ā62 = 1, and c̄2 = −2.)

Step 2: The …

Table 4.8  Relative Cost Factors c̄j

    Cycle (phase II)     x1     x2     x3     x4      x5     x6
    Cycle 0              −1     −2     −1      0       0      0
    Cycle 1               3      0     −3      2       0      0
    Cycle 2               6      0      0     11/4    3/4     0

Step 4: Find whether all c̄j ≥ 0 for optimality. The present basic feasible solution is not optimal since some c̄j are negative. Hence select a variable xs to enter the basic set in the next cycle such that c̄s = min(c̄j < 0) = c̄2 in this case. Therefore, x2 enters the basic set.

Step 5: Compute the elements of the xs column as Ās = [βij]As, where …

Table 4.9  Tableau at the Beginning of Cycle 1

    Basic variables    x4    x5    x6    −f    Value    xs column
    x2                  1     0     0     0      2      ā23 = −1
    x5                  1     1     0     0      8      ā53 = 4  (pivot element)
    x6                 −1     0     1     0      4      ā63 = 2
    −f                  2     0     0     1      4      c̄3 = −3
                     (−π1) (−π2) (−π3)

(The columns under x4, x5, x6 give the inverse of the basis [βij]; the xs column is entered at the end of step 5.)

where the negative values of π1, π2, and π3 are given by the row of −f in Table 4.9, and the aij and ci are given in …

Table 4.10  Tableau at the Beginning of Cycle 2

    Basic variables     x4      x5      x6    −f    Value
    x2                  5/4     1/4     0      0      4
    x3                  1/4     1/4     0      0      2
    x6                 −6/4    −2/4     1      0      0
    −f                 11/4     3/4     0      1     10

(The xs column is blank at the beginning of cycle 2.)

Here

    b̄5/ā53 = 8/4 = 2    and    b̄6/ā63 = 4/2 = 2

Since there is a tie between x5 and x6, we select xr = x5 arbitrarily.

Step 7: To bring x3 into the basic set in place of x5, pivot on ārs = ā53 in Table 4.9. Enter the result as shown in Table 4.10, keeping its …

4.3 DUALITY IN LINEAR PROGRAMMING

Associated with every linear programming problem, called the primal, there is another linear programming problem called its dual. These two problems possess very interesting and closely related properties: if the optimal solution to either one is known, the optimal solution to the other can readily be obtained. In fact, it is immaterial which problem is designated the primal, since the dual of a dual is the …

4.3.2 General Primal–Dual Relations

Although the primal–dual relations of Section 4.3.1 are derived by considering a system of inequalities in nonnegative variables, it is always possible to obtain the primal–dual relations for a general system consisting of a mixture of equations, inequalities of the less-than or greater-than type, and nonnegative variables or variables unrestricted in sign, by reducing the system to an equivalent inequality system of Eqs. (4.17). The …

Table 4.13  Primal–Dual Relations Where m* = m and n* = n

    Primal problem                                Corresponding dual problem
    Minimize f = Σⁿj=1 cj xj                      Maximize ν = Σᵐi=1 bi yi
    subject to                                    subject to
      Σⁿj=1 aij xj = bi,  i = 1, 2, ..., m          Σᵐi=1 yi aij ≤ cj,  j = 1, 2, ..., n
    where                                         where
      xj ≥ 0,  j = 1, 2, ..., n                     yi is unrestricted in sign, i = 1, 2, ..., m

    In matrix form:                               In matrix form:
    Minimize f = cᵀX                              Maximize ν = Yᵀb
    subject to AX = b, X ≥ 0                      subject to AᵀY ≤ c, Y unrestricted in sign

4.3.4 Duality Theorems

The following theorems are useful in developing a method for solving LP problems using dual relationships. The proofs of these theorems can be found in Ref. [4.10].

Theorem 4.1  The dual of the dual is the primal.

Theorem 4.2  Any feasible solution of the primal gives an f value greater than or at least equal to the ν value obtained by any feasible solution of the dual.

Theorem 4.3  If both the primal and dual problems have feasible solutions, both have …
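Theorems 4.2 and 4.3 (weak and strong duality) are easy to verify numerically on a small instance. The data below are made up for illustration, not from the text: primal min 2x1 + 3x2 subject to x1 + x2 ≥ 4, x ≥ 0, whose dual is max 4y subject to y ≤ 2, y ≤ 3, y ≥ 0.

```python
from scipy.optimize import linprog

# Hypothetical primal: minimize 2x1 + 3x2  s.t.  x1 + x2 >= 4, x >= 0
# (linprog wants <= rows, so multiply the >= constraint by -1)
primal = linprog(c=[2, 3], A_ub=[[-1, -1]], b_ub=[-4], bounds=[(0, None)] * 2)

# Its dual: maximize 4y  s.t.  y <= 2, y <= 3, y >= 0
# (linprog minimizes, so minimize -4y)
dual = linprog(c=[-4], A_ub=[[1], [1]], b_ub=[2, 3], bounds=[(0, None)])

print(primal.fun, -dual.fun)   # both equal 8: the optima coincide (Theorem 4.3)
```

Any feasible primal point, e.g. (4, 4) with f = 20, overestimates the dual optimum, illustrating Theorem 4.2.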

as variants of the regular simplex method, to solve a linear programming problem by starting from an infeasible solution to the primal. All these methods work in an iterative manner such that they force the solution to become feasible as well as optimal simultaneously at some stage. Among all the methods, the dual simplex method developed by Lemke [4.2] and the primal–dual method developed by Dantzig, Ford, and Fulkerson [4.3] have been …

2. We can see that the primal will not have a feasible solution when all ārj are nonnegative, from the following reasoning. Let (x1, x2, ..., xm) be the set of basic variables. Then the rth basic variable, xr, can be expressed as

    xr = b̄r − Σⁿj=m+1 ārj xj

It can be seen that if b̄r < 0 and ārj ≥ 0 for all j, xr cannot be made nonnegative for any nonnegative value of xj. Thus the primal problem contains an equation (the rth one) that cannot be satisfied by any …

Step 1: Write the system of equations (E1) in tableau form:

    Basic variables    x1    x2    x3    x4    x5    x6    −f    b̄i
    x3                 −1     0     1     0     0     0     0    −2.5
    x4                  0    −1     0     1     0     0     0    −6
    x5                 −2    −1     0     0     1     0     0    −17   ← minimum, pivot row
    x6                 −1    −1     0     0     0     1     0    −12
    −f                 20    16     0     0     0     0     1     0

Select the pivot row r such that

    b̄r = min(b̄i < 0) = b̄3 = −17

in this case. Hence r = 3.

Step 2: Select the pivot column s as

    c̄s/(−ārs) = min over ārj < 0 of c̄j/(−ārj)

Since c̄1/(−ā31) = 20/2 = 10 and c̄2/(−ā32) = 16/1 = 16, we have s = 1.

Step 3: The pivot …

Step 2: Since ā22 is the only negative coefficient, it is taken as the pivot element.

Step 3: The result of the pivot operation on ā22 in the preceding table is as follows:

    Basic variables    x1    x2    x3    x4     x5     x6    −f    b̄i
    x3                  0     0     1    1/2   −1/2     0     0      3
    x2                  0     1     0    −1      0      0     0      6
    x1                  1     0     0    1/2   −1/2     0     0    11/2
    x6                  0     0     0   −1/2   −1/2     1     0    −1/2   ← minimum, pivot row
    −f                  0     0     0     6     10      0     1   −206

Step 4: Since all b̄i are not ≥ 0, the present solution is not optimum. Hence we go to the next iteration.

Step 1: The pivot row (corresponding to …
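The iterations above mechanize directly. A NumPy sketch of the dual simplex pivot rule, run on the tableau set up in step 1 (the primal those rows encode is: minimize 20x1 + 16x2 subject to x1 ≥ 2.5, x2 ≥ 6, 2x1 + x2 ≥ 17, x1 + x2 ≥ 12, with the ≥ rows negated and slacks x3 to x6 added):

```python
import numpy as np

# Constraint rows of the tableau; last column is the right-hand side b-bar.
T = np.array([[-1.,  0, 1, 0, 0, 0,  -2.5],
              [ 0., -1, 0, 1, 0, 0,  -6.0],
              [-2., -1, 0, 0, 1, 0, -17.0],
              [-1., -1, 0, 0, 0, 1, -12.0]])
cost = np.array([20., 16, 0, 0, 0, 0, 0])   # cost row; last entry holds -f

while (T[:, -1] < -1e-9).any():
    r = int(np.argmin(T[:, -1]))            # pivot row: most negative b-bar
    neg = np.where(T[r, :-1] < -1e-9)[0]    # candidate pivot columns
    # pivot column: minimum ratio c-bar_j / (-a-bar_rj) over negative entries
    s = neg[int(np.argmin(cost[neg] / -T[r, neg]))]
    T[r] /= T[r, s]                         # pivot operation
    for i in range(T.shape[0]):
        if i != r:
            T[i] -= T[i, s] * T[r]
    cost -= cost[s] * T[r]

print(-cost[-1])   # optimal f = 212, reached at x1 = 5, x2 = 7
```

The sketch reproduces the hand iterations: the first pivot brings in x1 (on ā31 = −2), and after three pivots all b̄i become nonnegative, so the solution is simultaneously feasible and optimal.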

4.4 DECOMPOSITION PRINCIPLE

Some of the linear programming problems encountered in practice may be very large in terms of the number of variables and/or constraints. If the problem has some special structure, it is possible to obtain the solution by applying the decomposition principle developed by Dantzig and Wolfe [4.4]. In the decomposition method, the original problem is decomposed into small subproblems, and then these subproblems are …

    A1X1 + A2X2 + ··· + ApXp = b0                                      (4.25b)
    B1X1 = b1,  B2X2 = b2,  ...,  BpXp = bp                            (4.25c)
    X1 ≥ 0,  X2 ≥ 0,  ...,  Xp ≥ 0

where

    X1 = (x1, ..., xm1)ᵀ,  X2 = (xm1+1, ..., xm1+m2)ᵀ,  ...,
    Xp = (xm1+m2+···+mp−1+1, ..., xm1+m2+···+mp)ᵀ,  X = (X1, X2, ..., Xp)ᵀ

It can be noted …

of feasible solutions is a bounded convex set, let sk be the number of vertices of this set. By using the definition of convex combination of a set of points, any point Xk satisfying Eqs. (4.27) can be represented as

    Xk = µk,1 X1(k) + µk,2 X2(k) + ··· + µk,sk Xsk(k)                  (4.28)
    µk,1 + µk,2 + ··· + µk,sk = 1                                      (4.29)
    0 ≤ µk,i ≤ 1,  i = 1, 2, ..., sk,  k = 1, 2, ..., p                (4.30)

where X1(k), X2(k), ..., Xsk(k) are the extreme points …

    µj,i ≥ 0,  i = 1, 2, ..., sj,  j = 1, 2, ..., p                    (4.32)

Since the extreme points X1(k), X2(k), ..., Xsk(k) are known from the solution of the set BkXk = bk, Xk ≥ 0, k = 1, 2, ..., p, and since ck and Ak, k = 1, 2, ..., p, are known as problem data, the unknowns in Eqs. (4.32) are the µj,i, i = 1, 2, ..., sj; j = 1, 2, ..., p. Hence the µj,i will be the new decision variables of the modified problem stated in Eqs. (4.32).

3. Solve the linear programming …

Fertilizer A should not contain more than 60% ammonia, and B should contain at least 50% ammonia. On average, the plant can sell up to 1000 lb/hr, and due to limitations on the production facilities, not more than 600 lb of fertilizer A can be produced per hour. The availability of chemical C1 is restricted to 500 lb/hr. Assuming that the production costs are the same for both A and B, determine the quantities of A and B to be produced per hour …

subject to

    A1X1 + A2X2 ≤ b0
    B1X1 ≤ b1
    B2X2 ≤ b2                                                          (E5)
    X1 ≥ 0,  X2 ≥ 0

where

    X1 = (x1, x2)ᵀ,  X2 = (y1, y2)ᵀ,  c1 = (1, 2)ᵀ,  c2 = (2, 3)ᵀ
    A1 = [[1, 1], [1, 0]],  A2 = [[1, 1], [1, 0]],  b0 = (1000, 500)ᵀ
    B1 = [[1, 1], [1, −2]],  b1 = (600, 0)ᵀ,  B2 = [−2  1],  b2 = {0},  X = (X1, X2)ᵀ

Step 1: We first consider the subsidiary constraint sets

    B1X1 ≤ b1,  X1 ≥ 0                                                 (E6)
    B2X2 ≤ b2,  X2 ≥ 0                                                 (E7)

The convex feasible regions represented by (E6) and (E7) are shown in Figs. 4.1a and b, respectively. The vertices of the two feasible regions are given by

    X1(1) = point P …

    X1(2) = point S = (0, 0)ᵀ
    X2(2) = point T = (1000, 2000)ᵀ
    X3(2) = point U = (1000, 0)ᵀ

Thus any point in the convex feasible sets defined by Eqs. (E6) and (E7) can be represented, respectively, as

    X1 = µ11 (0, 0)ᵀ + µ12 (0, 600)ᵀ + µ13 (400, 200)ᵀ
       = (400µ13, 600µ12 + 200µ13)ᵀ
    with µ11 + µ12 + µ13 = 1,  0 ≤ µ1i ≤ 1,  i = 1, 2, 3              (E8)

and

    X2 = µ21 (0, 0)ᵀ + µ22 (1000, 2000)ᵀ + µ23 (1000, 0)ᵀ
       = (1000µ22 + 1000µ23, 2000µ22)ᵀ
    with µ21 + µ22 + µ23 = 1;  0 ≤ µ2i ≤ 1,  i = …

    µ11 + µ12 + µ13 = 1
    µ21 + µ22 + µ23 = 1

with µ11 ≥ 0, µ12 ≥ 0, µ13 ≥ 0, µ21 ≥ 0, µ22 ≥ 0, µ23 ≥ 0. The optimization problem can be stated in standard form (after adding the slack variables α and β) as

    Minimize f = −1200µ12 − 800µ13 − 8000µ22 − 2000µ23

subject to

    600µ12 + 600µ13 + 3000µ22 + 1000µ23 + α = 1000
    400µ13 + 1000µ22 + 1000µ23 + β = 500
    µ11 + µ12 + µ13 = 1
    µ21 + µ22 + µ23 = 1
    µij ≥ 0  (i = 1, 2; j = 1, 2, …
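The coordinating (master) problem just stated is itself a small LP in the µij and can be checked with any solver. In the sketch below the two convexity equalities are relaxed to ≤ constraints, since µ11 and µ21 only absorb slack in them:

```python
from scipy.optimize import linprog

# Variables in order: mu12, mu13, mu22, mu23
c = [-1200, -800, -8000, -2000]
A_ub = [[600, 600, 3000, 1000],   # coupling constraint 1 (<= 1000)
        [0,   400, 1000, 1000],   # coupling constraint 2 (<= 500)
        [1,   1,   0,    0],      # mu12 + mu13 <= 1  (mu11 takes the slack)
        [0,   0,   1,    1]]      # mu22 + mu23 <= 1  (mu21 takes the slack)
b_ub = [1000, 500, 1, 1]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
print(res.x, res.fun)             # optimum mu22 = 1/3, f = -8000/3
```

The master problem's optimum weights then recover the original variables through the convex combinations (E8).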

4.5.1 Changes in the Right-Hand-Side Constants bi

Suppose that we have found the optimal solution to an LP problem. Let us now change the bi to bi + Δbi so that the new problem differs from the original only on the right-hand side. Our interest is to investigate the effect of changing bi to bi + Δbi on the original optimum. We know that a basis is optimal if the relative cost coefficients corresponding to the nonbasic variables, c̄j, are nonnegative.

that is,

    Δxi = Σᵐj=1 βij Δbj,  i = 1, 2, ..., m                             (4.38)

Finally, the change in the optimal value of the objective function (Δf) due to the change Δbi can be obtained as

    Δf = cBᵀ ΔXB = cBᵀ B⁻¹ Δb = πᵀ Δb = Σᵐj=1 πj Δbj                  (4.39)

Suppose that the changes made in bi (Δbi) are such that the inequality (4.34) is violated for some variables, so that these variables become infeasible for the new right-hand-side vector. Our interest in this case will be to determine the new optimal …

SOLUTION  Let x1, x2, x3, and x4 denote the number of units of products A, B, C, and D produced per day. Then the problem can be stated in standard form as follows:

    Minimize f = −45x1 − 100x2 − 30x3 − 50x4

subject to

    7x1 + 10x2 + 4x3 + 9x4 ≤ 1200
    3x1 + 40x2 + x3 + x4 ≤ 800
    xi ≥ 0,  i = 1 to 4

By introducing the slack variables x5 ≥ 0 and x6 ≥ 0, the problem can be stated in canonical form and the simplex method can be applied. The …
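Before following the hand iterations, it is useful to have the optimum of this example at hand. A quick SciPy check of the standard form just stated:

```python
from scipy.optimize import linprog

c = [-45, -100, -30, -50]            # maximize profit = minimize -profit
A_ub = [[7, 10, 4, 9],               # first resource constraint  (<= 1200)
        [3, 40, 1, 1]]               # second resource constraint (<= 800)
res = linprog(c, A_ub=A_ub, b_ub=[1200, 800], bounds=[(0, None)] * 4)
print(res.x, res.fun)                # x2 = 40/3, x3 = 800/3, fmin = -28000/3
```

This matches the final tableau obtained below: only products B and C are produced, for a maximum profit of $28,000/3.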

Result of pivot operation:

    Basic variables    x1      x2    x3    x4      x5       x6      −f    b̄i
    x3                 5/3      0     1    7/3     4/15    −1/15     0    800/3
    x2                 1/30     1     0   −1/30   −1/150    2/75     0    40/3
    −f                 25/3     0     0   50/3    22/3      2/3      1    28,000/3

The optimum solution is given by

    x2 = 40/3,  x3 = 800/3  (basic variables)
    x1 = x4 = x5 = x6 = 0  (nonbasic variables)
    fmin = −28,000/3,  or  maximum profit = $28,000/3

From the final tableau, one can find that

    XB = (x3, x2)ᵀ = (800/3, 40/3)ᵀ = vector of basic variables in the optimum solution    (E1)
    cB = (c3, c2)ᵀ = (−30, −100)ᵀ = vector of original cost coefficients corresponding to the basic …

If the variables are not renumbered, Eq. (4.36) will be applicable for i = 3 and 2 in the present problem, with Δb3 = 300 and Δb2 = 200. From Eqs. (E1) to (E5) of Example 4.5, the left-hand sides of Eq. (4.36) become

    x3 + β33Δb3 + β32Δb2 = 800/3 + (4/15)(300) − (1/15)(200) = 5000/15
    x2 + β23Δb3 + β22Δb2 = 40/3 − (1/150)(300) + (2/75)(200) = 2500/150

Since both these values are ≥ 0, the original optimal basis B remains optimal even with the new values of bi. The …
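The same computation in matrix form: with B holding the basis columns (A3, A2) of Example 4.5, the new basic solution is XB + B⁻¹Δb and the change in the objective is πᵀΔb, per Eq. (4.39). A NumPy check of the numbers above:

```python
import numpy as np

B = np.array([[4., 10],      # basis columns A3 and A2 of Example 4.5
              [1., 40]])
cB = np.array([-30., -100])  # costs of the basic variables x3, x2
Binv = np.linalg.inv(B)      # equals [[4/15, -1/15], [-1/150, 2/75]]

xB = Binv @ np.array([1200., 800])  # original optimum: (800/3, 40/3)
db = np.array([300., 200])          # changes in the right-hand sides
xB_new = xB + Binv @ db             # (1000/3, 50/3): still >= 0, so the
                                    # basis remains optimal
pi = cB @ Binv                      # simplex multipliers (-22/3, -2/3)
df = pi @ db                        # change in fmin, Eq. (4.39): -7000/3
print(xB_new, df)
```

Because xB_new stays nonnegative, no further pivoting is needed; the new minimum is simply fmin + Δf.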

If the cj are changed to cj + Δcj, the original optimal solution remains optimal, provided that the new values of the relative cost coefficients, c̄j′, satisfy the relation

    c̄j′ = cj + Δcj − Σᵐk=1 (ck + Δck) Σᵐi=1 βki aij
        = c̄j + Δcj − Σᵐk=1 Δck Σᵐi=1 βki aij ≥ 0,   j = m + 1, m + 2, ..., n     (4.43)

where the c̄j indicate the values of the relative cost coefficients corresponding to the original optimal solution. In particular, if changes are made only in the cost coefficients of the …

Example 4.7  Find the effect of changing c3 from −30 to −24 in Example 4.5.

SOLUTION  Here Δc3 = 6, and Eq. (4.43) gives

    c̄1′ = c̄1 + Δc1 − Δc3[a21β32 + a31β33] = 25/3 + 0 − 6[3(−1/15) + 7(4/15)] = −5/3
    c̄4′ = c̄4 + Δc4 − Δc3[a24β32 + a34β33] = 50/3 + 0 − 6[1(−1/15) + 9(4/15)] = 8/3
    c̄5′ = c̄5 + Δc5 − Δc3[a25β32 + a35β33] = 22/3 + 0 − 6[0(−1/15) + 1(4/15)] = 86/15
    c̄6′ = c̄6 + Δc6 − Δc3[a26β32 + a36β33] = 2/3 + 0 − 6[1(−1/15) + 0(4/15)] = 16/15

cost coefficients corresponding to the new variables xn+k be denoted by ai,n+k, i = 1 to m, and cn+k, respectively. If the new variables are treated as additional nonbasic variables in the old optimum solution, the corresponding relative cost coefficients are given by

    c̄n+k = cn+k − Σᵐi=1 πi ai,n+k                                      (4.46)

where π1, π2, ..., πm are the simplex multipliers corresponding to the original optimum solution. The original optimum remains optimum for the new …

the procedure outlined in the preceding section. The second possibility occurs when the changed coefficients Δaij correspond to a basic variable, say xj0, of the old optimal solution. The following procedure can be adopted to examine the effect of changing ai,j0 to ai,j0 + Δai,j0.

1. Introduce a new variable xn+1 to the original system with constraint coefficients

       ai,n+1 = ai,j0 + Δai,j0                                         (4.48)

   and cost coefficient

       cn+1 = cj0  (original value …

Since A1 is changed, we have

    c̄1 = c1 − πᵀA1 = −45 − (−22/3, −2/3)(6, 10)ᵀ = 17/3

As c̄1 is positive, the original optimum solution remains optimum for the new problem also.

Example 4.10  Find the effect of changing A1 from (7, 3)ᵀ to (5, 6)ᵀ in Example 4.5.

SOLUTION  The relative cost coefficient of the nonbasic variable x1 for the new A1 is given by

    c̄1 = c1 − πᵀA1 = −45 − (−22/3, −2/3)(5, 6)ᵀ = −13/3

Since c̄1 is negative, x1 can be brought into the basis to reduce the …

4.5.5 Addition of Constraints

Suppose that we have solved an LP problem with m constraints and obtained the optimal solution. We want to examine the effect of adding some more inequality constraints on the original optimum solution. For this we evaluate the new constraints by substituting the old optimal solution and see whether they are satisfied. If they are satisfied, it means that the inclusion of the new constraints in the old problem …

Thus Eq. (E1) can be expressed as

    2x1 + 5(40/3 − (1/30)x1 + (1/30)x4 + (1/150)x5 − (2/75)x6)
        + 3(800/3 − (5/3)x1 − (7/3)x4 − (4/15)x5 + (1/15)x6) + 4x4 = 600

that is,

    −(19/6)x1 − (17/6)x4 − (23/30)x5 + (1/15)x6 = −800/3               (E2)

Step 2: Transform this constraint so that the right-hand side becomes positive, that is,

    (19/6)x1 + (17/6)x4 + (23/30)x5 − (1/15)x6 = 800/3                 (E3)

Step 3: Add an artificial variable, say xk, the new constraint given by Eq. (E3), and the infeasibility form w = xk into the …

4.6 TRANSPORTATION PROBLEM

This section deals with an important class of LP problems called the transportation problem. As the name indicates, a transportation problem is one in which the objective for minimization is the cost of transporting a certain commodity from a number of origins to a number of destinations. Although the transportation problem can be solved using the regular simplex method, its special structure offers a more …

The problem stated in Eqs. (4.52) to (4.56) was originally formulated and solved by Hitchcock in 1941 [4.6]. It was also considered independently by Koopmans in 1947 [4.7]. Because of these early investigations the problem is sometimes called the Hitchcock–Koopmans transportation problem. The special structure of the transportation matrix can be seen by writing the equations in standard form:

    x11 + x12 + ··· + x1n = a1
    x21 + x22 + ··· + x2n = a2
    ...
    xm1 + xm2 + ··· + xmn = am
    …
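Pending the specialized algorithms of Ref. [4.10], this standard form can be handed to any general LP solver. A sketch with hypothetical data (2 origins, 3 destinations; the costs, supplies, and demands below are made up for illustration):

```python
import numpy as np
from scipy.optimize import linprog

cost = np.array([[4., 6, 8],        # hypothetical unit costs c_ij
                 [5., 4, 3]])
supply = [30., 40]                  # a_i at the origins
demand = [20., 25, 25]              # b_j at the destinations (sums match)
m, n = cost.shape

A_eq, b_eq = [], []
for i in range(m):                  # origin equations: sum_j x_ij = a_i
    row = np.zeros((m, n)); row[i, :] = 1
    A_eq.append(row.ravel()); b_eq.append(supply[i])
for j in range(n):                  # destination equations: sum_i x_ij = b_j
    col = np.zeros((m, n)); col[:, j] = 1
    A_eq.append(col.ravel()); b_eq.append(demand[j])

res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (m * n))
print(res.x.reshape(m, n), res.fun)   # minimum cost 275 for this data
```

Note that one of the m + n equality rows is redundant (the supplies and demands balance), which modern LP solvers handle without difficulty.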

Figure 4.2  Transportation array.

4. Select a variable to leave the basis from among the current basic variables (using the feasibility condition).
5. Find a new basic feasible solution and return to step 2.

The details of these steps are given in Ref. [4.10].

4.7 KARMARKAR'S INTERIOR METHOD

Karmarkar proposed a new method in 1984 for solving large-scale linear programming problems very efficiently. The method is known as an interior …

Figure 4.3  Improvement of the objective function from different points of a polytope.

Karmarkar's method is based on the following two observations:

1. If the current solution is near the center of the polytope, we can move along the steepest descent direction to reduce the value of f by a maximum amount. From Fig. 4.3, we can see that the current solution can be improved substantially by moving along the steepest descent direction if it is near the center (point 2) …

where X = (x1, x2, ..., xn)ᵀ, c = (c1, c2, ..., cn)ᵀ, and [a] is an m × n matrix. In addition, an interior feasible starting solution to Eqs. (4.59) must be known. Usually,

    X = (1/n, 1/n, ..., 1/n)ᵀ

is chosen as the starting point. In addition, the optimum value of f must be zero for the problem. Thus

    X(1) = (1/n, 1/n, ..., 1/n)ᵀ = interior feasible starting point,  fmin = 0     (4.60)

Although most LP problems may not be available in the form of Eq. (4.59) while satisfying …

We now define a new vector z as z = (z̄, zn−2, zn−1, zn)ᵀ and solve the following related problem instead of the problem in Eqs. (4.64):

    Minimize (βdᵀ, 0, 0, M) z

subject to

    (the constraint block is garbled in the extraction: it couples [α], the vectors nβb and nβb − [α]e, and z; see Eq. (4.65))
    eᵀz̄ + zn−2 + zn−1 + zn = 1
    z ≥ 0                                                              (4.65)

where e is an (m − 1)-component vector whose elements are all equal to 1, zn−2 is a slack variable that absorbs the difference between 1 and the sum of the other …

subject to

    3x1 + x2 − 2x3 = 3
    5x1 − 2x2 = 2
    xi ≥ 0,  i = 1, 2, 3

SOLUTION  It can be seen that

    d = (2, 3, 0)ᵀ,  [α] = [[3, 1, −2], [5, −2, 0]],  b = (3, 2)ᵀ,  X = (x1, x2, x3)ᵀ

We define the integers m and n as n = 6 and m = 3 and choose β = 10. Noting that e = (1, 1, 1)ᵀ, Eqs. (4.66) can be expressed as

    Minimize (20, 30, 0, 0, 0, M) z

subject to (the constraint matrix, built from [α], b, and β as in Eq. (4.65), is garbled in the extraction) …

2. Test for optimality. Since f = 0 at the optimum point, we stop the procedure if the following convergence criterion is satisfied:

       ‖cᵀX(k)‖ ≤ ε                                                    (4.67)

   where ε is a small number. If Eq. (4.67) is not satisfied, go to step 3.

3. Compute the next point, X(k+1). For this, we first find a point Y(k+1) in the transformed unit simplex as

       Y(k+1) = (1/n, 1/n, ..., 1/n)ᵀ
                − α([I] − [P]ᵀ([P][P]ᵀ)⁻¹[P]) [D(X(k))] c / (‖c‖ √(n(n − 1)))     (4.68)

   where ‖c‖ is the length of the …

Step 1: We choose the initial feasible point as X(1) = (1/3, 1/3, 1/3)ᵀ and set k = 1.

Step 2: Since |f(X(1))| = |2/3| > 0.05, we go to step 3.

Step 3: Since [a] = {0, 1, −1}, c = (2, 1, −1)ᵀ, and ‖c‖ = √(2² + 1² + (−1)²) = √6, we find that

    [D(X(1))] = diag(1/3, 1/3, 1/3)
    [a][D(X(1))] = {0, 1/3, −1/3}
    [P] = [[0, 1/3, −1/3], [1, 1, 1]]
    ([P][P]ᵀ)⁻¹ = [[2/9, 0], [0, 3]]⁻¹ = [[9/2, 0], [0, 1/3]]
    [D(X(1))]c = …

Noting that

    Σ³r=1 xr(1) yr(2) = (1/3)(34/108) + (1/3)(37/108) + (1/3)(37/108) = 1/3

Eq. (4.71) can be used to find

    xi(2) = xi(1) yi(2) / Σ³r=1 xr(1) yr(2) = 3 (34/324, 37/324, 37/324)ᵀ = (34/108, 37/108, 37/108)ᵀ

Set the new iteration number as k = k + 1 = 2 and go to step 2. The procedure is to be continued until convergence is achieved.

Notes:
1. Although X(2) = Y(2) in …
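The projection step of Eq. (4.68) and the inverse transformation of Eq. (4.71) are compact in NumPy. The sketch below reproduces the iteration above; the step length α = 1/4 is an assumption here, being the value the quoted numbers imply:

```python
import numpy as np

a = np.array([[0., 1, -1]])             # constraint [a]X = 0 of the example
c = np.array([2., 1, -1])
n = 3
X = np.full(n, 1 / n)                   # X(1) = (1/3, 1/3, 1/3)
alpha = 0.25                            # assumed step length

D = np.diag(X)                          # [D(X(k))]
P = np.vstack([a @ D, np.ones(n)])      # [P]: scaled constraints plus e^T
proj = np.eye(n) - P.T @ np.linalg.inv(P @ P.T) @ P   # projection matrix
Y = np.full(n, 1 / n) - alpha * proj @ (D @ c) / (
    np.linalg.norm(c) * np.sqrt(n * (n - 1)))         # Eq. (4.68)

X_new = X * Y / (X @ Y)                 # inverse transformation, Eq. (4.71)
print(Y, X_new)                         # both equal (34/108, 37/108, 37/108)
```

Because X(1) is the simplex center here, X(2) coincides with Y(2), as the Notes point out.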

reduces to an LP problem. The solution of the quadratic programming problem stated in Eqs. (4.72) to (4.74) can be obtained by using the Lagrange multiplier technique. By introducing the slack variables si², i = 1, 2, ..., m, in Eqs. (4.73) and the surplus variables tj², j = 1, 2, ..., n, in Eqs. (4.74), the quadratic programming problem can be written as

    Minimize f(X) = CᵀX + ½ XᵀDX                                       (4.72)

subject to the equality constraints

    AiᵀX + si² = …

Multiplying Eq. (4.79) by si and Eq. (4.80) by tj, we obtain

    λi si² = λi Yi = 0,   i = 1, 2, ..., m                             (4.85)
    θj tj² = 0,           j = 1, 2, ..., n                             (4.86)

Combining Eqs. (4.84) and (4.85), and Eqs. (4.82) and (4.86), we obtain

    λi(AiᵀX − bi) = 0,    i = 1, 2, ..., m                             (4.87)
    θj xj = 0,            j = 1, 2, ..., n                             (4.88)

Thus the necessary conditions can be summarized as follows:

    cj − θj + Σⁿi=1 xi dij + Σᵐi=1 λi aij = 0,  j = 1, 2, ..., n       (4.89)
    AiᵀX − bi = −Yi,      i = 1, 2, ..., m                             (4.90)
    xj ≥ 0,  j = …

phase I. This procedure involves the introduction of n nonnegative artificial variables zj into Eqs. (4.89) so that

    cj − θj + Σⁿi=1 xi dij + Σᵐi=1 λi aij + zj = 0,  j = 1, 2, ..., n  (4.97)

Then we minimize

    F = Σⁿj=1 zj                                                       (4.98)

subject to the constraints

    cj − θj + Σⁿi=1 xi dij + Σᵐi=1 λi aij + zj = 0,  j = 1, 2, ..., n
    AiᵀX + Yi = bi,  i = 1, 2, ..., m
    X ≥ 0,  Y ≥ 0,  λ ≥ 0,  θ ≥ 0

While solving this problem, we have to take care of the additional …

By comparing this problem with the one stated in Eqs. (4.72) to (4.74), we find that

    c1 = −4,  c2 = 0,  D = [[2, −2], [−2, 4]],  A = [[2, 1], [1, −4]]
    A1 = (2, 1)ᵀ,  A2 = (1, −4)ᵀ,  B = (6, 0)ᵀ

The necessary conditions for the solution of the problem stated in Eqs. (E1) can be obtained, using Eqs. (4.89) to (4.96), as

    −4 − θ1 + 2x1 − 2x2 + 2λ1 + λ2 = 0
    0 − θ2 − 2x1 + 4x2 + λ1 − 4λ2 = 0
    2x1 + x2 − 6 = −Y1                                                 (E2)
    x1 − 4x2 − 0 = −Y2

    x1 ≥ 0,  x2 ≥ 0,  Y1 ≥ 0,  Y2 ≥ 0,  λ1 ≥ 0,  λ2 ≥ …

According to the regular procedure of the simplex method, λ1 enters the next basis since its cost coefficient is the most negative, and z2 leaves the basis since the ratio b̄i/āis is smallest for z2. However, λ1 cannot enter the basis, as Y1 is already in the basis [to satisfy Eqs. (E4)]. Hence we select x2 for entering the next basis. According to this choice, z2 leaves the basis. By carrying out the required pivot operation, we obtain …

(The final tableau of the phase I procedure is garbled in the extraction; its basic variables are x1, Y2, λ1, and x2.) Since both the artificial variables z1 and z2 are driven out of the basis, the present tableau gives the desired solution as

    x1 = 32/13,  x2 = 14/13,  Y2 = 24/13,  λ1 = 8/13  (basic variables)
    λ2 = 0,  Y1 = 0,  θ1 = 0,  θ2 = 0  (nonbasic variables)
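Since D is positive definite here, the objective is convex, and the solution found by the complementary pivot procedure can be cross-checked with a general constrained minimizer; a SciPy sketch for the same QP:

```python
import numpy as np
from scipy.optimize import minimize

D = np.array([[2., -2], [-2., 4]])
C = np.array([-4., 0])

def f(x):                       # f(X) = C^T X + (1/2) X^T D X
    return C @ x + 0.5 * x @ D @ x

cons = [{'type': 'ineq', 'fun': lambda x: 6 - 2 * x[0] - x[1]},  # 2x1 + x2 <= 6
        {'type': 'ineq', 'fun': lambda x: 4 * x[1] - x[0]}]      # x1 - 4x2 <= 0
res = minimize(f, x0=[1., 1], bounds=[(0, None)] * 2, constraints=cons)
print(res.x, res.fun)           # x = (32/13, 14/13), f = -88/13
```

The multiplier λ1 = 8/13 corresponds to the active first constraint; the second constraint is slack (Y2 = 24/13), consistent with λ2 = 0 by Eq. (4.87).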

4.9 MATLAB SOLUTIONS

(The statement of Example 4.15 falls in a gap of the extraction; its data can be read off the MATLAB input below.)

Step 1: Express the constraints in the form Ax ≤ b and identify the matrix A and the vector b as

    A = [[2, 1, −1], [2, −1, 5], [4, 1, 1]],  b = (2, 6, 6)ᵀ

Step 2: Use the command for executing the linear programming program using the interior point method as indicated below:

    clc
    clear all
    f=[-1;-2;-1];
    A=[2 1 -1;2 -1 5;4 1 1];
    b=[2;6;6];
    lb=zeros(3,1);
    Aeq=[];
    beq=[];
    options = optimset('Display','iter');
    [x,fval,exitflag,output] = …
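For readers without MATLAB, the same problem can be solved with SciPy's linprog, whose recent versions default to the HiGHS solvers (which include an interior-point variant):

```python
from scipy.optimize import linprog

f = [-1, -2, -1]
A = [[2, 1, -1],
     [2, -1, 5],
     [4, 1, 1]]
b = [2, 6, 6]
res = linprog(f, A_ub=A, b_ub=b, bounds=[(0, None)] * 3)
print(res.x, res.fun)   # x = (0, 4, 2), fmin = -10
```

This agrees with the revised simplex solution of the same LP obtained in Section 4.2.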

Example 4.16  Find the solution of the following quadratic programming problem using MATLAB:

    Minimize f = −4x1 + x1² − 2x1x2 + 2x2²

subject to

    2x1 + x2 ≤ 6,  x1 − 4x2 ≤ 0,  x1 ≥ 0,  x2 ≥ 0

SOLUTION
Step 1: Express the objective function in the form f(x) = ½ xᵀH x + fᵀx and identify the matrix H and the vectors f and x:

    H = [[2, −2], [−2, 4]],  f = (−4, 0)ᵀ,  x = (x1, x2)ᵀ

Step 2: State the constraints in the form Ax ≤ b and identify the matrix A and vector b:

    A = [[2, 1], [1, −4]],  b = (6, 0)ᵀ

Step 3: Use the command …

4.3 G. B. Dantzig, L. R. Ford, and D. R. Fulkerson, A primal–dual algorithm for linear programs, pp. 171–181 in Linear Inequalities and Related Systems, H. W. Kuhn and A. W. Tucker, Eds., Annals of Mathematics Study No. 38, Princeton University Press, Princeton, NJ, 1956.
4.4 G. B. Dantzig and P. Wolfe, Decomposition principle for linear programming, Operations Research, Vol. 8, pp. 101–111, 1960.
4.5 L. S. Lasdon, Optimization Theory for …

REVIEW QUESTIONS

4.1 Is the decomposition method efficient for all LP problems?
4.2 What is the scope of postoptimality analysis?
4.3 Why is Karmarkar's method called an interior method?
4.4 What is the major difference between the simplex and Karmarkar methods?
4.5 State the form of LP problem required by Karmarkar's method.
4.6 What are the advantages of the revised simplex method?
4.7 Match the following terms and descriptions:
    (a) Karmarkar's method        Moves from one vertex to …

4.2 Maximize f = 15x1 + 6x2 + 9x3 + 2x4

    subject to
    10x1 + 5x2 + 25x3 + 3x4 ≤ 50
    12x1 + 4x2 + 12x3 + x4 ≤ 48
    7x1 + x4 ≤ 35
    xi ≥ 0,  i = 1 to 4

4.3 Minimize f = 2x1 + 3x2 + 2x3 − x4 + x5

    subject to
    3x1 − 3x2 + 4x3 + 2x4 − x5 = 0
    x1 + x2 + x3 + 3x4 + x5 = 2
    xi ≥ 0,  i = 1, 2, ..., 5

4.4 Discuss the relationships between the regular simplex method and the revised simplex method.

4.5 Solve the following LP problem graphically and by the revised …

    x1 − x2 ≤ 11
    x1 ≥ 0,  x2 unrestricted in sign

    (a) Write the dual of this problem.
    (b) Find the optimum solution of the dual.
    (c) Verify the solution obtained in part (b) by solving the primal problem graphically.

4.8 A water resource system consisting of two reservoirs is shown in Fig. 4.4. The flows and storages are expressed in a consistent set of units. The following data are available:

    Quantity                           Stream 1 (i = 1)    Stream 2 (i = 2)
    Capacity of reservoir i            9                   7
    Available release from reservoir …

    subject to
    x1 + x2 + 2x3 − x5 − x6 = 1
    −2x1 + x3 + x4 + x5 − x7 = 2
    xi ≥ 0,  i = 1 to 7

4.10 Solve Problem 3.1 by solving its dual.

4.11 Show that neither the primal nor the dual of the problem

    Maximize f = −x1 + 2x2
    subject to
    −x1 + x2 ≤ −2
    x1 − x2 ≤ 1
    x1 ≥ 0,  x2 ≥ 0

    has a feasible solution. Verify your result graphically.

4.12 Solve the following LP problem by the decomposition principle, and verify your result by solving it by the …


    Problems2434.14Express the dual of the following LP problem:Maximize f = 2x1 + x2subject tox1 − 2x2 ≥ 2x1 + 2x2 = 8x1 − x2 ≤ 11x1 ≥ 0,x2 is unrestricted in sign4.15Find the effect of changing b =1200800to1180120in Example 4.5 using sensitivity analysis.4.16Find the effect of changing the cost coefficients c1 and c4 from −45 and −50 to −40and −60, respectively, in Example 4.5 using sensitivity analysis.4.17Find the effect of changing c1 from −45 to −40 and c2 from −100 to read more..


    244Linear Programming II: Additional Topics and Extensions4.25Assume that products A, B, C, and Drequire, in addition to the stated amounts of copperand zinc, 4, 3, 2 and 5 lb of nickel per unit, respectively. If the total quantity of nickelavailable is 2000 lb, in what way the original optimum solution is affected?4.26If product Arequires 5 lb of copper and 3 lb of zinc (instead of 4 lb of copper and 2 lbof zinc) per unit, find the change in the optimum solution.4.27If product Crequires 5 lb read more..


    Problems2454.38Transform the following LP problem into the form required by Karmarkar’s method:Minimize f = x1 + x2 + x3subject tox1 + x2 − x3 = 43x1 − x2 = 0xi ≥ 0,i = 1, 2, 34.39A contractor has three sets of heavy construction equipment available at both New Yorkand Los Angeles. He has construction jobs in Seattle, Houston, and Detroit that requiretwo, three, and one set of equipment, respectively. The shipping costs per set betweencities iand j(cij ) are shown in Fig. 4.5. Formulate read more..


    246Linear Programming II: Additional Topics and ExtensionsFigure 4.6Plastic hinges in a frame.Figure 4.7Possible failure mechanisms of a portal frame. read more..


    Problems247to ensure nonzero reserve strength in each failure mechanism. Also, suggest a suitabletechnique for solving the problem. Assume that the moment capacities are restricted as0 ≤ Mi ≤ 2 × 105 lb-in., i = 1, 2, . . . ,7. Data: x = 100 in., y = 150 in., P1 = 1000 lb,and P2 = 500 lb.4.43Solve the LP problem stated in Problem 4.9 using MATLAB (interior method).4.44Solve the LP problem stated in Problem 4.12 using MATLAB (interior method).4.45Solve the LP problem stated in Problem 4.13 read more..


    5Nonlinear Programming I:One-Dimensional MinimizationMethods5.1INTRODUCTIONIn Chapter 2 we saw that if the expressions for the objective function and the constraintsare fairly simple in terms of the design variables, the classical methods of optimizationcan be used to solve the problem. On the other hand, if the optimization probleminvolves the objective function and/or constraints that are not stated as explicit functionsof the design variables or which are too complicated to manipulate, we read more..


    5.1Introduction249Figure 5.1Planar truss: (a) nodal and member numbers; (b) nodal degrees of, as they correspond to the fixed nodes)(4x4+ x6+ x7)u1+√3(x6− x7)u2− 4x4u3− x7u7+√3x7u8= 0(E1)√3(x6− x7)u1+ 3(x6+ x7)u2+√3x7u7− 3x7u8= −4RlE(E2)− 4x4u1+ (4x4+ 4x5+ x8+ x9)u3+√3(x8− x9)u4− 4x5u5− x8u7−√3x8u8− x9u9+√3x9u10= 0(E3)√3(x8− x9)u3+ 3(x8+ x9)u4−√3x8u7− 3x8u8+√3x9u9− 3x9u10= 0(E4)− 4x5u3+ (4x5+ x10+ x11)u5+√3(x10− x11)u6− read more..


    250Nonlinear Programming I: One-Dimensional Minimization MethodsIt is important to note that an explicit closed-form solution cannot be obtained forthe displacements as the number of equations becomes large. However, given anyvector X, the system of Eqs. (E1) to (E10) can be solved numerically to find the nodaldisplacement u1, u2, . . . , u10.The optimization problem can be stated as follows:Minimize f (X)=11i=1ρxili(E11)subject to the constraintsgj (X)= |uj(X)| − δ≤ 0,j= 1, 2, . . . read more..


    5.1Introduction251Figure 5.2Contact stress between two spheres.SOLUTIONFor ν1= ν2= 0.3, Eq. (E1) reduces tof (λ)=0.751+ λ2+ 0.65λ tan−11λ− 0.65(E4)where f= τzx/pmax and λ= z/a. Since Eq. (E4) is a nonlinear function of the distance,λ, the application of the necessary condition for the maximum of f, df/dλ= 0, givesrise to a nonlinear equation from which a closed-form solution for λ∗ cannot easily beobtained. In such cases, numerical methods of optimization can be conveniently read more..


    252Nonlinear Programming I: One-Dimensional Minimization Methodsx2Figure 5.3Iterative process of optimization.The iterative procedure indicated by Eq. (5.1) is valid for unconstrained as well asconstrained optimization problems. The procedure is represented graphically for a hypo-thetical two-variable problem in Fig. 5.3. Equation (5.1) indicates that the efficiencyof an optimization method depends on the efficiency with which the quantities λ∗i andSi are determined. The methods of finding read more..


5.2 Unimodal Function 253

Table 5.1 One-dimensional Minimization Methods

Numerical methods:
- Elimination methods: unrestricted search, exhaustive search, dichotomous search, Fibonacci method, golden section method
- Interpolation methods:
  - Requiring no derivatives: quadratic
  - Requiring derivatives: cubic, direct root (Newton, quasi-Newton, secant)
Analytical methods (differential calculus methods)

The interpolation methods involve polynomial approximations to the given function. The direct root methods are root-finding methods that can be ...


    254Nonlinear Programming I: One-Dimensional Minimization MethodsFigure 5.5Outcome of first two experiments: (a) f1 < f2; (b) f1 > f2; (c) f1= f2.For example, consider the normalized interval [0, 1] and two function evaluationswithin the interval as shown in Fig. 5.5. There are three possible outcomes, namely,f1 < f2, f1 > f2, or f1= f2. If the outcome is that f1 < f2, the minimizing xcannotlie to the right of x2. Thus that part of the interval [x2, 1] can be discarded and a read more..


5.3 Unrestricted Search 255

used must be small in relation to the final accuracy desired. Although this method is very simple to implement, it is not efficient in many cases. This method is described in the following steps:

1. Start with an initial guess point, say, x1.
2. Find f1 = f(x1).
3. Assuming a step size s, find x2 = x1 + s.
4. Find f2 = f(x2).
5. If f2 < f1, and if the problem is one of minimization, the assumption of unimodality indicates that the desired minimum cannot lie at x < x1. ...


i    Value of s    xi = x1 + s    fi = f(xi)    Is fi > fi-1?
1    -             0.0            0.0           -
2    0.05          0.05           −0.0725       No
3    0.10          0.10           −0.140        No
4    0.20          0.20           −0.260        No
5    0.40          0.40           −0.440        No
6    0.80          0.80           −0.560        No
7    1.60          1.60           +0.160        Yes

From these results, the optimum point can be seen to be xopt ≈ x6 = 0.8. In this case, the points x6 and x7 do not really bracket the minimum point but provide information about it. If a better approximation to the minimum is desired, the procedure can be restarted ...
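The accelerated-step search traced in the table above can be put into code. A minimal sketch (function and variable names are my own), using the same f(x) = x(x − 1.5), starting point x1 = 0, and initial step 0.05:

```python
def search_accelerated(f, x1=0.0, s0=0.05):
    """Unrestricted search with accelerated step size: starting from x1,
    keep doubling the step s and evaluating at x1 + s until the function
    value increases; the last two points then flank the minimum."""
    s = s0
    x_prev, f_prev = x1, f(x1)
    while True:
        x = x1 + s
        if f(x) > f_prev:          # function started to increase
            return x_prev, x       # approximate bracket of the minimum
        x_prev, f_prev = x, f(x)
        s *= 2.0                   # accelerate the step size

lo, hi = search_accelerated(lambda x: x * (x - 1.5))
# visits the trial points 0.05, 0.1, 0.2, 0.4, 0.8, 1.6 of the table
```

For f = x(x − 1.5) this returns the pair (0.8, 1.6), matching the table's conclusion that xopt ≈ x6 = 0.8.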


5.5 Dichotomous Search 257

is given by

Ln = xj+1 − xj−1 = (2/(n + 1)) L0    (5.2)

The final interval of uncertainty obtainable for different numbers of trials in the exhaustive search method is given below:

Number of trials    2      3      4      5      6      · · ·    n
Ln/L0               2/3    2/4    2/5    2/6    2/7    · · ·    2/(n + 1)

Since the function is evaluated at all n points simultaneously, this method can be called a simultaneous search method. This method is relatively inefficient compared to the sequential search methods discussed next, where the information gained ...
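The exhaustive search summarized by Eq. (5.2) is easy to sketch in code (function name and test setup are mine):

```python
def exhaustive_search(f, a, b, n):
    """Exhaustive search: evaluate f at n equally spaced interior points;
    by unimodality the minimum lies between the two neighbours of the best
    point, so the final interval has length Ln = 2 L0 / (n + 1)."""
    xs = [a + (b - a) * j / (n + 1) for j in range(1, n + 1)]
    fs = [f(x) for x in xs]
    j = fs.index(min(fs))
    lo = xs[j - 1] if j > 0 else a
    hi = xs[j + 1] if j < n - 1 else b
    return lo, hi

lo, hi = exhaustive_search(lambda x: x * (x - 1.5), 0.0, 1.0, 9)
# final interval has length 2/(9 + 1) = 0.2 and contains x* = 0.75
```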


    258Nonlinear Programming I: One-Dimensional Minimization MethodsFigure 5.7Dichotomous search.function at the two points, almost half of the interval of uncertainty is eliminated. Letthe positions of the two experiments be given by (Fig. 5.7)x1=L02−δ2x2=L02+δ2where δis a small positive number chosen so that the two experiments give significantlydifferent results. Then the new interval of uncertainty is given by (L0/2+ δ/2). Thebuilding block of dichotomous search consists of conducting a read more..


where δ is a small quantity, say 0.001, and n is the number of experiments. If the middle point of the final interval is taken as the optimum point, the requirement can be stated as

(1/2)(Ln/L0) ≤ 1/10

i.e.,

1/2^(n/2) + (δ/L0)(1 − 1/2^(n/2)) ≤ 1/5

Since δ = 0.001 and L0 = 1.0, we have

1/2^(n/2) + (1/1000)(1 − 1/2^(n/2)) ≤ 1/5

i.e.,

(999/1000)(1/2^(n/2)) ≤ 995/5000

or

2^(n/2) ≥ 999/199 ≃ 5.0

Since n has to be even, this inequality gives the minimum admissible value of n as 6. The search is made as follows. The first two experiments are made at ...


    260Nonlinear Programming I: One-Dimensional Minimization MethodsThe final set of experiments will be conducted atx5= 0.74925+1.0− 0.749252− 0.0005= 0.874125x6= 0.74925+1.0− 0.749252+ 0.0005= 0.875125The corresponding function values aref5= f (x5)= 0.874125(−0.625875) = −0.5470929844f6= f (x6)= 0.875125(−0.624875) = −0.5468437342Since f5 < f6, the new interval of uncertainty is given by (x3, x6)= (0.74925,0.875125). The middle point of this interval can be taken as optimum, and read more..
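The computation of Example 5.5 can be automated. A minimal sketch (function name is mine) that reproduces the intervals above for f = x(x − 1.5) on (0, 1) with n = 6 and δ = 0.001:

```python
def dichotomous_search(f, a, b, n, delta=0.001):
    """Dichotomous search: each pair of experiments is placed delta apart
    about the middle of the current interval, discarding (almost) half of
    the interval per pair; n is the total number of experiments."""
    for _ in range(n // 2):
        mid = 0.5 * (a + b)
        x1, x2 = mid - 0.5 * delta, mid + 0.5 * delta
        if f(x1) < f(x2):
            b = x2       # minimum cannot lie to the right of x2
        else:
            a = x1       # minimum cannot lie to the left of x1
    return a, b

a, b = dichotomous_search(lambda x: x * (x - 1.5), 0.0, 1.0, 6)
# final interval (0.74925, 0.875125), middle point ~0.8122, as in Example 5.5
```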


    5.6Interval Halving Method261Figure 5.8Possibilities in the interval halving method: (a) f2 > f0 > f1; (b) f1 > f0 > f2;(c) f1 > f0 and f2 > f0. read more..


Example 5.6 Find the minimum of f = x(x − 1.5) in the interval (0.0, 1.0) to within 10% of the exact value.

SOLUTION If the middle point of the final interval of uncertainty is taken as the optimum point, the specified accuracy can be achieved if

(1/2)Ln ≤ L0/10, or (1/2)^((n−1)/2) L0 ≤ L0/5    (E1)

Since L0 = 1, Eq. (E1) gives

(1/2)^((n−1)/2) ≤ 1/5, or 2^((n−1)/2) ≥ 5    (E2)

Since n has to be odd, inequality (E2) gives the minimum permissible value of n as 7. With ...
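A sketch of the interval-halving scheme of Fig. 5.8 (names are mine); for the setting of Example 5.6, n = 7 evaluations give a final interval of length (1/2)^3 L0 = 0.125:

```python
def interval_halving(f, a, b, n):
    """Interval halving: keep the midpoint x0 and evaluate f at the two
    quarter points; one of the three cases of Fig. 5.8 discards half of
    the interval.  n (odd, >= 3) is the total number of evaluations."""
    x0, f0 = 0.5 * (a + b), f(0.5 * (a + b))
    evals = 1
    while evals < n:
        L = b - a
        x1, x2 = a + 0.25 * L, b - 0.25 * L
        f1, f2 = f(x1), f(x2)
        evals += 2
        if f1 < f0:                 # case (a): minimum in (a, x0)
            b, x0, f0 = x0, x1, f1
        elif f2 < f0:               # case (b): minimum in (x0, b)
            a, x0, f0 = x0, x2, f2
        else:                       # case (c): minimum in (x1, x2)
            a, b = x1, x2
    return a, b

a, b = interval_halving(lambda x: x * (x - 1.5), 0.0, 1.0, 7)
# final interval of length 0.125 containing x* = 0.75
```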


    5.7Fibonacci Method2635.7FIBONACCI METHODAs stated earlier, the Fibonacci methodcan be used to find the minimum of a functionof one variable even if the function is not continuous. This method, like many otherelimination methods, has the following limitations:1. The initial interval of uncertainty, in which the optimum lies, has to be known.2. The function being optimized has to be unimodal in the initial interval of uncer-tainty.3. The exact optimum cannot be located in this method. Only an read more..


    264Nonlinear Programming I: One-Dimensional Minimization Methodsand with one experiment left in it. This experiment will be at a distance ofL∗2=Fn−2FnL0=Fn−2Fn−1L2(5.8)from one end andL2− L∗2=Fn−3FnL0=Fn−3Fn−1L2(5.9)from the other end. Now place the third experiment in the interval L2 so that the currenttwo experiments are located at a distance ofL∗3=Fn−3FnL0=Fn−3Fn−1L2(5.10)from each end of the interval L2. Again the unimodality property will allow us toreduce the read more..


Table 5.2 Reduction Ratios

Value of n    Fibonacci number, Fn    Reduction ratio, Ln/L0
0             1                       1.0
1             1                       1.0
2             2                       0.5
3             3                       0.3333
4             5                       0.2
5             8                       0.1250
6             13                      0.07692
7             21                      0.04762
8             34                      0.02941
9             55                      0.01818
10            89                      0.01124
11            144                     0.006944
12            233                     0.004292
13            377                     0.002653
14            610                     0.001639
15            987                     0.001013
16            1,597                   0.0006406
17            2,584                   0.0003870
18            4,181                   0.0002392
19            6,765                   0.0001479
20            10,946                  0.00009135

Position of the Final Experiment. In this method the last experiment has to be placed with some care. Equation (5.12) gives

L*n / Ln−1 = F0/F2 = 1/2 for all n    (5.16)

Thus after ...
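The reductions of Table 5.2 can be realized with a short sketch of the Fibonacci search (names are mine). For simplicity this sketch omits the ε-displacement of the final experiment discussed above, so its final bracket has length 2L0/Fn rather than L0/Fn:

```python
def fibonacci_search(f, a, b, n):
    """Fibonacci search with n experiments: the first two points are placed
    at distances (F[n-2]/F[n]) L0 and (F[n-1]/F[n]) L0 from a; thereafter
    each new point is the mirror image of the surviving point in the
    reduced interval."""
    F = [1, 1]
    while len(F) <= n:
        F.append(F[-1] + F[-2])
    x1 = a + F[n - 2] / F[n] * (b - a)
    x2 = a + F[n - 1] / F[n] * (b - a)
    f1, f2 = f(x1), f(x2)
    for _ in range(n - 2):
        if f1 < f2:                  # discard (x2, b)
            b, x2, f2 = x2, x1, f1
            x1 = a + (b - x2)        # mirror image of x2 in (a, b)
            f1 = f(x1)
        else:                        # discard (a, x1)
            a, x1, f1 = x1, x2, f2
            x2 = b - (x1 - a)
            f2 = f(x2)
    return a, b

a, b = fibonacci_search(lambda x: x * (x - 1.5), 0.0, 1.0, 6)
# bracket of length 2/F6 = 2/13 containing x* = 0.75
```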


    266Nonlinear Programming I: One-Dimensional Minimization MethodsFigure 5.9Flowchart for implementing Fibonacci search method. read more..


    5.8Golden Section Method267SOLUTIONHere n= 6 and L0= 3.0, which yieldL∗2=Fn−2FnL0=513(3.0)= 1.153846Thus the positions of the first two experiments are given by x1= 1.153846 andx2= 3.0− 1.153846= 1.846154 with f1= f (x1)= −0.207270 and f2= f (x2)=−0.115843. Since f1 is less than f2, we can delete the interval [x2, 3.0] by usingthe unimodality assumption (Fig. 5.10a). The third experiment is placed at x3= 0+(x2− x1)= 1.846154− 1.153846= 0.692308, with the corresponding function read more..


    268Nonlinear Programming I: One-Dimensional Minimization MethodsFigure 5.10Graphical representation of the solution of Example 5.7. read more..


Figure 5.10 (continued)

This result can be generalized to obtain

Lk = lim_{N→∞} (F_{N−1}/F_N)^(k−1) L0    (5.19)

Using the relation

F_N = F_{N−1} + F_{N−2}    (5.20)

we obtain, after dividing both sides by F_{N−1},

F_N/F_{N−1} = 1 + F_{N−2}/F_{N−1}    (5.21)

By defining a ratio γ as

γ = lim_{N→∞} F_N/F_{N−1}    (5.22)


Eq. (5.21) can be expressed as

γ ≃ 1/γ + 1

that is,

γ² − γ − 1 = 0    (5.23)

This gives the root γ = 1.618, and hence Eq. (5.19) yields

Lk = (1/γ)^(k−1) L0 = (0.618)^(k−1) L0    (5.24)

In Eq. (5.18) the ratios F_{N−2}/F_{N−1} and F_{N−1}/F_N have been taken to be the same for large values of N. The validity of this assumption can be seen from the following table:

Value of N           2      3       4      5       6        7       8        9        10       ∞
Ratio F_{N−1}/F_N    0.5    0.667   0.6    0.625   0.6156   0.619   0.6177   0.6181   0.6184   0.618

The ratio ...


    5.9Comparison of Elimination Methods271Example 5.8 Minimize the functionf (x)= 0.65− [0.75/(1+ x2)] − 0.65x tan−1(1/x)using the golden section method with n= 6.SOLUTIONThe locations of the first two experiments are defined by L∗2=0.382L0= (0.382)(3.0)= 1.1460. Thus x1= 1.1460 and x2= 3.0− 1.1460= 1.8540with f1= f (x1)= −0.208654 and f2= f (x2)= −0.115124. Since f1 < f2, wedelete the interval [x2, 3.0] based on the assumption of unimodality and obtain the newinterval of read more..
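The iterations of Example 5.8 follow the fixed ratio 0.618 at every stage. A compact sketch (names are mine), applied to the same function f(λ) = 0.65 − 0.75/(1 + λ²) − 0.65 λ tan⁻¹(1/λ) on (0, 3):

```python
import math

def golden_section(f, a, b, n):
    """Golden section search: every stage reduces the interval by the
    constant factor 0.618... = 1/gamma, so Ln = (0.618)^(n-1) L0."""
    g = 0.5 * (math.sqrt(5.0) - 1.0)         # 0.6180...
    x1, x2 = b - g * (b - a), a + g * (b - a)
    f1, f2 = f(x1), f(x2)
    for _ in range(n - 2):
        if f1 < f2:                  # discard (x2, b)
            b, x2, f2 = x2, x1, f1
            x1 = b - g * (b - a)
            f1 = f(x1)
        else:                        # discard (a, x1)
            a, x1, f1 = x1, x2, f2
            x2 = a + g * (b - a)
            f2 = f(x2)
    return a, b

f = lambda x: 0.65 - 0.75 / (1 + x * x) - 0.65 * x * math.atan(1.0 / x)
a, b = golden_section(f, 0.0, 3.0, 30)
# the midpoint of the final interval is close to the minimizer, ~0.481
```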


Table 5.3 Final Intervals of Uncertainty

Method                   Formula                                    n = 5                        n = 10
Exhaustive search        Ln = 2L0/(n + 1)                           0.33333L0                    0.18182L0
Dichotomous search       Ln = L0/2^(n/2) + δ(1 − 1/2^(n/2))         L0/4 + 0.0075 (n = 4),       0.03125L0 + 0.0096875
  (δ = 0.01, n even)                                                L0/8 + 0.00875 (n = 6)
Interval halving         Ln = (1/2)^((n−1)/2) L0                    0.25L0                       0.0625L0 (n = 9),
  (n ≥ 3 and odd)                                                                                0.03125L0 (n = 11)
Fibonacci                Ln = L0/Fn                                 0.125L0                      0.01124L0
Golden section           Ln = (0.618)^(n−1) L0                      0.1459L0                     0.01315L0

Table 5.4 Number of ...


5.10 Quadratic Interpolation Method 273

SOLUTION The new design point X can be expressed as

X = (x1, x2)^T = X1 + λS = (−2 + λ, −2 + 0.25λ)^T

By substituting x1 = −2 + λ and x2 = −2 + 0.25λ in Eq. (E1), we obtain f as a function of λ as

f(λ) = f(−2 + λ, −2 + 0.25λ) = [(−2 + λ)² − (−2 + 0.25λ)]² + [1 − (−2 + λ)]²
     = λ⁴ − 8.5λ³ + 31.0625λ² − 57.0λ + 45.0

The value of λ at which f(λ) attains a minimum gives λ*.

In the following sections, we discuss three different interpolation methods ...


    274Nonlinear Programming I: One-Dimensional Minimization Methodsthat is,˜λ∗ = −b2c(5.30)The sufficiency condition for the minimum of h(λ)is thatd2hdλ2 ˜λ∗> 0that is,c > 0(5.31)To evaluate the constants a, b, and cin Eq. (5.29), we need to evaluate the functionf (λ)at three points. Let λ= A, λ= B, and λ= Cbe the points at which the functionf (λ)is evaluated and let fA, fB , and fC be the corresponding function values, that is,fA= a+ bA+ cA2fB= a+ bB+ cB2fC= a+ bC+ read more..


    5.10Quadratic Interpolation Method275provided thatc=fC+ fA− 2fB2t2> 0(5.41)The inequality (5.41) can be satisfied iffA+ fC2> fB(5.42)(i.e., the function value fB should be smaller than the average value of fA and fC).This can be satisfied if fB lies below the line joining fA and fC as shown in Fig. 5.12.The following procedure can be used not only to satisfy the inequality (5.42) butalso to ensure that the minimum˜λ∗ lies in the interval 0 <˜λ∗ <2t.1. Assuming that fA= f read more..


    276Nonlinear Programming I: One-Dimensional Minimization Methodsllll~*l~*l~*Figure 5.13Possible outcomes when the function is evaluated at λ= t0: (a) f1 < fA andt0 <˜λ∗; (b) f1 < fA and t0 >˜λ∗; (c) f1 > fA and t0 >˜λ∗.f (l)f (l)f (l)f (l)llllFigure 5.14Possible outcomes when function is evaluated at λ= t0 and 2t0: (a) f2 < f1 andf2 < fA; (b) f2 < fA and f2 > f1; (c) f2 > fA and f2 > f1.if they differ not more than by a small amount. This read more..


    5.10Quadratic Interpolation Method277df/dλand use the criterionf (˜λ∗ + ˜λ∗) − f (˜λ∗ − ˜λ∗)2˜λ∗≤ ε2(5.44)to stop the procedure. In Eqs. (5.43) and (5.44), ε1 and ε2 are small numbers to bespecified depending on the accuracy desired.If the convergence criteria stated in Eqs. (5.43) and (5.44) are not satisfied, a newquadratic functionh′(λ) = a′ + b′λ + c′λ2is used to approximate the function f (λ). To evaluate the constants a′, b′, and c′,the read more..


    278Nonlinear Programming I: One-Dimensional Minimization MethodsTable 5.5Refitting SchemeNew points for refittingCaseCharacteristicsNewOld1˜λ∗ > BAB˜f < fBB˜λ∗CCNeglect old A2˜λ∗ > BAA˜f> fBBBC˜λ∗Neglect old C3˜λ∗ < BAA˜f < fBB˜λ∗CBNeglect old C4˜λ∗ < BA˜λ∗˜f> fBBBCCNeglect old AExample 5.10 Find the minimum of f= λ5− 5λ3− 20λ+ 5.SOLUTIONSince this is not a multivariable optimization problem, we can proceeddirectly to stage 2. read more..


    5.10Quadratic Interpolation Method279andh(˜λ∗) = h(1.135)= 5− 204(1.135)+ 90(1.135)2 = −110.9Since˜f = f (˜λ∗) = (1.135)5 − 5(1.135)3 − 20(1.135)+ 5.0= −23.127we haveh(˜λ∗) − f (˜λ∗)f (˜λ∗)= −116.5 + 23.127−23.127= 3.8As this quantity is very large, convergence is not achieved and hence we have to userefitting.Iteration 2Since˜λ∗ < Band˜f> fB , we take the new values of A, B, and CasA= 1.135,fA= −23.127B= 2.0,fB= −43.0C= 4.0,fC= 629.0and read more..
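The refitting scheme of Table 5.5 together with Eq. (5.30) gives a complete one-dimensional minimizer. A sketch (names are mine), tested on Example 5.10's f(λ) = λ⁵ − 5λ³ − 20λ + 5 with the same starting points A = 0, B = 2, C = 4:

```python
def quadratic_fit_min(A, B, C, fA, fB, fC):
    """Minimizer -b/(2c) of the quadratic a + b*lam + c*lam^2 passing
    through (A, fA), (B, fB), (C, fC); cf. Eqs. (5.30)-(5.32)."""
    num = fA * (B**2 - C**2) + fB * (C**2 - A**2) + fC * (A**2 - B**2)
    den = 2.0 * (fA * (B - C) + fB * (C - A) + fC * (A - B))
    return num / den

def quadratic_interpolation(f, A, B, C, tol=1e-4, max_refits=50):
    """Repeatedly fit a quadratic through three bracketing points and
    refit according to the four cases of Table 5.5."""
    fA, fB, fC = f(A), f(B), f(C)
    for _ in range(max_refits):
        lam = quadratic_fit_min(A, B, C, fA, fB, fC)
        if abs(lam - B) < tol:
            return lam
        fl = f(lam)
        if lam > B and fl < fB:          # case 1: new triple (B, lam, C)
            A, fA, B, fB = B, fB, lam, fl
        elif lam > B:                    # case 2: new triple (A, B, lam)
            C, fC = lam, fl
        elif fl < fB:                    # case 3: new triple (A, lam, B)
            C, fC, B, fB = B, fB, lam, fl
        else:                            # case 4: new triple (lam, B, C)
            A, fA = lam, fl
    return B

lam = quadratic_interpolation(lambda x: x**5 - 5 * x**3 - 20 * x + 5,
                              0.0, 2.0, 4.0)
# first fit gives ~1.13 (the text's 1.135); refits approach lam* = 2
```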


    280Nonlinear Programming I: One-Dimensional Minimization Methods5.11CUBIC INTERPOLATION METHODThe cubic interpolation method finds the minimizing step length λ∗ in four stages [5.5,5.11]. It makes use of the derivative of the function f:f′(λ) =dfdλ=ddλf (X+ λS)= ST∇f(X + λS)The first stage normalizes the S vector so that a step size λ= 1 is acceptable. Thesecond stage establishes bounds on λ∗, and the third stage finds the value of˜λ∗ byapproximating f (λ)by a cubic read more..


    5.11Cubic Interpolation Method281is used to approximate the function f (λ)between points Aand B, we need to find thevalues fA= f (λ= A), f′A= df/dλ(λ= A), fB= f (λ= B), and f′B= df/dλ(λ=B)in order to evaluate the constants, a, b, c, and din Eq. (5.45). By assuming thatA= 0, we can derive a general formula for˜λ∗. From Eq. (5.45) we havefA= a+ bA+ cA2 + dA3fB= a+ bB+ cB2 + dB3f′A= b+ 2cA+ 3dA2f′B= b+ 2cB+ 3dB2(5.46)Equations (5.46) can be solved to find the constants asa= read more..


    282Nonlinear Programming I: One-Dimensional Minimization MethodswhereQ= (Z2 − f′Af ′B )1/2(5.55)2(B− A)(2Z+ f′A+ f′B )(f′A+ Z± Q)−2(B − A)(f′ 2A+ Zf′B+ 3Zf′A+ 2Z2)−2(B + A)f′Af ′B > 0(5.56)By specializing Eqs. (5.47) to (5.56) for the case where A= 0, we obtaina= fAb= f′Ac= −1B(Z+ f′A)d=13B2(2Z+ f′A+ f′B )˜λ∗ = Bf′A+ Z± Qf′A+ f′B+ 2Z(5.57)Q= (Z2 − f′Af ′B )1/2 > 0(5.58)whereZ=3(fA− fB )B+ f′A+ f′B(5.59)The two values of˜λ∗ read more..


    5.11Cubic Interpolation Method283where ε1 and ε2 are small numbers whose values depend on the accuracy desired. Thecriterion of Eq. (5.61) can be stated in nondimensional form asST∇f|S||∇f| ˜λ∗ ≤ ε2(5.62)If the criteria stated in Eqs. (5.60) and (5.62) are not satisfied, a new cubic equationh′(λ) = a′ + b′λ + c′λ2 + d′λ3can be used to approximate f (λ). The constants a′, b′, c′, and d′ can be evaluatedby using the function and derivative values at the best read more..


    284Nonlinear Programming I: One-Dimensional Minimization MethodsFigure 5.17Flowchart for cubic interpolation method. read more..


    5.11Cubic Interpolation Method285Iteration 1To find the value of˜λ∗ and to test the convergence criteria, we first compute Zand QasZ=3(5.0− 113.0)3.2− 20.0+ 350.688= 229.588Q= [229.5882 + (20.0)(350.688)]1/2 = 244.0Hence˜λ∗ = 3.2−20.0 + 229.588± 244.0−20.0 + 350.688+ 459.176= 1.84or− 0.1396By discarding the negative value, we have˜λ∗ = 1.84Convergence criterion: If˜λ∗ is close to the true minimum, λ∗, then f′(˜λ∗) =df (˜λ∗)/dλ should be approximately read more..


    286Nonlinear Programming I: One-Dimensional Minimization MethodsThusA= 1.84,fA= −41.70,f′A= −13.00B= 2.05,fB= −42.90,f′B= 5.35A < λ∗ < BIteration 3Z=3.0(−41.70 + 42.90)(2.05− 1.84)− 13.00+ 5.35= 9.49Q= [(9.49)2 + (13.0)(5.35)]1/2 = 12.61Therefore,˜λ∗ = 1.84+ −13.00 + 9.49± 12.61−13.00 + 5.35+ 18.98(2.05− 1.84)= 2.0086Convergence criterion:f′(˜λ∗) = 5.0(2.0086)4 − 15.0(2.0086)2 − 20.0= 0.855Assuming that this value is close to zero, we can stop the read more..
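Iterations 1 to 3 above can be reproduced with a sketch of the cubic step of Eqs. (5.54) to (5.59), refitting with the sign of f′ (names are mine):

```python
def cubic_interpolation(f, df, A, B, tol=1e-5, max_iter=50):
    """Fit a cubic to f and f' at the bracket ends A and B (with
    f'(A) < 0 < f'(B)) and take its minimizer; refit by replacing the end
    whose derivative has the same sign as f'(lam); cf. Eqs. (5.54)-(5.59)."""
    fA, fB, dfA, dfB = f(A), f(B), df(A), df(B)
    lam = B
    for _ in range(max_iter):
        Z = 3.0 * (fA - fB) / (B - A) + dfA + dfB
        Q = max(Z * Z - dfA * dfB, 0.0) ** 0.5
        lam = A + (B - A) * (dfA + Z + Q) / (dfA + dfB + 2.0 * Z)
        dl = df(lam)
        if abs(dl) <= tol:
            return lam
        if dl < 0.0:                 # minimum lies in (lam, B)
            A, fA, dfA = lam, f(lam), dl
        else:                        # minimum lies in (A, lam)
            B, fB, dfB = lam, f(lam), dl
    return lam

f = lambda x: x**5 - 5 * x**3 - 20 * x + 5
df = lambda x: 5 * x**4 - 15 * x**2 - 20
lam = cubic_interpolation(f, df, 0.0, 3.2)
# the first step gives ~1.84 as in Iteration 1; iterates converge to lam* = 2
```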


    5.12Direct Root Methods287Thus the Newton method, Eq. (5.65), is equivalent to using a quadratic approximationfor the function f (λ)and applying the necessary conditions. The iterative process givenby Eq. (5.65) can be assumed to have converged when the derivative, f′(λi+1), is closeto zero:|f′(λi+1)| ≤ ε(5.66)where εis a small quantity. The convergence process of the method is shown graphi-cally in Fig. 5.18a.Remarks:1. The Newton method was originally developed by Newton for solving read more..


    288Nonlinear Programming I: One-Dimensional Minimization MethodsExample 5.12 Find the minimum of the functionf (λ)= 0.65−0.751+ λ2− 0.65λ tan−11λusing the Newton–Raphson method with the starting point λ1= 0.1. Use ε= 0.01 inEq. (5.66) for checking the convergence.SOLUTIONThe first and second derivatives of the function f (λ)are given byf′(λ) =1.5λ(1+ λ2)2+0.65λ1+ λ2− 0.65 tan−11λf′′(λ) =1.5(1− 3λ2)(1+ λ2)3+0.65(1− λ2)(1+ λ2)2+0.651+ λ2=2.8− 3.2λ2(1+ read more..
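Example 5.12's iteration can be scripted directly from Eqs. (5.65) and (5.66). A sketch (names are mine) using the derivative expressions stated above:

```python
import math

def newton_1d(df, d2f, lam, eps=1e-5, max_iter=50):
    """Newton's method for a one-variable minimum:
    lam_{i+1} = lam_i - f'(lam_i)/f''(lam_i)   (Eq. 5.65),
    stopping when |f'(lam)| <= eps             (Eq. 5.66)."""
    for _ in range(max_iter):
        if abs(df(lam)) <= eps:
            return lam
        lam -= df(lam) / d2f(lam)
    return lam

df = lambda x: (1.5 * x / (1 + x * x)**2 + 0.65 * x / (1 + x * x)
                - 0.65 * math.atan(1.0 / x))
d2f = lambda x: (2.8 - 3.2 * x * x) / (1 + x * x)**3
lam = newton_1d(df, d2f, 0.1)
# converges to lam* ~ 0.481
```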


    5.12Direct Root Methods289finite difference formulas asf′(λi) =f (λi+ λ)− f (λi− λ)2 λ(5.67)f′′(λi) =f (λi+ λ)− 2f (λi )+ f (λi− λ)λ2(5.68)whereλis a small step size. Substitution of Eqs. (5.67) and (5.68) into Eq. (5.65)leads toλi+1= λi−λ[f (λi+ λ)− f (λi− λ)]2[f (λi+ λ)− 2f (λi )+ f (λi− λ)](5.69)The iterative process indicated by Eq. (5.69) is known as the quasi-Newton method.To test the convergence of the iterative process, the following read more..


    290Nonlinear Programming I: One-Dimensional Minimization MethodsIteration 2f2= f (λ2)= −0.303368, f+2= f (λ2+ λ)= −0.304662,f−2= f (λ2− λ)= −0.301916λ3= λ2−λ(f+2− f−2 )2(f+2− 2f2+ f−2 ) = 0.465390Convergence check:|f ′(λ3)| =f+3− f−32 λ= 0.017700 > εIteration 3f3= f (λ3)= −0.309885, f+3= f (λ3+ λ)= −0.310004,f−3= f (λ3− λ)= −0.309650λ4= λ3−λ(f+3− f−3 )2(f+3− 2f3+ f−3 ) = 0.480600Convergence check:|f ′(λ4)| =f+4− f−42 λ= read more..
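Equation (5.69) needs only function values. A sketch (names are mine) reproducing the setting of Example 5.13 (λ1 = 0.1, Δλ = 0.01):

```python
import math

def quasi_newton_fd(f, lam, dl=0.01, eps=1e-3, max_iter=100):
    """Quasi-Newton iteration of Eq. (5.69): Newton's step with f' and f''
    replaced by central finite differences of step dl; stop when the
    finite-difference derivative is small, as in Eq. (5.70)."""
    for _ in range(max_iter):
        fp, f0, fm = f(lam + dl), f(lam), f(lam - dl)
        if abs((fp - fm) / (2.0 * dl)) <= eps:
            return lam
        lam -= dl * (fp - fm) / (2.0 * (fp - 2.0 * f0 + fm))
    return lam

f = lambda x: 0.65 - 0.75 / (1 + x * x) - 0.65 * x * math.atan(1.0 / x)
lam = quasi_newton_fd(f, 0.1)
# converges near 0.480, as in Example 5.13
```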


    5.12Direct Root Methods291f′(l)lA = lili+ 2li+ 1l*Figure 5.19Iterative process of the secant method.the secant method can also be considered as a quasi-Newton method. It can also beconsidered as a form of elimination technique since part of the interval, (A, λi+1) inFig. 5.19, is eliminated in every iteration. The iterative process can be implemented byusing the following step-by-step procedure.1. Set λ1= A= 0 and evaluate f′(A). The value of f′(A) will be negative.Assume an initial read more..


    292Nonlinear Programming I: One-Dimensional Minimization Methodsf′(l)ll~1*l~2*l~3*Figure 5.20Situation when f′A varies very slowly.Remarks:1. The secant method is identical to assuming a linear equation for f′(λ). Thisimplies that the original function, f (λ), is approximated by a quadratic equation.2. In some cases we may encounter a situation where the function f′(λ) variesvery slowly with λ, as shown in Fig. 5.20. This situation can be identifiedby noticing that the point read more..


    5.13Practical Considerations293Iteration 2Since f′(λ2) = +0.0105789 > 0, we set new A= 0.4, f′(A) = −0.103652,B = λ2=0.545757, f′(B) = f′(λ2) = +0.0105789, and computeλ3= A−f′(A)(B − A)f′(B) − f′(A) = 0.490632Convergence check:|f ′(λ3)| = |+0.00151235| < ε.Since the process has converged, the optimum solution is given by λ∗ ≈ λ3=0.490632.5.13PRACTICAL CONSIDERATIONS5.13.1How to Make the Methods Efficient and More ReliableIn some cases, some of the read more..
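The two stages above (bracketing a sign change of f′ by doubling the trial step, then secant iterations) can be sketched as follows (names are mine), with the same f′ and initial step t0 = 0.1:

```python
import math

def secant_min(df, t0=0.1, eps=1e-5, max_iter=100):
    """Secant method for f'(lam) = 0: double the trial step until f'
    changes sign, then iterate lam = A - f'(A)(B - A)/(f'(B) - f'(A)),
    keeping the sign change bracketed between A and B."""
    A, B = 0.0, t0
    while df(B) < 0.0:               # stage 1: establish f'(B) > 0
        A, B = B, 2.0 * B
    for _ in range(max_iter):        # stage 2: secant iterations
        dfA, dfB = df(A), df(B)
        lam = A - dfA * (B - A) / (dfB - dfA)
        dl = df(lam)
        if abs(dl) <= eps:
            return lam
        if dl < 0.0:
            A = lam
        else:
            B = lam
    return lam

df = lambda x: (1.5 * x / (1 + x * x)**2 + 0.65 * x / (1 + x * x)
                - 0.65 * math.atan(1.0 / x))
lam = secant_min(df)
# converges near 0.481
```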


    294Nonlinear Programming I: One-Dimensional Minimization Methods5.13.3Comparison of MethodsIt has been shown in Section 5.9 that the Fibonacci method is the most efficient elimina-tion technique in finding the minimum of a function if the initial interval of uncertaintyis known. In the absence of the initial interval of uncertainty, the quadratic interpo-lation method or the quasi-Newton method is expected to be more efficient when thederivatives of the function are not available. When the read more..


Review Questions 295

This produces the solution or output as follows:

x =
    0.4809

fval =
    -0.3100

REFERENCES AND BIBLIOGRAPHY

5.1 J. S. Przemieniecki, Theory of Matrix Structural Analysis, McGraw-Hill, New York, 1968.
5.2 M. J. D. Powell, An efficient method for finding the minimum of a function of several variables without calculating derivatives, Computer Journal, Vol. 7, pp. 155–162, 1964.
5.3 R. Fletcher and C. M. Reeves, Function minimization by conjugate gradients, Computer Journal, Vol. 7, pp. 149 ...


    296Nonlinear Programming I: One-Dimensional Minimization Methods5.10What is a dichotomous search method?5.11Define the golden mean.5.12What is the difference between quadratic and cubic interpolation methods?5.13Why is refitting necessary in interpolation methods?5.14What is a direct root method?5.15What is the basis of the interval halving method?5.16What is the difference between Newton and quasi-Newton methods?5.17What is the secant method?5.18Answer true or false:(a) A unimodal function read more..


    Problems2975.5The shear stress induced along the z-axis when two cylinders are in contact with eachother is given byτzypmax = −12−11+zb2+ 2−11+zb2× 1+zb2− 2zb(1)where 2b is the width of the contact area and pmax is the maximum pressure developedat the center of the contact area (Fig. 5.21):b= 2Fπ l1− v21E1+1− v22E21d1 +1d21/2(2)pmax=2Fπ bl(3)Fis the contact force; E1 and E2 are Young’s moduli of the read more..


    298Nonlinear Programming I: One-Dimensional Minimization Methodssuch as roller bearings, when the contact load (F )is large, a crack originates at the pointof maximum shear stress and propagates to the surface leading to a fatigue failure. Tolocate the origin of a crack, it is necessary to find the point at which the shear stressattains its maximum value. Show that the problem of finding the location of the maximumshear stress for ν1= ν2= 0.3 reduces to maximizing the functionf (λ)=0.5√1+ read more..


    Problems299(c) Interval halving method(d) Fibonacci method(e) Golden section method5.14Find the number of experiments to be conducted in the following methods to obtain avalue of Ln/L0= 0.001:(a) Exhaustive search(b) Dichotomous search with δ= 10−4(c) Interval halving method(d) Fibonacci method(e) Golden section method5.15Find the value of xin the interval (0, 1) which minimizes the function f= x(x− 1.5)to within±0.05 by (a) the golden section method and (b) the Fibonacci method.5.16Find read more..


    300Nonlinear Programming I: One-Dimensional Minimization Methods5.21Consider the problemMinimize f (X)= 100(x2− x21 )2 + (1− x1)2and the starting point, X1= −11 . Find the minimum of f(X) along the direction, S1=40using quadratic interpolation method. Use a maximum of two refits.5.22Solve Problem 5.21 using the cubic interpolation method. Use a maximum of two refits.5.23Solve Problem 5.21 using the direct root method. Use a maximum of two refits.5.24Solve Problem 5.21 using the Newton read more..


6 Nonlinear Programming II: Unconstrained Optimization Techniques

6.1 INTRODUCTION

This chapter deals with the various methods of solving the unconstrained minimization problem:

Find X = (x1, x2, . . . , xn)^T which minimizes f(X)    (6.1)

It is true that a practical design problem would rarely be unconstrained; still, a study of this class of problems is important for the following reasons:

1. The constraints do not have significant influence in certain design ...


    302Nonlinear Programming II: Unconstrained Optimization Techniquesare satisfied. The point X∗ is guaranteed to be a relative minimum if the Hessian matrixis positive definite, that is,JX∗= [J ]X∗=∂2f∂xi ∂xj(X∗) = positive definite(6.3)Equations (6.2) and (6.3) can be used to identify the optimum point during numericalcomputations. However, if the function is not differentiable, Eqs. (6.2) and (6.3) cannotbe applied to identify the optimum point. For example, consider the read more..


    6.1Introduction303Figure 6.2Finite-element model of a cantilever beam.of the beam (F ), which can be expressed as [6.1]F=1210EId2wdx22dx− P0u3− M0u4(E6)where Eis Young’s modulus and Iis the area moment of inertia of the beam. Formulatethe optimization problem in terms of the variables x1= u3 and x2= u4l for the caseP0l3/EI = 1 and M0l2/EI = 2.SOLUTIONSince the boundary conditions are given by u1= u2= 0, w(x)can beexpressed asw(x)= (−2α3 + 3α2)u3+ (α3 − α2)lu4(E7)so read more..


    304Nonlinear Programming II: Unconstrained Optimization TechniquesEquation (E6) can be rewritten asF=1210EId2wdx22l dα− P0u3− M0u4=EI l2106u3l2(−2α + 1)+2u4l(3α− 1)2dα− P0u3− M0u4=EIl3(6u23+ 2u24l2 − 6u3u4l)− P0u3− M0u4(E9)By using the relations u3= x1, u4l= x2, P0l3/EI = 1, and M0l2/EI = 2, and intro-ducing the notation f= F l3/EI, Eq. (E9) can be expressed asf= 6x21− 6x1x2+ 2x22− x1− 2x2(E10)Thus the optimization problem is to determine x1 and x2, which minimize the read more..


    6.1Introduction305those requiring only first derivatives of the function are called first-order methods; thoserequiring both first and second derivatives of the function are termed second-ordermethods.6.1.2General ApproachAll the unconstrained minimization methods are iterative in nature and hence they startfrom an initial trial solution and proceed toward the minimum point in a sequentialmanner as shown in Fig. 5.3. The iterative process is given byXi+1= Xi+ λ∗i Si(6.4)where Xi is the read more..


    306Nonlinear Programming II: Unconstrained Optimization Techniquesdesign variables changes the condition number† of the Hessian matrix. When the con-dition number of the Hessian matrix is 1, the steepest descent method, for example,finds the minimum of a quadratic objective function in one iteration.If f=12 XT[A]X denotes a quadratic term, a transformation of the formX= [R]Yorx1x2=r11 r12r21 r22y1y2(6.7)can be used to obtain a new quadratic term as12 YT[˜A]Y =12 YT[R]T[A][R]Y(6.8)The matrix read more..


    6.1Introduction307the second-order Taylor’s series approximation of a general nonlinear function at thedesign vector Xi can be expressed asf (X)= c+ BTX +12 XT[A]X(6.12)wherec= f (Xi)(6.13)B=∂f∂x1 Xi...∂f∂xn Xi(6.14)[A]=∂2f∂x21 Xi·· ·∂2f∂x1∂xn Xi......∂2f∂xn∂x1 Xi ·· ·∂2f∂x2n Xi(6.15)The transformations indicated by read more..


    308Nonlinear Programming II: Unconstrained Optimization Techniqueswhere λi is the ith eigenvalue and ui is the corresponding eigenvector. In the presentcase, the eigenvalues, λi, are given by12− λi− 6− 6 4− λi = λ2i− 16λi+ 12= 0(E4)which yield λ1= 8+√52= 15.2111 and λ2= 8−√52= 0.7889. The eigenvectorui corresponding to λi can be found by solving Eq. (E3):12− λ1− 6− 6 4− λ1u11u21 =00or(12− λ1)u11− 6u21= 0oru21= −0.5332u11that is,u1=u11u21 read more..


    6.2Random Search Methods309Stage 2: Reducing [ ˜A] to a Unit MatrixThe transformation is given by Y= [S]Z, where[S]= 1√19.5682001√3.5432=0.2262 3: Complete TransformationThe total transformation is given byX= [R]Y= [R][S]Z= [T ]Z(E7)where[T ]= [R][S]=11−0.5352 1.86850.2262 000.5313=0.2262 0.5313−0.1211 0.9927(E8)orx1= 0.2262z1+ 0.5313z2x2= −0.1211z1 + 0.9927z2With this transformation, the quadratic function of Eq. (E1) becomesf (z1, z2)= BT[T read more..
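The eigenvalue computation in the example above can be checked numerically. A short sketch (function name is mine) for the 2 × 2 Hessian [[12, −6], [−6, 4]]:

```python
def eig2(a11, a12, a21, a22):
    """Eigenvalues of a 2x2 matrix from its characteristic equation
    lam^2 - tr*lam + det = 0."""
    tr = a11 + a22
    det = a11 * a22 - a12 * a21
    disc = (0.25 * tr * tr - det) ** 0.5
    return 0.5 * tr + disc, 0.5 * tr - disc

l1, l2 = eig2(12.0, -6.0, -6.0, 4.0)
# characteristic equation lam^2 - 16 lam + 12 = 0, giving
# lam1 = 8 + sqrt(52) ~ 15.2111 and lam2 = 8 - sqrt(52) ~ 0.7889
```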


    310Nonlinear Programming II: Unconstrained Optimization TechniquesFigure 6.3Contours of the original and transformed functions. read more..


6.2.1 Random Jumping Method

Although the problem is an unconstrained one, we establish the bounds li and ui for each design variable xi, i = 1, 2, . . . , n, for generating the random values of xi:

    li ≤ xi ≤ ui,  i = 1, 2, . . . , n    (6.16)

In the random jumping method, we generate sets of n random numbers, (r1, r2, . . . , rn), that are uniformly distributed between 0 and 1. Each set of these numbers is used to find a point, X, inside the hypercube …
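The random jumping scheme can be sketched as follows. This is an illustrative implementation, not the book's code; the function and bounds are chosen for the example f = x1 − x2 + 2x1² + 2x1x2 + x2² used throughout this chapter, and the helper name `random_jump` is hypothetical:

```python
import numpy as np

def random_jump(f, lower, upper, n_trials=5000, seed=0):
    """Random jumping: sample uniform points in the box l <= x <= u, keep the best."""
    rng = np.random.default_rng(seed)
    lower = np.asarray(lower, float)
    upper = np.asarray(upper, float)
    best_x, best_f = None, np.inf
    for _ in range(n_trials):
        r = rng.random(lower.size)          # r_i ~ U(0, 1)
        x = lower + r * (upper - lower)     # a random point inside the hypercube
        fx = f(x)
        if fx < best_f:                     # retain the smallest f seen so far
            best_x, best_f = x, fx
    return best_x, best_f

f = lambda x: x[0] - x[1] + 2*x[0]**2 + 2*x[0]*x[1] + x[1]**2
x, fx = random_jump(f, [-2.0, -2.0], [2.0, 2.0])
```

With enough trials the best sampled point approaches the minimum f* = −1.25 at (−1, 1.5).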


    312Nonlinear Programming II: Unconstrained Optimization Techniques6.2.2Random Walk MethodThe random walk methodis based on generating a sequence of improved approxima-tions to the minimum, each derived from the preceding approximation. Thus if Xi isthe approximation to the minimum obtained in the (i− 1)th stage (or step or iteration),the new or improved approximation in the ith stage is found from the relationXi+1= Xi+ λui(6.18)where λis a prescribed scalar step length and ui is a unit read more..
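The random walk iteration Xi+1 = Xi + λ ui of Eq. (6.18), with the step-length reduction illustrated in Table 6.2, can be sketched as follows. The reduction schedule (halve λ after 100 unsuccessful trials) follows the example; the helper name is hypothetical:

```python
import numpy as np

def random_walk(f, x0, lam=1.0, eps=1e-3, max_fail=100, seed=0):
    """Random walk (Eq. 6.18): try X + lam*u for random unit u; halve lam on stalls."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    fx = f(x)
    while lam > eps:                        # stop when the step length is small enough
        fails = 0
        while fails < max_fail:
            u = rng.standard_normal(x.size)
            u /= np.linalg.norm(u)          # random unit direction u
            x_new = x + lam * u
            f_new = f(x_new)
            if f_new < fx:                  # accept only improving steps
                x, fx = x_new, f_new
                fails = 0
            else:
                fails += 1
        lam *= 0.5                          # reduce the step length, as in Table 6.2
    return x, fx

f = lambda x: x[0] - x[1] + 2*x[0]**2 + 2*x[0]*x[1] + x[1]**2
x, fx = random_walk(f, np.array([0.0, 0.0]))
```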


Table 6.2 Minimization of f by Random Walk Method

Step length, λ   Trials required   Components of X1 + λu      Current objective function value, f1 = f(X1 + λu)
1.0              1                 −0.93696, 0.34943          −0.06329
1.0              2                 −1.15271, 1.32588          −1.11986
                 Next 100 trials did not reduce the function value.
0.5              1                 −1.34361, 1.78800          −1.12884
0.5              3                 −1.07318, 1.36744          −1.20232
                 Next 100 trials did not reduce the function value. …


    314Nonlinear Programming II: Unconstrained Optimization Techniques6.2.4Advantages of Random Search Methods1. These methods can work even if the objective function is discontinuous andnondifferentiable at some of the points.2. The random methods can be used to find the global minimum when the objectivefunction possesses several relative minima.3. These methods are applicable when other methods fail due to local difficultiessuch as sharply varying functions and shallow regions.4. Although the read more..


    6.4Univariate Method315variables (n= 10), the number of grid points will be 310= 59,049 with pi= 3 and410= 1,048,576 with pi= 4. However, for problems with a small number of designvariables, the grid method can be used conveniently to find an approximate minimum.Also, the grid method can be used to find a good starting point for one of the moreefficient methods.6.4UNIVARIATE METHODIn this method we change only one variable at a time and seek to produce a sequenceof improved approximations to read more..


    316Nonlinear Programming II: Unconstrained Optimization TechniquesThe univariate method is very simple and can be implemented easily. However,it will not converge rapidly to the optimum solution, as it has a tendency to oscil-late with steadily decreasing progress toward the optimum. Hence it will be better tostop the computations at some point near to the optimum point rather than trying tofind the precise optimum point. In theory, the univariate method can be applied to findthe minimum of read more..


    6.4Univariate Method317Step 3:To find whether the value of fdecreases along S1 or−S1, we use the probelength ε. Sincef1= f (X1)= f (0, 0)= 0,f+ = f (X1+ εS1)= f (ε,0)= 0.01− 0+ 2(0.0001)+ 0+ 0= 0.0102 > f1f− = f (X1− εS1)= f (−ε,0) = −0.01 − 0+ 2(0.0001)+ 0+ 0= −0.0098 < f1,−S1 is the correct direction for minimizing ffrom X1.Step 4:To find the optimum step length λ∗1 , we minimizef (X1− λ1S1)= f (−λ1, 0)= (−λ1) − 0+ 2(−λ1)2 + 0+ 0= 2λ21− λ1As read more..


    318Nonlinear Programming II: Unconstrained Optimization TechniquesNext we set the iteration number as i= 3, and continue the procedure until the optimumsolution X∗ = −1.01.5with f (X∗) = −1.25 is found.Note:If the method is to be computerized, a suitable convergence criterion has tobe used to test the point Xi+1(i= 1, 2, . . .)for optimality.6.5PATTERN DIRECTIONSIn the univariate method, we search for the minimum along directions parallel to thecoordinate axes. We noticed that this read more..
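The univariate iteration worked out in the example above can be sketched as follows. This is an illustrative script, not the book's code; since f is quadratic, the optimal step along each coordinate direction is computed analytically as λ* = −(∇fᵀS)/(SᵀAS) instead of by a probe-length test:

```python
import numpy as np

# f = x1 - x2 + 2*x1**2 + 2*x1*x2 + x2**2 = b^T x + (1/2) x^T A x
A = np.array([[4.0, 2.0], [2.0, 2.0]])
b = np.array([1.0, -1.0])
grad = lambda x: b + A @ x

x = np.zeros(2)
for it in range(40):                      # cycle through the coordinate directions
    i = it % 2
    s = np.zeros(2)
    s[i] = 1.0                            # univariate search direction e_i
    lam = -(grad(x) @ s) / (s @ A @ s)    # exact minimizing step for a quadratic
    x = x + lam * s
```

The iterates oscillate toward X* = (−1, 1.5) with steadily shrinking progress, as the text describes.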


    6.6Powell’s Method319denotes the number of design variables and then searches for the minimum along thepattern direction Si, defined bySi= Xi− Xi−n(6.24)where Xi is the point obtained at the end of nunivariate steps and Xi−n is the startingpoint before taking the nunivariate steps. In general, the directions used prior to takinga move along a pattern direction need not be univariate directions.6.6POWELL’S METHODPowell’s methodis an extension of the basic pattern search method. It is read more..


    320Nonlinear Programming II: Unconstrained Optimization Techniquesand hence∇Q(X1) − ∇Q(X2) = A(X1− X2)(6.27)If S is any vector parallel to the hyperplanes, it must be orthogonal to the gradients∇Q(X1) and∇Q(X2). ThusST∇Q(X1) = STAX1+ STB = 0(6.28)ST∇Q(X2) = STAX2+ STB = 0(6.29)By subtracting Eq. (6.29) from Eq. (6.28), we obtainST A(X1− X2)= 0(6.30)Hence S and (X1− X2) are A-conjugate.The meaning of this theorem is illustrated in a two-dimensional space in Fig. 6.7.If X1 and read more..


    6.6Powell’s Method321different starting points Xa and Xb, respectively, the line (X1− X2) will be conjugateto the search direction S.Theorem 6.2 If a quadratic functionQ(X)=12 XTAX+ BTX+ C(6.31)is minimized sequentially, once along each direction of a set of nmutually conjugatedirections, the minimum of the function Qwill be found at or before the nth stepirrespective of the starting point.Proof: Let X∗ minimize the quadratic function Q(X). Then∇Q(X∗) = B+ AX∗ = 0(6.32)Given a point read more..


    322Nonlinear Programming II: Unconstrained Optimization Techniqueswhere λ∗i is found by minimizing Q(Xi+ λiSi) so that†STi∇Q(Xi+1) = 0(6.39)Since the gradient of Qat the point Xi+1 is given by∇Q(Xi+1) = B+ AXi+1(6.40)Eq. (6.39) can be written asSTi{B + A(Xi+ λ∗i Si)} = 0(6.41)This equation givesλ∗i= −(B+ AXi)TSiSTi ASi(6.42)From Eq. (6.38), we can express Xi asXi= X1+i−1j=1λ∗j Sj(6.43)so thatXTi ASi= XT1 ASi+i−1j=1λ∗j STj ASi= XT1 ASi(6.44)using the relation (6.25). read more..


    6.6Powell’s Method323Example 6.5 Consider the minimization of the functionf (x1, x2)= 6x21+ 2x22− 6x1x2− x1− 2x2If S1=12denotes a search direction, find a direction S2 that is conjugate to thedirection S1.SOLUTIONThe objective function can be expressed in matrix form asf (X)= BT X+12XT[A]X= {−1 −2}x1x2+12{x1 x2}12−6−6 4x1x2and the Hessian matrix [A] can be identified as[A]=12−6−6 4The direction S2=s1s2will be conjugate to S1=12ifST1 [A]S2= (1 2)12−6−6 4s1s2= 0which upon read more..


    324Nonlinear Programming II: Unconstrained Optimization TechniquesFigure 6.8Progress of Powell’s stored as the vector Z in block A, and the pattern direction is constructed by sub-tracting the previous base point from the current one in block B. The pattern directionis then used as a minimization direction in blocks C and D. For the next cycle, thefirst direction used in the previous cycle is discarded in favor of the current patterndirection. This is achieved by updating the read more..


    6.6Powell’s Method325lllllllllFigure 6.9Flowchart for Powell’s Method.direction are points that are minima along Sn in the first cycle, the first pattern directionS(1)pin the second cycle, the second pattern direction S(2)pin the third cycle, and so on.Quadratic Convergence.It can be seen from Fig. 6.9 that the pattern direc-tions S(1)p , S(2)p , S(3)p , . . .are nothing but the lines joining the minima found alongthe directions Sn, S(1)p , S(2)p , . . ., respectively. Hence by Theorem read more..


    326Nonlinear Programming II: Unconstrained Optimization TechniquesSn, S(1)p , S(2)p , . . .are A-conjugate. Since, by Theorem 6.2, any search method involv-ing minimization along a set of conjugate directions is quadratically convergent,Powell’s method is quadratically convergent. From the method used for construct-ing the conjugate directions S(1)p , S(2)p , . . ., we find that nminimization cycles arerequired to complete the construction of nconjugate directions. In the ith cycle,the read more..
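The cycle structure of Powell's method (n univariate minimizations, a pattern move, then replacement of the oldest direction by the pattern direction) can be sketched as follows. This is an illustrative version, not the book's flowchart code; exact line searches are computed analytically for the quadratic f = x1 − x2 + 2x1² + 2x1x2 + x2², for which n = 2 cycles suffice:

```python
import numpy as np

A = np.array([[4.0, 2.0], [2.0, 2.0]])
b = np.array([1.0, -1.0])
grad = lambda x: b + A @ x

def line_min(x, s):
    """Exact minimizing step along s for the quadratic (df/dlam = 0)."""
    return x - ((grad(x) @ s) / (s @ A @ s)) * s

x = np.zeros(2)
dirs = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
for cycle in range(2):                    # n cycles for an n-variable quadratic
    x_start = x
    for s in dirs:                        # n univariate minimizations
        x = line_min(x, s)
    pattern = x - x_start                 # pattern direction S_p, cf. Eq. (6.24)
    x = line_min(x, pattern)              # minimize along the pattern direction
    dirs = dirs[1:] + [pattern]           # discard oldest direction, keep S_p
```

By Theorem 6.2 the minimum X* = (−1, 1.5) is reached at the end of the second cycle.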


    6.6Powell’s Method327As df/dλ= 0 at λ∗ =12 , we have X2= X1+ λ∗S2 =00.5.Next we minimize falong S1=10from X2=0.50.0 . Sincef2= f (X2)= f (0.0, 0.5)= −0.25f+ = f (X2+ εS1)= f (0.01, 0.50)= −0.2298 > f2f− = f (X2− εS1)= f (−0.01, 0.50)= −0.2698fdecreases along−S1. As f (X2− λS1)= f (−λ, 0.50)= 2λ2− 2λ− 0.25, df/dλ=0 at λ∗ =12 . Hence X3= X2− λ∗S1 = −0.50.5 .Now we minimize falong S2=01from X3= −0.50.5 . As f3= f (X3)= −0.75,f+ = f (X3+ εS2)= read more..


    328Nonlinear Programming II: Unconstrained Optimization TechniquesIf we do not recognize X5 as the optimum point at this stage, we proceed tominimize falong the direction S2=01from X5. Then we would obtainf5= f (X5)= −1.25, f+ = f (X5+ εS2) > f5,andf− = f (X5− εS2) > f5This shows that fcannot be minimized along S2, and hence X5 will be the optimumpoint. In this example the convergence has been achieved in the second cycle itself.This is to be expected in this case, as fis a read more..


    6.7Simplex Method329Figure rejecting the vertex corresponding to the highest function value. Since the directionof movement of the simplex is always away from the worst result, we will be movingin a favorable direction. If the objective function does not have steep valleys, repetitiveapplication of the reflection process leads to a zigzag path in the general direction ofthe minimum as shown in Fig. 6.11. Mathematically, the reflected point Xr is givenbyXr= (1+ α)X0− read more..


X0 is the centroid of all the points Xi except i = h:

    X0 = (1/n) Σ_{i=1, i≠h}^{n+1} Xi    (6.50)

and α > 0 is the reflection coefficient, defined as

    α = (distance between Xr and X0) / (distance between Xh and X0)    (6.51)

Thus Xr will lie on the line joining Xh and X0, on the far side of X0 from Xh, with |Xr − X0| = α |Xh − X0|. If f(Xr) lies between f(Xh) and f(Xl), where Xl is the vertex corresponding to the minimum function value,

    f(Xl) = min over i = 1 to …


    6.7Simplex Method331Whenever such situation is encountered, we reject the vertex corresponding to thesecond worst value instead of the vertex corresponding to the worst function value.This method, in general, leads the process to continue toward the region of the desiredminimum. However, the final simplex may again straddle the minimum, or it may liewithin a distance of the order of its own size from the minimum. In such cases it maynot be possible to obtain a new simplex with vertices closer read more..


    332Nonlinear Programming II: Unconstrained Optimization Techniquesto Xe using the relationXe= γ Xr+ (1− γ )X0(6.53)where γis called the expansion coefficient, defined asγ=distance between Xe and X0distance between Xr and X0> 1If f (Xe) < f (Xl), we replace the point Xh by Xe and restart the process of reflec-tion. On the other hand, if f (Xe) > f (Xl), it means that the expansion process is notsuccessful and hence we replace point Xh by Xr and start the reflection process read more..
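One step of the simplex method (reflection with α = 1, expansion with γ = 2, contraction with β = 0.5, following Eqs. (6.49)-(6.53)) can be sketched as follows. This is an illustrative implementation, not the book's code, applied to the starting simplex of the worked example:

```python
import numpy as np

f = lambda x: x[0] - x[1] + 2*x[0]**2 + 2*x[0]*x[1] + x[1]**2

def simplex_step(pts, alpha=1.0, gamma=2.0, beta=0.5):
    """One reflection / expansion / contraction step of the basic simplex method."""
    pts = sorted(pts, key=f)                   # pts[-1] is the worst vertex X_h
    x_l, x_h = pts[0], pts[-1]
    x0 = np.mean(pts[:-1], axis=0)             # centroid excluding X_h, Eq. (6.50)
    x_r = (1 + alpha) * x0 - alpha * x_h       # reflection, Eq. (6.49)
    if f(x_r) < f(x_l):
        x_e = gamma * x_r + (1 - gamma) * x0   # expansion, Eq. (6.53)
        pts[-1] = x_e if f(x_e) < f(x_l) else x_r
    elif f(x_r) < f(x_h):
        pts[-1] = x_r                          # plain reflection accepted
    else:
        pts[-1] = beta * x_h + (1 - beta) * x0 # contraction toward the centroid
    return pts

pts = [np.array(v, float) for v in [(4, 4), (5, 4), (4, 5)]]
for _ in range(2):
    pts = simplex_step(pts)
best = min(f(p) for p in pts)
```

Two steps reproduce the worked example: the first replaces (5, 4) by the expanded point (2, 5.5) with f = 56.75, and the second yields (1, 4.25) with f = 25.3125.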


    6.7Simplex Method333SOLUTIONIteration 1Step 1:The function value at each of the vertices of the current simplex is given byf1= f (X1)= 4.0− 4.0+ 2(16.0)+ 2(16.0)+ 16.0= 80.0f2= f (X2)= 5.0− 4.0+ 2(25.0)+ 2(20.0)+ 16.0= 107.0f3= f (X3)= 4.0− 5.0+ 2(16.0)+ 2(20.0)+ 25.0= 96.0Therefore,Xh= X2=5.04.0,f (Xh)= 107.0,Xl= X1=4.04.0,andf (Xl)= 80.0Step 2:The centroid X0 is obtained asX0=12(X1+ X3)=124.0+ 4.04.0+ 5.0=4.04.5withf (X0)= 87.75Step 3:The reflection point is found asXr= 2X0− read more..


    334Nonlinear Programming II: Unconstrained Optimization TechniquesIteration 2Step 1:As f (X1)= 80.0, f (X2)= 56.75, and f (X3)= 96.0,Xh= X3=4.05.0andXl= X2=2.05.5Step 2:The centroid isX0=12(X1+ X2)=124.0+ 2.04.0+ 5.5=3.04.75f (X0)= 67.31Step 3:Xr= 2X0− Xh=6.09.5−4.05.0=2.04.5f (Xr )= 2.0− 4.5+ 2(4.0)+ 2(9.0)+ 20.25= 43.75Step 4:As f (Xr ) < f (Xl), we find Xe asXe= 2Xr− X0=4.09.0−3.04.75=1.04.25f (Xe)= 1.0− 4.25+ 2(1.0)+ 2(4.25)+ 18.0625= 25.3125Step 5:As f (Xe) < f (Xl), we read more..


    6.8Gradient of a Function335Indirect Search (Descent) Methods6.8GRADIENT OF A FUNCTIONThe gradient of a function is an n-component vector given by∇fn×1=∂f/∂x1∂f/∂x2...∂f/∂xn(6.56)The gradient has a very important property. If we move along the gradient directionfrom any point in n-dimensional space, the function value increases at the fastest rate.Hence the gradient direction is called the direction of steepest read more..


    336Nonlinear Programming II: Unconstrained Optimization TechniquesSince the gradient vector represents the direction of steepest ascent, the negativeof the gradient vector denotes the direction of steepest descent. Thus any method thatmakes use of the gradient vector can be expected to give the minimum point fasterthan one that does not make use of the gradient vector. All the descent methods makeuse of the gradient vector, either directly or indirectly, in finding the search directions.Before read more..


    6.8Gradient of a Function337Eq. (6.61) can be rewritten asdfds= ||∇f|| ||u||cosθ(6.62)where||∇f || and||u|| denote the lengths of the vectors∇f and u, respectively, and θindicates the angle between the vectors∇f and u. It can be seen that df/dswill bemaximum when θ= 0◦ and minimum when θ= 180◦. This indicates that the functionvalue increases at a maximum rate in the direction of the gradient (i.e., when u isalong∇f ).Theorem 6.4 The maximum rate of change of fat any point X is read more..


    338Nonlinear Programming II: Unconstrained Optimization TechniquesThis formula requires two additional function evaluations for each of the partial deriva-tives. In Eqs. (6.63) and (6.64),xi is a small scalar quantity and ui is a vector of ordernwhose ith component has a value of 1, and all other components have a value of zero.In practical computations, the value ofxi has to be chosen with some care. Ifxi istoo small, the difference between the values of the function evaluated at (Xm+ xiui)and read more..
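The central-difference gradient of Eq. (6.64) can be sketched as follows; the helper name `grad_central` is hypothetical:

```python
import numpy as np

def grad_central(f, x, dx=1e-6):
    """Central-difference gradient: df/dx_i ~ (f(x + dx*u_i) - f(x - dx*u_i)) / (2*dx)."""
    x = np.asarray(x, float)
    g = np.zeros_like(x)
    for i in range(x.size):
        u = np.zeros_like(x)
        u[i] = dx                           # perturb only the i-th component
        g[i] = (f(x + u) - f(x - u)) / (2 * dx)
    return g

f = lambda x: x[0] - x[1] + 2*x[0]**2 + 2*x[0]*x[1] + x[1]**2
g = grad_central(f, [0.0, 0.0])             # analytic gradient at the origin is (1, -1)
```

As the text notes, Δxi must be neither so small that the difference is lost to rounding nor so large that truncation error dominates; 1e-6 is a common compromise for double precision.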


    6.9Steepest Descent (Cauchy) Method339where xj is the jth component of X. But∂xj∂λ=∂∂λ(xij+ λsij )= sij(6.66)where xij and sij are the jth components of Xi and Si, respectively. Hencedfdλ=nj=1∂f∂xjsij= ∇fTSi(6.67)If λ∗ minimizes fin the direction Si, we havedfdλλ=λ∗= ∇f |Tλ∗Si= 0(6.68)at the point Xi+ λ∗Si.6.9STEEPEST DESCENT (CAUCHY) METHODThe use of the negative of the gradient vector as a direction for minimization wasfirst made by Cauchy in 1847 [6.12]. In read more..


    340Nonlinear Programming II: Unconstrained Optimization TechniquesSOLUTIONIteration 1The gradient of fis given by∇f =∂f/∂x1∂f/∂x2=1+ 4x1+ 2x2−1 + 2x1+ 2x2∇f1 = ∇f (X1)=1−1Therefore,S1= −∇f1 = −11To find X2, we need to find the optimal step length λ∗1. For this, we minimize f (X1+λ1S1)= f (−λ1, λ1)= λ21− 2λ1 with respect to λ1. Since df/dλ1= 0 at λ∗1= 1, weobtainX2= X1+ λ∗1 S1=00+ 1−11= −11As∇f2 = ∇f (X2)= −1−1 =00, X2 is not read more..


    6.10Conjugate Gradient (Fletcher– Reeves) Method341Asf (X3+ λ3S3)= f (−0.8 − 0.2λ3, 1.2+ 0.2λ3)= 0.04λ23− 0.08λ3− 1.20,dfdλ3 = 0 at λ∗3= 1.0Therefore,X4= X3+ λ∗3S3= −0.81.2+ 1.0−0.20.2= −1.01.4The gradient at X4 is given by∇f4 = −0.20−0.20Since∇f4 =00 , X4 is not optimum and hence we have to proceed to the next iteration.This process has to be continued until the optimum point, X∗ = −1.01.5 , is found.Convergence Criteria:The following criteria can be used read more..
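The steepest descent iteration of the example above, with a gradient-norm stopping test, can be sketched as follows. Exact step lengths are computed analytically for the quadratic, so the characteristic zigzag toward X* = (−1, 1.5) is reproduced directly:

```python
import numpy as np

A = np.array([[4.0, 2.0], [2.0, 2.0]])     # Hessian of the example function
b = np.array([1.0, -1.0])
grad = lambda x: b + A @ x

x = np.zeros(2)
for _ in range(50):
    s = -grad(x)                           # steepest descent direction S_i = -grad f_i
    if np.linalg.norm(s) < 1e-8:           # convergence test on ||grad f||
        break
    lam = (s @ s) / (s @ A @ s)            # exact minimizing step for a quadratic
    x = x + lam * s
```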


    342Nonlinear Programming II: Unconstrained Optimization TechniquesWe have seen that Powell’s conjugate direction method requires nsingle-variableminimizations per iteration and sets up a new conjugate direction at the end of eachiteration. Thus it requires, in general, n2single-variable minimizations to find the mini-mum of a quadratic function. On the other hand, if we can evaluate the gradients of theobjective function, we can set up a new conjugate direction after every read more..


6.10.2 Fletcher–Reeves Method

The iterative procedure of the Fletcher–Reeves method can be stated as follows:

1. Start with an arbitrary initial point X1.
2. Set the first search direction S1 = −∇f(X1) = −∇f1.
3. Find the point X2 according to the relation

       X2 = X1 + λ1* S1    (6.80)

   where λ1* is the optimal step length in the direction S1. Set i = 2 and go to the next step.
4. Find ∇fi = ∇f(Xi), and set

       Si = −∇fi + (|∇fi|² / |∇fi−1|²) Si−1    (6.81)

   …


    344Nonlinear Programming II: Unconstrained Optimization TechniquesSOLUTIONIteration 1∇f =∂f/∂x1∂f/∂x2 =1+ 4x1+ 2x2−1 + 2x1+ 2x2∇f1 = ∇f (X1)=1−1The search direction is taken as S1= −∇f1 = −11 . To find the optimal step lengthλ∗1 along S1, we minimize f (X1+ λ1S1) with respect to λ1. Heref (X1+ λ1S1)= f (−λ1, +λ1) = λ21− 2λ1dfdλ1 = 0atλ∗1= 1Therefore,X2= X1+ λ∗1 S1=00+ 1−11= −11Iteration 2Since∇f2 = ∇f (X2)= −1−1, Eq. (6.81) gives the next read more..
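The two iterations of this example can be verified with a short script. This is a sketch, not the book's code; the exact step length is computed analytically for the quadratic, and β is the Fletcher–Reeves ratio |∇fi|²/|∇fi−1|²:

```python
import numpy as np

A = np.array([[4.0, 2.0], [2.0, 2.0]])
b = np.array([1.0, -1.0])
grad = lambda x: b + A @ x

x = np.zeros(2)
g = grad(x)
s = -g                                   # S1 = -grad f1
for _ in range(2):                       # n = 2 steps suffice for a quadratic
    lam = -(g @ s) / (s @ A @ s)         # exact minimizing step along s
    x = x + lam * s
    g_new = grad(x)
    beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves beta, Eq. (6.81)
    s = -g_new + beta * s                # new conjugate direction
    g = g_new
```

The first step gives X2 = (−1, 1), the second direction is S2 = (0, 2) with λ2* = 1/4, and the method terminates at X3 = (−1, 1.5), exactly as in the text.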


    6.11Newton’s Method345Thus the optimum point is reached in two iterations. Even if we do not know this pointto be optimum, we will not be able to move from this point in the next iteration. Thiscan be verified as follows.Iteration 3Now∇f3 = ∇f (X3)=00,|∇f2|2 = 2,and|∇f3|2 = 0.ThusS3= −∇f3 + (|∇f3|2/|∇f2|2)S2 = −00+0200=00This shows that there is no search direction to reduce ffurther, and hence X3 isoptimum.6.11NEWTON’S METHODNewton’s method presented in Section 5.12.1 read more..


    346Nonlinear Programming II: Unconstrained Optimization TechniquesExample 6.10Show that the Newton’s method finds the minimum of a quadraticfunction in one iteration.SOLUTIONLet the quadratic function be given byf (X)=12 XT[A]X+ BTX+ CThe minimum of f (X)is given by∇f = [A]X+ B= 0orX∗ = −[A]−1BThe iterative step of Eq. (6.86) givesXi+1= Xi− [A]−1([A]Xi + B)(E1)where Xi is the starting point for the ith iteration. Thus Eq. (E1) gives the exact solutionXi+1= X∗ = −[A]−1BFigure read more..


    6.11Newton’s Method347Therefore,[J1]−1 =14+2 −2−2 4=12−12−121Asg1=∂f/∂x1∂f/∂x2X1=1+ 4x1+ 2x2−1 + 2x1+ 2x2(0,0) =1−1Equation (6.86) givesX2= X1− [J1]−1g1=00−12−12−1211−1 = −132To see whether or not X2 is the optimum point, we evaluateg2=∂f/∂x1∂f/∂x2X2=1+ 4x1+ 2x2−1 + 2x1+ 2x2(−1,3/2)=00As g2= 0, X2 is the optimum point. Thus the method has converged in one iterationfor this quadratic function.If f (X)is a nonquadratic function, Newton’s method read more..


    348Nonlinear Programming II: Unconstrained Optimization Techniques6.12MARQUARDT METHODThe steepest descent method reduces the function value when the design vector Xi isaway from the optimum point X∗. The Newton method, on the other hand, convergesfast when the design vector Xi is close to the optimum point X∗. The Marquardt method[6.15] attempts to take advantage of both the steepest descent and Newton methods.This method modifies the diagonal elements of the Hessian matrix, [Ji], as[˜Ji] read more..


    6.12Marquardt Method349where λ∗iis found using any of the one-dimensional search methods described inChapter 5.Example 6.12Minimize f (x1, x2)= x1− x2+ 2x21+ 2x1x2+ x22 from the startingpoint X1=00using Marquardt method with α1= 104, c1=14 , c2= 2, andε= 10−2.SOLUTIONIteration 1 (i= 1)Here f1= f (X1)= 0.0 and∇f1 = ∂f∂x1∂f∂x2(0,0)=1+ 4x1+ 2x2−1 + 2x1+ 2x2(0,0)=1−1Since||∇f1|| = 1.4142 > ε, we read more..
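The Marquardt iteration of Example 6.12 can be sketched as follows. This is an illustrative version, not the book's code: the Hessian is constant for the quadratic, α1 = 10⁴, c1 = 1/4, c2 = 2, and ε = 10⁻², and a full step λ = 1 is taken along each direction instead of a one-dimensional search:

```python
import numpy as np

A = np.array([[4.0, 2.0], [2.0, 2.0]])
b = np.array([1.0, -1.0])
f = lambda x: b @ x + 0.5 * x @ A @ x
grad = lambda x: b + A @ x

x, alpha = np.zeros(2), 1e4               # large alpha -> near steepest descent
c1, c2, eps = 0.25, 2.0, 1e-2
for _ in range(100):
    g = grad(x)
    if np.linalg.norm(g) <= eps:          # convergence test ||grad f|| <= eps
        break
    s = -np.linalg.solve(A + alpha * np.eye(2), g)   # modified Hessian, Eq. (6.89)
    if f(x + s) < f(x):
        x, alpha = x + s, c1 * alpha      # success: shrink alpha toward Newton
    else:
        alpha = c2 * alpha                # failure: grow alpha toward steepest descent
```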


    350Nonlinear Programming II: Unconstrained Optimization Techniques6.13QUASI-NEWTON METHODSThe basic iterative process used in the Newton’s method is given by Eq. (6.86):Xi+1= Xi− [Ji]−1∇f(Xi)(6.93)where the Hessian matrix [Ji] is composed of the second partial derivatives of fand varies with the design vector Xi for a nonquadratic (general nonlinear) objectivefunction f. The basic idea behind the quasi-Newton or variable metric methods is toapproximate either [Ji] by another matrix [Ai] read more..


    6.13Quasi-Newton Methods351have been suggested in the literature for the computation of [Bi] as the iterative processprogresses (i.e., for the computation of [Bi+1] once [Bi] is known). A major concernis that in addition to satisfying Eq. (6.102), the symmetry and positive definiteness ofthe matrix [Bi] is to be maintained; that is, if [Bi] is symmetric and positive definite,[Bi+1] must remain symmetric and positive definite.6.13.1Rank 1 UpdatesThe general formula for updating the matrix [Bi] read more..


    352Nonlinear Programming II: Unconstrained Optimization Techniquesusing Eq. (6.111) and the new point X3 is determined from Eq. (6.94). This iterativeprocess is continued until convergence is achieved. If [Bi] is symmetric, Eq. (6.111)ensures that [Bi+1] is also symmetric. However, there is no guarantee that [Bi+1]remains positive definite even if [Bi] is positive definite. This might lead to a breakdownof the procedure, especially when used for the optimization of nonquadratic functions.It read more..


    6.13Quasi-Newton Methods353where Si is the search direction, di= Xi+1− Xi can be rewritten asdi= λ∗i Si(6.121)Thus Eq. (6.119) can be expressed as[Bi+1]= [Bi]+λ∗i SiSTiSTi gi −[Bi]gigTi [Bi]gTi [Bi]gi(6.122)Remarks:1. Equations (6.111) and (6.119) are known as inverse update formulassince theseequations approximate the inverse of the Hessian matrix of f.2. It is possible to derive a family of direct update formulas in which approx-imations to the Hessian matrix itself are considered. read more..
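The DFP inverse update of Eqs. (6.119)/(6.122) can be sketched as a small function; the helper name `dfp_update` is hypothetical:

```python
import numpy as np

def dfp_update(B, d, g):
    """DFP inverse-Hessian update:
    B_{i+1} = B + d d^T / (d^T g) - (B g)(B g)^T / (g^T B g),
    with d = X_{i+1} - X_i and g = grad f_{i+1} - grad f_i."""
    Bg = B @ g
    return B + np.outer(d, d) / (d @ g) - np.outer(Bg, Bg) / (g @ Bg)

# Data from Example 6.15: d1 = (-1, 1), g1 = (-2, 0), [B1] = I
B2 = dfp_update(np.eye(2), np.array([-1.0, 1.0]), np.array([-2.0, 0.0]))
```

With the example's data this reproduces [B2] = [[0.5, −0.5], [−0.5, 1.5]], as computed in the text.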


    354Nonlinear Programming II: Unconstrained Optimization Techniques4. It has been shown that the BFGS method exhibits superlinear convergence nearX∗ [6.17].5. Numerical experience indicates that the BFGS method is the best unconstrainedvariable metric method and is less influenced by errors in finding λ∗i comparedto the DFP method.6. The methods discussed in this section are also known as secant methods sinceEqs. (6.99) and (6.102) can be considered as secant equations (see Section read more..


    6.14Davidon –Fletcher–Powell Method355this involves more computational effort. Another possibility is to specify a maximumnumber of refits in the one-dimensional minimization method and to skip the updatingof [Bi] if λ∗i could not be found accurately in the specified number of refits. The lastpossibility is to continue updating the matrix [Bi] using the approximate values of λ∗ifound, but restart the whole procedure after certain number of iterations, that is, restartwith i= 1 in read more..


    356Nonlinear Programming II: Unconstrained Optimization TechniquesThe quantity STi+1[A]Si can be written asSTi+1[A]Si= −([Bi+1]∇fi+1)T[A]Si= −∇fTi+1[Bi+1][A]Si= −∇fTi+1Si= 0(E11)since λ∗iis the minimizing step in the direction Si. Equation (E11) proves that thesuccessive directions generated in the DFP method are [A]-conjugate and hence themethod is a conjugate gradient method.Example 6.14 Minimize f (x1, x2)= 100(x21− x2)2 + (1− x1)2 taking X1= −2−2asthe starting point. read more..


    6.14Davidon –Fletcher–Powell Method357Cubic Interpolation Method (First Fitting)Stage 1:As the search direction S1 is normalized already, we go to stage 2.Stage 2:To establish lower and upper bounds on the optimal step size λ∗1 , we have tofind two points Aand Bat which the slope df/dλ1 has different signs. Wetake A= 0 and choose an initial step size of t0= 0.25 to find B.At λ1= A= 0:fA= f (λ1= A= 0)= 3609f′A=dfdλ1 λ1=A=0= −4956.64At λ1= t0= 0.25:f= 2535.62dfdλ1 = read more..


    358Nonlinear Programming II: Unconstrained Optimization TechniquesTherefore,˜λ∗i = 2.0+ −113.95 − 24.41+ 143.2−113.95 + 174.68− 48.82(2.5− 2.0)= 2.2Stage 4:To find whether˜λ∗1 is close to λ∗1 , we test the value of df/dλ1.dfdλ1 ˜λ∗1 = −0.818Also,f (λ1= ˜λ∗1) = 216.1Since df/dλ1 is not close to zero at˜λ∗1, we use a refitting technique.Second Fitting:Now we take A= ˜λ∗1 since df/dλ1 is negative at˜λ∗1 and B= 2.5.ThusA= 2.2, fA= 216.10, f′A= read more..


    6.14Davidon –Fletcher–Powell Method359SOLUTIONIteration 1 (i= 1)Here∇f1 = ∇f (X1)=1+ 4x1+ 2x2−1 + 2x1+ 2x2(0,0) =1−1and henceS1= −[B1]∇f1 = −1 00 11−1 = −11To find the minimizing step length λ∗1 along S1, we minimizef (X1+ λ1S1)= f00+ λ1−11= f (−λ1, λ1)= λ21− 2λ1with respect to λ1. Since df/dλ1= 0 at λ∗1= 1, we obtainX2= X1+ λ∗1 S1=00+ 1−11= −11Since∇f2 = ∇f (X2)= −1−1and||∇f2|| = 1.4142 > ε, we proceed to update thematrix [Bi] by read more..


    360Nonlinear Programming II: Unconstrained Optimization Techniques[N1]= −([B1]g1)([B1]g1)TgT1 [B1]g1= −−20{−2 0}4= −144 00 0= −1 00 0[B2]= [B1]+ [M1]+ [N1]=1 00 1+ 12−12−1212+−1 00 0=0.5−0.5−0.5 1.5Iteration 2 (i= 2)The next search direction is determined asS2= −[B2]∇f2 = −0.5−0.5−0.5 1.5−1−1 =01To find the minimizing step length λ∗2 along S2, we minimizef (X2+ λ2S2)= f−11+ λ201= f−11+ λ2= −1 − (1+ λ2)+ 2(−1)2 + read more..


    6.15Broyden – Fletcher–Goldfarb –Shanno Method3613. Find the optimal step length λ∗i in the direction Si and setXi+1= Xi+ λ∗i Si(6.135)4. Test the point Xi+1 for optimality. If||∇fi+1|| ≤ ε, where εis a small quantity,take X∗ ≈ Xi+1 and stop the process. Otherwise, go to step 5.5. Update the Hessian matrix as[Bi+1]= [Bi]+ 1+gTi [Bi]gidTi gididTidTi gi −digTi [Bi]dTi gi−[Bi]gidTidTi gi(6.136)wheredi= Xi+1− Xi= λ∗i Si(6.137)gi= ∇fi+1 − ∇fi = ∇f (Xi+1)− ∇f read more..
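The BFGS inverse update of Eq. (6.136) can be sketched as a small function; the helper name `bfgs_update` is hypothetical:

```python
import numpy as np

def bfgs_update(B, d, g):
    """BFGS inverse-Hessian update, Eq. (6.136):
    B_{i+1} = B + (1 + g^T B g / d^T g) d d^T / (d^T g)
                - (d (B g)^T + (B g) d^T) / (d^T g)."""
    dg = d @ g
    Bg = B @ g
    return (B + (1 + (g @ Bg) / dg) * np.outer(d, d) / dg
              - (np.outer(d, Bg) + np.outer(Bg, d)) / dg)

# Data from the worked example: d1 = (-1, 1), g1 = (-2, 0), [B1] = I
B2 = bfgs_update(np.eye(2), np.array([-1.0, 1.0]), np.array([-2.0, 0.0]))
```

With the example's data this reproduces [B2] = [[1/2, −1/2], [−1/2, 5/2]], matching the matrix obtained in the text's iteration 2.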


    362Nonlinear Programming II: Unconstrained Optimization TechniquesTo find the minimizing step length λ∗1 along S1, we minimizef (X1+ λ1S1)= f00+ λ1−11= f (−λ1,λ1) = λ21− 2λ1with respect to λ1. Since df/dλ1= 0 at λ∗1= 1, we obtainX2= X1+ λ∗1 S1=00+ 1−11= −11Since∇f2 = ∇f (X2)= −1−1and||∇f2|| = 1.4142 > ε, we proceed to update thematrix [Bi] by computingg1= ∇f2 − ∇f1 = −1−1 −1−1 = −20d1= λ∗1S1= 1−11= −11d1dT1= −11{−1 1} read more..


    6.16Test Functions363Iteration 2 (i= 2)The next search direction is determined asS2= −[B2]∇f2 = −12−12−1252−1−1 =02To find the minimizing step length λ∗2 along S2, we minimizef (X2+ λ2S2)= f−11+ λ202= f (−1, 1+ 2λ2)= 4λ22− 2λ2− 1with respect to λ2. Since df/dλ2= 0 at λ∗2=14 , we obtainX3= X2+ λ∗2S2= −11+1402= −132This point can be identified to be optimum since∇f3 =00and||∇f3|| = 0 < ε6.16TEST FUNCTIONSThe efficiency of an optimization read more..


    364Nonlinear Programming II: Unconstrained Optimization Techniques2. A quadratic function:f (x1, x2)= (x1+ 2x2− 7)2 + (2x1+ x2− 5)2(6.141)X1=00,X∗ =13f1= 7.40,f∗ = 0.03. Powell’s quartic function [6.7]:f (x1, x2, x3, x4)= (x1+ 10x2)2 + 5(x3− x4)2+ (x2− 2x3)4 + 10(x1− x4)4(6.142)XT1= {x1 x2 x3 x4}1 = {3 − 1 0 1}, X∗T = {0 0 0 0}f1= 215.0,f∗ = 0.04. Fletcher and Powell’s helical valley [6.21]:f (x1, x2, x3)= 100 [x3− 10θ (x1, x2)]2 + [ x21+ x22− 1]2+ x23(6.143)where2π read more..


    6.17MATLAB Solution of Unconstrained Optimization Problems365X1=0.5−2,X∗ =54,X∗alternate=11.41 . . .−0.8968 . . .f1= 400.5,f∗ = 0.0,f∗alternate= 48.9842 . . .7. Powell’s badly scaled function [6.28]:f (x1, x2)= (10,000x1x2− 1)2 + [exp(−x1) + exp(−x2) − 1.0001]2(6.146)X1=01,X∗ =1.098 . . .× 10−59.106 . . .f1= 1.1354,f*= 0.08. Brown’s badly scaled function [6.29]:f (x1, x2)= (x1− 106)2 + (x2− 2× 10−6)2 + (x1x2− 2)2(6.147)X1=11,X∗ =1062× 10−6f1≈ read more..


SOLUTION

Step 1: Write an M-file objfun.m for the objective function.

function f = objfun(x)
f = 100*(x(2) - x(1)*x(1))^2 + (1 - x(1))^2;

Step 2: Invoke the unconstrained optimization program (write this in a new MATLAB file).

clc
clear all
warning off
x0 = [-1.2, 1.0];   % starting guess
fprintf('The value of the objective function at the starting point\n');
f = objfun(x0)
options = optimset('LargeScale', 'off');
[x, fval] = fminunc(@objfun, x0, options)

This produces …


    References and Bibliography3676.8H. H. Rosenbrock, An automatic method for finding the greatest or least value of afunction, Computer Journal, Vol. 3, No. 3, pp. 175 –184, 1960.6.9S. S. Rao, Optimization: Theory and Applications, 2nd ed., Wiley Eastern, New Delhi,1984.6.10W. Spendley, G. R. Hext, and F. R. Himsworth, Sequential application of simplex designsin optimization and evolutionary operation, Technometrics, Vol. 4, p. 441, 1962.6.11J. A. Nelder and R. Mead, A simplex method for read more..


    368Nonlinear Programming II: Unconstrained Optimization Techniques6.30A. R. Colville, A Comparative Study of Nonlinear Programming Codes, Report 320-2949,IBM New York Scientific Center, 1968.6.31E. D. Eason and R. G. Fenton, A comparison of numerical optimization methods forengineering design, ASME Journal of Engineering Design, Vol. 96, pp. 196 –200, 1974.6.32R.W.H. Sargent and D. J. Sebastian, Numerical experience with algorithms for uncon-strained minimization, pp. 45 –113 in Numerical read more..


    Review Questions3696.16Why is Powell’s method called a pattern search method?6.17What are the roles of univariate and pattern moves in the Powell’s method?6.18What is univariate method?6.19Indicate a situation where a central difference formula is not as accurate as a forwarddifference formula.6.20Why is a central difference formula more expensive than a forward or backward differenceformula in finding the gradient of a function?6.21What is the role of one-dimensional minimization methods read more..


    370Nonlinear Programming II: Unconstrained Optimization TechniquesPROBLEMS6.1A bar is subjected to an axial load, P0, as shown in Fig. 6.17. By using a one-finite-elementmodel, the axial displacement, u(x), can be expressed as [6.1]u(x)= {N1(x) N2(x)}u1u2where Ni(x) are called the shape functions:N1(x)= 1−xl,N2(x)=xland u1 and u2 are the end displacements of the bar. The deflection of the bar at pointQcan be found by minimizing the potential energy of the bar (f ), which can beexpressed read more..


    Problems371Figure 6.18Tapered cantilever beam.Figure 6.19Three-degree-of-freedom spring –mass system.6.4The steady-state temperatures at points 1 and 2 of the one-dimensional fin (x1 and x2)shown in Fig. 6.20 correspond to the minimum of the function [6.1]:f (x1, x2)= 0.6382x21+ 0.3191x22− 0.2809x1x2− 67.906x1− 14.290x2Plot the function fin the (x1, x2) space and identify the steady-state temperatures ofpoints 1 and 2 of the fin. read more..


    372Nonlinear Programming II: Unconstrained Optimization TechniquesFigure 6.20Straight fin.6.5Figure 6.21 shows two bodies, Aand B, connected by four linear springs. The springs areat their natural positions when there is no force applied to the bodies. The displacementsx1 and x2 of the bodies under any applied force can be found by minimizing the potentialenergy of the system. Find the displacements of the bodies when forces of 1000 lb and2000 lb are applied to bodies Aand B, respectively, read more..


    Problems373qFigure 6.22Two-bar truss.(b) Find the steepest descent direction, S1, of fat the trial vector X1=00 .(c) Derive the one-dimensional minimization problem, f (λ), at X1 along the directionS1.(d) Find the optimal step length λ∗ using the calculus method and find the new designvector X2.6.7Three carts, interconnected by springs, are subjected to the loads P1, P2, and P3 as shownin Fig. 6.23. The displacements of the carts can be found by minimizing the potentialenergy of the system read more..


Figure 6.23 Three carts interconnected by springs.

6.9 Plot the contours of the following function in the two-dimensional (x_1, x_2) space over the region (-4 <= x_1 <= 4, -3 <= x_2 <= 6) and identify the optimum point:

    f(x_1, x_2) = 2(x_2 - x_1^2)^2 + (1 - x_1)^2

6.10 Consider the problem

    f(x_1, x_2) = 100(x_2 - x_1^2)^2 + (1 - x_1)^2

Plot the contours of f over the region (-4 <= x_1 <= 4, -3 <= x_2 <= 6) and identify the optimum point.

6.11 It is

    Problems3756.15Find a suitable transformation or scaling of variables to reduce the condition number ofthe Hessian matrix of the following function to one:f= 4x21+ 3x22− 5x1x2− 8x1+ 106.16Determine whether the following vectors serve as conjugate directions for minimizing thefunction f= 2x21+ 16x22− 2x1x2− x1− 6x2− 5.(a) S1=15−1,S2=11(b) S1= −115,S2=116.17Consider the problem:Minimize f= x1− x2+ 2x21+ 2x1x2+ x22Find the solution of this problem in the range−10 ≤ xi≤ 10, read more..


    376Nonlinear Programming II: Unconstrained Optimization Techniques6.27Perform two iterations of the Marquardt’s method to minimize the function given inProblem 6.20 from the stated starting point.6.28Prove that the search directions used in the Fletcher–Reeves method are [A]-conjugatewhile minimizing the functionf (x1, x2)= x21+ 4x226.29Generate a regular simplex of size 4 in a two-dimensional space using each base point:(a)4−3(b)11(c)−1−26.30Find the coordinates of the vertices of a read more..


    Problems3776.35Consider the minimization of the functionf=1x21+ x22+ 2Perform one iteration of Newton’s method from the starting point X1=40usingEq. (6.86). How much improvement is achieved with X2?6.36Consider the problem:Minimize f= 2(x1− x21 )2 + (1− x1)2If a base simplex is defined by the verticesX1=00,X2=10,X3=01find a sequence of four improved vectors using reflection, expansion, and/or contraction.6.37Consider the problem:Minimize f= (x1+ 2x2− 7)2 + (2x1+ x2− 5)2If a base read more..


6.45 Minimize f = 4x_1^2 + 3x_2^2 - 5x_1x_2 - 8x_1 starting from point (0, 0) using Powell's method. Perform four iterations.

6.46 Minimize f(x_1, x_2) = x_1^4 - 2x_1^2 x_2 + x_1^2 + x_2^2 + 2x_1 + 1 by the simplex method. Perform two steps of reflection, expansion, and/or contraction.

6.47 Solve the following system of equations using Newton's method of unconstrained minimization with the starting point X_1 = {0, 0, 0}^T:

    2x_1 - x_2 + x_3 = -1,   x_1 +

6.54 Same as Problem 6.53 for the following function:

    f = (x_2 - x_1^2)^2 + (1 - x_1)^2

6.55 Verify whether the following search directions are [A]-conjugate while minimizing the function

    f = x_1 - x_2 + 2x_1^2 + 2x_1x_2 + x_2^2

    (a) S_1 = {-1, 1}^T,  S_2 = {1, 0}^T
    (b) S_1 = {-1, 1}^T,  S_2 = {0, 1}^T

6.56 Solve the equations x_1 + 2x_2 + 3x_3 = 14, x_1 - x_2 + x_3 = 1, and 3x_1 - 2x_2 + x_3 = 2 using Marquardt's method of unconstrained minimization. Use the starting point X_1 = {0, 0, 0}^T.

6.57 Apply the simplex method to minimize the function f given in

7 Nonlinear Programming III: Constrained Optimization Techniques

7.1 INTRODUCTION

This chapter deals with techniques that are applicable to the solution of the constrained optimization problem:

    Find X which minimizes f(X)

subject to

    g_j(X) <= 0,   j = 1, 2, ..., m
    h_k(X) = 0,    k = 1, 2, ..., p        (7.1)

There are many techniques available for the solution of a constrained nonlinear programming problem. All the methods can be classified into two broad categories: direct methods and indirect methods, as shown in

7.2 Characteristics of a Constrained Problem

Table 7.1 Constrained Optimization Techniques

Direct methods:
  Random search methods
  Heuristic search methods
  Complex method
  Objective and constraint approximation methods:
    Sequential linear programming method
    Sequential quadratic programming method
  Methods of feasible directions

Indirect methods:
  Transformation of variables technique
  Sequential unconstrained minimization techniques:
    Interior penalty function method
    Exterior penalty function method
    Augmented Lagrange multiplier method

    382Nonlinear Programming III: Constrained Optimization TechniquesFigure 7.2Constrained minimum occurring on a nonlinear constraint.3.If the objective function has two or more unconstrained local minima, the con-strained problem may have multiple minima as shown in Fig. 7.3.4.In some cases, even if the objective function has a single unconstrainedminimum, the constraints may introduce multiple local minima as shown inFig. 7.4.A constrained optimization technique must be able to locate the minimum read more..


Figure 7.4 Relative minima introduced by constraints.

Direct Methods

7.3 RANDOM SEARCH METHODS

The random search methods described for unconstrained minimization (Section 6.2) can be used, with minor modifications, to solve a constrained optimization problem. The basic procedure can be described by the following steps:

1. Generate a trial design vector using one random number for each design variable.
2. Verify whether the constraints are satisfied at the trial design vector.

Another procedure involves constructing an unconstrained function, F(X), by adding a penalty for violating any constraint (as described in Section 7.12):

    F(X) = f(X) + a Σ_{j=1}^{m} [G_j(X)]^2 + b Σ_{k=1}^{p} [H_k(X)]^2        (7.4)

where

    [G_j(X)]^2 = [max(0, g_j(X))]^2        (7.5)
    [H_k(X)]^2 = h_k^2(X)                  (7.6)

indicate the squares of violations of the inequality and equality constraints, respectively, and a and b are constants. Equation (7.4) indicates that while minimizing the
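The penalized objective of Eq. (7.4) can be sketched as follows. This is a minimal illustration: the helper name `penalized`, the toy constraint, and the weights a = b = 100 are assumptions for the example, not values from the text.

```python
# Sketch of Eq. (7.4): F(X) = f(X) + a*sum(max(0, g_j(X))^2) + b*sum(h_k(X)^2).
def penalized(f, gs, hs, a=100.0, b=100.0):
    """Build an unconstrained function F from f, inequality constraints gs
    (g_j(X) <= 0) and equality constraints hs (h_k(X) = 0)."""
    def F(X):
        G = sum(max(0.0, g(X)) ** 2 for g in gs)   # Eq. (7.5): only violations count
        H = sum(h(X) ** 2 for h in hs)             # Eq. (7.6)
        return f(X) + a * G + b * H
    return F

# Toy problem: minimize x^2 subject to g(x) = 1 - x <= 0 (i.e. x >= 1)
F = penalized(lambda x: x * x, [lambda x: 1.0 - x], [])
```

A feasible point pays no penalty (F(2) = f(2) = 4), while an infeasible point is charged in proportion to the squared violation (F(0) = 0 + 100·1² = 100), which is what steers the unconstrained minimizer back toward the feasible region.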


7.4 Complex Method

are found one at a time by the use of random numbers generated in the range 0 to 1, as

    x_{i,j} = x_i^{(l)} + r_{i,j} (x_i^{(u)} - x_i^{(l)}),   i = 1, 2, ..., n;  j = 2, 3, ..., k        (7.8)

where x_{i,j} is the ith component of the point X_j, and r_{i,j} is a random number lying in the interval (0, 1). It is to be noted that the points X_2, X_3, ..., X_k generated according to Eq. (7.8) satisfy the side constraints, Eqs. (7.7c), but may not satisfy the constraints given by Eqs. (7.7b). As soon as a new point X_j

4. If at any stage, the reflected point X_r (found in step 3) violates any of the constraints [Eqs. (7.7b)], it is moved halfway in toward the centroid until it becomes feasible, that is,

    (X_r)_new = (1/2)(X_0 + X_r)        (7.12)

This method will progress toward the optimum point as long as the complex has not collapsed into its centroid.

5. Each time the worst point X_h of the current complex is replaced by a new point, the complex gets modified and we
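The halving rule of Eq. (7.12) can be sketched as below. The function name, the cap on halvings, and the toy feasible region are illustrative assumptions.

```python
# Infeasibility correction of the complex method, Eq. (7.12): an infeasible
# reflected point X_r is moved halfway toward the centroid X_0 until all
# constraints g_j(X) <= 0 hold.
def pull_to_centroid(Xr, X0, gs, max_halvings=50):
    """Halve the distance from Xr to the centroid X0 until Xr is feasible."""
    for _ in range(max_halvings):
        if all(g(Xr) <= 0.0 for g in gs):
            return Xr
        Xr = [0.5 * (x0 + xr) for x0, xr in zip(X0, Xr)]  # (X_r)_new = (X_0 + X_r)/2
    return Xr

# Toy feasible region x1 + x2 <= 1, centroid at the origin:
Xr = pull_to_centroid([3.0, 3.0], [0.0, 0.0], [lambda X: X[0] + X[1] - 1.0])
# three halvings bring (3, 3) to the feasible point (0.375, 0.375)
```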


7.5 SEQUENTIAL LINEAR PROGRAMMING

In the sequential linear programming (SLP) method, the solution of the original nonlinear programming problem is found by solving a series of linear programming problems. Each LP problem is generated by approximating the nonlinear objective and constraint functions using first-order Taylor series expansions about the current design vector, X_i. The resulting LP problem is solved using the simplex method to find the new design vector

    388Nonlinear Programming III: Constrained Optimization TechniquesIf gj (Xi+1)≤ εfor j= 1, 2, . . . , m, and |hk(Xi+1)| ≤ ε, k= 1, 2, . . . , p,where εis a prescribed small positive tolerance, all the original constraints canbe assumed to have been satisfied. Hence stop the procedure by takingXopt≃ Xi+1If gj (Xi+1) > εfor some j, or |hk(Xi+1)| > εfor some k, find the most violatedconstraint, for example, asgk(Xi+1)= maxj[gj (Xi+1)](7.19)Relinearize the constraint gk(X)≤ 0 read more..


    7.5Sequential Linear Programming389Figure 7.5Graphical representation of the problem stated by Eq. (7.21).subject toc≤ x≤ d(7.22)The optimum solution of this approximating LP problem can be seen to be x∗ = c.Next, we linearize the constraint g(x)about point cand add it to the previous constraintset. Thus the new LP problem becomesMinimize f (x)= c1x(7.23a)subject toc≤ x≤ d(7.23b)g(c)+dgdx(c)(x− c)≤ 0(7.23c)The feasible region of x, according to the constraints (7.23b) and (7.23c), read more..


    390Nonlinear Programming III: Constrained Optimization TechniquesFigure 7.6Linearization of constraint about c.g(c)+dgdx(c)(x− c)≤ 0(7.24c)g(e)+dgdx(e)(x− e)≤ 0(7.24d )The permissible range of x, according to the constraints (7.24b), (7.24c), and (7.24d),can be seen to be f≤ x≤ dfrom Fig. 7.7. The optimum solution of the LP problemof Eqs. (7.24) can be obtained as x∗ = f.We then linearize g(x)≤ 0 about the present point x∗ = fand add it to theprevious constraint set [Eqs. read more..


Figure 7.7 Linearization of constraint about e.

Example 7.1

    Minimize f(x_1, x_2) = x_1 - x_2

subject to

    g_1(x_1, x_2) = 3x_1^2 - 2x_1x_2 + x_2^2 - 1 <= 0

using the cutting plane method. Take the convergence limit in step 5 as ε = 0.02.

Note: This example was originally given by Kelly [7.4]. Since the constraint boundary represents an ellipse, the problem is a convex programming problem. From graphical representation, the optimum solution of the problem can be identified as x_1^* = 0, x_2^* = 1, and f_min = -1.

subject to

    -2 <= x_1 <= 2,   -2 <= x_2 <= 2        (E1)

The solution of this problem can be obtained as

    X = {-2, 2}^T   with   f(X) = -4

Step 4: Since we have solved one LP problem, we can take

    X_{i+1} = X_2 = {-2, 2}^T

Step 5: Since g_1(X_2) = 23 > ε, we linearize g_1(X) about point X_2 as

    g_1(X) ≈ g_1(X_2) + ∇g_1(X_2)^T (X - X_2) <= 0        (E2)

As

    g_1(X_2) = 23,   ∂g_1/∂x_1 |_{X_2} = (6x_1 - 2x_2)|_{X_2} = -16,   ∂g_1/∂x_2 |_{X_2} = (-2x_1 + 2x_2)|_{X_2} = 8

Eq. (E2) becomes

    g_1(X) ≈ -16x_1 + 8x_2 - 25 <= 0

7.6 Basic Approach in the Methods of Feasible Directions

Table 7.2 Results for Example 7.1

Iteration    New linearized constraint                 Solution X_{i+1} of the     f(X_{i+1})    g_1(X_{i+1})
number, i    considered                                approximating LP problem
1            -2 <= x_1 <= 2 and -2 <= x_2 <= 2         (-2.00000, 2.00000)         -4.00000      23.00000
2            -16.0x_1 + 8.0x_2 - 25.0 <= 0             (-0.56250, 2.00000)         -2.56250       6.19922
3            -7.375x_1 + 5.125x_2 - 8.19922 <= 0       ( 0.27807, 2.00000)         -1.72193       2.11978
4            -2.33157x_1 + 3.44386x_2 - 4.11958 <= 0   (-0.52970, 0.83759)
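The linearization step that generates each cut of the cutting plane method in Example 7.1 can be sketched as below; the helper names are illustrative, and the numbers reproduce the cut generated about the first LP solution (-2, 2).

```python
# One linearization step for Example 7.1:
# g1(X) ≈ g1(Xi) + ∇g1(Xi)^T (X - Xi) <= 0, written as a*x1 + b*x2 + c <= 0.
def g1(x1, x2):
    return 3*x1**2 - 2*x1*x2 + x2**2 - 1

def grad_g1(x1, x2):
    return (6*x1 - 2*x2, -2*x1 + 2*x2)

def linear_cut(x1, x2):
    """Return (a, b, c) of the cut a*x1 + b*x2 + c <= 0 generated about (x1, x2)."""
    a, b = grad_g1(x1, x2)
    c = g1(x1, x2) - a*x1 - b*x2
    return a, b, c

cut = linear_cut(-2.0, 2.0)
# gives (-16.0, 8.0, -25.0), i.e. the constraint -16x1 + 8x2 - 25 <= 0
```

This matches the constraint added at iteration 2 of Table 7.2; repeating the step at each new LP solution produces the remaining rows.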


known as methods of feasible directions. There are many ways of choosing usable feasible directions, and hence there are many different methods of feasible directions. As seen in Chapter 2, a direction S is feasible at a point X_i if it satisfies the relation

    (d/dλ) g_j(X_i + λS) |_{λ=0} = S^T ∇g_j(X_i) <= 0        (7.26)

where the equality sign holds true only if a constraint is linear or strictly concave, as shown in Fig. 2.8. A vector S will be a
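The feasibility test of Eq. (7.26), together with the usability condition S^T ∇f < 0 referred to in the following section, can be sketched as below. The function names and the sample gradients are illustrative assumptions.

```python
# A direction S is feasible at X if S^T ∇g_j(X) <= 0 for every active constraint
# (Eq. 7.26), and usable if in addition S^T ∇f(X) < 0 (f decreases along S).
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def is_usable_feasible(S, grad_f, active_grad_gs):
    feasible = all(dot(S, gg) <= 0.0 for gg in active_grad_gs)
    usable = dot(S, grad_f) < 0.0
    return feasible and usable

# Example: with ∇f = (1, 1) and one active constraint whose gradient is (0, 1),
# S = (-1, -1) both decreases f and points into the feasible region.
ok = is_usable_feasible([-1.0, -1.0], [1.0, 1.0], [[0.0, 1.0]])
```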


7.7 Zoutendijk's Method of Feasible Directions

subject to

    S^T ∇g_j(X_i) + θ_j α <= 0,   j = 1, 2, ..., p        (7.30b)
    S^T ∇f + α <= 0                                       (7.30c)
    -1 <= s_i <= 1,   i = 1, 2, ..., n                    (7.30d)

where s_i is the ith component of S, the first p constraints have been assumed to be active at the point X_i (the constraints can always be renumbered to satisfy this requirement), and the values of all θ_j can be taken as unity. Here α can be taken as an additional design variable.

4. If the value of α* found in step 3 is

    396Nonlinear Programming III: Constrained Optimization TechniquesEqs. (7.27) and (7.28), one would naturally be tempted to choose the “best” possibleusable feasible direction at Xi.Thus we seek to find a feasible direction that, in addition to decreasing the valueof f, also points away from the boundaries of the active nonlinear constraints. Such adirection can be found by solving the following optimization problem. Given the pointXi, find the vector Sand the scalar αthat maximize read more..


    s_1 ∂g_2/∂x_1 + s_2 ∂g_2/∂x_2 + ··· + s_n ∂g_2/∂x_n + θ_2 α <= 0
      ...
    s_1 ∂g_p/∂x_1 + s_2 ∂g_p/∂x_2 + ··· + s_n ∂g_p/∂x_n + θ_p α <= 0
    s_1 ∂f/∂x_1 + s_2 ∂f/∂x_2 + ··· + s_n ∂f/∂x_n + α <= 0        (7.39)
    s_i - 1 <= 0,    i = 1, 2, ..., n
    -1 - s_i <= 0,   i = 1, 2, ..., n

where p is the number of active constraints and the partial derivatives ∂g_1/∂x_1, ∂g_1/∂x_2, ..., ∂g_p/∂x_n, ∂f/∂x_1, ..., ∂f/∂x_n have

    t_1 ∂f/∂x_1 + t_2 ∂f/∂x_2 + ··· + t_n ∂f/∂x_n + α + y_{p+1} = Σ_{i=1}^{n} ∂f/∂x_i
    t_1 + y_{p+2} = 2
    t_2 + y_{p+3} = 2
      ...
    t_n + y_{p+n+1} = 2
    t_1 >= 0,  t_2 >= 0,  ...,  t_n >= 0,  α >= 0

where y_1, y_2, ..., y_{p+n+1} are the nonnegative slack variables. The simplex method discussed in Chapter 3 can be used to solve the direction-finding problem stated in Eqs. (7.40). This problem can also be solved by more sophisticated methods that treat the upper bounds on t_i in a special

    7.7Zoutendijk’s Method of Feasible Directions399Method 1.The optimal step length, λi, can be found by any of the one-dimensionalminimization methods described in Chapter 5. The only drawback with these methodsis that the constraints will not be considered while finding λi. Thus the new pointXi+1= Xi+ λiSi may lie either in the interior of the feasible region (Fig. 7.8a), or onthe boundary of the feasible region (Fig. 7.8b), or in the infeasible region (Fig. 7.8c).If the point Xi+1 lies in read more..


    400Nonlinear Programming III: Constrained Optimization Techniquesthe constraint gj is active if gj (Xi+1)= 10−2, 10−3, 10−8, and so on? We immediatelynotice that a small margin εhas to be specified to detect an active constraint. Thus wecan accept a point Xto be lying on the constraint boundary if |gj(X)| ≤ εwhere εisa prescribed small number. If point Xi+1 lies in the infeasible region, the step lengthhas to be reduced (corrected) so that the resulting point lies in the feasible read more..


that is,

    ε = -(δ/100) |f_1| / f_1'        (7.45)

It is to be noted that the value of ε will always be positive since f_1' given in Eq. (7.44) is always negative. This method yields good results if the percentage reduction (δ) is restricted to small values on the order of 1 to 5.

Termination Criteria

In steps 4 and 5 of the algorithm, the optimization procedure is assumed to have converged whenever the maximum value of α (α*) becomes approximately zero

    402Nonlinear Programming III: Constrained Optimization TechniquesSOLUTIONStep 1:At X1=00 :f (X1)= 8andg1(X1)= −4Iteration 1Step 2:Since g1(X1) <0, we take the search direction asS1= −∇f (X1)= −∂f/∂x1∂f/∂x2X1=44This can be normalized to obtain S1=11 .Step 5:To find the new point X2, we have to find a suitable step length along S1. Forthis, we choose to minimize f (X1+ λS1) with respect to λ. Heref (X1+ λS1)= f (0 + λ,0 + λ)= 2λ2 − 8λ + 8dfdλ= 0atλ= 2Thus the new read more..


    7.7Zoutendijk’s Method of Feasible Directions403subject tot1+ 2t2 + α+ y1= 3−43 t1−43 t2+ α+ y2= −83t1+ y3= 2t2+ y4= 2t1≥ 0t2≥ 0α≥ 0where y1 to y4 are the nonnegative slack variables. Since an initial basic feasiblesolution is not readily available, we introduce an artificial variable y5≥ 0 intothe second constraint equation. By adding the infeasibility form w= y5, theLP problem can be solved to obtain the solution:t∗1= 2,t∗2=310 ,α∗ =410 ,y∗4=1710 ,y∗1= y∗2= read more..


    404Nonlinear Programming III: Constrained Optimization Techniques7.8ROSEN’S GRADIENT PROJECTION METHODThe gradient projection method of Rosen [7.9, 7.10] does not require the solution of anauxiliary linear optimization problem to find the usable feasible direction. It uses theprojection of the negative of the objective function gradient onto the constraints thatare currently active. Although the method has been described by Rosen for a generalnonlinear programming problem, its effectiveness read more..


is the vector of Lagrange multipliers associated with Eqs. (7.54) and β is the Lagrange multiplier associated with Eq. (7.55). The necessary conditions for the minimum are given by

    ∂L/∂S = ∇f(X) + Nλ + 2βS = 0        (7.57)
    ∂L/∂λ = N^T S = 0                   (7.58)
    ∂L/∂β = S^T S - 1 = 0               (7.59)

Equation (7.57) gives

    S = -(1/2β)(∇f + Nλ)        (7.60)

Substitution of Eq. (7.60) into Eq. (7.58) gives

    N^T S = -(1/2β)(N^T ∇f + N^T N λ) = 0        (7.61)

If S is normalized according to Eq. (7.59), β will

If X_i is the starting point for the ith iteration (at which g_{j1}, g_{j2}, ..., g_{jp} are critically satisfied), we find S_i from Eq. (7.66) as

    S_i = -P_i ∇f(X_i) / ||P_i ∇f(X_i)||        (7.67)

where P_i indicates the projection matrix P evaluated at the point X_i. If S_i ≠ 0, we start from X_i and move along the direction S_i to find a new point X_{i+1} according to the familiar relation

    X_{i+1} = X_i + λ_i S_i        (7.68)

where λ_i is the step length along the search
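For a single active linear constraint with normal n, the projection matrix reduces to P = I - n n^T / (n^T n). The sketch below (function name is an assumption) reproduces the matrix P_2 = (1/17)[[16, -4], [-4, 1]] that appears in Example 7.3, where the active constraint normal is N_1 = (1, 4)^T.

```python
# P = I - n n^T / (n^T n): projects any vector onto the plane of the active
# constraint, so the projected steepest-descent direction stays on the boundary.
def projection_matrix_single(n):
    """2-D projection matrix for one active constraint with normal n."""
    nn = n[0] * n[0] + n[1] * n[1]
    return [[1.0 - n[0] * n[0] / nn, -n[0] * n[1] / nn],
            [-n[1] * n[0] / nn, 1.0 - n[1] * n[1] / nn]]

P = projection_matrix_single([1.0, 4.0])
# P = (1/17) [[16, -4], [-4, 1]], and P @ n = 0 as required
```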


    7.8Rosen’s Gradient Projection Method407Figure 7.9Situation when Si= 0 and some λj are negative.and this vector will be a nonzero vector in view of the new computations we havemade. The new approximation Xi+1 is found as usual by using Eq. (7.68). At thenew point Xi+1, a new constraint may become active (in Fig. 7.9, the constraint g3becomes active at the new point Xi+1). In such a case, the new active constrainthas to be added to the set of active constraints to find the new projection read more..


that lies outside the feasible region. Hence the following procedure is generally adopted to find a suitable step length λ_i. Since the constraints g_j(X) are linear, we have

    g_j(λ) = g_j(X_i + λS_i) = Σ_{i=1}^{n} a_{ij}(x_i + λs_i) - b_j
           = Σ_{i=1}^{n} a_{ij} x_i - b_j + λ Σ_{i=1}^{n} a_{ij} s_i
           = g_j(X_i) + λ Σ_{i=1}^{n} a_{ij} s_i,   j = 1, 2, ..., m

    7.8Rosen’s Gradient Projection Method409value ofdfdλ= STi∇f (λ)atλ= λMIf the minimum value of λ, λ∗i , lies in between λ= 0 and λ= λM , the quantitydf/dλ(λM )will be positive. In such a case we can find the minimizing step length λ∗iby interpolation or by using any of the techniques discussed in Chapter 5.An important point to be noted is that if the step length is given by λi (not byλ∗i ), at least one more constraint will be active at Xi+1 than at Xi. These read more..


    410Nonlinear Programming III: Constrained Optimization Techniques6.If Si= 0, find the maximum step length λM that is permissible without violatingany of the constraints as λM= min(λk), λk >0 and kis any integer among 1 tomother than j1, j2, . . . , jp. Also find the value of df/dλ(λM )= STi∇f (Xi+λM Si). If df/dλ(λM )is zero or negative, take the step length as λi= λM . Onthe other hand, if df/dλ(λM )is positive, find the minimizing step length λ∗ieither by interpolation read more..


    7.8Rosen’s Gradient Projection Method411as∇f (X1)=2x1 − 22x2 − 4X1 =0−2The normalized search direction can be obtained asS1=1[(−0.4707)2 + (0.1177)2]1/2−0.47070.1177= −0.97010.2425Step 5:Since S1= 0, we go step 6.Step 6:To find the step length λM , we setX=x1x2= X1+ λS=1.0 − 0.9701λ1.0 + 0.2425λFor j= 2:g2(X)= (2.0 − 1.9402λ) + (3.0 + 0.7275λ) − 6.0 = 0atλ= λ2= −0.8245For j= 3:g3(X)= −(1.0 − 0.9701λ) = 0atλ= λ3= 1.03For j= 4:g4(X)= −(1.0 + 0.2425λ) = read more..


    412Nonlinear Programming III: Constrained Optimization TechniquesStep 7:We obtain the new point X2 asX2= X1+ λ1S1=1.01.0+ 0.2425−0.97010.2425=0.76471.0588Since λ1= λ∗1 and λ∗1 < λM , no new constraint has become active at X2 andhence the matrix N1 remains unaltered.Iteration i = 2Step 3:Since g1(X2)= 0, we set p= 1, j1= 1 and go to step 4.Step 4:N1=14P2=11716 −4− 41f (X2)=2x1 − 22x2 − 4X2 =1.5294 − 2.02.1176 − 4.0= −0.4706−1.8824S2= −P2∇f (X2)=11716 −4− read more..


7.9 Generalized Reduced Gradient Method

By adding a nonnegative slack variable to each of the inequality constraints in Eq. (7.80), the problem can be stated as

    Minimize f(X)        (7.83)

subject to

    h_j(X) + x_{n+j} = 0,   j = 1, 2, ..., m        (7.84)
    h_k(X) = 0,             k = 1, 2, ..., l        (7.85)
    x_i^{(l)} <= x_i <= x_i^{(u)},   i = 1, 2, ..., n        (7.86)
    x_{n+j} >= 0,           j = 1, 2, ..., m        (7.87)

with n + m variables (x_1, x_2, ..., x_n, x_{n+1}, ..., x_{n+m}). The problem can be rewritten in a general form as:

    Minimize f(X)        (7.88)

subject to

    g_j(X) = 0,   j = 1, 2, ..., m + l

Consider the first variations of the objective and constraint functions:

    df(X) = Σ_{i=1}^{n-l} (∂f/∂y_i) dy_i + Σ_{i=1}^{m+l} (∂f/∂z_i) dz_i = ∇_Y^T f dY + ∇_Z^T f dZ        (7.94)

    dg_i(X) = Σ_{j=1}^{n-l} (∂g_i/∂y_j) dy_j + Σ_{j=1}^{m+l} (∂g_i/∂z_j) dz_j

or

    dg = [C] dY + [D] dZ        (7.95)

where

    ∇_Y f = {∂f/∂y_1, ∂f/∂y_2, ..., ∂f/∂y_{n-l}}^T        (7.96)
    ∇_Z f = {∂f/∂z_1, ∂f/∂z_2, ..., ∂f/∂z_{m+l}}^T

    dY = {dy_1, dy_2, ..., dy_{n-l}}^T        (7.100)
    dZ = {dz_1, dz_2, ..., dz_{m+l}}^T        (7.101)

Assuming that the constraints are originally satisfied at the vector X (g(X) = 0), any change in the vector dX must correspond to dg = 0 to maintain feasibility at X + dX. Equation (7.95) can be solved to

    416Nonlinear Programming III: Constrained Optimization Techniquesfixed, in order to havegi(X)+ dgi(X)= 0,i= 1, 2, . . . , m+ l(7.106)we must haveg(X)+ dg(X)= 0(7.107)Using Eq. (7.95) for dgin Eq. (7.107), we obtaindZ= [D]−1(−g(X) − [C]dY)(7.108)The value of dZgiven by Eq. (7.108) is used to update the value of ZasZupdate= Zcurrent+ dZ(7.109)The constraints evaluated at the updated vector X, and the procedure [of finding dZusing Eq. (7.108)] is repeated until dZis sufficiently small. read more..


    7.9Generalized Reduced Gradient Method417can be used for this purpose. For example, if a steepest descent method isused, the vector Sis determined asS= −GR(7.110)5. Find the minimum along the search direction. Although any of the one-dimensional minimization procedures discussed in Chapter 5 can be usedto find a local minimum of falong the search direction S, the followingprocedure can be used conveniently.(a) Find an estimate for λas the distance to the nearest side constraint. Whendesign read more..


    418Nonlinear Programming III: Constrained Optimization TechniquesIf the vector Xnew corresponding to λ∗ is found infeasible, then Ynew is heldconstant and Znew is modified using Eq. (7.108) with dZ= Znew− Zold.Finally, when convergence is achieved with Eq. (7.108), we find thatXnew=Yold+ YZold+ Z(7.116)and go to step 1.Example 7.4Minimize f (x1, x2, x3)= (x1− x2)2 + (x2− x3)4subject tog1(X)= x1(1+ x22 )+ x43− 3 = 0−3 ≤ xi≤ 3,i= 1, 2, 3using the GRG method.SOLUTIONStep 1:We read more..


    7.9Generalized Reduced Gradient Method419we find, at X1,∇Yf = ∂f∂x1∂f∂x2X1=2(−2.6 − 2)−2(−2.6 − 2) + 4(2 − 2)3= −9.29.2∇Zf =∂f∂x3 X1 = {−4(x2 − x3)3}X1 = 0[C] =∂g1∂x1∂g1∂x2 X1 = [5 −10.4][D] =∂g1∂x3 X1 = [32]D−1 = [ 132 ],[D]−1[C] =132 [5 −10.4] = [0.15625 −0.325]GR= ∇Yf − [[D]−1[C]]T∇Zf= −9.29.2−0.15625−0.325(0) = −9.29.2Step 3:Since the components of GR are not zero, the point X1 read more..
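The reduced-gradient evaluation above, G_R = ∇_Y f - ([D]^{-1}[C])^T ∇_Z f of Eq. (7.105), can be sketched as follows for the single-state-variable case of Example 7.4; the function signature is an illustrative assumption, and the numbers are those computed at X_1.

```python
# Reduced gradient with one state variable z: D and grad_Z are scalars,
# C is the row [dg/dy_i], grad_Y is the list of derivatives w.r.t. the
# decision variables.
def reduced_gradient(grad_Y, grad_Z, C, D):
    Dinv_C = [c / D for c in C]                       # [D]^{-1}[C]
    return [gy - dc * grad_Z for gy, dc in zip(grad_Y, Dinv_C)]

GR = reduced_gradient([-9.2, 9.2], 0.0, [5.0, -10.4], 32.0)
# GR = [-9.2, 9.2] because grad_Z f = 0 at X1, even though
# [D]^{-1}[C] = [0.15625, -0.325] is nonzero
```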


    420Nonlinear Programming III: Constrained Optimization Techniques(b) The upper bound on λis given by the smaller of λ1 and λ2, which is equalto 0.5435. By expressingX=Y+ λSZ+ λTwe obtainX= x1x2x3= −2.622 + λ9.2−9.2−4.4275= −2.6 + 9.2λ2 − 9.2λ2 − 4.4275λand hencef (λ)= f (X)= (−2.6 + 9.2λ − 2 + 9.2λ)2+ (2 − 9.2λ − 2 + 4.4275λ)4= 518.7806λ4 + 338.56λ2 − 169.28λ + 21.16df/dλ= 0 read more..


    7.9Generalized Reduced Gradient Method421Since[D] =∂g1∂z1= [4x33] = [4(1.02595)3] = [4.319551]g1(X)= {−2.4684}[C] =∂g1∂y1∂g1∂y2= {[2(−0.576 + 0.024)][−2(−0.576 + 0.024)+ 4(−0.024 − 1.02595)3]}= [−1.104 −3.5258]dZ=14.3195512.4684 − {−1.104 −3.5258}×2.024−2.024 = {−0.5633}we have Znew= Zold+ dZ= {2 − 0.5633} = {1.4367}. The current XnewbecomesXnew=Yold+ dYZold+ dZ= −0.576−0.0241.4367The constraint becomesg1= (−0.576)(1−(−0.024)2) read more..


    422Nonlinear Programming III: Constrained Optimization TechniquesStep 2:We compute the GRG at the current Xusing Eq. (7.105). Since∇Yf = ∂f∂x1∂f∂x2=2(−0.576 + 0.024)−2(−0.576 + 0.024) + 4(−0.024 − 1.2477)3= −1.104−7.1225∇Zf =∂f∂z1=∂f∂x3= {−4(−0.024 − 1.2477)3} = {8.2265}[C] =∂g1∂x1∂g1∂x2 = [(1 + (−0.024)2) 2(−0.576)(−0.024)]= [1.0005760.027648][D] =∂g1∂x3 = [4x33] = [4(1.2477)3] = read more..


7.10 Sequential Quadratic Programming

The extension to include inequality constraints will be considered at a later stage. The Lagrange function, L(X, λ), corresponding to the problem of Eq. (7.117) is given by

    L = f(X) + Σ_{k=1}^{p} λ_k h_k(X)        (7.118)

where λ_k is the Lagrange multiplier for the kth equality constraint. The Kuhn-Tucker necessary conditions can be stated as

    ∇L = 0   or   ∇f + Σ_{k=1}^{p} λ_k ∇h_k = 0   or   ∇f + [A]^T λ = 0        (7.119)

    h_k(X) = 0,   k = 1, 2, ..., p        (7.120)

where [A] is an n × p matrix whose kth column

where [∇²L]_{n×n} denotes the Hessian matrix of the Lagrange function. The first set of equations in (7.125) can be written separately as

    [∇²L]_j ΔX_j + [H]_j Δλ_j = -∇L_j        (7.128)

Using Eq. (7.127) for Δλ_j and Eq. (7.119) for ∇L_j, Eq. (7.128) can be expressed as

    [∇²L]_j ΔX_j + [H]_j (λ_{j+1} - λ_j) = -∇f_j - [H]_j^T λ_j        (7.129)

which can be simplified to obtain

    [∇²L]_j ΔX_j + [H]_j λ_{j+1} = -∇f_j        (7.130)

Equation (7.130) and the second set of
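The Newton-KKT step that these equations describe can be sketched on a small equality-constrained problem. The toy problem below (min f = x1² + x2² subject to h = x1 + x2 - 2 = 0) and the solver name are assumptions for illustration; because both f and h are quadratic/linear, a single step lands exactly on the constrained minimum.

```python
# One Newton step on the KKT system: [[∇²L, H], [Hᵀ, 0]] [ΔX; λ] = [-∇f; -h].
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            fac = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= fac * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

x1, x2 = 0.0, 0.0                        # starting point
KKT = [[2.0, 0.0, 1.0],                  # ∇²L = 2I (f is quadratic), H = (1, 1)ᵀ
       [0.0, 2.0, 1.0],
       [1.0, 1.0, 0.0]]
rhs = [-2 * x1, -2 * x2, -(x1 + x2 - 2.0)]
dx1, dx2, lam = solve(KKT, rhs)
x_new = (x1 + dx1, x2 + dx2)             # (1, 1), the constrained minimum
```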


subject to

    g_j + ∇g_j^T ΔX <= 0,   j = 1, 2, ..., m
    h_k + ∇h_k^T ΔX = 0,    k = 1, 2, ..., p        (7.136)

with the Lagrange function given by

    L̃ = f(X) + Σ_{j=1}^{m} λ_j g_j(X) + Σ_{k=1}^{p} λ_{m+k} h_k(X)        (7.137)

Since the minimum of the augmented Lagrange function is involved, the sequential quadratic programming method is also known as the projected Lagrangian method.

7.10.2 Solution Procedure

As in the case of Newton's method of unconstrained minimization, the solution vector ΔX in Eq. (7.136) is

    426Nonlinear Programming III: Constrained Optimization Techniqueswithλj= |λj|, j= 1, 2, . . . , m+ pin first iterationmax{|λj|,12 (˜λj,|λj|)}in subsequent iterations(7.142)and˜λj = λj of the previous iteration. The one-dimensional step length α∗ can be foundby any of the methods discussed in Chapter 5.Once Xj+1 is found from Eq. (7.140), for the next iteration the Hessian matrix [H ]is updated to improve the quadratic approximation in Eq. (7.138). Usually, a modifiedBFGS formula, read more..


    7.10Sequential Quadratic Programming427We assume the matrix [H1] to be the identity matrix and hence the objective functionof Eq. (7.138) becomesQ(S)= 0.1s1 + 0.05773s2 + 0.5s21 + 0.5s22(E5)Equation (7.139) gives β1= β3= 0 since g1= g3= 0 and β2= 1.0 since g2 <0, andhence the constraints of Eq. (7.138) can be expressed as˜g1 = −0.004254s1 − 0.007069s2 ≤ 0(E6)˜g2 = −5.8765 − s1≤ 0(E7)˜g3 = −s2 ≤ 0(E8)We solve this quadratic programming problem [Eqs. (E5) to (E8)] directly read more..


    428Nonlinear Programming III: Constrained Optimization TechniquesBy using quadratic interpolation technique (unrestricted search method can also be usedfor simplicity), we find that φattains its minimum value of 1.48 at α∗ = 64.93, whichcorresponds to the new design vectorX2=8.76578.8719with f (X2)= 1.38874 and g1(X2)= +0.0074932 (violated slightly). Next we updatethe matrix [H ] using Eq. (7.143) with˜L = 0.1x1 + 0.05773x2 + 12.24500.6x1 +0.3464x2− 0.1∇x ˜L read more..


7.11 Transformation Techniques

that the constraints are satisfied automatically [7.13]. Thus it may be possible to convert a constrained optimization problem into an unconstrained one by making a change of variables. Some typical transformations are indicated below:

1. If lower and upper bounds on x_i are specified as

       l_i <= x_i <= u_i        (7.148)

   these can be satisfied by transforming the variable x_i as

       x_i = l_i + (u_i - l_i) sin^2 y_i        (7.149)

   where y_i is the new variable, which can take any value.

2. If a variable x_i
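The bound transformation of Eq. (7.149) is easy to check numerically; the helper name and sample bounds below are illustrative assumptions.

```python
import math

# Eq. (7.149): x = l + (u - l) * sin^2(y) maps any unconstrained y into [l, u],
# since sin^2(y) always lies in [0, 1].
def to_bounded(y, lo, hi):
    return lo + (hi - lo) * math.sin(y) ** 2

# Whatever y an unconstrained minimizer proposes, x stays inside the bounds:
samples = [to_bounded(y, 2.0, 5.0) for y in (-10.0, 0.0, 0.5, 123.456)]
```

The price of the transformation is that the mapping is many-to-one and its derivative vanishes at the bounds, which is why the text cautions elsewhere that such substitutions can distort the search space.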


    430Nonlinear Programming III: Constrained Optimization Techniquessubject tox1+ x2+ x3≤ 60(E2)x1≤ 36(E3)xi≥ 0,i= 1, 2, 3(E4)By introducing new variables asy1= x1,y2= x2,y3= x1+ x2+ x3(E5)orx1= y1,x2= y2,x3= y3− y1− y2(E6)the constraints of Eqs. (E2) to (E4) can be restated as0 ≤ y1≤ 36,0 ≤ y2≤ 60,0 ≤ y3≤ 60(E7)where the upper bound, for example, on y2 is obtained by setting x1= x3= 0 inEq. (E2). The constraints of Eq. (E7) will be satisfied automatically if we define read more..


7.12 Basic Approach of the Penalty Function Method

unconstrained minimization problems. Let the basic optimization problem, with inequality constraints, be of the form:

    Find X which minimizes f(X)
    subject to g_j(X) <= 0,   j = 1, 2, ..., m        (7.153)

This problem is converted into an unconstrained minimization problem by constructing a function of the form

    φ_k = φ(X, r_k) = f(X) + r_k Σ_{j=1}^{m} G_j[g_j(X)]        (7.154)

where G_j is some function of the constraint g_j, and r_k is a positive constant known as the penalty parameter.

    432Nonlinear Programming III: Constrained Optimization TechniquesFigure 7.10Penalty function methods: (a)exterior method; (b)interior method.It can be seen from Fig. 7.10a that the unconstrained minima of φ(X, rk) convergeto the optimum point X∗ as the parameter rk is increased sequentially. On the otherhand, the interior method shown in Fig. 7.10b gives convergence as the parameter rkis decreased sequentially.There are several reasons for the appeal of the penalty function formulations. read more..


7.13 Interior Penalty Function Method

approached. This behavior can also be seen from Fig. 7.10b. Thus once the unconstrained minimization of φ(X, r_k) is started from any feasible point X_1, the subsequent points generated will always lie within the feasible domain since the constraint boundaries act as barriers during the minimization process. This is why the interior penalty function methods are also known as barrier methods. The φ function defined originally by Carroll [7.14] is

    φ(X, r_k) = f(X) - r_k Σ_{j=1}^{m} 1/g_j(X)

    434Nonlinear Programming III: Constrained Optimization Techniques4.Suitable convergence criteria have to be chosen to identify the optimum point.5.The constraints have to be normalized so that each one of them vary between−1 and 0 only.All these aspects are discussed in the following paragraphs.Starting Feasible Point X1.In most engineering problems, it will not be very difficultto find an initial point X1 satisfying all the constraints, gj (X1) <0. As an example,consider the problem of read more..


    7.13Interior Penalty Function Method4355.If all the constraints are not satisfied at the point XM , set the new starting pointas X1= XM , and renumber the constraints such that the last rconstraints willbe the unsatisfied ones (this value of rwill be different from the previous value),and go to step 2.This procedure is repeated until all the constraints are satisfied and a point X1=XM is obtained for which gj (X1) <0, j= 1, 2, . . . , m.If the constraints are consistent, it should be read more..


1. The relative difference between the values of the objective function obtained at the end of any two consecutive unconstrained minimizations falls below a small number ε_1, that is,

       | [f(X_k^*) - f(X_{k-1}^*)] / f(X_k^*) | <= ε_1        (7.167)

2. The difference between the optimum points X_k^* and X_{k-1}^* becomes very small. This can be judged in several ways. Some of them are given below:

       |(ΔX)_i| <= ε_2        (7.168)

   where ΔX = X_k^* - X_{k-1}^*, and (ΔX)_i is the ith component of ΔX.

effective in reducing the disparities between the contributions of the various constraints to the φ function.

Example 7.7

    Minimize f(x_1, x_2) = (1/3)(x_1 + 1)^3 + x_2

subject to

    g_1(x_1, x_2) = -x_1 + 1 <= 0
    g_2(x_1, x_2) = -x_2 <= 0

SOLUTION To illustrate the interior penalty function method, we use the calculus method for solving the unconstrained minimization problem in this case. Hence there is no need to have an initial feasible point X_1. The φ function is

    φ(X, r) = (1/3)(x_1 + 1)^3 + x_2 - r [1/(-x_1 + 1) + 1/(-x_2)]
            = (1/3)(x_1 + 1)^3 + x_2 + r/(x_1 - 1) + r/x_2

Table 7.3 Results for Example 7.7

Value of r     x_1^*(r) = (r^{1/2} + 1)^{1/2}    x_2^*(r) = r^{1/2}    φ_min(r)     f(r)
1000           5.71164                           31.62278              376.2636     132.4003
100            3.31662                           10.00000               89.9772      36.8109
10             2.04017                            3.16228               25.3048      12.5286
1              1.41421                            1.00000                9.1046       5.6904
0.1            1.14727                            0.31623                4.6117       3.6164
0.01           1.04881                            0.10000                3.2716       2.9667
0.001          1.01569                            0.03162                2.8569       2.7615
0.0001         1.00499                            0.01000                2.7267       2.6967
0.00001        1.00158                            0.00316                2.6856       2.6762
0.000001       1.00050                            0.00100                2.6727       2.6697
Exact solution 1                                  0                                   8/3 ≈ 2.6667
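The closed-form minimizers of Example 7.7 can be checked numerically. Setting ∂φ/∂x_1 = (x_1 + 1)² - r/(x_1 - 1)² = 0 and ∂φ/∂x_2 = 1 - r/x_2² = 0 gives x_1^*(r) = (√r + 1)^{1/2} and x_2^*(r) = √r, which is the sketch below (function names are illustrative):

```python
import math

# Interior penalty function of Example 7.7 in the -r*sum(1/g_j) form:
def phi(x1, x2, r):
    return (x1 + 1.0) ** 3 / 3.0 + x2 + r / (x1 - 1.0) + r / x2

# Stationary point of phi for a given r (from the calculus solution):
def minimizer(r):
    return math.sqrt(math.sqrt(r) + 1.0), math.sqrt(r)

x1s, x2s = minimizer(1.0)
# reproduces the r = 1 row of Table 7.3: x1* ≈ 1.41421, x2* = 1, phi_min ≈ 9.1046
```

As r decreases, (x_1^*, x_2^*) approaches the exact solution (1, 0) from inside the feasible region and φ_min approaches f^* = 8/3, exactly the trend visible down the rows of Table 7.3.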

  • Page - 456
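The closed-form minimizers x1*(r) and x2*(r) reproduce Table 7.3 directly. A minimal Python sketch (the function name is mine; the formulas are the ones derived in Example 7.7):

```python
def phi_minimizer(r):
    """Closed-form unconstrained minimum of the interior-penalty function
    for Example 7.7,
        phi(X, r) = (1/3)(x1 + 1)^3 + x2 - r[1/(-x1 + 1) + 1/(-x2)],
    obtained by setting d(phi)/dx1 = d(phi)/dx2 = 0."""
    x1 = (1.0 + r ** 0.5) ** 0.5        # from (x1^2 - 1)^2 = r
    x2 = r ** 0.5                       # from 1 - r/x2^2 = 0
    f = (x1 + 1.0) ** 3 / 3.0 + x2      # original objective
    phi = f + r / (x1 - 1.0) + r / x2   # penalized function at the minimum
    return x1, x2, phi, f

# Decreasing sequence of r values, as in Table 7.3:
for r in (1000.0, 1.0, 0.01, 0.000001):
    x1, x2, phi, f = phi_minimizer(r)
    print(f"r = {r:<10g} x1 = {x1:.5f}  x2 = {x2:.5f}  "
          f"phi = {phi:.4f}  f = {f:.4f}")
# As r -> 0 the iterates stay strictly feasible (x1 > 1, x2 > 0) and
# approach the exact optimum x1* = 1, x2* = 0, f* = 8/3.
```

In a problem without closed-form minimizers, each φ minimization would instead be carried out by an unconstrained method such as DFP, as the text describes.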

Table 7.4  Results for Example 7.8

k    Value of rk     Starting point for           Iterations to    Optimum X*k                    φ*k         f*k
                     minimizing φk                minimize φk
1    1.0 × 10^0      —                            9                (0.37898, 1.67965, 2.34617)    10.36219    5.70766
2    1.0 × 10^−1     (0.37898, 1.67965, 2.34617)  7                (0.10088, 1.41945, 1.68302)     4.12440    2.73267
3    1.0 × 10^−2     (0.10088, 1.41945, 1.68302)  5                (0.03066, 1.41411, 1.49842)     2.25437    1.83012
4    1.0 × 10^−3     …                            …                …                               …           …

    440Nonlinear Programming III: Constrained Optimization TechniquesProof: If X∗ is the optimum solution of the constrained problem, we have to provethatlimrk→0[min φ(X, rk)]= φ(X∗k , rk)= f (X∗)(7.177)Since f (X)is continous and f (X∗) ≤ f (X)for all feasible points X, we can choosefeasible point˜Xsuch thatf (˜X) < f (X∗) +ε2(7.178)for any value of ε >0. Next select a suitable value of k, say K, such thatrk≤ε2mminj−1gj (˜X)(7.179)From the definition of the read more..


    7.13Interior Penalty Function Method441By using inequalities (7.178) and (7.186), inequality (7.185) becomesf (X∗) ≤ φ(X∗k , rk) < f (X∗) +ε2+ε2= f (X∗) + εorφ(X∗k , rk)− f (X∗) < ε(7.187)Given any ε >0 (however small it may be), it is possible to choose a value of kso asto satisfy the inequality (7.187). Hence as k→ ∞(rk → 0), we havelimrk→0φ(X∗k , rk)= f (X∗)This completes the proof of the theorem.Additional Results.From the proof above, it follows read more..


    442Nonlinear Programming III: Constrained Optimization TechniquesCanceling the common terms from both sides, we can write the inequality (7.193) asf (X∗k+1)1rk+1 −1rk< f (X∗k )1rk+1 −1rk(7.194)since1rk+1 −1rk =rk− rk+1rkrk+1>0(7.195)we obtainf (X∗k+1) < f (X∗k )(7.196)7.14CONVEX PROGRAMMING PROBLEMIn Section 7.13 we saw that the sequential minimization ofφ(X, rk)= f (X)− rkmj=11gj (X),rk >0(7.197)for a decreasing sequence of values of rk gives the minima X∗k . As read more..


7.15  Exterior Penalty Function Method    443

SUMT method is a global one. In such cases one has to be satisfied with a local minimum only. However, one can always reapply the SUMT method from different feasible starting points and try to find a better local minimum point if the problem has several local minima. Of course, this procedure requires more computational effort.

7.15  EXTERIOR PENALTY FUNCTION METHOD

In the exterior penalty function method, the φ function is generally taken as

    φ(X, rk) = f(X) + rk Σ(j=1 to m) ⟨gj(X)⟩^q        (7.199)

where ⟨gj(X)⟩ = max[gj(X), 0], rk is a positive penalty parameter, and the exponent q is a nonnegative constant.

Figure 7.11  A φ function discontinuous for q = 0.
Figure 7.12  Derivatives of a φ function discontinuous for 0 < q < 1.

3. q = 1. In this case, under certain restrictions, it has been shown by Zangwill [7.16] that there exists an r0 so large that the minimum of φ(X, rk) is exactly the constrained minimum of the original problem for all rk > r0. However, the contours of the φ function look similar to those shown in Fig. 7.12, and the first derivatives of φ remain discontinuous along the constraint boundaries.

Figure 7.13  A φ function for q > 1.

4. q > 1. The φ function will have continuous first derivatives in this case, as shown in Fig. 7.13. These derivatives are given by

    ∂φ/∂xi = ∂f/∂xi + rk Σ(j=1 to m) q ⟨gj(X)⟩^(q−1) ∂gj(X)/∂xi        (7.202)

Generally, the value of q is chosen as 2 in practical computation. We assume a value of q > 1 in the subsequent discussion of this method.

Algorithm.  The exterior penalty function method can be stated by the following steps:

1. Start from any design vector X1 (feasible or infeasible) and a suitable value of r1. Set k = 1.

Example 7.9  Minimize f(x1, x2) = (1/3)(x1 + 1)^3 + x2 subject to

    g1(x1, x2) = 1 − x1 ≤ 0
    g2(x1, x2) = −x2 ≤ 0

SOLUTION  To illustrate the exterior penalty function method, we solve the unconstrained minimization problem by using the differential calculus method. As such, it is not necessary to have an initial trial point X1. The φ function is

    φ(X, r) = (1/3)(x1 + 1)^3 + x2 + r[max(0, 1 − x1)]^2 + r[max(0, −x2)]^2

The necessary conditions for the unconstrained minimum of φ, ∂φ/∂x1 = 0 and ∂φ/∂x2 = 0, give (for x1 < 1 and x2 < 0)

    (x1 + 1)^2 = 2r(1 − x1)    and    x2 = −1/(2r)

whose solutions for a sequence of increasing r values are listed in Table 7.5.

7.16  Extrapolation Techniques in the Interior Penalty Function Method    447

Table 7.5  Results for Example 7.9

Value of r    x1*          x2*           φmin(r)      fmin(r)
0.001         −0.93775     −500.00000    −249.9962    −500.0000
0.01          −0.80975      −50.00000     −24.9650     −49.9977
0.1           −0.45969       −5.00000      −2.2344      −4.9474
1              0.23607       −0.50000       0.9631       0.1295
10             0.83216       −0.05000       2.3068       2.0001
100            0.98039       −0.00500       2.6249       2.5840
1,000          0.99800       −0.00050       2.6624       2.6582
10,000         0.99963       −0.00005       2.6655       2.6652
∞              1              0              8/3          8/3

Convergence Proof.  To prove the convergence of the exterior penalty function method, …
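Table 7.5 can likewise be reproduced from the stationarity conditions of the exterior-penalty φ. A small sketch (names are mine; note that for every finite r the minimizer is infeasible, so the constrained optimum is approached from outside):

```python
import math

def phi_minimizer(r):
    """Closed-form unconstrained minimum of the exterior-penalty function
    for Example 7.9 (q = 2),
        phi(X, r) = (1/3)(x1 + 1)^3 + x2
                    + r[max(0, 1 - x1)]^2 + r[max(0, -x2)]^2.
    For finite r the minimizer has x1 < 1 and x2 < 0, so stationarity
    reduces to (x1 + 1)^2 = 2r(1 - x1) and 1 + 2r*x2 = 0."""
    x1 = -(1.0 + r) + math.sqrt(r * r + 4.0 * r)   # root with x1 < 1
    x2 = -1.0 / (2.0 * r)
    f = (x1 + 1.0) ** 3 / 3.0 + x2
    phi = f + r * (1.0 - x1) ** 2 + r * x2 ** 2
    return x1, x2, phi, f

for r in (0.001, 1.0, 1000.0):
    print(phi_minimizer(r))
# As r -> infinity the iterates approach x1* = 1, x2* = 0, f* = 8/3
# from outside the feasible region, with f increasing monotonically.
```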

    448Nonlinear Programming III: Constrained Optimization TechniquesX∗1, X∗2, . . . , X∗k converges to the minimum point X∗, and the sequence f∗1 , f∗2 , . . . , f∗kto the minimum value f∗ of the original constrained problem stated in Eq. (7.153) asrk→ 0. After carrying out a certain number of unconstrained minimizations of φ, theresults obtained thus far can be used to estimate the minimum of the original constrainedproblem by a method known as the extrapolation technique. The read more..


From Eqs. (7.206) and (7.208), the extrapolated value of the true minimum can be obtained as

    X*(r = 0) = A0 = (X*k − c X*k−1) / (1 − c)        (7.209)

The extrapolation technique [Eq. (7.203)] has several advantages:

1. It can be used to find a good estimate of the optimum of the original problem with the help of Eq. (7.205).
2. It can be used to provide an additional convergence criterion to terminate the minimization process.

    450Nonlinear Programming III: Constrained Optimization Techniques7.16.2Extrapolation of the Function fAs in the case of the design vector, it is possible to use extrapolation techniqueto estimate the optimum value of the original objective function, f∗. For this, letf∗1 , f∗2 , . . . , f∗kbe the values of the objective function corresponding to the vectorsX∗1, X∗2, . . . , X∗k . Since the points X∗1, X∗2, . . . , X∗k have been found to be the uncon-strained minima of the read more..


7.17  Extended Interior Penalty Function Methods    451

Example 7.10  Find the extrapolated values of X and f in Example 7.8 using the results of minimization of φ(X, r1) and φ(X, r2).

SOLUTION  From the results of Example 7.8, we have for r1 = 1.0,

    X*1 = (0.37898, 1.67965, 2.34617)T,    f*1 = 5.70766

and for r2 = 0.1, c = 0.1,

    X*2 = (0.10088, 1.41945, 1.68302)T,    f*2 = 2.73267

By using Eq. (7.206) for approximating X*(r), the extrapolated vector X* is given by Eq. (7.209) as

    X* ≃ (X*2 − c X*1) / (1 − c)
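The linear extrapolation of Eq. (7.209) applied to these data can be sketched as follows (the same linear formula is applied to f here for illustration; the text also develops a separate, higher-order extrapolation for f):

```python
def extrapolate_linear(r1, v1, r2, v2):
    """Linear extrapolation to r = 0 (Eq. 7.209): assuming v(r) ~ A0 + r*A1
    and writing c = r2/r1, the value at r = 0 is A0 = (v2 - c*v1)/(1 - c).
    Works componentwise for vectors as well as for scalars."""
    c = r2 / r1
    return [(b - c * a) / (1.0 - c) for a, b in zip(v1, v2)]

# Data of Example 7.10 (results of Example 7.8 for r1 = 1.0 and r2 = 0.1):
X1, f1 = [0.37898, 1.67965, 2.34617], 5.70766
X2, f2 = [0.10088, 1.41945, 1.68302], 2.73267

X_star = extrapolate_linear(1.0, X1, 0.1, X2)
f_star = extrapolate_linear(1.0, [f1], 0.1, [f2])[0]
print(X_star)   # approximately (0.0700, 1.3905, 1.6093)
print(f_star)   # approximately 2.4021
```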

function is constructed as follows:

    φk = φ(X, rk) = f(X) + rk Σ(j=1 to m) g̃j(X)        (7.222)

where

    g̃j(X) = −1/gj(X)                if gj(X) ≤ ε
    g̃j(X) = −[2ε − gj(X)]/ε^2       if gj(X) > ε        (7.223)

and ε is a small negative number that marks the transition from the interior penalty [gj(X) ≤ ε] to the extended penalty [gj(X) > ε]. To produce a sequence of improved feasible designs, the value of ε is to be selected such that the function φk possesses a positive slope at the constraint boundary.
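The two branches of Eq. (7.223) meet with equal value (−1/ε) and equal slope (1/ε^2) at gj = ε, which is what keeps φk and its first derivatives continuous across the transition. A quick numerical check (function name is mine):

```python
def g_tilde(g, eps):
    """Linear extended penalty term of Eq. (7.223); eps < 0 marks the
    transition from the interior branch to the extended (linear) branch."""
    if g <= eps:
        return -1.0 / g                        # ordinary interior penalty
    return -(2.0 * eps - g) / (eps * eps)      # linear extension

eps = -0.1
print(g_tilde(eps - 1e-9, eps))   # interior branch, about -1/eps = 10
print(g_tilde(eps + 1e-9, eps))   # extended branch, also about 10
# Unlike -1/g, g_tilde is defined (and finite) even when g > 0, so the
# phi function no longer blows up if an iterate leaves the feasible region.
```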

    7.18Penalty Function Method for Problems with Mixed Equality and Inequality Constraints453Figure 7.14Linear extended penalty function method.Eq. (7.225) compared to Eq. (7.222). The concept of extended interior penalty functionapproach can be generalized to define a variable penalty function method from whichthe linear and quadratic methods can be derived as special cases [7.24].Example 7.11Plot the contours of the φk function using the linear extended interiorpenalty function for the read more..


    454Nonlinear Programming III: Constrained Optimization TechniquesFigure 7.15Graphs of φk.methods that can be used to solve a general class of problems.Minimize f (X)subject togj (X)≤ 0,j= 1, 2, . . . , mlj (X)= 0,j= 1, 2, . . . , p(7.227)7.18.1Interior Penalty Function MethodSimilar to Eq. (7.154), the present problem can be converted into an unconstrainedminimization problem by constructing a function of the formφk= φ(X, rk)= f (X)+ rkmj=1Gj [gj (X)] + H (rk)pj=1l2j (X)(7.228)where Gj is read more..


    7.18Penalty Function Method for Problems with Mixed Equality and Inequality Constraints455H (rk)→ ∞, the quantitypj=1l2j (X)must tend to zero. Ifpj=1l2j (X)does not tend tozero, φk would tend to infinity, and this cannot happen in a sequential minimizationprocess if the problem has a solution. Fiacco and McCormick [7.17, 7.21] used thefollowing form of Eq. (7.228):φk= φ(X, rk)= f (X)− rkmj=11gj (X)+1√rkpj=1l2j (X)(7.229)If φk is minimized for a decreasing sequence of values rk, the read more..


    456Nonlinear Programming III: Constrained Optimization TechniquesAs in the case of Eq. (7.199), this function has to be minimized for an increasingsequence of values of rk. It can be proved that as rk→ ∞, the unconstrained minima,X∗k , of φ(X, rk) converge to the minimum of the original constrained problem statedin Eq. (7.227).7.19PENALTY FUNCTION METHOD FOR PARAMETRICCONSTRAINTS7.19.1Parametric ConstraintIn some optimization problems, a particular constraint may have to be satisfied read more..


    7.19Penalty Function Method for Parametric Constraints457Figure 7.17Output angles generated and desired.Figure 7.18Rectangular plate under arbitrary stated as a parametric constraint as|σ(x, y)| − σmax≤ 0,0 ≤ x≤ a,0 ≤ y≤ b(7.233)Thus this constraint has to be satisfied at all the values of parameters xand y.7.19.2Handling Parametric ConstraintsOne method of handling a parametric constraint is to replace it by a number of ordinaryconstraints asgj (X, θi)≤ 0,i= 1, 2, . . read more..


    458Nonlinear Programming III: Constrained Optimization TechniquesAnother method of handling the parametric constraints is to construct the φfunctionin a different manner as follows [7.1, 7.15].Interior Penalty Function Methodφ(X, rk)= f (X)− rkmj=1θuθl1gj (X, θ )dθ(7.235)The idea behind using the integral in Eq. (7.235) for a parametric constraint is tomake the integral tend to infinity as the value of the constraint gj (X, θ )tends to zeroeven at one value of θin its range. If a read more..


    7.20Augmented Lagrange Multiplier Method459Figure 7.19Numerical integration procedure.where ris the number of discrete values of θ, andθis the uniform spacing betweenthe discrete values so thatθ1= θl,θ2= θ1+ θ,θ3= θ1+ 2 θ, . . . , θr= θ1+ (r− 1) θ= θuIf gj (X, θ )cannot be expressed as a closed-form function of X, the derivative ∂gj /∂xioccurring in Eq. (7.236) has to be evaluated by using some form of a finite-differenceformula.Exterior Penalty Function Methodφ(X, rk)= f read more..


subject to

    hj(X) = 0,    j = 1, 2, . . . , p,    p < n        (7.240)

The Lagrangian corresponding to Eqs. (7.239) and (7.240) is given by

    L(X, λ) = f(X) + Σ(j=1 to p) λj hj(X)        (7.241)

where λj, j = 1, 2, . . . , p, are the Lagrange multipliers. The necessary conditions for a stationary point of L(X, λ) include the equality constraints, Eq. (7.240). The exterior penalty function approach is used to define the new objective function A(X, λ, rk), termed the augmented Lagrangian function, as

    A(X, λ, rk) = f(X) + Σ(j=1 to p) λj hj(X) + rk Σ(j=1 to p) hj^2(X)

where rk is the penalty parameter. For fixed multipliers λj, A is minimized with respect to X; as the λj approach their optimum values, the unconstrained minima of A approach the solution of the original constrained problem.

where X(k) denotes the starting vector used in the minimization of A. The value of rk is updated as

    rk+1 = c rk,    c > 1        (7.247)

The function A is then minimized with respect to X to find X*(k+1), and the iterative process is continued until convergence is achieved for λj(k) or X*. If the value of rk+1 exceeds a prespecified maximum value rmax, it is set equal to rmax. The iterative process is indicated as a flow diagram in Fig. 7.20.

Figure 7.20  Flowchart of the augmented Lagrange multiplier method.
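The iterative loop of Fig. 7.20 can be sketched on a small instance. The data below are those of Problem 7.40 at the end of this chapter (minimize (x1 − 1)^2 + (x2 − 2)^2 subject to −x1 + 2x2 = 2, with rk held fixed at 1 and λ(1) = 0); because A is quadratic here, each unconstrained minimization reduces to a 2 × 2 linear solve, whereas a general problem would call a routine such as DFP or BFGS:

```python
def alm_equality(r=1.0, lam=0.0, iters=30):
    """Augmented Lagrange multiplier method for Problem 7.40:
      minimize (x1 - 1)^2 + (x2 - 2)^2
      subject to h(X) = -x1 + 2*x2 - 2 = 0,
    using A = f + lam*h + r*h^2 and the first-order multiplier update
    lam <- lam + 2*r*h(X*)."""
    x1 = x2 = 0.0
    for _ in range(iters):
        # A is quadratic, so grad A = 0 is a 2x2 linear system;
        # solve it by Cramer's rule instead of an iterative minimizer.
        a11, a12, b1 = 2.0 + 2.0 * r, -4.0 * r, 2.0 + lam - 4.0 * r
        a21, a22, b2 = -4.0 * r, 2.0 + 8.0 * r, 4.0 - 2.0 * lam + 8.0 * r
        det = a11 * a22 - a12 * a21
        x1 = (b1 * a22 - a12 * b2) / det
        x2 = (a11 * b2 - a21 * b1) / det
        lam += 2.0 * r * (-x1 + 2.0 * x2 - 2.0)   # multiplier update
    return x1, x2, lam

print(alm_equality())   # converges to x* = (1.2, 1.6), lam* = 0.4
```

Note that convergence is obtained here without driving rk to infinity, which is the practical advantage of the ALM method over pure penalty approaches.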

7.20.2  Inequality-Constrained Problems

Consider the following inequality-constrained problem:

    Minimize f(X)        (7.248)

subject to

    gj(X) ≤ 0,    j = 1, 2, . . . , m        (7.249)

To apply the ALM method, the inequality constraints of Eq. (7.249) are first converted to equality constraints as

    gj(X) + yj^2 = 0,    j = 1, 2, . . . , m        (7.250)

where the yj^2 are the slack variables. Then the augmented Lagrangian function is constructed as

    A(X, λ, Y, rk) = f(X) + Σ(j=1 to m) λj [gj(X) + yj^2] + rk Σ(j=1 to m) [gj(X) + yj^2]^2        (7.251)

where the vector Y = {y1, y2, . . . , ym}T contains the slack variables.
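For the inequality case, a one-variable sketch (my own toy problem, not from the text) shows the roles of the slack elimination αj = max[gj, −λj/(2rk)], as used in Section 7.20.3, and of the multiplier update:

```python
def tern_min(fun, lo, hi, tol=1e-10):
    """Ternary search for the minimizer of a unimodal one-variable function."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if fun(m1) < fun(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

def alm_inequality(r=1.0, lam=0.0, iters=40):
    """ALM for  min x^2  s.t.  g(x) = 1 - x <= 0  (true optimum x* = 1).
    After eliminating the slack variable, the term entering A is
        alpha = max(g(x), -lam/(2*r)),
    and A(x) = x^2 + lam*alpha + r*alpha^2, with lam <- lam + 2*r*alpha."""
    x = 0.0
    for _ in range(iters):
        def A(t):
            alpha = max(1.0 - t, -lam / (2.0 * r))
            return t * t + lam * alpha + r * alpha * alpha
        x = tern_min(A, -1.0, 3.0)                 # inner minimization
        lam += 2.0 * r * max(1.0 - x, -lam / (2.0 * r))
    return x, lam

print(alm_inequality())   # approaches x* = 1, lam* = 2
```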

7.20.3  Mixed Equality–Inequality-Constrained Problems

Consider the following general optimization problem:

    Minimize f(X)        (7.255)

subject to

    gj(X) ≤ 0,    j = 1, 2, . . . , m        (7.256)
    hj(X) = 0,    j = 1, 2, . . . , p        (7.257)

This problem can be solved by combining the procedures of the two preceding sections. The augmented Lagrangian function, in this case, is defined as

    A(X, λ, rk) = f(X) + Σ(j=1 to m) λj αj + Σ(j=1 to p) λm+j hj(X)
                  + rk Σ(j=1 to m) αj^2 + rk Σ(j=1 to p) hj^2(X)        (7.258)

where αj is given by

    αj = max[ gj(X), −λj/(2rk) ],    j = 1, 2, . . . , m

    464Nonlinear Programming III: Constrained Optimization TechniquesTable 7.6Results for Example 7.12λ(i)r kx∗(i)1x∗(i)2Value of read more..


7.21  Checking the Convergence of Constrained Optimization Problems    465

iterative process and using the solution with confidence. In addition to the convergence criteria discussed earlier, the following two methods can also be used to test the point for optimality.

7.21.1  Perturbing the Design Vector

Since the optimum point X* = {x1*, x2*, . . . , xn*}T corresponds to the minimum function value subject to the satisfaction of the constraints gj(X*) ≤ 0, j = 1, 2, . . . , m, any slight perturbation of X* that remains feasible should not be able to produce a smaller objective function value.

test for the satisfaction of these conditions before taking a point X as optimum. Equations (2.73) can be written as

    Σ(j∈J1) λj ∂gj/∂xi = −∂f/∂xi,    i = 1, 2, . . . , n        (7.261)

where J1 indicates the set of active constraints at the point X. If gj1(X) = gj2(X) = · · · = gjp(X) = 0, Eqs. (7.261) can be expressed as

    G(n×p) λ(p×1) = F(n×1)        (7.262)

where the columns of G are the gradients of the active constraints, with entries ∂gjk/∂xi, and F = −∇f. Since Eq. (7.262) is usually an overdetermined system, the multipliers can be found in the least-squares sense as λ = (GT G)−1 GT F; if all the computed λj are nonnegative, the Kuhn–Tucker conditions are satisfied at X.
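The multiplier test can be sketched for the simplest case of a single active constraint (the example problem below is hypothetical, chosen so the arithmetic is transparent):

```python
def multiplier_check(grad_f, grad_g):
    """Least-squares estimate of the Lagrange multiplier for a single
    active constraint (p = 1), following Section 7.21.2:
    solve G*lam = F with F = -grad f, i.e. lam = (G^T G)^{-1} G^T F,
    then check lam >= 0 for the Kuhn-Tucker conditions."""
    F = [-fi for fi in grad_f]
    gtg = sum(g * g for g in grad_g)
    gtf = sum(g * f for g, f in zip(grad_g, F))
    return gtf / gtg

# Hypothetical check: minimize (x1-1)^2 + (x2-1)^2 s.t. x1 + x2 - 1 <= 0.
# At the candidate X* = (0.5, 0.5): grad f = (-1, -1), grad g = (1, 1).
lam = multiplier_check([-1.0, -1.0], [1.0, 1.0])
print(lam)   # lam = 1.0 >= 0, so X* satisfies the optimality conditions
```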

    7.22Test Problems4677.22TEST PROBLEMSAs discussed in previous sections, a number of algorithms are available for solvinga constrained nonlinear programming problem. In recent years, a variety of computerprograms have been developed to solve engineering optimization problems. Many ofthese are complex and versatile and the user needs a good understanding of the algo-rithms/computer programs to be able to use them effectively. Before solving a newengineering design optimization problem, we usually read more..


    468Nonlinear Programming III: Constrained Optimization Techniqueswhere σi is the stress induced in member i, σ (u) the maximum permissible stress intension, σ (l) the maximum permissible stress in compression, x(l)ithe lower boundon xi, and x(u)ithe upper bound on xi. The stresses are given byσ1(X)= Px2+√2x1√2x21+ 2x1x2σ2(X)= P1x1+√2x2σ3(X)= −Px2√2x21+ 2x1x2Data: σ (u) = 20, σ (l) = −15, x(l)i= 0.1(i = 1, 2), x(u)i= 5.0(i = 1, 2), P= 20,and E= 1.Optimum read more..


    7.22Test Problems469Figure 7.22A 25-bar space truss [7.38].= sum of deflections of nodes 1 and 2f3(X)= −ω1 = negative of fundamental natural frequency of vibrationwhere δix= deflection of node ialong xdirection.Table 7.7Loads Acting on the 25-Bar TrussJoint1236Load condition 1, loads in poundsFx0000Fy20,000−20,00000Fz−5,000−5,00000Load condition 2, loads in poundsFx1,0000500500Fy10,00010,00000Fz−5,000−5,00000 read more..


    470Nonlinear Programming III: Constrained Optimization TechniquesConstraints:|σij(X)| ≤ σmax,i= 1, 2, . . . ,25,j= 1, 2σij (X)≤ pi(X),i= 1, 2, . . . ,25,j= 1, 2x(l)i≤ xi≤ x(u)i,i= 1, 2, . . . ,8where σij is the stress induced in member iunder load condition j, x(l)ithe lowerbound on xi, and x(u)ithe upper bound on xi.Data: σmax= 40,000 psi, x(l)i= 0.1 in2, x(u)i= 5.0 in2 for i= 1, 2, . . . ,25.Optimum solution:See Table Beam DesignThe welded beam shown in Fig. 7.23 read more..


    7.22Test Problems471lLPhhbtFigure 7.23Welded beam [7.39].Objective function: f (X)= 1.10471x21 x2+ 0.04811x3x4(14.0 + x2)Constraints:g1(X)= τ (X)− τmax≤ 0g2(X)= σ (X)− σmax≤ 0g3(X)= x1− x4≤ 0g4(X)= 0.10471x21 + 0.04811x3x4(14.0 + x2)− 5.0 ≤ 0g5(X)= 0.125 − x1≤ 0g6(X)= δ(X)− δmax≤ 0g7(X)= P− Pc(X)≤ 0g8(X) to g11(X) : 0.1 ≤ xi≤ 2.0,i= 1, 4g12(X) to g15(X) : 0.1 ≤ xi≤ 10.0,i= 2, 3whereτ (X)= (τ′)2 + 2τ′τ′′x22R+ (τ′′)2τ′ read more..


    472Nonlinear Programming III: Constrained Optimization TechniquesJ= 2x1x2√2x2212+x1+ x322σ (X)=6P Lx4x23δ(X)=4P L3Ex33 x4Pc(X)=4.013 EG(x23 x64 /36)L21 −x32LE4GData: P= 6000 lb, L= 14 in., E= 30 × 106 psi, G= 12 × 106 psi, τmax=13,600 psi, σmax= 30,000 psi, and δmax= 0.25 in.Starting and optimum solutions:Xstart = hltb= in.,fstart = $5.3904,X∗ = read more..


    7.22Test Problems473Objective(minimization of weight of speed reducer):f (X)= 0.7854x1x22 (3.3333x23 + 14.9334x3 − 43.0934) − 1.508x1(x26+ x27 )+ 7.477(x36 + x37 )+ 0.7854(x4x26+ x5x27 )Constraints:g1(x)=27x−11 x−22 x−13≤ 1g2(x)=397.5x−11 x−22 x−23≤ 1g3(x)=1.93x−12 x−13 x34 x−46≤ 1g4(x)=1.93x−12 x−13 x35 x−47≤ 1g5(x)=745x4x2x32+ (16.9)1060.50.1x36≤ 1100g6(x)=745x5x2x32+ (157.5)1060.50.1x37 ≤ 850g7(x)=x2x3 ≤ 40g8(x) : 5 ≤x1x2 ≤ 12 : g9(x)g10(x) : 2.6 read more..


    474Nonlinear Programming III: Constrained Optimization TechniquesConstraints:g1(X)= 0.0025(x4 + x6)− 1 ≤ 0g2(X)= 0.0025(−x4 + x5+ x7)− 1 ≤ 0g3(X)= 0.01(−x5 + x8)− 1 ≤ 0g4(X)= 100x1 − x1x6+ 833.33252x4 − 83,333.333 ≤ 0g5(X)= x2x4− x2x7− 1250x4 + 1250x5 ≤ 0g6(X)= x3x5− x3x8− 2500x5 + 1,250,000 ≤ 0g7 : 100 ≤ x1≤ 10,000 : g8g9 : 1000 ≤ x2≤ 10,000 : g10g11 : 1000 ≤ x3≤ 10,000 : g12g13 to g22 : 10 ≤ xi≤ 1000,i= 4, 5, . . . ,8Optimumsolution:X∗ = {567 read more..


7.23  MATLAB Solution of Constrained Optimization Problems    475

Step 3: Invoke constrained optimization program (write this in a new MATLAB file).

clc
clear all
warning off
x0 = [.1, .1, 3.0];   % Starting guess
fprintf('The values of function value and constraints at starting point\n');
f = objfun(x0)
[c, ceq] = constraints(x0)
options = optimset('LargeScale', 'off');
[x, fval] = fmincon(@objfun, x0, [], [], [], [], [], [], @constraints, options)
fprintf('The values of constraints at optimum solution\n');
[c, ceq] = constraints(x)

    476Nonlinear Programming III: Constrained Optimization Techniques-0.0000-0.0000-3.58580-1.4142-1.4142ceq =[]REFERENCES AND BIBLIOGRAPHY7.1R. L. Fox, Optimization Methods for Engineering Design, Addison-Wesley, Reading, MA,1971.7.2M. J. Box, A new method of constrained optimization and a comparison with othermethods, Computer Journal, Vol. 8, No. 1, pp. 42–52, 1965.7.3E. W. Cheney and A. A. Goldstein, Newton’s method of convex programming andTchebycheff approximation, Numerische Mathematik, read more..


    References and Bibliography4777.18D. Kavlie and J. Moe, Automated design of frame structure, ASCE Journal of the Struc-tural Division, Vol. 97, No. ST1, pp. 33–62, Jan. 1971.7.19J. H. Cassis and L. A. Schmit, On implementation of the extended interior penalty func-tion, International Journal for Numerical Methods in Engineering, Vol. 10, pp. 3–23,1976.7.20R. T. Haftka and J. H. Starnes, Jr., Application of a quadratic extended interior penaltyfunction for structural optimization, AIAA read more..


    478Nonlinear Programming III: Constrained Optimization Techniques7.39K. M. Ragsdell and D. T. Phillips, Optimal design of a class of welded structuresusing geometric programming, ASME Journal of Engineering for Industry, Vol. 98,pp. 1021–1025, 1976.7.40J. Golinski, An adaptive optimization system applied to machine synthesis, Mechanismand Machine Synthesis, Vol. 8, pp. 419–436, 1973.7.41H. L. Li and P. Papalambros, A production system for use of global optimization knowl-edge, ASME Journal read more..


    Review Questions479(h)The solutions of all LP problems in the SLP method lie in the infeasible domain ofthe original problem.(i)The SLP method is applicable to both convex and nonconvex problems.(j)The usable feasible directions can be generated using random numbers.(k)The usable feasible direction makes an obtuse angle with the gradients of all theconstraints.(l)If the starting point is feasible, all subsequent unconstrained minima will be feasiblein the exterior penalty function method.(m)The read more..


    480Nonlinear Programming III: Constrained Optimization Techniques7.21Construct the φk function to be used for a mixed equality–inequality constrained problemin the interior penalty function approach.7.22What is a parametric constraint?7.23Match the following methods:(a)Zoutendijk methodHeuristic method(b)Cutting plane methodBarrier method(c)Complex methodFeasible directions method(d)Projected Lagrangian methodSequential linear programming method(e)Penalty function methodGradient projection read more..


    Problems4817.4Consider the tubular column described in Example 1.1. Starting from the design vector(d = 8.0 cm, t= 0.4 cm), complete two steps of reflection, expansion, and/or contractionof the complex method.7.5Consider the problem:Minimize f (X)= x1− x2subject to3x21− 2x1x2 + x22− 1 ≤ 0(a)Generate the approximating LP problem at the vector, X1= −22 .(b)Solve the approximating LP problem using graphical method and find whether theresulting solution is feasible to the original read more..


    482Nonlinear Programming III: Constrained Optimization Techniques7.9Minimize f (X)= 9x21+ 6x22+ x23− 18x1 − 12x2 − 6x3 − 8subject tox1+ 2x2 + x3≤ 4xi≥ 0,i= 1, 2, 3Using the starting point X1= {0,0,0}T, complete one step of sequential linear program-ming method.7.10Complete one cycle of the sequential linear programming method for the truss ofSection 7.22.1 using the starting point, X1=11 .7.11A flywheel is a large mass that can store energy during coasting of an engine and feedit read more..


    Problems4837.14Consider the problem:Minimize f= (x1−1)2 + (x2−5)2subject tog1= −x21+ x2− 4 ≤ 0g2= −(x1 − 2)2 + x2− 3 ≤ 0Formulate the direction-finding problem at Xi= −15as a linear programming problem(in Zoutendijk method).7.15Minimize f (X)= (x1− 1)2 + (x2− 5)2subject to−x21+ x2≤ 4−(x1 − 2)2 + x2≤ 3starting from the point X1=11and using Zoutendijk’s method. Complete twoone-dimensional minimization steps.7.16Minimize f (X)= (x1− 1)2 + (x2− 2)2 − read more..


    484Nonlinear Programming III: Constrained Optimization Techniques7.19Approximate the following problem as a quadratic programming problem at (x1 = 1,x2= 1):Minimize f= x21+ x22− 6x1 − 8x2 + 15subject to4x21+ x22≤ 163x21+ 5x22≤ 15xi≥ 0,i= 1, 27.20Consider the truss structure shown in Fig. 7.25. The minimum weight design of the trusssubject to a constraint on the deflection of node Salong with lower bounds on the crosssectional areas of members can be started as follows:Minimize f= read more..


    Problems485subject to0 ≤ x10 ≤ x2≤x1√30 ≤ x1+√3x2 ≤ 67.23Construct the φk function, according to (a)interior and (b)exterior penalty functionmethods and plot its contours for the following problem:Maximize f= 2xsubject to2 ≤ x≤ 107.24Construct the φk function according to the exterior penalty function approach and completethe minimization of φk for the following problem.Minimize f (x)= (x− 1)2subject tog1(x)= 2 − x≤ 0,g2(x)= x− 4 ≤ 07.25Plot the contours of the φk read more..


    486Nonlinear Programming III: Constrained Optimization Techniques7.28Solve the following problem using an interior penalty function approach coupled with thecalculus method of unconstrained minimization:Minimize f= x2 − 2x − 1subject to1 − x≥ 0Note:Sequential minimization is not necessary.7.29Consider the problem:Minimize f= x21+ x22− 6x1 − 8x2 + 15subject to4x21+ x22≥ 16,3x1 + 5x2 ≤ 15Normalize the constraints and find a suitable value of r1 for use in the interior read more..


    Problems487Starting point forUnconstrainedValue ofminimization ofminimum ofkrkφ (X, rk)φ (X, rk)= X∗kf (X∗k )= f∗k11(−0.4597, −5.0)(0.2361, −0.5)0.1295210(0.2361, −0.5)(0.8322, −0.05)2.0001Estimate the optimum solution, X∗ and f∗, using a suitable extrapolation technique.7.35The results obtained in an exterior penalty function method of solution for the optimizationproblem stated in Problem 7.15 are given below:r1= 0.01,X∗1= − 0.80975−50.0,φ∗1= −24.9650, f∗1= read more..


    488Nonlinear Programming III: Constrained Optimization TechniquesFigure 7.26Two-bar truss subjected to a parametric load.7.40Solve the following optimization problem using the augmented Lagrange multipliermethod keeping rp= 1 throughout the iterative process and λ(1)= 0:Minimize f= (x1− 1)2 + (x2− 2)2subject to−x1 + 2x2 = 27.41Consider the problem:Minimize f= (x1− 1)2 + (x2− 5)2subject tox1+ x2− 5 = 0(a)Write the expression for the augmented Lagrange function with rp= 1.(b)Start read more..


    Problems489Determine whether the solutionX= 0√2√2is optimum by finding the values of the Lagrange multipliers.7.43Determine whether the solutionX= 0√2√2is optimum for the problem considered in Example 7.8 using a perturbation method withxi= 0.001, i= 1, 2, 3.7.44The following results are obtained during the minimization off (X)= 9 − 8x1 − 6x2 − 4x3 + 2x21+ 2x22+ x23+ 2x1x2 + 2x1x3subject tox1+ x2+ 2x3 ≤ 3xi≥ 0,i= 1, 2, 3using the interior read more..


    490Nonlinear Programming III: Constrained Optimization Techniques7.45Find the extrapolated solution of Problem 7.44 by using quadratic relations for X(r)andf (r).7.46Give a proof for the convergence of exterior penalty function method.7.47Write a computer program to implement the interior penalty function method withthe DFP method of unconstrained minimization and the cubic interpolation method ofone-dimensional search.7.48Write a computer program to implement the exterior penalty function read more..


    Problems4917.55Find the solution of the following problem using the MATLAB function fminconwiththe starting point: X1= {1.0, 1.0}T:Minimize f (X)= x21+ x22subject to4 − x1− x22≤ 03x2 − x1≤ 0− 3x2 − x1≤ 0 read more..


    8Geometric Programming8.1INTRODUCTIONGeometric programming is a relatively new method of solving a class of nonlinearprogramming problems. It was developed by Duffin, Peterson, and Zener [8.1]. It isused to minimize functions that are in the form of posynomials subject to constraints ofthe same type. It differs from other optimization techniques in the emphasis it places onthe relative magnitudes of the terms of the objective function rather than the variables.Instead of finding optimal values read more..


8.4  Solution Using Differential Calculus    493

is a second-degree polynomial in the variables x1, x2, and x3 (the coefficients of the various terms are real), while

    g(x1, x2, x3) = x1 x2 x3 + x1^2 x2 + 4 x3 + 2/(x1 x2) + 5 x3^(−1/2)

is a posynomial. If the natural formulation of the optimization problem does not lead to posynomial functions, geometric programming techniques can still be applied to solve the problem by replacing the actual functions by a set of empirically fitted posynomials over a wide range of the variables.

    494Geometric ProgrammingBy multiplying Eq. (8.4) by xk, we can rewrite it asxk∂f∂xk =Nj=1akj (cjxa1j1xa2j2·· · xak−1,jk−1xakjkxak+1,jk+1 · ·· xanjn)=Nj=1akj Uj (X)= 0,k= 1, 2, . . . , n(8.5)To find the minimizing vectorX∗ =x∗1x∗2...x∗nwe have to solve the nequations given by Eqs. (8.4), simultaneously. To ensure that thepoint X∗ corresponds to the minimum of f(but not to the maximum or the stationarypoint of X), the read more..


Equations (8.7) are called the orthogonality conditions and Eq. (8.9) is called the normality condition. To obtain the minimum value of the objective function f*, the following procedure can be adopted. Consider

    f* = (f*)^1 = (f*)^(Δ1* + Δ2* + · · · + ΔN*) = (f*)^Δ1* (f*)^Δ2* · · · (f*)^ΔN*        (8.10)

Since

    f* = U1*/Δ1* = U2*/Δ2* = · · · = UN*/ΔN*        (8.11)

from Eq. (8.8), Eq. (8.10) can be rewritten as

    f* = (U1*/Δ1*)^Δ1* (U2*/Δ2*)^Δ2* · · · (UN*/ΔN*)^ΔN*        (8.12)

    496Geometric ProgrammingDegree of Difficulty.The quantity N− n− 1 is termed a degree of difficultyingeometric programming. In the case of a constrained geometric programming problem,Ndenotes the total number of terms in all the posynomials and nrepresents the numberof design variables. If N− n− 1 = 0, the problem is said to have a zero degree ofdifficulty. In this case, the unknowns∗j (j = 1, 2, . . . , N) can be determined uniquelyfrom the orthogonality and normality conditions. If read more..


    8.4Solution Using Differential Calculus497These equations, in the case of problems with a zero degree of difficulty, give a uniquesolution to w1, w2, . . . , wn. Once wi are found, the desired solution can be obtained asx∗i= ewi ,i= 1, 2, . . . , n(8.19)In a general geometric programming problem with a nonnegative degree of difficulty,N≥ n+ 1, and hence Eqs. (8.18) denote Nequations in nunknowns. By choosingany nlinearly independent equations, we obtain a set of solutions wi and hence read more..


Figure 8.1  Open rectangular box.

that is,

    Δ1 + Δ3 − Δ4 = 0        (E2)
    Δ1 + Δ2 − Δ4 = 0        (E3)
    Δ2 + Δ3 − Δ4 = 0        (E4)
    Δ1 + Δ2 + Δ3 + Δ4 = 1   (E5)

From Eqs. (E2) and (E3), we obtain

    Δ4 = Δ1 + Δ3 = Δ1 + Δ2    or    Δ2 = Δ3        (E6)

Similarly, Eqs. (E3) and (E4) give us

    Δ4 = Δ1 + Δ2 = Δ2 + Δ3    or    Δ1 = Δ3        (E7)

Equations (E6) and (E7) yield

    Δ1 = Δ2 = Δ3

while Eq. (E6) gives

    Δ4 = Δ1 + Δ3 = 2Δ1

Finally, Eq. (E5) leads to the unique solution

    Δ1* = Δ2* = Δ3* = 1/5    and    Δ4* = 2/5

Thus the optimal value of the objective function can be found from Eq. (8.13) as

    f* = [80/(1/5)]^(1/5) [40/(1/5)]^(1/5) [20/(1/5)]^(1/5) [80/(2/5)]^(2/5)
       = (400)^(1/5) (200)^(1/5) (100)^(1/5) (200)^(2/5) = 200

It can be seen that the minimum total cost has been obtained before finding the optimal size of the box. To find the optimal values of the design variables, let us write Eqs. (8.14) as

    U1* = 80 x1* x2* = Δ1* f* = (1/5)(200) = 40        (E8)
    U2* = 40 x2* x3* = Δ2* f* = (1/5)(200) = 40        (E9)
    U3* = 20 x1* x3* = Δ3* f* = (1/5)(200) = 40        (E10)
    U4* = 80/(x1* x2* x3*) = Δ4* f* = (2/5)(200) = 80  (E11)

From these equations, we obtain

    x2* = 1/(2 x1*), …

or

    x1* = 1

Finally, we can obtain x3* by adding Eqs. (E14), (E15), and (E16) as

    w3 = ln 1 + ln 2 + ln 1 = ln 2 = ln x3*    or    x3* = 2

It can be noticed that there are four equations, Eqs. (E13) to (E16), in three unknowns w1, w2, and w3. However, not all of them are linearly independent. In this case, only the first three equations are linearly independent, and the fourth equation, (E16), can be obtained by adding Eqs. (E13), (E14), and (E15) and dividing the result by 2.
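The whole zero-degree-of-difficulty computation of Example 8.1 fits in a few lines (names are mine; the weights Δj* and the recovery relations are those of Eqs. (8.13) and (8.14)):

```python
import math

def gp_dual_value(c, deltas):
    """Dual objective for a zero-degree-of-difficulty posynomial,
    Eq. (8.13): f* = product over j of (c_j / delta_j)^delta_j."""
    return math.exp(sum(d * math.log(cj / d) for cj, d in zip(c, deltas)))

# Example 8.1: f = 80*x1*x2 + 40*x2*x3 + 20*x1*x3 + 80/(x1*x2*x3)
c = [80.0, 40.0, 20.0, 80.0]
deltas = [0.2, 0.2, 0.2, 0.4]       # unique solution of (E2)-(E5)

f_star = gp_dual_value(c, deltas)   # = 200, found before any x is known

# Recover the design variables from U_j* = delta_j* f*  (Eqs. 8.14):
x1x2 = deltas[0] * f_star / 80.0    # x1*x2 = 0.5
x2x3 = deltas[1] * f_star / 40.0    # x2*x3 = 1.0
x1x3 = deltas[2] * f_star / 20.0    # x1*x3 = 2.0
x1 = math.sqrt(x1x2 * x1x3 / x2x3)  # = 1
x2 = x1x2 / x1                      # = 0.5
x3 = x1x3 / x1                      # = 2
print(f_star, x1, x2, x3)
```

This mirrors the order of the hand computation: the optimum cost emerges from the dual weights alone, and the box dimensions follow afterward.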

    8.6Primal–Dual Relationship and Sufficiency Conditions in the Unconstrained Case501=c111c222· · ·CNNNni=1xai1i1ni=1xai2i2· ··ni=1xaiNiN=c111c222· · ·cNNNxNj=1a1j j1xNj=1a2j j2· ·· xNj=1anj jn(8.24)If we select the weightsj so as to satisfy the normalization condition, Eq. (8.21),and also the orthogonality relationsNj=1aij j= 0,i= 1, 2, . . . , n(8.25)Eq. (8.24) reduces toU111U222·· ·UNNN=c111c222·· ·cNNN(8.26)Thus the inequality (8.22) becomesU1+ U2+ · read more..


    502Geometric ProgrammingIn this section we prove that f∗ = v∗ and also that f∗ corresponds to the globalminimum of f (X). For convenience of notation, let us denote the objective functionf (X)by x0 and make the exponential transformationewi = xiorwi= ln xi,i= 0, 1, 2, . . . , n(8.30)where the variables wi are unrestricted in sign. Define the new variablesj , alsotermed weights, asj=Ujx0 =cjni=1xaijix0,j= 1, 2, . . . , N(8.31)which can be seen to be positive and satisfy the relationNj=1j= read more..


    8.6Primal–Dual Relationship and Sufficiency Conditions in the Unconstrained Case503wherew=w0w1...wn,=12...N,λ=λ0λ1...λN(8.37)with λdenoting the vector of Lagrange multipliers. At the stationary point of L, wehave∂L∂wi = 0,i= 0, 1, 2, . . . , n∂L∂ j = 0,j= 1, 2, . . . , N∂L∂λi = 0,i= 0, 1, 2, . . . , N(8.38)These read more..


    504Geometric ProgrammingBy substituting Eq. (8.45) into Eq. (8.36), we obtainL( , w)= −Nj=1j lnjcj + (1 − w0)Nj=1j− 1+ni=1wiNj=1aij j(8.46)The function given in Eq. (8.46) can be considered as the Lagrangian function cor-responding to a new optimization problem whose objective function ˜v( )is given by˜v( )= −Nj=1j lnjcj = lnNj=1cjjj(8.47)and the constraints byNj=1j− 1 = 0(8.48)Nj=1aij j= 0,i= 1, 2, . . . , n(8.49)This problem will be the dual for read more..


    8.6Primal–Dual Relationship and Sufficiency Conditions in the Unconstrained Case505Primal and Dual Problems.We saw that geometric programming treats the prob-lem of minimizing posynomials and maximizing product functions. The minimizationproblems are called primal programsand the maximization problems are called dualprograms. Table 8.1 gives the primal and dual programs corresponding to an uncon-strained minimization problem.Computational Procedure.To solve a given unconstrained minimization read more..
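When the degree of difficulty is zero, the computational procedure above amounts to one linear solve for the weights followed by a log-linear solve for the variables. A sketch using the four-term posynomial f = 80 x1 x2 + 40 x2 x3 + 20 x1 x3 + 80/(x1 x2 x3) worked earlier in the chapter (numpy is assumed to be available):

```python
import numpy as np

# Exponent matrix a[i][j] (row i = variable, column j = term) for
# f = 80*x1*x2 + 40*x2*x3 + 20*x1*x3 + 80/(x1*x2*x3)
a = np.array([[1, 0, 1, -1],
              [1, 1, 0, -1],
              [0, 1, 1, -1]], dtype=float)
c = np.array([80.0, 40.0, 20.0, 80.0])

N, n = 4, 3
assert N - n - 1 == 0  # zero degree of difficulty: the weights are unique

# Normality (weights sum to 1) stacked on the orthogonality conditions
A = np.vstack([np.ones(N), a])
delta = np.linalg.solve(A, np.array([1.0, 0.0, 0.0, 0.0]))

# Maximum of the dual = minimum of the primal, Eq. (8.26)
f_star = np.prod((c / delta) ** delta)

# Recover the design variables from U_j* = delta_j* f* by solving for ln x
w, *_ = np.linalg.lstsq(a.T, np.log(delta * f_star / c), rcond=None)
x = np.exp(w)
print(delta, f_star, x)
```

For this example the solve returns the weights (1/5, 1/5, 1/5, 2/5), the minimum cost 200, and the design point (1, 1/2, 2), matching the values quoted in Section 8.4.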


    506Geometric ProgrammingThe pumping cost is given by (300Q2/D5). Find the optimal size of the pipe and theamount of fluid handled for minimum overall cost.SOLUTIONf (D, Q)= 100D1Q0 + 50D2Q0 + 20D0Q−1 + 300D−5Q2(E1)Here we can see thatc1= 100, c2= 50, c3= 20, c4= 300a11 a12 a13 a14a21 a22 a23 a24=1 20 −50 0 −1 2The orthogonality and normality conditions are given by1 20 −50 0 −1 21 1111234= read more..


    8.6Primal–Dual Relationship and Sufficiency Conditions in the Unconstrained Case507Since ln vis expressed as a function of4 alone, the value of4 that maximizes ln vmust be unique (because the primal problem has a unique solution). The necessarycondition for the maximum of ln vgives∂∂ 4(ln v)= −11[ln100 − ln(2 − 11 4)]+ (2 − 11 4)112 − 11 4+ 8 [ln 50 − ln(8 4− 1)] + (8 4− 1) −88 4− 1+ 2 [ln 20 − ln(2 4)]+ 2 4−22 4+ 1 [ln 300 − ln( 4)]+ 4−14= 0This gives after read more..


The optimum values of the design variables can be found from

U1* = Δ1* f* = (0.385)(242) = 92.2
U2* = Δ2* f* = (0.175)(242) = 42.4
U3* = Δ3* f* = (0.294)(242) = 71.1
U4* = Δ4* f* = (0.147)(242) = 35.6    (E4)

From Eqs. (E1) and (E4), we have

U1* = 100 D* = 92.2
U2* = 50 D*^2 = 42.4
U3* = 20/Q* = 71.1
U4* = 300 Q*^2 / D*^5 = 35.6

These equations can be solved to find the desired solution D* = 0.922 cm, Q* = 0.281 m^3/s.

8.7 CONSTRAINED MINIMIZATION

Most engineering ...
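Since Δ1, Δ2, and Δ3 are linear in Δ4, the maximization of ln v can also be carried out by a brute-force one-dimensional search instead of the calculus above. A sketch (the grid resolution and tolerances are arbitrary choices):

```python
import math

# Dual weights expressed via normality and orthogonality in terms of delta4:
# delta1 = 2 - 11*delta4, delta2 = 8*delta4 - 1, delta3 = 2*delta4
c = [100.0, 50.0, 20.0, 300.0]

def log_v(d4):
    d = [2 - 11 * d4, 8 * d4 - 1, 2 * d4, d4]
    return sum(dj * math.log(cj / dj) for cj, dj in zip(c, d))

# Positivity of all four weights requires 1/8 < delta4 < 2/11
best_d4 = max((i * 1e-5 for i in range(12501, 18182)), key=log_v)
f_star = math.exp(log_v(best_d4))
print(best_d4, f_star)  # near 0.147 and 242, as in Eq. (E4)
```

The grid search reproduces the stationary point found analytically to within the grid resolution.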


    8.8Solution of a Constrained Geometric Programming Problem509and akij (k= 1, 2, . . . , m; i= 1, 2, . . . , n; j= 1, 2, . . . , Nk) are any real numbers,mindicates the total number of constraints, N0 represents the number of terms inthe objective function, and Nk denotes the number of terms in the kth constraint.The design variables x1, x2, . . . , xn are assumed to take only positive values inEqs. (8.52) and (8.53). The solution of the constrained minimization problem statedabove is considered read more..


If the function f(X) is known to possess a minimum, the stationary value f* given by Eq. (8.59) will be the global minimum of f since, in this case, there is a unique solution for λ*.

The degree of difficulty of the problem (D) is defined as

D = N − n − 1    (8.60)

where N denotes the total number of posynomial terms in the problem:

N = Σ (k = 0 to m) Nk    (8.61)

If the problem has a positive degree of difficulty, the linear Eqs. (8.57) and (8.58) can be used to express any (n + 1) of the ...


    8.9Primal and Dual Programs in the Case of Less-Than Inequalities511gk(X)≤ 1, the signum functions σk are all equal to +1, and the objective functiong0(X) will be a strictly convex function of the transformed variables w1, w2, . . . , wn,wherexi= ewi ,i= 0, 1, 2, . . . , n(8.64)In this case, the following primal–dual relationship can be shown to be valid:f (X)≥ f∗ ≡ v∗ ≥ v(λ)(8.65)Table 8.2 gives the primal and the corresponding dual programs. The following char-acteristics can read more..


    512Geometric ProgrammingTable 8.2Corresponding Primal and Dual ProgramsPrimal programDual programFind X=x1x2...xnso thatg0(X)≡ f (X)→ minimumsubject to the constraintsx1 >0x2 >0...xn >0,g1(X)≤ 1g2(X)≤≤ 1,Find read more..


    8.9Primal and Dual Programs in the Case of Less-Than Inequalities513Table 8.2(continued )Primal programDual programthe exponents akij are real numbers, andthe coefficients ckj are positive numbers.N0j=1λ0j= 1mk=0Nkj=1akij λkj= 0,i= 1, 2, . . . , nthe factors ckj are positive, and thecoefficients akij are real numbers.Terminologyg0= f= primal functionx1, x2, . . . , xn= primal variablesgk≤ 1 are primal constraints(k= 1, 2, . . . , m)xi >0, i= 1, 2, . . . , npositive restrictions.n= read more..


    514Geometric Programminga021λ01+ a022λ02+ a023λ03+ a121λ11= 0a031λ01+ a032λ02+ a033λ03+ a131λ11= 0(E2)λ0j≥ 0,j= 1, 2, 3λ11≥ 0In this problem, c01= 20, c02= 40, c03= 80, c11= 8, a011= 1, a021= 0, a031= 1,a012= 0, a022= 1, a032= 1, a013= 1, a023= 1, a033= 0, a111= −1, a121= −1, anda131= −1. Hence Eqs. (E1) and (E2) becomev(λ)=20λ01(λ01+ λ02+ λ03)λ0140λ02(λ01+ λ02+ λ03)λ02×80λ03(λ01+ λ02+ λ03)λ038λ11λ11λ11(E3)subject toλ01+ λ02+ λ03= 1λ01+ λ03− λ11= read more..


    8.9Primal and Dual Programs in the Case of Less-Than Inequalities515λ∗03=c03(x∗1 )a013(x∗2 )a023(x∗3 )a033x∗013=80(x∗1 )(x∗2 )480=x∗1 x∗26(E7)λ∗11λ∗11 = c11(x∗1)a111(x∗2 )a121(x∗3 )a1311 = 8(x∗1 )−1(x∗2 )−1(x∗3 )−1 =8x∗1 x∗2 x∗3(E8)Equations (E5) to (E8) givex∗1= 2,x∗2= 1,x∗3= 4Example 8.4 One-degree-of-difficulty ProblemMinimize f= x1x22 x−13+ 2x−11 x−32 x4+ 10x1x3subject to3x−11 x3x−24+ 4x3x4 ≤ 15x1x2 ≤ 1SOLUTIONHere N0= 3, read more..
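Because this problem has zero degree of difficulty, the dual variables come from a single linear system, after which relations of the form of Eqs. (E5) to (E8) recover the design variables. A sketch of those two steps (numpy assumed):

```python
import numpy as np

# Objective terms 20*x1*x3 + 40*x2*x3 + 80*x1*x2; constraint 8/(x1*x2*x3) <= 1
c0 = np.array([20.0, 40.0, 80.0])
# Rows: normality over the objective weights, then orthogonality for x1, x2, x3.
# Columns: lam01, lam02, lam03, lam11.
A = np.array([[1.0, 1.0, 1.0,  0.0],
              [1.0, 0.0, 1.0, -1.0],
              [0.0, 1.0, 1.0, -1.0],
              [1.0, 1.0, 0.0, -1.0]])
lam = np.linalg.solve(A, np.array([1.0, 0.0, 0.0, 0.0]))
lam0, lam11 = lam[:3], lam[3]

# Dual objective (Table 8.2); the single-term constraint contributes c11**lam11
f_star = np.prod((c0 * lam0.sum() / lam0) ** lam0) * 8.0 ** lam11

# Recover X from lam0j* = (j-th objective term at the optimum) / f*:
#   x1*x3 = lam01*f*/20, x2*x3 = lam02*f*/40, x1*x2 = lam03*f*/80
t = lam0 * f_star / c0
x1 = np.sqrt(t[0] * t[2] / t[1])
x2 = t[2] / x1
x3 = t[0] / x1
print(lam, f_star, (x1, x2, x3))
```

The solve gives λ0* = (1/3, 1/3, 1/3), λ11* = 2/3, f* = 480, and the design point (2, 1, 4) quoted above.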


    516Geometric Programminga142= 1, a211= 1, a221= 1, a231= 0, and a241= 0, Eqs. (E1) becomeMaximize v(λ)=c01λ01(λ01+ λ02+ λ03)λ01c02λ02(λ01+ λ02+ λ03)λ02×c03λ03(λ01+ λ02+ λ03)λ03c11λ11(λ11+ λ12)λ11×c12λ12(λ11+ λ12)λ12c21λ21λ21λ21subject toλ01+ λ02+ λ03= 1a011λ01+ a012λ02+ a013λ03+ a111λ11+ a112λ12+ a211λ21= 0a021λ01+ a022λ02+ a023λ03+ a121λ11+ a122λ12+ a221λ21= 0a031λ01+ a032λ02+ a033λ03+ a131λ11+ a132λ12+ a231λ21= 0a041λ01+ a042λ02+ a043λ03+ read more..


    8.9Primal and Dual Programs in the Case of Less-Than Inequalities517λ12= λ01− λ03− λ11(E7)λ12= 2λ11 − λ02(E8)From Eqs. (E7) and (E8), we haveλ12= λ01− λ03− λ11= 2λ11 − λ023λ11 − λ02+ λ03= λ01(E9)Adding Eqs. (E5) and (E9), we obtainλ21= 4λ11 − 2λ01(E10)= 3λ02 − 2λ01from Eq. (E6)λ11=34 λ02(E11)Substitution of Eq. (E11) in Eq. (E8) givesλ12=32 λ02− λ02=12 λ02(E12)Equations (E11), (E12), and (E7) giveλ03= λ01− λ11− λ12= λ01−34 λ02−12 read more..


    518Geometric ProgrammingTo find the maximum of v, we set the derivative of vwith respect to λ01 equal tozero. To simplify the calculations, we set d (ln v)/dλ01= 0 and find the value of λ∗01.Then the values of λ∗02, λ∗03, λ∗11, λ∗12, and λ∗21 can be found from Eqs. (E14) to (E18).Once the dual variables (λ∗kj )are known, Eqs. (8.62) and (8.63) can be used to findthe optimum values of the design variables as in Example PROGRAMMING WITH MIXEDINEQUALITY read more..


    8.10Geometric Programming with Mixed Inequality Constraints519The constraints are given by (see Table 8.2)N0j=1λ0j= 1mk=0Nkj=1σkakij λkj= 0,i= 1, 2, . . . , nNkj=1λkj≥ 0,k= 1, 2, . . . , mthat is,λ01+ λ02+ λ03= 1σ0a011λ01+ σ0a012λ02+ σ0a013λ03+ σ1a111λ11+ σ1a112λ12+ σ2a211λ21= 0σ0a021λ01+ σ0a022λ02+ σ0a023λ03+ σ1a121λ11+ σ1a122λ12+ σ2a221λ21= 0σ0a031λ01+ σ0a032λ02+ σ0a033λ03+ σ1a131λ11+ σ1a132λ12+ σ2a231λ21= 0σ0a041λ01+ σ0a042λ02+ σ0a043λ03+ read more..


    520Geometric ProgrammingBy using Eqs. (E3), the dual objective function of Eq. (E1) can be expressed asv (λ01)=1λ01λ0128λ01 − 48λ01−410−9λ01 + 55−9λ01×3(10λ01 − 5)6λ01 − 3−6λ01+3 4(10λ01 − 5)4λ01 − 2−4λ01+2(5)22λ01−12=1λ01λ0114λ01 − 28λ01−4105 − 9λ015−9λ01(5)3−6λ01(10)2−4λ01× (5)22λ01−12=1λ01λ0114λ01 − 28λ01−4105 − 9λ015−9λ01(5)12λ01−7(2)2−4λ01To maximize v, set d(ln v)/dλ01 = 0 and find λ∗01. Once λ∗01 is read more..


    8.11Complementary Geometric Programming521where Ak(X), Bk(X), Ck(X), and Dk(X) are posynomials in Xand possibly some ofthem may be absent. We assume that R0(X) >0 for all feasible X. This assumptioncan always be satisfied by adding, if necessary, a sufficiently large constant to R0(X).To solve the problem stated in Eq. (8.66), we introduce a new variable x0 >0,constrained to satisfy the relation x0≥ R0(X) [i.e., R0(X)/x0≤ 1], so that the problemcan be restated asMinimize read more..


    522Geometric ProgrammingSolution Procedure.1.Approximate each of the posynomials Q(X)† by a posynomial term. Then allthe constraints in Eq. (8.71) can be expressed as a posynomial to be less thanor equal to 1. This follows because a posynomial divided by a posynomialterm is again a posynomial. Thus with this approximation, the problem reducesto an ordinary geometric programming problem. To approximate Q(X) by asingle-term posynomial, we choose any˜X > 0and letUj= qj (X)(8.75)j=qj read more..
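The single-term approximation of step 1 can be sketched directly; the weights come from evaluating the terms at the chosen point X̃, exactly as in Eqs. (8.75) and (8.76). The two-term posynomial below is invented for illustration, and the helper name is not from the text:

```python
import math

def condense(terms, x_tilde):
    """Monomial condensation: replace Q(x) = sum_j q_j(x) by the single term
    Q~(x) = prod_j (q_j(x)/delta_j)**delta_j with delta_j = q_j(x~)/Q(x~)."""
    q_at = [q(x_tilde) for q in terms]
    total = sum(q_at)
    deltas = [qj / total for qj in q_at]
    def q_tilde(x):
        return math.prod((q(x) / d) ** d for q, d in zip(terms, deltas))
    return q_tilde

# Hypothetical posynomial Q(x) = x**2 + 2*x, condensed about x~ = 1
Q_terms = [lambda x: x ** 2, lambda x: 2.0 * x]
Q_tilde = condense(Q_terms, 1.0)
# Q~ matches Q at x~ and, by the weighted AM-GM inequality, never exceeds it
print(Q_tilde(1.0), Q_tilde(2.0))
```

Because Q̃ ≤ Q everywhere with equality at X̃, replacing a denominator posynomial by its condensation yields a conservative posynomial constraint, which is what makes the iterative scheme below an ordinary geometric program at each step.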


    8.11Complementary Geometric Programming523Degree of Difficulty.The degree of difficulty of a complementary geometric pro-gramming problem (CGP) is also defined asdegree of difficulty = N− n− 1where Nindicates the total number of terms appearing in the numerators of Eq. (8.71).The relation between the degree of difficulty of a CGP and that of the OGPα, theapproximating ordinary geometric program, is important. The degree of difficulty of aCGP is always equal to that of the read more..


    524Geometric Programmingthey can each be approximated by a single-term posynomial with the help of Eq. (8.78)as˜Q1(X,˜X)= (1 + 4˜x21)x1˜x28˜x21/(1+4˜x21)˜Q2(X,˜X)= 1 + ˜x2˜x1x1˜x1−˜x2/(˜x1+˜x2)x2˜x1˜x2/(˜x1+˜x2)Let us start the iterative process from the point X(1)=11 , which can be seen to befeasible. By taking˜X= X(1), we obtain˜Q1(X, X(1)) = 5x8/51˜Q2(X, X(1)) = 2x−1/21x1/22and we formulate the first ordinary geometric programming problem (OGP1) asMinimize x1subject read more..


Next we choose X(2) to be the optimal solution of OGP1 [i.e., X(1)opt] and approximate Q1 and Q2 about this point, solve OGP2, and so on. The sequence of optimal solutions of OGPα generated by the iterative procedure is shown below:

Iteration number, α    x1 (opt)    x2 (opt)
0                      1.0         1.0
1                      0.5385      0.4643
2                      0.5019      0.5007
3                      0.5000      0.5000

The optimal values of the variables for the CGP are x1* = 0.5 and x2* = 0.5. It can be seen that in three iterations, the solution of the ...


    526Geometric ProgrammingIf the maximum feed allowable on the lathe is Fmax, we have the constraintC11F≤ 1(E4)whereC11= F−1max(E5)Since the total number of terms is three and the number of variables is two, the degreeof difficulty of the problem is zero. By using the dataKm= 0.10,Kt= 0.50,tc= 0.5,th= 2.0,D= 6.0,L= 8.0,a= 140.0,b= 0.29,c= 0.25,Fmax= 0.005the solution of the problem [minimize fgiven in Eq. (E2) subject to the constraint(E4)] can be obtained asf∗ = $1.03 per piece,V∗ = 323 read more..


    8.12Applications of Geometric Programming527whereC31= a2S−1max(E9)If the constraint (E8) is also included, the problem will have a degree of difficultytwo. By taking a2= 1.36 × 108, b2= −1.52, c2= 1.004, Smax= 100 µin., Fmax=0.01, and Pmax= 2.0 in addition to the previous data, we obtain the following result:f∗ = $1.11 per piece,V∗ = 311 ft/min,F∗ = 0.0046 in./revExample 8.8 Design of a Hydraulic Cylinder [8.11]The minimum volume designof a hydraulic cylinder (subject to internal read more..


    528Geometric ProgrammingFigure 8.2Cantilever beam of rectangular cross section.SOLUTIONThe width and depth of the beam are considered as design variables.The objective function (weight) is given byf (X)= ρlx1x2(E1)where ρis the weight density and lis the length of the beam. The maximum stressinduced at the fixed end is given byσ=McI= P lx221112 x1x32 =6P lx1x22(E2)and the constraint becomes6P lσyx−11 x−22≤ 1(E3)Example 8.10 Design of a Cone Clutch [8.23]Find the minimum volume read more..


    8.12Applications of Geometric Programming529The axial force applied (F )and the torque developed (T ) are given by [8.37]F= p dAsin α=R1R2p2π r drsin αsin α= πp(R21− R22 )(E5)T= rfp dA=R1R2rfp2π rsin αdr=2πfp3 sin α(R31− R32 )(E6)where pis the pressure, fthe coefficient of friction, and Athe area of contact.Substitution of pfrom Eq. (E5) into (E6) leads toT=k2(R21+ R1R2+ R22 )R1+ R2(E7)wherek2=2Ff3 sin α(E8)Since k1 is a constant, the objective function can be taken as f= R31− read more..


    530Geometric ProgrammingTable 8.3Results for Example 8.10IterationStartingOrdinary geometric programmingSolutionnumberdesignproblemof OGP1x1= R0= 40Minimize x11 x02 x03x1= 162.5x2= R1= 3subject tox2= 5.0x3= R2= 30.507x−0.5971x32 x−1.213≤ 1x3= 2.51.667(x−12+ x−13 )≤ 12x1= R0= 162.5Minimize x11 x02 x03x1= 82.2x2= R1= 5.0subject tox2= 4.53x3= R2= 2.50.744x−0.9121x32 x−0.26353≤ 1x3= 2.2653.05(x−0.432x−0.5713+ x−1.432x0.4293)≤ 12x−12 x3≤ 13x1= R0= 82.2Minimize x11 x02 read more..


    8.12Applications of Geometric Programming531To avoid fatigue failure, the natural frequency of the spring (fn) is to be restricted tobe greater than (fn)min. The natural frequency of the spring is given byfn=2dπD2nGg32ρ1/2(E8)where gis the acceleration due to gravity. Using g= 9.81 m/s2, G= 8.56 ×1010 N/m2, and (fn)min = 13, Eq. (E8) becomes13(fn)minδG288,800Pd3D≤ 1(E9)Similarly, in order to avoid buckling, the free length of the spring is to be limitedasL≤11.5(D/2)2P /K1(E10)Using the read more..


    532Geometric Programmingof the shaft, Rthe radius of the journal, Lthe half-length of the bearing, Se the shearstress, lthe length between the driving point and the rotating mass, and Gthe shearmodulus. The load on each bearing (W ) is given byW=2µ RL2nc2(1 − n2)2[π 2(1 − n2) + 16n2]1/2(E3)For the data W= 1000 lb, c/R= 0.0015, n= 0.9, l= 10 in., Se= 30,000 psi,µ= 10−6 lb-s/in2, and G= 12 × 106 psi, the objective function and the constraintreduce tof (R, L)= aM+ bφ= 0.038 R2L + read more..


    8.12Applications of Geometric Programming533subject to8.62R−1L3 ≤ 1(E13)The solution of this one-degree-of-difficulty problem can be found as R∗ = 1.29, L∗ =0.53, and f∗ = 16.2.Example 8.13 Design of a Two-bar Truss [8.33]The two-bar truss shown in Fig. 8.3is subjected to a vertical load 2P and is to be designed for minimum weight. Themembers have a tubular section with mean diameter dand wall thickness tand themaximum permissible stress in each member (σ0) is equal to 60,000 psi. read more..


    534Geometric Programmingor1.75√900 + h2dh≤ 1(E2)It can be seen that the functions in Eqs. (E1) and (E2) are not posynomials, due to thepresence of the term √900 + h2. The functions can be converted to posynomials byintroducing a new variable yasy= 900 + h2ory2 = 900 + h2and a new constraint as900 + h2y2≤ 1(E3)Thus the optimization problem can be stated, with x1= y, x2= h, and x3= dasdesign variables, asMinimize f= 0.188yd(E4)subject to1.75yh−1d−1 ≤ 1(E5)900y−2 + y−2h2 ≤ read more..


    8.12Applications of Geometric Programming535The optimum values of xi can be found from Eqs. (8.62) and (8.63):1 =0.188y∗d∗19.81 = 1.75y∗h∗−1d∗−112= 900y∗−212= y∗−2h∗2These equations give the solution: y∗ = 42.426, h∗ = 30 in., and d∗ = 2.475 in.Example 8.14 Design of a Four-bar Mechanism [8.24]Find the link lengths of thefour-bar linkage shown in Fig. 8.4 for minimum structural error.SOLUTIONLet a, b, c, and ddenote the link lengths, θthe input angle, and φthe read more..


    536Geometric ProgrammingThe objective function for minimization is taken as the sum of squares of structuralerror at a number of precision or design positions, so thatf=ni=1ε2i(E5)where ndenotes the total number of precision points considered. Note that the error εiis minimized when fis minimized (εi will not be zero, usually).For simplicity, we assume that a≪ dand that the error εi is zero at θ0. Thusε0= 0 at θi= θ0, and Eq. (E3) yieldsK= 2cd cos φdi+ 2ac cos θ0 cos(φd0 − θ0)− read more..


    References and Bibliography537subject to3ad≤ 1Noting that c1= 0.1563, c2= 0.76, and c3= 3/d, we see that Eq. (E12) givesv( )=0.1563−1−1 −0.76223d1(1)1 =2.772dNoting that0.1563a2c2= −2.772d(−1) =2.772d−0.76ac= −2.772d(2) = −5.544dand using a= 1, we find that c∗ = 0.41 and d∗ = 3.0. In addition, Eqs. (E6) and(E4) yielda2 − b2 + c2 + d2= 2cd cos φd0+ 2ac cos θ0 cos(φd0 − θ0)− 2ad cos θ0or b∗ = 3.662. Thus the optimal link dimensions are given by a∗ = 1, b∗ = read more..


    538Geometric Programming8.11D. Wilde, Monotonicity and dominance in optimal hydraulic cylinder design, Jour-nal of Engineering for Industry, Transactions of ASME, Vol. 97, pp. 1390–1394, Nov.1975.8.12A. B. Templeman, On the solution of geometric programs via separable programming,Operations Research Quarterly, Vol. 25, pp. 184–185, 1974.8.13J. J. Dinkel and G. A. Kochenberger, On a cofferdam design optimization, MathematicalProgramming, Vol. 6, pp. 114–117, 1974.8.14A. J. Morris, A read more..


    Review Questions539interpretation, pp. 15–21, in Progress in Engineering Optimization–1981, R. W. Mayneand K. M. Ragsdell, Eds., ASME, New York, 1981.8.31M. Avriel, R. Dembo, and U. Passey, Solution of generalized geometric programs, Inter-national Journal for Numerical Methods in Engineering, Vol. 9, pp. 149–168, 1975.8.32Computational aspects of geometric programming: 1. Introduction and basic notation,pp. 115–120 (A. B. Templeman), 2. Polynomial programming, pp. 121–145 (J. read more..


    540Geometric Programming8.5What is normality condition in a geometric programming problem?8.6Define a complementary geometric programming problem.PROBLEMSUsing arithmetic mean–geometric mean inequality, obtain a lower bound vfor each function[f (x)≥ v, where vis a constant] in Problems 8.1– (x)=x−23+23x−3 +43x3/28.2f (x)= 1 + x+1x+1x28.3f (x)=12 x−3 + x2+ 2x8.4An open cylindrical vessel is to be constructed to transport 80 m3 of grain from a ware-house to a factory. The read more..


    Problems5418.12Minimize f (X)= x−21+14 x22 x3subject to34 x21 x−22+38 x2x−23≤ 1xi >0,i= 1, 2, 38.13Minimize f (X)= x−31 x2+ x3/21x−13subject tox21 x−12+12 x−21 x33≤ 1x1 >0,x2 >0,x3 >08.14Minimize f= x−11 x−22 x−23subject tox31+ x22+ x3≤ 1xi >0,i= 1, 2, 38.15Prove that the function y= c1ea1x1 + c2ea2x2 + ··· + cneanxn , ci≥ 0, i= 1, 2, . . . , n, isa convex function with respect to x1, x2, . . . , xn.8.16Prove that f= ln xis a concave function for read more..


    542Geometric ProgrammingFigure 8.5Floor consisting of a plate with supporting beams [8.36].8.21A rectangular area of dimensions Aand Bis to be covered by steel plates with supportingbeams as shown in Fig. 8.5. The problem of minimum cost design of the floor subject to aconstraint on the maximum deflection of the floor under a specified uniformly distributedlive load can be stated as [8.36]Minimize f (X)= cost of plates + cost of beams= kf γ ABt+ kbγ Ak1nZ2/3(1)subject read more..


    Problems543subject toF(k)ixiσ∗i≤ 1,i= 1, 2, . . . , n,k= 1, 2, . . . , q(2)ni=1F(k)ilixiE∗isij≤ 1,j= 1, 2, . . . , m,k= 1, 2, . . . , q(3)where F(k)iis the tension in the ith member in the kth load condition, xi the cross-sectionalarea of member i, li the length of member i, Eis Young’s modulus, σ∗ithe maximumpermissible stress in member i, and∗j the maximum allowable displacement of node j.Develop a suitable transformation technique and express the problem of Eqs. (1) to (3)as a read more..


    9Dynamic Programming9.1INTRODUCTIONIn most practical problems, decisions have to be made sequentially at different pointsin time, at different points in space, and at different levels, say, for a component, fora subsystem, and/or for a system. The problems in which the decisions are to be madesequentially are called sequential decision problems. Since these decisions are to bemade at a number of stages, they are also referred to as multistage decision problems.Dynamic programming is a read more..


    9.2Multistage Decision Processes545technique suffers from a major drawback, known as the curse of dimensionality. How-ever, despite this disadvantage, it is very suitable for the solution of a wide range ofcomplex problems in several areas of decision making.9.2MULTISTAGE DECISION PROCESSES9.2.1Definition and ExamplesAs applied to dynamic programming, a multistage decision process is one in whicha number of single-stage processes are connected in series so that the output of onestage is the read more..


    546Dynamic Programmingseen to be in series and the system has to be treated as a multistage decision problem.Finally, consider the problem of loading a vessel with stocks of Nitems. Each unitof item ihas a weight wi and a monetary value ci. The maximum permissible cargoweight is W. It is required to determine the cargo load that corresponds to maximummonetary value without exceeding the limitation of the total cargo weight. Although themultistage nature of this problem is not directly evident, read more..


    9.2Multistage Decision Processes547Figure 9.3Multistage decision problem (initial value problem).where xi denotes the vector of decision variables at stage i. The state transformationequations (9.3) are also called design equations.The objective of a multistage decision problem is to find x1, x2, . . . , xn so asto optimize some function of the individual statge returns, say, f (R1, R2, . . . , Rn)and satisfy Eqs. (9.3) and (9.4). The nature of the n-stage return function, f, deter-mines read more..


    548Dynamic Programming9.2.3Conversion of a Nonserial System to a Serial SystemAccording to the definition, a serial system is one whose components (stages) are con-nected in such a way that the output of any component is the input of the succeedingcomponent. As an example of a nonserial system, consider a steam power plant con-sisting of a pump, a feedwater heater, a boiler, a superheater, a steam turbine, and anelectric generator, as shown in Fig. 9.4. If we assume that some steam is taken read more..


    9.3Concept of Suboptimization and Principle of Optimality549Figure 9.5Types of multistage problems: (a) initial value problem; (b) final value problem;(c) boundary value problem.3. Boundary value problem.If the values of both the input and output variablesare specified, the problem is called a boundary value problem. The three typesof problems are shown schematically in Fig. 9.5, where the symbol |→is usedto indicate a prescribed state variable.9.3CONCEPT OF SUBOPTIMIZATION AND PRINCIPLEOF read more..


    550Dynamic ProgrammingFigure 9.6Water tank system.Example 9.1Explain the concept of suboptimization in the context of the design ofthe water tank shown in Fig. 9.6a. The tank is required to have a capacity of 100,000liters of water and is to be designed for minimum cost [9.10].SOLUTIONInstead of trying to optimize the complete system as a single unit, itwould be desirable to breakthe system into components which could be optimizedmore or less individually. For this breaking and component read more..


    9.3Concept of Suboptimization and Principle of Optimality551Figure 9.7Suboptimization (principle of optimality).suboptimization is shown in Fig. 9.7. Since the suboptimizations are to be done in thereverse order, the components of the system are also numbered in the same manner forconvenience (see Fig. 9.3).The process of suboptimization was stated by Bellman [9.2] as the principle ofoptimality:An optimal policy (or a set of decisions) has the property that whatever theinitial state and initial read more..


    552Dynamic ProgrammingConsider the first subproblem by starting at the final stage, i =1. If the input to thisstage s2 is specified, then according to the principle of optimality, x1 must be selectedto optimize R1. Irrespective of what happens to the other stages, x1 must be selectedsuch that R1(x1, s2) is an optimum for the input s2. If the optimum is denoted as f∗1 ,we havef∗1 (s2) =optx1[R1(x1, s2)](9.11)This is called a one-stage policysince once the input state s2 is specified, the read more..


    9.4Computational Procedure in Dynamic Programming553the optimum value of fi =ik=1 Rk for any specified value of the input si+1. Thisproblem, by using the principle of optimality, has been decomposed into iseparateproblems, each involving only one decision variable. Equation (9.16) is the desiredrecurrence relationship valid for i =2, 3, . . . , n.9.4COMPUTATIONAL PROCEDURE IN DYNAMICPROGRAMMINGThe use of the recurrence relationship derived in Section 9.3 in actual computations isdiscussed in read more..


    554Dynamic Programmingget the following simplified statement:f∗2 (s3) =optx2[R2(x2, s3) +f∗1 (s2)](9.19)Thus the number of variables to be considered has been reduced from two (x1 and x2)to one (x2). A range of possible values of s3 must be considered and for each one, x∗2must be found so as to optimize [R2 +f ∗1 (s2)]. The results (x∗2 and f ∗2 for differents3) of this suboptimization are entered in a table as shown in Fig. 9.9.Figure 9.9Suboptimization of components 1 and 2 for read more..


    9.5Example Illustrating the Calculus Method of Solution555Assuming that the suboptimization sequence has been carried on to include i −1of the end components, the next step will be to suboptimize the iend components.This requires the solution off∗i (si+1) =optxi ,xi−1,...,x1[Ri +Ri−1 + · · · +R1](9.20)However, again, all the information regarding the suboptimization of i −1 end com-ponents is known and has been entered in the table corresponding to f ∗i−1. Hence thisinformation read more..
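The backward sweep of Eq. (9.16), building one table per stage and then retracing the tables to read off the optimal decisions, can be sketched for a small discrete allocation problem. The return tables below are invented for illustration:

```python
from itertools import product

# Hypothetical stage return tables: R[i][x] = return of stage i+1 when x units
# of a 5-unit resource are allocated to it.
R = [
    [0, 5, 9, 12, 14, 15],
    [0, 4, 9, 13, 16, 18],
    [0, 6, 11, 13, 15, 16],
]
TOTAL = 5

# Backward recursion, Eq. (9.16): f_i(s) = opt_x [R_i(x) + f_{i-1}(s - x)]
f = [0] * (TOTAL + 1)          # f_0(s) = 0
policy = []
for Ri in R:
    table = [max((Ri[x] + f[s - x], x) for x in range(s + 1))
             for s in range(TOTAL + 1)]
    f = [val for val, _ in table]
    policy.append([x for _, x in table])

# Retrace the stored tables from the last stage back to recover the allocations
s, alloc = TOTAL, []
for best in reversed(policy):
    alloc.append(best[s])
    s -= best[s]
alloc.reverse()

# Brute-force check over all feasible allocations
brute = max(sum(R[i][xs[i]] for i in range(len(R)))
            for xs in product(range(TOTAL + 1), repeat=len(R))
            if sum(xs) <= TOTAL)
print(f[TOTAL], alloc)
```

Each suboptimization involves only one decision variable, exactly as the recurrence promises, and the retraced policy reproduces the brute-force optimum.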


    556Dynamic ProgrammingFigure 9.10Suboptimization of components 1, 2, . . . , ifor various settings of the input statevariable si+1.weight of the truss is given byf (x1, x2, x3, x4) =0.01(100x1 +120x2 +100x3 +60x4)=x1 +1.2x2 +x3 +0.6x4(E1)From structural analysis [9.5], the force developed in member idue to a unit load actingat joint A(pi ), the deformation of member i(di), and the contribution of member itothe vertical deflection of A(δi =pidi) can be determined as follows: read more..


[Figure 9.11: Four-bar truss.]

Member i    p_i      d_i = (stress_i) l_i / E = P p_i l_i / (x_i E) (in.)    δ_i = p_i d_i (in.)
1           −1.25    −1.25/x1                                               1.5625/x1
2           0.75     0.9/x2                                                 0.6750/x2
3           1.25     1.25/x3                                                1.5625/x3
4           −1.50    −0.9/x4                                                1.3500/x4

The vertical deflection of joint A is given by

dA = Σ (i = 1 to 4) δi = 1.5625/x1 + 0.6750/x2 + 1.5625/x3 + 1.3500/x4    (E2)

Thus the optimization problem can be stated as

Minimize f(X) = x1 + 1.2 x2 + x3 + 0.6 x4

subject to

1.5625/x1 + 0.6750/x2 + 1.5625/x3 + 1.3500/x4 = 0.5    (E3)

x1 ≥ 0, x2 ≥ 0, x3 ≥ 0, x4 ...


    558Dynamic ProgrammingFigure 9.12Example 9.2 as a four-stage decision problem.since δ1 =s2, andx∗1 =1.5625s2(E5)Let s3 be the displacement available for allocation to the first two members, δ2 thedisplacement contribution due to the second member, and f∗2 (s3) the minimum weightof the first two members. Then we have, from the recurrence relationship of Eq. (9.16),f∗2 (s3) =minx2≥0[R2 +f∗1 (s2)](E6)where s2 represents the resource available after allocation to stage 2 and is given read more..


    9.5Example Illustrating the Calculus Method of Solution559Let s4 be the displacement available for allocation to the first three members. Letδ3 be the displacement contribution due to the third member and f ∗3 (s4) the minimumweight of the first three members. Thenf∗3 (s4) =minx3≥0[x3 +f∗2 (s3)](E11)where s3 is the resource available after allocation to stage 3 and is given bys3 =s4 −δ3 =s4 −1.5625x3From Eq. (E10) we havef∗2 (s3) =4.6169s4 −1.5625/x3(E12)and Eq. (E11) can be read more..


the minimum of F(s5, x4), for any specified value of s5, is given by

∂F/∂x4 = 0.6 − (11.5596)(1.3500)/(s5 x4 − 1.3500)^2 = 0

or

x4* = 6.44/s5    (E20)

f4*(s5) = 0.6 x4* + 11.5596/(s5 − 1.3500/x4*) = 3.864/s5 + 14.625/s5 = 18.49/s5    (E21)

Since the value of s5 is specified as 0.5 in., the minimum weight of the structure can be calculated from Eq. (E21) as

f4*(s5 = 0.5) = 18.49/0.5 = 36.98 lb    (E22)

Once the optimum value of the objective function is found, the optimum values of the design ...


9.6 Example Illustrating the Tabular Method of Solution

Table 9.1  Component 3 (Tank)

Type of tank                 Load acting on the tank, s4 (kgf)    R3, cost ($)    Self-weight of the component (kgf)    s3 = s4 + self-weight (kgf)
(a) Cylindrical RCC tank     100,000                              5,000           45,000                                145,000
(b) Spherical RCC tank       100,000                              8,000           30,000                                130,000
(c) Rectangular RCC tank     100,000                              6,000           25,000                                125,000
(d) Cylindrical steel tank   100,000                              9,000           15,000                                115,000
(e) Spherical steel tank     100,000                              15,000          5,000                                 105,000
(f) Rectangular steel tank   100,000                              12,000          10,000                                110,000
(g) ...


Table 9.3  Component 1 (Foundation)

Type of foundation             s2 (kgf)    R1, cost ($)    Self-weight (kgf)    s1 = s2 + self-weight (kgf)
(a) Mat foundation             220,000     5,000           60,000               280,000
                               200,000     4,000           45,000               245,000
                               180,000     3,000           35,000               215,000
                               140,000     2,500           25,000               165,000
                               100,000     500             20,000               120,000
(b) Concrete pile foundation   220,000     3,500           55,000               275,000
                               200,000     3,000           40,000               240,000
                               180,000     2,500           30,000               210,000
                               140,000     1,500           20,000               160,000
                               100,000     1,000           15,000               115,000
(c) Steel pile ...


[Figure 9.14: Various stages of suboptimization of Example 9.3: (a) suboptimization of component 1; (b) suboptimization of components 1 and 2; (c) suboptimization of components 1, 2, and 3.]


Specific value of s2 (kgf)    x1* (type of foundation for minimum cost)    f1* ($)    Corresponding value of s1 (kgf)
220,000                       (c)                                          3,000      230,000
200,000                       (c)                                          2,500      209,000
180,000                       (c)                                          2,000      188,000
140,000                       (b)                                          1,500      160,000
100,000                       (a)                                          500        120,000

Suboptimization of Stages 2 and 1 (Components 2 and 1)

Here we combine components 2 and 1 as shown in Fig. 9.14b and minimize the cost (R2 + R1) for any specified value s3 to obtain f2*(s3) as

f2*(s3) = min over (x2, x1) of [R2(x2, s3) + R1(x1, s2)] = min over x2 of [R2(x2, s3) + f1*(s2)] ...


quantities (i.e., f2* and x2*) corresponding to the various discrete values of s3 can be summarized as follows:

Specified value of s3 (kgf)    Type of columns for minimum cost of stages 2 and 1, x2*    Minimum cost of stages 2 and 1, f2* ($)    Corresponding state variable, s2 (kgf)
150,000                        (a)                                                        9,000                                      220,000
130,000                        (a)                                                        7,000                                      180,000
110,000                        (b)                                                        5,500                                      140,000
100,000                        (b)                                                        3,875                                      115,000

Suboptimization of Stages 3, 2, and 1 (Components 3, 2, ...


    566Dynamic ProgrammingNow, we retrace the steps to collect the optimum values of x∗3 , x∗2 , and x∗1 and obtainx∗3 =type (c) tank,s3 =125,000 kgfx∗2 =type (a) columns,s2 =170,000 kgfx∗1 =type (c) foundation, s1 =181,000 kgfand the total minimum cost of the water tank is $12,625. Thus the minimum cost watertank consists of a rectangular RCC tank, RCC columns, and a steel pile foundation.9.7CONVERSION OF A FINAL VALUE PROBLEM INTOAN INITIAL VALUE PROBLEMIn previous sections the dynamic read more..


    9.7Conversion of a Final Value Problem into an Initial Value Problem567Figure 9.15Conversion of a final value problem to an initial value problem: (a) final valueproblem; (b) initial value initial value problem as shown in Fig. 9.15b. This initial value problem is identicalto the one considered in Fig. 9.3 except for the stage numbers. If the stage numbers1, 2, . . . , nare reversed to n, n −1, . . . ,1, Fig. 9.15b will become identical to Fig. 9.3.Once this is done, the solution read more..


    568Dynamic Programmingwhere x1 and x2 indicate the number of drilling machines manufactured in the firstmonth and the second month, respectively. To solve this problem as a final valueproblem, we start from the second month and go backward. If I2 is the inventory atthe beginning of the second month, the optimum number of drilling machines to bemanufactured in the second month is given byx∗2 =120 −I2(E1)and the cost incurred in the second month byR2(x∗2 , I2) =8I2 +50x∗2 +0.2x∗22By read more..


    9.8Linear Programming as a Case of Dynamic Programming5699.8LINEAR PROGRAMMING AS A CASE OF DYNAMICPROGRAMMINGA linear programming problem with ndecision variables and mconstraints canbe considered as an n-stage dynamic programming problem with mstate vari-ables. In fact, a linear programming problem can be formulated as a dynamicprogramming problem. To illustrate the conversion of a linear programming probleminto a dynamic programming problem, consider the following linear read more..


such that

    Σ_{j=1}^{n} a_ij x_j ≤ b_i,    i = 1, 2, . . . , m                  (9.29)

    x_j ≥ 0,    j = 1, 2, . . . , n                                      (9.30)

The recurrence relationship (9.16), when applied to this problem, yields

    f_i*(β1, β2, . . . , βm)
        = max_{0 ≤ x_i ≤ β} [c_i x_i + f*_{i−1}(β1 − a_{1i}x_i, β2 − a_{2i}x_i, . . . , βm − a_{mi}x_i)],
          i = 2, 3, . . . , n                                            (9.31)

where β1, β2, . . . , βm are the resources available for allocation at stage i;
a_{1i}x_i, . . . , a_{mi}x_i are the resources allocated to the activity x_i; and
β1 − a_{1i}x_i, β2 − a_{2i}x_i, . . . are the resources available …


where β1, β2, and β3 are the resources available for allocation at stage 1, and x1 is a
nonnegative value that satisfies the side constraints 10x1 ≤ β1, 4x1 ≤ β2, and x1 ≤ β3.
Here β1 = 2500 − 5x2, β2 = 2000 − 10x2, and β3 = 450 − 1.5x2, and hence the
maximum value β that x1 can assume is given by

    β = x1* = min( (2500 − 5x2)/10, (2000 − 10x2)/4, 450 − 1.5x2 )      (E1)

Thus

    f1*(2500 − 5x2, 2000 − 10x2, 450 − 1.5x2) = 50x1*
        = 50 min( (2500 − 5x2)/10, …


we obtain

    max_{0 ≤ x2 ≤ 200} [ 100x2 + 50 min( (2500 − 5x2)/10, (2000 − 10x2)/4, 450 − 1.5x2 ) ]

        = max { 100x2 + 50 (2500 − 5x2)/10    if 0 ≤ x2 ≤ 125
                100x2 + 50 (2000 − 10x2)/4    if 125 ≤ x2 ≤ 200 }

        = max { 75x2 + 12,500     if 0 ≤ x2 ≤ 125
                25,000 − 25x2     if 125 ≤ x2 ≤ 200 }

Now,

    max (75x2 + 12,500) = 21,875   at x2 = 125
    max (25,000 − 25x2) = 21,875   at x2 = 125

Hence

    f2*(2500, 2000, 450) = 21,875   at   x2* = 125.0

From Eq. (E1) we have

    x1* = min( (2500 − 5x2*)/10, (2000 − 10x2*)/4, 450 − …
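The two-stage recursion just carried out can be checked numerically. A minimal sketch using the stated data (f = 50x1 + 100x2, resource limits 2500, 2000, 450), with stage 1 absorbed in closed form and stage 2 scanned over a grid:

```python
# Stage 1: for resources (b1, b2, b3), x1* = min(b1/10, b2/4, b3), f1* = 50*x1*.
def f1_star(b1, b2, b3):
    return 50 * min(b1 / 10, b2 / 4, b3)

# Stage 2: f2*(2500, 2000, 450) = max over x2 of [100*x2 + f1*(remaining resources)]
best = max(
    (100 * x2 + f1_star(2500 - 5 * x2, 2000 - 10 * x2, 450 - 1.5 * x2), x2)
    for x2 in [i * 0.5 for i in range(401)]   # grid 0, 0.5, ..., 200
)
print(best)   # (21875.0, 125.0) -- agrees with f2* = 21,875 at x2* = 125
```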


    9.9Continuous Dynamic Programming573These comments are equally applicable for all dynamic programming problemsinvolving many state variables, since the computations have to be performed for dif-ferent possible values of each of the state variables. Thus this problem causes not onlyan increase in the computational time, but also requires a large computer memory. Thisproblem is known as the problem of dimensionalityor the curse of dimensionality, astermed by Bellman. This presents a serious read more..


    574Dynamic Programmingperiod t1 to t2 is given byf =t2t1p a +bp−c p,dpdt, t−x(t) dt(E1)where p =p{x(t), t}. Thus the optimization problem can be stated as follows: Findx(t), t1 ≤t ≤t2, which maximizes the total profit, fgiven by Eq. (E1).Example 9.7Consider the problem of determining the optimal temperature distribu-tion in a plug-flow tubular reactor [9.1]. Let the reactions carried in this type of reactorbe shown as follows:X1k1⇄k2X2k3−−−→X3where X1 is the reactant, X2 the read more..


to the solution of continuous decision problems, consider the following simple
(unconstrained) problem. Find the function y(x) that minimizes the integral

    f = ∫_{x=a}^{b} R( dy/dx, y, x ) dx                                  (9.33)

subject to the known end conditions y(x = a) = α and y(x = b) = β. We shall see
how dynamic programming can be used to determine y(x) numerically. This approach
will not yield an analytical expression for y(x) but yields the value of y(x) at a finite
number of points in the interval a ≤ x …


In Eqs. (9.37) to (9.39), θ or y_i is a continuous variable. However, for simplicity,
we treat θ or y_i as a discrete variable. Hence for each value of i, we find a set of
discrete values that θ or y_i can assume and find the value of f_i*(θ) for each discrete
value of θ or y_i. Thus f_i*(θ) will be tabulated for only those discrete values that θ
can take. At the final stage, we find the values of f_0*(α) and y_1*. Once y_1* is known,
the optimal values of y_2, y_3, …
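A minimal sketch of this discretization, with a hypothetical integrand R(y′, y, x) = (y′)² (not one of the book's examples): the interval [a, b] is cut into n steps, y is restricted to a grid, and the cost-to-go is tabulated backward exactly as described above.

```python
# Grid DP for the variational problem (9.33): approximate the integral by
# sum of R((y[i+1]-y[i])/dx, y[i], x[i]) * dx and sweep backward over x.

def dp_minimize(R, a, b, alpha, beta, n, y_vals):
    dx = (b - a) / n
    # cost-to-go at the last grid line: the path must end at y(b) = beta
    f = {y: 0.0 if y == beta else float('inf') for y in y_vals}
    for i in range(n - 1, -1, -1):            # sweep grid lines backward
        x = a + i * dx
        f = {y: min(R((y2 - y) / dx, y, x) * dx + f[y2] for y2 in y_vals)
             for y in y_vals}
    return f[alpha]                           # approximate minimum of the integral

# hypothetical R(y', y, x) = y'**2 with y(0)=0, y(1)=1: straight line is optimal
val = dp_minimize(lambda yp, y, x: yp * yp, 0.0, 1.0, 0.0, 1.0, 4,
                  [k * 0.25 for k in range(5)])
print(val)   # 1.0, matching the continuous minimum of the integral of y'^2
```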


    9.10Additional Applications577Figure 9.17Continuous beam on rigid applicable. Accordingly, the complete bending moment distribution can be deter-mined once the reactant support moments m1, m2, . . . , mn are known. Once the supportmoments are known (chosen), the plastic limit moment necessary for each span can bedetermined and the span can be designed. The bending moment at the center of the ithspan is given by −Pili/4 and the largest bending moment in the ith span, Mi, can read more..


    578Dynamic ProgrammingFigure 9.18Multibay cantilever truss.For specificness, consider a three-bay truss for which the following relationshipsare valid (see Fig. 9.18):yi+1 =yi +di,i =1, 2, 3(9.41)Since the value of y1 is fixed, the problem can be treated as an initial value problem.If the ycoordinate of each node is limited to a finite number of alternatives that cantake one of the four values 0.25, 0.5, 0.75, 1 (arbitrary units are used), there will be64 possible designs, as shown in Fig. read more..


    9.10Additional Applications5799.10.3Optimal Design of a Gear TrainConsider the gear train shown in Fig. 9.20, in which the gear pairs are numbered from1 to n. The pitch diameters (or the number of teeth) of the gears are assumed to beknown and the face widths of the gear pairs are treated as design variables [9.19, 9.20].The minimization of the total weight of the gear train is considered as the objective.When the gear train transmits power at any particular speed, bending and surface read more..


Figure 9.21  Typical drainage network.

Figure 9.22  Representation of a three-element pipe segment [9.14]: elements 1 to 3
with diameters D1 to D3, depths h0 to h3, and lengths l1 to l3.


element consists of selecting values for the diameter of the pipe, the slope of the
pipe, and the mean depth of the pipe (D_i, h_{i−1}, and h_i). The construction cost of
an element, R_i, includes the cost of the pipe, the cost of the upstream manhole, and
the earthwork related to excavation, backfilling, and compaction. Some of the
constraints can be stated as follows:

1. The pipe must be able to discharge the specified flow.
2. The flow velocity must be sufficiently large.
3. The pipe …


    582Dynamic Programming9.17W. S. Duff, Minimum cost solar thermal electric power systems: a dynamic programmingbased approach, Engineering Optimization, Vol. 2, pp. 83–95, 1976.9.18M. J. Harley and T. R. E. Chidley, Deterministic dynamic programming for long termreservoir operating policies, Engineering Optimization, Vol. 3, pp. 63–70, 1978.9.19S. G. Dhande, Reliability Based Design of Gear Trains: A Dynamic ProgrammingApproach, Design Technology Transfer, ASME, New York, pp. 413–422, read more..


(c) The objective function, f = (R1 + R2)R3, is separable.
(d) A nonserial system can always be converted to an equivalent serial system by
    regrouping the components.
(e) Both the input and the output variables are specified in a boundary value problem.
(f) The state transformation equations are the same as the design equations.
(g) The principle of optimality and the concept of suboptimization are the same.
(h) A final value problem can always be converted into an initial value problem.

PROBLEMS

9.1  Four …


    584Dynamic ProgrammingFigure 9.23Possible paths from Ato P.Figure 9.24Three subsystems connected in series.9.4The altitude of an airplane flying between two cities Aand F, separated by a distance of2000 miles, can be changed at points B, C, D, and E(Fig. 9.25). The fuel cost involvedin changing from one altitude to another between any two consecutive points is given inthe following table. Determine the altitudes of the airplane at the intermediate points forminimum fuel cost. read more..


Figure 9.25  Altitudes of the airplane in Example 9.4.

                           To altitude (ft):
    From altitude (ft):      0    8,000   16,000   24,000   32,000   40,000
         0                   —     4000     4800     5520     6160     6720
     8,000                 800     1600     2680     4000     4720     6080
    16,000                 320      480      800     2240     3120     4640
    24,000                   0      160      320      560     1600     3040
    32,000                   0        0       80      240      480     1600
    40,000                   0        0        0        0      160      240

9.5  Determine the path (route) corresponding to minimum cost in Problem 9.2 if a
     person wants to travel from city D to city M.

9.6  Each of the n lathes available in a machine shop can be used to produce two types
     of parts. If z lathes are …


    586Dynamic ProgrammingFigure 9.26Pipe network.For the segments Bi to Cj and Ci to DjTo node jFrom node i123181219291113371514Find the solution using dynamic programming.9.8Consider the problem of controlling a chemical reactor. The desired concentration ofmaterial leaving the reactor is 0.8 and the initial concentration is 0.2. The concentrationat any time t, x(t ), is given bydxdt=1 −x1 +xu(t )where u(t )is a design variable (control function).Find u(t )which minimizesf =T0{[x(t) −0.8]2 read more..


                               Thermal station, i
    Return function, R_i(x)      1      2      3
    R_i(0)                       0      0      0
    R_i(1)                       2      1      3
    R_i(2)                       4      5      5
    R_i(3)                       6      6      6

Find the investment policy for maximizing the total electric power generated.

9.10  Solve the following LP problem by dynamic programming:

      Maximize f(x1, x2) = 10x1 + 8x2

      subject to

          2x1 + x2 ≤ 25
          3x1 + 2x2 ≤ 45
          x2 ≤ 10
          x1 ≥ 0,  x2 ≥ 0

      Verify your solution by solving it graphically.

9.11  A fertilizer company needs to supply 50 tons of fertilizer at the end of the first
      month, 70 tons at the end of the second month, and 90 tons …


    10Integer Programming10.1INTRODUCTIONIn all the optimization techniques considered so far, the design variables are assumedto be continuous, which can take any real value. In many situations it is entirelyappropriate and possible to have fractional solutions. For example, it is possible to usea plate of thickness 2.60 mm in the construction of a boiler shell, 3.34 hours of labortime in a project, and 1.78 lb of nitrate to produce a fertilizer. Also, in many engineeringsystems, certain design read more..


    10.2Graphical Representation589Table 10.1Integer Programming MethodsLinear programming problemsNonlinear programming problemsAll-integerproblemMixed-integerproblemMixed-integerproblemZero–oneproblemPolynomialprogrammingproblemGeneral nonlinearproblemCutting plane methodBranch-and-bound methodCutting plane methodBranch-and-bound methodBalas methodAll-integerproblemGeneralized penalty functionmethodSequential linear integer(discrete) programmingmethodsolve all integer and mixed-integer nonlinear read more..


    590Integer ProgrammingFigure 10.1Graphical solution of the problem stated in Eqs. (10.1).by changing the constraint 3x1+ 11x2≤ 66 to 7x1+ 11x2≤ 88 in Eqs. (10.1). Withthis altered constraint, the feasible region and the solution of the LP problem, withoutconsidering the integer requirement, are shown in Fig. 10.2. The optimum solutionof this problem is identical with that of the preceding problem: namely, x1= 512 ,Figure 10.2Graphical solution with modified constraint. read more..


x2 = 4 1/2, and f = 34 1/2. The truncation of the fractional part of this solution gives
x1 = 5, x2 = 4, and f = 31. Although this truncated solution happened to be optimum
for the corresponding integer problem in the earlier case, it is not so in the present
case. In this case the optimum solution of the integer programming problem is given
by x1* = 0, x2* = 8, and f* = 32.

10.3  GOMORY'S CUTTING PLANE METHOD

10.3.1  Concept of a Cutting Plane

Gomory's method is based …


    592Integer Programmingadding these additional constraints is to reduce the original feasible convex regionABCDto a new feasible convex region (such as ABEFGD) such that an extremepoint of the new feasible region becomes an integer optimal solution to the integerprogramming problem. There are two main considerations to be taken while select-ing the additional constraints: (1) the new feasible region should also be a convexset, and (2) the part of the original feasible region that is sliced off read more..


expressed, from the ith equation of Table 10.2, as

    x_i = b_i − Σ_{j=1}^{n} a_ij y_j                                     (10.2)

where b_i is a noninteger. Let us write

    b_i = b̂_i + β_i                                                      (10.3)
    a_ij = â_ij + α_ij                                                    (10.4)

where b̂_i and â_ij denote the integers obtained by truncating the fractional parts from
b_i and a_ij, respectively. Thus β_i will be a strictly positive fraction (0 < β_i < 1) and
α_ij will be a nonnegative fraction (0 ≤ α_ij < 1). With the help of Eqs. (10.3) and
(10.4), Eq. (10.2) can be rewritten …
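The integer/fraction split of Eqs. (10.3) and (10.4) can be sketched with exact rational arithmetic. The numbers below are those that arise later in Example 10.1, whose source row is x1 = 11/2 − (11/36)y1 − (1/36)y2.

```python
# Split v = v_hat + frac with v_hat an integer and 0 <= frac < 1, as in
# Eqs. (10.3)-(10.4); floor (round toward -infinity) keeps frac nonnegative.
from fractions import Fraction
from math import floor

def split(v):
    v = Fraction(v)
    hat = floor(v)
    return hat, v - hat        # (integer part, fractional part)

b_hat, beta = split(Fraction(11, 2))
alphas = [split(a)[1] for a in (Fraction(11, 36), Fraction(1, 36))]
print(b_hat, beta, alphas)   # 5 1/2 [Fraction(11, 36), Fraction(1, 36)]
# resulting Gomory cut: (11/36) y1 + (1/36) y2 >= 1/2
```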


    594Integer ProgrammingTable 10.3Optimal Solution with Gomory ConstraintCoefficient corresponding to:Basicvariablesx1x2 . . . xi . . . xmy1y2 . . .yj . . .ynfsiConstantsx11000a11a12a1ja1n00b1x20100a21a22a2ja2n00b2...xi0010ai1ai2aijain00bi...xm0001am1am2amjamn00bmf0000c1c2cjcn10fsi0000−αi1−αi2−αij−αin01−βiComputational Procedure.Once the Gomory constraint is derived, the coefficients ofthis constraint are inserted in a new row of the final tableau of the ordinary LP problem(i.e., read more..


    10.3Gomory’s Cutting Plane Method595be avoided by using the all-integer integer programming algorithm developedby Gomory [10.10].4. For obtaining the optimal solution of an ordinary LP problem, we start from abasic feasible solution (at the start of phase II) and find a sequence of improvedbasic feasible solutions until the optimum basic feasible solution is found. Dur-ing this process, if the computations have to be terminated at any stage (forsome reason), the current basic feasible read more..


SOLUTION

Step 1: Solve the LP problem by neglecting the integer requirement of the variables
x_i, i = 1 to 4, using the regular simplex method, as shown below:

    Basic                                                 b_i/a_is
    variables     x1      x2    x3     x4    −f    b_i    (for a_is > 0)
    x3             3      −1     1      0     0    12
    x4             3      11*    0      1     0    66      6    <- pivot row
    −f            −3      −4     0      0     1     0
                           ^ most negative c_j

Result of pivoting:

    x3           36/11*    0     1     1/11   0    18     11/2  <- smaller one; pivot row
    x2            3/11     1     0     1/11   0     6     22
    −f          −21/11     0     0     4/11   1    24
                  ^ most negative c_j

Result of pivoting:

    x1             1       0   11/36   1/36   0    11/2
    x2             0       1  −1/12    1/12   0     9/2
    −f             0       0   7/12    5/12   1    69/2


between x1 and x2, let us select x1 as the basic variable having the largest fractional
value. From the row corresponding to x1 in the last tableau, we can write

    x1 = 11/2 − (11/36) y1 − (1/36) y2                                    (E1)

where y1 and y2 are used in place of x3 and x4 to denote the nonbasic variables.
By comparing Eq. (E1) with Eq. (10.2), we find that

    i = 1,   b1 = 11/2,   b̂1 = 5,   β1 = 1/2,
    a11 = 11/36,   â11 = 0,   α11 = 11/36,
    a12 = 1/36,    â12 = 0,   α12 = 1/36

From Eq. (10.9), the Gomory constraint can be …


Here

    c̄_j/(−a_rj) = (7/12)(36/11) = 21/11    for column y1
                 = (5/12)(36/1)  = 15       for column y2

Since 21/11 is the minimum of 21/11 and 15, the pivot element will be −11/36. The
result of the pivot operation is given in the following tableau:

    Basic
    variables    x1   x2   y1    y2     −f    s1        b_i
    x1            1    0    0     0      0     1         5
    x2            0    1    0    1/11    0    −3/11     51/11
    −f            0    0    0    4/11    1    21/11    369/11
    y1            0    0    1    1/11    0   −36/11     18/11

The solution given by the present tableau is x1 = 5, x2 = 4 7/11, y1 = 1 7/11, and
f = −33 6/11, in which some variables are still …


Since only the a_rj corresponding to column y2 is negative, the pivot element will be
−1/11 in the s2 row. The pivot operation on this element leads to the following
tableau:

    Basic
    variables    x1   x2   y1   y2   −f   s1    s2    b_i
    x1            1    0    0    0    0    1     0     5
    x2            0    1    0    0    0    0     1     4
    y1            0    0    1    0    0   −3     1     1
    −f            0    0    0    0    1    3     4    31
    y2            0    0    0    1    0   −3   −11     7

The solution given by this tableau is x1 = 5, x2 = 4, y1 = 1, y2 = 7, and f = −31,
which can be seen to satisfy the integer requirement. Hence this is the desired …


where

    a+_ij = { a_ij   if a_ij ≥ 0
              0      if a_ij < 0                                          (10.11)

    a−_ij = { 0      if a_ij ≥ 0
              a_ij   if a_ij < 0                                          (10.12)

Eq. (10.2) can be rewritten as

    Σ_{j=1}^{n} (a+_ij + a−_ij) y_j = β_i + (b̂_i − x_i)                  (10.13)

Here, by assumption, x_i is restricted to integer values while b_i is not an integer.
Since 0 < β_i < 1 and b̂_i is an integer, the value of β_i + (b̂_i − x_i) can be either
≥ 0 or < 0. First, we consider the case where

    β_i + (b̂_i − x_i) ≥ 0                                                (10.14)

In this case, in order for x_i to be an integer, we must …


Thus Eq. (10.13) yields

    Σ_{j=1}^{n} (a+_ij + a−_ij) y_j ≤ β_i − 1                            (10.21)

Since

    Σ_{j=1}^{n} a−_ij y_j ≤ Σ_{j=1}^{n} (a+_ij + a−_ij) y_j

we obtain

    Σ_{j=1}^{n} a−_ij y_j ≤ β_i − 1                                      (10.22)

Upon dividing this inequality by the negative quantity (β_i − 1), we obtain

    (1/(β_i − 1)) Σ_{j=1}^{n} a−_ij y_j ≥ 1                              (10.23)

Multiplying both sides of this inequality by β_i > 0, we can write inequality (10.23) as

    (β_i/(β_i − 1)) Σ_{j=1}^{n} a−_ij y_j ≥ β_i                          (10.24)

Since one of the inequalities in (10.18) and (10.24) must be satisfied, …


    602Integer Programmingwhich can be seen to be infeasible. Hence the constraint Eq. (10.26) is added at theend of Table 10.2, and the dual simplex method applied. This procedure is repeatedthe required number of times until the optimal mixed integer solution is found.Discussion.In the derivation of the Gomory constraint, Eq. (10.26), we have notmade use of the fact that some of the variables (yj )might be integer variables. Wenotice that any integer value can be added to or subtracted from the read more..


SOLUTION

Step 1: Solve the LP problem by the simplex method, neglecting the integer
requirement. This gives the following optimal tableau:

    Basic
    variables    x1   x2    y1      y2     −f    b_i
    x1            1    0   11/36    1/36    0    11/2
    x2            0    1  −1/12     1/12    0     9/2
    −f            0    0   7/12     5/12    1    69/2

The noninteger solution given by this tableau is

    x1 = 5 1/2,  x2 = 4 1/2,  y1 = y2 = 0,  and  f_min = −34 1/2.

Step 2: Formulate a Gomory constraint. Since x2 is the only variable that is restricted
to take integer values, we construct the …


    604Integer ProgrammingWhen this constraint is added to the tableau above, we obtain the following:Coefficients of variablesBasicvariablesx1x2y1y2−fs2bix110113613600112x201−1121120092−f0071251210692s200112−11201−12Step 3: Apply the dual simplex method to find a new optimum solution. Since−12 is theonly negative bi term, the pivot operation has to be done in s2 row. Further, aijcorresponding to y2 column is the only negative coefficient in s2 row and hencepivoting has to be done on read more..


    10.4Balas’ Algorithm for Zero – One Programming Problems605method by introducing the additional constraint that all the variables must be less thanor equal to 1. This additional constraint will restrict each of the variables to take avalue of either zero (0) or one (1). Since the cutting plane and the branch-and-boundalgorithms were developed primarily to solve a general integer LP problem, they do nottake advantage of the special features of zero–one LP problems. Thus several methodshave read more..


Initial Solution.  An initial solution for the problem stated in Eqs. (10.28) can be
taken as

    f0 = 0
    x_i = 0,   i = 1, 2, . . . , n                                        (10.29)
    Y(0) = B

If B ≥ 0, this solution will be feasible and optimal, since C ≥ 0 in Eqs. (10.28). In
this case there is nothing more to be done, as the starting solution itself happens to be
optimal. On the other hand, if some of the components b_j are negative, the solution
given by Eqs. (10.29) will be optimal (since C ≥ 0) but infeasible. Thus the method …


    10.5Integer Polynomial Programming607in two stages. In the first stage we see how an integer variable, xi, can be representedby an equivalent system of zero–one (binary) variables. We consider the conversionof a zero–one polynomial programming problem into a zero–one LP problem in thesecond stage.10.5.1Representation of an Integer Variable by an Equivalent Systemof Binary VariablesLet xi be any integer variable whose upper bound is given by ui so thatxi≤ ui <∞(10.32)We assume that read more..


Method of Finding q0, q1, q2, . . ..  Let M be the given positive integer. To find its
binary representation q_n q_{n−1} · · · q_1 q_0, we compute the following recursively:

    b_0 = M
    b_1 = (b_0 − q_0)/2
    b_2 = (b_1 − q_1)/2                                                   (10.35)
    . . .
    b_k = (b_{k−1} − q_{k−1})/2

where q_k = 1 if b_k is odd and q_k = 0 if b_k is even. The procedure terminates
when b_k = 0.

Equation (10.33) guarantees that x_i can take any feasible integer value less than
or equal to u_i. The use of Eq. (10.33) in the problem stated in Eq. (10.30) will …
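The recursion (10.35) can be sketched directly: it peels off the binary digits q0, q1, . . . of M, least significant first, until b_k reaches zero.

```python
# Binary digits of a positive integer M via the recursion (10.35).
def binary_digits(M):
    digits = []                        # q0 first (least significant)
    b = M
    while b != 0:
        q = 1 if b % 2 == 1 else 0     # q_k = 1 iff b_k is odd
        digits.append(q)
        b = (b - q) // 2               # b_{k+1} = (b_k - q_k) / 2
    return digits

print(binary_digits(13))   # [1, 0, 1, 1]  since 13 = 1 + 0*2 + 1*4 + 1*8
```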


the kth term of the polynomial simply becomes c_k y_k. However, we need to add the
following constraints to ensure that y_k = 1 when all x_i = 1 and zero otherwise:

    y_k ≥ Σ_{i=1}^{n_k} x_i − (n_k − 1)                                  (10.39)

    y_k ≤ (1/n_k) Σ_{i=1}^{n_k} x_i                                      (10.40)

It can be seen that if all x_i = 1, then Σ_{i=1}^{n_k} x_i = n_k, and Eqs. (10.39) and
(10.40) yield

    y_k ≥ 1                                                               (10.41)
    y_k ≤ 1                                                               (10.42)

which can be satisfied only if y_k = 1. If at least one x_i = 0, we have
Σ_{i=1}^{n_k} x_i < n_k, and Eqs. (10.39) and (10.40) give

    y_k ≥ −(n_k − 1)                                                      (10.43)
    y_k …


    610Integer ProgrammingX satisfies constraints (10.46) and (10.47). A design vector X that satisfies all theconstraints, Eqs. (10.46) to (10.48), is called an integer feasible solution.The simplest method of solving an integer optimization problem involves enumer-ating all integer points, discarding infeasible ones, evaluating the objective functionat all integer feasible points, and identifying the point that has the best objectivefunction value. Although such an exhaustive search in the read more..
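The exhaustive enumeration described above can be sketched for the small problem of Eqs. (10.1) (maximize f = 3x1 + 4x2 subject to 3x1 − x2 ≤ 12, 3x1 + 11x2 ≤ 66, x1, x2 ≥ 0 and integer); the search ranges below follow from the constraints.

```python
# Enumerate all integer points in a box containing the feasible region,
# discard infeasible ones, and keep the best objective value.
best = max(
    (3 * x1 + 4 * x2, x1, x2)
    for x1 in range(7) for x2 in range(7)          # 11*x2 <= 66 and 3*x1 <= 12 + x2
    if 3 * x1 - x2 <= 12 and 3 * x1 + 11 * x2 <= 66
)
print(best)   # (31, 5, 4): f* = 31 at x1* = 5, x2* = 4, as found by Gomory's method
```

This is workable only because the region is tiny; as the text notes, such a search becomes prohibitively expensive as the number of variables grows, which is what motivates branch-and-bound.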


It can be seen that a node can be fathomed if any of the following conditions are
true:

1. The continuous solution is an integer feasible solution.
2. The problem does not have a continuous feasible solution.
3. The optimal value of the continuous problem is larger than the current upper
   bound.

The algorithm continues to select a node for further branching until all the nodes have
been fathomed. At that stage, the particular fathomed node that has the integer …


    612Integer Programmingx1Figure 10.4Graphical solution of problem (E3).Step 3:The next branching process, with integer bounds on x2, leads to the followingproblems:Maximize f= 3x1+ 4x2subject to(E5)7x1+ 11x2≤ 88,3x1− x2≤ 12,x1≤ 5,x2≤ 4andMaximize f= 3x1+ 4x2subject to(E6)7x1+ 11x2≤ 88,3x1− x2≤ 12,x1≤ 5,x2≥ 5 read more..


    10.6Branch-and-Bound Method613Figure 10.5Graphical solution of problem (E4).The solutions of problems (E5) and (E6) are given byProblem (E5) : Fig. 10.6; (x∗1= 5, x∗2= 4, f∗ = 31)Problem (E6) : Fig. 10.7; (x∗1= 0, x∗2= 8, f∗ = 32)Since both the variables assumed integer values, the optimum solution of theinteger LP problem, Eqs. (E1) and (E2), is given by (x∗1= 0, x∗2= 8, f∗ = 32).Example 10.4Find the solution of the welded beam problem of Section 7.22.3 bytreating it as a read more..


    614Integer ProgrammingFigure 10.6Graphical solution of problem (E5).Next, the branching problems, with integer bounds on x3, are solved and the pro-cedure is continued until the desired optimum solution is found. The results are shownin Fig. LINEAR DISCRETE PROGRAMMINGLet the nonlinear programming problem with discrete variables be stated as follows:Minimize f (X)(10.52)subject togj (X)≤ 0,j= 1, 2, . . . , m(10.53)hk(X)= 0,k= 1, 2, . . . , p(10.54)xi∈ {di1,di2,...,diq}, read more..


    10.7Sequential Linear Discrete Programming615Figure 10.7Graphical solution of problem (E6).where the first n0 design variables are assumed to be discrete, dij is the jth discretevalue for the variable i, and X= {x1,x2,...,xn}T. It is possible to find the solutionof this problem by solving a series of mixed-integer linear programming problems.The nonlinear expressions in Eqs. (10.52) to (10.54) are linearized about a pointX0 using a first-order Taylor’s series expansion and the problem is read more..


    616Integer ProgrammingFigure 10.8Solution of the welded beam problem using branch-and-bound method. [10.25]The problem stated in Eqs. (10.57) to (10.62) cannot be solved using mixed-integerlinear programming techniques since some of the design variables are discrete andnoninteger. The discrete variables are redefined as [10.26]xi= yi1di1+ yi2di2+ ··· + yiq diq=qj=1yij dij ,i= 1, 2, . . . , n0(10.63)withyi1+ yi2+ ··· + yiq=qj=1yij= 1(10.64)yij= 0 or 1,i= 1, 2, . . . , n0,j= 1, 2, . . . , read more..


    10.7Sequential Linear Discrete Programming617subject togj (X)≈ gj (X0)+n0i=1∂gi∂xin0l=1yildil− x0i+ni=n0+1∂gj∂xi(xi− x0i )≤ 0,j= 1, 2, . . . , m(10.67)hk(X)≈ hk(X0) +n0i=1∂hk∂xin0l=1yildil− x0i+ni=n0+1∂hk∂xi(xi− x0i )= 0,k= 1, 2, . . . , p(10.68)qj=1yij= 1,i= 1, 2, . . . , n0(10.69)yij= 0 or 1,i= 1, 2, . . . , n0,j= 1, 2, . . . , q(10.70)x(l)i≤ x0i+ δxi≤ x(u)i,i= n0+ 1, n0+ 2, . . . , n(10.71)The problem stated in Eqs. (10.66) to (10.71) can now be solved as a read more..


    618Integer Programmingadjacent lower value —for simplifying the computations. Using X0=1.21.1 , we havef (X0)= 6.51,g(X0)= −2.26∇f (X0)=4x16x2X0=4.86.6,∇g(X0) =−1x21−1x22X0= −0.69−0.83Nowx1= y11(0.8)+ y12(1.2)+ y13(1.5)x2= y21(0.8)+ y22(1.1)+ y23(1.4)δx1= y11(0.8− 1.2)+ y12(1.2− 1.2)+ y13(1.5− 1.2)δx2= y21(0.8− 1.1)+ y22(1.1− 1.1)+ y23(1.4− 1.1)f≈ 6.51+ {4.8 6.6} −0.4y11 + 0.3y13−0.3y21 + 0.3y23g≈ −2.26 read more..
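The 0–1 re-encoding of Eqs. (10.63) to (10.65), applied to the discrete sets of this example ({0.8, 1.2, 1.5} and {0.8, 1.1, 1.4}), can be sketched as follows; the helper name `decode` is ours, not the book's.

```python
# x_i = sum_j y_ij * d_ij with exactly one y_ij = 1 (Eqs. 10.63-10.65):
# a 0-1 selection vector picks one discrete value per variable.
def decode(y_row, d_row):
    assert sum(y_row) == 1 and all(y in (0, 1) for y in y_row)   # (10.64)-(10.65)
    return sum(y * d for y, d in zip(y_row, d_row))              # (10.63)

d1 = [0.8, 1.2, 1.5]
d2 = [0.8, 1.1, 1.4]
x1 = decode([0, 1, 0], d1)   # selects d12 = 1.2
x2 = decode([0, 1, 0], d2)   # selects d22 = 1.1
print(x1, x2)   # 1.2 1.1 -> the linearization point X0 of the example
```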


    10.8Generalized Penalty Function Method61910.8GENERALIZED PENALTY FUNCTION METHODThe solution of an integer nonlinear programming problem, based on the concept ofpenalty functions, was originally suggested by Gellatly and Marcal in 1967 [10.5].This approach was later applied by Gisvold and Moe [10.4] and Shin et al. [10.24]to solve some design problems that have been formulated as nonlinear mixed-integerprogramming problems. The method can be considered as an extension of the interiorpenalty read more..


    620Integer Programmingthe point always remains in the feasible region. The term skQk(Xd )can be consideredas a penalty term with sk playing the role of a weighing factor (penalty parameter). Thefunction Qk(Xd )is constructed so as to give a penalty whenever some of the variablesin Xd take values other than integer values. Thus the function Qk(Xd )has the propertythatQk(Xd )=0if Xd∈ Sdµ > 0 if Xd /∈ Sd(10.75)We can take, for example,Qk(Xd )=xi∈Xd4xi− yizi− yi1−xi− yizi− read more..


    10.8Generalized Penalty Function Method621Figure 10.10Solution of a single-variable integer problem by penalty function method. x1,discrete variable; xj1 , jth value of x1 [10.4].Choice of the Initial Values of rk, sk, and βk.The numerical values of rk, sk, andβk have to be chosen carefully to achieve fast convergence. If these values are chosensuch that they give the response surfaces of φfunction as shown in Fig. 10.10c, severallocal minima will be introduced and the risk in finding the read more..


    622Integer Programmingwhere∇Pk =∂Pk/∂x1∂Pk/∂x2...∂Pk/∂xn(10.82)The initial value of s1, according to the requirement of Eq. (10.78), is given bys1= c1P′1(X1, r1)Q′1(X(d)1 , β1)(10.83)where X1 is the initial starting point for the minimization of φ1, X(d)1the set of startingvalues of integer-restricted variables, and c1 a constant whose value is generally takenin the range 0.001 and 0.1.To choose the weighting factor r1, read more..


    10.8Generalized Penalty Function Method623Figure 10.11Three-bar truss.A general convergence proof of the penalty function method, including the integerprogramming problems, was given by Fiacco [10.6]. Hence the present method isguaranteed to converge at least to a local minimum if the recovery procedure is appliedthe required number of times.Example 10.6 [10.24] Find the minimum weight design of the three-bar truss shownin Fig. 10.11 with constraints on the stresses induced in the members. Treat read more..


    624Integer Programmingg3(X)= 1−0.5x1− 2x21.5x1x2+√2x2x3+ 1.319x1x3 ≥ 0g4(X)= 1+0.5x1− 2x21.5x1x2+ √2x2x3+ 1.319x1x3 ≥ 0xi∈ {0.1,0.2,0.3,0.5,0.8,1.0,1.2}, i= 1, 2, 3The optimum solution of the continuous variable problem is given by f∗ = 2.7336,x∗1= 1.1549, x∗2= 0.4232, and x∗3= 0.0004. The optimum solution of the discretevariable problem is given by f∗ = 3.0414, x∗1= 1.2, x∗2= 0.5, and x∗3= OF BINARY PROGRAMMING PROBLEMS USINGMATLABThe MATLAB read more..


    References and Bibliography625Step 3:The output of the program is shown below:Optimization terminated.x =11111REFERENCES AND BIBLIOGRAPHY10.1M. L. Balinski, Integer programming: methods, uses, computation, Management Science,Vol. 12, pp. 253 –313, 1965.10.2L. J. Watters, Reduction of integer polynomial programming problems to zero –one linearprogramming problems, Operations Research, Vol. 15, No. 6, pp. 1171 –1174, 1967.10.3S. Retter and D. B. Rice, Discrete optimizing solution procedures read more..


    626Integer Programming10.18C. A. Trauth, Jr., and R. E. Woolsey, Integer linear programming: a study in computa-tional efficiency, Management Science, Vol. 15, No. 9, pp. 481 –493, 1969.10.19E. L. Lawler and M. D. Bell, A method for solving discrete optimization problems,Operations Research, Vol. 14, pp. 1098 –1112, 1966.10.20P. Hansen, Quadratic zero – one programming by implicit enumeration, pp. 265 –278 inNumerical Methods for Nonlinear Optimization, F. A. Lootsma, Ed., Academic read more..


    Problems62710.2Define the following terms:(a) Cutting plane(b) Gomory’s constraint(c) Mixed-integer programming problem(d) Additive algorithm10.3Give two engineering examples of a discrete programming problem.10.4Name two engineering systems for which zero – one programming is applicable.10.5What are the disadvantages of truncating the fractional part of a continuous solution foran integer problem?10.6How can you solve an integer nonlinear programming problem?10.7What is a branch-and-bound read more..


10.3  Maximize f = 4x1 + 3x2

      subject to

          3x1 + 2x2 ≤ 18
          x1, x2 ≥ 0, integers

10.4  Maximize f = 3x1 − x2

      subject to

          3x1 − 2x2 ≤ 3
          −5x1 − 4x2 ≤ −10
          x1, x2 ≥ 0, integers

10.5  Maximize f = 2x1 + x2

      subject to

          8x1 + 5x2 ≤ 15
          x1, x2 ≥ 0, integers

10.6  Solve the following problem using Gomory's cutting plane method:

      Maximize f = 6x1 + 7x2

      subject to

          7x1 + 6x2 ≤ 42
          5x1 + 9x2 ≤ 45
          x1 − x2 ≤ 4
          x_i ≥ 0 and integer,  i = 1, 2

10.7  Solve the following problem using Gomory's cutting plane method:

      Maximize …


    Problems6292. The pipes leading out of Bor of Cshould have total capacities of either 2 or 3.3. No pipe between any two cities must have a capacity exceeding 2.Only pipes of an integer number of capacity units are available and the cost of a pipe isproportional to its capacity and to its length. Determine the capacities of the pipe linesto minimize the total cost.10.10Convert the following integer quadratic problem into a zero –one linear programmingproblem:Minimize f= 2x21+ 3x22+ 4x1x2− read more..


    630Integer Programming10.14Find the solution of Problem 10.1 using the branch-and-bound method coupled with thegraphical method of solution for the branching problems.10.15Find the solution of the following problem using the branch-and-bound method coupledwith the graphical method of solution for the branching problems:Maximize f= x1− 4x2subject tox1− x2≥ −4, 4x1+ 5x2≤ 455x1− 2x2≤ 20,5x1+ 2x2≥ 10xi≥ 0 and integer,i= 1, 210.16Solve the following mixed integer programming problem read more..


10.20 Find the solution of the following problem using a graphical method based on the generalized penalty function approach:
Minimize f = x
subject to
x − 1 ≥ 0 with x ∈ {1, 2, 3, . . .}
Select suitable values of rk and sk to construct the φk function.

10.21 Find the solution of the following binary programming problem using the MATLAB function bintprog:
Minimize fᵀx subject to Ax ≤ b and Aeq x = beq

11 Stochastic Programming

11.1 INTRODUCTION

Stochastic or probabilistic programming deals with situations in which some or all of the parameters of the optimization problem are described by stochastic (random or probabilistic) variables rather than by deterministic quantities. The sources of random variables may be several, depending on the nature and type of the problem. For instance, in the design of concrete structures, the strength of concrete is a random variable.

phenomena are chance dependent, and one has to resort to probability theory to describe the characteristics of such phenomena.

Before introducing the concept of probability, it is necessary to define certain terms such as experiment and event. An experiment denotes the act of performing something whose outcome is subject to uncertainty and is not known exactly. For example, tossing a coin, rolling a die, and measuring the yield strength of steel can be called experiments.

the range −∞ to ∞. Such a quantity (like X) is called a random variable. We denote a random variable by a capital letter and the particular value taken by it by a lowercase letter. Random variables are of two types: (1) discrete and (2) continuous. If the random variable is allowed to take only discrete values x1, x2, . . . , xn, it is called a discrete random variable. On the other hand, if the random variable is permitted to take any real value in a specified range, it is called a continuous random variable.

possible value is T, then
FX(x) = 0 for all x < S
and
FX(x) = 1 for all x > T

Probability Density Function (Continuous Case). The probability density function of a random variable is defined by
fX(x) dx = P(x ≤ X ≤ x + dx)   (11.6)
which is equal to the probability of finding X in the infinitesimal interval (x, x + dx). The distribution function of X is defined as the probability of finding X less than or equal to x, that is,
FX(x) = ∫_{−∞}^{x} fX(x′) dx′

Discrete Case. Let us assume that there are n trials in which the random variable X is observed to take the value x1 (n1 times), x2 (n2 times), and so on, with n1 + n2 + · · · + nm = n. Then the arithmetic mean of X, denoted X̄, is given by
X̄ = Σ_{k=1}^{m} xk (nk/n) = Σ_{k=1}^{m} xk fX(xk)   (11.9)
where nk/n is the relative frequency of occurrence of xk, which is the same as the probability mass function fX(xk). Hence, in general, the expected value E(X) of a discrete random variable can be written as E(X) = Σ_{k} xk fX(xk).

Figure 11.2 Two density functions with the same mean.

SOLUTION
X̄ = Σ_{i=0}^{6} xi pX(xi) = 0(0.02) + 1(0.15) + 2(0.22) + 3(0.26) + 4(0.17) + 5(0.14) + 6(0.04) = 2.99
E[X²] = Σ_{i=0}^{6} xi² pX(xi) = 0(0.02) + 1(0.15) + 4(0.22) + 9(0.26) + 16(0.17) + 25(0.14) + 36(0.04) = 11.03
Thus
σX² = E[X²] − (X̄)² = 11.03 − (2.99)² = 2.0899 or σX = 1.4456

Example 11.3 The force applied on an engine brake (X) has the density function
fX(x) = x/48 for 0 ≤ x ≤ 8 lb, and (12 − x)/24 for 8 ≤ x ≤ 12
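The arithmetic in the solution of Example 11.2 can be checked with a short script (a Python sketch, not part of the text; the probability mass values are those given in the example):

```python
# Mean, variance, and standard deviation of a discrete random variable
# from its probability mass function (data of Example 11.2).
pmf = {0: 0.02, 1: 0.15, 2: 0.22, 3: 0.26, 4: 0.17, 5: 0.14, 6: 0.04}

mean = sum(x * p for x, p in pmf.items())               # E[X]
second_moment = sum(x * x * p for x, p in pmf.items())  # E[X^2]
variance = second_moment - mean**2                      # E[X^2] - (E[X])^2
std_dev = variance**0.5

print(round(mean, 2), round(variance, 4), round(std_dev, 4))
# 2.99 2.0899 1.4456
```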


= 21.3333 + 29.3333 = 50.6666
σX² = E[X²] − (E[X])² = 50.6666 − (6.6667)² = 6.2222 or σX = 2.4944

11.2.4 Function of a Random Variable

If X is a random variable, any other variable Y defined as a function of X will also be a random variable. If fX(x) and FX(x) denote, respectively, the probability density and distribution functions of X, the problem is to find the density function fY(y) and the distribution function FY(y) of the random variable Y. Let the functional relation be Y = g(X).

11.2.5 Jointly Distributed Random Variables

When two or more random variables are considered simultaneously, their joint behavior is determined by a joint probability distribution function. The probability distributions of single random variables are called univariate distributions, and distributions that involve two random variables are called bivariate distributions. In general, a distribution that involves more than one random variable is called a multivariate distribution.

exclusive ways of obtaining the points lying between x and x + dx. Let the lower and upper limits of y be a1(x) and b1(x). Then
P[x ≤ x′ ≤ x + dx] = ∫_{a1(x)}^{b1(x)} fX,Y(x, y) dy dx = fX(x) dx
fX(x) = ∫_{y1=a1(x)}^{y2=b1(x)} fX,Y(x, y) dy   (11.25)
Similarly, we can show that
fY(y) = ∫_{x1=a2(y)}^{x2=b2(y)} fX,Y(x, y) dx   (11.26)

11.2.6 Covariance and Correlation

If X and Y are two jointly distributed random variables, the variances of X and Y are defined as
E[(X − X̄)²] = Var[X] = ∫_{−∞}^{∞} (x − X̄)² fX(x) dx

Then the joint distribution function FY(y), by definition, is given by
FY(y) = P(Y ≤ y) = ∫∫· · ·∫_{g(x1,x2,...,xn) ≤ y} fX1,X2,...,Xn(x1, x2, . . . , xn) dx1 dx2 · · · dxn   (11.32)
where the integration is carried out over the domain of the n-dimensional (X1, X2, . . . , Xn) space in which the inequality g(x1, x2, . . . , xn) ≤ y is satisfied. By differentiating Eq. (11.32), we can get the density function of Y, fY(y).

These results can be generalized to the case when Y is a linear function of several random variables. Thus if
Y = Σ_{i=1}^{n} ai Xi   (11.38)
then
E(Y) = Σ_{i=1}^{n} ai E(Xi)   (11.39)
Var(Y) = Σ_{i=1}^{n} ai² Var(Xi) + Σ_{i=1}^{n} Σ_{j=1, j≠i}^{n} ai aj Cov(Xi, Xj)   (11.40)

Approximate Mean and Variance of a Function of Several Random Variables. If Y = g(X1, . . . , Xn), the approximate mean and variance of Y can be obtained as follows. Expand the function g in a Taylor series about the mean values X̄1, X̄2, . . . , X̄n to obtain
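Equations (11.39) and (11.40) are easy to exercise numerically. The sketch below uses made-up coefficients and moments (not from the text) and assumes the Xi are uncorrelated, so the covariance terms drop out:

```python
# Mean and variance of Y = sum a_i * X_i for uncorrelated X_i:
# E(Y) = sum a_i E(X_i);  Var(Y) = sum a_i^2 Var(X_i)   [Eqs. (11.39)-(11.40)]
a = [2.0, -1.0, 3.0]       # coefficients a_i (illustrative)
mean_x = [5.0, 4.0, 1.0]   # E(X_i)
var_x = [0.2, 0.5, 0.1]    # Var(X_i)

mean_y = sum(ai * m for ai, m in zip(a, mean_x))
var_y = sum(ai**2 * v for ai, v in zip(a, var_x))
print(mean_y, var_y)  # 9.0 2.2
```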


11.2.8 Probability Distributions

There are several types of probability distributions (analytical models) for describing various types of discrete and continuous random variables. Some of the common distributions are given below:

Discrete case: discrete uniform, binomial, geometric, multinomial, Poisson, hypergeometric, negative binomial (or Pascal's).
Continuous case: uniform, normal or Gaussian, gamma, exponential, beta, Rayleigh, Weibull.

Figure 11.4 Standard normal density function.

By the same token, the values of z corresponding to p < 0.5 can be obtained as
z = φ⁻¹(p) = −φ⁻¹(1 − p)   (11.50)
Notice that any normally distributed variable (X) can be reduced to a standard normal variable by using the transformation
z = (x − µX)/σX   (11.51)
For example, if P(a < X ≤ b) is required, we have
P(a < X ≤ b) = [1/(σX√(2π))] ∫_{a}^{b} e^{−(1/2)[(x−µX)/σX]²} dx   (11.52)
By using Eq. (11.51) and dx = σX dz, Eq. (11.52) becomes an integral of the standard normal density.
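The transformation of Eq. (11.51) is what lets a single table or routine for φ(z) serve every normal variable. A small sketch using Python's `statistics.NormalDist` (the mean, standard deviation, and interval here are illustrative, not from the text):

```python
from statistics import NormalDist

# P(a < X <= b) for X ~ N(mu, sigma) via z = (x - mu)/sigma:
# P(a < X <= b) = phi((b - mu)/sigma) - phi((a - mu)/sigma)
mu, sigma = 10.0, 2.0   # illustrative mean and standard deviation
a, b = 8.0, 12.0

phi = NormalDist().cdf  # standard normal distribution function phi(z)
p = phi((b - mu) / sigma) - phi((a - mu) / sigma)
print(round(p, 4))      # 0.6827, i.e. P(-1 < Z <= 1)
```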


Table 11.1 Standard normal distribution table (values of z, the density f(z), and the distribution function φ(z)).

= [1 − P(Z ≤ 1.667)] + [1 − P(Z ≤ 1.667)]
= 2.0 − 2P(Z ≤ 1.667)
= 2.0 − 2(0.9525) = 0.095 = 9.5%

Joint Normal Density Function. If X1, X2, . . . , Xn follow the normal distribution, any linear function Y = a1X1 + a2X2 + · · · + anXn also follows the normal distribution with mean
Ȳ = a1X̄1 + a2X̄2 + · · · + anX̄n   (11.55)
and variance
Var(Y) = a1² Var(X1) + a2² Var(X2) + · · · + an² Var(Xn)   (11.56)
if X1, X2, . . . , Xn are independent.

11.2.9 Central Limit Theorem

If X1, X2, . . . , Xn are n mutually independent random variables with finite mean and variance (they may follow different distributions), the sum
Sn = Σ_{i=1}^{n} Xi   (11.60)
tends to a normal variable if no single variable contributes significantly to the sum as n tends to infinity. Because of this theorem, we can approximate most physical phenomena as normal random variables. Physically, Sn may represent, for example, the tensile strength of a material.
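The theorem can be observed numerically: sums of uniform random variables, each with mean 1/2 and variance 1/12, quickly look normal. A small simulation sketch (not part of the text):

```python
import random

# Sum S_n of n = 12 independent uniform(0, 1) variables: by the central
# limit theorem S_n is approximately normal with mean n/2 and variance n/12.
random.seed(0)
n, trials = 12, 20000
sums = [sum(random.random() for _ in range(n)) for _ in range(trials)]

mean = sum(sums) / trials                           # expected n/2 = 6
var = sum((s - mean) ** 2 for s in sums) / trials   # expected n/12 = 1
print(round(mean, 2), round(var, 2))  # close to 6 and 1
```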


where cj, aij, and bi are random variables and pi are specified probabilities. Notice that Eqs. (11.65) indicate that the ith constraint,
Σ_{j=1}^{n} aij xj ≤ bi
has to be satisfied with a probability of at least pi, where 0 ≤ pi ≤ 1. For simplicity, we assume that the design variables xj are deterministic and that cj, aij, and bi are random variables. We shall further assume that all the random variables are normally distributed with known means and standard deviations.

where hi is a new random variable defined as
hi = Σ_{j=1}^{n} aij xj − bi = Σ_{k=1}^{n+1} qik yk   (11.72)
where
qik = aik, k = 1, 2, . . . , n;  qi,n+1 = bi
yk = xk, k = 1, 2, . . . , n;  yn+1 = −1
Notice that the constant yn+1 is introduced for convenience. Since hi is a linear combination of the normally distributed random variables qik, it also follows the normal distribution. The mean of hi is given by
h̄i = Σ_{k=1}^{n+1} q̄ik yk = Σ_{j=1}^{n} āij xj − b̄i   (11.73)

= Σ_{k=1}^{n} xk² Var(aik) + 2 Σ_{k=1}^{n} Σ_{l=k+1}^{n} xk xl Cov(aik, ail) + Var(bi) − 2 Σ_{k=1}^{n} xk Cov(aik, bi)   (11.77)
Thus the constraints in Eqs. (11.71) can be restated as
P[(hi − h̄i)/√Var(hi) ≤ −h̄i/√Var(hi)] ≥ pi, i = 1, 2, . . . , m   (11.78)
where (hi − h̄i)/√Var(hi) represents a standard normal variable with a mean of zero and a variance of 1. Thus if si denotes the value of the standard normal variable at which
φ(si) = pi   (11.79)
the constraints of Eq. (11.78) can be stated in their equivalent deterministic form.

Machining time required per unit (min) and maximum time available per week (min):

Lathes: Part I ā11 = 10, σa11 = 6; Part II ā12 = 5, σa12 = 4; b̄1 = 2500, σb1 = 500
Milling machines: ā21 = 4, σa21 = 4; ā22 = 10, σa22 = 7; b̄2 = 2000, σb2 = 400
Grinding machines: ā31 = 1, σa31 = 2; ā32 = 1.5, σa32 = 3; b̄3 = 450, σb3 = 50
Profit per unit: c̄1 = 50, σc1 = 20; c̄2 = 100, σc2 = 50

SOLUTION By defining new random variables hi as
hi = Σ_{j=1}^{n} aij xj − bi

Since the value of the standard normal variate (si) corresponding to the probability 0.99 is 2.33 (obtained from Table 11.1), we can state the equivalent deterministic nonlinear optimization problem as follows:
Minimize F = k1(50x1 + 100x2) + k2 √(400x1² + 2500x2²)
subject to
10x1 + 5x2 + 2.33 √(36x1² + 16x2² + 250,000) − 2500 ≤ 0
4x1 + 10x2 + 2.33 √(16x1² + 49x2² + 160,000) − 2000 ≤ 0
x1 + 1.5x2 + 2.33 √(4x1² + 9x2² + 2500) − 450 ≤ 0
x1 ≥ 0, x2 ≥ 0
This problem can be solved by any of the standard nonlinear programming techniques.
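The constant 2.33 above is the standard normal variate si satisfying φ(si) = 0.99, read from Table 11.1; it can equally be computed with an inverse normal routine (a sketch, not part of the text):

```python
from statistics import NormalDist

# Standard normal variate s_i with phi(s_i) = p_i, used when converting a
# chance constraint P(h_i <= 0) >= p_i to its deterministic equivalent.
def standard_normal_variate(p):
    return NormalDist().inv_cdf(p)

for p in (0.90, 0.95, 0.99):
    print(p, round(standard_normal_variate(p), 3))
# 0.9 -> 1.282, 0.95 -> 1.645, 0.99 -> 2.326
```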


If the standard deviations of the yi, σyi, are small, f(Y) can be approximated by the first two terms of Eq. (11.85):
f(Y) ≃ f(Ȳ) − Σ_{i=1}^{N} (∂f/∂yi)|Ȳ ȳi + Σ_{i=1}^{N} (∂f/∂yi)|Ȳ yi = ψ(Y)   (11.86)
If all yi (i = 1, 2, . . . , N) follow the normal distribution, ψ(Y), which is a linear function of Y, also follows the normal distribution. The mean and the variance of ψ are given by
ψ̄ = ψ(Ȳ)   (11.87)
Var(ψ) = σψ² = Σ_{i=1}^{N} [(∂f/∂yi)|Ȳ]² σyi²   (11.88)
since all yi are independent.
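Equations (11.87) and (11.88) can be sketched for a concrete, made-up function; the partial derivatives are evaluated at the mean point and the yi are taken as independent:

```python
# First-order approximation of the mean and variance of f(Y) = y1 * y2**2
# (an illustrative function, not one from the text):
# psi-bar = f(Y-bar);  Var(psi) = sum_i (df/dy_i at Y-bar)^2 * sigma_i^2
y_bar = [3.0, 2.0]    # mean values of y1, y2
sigma = [0.1, 0.05]   # standard deviations of y1, y2

mean_f = y_bar[0] * y_bar[1] ** 2    # f evaluated at the mean point
df_dy1 = y_bar[1] ** 2               # partial derivatives at Y-bar
df_dy2 = 2.0 * y_bar[0] * y_bar[1]
var_f = (df_dy1 * sigma[0]) ** 2 + (df_dy2 * sigma[1]) ** 2
print(mean_f, round(var_f, 4))  # 12.0 0.52
```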


By introducing the new variable
θ = (gj − ḡj)/σgj   (11.94)
and noting that
∫_{−∞}^{∞} (1/√(2π)) e^{−t²/2} dt = 1   (11.95)
Eq. (11.90) can be expressed as
∫_{−ḡj/σgj}^{∞} (1/√(2π)) e^{−θ²/2} dθ ≥ ∫_{−φj(pj)}^{∞} (1/√(2π)) e^{−t²/2} dt   (11.96)
where φj(pj) is the value of the standard normal variate corresponding to the probability pj. Thus
−ḡj/σgj ≤ −φj(pj)
or
−ḡj + σgj φj(pj) ≤ 0   (11.97)
Equation (11.97) can be rewritten as
ḡj − φj(pj) [Σ_{i=1}^{N} ((∂gj/∂yi)|Ȳ)² σyi²]^{1/2} ≥ 0, j = 1, 2, . . .   (11.98)

Figure 11.5 Column under compressive load.

Mean diameter of the section: (d̄, σd) = (d, 0.01d)
Column length: (l̄, σl) = (250, 2.5) cm

SOLUTION This problem, when the standard deviations of the various quantities are neglected, can be seen to be identical to the one considered in Example 1.1. We take the design variables as the mean tubular diameter (d) and the tube thickness (t):
X = {x1, x2}ᵀ = {d, t}ᵀ
Notice that one of the design variables (d) is probabilistic in this case.

the objective function can be expressed as f(Y) = 5W + 2d = 5ρlπ dt + 2d. Since
Ȳ = {P̄, Ē, ρ̄, f̄y, l̄, d̄}ᵀ = {2500, 0.85 × 10⁶, 0.0025, 500, 250, d}ᵀ
we have
f(Ȳ) = 5ρ̄l̄π dt + 2d = 9.8175dt + 2d
(∂f/∂y1)|Ȳ = (∂f/∂y2)|Ȳ = (∂f/∂y4)|Ȳ = 0
(∂f/∂y3)|Ȳ = 5πl̄ dt = 3927.0dt
(∂f/∂y5)|Ȳ = 5πρ̄ dt

The mean values of the constraint functions are given by Eq. (11.92) as
ḡ1 = P̄/(π dt) − f̄y = 2500/(π dt) − 500
ḡ2 = P̄/(π dt) − π²Ē(d² + t²)/(8l̄²) = 2500/(π dt) − π²(0.85 × 10⁶)(d² + t²)/[8(250)²]
ḡ3 = −d + 2.0
ḡ4 = d − 14.0
ḡ5 = −t + 0.2
ḡ6 = t − 0.8
The partial derivatives of the constraint functions can be computed as follows:
(∂g1/∂y2)|Ȳ = (∂g1/∂y3)|Ȳ = (∂g1/∂y5)|Ȳ = 0
(∂g1/∂y1)|Ȳ = 1/(π dt)
(∂g1/∂y4)|Ȳ = −1
(∂g1/∂y6)|Ȳ = −P̄/(π d²t) = −2500/(π d²t)

(∂g4/∂y6)|Ȳ = 1.0
(∂g5/∂yi)|Ȳ = (∂g6/∂yi)|Ȳ = 0 for i = 1 to 6
Since the value of the standard normal variate φj(pj) corresponding to the probability pj = 0.95 is 1.645 (obtained from Table 11.1), the constraints in Eq. (11.98) can be expressed as follows.
For j = 1:
2500/(π dt) − 500 − 1.645 [σP²/(π²d²t²) + σfy² + (2500)²σd²/(π²d⁴t²)]^{1/2} ≤ 0
795/(dt) − 500 − 1.645 [25,320/(d²t²) + 2500 + 63.3/(d²t²)]^{1/2} ≤ 0   (E10)
For j = 2:
2500/(π dt) − 16.78(d² + t²) − 1.645 [σP²/(π²d²t²) + · · ·]^{1/2} ≤ 0

Thus the equivalent deterministic optimization problem can be stated as follows: Minimize F(d, t) given by Eq. (E3) subject to the constraints given by Eqs. (E10) to (E15). The solution of the problem can be found by applying any of the standard nonlinear programming techniques discussed in Chapter 7. In the present case, since the number of design variables is only two, a graphical method can also be used to find the solution.

11.5 STOCHASTIC GEOMETRIC PROGRAMMING

where Nc is the number of active turns, Q the number of inactive turns, and ρ the weight density. The deflection of the spring (δ) is given by
δ = 8PC³Nc/(Gd)   (E2)
where P is the load, C = D/d, and G is the shear modulus. By substituting the expression for Nc from Eq. (E2) into Eq. (E1), the objective function can be expressed as
f(X) = [π²ρGδ/(32P)] d⁶/D² + (π²ρQ/4) d²D   (E3)
The yield constraint can be expressed, in deterministic form, as
τ = 8KPC/(π d²) ≤ τmax   (E4)

REFERENCES AND BIBLIOGRAPHY

11.1 E. Parzen, Modern Probability Theory and Its Applications, Wiley, New York, 1960.
11.2 A. H. S. Ang and W. H. Tang, Probability Concepts in Engineering Planning and Design, Vol. I, Basic Principles, Wiley, New York, 1975.
11.3 S. S. Rao, Reliability-Based Design, McGraw-Hill, New York, 1992.
11.4 G. B. Dantzig, Linear programming under uncertainty, Management Science, Vol. 1, pp. 197–207, 1955.
11.5 A. Charnes and W. W. Cooper, Chance-constrained programming, Management Science, Vol. 6, pp. 73–79, 1959.

11.22 S. S. Rao and C. P. Reddy, Mechanism design by chance constrained programming, Mechanism and Machine Theory, Vol. 14, pp. 413–424, 1979.

REVIEW QUESTIONS

11.1 Define the following terms:
(a) Mean
(b) Variance
(c) Standard deviation
(d) Probability
(e) Independent events
(f) Joint density function
(g) Covariance
(h) Central limit theorem
(i) Chance constrained programming

11.2 Match the following terms and descriptions:
(a) Marginal density function — Describes sum of several random variables

11.5 What is a random variable?
11.6 Give two examples of random design parameters.
11.7 What is the difference between probability density and probability distribution functions?
11.8 What is the difference between discrete and continuous random variables?
11.9 How does the correlation coefficient relate two random variables?
11.10 Identify possible random variables in an LP problem.
11.11 How do you find the mean and standard deviation of a sum of several random variables?

PROBLEMS

distribution,
fV(v) = (1/V0) e^{−v/V0} for v ≥ 0; 0 for v < 0
where V0 is the mean velocity. Derive the density function for the head loss H.

11.7 The joint density function of two random variables X and Y is given by
fX,Y(x, y) = 3x²y + 3y²x for 0 ≤ x ≤ 1, 0 ≤ y ≤ 1; 0 elsewhere
Find the marginal density functions of X and Y.

11.8 Steel rods, manufactured with a nominal diameter of 3 cm, are considered acceptable if the diameter falls within the limits of 2.99

11.13 The range (R) of a projectile is given by
R = (V0²/g) sin 2φ
where V0 is the initial velocity of the projectile, g the acceleration due to gravity, and φ the angle from the horizontal, as shown in Fig. 11.6. If the means and standard deviations of V0 and φ are given by V̄0 = 100 ft/s, σV0 = 10 ft/s, φ̄ = 30°, and σφ = 3°, find the first-order mean and standard deviation of the range R, assuming that V0 and φ are statistically independent. Evaluate also the second-order mean range.

Time per unit (min) for each product and stage capacity (min/day):

Stage 1: A mean 4, s.d. 1; B mean 8, s.d. 3; C mean 4, s.d. 4; capacity mean 1720, s.d. 172
Stage 2: A mean 12, s.d. 2; B mean 0, s.d. 0; C mean 8, s.d. 2; capacity mean 1840, s.d. 276
Stage 3: A mean 4, s.d. 2; B mean 16, s.d. 4; C mean 0, s.d. 0; capacity mean 1680, s.d. 336

The profit per unit is also a random variable with the following data:
Product A: mean $6, standard deviation $2
Product B: mean $4, standard deviation $1
Product C: mean $10, standard deviation $3

Assuming that all amounts produced are absorbed by the market, determine the daily number of units to be manufactured of each product for the following cases:

11.19 An article is to be restocked every three months in a year. The quarterly demand U is random, and its probability law in any quarter is as given below:

u: 0, 1, 2, 3, >3
PU(u): 0.2, 0.3, 0.4, 0.1, 0.0

The cost of stocking an article for a unit of time is 4, and when the stock is exhausted, there is a scarcity charge of 12. Orders that are not satisfied are lost; in other words, they are not carried forward to the next period. Further, the stock cannot exceed three articles.

12 Optimal Control and Optimality Criteria Methods

12.1 INTRODUCTION

In this chapter we give a brief introduction to the following techniques of optimization:
1. Calculus of variations
2. Optimal control theory
3. Optimality criteria methods

If an optimization problem involves the minimization (or maximization) of a functional subject to constraints of the same type, the decision variable will not be a number but a function. The calculus of variations can be used to solve this type of problem.

12.2.2 Problem of Calculus of Variations

A simple problem in the theory of the calculus of variations with no constraints can be stated as follows: Find a function u(x) that minimizes the functional (integral)
A = ∫_{x1}^{x2} F(x, u, u′, u″) dx   (12.1)
where A and F can be called functionals (functions of other functions). Here x is the independent variable, and
u = u(x), u′ = du(x)/dx, u″ = d²u(x)/dx²
In mechanics, the functional usually possesses a clear physical meaning.

Figure 12.1 Tentative and exact solutions.

We also define the variation of a function of several variables or a functional in a manner similar to the calculus definition of a total differential:
δF = (∂F/∂u) δu + (∂F/∂u′) δu′ + (∂F/∂u″) δu″   (12.5)
where the (∂F/∂x) δx term vanishes, since we are finding the variation of F for a fixed value of x, i.e., δx = 0. Now let us consider the variation δA corresponding to variations δu in the solution.

Thus
δA = ∫_{x1}^{x2} [∂F/∂u − (d/dx)(∂F/∂u′) + (d²/dx²)(∂F/∂u″)] δu dx + [∂F/∂u′ − (d/dx)(∂F/∂u″)] δu |_{x1}^{x2} + (∂F/∂u″) δu′ |_{x1}^{x2} = 0   (12.9)
Since δu is arbitrary, each term must vanish individually:
∂F/∂u − (d/dx)(∂F/∂u′) + (d²/dx²)(∂F/∂u″) = 0   (12.10)
[∂F/∂u′ − (d/dx)(∂F/∂u″)] δu |_{x1}^{x2} = 0   (12.11)
(∂F/∂u″) δu′ |_{x1}^{x2} = 0   (12.12)
Equation (12.10) is the governing differential equation for the given problem and is called the Euler equation or the Euler–Lagrange equation.

Figure 12.2 Curve of minimum time of descent.

Since potential energy is converted to kinetic energy as the particle moves down the path, we can write ½mv² = mgx. Hence
dt = [(1 + (y′)²)/(2gx)]^{1/2} dx   (E1)
and the integral to be made stationary is
t = ∫_{0}^{xB} [(1 + (y′)²)/(2gx)]^{1/2} dx   (E2)
The integrand is a function of x and y′ only, and so is a special case of Eq. (12.1). Using the Euler–Lagrange equation
(d/dx)(∂F/∂y′) − ∂F/∂y = 0 with F = [(1 + (y′)²)/(2gx)]^{1/2}
we obtain
(d/dx) { y′/[x(1 + (y′)²)]^{1/2} } = 0

depend on the shape of the body and the relative velocity in a very complex manner. However, if the density of the fluid is sufficiently small, the normal pressure (p) acting on the solid body can be approximated as [12.3]
p = 2ρv² sin²θ   (E1)
where ρ is the density of the fluid, v the velocity of the fluid relative to the solid body, and θ the angle between the direction of the velocity of the fluid and the tangent to the surface, as shown in Fig. 12.3.

Figure 12.4 Element of surface area acted on by the pressure p.

Find y(x) that minimizes the drag P given by Eq. (E5) subject to the condition that y(x) satisfies the end conditions
y(x = 0) = 0 and y(x = L) = R   (E6)
By comparing the functional P of Eq. (E5) with A of Eq. (12.1), we find that
F(x, y, y′, y″) = 4πρv² (y′)³ y   (E7)
The Euler–Lagrange equation, Eq. (12.10), corresponding to this functional can be obtained as
(y′)³ − 3 (d/dx)[y(y′)²] = 0   (E8)

Hence the shape of the solid body having minimum drag is given by the equation
y(x) = R(x/L)^{3/4}

12.2.3 Lagrange Multipliers and Constraints

If the variable x is not completely independent but has to satisfy some condition(s) of constraint, the problem can be stated as follows: Find the function y(x) such that the integral
A = ∫_{x1}^{x2} F(x, y, dy/dx) dx → minimum
subject to the constraint   (12.17)
g(x, y, dy/dx) = 0
where g may be an integral function. The stationary value of a constrained calculus-of-variations problem can be found by using Lagrange multipliers.

To formulate the problem, we first write the heat balance equation for an elemental length dx of the fin:
heat inflow by conduction = heat outflow by conduction and convection
that is,
−kA (dt/dx)|x = −kA (dt/dx)|x+dx + hS(t − t∞)   (E2)
where k is the thermal conductivity, A the cross-sectional area of the fin = 2y(x) per unit width of the fin, h the heat transfer coefficient, and S the surface area of the fin element = 2√(1 + (y′)²) dx per unit width.

By substituting Eq. (E9) into (E7), the variational problem can be restated as: Find y(x) that maximizes
H = 2h ∫_{0}^{L} t(x) dx   (E10)
subject to the constraint
g(x, t, t′) = (2ρh/k) ∫_{0}^{L} [1/(dt/dx)] ∫_{x}^{L} t(x) dx dx + m = 0   (E11)
This problem can be solved using the Lagrange multiplier method. The functional I to be extremized is given by
I = H + λg = 2h ∫_{0}^{L} { t(x) + (λρ/k) [1/(dt/dx)] ∫_{x}^{L} t(x) dx } dx   (E12)
where λ is the Lagrange multiplier. By comparing Eq. (E12) with Eq. (12.1), we find that

The value of the unknown constant λ can be found by using Eq. (E7) as
m = 2ρ ∫_{0}^{L} y(x) dx = 2ρ [c1L + c2L²/2 + c3L³/3]
that is,
m/(2ρL) = c1 + c2L/2 + c3L²/3 = hL/[2(kρλ)^{1/2}] − (1/3)(hL²/k)   (E20)
Equation (E20) gives
λ^{1/2} = [hL/(kρ)^{1/2}] · 1/[(m/ρL) + (2/3)(hL²/k)]   (E21)
Hence the desired solution can be obtained by substituting Eq. (E21) into Eq. (E16).

12.2.4 Generalization

The concept of including constraints can be generalized as follows. Let the problem be to find the functions u1(x, y, z), . . .

which minimizes the functional, called the performance index,
J = ∫_{0}^{T} f0(x, u, t) dt   (12.21)
where x = {x1, x2, . . . , xn}ᵀ is called the state vector, t the time parameter, T the terminal time, and f0 a function of x, u, and t. The state variables xi and the control variables ui are related as
dxi/dt = fi(x1, x2, . . . , xn; u1, u2, . . . , um; t), i = 1, 2, . . . , n
or
ẋ = f(x, u, t)   (12.22)
In many problems, the system is linear.

is a function of the two variables x and u, so we can write the Euler–Lagrange equations [with u1 = x, u1′ = ∂x/∂t = ẋ, u2 = u, and u2′ = ∂u/∂t = u̇ in Eq. (12.10)] as
∂F/∂x − (d/dt)(∂F/∂ẋ) = 0   (12.28)
∂F/∂u − (d/dt)(∂F/∂u̇) = 0   (12.29)
In view of relation (12.27), Eqs. (12.28) and (12.29) can be expressed as
∂f0/∂x + λ (∂f/∂x) + λ̇ = 0   (12.30)
∂f0/∂u + λ (∂f/∂u) = 0   (12.31)
A new functional H, called the Hamiltonian, is defined as
H = f0 + λf

Differentiation of Eq. (E5) leads to
2u̇ + λ̇ = 0   (E6)
Equations (E4) and (E6) yield
u̇ = x   (E7)
Since ẋ = u [Eq. (E2)], we obtain
ẍ = u̇ = x, that is, ẍ − x = 0   (E8)
The solution of Eq. (E8) is given by
x(t) = c1 sinh t + c2 cosh t   (E9)
where c1 and c2 are constants. Using the initial condition x(0) = 1, we obtain c2 = 1. Since x is not fixed at the terminal point t = T = 1, we use the condition λ = 0 at t = 1 in Eq. (E5) and obtain u(t = 1) = 0. But
u = ẋ = c1 cosh t + sinh t

Now we introduce a Lagrange multiplier pi, also known as the adjoint variable, for the ith constraint equation in (12.36) and form an augmented functional J* as
J* = ∫_{0}^{T} [f0 + Σ_{i=1}^{n} pi(fi − ẋi)] dt   (12.37)
The Hamiltonian functional H is defined as
H = f0 + Σ_{i=1}^{n} pi fi   (12.38)
so that
J* = ∫_{0}^{T} [H − Σ_{i=1}^{n} pi ẋi] dt   (12.39)
Since the integrand
F = H − Σ_{i=1}^{n} pi ẋi   (12.40)
depends on x, u, and t, there are n + m dependent variables (x and u), and hence the Euler–Lagrange equations can be written for each of them.

    12.4Optimality Criteria Methods68312.4OPTIMALITY CRITERIA METHODSThe optimality criteria methods are based on the derivation of an appropriate criteria forspecialized design conditions and developing an iterative procedure to find the optimumdesign. The optimality criteria methods were originally developed by Prager and hisassociates for distributed (continuous) systems [12.6] and extended by Venkayya, Khot,and Berke for discrete systems [12.7–12.10]. The methods were first presented read more..

  • Page - 701

determined by other considerations). Assuming that the first n variables denote the active variables, we can rewrite Eqs. (12.46) and (12.47) as
f = f̃ + Σ_{i=1}^{n} ci xi   (12.52)
Σ_{i=1}^{n} ai/xi = ymax − ỹ = y*   (12.53)
where f̃ and ỹ denote the contributions of the passive variables to f and y, respectively. Equation (12.51) now gives
xk = √λ √(ak/ck), k = 1, 2, . . . , n   (12.54)
Substituting Eq. (12.54) into Eq. (12.53) and solving for λ, we obtain
√λ = (1/y*) Σ_{k=1}^{n} √(ak ck)   (12.55)
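Equations (12.54) and (12.55) give the optimum in closed form for a single displacement constraint. A sketch with made-up coefficients (not the book's truss data) confirms that the resulting design satisfies the constraint exactly:

```python
import math

# Minimize f = sum c_i x_i subject to sum a_i / x_i = y*:
# x_k = sqrt(lambda) * sqrt(a_k/c_k), sqrt(lambda) = (1/y*) * sum sqrt(a_k c_k)
c = [2.0, 1.0, 4.0]    # weight coefficients (illustrative)
a = [8.0, 2.0, 18.0]   # displacement coefficients (illustrative)
y_star = 1.5           # allowable displacement

sqrt_lam = sum(math.sqrt(ak * ck) for ak, ck in zip(a, c)) / y_star
x = [sqrt_lam * math.sqrt(ak / ck) for ak, ck in zip(a, c)]

displacement = sum(ak / xk for ak, xk in zip(a, x))
print(round(displacement, 6))  # 1.5 -- the constraint is active at the optimum
```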


where J denotes the number of displacement (equality) constraints, yj* the maximum permissible value of the displacement yj, and aji a parameter that depends on the force induced in member i by the applied loads, the length of member i, and the Young's modulus of member i. The Lagrangian function corresponding to Eqs. (12.58) and (12.59) can be expressed as
L(X, λ1, . . . , λJ) = f0 + Σ_{i=1}^{n} ci xi + Σ_{j=1}^{J} λj (Σ_{i=1}^{n} aji/xi − yj*)   (12.60)

The necessary condition of optimality can be expressed as
∂f/∂zi + λ (∂g/∂zi) = 0, i = 1, 2, . . . , n   (12.66)
Assuming f to be linear in terms of the areas of cross section (the original variables, xi = Ai) and g to be linear in terms of the zi, we have
∂f/∂zi = (∂f/∂xi)(∂xi/∂zi) = −(1/zi²)(∂f/∂xi)   (12.67)
and Eqs. (12.66) and (12.67) yield
xi = [λ (∂g/∂zi)/(∂f/∂xi)]^{1/2}, i = 1, 2, . . . , n   (12.68)
To find λ, we first find the linear approximation of g at a given point.

Figure 12.6 Three-bar truss.

where ρ is the weight density, E Young's modulus, Umax the maximum permissible displacement, x1 the area of cross section of bars 1 and 3, and x2 the area of cross section of bar 2. The vertical displacement of node S is given by
U1 = P1 l / [E(x1 + √2 x2)]   (E3)
Find the solution of the problem using the optimality criteria method.

SOLUTION The partial derivatives of f and g required by Eqs. (12.68) and (12.71) can be computed as


REFERENCES AND BIBLIOGRAPHY

12.1 R. S. Schechter, The Variational Method in Engineering, McGraw-Hill, New York, 1967.
12.2 M. M. Denn, Optimization by Variational Methods, McGraw-Hill, New York, 1969.
12.3 M. J. Forray, Variational Calculus in Science and Engineering, McGraw-Hill, New York, 1968.
12.4 A. E. Bryson and Y. C. Ho, Applied Optimal Control, Wiley, New York, 1975.
12.5 A. Shamaly, G. S. Christensen, and M. E. El-Hawary, Optimal control of a large turboalternator.

(g) Functional
(h) Hamiltonian

12.3 Match the following terms and descriptions:
(a) Adjoint variables — Linear elastic structures
(b) Optimality criteria methods — Lagrange multipliers
(c) Calculus of variations — Necessary conditions of optimality
(d) Optimal control theory — Optimization of functionals
(e) Governing equations — Hamiltonian used

12.4 What are the characteristics of a variational operator?
12.5 What are Euler–Lagrange equations?

Figure 12.7 Circular annular plate under load.

where D is the flexural rigidity of the plate, w the transverse deflection of the plate, ν the Poisson's ratio, M the radial bending moment per unit of circumferential length, and Q the radial shear force per unit of circumferential length. Find the differential equation and the boundary conditions to be satisfied by minimizing π0.

12.6 Consider the two-bar truss shown in Fig. 12.8.

Figure 12.8 Two-bar truss subjected to horizontal load.

12.8 The problem of the minimum-weight design of the four-bar truss shown in Fig. 1.32 (Problem 1.31), subject to a constraint on the vertical displacement of joint A and limitations on the design variables, can be stated as follows:
Find X = {x1 x2}ᵀ which minimizes
f(X) = 0.1x1 + 0.05773x2
subject to
g(X) = 0.6/x1 + 0.3464/x2 − 0.1 ≤ 0
xi ≥ 4, i = 1, 2

13 Modern Methods of Optimization

13.1 INTRODUCTION

In recent years, some optimization methods that are conceptually different from the traditional mathematical programming techniques have been developed. These methods are labeled modern or nontraditional methods of optimization. Most of them are based on certain characteristics and behaviors of biological, molecular, insect-swarm, and neurobiological systems. The following methods are described in this chapter:
1. Genetic algorithms

13.2 GENETIC ALGORITHMS

13.2.1 Introduction

Many practical optimum design problems are characterized by mixed continuous–discrete variables and by discontinuous and nonconvex design spaces. If standard nonlinear programming techniques are used for this type of problem, they will be inefficient and computationally expensive and will, in most cases, find a relative optimum that is closest to the starting point. Genetic algorithms (GAs) are well suited for solving such problems.

String of length 20:
10010 | 00011 | 00001 | 00100
  x1      x2      x3      x4

In general, if a binary number is given by bq b(q−1) · · · b2 b1 b0, where bk = 0 or 1 for k = 0, 1, 2, . . . , q, then its equivalent decimal number y (an integer) is given by
y = Σ_{k=0}^{q} 2^k bk   (13.1)
This indicates that a continuous design variable x can be represented only by a set of discrete values if binary representation is used. If a variable x (whose bounds are given by x(l) and x(u)) is represented by a string of q binary digits,
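The decoding implied by Eq. (13.1) — map the q-bit string to the integer y and then place it linearly between the bounds x(l) and x(u) — can be sketched as follows (the strings and bounds here are illustrative, not from the text):

```python
# Decode a q-bit binary substring into a design variable on [x_l, x_u]:
# y = sum_k 2^k b_k, then x = x_l + (x_u - x_l) * y / (2**q - 1)
def decode(bits, x_l, x_u):
    q = len(bits)
    y = int(bits, 2)  # most significant bit written first
    return x_l + (x_u - x_l) * y / (2 ** q - 1)

print(decode("00000", 0.0, 31.0))  # 0.0  (lower bound)
print(decode("11111", 0.0, 31.0))  # 31.0 (upper bound)
print(decode("01010", 0.0, 31.0))  # 10.0
```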


    696Modern Methods of Optimization13.2.3Representation of Objective Function and ConstraintsBecause genetic algorithms are based on the survival-of-the-fittest principle of nature,they try to maximize a function called the fitness function. Thus GAs are naturallysuitable for solving unconstrained maximization problems. The fitness function, F (X),can be taken to be same as the objective function f (X)of an unconstrained maximiza-tion problem so that F (X) =f (X). A minimization problem can be read more..


    13.2Genetic Algorithms697Equations (13.7) and (13.8) show that the penalty will be proportional to the square ofthe amount of violation of the inequality and equality constraints at the design vectorX, while there will be no penalty added to f (X)if all the constraints are satisfied atthe design vector X.13.2.4Genetic OperatorsThe solution of an optimization problem by GAs starts with a population of randomstrings denoting several (population of) design vectors. The population size in GAs (n)is read more..
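A minimal sketch of a penalty-based fitness for inequality constraints g_j(X) ≤ 0, in the spirit of the squared-violation penalty above; the penalty multiplier `r` and the 1/(1 + φ) conversion from penalized objective to fitness are our own illustrative choices:

```python
def fitness(f, g_list, x, r=100.0):
    """Fitness of design x for GA minimization of f subject to g_j(x) <= 0.
    A penalty proportional to the square of each constraint violation is
    added to f(x); no penalty is added when all constraints are satisfied.
    The penalized objective phi is then converted into a quantity to be
    maximized."""
    phi = f(x) + r * sum(max(0.0, g(x)) ** 2 for g in g_list)
    return 1.0 / (1.0 + phi)
```

A feasible design with a small objective value thus receives a fitness close to 1, while infeasible designs are driven toward 0.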


Figure 13.1 Roulette-wheel selection scheme (a wheel with a pointer, whose sectors are proportional to the fitness values of strings 1 to 6: 12%, 4%, 16%, 8%, 36%, and 24%).

In Fig. 13.1, the population size is assumed to be 6, with fitness values of the strings 1, 2, 3, 4, 5, and 6 given by 12, 4, 16, 8, 36, and 24, respectively. Since the fifth string (individual) has the highest fitness value, it is expected to be selected most of the time (36% of the time, probabilistically) when the roulette wheel is spun n times (n = 6 in Fig. 13.1).


    13.2Genetic Algorithms699size nduring numerical computations, nrandom numbers, each in the range of zeroto one, are generated (or chosen). By treating each random number as the cumulativeprobability of the string to be copied to the mating pool, nstrings corresponding to the nrandom numbers are selected as members of the mating pool. By this process, the stringwith a higher (lower) fitness value will be selected more (less) frequently to the matingpool because it has a larger (smaller) range of read more..
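The mating-pool construction described above amounts to sampling from the cumulative fitness distribution. A sketch (the injectable `rng` argument is ours, used to make the example deterministic):

```python
import random

def roulette_select(fitnesses, rng=random.random):
    """Pick one string index with probability F_i / sum(F_j), by treating
    a random number in (0, 1) as a cumulative probability."""
    total = sum(fitnesses)
    u, cum = rng(), 0.0
    for i, F in enumerate(fitnesses):
        cum += F / total
        if u <= cum:
            return i
    return len(fitnesses) - 1  # guard against floating-point round-off
```

With the fitness values 12, 4, 16, 8, 36, and 24 of Fig. 13.1, a random number of 0.5 falls in the cumulative range (0.40, 0.76) of string 5 (index 4), the fittest individual.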


    700Modern Methods of Optimizationsingle-point crossover operator, a crossover site is selected at random along the stringlength, and the binary digits (alleles) lying on the right side of the crossover site areswapped (exchanged) between the two strings. The two strings selected for participationin the crossover operators are known as parent strings and the strings generated by thecrossover operator are known as child strings.For example, if two design vectors (parents), each with a string read more..
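Single-point crossover is a pure slicing operation on the parent strings; a sketch:

```python
def single_point_crossover(p1, p2, site):
    """Swap the alleles lying to the right of the crossover site between
    two parent strings, producing two child strings."""
    return p1[:site] + p2[site:], p2[:site] + p1[site:]
```

For example, crossing `[1, 1, 1, 1]` and `[0, 0, 0, 0]` at site 2 yields the children `[1, 1, 0, 0]` and `[0, 0, 1, 1]`.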


    13.2Genetic Algorithms701The purpose of mutation is (1) to generate a string (design point) in the neigh-borhood of the current string, thereby accomplishing a local search around the currentsolution, (2) to safeguard against a premature loss of important genetic material at aparticular position, and (3) to maintain diversity in the population.As an example, consider the following population of size n =5 with a stringlength 10:1 0 0 0 1 0 0 0 1 11 0 1 1 1 1 0 1 0 01 1 0 0 0 0 1 1 0 11 0 1 1 0 1 read more..
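Bitwise mutation flips each allele independently with a small probability p_m; a sketch (the injectable `rng` argument is ours):

```python
import random

def mutate(bits, p_m, rng=random.random):
    """Flip each binary digit of the string with probability p_m."""
    return [1 - b if rng() < p_m else b for b in bits]
```

In practice p_m is kept small (on the order of one flip per string) so that mutation perturbs the population without destroying good building blocks.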


    702Modern Methods of Optimization5. Carry out the mutation operation using the mutation probability pm to find thenew generation of mstrings.6. Evaluate the fitness values Fi, i =1, 2, . . . , m, of the mstrings of the newpopulation. Find the standard deviation of the mfitness values.7. Test for the convergence of the algorithm or process. If sf ≤(sf )max, the con-vergence criterion is satisfied and hence the process may be stopped. Otherwise,go to step 8.8. Test for the generation number. read more..


    13.3Simulated Annealing70313.3.2ProcedureThe simulated annealing method simulates the process of slow cooling of moltenmetal to achieve the minimum function value in a minimization problem. The coolingphenomenon of the molten metal is simulated by introducing a temperature-like param-eter and controlling it using the concept of Boltzmann’s probability distribution. TheBoltzmann’s probability distribution implies that the energy (E) of a system in thermalequilibrium at temperature Tis read more..


    704Modern Methods of Optimizationis not same in all situations. As can be seen from Eq. (13.18), this probability dependson the values ofEand T. If the temperature Tis large, the probability will be highfor design points Xi+1 with larger function values (with larger values ofE =f).Thus at high temperatures, even worse design points Xi+1 are likely to be acceptedbecause of larger probabilities. However, if the temperature Tis small, the probabilityof accepting worse design points Xi+1 (with read more..
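The acceptance rule implied by Eq. (13.18) — the Metropolis criterion — can be sketched as:

```python
import math
import random

def metropolis_accept(delta_f, T, rng=random.random):
    """Always accept an improving move (delta_f <= 0); accept a worsening
    move with probability exp(-delta_f / T), which is close to 1 at high
    temperature and close to 0 at low temperature."""
    if delta_f <= 0.0:
        return True
    return rng() < math.exp(-delta_f / T)
```

This is exactly the behavior described in the text: at high T even worse points are likely to be accepted, while at low T almost only improvements survive.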


    13.3Simulated Annealing705computational effort. A smaller value of n, on the other hand, might result either in apremature convergence or convergence to a local minimum (due to inadequate explo-ration of the design space for the global minimum). Unfortunately, no unique set ofvalues are available for T, n, and cthat will work well for every problem. However,certain guidelines can be given for selecting these values. The initial temperature Tcan be chosen as the average value of the objective read more..


Figure 13.2 Flowchart of the simulated annealing method: start with an initial vector X1, an initial temperature, and the other parameters (T, n, c); find f1 = f(X1) and set the iteration number i = 1 and the cycle number p = 1; generate a new design point Xi+1 in the vicinity of Xi, compute fi+1 = f(Xi+1) and Δf = fi+1 − fi, and accept or reject Xi+1 using the Metropolis criterion; update the iteration number as i = i + 1; once the number of iterations reaches i ≥ n, reduce the temperature, update the number of cycles as p = p + 1, and reset i = 1; stop when the convergence criteria are satisfied.
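The loop structure of the SA procedure can be sketched as follows for a one-variable problem; the uniform neighbor move of size `step`, the fixed number of temperature reductions `n_temp`, and all default parameter values are our own illustrative choices:

```python
import math
import random

def simulated_annealing(f, x0, T0, n, c, step=1.0, n_temp=100, seed=0):
    """Minimize f(x): n Metropolis iterations per cycle, then T <- c*T."""
    rng = random.Random(seed)
    x, fx, T = x0, f(x0), T0
    best_x, best_f = x, fx
    for _ in range(n_temp):            # temperature-reduction cycles
        for _ in range(n):             # iterations at this temperature
            x_new = x + step * (2.0 * rng.random() - 1.0)  # neighbor of x
            f_new = f(x_new)
            df = f_new - fx
            if df <= 0.0 or rng.random() < math.exp(-df / T):
                x, fx = x_new, f_new   # accept (Metropolis criterion)
                if fx < best_f:
                    best_x, best_f = x, fx
        T *= c                         # reduce the temperature
    return best_x, best_f
```

Because acceptance is probabilistic, the best point visited so far is tracked separately and returned at the end.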


    13.3Simulated Annealing707Step 1:Choose the parameters of the SA method. The initial temperature is takenas the average value of fevaluated at four randomly selected points in thedesign space. By selecting the random points as X(1) =20 , X(2) =510 ,X(3) =85 , X(4) =1010 , we find the corresponding values of the objectivefunction as f (1) =476, f (2) =340, f (3) =381, f (4) =340, respectively. Not-ing that the average value of the objective functions f (1), f (2), f (3), and f (4)is 384.25, we read more..


    708Modern Methods of OptimizationStep 3:Generate a new design point in the vicinity of the current design point X2 =1.725.84 . For this, we choose the range of each design variable as ±6 aboutits current value so that the ranges are given by (−6 +1.72, 6 +1.72) =(−4.28, 7.72) for x1 and (−6 +5.84, 6 +5.84) =(−0.16, 11.84) for x2. Byselecting two uniformly distributed random numbers in the range (0, 1) asu1 =0.92 and u2 =0.73, the corresponding uniformly distributed randomnumbers in the read more..


    13.4Particle Swarm Optimization709As an example, consider the behavior of birds in a flock. Although each bird hasa limited intelligence by itself, it follows the following simple rules:1. It tries not to come too close to other birds.2. It steers toward the average direction of other birds.3. It tries to fit the “average position” between other birds with no wide gaps inthe flock.Thus the behavior of the flock or swarm is based on a combination of three simplefactors:1. Cohesion— read more..


    710Modern Methods of Optimization(similar to chromosomes in genetic algorithms). Evaluate the objective functionvalues corresponding to the particles as f[X1(0)], f[X2(0)], . . . , f[XN (0)].3. Find the velocities of particles. All particles will be moving to the optimal pointwith a velocity. Initially, all particle velocities are assumed to be zero. Set theiteration number as i =1.4. In the ith iteration, find the following two important parameters used by atypical particle j:(a) The read more..


Vj(i) = θ Vj(i − 1) + c1 r1 [Pbest,j − Xj(i − 1)] + c2 r2 [Gbest − Xj(i − 1)]; j = 1, 2, . . . , N    (13.23)

The inertia weight θ was originally introduced by Shi and Eberhart in 1999 [13.36] to dampen the velocities over time (or iterations), enabling the swarm to converge more accurately and efficiently compared to the original PSO algorithm with Eq. (13.21). Equation (13.23) denotes an adapting velocity formulation, which improves the fine-tuning ability of the search.
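A sketch of the algorithm with the inertia-weight velocity update of Eq. (13.23), for a single design variable; the population size, iteration count, and the parameter values θ = 0.7, c1 = c2 = 1.5 are our own illustrative choices:

```python
import random

def pso(f, bounds, N=20, iters=100, theta=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f on the interval `bounds` by particle swarm optimization."""
    rng = random.Random(seed)
    lo, hi = bounds
    x = [lo + (hi - lo) * rng.random() for _ in range(N)]  # initial swarm
    v = [0.0] * N                    # initial velocities are zero
    pbest = list(x)                  # historical best of each particle
    pbest_f = [f(xi) for xi in x]
    g = pbest[pbest_f.index(min(pbest_f))]   # best of the whole swarm
    for _ in range(iters):
        for j in range(N):
            r1, r2 = rng.random(), rng.random()
            # Eq. (13.23): inertia + cognitive + social terms
            v[j] = (theta * v[j] + c1 * r1 * (pbest[j] - x[j])
                    + c2 * r2 * (g - x[j]))
            x[j] += v[j]
            fj = f(x[j])
            if fj < pbest_f[j]:
                pbest[j], pbest_f[j] = x[j], fj
                if fj < f(g):
                    g = x[j]
    return g, f(g)
```

The cognitive term pulls each particle toward its own best position Pbest,j and the social term toward the swarm best Gbest, while the inertia term damps the motion.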


C(i) = (c i)^α    (13.27)

H(X) = Σ_{j=1}^{m} φ[q_j(X)] [q_j(X)]^{γ[q_j(X)]}    (13.28)

φ[q_j(X)] = a(1 − 1/e^{q_j(X)}) + b    (13.29)

q_j(X) = max(0, g_j(X)); j = 1, 2, . . . , m    (13.30)

where c, α, a, and b are constants. Note that the function q_j(X) denotes the magnitude of violation of the jth constraint, φ[q_j(X)] indicates a continuous assignment function, assumed to be of exponential form, as shown in Eq. (13.29), and γ[q_j(X)] represents the power of the violated function.


    13.4Particle Swarm Optimization713so thatv1(1) =0 +0.3294(−1.5 +1.5) +0.9542(1.25 +1.5) =2.6241v2(1) =0 +0.3294(0.0 −0.0) +0.9542(1.25 −0.0) =1.1927v3(1) =0 +0.3294(0.5 −0.5) +0.9542(1.25 −0.5) =0.7156v4(1) =0 +0.3294(1.25 −1.25) +0.9542(1.25 −1.25) =0.0(c) Find the new values of xj (1), j =1 , 2 , 3 , 4, as xj (i) =xj (i −1) +vj (i):x1(1) = −1.5 +2.6241 =1.1241x2(1) =0.0 +1.1927 =1.1927x3(1) =0.5 +0.7156 =1.2156x4(1) =1.25 +0.0 =1.255. Evaluate the objective function values at read more..


    714Modern Methods of Optimization6. Find the objective function values at the current xj (i):f[x1(2)] =4.4480, f[x2(2)] =10.1721, f[x3(2)] =11.2138,f[x4(2)] =11.9644Check the convergence of the process. Since the values of xj (i)did not con-verge, we increment the iteration number as i =3 and go to step 4. Repeatstep 4 until the convergence of the process is achieved.13.5ANT COLONY OPTIMIZATION13.5.1Basic ConceptAnt colony optimization (ACO) is based on the cooperative behavior of real read more..


    13.5Ant Colony Optimization715layers is equal to the number of design variables and the number of nodes in a par-ticular layer is equal to the number of discrete values permitted for the correspondingdesign variable. Thus each node is associated with a permissible discrete value of adesign variable. Figure 13.3 denotes a problem with six design variables with eightpermissible discrete values for each design variable.The ACO process can be explained as follows. Let the colony consist of Nants.The read more..


    716Modern Methods of Optimizationis updated as follows:τij ←τij +τ(k)(13.33)Because of the increase in the pheromone, the probability of this arc being selected bythe forthcoming ants will increase.13.5.4Pheromone Trail EvaporationWhen an ant kmoves to the next node, the pheromone evaporates from all the arcs ijaccording to the relationτij ←(1 −p)τij; ∀(i, j ) ∈A(13.34)where p ∈(0, 1] is a parameter and Adenotes the segments or arcs traveled by ant kin its path from home to read more..


    13.5Ant Colony Optimization71713.5.5AlgorithmThe step-by-step procedure of ACO algorithm for solving a minimization problem canbe summarized as follows:Step 1:Assume a suitable number of ants in the colony (N ). Assume a set ofpermissible discrete values for each of the ndesign variables. Denote thepermissible discrete values of the design variable xi as xil, xi2, . . . , xip(i =1, 2, . . . , n). Assume equal amounts of pheromone τ(1)ijinitially along allthe arcs or rays (discrete values of read more..
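The steps above rest on two operations: roulette-wheel choice of a discrete value with probability τ_ij / Σ_p τ_ip, and pheromone update (evaporation on every arc followed by a deposit on the best arc). A sketch, with the evaporation fraction written as `rho` (our own symbol for the parameter p of Eq. 13.34):

```python
import random

def select_path(tau_row, rng=random.random):
    """Choose index j with probability tau_row[j] / sum(tau_row),
    treating a random number in (0, 1) as a cumulative probability."""
    total = sum(tau_row)
    u, cum = rng(), 0.0
    for j, tau in enumerate(tau_row):
        cum += tau / total
        if u <= cum:
            return j
    return len(tau_row) - 1

def update_pheromone(tau_row, best_j, dtau, rho=0.5):
    """Evaporate pheromone on every arc, then deposit dtau on the arc
    taken by the best ant, as in Eqs. (13.33) and (13.34)."""
    tau_row = [(1.0 - rho) * tau for tau in tau_row]
    tau_row[best_j] += dtau
    return tau_row
```

Repeated deposits on the best arc raise its selection probability in later iterations, which is exactly the convergence mechanism of step 4.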


    718Modern Methods of OptimizationStep 4:Test for the convergence of the process. The process is assumed to have con-verged if all Nants take the same best path. If convergence is not achieved,assume that all the ants return home and start again in search of food. Set thenew iteration number as l =l +1, and update the pheromones on differentarcs (or discrete values of design variables) asτ(l)ij=τ(old)ij+kτ(k)ij(13.42)where τ(old)ijdenotes the pheromone amount of the previous iteration left read more..


    13.5Ant Colony Optimization719x11 = 0·0x12 = 0·5x13 = 1·0x14 = 1·5x15 = 2·0x16 = 2·5x17 = 3·0Food (Destination)HomeFigure 13.4Possible paths for an ant (possible discrete values of x ≡x1).To select the specific path (or discrete variable) chosen by an ant using arandom number generated in the range (0, 1), cumulative probability ranges areassociated with different paths of Fig. 13.4 as (using roulette-wheel selectionprocess in step 3):x11 =0,17=(0.0, 0.1428), x12 = 17 ,27=(0.1428, read more..


    720Modern Methods of OptimizationStep 4:Assuming that the ants return home and start again in search of food, we setthe iteration number as l =2. We need to update the pheromone array asτ(2)1j=τ(old)1j+kτ(k)(E1)wherekτ (k) is the pheromone deposited by the best ant kand the summationextends over all the best ants k(if multiple ants take the best path). In thepresent case, there is only one best ant, k =1, which used the path x13. Thusthe value ofkτ (k) can be determined in this case read more..


    13.5Ant Colony Optimization721value assumed (or the path selected in Fig. 13.4) by different ants can be seento beant 1 : x13 =1.0; ant 2 : x16 =2.5; ant 3 : x11 =0.0; ant 4 : x13 =1.0This shows that two ants (probabilistically) selected the path x13 due to higherpheromone left on the best path (x13) found in the previous iteration. Theobjective function values corresponding to the paths chosen by different antsare given byant 1 : f1 =f (x13) =f (1.0) = −12.0; ant 2 : f2 =f (x16) =f (2.5) = read more..


    722Modern Methods of OptimizationStep 2:For any ant k, the probability of selecting path x1j in Fig. 13.4 is given byp1j =τ1j7p=1τ1p;j =1, 2, . . . ,7where τ1j =0.25; j =1, 2, 4, 5, 6, 7 and τ13 =8.9231. This givesp1j =0.2510.4231=0.0240, j =1, 2, 4, 5, 6, 7; p13 = 8.923110.4231=0.8561To determine the discrete value or path selected by an ant using a random num-ber selected in the range (0, 1), cumulative probabilities are associated withdifferent paths as (roulette-wheel selection read more..


    13.6Optimization of Fuzzy Systems723The set [0, 1] is called a valuation set.A set Ais called a fuzzy setif the valuation setis allowed to be the whole interval [0, 1]. The fuzzy set Ais characterized by the setof all pairs of points denoted asA = {x, µA(x)},x ∈X(13.45)where µA(x) is called the membership functionof xin A. The closer the value ofµA(x) is to 1, the more xbelongs to A. For example, let X = {62646668707274767880} be possible temperature settings of the thermostat (◦F) in read more..


    724Modern Methods of Optimization01mm1m2y2ynmny10(a)(b)1my2yny10(c)1myFigure 13.5Crisp and fuzzy sets: (a)crisp set; (b)discrete fuzzy set; (c)continuous fuzzyset. [13.22], with permission of ASME.BABAAA–(a)(b)(c)Figure 13.6Basic set operations in crisp set theory: (a) Aor Bor both: A ∪B; (b) AandB: A ∩B; (c)not A: A. [13.22], with permission of ASME.The result of this operation is shown in Fig. 13.7a. The intersection of the fuzzy setsAand Bis defined asµA∩B (y) =µA(y) ∧µB (y) read more..
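A membership function together with the union and intersection operators can be sketched directly. The triangular shape and the half-width of 8 °F for a "comfortable temperature" set are our own illustrative assumptions; the ∨/∧ operators are the max/min of the equations above:

```python
def mu_comfortable(t):
    """Illustrative triangular membership function centered at 70 F."""
    return max(0.0, 1.0 - abs(t - 70.0) / 8.0)

def mu_union(mu_a, mu_b):
    """Membership of A union B:  mu(y) = max(mu_A(y), mu_B(y))."""
    return lambda y: max(mu_a(y), mu_b(y))

def mu_intersection(mu_a, mu_b):
    """Membership of A intersect B:  mu(y) = min(mu_A(y), mu_B(y))."""
    return lambda y: min(mu_a(y), mu_b(y))
```

A crisp set is recovered as the special case in which the membership function takes only the values 0 and 1.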


    13.6Optimization of Fuzzy Systems725mAUB(y)mB(y)m( y)UB( y)mAmA(y)mA(y)(a)Ay(b)y(c)y111mB(y)mA(y)A–mA(y)–Figure 13.7Basic set operations in fuzzy set theory: (a)union; (b)intersection; (c)comple-ment. [13.22], with permission of ASME.13.6.2Optimization of Fuzzy SystemsThe conventional optimization methods deal with selection of the design variables thatoptimizes an objective function subject to the satisfaction of the stated constraints.For a fuzzy system, this notion of optimization has to read more..


    726Modern Methods of OptimizationConstraint0x (in.)1mObjective functionDesign (decision)Figure 13.8Concept of fuzzy decision. [13.22], with permission of ASME.subject togj (X) ∈Gj ,j =1, 2, . . . , m(13.55)where Gj denotes the fuzzy interval to which the function gj (X) should belong. Thusthe fuzzy feasible region, S, which denotes the intersection of all Gj , is defined by themembership functionµS(X) =minj =1,2,...,m{µGj [gj (X)]}(13.56)Since a design vector X is considered feasible when read more..


    13.7Neural-Network-Based Optimization727subject toλ ≤µf (X)λ ≤µg(l)j (X),j =1, 2, . . . , mλ ≤µg(u)j(X),j =1, 2, . . . , m(13.59)13.6.4Numerical ResultsThe minimization of the error between the generated and specified outputs of thefour-bar mechanism shown in Fig. 13.9 is considered. The design vector is taken asX = {abcβ}T. The mechanism is constrained to be a crank-rocker mecha-nism so thata −b ≤0,a −c ≤0,a ≤1d =[(a +c) −(b +1)][(c −a)2 −(b −1)2] ≤0The maximum read more..


    728Modern Methods of Optimizationprocessing capability. The neural computing strategies have been adopted to solveoptimization problems in recent years [13.23, 13.24]. A neural networkis a massivelyparallel network of interconnected simple processors (neurons) in which each neuronaccepts a set of inputs from other neurons and computes an output that is propagatedto the output nodes. Thus a neural network can be described in terms of the individualneurons, the network connectivity, the weights read more..


    13.7Neural-Network-Based Optimization729Several neural network architectures, such as the Hopfield and Kohonen networks,have been proposed to reflect the basic characteristics of a single neuron. These archi-tectures differ one from the other in terms of the number of neurons in the network,the nature of the threshold functions, the connectivities of the various neurons, andthe learning procedures. A typical architecture, known as the multilayer feedforwardnetwork,is shown in Fig. 13.11. In read more..
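The forward pass of one layer — each neuron forming a weighted sum of its inputs and passing it through a threshold function — can be sketched as below; the sigmoid activation is one common choice of threshold function, used here as an assumption:

```python
import math

def sigmoid(s):
    """Smooth threshold function mapping any real sum into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-s))

def layer_output(inputs, weights, biases):
    """Outputs of one layer of neurons: neuron k computes
    sigmoid(sum_j weights[k][j] * inputs[j] + biases[k])."""
    return [sigmoid(sum(w * x for w, x in zip(w_row, inputs)) + b)
            for w_row, b in zip(weights, biases)]
```

A multilayer feedforward network of the kind shown in Fig. 13.11 simply chains such layers, feeding each layer's outputs to the next; training adjusts the weights to reduce the output error.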


    730Modern Methods of Optimizationq3r2r3r4q2w2q4w3w4ghFigure 13.12Network used to train relationships for a four-bar mechanism. [12.23], reprintedwith permission of Gordon & Breach Science Publishers.advantage (η)of the mechanism. The network is trained by inputting several possiblecombinations of the values of r2, r3, r4, θ2, and ω2 and supplying the correspondingvalues of θ3, θ4, ω3, ω4, γ, and η. The difference between the values predicted by thenetwork and the actual output is read more..


    References and Bibliography73113.4S. S. Rao, T. S. Pan, A. K. Dhingra, V. B. Venkayya, and V. Kumar, Genetic-evolution-based optimization methods for engineering design, pp. 318 –323 in Pro-ceedings of the 3rd Air Force/NASA Symposium on Recent Advances in MultidisciplinaryAnalysis and Optimization,San Francisco, Sept. 24 –26, 1990.13.5S. S. Rao, T. S. Pan, and V. B. Venkayya, Optimal placement of actuators inactively controlled structures using genetic algorithms, AIAA Journal, Vol. 29, No. read more..


    732Modern Methods of Optimization13.23A. K. Dhingra and S. S. Rao, A neural network based approach to mechanical designoptimization, Engineering Optimization, Vol. 20, pp. 187 –203, 1992.13.24L. Berke and P. Hajela, Applications of artificial neural nets in structural mechanics,Structural Optimization, Vol. 4, pp. 90 –98, 1992.13.25S. S. Rao, Multiobjective optimization of fuzzy structural systems, International Journalfor Numerical Methods in Engineering, Vol. 24, pp. 1157 –1171, read more..


    Review Questions733(c) Roulette wheel selection process(d) Pheromone evaporation rate(e) Neural network(f) Fuzzy feasible domain(g) Membership function(h) Multilayer feedforward network13.2Match the following terms:(a) Fuzzy optimizationBased on shortest path(b) Genetic algorithmsAnalysis equations not programmed(c) Neural network methodLinguistic data can be used(d) Simulated annealingBased on the behavior of a flock of birds(e) Particle swarm optimizationBased on principle of survival of the read more..


    734Modern Methods of Optimization(b) How is an inequality constrained optimization problem converted into an uncon-strained problem for use in GAs?(c) What is the difference between a crisp set and a fuzzy set?(d) How is the output of a neuron described commonly?(e) What are the basic operations used in GAs?(f) What is a fitness function in GAs?(g) Can you consider SA as a zeroth-order search method?(h) How do you select the length of the binary string to represent a design variable?(i) read more..


    Problems735(a) x(l) =0, x(u) =5(b) x(l) =0, x(u) =10(c) x(l) =0, x(u) =2013.4A design variable, with lower and upper bounds 2 and 13, respectively, is to be repre-sented with an accuracy of 0.02. Determine the size of the binary string to be used.13.5Find the minimum of f =x5 −5 x3 −20 x +5 in the range (0, 3) using the ant colonyoptimization method. Show detailed calculations for 2 iterations with 4 ants.13.6In the ACO method, the amounts of pheromone along the various arcs from node iare read more..


    736Modern Methods of Optimizationsubject tox21 +x22 −x23 ≤04 −x21 −x22 −x23 ≤0x3 −5 ≤0−xi ≤0; i =1, 2, 3Define the fitness function to be used in GA for this problem.13.14The bounds on the design variables in an optimization problem are given by−10 ≤x1 ≤10,0 ≤x2 ≤8,150 ≤x3 ≤750Find the minimum binary string length of a design vector X = {x1, x2, x3}T to achievean accuracy of 0.01. read more..


    14Practical Aspects of Optimization14.1INTRODUCTIONAlthough the mathematical techniques described in Chapters 3 to 13 can be usedto solve all engineering optimization problems, the use of engineering judgment andapproximations help in reducing the computational effort involved. In this chapter weconsider several types of approximation techniques that can speed up the analysis timewithout introducing too much error [14.1].These techniques are especially useful in finite element analysis-based read more..


    738Practical Aspects of Optimization14.2.2Design Variable Linking TechniqueWhen the number of elements or members in a structure is large, it is possible toreduce the number of design variables by using a technique known as design variablelinking[14.25]. To see this procedure, consider the 12-member truss structure shownin Fig. 14.1. If the area of cross section of each member is varied independently, wewill have 12 design variables. On the other hand, if symmetry of members about thevertical (Y read more..


the relationship between Z and X can be expressed as

Z_{12×1} = [T]_{12×6} X_{6×1}    (14.3)

where the matrix [T] is given by

       | 1 0 0 0 0 0 |
       | 0 1 0 0 0 0 |
       | 0 0 1 0 0 0 |
       | 1 0 0 0 0 0 |
       | 0 1 0 0 0 0 |
[T] =  | 0 0 1 0 0 0 |    (14.4)
       | 0 0 0 1 0 0 |
       | 0 0 0 1 0 0 |
       | 0 0 0 0 1 0 |
       | 0 0 0 0 1 0 |
       | 0 0 0 0 0 1 |
       | 0 0 0 0 0 3 |

The concept can be extended to many other situations.
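Applying the linking relation Z = [T] X of Eq. (14.3) is a single matrix–vector product; a sketch (function name and the small example matrix are ours):

```python
def link_variables(T, x):
    """Recover the element variables Z from the independent design
    variables X through the linking relation Z = [T] X:
    z_i = sum_j T[i][j] * x_j."""
    return [sum(t_ij * x_j for t_ij, x_j in zip(row, x)) for row in T]
```

A row with a single 1 copies an independent variable; a row such as [0, 3] ties an element variable to three times another, as in the last row of Eq. (14.4).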


    740Practical Aspects of Optimization14.3FAST REANALYSIS TECHNIQUES14.3.1Incremental Response ApproachLet the displacement vector of the structure or machine, Y0, corresponding to the loadvector, P0, be given by the solution of the equilibrium equations[K0]Y0= P0(14.7)orY0= [K0]−1P0(14.8)where [K0] is the stiffness matrix corresponding to the design vector, X0. When thedesign vector is changed to X0+ X, let the stiffness matrix of the system change to[K0]+ [ K], the displacement vector to Y0+ read more..


    14.3Fast Reanalysis Techniques741Neglecting the term [ K] Y2, Eq. (14.17) can be used to obtain the second approxi-mation toY,Y2, asY2= −[K0]−1([ K] Y1)(14.18)From Eq. (14.16),Y can be written asY=2i=1Yi(14.19)This process can be continued andY can be expressed, in general, asY=∞i=1Yi(14.20)whereYi is found by solving the equations[K0] Yi= −[ K] Yi−1(14.21)Note that the series given by Eq. (14.20) may not converge if the change in thedesign vector,X, is not small. Hence it is important read more..
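The series of Eqs. (14.20) and (14.21) can be sketched as follows; each term reuses solves against the unmodified [K0] rather than refactorizing the modified stiffness matrix. The truncation length `n_terms` is our own choice, and, as noted in the text, the series converges only when the design change [ΔK] is small:

```python
import numpy as np

def reanalyze(K0, dK, P, n_terms=20):
    """Approximate Y = (K0 + dK)^-1 P by the incremental-response series:
    Y0 = K0^-1 P, then K0 dY_i = -dK dY_{i-1}, and Y = Y0 + sum of dY_i."""
    K0 = np.asarray(K0, dtype=float)
    dK = np.asarray(dK, dtype=float)
    dY = np.linalg.solve(K0, np.asarray(P, dtype=float))  # this is Y0
    Y = dY.copy()
    for _ in range(n_terms):
        dY = np.linalg.solve(K0, -dK @ dY)  # Eq. (14.21)
        Y += dY
    return Y
```

In a production setting one would factorize [K0] once and reuse the factors for every term; the repeated `solve` above keeps the sketch short.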


Figure 14.2 Crane (planar truss) with four nodes, displacement degrees of freedom y1 to y8, a 1000-lb load, and the 50-, 25-, and 100-in. dimensions shown.

Table 14.1
Member e | Area of cross section, Ae | Length, le (in.) | Global node i | Global node j | lij = (Xj − Xi)/le | mij = (Yj − Yi)/le
1        | A1                        | 55.9017          | 1             | 3             | 0.8944             | 0.4472
2        | A2                        | 55.9017          | 3             | 2             | 0.8944             | −0.4472
3        | A3                        | 167.7051         | 3             | 4             | 0.8944             | 0.4472
4        | A4                        | 141.4214         | 2             | 4             | 0.7071             | 0.7071

where Ae is the cross-sectional area, Ee is Young's modulus, le is the length, and (lij, mij) are the direction cosines of member e.


    14.3Fast Reanalysis Techniques743Thus the equilibrium equations of the structure can be expressed as[K]Y= P(E3)whereY= y5y6y7y8andP= p5p6p7p8= 000−1000(a) At the base design, A1= A2= 2 in.2, A3= A4= 1 in.2, and the exact solutionof Eqs. (E3) gives the displacements of nodes 3 and 4 asYbase= y5y6y7y8base= read more..


    744Practical Aspects of OptimizationTable 14.2Exact Y0=0.116462E− 020.232923E− 020.514654E− 01−0.703216E − 01Exact (Y0+ Y)=0.970515E− 030.194103E− 020.428879E− 01−0.586014E − 01Value of iYiYi= Y0+ik=1Yk1−0.232922E − 03−0.465844E − 03−0.102930E − 010.140642E− 010.931695E− read more..


    14.4Derivatives of Static Displacements and Stresses745By premultiplying Eq. (14.27) by [Y ]T we obtain[˜K]r×rcr×1 = ˜Pr×1(14.28)where[˜K] = [Y ]T[KN ][Y ](14.29)˜P = [Y ]T P(14.30)It can be seen that an approximate displacement vector YN can be obtained by solv-ing a smaller (r) system of equations, Eq. (14.28), instead of computing the exactsolution YN by solving a larger (n) system of equations, Eq. (14.25). The foregoingmethod is equivalent to applying the Ritz –Galerkin principle read more..


    746Practical Aspects of Optimizationanalysis for computing the values of the objective function and/or constraint functionsat any design vector. Since the objective and/or constraint functions are to be evaluatedat a large number of trial design vectors during optimization, the computation of thederivatives of the response quantities requires substantial computational effort. It ispossible to derive approximate expressions for the response quantities. The derivativesof static displacements, read more..


    14.5Derivatives of Eigenvalues and Eigenvectors74714.5DERIVATIVES OF EIGENVALUES AND EIGENVECTORSLet the eigenvalue problem be given by [14.4, 14.6, 14.10][K]m×mYm×1 = λ[M]m×mYm×1(14.38)where λis the eigenvalue, Y the eigenvector, [K] the stiffness matrix, and [M] themass matrix corresponding to the design vector X= {x1, x2,· · ·, xn}T. Let the solutionof Eq. (14.38) be given by the eigenvalues λi and the eigenvectors Yi, i= 1, 2, . . . , m:[Pi]Yi= 0(14.39)where [Pi] is a symmetric read more..


    748Practical Aspects of Optimization14.5.2Derivatives of YiThe differentiation of Eqs. (14.39) and (14.45) with respect to xj results in[Pi]∂Yi∂xj = −∂[Pi]∂xjYi(14.47)2YTi [M]∂Yi∂xj = −YTi∂[M]∂xjYi(14.48)where ∂[Pi]/∂xj is given by Eq. (14.44). Equations (14.47) and (14.48) can be shownto be linearly independent and can be written together as[Pi]2YTi [M](m+1)×m∂Yi∂xjm×1= − ∂[Pi]∂xjYTi∂[M]∂xj(m+1)×mYim×1(14.49)By premultiplying Eq. read more..
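For mass-normalized eigenvectors (Yiᵀ[M]Yi = 1, the normalization used in Eq. 14.48), the eigenvalue sensitivity takes the standard quadratic form ∂λi/∂xj = Yiᵀ(∂[K]/∂xj − λi ∂[M]/∂xj)Yi. A sketch of that evaluation (function name is ours):

```python
import numpy as np

def eigenvalue_derivative(dK, dM, lam, Y, M):
    """d(lambda)/dx = Y^T (dK/dx - lambda * dM/dx) Y for an eigenpair
    (lambda, Y) of K Y = lambda M Y; Y is mass-normalized here so that
    Y^T M Y = 1."""
    Y = np.asarray(Y, dtype=float)
    Y = Y / np.sqrt(Y @ np.asarray(M, dtype=float) @ Y)  # enforce Y^T M Y = 1
    return Y @ (np.asarray(dK, dtype=float)
                - lam * np.asarray(dM, dtype=float)) @ Y
```

For a single-degree-of-freedom system with K = [k] and M = [m], where λ = k/m, the formula recovers ∂λ/∂k = 1/m.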


    14.6Derivatives of Transient Response749Table 14.3Derivatives of Eigenvalues [14.4]iEigenvalue, λi10−9∂λi∂x110−9∂λi∂x310−2∂Y5i∂x110−2∂Y6i∂x1124.660.3209– 0.15821.478– 2.2982974.73.86– 0.41440.057– 3.04637782.023.521.670.335– 5.307For illustration, a cylindrical cantilever beam is considered [14.4]. The beam ismodeled with three finite elements with six degrees of freedom as indicated in Fig. 14.4.The diameters of the beam are considered as the design read more..


    750Practical Aspects of OptimizationNote that if the undamped natural modes of vibration are used as basis functions and if[C] is assumed to be a linear combination of [M] and [K] (called proportional damp-ing), Eqs. (14.54) represent a set of runcoupled second-order differential equationswhich can be solved independently [14.10]. Once q(t) is found, the displacement solu-tion Y(t) can be determined from Eq. (14.53).In the formulation of optimization problems with restrictions on the read more..


    14.7Sensitivity of Optimum Solution to Problem Parameters751derivative, ∂yj /∂xi, can be computed using Eq. (14.53) as∂yj∂xi =rk=1j k∂qk(t)∂xi,i= 1, 2, . . . , n(14.61)where, for simplicity, the elements of the matrix [ ] are assumed to be constants(independent of the design vector X). Note that for higher accuracy, the derivativesofj k with respect to xi (sensitivity of eigenvectors, if the mode shapes are used asthe basis vectors) obtained from an equation similar to Eq. (14.51) read more..


    752Practical Aspects of Optimization14.7.1Sensitivity Equations Using Kuhn –Tucker ConditionsThe Kuhn– Tucker conditions satisfied at the constrained optimum design X∗ are givenby [see Eqs. (2.73) and (2.74)]∂f (X)∂xi+j∈J1λj∂gj (X)∂xi= 0,i= 1, 2, . . . , n(14.64)gj (X)= 0,j∈ J1(14.65)λj > 0,j∈ J1(14.66)where J1 is the set of active constraints and Eqs. (14.64) to (14.66) are valid withX= X∗ and λj= λ∗j . When a problem parameter changes by a small amount, weassume read more..


    14.7Sensitivity of Optimum Solution to Problem Parameters753∂X∂p= ∂x1∂p...∂xn∂p,∂λ∂p= ∂λ1∂p...∂λq∂p(14.74)The following can be noted in Eqs. (14.69):1. Equations (14.69) denote (n+ q) simultaneous equations in terms of therequired sensitivity derivatives, ∂xi/∂p (i= 1, 2, . . . , n)and ∂λj /∂p (j=1, 2, . . . , q). Both X∗ and λ∗ are assumed to be known in Eqs. (14.69). If read more..


    754Practical Aspects of OptimizationSimilarly, a currently inactive constraint will become critical due topif the newvalue of gj becomes zero:gj (X)+dgjdpp= gj (X)+ni=1∂gj∂xi∂xi∂pp(14.80)Thus the changepnecessary to make an inactive constraint active can befound asp= −gj (X)ni=1∂gj∂xi∂xi∂p(14.81)14.7.2Sensitivity Equations Using the Concept of Feasible DirectionHere we treat the problem parameter pas a design variable so that the new designvector becomesX= {x1 x2·· · xn read more..


    14.8Multilevel Optimization755If the vector S is normalized by dividing its components by sn+1, Eq. (14.86) givesλ= pand hence Eq. (14.85) gives the desired sensitivity derivatives as∂x1∂p...∂xn∂p=1sn+1S(14.87)Thus the sensitivity of the objective function with respect to pcan be computed asdf (X)dp= ∇f (X)TSsn+1(14.88)Note that unlike the previous method, this method does not require the values of λ∗and the second derivatives of fand gj to read more..


    756Practical Aspects of Optimization14.8.2MethodLet the optimization problem be stated as follows:Find X= {x1 x2·· · xn}T which minimizes f (X)(14.89)subject togj (X)≤ 0,j= 1, 2, . . . , m(14.90)hk(X)= 0,k= 1, 2, . . . , p(14.91)x(l)i≤ xi≤ x(u)i,i= 1, 2, . . . , n(14.92)where x(l)iand x(u)idenote the lower and upper bounds on xi. Most systems permit thepartitioning of the vector X into two subvectors Y and Z:X=YZ(14.93)where the subvector Y denotes the coordination or interaction read more..


    14.8Multilevel Optimization757Y(l) ≤ Y≤ Y(u)Z(l)k≤ Zk≤ Z(u)k,k= 1, 2, . . . , K(14.97)Similarly, the objective function f (X)can be expressed asf (X)=Kk=1f(k)(Y, Zk)(14.98)where f (k)(Y, Zk) denotes the contribution of the kth subsystem to the overall objectivefunction. Using Eqs. (14.95) to (14.98), the two-level approach can be stated as follows.First-level Problem.Tentatively fix the values of Y at Y∗ so that the problem ofEqs. (14.89) to (14.92) [or Eqs. (14.95) to (14.98)] can be read more..


    758Practical Aspects of Optimization3. Solve the second-level optimization problem stated in Eqs. (14.101) and find anew vector Y∗.4. Check for the convergence of f∗ and Y∗ (compared to the value Y∗ usedearlier).5. If the process has not converged, go to step 2 and repeat the process untilconvergence.The following example illustrates the procedure.Example 14.2 Find the minimum-weight design of the two-bar truss shown in Fig. 14.6with constraints on the depth of the truss (y= h), read more..


    14.8Multilevel Optimization759We treat the bars 1 and 2 as subsystems 1 and 2, respectively, with yas the coordinationvariable (Y= {y}) and z1 and z2 as the subsystem variables (Z1= {z1} and Z2= {z2}).By fixing the value of yat y∗, we formulate the first-level problems as follows.Subproblem 1.Find z1 which minimizesf(1)(y∗, z1)= 76,500z1(y∗)2 + 36(E1)subject tog1(y∗, z1)=(1428.5714× 10−6) (y∗)2 + 36y∗z1− 1≤ 0(E2)0≤ z1≤ 0.1(E3)Subproblem 2.Find z2 which read more..


760 Practical Aspects of Optimization

Find y which minimizes

f = 76,500 z1* sqrt(y^2 + 36) + 76,500 z2* sqrt(y^2 + 1)
  = [109.2857 (y^2 + 36) + 655.7143 (y^2 + 1)] / y (E10)

subject to

1 ≤ y ≤ 6 and f must be defined

The graph of f, given by Eq. (E10), is shown in Fig. 14.7 over the range 1 ≤ y ≤ 6, from which the solution can be determined as f* = 3747.7 N, y* = h* = 2.45 m, z1* = A1* = 3.7790 × 10^-3 m^2, and z2* = A2* = 9.2579 × 10^-3 m^2.

14.9 PARALLEL PROCESSING

Large-scale optimization problems can be solved efficiently using parallel processing.
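As a numerical cross-check of Eq. (E10), a brute-force scan of the second-level objective over 1 ≤ y ≤ 6 (a sketch; the constants are copied from the example) reproduces the reported optimum:

```python
import numpy as np

# Second-level objective of Eq. (E10) of Example 14.2,
# with z1* and z2* eliminated via the active stress constraints:
def f(y):
    return (109.2857 * (y**2 + 36.0) + 655.7143 * (y**2 + 1.0)) / y

y = np.linspace(1.0, 6.0, 50001)   # dense grid over the allowed depth range
i = np.argmin(f(y))
y_star, f_star = y[i], f(y[i])
print(y_star, f_star)              # close to y* = 2.45 m and f* = 3747.7 N
```

The scan plays the role of the graphical solution over Fig. 14.7 described in the text.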


14.10 Multiobjective Optimization 761

If a multilevel (decomposition) approach is used, the optimization of the various subsystems (at different levels) can be performed on parallel processors, while the solution of the coordinating optimization problem can be accomplished on the main processor. If the optimization problem involves an extensive analysis, such as a finite-element analysis, the problem can be decomposed into subsystems (substructures) and the analyses of the subsystems can be conducted on separate processors in parallel.


762 Practical Aspects of Optimization

[Figure 14.8: Flow diagram of parallel simulated annealing on a single node. S(i) denotes the set of design variables assigned to node i (node i = processor i). Each node is initialized with data from the host node, randomly perturbs one variable out of S(i), changes the design, and exchanges updated information with the other nodes; once all variables in S(i) have been perturbed, the updated design variables are globally assembled, and the cycles repeat until all cycles are done, giving the final design.]


14.10 Multiobjective Optimization 763

[Figure 14.9: Pareto optimal solutions. The figure plots f1 = (x − 3)^4 and f2 = (x − 6)^2 against x, with points P and Q marked on the curves.]

In general, no solution vector X exists that minimizes all the k objective functions simultaneously. Hence, a new concept, known as the Pareto optimum solution, is used in multiobjective optimization problems. A feasible solution X is called Pareto optimal if there exists no other feasible solution Y such that fi(Y) ≤ fi(X) for i = 1, 2, . . . , k with fj(Y) < fj(X) for at least one index j.
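The dominance test in this definition is easy to mechanize. A small Python sketch (assuming a minimization sense for every objective) filters the Pareto-optimal points from a finite candidate set, using the two objectives of Fig. 14.9:

```python
import numpy as np

def pareto_mask(F):
    """F has shape (n_points, n_objectives). Row i is Pareto optimal unless some
    row j satisfies F[j] <= F[i] in every component and F[j] < F[i] in at least one."""
    n = len(F)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                keep[i] = False   # row i is dominated by row j
                break
    return keep

# Objectives of Fig. 14.9 sampled on a grid of the single design variable x
x = np.linspace(0.0, 8.0, 161)
F = np.column_stack([(x - 3.0) ** 4, (x - 6.0) ** 2])
mask = pareto_mask(F)
print(x[mask].min(), x[mask].max())   # the Pareto set spans 3 <= x <= 6
```

For these two objectives the nondominated designs are exactly those between the individual minima x = 3 and x = 6 (the segment PQ in the figure): to the left of 3 both objectives can be improved by moving right, to the right of 6 by moving left.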


764 Practical Aspects of Optimization

where wi is a scalar weighting factor associated with the ith objective function. This method [Eq. (14.106)] is also known as the weighting function method.

14.10.2 Inverted Utility Function Method

In the inverted utility function method, we invert each utility and try to minimize or reduce the total undesirability. Thus, if Ui(fi) denotes the utility function corresponding to the ith objective function, the total undesirability is obtained by summing the inverted utilities 1/Ui.
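A sketch of the weighted-objectives idea of Eq. (14.106): each choice of normalized weights yields one compromise solution, illustrated here for the two objectives of Fig. 14.9 (the grid search stands in for whatever single-objective minimizer is available):

```python
import numpy as np

# Weighted-objectives method, Eq. (14.106): minimize F = w1*f1 + w2*f2
# for the objectives of Fig. 14.9 (single design variable x).
f1 = lambda x: (x - 3.0) ** 4
f2 = lambda x: (x - 6.0) ** 2

x = np.linspace(0.0, 8.0, 8001)
solutions = {}
for w1 in (0.1, 0.5, 0.9):       # weights normalized so that w1 + w2 = 1
    w2 = 1.0 - w1
    F = w1 * f1(x) + w2 * f2(x)
    solutions[w1] = x[np.argmin(F)]
print(solutions)                 # each compromise lies between x = 3 and x = 6
```

Sweeping the weights traces out different Pareto-optimal designs: the larger w1 is, the closer the compromise sits to the minimizer of f1.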


14.10 Multiobjective Optimization 765

14.10.5 Lexicographic Method

In the lexicographic method, the objectives are ranked in order of importance by the designer. The optimum solution X* is then found by minimizing the objective functions starting with the most important and proceeding according to the order of importance of the objectives. Let the subscripts of the objectives indicate not only the objective function number but also the priorities of the objectives. Thus f1(X) and fk(X) denote the most and least important objective functions, respectively.
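The ordered minimizations can be sketched with SciPy: minimize the most important objective first, then minimize the next one subject to not degrading the first result. The two objective functions below are hypothetical, chosen so that f1 has many minimizers and the second stage has something left to decide:

```python
from scipy.optimize import minimize

# Hypothetical two-objective problem (for illustration):
#   f1(X) = (x1 + x2 - 4)^2   most important; minimized on the line x1 + x2 = 4
#   f2(X) = x1^2 + x2^2       less important
f1 = lambda X: (X[0] + X[1] - 4.0) ** 2
f2 = lambda X: X[0] ** 2 + X[1] ** 2

# Stage 1: minimize the most important objective.
r1 = minimize(f1, x0=[0.0, 0.0], method="SLSQP")
f1_star = r1.fun

# Stage 2: minimize f2 subject to f1(X) <= f1* (a small tolerance keeps the
# constraint well behaved numerically).
cons = [{"type": "ineq", "fun": lambda X: f1_star + 1e-4 - f1(X)}]
r2 = minimize(f2, x0=r1.x, method="SLSQP", constraints=cons)
print(r2.x)   # the point on x1 + x2 = 4 nearest the origin, near (2, 2)
```

With more than two objectives the same pattern repeats, each stage adding one more "do not degrade" constraint.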


766 Practical Aspects of Optimization

subject to

gj(X) ≤ 0,  j = 1, 2, . . . , m

fj(X) + dj+ − dj− = bj,  j = 1, 2, . . . , k

dj+ ≥ 0,  j = 1, 2, . . . , k (14.113)

dj− ≥ 0,  j = 1, 2, . . . , k

dj+ dj− = 0,  j = 1, 2, . . . , k

where bj is the goal set by the designer for the jth objective and dj+ and dj− are, respectively, the underachievement and overachievement of the jth goal. The value of p is based on the utility function chosen by the designer. Often the goal for the jth objective, bj, is found by first minimizing fj(X) alone, subject to the stated constraints.
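For linear objectives and p = 1, the formulation above is a linear program in (X, d+, d−), and the product condition dj+ dj− = 0 holds automatically at the optimum because both deviations carry positive cost. A hypothetical two-goal sketch (the goals, bounds, and coefficients below are invented for illustration):

```python
import numpy as np
from scipy.optimize import linprog

# Goal-programming sketch in the form of Eq. (14.113) with p = 1:
#   goal 1:  x1 + x2  should reach b1 = 10
#   goal 2:  x1 - x2  should reach b2 = 2
# Decision vector: [x1, x2, d1p, d1m, d2p, d2m]
# (dp = underachievement d+, dm = overachievement d-)
c = np.array([0, 0, 1, 1, 1, 1], dtype=float)    # minimize total deviation
A_eq = np.array([[1.0,  1.0, 1.0, -1.0, 0.0,  0.0],   # f1 + d1p - d1m = b1
                 [1.0, -1.0, 0.0,  0.0, 1.0, -1.0]])  # f2 + d2p - d2m = b2
b_eq = np.array([10.0, 2.0])
bounds = [(0, 4), (0, 4)] + [(0, None)] * 4      # 0 <= xi <= 4, deviations >= 0
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x[:2], res.fun)   # the bounds make a total deviation of 4 unavoidable
```

Since x1 + x2 can reach at most 8 here, goal 1 is underachieved by at least 2, and the solver trades the remaining slack between the two goals.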


14.11 Solution of Multiobjective Problems Using MATLAB 767

with the weights satisfying the normalization condition

Σ_{i=1}^{k} wi = 1

14.11 SOLUTION OF MULTIOBJECTIVE PROBLEMS USING MATLAB

The MATLAB function fgoalattain can be used to solve a multiobjective optimization problem by the goal attainment method. The following example illustrates the procedure.

Example 14.3 Find the solution of the following three-objective optimization problem by the goal attainment method using the MATLAB function fgoalattain.


768 Practical Aspects of Optimization

    - 4 - x(2); ...
    x(2) - 4; ...
    x(2) + 4*x(1) - 4; ...
    - 1 - x(1); ...
    x(1) - 2 - x(2)]
    ceq = [];

Step 3: Create an m-file for the main program and save it as fgoalattain_main.m

    clc; clear all;
    x0 = [0.1 0.1]
    weight = [0.2 0.5 0.3]
    goal = [5 -8 20]
    [x,fval,attainfactor,exitflag] = fgoalattain (@fgoalattainobj,x0,goal,weight,[],[],[],[],[],[],@fgoalattaincon)

Step 4: Run the program fgoalattain_main.m to obtain the following result:

    Initial design vector: 0.1, 0.1
    Initial objective


    References and Bibliography76914.3R. L. Fox and H. Miura, An approximate analysis technique for design calculations,AIAA Journal, Vol. 9, No. 1, pp. 177 –179, 1971.14.4R. L. Fox and M. P. Kapoor, Rates of change of eigenvalues and eigenvectors, AIAAJournal, Vol. 6, No. 12, pp. 2426 –2429, 1968.14.5D. V. Murthy and R. T. Haftka, Derivatives of eigenvalues and eigenvectors of generalcomplex matrix, International Journal for Numerical Methods in Engineering, Vol. 26,pp. 293 –311, 1988.14.6R. read more..


    770Practical Aspects of Optimization14.23L. A. Schmit and C. Fleury, Structural synthesis by combining approximation conceptsand dual methods, AIAA Journal, Vol. 18, pp. 1252 –1260, 1980.14.24T. S. Pan, S. S. Rao, and V. B. Venkayya, Rates of change of closed-loop eigenvaluesand eigenvectors of actively controlled structures, International Journal for NumericalMethods in Engineering, Vol. 30, No. 5, pp. 1013 –1028, 1990.14.25G. N. Vanderplaats, Numerical Optimization Techniques for read more..


    Review Questions77114.44W. Stadler, Ed., Multicriteria Optimization in Engineering and in the Sciences, PlenumPress, New York, 1988.14.45The Rand Corporation, A Million Random Digits with 100,000 Normal Deviates, TheFree Press, Glencoe, IL, 1955.14.46C. A. Coello Coello, D. A. Van Veldhuizen, and G. B. Lamont, Evolutionary Algorithmsfor Solving Multi-objective Problems, Kluwer Academic/Plenum, New York, 2002.REVIEW QUESTIONS14.1What is a reduced basis technique?14.2State two methods of reducing read more..


    772Practical Aspects of Optimization(e) Bounded objective function method(f) Lexicographic methodPROBLEMS14.1Consider the minimum-volume design of the four-bar truss shown in Fig. 14.2 subjectto a constraint on the vertical displacement of node 4. Let X1= {1,1, 0.5, 0.5}T andX2= {0.5,0.5,1, 1}T be two design vectors, with xi denoting the area of cross sectionof bar i(i= 1, 2, 3, 4). By expressing the optimum design vectors as X= c1X1+ c2X2,determine the values of c1 and c2 through graphical read more..


    Problems773(a) Exact displacement solution U0 at X0(b) Exact displacement solution (U0+ U) at the perturbed design, (X0+ X)(c) Approximate displacement solution, (U0+ U), at (X0+ X) using Eq. (14.20)with four terms forU14.5Consider the four-bar truss shown in Fig. 14.2 whose stiffness matrix is given byEq. (E2) of Example 14.1. Determine the values of the derivatives of yi with respectto the area A1, ∂yi/∂x1(i= 5, 6, 7, 8) at the reference design X0= {A1 A2 A3 A4}T ={2.0,2.0,1.0,1.0}T read more..


    774Practical Aspects of Optimization14.10The eigenvalue problem for the stepped bar shown in Fig. 14.11 can be expressed as[K]Y= λ[M]Y with the mass matrix, [M], given by[M]=(2ρ1A1l1+ ρ2A2l2) ρ2A2l2ρ2A2l2ρ2A2l2where ρi, Ai , and li denote the mass density, area of cross section, and length of the seg-ment i, and the stiffness matrix, [K], is given by Eq. (2) of Problem 14.9. If A1= 2 in2,A2= 1 in2, E1= E2= 30× 106 psi, 2l1= l2= 50 in., and ρ1g= ρ2g= 0.283 lb/in3,determine(a) read more..


    Problems775Y1Y2k1k2k3m1m2Figure 14.13Two-degree-of-freedom spring –mass system.where Eis Young’s modulus, Ithe area moment of inertia, lthe length, ρthe massdensity, Athe cross-sectional area, λthe eigenvalue, and Y= {Y1,Y2}T = eigenvector. IfE= 30× 106 psi, d= 2 in., t= 0.1 in., l= 20 in., and ρg= 0.283 lb/in3, determine(a) Eigenvalues λi and eigenvectors Yi (i= 1, 2)(b) Values of ∂λi /∂dand ∂λi /∂t (i= 1, 2)14.14In Problem 14.13, determine the derivatives of the eigenvectors read more..


    776Practical Aspects of OptimizationBar 1(area = A1)Bar 2(area = A2)P = 1000 N1 m9 mFigure 14.14Two-bar truss.induced in the bars (σ1 and σ2). Treat y, A1, and A2 as design variables with σi≤ 105 Pa(i= 1, 2), 1 m≤ y≤ 4 m, and 0≤ Ai≤ 0.2 m2 (i= 1, 2). Use multilevel optimizationapproach for the solution.14.17Find the sensitivities of x∗1 , x∗2 , and f∗ with respect to Young’s modulus of the tubularcolumn considered in Example the two-bar truss shown in Fig. read more..


    Problems777123P45°XYxxhFigure 14.15Two-bar truss.subject tog1(X)=P (1+ x1) 1+ x212√2x1x2− σ0≤ 0g2(X)=P (x1− 1) 1+ x212√2x1x2− σ0≤ 0xi≥ x(l)i,i= 1, 2where x1= x/ h, x2= A/Aref, hthe depth, Eis Young’s modulus, ρthe weight density,σ0 the permissible stress, and x(l)ithe lower bound on xi. Find the optimum solutions ofthe individual objective functions subject to the stated constraints using a graphical pro-cedure. Data: P= 10,000 lb, ρ= 0.283 lb/in3, E= 30× 106 psi, h= 100 read more..


    778Practical Aspects of Optimizationf1(X)= −25(x1 − 2)2 − (x2− 2)2 − (x3− 1)2 − (x4− 4)2 − (x5− 1)2f2(X)= x21+ x22+ x23+ x24+ x25+ x26subject to− x1− x2+ 2≤ 0; x1+ x2− 6≤ 0; −x1 + x2− 2≤ 0; x1− 3x2− 2≤ 0;(x3− 3)2 + x24− 4≤ 0; −(x5 − 3)2 − x6+ 4≤ 0; 0≤ xi≤ 10, i= 1, 2, 61≤ xi≤ 5, i= 3, 5; 0≤ x4≤ 6Find the minima of the individual objective functions under the stated constraints usingthe MATLAB function fmincon.14.24Find the read more..


A Convex and Concave Functions

Convex Function. A function f(X) is said to be convex if for any pair of points

X1 = {x1^(1), x2^(1), . . . , xn^(1)}^T  and  X2 = {x1^(2), x2^(2), . . . , xn^(2)}^T

and all λ, 0 ≤ λ ≤ 1,

f[λX2 + (1 − λ)X1] ≤ λf(X2) + (1 − λ)f(X1) (A.1)

that is, if the segment joining the two points lies entirely above or on the graph of f(X). Figures A.1a and A.2a illustrate convex functions of one and two variables, respectively.


    780Convex and Concave FunctionsFigure A.1Functions of one variable: (a)convex function in one variable; (b)concave functionin one variable.Figure A.2Functions of two variables: (a)convex function in two variables; (b)concavefunction in two variables.Figure A.3Function that is convex over certain region and concave over certain other region. read more..


Convex and Concave Functions 781

Theorem A.1 A function f(X) is convex if for any two points X1 and X2 we have

f(X2) ≥ f(X1) + ∇f^T(X1)(X2 − X1)

Proof: If f(X) is convex, we have by definition

f[λX2 + (1 − λ)X1] ≤ λf(X2) + (1 − λ)f(X1)

that is,

f[X1 + λ(X2 − X1)] ≤ f(X1) + λ[f(X2) − f(X1)] (A.3)

This inequality can be rewritten as

f(X2) − f(X1) ≥ {f[X1 + λ(X2 − X1)] − f(X1)} / [λ(X2 − X1)] · (X2 − X1) (A.4)

By defining ΔX = λ(X2 − X1), the inequality (A.4) can be written as

f(X2) − f(X1) ≥ {[f(X1 + ΔX) − f(X1)] / ΔX} (X2 − X1)

and taking the limit ΔX → 0 gives f(X2) − f(X1) ≥ ∇f^T(X1)(X2 − X1), which is the stated inequality.
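Theorem A.1's inequality says that the graph of a convex function lies on or above every one of its tangent planes; this is easy to spot-check numerically for a particular convex function (the quadratic below is an arbitrary example, not taken from the text):

```python
import numpy as np

# Spot-check f(X2) >= f(X1) + grad f(X1)^T (X2 - X1) for the convex
# function f(X) = x1^2 + 2*x2^2 at many random point pairs.
f = lambda X: X[0] ** 2 + 2.0 * X[1] ** 2
grad = lambda X: np.array([2.0 * X[0], 4.0 * X[1]])

rng = np.random.default_rng(0)
ok = True
for _ in range(1000):
    X1 = rng.uniform(-5.0, 5.0, 2)
    X2 = rng.uniform(-5.0, 5.0, 2)
    lhs = f(X2)
    rhs = f(X1) + grad(X1) @ (X2 - X1)
    ok = ok and (lhs >= rhs - 1e-12)   # small tolerance for rounding
print(ok)
```

For this quadratic the gap lhs − rhs equals (X2 − X1)^T H (X2 − X1)/2 with a positive definite H, so the inequality holds at every pair of points.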


    782Convex and Concave FunctionsThe following theorem establishes a very important relation, namely, that any localminimum is a global minimum for a convex function.Theorem A.3 Any local minimum of a convex function f (X)is a global minimum.Proof: Let us prove this theorem by contradiction. Suppose that there exist two differentlocal minima, say, X1 and X2, for the function f (X). Let f (X2) < f (X1). Since f (X)is convex, X1 and X2 have to satisfy the relation (A.6), that is,f (X2) −f (X1) read more..


Convex and Concave Functions 783

(d) f = 4x1^2 + 3x2^2 + 5x3^2 + 6x1x2 + x1x3 − 3x1 − 2x2 + 15:

H(X) = [∂^2f/∂xi ∂xj] =

    | 8   6   1  |
    | 6   6   0  |
    | 1   0   10 |

Here the principal minors are given by

|8| = 8 > 0

| 8  6 |
| 6  6 |  = 12 > 0

| 8  6  1  |
| 6  6  0  |
| 1  0  10 |  = 114 > 0

and hence the matrix H(X) is positive definite for all real values of x1, x2, and x3. Therefore, f(X) is a strictly convex function.
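The leading-principal-minor test used in case (d) can be reproduced numerically; a short sketch with the Hessian reconstructed from the example above:

```python
import numpy as np

# Hessian of f = 4x1^2 + 3x2^2 + 5x3^2 + 6x1x2 + x1x3 - 3x1 - 2x2 + 15
H = np.array([[8.0, 6.0, 1.0],
              [6.0, 6.0, 0.0],
              [1.0, 0.0, 10.0]])

# All leading principal minors positive  <=>  H positive definite
# <=>  f strictly convex everywhere (H is constant for a quadratic).
minors = [np.linalg.det(H[:k, :k]) for k in (1, 2, 3)]
print(minors)   # close to [8, 12, 114], matching the text
```

The same conclusion follows from the eigenvalues of H (all positive), which `np.linalg.eigvalsh(H)` would confirm.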


    BSome Computational Aspectsof OptimizationSeveral methods were presented for solving different types of optimization problemsin Chapters 3 to 14. This appendix is intended to give some guidance to the reader inchoosing a suitable method for solving a particular problem along with some computa-tional details. Most of the discussion is aimed at the solution of nonlinear programmingproblems.B.1CHOICE OF METHODSeveral factors are to be considered in deciding a particular method to solve a read more..


    B.3Comparison of Constrained Methods785A comparison of several variable metric algorithms was made by Shanno and Phua[B.4]. Sargent and Sebastian presented numerical experiences with unconstrained min-imization algorithms [B.5]. On the basis of these studies, the following general con-clusions can be drawn.If the first and second derivatives of the objective function ( f) can be evaluatedeasily (either in closed form or by a finite-difference scheme), and if the number ofdesign variables is read more..


    786Some Computational Aspects of Optimizationformulated test problems that are not related to real-life problems. Thus each new prac-tical problem has to be tackled almost independently based on past experience. Thefollowing guidelines are applicable for a general problem.The sequential quadratic programming approach can be used for solving a variety ofproblems efficiently. The GRG method and Zoutendijk’s method of feasible directions,although slightly less efficient, can also be used for read more..


    B.5Scaling of Design Variables and Constraints787Table B.1Summary of Some Structural Optimization PackagesSoftware systemSourceCapabilities and(program)(developer)characteristicsASTROS (AutomatedSTRucturalOptimization System)Air Force Wright LaboratoriesFIBRAWright-Patterson Air ForceBase, OH 45433-6553Structural optimization withstatic, eigenvalue, modalanalysis, and flutterconstraints;approximation concepts;compatibility withNASTRAN; sensitivityanalysisANSYSSwanson Analysis Systems,Inc.P.O. read more..


    788Some Computational Aspects of Optimizationxmaxi), i =1, 2, . . . , n. The values of these bounds are not critical and there will notbe any harm even if they span partially the infeasible domain. Another aspect ofscaling is encountered with constraint functions. This becomes necessary whenever thevalues of the constraint functions differ by large magnitudes. This aspect of scaling(normalization) of constraints was discussed in Section 7.13.B.6COMPUTER PROGRAMS FOR MODERN METHODSOF read more..


    References and Bibliography789Ant Colony Optimization.An m-file to implement the ant colony optimizationmethod in the Matlab environment for the solution of symmetrical and unsymmetricaltraveling salesman problem was created by H. Wang. The link is given below: Optimization.An m-file to implement multiobjective optimizationusing evolutionary algorithms (based on nondominated sorting genetic algorithm, abbre-viated NSGA) in read more..


    790Some Computational Aspects of OptimizationB.13J. L. Kuester and J. H. Mize, Optimization Techniques with Fortran, McGraw-Hill, NewYork, 1973.B.14E. H. Johnson, Tools for structural optimization, Chapter 29, pp. 851 –863, in StructuralOptimization: Status and Promise, M. P. Kamat, Ed., AIAA, Washington, DC, 1993.B.15H.R.E.M. H¨ornlein and K. Schittkowski, Software Systems for Structural Optimization,Birkhauser, Basel, 1993.B.16J. J. Mor´e and S. J. Wright, Optimization Software Guide, read more..


    CIntroduction to MATLAB®MATLAB, derived from MATrix LABoratory, is a software package that was originallydeveloped in the late 1970s for the solution of scientific and engineering problems. Thesoftware can be used to execute a single statement or a list of statements, called a scriptor m-file. MATLAB family includes the Optimization Toolbox, which is a library ofprograms or m-files to solve different types of optimization problems. Some basicfeatures of MATLAB are summarized in this read more..


    792Introduction to MATLAB®C.2DEFINING MATRICES IN MATLABBefore performing arithmetic operations or using them in developing MATLAB pro-grams or m-files, the relevant matrices need to be defined using statements such as thefollowing.1. A row vector or 1 ×nmatrix, denoted A, can be defined by enclosing itselements in brackets and separated by either spaces or commas.Example:A =[123]2. A column vector or n ×1 matrix, denoted A, can be defined by entering itselements in different lines or in read more..


    C.4Optimization Toolbox793(ii) To generate all numbers between 0 and πin increments of π/6> > 0 : pi/6 : piThis command generates the numbers00.52361.04721.57082.09442.61803.1416C.3CREATING m-FILESMATLAB can be used in an interactive mode by typing each command from thekeyboard. In this mode, MATLAB performs the operations much like an extended cal-culator. However, there are situations in which this mode of operation is inefficient.For example, if the same set of commands is to be read more..


    794Introduction to MATLAB®•Distinction between linear and nonlinear constraints.•Identification of lower and upper bounds on design variables.•Setting/changing the parameters of the optimization algorithm (based on the avail-able options).Using MATLAB Programs.Each program or m-file in MATLAB can be imple-mented in several ways. The details can be found either in the reference given aboveor online using the help command. For illustration, the help command and the responsefor the program read more..


    Answers to Selected ProblemsCHAPTER 11.1 Min. f= 5xA− 80xB+ 160xC+ 15xD, 0.05xA+ 0.05xB+ 0.1xC+0.15xD≤ 1000, 0.1xA+ 0.15xB+ 0.2xC+ 0.05xD≤ 2000, 0.05xA+ 0.1xB+0.1xC+ 0.15xD≤ 1500, xA≥ 5000, xB≥ 0, xC≥ 0, xD≥ 40001.2(a) X∗ = {0.65,0.53521} (b) X∗ = {0.9, 2.5}(c) X∗ = {0.65, 0.53521} 1.5 x∗1= x∗2= 3001.9(a) R∗1= 4.472, R∗2= 2.236(b) R∗1= 3.536, R∗2= 3.536(c) R∗1= 6.67, R∗2= 3.331.11(a) y1= ln x1, y2= ln x2, ln f= 2y1+ 3y2(b) f= 10y2x2 , x1= 10y2 ,ln (log10 f read more..


    796Answers to Selected Problems2.13 positive semidefinite2.15 positive definite2.17 negative definite2.19 indefinite2.21 x∗1= 0.2507 m, x∗2= 5.0879× 10−3 m2.23 a= 328, b= −376 2.26 x∗ = 27, y∗ = 212.27 x∗ = 1002.28(a) minimum(b) minimum(c) saddle point(d) none2.30 saddle point at (0, 0)2.33 dx1= arbitrary, dx2= 02.36 radius= 2r/3, length= h/32.38 length= (a2/3+ b2/3)3/22.40 h∗ =4Vπ1/3, r∗ =h∗22.41 x∗1= x∗3= (S/3)1/2, x∗2= (S/12)1/22.43 d∗ =16{(a + b)−√a2− read more..


    Answers to Selected Problems7973.55 x∗A= 1.5, x∗B= 03.57 x∗m= 16, x∗d= 203.60 x∗ = 36/11, y∗ = 35/113.66 all points on the line joining (7.4286, 15.4286) and (10, 18)3.71 x∗ = 3.6207, y∗ = 8.44833.75 x∗ = 2/7, y∗ = 30/73.79 x∗ = 56/23, y∗ = 45/233.85 x∗ = −4/3, y∗ = 73.89 x∗ = 0, y∗ = 33.92 (x1, x2)= amounts of mixed nuts (A, B) used, lb. x∗1= 80/7, x∗2= 120/73.94 x∗A= 62.5, x∗B= 31.253.96 xi= number of units of Pi produced per week. x∗1= 100/3, x∗2= read more..


    798Answers to Selected Problems(b) 20(c) 19(d) 14(e) 145.17(a) 2.7814(b) 2.71835.18(a) 2.7183(b) 2.7289(c) 2.71835.20 0.255.21 0.0012575.22 0.001265.24 0.00125631CHAPTER 66.1 Min. f= P0(0.5u21+ 0.5u22− u1u2− u2)6.2˜f1 = 7.0751,˜f2 = 74.8087 where˜f =3fρl4Eh26.4 x1= 65.567, x2= 52.9746.5 x∗1= 4.5454, x∗2= 5.45456.7 f= 4250x21− 1000x1x2− 2500x1x3+ 1500x22− 500x2x3+ 5750x23− 1000x1− 2000x2− 3000x3, X∗ = {0.3241, 0.8360, 0.3677}6.9 X∗ ≈ {1, 1} 6.12 X∗ = {0.9465, read more..


    Answers to Selected Problems799(b) φk= 2x+ rk( 2− x 2 + x− 10 2)7.27 x∗1= 0.989637, x∗2= 1.9792747.29 14 x21+116 x22− 1≤ 0, x1/5+ x2/3− 1≤ 0, r1= 1.57.31 x∗1= 4.1, x∗2= 5.97.34 X∗ ≈ {0.8984, 0}, f∗ ≈ 2.20797.36 X∗ ≈ {1.671, 17.6} 7.39 x1= 0.4028, x2= 0.80567.42 optimum, λ1= λ2=14√2, λ3= 117.45 X∗ ≈ {1.3480, 0.7722, 0.4299}, f∗ ≈ 0.1154CHAPTER 88.1 f≥ 2.2688668.2 f≥ 3.4641028.3 f≥ 38.5 radius= 0.4174 m, height= 1.6695 m8.6 radius= 0.3633 m, read more..


    800Answers to Selected Problems9.10 x∗1= 7.5, x∗2= 10.09.11 x∗1= 60, x∗2= 70, x∗3= 809.13 x∗1= 5, x∗2= 0, x∗3= 5, x∗4= 0CHAPTER 1010.1 X∗ = {2,1}, f∗ = 1310.3 X∗ = {0,9}, f∗ = 2710.4 X∗ = {1,0}, f∗ = 310.5 X∗ = {0,3}, f∗ = 310.6 X∗ = {3,3}, f∗ = 3910.7 X∗ = {4,3}, f∗ = 1010.8 187= 1 0 1 1 1 0 1 110.9 X∗ = {1, 2, 0}, f∗ = 310.12 X∗ = {1,1, 1}, f∗ = 1810.13 X∗ = {1,1, 1, 1, 0}, f∗ = 910.15 X∗ = {4,0}, f∗ = 410.16 X∗ = {2, 2.5}, f∗ = read more..


    Answers to Selected Problems801CHAPTER 1313.1 Before: X1=1713, X2=1522; After: X1=2322, X2=91313.3 (a) 9, (b) 10, (c) 1113.4 1013.6 (i j )= (i4)13.8 x∗ = 213.9 x1(2)= 2.8297, x2(2)= 1.9345, x3(2)= 1.6362, x4(2)= 1.188713.12 Number of copies of strings 1, 2, 3, 4, 5, 6, 7 are 0, 0, 1, 2, 5, 2, 2, respectively13.14 String length= 37CHAPTER 1414.1 c∗1= 0.04, c∗2= 0.8114.3(a){0.001165, 0.002329, 0.03949,−0.05635},(b){0.0009705, 0.001941, 0.05273,−0.084102},(c){0.0009704, 0.001941, read more..


    IndexAAbsolute minimum, 63Active constraint, 8, 94Addition of constraints, 218Addition of new variables, 214Additive algorithm, 605Adjoint equations, 682Adjoint variable, 682Admissible variations, 78All-integer problem, 588Analytical methods, 253Answers to selected problems, 795Ant colony optimization, 3, 693, 714algorithm, 717ant searching behavior, 715basic concept, 714evaporation, 716path retracing, 715pheromone trail, 715pheromone updating, 715Applications of geometric read more..
