Numerical Optimization

ISBN-10: 0387303030

ISBN-13: 9780387303031

Edition: 2nd, 2006 (revised)

Authors: Jorge Nocedal, Stephen J. Wright

Description:

Numerical Optimization presents a comprehensive and up-to-date description of the most effective methods in continuous optimization. It responds to the growing interest in optimization in engineering, science, and business by focusing on the methods that are best suited to practical problems. For this new edition the book has been thoroughly updated throughout. There are new chapters on nonlinear interior methods and derivative-free methods for optimization, both of which are used widely in practice and are the focus of much current research. Because of the emphasis on practical methods, as well as the extensive illustrations and exercises, the book is accessible to a wide audience. It can be…

Book details

List price: $79.95
Edition: 2nd
Copyright year: 2006
Publisher: Springer
Publication date: 7/27/2006
Binding: Hardcover
Pages: 664
Size: 7.00" wide x 9.25" long x 1.50" tall
Weight: 3.300 lbs
Language: English

Stephen J. Wright is Professor in the Computer Sciences Department at the University of Wisconsin, Madison.

Table of contents

Preface
Preface to the Second Edition
Introduction
Mathematical Formulation
Example: A Transportation Problem
Continuous versus Discrete Optimization
Constrained and Unconstrained Optimization
Global and Local Optimization
Stochastic and Deterministic Optimization
Convexity
Optimization Algorithms
Notes and References
Fundamentals of Unconstrained Optimization
What Is a Solution?
Recognizing a Local Minimum
Nonsmooth Problems
Overview of Algorithms
Two Strategies: Line Search and Trust Region
Search Directions for Line Search Methods
Models for Trust-Region Methods
Scaling
Exercises
Line Search Methods
Step Length
The Wolfe Conditions
The Goldstein Conditions
Sufficient Decrease and Backtracking
Convergence of Line Search Methods
Rate of Convergence
Convergence Rate of Steepest Descent
Newton's Method
Quasi-Newton Methods
Newton's Method with Hessian Modification
Eigenvalue Modification
Adding a Multiple of the Identity
Modified Cholesky Factorization
Modified Symmetric Indefinite Factorization
Step-Length Selection Algorithms
Interpolation
Initial Step Length
A Line Search Algorithm for the Wolfe Conditions
Notes and References
Exercises
Trust-Region Methods
Outline of the Trust-Region Approach
Algorithms Based on the Cauchy Point
The Cauchy Point
Improving on the Cauchy Point
The Dogleg Method
Two-Dimensional Subspace Minimization
Global Convergence
Reduction Obtained by the Cauchy Point
Convergence to Stationary Points
Iterative Solution of the Subproblem
The Hard Case
Proof of Theorem 4.1
Convergence of Algorithms Based on Nearly Exact Solutions
Local Convergence of Trust-Region Newton Methods
Other Enhancements
Scaling
Trust Regions in Other Norms
Notes and References
Exercises
Conjugate Gradient Methods
The Linear Conjugate Gradient Method
Conjugate Direction Methods
Basic Properties of the Conjugate Gradient Method
A Practical Form of the Conjugate Gradient Method
Rate of Convergence
Preconditioning
Practical Preconditioners
Nonlinear Conjugate Gradient Methods
The Fletcher-Reeves Method
The Polak-Ribière Method and Variants
Quadratic Termination and Restarts
Behavior of the Fletcher-Reeves Method
Global Convergence
Numerical Performance
Notes and References
Exercises
Quasi-Newton Methods
The BFGS Method
Properties of the BFGS Method
Implementation
The SR1 Method
Properties of SR1 Updating
The Broyden Class
Convergence Analysis
Global Convergence of the BFGS Method
Superlinear Convergence of the BFGS Method
Convergence Analysis of the SR1 Method
Notes and References
Exercises
Large-Scale Unconstrained Optimization
Inexact Newton Methods
Local Convergence of Inexact Newton Methods
Line Search Newton-CG Method
Trust-Region Newton-CG Method
Preconditioning the Trust-Region Newton-CG Method
Trust-Region Newton-Lanczos Method
Limited-Memory Quasi-Newton Methods
Limited-Memory BFGS
Relationship with Conjugate Gradient Methods
General Limited-Memory Updating
Compact Representation of BFGS Updating
Unrolling the Update
Sparse Quasi-Newton Updates
Algorithms for Partially Separable Functions
Perspectives and Software
Notes and References
Exercises
Calculating Derivatives
Finite-Difference Derivative Approximations
Approximating the Gradient
Approximating a Sparse Jacobian
Approximating the Hessian
Approximating a Sparse Hessian
Automatic Differentiation
An Example
The Forward Mode
The Reverse Mode
Vector Functions and Partial Separability
Calculating Jacobians of Vector Functions
Calculating Hessians: Forward Mode
Calculating Hessians: Reverse Mode
Current Limitations
Notes and References
Exercises
Derivative-Free Optimization
Finite Differences and Noise
Model-Based Methods
Interpolation and Polynomial Bases
Updating the Interpolation Set
A Method Based on Minimum-Change Updating
Coordinate and Pattern-Search Methods
Coordinate Search Method
Pattern-Search Methods
A Conjugate-Direction Method
Nelder-Mead Method
Implicit Filtering
Notes and References
Exercises
Least-Squares Problems
Background
Linear Least-Squares Problems
Algorithms for Nonlinear Least-Squares Problems
The Gauss-Newton Method
Convergence of the Gauss-Newton Method
The Levenberg-Marquardt Method
Implementation of the Levenberg-Marquardt Method
Convergence of the Levenberg-Marquardt Method
Methods for Large-Residual Problems
Orthogonal Distance Regression
Notes and References
Exercises
Nonlinear Equations
Local Algorithms
Newton's Method for Nonlinear Equations
Inexact Newton Methods
Broyden's Method
Tensor Methods
Practical Methods
Merit Functions
Line Search Methods
Trust-Region Methods
Continuation/Homotopy Methods
Motivation
Practical Continuation Methods
Notes and References
Exercises
Theory of Constrained Optimization
Local and Global Solutions
Smoothness
Examples
A Single Equality Constraint
A Single Inequality Constraint
Two Inequality Constraints
Tangent Cone and Constraint Qualifications
First-Order Optimality Conditions
First-Order Optimality Conditions: Proof
Relating the Tangent Cone and the First-Order Feasible Direction Set
A Fundamental Necessary Condition
Farkas' Lemma
Proof of Theorem 12.1
Second-Order Conditions
Second-Order Conditions and Projected Hessians
Other Constraint Qualifications
A Geometric Viewpoint
Lagrange Multipliers and Sensitivity
Duality
Notes and References
Exercises
Linear Programming: The Simplex Method
Linear Programming
Optimality and Duality
Optimality Conditions
The Dual Problem
Geometry of the Feasible Set
Bases and Basic Feasible Points
Vertices of the Feasible Polytope
The Simplex Method
Outline
A Single Step of the Method
Linear Algebra in the Simplex Method
Other Important Details
Pricing and Selection of the Entering Index
Starting the Simplex Method
Degenerate Steps and Cycling
The Dual Simplex Method
Presolving
Where Does the Simplex Method Fit?
Notes and References
Exercises
Linear Programming: Interior-Point Methods
Primal-Dual Methods
Outline
The Central Path
Central Path Neighborhoods and Path-Following Methods
Practical Primal-Dual Algorithms
Corrector and Centering Steps
Step Lengths
Starting Point
A Practical Algorithm
Solving the Linear Systems
Other Primal-Dual Algorithms and Extensions
Other Path-Following Methods
Potential-Reduction Methods
Extensions
Perspectives and Software
Notes and References
Exercises
Fundamentals of Algorithms for Nonlinear Constrained Optimization
Categorizing Optimization Algorithms
The Combinatorial Difficulty of Inequality-Constrained Problems
Elimination of Variables
Simple Elimination using Linear Constraints
General Reduction Strategies for Linear Constraints
Effect of Inequality Constraints
Merit Functions and Filters
Merit Functions
Filters
The Maratos Effect
Second-Order Correction and Nonmonotone Techniques
Nonmonotone (Watchdog) Strategy
Notes and References
Exercises
Quadratic Programming
Equality-Constrained Quadratic Programs
Properties of Equality-Constrained QPs
Direct Solution of the KKT System
Factoring the Full KKT System
Schur-Complement Method
Null-Space Method
Iterative Solution of the KKT System
CG Applied to the Reduced System
The Projected CG Method
Inequality-Constrained Problems
Optimality Conditions for Inequality-Constrained Problems
Degeneracy
Active-Set Methods for Convex QPs
Specification of the Active-Set Method for Convex QP
Further Remarks on the Active-Set Method
Finite Termination of Active-Set Algorithm on Strictly Convex QPs
Updating Factorizations
Interior-Point Methods
Solving the Primal-Dual System
Step Length Selection
A Practical Primal-Dual Method
The Gradient Projection Method
Cauchy Point Computation
Subspace Minimization
Perspectives and Software
Notes and References
Exercises
Penalty and Augmented Lagrangian Methods
The Quadratic Penalty Method
Motivation
Algorithmic Framework
Convergence of the Quadratic Penalty Method
Ill Conditioning and Reformulations
Nonsmooth Penalty Functions
A Practical ℓ₁ Penalty Method
A General Class of Nonsmooth Penalty Methods
Augmented Lagrangian Method: Equality Constraints
Motivation and Algorithmic Framework
Properties of the Augmented Lagrangian
Practical Augmented Lagrangian Methods
Bound-Constrained Formulation
Linearly Constrained Formulation
Unconstrained Formulation
Perspectives and Software
Notes and References
Exercises
Sequential Quadratic Programming
Local SQP Method
SQP Framework
Inequality Constraints
Preview of Practical SQP Methods
IQP and EQP
Enforcing Convergence
Algorithmic Development
Handling Inconsistent Linearizations
Full Quasi-Newton Approximations
Reduced-Hessian Quasi-Newton Approximations
Merit Functions
Second-Order Correction
A Practical Line Search SQP Method
Trust-Region SQP Methods
A Relaxation Method for Equality-Constrained Optimization
Sℓ₁QP (Sequential ℓ₁ Quadratic Programming)
Sequential Linear-Quadratic Programming (SLQP)
A Technique for Updating the Penalty Parameter
Nonlinear Gradient Projection
Convergence Analysis
Rate of Convergence
Perspectives and Software
Notes and References
Exercises
Interior-Point Methods for Nonlinear Programming
Two Interpretations
A Basic Interior-Point Algorithm
Algorithmic Development
Primal vs. Primal-Dual System
Solving the Primal-Dual System
Updating the Barrier Parameter
Handling Nonconvexity and Singularity
Step Acceptance: Merit Functions and Filters
Quasi-Newton Approximations
Feasible Interior-Point Methods
A Line Search Interior-Point Method
A Trust-Region Interior-Point Method
An Algorithm for Solving the Barrier Problem
Step Computation
Lagrange Multiplier Estimates and Step Acceptance
Description of a Trust-Region Interior-Point Method
The Primal Log-Barrier Method
Global Convergence Properties
Failure of the Line Search Approach
Modified Line Search Methods
Global Convergence of the Trust-Region Approach
Superlinear Convergence
Perspectives and Software
Notes and References
Exercises
Background Material
Elements of Linear Algebra
Vectors and Matrices
Norms
Subspaces
Eigenvalues, Eigenvectors, and the Singular-Value Decomposition
Determinant and Trace
Matrix Factorizations: Cholesky, LU, QR
Symmetric Indefinite Factorization
Sherman-Morrison-Woodbury Formula
Interlacing Eigenvalue Theorem
Error Analysis and Floating-Point Arithmetic
Conditioning and Stability
Elements of Analysis, Geometry, Topology
Sequences
Rates of Convergence
Topology of the Euclidean Space Rⁿ
Convex Sets in Rⁿ
Continuity and Limits
Derivatives
Directional Derivatives
Mean Value Theorem
Implicit Function Theorem
Order Notation
Root-Finding for Scalar Equations
A Regularization Procedure
References
Index