Trust Region Methods

This is the first comprehensive reference on trust-region methods, a class of numerical algorithms for the solution of nonlinear optimization problems. Its unified treatment covers both unconstrained and constrained problems and reviews a large part of the specialized literature on the subject. It also provides an up-to-date view of numerical optimization.
Iterative Methods for Optimization

This book presents a carefully selected group of methods for unconstrained and bound-constrained optimization problems and analyzes them in depth, both theoretically and algorithmically. It focuses on clarity in algorithmic description and analysis rather than generality; while it provides pointers to the literature for the most general theoretical results and robust software, the author believes it is more important that readers fully understand the special cases that convey the essential ideas. A companion to Kelley's book Iterative Methods for Linear and Nonlinear Equations (SIAM, 1995), this book contains many exercises and examples and can be used as a text, a tutorial for self-study, or a reference. Iterative Methods for Optimization does more than cover traditional gradient-based optimization: it is the first book to treat sampling methods, including the Hooke-Jeeves, implicit filtering, MDS, and Nelder-Mead schemes, in a unified way, and the first to make connections between sampling methods and traditional gradient-based methods. Each of the main algorithms in the text is described in pseudocode, and a collection of MATLAB codes is available, so readers can easily experiment with the algorithms as well as implement them in other languages.
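
As a taste of the derivative-free methods the book covers, here is a minimal sketch that runs Nelder-Mead on a standard test problem using SciPy's implementation; it stands in for the book's MATLAB codes, and the Rosenbrock function and starting point are illustrative choices, not taken from the text.

```python
# Minimal sketch: minimizing the Rosenbrock function with Nelder-Mead,
# one of the sampling (derivative-free) methods treated in Kelley's book.
# SciPy's implementation is used here in place of the book's MATLAB codes.
import numpy as np
from scipy.optimize import minimize

def rosenbrock(x):
    # Classic smooth test problem with a curved, narrow valley.
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

result = minimize(rosenbrock, x0=np.array([-1.2, 1.0]), method="Nelder-Mead")
print(result.x)     # approximately [1.0, 1.0]
print(result.nfev)  # only function evaluations are counted; no gradients used
```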
Trust-Region Methods for Unconstrained Optimization Problems

We present trust-region methods for the general unconstrained minimization problem. Trust-region algorithms iteratively minimize a model of the objective function within the trust region and update the size of the region in order to find a first-order stationary point of the objective function. The radius of the trust region is updated based on the agreement between the model and the objective function at the new trial point. The efficiency of a trust-region algorithm depends significantly on the size of the trust region, the agreement between the model and the objective function, and the model value reduction achieved at each step. The size of the trust region plays a key role, particularly for large-scale problems, because constructing and minimizing the model at each step requires gradient and Hessian information about the objective function: if the trust region is too small or too large, more models must be constructed and minimized, which is computationally expensive.

We propose two adaptive trust-region algorithms that explore beyond the trust region when its boundary prevents the algorithm from accepting a more beneficial point. This occurs when the model and the objective function agree very well on the trust-region boundary, and a step outside the trust region can be found with a smaller model value while maintaining good agreement between the model and the objective function.

We also take a different approach to derivative-free unconstrained optimization problems, where the objective function is possibly nonsmooth. In an exploratory study, we use deep neural networks and their well-known capability as universal function approximators. We propose and investigate two derivative-free trust-region methods for solving unconstrained minimization problems, in which artificial neural networks are employed to construct a model within the trust region. We also directly estimate the objective function minimizer, without explicitly constructing a model function, through a parent-child neural network. This approach may offer improved practical performance when the objective function is extremely noisy or stochastic. We provide a framework for future work in this area.
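
To make the classical loop described above concrete, here is a minimal sketch of a basic trust-region iteration on the 2-D Rosenbrock function; it illustrates the ratio test and radius update, not the thesis's adaptive or neural-network variants. The Cauchy-point subproblem solver and the threshold constants (0.25, 0.75, eta = 0.1) are conventional textbook choices assumed here, not taken from the abstract.

```python
# Sketch of a classical trust-region loop: quadratic model, approximate
# subproblem solve via the Cauchy point, agreement ratio, radius update.
import numpy as np

def f(x):
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def grad(x):
    return np.array([
        -400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0] ** 2),
    ])

def hess(x):
    return np.array([
        [1200.0 * x[0] ** 2 - 400.0 * x[1] + 2.0, -400.0 * x[0]],
        [-400.0 * x[0], 200.0],
    ])

def cauchy_point(g, B, delta):
    # Minimize the quadratic model along -g, clipped to the region boundary.
    gBg = g @ B @ g
    gnorm = np.linalg.norm(g)
    tau = 1.0 if gBg <= 0 else min(gnorm ** 3 / (delta * gBg), 1.0)
    return -tau * (delta / gnorm) * g

x, delta, delta_max, eta = np.array([-1.2, 1.0]), 1.0, 10.0, 0.1
for _ in range(1000):
    g, B = grad(x), hess(x)
    if np.linalg.norm(g) < 1e-8:
        break
    p = cauchy_point(g, B, delta)
    predicted = -(g @ p + 0.5 * p @ B @ p)  # model reduction m(0) - m(p)
    rho = (f(x) - f(x + p)) / predicted     # agreement ratio
    if rho < 0.25:
        delta *= 0.25                       # poor agreement: shrink region
    elif rho > 0.75 and np.isclose(np.linalg.norm(p), delta):
        delta = min(2.0 * delta, delta_max) # good agreement at boundary: expand
    if rho > eta:
        x = x + p                           # accept the step only if it helps
print(x)  # slowly approaches the minimizer at (1, 1)
```

The expansion branch fires in exactly the situation the abstract highlights: good agreement between model and objective with the step pinned to the boundary. The adaptive algorithms proposed in the thesis go further by probing outside the region before committing to a radius, and a dogleg or exact subproblem solver would converge faster than Cauchy-point steps on this problem.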