Distributed Convex Optimization with Proximal Methods

This thesis is about a class of optimization algorithms called proximal algorithms. Much like Newton's method is a standard tool for solving unconstrained smooth optimization problems of modest size, proximal algorithms can be viewed as an analogous tool for nonsmooth, constrained, large-scale, distributed, or decentralized versions of these problems. They are very generally applicable, but are especially well-suited to problems of substantial recent interest involving large or high-dimensional datasets. Proximal methods sit at a higher level of abstraction than classical algorithms like Newton's method: the base operation is evaluating the proximal operator of a function, which itself involves solving a small convex optimization problem. These subproblems, which generalize the problem of projecting a point onto a convex set, often admit closed-form solutions or can be solved very quickly with standard or simple specialized methods. Here, we discuss many different interpretations of proximal operators and algorithms, describe their connections to many other topics in optimization and applied mathematics, survey some fundamental algorithms, and provide a large number of examples of proximal operators that arise in practice.
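
To make the notion concrete, here is a minimal sketch in Python/NumPy, not taken from the thesis itself: the proximal operator of a function f with parameter lam > 0 maps a point v to the minimizer of f(x) + (1/(2*lam))*||x - v||^2. The function names prox_l1 and project_box below are illustrative only; the first implements the well-known soft-thresholding formula for the l1 norm (a closed-form proximal operator), and the second shows that Euclidean projection onto a box is itself the proximal operator of the box's indicator function.

    import numpy as np

    # prox_{lam * f}(v) = argmin_x  f(x) + (1 / (2 * lam)) * ||x - v||_2^2
    # For f(x) = ||x||_1 this minimization has the closed-form
    # "soft-thresholding" solution implemented below.
    def prox_l1(v, lam):
        """Proximal operator of the l1 norm, applied elementwise."""
        return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

    # For f the indicator function of the box [lo, hi], the proximal
    # operator reduces to the Euclidean projection onto that box.
    def project_box(v, lo, hi):
        """Projection onto the box [lo, hi], i.e. prox of its indicator."""
        return np.clip(v, lo, hi)

    v = np.array([2.0, -0.3, 0.7, -1.5])
    print(prox_l1(v, lam=0.5))        # entries: 1.5, -0.0, 0.2, -1.0
    print(project_box(v, -1.0, 1.0))  # entries: 1.0, -0.3, 0.7, -1.0

Proximal algorithms repeatedly evaluate such operators as their basic step, which is why closed forms or fast specialized solvers for these subproblems matter in large-scale and distributed settings.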
Large-Scale and Distributed Optimization

This book presents tools and methods for large-scale and distributed optimization. Because many methods in "Big Data" fields rely on solving large-scale optimization problems, often in a distributed fashion, this topic has become increasingly important over the last decade. Beyond its specific coverage of this active research field, the book serves as a valuable source of information for practitioners as well as theoreticians. Large-Scale and Distributed Optimization is a unique combination of contributions from leading experts in the field, who were speakers at the LCCC Focus Period on Large-Scale and Distributed Optimization, held in Lund, 14th–16th June 2017. A source of information and innovative ideas for current and future research, this book will appeal to researchers, academics, and students interested in large-scale optimization.
Distributed Optimization: Advances in Theories, Methods, and Applications

This book offers a valuable reference guide for researchers in distributed optimization and for senior undergraduate and graduate students alike. Focusing on the roles and functions of agents, communication networks, and algorithms in distributed optimization for networked control systems, the book introduces readers to the background of distributed optimization; recent developments in distributed algorithms for various types of underlying communication networks; the implementation of computation-efficient and communication-efficient strategies in the execution of distributed algorithms; and frameworks for convergence analysis and performance evaluation. On this basis, it then thoroughly studies 1) distributed constrained optimization and the random sleep scheme, from an agent perspective; 2) asynchronous broadcast-based algorithms, event-triggered communication, quantized communication, unbalanced directed networks, and time-varying networks, from a communication network perspective; and 3) accelerated algorithms and stochastic gradient algorithms, from an algorithm perspective. Finally, applications of distributed optimization to large-scale statistical learning, wireless sensor networks, and optimal energy management in smart grids are discussed.