Reinforcement Learning And Optimal Control

Reinforcement Learning and Optimal Control

Author: Dimitri Bertsekas
Language: en
Publisher: Athena Scientific
Release Date: 2019-07-01
This book considers large and challenging multistage decision problems, which can in principle be solved by dynamic programming (DP), but whose exact solution is computationally intractable. We discuss solution methods that rely on approximations to produce suboptimal policies with adequate performance. These methods are collectively known by several essentially equivalent names: reinforcement learning, approximate dynamic programming, and neuro-dynamic programming. They have been at the forefront of research for the last 25 years, and they underlie, among others, the recent impressive successes of self-learning in the context of games such as chess and Go.

Our subject has benefited greatly from the interplay of ideas from optimal control and from artificial intelligence, as it relates to reinforcement learning and simulation-based neural network methods. One of the aims of the book is to explore the common boundary between these two fields and to form a bridge that is accessible to workers with a background in either field. Another aim is to organize coherently the broad mosaic of methods that have proved successful in practice while having a solid theoretical and/or logical foundation. This may help researchers and practitioners find their way through the maze of competing ideas that constitute the current state of the art.

This book relates to several of our other books: Neuro-Dynamic Programming (Athena Scientific, 1996), Dynamic Programming and Optimal Control (4th edition, Athena Scientific, 2017), Abstract Dynamic Programming (2nd edition, Athena Scientific, 2018), and Nonlinear Programming (Athena Scientific, 2016). However, the mathematical style of this book is somewhat different. While we provide a rigorous, albeit short, mathematical account of the theory of finite and infinite horizon dynamic programming, and some fundamental approximation methods, we rely more on intuitive explanations and less on proof-based insights. Moreover, our mathematical requirements are quite modest: calculus, a minimal use of matrix-vector algebra, and elementary probability (mathematically complicated arguments involving laws of large numbers and stochastic convergence are bypassed in favor of intuitive explanations).

The book illustrates the methodology with many examples and illustrations, and uses a gradual expository approach that proceeds along four directions:
(a) From exact DP to approximate DP: We first discuss exact DP algorithms, explain why they may be difficult to implement, and then use them as the basis for approximations (the exact recursion is sketched below).
(b) From finite horizon to infinite horizon problems: We first discuss finite horizon exact and approximate DP methodologies, which are intuitive and mathematically simple, and then progress to infinite horizon problems.
(c) From deterministic to stochastic models: We often discuss deterministic and stochastic problems separately, since deterministic problems are simpler and offer special advantages for some of our methods.
(d) From model-based to model-free implementations: We first discuss model-based implementations, and then identify schemes that can be appropriately modified to work with a simulator.
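To make the starting point of direction (a) concrete, the following is a minimal illustrative sketch (not taken from the book) of the exact finite-horizon DP backward recursion J_N(x) = g_N(x), J_k(x) = min_u E[g_k(x,u,w) + J_{k+1}(f_k(x,u,w))], applied to a small hypothetical inventory problem; the horizon, state and control ranges, demand distribution, and cost coefficients are all made-up example values.

```python
# Minimal illustration of the exact finite-horizon DP backward recursion
# on a tiny hypothetical inventory problem (all numbers are made up).

N = 3                      # horizon
STATES = range(0, 4)       # stock level 0..3
CONTROLS = range(0, 3)     # units ordered
DEMAND = [(0, 0.1), (1, 0.7), (2, 0.2)]   # (demand, probability)

def stage_cost(x, u, w):
    # purchase cost + holding cost + shortage penalty (illustrative numbers)
    return 1.0 * u + 0.5 * max(x + u - w, 0) + 3.0 * max(w - x - u, 0)

def next_state(x, u, w):
    return min(max(x + u - w, 0), 3)      # stock is clipped to [0, 3]

def terminal_cost(x):
    return 0.0

# Backward recursion: compute cost-to-go J[k][x] and an optimal policy mu[k][x].
J = [dict() for _ in range(N + 1)]
mu = [dict() for _ in range(N)]
for x in STATES:
    J[N][x] = terminal_cost(x)

for k in reversed(range(N)):
    for x in STATES:
        best_u, best_q = None, float("inf")
        for u in CONTROLS:
            q = sum(p * (stage_cost(x, u, w) + J[k + 1][next_state(x, u, w)])
                    for w, p in DEMAND)
            if q < best_q:
                best_u, best_q = u, q
        J[k][x], mu[k][x] = best_q, best_u

print("J_0:", J[0])    # optimal cost-to-go from each initial stock level
print("mu_0:", mu[0])  # optimal first-stage order quantity for each stock level
```

The point of the approximations discussed in the book is that this exact enumeration over states, controls, and disturbances becomes intractable as those sets grow, so the cost-to-go J is replaced by an approximation.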
The book is related to, and supplemented by, the companion research monograph Rollout, Policy Iteration, and Distributed Reinforcement Learning (Athena Scientific, 2020), which focuses more closely on several topics related to rollout, approximate policy iteration, multiagent problems, discrete and Bayesian optimization, and distributed computation; these topics are either discussed in less detail or not covered at all in the present book. The author's website contains class notes, and a series of videolectures and slides from a 2021 course at ASU, which address a selection of topics from both books.
Integral and Inverse Reinforcement Learning for Optimal Control Systems and Games

Integral and Inverse Reinforcement Learning for Optimal Control Systems and Games develops its learning techniques, motivated by applications to autonomous driving and microgrid systems, with breadth and depth. Integral reinforcement learning (RL) achieves model-free control without the system estimation, and the inevitable estimation errors, that system identification methods entail; novel inverse RL methods fill a gap for readers interested in data-driven, model-free solutions to inverse optimization and optimal control, imitation learning, and autonomous driving, among other areas. Graduate students will find that this book offers a thorough introduction to integral and inverse RL for feedback control, covering optimal regulation and tracking, disturbance rejection, and multiplayer and multiagent systems. For researchers, it provides a combination of theoretical analysis, rigorous algorithms, and a wide-ranging selection of examples. The book equips practitioners working in various domains – aircraft, robotics, power systems, and communication networks among them – with theoretical insights valuable in tackling the real-world challenges they face.
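As a rough illustration of the style of method the book covers (not code from the book), the following sketch applies an integral-RL-style policy iteration to a continuous-time LQR problem: the policy-evaluation step fits a quadratic value function from measured state trajectories and integrated costs, without using the drift matrix A, while the policy-improvement step uses only the input matrix B. The plant matrices, step sizes, interval length, and number of iterations are all made-up example values.

```python
# Illustrative sketch of integral-RL-style policy iteration for a
# continuous-time LQR problem. The drift matrix A is used only to simulate
# data; the learning loop itself never touches it.

import numpy as np

A = np.array([[0.0, 1.0], [-1.0, -0.5]])   # "true" plant (data generation only)
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

dt, T = 0.001, 0.05          # integration step and Bellman interval length

def quad_features(x):
    # Features for V(x) = x' P x with symmetric P: [x1^2, 2*x1*x2, x2^2]
    return np.array([x[0]**2, 2 * x[0] * x[1], x[1]**2])

def simulate_interval(x, K):
    """Roll the closed-loop system forward over one interval of length T,
    returning the terminal state and the integrated stage cost."""
    cost = 0.0
    for _ in range(int(round(T / dt))):
        u = -K @ x
        cost += (x @ Q @ x + u @ R @ u) * dt
        x = x + (A @ x + B @ u) * dt      # forward-Euler integration
    return x, cost

K = np.array([[0.0, 0.0]])               # initial (stabilizing) policy gain
for it in range(8):
    # Policy evaluation: fit theta in  phi(x_t)'theta - phi(x_{t+T})'theta = cost
    Phi, c = [], []
    rng = np.random.default_rng(it)
    for _ in range(30):
        x0 = rng.uniform(-2, 2, size=2)
        x1, cost = simulate_interval(x0, K)
        Phi.append(quad_features(x0) - quad_features(x1))
        c.append(cost)
    theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(c), rcond=None)
    P = np.array([[theta[0], theta[1]], [theta[1], theta[2]]])
    # Policy improvement (uses B and R, but still not A).
    K = np.linalg.solve(R, B.T @ P)
    print(f"iteration {it}: K = {K.ravel()}")
```

The gain sequence should approach the LQR-optimal gain for this plant, which is the "model-free" flavor of result the blurb alludes to: the value of the current policy is identified from data rather than from an identified model of A.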
Reinforcement Learning for Sequential Decision and Optimal Control

Have you ever wondered how AlphaZero learns to defeat the top human Go players? Do you have any clues about how an autonomous driving system can gradually develop self-driving skills beyond those of normal drivers? What is the key that enables AlphaStar to make decisions in StarCraft, a notoriously difficult strategy game with partial information and complex rules? The core mechanism underlying these recent technical breakthroughs is reinforcement learning (RL), a theory that helps an agent improve itself through continued interaction with its environment. In the past few years, the AI community has witnessed the phenomenal success of reinforcement learning in various fields, including chess, computer games, and robotic control. RL is also considered a promising and powerful tool for creating general artificial intelligence in the future.

As an interdisciplinary field of trial-and-error learning and optimal control, RL resembles how humans reinforce their intelligence by interacting with the environment, and it provides a principled solution for sequential decision making and optimal control in large-scale and complex problems. Since RL contains a wide range of new concepts and theories, scholars may be plagued by a number of questions: What is the inherent mechanism of reinforcement learning? What is the internal connection between RL and optimal control? How has RL evolved in the past few decades, and what are its milestones? How do we choose and implement practical and effective RL algorithms for real-world scenarios? What are the key challenges that RL faces today, and how can we solve them? What is the current trend of RL research? You can find answers to all of these questions in this book.

The purpose of the book is to help researchers and practitioners take a comprehensive view of RL and understand the in-depth connection between RL and optimal control. The book includes not only systematic and thorough explanations of theoretical basics but also methodical guidance on practical algorithm implementation. It aims to provide comprehensive coverage of both classic theories and recent achievements, and the content is carefully and logically organized, including basic topics such as the main concepts and terminologies of RL, the Markov decision process (MDP), Bellman’s optimality condition, Monte Carlo learning, temporal difference learning, stochastic dynamic programming, function approximation, policy gradient methods, approximate dynamic programming, and deep RL, as well as the latest advances in action and state constraints, safety guarantees, reference harmonization, robust RL, partially observable MDPs, multiagent RL, inverse RL, offline RL, and so on.
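As a small, self-contained illustration of one of the temporal-difference methods mentioned above (not an example from the book), the following sketch runs tabular Q-learning on a made-up one-dimensional corridor task; the environment, learning rate, discount factor, exploration rate, and episode counts are all arbitrary example values.

```python
# Minimal sketch of tabular Q-learning on a made-up 1-D corridor:
# states 0..4, the goal is state 4, actions move left or right,
# and the agent receives reward 1 on reaching the goal.

import random

N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                     # move left / move right
alpha, gamma, eps = 0.1, 0.95, 0.2     # step size, discount, exploration rate

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    """One environment transition: clip to the corridor, reward 1 at the goal."""
    s_next = min(max(s + a, 0), N_STATES - 1)
    return s_next, (1.0 if s_next == GOAL else 0.0), s_next == GOAL

for episode in range(300):
    s = random.randrange(N_STATES - 1)            # random non-goal start state
    for _ in range(100):                          # cap episode length
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda a_: Q[(s, a_)])
        s_next, r, done = step(s, a)
        # temporal-difference (Q-learning) update toward r + gamma * max_a' Q(s', a')
        best_next = 0.0 if done else max(Q[(s_next, a_)] for a_ in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next
        if done:
            break

# Greedy policy learned for each state (states 0-3 should point right, toward the goal).
print({s: max(ACTIONS, key=lambda a_: Q[(s, a_)]) for s in range(N_STATES)})
```

The same update rule, with the table replaced by a function approximator and the corridor replaced by a simulator, is the backbone of the deep RL methods the book goes on to cover.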