Online Learning Via Offline Greedy Algorithms

Motivated by online decision-making in time-varying combinatorial environments, we study the problem of transforming offline algorithms to their online counterparts. We focus on offline combinatorial problems that are amenable to a constant factor approximation using a greedy algorithm that is robust to local errors. For such problems, we provide a general framework that efficiently transforms offline robust greedy algorithms to online ones using Blackwell approachability. We show that the resulting online algorithms have $O(\sqrt{T})$ (approximate) regret under the full information setting. We further introduce a bandit extension of Blackwell approachability that we call Bandit Blackwell approachability. We leverage this notion to transform offline robust greedy algorithms into online algorithms with $O(T^{2/3})$ (approximate) regret in the bandit setting. Demonstrating the flexibility of our framework, we apply our offline-to-online transformation to several problems at the intersection of revenue management, market design, and online optimization, including product ranking optimization in online platforms, reserve price optimization in auctions, and submodular maximization. We also extend our reduction to greedy-like first-order methods used in continuous optimization, such as those used for maximizing continuous strong DR monotone submodular functions subject to convex constraints. We show that our transformation, when applied to these applications, leads to new regret bounds or improves the current known bounds. We complement our theoretical studies with numerical simulations for two of our applications, in both of which the performance of our transformations exceeds the theoretical guarantees on practical instances.
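To make the "offline robust greedy" ingredient concrete, the sketch below shows the classic offline greedy for monotone submodular maximization under a cardinality constraint (illustrated with a max-coverage objective), which is the kind of constant-factor greedy subroutine the framework above takes as input. The function name, coverage objective, and toy data are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch only: the standard (1 - 1/e)-approximation greedy for
# monotone submodular maximization under a cardinality constraint, using a
# coverage objective. The names and data below are hypothetical examples.


def greedy_max_coverage(sets_by_id, k):
    """Greedily pick up to k sets maximizing the number of covered elements.

    sets_by_id: dict mapping a candidate id to the elements it covers
        (coverage is a monotone submodular objective).
    k: cardinality budget.
    """
    chosen, covered = [], set()
    candidates = dict(sets_by_id)
    for _ in range(k):
        # Pick the candidate with the largest marginal coverage gain.
        best_id, best_gain = None, 0
        for cid, elems in candidates.items():
            gain = len(set(elems) - covered)
            if gain > best_gain:
                best_id, best_gain = cid, gain
        if best_id is None:  # no remaining candidate adds new elements
            break
        chosen.append(best_id)
        covered |= set(candidates.pop(best_id))
    return chosen, covered


if __name__ == "__main__":
    sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6, 7}, "d": {1, 7}}
    picked, covered = greedy_max_coverage(sets, k=2)
    print(picked, covered)  # ['c', 'a'] covering {1, 2, 3, 4, 5, 6, 7}
```

In the framework described above, each step of such a greedy is what gets replaced by an online (Blackwell-approachability-based) decision rule, so that the local robustness of the offline greedy translates into the stated approximate-regret guarantees.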
Beyond the Worst-Case Analysis of Algorithms

Author: Tim Roughgarden
Language: en
Publisher: Cambridge University Press
Release Date: 2021-01-14
Introduces exciting new methods for assessing algorithms for problems ranging from clustering to linear programming to neural networks.
Mathematical Foundations of Reinforcement Learning

This book provides a mathematical yet accessible introduction to the fundamental concepts, core challenges, and classic reinforcement learning algorithms. It aims to help readers understand the theoretical foundations of algorithms, providing insights into their design and functionality. Numerous illustrative examples are included throughout. The mathematical content is carefully structured to ensure readability and approachability. The book is divided into two parts. The first part is on the mathematical foundations of reinforcement learning, covering topics such as the Bellman equation, Bellman optimality equation, and stochastic approximation. The second part explicates reinforcement learning algorithms, including value iteration and policy iteration, Monte Carlo methods, temporal-difference methods, value function methods, policy gradient methods, and actor-critic methods. With its comprehensive scope, the book will appeal to undergraduate and graduate students, post-doctoral researchers, lecturers, industrial researchers, and anyone interested in reinforcement learning.
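As a small illustration of the classic algorithms covered in the book's second part, the sketch below runs value iteration, i.e., repeated application of the Bellman optimality update, on a toy two-state MDP. The transition model, rewards, and discount factor are made up for illustration and are not examples from the book.

```python
# Illustrative sketch: value iteration on a hypothetical two-state MDP.
# P[s][a] is a list of (probability, next_state, reward) triples; all values
# below are invented for demonstration purposes.
import numpy as np

P = {
    0: {0: [(1.0, 0, 0.0)], 1: [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
    1: {0: [(1.0, 0, 0.0)], 1: [(1.0, 1, 2.0)]},
}
gamma = 0.9  # discount factor

V = np.zeros(len(P))
for _ in range(1000):
    # Bellman optimality update: V(s) <- max_a sum_{s'} p * (r + gamma * V(s'))
    V_new = np.array([
        max(sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a]) for a in P[s])
        for s in P
    ])
    if np.max(np.abs(V_new - V)) < 1e-8:  # stop once the update has converged
        V = V_new
        break
    V = V_new

# Greedy policy extracted from the converged value function
policy = {
    s: max(P[s], key=lambda a: sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a]))
    for s in P
}
print(V, policy)
```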