Value-Based Planning for Teams of Agents in Stochastic Partially Observable Environments



Value-Based Planning for Teams of Agents in Stochastic Partially Observable Environments

Author: Frans Oliehoek

Language: English

Publisher: Amsterdam University Press

Release Date: 2010







In this thesis, decision-making problems are formalized using a stochastic discrete-time model called the decentralized partially observable Markov decision process (Dec-POMDP).
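For reference, the Dec-POMDP model underlying this thesis is conventionally defined as a tuple; the following is the standard textbook formulation, given here as a sketch in common notation rather than a quotation from the thesis:

\[ \mathcal{M} = \langle \mathcal{D}, \mathcal{S}, \{\mathcal{A}_i\}, T, R, \{\Omega_i\}, O, h, b^0 \rangle \]

where \(\mathcal{D} = \{1, \dots, n\}\) is the set of agents; \(\mathcal{S}\) is a finite set of states; \(\mathcal{A}_i\) is the action set of agent \(i\), inducing joint actions \(a \in \mathcal{A} = \mathcal{A}_1 \times \dots \times \mathcal{A}_n\); \(T(s' \mid s, a)\) gives the state-transition probabilities; \(R(s, a)\) is a single team reward shared by all agents; \(\Omega_i\) is the observation set of agent \(i\), inducing joint observations \(o \in \Omega = \Omega_1 \times \dots \times \Omega_n\); \(O(o \mid a, s')\) gives the joint-observation probabilities; \(h\) is the planning horizon; and \(b^0\) is the initial state distribution. Each agent \(i\) perceives only its own observation \(o_i\), so individual policies map observation histories to actions while the team jointly maximizes the expected cumulative reward.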

A Concise Introduction to Decentralized POMDPs

Author: Frans A. Oliehoek

Language: English

Publisher: Springer

Release Date: 2016-06-03







This book introduces multiagent planning under uncertainty as formalized by decentralized partially observable Markov decision processes (Dec-POMDPs). The intended audience is researchers and graduate students working in areas of artificial intelligence related to sequential decision making: reinforcement learning, decision-theoretic planning for single agents, classical multiagent planning, decentralized control, and operations research.

Multi-Objective Decision Making

Author: Diederik M. Roijers

Language: English

Publisher: Springer Nature

Release Date: 2022-05-31







Many real-world decision problems have multiple objectives. For example, when choosing a medical treatment plan, we want to maximize the efficacy of the treatment but also minimize its side effects. These objectives typically conflict: we can often increase the efficacy of the treatment, but at the cost of more severe side effects. In this book, we outline how to deal with multiple objectives in decision-theoretic planning and reinforcement learning algorithms. To illustrate this, we employ the popular problem classes of multi-objective Markov decision processes (MOMDPs) and multi-objective coordination graphs (MO-CoGs).

First, we discuss different use cases for multi-objective decision making and why they often necessitate explicitly multi-objective algorithms. We advocate a utility-based approach to multi-objective decision making, i.e., that what constitutes an optimal solution to a multi-objective decision problem should be derived from the available information about user utility. We show how different assumptions about user utility, and about what types of policies are allowed, lead to different solution concepts, which we organize in a taxonomy of multi-objective decision problems.

Second, we show how to create new methods for multi-objective decision making using existing single-objective methods as a basis. Focusing on planning, we describe two ways of creating multi-objective algorithms: in the inner loop approach, the inner workings of a single-objective method are adapted to work with multi-objective solution concepts; in the outer loop approach, a wrapper is created around a single-objective method that solves the multi-objective problem as a series of single-objective problems (a minimal sketch of the outer loop approach appears below). After discussing the creation of such methods for the planning setting, we discuss how these approaches apply to the learning setting.

Next, we discuss three promising application domains for multi-objective decision making algorithms: energy, health, and infrastructure and transportation. Finally, we conclude by outlining important open problems and promising future directions.
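To make the outer loop approach concrete, here is a minimal, self-contained Python sketch. It is not code from the book: the toy problem (three actions with fixed two-dimensional reward vectors) and the names ARM_VALUES, solve_single_objective, and outer_loop_ccs are illustrative assumptions. The wrapper sweeps linear scalarization weights, solves each resulting single-objective problem, and collects the distinct optimal value vectors, approximating a convex coverage set (CCS).

import numpy as np

# Toy two-objective problem: each action yields a fixed reward vector,
# e.g. (treatment efficacy, absence of side effects). Values are
# purely illustrative.
ARM_VALUES = np.array([
    [1.0, 0.0],  # aggressive treatment: high efficacy, severe side effects
    [0.7, 0.5],  # balanced treatment
    [0.2, 0.9],  # mild treatment: low efficacy, few side effects
])

def solve_single_objective(weights):
    """Single-objective 'solver' for the toy problem: pick the action
    maximizing the linearly scalarized value w . v. In a real MOMDP,
    a full planner (e.g., value iteration) would take this role."""
    scalarized = ARM_VALUES @ weights
    best = int(np.argmax(scalarized))
    return best, ARM_VALUES[best]

def outer_loop_ccs(num_weights=101):
    """Outer loop approach: solve a series of single-objective problems,
    one per scalarization weight, and collect the distinct optimal
    value vectors. The result approximates the convex coverage set."""
    ccs = {}
    for w0 in np.linspace(0.0, 1.0, num_weights):
        w = np.array([w0, 1.0 - w0])  # weight vector on the 2-simplex
        policy, value = solve_single_objective(w)
        ccs[policy] = value  # keyed by policy to deduplicate
    return ccs

if __name__ == "__main__":
    for policy, value in sorted(outer_loop_ccs().items()):
        print(f"action {policy}: value vector {value}")

On this toy problem, each of the three actions is optimal for some weight, so the sweep recovers all three value vectors; in larger problems the same wrapper principle applies, with a genuine single-objective planner in place of the argmax.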