Learning And Reasoning In Hybrid Structured Spaces

Learning and Reasoning in Hybrid Structured Spaces

Artificial intelligence often has to deal with uncertain scenarios, such as a partially observed environment or noisy observations. Traditional probabilistic models, while principled in these contexts, cannot handle algebraic and logical constraints at the same time, and existing hybrid continuous/discrete models are typically limited in expressivity or offer no guarantees on their approximation error. This book, Learning and Reasoning in Hybrid Structured Spaces, discusses a recent and general formalism called Weighted Model Integration (WMI), which enables probabilistic modeling and inference in hybrid structured domains. WMI-based inference algorithms differ from most alternatives in that probabilities are computed over a structured support involving both logical and algebraic relationships between variables. While research in this area is still at an early stage, there is growing interest in scalable inference procedures and effective learning algorithms for this setting. The book details some of the most impactful contributions to WMI-based inference from the last five years and, by providing a gentle introduction to the main concepts behind WMI, will be useful for theoretical researchers and practitioners alike.
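To make the idea of WMI concrete, the following minimal sketch computes a toy weighted model integral over a support that mixes a Boolean variable with a linear arithmetic constraint; the formula, weight function, and query below are illustrative assumptions, not examples taken from the book.

```python
# Toy Weighted Model Integration (WMI) sketch: sum over Boolean assignments,
# integrate the weight over the arithmetically feasible region of each one.
# The support, weight, and query are made up for illustration only.
from scipy.integrate import quad

# Support: (0 <= x <= 1) AND (A -> x <= 0.5), with Boolean A and real x.
# Weight: w(x, A) = 2*x if A else 1.
def weight(x, A):
    return 2.0 * x if A else 1.0

def feasible_interval(A, extra_lo=0.0, extra_hi=1.0):
    """Interval of x satisfying the support (plus an optional extra bound)
    under a fixed truth value of A; an empty region is returned as (0, 0)."""
    lo, hi = max(0.0, extra_lo), min(1.0, extra_hi)
    if A:                                   # A -> x <= 0.5
        hi = min(hi, 0.5)
    return (lo, hi) if lo < hi else (0.0, 0.0)

def wmi(extra_lo=0.0, extra_hi=1.0):
    total = 0.0
    for A in (True, False):
        lo, hi = feasible_interval(A, extra_lo, extra_hi)
        val, _ = quad(lambda x: weight(x, A), lo, hi)
        total += val
    return total

Z = wmi()                           # partition function: 0.25 + 1.0 = 1.25
p_query = wmi(extra_lo=0.5) / Z     # P(x >= 0.5) = 0.5 / 1.25 = 0.4
print(Z, p_query)
```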
Deep Learning with Relational Logic Representations

Deep learning has been used with great success in a number of diverse applications, ranging from image processing to game playing, and the fast progress of this learning paradigm has even been seen as paving the way towards general artificial intelligence. However, current deep learning models are still fundamentally limited in many ways. This book, ‘Deep Learning with Relational Logic Representations’, addresses the limited expressiveness of the tensor-based representation used in standard deep learning by generalizing it to relational representations based on mathematical logic. This is the natural formalism for the relational data omnipresent in the interlinked structures of the Internet and relational databases, as well as for background knowledge that often takes the form of relational rules and constraints. Such data and knowledge cannot be properly exploited with standard neural networks, so the book introduces a new declarative deep relational learning framework called Lifted Relational Neural Networks, which generalizes standard deep learning models to the relational setting by means of a ‘lifting’ paradigm known from Statistical Relational Learning. The author explains how this approach allows for effective end-to-end deep learning with relational data and knowledge, introduces several enhancements and optimizations of the framework, and demonstrates its expressiveness with various novel deep relational learning concepts, including efficient generalizations of popular contemporary models such as Graph Neural Networks. The framework is evaluated across various learning scenarios and benchmarks, including its computational efficiency, and the book will be of interest to all those interested in the theory and practice of advancing the representations used by modern deep learning architectures.
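As a rough illustration of the kind of relational computation involved, the sketch below runs one round of message passing over a small graph in NumPy. It is a generic GNN-style step, not the Lifted Relational Neural Networks framework itself; the entities, relation, and weights are illustrative assumptions.

```python
# One round of message passing over a relational structure (a graph),
# illustrating computation that fixed-size tensor models cannot express
# directly. Generic GNN-style sketch; not the LRNN framework from the book.
import numpy as np

# Relational data: entities with feature vectors and a binary relation (edges).
features = {"alice": np.array([1.0, 0.0]),
            "bob":   np.array([0.0, 1.0]),
            "carol": np.array([1.0, 1.0])}
edges = [("alice", "bob"), ("bob", "carol"), ("carol", "alice")]

W_self, W_msg = np.eye(2), 0.5 * np.eye(2)   # toy weight matrices

def message_passing_step(features, edges):
    """Each entity aggregates messages from its relational neighbours."""
    updated = {}
    for node, h in features.items():
        neighbours = [features[src] for src, dst in edges if dst == node]
        agg = np.sum(neighbours, axis=0) if neighbours else np.zeros_like(h)
        updated[node] = np.tanh(W_self @ h + W_msg @ agg)
    return updated

print(message_passing_step(features, edges))
```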
Exploiting Environment Configurability in Reinforcement Learning

In recent decades, Reinforcement Learning (RL) has emerged as an effective approach to address complex control tasks. In a Markov Decision Process (MDP), the framework typically used, the environment is assumed to be a fixed entity that cannot be altered externally. There are, however, several real-world scenarios in which the environment can be modified to a limited extent. This book, Exploiting Environment Configurability in Reinforcement Learning, aims to formalize and study diverse aspects of environment configuration. In a traditional MDP, the agent perceives the state of the environment and performs actions; as a consequence, the environment transitions to a new state and generates a reward signal. The goal of the agent is to learn a policy, i.e., a prescription of actions that maximizes the long-term reward. Although environment configuration arises quite often in real applications, the topic has received little attention in the literature. The contributions in the book are theoretical, algorithmic, and experimental, and can be broadly subdivided into three parts. The first part introduces the novel formalism of Configurable Markov Decision Processes (Conf-MDPs) to model the configuration opportunities offered by the environment. The second part focuses on the cooperative Conf-MDP setting and investigates the problem of finding an agent policy and an environment configuration that jointly maximize the long-term reward. The third part addresses two specific applications of the Conf-MDP framework: policy space identification and control frequency adaptation. The book will be of interest to all those using RL as part of their work.
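To give a flavour of the cooperative setting, the sketch below brute-forces a joint choice of environment configuration and policy on a toy two-state MDP: for each candidate configuration it solves the resulting MDP with value iteration and keeps the best pair. The environment, the configuration parameter, and the exhaustive search are illustrative assumptions, not the algorithms developed in the book.

```python
# Toy cooperative Conf-MDP sketch: jointly pick an environment configuration
# and a policy maximizing long-term reward. Illustrative only.
import numpy as np

def value_iteration(P, R, gamma=0.95, iters=500):
    """P[a, s, s'] transition probabilities, R[s, a] rewards;
    returns the optimal state values and a greedy policy."""
    V = np.zeros(R.shape[0])
    for _ in range(iters):
        Q = R + gamma * np.einsum("ast,t->sa", P, V)   # Q[s, a]
        V = Q.max(axis=1)
    return V, Q.argmax(axis=1)

def build_mdp(p_stay):
    """Environment with one configurable parameter: the probability of
    remaining in the rewarding state 1 when the agent plays 'stay'."""
    # actions: 0 = move, 1 = stay
    P = np.zeros((2, 2, 2))
    P[0] = [[0.0, 1.0], [1.0, 0.0]]                # 'move' flips the state
    P[1] = [[1.0, 0.0], [1.0 - p_stay, p_stay]]    # 'stay' is configurable
    R = np.array([[0.0, 0.0], [1.0, 1.0]])         # reward for being in s = 1
    return P, R

# Joint (configuration, policy) selection by brute force over configurations.
best = max(((p,) + value_iteration(*build_mdp(p)) for p in (0.5, 0.7, 0.9)),
           key=lambda t: t[1][0])                  # value of the start state 0
p_star, V_star, policy_star = best
print(p_star, V_star, policy_star)
```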