The Alignment Problem Machine Learning And Human Values Epub

The Alignment Problem: Machine Learning and Human Values

Author: Brian Christian
language: en
Publisher: W. W. Norton & Company
Release Date: 2020-10-06
"If you’re going to read one book on artificial intelligence, this is the one." —Stephen Marche, New York Times A jaw-dropping exploration of everything that goes wrong when we build AI systems and the movement to fix them. Today’s “machine-learning” systems, trained on data, are so effective that we’ve invited them to see and hear for us—and to make decisions on our behalf. But alarm bells are ringing. Recent years have seen an eruption of concern as the field of machine learning advances. When the systems we attempt to teach will not, in the end, do what we want or what we expect, ethical and potentially existential risks emerge. Researchers call this the alignment problem. Systems cull résumés until, years later, we discover that they have inherent gender biases. Algorithms decide bail and parole—and appear to assess Black and White defendants differently. We can no longer assume that our mortgage application, or even our medical tests, will be seen by human eyes. And as autonomous vehicles share our streets, we are increasingly putting our lives in their hands. The mathematical and computational models driving these changes range in complexity from something that can fit on a spreadsheet to a complex system that might credibly be called “artificial intelligence.” They are steadily replacing both human judgment and explicitly programmed software. In best-selling author Brian Christian’s riveting account, we meet the alignment problem’s “first-responders,” and learn their ambitious plan to solve it before our hands are completely off the wheel. In a masterful blend of history and on-the-ground reporting, Christian traces the explosive growth in the field of machine learning and surveys its current, sprawling frontier. Readers encounter a discipline finding its legs amid exhilarating and sometimes terrifying progress. Whether they—and we—succeed or fail in solving the alignment problem will be a defining human story.
The Alignment Problem offers an unflinching reckoning with humanity’s biases and blind spots, our own unstated assumptions and often contradictory goals. A dazzlingly interdisciplinary work, it takes a hard look not only at our technology but at our culture—and finds a story by turns harrowing and hopeful.
Summary of Brian Christian’s The Alignment Problem

Buy now to get the main key ideas from Brian Christian’s The Alignment Problem. As machine-learning systems grow not only more prevalent but also more powerful, humans want to ensure that they understand us and do what we want, eliminating the possibility of catastrophic divergence. In the field of computer science, this question is known as the alignment problem. In The Alignment Problem (2020), Brian Christian raises questions of safety and ethics in a world where humans are turning into machines and machines are turning into humans. He discusses tools that, through imitation, curiosity, inference, and shaping, exhibit human skills without being programmed to do so. The future of machine learning holds risks, but also great promise.
Using Computational Narratology to Address the Artificial Intelligence Value Alignment Problem

This thesis provides a novel conceptual contribution to artificial intelligence (AI) safety by finding a tractable method for addressing the AI value alignment problem: the creation of more complete audience models using narrative information extraction techniques from the field of computational narratology. Through a thorough analysis of results from the field of computational narratology, I show that research into narrative for autonomous agents can contribute to solving the AI value alignment problem. In short, we can create artificial intelligence systems that automatically act in the best interest of humanity by teaching them to read and understand stories.

The novelty of this thesis lies in the combination of two disparate academic fields: AI safety and computational narratology. Reviewing the current work and ongoing issues in both fields, I show that methods used in computational narratology to model stories can be used to address the value alignment problem from the field of AI safety. In Chapter 2, I show why value alignment is the best solution to the problem of controlling intelligent agents. In Chapter 2, I discuss how stories encode tacit human values, and how the creation of a better audience model will contribute to solving the value alignment problem. In Chapter 3, I present two case studies providing evidence that value alignment from narrative information extraction is not only viable but effective. Finally, I conclude by acknowledging the shortcomings of the field and pressing areas of future work.