Adversarial Robustness For Machine Learning



Adversarial Robustness for Machine Learning

Author: Pin-Yu Chen

Language: en

Publisher: Academic Press

Release Date: 2022-08-20


Adversarial Robustness for Machine Learning summarizes recent progress on this topic and introduces popular algorithms for adversarial attack, defense, and verification. Sections cover adversarial attack, verification, and defense, focusing mainly on image classification, the standard benchmark in the adversarial robustness community. Other sections discuss adversarial examples beyond image classification, threat models beyond test-time attacks, and applications of adversarial robustness. For researchers, the book provides a thorough literature review of the latest progress in the area and a reference for future research; it can also serve as a textbook for graduate courses on adversarial robustness or trustworthy machine learning. While machine learning (ML) algorithms have achieved remarkable performance in many applications, recent studies have demonstrated their lack of robustness to adversarial perturbations. This lack of robustness raises security concerns for ML models in real-world applications such as self-driving cars, robotic control, and healthcare systems.

- Summarizes the field of adversarial robustness for machine learning models
- Provides a clearly explained, self-contained reference
- Introduces formulations, algorithms, and intuitions
- Includes applications based on adversarial robustness
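
For a concrete feel for the test-time attacks the book surveys, the sketch below implements the fast gradient sign method (FGSM), one of the most widely used attack algorithms against image classifiers. It is a minimal PyTorch illustration under assumed conventions (a differentiable classifier `model`, inputs scaled to [0, 1], labels `y`, an L-infinity budget `eps`), not the book's own code.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8 / 255):
    """One-step fast gradient sign method (FGSM).

    Perturbs input x in the direction that increases the
    cross-entropy loss, within an L-infinity budget of eps.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Move each pixel by +/- eps along the sign of the loss gradient,
    # then clip back to the valid image range [0, 1].
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Iterating this signed-gradient step with a smaller step size and re-projecting onto the eps-ball yields the stronger PGD attack, the de facto standard benchmark in this literature.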

Machine Learning Algorithms

Author: Fuwei Li

Language: en

Publisher: Springer Nature

Release Date: 2022-11-14


This book demonstrates optimal adversarial attacks against several important signal processing algorithms. By presenting optimal attacks on wireless sensor networks, array signal processing, principal component analysis, and related settings, the authors reveal how robust these signal processing algorithms are to adversarial attacks. Since data quality is crucial in signal processing, an adversary who can poison the data poses a significant threat, making it necessary and urgent to investigate how machine learning algorithms used in signal processing behave under adversarial attacks. The book examines the adversarial robustness of three machine learning algorithms commonly used in signal processing: linear regression, LASSO-based feature selection, and principal component analysis (PCA). For linear regression, the authors derive the optimal poisoning data sample and the optimal feature modifications, and demonstrate the effectiveness of the attack against a wireless distributed learning system. They then extend the analysis from linear regression to LASSO-based feature selection and study the best strategy for misleading the learning system into selecting the wrong features; the optimal attack strategy is found by solving a bi-level optimization problem, and its influence on array signal processing and weather data analysis is illustrated. Finally, the authors consider the adversarial robustness of the subspace learning problem, examining the optimal modification strategy under energy constraints for misleading the PCA-based subspace learning algorithm. The book targets researchers working in machine learning, electronic information, and information theory, as well as advanced-level students studying these subjects. R&D engineers in machine learning, adversarial machine learning, and robust machine learning, and technical consultants working on the security and robustness of machine learning, will also find it a useful reference guide.
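
To make the bi-level structure mentioned above concrete, here is a toy NumPy sketch of a poisoning attack on least-squares regression: the inner problem fits the model on the poisoned data, and the outer problem tunes the label of a single injected point to drag the learned slope toward an attacker-chosen target. The setup, names, and the finite-difference outer search are illustrative assumptions, not the authors' optimal-attack derivation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean training data for a 1-D linear model: y = 2x + noise.
X = rng.normal(size=(50, 1))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=50)

def fit(X, y):
    """Inner problem: ordinary least squares, solved exactly."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def attacker_loss(poison_y, poison_x=np.array([[3.0]])):
    """Outer problem: distance of the learned slope from the
    attacker's target slope (here, 0) after injecting one point."""
    Xp = np.vstack([X, poison_x])
    yp = np.append(y, poison_y)
    return (fit(Xp, yp)[0] - 0.0) ** 2

# Crude outer-level descent on the poison label via finite differences.
py, step, h = 0.0, 50.0, 1e-4
for _ in range(100):
    grad = (attacker_loss(py + h) - attacker_loss(py - h)) / (2 * h)
    py -= step * grad

print("clean slope:   ", fit(X, y)[0])
print("poisoned slope:", fit(np.vstack([X, [[3.0]]]), np.append(y, py))[0])
```

Because the inner least-squares problem has a closed-form solution, the outer objective is a smooth function of the poison point, which is what makes the bi-level formulation tractable in this setting.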

Evaluating and Understanding Adversarial Robustness in Deep Learning

Author: Jinghui Chen

Language: en

Publisher:

Release Date: 2021


Deep neural networks (DNNs) have driven breakthroughs across many areas of artificial intelligence. However, recent studies show that DNNs are vulnerable to adversarial examples: a tiny perturbation of an image, almost invisible to human eyes, can mislead a well-trained image classifier into misclassification. This raises serious security and trustworthiness concerns about the robustness of deep neural networks in solving real-world challenges. Researchers have worked on this problem for years, and it has led to a vigorous arms race between heuristic defenses, which propose ways to defend against existing attacks, and newly devised attacks able to penetrate those defenses. As the arms race continues, it becomes ever more crucial to evaluate model robustness accurately and efficiently under different threat models and to identify "falsely" robust models that may give us a false sense of robustness. At the same time, despite the rapid development of heuristic defenses, their practical robustness remains far from satisfactory, and there has been little algorithmic improvement in defenses in recent years. This suggests that we still lack an understanding of the fundamentals of adversarial robustness in deep learning, which may prevent us from designing more powerful defenses.

The overarching goal of this research is to enable accurate evaluation of model robustness under different practical settings and to establish a deeper understanding of other factors in the machine learning training pipeline that affect model robustness. Specifically, we develop efficient and effective Frank-Wolfe attack algorithms under white-box and black-box settings, as well as a hard-label adversarial attack, RayS, which is capable of detecting "falsely" robust models. To understand adversarial robustness, we theoretically study the relationship between model robustness and data distributions, between model robustness and model architectures, and between model robustness and loss smoothness. The techniques proposed in this dissertation form a line of research that deepens our understanding of adversarial robustness and can guide the design of better and faster robust training methods.
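
As a sketch of the Frank-Wolfe attack family mentioned above: instead of projected gradient steps, Frank-Wolfe uses a linear maximization oracle, which over an L-infinity ball reduces to a signed-gradient vertex. The PyTorch code below follows the generic white-box Frank-Wolfe template under assumed conventions (a differentiable `model`, inputs in [0, 1]); it is illustrative rather than the dissertation's exact white-box, black-box, or RayS algorithms.

```python
import torch
import torch.nn.functional as F

def frank_wolfe_attack(model, x0, y, eps=8 / 255, steps=20):
    """Frank-Wolfe style L-infinity attack (illustrative sketch).

    Instead of projecting after each gradient step, each iteration
    solves a linear maximization oracle over the eps-ball around x0
    (its solution is the signed-gradient vertex x0 + eps * sign(g))
    and moves the iterate toward that vertex with a decaying step.
    """
    x = x0.clone().detach()
    for t in range(steps):
        x.requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        (grad,) = torch.autograd.grad(loss, x)
        v = x0 + eps * grad.sign()      # vertex of the L-inf ball
        gamma = 2.0 / (t + 2.0)         # classic Frank-Wolfe step size
        x = ((1.0 - gamma) * x + gamma * v).clamp(0.0, 1.0).detach()
    return x
```

Because every iterate is a convex combination of points inside the eps-ball, the attack stays feasible by construction, which is the projection-free property that makes Frank-Wolfe attractive in this setting.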