Adversarial Training for Improving the Robustness of Deep Neural Networks

Since 2013, Deep Neural Networks (DNNs) have reached human-level performance on various benchmarks, making it essential to ensure their safety and reliability. A recent line of work questions the robustness of deep learning models, showing that adversarial samples with human-imperceptible noise can easily fool DNNs. Since then, many strategies have been proposed to improve the robustness of DNNs against such adversarial perturbations. Among these defense strategies, adversarial training (AT) is one of the most recognized methods and consistently yields state-of-the-art performance. It treats adversarial samples as augmented data and uses them in model optimization. Despite its promising results, AT has two shortcomings: (1) poor generalizability on adversarial data (i.e., a large robustness gap between training and testing data), and (2) a sizable drop in the model's standard performance. This thesis tackles these drawbacks and introduces two AT strategies. To improve the generalizability of AT-trained models, the first part of the thesis introduces a representation-similarity-based AT strategy, namely self-paced adversarial training (SPAT). We investigate the imbalanced semantic similarity among categories in natural images and find that DNN models are easily fooled by adversarial samples from their hard class pairs. With this insight, we propose SPAT to adaptively re-weight training samples during model optimization, forcing AT to focus on data from hard class pairs. To address the second problem in AT, the large performance drop on clean data, the second part of this thesis asks: to what extent can a model's robustness be improved without sacrificing standard performance?
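The core AT loop described above, adversarial samples used as augmented training data, can be illustrated with a minimal sketch. This is not the thesis's implementation: the model is plain logistic regression, the attack is one-step FGSM, and the `weights` argument is a hypothetical stand-in for the kind of per-sample re-weighting SPAT performs.

```python
import numpy as np

def sigmoid(z):
    # Numerically stable logistic function.
    return 0.5 * (1.0 + np.tanh(0.5 * z))

def fgsm(w, b, x, y, eps):
    """One-step FGSM: perturb x along the sign of the input gradient."""
    p = sigmoid(x @ w + b)
    grad_x = np.outer(p - y, w)          # d(loss)/dx for the logistic loss
    return x + eps * np.sign(grad_x)

def adversarial_train(x, y, eps=0.1, lr=0.5, epochs=200, weights=None):
    """Toy AT loop: train on FGSM-perturbed inputs each epoch.

    `weights` is an illustrative per-sample weight vector (SPAT-style
    re-weighting would adapt these during training; here they are fixed).
    """
    n, d = x.shape
    w, b = np.zeros(d), 0.0
    if weights is None:
        weights = np.ones(n)
    for _ in range(epochs):
        x_adv = fgsm(w, b, x, y, eps)    # adversarial samples as augmented data
        p = sigmoid(x_adv @ w + b)
        err = weights * (p - y)          # re-weighted error signal
        w -= lr * x_adv.T @ err / n
        b -= lr * err.mean()
    return w, b

# Toy usage: two well-separated Gaussian blobs.
rng = np.random.default_rng(0)
x = np.vstack([rng.normal(-2, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])
w, b = adversarial_train(x, y, eps=0.1)
x_adv = fgsm(w, b, x, y, eps=0.1)
robust_acc = ((sigmoid(x_adv @ w + b) > 0.5) == y).mean()
```

The "generalization gap" the thesis targets would show up here as robust accuracy on held-out data falling well below robust accuracy on the training set.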
Toward this goal, we propose a simple yet effective transfer-learning-based adversarial training strategy that disentangles the negative effect of adversarial samples on the model's standard performance. In addition, we introduce a training-friendly adversarial attack algorithm that boosts adversarial robustness without adding significant training complexity. Extensive experiments demonstrate that, compared to prior art, this training strategy leads to a more robust model while preserving the model's standard accuracy on clean data.
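One generic way to realize a transfer-learning flavor of AT, not necessarily the thesis's exact algorithm, is to keep a pretrained feature extractor frozen and adversarially train only the classifier head, so the features' clean-data behavior is untouched. The sketch below uses a fixed random projection as a stand-in for clean-pretrained features; all names and the setup are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    # Numerically stable logistic function.
    return 0.5 * (1.0 + np.tanh(0.5 * z))

rng = np.random.default_rng(1)
W_feat = rng.normal(size=(2, 8))         # frozen "pretrained" feature map

def features(x):
    return np.tanh(x @ W_feat)           # never updated below

def train_head_adversarially(x, y, eps=0.1, lr=0.5, epochs=300):
    """Adversarially train only the linear head on top of frozen features."""
    w, b = np.zeros(8), 0.0
    for _ in range(epochs):
        # FGSM in input space, backpropagated through the frozen features.
        h = sigmoid(features(x) @ w + b) - y          # per-sample error
        dh = np.outer(h, w) * (1 - features(x) ** 2)  # through tanh
        grad_x = dh @ W_feat.T
        x_adv = x + eps * np.sign(grad_x)
        # Update only the head, using the adversarial samples.
        h_adv = features(x_adv)
        err = sigmoid(h_adv @ w + b) - y
        w -= lr * h_adv.T @ err / len(y)
        b -= lr * err.mean()
    return w, b

# Toy usage: the frozen features keep clean accuracy high while the head
# is hardened against FGSM perturbations.
x = np.vstack([rng.normal(-2, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])
w, b = train_head_adversarially(x, y)
clean_acc = ((sigmoid(features(x) @ w + b) > 0.5) == y).mean()
```

The design point this illustrates is the disentangling mentioned above: adversarial gradients never touch the feature extractor, so whatever standard performance the pretrained features support is preserved by construction.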
Strengthening Deep Neural Networks

As deep neural networks (DNNs) become increasingly common in real-world applications, the potential to deliberately "fool" them with data that wouldn't trick a human presents a new attack vector. This practical book examines real-world scenarios where DNNs, the algorithms intrinsic to much of AI, are used daily to process image, audio, and video data. Author Katy Warr considers attack motivations, the risks posed by this adversarial input, and methods for increasing AI robustness to these attacks. If you're a data scientist developing DNN algorithms, a security architect interested in how to make AI systems more resilient to attack, or someone fascinated by the differences between artificial and biological perception, this book is for you.
- Delve into DNNs and discover how they could be tricked by adversarial input
- Investigate methods used to generate adversarial input capable of fooling DNNs
- Explore real-world scenarios and model the adversarial threat
- Evaluate neural network robustness and learn methods to increase the resilience of AI systems to adversarial data
- Examine some ways in which AI might become better at mimicking human perception in years to come
An Introduction to Computer Security

Covers: elements of computer security; roles and responsibilities; common threats; computer security policy; computer security program and risk management; security and planning in the computer system life cycle; assurance; personnel/user issues; preparing for contingencies and disasters; computer security incident handling; awareness, training, and education; physical and environmental security; identification and authentication; logical access control; audit trails; cryptography; and assessing and mitigating the risks to a hypothetical computer system.