Enhancing Adversarial Robustness of Deep Neural Networks

Author : Jeffrey Zhang (M. Eng.)
Release : 2019

Logit-based regularization and pretrain-then-tune are two approaches that have recently been shown to enhance the adversarial robustness of machine learning models. In the realm of regularization, Zhang et al. (2019) proposed TRADES, a logit-based regularization objective that improves upon the robust optimization framework developed by Madry et al. (2018) [14, 9], achieving state-of-the-art adversarial accuracy on CIFAR10. In the realm of pretrain-then-tune models, Hendrycks et al. (2019) demonstrated that adversarially pretraining a model on ImageNet and then adversarially fine-tuning it on CIFAR10 greatly improves adversarial robustness. In this work, we propose Adversarial Regularization, another logit-based regularization framework, which surpasses TRADES in adversarial generalization. Furthermore, we explore the impact of different types of adversarial training on the pretrain-then-tune paradigm.
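The abstract does not spell out its Adversarial Regularization objective, but the TRADES baseline it builds on is published (Zhang et al., 2019). Below is a minimal PyTorch sketch of a TRADES-style logit-regularization loss, assuming an L-infinity PGD inner maximization; the hyperparameters (beta, eps, step, k) are illustrative defaults, not values from the thesis.

```python
import torch
import torch.nn.functional as F

def trades_style_loss(model, x, y, beta=6.0, eps=8/255, step=2/255, k=10):
    """Sketch of a TRADES-style loss: natural cross-entropy plus
    beta * KL(p_natural || p_adversarial), where the adversarial point
    maximizes the KL term inside an eps-ball via PGD."""
    p_nat = F.softmax(model(x), dim=1).detach()
    x_adv = x + 0.001 * torch.randn_like(x)          # small random start
    for _ in range(k):
        x_adv = x_adv.detach().requires_grad_(True)
        kl = F.kl_div(F.log_softmax(model(x_adv), dim=1),
                      p_nat, reduction="batchmean")
        grad = torch.autograd.grad(kl, x_adv)[0]
        x_adv = x_adv.detach() + step * grad.sign()  # ascend on the KL term
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)
        x_adv = torch.clamp(x_adv, 0.0, 1.0)
    logits_nat, logits_adv = model(x), model(x_adv.detach())
    loss_nat = F.cross_entropy(logits_nat, y)
    loss_rob = F.kl_div(F.log_softmax(logits_adv, dim=1),
                        F.softmax(logits_nat, dim=1), reduction="batchmean")
    return loss_nat + beta * loss_rob
```

The KL term pulls the logits at the adversarial point toward the natural logits; this consistency pressure on the logits is what "logit-based regularization" refers to in the abstract.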

Adversarial Training for Improving the Robustness of Deep Neural Networks

Author : Pengyue Hou
Release : 2022
Genre : Computer vision

Since 2013, Deep Neural Networks (DNNs) have reached human-level performance on various benchmarks. Meanwhile, it is essential to ensure their safety and reliability. A recent avenue of study questions the robustness of deep learning models and shows that adversarial samples with human-imperceptible noise can easily fool DNNs. Since then, many strategies have been proposed to improve the robustness of DNNs against such adversarial perturbations. Among these defenses, adversarial training (AT) is one of the most recognized methods and consistently yields state-of-the-art performance. It treats adversarial samples as augmented data and uses them in model optimization. Despite its promising results, AT has two problems to be improved: (1) poor generalization on adversarial data (e.g., a large robustness gap between training and testing data), and (2) a large drop in the model's standard performance. This thesis tackles these drawbacks and introduces two AT strategies. To improve the generalization of AT-trained models, the first part of the thesis introduces a representation-similarity-based AT strategy, self-paced adversarial training (SPAT). We investigate the imbalanced semantic similarity among categories in natural images and discover that DNN models are easily fooled by adversarial samples from their hard class pairs. With this insight, SPAT adaptively re-weights training samples during model optimization, forcing AT to focus on data from hard class pairs (a sketch of such a weighted AT step follows below). To address the second problem, the large performance drop on clean data, the second part of the thesis asks: to what extent can a model's robustness be improved without sacrificing standard performance? Toward this goal, we propose a simple yet effective transfer-learning-based adversarial training strategy that disentangles the negative effects of adversarial samples on the model's standard performance. In addition, we introduce a training-friendly adversarial attack algorithm that boosts adversarial robustness without significant training overhead. Extensive experiments demonstrate that, compared to prior art, this training strategy yields a more robust model while preserving standard accuracy on clean data.
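SPAT's exact re-weighting rule is not given in this abstract, so the sketch below pairs a standard PGD attack (the usual inner loop of AT, following Madry et al.) with a hypothetical per-sample weighting drawn from a class-pair hardness table; `pair_hardness` and its values are assumptions standing in for SPAT's similarity-based scheme.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, step=2/255, k=10):
    """Standard L-inf PGD: the inner maximization of adversarial training."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(k):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + step * grad.sign()
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)
        x_adv = torch.clamp(x_adv, 0.0, 1.0)
    return x_adv.detach()

def weighted_at_step(model, optimizer, x, y, pair_hardness):
    """One AT step with per-sample re-weighting. `pair_hardness` is a
    hypothetical [num_classes, num_classes] tensor scoring class-pair
    difficulty; samples whose adversarial prediction lands in a hard
    class pair receive larger weight."""
    x_adv = pgd_attack(model, x, y)
    logits = model(x_adv)
    pred = logits.argmax(dim=1)
    w = pair_hardness[y, pred]               # weight by (true, predicted) pair
    loss = (w * F.cross_entropy(logits, y, reduction="none")).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```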

Strengthening Deep Neural Networks

Author : Katy Warr
Release : 2019-07-03
Genre : Computers

As deep neural networks (DNNs) become increasingly common in real-world applications, the potential to deliberately "fool" them with data that wouldn't trick a human presents a new attack vector. This practical book examines real-world scenarios where DNNs, the algorithms intrinsic to much of AI, are used daily to process image, audio, and video data. Author Katy Warr considers attack motivations, the risks posed by this adversarial input, and methods for increasing AI robustness to these attacks. If you're a data scientist developing DNN algorithms, a security architect interested in how to make AI systems more resilient to attack, or someone fascinated by the differences between artificial and biological perception, this book is for you. Readers will:
- Delve into DNNs and discover how they could be tricked by adversarial input
- Investigate methods used to generate adversarial input capable of fooling DNNs (see the sketch after this list)
- Explore real-world scenarios and model the adversarial threat
- Evaluate neural network robustness and learn methods to increase the resilience of AI systems to adversarial data
- Examine some ways in which AI might become better at mimicking human perception in years to come
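As a concrete instance of the attack-generation methods the book surveys, here is a minimal sketch of the Fast Gradient Sign Method (Goodfellow et al., 2015), one of the simplest ways to craft human-imperceptible adversarial input; the eps budget is illustrative, and inputs are assumed to be normalized to [0, 1].

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8/255):
    """One-step FGSM: perturb the input in the direction of the sign
    of the loss gradient, staying within an eps-sized L-inf budget."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return torch.clamp(x + eps * x.grad.sign(), 0.0, 1.0).detach()
```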

Adversarial Robustness of Deep Learning Models

Author : Samarth Gupta (S.M.)
Release : 2020

Efficient operation and control of modern urban systems such as transportation networks is now more important than ever due to the huge societal benefits. Low-cost network-wide sensors generate large amounts of data, which need to be processed to extract the information necessary for operational maintenance and real-time control. Modern Machine Learning (ML) systems, particularly Deep Neural Networks (DNNs), provide a scalable solution to the problem of information retrieval from sensor data. Deep Learning systems therefore play an increasingly important role in the day-to-day operation of our urban systems and can no longer be treated as standalone systems. This naturally raises questions from a security viewpoint. Are modern ML systems robust enough to adversarial attacks for deployment in critical real-world applications? If not, how can we make progress in securing these systems against such attacks? In this thesis we first demonstrate the vulnerability of modern ML systems in a real-world scenario relevant to transportation networks by successfully attacking a commercial ML platform using a traffic-camera image. We review different methods of defense and the challenges associated with training an adversarially robust classifier. In terms of contributions, we propose and investigate a new method of defense that builds adversarially robust classifiers using Error-Correcting Codes (ECCs). The idea of using Error-Correcting Codes for multi-class classification has been investigated in the past, but only under nominal settings; we build upon it in the context of the adversarial robustness of Deep Neural Networks. Following codebook-design guidelines from the literature, we formulate a discrete optimization problem that generates codebooks systematically: it maximizes the minimum Hamming distance between codewords while maintaining high column separation. Using the optimal solution of this discrete optimization problem as our codebook, we then build a (robust) multi-class classifier from that codebook (a sketch of this construction follows below). To estimate the adversarial accuracy of ECC-based classifiers resulting from different codebooks, we provide methods for generating gradient-based white-box attacks. We discuss the estimation of class probability estimates (or scores), which are themselves useful in real-world applications, along with their use in generating black-box and white-box attacks. We also discuss differentiable decoding methods, which can likewise be used to generate white-box attacks. We outperform the standard all-pairs codebook, providing evidence that compact codebooks generated by our discrete optimization approach can indeed achieve high performance. Most importantly, we show that ECC-based classifiers can be partially robust even without any adversarial training, and that this robustness is not simply a manifestation of the large network capacity of the overall classifier. Our approach can be seen as a first step toward designing classifiers that are robust by design. These contributions suggest that the ECC-based approach can improve the robustness of modern ML systems and thus make urban systems more resilient to adversarial attacks.
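The thesis's discrete optimization and decoding methods are only summarized above, so the following NumPy sketch is a stand-in: a random search that keeps the codebook with the largest minimum Hamming distance (in place of the thesis's exact optimization, which also enforces column separation), plus a standard soft ECOC decoder that picks the codeword with the highest bit-wise log-likelihood. All names and sizes are illustrative.

```python
import numpy as np

def min_hamming(codebook):
    """Minimum pairwise Hamming distance between codewords (rows)."""
    K = codebook.shape[0]
    return min(np.sum(codebook[i] != codebook[j])
               for i in range(K) for j in range(i + 1, K))

def random_codebook_search(K, n, trials=2000, seed=0):
    """Random-search stand-in for the thesis's discrete optimization:
    keep the K x n binary codebook with the largest minimum distance."""
    rng = np.random.default_rng(seed)
    best, best_d = None, -1
    for _ in range(trials):
        cb = rng.integers(0, 2, size=(K, n))
        d = min_hamming(cb)
        if d > best_d:
            best, best_d = cb, d
    return best, best_d

def ecc_decode(bit_scores, codebook):
    """Soft decoding: each of n binary heads outputs P(bit = 1); the
    predicted class is the codeword with the highest log-likelihood."""
    eps = 1e-9
    ll = (np.log(bit_scores + eps) @ codebook.T
          + np.log(1 - bit_scores + eps) @ (1 - codebook).T)
    return np.argmax(ll, axis=1)

codebook, d = random_codebook_search(K=10, n=15)   # e.g. 10 classes, 15 bits
scores = np.random.rand(4, 15)                     # stand-in head outputs
print("min Hamming distance:", d, "predictions:", ecc_decode(scores, codebook))
```

A larger minimum Hamming distance means more head outputs must be flipped before the decoded class changes, which is the intuition behind the partial robustness the abstract reports.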

The Good, the Bad and the Ugly

Author : Xiaoting Li
Release : 2022

Neural networks have been widely adopted to address different real-world problems. Despite remarkable achievements on machine learning tasks, they remain vulnerable to adversarial examples that are imperceptible to humans but can mislead state-of-the-art models. Such adversarial examples generalize to a variety of common data structures, including images, texts, and networked data. Faced with the significant threat that adversarial attacks pose to security-critical applications, this thesis explores the good, the bad, and the ugly of adversarial machine learning. In particular, we investigate the applicability of adversarial attacks in real-world scenarios for social good, together with their defensive paradigms. The rapid progress of adversarial attack techniques helps us better understand the underlying vulnerabilities of neural networks, which inspires us to explore their potential use for good purposes. Social media has profoundly reshaped our daily life thanks to its worldwide accessibility, but its data privacy also suffers from inference attacks. Building on the fact that deep neural networks are vulnerable to adversarial examples, we take a novel perspective on protecting data privacy in social media and design a defense framework called Adv4SG, in which we use adversarial attacks to forge latent feature representations and mislead attribute inference attacks. Since text data in social media carries the most significant user privacy, we investigate how text-space adversarial attacks can be leveraged to protect users' attributes. Specifically, we integrate social-media properties to advance Adv4SG and introduce cost-effective mechanisms to expedite attribute protection over text data in the black-box setting. Extensive experiments on real-world social media datasets show that Adv4SG is an appealing method for mitigating inference attacks. Second, we extend our study to more complex networked data. A social network is a heterogeneous environment naturally represented as graph-structured data, maintaining rich user activities and complicated relationships among users. This enables attackers to deploy graph neural networks (GNNs) to automate attribute inference from user features and relationships, making such privacy disclosure hard to avoid. To address this, we take advantage of the vulnerability of GNNs to adversarial attacks and propose a new graph poisoning attack, AttrOBF, that misleads GNNs into misclassification and thus protects personal attribute privacy against GNN-based inference attacks on social networks. AttrOBF provides a more practical formulation by obfuscating optimal training user attribute values for real-world social graphs. Our results demonstrate the promising potential of applying adversarial attacks to attribute protection on social graphs. Third, we introduce a watermarking-based defense strategy against adversarial attacks on deep neural networks. In the ever-increasing arms race between defenses and attacks, most existing defense methods ignore the fact that attackers can detect and reproduce the differentiable model, which leaves a window for evolving attacks to adaptively evade the defense. Based on this observation, we propose a defense mechanism that creates a knowledge gap between attackers and defenders by imposing a secret watermarking process on standard deep neural networks (a sketch of the idea follows below). We analyze experimental results for a wide range of watermarking algorithms in our defense against state-of-the-art attacks on baseline image datasets, and validate the effectiveness of our method in defending against adversarial examples. Our research expands the investigation of enhancing deep learning model robustness against adversarial attacks and unveils insights into applying adversarial techniques for social good. We design Adv4SG and AttrOBF to exploit the strength of adversarial attack techniques to protect social media users' privacy on discrete textual data and networked data, respectively; both can be realized in the practical black-box setting. We also provide a first attempt at utilizing digital watermarks to increase a model's randomness in a way that suppresses the attacker's capability. Through our evaluation, we validate their effectiveness and demonstrate their promising value in real-world use.
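The abstract does not describe the watermarking mechanism itself, so the sketch below is only a loose illustration of the stated idea, a secret process imposed on a standard network that an attacker who reproduces the public architecture cannot replicate: a keyed input transform whose random pattern is derived from a secret seed. The class name, key, blend strength, and the CIFAR-like 3x32x32 input shape are all assumptions, not details from the thesis.

```python
import torch
import torch.nn as nn

class KeyedWatermarkWrapper(nn.Module):
    """Hypothetical keyed wrapper: a secret random pattern, generated
    from a private seed, is blended into every input before the base
    classifier runs, creating a knowledge gap between the defender's
    actual forward pass and any attacker-reconstructed copy."""
    def __init__(self, base_model, key=1234, strength=0.05):
        super().__init__()
        g = torch.Generator().manual_seed(key)            # secret key
        self.register_buffer("pattern",
                             torch.rand(1, 3, 32, 32, generator=g))
        self.base = base_model
        self.strength = strength

    def forward(self, x):
        # Blend the secret pattern into the input; the pattern never
        # leaves the defender's model, so gradients computed against a
        # surrogate without it point in slightly wrong directions.
        return self.base((1 - self.strength) * x
                         + self.strength * self.pattern)
```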