Evaluating and Certifying the Adversarial Robustness of Neural Language Models

Author : Muchao Ye
Release : 2024

Language models (LMs) built on deep neural networks (DNNs) have achieved great success across many areas of artificial intelligence and play an increasingly vital role in applications such as chatbots and smart healthcare. Nonetheless, the vulnerability of DNNs to adversarial examples still threatens the application of neural LMs to safety-critical tasks: when small perturbations are added to the original input text, DNNs change their correct predictions into incorrect ones. In this dissertation, we identify key challenges in evaluating and certifying the adversarial robustness of neural LMs and bridge those gaps through efficient hard-label text adversarial attacks and a unified certified robust training framework. The first step in developing neural LMs with high adversarial robustness is evaluating whether they are empirically robust against perturbed texts. The central technique for this is the text adversarial attack, which constructs a text that fools the LM; ideally, it should produce high-quality adversarial examples in a realistic setting with high efficiency. However, current evaluation pipelines for the realistic hard-label setting adopt heuristic search methods and consequently suffer from inefficiency. To tackle this limitation, we introduce a series of hard-label text adversarial attack methods that overcome the inefficiency by using a pretrained word embedding space as an intermediate representation. A deeper analysis of this idea shows that utilizing an estimated decision boundary in the introduced word embedding space helps improve the quality of the crafted adversarial examples. The ultimate goal of constructing robust neural LMs is to obtain models for which adversarial examples do not exist, which can be realized through certified robust training. The research community has proposed different types of certified robust training, operating either in the discrete input space or in the continuous latent feature space. We identify the structural gap between current pipelines and unify them in the word embedding space. By removing unnecessary bound computation modules, i.e., interval bound propagation, and harnessing a new decoupled regularization learning paradigm, our unification provides a stronger robustness guarantee. We believe these findings will contribute to the development of robust neural LMs.
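For context on the bound computation module the abstract refers to, the sketch below shows how standard interval bound propagation (IBP) pushes elementwise lower and upper bounds through a single affine layer, here for an L-infinity ball around a word embedding. It is a minimal illustration of the generic technique rather than the dissertation's training code; the embedding size, layer width, and perturbation radius are assumed placeholders.

```python
import torch

def ibp_affine(lower, upper, weight, bias):
    """Interval bound propagation through y = x @ W^T + b.

    lower, upper: elementwise bounds on the input (e.g. an L-infinity ball
    around a word embedding). Returns elementwise bounds on the output.
    """
    center = (upper + lower) / 2
    radius = (upper - lower) / 2
    out_center = center @ weight.t() + bias
    out_radius = radius @ weight.abs().t()   # |W| maps input radii to output radii
    return out_center - out_radius, out_center + out_radius

# Illustrative usage: bounds for one 300-d embedding under an L-inf radius of 0.05.
emb = torch.randn(1, 300)
lower, upper = emb - 0.05, emb + 0.05
W, b = torch.randn(128, 300), torch.randn(128)
lo, hi = ibp_affine(lower, upper, W, b)
assert torch.all(lo <= hi)
```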

Advances in Reliably Evaluating and Improving Adversarial Robustness

Author : Jonas Rauber
Release : 2021

Machine learning has made enormous progress in the last five to ten years. We can now make a computer, a machine, learn complex perceptual tasks from data rather than explicitly programming it. When we compare modern speech or image recognition systems to those from a decade ago, the advances are awe-inspiring. Less impressive is the susceptibility of machine learning systems to small, maliciously crafted adversarial perturbations: almost imperceptible pixel shifts or background noises can completely derail their performance. While humans are often amused by the stupidity of artificial intelligence, engineers worry about the security and safety of their machine learning applications, and scientists wonder how to make machine learning models more robust and more human-like. This dissertation summarizes and discusses advances in three areas of adversarial robustness. First, we introduce a new type of adversarial attack against machine learning models in real-world black-box scenarios. Unlike previous attacks, it does not require any insider knowledge or special access. Our results demonstrate the concrete threat caused by the current lack of robustness in machine learning applications. Second, we present several contributions that address the diverse challenges around evaluating adversarial robustness. The most fundamental challenge is that common attacks cannot distinguish robust models from models with misleading gradients; we help uncover and solve this problem through two new types of attacks immune to gradient masking. Misaligned incentives are another reason for insufficient evaluations, and we published joint guidelines and organized an interactive competition to mitigate this problem. Finally, our open-source adversarial attacks library Foolbox empowers countless researchers to overcome common technical obstacles. Since robustness evaluations are inherently unstandardized, straightforward access to various attacks is more than a technical convenience; it promotes thorough evaluations. Third, we showcase a fundamentally new neural network architecture for robust classification based on a generative analysis-by-synthesis approach. We demonstrate its robustness on a digit recognition task and simultaneously reveal the limitations of prior work that relies on adversarial training. Moreover, further studies have shown that our model best predicts human judgments on so-called controversial stimuli and that our approach scales to more complex datasets.
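The black-box attack described above works in the decision-based (hard-label) setting, where the attacker only observes the predicted class. The sketch below illustrates that general attack family with a simple random walk along the decision boundary; it is a hedged illustration, not the dissertation's algorithm, and `predict_label` is a hypothetical label-only query function supplied by the user.

```python
import numpy as np

def decision_based_attack(predict_label, x, y_true, x_adv_init, steps=1000, rng=None):
    """Minimal decision-based (label-only) attack sketch.

    predict_label: callable returning the model's predicted class (hard label only).
    x:             original input in [0, 1], correctly classified as y_true.
    x_adv_init:    any starting point that is already misclassified (e.g. an
                   example of another class); the attack walks it toward x
                   while staying on the adversarial side of the boundary.
    """
    rng = rng or np.random.default_rng(0)
    x_adv = x_adv_init.copy()
    step = 0.1
    for _ in range(steps):
        # 1) Move slightly toward the original input to shrink the perturbation.
        candidate = x_adv + step * (x - x_adv)
        # 2) Add a small random perturbation to explore along the boundary.
        candidate = np.clip(candidate + rng.normal(scale=step, size=x.shape), 0.0, 1.0)
        # 3) Keep the candidate only if it is still misclassified (one hard-label query).
        if predict_label(candidate) != y_true:
            x_adv = candidate
            step *= 1.05   # success: be slightly more aggressive
        else:
            step *= 0.9    # failure: back off
    return x_adv
```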

Improved Methodology for Evaluating Adversarial Robustness in Deep Neural Networks

Author : Kyungmi Lee (S.M.)
Release : 2020

Deep neural networks are known to be vulnerable to adversarial perturbations, which are often imperceptible to humans but can alter the predictions of machine learning systems. Since the exact value of adversarial robustness is difficult to obtain for complex deep neural networks, the accuracy of a model against perturbed examples generated by attack methods is commonly used as an empirical proxy for adversarial robustness. However, the failure of attack methods to find adversarial perturbations cannot be equated with robustness. In this work, we identify three common cases that lead to overestimation of accuracy against perturbed examples generated by bounded first-order attack methods: 1) the cross-entropy loss numerically becoming zero under standard floating-point representation, resulting in non-useful gradients; 2) innately non-differentiable functions in deep neural networks, such as the Rectified Linear Unit (ReLU) activation and the MaxPool operation, incurring "gradient masking" [2]; and 3) certain regularization methods used during training making the model less amenable to first-order approximation. We show that these phenomena exist in a wide range of deep neural networks and are not limited to the specific defense methods for which they were previously investigated. For each case, we propose compensation methods that either address sources of inaccurate gradient computation, such as numerical saturation near zero and non-differentiability, or reduce the total number of back-propagations for iterative attacks by approximating second-order information. These compensation methods can be combined with existing attack methods to give a more precise empirical evaluation metric. We illustrate the impact of the three phenomena with examples of practical interest, such as benchmarking model capacity and regularization techniques for robustness. Furthermore, we show that the gap between adversarial accuracy and the guaranteed lower bound of robustness can be partially explained by these phenomena. Overall, our work shows that overestimated adversarial accuracy that is not indicative of robustness is prevalent even for conventionally trained deep neural networks, and it highlights the need for caution when using empirical evaluation without guaranteed bounds.
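The first failure case above, cross-entropy saturating to exactly zero in floating point, is commonly worked around by attacking a loss defined directly on the logits, such as a margin (Carlini-Wagner style) loss, which keeps providing gradient signal even when the model is very confident. The PGD sketch below illustrates that generic idea; it is an assumption-laden illustration rather than the thesis's specific compensation method, and the model, epsilon, and step size are placeholders.

```python
import torch

def margin_loss(logits, labels):
    # True-class logit minus the largest other logit; does not underflow to
    # exactly zero the way softmax cross-entropy can for confident models.
    true = logits.gather(1, labels.unsqueeze(1)).squeeze(1)
    masked = logits.clone()
    masked.scatter_(1, labels.unsqueeze(1), float("-inf"))
    return (true - masked.max(dim=1).values).mean()

def pgd_margin(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    # L-infinity PGD that minimizes the margin (drives the true class below
    # the runner-up) instead of maximizing cross-entropy.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = margin_loss(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv - alpha * grad.sign()).detach()          # descend the margin
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)   # project to eps-ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv

# Hypothetical usage: x, y from a data loader, model any classifier returning logits.
# x_adv = pgd_margin(model, x, y)
```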

Towards Adversarial Robustness of Feed-forward and Recurrent Neural Networks

Author : Qinglong Wang
Release : 2020

Recent years have witnessed the successful resurgence of neural networks through the lens of deep learning research. As deep neural networks (DNNs) continue to spread into multifarious branches of research, including computer vision, natural language processing, and malware detection, it has been found that the vulnerability of these powerful models is as striking as their capability in classification tasks. Specifically, research on the adversarial example problem exposes that DNNs, albeit powerful when confronted with legitimate samples, suffer severely from adversarial examples, synthetic examples created by slightly modifying legitimate samples. We speculate that this vulnerability may significantly impede an extensive adoption of DNNs in safety-critical domains. This thesis aims to comprehend some of the mysteries of this vulnerability and to design generic frameworks and deployable algorithms that protect DNNs with different architectures from attacks armed with adversarial examples. We first conduct a thorough exploration of existing research on explaining the pervasiveness of adversarial examples. We unify the hypotheses raised in existing work by extracting three major influencing factors, i.e., data, model, and training. These factors also help locate the different attack and defense methods proposed across the research spectrum and analyze their effectiveness and limitations. We then perform two threads of research on neural networks with feed-forward and recurrent architectures, respectively. In the first thread, we focus on the adversarial robustness of feed-forward neural networks, which have been widely applied to process images. Under our proposed generic framework, we design two types of adversary-resistant feed-forward networks that weaken the destructive power of adversarial examples and even prevent their creation. We theoretically validate the effectiveness of our methods and empirically demonstrate that they significantly boost a DNN's adversarial robustness while maintaining high classification accuracy. Our second thread of study focuses on the adversarial robustness of the recurrent neural network (RNN), which represents a variety of networks typically used for processing sequential data. We develop an evaluation framework and propose to quantitatively evaluate an RNN's adversarial robustness with deterministic finite automata (DFA), which represent rigorous rules and can be extracted from RNNs, together with a distance metric suitable for strings. We demonstrate the feasibility of using extracted DFA as rules through careful experimental studies that identify key conditions affecting extraction performance. Moreover, we theoretically establish the correspondence between different RNNs and different DFA, and we empirically validate this correspondence by evaluating and comparing the extraction performance of different RNNs. Finally, we develop an algorithm under our framework and conduct a case study to evaluate the adversarial robustness of different RNNs on a set of regular grammars.
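The evaluation framework above compares an RNN's behavior against rules extracted as DFA using a distance metric suitable for strings. A standard choice for such a metric is the Levenshtein edit distance; the sketch below is a plain dynamic-programming implementation offered as a generic reference, not the specific metric defined in the thesis.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between two strings via dynamic programming."""
    prev = list(range(len(b) + 1))            # distances from a[:0] to every prefix of b
    for i, ca in enumerate(a, start=1):
        curr = [i]                            # distance from a[:i] to the empty string
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution (0 if equal)
        prev = curr
    return prev[-1]

# Example: two substitutions turn "abab" into "abba".
assert edit_distance("abab", "abba") == 2
```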

ECML PKDD 2020 Workshops

Author : Irena Koprinska
Release : 2021-02-01
Genre : Computers

This volume constitutes the refereed proceedings of the workshops that complemented the 20th Joint European Conference on Machine Learning and Knowledge Discovery in Databases (ECML PKDD), held in September 2020. Due to the COVID-19 pandemic, the conference and workshops were held online. The 43 papers presented in this volume were carefully reviewed and selected from numerous submissions. The volume presents the papers accepted for the following workshops: 5th Workshop on Data Science for Social Good, SoGood 2020; Workshop on Parallel, Distributed and Federated Learning, PDFL 2020; Second Workshop on Machine Learning for Cybersecurity, MLCS 2020; 9th International Workshop on New Frontiers in Mining Complex Patterns, NFMCP 2020; Workshop on Data Integration and Applications, DINA 2020; Second Workshop on Evaluation and Experimental Design in Data Mining and Machine Learning, EDML 2020; Second International Workshop on eXplainable Knowledge Discovery in Data Mining, XKDD 2020; and 8th International Workshop on News Recommendation and Analytics, INRA 2020. The papers from INRA 2020 are published open access under the terms of the Creative Commons Attribution 4.0 International License.