UBC Theses and Dissertations


Surveying the effects of data on adversarial robustness

Xiong, Peiyu

Abstract

Machine Learning (ML) has been widely applied in many aspects of our lives due to its accuracy and scalability. However, its vulnerability to adversarial examples, inputs intentionally designed by attackers to confuse the models, impedes its adoption in life- and safety-critical applications. To address this problem, the area of adversarial robustness investigates the mechanisms behind adversarial attacks and the defenses against these attacks. The literature in this area exhibits an arms-race trend, in which defense techniques proposed to address existing attacks are soon "broken" by newly proposed attacks. In response to this trend, a line of research has investigated the reasons for adversarial vulnerability, with some work focusing on the inherent limitations of data. Existing surveys on adversarial robustness concentrate on collecting state-of-the-art attacks and defense techniques, and few of them discuss how the model and/or the data explain the observed adversarial vulnerability. In this thesis, we review literature that focuses on how the data used to train a model affects that model's adversarial robustness. We systematically identified 57 relevant papers from top publication venues and categorized them based on the data properties they discuss. The thesis summarizes the impact of data across eight categories of data properties: seven are general to all applications, and one is specific to a particular application domain. Additionally, we discuss gaps in knowledge and promising future research directions to further improve our understanding of adversarial robustness.


Rights

Attribution-NonCommercial-NoDerivatives 4.0 International