Publication Date

2024

Document Type

Thesis

Committee Members

Lingwei Chen, Ph.D. (Advisor); Meilin Liu, Ph.D. (Committee Member); Junjie Zhang, Ph.D. (Committee Member)

Degree Name

Master of Science in Cyber Security (MSCS)

Abstract

The rapid growth of and widespread reliance on machine learning (ML) systems across critical applications such as healthcare, autonomous driving, and cybersecurity have underscored both their transformative potential and their susceptibility to adversarial attacks and vulnerabilities. This thesis investigates vulnerabilities in ML models, focusing on backdoor attacks: the naive backdoor attack, the feature collision backdoor attack, the hidden trigger backdoor attack, and a test-time backdoor attack using a universal perturbation technique. These methodologies demonstrate how adversaries can automate and conceal malicious behaviors to achieve specific objectives, posing significant challenges to ML model integrity and trustworthiness.

The research provides a comprehensive analysis of the theoretical foundations and practical implications of these attack vectors. The naive backdoor attack establishes a baseline, highlighting its simplicity and effectiveness despite limited stealth. The feature collision backdoor attack leverages neural networks to exploit overlaps in the feature space, addressing the mislabeling issue yet yielding lower attack success rates. The hidden trigger attack emphasizes stealth by embedding imperceptible patterns into training datasets, enabling targeted manipulations with high discretion. The test-time backdoor attack using universal perturbation introduces a novel approach that applies imperceptible perturbations during inference, bypassing the need for poisoned training data or model retraining. This method addresses practical constraints, allowing adversaries to execute covert attacks even without access to the target model or dataset.

By exploring these advanced backdoor attack techniques, this thesis contributes to understanding the vulnerabilities inherent in ML systems and highlights critical challenges in defending against stealthy and automated adversarial behaviors.
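For concreteness, the following is a minimal illustrative sketch (not taken from the thesis) of the naive backdoor attack described above: a fraction of the training set is stamped with a small trigger patch and relabeled to the attacker's target class. The poison rate, patch size, and target label here are hypothetical choices, and NumPy arrays stand in for a real image dataset.

    # Illustrative sketch of naive backdoor data poisoning; all parameters
    # below (poison rate, patch size, target label) are hypothetical.
    import numpy as np

    def poison_naive_backdoor(images, labels, target_label=7,
                              poison_rate=0.1, patch_size=3, seed=0):
        """Stamp a max-intensity square trigger in the bottom-right corner
        of a random subset of images and flip their labels to target_label."""
        rng = np.random.default_rng(seed)
        images, labels = images.copy(), labels.copy()
        n_poison = int(poison_rate * len(images))
        idx = rng.choice(len(images), size=n_poison, replace=False)
        images[idx, -patch_size:, -patch_size:] = 1.0  # visible trigger patch
        labels[idx] = target_label                     # label flip to target
        return images, labels, idx

    # Toy usage: 100 grayscale 28x28 images with labels in 0..9.
    X = np.random.rand(100, 28, 28).astype(np.float32)
    y = np.random.randint(0, 10, size=100)
    Xp, yp, poisoned = poison_naive_backdoor(X, y)
    print(f"poisoned {len(poisoned)} of {len(X)} samples")

A model trained on the poisoned set learns to associate the patch with the target class; at inference, stamping the same patch onto any input triggers the targeted misclassification, which is what makes this baseline effective despite the visibly mislabeled samples.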

Page Count

78

Department or Program

Department of Computer Science and Engineering

Year Degree Awarded

2024

