Lingwei Chen, Ph.D. (Advisor); Tanvi Banerjee, Ph.D. (Committee Member); Junjie Zhang, Ph.D. (Committee Member)
Master of Science in Cyber Security (M.S.C.S.)
The increasing sophistication of malware has made detecting and defending against new strains a major challenge for cybersecurity. One promising approach to this problem is to use machine learning techniques that extract representative features and train classification models to detect malware at an early stage. However, training such machine learning-based malware detection models is itself challenging: it requires a large number of high-quality labeled samples, which are very costly to obtain in real-world scenarios. As a result, machine learning models for malware detection need the capability to learn from only a few labeled examples. To address this challenge, in this thesis we propose a novel adversarial reprogramming model for few-shot malware detection. Our model is based on the idea of repurposing a high-performance ImageNet classification model to perform malware detection using the features of malicious and benign files. We first embed the features of software files, together with a small perturbation, into a host image chosen randomly from ImageNet, and then create an image dataset to train and test the model; finally, the model's output labels are mapped to the malware and benign classes. We evaluate the effectiveness of our model on a dataset of real-world malware and show that it significantly outperforms baseline few-shot learning methods. We also evaluate the impact of different pre-trained models, data sizes, and parameter values. Overall, our results suggest that the proposed adversarial reprogramming model is a promising direction for improving few-shot malware detection.
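The embedding and label-remapping steps described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the thesis's actual implementation: the function names (`bytes_to_patch`, `embed_features`, `remap_labels`), the patch size, the center placement, and the even/odd label mapping are all assumptions introduced here for clarity.

```python
import numpy as np

def bytes_to_patch(data: bytes, side: int = 64) -> np.ndarray:
    """Map raw file bytes to a square grayscale patch in [0, 1]."""
    buf = np.frombuffer(data, dtype=np.uint8)
    # np.resize truncates or cyclically repeats the bytes to the fixed size
    buf = np.resize(buf, side * side)
    return buf.reshape(side, side) / 255.0

def embed_features(host: np.ndarray, patch: np.ndarray,
                   perturbation: np.ndarray) -> np.ndarray:
    """Place the file-derived patch at the center of the host image and
    add a small perturbation (the quantity learned during reprogramming),
    clipping so the result remains a valid image."""
    img = host.copy()
    h, w = patch.shape
    top = (img.shape[0] - h) // 2
    left = (img.shape[1] - w) // 2
    img[top:top + h, left:left + w] = patch
    return np.clip(img + perturbation, 0.0, 1.0)

def remap_labels(imagenet_probs: np.ndarray) -> int:
    """Many-to-one output mapping from ImageNet classes to the two target
    classes; even class index -> benign (0), odd -> malware (1). The
    specific mapping here is an illustrative choice."""
    return int(np.argmax(imagenet_probs) % 2)

host = np.zeros((224, 224))    # stand-in for a randomly chosen ImageNet image
delta = np.zeros((224, 224))   # perturbation; optimized during training
x = embed_features(host, bytes_to_patch(b"MZ\x90\x00"), delta)
```

Under this scheme the pre-trained network's weights stay frozen; only the perturbation `delta` (and the fixed output mapping) adapts the ImageNet classifier to the malware/benign task, which is what makes the approach suitable when labeled samples are scarce.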
Department or Program
Department of Computer Science and Engineering
Year Degree Awarded
Copyright 2022, all rights reserved. My ETD will be available under the "Fair Use" terms of copyright law.