NOTE: This blog is for a special topics course at Texas A&M (ML for Cyber Defenses). During each lecture a student presents information from the assigned paper. This blog summarizes and further discusses each topic.

During this seminar, Vishal Adepu, Rohith Yogi, and Bhavan Dondapati presented Mal-LSGAN: An Effective Adversarial Malware Example Generation Model. After their presentation, our class had an open discussion related to the paper and more. This blog post covers a summary of the information presented as well as a summary of our class discussion.

Presentation Summary

Introduction

  • Despite successes in AI and cybersecurity, ML-based malware detectors struggle against adversarial examples created through minor data alterations
  • Adversarial example generation methods include gradient-based, optimization-based, and GAN-based, with the paper focusing on designing effective GANs for adversarial malware
  • Challenges with existing GAN approaches like MalGAN include unstable training and low-quality adversarial examples, whereas LSGAN faces issues with discrete data and lacks comparative performance evaluation
  • Mal-LSGAN is proposed as a novel model that generates superior adversarial malware examples, a claim verified through extensive experiments against current GAN-based models
  • The authors used dynamic semantic features (API calls extracted in a virtual sandbox) for dataset preprocessing and model performance testing, investigating various malware detectors
  • The paper highlights the unique combination of the least-squares loss function and activation functions in Mal-LSGAN, with a detailed analysis of how each contributes to performance
  • Mal-LSGAN outperforms MalGAN and Imp-MalGAN in generating effective adversarial malware examples
  • Adversarial examples generated by Mal-LSGAN exhibit strong transferability across various black-box ML detectors, indicating superior generalization capability; adversarial training on these examples can also enhance detector robustness
  • The superior performance of Mal-LSGAN is attributed more to its combination of activation functions than to the LS loss function, as detailed in the analysis
  • Adversarial examples:
    • Slight modifications causing misclassification in neural networks
    • These examples are generated through gradient-based, optimization-based, and GAN-based methods; the first two require access to the target model, and optimizing each example individually is slow
    • The paper introduces Mal-LSGAN, a GAN-based approach for creating adversarial malware examples for effective black-box attacks
  • Generative adversarial networks:
    • Generate data mimicking the training set distribution, reaching a dynamic Nash equilibrium after alternating generator and discriminator training (the standard objective is shown after this list)
    • GAN-based adversarial examples, particularly in malware detection, have proven effective in evading black-box machine learning detectors
    • Despite their success, GANs face challenges with discrete data, such as malware features, and issues like instability and training difficulties, leading to proposals for improved network structures to enhance MalGAN’s performance and example transferability
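
For context, the objective below comes from the original GAN literature rather than this paper: a GAN is trained as a two-player minimax game between the generator $G$ and the discriminator $D$:

$$\min_G \max_D \; V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]$$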

Model Description

  • Mal-LSGAN components:
    • Mal-LSGAN contains a generator and a discriminator; it creates adversarial malware by blending random noise with original malware features, and the results are assessed by a malware detector trained on a separate dataset
    • Utilizing adversarial training, Mal-LSGAN generates high-quality adversarial malware examples to target black-box detectors, with its generator and discriminator employing distinct activation functions and loss functions
    • The generator:
      • Transforms an M-dimensional malware feature vector and a Z-dimensional uniform noise vector into an adversarial example, adapting DCGAN’s architecture with batch normalization but replacing its convolution layers with fully connected layers
      • The final layer of the generator model uses the Sigmoid function to adjust outputs to the (0,1) range, diverging from DCGAN’s use of the ReLU activation function
      • ReLU zeroes out all negative inputs, which can leave some neurons permanently inactive (the “dying ReLU” problem), so LeakyReLU and PReLU are considered as alternatives
      • LeakyReLU produced the best performance, so it is used along with the mean-squared-error loss
    • The discriminator:
      • Malware creators face challenges in crafting detailed adversarial examples without knowledge of the black-box detector’s structure
      • The discriminator aims to distinguish between malware and benign applications, trained with both generated adversarial malware examples and real benign apps
      • The discriminator processes input through layers using LeakyReLU and a Sigmoid output function, employs dropout to avoid overfitting, and optimizes an MSE loss with the Adam optimizer
  • Mal-LSGAN model training:
    • LSGAN continues to penalize examples according to their distance from the decision boundary, unlike standard GANs, which effectively stop optimizing once adversarial examples are classified correctly
    • A least-squares loss function replaces MalGAN’s cross-entropy loss, giving smoother gradients and less saturation (the objectives are sketched after this list)
    • The generator aims to minimize its loss to lower the discriminator’s probability of identifying adversarial examples as malware
    • Training involves phased optimization, alternately fine-tuning generator and discriminator parameters until equilibrium is reached (a minimal Keras-style sketch of the model and training loop also follows this list)
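
For reference, here are the standard LSGAN objectives (from Mao et al.'s LSGAN paper, using 0/1 target coding); Mal-LSGAN adapts this least-squares formulation, with benign apps playing the role of "real" data and generated adversarial malware the role of "fake" data:

$$\min_D \; \frac{1}{2}\,\mathbb{E}_{x \sim p_{\text{data}}}\big[(D(x) - 1)^2\big] + \frac{1}{2}\,\mathbb{E}_{z \sim p_z}\big[D(G(z))^2\big]$$

$$\min_G \; \frac{1}{2}\,\mathbb{E}_{z \sim p_z}\big[(D(G(z)) - 1)^2\big]$$

Because the penalty grows quadratically with distance from the target label, the generator keeps receiving useful gradients even for examples the discriminator already classifies correctly, which is exactly the saturation problem the paper attributes to cross-entropy.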
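
To make the architecture concrete, below is a minimal sketch of a Mal-LSGAN-style generator, discriminator, and alternating training step in Keras. The layer sizes, noise dimension, learning rates, and labeling scheme are illustrative assumptions, not the authors' exact configuration:

```python
# A minimal sketch of a Mal-LSGAN-style generator/discriminator in Keras.
# Layer sizes, noise dimension, learning rates, and the labeling scheme are
# illustrative assumptions, not the authors' exact configuration.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

M = 128  # malware feature dimension (the 128 most common API calls)
Z = 20   # noise dimension (assumed)

def build_generator():
    malware_in = keras.Input(shape=(M,))
    noise_in = keras.Input(shape=(Z,))
    x = layers.Concatenate()([malware_in, noise_in])
    x = layers.Dense(256)(x)
    x = layers.BatchNormalization()(x)               # DCGAN-style batch norm
    x = layers.LeakyReLU(0.2)(x)                     # LeakyReLU in hidden layers
    out = layers.Dense(M, activation="sigmoid")(x)   # squash outputs into (0, 1)
    # Element-wise max keeps the original API-call features, so the perturbation
    # can only *add* features (a common trick in MalGAN-style models).
    out = layers.Maximum()([malware_in, out])
    return keras.Model([malware_in, noise_in], out)

def build_discriminator():
    x_in = keras.Input(shape=(M,))
    x = layers.Dense(256)(x_in)
    x = layers.LeakyReLU(0.2)(x)
    x = layers.Dropout(0.3)(x)                       # dropout against overfitting
    out = layers.Dense(1, activation="sigmoid")(x)
    return keras.Model(x_in, out)

# Least-squares loss == MSE against the 0/1 target labels, optimized with Adam.
D = build_discriminator()
D.compile(optimizer=keras.optimizers.Adam(1e-3), loss="mse")

G = build_generator()
D.trainable = False  # freeze D while training G through the stacked model
m_in, z_in = keras.Input(shape=(M,)), keras.Input(shape=(Z,))
stacked = keras.Model([m_in, z_in], D(G([m_in, z_in])))
stacked.compile(optimizer=keras.optimizers.Adam(1e-3), loss="mse")

def train_step(malware_batch, benign_batch):
    """One round of the alternating (phased) optimization."""
    z = np.random.uniform(size=(len(malware_batch), Z))
    adv = G.predict([malware_batch, z], verbose=0)
    # Discriminator phase: benign -> 1, adversarial malware -> 0. (In the full
    # setup the labels would come from the black-box detector's predictions.)
    D.train_on_batch(benign_batch, np.ones((len(benign_batch), 1)))
    D.train_on_batch(adv, np.zeros((len(adv), 1)))
    # Generator phase: push D's output on adversarial examples toward "benign".
    stacked.train_on_batch([malware_batch, z], np.ones((len(malware_batch), 1)))
```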

Experimental Analysis and Validation

  • Data preprocessing:
    • Malware (2733) and benign (1357) samples were gathered from VirusShare and AndroZoo, respectively, with API calls extracted using the Cuckoo sandbox and represented as binary vectors over the 128 most common API features (a vectorization sketch follows this list)
    • 20% of the data was held out for testing, with the remaining 80% split evenly between training the Mal-LSGAN model and training the malware detectors
  • Malware detector training:
    • Basic ML classifiers from the Scikit-learn library, including SVM, DT, AdaBoost, LR, RF, MLP, and KNN, are used as malware detectors (a minimal detector-training sketch also appears after this list)
    • Malware detection is treated as a binary classification, evaluated on accuracy, F-score, FPR, and FNR
    • All detectors achieve over 95% accuracy with low FPR and FNR, displaying effective malware detection performance
  • Evaluation of Mal-LSGAN:
    • Performance evaluation of adversarial samples:
      • After 200 epochs, TPRs for the AdaBoost, LR, DT, and MLP detectors fall below 0.1, and in some cases below 0.05, indicating that over 95% of adversarial examples remain undetected
      • The loss stabilizes after 200 epochs, with the discriminator loss remaining between 0.00 and 0.02 and the generator loss stable against all but the RF detector, demonstrating Mal-LSGAN’s effective convergence and its capacity to attack most detectors
    • Adversarial examples transferability evaluation:
      • After 200 epochs, Mal-LSGAN’s adversarial examples achieve a TPR under 0.1 and reduce the area under the ROC curve by 44%, indicating an effective attack on the MLP detector with a 98.65% attack success rate (ASR)
      • Random forest and kNN detectors exhibit lower transferability for Mal-LSGAN’s adversarial examples due to their distinct structures from neural networks
  • Experimental comparison:
    • Different combinations of activation function and loss function:
      • GANs with Sigmoid Cross-Entropy (SCE) loss can significantly lower the accuracy of most detectors, prompting a Control Experiment (CE) for structural comparison involving Mal-LSGAN’s new activation function
      • The study compared different combinations of activation functions (LeakyReLU/Sigmoid) and loss functions (LS/SCE), finding that Mal-LSGAN’s configuration outperformed the others against various detectors and achieved a TPR on the random forest detector similar to the control experiments
    • Different model comparisons:
      • Using the same dataset, MalGAN, Imp-MalGAN, and Mal-LSGAN were compared, showing stable losses after 20 epochs, except for MalGAN, due to its difficulty with discrete data
      • Mal-LSGAN outperformed MalGAN and Imp-MalGAN in TPR and accuracy across various machine learning detectors, highlighting the limitations of traditional GANs
      • Mal-LSGAN achieved the highest accuracy and ASR, exceeding 95.98% on some detectors, although its effectiveness varied by detector, notably underperforming against the random forest detector
      • Random forest and neural networks differ significantly in structure and response to adversarial examples, explaining Mal-LSGAN’s varied success rate, particularly its limited impact on random forest
      • Improvements and structural adaptations in these models demonstrate evolving strategies for evading malware detection
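
To make the preprocessing step concrete, here is a minimal sketch of how API-call traces could be turned into 128-dimensional binary feature vectors. The helper names and toy traces are hypothetical; in the paper, the traces come from Cuckoo sandbox reports:

```python
# Hypothetical sketch: binary feature vectors from sandbox API-call traces.
from collections import Counter

def build_vocabulary(api_traces, top_k=128):
    """Keep the top_k API calls that appear in the most samples."""
    counts = Counter(api for trace in api_traces for api in set(trace))
    return [api for api, _ in counts.most_common(top_k)]

def vectorize(trace, vocabulary):
    """1 if the sample invoked the API at least once, else 0."""
    called = set(trace)
    return [1 if api in called else 0 for api in vocabulary]

# Toy usage (real traces would come from Cuckoo sandbox reports):
traces = [["CreateFileW", "RegSetValueExW"], ["CreateFileW", "InternetOpenA"]]
vocab = build_vocabulary(traces)
vectors = [vectorize(t, vocab) for t in traces]
```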
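
Similarly, the detector-training setup is easy to sketch with Scikit-learn. The split ratios follow the paper (20% test, remaining 80% divided between GAN training and detector training); the placeholder data and the choice of random forest are assumptions for illustration:

```python
# A minimal sketch of the detector training/evaluation pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score
from sklearn.model_selection import train_test_split

X = np.random.randint(0, 2, size=(4090, 128))  # placeholder feature matrix
y = np.random.randint(0, 2, size=4090)         # placeholder labels (1 = malware)

# 20% held out for testing; the rest split evenly between GAN and detectors.
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2)
X_gan, X_det, y_gan, y_det = train_test_split(X_rest, y_rest, test_size=0.5)

detector = RandomForestClassifier().fit(X_det, y_det)
pred = detector.predict(X_test)

# The paper's metrics: accuracy, F-score, FPR, and FNR.
tn, fp, fn, tp = confusion_matrix(y_test, pred).ravel()
print("accuracy:", accuracy_score(y_test, pred))
print("F-score:", f1_score(y_test, pred))
print("FPR:", fp / (fp + tn), "FNR:", fn / (fn + tp))
```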

My Thoughts

This paper did an excellent job of proposing a state-of-the-art model that surpasses existing GAN models. The main factor is Mal-LSGAN’s ability to produce higher-quality adversarial examples, using semantic features and a novel combination of the least-squares loss and activation functions. The boundary-based penalty also enhances the learning process, as the model can understand which parameters or weights to adjust, rather than just classifying examples correctly and moving on. Mal-LSGAN also shows strong transferability across different ML detectors, indicating superior generalization capability, which is essential for accommodating various kinds of adversarial examples.

Regarding the challenges presented by existing GAN models such as MalGAN, it’s clear that instability and poor handling of discrete data have been significant hurdles. Mal-LSGAN is valuable for future research in the field, especially as generative AI rapidly grows, and can lead to more effective adversarial examples. Future work could explore different combinations of architectural components such as loss functions and activation functions.

Discussion Summary


  • A key message for any paper is to not completely trust its results. The authors ran their own experiments, but you should also run your own to verify the claims
  • Main idea of Mal-LSGAN is to create two neural networks that fight each other (since we automate malware generation and detection)
    • Use the feedback of one model (either the generator or discriminator) to update the other
    • Because there is a feedback loop, the process stops when the two models reach equilibrium; in the end you have the best attacker on one side and the best defender on the other
  • Why do we need a discriminator, why not just attack the target directly?
    • In reality, sometimes we do not need a discriminator; we can just attack the black-box detector directly
    • However, because the detector is a black box, a direct attack sometimes cannot work; for example, the defender may rate-limit queries
    • That’s why we need our own local model, the discriminator
    • Bypassing the local discriminator is effectively the same as bypassing the detector when the two are closely similar; if the GAN’s architecture diverges from the detector’s, the results will not transfer as well
  • Why don’t we get a pre-trained discriminator?
    • We want the ideal discriminator; a pre-trained one will not be ideal
  • Using a pool of models is a plausible defense against adversarial examples (a minimal ensemble sketch follows this list)
  • The generator can cause concept drift, so over time the discriminator needs to be updated
  • These types of generated adversarial attacks most commonly reflect nation-state attacks
  • Skilled attackers sell their tools and botnets to others
    • There is also Malware-as-a-Service (MaaS), which can be sold bundled inside a packer or dropper
    • Even though this is not practical right now, GAN-based tooling could become commercialized and widespread in the future
  • GANs originated in image generation
    • The “This person does not exist” website is one example. Generative AI (e.g., Stable Diffusion) can be applied to various domains: images, malware, and more
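
As a concrete illustration of the "pool of models" defense mentioned above, here is a minimal sketch using Scikit-learn's VotingClassifier; the particular mix of models is an assumption:

```python
# Hypothetical sketch of an ensemble ("pool of models") defense.
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier()),
        ("lr", LogisticRegression(max_iter=1000)),
        ("knn", KNeighborsClassifier()),
    ],
    voting="hard",  # majority vote: an attacker must fool most members at once
)
# Usage: ensemble.fit(X_det, y_det); ensemble.predict(X_test)
```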

My Thoughts

This discussion was very effective in explaining the purpose of the two models. For instance, having a local discriminator avoids flooding the target model with queries from a single source; excessive querying can raise flags on the defender’s side. Additionally, both the generator and discriminator are trained and enhanced as they continually attempt to outperform each other.

The consideration of concept drift caused by the generator is very thought-provoking. Mal-LSGAN is not concept-drift proof, so both the generator and discriminator need to be updated periodically. While using an ensemble of classifiers is a plausible defense against adversarial examples, it can also aid in updating the models in response to concept drift: while one model is being updated, a selected model from the ensemble can temporarily replace it. It is very interesting and alarming to think of GANs being sold in the future. That calls for more advancements in this industry.


That is all, thanks for reading!

