NOTE: This blog is for a special topics course at Texas A&M (ML for Cyber Defenses). During each lecture a student presents information from the assigned paper. This blog summarizes and further discusses each topic.

During this seminar, Yash Shende and Pratik Nath presented No Need to Teach New Tricks to Old Malware: Winning an Evasion Challenge with XOR-based Adversarial Samples. After their presentation our class had an open discussion related to the paper and more. This blog post will cover a summary of the information presented as well as a summary of our class discussion.

Presentation Summary


Introduction

  • Combating malicious programs has evolved, with ML now prominent in malware detection, despite its vulnerability to adversarial attacks where malware is modified to be classified as non-malicious
  • The Machine Learning Security Evasion Competition (MLSEC) was launched by tech companies to understand the impact of adversarial attacks on ML models, where participants modify malware binaries to bypass detection while retaining their functionality
  • The contest went from being a white-box challenge in 2019 to an attack-and-defense model in 2020
  • The authors successfully bypassed all ML models in both editions, showcasing significant insights into model vulnerabilities and achieving maximum scores
  • The team bypassed three models in the contest by exploiting specific weaknesses:
    • Evading the first model through malicious payloads in dropper executables
    • Circumventing the second, whose features relied on TF-IDF over imported library and function names, by adding fake imports that mimic benign software
    • Deceiving the third by XOR- or base64-encoding the embedded payloads to hide them
  • Demonstrated the practicality of simple adversarial attack techniques, such as XORing and base64 encoding, over complex automated methods, offering explainable insights for developing more robust models
  • Showed the real-world impact by reducing antivirus detection rates using these methods, and shared their attack and defense solutions with the community to encourage the development of new security measures

The Challenge

  • Definitions:
    • The contest, run by Elastic, Endgame, MRG-Effitas, and VMRay, tasked participants with creating adversarial samples to bypass ML-based malware detectors; the authors succeeded in evading all provided models using modified malware binaries that retained their original behavior
    • The 2020 Machine Learning Security Evasion Competition, joined by Microsoft and CUJO AI’s Vulnerability Research Lab, added defensive solution generation to its format, turning the previous edition’s attack-only challenge into an attack-and-defense contest
    • In the defender’s challenge, participants developed ML defensive solutions packaged as Docker images that had to achieve <1% FPR, <10% FNR, and response times under 5 seconds per sample; only three models (ember, needforspeed, and domumpqb) met these criteria
    • The outcome of the challenge depended on each model’s performance against adversarial attacks, with the best-performing defense against such attacks declared the winner
    • In the attacker’s challenge, participants could attack models through black box attacks using 50 unique Windows malware samples
    • Scores were based on successful bypasses, with each classifier bypass counting for 1 point, up to a total of 150 points
    • Tiebreakers were decided by the number of ML queries used, rewarding those with fewer queries
    • To characterize the dataset, the 50 malware samples were submitted to the VirusTotal API and the resulting labels were normalized with AVClass; the samples covered 30 malware families in total, all real and previously observed in the wild, with gamarue and remcos the two most prevalent
    • Gamarue gives attackers full control over an infected PC to steal information and modify security settings, while Remcos is a remote-access trojan (RAT) whose binary, executed with the right parameters, enables full control over the infected machine for activities like keylogging and surveillance
  • Defender’s challenge:
    • The authors selected a research model that performed well with textual features (TF-IDF), static analysis, and a Random Forest classifier for defense against adversarial attacks
    • Initially, using malware samples from Brazilian cyberspace, the model showed a nearly 98% F1-score with a low false-negative rate, indicating high effectiveness
    • However, when tested against the 2019 contest samples, the model underperformed, suggesting that the distinct characteristics of the global samples compared to the Brazilian ones affected its effectiveness
    • The team retrained their classifier using the ember dataset, mirroring the approach used by contest organizers for their LightGBM model, resulting in a model that generalized better when tested with both Brazilian and Ember samples
    • Evaluation showed that each version of the model performed well in detecting malware within their respective datasets, although with regional limitations
    • The globally trained model had high detection rates but performed poorly with Brazilian samples, and vice versa
    • ML models trained on data from specific regions may fail to detect threats originating from other areas, emphasizing the need for region-specific model crafting for improved detection and effectiveness
    • They developed a model for the competition using EMBER datasets, selecting attributes less likely to be manipulated by adversaries, categorized into numerical, categorical, and textual types, and trained it with 1.6 million labeled samples using a Random Forest
    • The competition organizers provided adversarial samples from the previous year’s challenge, which were not used in the model’s training phase to instead test the model’s robustness during the evaluation phase
    • Categorical attributes like machine and magic from the PE header were converted into one-hot encoding arrays, and textual features were transformed into TF-IDF vectors for the top 300 words
    • Numerical features extracted from the PE header were normalized using MinMaxScaler, with a module created to extract these attributes from raw PE binaries for testing attacker samples and real-world applications
    • The transition from pefile to lief for PE parsing improved parsing speed but required model adjustments, since differences in parsing results affected feature representation and classification performance
    • Implemented a custom prediction function that uses the model’s class probabilities with a threshold of T=80% to classify samples as “goodware” or “malware,” reducing the False Positive Rate (FPR) to under 0.1%, compared with the 8.5% FPR of the default Random Forest prediction (a sketch of this feature pipeline and thresholded prediction appears after this list)
    • Their model detected 88.91% of the 594 adversarial samples, all of which were variations of the 50 original samples from the previous year
  • Attacker’s challenge:
    • Initial strategy of appending goodware strings and random bytes to samples achieved 44 points but failed to bypass one of the models, indicating increased model strength over the previous year
    • Advanced strategy using a dropper to embed and execute malware samples bypassed the first model but highlighted the increased complexity of the current year’s challenge
    • The malware dropper fully bypassed only one of the detectors, leading to a focus on bypassing the team’s own model by exploiting knowledge of its reliance on library imports and functions, particularly targeting the FindResource function
    • Attempts to conceal the FindResource API calls using packers increased the classifier’s confidence in identifying malware, prompting a strategy to mimic benign applications like the Calculator, which shared similar imports but was classified as benign
    • The approach’s success relied on mimicking all libraries and functions accurately, including matching the architecture, leading to the model’s complete bypass
    • Challenges in manipulating PE files due to the lack of standardized libraries impacted adversarial sample development, with issues in existing tools like lief breaking malware binaries, though the strategy of adding dead code successfully bypassed the ML model
    • Successful evasion relies on exact mimicry of all libraries and functions, including matching the architecture and linked libraries of the intended goodware binary, to avoid detection by analysis of function and library combinations (see the import-comparison sketch after this list)
    • Altering compilation settings, such as changing from 32 to 64 bits or from Release to Debug mode, affects the imports and linked libraries, impacting the ability to evade detection; precise configuration to match the goodware is crucial for evasion success
    • The team bypassed a malware detection model by hiding the embedded payload using base64 encoding or XOR encryption, making the malware unrecognizable to the model
    • These obfuscation techniques effectively concealed key identifiers like the MZ flag, allowing the team to fully bypass the model despite its detection mechanisms (see the payload-hiding sketch after this list)
    • Even for models that use section entropy as a feature, the dropping approach remained undetectable, because the dropper variations had entropy values similar to or lower than those of the original samples
    • Modifying malware samples to evade ML models also reduced their detection by antivirus (AV) scanners, demonstrating the real-world impact on system security
    • Adversarial samples were detected under significantly different labels, suggesting distinct detection mechanisms, with many being misclassified into a specific malware family targeting browser extensions for cryptocurrency theft
    • Embedding malware into droppers increases their similarity, as indicated by ssdeep scores, which could both help and hinder detection efforts due to shared characteristics and evolving malware tactics
    • There’s potential for using similarity scores in security solutions to detect adversarial malware, requiring attackers to consistently bypass all ML models to avoid detection
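To make the defender’s feature handling concrete, below is a minimal sketch (not the authors’ code) of a pipeline that one-hot encodes categorical PE-header fields, builds TF-IDF vectors over the top 300 textual terms, min-max scales numerical attributes, and feeds a Random Forest whose prediction is thresholded at 80% class probability. The column names and toy rows are hypothetical placeholders; the real model was trained on EMBER-derived features.

```python
# Minimal sketch of the described feature pipeline: one-hot categorical
# PE-header fields, TF-IDF over textual features (top 300 terms), min-max
# scaled numerical fields, and a Random Forest with an 80% probability threshold.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder

# Toy rows standing in for attributes extracted from PE binaries (hypothetical).
train = pd.DataFrame({
    "machine":    ["I386", "AMD64", "I386", "AMD64"],            # categorical
    "magic":      ["PE32", "PE32+", "PE32", "PE32+"],            # categorical
    "imports":    ["kernel32.dll LoadLibraryA FindResourceA",
                   "user32.dll MessageBoxW",
                   "advapi32.dll RegSetValueExW CreateServiceW",
                   "kernel32.dll GetModuleHandleW"],              # textual
    "size":       [120_832, 54_272, 987_648, 33_280],             # numerical
    "n_sections": [5, 4, 7, 3],                                   # numerical
})
labels = [1, 0, 1, 0]  # 1 = malware, 0 = goodware

features = ColumnTransformer([
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["machine", "magic"]),
    ("txt", TfidfVectorizer(max_features=300), "imports"),
    ("num", MinMaxScaler(), ["size", "n_sections"]),
])

model = Pipeline([
    ("features", features),
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
]).fit(train, labels)

def predict_with_threshold(df, threshold=0.80):
    """Label a sample as malware only when its malware-class probability
    exceeds the threshold, mirroring the custom prediction rule above."""
    proba = model.predict_proba(df)[:, 1]
    return ["malware" if p >= threshold else "goodware" for p in proba]

print(predict_with_threshold(train))
```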
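The mimicry strategy boils down to making the malware’s imported libraries and functions match those of a benign reference such as the Calculator. Here is a rough sketch of how one could diff two binaries’ import sets with lief; the file paths are placeholders and this illustrates the idea, not the authors’ tooling.

```python
# Sketch: compare the imported (library, function) pairs of an adversarial
# sample against a benign reference (e.g., the Calculator) to see which
# imports still give it away. File paths are hypothetical placeholders.
import lief

def import_set(path):
    """Return the set of (library, function) pairs imported by a PE file."""
    binary = lief.parse(path)
    pairs = set()
    for library in binary.imports:
        for entry in library.entries:
            if entry.name:  # skip ordinal-only imports
                pairs.add((library.name.lower(), entry.name))
    return pairs

goodware = import_set("calc.exe")     # benign binary whose imports are mimicked
malware = import_set("dropper.exe")   # adversarial sample under construction

print("Imports still missing from the mimic:", goodware - malware)
print("Suspicious imports not in the goodware:", malware - goodware)
```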
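The payload-hiding step itself is simple enough to show end to end: XOR the embedded payload with a repeating key, base64-encode the result so markers like the MZ flag disappear, and compare the byte entropy of each variant, which ties back to the section-entropy observation above. This is a generic sketch of the technique with a stand-in payload, not the authors’ dropper code.

```python
# Sketch: hide an embedded payload with a repeating-key XOR and base64 so that
# plain markers such as the "MZ" header are not visible to a static model, and
# compare the byte entropy of each variant. The payload below is a stand-in.
import base64
import math
from collections import Counter
from itertools import cycle

def xor_bytes(data, key):
    """Repeating-key XOR (applying it twice restores the original bytes)."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

def entropy(data):
    """Shannon entropy in bits per byte (8.0 means maximally random)."""
    counts = Counter(data)
    return -sum((c / len(data)) * math.log2(c / len(data)) for c in counts.values())

payload = b"MZ" + bytes(range(256)) * 64   # stand-in for an embedded PE file
key = b"\x42\x13\x37"

xored = xor_bytes(payload, key)            # the raw "MZ" flag no longer appears
encoded = base64.b64encode(xored)          # printable output, no null bytes

assert b"MZ" not in xored                  # marker hidden from naive scanning
assert xor_bytes(xored, key) == payload    # the dropper can recover the payload

# Base64 uses a 64-character alphabet, so its entropy tops out around 6 bits
# per byte, at or below the near-8 bits of a packed or encrypted payload.
for name, blob in [("raw", payload), ("xor", xored), ("base64(xor)", encoded)]:
    print(f"{name:12s} entropy = {entropy(blob):.2f} bits/byte")
```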

Why Defending is Harder

  • Adversarial samples were not detected by models because they were made to resemble benign applications like a Calculator
  • To test the hypothesis that the model could be hardened, it was retrained with datasets that included both the adversarial samples the authors developed and the original benign samples (a minimal retraining sketch appears after this list)
  • Training with only EMBER datasets detected all original malware (0% FNR) with low false positives (0.1% FPR), but failed to detect new adversarial malware (100% FNR)
  • Adding adversarial samples to training: detected all malware types (0% FNR) but significantly misclassified goodware as malware (78.54% FPR)
  • Detecting adversarial samples is complex, as AV companies gather them over time, mixing adversarial attack issues with concept drift, involving context shifts and obfuscations
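The hardening experiment is essentially adversarial training: fold the adversarial samples back into the training set, re-fit, and re-measure FNR/FPR. Below is a minimal, self-contained sketch with randomly generated placeholder feature matrices; it illustrates only the retraining step and does not reproduce the paper’s numbers.

```python
# Sketch: harden the classifier by folding adversarial samples back into the
# training set and re-fitting, then re-measuring FNR/FPR on a held-out set.
# All arrays are random placeholders for already-extracted feature matrices.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(1000, 32)), rng.integers(0, 2, 1000)
X_adv = rng.normal(loc=0.5, size=(100, 32))          # adversarial malware features
X_test, y_test = rng.normal(size=(500, 32)), rng.integers(0, 2, 500)

# Augment the training data: adversarial samples are all labeled as malware (1).
X_aug = np.vstack([X_train, X_adv])
y_aug = np.concatenate([y_train, np.ones(len(X_adv), dtype=int)])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_aug, y_aug)
pred = clf.predict(X_test)

fnr = np.mean(pred[y_test == 1] == 0)   # malware the hardened model still misses
fpr = np.mean(pred[y_test == 0] == 1)   # goodware it now flags as malware
print(f"FNR = {fnr:.2%}, FPR = {fpr:.2%}")
```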

Discussion

  • ML-based malware detectors need greater robustness against adversarial attacks, as even the most resistant models in the latest competition were bypassed, indicating a need for more effective feature selection and representation
  • There’s a push for explainable attacks and defenses, highlighting the importance of understanding model failures through knowledge-based, manual attacks to guide the development of future, more secure and understandable ML solutions
  • Adversarial attacks are effective and should be incorporated into threat models and ML training; code and tools for generating attacks are released to the community
  • Findings indicate the need for next-gen security solutions to address payload embedding in binaries and suggest improvements in ML model defenses
  • The authors anticipate an ongoing arms race in ML-based malware detection, highlighting the evolution of attack techniques and the need for adaptive, performance-efficient defense mechanisms

My Thoughts

This paper highlighted the importance of malware detection robustness by presenting challenges in both defending against adversarial malware and generating it. The success of adversarial attacks in evading sophisticated ML models not only showcases the ingenuity and persistence of attackers but also underscores the inherent vulnerabilities in current defensive mechanisms, which calls for enhancements in feature extraction, accounting for adversarial malware during training, and more. As someone deeply interested in cybersecurity, I find these developments both alarming and fascinating. The ability of attackers to adapt and circumvent even the most advanced models, through simple techniques such as base64 and XOR, speaks volumes about the dynamic nature of cyber threats. The competitions, by facilitating a direct confrontation between offensive and defensive strategies, provide invaluable insights into the effectiveness of current approaches and the urgent need for innovation in this space.

The release of adversarial attack tools and techniques to the public domain, as mentioned, could serve as a double-edged sword, potentially elevating the overall security posture by enabling widespread testing and improvement of defensive models, yet also offering adversaries new methods to exploit. This ongoing arms race in ML-based malware detection necessitates a continuous, collaborative effort to refine and develop more resilient defense mechanisms. The challenges highlighted in bypassing malware detection models using simple yet effective techniques such as XOR encryption or base64 encoding reveal the sophistication of threats and the need for equally sophisticated responses. Moreover, the shift towards explainable AI and the emphasis on understanding the failures of ML models through adversarial attacks pave the way for more secure, transparent, and robust cybersecurity solutions.


Discussion Summary


  • Typically you don’t attack a model directly, you make a copy of the model and attack it locally
  • All droppers share the same skeleton; everything except the payload is identical
    • Locality sensitive hashing: clustering variants in front of the classifier (if you encounter a sample that is similar to one detected before, reject it before it reaches the model; see the fuzzy-hash sketch after this list)
    • Pipeline of solutions is important, the model isn’t going to do everything
  • Using both base64 and XOR
    • XORing a payload with a key can cause issues: wherever a payload byte equals the corresponding key byte, the XOR produces a 0 byte, which can terminate a scan
    • Base64 is used on top because its output never contains a 0 byte
    • XOR is also reversible, and in the paper’s case the key was hardcoded in the binary; a more robust option is to retrieve the key from a remote server
  • Frequency analysis
    • Example: a Caesar cipher shifts letters by a fixed offset, so if you know which characters are most frequent in the plaintext, the shift of the most frequent ciphertext character reveals the offset
    • XOR with a repeating key is similar: using frequency analysis across enough ciphertext (or multiple encounters), one can recover the key (see the key-recovery sketch after this list)
    • XOR is really an encoding; to be more robust, attackers can use proper encryption methods instead
  • Dead code in this competition amounted to strings (the names of functions)
    • Using function names as features makes a model easy to bypass
    • A call graph is more robust, as adding dead code would change the call graph
    • Even so, all of these methods can ultimately be bypassed by mimicking
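The clustering idea can be prototyped with fuzzy hashing, which the paper already touches on via ssdeep scores: hash each incoming sample and, if it is sufficiently similar to something previously flagged, reject it before the classifier ever runs. A rough sketch using the python ssdeep bindings follows; the 70-point threshold is an arbitrary assumption.

```python
# Sketch of a similarity pre-filter in front of the classifier: if a new
# sample's fuzzy hash is close enough to a previously detected sample, reject
# it without querying the ML model at all. The threshold is an arbitrary choice.
import ssdeep  # python bindings for the ssdeep fuzzy hash

known_malicious_hashes = []    # fuzzy hashes of samples already flagged
SIMILARITY_THRESHOLD = 70      # ssdeep.compare returns a 0-100 similarity score

def screen(sample_bytes, classify):
    """Return a verdict from the pre-filter or fall through to the classifier."""
    h = ssdeep.hash(sample_bytes)
    for known in known_malicious_hashes:
        if ssdeep.compare(h, known) >= SIMILARITY_THRESHOLD:
            return "malware"               # looks like a variant of a known sample
    verdict = classify(sample_bytes)       # e.g., the Random Forest pipeline
    if verdict == "malware":
        known_malicious_hashes.append(h)   # remember it to catch future variants
    return verdict
```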
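The frequency-analysis point can also be made concrete for a repeating-key XOR: in each key position, the most common ciphertext byte usually corresponds to the most common plaintext byte (0x00 dominates most PE files), which directly reveals that key byte. A toy sketch under those assumptions:

```python
# Sketch: recover a repeating XOR key by frequency analysis, assuming the most
# common plaintext byte is known (0x00 padding dominates most PE files).
from collections import Counter
from itertools import cycle

def xor_bytes(data, key):
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

def recover_key(ciphertext, key_len, common_plain=0x00):
    """In each key position, the most frequent ciphertext byte XORed with the
    expected most frequent plaintext byte yields that key byte."""
    key = bytearray()
    for offset in range(key_len):
        column = ciphertext[offset::key_len]
        most_common_byte, _ = Counter(column).most_common(1)[0]
        key.append(most_common_byte ^ common_plain)
    return bytes(key)

# Toy "PE-like" plaintext: mostly 0x00 padding plus a little real content.
plaintext = b"MZ\x90\x00" + b"\x00" * 4000 + b"This program cannot be run in DOS mode." * 20
ciphertext = xor_bytes(plaintext, b"\x42\x13\x37")

print(recover_key(ciphertext, key_len=3))   # recovers b"\x42\x13\x37"
```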

My Thoughts

The discussion expanded on the main ideas presented in the seminar. As mentioned before, it is better practice to attack a copy of a model locally. This stems from various reasons, the most important being the risk of raising flags. Attacks have a low success rate the first time around, so repeated attempts can surface anomalies on the defenders’ end, hinting that an attack is occurring. Locality sensitive hashing is an excellent and simple approach that I had not previously considered (head slap). From an optimization point of view, it reduces the complexity of having to completely pass through a model with many parameters to produce a prediction. Intuitively, it also prevents a potential misclassification by simply following the law of locality, as seen in various other computer science fields such as computer architecture. The discussion of the consequences of embedding a key within an XOR-modified payload also sparked new insights.

As discussed, XOR has its issues, which is why both base64 and XOR are used. Attaching a key to a payload modified by XOR can cause issues such as terminating a scan or allowing the key to be intercepted. The suggestion of retrieving the key from a remote server is an essential measure for attackers. There are more pitfalls to utilizing XOR, such as the key being recoverable through frequency analysis. While this adversarial malware generation method demonstrated impressive success, attackers have a plethora of other encryption algorithms to enhance their approaches, one being RSA. The authors explained how they bypassed the competition’s models by mimicking goodware functions and libraries as closely as possible using dead code. A better alternative discussed is the use of call graphs. However, with any approach, models can be bypassed by purely mimicking. Thus, as a defense, it is imperative to have an adversarial mindset and include every detail in threat models.


That is all for my summary, thanks for reading!

