Functionality-Preserving Black-Box Optimization of Adversarial Windows Malware (Seminar 7.1)
NOTE: This blog is for a special topics course at Texas A&M (ML for Cyber Defenses). During each lecture a student presents information from the assigned paper. This blog summarizes and further discusses each topic.
During this seminar, Vasudeva Vallabhajosyula presented Functionality-Preserving Black-Box Optimization of Adversarial Windows Malware. After their presentation our class had an open discussion related to the paper and more. This blog post will cover a summary of the information presented as well as a summary of our class discussion.
Presentation Summary
Introduction
- Existing ML-based malware detection systems face challenges with adversarial machine learning, where attackers manipulate data to evade detection, questioning the security of such systems, especially when deployed as cloud services
- Black-box attacks on malware detection are inefficient due to high query numbers, complex optimizations, and excessive input manipulations, often lacking targeted evasion strategies
- These attacks require computationally demanding verification steps, including sandbox testing for each iteration, and result in easily detectable anomalies due to significant changes in file characteristics
- This paper introduces a novel set of black-box attacks for adversarial malware that are query-efficient, utilize benign content for evasion, and preserve the malware’s functionality
- Attacks use file format ambiguities to inject content without changing execution, aiming for stealth through constrained minimization to penalize payload size
- Demonstrated efficacy on two learning-based Windows malware detectors through empirical evaluation, showing the ability to bypass them with few changes and also to evade more than 12 commercial antivirus engines
Programs and Malware Detection
- Windows PE Format
- DOS Header and Stub: The header contains metadata for loading the executable inside a DOS environment and the stub prints a statement if executed inside a DOS environment
- Header and stub are used to maintain compatibility with older Microsoft operating systems
- In modern applications there are only two relevant portions inside the DOS header: MZ (a 2-byte signature for the file) and the 4-byte integer at offset 0x3c, which works as a pointer to the real PE header
- Those two fields must be set correctly, otherwise the program is considered corrupted and will not be executed (a short parsing sketch follows this PE-format overview)
- PE Header: contains the magic number PE and other file characteristics (header size, target architecture, file attributes, etc.)
- Optional Header: contains essential information for the OS to initialize the program, and offsets that point to useful structures such as the Import Address Table and the Export Table offset
- Section Table: A list of entries that denote each core component’s characteristics and where the OS loader should find them
- Sections: Contiguous chunks of bytes that host the real content of the executable (.text contains the code, .data contains global variables, .rdata contains read-only constants, etc.)
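As a concrete illustration, here is a minimal Python sketch of how those two DOS-header fields can be read; `find_pe_header` is an illustrative helper, not code from the paper.

```python
import struct

def find_pe_header(path):
    """Follow the DOS header fields described above: check the 'MZ'
    magic, then read the 4-byte offset at 0x3c to find the PE header."""
    with open(path, "rb") as f:
        data = f.read()
    # A valid executable starts with the 2-byte 'MZ' signature.
    if data[:2] != b"MZ":
        raise ValueError("missing MZ signature")
    # The little-endian 32-bit integer at offset 0x3c points to the
    # real PE header.
    (e_lfanew,) = struct.unpack_from("<I", data, 0x3C)
    # The PE header itself starts with the 4-byte magic 'PE\0\0'.
    if data[e_lfanew:e_lfanew + 4] != b"PE\x00\x00":
        raise ValueError("corrupted file: PE signature not found")
    return e_lfanew
```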
- Learning-Based Windows Malware Detection
- MalConv:
- End-to-end CNN that takes the first 2 MB of an executable as input and outputs the probability of it being malware
- Executables are either truncated or padded with zeros to meet the 2 MB threshold, with byte values shifted by one so that real bytes never collide with the padding value (a small sketch of this preprocessing follows)
- Each byte is embedded into an 8-dimensional representation space, and the convolutional layers correlate spatially distant bytes inside the input (e.g., jumps and function calls)
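This preprocessing can be pictured with a short sketch (an illustration of the scheme described above, not MalConv's actual code; `malconv_preprocess` is a hypothetical name):

```python
import numpy as np

def malconv_preprocess(data: bytes, max_len=2 * 1024 * 1024):
    """Truncate or pad raw bytes to 2 MB; values are shifted by one so
    that the padding value 0 never collides with a real byte."""
    x = np.frombuffer(data[:max_len], dtype=np.uint8).astype(np.int64) + 1
    if len(x) < max_len:
        x = np.pad(x, (0, max_len - len(x)))  # pad the tail with zeros
    return x  # integer sequence fed to the embedding layer
```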
- Gradient Boosting Decision Trees (GBDT):
- Fixed representation consisting of 2381 features from:
- General file information (virtual file size, number of imported and exported functions, presence of debug sections)
- Header information (accounts for the characteristics of the executable, target architecture, version, etc.)
- Byte histogram (counts the occurrences of each byte value divided by the total number of bytes; see the sketch after this feature list)
- Byte-entropy histogram (accounts for the entropy of the byte distribution of the file, applying a sliding window over the binary)
- Information taken from strings (counts the number of occurrences of each string)
- Imported and exported functions (tracks all the functions imported from libraries, and all the ones that are exposed to other programs)
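To make one of these feature groups concrete, here is a minimal sketch of a normalized byte histogram (an illustration of the idea only, not the actual GBDT/EMBER feature extractor):

```python
import numpy as np

def byte_histogram(path):
    """Count the occurrences of each byte value and divide by the
    total number of bytes, yielding a 256-dimensional feature vector."""
    data = np.frombuffer(open(path, "rb").read(), dtype=np.uint8)
    counts = np.bincount(data, minlength=256)
    return counts / max(len(data), 1)
```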
Black-Box Optimization of Adversarial Windows Malware
- GAMMA is a novel black-box attack framework for optimizing adversarial malware samples without needing internal access to the target model
- It uses functionality-preserving manipulations within the PE format to inject benign-looking content, avoiding the need for validation steps and making evasion more efficient
- The approach is framed as a constrained optimization problem to minimize detection risk and injected content size, enhancing stealthiness
- The objective function F(s) = f(x ⊕ s) + λ·C(s) combines the classification output on the manipulated malware, f(x ⊕ s), with a penalty C(s) on the number of injected bytes; the hyperparameter λ trades off attack effectiveness against payload size (see the sketch below)
- Raising λ shrinks the adversarial payload at the cost of evasiveness, and lowering it does the opposite
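In code, the objective can be pictured as follows (a minimal sketch under the notation above; `f`, `inject`, and the payload encoding are placeholders rather than the authors' implementation):

```python
def gamma_objective(f, x, s, lam, inject):
    """F(s) = f(x ⊕ s) + λ·C(s): the detector's score on the
    manipulated sample plus a size penalty on the injected content."""
    detection_score = f(inject(x, s))                 # f(x ⊕ s), in [0, 1]
    payload_penalty = sum(len(chunk) for chunk in s)  # C(s): injected bytes
    return detection_score + lam * payload_penalty
```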
- Manipulations are limited to injecting content taken from a set of benign sections; because payload size is penalized, the optimizer is pushed toward sparse injection strategies
- Solution algorithm:
- Begins by randomly creating an initial population of candidate vectors, then iterates through selection, crossover, and mutation steps to mimic biological evolution, refining candidates based on their objective function values
- During each iteration, the algorithm selects the best candidates, generates new ones by mixing the values of randomly chosen pairs, and introduces mutations to explore different regions of the solution space (a toy version of this loop is sketched below)
- In hard-label attacks on models that only return classification labels, GAMMA is adapted by treating outputs as benign (f(x) = 0) or not (f(x) = ∞), immediately discarding malware variants that fail to evade detection
- This approach randomly transforms the malware until an evasive variant is found, then reduces the payload size while maintaining misclassification; the random exploration turns out not to hinder success significantly
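A toy version of this evolutionary loop might look like the following (a from-scratch sketch for intuition; the authors used the DEAP library, and the candidate encoding is simplified here):

```python
import random

def genetic_attack(objective, n_genes, n_pop=20, n_iter=50, mut_rate=0.1):
    """Minimize objective(s) over candidate vectors s in [0, 1]^n_genes.
    Each call to objective costs one query to the target detector."""
    pop = [[random.random() for _ in range(n_genes)] for _ in range(n_pop)]
    for _ in range(n_iter):
        # Selection: keep the half of the population with the lowest F(s).
        pop.sort(key=objective)
        survivors = pop[: n_pop // 2]
        children = []
        while len(survivors) + len(children) < n_pop:
            # Crossover: mix the genes of two randomly chosen parents.
            a, b = random.sample(survivors, 2)
            child = [random.choice(pair) for pair in zip(a, b)]
            # Mutation: re-randomize some genes to explore new regions.
            child = [random.random() if random.random() < mut_rate else g
                     for g in child]
            children.append(child)
        pop = survivors + children
    return min(pop, key=objective)  # best payload encoding found
```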
- Functionality-Preserving Manipulations:
- Structural:
- This family of manipulations affects only the structure of the input program, by exploiting ambiguities inside the file format, without altering its behaviour
- Examples include perturbing header fields, filling slack space, padding, manipulating the DOS header and stub, extending the DOS header, content shifting, import function injection, and section injection
- Behavioral:
- This family of perturbations can change the program's behavior and execution traces, while still preserving the intended functionality of the malware
- Examples include packing, direct, minimally invasive, full translation, and dropper manipulations
- Padding and section-injection attacks:
- GAMMA utilizes padding and section-injection attacks for structural manipulation, injecting content into executables with varying degrees of header component alteration
- The padding attack injects bytes into unused space at the end of the file without changing the headers, whereas section injection alters the executable's structure by adding new entries to the section table (a minimal padding sketch follows this list)
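Of the two, padding is the simplest to picture: the appended bytes sit past the data declared in the section table (the overlay), which the Windows loader never maps into memory. A minimal sketch (illustrative only; a real section injection would instead require a PE-editing library such as LIEF):

```python
def padding_attack(malware_path, benign_bytes, out_path):
    """Append benign content after the end of the file; the appended
    overlay is never loaded, so functionality is preserved."""
    with open(malware_path, "rb") as f:
        data = f.read()
    with open(out_path, "wb") as f:
        f.write(data + benign_bytes)  # x ⊕ s, with s appended as overlay
```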
Experiments
- Conducted empirical evaluations of attack effectiveness on GBDT and MalConv malware detectors using a high-spec workstation, testing with varying query budgets and employing a genetic optimizer developed with DEAP
- Modified the MalConv architecture for the experiments, applied a genetic optimization approach with specific parameters, and used a dataset of benign programs as the source of injected content, with the process and tools openly shared as secml-malware
- Performance in the Absence of Attack:
- Evaluated GBDT and MalConv on 15,000 benign and 15,000 malware samples, finding GBDT has a lower False Positive Rate (3.9%) and higher True Positive Rate (95%) than MalConv with its default threshold
- Data sourced from VirusTotal (malware) and GitHub (goodware), with results supporting GBDT’s effectiveness as a baseline for analysis, despite slightly lower scores than initially reported
- Attack evaluation:
- A 500-sample set from a 15K malware collection, featuring various malware types, was used to evaluate adversarial attacks, observing that more queries and lower regularization parameter values lead to more evasive samples with larger payloads
- Increasing the regularization parameter results in smaller, more detectable adversarial examples due to the penalty term dominating the optimization process, while more queries improve the stealth and efficacy of these examples
- Experimental results show that attacks incorporating benign content injection are significantly more effective at evading detection than random perturbations, with section-injection attacks outperforming padding by affecting more features
- Hard-label attacks:
- Hard-label attacks improve by optimizing the adversarial payload size through iterations without needing the confidence score, influenced by the regularization parameter λ and the nature of the injected content
- Increasing query numbers reduces adversarial payload size with minimal impact on misclassification confidence, demonstrating the method’s effectiveness in mimicking benign class behaviors
- Injected adversarial payloads, though larger than original malware (averaging 343 KB), remain comparable in size to benign programs (average 240 KB), even after manipulation
- Temporal analysis:
- The complexity of GAMMA mainly depends on the time taken to query the detector
- The combined time for feature extraction and GBDT prediction is shorter than the neural network’s processing time for all bytes
- Packing effect:
- Packing (compression, encryption, or encoding of program content) changes a program's on-disk representation, making analysis and reverse engineering more challenging
- Applying the UPX packer to malware and goodware demonstrates that detectors often misclassify packed goodware as malware due to learned associations with malicious features
- The prevalence of packed malware in training sets, along with packer-specific artifacts, biases detectors to classify packed programs as malicious
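For reference, packing a binary with UPX is a one-line operation (assuming UPX is installed and on the PATH; the file names are illustrative):

```python
import subprocess

# Compress program.exe at the highest compression level into packed.exe.
subprocess.run(["upx", "-9", "-o", "packed.exe", "program.exe"], check=True)
```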
- Evaluation on AV programs (VirusTotal):
- The study evaluates the effectiveness of commercial antivirus programs against minimally modified malware samples, focusing on syntactical changes rather than evasion techniques
- Utilizing VirusTotal, an online service that aggregates numerous threat detectors, the study assesses the impact of a section-injection attack versus a baseline random attack on malware detection rates
- The section-injection attack, which injects an adversarial payload into malware samples, bypassed an average of over 12 antivirus detectors per sample, outperforming the random attack
- Among the antivirus programs tested, some showed a significant decrease in detection rates after the section-injection attack, suggesting they rely on machine learning-based components that inherit these vulnerabilities
- The findings suggest commercial antivirus products can be evaded using sophisticated attacks, highlighting a potential decrease in detection effectiveness with optimized attacks
My Thoughts
This paper presents a significant step forward in understanding the vulnerabilities of machine learning-based malware detection systems, particularly those deployed as cloud services. The introduction of an innovative set of black-box attacks that are both query-efficient and stealthy, through the use of benign content injection and file format ambiguities, highlights a critical gap in current detection methodologies. It’s fascinating how these attacks preserve malware functionality while evading detection, suggesting that our current defenses might be lackluster against adversaries who exploit the inherent limitations of these systems. The success of these methods against prominent Windows malware detectors, and their ability to bypass over 12 commercial antivirus engines with minimal modifications, raises important questions about the resilience of our digital security infrastructures.
The GAMMA framework’s approach to optimizing adversarial malware through benign content injection and constrained optimization problems is a clever technique that could have broader implications for both offensive and defensive cybersecurity practices. It’s intriguing to think about future work in this area, especially in extending these methods to target dynamic malware analysis classifiers, which could potentially lead to even more robust evasion techniques. The balance between detection efficiency and maintaining system security seems increasingly weak, necessitating ongoing innovation in both malware creation and detection methodologies.
Discussion Summary
- This paper gives a more formal treatment of adversarial attack resources and functionality constraints (providing more guarantees)
- At the simplest level, attackers want to minimize the detection rate
- To do that, attackers need to modify the sample (the easiest way is to add more content to the file)
- Multi-objective function: minimize the detection rate while also minimizing the size of the payload. One would also want to minimize the number of queries, as an excessive number of queries can raise a flag.
- Minimizing one component raises the other(s), so there needs to be a balance
- These problems can grow exponentially and be hard to minimize exactly; practical options are heuristics, approximation, and similar techniques
- The key message of the paper is that malware evasion can be framed as a multi-objective optimization problem
- Need to preserve the functionality when minimizing the objective (for instance removing the encryption of ransomware defeats the purpose of it being malicious)
- Whatever modification is made in the feature space (mathematically) must always map back to a valid program in the problem space (the real world)
- Modify regions of the binary that do not affect the binary's functionality (e.g., adding something to the end of the binary)
- Removal from binaries is a problem, you end up breaking the binary
- Adding is more suggested
- The paper suggests spaces of the binary that are safe to modify, so add bytes to those spaces
- If there are no safe spaces, create those spaces
- Space can even be created in the middle of a binary; in a C program, for example, the actual entry point of execution lies in the C runtime library rather than in user code, leaving room for such restructuring
- Soft labels vs hard labels attacks
- Two further attack settings, alongside the white-box/black-box distinction
- They are defined by the information pulled from the classifier
- Hard labels tell you if a sample is malicious or not
- Soft labels tell you whether a sample is malicious or not, along with a confidence score
- As a defense, to defend against these attacks you can:
- Provide less information publicly (less information about the models, using hard labels, etc.)
- Soft labels provide attackers with feedback, so use a pool of models: one model answers a query with its confidence score, and a different model answers the next query
- This information is still useful for legitimate users, but does not let the attacker infer anything consistent
- This defense is called a Moving Target Defense (a minimal sketch follows)
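A minimal sketch of that idea (the class and its interface are hypothetical, for illustration only):

```python
import random

class MovingTargetDetector:
    """Answer each query with the confidence score of a different model
    from a pool, so consecutive queries give the attacker inconsistent
    feedback while each individual answer remains useful."""

    def __init__(self, models):
        self.models = models  # pool of independently trained detectors

    def predict(self, sample):
        # Picking a model at random per query turns the attacker's
        # optimization target into a moving one.
        return random.choice(self.models).predict(sample)
```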
My Thoughts
This discussion helped solidify my understanding of the paper's main idea. The paper and seminar provided a formal treatment for developing efficient adversarial attacks, especially black-box attacks. As mentioned in the seminar, the authors formulated the adversarial attack problem as a constrained minimization of an objective function. There are multiple objectives to minimize, such as the detection rate, the size of the payload, and the number of queries. Ideally, all of these objectives would be minimized. However, the paper and discussion demonstrate that this is not possible, since optimizing one objective can counter the effects of the others, as explained in the experiments regarding the tradeoffs between the number of queries and the regularization parameter.
The discussion of modifiable spaces was very intriguing as well. When crafting adversarial attacks, it is important to modify only spaces that do not impact the binary's functionality, and the discussion expanded on which spaces those are. Not all binaries will have such spaces; thus, creating them is a potential solution. Oftentimes, we see white-box and black-box attacks as the main categories of attacks. However, the soft-label versus hard-label distinction can be seen as a middle ground between white box and black box. Soft labels provide more information, which can make black-box adversarial attacks more efficient. Therefore, accounting for that issue by using a pool of models seems imperative.
That is all, thanks for reading!