Over the past two decades, research has explored using AI to detect malware by extracting features and classifying them with machine learning algorithms. This has led several malware authors to focus their time and effort on attacking such detection techniques. Our team set out to do the same.
PEsidious is our research, developed to showcase how Artificial Intelligence can be used to create mutated, evasive malware samples. After working on malware detection using machine learning for over 8 months, we wanted to research and design a model that can defeat ML-based detectors by mutating existing samples into evasive malware. Our objective is to demonstrate that this technique is not restricted to offensive use by black hats; numerous security researchers and other security professionals can use it to strengthen their defenses.
Our approach first uses a Generative Adversarial Network (GAN) to generate an adversarial feature vector that makes malware look benign and fools machine learning models; this vector is then fed into a deep reinforcement learning agent. We use deep reinforcement learning to teach the agent which sequence of additional mutations can reduce the detection rate for the malware. These mutations include changes to imports, exports, headers, signature, sections, and size.
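To make the pipeline concrete, here is a minimal toy sketch of the GAN-then-RL idea described above. Everything in it is illustrative: the action names, the "generator" (a simple noise perturbation standing in for a trained GAN), the "detector" (a mean-of-features score), and the epsilon-greedy agent are assumptions for demonstration, not PEsidious's actual models or action set.

```python
import random

# Hypothetical mutation actions an agent could choose from (names are
# illustrative, not the project's actual action set).
ACTIONS = ["add_import", "add_export", "modify_header",
           "append_section", "pad_overlay", "rename_section"]

def gan_adversarial_vector(features, noise_scale=0.1, seed=0):
    """Toy stand-in for the GAN generator: perturb the malware's
    feature vector toward a benign-looking one."""
    rng = random.Random(seed)
    return [f + rng.uniform(-noise_scale, noise_scale) for f in features]

def detector_score(features):
    """Toy detector: mean of the features as a 'maliciousness' score."""
    return sum(features) / len(features)

def mutate(features, action):
    """Toy mutation: each action damps one feature of the vector."""
    i = ACTIONS.index(action) % len(features)
    mutated = list(features)
    mutated[i] *= 0.5
    return mutated

def evade(features, threshold=0.5, max_steps=20, epsilon=0.2, seed=0):
    """Epsilon-greedy agent: apply mutations until the detector score
    drops below the threshold or the step budget runs out."""
    rng = random.Random(seed)
    state = gan_adversarial_vector(features, seed=seed)  # GAN stage
    trace = []                                           # mutation sequence
    for _ in range(max_steps):                           # RL stage
        if detector_score(state) < threshold:
            break
        if rng.random() < epsilon:
            action = rng.choice(ACTIONS)                 # explore
        else:
            # Greedy: pick the action that lowers the score the most.
            action = min(ACTIONS,
                         key=lambda a: detector_score(mutate(state, a)))
        state = mutate(state, action)
        trace.append(action)
    return state, trace
```

In the real system the detector would be a trained ML classifier over static PE features and the mutations would be actual binary transformations that preserve functionality; the sketch only shows how the GAN output seeds the state and how the agent learns a mutation sequence against a score signal.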
Approaches using Generative Adversarial Networks and Reinforcement Learning have been taken before, but separately and each with its own limitations. Our research not only overcomes these limitations but also brings the two models together for better results.
Pesidious bagged the first-place prize and a whopping $40,000 at the HITB CyberWeek AI Challenge 2019, and we are back again with additional features to improve its efficiency and the chaos it brings with it!