EvilModel 2.0: Bringing Neural Network Models into Malware Attacks

Published in Computers & Security, 2022

Recommended citation: Wang, Z., Liu, C., Cui, X., et al. (2022). EvilModel 2.0: Bringing Neural Network Models into Malware Attacks. Computers & Security, 120, 102807. https://doi.org/10.1016/j.cose.2022.102807


Implementations: See EvilModel.

Abstract: Security issues have gradually emerged with the continuous development of artificial intelligence (AI). Earlier work verified the possibility of converting neural network models into stegomalware by embedding malware into a model with limited impact on the model's performance. However, existing methods are not applicable to real-world attack scenarios and have not attracted enough attention from the security community because of their performance degradation and additional workload. We therefore propose EvilModel, an improved stegomalware. By analyzing the composition of neural network models, we propose three new methods for embedding malware into a model: MSB reservation, fast substitution, and half substitution, which can embed malware accounting for half of the model's volume without affecting the model's performance. We built 550 EvilModels using ten mainstream neural network models and 19 malware samples. Experiments show that EvilModel achieves an embedding rate of 48.52%. We also propose a quantitative algorithm to evaluate existing embedding methods, design a trigger, and present a threat scenario for targeted attacks. Experiments and analyses of embedding capacity, performance impact, and detection evasion demonstrate the practicality and effectiveness of the proposed methods.

Keywords: Neural network, malware, AI-powered attack, network security, steganography
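
To illustrate the general idea behind the half-substitution embedding named in the abstract, here is a minimal sketch (not the paper's implementation) that overwrites the two low-order bytes of each little-endian float32 weight with payload bytes while preserving the sign, exponent, and high-mantissa bytes. The function name, the NumPy-array representation of the parameters, and the two-bytes-per-weight budget are assumptions made for this sketch.

```python
import numpy as np

def half_substitute(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Sketch of half substitution: overwrite the two low-order bytes of
    each little-endian float32 weight with payload bytes, keeping the
    sign, exponent, and high-mantissa bytes so each value changes little.
    (Hypothetical helper for illustration only.)"""
    flat = weights.astype(np.float32).ravel().copy()
    raw = flat.view(np.uint8).reshape(-1, 4)      # 4 bytes per float32 weight
    if len(payload) > 2 * raw.shape[0]:           # 2 payload bytes per weight
        raise ValueError("payload exceeds embedding capacity")
    data = np.frombuffer(payload, dtype=np.uint8)
    full, rest = divmod(len(data), 2)
    raw[:full, 0:2] = data[: 2 * full].reshape(-1, 2)  # bytes 0-1 are low-order (little-endian)
    if rest:                                           # odd trailing byte, if any
        raw[full, 0] = data[-1]
    return flat.reshape(weights.shape)
```

An extractor that knows the payload length could read the same byte positions back. The two-of-four-bytes budget here simply mirrors the abstract's claim that half of the model's volume can carry malware; the paper's MSB-reservation and fast-substitution methods make different choices about which bytes to preserve or replace.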