Evading deep learning-based DGA detectors: current problems and solutions

As a result of the increased use of machine learning (ML) applications in current society, and of multi-domain operations in the foreseeable future, there is a surge in adversarial machine learning (AML). One domain where AML can prove to be a critical problem is cyber security: the use of AI in security software creates new attack vectors for adversaries. One example of such an attack vector is the use of adversarial domain generation algorithms (DGAs). These adversarial DGAs claim to generate malicious domains that successfully evade deep learning-based DGA detectors. We test two state-of-the-art deep learning-based DGA detectors, DGA Detective [1] and B-ResNet [2], against four different adversarial DGAs. The tested DGAs each use a different adversarial technique and together provide a fitting reflection of the types of DGAs present in the literature. Moreover, they can be implemented by adversaries with basic programming and AI knowledge. We find that both DGA detectors reach near-perfect performance on real malware domains, but show a dramatic decline in performance on adversarially generated domains. To counteract the adversarial DGAs, we test two methods for improving the adversarial robustness of the detectors: adversarial training and a residual loss. Adversarial training yields a ∼12%-20% average increase in accuracy for both detectors on a data set comprising benign domains, existing malware domains and adversarially generated domains. The residual loss improves average detection accuracy on the same data set by ∼4% for DGA Detective, but causes a ∼5% decline in average accuracy for B-ResNet. © 2024 SPIE.
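The adversarial training described in the abstract amounts to retraining a detector on a mix of benign domains, real DGA domains, and adversarially generated domains, all correctly labelled. The following is a minimal sketch of that loop, not the paper's actual setup: the toy Detector architecture, the adversarial_dga() placeholder generator, and the tiny in-line data set are all illustrative assumptions.

```python
# Minimal sketch of adversarial training for a character-level DGA detector.
# Everything below (architecture, generator, data) is illustrative, not the
# paper's implementation.
import random
import string
import torch
import torch.nn as nn

CHARS = string.ascii_lowercase + string.digits + "-."
CHAR2IDX = {c: i + 1 for i, c in enumerate(CHARS)}  # index 0 = padding
MAX_LEN = 63  # maximum DNS label length

def encode(domain: str) -> torch.Tensor:
    """Map a domain name to a fixed-length tensor of character indices."""
    idx = [CHAR2IDX.get(c, 0) for c in domain.lower()[:MAX_LEN]]
    return torch.tensor(idx + [0] * (MAX_LEN - len(idx)))

class Detector(nn.Module):
    """Toy embedding + LSTM binary classifier (benign vs. DGA)."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(len(CHARS) + 1, 32, padding_idx=0)
        self.lstm = nn.LSTM(32, 64, batch_first=True)
        self.head = nn.Linear(64, 1)

    def forward(self, x):
        _, (h, _) = self.lstm(self.emb(x))
        return self.head(h[-1]).squeeze(-1)  # raw logit per domain

def adversarial_dga(n: int) -> list[str]:
    """Hypothetical stand-in for an adversarial DGA; a real attacker
    would use a learned generator instead of random strings."""
    return ["".join(random.choices(string.ascii_lowercase, k=12)) + ".com"
            for _ in range(n)]

benign = ["google.com", "wikipedia.org", "tno.nl", "example.com"]
classic_dga = ["xjw3kqpz81.net", "q8v2nmslrt.biz", "zzki29sjqm.org"]

model = Detector()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(50):
    # Adversarial training: augment each batch with freshly generated
    # adversarial domains, labelled as malicious (1.0).
    domains = benign + classic_dga + adversarial_dga(4)
    labels = torch.tensor([0.0] * len(benign)
                          + [1.0] * (len(domains) - len(benign)))
    batch = torch.stack([encode(d) for d in domains])
    opt.zero_grad()
    loss = loss_fn(model(batch), labels)
    loss.backward()
    opt.step()
```

Regenerating adversarial domains each epoch, rather than training on a fixed adversarial set, keeps the detector exposed to fresh evasive samples; this is one plausible design choice, not necessarily the one used in the paper.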
TNO Identifier
1011975
ISSN
0277-786X
ISBN
9781510674202
Publisher
SPIE
Source title
Proceedings of SPIE - The International Society for Optical Engineering
Files
To receive the publication files, please send an e-mail request to TNO Repository.