
Attacks on machine learning systems have been systematized as adversarial machine learning, and a variety of attack algorithms have been studied to date. In the malware classification problem, several papers have suggested the possibility of real-world attacks against machine learning-based malware classification models. A data poisoning attack is a technique in which an attacker mixes poisoned data into the training data so that the model, trained on the poisoned data, misclassifies specific (or unspecified) inputs. Although various poisoning attacks that inject poison into the feature space of malware classification models have been proposed, Severi et al. proposed the first backdoor poisoning attack in the input space against malware detectors, injecting poison into the actual binary files during the data accumulation phase. They achieved an attack success rate of more than \(90\%\) by poisoning only \(1\%\) of the training data, with the backdoor embedded in approximately \(2\%\) of the entire feature set. To the best of our knowledge, no fundamental countermeasure against these attacks has been proposed. In this paper, we propose the first autoencoder-based countermeasure under a realistic threat model in which the defender has access only to the contaminated training data. Instead of using autoencoders as anomaly detectors, we replace all potentially attackable dimensions with surrogate data generated by the autoencoder. Our experiments show that this replacement significantly reduces the attack success rate while maintaining high prediction accuracy on clean data. Our results suggest a new possibility of autoencoders as a countermeasure against poisoning attacks.
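To make the quoted proportions concrete, the following is a deliberately generic, illustration-only sketch of feature-space backdoor poisoning: roughly \(1\%\) of the training samples are poisoned and the trigger occupies roughly \(2\%\) of the feature dimensions. It is not Severi et al.'s algorithm, which operates on the actual binary files; every dataset size, dimension count, and variable name here is an assumption made for the example.

```python
# Illustration only: a generic feature-space backdoor with ~1% poisoned samples
# and a trigger covering ~2% of the feature dimensions. Not Severi et al.'s attack.
import numpy as np

rng = np.random.default_rng(0)
n, d = 10_000, 500                                   # assumed dataset size / feature dim
X = rng.normal(size=(n, d)).astype(np.float32)       # toy feature vectors
y = rng.integers(0, 2, size=n)                       # 0 = benign, 1 = malware (toy labels)

trigger_dims = rng.choice(d, size=int(0.02 * d), replace=False)   # ~2% of the dimensions
trigger_vals = rng.normal(size=trigger_dims.size).astype(np.float32)

benign_idx = np.flatnonzero(y == 0)
poison_idx = rng.choice(benign_idx, size=int(0.01 * n), replace=False)  # ~1% poison rate
X_poisoned = X.copy()
X_poisoned[np.ix_(poison_idx, trigger_dims)] = trigger_vals       # stamp the trigger

# At test time the attacker stamps the same trigger onto malware feature vectors,
# hoping the model trained on X_poisoned has associated the trigger with "benign".
```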
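The countermeasure can be pictured as follows. This is a minimal sketch, not the authors' implementation: it assumes a simple fully connected autoencoder, an arbitrary feature dimensionality D, and a hypothetical index set ATTACKABLE of dimensions the defender treats as manipulable. Matching the stated threat model, the autoencoder is fit only on the (possibly contaminated) training set, and its reconstruction supplies the surrogate values for the attackable dimensions before the malware classifier is trained.

```python
# Minimal sketch (not the authors' code) of surrogate replacement with an autoencoder.
import torch
import torch.nn as nn

D = 256                                    # assumed feature dimensionality
ATTACKABLE = list(range(0, D, 50))         # hypothetical "potentially attackable" dims

class AE(nn.Module):
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.dec = nn.Linear(hidden, dim)

    def forward(self, x):
        return self.dec(self.enc(x))

def sanitize(train_x: torch.Tensor, epochs: int = 50) -> torch.Tensor:
    """Replace attackable dimensions with autoencoder reconstructions."""
    ae = AE(train_x.shape[1])
    opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
    for _ in range(epochs):                            # fit the AE on contaminated data only
        opt.zero_grad()
        loss = nn.functional.mse_loss(ae(train_x), train_x)
        loss.backward()
        opt.step()
    with torch.no_grad():
        surrogate = ae(train_x)
    cleaned = train_x.clone()
    cleaned[:, ATTACKABLE] = surrogate[:, ATTACKABLE]  # surrogate replacement
    return cleaned

# Usage: x_clean = sanitize(x_train); the malware classifier is then trained on x_clean.
```

The design point of the sketch is that the autoencoder is not used to detect and discard poisoned samples; it overwrites the dimensions an attacker could have manipulated, so a trigger hidden there is smoothed away regardless of whether the sample is flagged as anomalous.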
