Randomizing SVM Against Adversarial Attacks Under Uncertainty

Yan Chen, Wei Wang, Xiangliang Zhang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Robust machine learning algorithms have been widely studied in adversarial environments, where an adversary maliciously manipulates data samples to evade security systems. In this paper, we propose randomized SVMs to defend against generalized adversarial attacks under uncertainty: instead of the single classifier learned by traditional robust SVMs, we learn a distribution over classifiers. Randomized SVMs offer stronger resistance to attacks while preserving high classification accuracy, especially in non-separable cases. Experimental results demonstrate the effectiveness of the proposed models in defending against a variety of attacks, including aggressive attacks under uncertainty.
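The abstract's core idea, learning a distribution over classifiers rather than a single fixed SVM, can be illustrated with a minimal sketch. This is not the authors' exact formulation; it merely approximates a classifier distribution by training an ensemble of linear SVMs (Pegasos-style hinge-loss subgradient descent) on bootstrap resamples and drawing one member at random per query, so an evader cannot optimize against one fixed decision boundary. All function names and parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian blobs in 2D, labels in {-1, +1}.
n = 200
X = np.vstack([rng.normal(-2.0, 1.0, size=(n, 2)),
               rng.normal(+2.0, 1.0, size=(n, 2))])
y = np.hstack([-np.ones(n), np.ones(n)])

def train_linear_svm(X, y, lam=0.01, epochs=20, seed=0):
    """Pegasos-style subgradient descent on the regularized hinge loss."""
    r = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in r.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)  # standard Pegasos step size
            if y[i] * X[i].dot(w) < 1:  # inside the margin: hinge active
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:
                w = (1 - eta * lam) * w
    return w

# Approximate a classifier "distribution" with an ensemble trained
# on bootstrap resamples of the data.
ensemble = []
for seed in range(10):
    idx = rng.integers(0, len(X), size=len(X))
    ensemble.append(train_linear_svm(X[idx], y[idx], seed=seed))

def randomized_predict(x):
    # Draw one classifier at random for each query, so the decision
    # boundary an attacker observes changes from query to query.
    w = ensemble[rng.integers(len(ensemble))]
    return np.sign(x.dot(w))

preds = np.array([randomized_predict(x) for x in X])
print("accuracy of randomized predictions:", (preds == y).mean())
```

Randomization trades a small amount of per-query accuracy for unpredictability; the paper's contribution is choosing the classifier distribution in a principled way rather than via the ad hoc bootstrap used here.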
Original language: English (US)
Title of host publication: Advances in Knowledge Discovery and Data Mining
Publisher: Springer Nature
Pages: 556-568
Number of pages: 13
ISBN (Print): 9783319930398
DOIs
State: Published - Jun 17 2018

