Don't Fool Me: Detecting Adversarial Examples in Deep Networks

http://ir.library.oregonstate.edu/concern/graduate_thesis_or_dissertations/gt54ks38t

Descriptions

Abstract or Summary
  • Deep learning has greatly improved visual recognition in recent years. However, recent research has shown that there exist many adversarial examples that can severely degrade the performance of such architectures. Unlike previous work, which focuses on improving the classifiers themselves, this work detects adversarial examples by analyzing whether they come from the same distribution as normal examples. An approach is proposed based on spectral analysis deep inside the network. The insights gained from this approach help to develop a comprehensive framework that can detect almost all adversarial examples. After detection, we show that many adversarial examples can be recovered by simply applying a small average filter to the image. These findings should prompt us to think more carefully about the classification mechanisms in deep convolutional neural networks.
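    The recovery step mentioned in the abstract — applying a small average filter to the image — can be sketched as a plain mean filter. This is a minimal illustration, not the thesis's implementation; the 3×3 kernel size and edge-replication padding are assumptions:

    ```python
    import numpy as np

    def average_filter(image, k=3):
        """Apply a k x k mean (average) filter to a 2-D image.

        Edge pixels are handled by replicating the border (an assumption;
        the thesis does not specify the boundary treatment).
        """
        pad = k // 2
        padded = np.pad(image, pad, mode="edge")
        out = np.empty(image.shape, dtype=float)
        for i in range(image.shape[0]):
            for j in range(image.shape[1]):
                # Mean over the k x k neighborhood centered at (i, j).
                out[i, j] = padded[i:i + k, j:j + k].mean()
        return out

    # A constant image is unchanged by mean filtering.
    img = np.full((5, 5), 7.0)
    smoothed = average_filter(img)
    ```

    The idea is that small, high-frequency adversarial perturbations are attenuated by local averaging, while the coarse image content a classifier relies on largely survives.
    
    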

Last modified: 11/02/2017
