Graduate Thesis Or Dissertation
 

Two Sides of a Coin: Adversarial-Based Image Privacy and Defending Against Adversarial Perturbations for Robust CNNs

Public Deposited

https://ir.library.oregonstate.edu/concern/graduate_thesis_or_dissertations/9s161d595

Descriptions

Abstract
  • The emergence of highly accurate Convolutional Neural Networks (CNNs) capable of processing large datasets has led to their popularity in many applications, including safety- and security-sensitive ones (e.g., disease recognition, self-driving cars). Despite their high accuracy, CNNs have been found to be susceptible both to adversarial noise added to benign examples and to out-distribution samples that are confidently classified into in-distribution classes. The use of CNNs in surveillance services therefore makes secure and robust CNNs a necessity. On the other hand, despite their benefits to surveillance applications, CNNs pose a privacy threat because they can perform face recognition on images at large scale; coupled with the availability of large image datasets on online social networks and at image storage providers, this creates a serious privacy risk. The emergence of Super Resolution Convolutional Neural Networks (SRCNNs), which improve image resolution for face recognition classifiers, further exacerbates this threat. In this dissertation, we address both problems. We first propose taking advantage of CNNs' vulnerability to adversarial perturbations by adding adversarial noise to images to fool CNNs and thereby protect the privacy of images in a cloud image storage setting. We propose and evaluate two adversarial-based protection methods: (i) a semantic perturbation-based method called k-Randomized Transparent Image Overlays (k-RTIO), and (ii) a learning-based method called Universal Ensemble Perturbation (UEP). These methods can thwart unknown (i.e., black-box) face recognition models while requiring low computational resources. We then evaluate the practicality of adversarial perturbations learned for CNNs on SRCNNs and show that adversarial perturbations are transparent to SRCNNs. In the last part of the dissertation, we propose mechanisms to make CNNs robust against adversarial and out-distribution examples by rejecting suspicious inputs. In particular, we propose an Augmented CNN (A-CNN) with an extra class that is trained on limited out-distribution samples, which can improve CNNs' resiliency against adversarial examples. Further, to protect pre-trained, highly accurate CNNs, post-processing methods that analyze the outputs of intermediate CNN layers to distinguish in-distribution from out-distribution samples have attracted attention. We propose using adversarial profiles, i.e., perturbations that misclassify samples of a source class (but not of other classes) into a target class, as such a post-processing step to detect out-distribution examples. (Illustrative code sketches of the overlay-based protection and the A-CNN idea appear after the Descriptions section below.)
Embargo reason
  • Pending Publication
Embargo date range
  • 2021-03-22 to 2022-04-22
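
Code sketches

The following sketches illustrate, at a high level, two of the ideas summarized in the abstract. They are minimal examples under stated assumptions, not the dissertation's actual implementations: the exact k-RTIO overlay construction and the A-CNN training details are not given in this abstract, so the function names, parameters, and data choices below are illustrative only.

First, a minimal sketch of the general idea behind overlay-based protection such as k-RTIO: blend k randomly chosen semi-transparent overlays onto a photo before uploading it, so that black-box face recognition models are disrupted while the image stays recognizable to humans. The overlay set, blending weight, and seeding scheme are assumptions for illustration.

    # Sketch only: blend k randomly chosen semi-transparent overlays onto an image.
    # Overlay choice, alpha, and seeding are illustrative assumptions, not the
    # exact k-RTIO construction.
    import random
    from PIL import Image

    def overlay_protect(image_path, overlay_paths, k=3, alpha=0.25, seed=None):
        """Return a copy of the image blended with k randomly selected overlays."""
        rng = random.Random(seed)                   # seed plays the role of a per-image key
        img = Image.open(image_path).convert("RGB")
        for path in rng.sample(overlay_paths, k):
            overlay = Image.open(path).convert("RGB").resize(img.size)
            img = Image.blend(img, overlay, alpha)  # semantic, human-tolerable perturbation
        return img

    # Example with hypothetical file paths:
    # protected = overlay_protect("photo.jpg", ["ov1.png", "ov2.png", "ov3.png", "ov4.png"], k=2)
    # protected.save("photo_protected.jpg")

Second, a minimal sketch of the A-CNN idea: augment a C-class classifier with one extra "reject" output and train it on a limited set of out-distribution samples labeled as that extra class, so suspicious inputs can be rejected at test time. The backbone, loss, and batch composition below are placeholder assumptions (PyTorch).

    # Sketch only: C-class CNN augmented with an extra rejection class trained on
    # out-distribution samples. Backbone and training details are assumptions.
    import torch
    import torch.nn as nn

    class AugmentedCNN(nn.Module):
        def __init__(self, backbone, feat_dim, num_classes):
            super().__init__()
            self.backbone = backbone                          # any feature extractor
            self.head = nn.Linear(feat_dim, num_classes + 1)  # index num_classes = "reject"

        def forward(self, x):
            return self.head(self.backbone(x))

    def train_step(model, optimizer, x_in, y_in, x_out, num_classes):
        """One step: in-distribution samples keep labels 0..C-1; OOD samples get label C."""
        x = torch.cat([x_in, x_out])
        y_out = torch.full((x_out.size(0),), num_classes, dtype=torch.long)
        y = torch.cat([y_in, y_out])
        loss = nn.functional.cross_entropy(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

At test time, an input whose highest-scoring output is the extra class (or whose reject score exceeds a threshold) can simply be rejected rather than classified.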
