Graduate Thesis Or Dissertation
 

Adversarial Attacks in Natural Language Question-Answering

Public Deposited

Downloadable Content

Download PDF file
https://ir.library.oregonstate.edu/concern/graduate_thesis_or_dissertations/8p58pm94q

Descriptions

Creator
Abstract
  • Question answering in natural language processing has achieved significant progress in recent years, yet the standard train/test-set methodology for evaluating language models has proved inadequate. Adversarial examples help uncover loopholes in these models and provide insight into their inner workings. This work explores an evaluation based on human-in-the-loop adversarial example generation. Recently published work on adversarial question answering perturbs the question without changing the background context on which it is based; the current work examines the complementary idea of perturbing the background context while keeping the question fixed (a minimal sketch of this setup follows the attribute list below). In the user study, novel adversarial examples crafted by humans exposed weaknesses of the models. This thesis puts forth a typology of the successful attacks as a baseline for stress-testing QA systems, and it describes a system that automatically generates adversarial examples based on the identified attacks.
Contributor
License
Resource Type
Date Issued
Degree Level
Degree Name
Degree Field
Degree Grantor
Commencement Year
Advisor
Committee Member
Academic Affiliation
Rights Statement
Publisher
Peer Reviewed
Language
File Format
Embargo reason
  • Ongoing Research
Embargo date range
  • 2021-03-25 to 2022-04-26
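
As a concrete illustration of the context-perturbation idea described in the abstract, the following is a minimal sketch, not the system built in the thesis. It assumes the Hugging Face transformers library and an off-the-shelf extractive QA model; the question, context, and distractor sentence are hypothetical examples chosen for illustration.

```python
# Minimal sketch: probe an extractive QA model by perturbing the background
# context while keeping the question fixed. Assumes the Hugging Face
# `transformers` library; the model name, question, and contexts are
# illustrative, not taken from the thesis.
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

question = "Where was the treaty signed?"
context = "The treaty was signed in Paris in 1898, ending the war."

# Hypothetical perturbation: append a distractor sentence mentioning a
# different location without changing the true answer.
perturbed_context = context + " A later commemoration was held in Madrid."

for label, ctx in [("original", context), ("perturbed", perturbed_context)]:
    result = qa(question=question, context=ctx)
    print(f"{label}: answer={result['answer']!r}, score={result['score']:.3f}")
```

Under this setup, a perturbation counts as a successful attack when the model's answer changes, or its confidence collapses, even though the original sentences still support the correct answer.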

Relationships

Parents:

This work has no parents.

In Collection:

Items