Graduate Thesis Or Dissertation
 

Explanations and Processes to Enable Humans to Assess AI with Respect to Manipulable Properties

https://ir.library.oregonstate.edu/concern/graduate_thesis_or_dissertations/s1784t85n

Abstract
Assessing AI systems is difficult. Humans rely on AI systems in increasing ways, both visible and invisible, meaning a variety of stakeholders need a variety of assessment tools (e.g., a professional auditor, a developer, and an end user all have different needs). We posit that it is possible to provide explanations and assessment processes that enable AI non-experts observing multiple intelligent agents in sequential domains to differentiate the agents with respect to a property (e.g., quality or fairness), as well as articulate justification for their differentiation. Further, we hypothesize that if the property can be manipulated in a highly controllable fashion, then it is possible to measure the quality of an explanation and/or assessment process by its ability to expose that such manipulation has occurred. This dissertation presents our contributions in explanations, processes, and manipulations for assessment. Specifically, we present our investigations into explanations to judge fairness of a classifier, the After-Action Review for AI process to structure explanation consumption, the Ranking task for explanation evaluation, and the Mutant Agent Generation approach for introducing controllable variation. By improving explainability of AI in all these phases, we seek to empower assessors to calibrate trust in the system appropriately.
