Assessing and understanding intelligent agents can be a difficult task for users who may lack an artificial intelligence (AI) background. A relatively new area, called “explainable AI,” is emerging to help address this problem, but little is known about how to present and structure information that an explanation system might...
How should reinforcement learning (RL) agents explain themselves to humans not trained in AI? To gain insights into this question, we conducted a 124-participant, four-treatment experiment to compare participants’ mental models of an RL agent in the context of a simple Real-Time Strategy (RTS) game. The four treatments isolated...
Until recently, research has not considered whether the design of end-user programming environments, such as spreadsheets, multimedia authoring languages, and CAD systems, affects males and females differently. As a result, we began investigating how the two genders are impacted by end-user programming software and whether attention to gender differences is...
Complex information environments are often organized as hierarchies. However, computational models of Information Foraging Theory (IFT) have almost entirely ignored this fact: models and tools for predicting programmer navigations have overlooked people’s foraging behavior across hierarchies, called hierarchical foraging. Without modeling hierarchical foraging, our ability to build tools to support...
Declarative visual programming languages (VPLs), including spreadsheets, make up a large portion of both research and commercial VPLs. Spreadsheets in particular enjoy a wide audience, including end users. Unfortunately, spreadsheets and most other declarative VPLs still suffer from some of the problems that have been solved in other languages, such...
We believe concreteness, direct manipulation, and responsiveness in a visual programming language increase its usefulness. However, these characteristics present a challenge in generalizing programs for reuse, especially when concrete examples are used as one way of achieving concreteness. In this thesis, we present a technique to solve this problem by...
"What’s wrong with this AI?" Explainable AI (XAI) researchers are moving beyond explaining an AI’s actions, to helping users detect an AI’s failures. However this detection may not be enough—for actionability, we often need to pinpoint which part failed. We investigate how AAR/AI, a structured assessment process, supports users with...
Assessing AI systems is difficult. Humans rely on AI systems in increasing ways, both visible and invisible, meaning a variety of stakeholders need a variety of assessment tools (e.g., a professional auditor, a developer, and an end user all have different needs). We posit that it is possible to provide...