In text classification, labeling features is often less time consuming than labeling entire documents. In situations where very little labeled training data is available, feature relevance feedback has the potential to dramatically increase classification performance. We review previous work on incorporating feature relevance feedback in the form of labeled features...
When intelligent interfaces, such as intelligent desktop assistants, email classifiers, and recommender systems, customize themselves to a particular end user, such customizations can decrease productivity and increase frustration due to inaccurate predictions—especially in early stages when training data is limited. The end user can improve the learning algorithm by tediously...
Intelligent user interfaces, such as recommender systems and email classifiers, use machine learning algorithms to customize their behavior to the preferences of an end user. Although these learning systems are somewhat reliable, they are not perfectly accurate. Traditionally, end users who need to correct these learning systems can only provide...
Many applications include machine learning algorithms intended to learn “programs” (rules of behavior) from an end user’s actions. When these learned programs are wrong, their users receive little explanation as to why, and even less freedom of expression to help the machine learn from its mistakes. In this paper, we...
The potential for machine learning systems to improve via a mutually beneficial exchange of information with users has yet to be explored in much detail. Previously, we found that users were willing to provide a generous amount of rich feedback to machine learning systems, and that the types of some...
The result of machine learning from user behavior can be thought of as a program, and like all programs, it may need to be debugged. Providing ways for the user to debug it matters because without the ability to fix errors, users may find that the learned program’s errors...
Image classification is a difficult problem, often requiring large training sets to achieve satisfactory results. However, this is a task that humans perform very well, and incorporating user feedback into these learning algorithms could help reduce the dependency on large amounts of labeled training data. This process has already been...
Explainable Artificial Intelligence (XAI) systems aim to improve users’ understanding of AI but rarely consider the inclusivity aspects of XAI. Without inclusive approaches, improving explanations might not work well for everyone. This study investigates leveraging users’ diverse problem-solving styles as an inclusive strategy to fix an XAI prototype, with the...
Mixed-initiative programming entails collaboration between a computer system and a human to achieve some desired goal or set of goals. Often these goals change or are amended in real time during the course of program execution. As such, the plans these programs are based on must adapt and evolve to...
Assessing AI systems is difficult. Humans rely on AI systems in increasing ways, both visible and invisible, meaning a variety of stakeholders need a variety of assessment tools (e.g., a professional auditor, a developer, and an end user all have different needs). We posit that it is possible to provide...