Graduate Thesis or Dissertation
 

Improving automated email tagging with implicit feedback

https://ir.library.oregonstate.edu/concern/graduate_thesis_or_dissertations/5425kg50c

Descriptions

Abstract
Machine learning systems are generally trained offline using ground truth data that has been labeled by experts. However, these batch training methods are not a good fit for many applications, especially where complete ground truth data is not available for offline training. In addition, batch methods do not perform well in applications where the learning system is expected to adapt quickly to data with a non-stationary distribution while remaining resistant to label noise. Online learning algorithms address these challenges, but they often assume that the ground truth becomes available after every prediction.

In this thesis, we describe the online email tagging problem, in which an underlying algorithm predicts a set of user-defined tags for each incoming email message. The email client user interface displays the predicted tags for the message, and the user does not need to do anything unless those predictions are wrong (in which case, the user can delete the incorrect tags and add the missing ones). This means that the learning algorithm never receives confirmation that its predictions are correct; it only receives feedback when it makes a mistake. This violates the assumption of most online learning algorithms and can lead to slower and less effective learning. In many cases, the learning algorithm would benefit from positive feedback, i.e., confirmation of correct predictions. One could assume that if the user never changes any tag, then the predictions are correct. But users sometimes forget to correct the tags, presumably because they are focused on the content of the email messages and fail to notice incorrect or missing tags.

The aim of this thesis is to determine whether implicit feedback can provide useful additional training examples to the email prediction subsystem of TaskTracer, known as TAPE (Tag Assistant for Productive Email). Our hypothesis is that the more time a user spends working on an email message, the more likely it is that the user will notice tag errors and correct them. If, after the user has spent enough time working on a message, no corrections have been made, then perhaps it is safe for the learning system to treat the predicted tags as correct and train accordingly (a minimal sketch of this dwell-time rule appears after the keywords below). We propose four algorithms (and three baselines) for incorporating implicit feedback into the TAPE email tag predictor.

These algorithms are evaluated using (i) email interaction and tag correction events collected from 14 user-study participants as they performed email-directed tasks with TAPE, and (ii) case studies of real knowledge workers using TAPE to manage their own email. The results show that implicit feedback produces a substantial increase in training feedback and therefore significantly reduces subsequent prediction errors, even though the implicit feedback is not perfect. We conclude that implicit feedback mechanisms can provide a useful performance boost for online email tagging systems.

Finally, we perform a simulation study to show how tags could provide services that help with information re-finding and with several common tasks that users often need to perform within the email system. Our simulation results show that tag services have the potential to greatly reduce the number of clicks required to perform these tasks.

Keywords: email tagging, implicit feedback, TaskTracer
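
The abstract describes the core dwell-time heuristic but not the four proposed algorithms or the three baselines, so the sketch below is only an illustration of that general rule, not TAPE's actual implementation. The class and method names, the event structure, and the 60-second threshold are all assumptions introduced for illustration.

    from dataclasses import dataclass

    # Illustrative sketch of the dwell-time heuristic described in the abstract:
    # if the user has worked on a message long enough without correcting its
    # predicted tags, treat those predictions as implicit positive feedback.
    # Names, events, and the threshold are assumptions, not TAPE's actual code.

    DWELL_THRESHOLD_SECONDS = 60.0  # assumed value, not taken from the thesis

    @dataclass
    class MessageState:
        predicted_tags: set
        dwell_seconds: float = 0.0
        corrected: bool = False
        confirmed: bool = False  # implicit positive feedback already emitted?

    class ImplicitFeedbackTracker:
        def __init__(self, train_callback):
            # train_callback(msg_id, tags) feeds a training example to the predictor.
            self.train_callback = train_callback
            self.messages = {}

        def on_predictions(self, msg_id, tags):
            # Called when the predictor assigns tags to an incoming message.
            self.messages[msg_id] = MessageState(predicted_tags=set(tags))

        def on_tag_correction(self, msg_id, final_tags):
            # Explicit feedback: the user fixed the tags, so train on the
            # corrected set and stop waiting for implicit confirmation.
            state = self.messages.get(msg_id)
            if state is None:
                return
            state.corrected = True
            self.train_callback(msg_id, set(final_tags))

        def on_interaction(self, msg_id, seconds_active):
            # Accumulate the time the user actively spent on this message
            # (reading, replying, filing), then apply the implicit-feedback rule.
            state = self.messages.get(msg_id)
            if state is None or state.corrected or state.confirmed:
                return
            state.dwell_seconds += seconds_active
            if state.dwell_seconds >= DWELL_THRESHOLD_SECONDS:
                # Enough attention with no corrections: treat the predicted
                # tags as correct and train on them.
                state.confirmed = True
                self.train_callback(msg_id, state.predicted_tags)

In this sketch, explicit corrections always take priority over the dwell-time rule; the thesis's four algorithms and three baselines presumably differ in how they threshold and weight the implicit signal, which this page does not detail.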
