Technical Report
 

Calibrating recurrent sliding window classifiers for sequential supervised learning

https://ir.library.oregonstate.edu/concern/technical_reports/73666580w

Descriptions

Abstract
  • Sequential supervised learning problems involve assigning a class label to each item in a sequence. Examples include part-of-speech tagging and text-to-speech mapping. A very general-purpose strategy for solving such problems is to construct a recurrent sliding window (RSW) classifier, which maps a window of the input sequence, plus some number of previously predicted items, into a prediction for the next item in the sequence. This paper describes a general-purpose implementation of RSW classifiers and discusses the highly practical issue of how to choose the size of the input window and the number of previous predictions to incorporate. Experiments on two real-world domains show that the optimal choices vary from one learning algorithm to another. They also depend on the evaluation criterion: the number of correctly predicted items versus the number of correctly predicted whole sequences. We conclude that window sizes must be chosen by cross-validation. The results have implications for the choice of window sizes in other models.
  • Keywords: speech mapping, sequential supervised learning, recurrent sliding window classifiers
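The RSW scheme described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the window and history sizes, the table-lookup "learner", and the run-labeling toy task are all invented for this sketch. In practice the table would be replaced by any supervised learning algorithm, and (as the abstract argues) the window parameters would be chosen by cross-validation.

```python
def rsw_features(xs, labels, t, in_win, n_prev, pad="#"):
    """Feature tuple for position t: a centered window of in_win inputs
    plus the n_prev most recent labels to the left (out-of-range -> pad)."""
    half = in_win // 2
    window = [xs[i] if 0 <= i < len(xs) else pad
              for i in range(t - half, t - half + in_win)]
    prev = [labels[i] if i >= 0 else pad for i in range(t - n_prev, t)]
    return tuple(window + prev)

def rsw_predict(xs, classify, in_win=3, n_prev=1):
    """Left-to-right RSW prediction: each position is classified from its
    input window plus the labels already predicted for earlier positions."""
    preds = []
    for t in range(len(xs)):
        preds.append(classify(rsw_features(xs, preds, t, in_win, n_prev)))
    return preds

def train_memorizer(train_seqs, in_win=3, n_prev=1):
    """Toy 'learner': memorize feature -> label pairs, using the TRUE
    previous labels as the recurrent inputs during training."""
    table = {}
    for xs, ys in train_seqs:
        for t in range(len(xs)):
            table[rsw_features(xs, ys, t, in_win, n_prev)] = ys[t]
    return lambda feats: table.get(feats, "?")

# Toy task: tag each character "S" (start of a run) or "I" (inside a run).
clf = train_memorizer([("aabba", ["S", "I", "S", "I", "S"])])
print(rsw_predict("aabba", clf))  # ['S', 'I', 'S', 'I', 'S']
```

Note one design choice the sketch surfaces: training here feeds the true previous labels into the recurrent features, while prediction must feed back its own (possibly wrong) earlier predictions, so errors can propagate along the sequence.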
Funding Statement (additional comments about funding)
  • The authors gratefully acknowledge the support of the National Science Foundation under grants IIS-0083292 and ITR-0001197.
