Data can be represented in multiple views. Traditional multi-view learning methods (e.g., co-training, multi-task learning) focus on improving learning performance using information from an auxiliary view, even though information from the target view is sufficient for the learning task. However, this work addresses a semi-supervised case of multi-view learning, surrogate supervision...
Spatial supervised learning seeks to learn how to assign a label to each pixel in a spatial grid, such as the pixels of remotely sensed images. The standard approach is to treat each grid cell separately and to use only the measured features of the grid cell to determine the assigned...
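The standard per-cell approach described above can be sketched as follows. The grid size, features, and the thresholding rule are illustrative assumptions, not the authors' model:

```python
import numpy as np

# Toy grid: 4x4 cells, each with 2 measured features (hypothetical values).
rng = np.random.default_rng(0)
grid = rng.random((4, 4, 2))

def classify_cell(features):
    # Illustrative per-cell rule: label 1 if the mean feature exceeds 0.5.
    return int(features.mean() > 0.5)

# Standard approach: each cell is classified independently of its neighbors,
# using only that cell's own measured features.
labels = np.array([[classify_cell(grid[i, j])
                    for j in range(grid.shape[1])]
                   for i in range(grid.shape[0])])

print(labels.shape)  # one label per grid cell
```

Because each cell is labeled in isolation, this baseline ignores the spatial correlation between neighboring labels, which is exactly the limitation spatial methods aim to address.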
Sequential supervised learning problems involve assigning a class label to each item in a sequence. Examples include part-of-speech tagging and text-to-speech mapping. A very general-purpose strategy for solving such problems is to construct a recurrent sliding window (RSW) classifier, which maps some window of the input sequence plus some number...
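The recurrent sliding window idea can be sketched at prediction time as follows. The base classifier, window size, and padding values here are hypothetical stand-ins, not the paper's actual learner:

```python
# Sketch of a recurrent sliding window (RSW) classifier at prediction time:
# the input to the base classifier is a window of sequence items plus some
# number of the classifier's own previous predictions.

def base_classify(window_inputs, prev_labels):
    # Hypothetical base classifier: a hand-coded rule that labels an item 1
    # when its input is positive or the previous predicted label was 1.
    return 1 if window_inputs[len(window_inputs) // 2] > 0 or prev_labels[-1] == 1 else 0

def rsw_predict(sequence, half_window=1, n_prev=1, pad_x=0, pad_y=0):
    labels = []
    for t in range(len(sequence)):
        # Window of inputs centered on position t (padded at the edges).
        window = [sequence[t + d] if 0 <= t + d < len(sequence) else pad_x
                  for d in range(-half_window, half_window + 1)]
        # The last n_prev predicted labels are fed back as extra features.
        prev = [labels[t - k] if t - k >= 0 else pad_y
                for k in range(n_prev, 0, -1)]
        labels.append(base_classify(window, prev))
    return labels

print(rsw_predict([0, 2, 0, 0, -1]))  # [0, 1, 1, 1, 1]
```

Note how the fed-back label propagates forward through the sequence; this recurrent feedback is what distinguishes an RSW classifier from a plain sliding window.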
Recent work has shown that AdaBoost can be viewed as an algorithm that maximizes the margin on the training data via functional gradient descent. Under this interpretation, the weight AdaBoost computes for each generated hypothesis can be viewed as a step-size parameter in a gradient descent search. Friedman...
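The hypothesis weight in question is AdaBoost's usual alpha; the small sketch below only illustrates its step-size behavior, not the full boosting loop:

```python
import math

def adaboost_step_size(weighted_error):
    # AdaBoost's weight for a hypothesis with weighted training error e:
    #   alpha = 0.5 * ln((1 - e) / e)
    # Under the functional-gradient-descent view, alpha plays the role of a
    # step size along the direction given by the newly generated hypothesis.
    return 0.5 * math.log((1 - weighted_error) / weighted_error)

# A more accurate hypothesis (smaller weighted error) gets a larger step.
print(adaboost_step_size(0.1) > adaboost_step_size(0.4))  # True
# A hypothesis no better than chance (error 0.5) gets step size 0.
print(abs(adaboost_step_size(0.5)) < 1e-12)  # True
```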
This thesis proposes a novel technique that exploits spectrum occupancy behaviors inherent to wideband spectrum access to enable efficient cooperative spectrum sensing. The proposed technique reduces the number of required sensing measurements while accurately recovering spectrum occupancy information. It does so by leveraging compressive sampling theory to exploit the block-like...
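A generic compressive-sampling toy, not the thesis's specific technique, can illustrate the setting: a block-sparse occupancy vector is observed through far fewer random measurements than channels, and a standard sparse-recovery routine (orthogonal matching pursuit here) attempts to reconstruct it. All sizes and the seed are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy wideband occupancy vector: 32 channels, occupied in one contiguous
# block, mimicking the block-like sparsity the abstract refers to.
n = 32
x = np.zeros(n)
x[10:14] = 1.0  # occupied block (illustrative)

# Compressive sampling: m << n random linear measurements y = A @ x.
m = 16
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x

def omp(A, y, k):
    # Generic orthogonal matching pursuit: greedily pick the column most
    # correlated with the residual, then re-fit on the chosen support.
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(A, y, k=4)  # with enough measurements, the occupied
print(np.flatnonzero(x_hat))  # block is typically recovered
```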
This paper addresses cost-sensitive classification in the setting where there are costs for measuring each attribute as well as costs for misclassification errors. We show how to formulate this as a Markov Decision Process in which the transition model is learned from the training data. Specifically, we assume a set...
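The decision-theoretic core of this formulation can be shown on a one-attribute toy problem: in each state the agent either pays to measure the attribute or stops and classifies, paying the expected misclassification cost. All numbers (prior, likelihoods, costs) are hypothetical, and the true model would be learned from data rather than given:

```python
MEASURE_COST = 1.0
MISCLASS_COST = {('pos', 'neg'): 10.0, ('neg', 'pos'): 2.0}  # cost[true, predicted]
PRIOR = {'pos': 0.3, 'neg': 0.7}
# P(test result | class) for a single binary attribute.
LIKELIHOOD = {'pos': {'+': 0.9, '-': 0.1}, 'neg': {'+': 0.2, '-': 0.8}}

def classify_cost(belief):
    # Expected cost of the best "classify now" action in this belief state.
    return min(
        sum(belief[c] * MISCLASS_COST.get((c, pred), 0.0) for c in belief)
        for pred in PRIOR
    )

def posterior(belief, result):
    joint = {c: belief[c] * LIKELIHOOD[c][result] for c in belief}
    z = sum(joint.values())
    return {c: p / z for c, p in joint.items()}

def measure_cost(belief):
    # Pay the measurement cost, then classify optimally given each result.
    cost = MEASURE_COST
    for result in ('+', '-'):
        p_result = sum(belief[c] * LIKELIHOOD[c][result] for c in belief)
        cost += p_result * classify_cost(posterior(belief, result))
    return cost

# The optimal policy takes whichever action has lower expected cost.
print(classify_cost(PRIOR), measure_cost(PRIOR))  # 1.4 vs 1.58: classify now
```

With these particular numbers the measurement is not worth its cost; changing the costs or likelihoods flips the decision, which is exactly the trade-off the MDP formulation optimizes over sequences of measurements.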
Many machine learning applications require classifiers that minimize an asymmetric loss function rather than the raw misclassification rate. We study methods for modifying C4.5 to incorporate arbitrary loss matrices. One way to incorporate loss information into C4.5 is to manipulate the weights assigned to the examples from different classes. For...
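One common instantiation of this weighting idea (not necessarily the paper's exact scheme) is to weight each class by its total cost of being misclassified, so the tree-growing criterion is biased against expensive mistakes. The loss matrix below is illustrative:

```python
LOSS = {  # LOSS[true_class][predicted_class]
    'sick':    {'sick': 0.0, 'healthy': 50.0},
    'healthy': {'sick': 1.0, 'healthy': 0.0},
}

def class_weights(loss):
    # Weight each class by the row sum of its misclassification costs,
    # so expensive-to-miss classes dominate the splitting criterion.
    return {c: sum(row.values()) for c, row in loss.items()}

weights = class_weights(LOSS)
examples = ['sick', 'healthy', 'healthy', 'healthy']
example_weights = [weights[c] for c in examples]
print(example_weights)  # [50.0, 1.0, 1.0, 1.0]
```

Any learner that accepts per-example weights (as C4.5 does internally for fractional instances) can then be trained on these reweighted examples.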
Supervised learning is concerned with discovering the relationship between example sets of features and their corresponding classes. The traditional supervised learning formulation assumes that all examples are independent of one another: the order of the examples contains no information. Nonetheless, many problems have a sequential nature. Classifiers for these problems...