Graduate Thesis Or Dissertation
 

Shrunken learning rates do not improve AdaBoost on benchmark datasets

https://ir.library.oregonstate.edu/concern/graduate_thesis_or_dissertations/ks65hg77k

Descriptions

Abstract
  • Recent work has shown that AdaBoost can be viewed as an algorithm that maximizes the margin on the training data via functional gradient descent. Under this interpretation, the weight computed by AdaBoost for each generated hypothesis can be viewed as a step-size parameter in a gradient descent search. Friedman has suggested that shrinking these step sizes could produce improved generalization and reduce overfitting. In a series of experiments, he showed that very small step sizes did indeed reduce overfitting and improve generalization for three variants of Gradient_Boost, his generic functional gradient descent algorithm. For this report, we tested whether reduced learning rates can also improve generalization in AdaBoost. We tested AdaBoost (applied to C4.5 decision trees) with reduced learning rates on 28 benchmark datasets. The results show that reduced learning rates provide no statistically significant improvement on these datasets. We conclude that reduced learning rates cannot be recommended for use with boosted decision trees on datasets similar to these benchmark datasets.
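
To make the mechanism described in the abstract concrete, the following is a minimal sketch of AdaBoost with a shrunken learning rate, assuming -1/+1 labels and using scikit-learn's DecisionTreeClassifier as a stand-in for the C4.5 trees used in the thesis. The function name adaboost_shrunk, the shrinkage factor nu, and the default parameters are illustrative assumptions, not the author's actual implementation or experimental setup.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def adaboost_shrunk(X, y, n_rounds=100, nu=0.1, max_depth=1):
        # y must be a numpy array of -1/+1 labels.
        # nu = 1.0 recovers plain AdaBoost; nu < 1 shrinks each
        # functional-gradient step, as Friedman suggested.
        n = len(y)
        w = np.full(n, 1.0 / n)                # example weights
        learners, alphas = [], []
        for _ in range(n_rounds):
            h = DecisionTreeClassifier(max_depth=max_depth)
            h.fit(X, y, sample_weight=w)
            pred = h.predict(X)
            err = max(np.sum(w[pred != y]), 1e-12)   # weighted training error
            if err >= 0.5:                           # no better than chance: stop
                break
            alpha = nu * 0.5 * np.log((1.0 - err) / err)  # shrunken step size
            w = w * np.exp(-alpha * y * pred)             # upweight mistakes
            w = w / w.sum()
            learners.append(h)
            alphas.append(alpha)

        def predict(X_new):
            # Weighted vote of the boosted hypotheses.
            votes = sum(a * h.predict(X_new) for a, h in zip(alphas, learners))
            return np.sign(votes)

        return predict

Usage would look like predict = adaboost_shrunk(X_train, y_train, nu=0.1) followed by y_hat = predict(X_test). With nu = 1.0 the sketch reduces to ordinary AdaBoost; the thesis's finding is that values of nu below 1 gave no statistically significant improvement on the 28 benchmark datasets. scikit-learn's AdaBoostClassifier exposes the same idea through its learning_rate parameter.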
Digitization Specifications
  • File scanned at 300 ppi (Monochrome) using ScandAll PRO 1.8.1 on a Fi-6670 in PDF format. CVista PdfCompressor 4.0 was used for PDF compression and textual OCR.