Technical Report
 

Learning diagnostic policies from examples by systematic search

https://ir.library.oregonstate.edu/concern/technical_reports/f7623d78q

Abstract
  • A diagnostic policy specifies what test to perform next, based on the results of previous tests, and when to stop and make a diagnosis. Cost-sensitive diagnostic policies trade off (a) the costs of tests against (b) the costs of misdiagnoses. An optimal diagnostic policy minimizes the expected total cost. We formalize this diagnosis process as a Markov Decision Process (MDP). We investigate two types of algorithms for solving this MDP: systematic search based on the AO* algorithm, and greedy search (particularly the Value of Information method). We investigate the issue of learning the MDP probabilities from examples, but only as they are relevant to the search for good policies. We neither learn nor assume a Bayesian network for the diagnosis process. Regularizers are developed that control overfitting and speed up the search. This research is the first to integrate overfitting prevention into systematic search. The paper makes two contributions: it discusses the factors that make systematic search feasible for diagnosis, and it shows experimentally, on benchmark data sets, that systematic search methods produce better diagnostic policies than greedy methods.
  • Keywords: Cost-sensitive diagnostic policies, greedy search, AO*, Markov Decision Process
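As a concrete illustration of the cost trade-off described in the abstract, the sketch below computes the expected total cost of two simple policies for a single noisy binary test: diagnose immediately, or pay for the test and diagnose according to its result. All names and numbers here (the prior, test cost, misdiagnosis costs, sensitivity, specificity) are hypothetical choices for illustration, not values from the report.

```python
# Toy cost-sensitive diagnosis: compare the expected total cost of
# stopping immediately versus testing once and then diagnosing.
# All parameters below are assumed for illustration.

p_disease = 0.3           # prior probability the patient is diseased

test_cost = 10.0          # cost of performing the test
miss_cost = 500.0         # cost of diagnosing "healthy" when diseased
false_alarm_cost = 100.0  # cost of diagnosing "diseased" when healthy

sensitivity = 0.95        # P(test positive | diseased)
specificity = 0.90        # P(test negative | healthy)

def cost_diagnose_now(diagnosis):
    """Expected cost of stopping immediately with a fixed diagnosis."""
    if diagnosis == "diseased":
        return (1 - p_disease) * false_alarm_cost
    return p_disease * miss_cost

def cost_test_then_diagnose():
    """Expected cost of testing once, then diagnosing per the result."""
    # Diagnose "diseased" on a positive result; a misdiagnosis then
    # occurs exactly when the patient is healthy (false positive).
    cost_pos = (1 - p_disease) * (1 - specificity) * false_alarm_cost
    # Diagnose "healthy" on a negative result; a misdiagnosis then
    # occurs exactly when the patient is diseased (false negative).
    cost_neg = p_disease * (1 - sensitivity) * miss_cost
    return test_cost + cost_pos + cost_neg

print(cost_diagnose_now("healthy"))    # 0.3 * 500           = 150.0
print(cost_diagnose_now("diseased"))   # 0.7 * 100           = 70.0
print(cost_test_then_diagnose())       # 10 + 7 + 7.5        = 24.5
```

Under these assumed numbers, testing first is the cheapest of the three policies, which is the kind of comparison an optimal diagnostic policy makes at every internal node of its decision tree.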
