In a simulator-defined MDP, the Markovian dynamics and rewards are provided in the form of a simulator from which samples can be drawn. This paper studies MDP planning algorithms that attempt to minimize the number of simulator calls before terminating and outputting a policy that is approximately optimal with high...
Society faces many complex management problems, particularly in the area of shared public resources such as ecosystems. Existing decision making processes are often guided by personal experience and political ideology rather than state-of-the-art scientific understanding. This dissertation envisions a future in which multiple stakeholders are provided with computational tools for...
Spatiotemporal planning involves making choices at multiple locations in space over some planning horizon to maximize utility and satisfy various constraints. In Forest Ecosystem Management, the problem is to choose actions for thousands of locations each year, including harvesting, treating trees for fire or pests, or doing nothing. The utility...
This paper studies the problem of learning diagnostic policies from training examples. A diagnostic policy is a complete description of the decision-making actions of a diagnostician (i.e., tests followed by a diagnostic decision) for all possible combinations of test results. An optimal diagnostic policy is one that minimizes the expected...
A common heuristic for solving Partially Observable Markov Decision Problems (POMDPs) is to first solve the underlying Markov Decision Process (MDP) and then construct a POMDP policy by performing a fixed-depth lookahead search in the POMDP and evaluating the leaf nodes using the MDP value function. A problem with...
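A minimal sketch of the heuristic this abstract describes: solve the underlying MDP by value iteration, then select POMDP actions with a depth-limited expectimax over beliefs, scoring leaf beliefs with the MDP value function. The two-state POMDP and all numbers below are hypothetical illustrations, not the paper's domain.

```python
GAMMA = 0.95
S = [0, 1]    # hidden states
A = [0, 1]    # actions
OBS = [0, 1]  # observations

# Hypothetical model: T[a][s][s'], R[a][s], Z[a][s'][o]
T = {0: [[0.9, 0.1], [0.1, 0.9]],
     1: [[0.5, 0.5], [0.5, 0.5]]}
R = {0: [1.0, 0.0],
     1: [0.0, 1.0]}
Z = {0: [[0.8, 0.2], [0.2, 0.8]],
     1: [[0.5, 0.5], [0.5, 0.5]]}

def solve_mdp(iters=500):
    """Value iteration on the fully observable underlying MDP."""
    V = [0.0, 0.0]
    for _ in range(iters):
        V = [max(R[a][s] + GAMMA * sum(T[a][s][s2] * V[s2] for s2 in S)
                 for a in A)
             for s in S]
    return V

def belief_update(b, a, o):
    """Bayes filter: returns (new belief, probability of observing o)."""
    joint = [Z[a][s2][o] * sum(T[a][s][s2] * b[s] for s in S) for s2 in S]
    p_o = sum(joint)
    if p_o == 0.0:
        return b, 0.0
    return [x / p_o for x in joint], p_o

def lookahead(b, depth, V_mdp):
    """Depth-limited expectimax over beliefs; the MDP value scores leaves."""
    if depth == 0:
        return sum(b[s] * V_mdp[s] for s in S)
    values = []
    for a in A:
        r = sum(b[s] * R[a][s] for s in S)
        future = 0.0
        for o in OBS:
            b2, p_o = belief_update(b, a, o)
            if p_o > 0.0:
                future += p_o * lookahead(b2, depth - 1, V_mdp)
        values.append(r + GAMMA * future)
    return max(values)

def choose_action(b, depth, V_mdp):
    """Greedy action for the fixed-depth lookahead policy."""
    def q(a):
        r = sum(b[s] * R[a][s] for s in S)
        future = 0.0
        for o in OBS:
            b2, p_o = belief_update(b, a, o)
            if p_o > 0.0:
                future += p_o * lookahead(b2, depth - 1, V_mdp)
        return r + GAMMA * future
    return max(A, key=q)
```

Because the MDP value function assumes full observability at the leaves, this policy can undervalue information-gathering actions, which is the kind of failure the abstract goes on to discuss.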
A diagnostic policy specifies what test to perform next, based on the results of previous tests, and when to stop and make a diagnosis. Cost-sensitive diagnostic policies trade off (a) the costs of tests against (b) the costs of misdiagnoses. An optimal diagnostic policy minimizes the expected total cost....
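The expected-total-cost objective can be illustrated with a small sketch: a policy's cost is the sum of the test costs it incurs plus the expected cost of its final (possibly wrong) diagnosis. The disease labels, test characteristics, and all costs below are hypothetical, not from the paper.

```python
PRIOR = {"flu": 0.7, "cold": 0.3}        # P(disease) -- hypothetical
TEST_COST = 10.0
SENSITIVITY = {"flu": 0.9, "cold": 0.2}  # P(test positive | disease)
# MISDIAG[true][diagnosed]: cost 0 when the diagnosis is correct
MISDIAG = {"flu":  {"flu": 0.0,  "cold": 100.0},
           "cold": {"flu": 40.0, "cold": 0.0}}

def posterior(result):
    """Bayes update of the prior on a positive/negative test result."""
    like = {d: SENSITIVITY[d] if result else 1 - SENSITIVITY[d] for d in PRIOR}
    z = sum(PRIOR[d] * like[d] for d in PRIOR)
    return {d: PRIOR[d] * like[d] / z for d in PRIOR}, z

def expected_cost_with_test():
    """Run the test, then diagnose to minimize expected misdiagnosis cost."""
    total = TEST_COST
    for result in (True, False):
        post, p_result = posterior(result)
        diag_cost = min(sum(post[t] * MISDIAG[t][d] for t in post)
                        for d in PRIOR)
        total += p_result * diag_cost
    return total

def expected_cost_no_test():
    """Diagnose immediately from the prior, skipping the test."""
    return min(sum(PRIOR[t] * MISDIAG[t][d] for t in PRIOR) for d in PRIOR)
```

With these numbers, diagnosing immediately is cheaper than testing first, so the optimal policy stops without the test; raising the misdiagnosis costs or lowering the test cost flips that decision, which is exactly the tradeoff the abstract describes.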
This paper introduces the even-odd POMDP, an approximation to POMDPs (Partially Observable Markov Decision Problems) in which the world is assumed to be fully observable every other time step. This approximation works well for problems with a delayed need to observe. The even-odd POMDP can be converted into an equivalent...
This paper introduces the even-odd POMDP, an approximation to POMDPs in which the world is assumed to be fully observable every other time step. The even-odd POMDP can be converted into an equivalent MDP, the 2MDP, whose value function, V*_2MDP, can be combined online with a 2-step lookahead search...
Many tasks in AI require representation and manipulation of complex functions. First-Order Decision Diagrams (FODDs) are a compact knowledge representation expressing functions over relational structures. They represent numerical functions that, when constrained to the Boolean range, use only existential quantification. Previous work has developed a set of operations for composition...
When weak and strong fish stocks are caught in the same fishery, managing for the protection of the weak stock may result in foregone economic benefits from harvest of the strong stock, while managing for the strong stock may result in overfishing of the weak stock. A particular complication arises...