This research explores the hypothesis that methods from decision theory and machine learning can be combined to provide practical solutions to current manufacturing control problems. This hypothesis is explored by developing an integrated approach to solving one manufacturing problem - the optimization of die-level functional test.
An integrated circuit (IC)...
A composite system of a Markovian Decision Process with multiple criteria involves both discounting criteria and non-discounting criteria. The measurement of the discounting criteria is chosen to be the total sum of the payoffs over the time horizon, and the measurement of the non-discounting criteria is chosen to be the...
The Markov Decision Process (MDP) is a well-known framework for devising optimal decision-making strategies under uncertainty. Typically, the decision maker assumes a stationary environment, characterized by a time-invariant transition probability matrix. However, in many real-world scenarios this assumption is not justified, so the optimal strategy might not...
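The stationary setting described above can be made concrete with a short sketch: a time-invariant transition tensor `P` fully specifies the dynamics, so value iteration converges to a single optimal value function. The two-state MDP, its payoffs, and the discount factor below are hypothetical, chosen only for illustration.

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP. The time-invariant tensor
# P[a, s, s'] is the stationarity assumption discussed above.
P = np.array([
    [[0.9, 0.1],    # action 0, from states 0 and 1
     [0.2, 0.8]],
    [[0.5, 0.5],    # action 1
     [0.4, 0.6]],
])
R = np.array([[1.0, 0.0],   # R[a, s]: immediate payoff
              [0.0, 2.0]])
gamma = 0.95                # discount factor (assumed)

def value_iteration(P, R, gamma, tol=1e-8):
    """Optimal value function of a stationary, discounted MDP."""
    V = np.zeros(P.shape[1])
    while True:
        # Q[a, s] = R[a, s] + gamma * sum_{s'} P[a, s, s'] * V[s']
        Q = R + gamma * P @ V
        V_new = Q.max(axis=0)
        if np.abs(V_new - V).max() < tol:
            return V_new
        V = V_new

V = value_iteration(P, R, gamma)
```

If the transition probabilities drift over time, as the abstract notes, the fixed point computed here is no longer optimal.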
Multiobjective Decision Making (MODM) has been suggested for the solution of complicated decision problems. Decision analysis in numerous areas, including industrial energy and environmental planning, necessarily requires consideration of multiple conflicting objectives. MODM has been successfully applied to a number of problems of this type. Moreover, it has the...
Building intelligent computer assistants has been a long-cherished goal of AI. Many intelligent assistant systems have been built and fine-tuned to specific application domains. In this work, we develop a general model of assistance that combines three powerful ideas: decision theory, hierarchical task models, and probabilistic relational languages. We use the...
The occurrence of human error in highly complex systems, such as a cockpit, can be disastrous and/or overwhelmingly costly. Mismanagement of multiple concurrent tasks has been observed to be a recurrent type of human error in previous studies of accidents and incidents. This error may occur in the form...
Markov Decision Processes (MDPs) are the de facto formalism for studying sequential decision-making problems under uncertainty, ranging from classical problems such as inventory control and path planning to more complex problems such as reservoir control under rainfall uncertainty and emergency response optimization for fire and medical emergencies. Most prior research...
This paper studies the problem of learning diagnostic policies from training examples. A diagnostic policy is a complete description of the decision-making actions of a diagnostician (i.e., tests followed by a diagnostic decision) for all possible combinations of test results. An optimal diagnostic policy is one that minimizes the expected...
In its simplest form, the process of diagnosis is a decision-making process in which the diagnostician performs a sequence of tests culminating in a diagnostic decision. For example, a physician might perform a series of simple measurements (body temperature, weight, etc.) and laboratory measurements (white blood count, CT scan,...
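The kind of diagnostic policy described above (a complete prescription of which test to perform next and which diagnosis to issue for every combination of results) can be sketched as a small decision tree. The tests, result values, and diagnoses below are hypothetical placeholders, not taken from the cited work.

```python
# Hypothetical diagnostic policy as a decision tree: each internal node
# names a test, each branch corresponds to a test result, and each leaf
# is a terminal diagnostic decision.
policy = {
    "test": "temperature",
    "branches": {
        "high": {"diagnosis": "infection"},
        "normal": {
            "test": "white_blood_count",
            "branches": {
                "elevated": {"diagnosis": "infection"},
                "normal": {"diagnosis": "healthy"},
            },
        },
    },
}

def run_policy(node, observations):
    """Follow the policy, performing tests until a diagnosis is reached."""
    while "diagnosis" not in node:
        result = observations[node["test"]]   # outcome of performing the test
        node = node["branches"][result]
    return node["diagnosis"]
```

Because the tree covers every combination of test results, it is a "complete description of the decision-making actions of a diagnostician" in the sense used above; learning such a policy means choosing both the tree structure and the leaf decisions.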
A large number of sequential decision-making problems in uncertain environments can be modeled as Markov Decision Processes (MDPs). In such settings, an agent observes the state of the environment at each time step and then executes an action, causing a stochastic transition to a new state of the environment...