This dissertation delves into understanding, characterizing, and addressing dataset shift in deep learning, a pervasive issue for deployed machine learning systems. Integral aspects of the problem are examined: We start with the use of counterfactual explanations to characterize the behavior of deep reinforcement learning agents in visual input...
This project covers the construction of a Stereo Camera System, its integration with a Velodyne VLP-16 LIDAR, and the creation of a dataset intended to aid in the development of vision algorithms for forestry applications. This project is the first step in a future multi-stage project to implement computer vision systems for...
Machine common sense remains a broad, potentially unbounded problem in AI. Our focus is to move toward AI systems that can develop common-sense reasoning similar to humans to detect anomalies. In particular, we study the problem of detecting the violation of expectations when object appearance or motion dynamics change from...
The performance of deep learning frameworks can be significantly improved by accounting for the particular underlying structure of each dataset. In this thesis, I summarize our three works on boosting the performance of deep learning models by leveraging the structure of the data. In the first work, we theoretically justify that, for...
In this thesis, we introduce a novel Explanation Neural Network (XNN) to explain the predictions made by a deep network. The XNN works by embedding a high-dimensional activation vector of a deep network layer non-linearly into a low-dimensional explanation space while retaining faithfulness, i.e., the original deep learning predictions can...
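The mechanism described in this abstract can be illustrated with a minimal sketch: a high-dimensional activation is mapped non-linearly to a low-dimensional explanation vector, from which a linear readout produces the class scores. This is an illustrative stand-in, not the authors' actual XNN; all layer sizes, the `tanh` non-linearity, and the random weights are assumptions for the example.

```python
import numpy as np

# Hypothetical sketch of an explanation-network forward pass (not the
# authors' actual XNN). A high-dimensional activation vector is embedded
# non-linearly into a low-dimensional "explanation space"; a linear
# readout from that space stands in for the faithful reconstruction of
# the original predictions. Dimensions and weights are placeholders.
rng = np.random.default_rng(0)

d_act, d_expl, n_classes = 512, 4, 10            # assumed dimensions
W_embed = rng.normal(0, 0.05, (d_expl, d_act))   # embedding weights
W_read = rng.normal(0, 0.05, (n_classes, d_expl))  # linear readout

def xnn_forward(h):
    """Map a deep-layer activation h to (explanation, class scores)."""
    e = np.tanh(W_embed @ h)   # non-linear low-dimensional embedding
    y = W_read @ e             # linear readout of class scores
    return e, y

h = rng.normal(size=d_act)     # stand-in activation from a deep layer
e, y = xnn_forward(h)
print(e.shape, y.shape)        # (4,) (10,)
```

In a trained XNN the embedding and readout would be fit so that the readout matches the original network's predictions; here the forward pass only shows the shape of the computation.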
Although deep reinforcement learning agents have produced impressive results in many domains, their decision making is difficult to explain to humans. To address this problem, past work has mainly focused on explaining why an action was chosen in a given state. A different type of explanation that is useful is...