In this thesis, we introduce a novel Explanation Neural Network (XNN) to explain the predictions made by a deep network. The XNN works by embedding a high-dimensional activation vector of a deep network layer non-linearly into a low-dimensional explanation space while retaining faithfulness, i.e., the original deep learning predictions can...
Machine common sense remains a broad, potentially unbounded problem in AI. Our focus is to move toward AI systems that can develop common-sense reasoning similar to that of humans in order to detect anomalies. In particular, we study the problem of detecting violations of expectations when object appearance or motion dynamics change from...
As one of the most popular data types, the point cloud is widely used in various applications, including computer vision, computer graphics, and robotics. The capability to directly measure 3D point clouds is invaluable in those applications, as depth information could remove a lot of the segmentation ambiguities in...
Deep neural networks currently comprise the backbone of many applications where safety is a critical concern, for example, autonomous driving and medical diagnostics. Unfortunately, these systems currently fail to detect out-of-distribution (OOD) inputs and can be prone to making dangerous errors when exposed to them. In addition, these same systems...
The advancement of artificial intelligence (AI) has led to transformative developments across multiple sectors, fostering innovation and redefining our interactions with technology. As AI matures and becomes integrated into society, it offers numerous opportunities to address global challenges and revolutionize a wide array of human endeavors. These advances are driven...
The ability to extract uncertainties from predictions is crucial for the adoption of deep learning systems in safety-critical applications. Uncertainty estimates can be used as a failure signal, which is necessary for automating complex tasks where safety is a concern. Furthermore, current deep learning systems do not provide uncertainty estimates,...
In open set recognition, a classifier must label instances of known classes while detecting instances of unknown classes not encountered during training. To detect unknown classes while still generalizing to new instances of existing classes, this thesis introduces a dataset augmentation technique called counterfactual image generation. This approach, based on...
In this dissertation, we address action segmentation in videos under limited supervision. The goal of action segmentation is to predict an action class for each frame of a video. Limited supervision means that ground-truth labels of video frames are not available during training. We focus on three types of...
Although deep reinforcement learning agents have produced impressive results in many domains, their decision making is difficult to explain to humans. To address this problem, past work has mainly focused on explaining why an action was chosen in a given state. A different type of explanation that is useful is...
This dissertation addresses few-shot object segmentation in images. The goal of segmentation is to label every image pixel with the class of the object occupying that pixel, where the class may represent a semantic object category or instance. In few-shot segmentation, training and test datasets have different classes. Every new...