- Object detection models are widely used in many applications, such as autonomous driving, construction management, and cancer detection. Evaluating the performance of an object detection model is more complicated than evaluating other computer vision models, such as image classifiers: most images contain several objects to be detected, and the detections and their errors can be categorized in several different ways (e.g., classification error, localization error). To address this challenge, we design and develop an interactive tool that helps users evaluate and analyze results from object detection models. We first categorize detected and ground-truth bounding boxes into 7 and 11 types, respectively, based on the literature. We enable users to analyze object detection results based on this categorization at three different levels: summary level, class and detection type level, and image level. This lets users analyze a large number and variety of model errors, moving from a summarized overview down to individual images. At the summary level, users can explore the overall detection results, such as average precision, the number of detected and ground-truth labels, and the number of each detection type. At the class and detection type level, users can see more detailed information about a particular class and detection type, along with the images that correspond to them. At the image level, users can click on each image for a detailed analysis. We developed an interactive user interface that implements this idea, using a driving dataset and a state-of-the-art object detection model trained on it. We also present a usage scenario showing how users can apply our tool to inspect errors from the trained object detection model. By making it easy to browse detection results and their images for a given class and detection type, our interface helps users work with object detection models more effectively in their research.
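The error categories mentioned above (e.g., classification error, localization error) are typically assigned by comparing each detected box against a ground-truth box via intersection over union (IoU). The following is a minimal sketch of that idea, with hypothetical thresholds and only a few coarse categories; it is not the full 7/11-type scheme used in the tool:

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def categorize(pred_box, pred_label, gt_box, gt_label,
               fg_thresh=0.5, bg_thresh=0.1):
    """Assign one detection to a coarse error type.

    fg_thresh / bg_thresh are illustrative values, not the
    thresholds used by any particular evaluation protocol.
    """
    overlap = iou(pred_box, gt_box)
    if overlap >= fg_thresh:
        # Well-localized: correct unless the class label is wrong.
        return "correct" if pred_label == gt_label else "classification error"
    if overlap >= bg_thresh:
        # Some overlap, but poorly localized.
        return "localization error" if pred_label == gt_label else "both"
    # Essentially no overlap with this ground-truth object.
    return "background error"
```

A richer scheme, like the one in the tool, additionally distinguishes cases such as duplicate detections of the same object and missed ground-truth objects, which require matching across all boxes in an image rather than a single pair.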