There is strong interest in the robotics community in learning how humans grasp and manipulate objects, partly because robots need to operate in human environments and partly because humans currently far outperform robots at physical interaction tasks. This thesis seeks to identify the heuristics humans use for grasping by analyzing their gaze patterns while they perform grasping tasks, both with their own hand and with a robotic hand. A human-subject experiment analyzed participants' eye gaze to determine which features people consider important when grasping objects, such as where the fingertips settle on the object relative to the object's edges, center of mass, etc. It was found that while gaze patterns on the objects were similar whether participants used the robotic hand or their own, participants spent substantially more time gazing at the robotic hand than at their own, particularly at the wrist and finger positions. A subsequent study showed that choosing camera angles that clearly display the features participants attend to enables them to judge the effectiveness of a grasp from images more accurately. These findings are relevant both for robotic grasp-planning algorithms (where visual cues are important for analyzing objects for potential grasps) and for the design of tele-operation interfaces (how best to present visual data to the remote operator).