Binocular information has been shown to be important for the programming and control of reaching and grasping. Even without binocular vision, people can still reach out and pick up objects accurately, albeit less efficiently. It remains unclear which of the many available monocular depth cues humans use to calibrate manual prehension when binocular information is not available. In the present experiment, we examined whether subjects could use a learned relationship between the elevation of a goal object in the visual scene and its distance to help program and control the required grasp. The elevation of the goal object was systematically varied with distance in some blocks of trials by presenting the object at different positions along a horizontal plane 35 cm below eye level. In other blocks of trials, elevation did not vary with distance because the objects were always presented along the subject's line of sight. When subjects viewed these two displays monocularly, they made fewer on-line adjustments to the trajectory of the limb and the aperture of the fingers when the elevation of the target object in the visual scene could be used to help program the required movements. No such difference between performance on the two arrays was seen when subjects were allowed a full binocular view. This study confirms that subjects are indeed able to use a learned relationship between the elevation of an object and its distance as a cue for programming grasping movements when binocular information is not available. Together with evidence from work with neurological patients who have difficulty perceiving pictorial cues, these findings suggest that the visuomotor system might normally "prefer" binocular cues, but can fall back on learned pictorial information when binocular vision is denied.