There are multiple routes from vision to action that contribute to the production of visually guided reaching and grasping. What remains to be resolved, however, is the set of conditions under which these various routes are recruited in the generation of actions, and the nature of the information they convey. We argue in this chapter that the production of real-time actions directed at visible targets depends on pathways separate from those mediating memory-driven actions. Furthermore, the transition from real-time to memory-driven control occurs as soon as the intended target is no longer visible. Real-time movements depend on pathways running from the early visual areas to relatively encapsulated visuomotor mechanisms in the dorsal stream. These dedicated visuomotor mechanisms, together with motor centers in the premotor cortex and brainstem, compute the absolute metrics of the target object and its position in the egocentric coordinates of the effector used to perform the action. Such real-time programming is essential for the production of accurate and efficient movements in a world where the location and disposition of a goal object with respect to the observer can change quickly and often unpredictably. In contrast, we argue that memory-driven actions make use of a perceptual representation of the target object generated by the ventral stream. Unlike the real-time visuomotor mechanisms, perception-based movement planning relies on relational metrics and scene-based coordinates. Although such computations are less suited to real-time control, they make it possible to plan and execute actions on objects long after those objects have vanished from view.