PITTSBURGH (April 9, 2018) … Our local environments are full of moving objects, but when we look at them, our brains can take around 50-60 milliseconds to put together an image. How does our vision compensate for that lag when the world around us keeps moving?
Neeraj Gandhi, professor of bioengineering in the University of Pittsburgh Swanson School of Engineering, received funding to explore that question by comparing the neural mechanisms of eye movements directed at stationary and moving objects.
Gandhi leads the Cognition and Sensorimotor Integration Laboratory, which investigates the neural mechanisms involved in the many facets of sensory-to-motor transformations and cognitive processes. In this project, the group uses eye movement as a model of motor control.
“When we look at our local environment, our eyes do not do so with steady fixation. The brain sends a signal to the eye muscles resulting in a rapid eye movement, or saccade, that occurs several times per second,” said Gandhi. “Our visual information is taken from the points of fixation between these saccades. While the neural mechanisms of saccades to stationary objects have been well researched, little is known about the interceptive saccades used for moving objects.”
The National Institutes of Health awarded Gandhi $1.5M to develop experimental and computational approaches to study the “Neural Control of Interceptive Movements.”
“Consider catching a football. By the time the receiver’s brain gathers visual information, the ball has already moved further down the field,” explained Gandhi. “The athlete’s brain must then factor velocity into the equation and develop an internal representation of the motion in order to successfully catch the ball.”
The team will record the activity of neurons in the superior colliculus, a layered structure in the midbrain and a central element in producing saccadic eye movements. They will compare the spatiotemporal properties of this neural activity during saccades to stationary targets and to targets moving at different speeds and in different directions. They will then integrate these results into a computational neural network model that simulates the neural signals and their contributions to producing both types of eye movements.
“Vision is a complicated, multidisciplinary subject,” said Gandhi. “The results of this project will hopefully piece together a part of the puzzle by providing in-depth insight into the mechanisms for generation of interceptive saccades and give us a better understanding of how we visualize our active environment.”
Contact: Leah Russell