A new study by Jonathan S. Tsay, Steven Tan, Marlena A. Chu, Richard B. Ivry, and Emily A. Cooper has been published in the Journal of Cognitive Neuroscience. The paper grew out of a collaboration among colleagues in psychology who study movement (PhD candidate Jonathan Tsay and Dr. Rich Ivry of the CognAc Lab), vision science professor Dr. Emily Cooper, optometry student Steven Tan, and Dr. Marlena Chu, chief of the Low Vision Clinic at the Herbert Wertheim School of Optometry & Vision Science. The study, which offers new insight into how visual impairment affects the way people correct movement errors, was designed to run fully online: each participant completed the experiment from home during a period when in-person research was restricted on campus.
“From a practical perspective, our results provide the first characterization of how low vision affects not only motor performance, but also motor learning. Specifically, when the sensory inputs to the sensorimotor system cannot be clearly disambiguated because of low vision (i.e., small and uncertain visual errors), the extent of implicit adaptation is attenuated. However, when visual errors are clearly disambiguated despite having low fidelity (i.e., large and uncertain errors), the extent of implicit adaptation is not impacted by low vision. This dissociation underscores how the underlying learning mechanism per se is not compromised by low vision and may be exploited to enhance motor outcomes during clinical rehabilitation (Tsay & Winstein, 2020). For example, clinicians and practitioners could use nonvisual feedback (e.g., auditory or tactile) to enhance the saliency and possibly reduce localization uncertainty of small visual error signals (Endo et al., 2016; Patel, Park, Bonato, Chan, & Rodgers, 2012). Moreover, rehabilitative specialists could provide explicit instructions to highlight the presence of small errors, such that individuals may learn to rely more on explicit reaiming strategies to compensate for these errors (Merabet, Connors, Halko, & Sánchez, 2012). Future work could examine which of these techniques is most effective to enhance motor learning when errors are small.”
Successful goal-directed actions require constant fine-tuning of the motor system. This fine-tuning is thought to rely on an implicit adaptation process driven by sensory prediction errors (e.g., where you see your hand after reaching vs. where you expected it to be). Individuals with low vision experience challenges with visuomotor control, but whether low vision disrupts motor adaptation has been unknown. To explore this question, the researchers assessed individuals with low vision and matched controls with normal vision on a visuomotor task designed to isolate implicit adaptation. They found that low vision was associated with attenuated implicit adaptation for small visual errors but not for large ones. This result highlights important constraints on how low-fidelity visual information is processed by the sensorimotor system to enable successful implicit adaptation.
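The trial-by-trial logic of implicit adaptation is often described with a simple state-space model, in which each reach is corrected by a fraction of the observed visual error while some of the learned correction decays between trials. The sketch below is purely illustrative and is not the authors' model; the function name, parameter values, and the idea of modeling low vision as a reduced learning rate for a small error are all assumptions for demonstration.

```python
# Illustrative state-space sketch of implicit adaptation (NOT the
# authors' model; all parameter values here are hypothetical).

def simulate_adaptation(error_size, n_trials=100,
                        retention=0.98, learning_rate=0.15):
    """Simulate hand-angle adaptation across trials for a fixed visual
    error of `error_size` degrees (as in an error-clamp design, where
    the error stays constant regardless of the participant's reach)."""
    x = 0.0          # internal adaptation state, in degrees
    history = []
    for _ in range(n_trials):
        # Update: retain most of the current state, then adjust by a
        # fraction of the (constant) visual error.
        x = retention * x + learning_rate * error_size
        history.append(x)
    return history

# One hypothetical way to capture the study's finding for SMALL errors:
# if a small, uncertain error is harder to disambiguate under low
# vision, its effective learning rate drops, lowering the asymptote.
normal = simulate_adaptation(error_size=3.5)
low_vision = simulate_adaptation(error_size=3.5, learning_rate=0.05)
print(f"normal: {normal[-1]:.1f} deg, low vision: {low_vision[-1]:.1f} deg")
```

Under this toy parameterization, both groups adapt over trials, but the reduced learning rate yields a lower asymptotic adaptation, qualitatively matching the attenuation the study reports for small errors.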
Target Practice: Your Accuracy and Ability to Adapt
Click on the link below to try a task very similar to the one the authors used in their study.
Target Practice
Read the Paper
Journal of Cognitive Neuroscience
Labs and Clinics
The Cognition and Action Lab
Emily Cooper Lab
Low Vision Clinic