Canada Research Chair (CRC) in Visual-Motor Neuroscience and Scientific Director of Vision: Science to Applications (VISTA), Professor Doug Crawford has focused his research for the past two decades on the control of visual gaze in 3D space, eye-hand coordination and spatial memory during eye movements. The Distinguished Research Professor is also a member of the Centre for Vision Research (CVR), an institution that’s leading the way, on a global scale, in human and machine vision research. It was here that some compelling new research took place.
One of Crawford’s graduate students, Morteza Sadeh (now a neurosurgeon at the University of Illinois College of Medicine), led a study that recorded the activity of the superior colliculus, a structure in the brain that’s part of the circuit that transforms sensory input into movement output.
This research was supported by the Canadian Institutes of Health Research (CIHR). The findings, which could help to treat Parkinson’s disease and depression, were published as “Timing Determines Tuning: a Rapid Spatial Transformation in Superior Colliculus Neurons During Reactive Gaze Shifts” in eNeuro (2020).
Crawford and Sadeh sit down with Brainstorm to discuss the significance and applications of this new research.
Q: What were the study’s objectives?
DC: There’s an area of the midbrain called the superior colliculus that has a specific function: if you stimulate this area, it’ll make your eyes and your head move toward a target. It has been believed for many years that this area is involved in converting visual input into the command to turn the eyes and head in the same direction.
Now, the problem is how do you show that? In the past, people tried to separate the visual part from the movement part. They’d shine a light and then have the subject look at it, or they’d shine a light and tell subjects to look the opposite way.
Until now, there wasn’t any technology to show what’s happening in that very short time. That’s where we come in: we wanted to see, during a normal eye movement to a visual target, how this spot, the superior colliculus, converts vision into the movement command. That was the objective.
Q: How did you go about doing this research?
DC: We recorded the activity of individual cells in the superior colliculus while the subject was looking at different lights. We flashed lights in the area of space that activates those cells. When the subject turned their eyes and head toward the flashed light, we recorded the eye and head movements along with the cell activity.
Then we analysed the data using special software that we developed at York, which allows us to decode, at each point in time, what the neurons are actually encoding. With this new software, we can break that down into very short time periods. Even though these neurons would only be active for 200 milliseconds – that’s one-fifth of a second – we could track through that time period how that code is changing.
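The decoding idea Crawford describes can be caricatured in a toy analysis. The sketch below is not the York lab’s actual software; the simulated data, variable names and the simple correlation test are all illustrative assumptions. It asks, for each short time bin, whether a cell’s trial-by-trial firing rate is better predicted by where the target was or by where the gaze ended up:

```python
import numpy as np

def time_resolved_fit(rates, target_pos, gaze_pos):
    """For each time bin, label whether trial-by-trial firing rates
    correlate better with target position or with final gaze position.

    rates:      (n_trials, n_bins) array of firing rates
    target_pos: (n_trials,) target direction on each trial
    gaze_pos:   (n_trials,) final gaze direction on each trial
    Returns a list with one label ("target" or "gaze") per time bin.
    """
    labels = []
    for b in range(rates.shape[1]):
        r_target = abs(np.corrcoef(rates[:, b], target_pos)[0, 1])
        r_gaze = abs(np.corrcoef(rates[:, b], gaze_pos)[0, 1])
        labels.append("target" if r_target >= r_gaze else "gaze")
    return labels

# Simulated data: gaze lands near the target but with motor error,
# an early bin tracks the target, and a late bin tracks the gaze shift.
rng = np.random.default_rng(0)
n = 200
target = rng.uniform(-20, 20, n)            # target direction (deg)
gaze = target + rng.normal(0, 3, n)         # gaze endpoint with motor error
early = target + rng.normal(0, 1, n)        # early-bin rate follows the target
late = gaze + rng.normal(0, 1, n)           # late-bin rate follows the gaze
rates = np.column_stack([early, late])

print(time_resolved_fit(rates, target, gaze))
```

In this simulation the early bin is labelled "target" and the late bin "gaze", mirroring the kind of within-burst switch the study reports, although the real analysis fits full spatial models rather than simple correlations.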
Q: What were the key findings, and did anything surprise you?
DC: There are two key findings: one we expected – it was our hypothesis – and the other was a bit surprising. We expected to find that early on, in what we call a burst of activity, the neurons encoded where the target was, relative to the eye. Then, 100 milliseconds later, they were already encoding where the subject wanted to move – that is, the gaze. This involves movements of both the eyes and the head toward the object.
That was our first finding: there’s a very rapid switch from coding “where is the target?” to “where am I going to move my gaze?” We determined that it takes a person about one-third of a second to do this.
Then something surprised us. In the superior colliculus, you have different kinds of cells. Some only code the visual response, some only code the movement response and some code both. When we tracked all their activity, we found that they’re all involved in this transformation. We now believe they’re sharing information with each other – so as one develops a new code, it passes it on to its neighbours and, in the end, they all end up doing the same thing.
Q: Is this a “first”?
DC: Yes. We developed the software to do that here. We’re the first to show this, and the first to track what happens in different kinds of cells.
Q: How can this work be applied?
DC: The technology we developed could be applied in pre-surgical recordings in human patients. When you do surgeries, you want to know as much as you can, in advance, about the area you’re operating on. This knowledge could help when you’re removing tissue, say, or when you’re doing brain stimulation.
We could apply this to patients to figure out exactly what these neurons are coding in the brain. My lab is collaborating with Dr. Adam Sachs in Ottawa right now. We’ve been running these exact experiments in different areas of the brain in some of his patients.
MS: The findings of our study could be applied to brain stimulation, which is becoming more widely used given its efficacy in movement disorders and psychiatric disorders. Our findings have great potential here.
Q: Could this be used to treat depression and Parkinson’s disease?
MS: Absolutely. Depression and Parkinson’s are among the few disorders where brain stimulation is widely used. What we found in this study would be most applicable to movement disorders like Parkinson’s because it’s about the control of movement. We could use what we’ve learned to improve deep brain stimulation approaches and targets for the treatment of Parkinson’s disease.
Q: York University and the CVR are leading the way in this kind of research.
DC: For many years, the Centre for Vision Research (CVR) has been the largest and, we would argue, the best vision centre in Canada. We’re mostly known for discovery research, but the Vision: Science to Applications (VISTA) grant enabled us to move toward applications. The collaboration I mentioned with Ottawa was funded by VISTA as well as CIHR.
With all this increased support that we have for research, trainees and collaborations with partners, we really do aim to be the top vision centre in the world.
To read the article, visit the website. To read more on the York Centre for Vision Research, see the website. To learn more about VISTA, visit the website. To read more about Doug Crawford, visit his Faculty profile page.
By Megan Mueller, senior manager, Research Communications, Office of the Vice-President Research & Innovation, York University, email@example.com