Driverless cars struggle to track objects while moving. So why don’t our eyes?

The way our brains track moving objects might be far simpler than first thought.

New research may have turned more than 100 years of thinking about the way our brains process visual information on its head.

Until now, scientists had never reached a consensus on how our brains successfully track objects as our eyes move multiple times every second, with remarkable coordination and seemingly minimal effort.

A new paper, led by cognitive neuroscientist Dr Will Harrison from the University of the Sunshine Coast, might have an answer.

“Just like a driverless car must coordinate its movement on the road with regard to objects around it, the brain has to coordinate movement of the eyes, head and body, while also maintaining a coherent understanding of the visual world,” Dr Harrison said.

"The prevailing hypothesis for more than 100 years has been that the brain achieves this by continually predicting what the world would look like if it executes a particular movement. However, such predictions would require a tremendous amount of computing power.

“Our research shows the answer might be far simpler,” Dr Harrison said.

Instead, it could be that the brain computes the real-world locations of objects by simply combining information about where the eyes are pointing, and where visual information falls on the retinas.

In monkeys, which have a visual system similar to that of humans, the parts of the brain that first receive visual signals from the eyes also receive information about where the eyes are pointing.
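As a rough illustration of that combination rule (a minimal sketch, not the model published in the paper; the function name, angles and numbers are all ours), the idea reduces to simple addition once positions are expressed as visual angles:

```python
# Illustrative sketch only: if directions are measured as horizontal visual
# angles in degrees, an object's world-centred direction is just the gaze
# direction plus where the object's image falls on the retina.

def world_position(gaze_deg: float, retinal_deg: float) -> float:
    """World-centred direction = gaze direction + retinal offset (degrees)."""
    return gaze_deg + retinal_deg

true_object = 5.0  # object sits 5 degrees to the right of straight ahead

for gaze in [0.0, 5.0, -10.0]:       # the eyes fixate three different spots
    retinal = true_object - gaze     # where the object's image lands on the retina
    print(f"gaze={gaze:+.1f}  retinal={retinal:+.1f}  "
          f"world={world_position(gaze, retinal):+.1f}")

# The recovered world position is +5.0 every time: the object is perceived as
# stable even though its retinal position changes with each eye movement.
```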

To test this idea, Dr Harrison and his colleagues conducted an experiment in which participants performed a difficult visual discrimination task while moving their eyes around the display. Using a high-speed eye tracker that measures where a person is looking 1000 times per second, Dr Harrison found people tracked the location of objects across eye movements with far greater accuracy, and with much greater temporal resolution, than previously thought.

“We found no evidence that the brain formulated a prediction with each eye movement, but we did find that the speed with which people could track objects across eye movements was very similar to the timing of activity previously observed in the monkey brain,” Dr Harrison said.

The researchers then developed a mathematical model to simulate how the brain could calculate an object’s real-world location. The effectiveness of the model confirmed that visual stability likely involves far simpler calculations than previously thought.
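The paper’s actual model is not reproduced here, but a toy simulation in the same spirit (our assumptions throughout: a drifting object, a single saccade, and a slightly noisy eye-position signal) shows how summing the two signals tracks an object through an eye movement with no predictive step at all:

```python
# Toy simulation (our assumptions, not the published model): the object drifts
# rightward while the eyes make a saccade halfway through. Adding the noisy
# eye-position signal to the retinal position recovers the object's world
# trajectory without predicting the visual consequences of the movement.
import random

random.seed(1)

steps = 10
object_path = [2.0 + 0.5 * t for t in range(steps)]  # object drifts in the world
gaze_path = [0.0] * 5 + [8.0] * 5                    # abrupt saccade at step 5

for t in range(steps):
    retinal = object_path[t] - gaze_path[t]          # image position on the retina
    eye_signal = gaze_path[t] + random.gauss(0, 0.1) # noisy estimate of gaze
    estimate = eye_signal + retinal                  # combination, no prediction
    print(f"t={t}: true={object_path[t]:.2f}  estimated={estimate:.2f}")
```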

So, does this mean we’ll see car manufacturers adopting this new visual tracking model in their driverless cars?

One of the primary challenges preventing these cars from entering the mainstream is that engineers have difficulty working out how to process huge volumes of data within the timeframe required for a moving vehicle to operate safely.

“It’s hard to say, but our findings may demonstrate an inefficiency in the way their computers are trying to process visual data,” Dr Harrison said.

“What this research does change is decades of conventional wisdom about how our brain processes visual information. We are hopeful that our revised theory could help explain how the brain coordinates other complex actions across many different senses.”

The full paper, published in the Proceedings of the National Academy of Sciences, is available here.

Media enquiries: Please contact the Media Team media@usc.edu.au