


Inspired by the human brain, this chip helps robots see motion 4x faster than humans

Researchers in China have developed a neuromorphic chip inspired by the brain’s lateral geniculate nucleus to enable real-time robotic vision.

A new chip, inspired by the human brain, is helping robots see and respond in real time. Developed by researchers from China’s Beihang University and the Beijing Institute of Technology, the chip can detect object movement four times faster than the human eye.

The breakthrough, which builds on neuromorphic engineering, drew inspiration from a lesser-known brain structure called the lateral geniculate nucleus (LGN).

Located between the retina and the visual cortex, the LGN functions both as a relay and a filter, allowing the human visual system to focus processing power on fast-moving or rapidly changing objects.

A conventional robotic vision system uses cameras to capture static frames and tracks motion through brightness changes from one frame to the next. This method is reliable but slow: processing a single frame often takes more than half a second.
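The frame-to-frame approach described above can be sketched in a few lines. This is an illustrative frame-differencing routine, not the researchers' actual pipeline: it flags any pixel whose brightness changes by more than a chosen threshold between two consecutive grayscale frames.

```python
import numpy as np

def frame_difference(prev_frame, next_frame, threshold=25):
    """Detect motion as per-pixel brightness change between two
    grayscale frames (values 0-255). Pixels whose brightness shifts
    by more than `threshold` are flagged as moving."""
    diff = np.abs(next_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold  # boolean motion mask

# Toy example: a bright object "moves" one pixel to the right.
prev = np.zeros((4, 4), dtype=np.uint8)
nxt = prev.copy()
prev[1, 1] = 200
nxt[1, 2] = 200
mask = frame_difference(prev, nxt)
# Two pixels register change: where the object left and where it arrived.
```

Because every pixel of every frame must be compared, the cost of this style of processing scales with the full image, whichever parts of the scene are actually moving.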

For applications such as self-driving vehicles moving at high speed, that delay is critical: a fraction of a second can be enough to cause a crash.

To address this, the researchers developed a custom neuromorphic module that detects changes in light over time. The approach lets robotic vision handle motion in real time and concentrate processing power on the regions where motion is actually occurring.
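The idea of reacting to change over time, rather than scanning whole frames, can be illustrated with a simple event-style sketch. This is a loose analogy to neuromorphic sensing in general, assumed for illustration rather than taken from the paper: static pixels emit nothing, so downstream processing only touches locations where brightness actually changed.

```python
import numpy as np

def temporal_events(frames, threshold=0.2):
    """Emit sparse (t, y, x) 'events' wherever log-brightness changes
    between consecutive frames. Static pixels produce no output, so
    later stages can focus computation on moving regions only."""
    events = []
    log_prev = np.log1p(frames[0].astype(np.float32))
    for t, frame in enumerate(frames[1:], start=1):
        log_cur = np.log1p(frame.astype(np.float32))
        ys, xs = np.nonzero(np.abs(log_cur - log_prev) > threshold)
        events.extend((t, int(y), int(x)) for y, x in zip(ys, xs))
        log_prev = log_cur
    return events

# Toy sequence: a mostly static 3x3 scene in which one pixel brightens.
f0 = np.zeros((3, 3), dtype=np.uint8)
f1 = f0.copy()
f1[1, 2] = 100
events = temporal_events([f0, f1])
# Only the single changed pixel generates an event.
```

The sparse output is the point: instead of half a second spent on a full frame, work is proportional to the handful of locations where something moved.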

In testing, the researchers used the chip to simulate driving scenarios and to control robotic arms completing tasks. Processing delay was cut by approximately 75%, and motion-tracking accuracy doubled on complex tasks.

Compared to previously used methods, the new chip was able to detect motion four times faster.

“This study elevates video analysis speed beyond human levels by applying the brain’s visual processing principles to a semiconductor chip. It can be applied not only to collision avoidance in autonomous vehicles and real-time object tracking in drones but also to fields where robots read and respond instantly to human gestures,” says the research team.

The chip still relies on optical-flow algorithms to interpret the final image, and it often struggles in visually cluttered environments where multiple motions occur simultaneously.

However, it could be useful in domestic settings, where robots must detect small changes such as gestures and facial expressions. That could also make human-robot interaction feel more natural.
