
Full-color night vision coming for drivers, with help from insect eyes

Full-color night vision for drivers may soon appear in a vehicle near you, thanks to a little inspiration from the eyes of nocturnal insects.
Written by Andrew Nusca, Contributor

Motivated by the safety potential of night vision systems, scientists have developed a digital image-processing algorithm that lets a car moving at speed capture full-color images at night, New Scientist reports.

Current night vision systems produce only monochromatic images, using infrared light that is invisible to our eyes.

The new image-processing algorithm works in real time, adapts to light levels automatically and needs only the processor of a standard computer graphics card to run. It was developed by Eric Warrant, with help from colleagues Henrik Malm, Magnus Oskarsson and Almut Kelber of the University of Lund in Sweden and Jonas Ambeck-Madsen and Hiromichi Yanagihara of the Toyota Motor Europe R&D center in Brussels, Belgium.

The inspiration? Bees, beetles and moths. These insects have compound eyes with multiple lenses that, together, form a single image on light-sensitive photoreceptors. These small lenses should, in theory, be far worse at night vision than the larger human eye -- but the difference lies in how the insect processes the light it collects.

As available light diminishes, the nerves in the insect's eye have two options: pool the signals from neighboring photoreceptors, or collect signals over a longer period of time -- trading away spatial and temporal detail, respectively.

Here's how the algorithm works: First, the brightness and contrast of the image are boosted using "non-linear amplification" -- meaning the brightest sections are left unaltered while the darkest are expanded, analogous to what happens in an insect's retina.
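The article doesn't give the exact amplification curve, but a power-law (gamma) mapping is one simple way to illustrate the principle; the function name and the gamma value in this Python sketch are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def nonlinear_amplify(frame, gamma=0.4):
    """One simple form of non-linear amplification: a power-law curve.

    Values near 0 (dark pixels) are stretched upward, while values
    near 1 (bright pixels) map almost onto themselves -- so the
    darkest sections expand and the brightest stay nearly unaltered.
    """
    normalized = frame.astype(np.float64) / 255.0  # scale to [0, 1]
    boosted = normalized ** gamma                  # expands the dark end
    return (boosted * 255.0).astype(np.uint8)

# A dark pixel (10) becomes roughly 7x brighter, while a bright
# pixel (240) barely moves.
frame = np.array([[10, 240]], dtype=np.uint8)
print(nonlinear_amplify(frame))
```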

To handle the graininess of the resulting image, "spatiotemporal noise reduction" -- the pooling of signals -- is applied. The algorithm determines how much to pool, and where, by comparing the value stored in each pixel with the values in its neighbors, looking for continuity or correlation -- an object -- across the frame and in successive frames.
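The published code isn't reproduced in the article, but the pooling idea can be sketched: weight each spatial and temporal neighbor by how similar its value is to the center pixel, so that pixels likely belonging to the same object pool together while dissimilar ones don't. The function name, window size and similarity scale below are assumptions for illustration:

```python
import numpy as np

def spatiotemporal_pool(frames, radius=1, similarity=20.0):
    """Denoise the middle frame of a short stack of grayscale frames.

    Each neighbor (in space and time) is weighted by how close its
    value is to the center pixel's: similar values suggest continuity
    (the same object), so they pool strongly; dissimilar values pool
    weakly and don't smear across edges.

    frames: float array of shape (T, H, W)
    """
    T, H, W = frames.shape
    center = frames[T // 2]
    pooled = np.zeros((H, W))
    weights = np.zeros((H, W))
    for t in range(T):
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                neighbor = np.roll(np.roll(frames[t], dy, axis=0), dx, axis=1)
                # Similar values get near-1 weight; outliers get near-0.
                w = np.exp(-((neighbor - center) / similarity) ** 2)
                pooled += w * neighbor
                weights += w
    return pooled / weights
```

This is essentially a bilateral-style filter extended across frames, offered only as a sketch of the pooling idea rather than the authors' method.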

New Scientist gets into the details:

The algorithm also looks for patterns repeating in successive frames to spot movement, and depending on the presence and speed of movement, it tunes the degree of temporal and spatial summation between pixels. Where an object is fast moving, summation between frames must stay low to avoid blurring, and the algorithm relies more on spatial summation within single frames. When objects are static the algorithm can pool in time and capture more spatial detail. Pooling pixels in this way comes at a price, though. "The fineness of spatial detail declines," Warrant says, "but the coarser details left can be seen much more clearly."
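A toy two-frame version of that motion-dependent tuning might look like the following; the frame-differencing motion estimate, the threshold and the function names are illustrative assumptions, not the published method:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_summation(prev_frame, frame, motion_thresh=15.0):
    """Blend spatial and temporal summation per pixel based on motion.

    Where frame-to-frame change is large (something is moving), lean
    on a spatial average within the current frame to avoid blur;
    where the scene is static, lean on temporal averaging, which
    preserves spatial detail.
    """
    frame = frame.astype(float)
    prev_frame = prev_frame.astype(float)

    motion = np.abs(frame - prev_frame)
    # alpha -> 1 where motion is strong, -> 0 where the scene is static
    alpha = np.clip(motion / motion_thresh, 0.0, 1.0)

    spatial = uniform_filter(frame, size=3)   # pooling within one frame
    temporal = 0.5 * (frame + prev_frame)     # pooling across frames
    return alpha * spatial + (1.0 - alpha) * temporal
```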

Leftover isolated pixels are most likely noise, and are removed.
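A small median filter is one standard way to perform this kind of cleanup, shown here purely as an illustration of the step rather than the authors' technique:

```python
from scipy.ndimage import median_filter

def remove_isolated_pixels(frame):
    """A lone pixel unlike every neighbor is almost certainly noise,
    not an object; the 3x3 median replaces it with a plausible value
    while leaving larger structures intact."""
    return median_filter(frame, size=3)
```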

The last step, which actually occurs during the pooling process, uses "lateral inhibition" to sharpen the image and restore edges softened by the noise reduction.
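Lateral inhibition -- a photoreceptor's response suppressed by its neighbors' -- amounts to subtracting a local average, which is what a center-surround kernel does. A minimal sketch, where the kernel choice and strength parameter are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import convolve

def lateral_inhibition(frame, strength=1.0):
    """Sharpen edges with a center-surround (discrete Laplacian)
    kernel: each pixel is boosted by how much it exceeds its
    neighbors, restoring edges softened by the pooling step."""
    kernel = np.array([[ 0, -1,  0],
                       [-1,  4, -1],
                       [ 0, -1,  0]], dtype=float)
    edges = convolve(frame.astype(float), kernel, mode="nearest")
    return np.clip(frame + strength * edges, 0, 255)
```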

Installing the technology in cars and trucks is still years away, but Toyota engineers are exploring ways to incorporate it into the dashboard -- say, as an alert for an object on the road ahead or as a trigger for an automated crash-prevention mechanism.

Image: Malakym

This post was originally published on Smartplanet.com
