Ophthalmology technology could help robots and cars see in 3D

A team of researchers based at Duke University and funded in part by the U.S. National Science Foundation leveraged their experience in optical coherence tomography — known as OCT, a noninvasive test used to image the retina — to refine and improve vision technology for robots and self-driving cars.

Traditional lidar, short for light detection and ranging, doesn’t perform optimally in 3D applications because its signals can be swamped by dim reflected light or bright direct sunlight. The team set out to find a solution and turned to a variant called frequency-modulated continuous-wave lidar, or FMCW lidar.

Imaging advances could help cars and robots see millimeter-scale features. Image credit: Ruobing Qian, Duke University

“FMCW lidar shares the same working principle as OCT, which the biomedical engineering field has been developing since the early 1990s,” said Ruobing Qian, one of the authors of the study published in Nature Communications. “But 30 years ago, nobody knew autonomous cars or robots would be a thing, so the technology focused on tissue imaging. Now, to make it useful for these other emerging fields, we need to trade in its extremely high-resolution capabilities for more distance and speed.”

FMCW lidar sends out a laser beam whose frequency sweeps continuously. The detector measures the reflection time by comparing the returning light against the outgoing beam, which lets it distinguish the system’s own frequency pattern from other light sources. The result is an adaptable, high-speed technology that remains effective in variable lighting conditions.
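To make the ranging principle concrete, here is a minimal sketch (not the Duke team’s actual code; the bandwidth and sweep-time values are illustrative) of how a linearly swept FMCW system recovers distance: the round-trip delay of the reflection shows up as a constant “beat” frequency between the outgoing and returning light, which maps directly back to range.

```python
# Illustrative FMCW ranging sketch. A linear frequency chirp reflects
# off a target; the delay produces a beat frequency proportional to range.

C = 3.0e8            # speed of light, m/s
BANDWIDTH = 4.0e12   # chirp bandwidth in Hz (assumed, illustrative)
SWEEP_TIME = 10e-6   # duration of one frequency sweep in s (assumed)

CHIRP_RATE = BANDWIDTH / SWEEP_TIME  # Hz of frequency shift per second

def beat_frequency(range_m):
    """Beat frequency produced by a target at range_m meters."""
    round_trip_delay = 2.0 * range_m / C
    return CHIRP_RATE * round_trip_delay

def range_from_beat(f_beat):
    """Invert the relationship: recover range from a measured beat."""
    return f_beat * C / (2.0 * CHIRP_RATE)

f_b = beat_frequency(5.0)        # simulate a target 5 m away
print(range_from_beat(f_b))      # recovers the 5 m range
```

Because the measurement is a frequency comparison against the system’s own chirp, stray light at other frequencies simply doesn’t produce a stable beat, which is one reason the approach holds up in bright or variable conditions.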

Instead of the rotating mirrors used in traditional lidar, the team used a diffraction grating that works like a prism, fanning the swept frequencies out across different angles as the beam leaves the source. The modifications allow for a larger area of coverage without compromising depth or accuracy. The system has unprecedented localization accuracy and data throughput fast enough to capture moving human body parts in high detail and in real time.
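The grating-based steering described above follows from the standard grating equation, d·sin(θ) = m·λ: as the FMCW sweep shifts the laser’s wavelength, the diffraction angle shifts with it, scanning the beam with no moving parts. A small sketch, using an assumed grating pitch and wavelengths (not the values from the study):

```python
import math

GRATING_PITCH = 2.0e-6   # grating line spacing d, meters (assumed)
ORDER = 1                # diffraction order m

def diffraction_angle(wavelength_m):
    """First-order diffraction angle (radians) at normal incidence,
    from the grating equation d * sin(theta) = m * lambda."""
    return math.asin(ORDER * wavelength_m / GRATING_PITCH)

# Sweeping the laser wavelength sweeps the beam angle: mirror-free scanning.
for wl_nm in (1540, 1550, 1560):
    theta_deg = math.degrees(diffraction_angle(wl_nm * 1e-9))
    print(f"{wl_nm} nm -> {theta_deg:.2f} deg")
```

The design choice matters because the same frequency sweep that encodes range also performs the beam steering, so coverage grows without sacrificing the depth precision of the measurement.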

“In much the same way that electronic cameras have become ubiquitous, our vision is to develop a new generation of lidar-based 3D cameras that are fast and capable enough to enable integration of 3D vision into all sorts of products,” study co-author Joseph Izatt said. “The world around us is 3D, so if we want robots and other automated systems to interact with us naturally and safely, they need to be able to see us as well as we can see them.”

Source: NSF
