Cornell researchers have discovered a new low-cost method for self-driving cars to accurately perceive 3D objects around them.
Light Detection and Ranging (LiDAR) sensors, which autonomous cars currently use to detect objects, are expensive and energy-inefficient. They are, however, extremely accurate, and have long been considered the only reliable way for autonomous vehicles to safely detect pedestrians and other road hazards.
In their paper, titled “Pseudo-LiDAR from Visual Depth Estimation: Bridging the Gap in 3D Object Detection for Autonomous Driving,” researchers propose converting image-based depth maps to pseudo-LiDAR representations – essentially mimicking LiDAR signal.
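The paper's core idea is to back-project each pixel of an estimated depth map into a 3D point cloud, so that the result can be fed to LiDAR-based detection pipelines. A minimal sketch of that conversion, assuming a standard pinhole camera model (the function name and the toy camera parameters here are illustrative, not taken from the paper's code):

```python
import numpy as np

def depth_to_pseudo_lidar(depth, fx, fy, cx, cy):
    """Back-project a per-pixel depth map into a 3D point cloud
    (a "pseudo-LiDAR" signal) using the pinhole camera model.

    depth: (H, W) array of metric depths along the camera z-axis.
    fx, fy: focal lengths in pixels; cx, cy: principal point.
    Returns an (H*W, 3) array of (x, y, z) camera-frame points.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx  # lateral offset (right)
    y = (v - cy) * z / fy  # vertical offset (down)
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy example: a flat wall 5 m in front of a tiny 4x4-pixel camera.
depth = np.full((4, 4), 5.0)
points = depth_to_pseudo_lidar(depth, fx=2.0, fy=2.0, cx=1.5, cy=1.5)
print(points.shape)  # (16, 3); every point has z == 5.0
```

In practice the depth map would come from a stereo disparity network, but once the points exist in 3D, any detector designed for real LiDAR scans can consume them.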
The team devised a simpler method: two inexpensive cameras mounted on either side of a vehicle's windshield can detect objects with accuracy approaching that of LiDAR, at a fraction of the cost.
Analyzing the captured images from a bird's-eye view, rather than the traditional frontal view, more than tripled detection accuracy.
The team says that this could make stereo cameras a viable and low-cost alternative to LiDAR.
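The bird's-eye view amounts to looking at the recovered 3D points from above, on the ground plane, where objects keep their shape and scale regardless of distance. A minimal sketch of building such a top-down grid from camera-frame points (the ranges and cell size are illustrative assumptions, not values from the paper):

```python
import numpy as np

def birds_eye_grid(points, x_range=(-40.0, 40.0), z_range=(0.0, 80.0), cell=0.5):
    """Discretize camera-frame 3D points into a top-down occupancy grid.

    points: (N, 3) array of (x, y, z), with x = right and z = forward.
    Returns a 2D boolean grid: True where at least one point lands in a cell.
    """
    nx = int((x_range[1] - x_range[0]) / cell)
    nz = int((z_range[1] - z_range[0]) / cell)
    grid = np.zeros((nz, nx), dtype=bool)
    ix = ((points[:, 0] - x_range[0]) / cell).astype(int)  # lateral bin
    iz = ((points[:, 2] - z_range[0]) / cell).astype(int)  # forward bin
    keep = (ix >= 0) & (ix < nx) & (iz >= 0) & (iz < nz)   # drop out-of-range points
    grid[iz[keep], ix[keep]] = True
    return grid

# Two nearby points on one car plus one distant point: two occupied cells.
pts = np.array([[1.0, -1.2, 10.0], [1.2, -1.2, 10.3], [-30.0, 0.0, 70.0]])
bev = birds_eye_grid(pts)
print(bev.shape, bev.sum())  # (160, 160) 2
```

In this representation a car 10 m away and a car 70 m away occupy the same number of cells, which is one intuition for why detectors fare better here than on the distance-distorted frontal image.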
Kilian Weinberger, associate professor of computer science and senior author of the paper, will present the team’s findings at the 2019 Conference on Computer Vision and Pattern Recognition, June 15-21 in Long Beach, California.
“One of the essential problems in self-driving cars is to identify objects around them – obviously that’s crucial for a car to navigate its environment,” said Weinberger.
“When you have camera images, it’s so, so, so tempting to look at the frontal view, because that’s what the camera sees,” Weinberger added. “But there also lies the problem, because if you see objects from the front then the way they’re processed actually deforms them, and you blur objects into the background and deform their shapes.”
The findings suggest that stereo cameras could serve as the primary method for identifying 3D objects in more affordable car models, or as a backup method in premium vehicles that are already equipped with LiDAR.
“The common belief is that you couldn’t make self-driving cars without LiDARs,” Weinberger said. “We’ve shown, at least in principle, that it’s possible.”
Mark Campbell, the John A. Mellowes ’60 Professor and S.C. Thomas Sze Director of the Sibley School of Mechanical and Aerospace Engineering and a co-author of the paper, said:
“The self-driving car industry has been reluctant to move away from LiDAR, even with the high costs, given its excellent range accuracy – which is essential for safety around the car.”
“The dramatic improvement of range detection and accuracy, with the bird’s-eye representation of camera data, has the potential to revolutionize the industry,” he added.
Co-author Bharath Hariharan, assistant professor of computer science, commented:
“There is a tendency in current practice to feed the data as-is to complex machine learning algorithms under the assumption that these algorithms can always extract the relevant information.
“Our results suggest that this is not necessarily true, and that we should give some thought to how the data is represented.”