“DC2G” navigation approach may speed up autonomous last-mile delivery

Image adapted from MIT News original.

A new navigation method developed by MIT researchers may speed up autonomous last-mile delivery.

Robots could soon be used as last-mile delivery vehicles, dropping off orders at a customer’s doorstep. To do this, however, a robot needs to be able to find the customer’s door – a task that has proven challenging.

The standard approach for robotic navigation involves mapping a specific area within a robot’s operational zone ahead of time and then using algorithms to guide the robot toward a specific location or GPS coordinate on the map. This approach, while adequate in certain situations, is not ideal in the context of last-mile delivery.
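For context, this kind of map-based navigation boils down to shortest-path search over a map built ahead of time. Below is a minimal sketch in Python – a plain breadth-first search over a small hypothetical occupancy grid, not any particular robot’s navigation stack:

```python
from collections import deque

def shortest_path(grid, start, goal):
    """BFS shortest path on a known occupancy grid (0 = free, 1 = blocked).
    Classical map-based navigation assumes `grid` was built ahead of time."""
    rows, cols = len(grid), len(grid[0])
    parents, queue = {start: None}, deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                       # reconstruct the route
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in parents:
                parents[(nr, nc)] = cell
                queue.append((nr, nc))
    return None                                # goal unreachable

print(shortest_path([[0, 0, 1], [1, 0, 0], [1, 1, 0]], (0, 0), (2, 2)))
```

The catch, as the researchers note, is that this only works where a map already exists – which is exactly what last-mile delivery cannot assume.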

Jennifer Chu of the MIT News Office explained how this approach can become unwieldy:

“Imagine, for instance, having to map in advance every single neighborhood within a robot’s delivery zone, including the configuration of each house within that neighborhood along with the specific coordinates of each house’s front door. Such a task can be difficult to scale to an entire city, particularly as the exteriors of houses often change with the seasons. Mapping every single house could also run into issues of security and privacy.”

To address this problem, a team of MIT engineers set out to develop a more feasible method that drastically cuts the amount of time a robot wastes exploring a property before identifying its target.

The team developed a new robotic navigation method that does not require mapping an area ahead of time.

The method allows a robot to use clues in its environment to plot a route to its destination, which can be described in general semantic terms, such as “front door” or “garage”.

“We wouldn’t want to have to make a map of every building that we’d need to visit,” says Michael Everett, a graduate student in MIT’s Department of Mechanical Engineering. “With this technique, we hope to drop a robot at the end of any driveway and have it find a door.”


Video: “Planning Beyond the Sensing Horizon Using a Learned Context”.


As the engineers explain in their research paper, the new approach, called Deep Cost-to-Go (DC2G), represents scene context in a semantic gridmap, learns to estimate which areas are beneficial to explore to quickly reach the goal, and then plans toward promising regions in the map.
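As a rough illustration of that planning step – a hedged sketch, not the authors’ code – the loop might look like the following, where `cost_to_go` stands in for the output of a hypothetical learned estimator:

```python
import numpy as np

def find_frontier(observed, free):
    """Observed free cells that border at least one unobserved cell."""
    rows, cols = observed.shape
    frontier = []
    for r in range(rows):
        for c in range(cols):
            if observed[r, c] and free[r, c]:
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols and not observed[nr, nc]:
                        frontier.append((r, c))
                        break
    return frontier

def pick_subgoal(cost_to_go, observed, free):
    """DC2G-style subgoal selection: head for the frontier cell whose
    estimated cost-to-go is lowest, i.e. the most promising to explore."""
    frontier = find_frontier(observed, free)
    return min(frontier, key=lambda cell: cost_to_go[cell]) if frontier else None

# Toy stand-in for the learned estimator: pretend the goal sits at the
# bottom-right corner, so estimated cost-to-go falls toward that corner.
H, W = 8, 8
cost_to_go = np.fromfunction(lambda r, c: (H - 1 - r) + (W - 1 - c), (H, W))
observed = np.zeros((H, W), dtype=bool)
observed[:4, :4] = True                      # the robot has only seen this patch
free = np.ones((H, W), dtype=bool)
print(pick_subgoal(cost_to_go, observed, free))   # -> (3, 3)
```

The key idea is that the robot never needs the full map: the learned estimate tells it which unexplored direction is most likely to lead to the goal.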

“Now we have an ability to give robots a sense of what things are, in real-time,” Everett said.

Across a test set of 42 real house layouts gathered from satellite images, the DC2G planning algorithm reached its goal 189% faster than a context-unaware planner, and came within 63% of the optimal path computed with a prior map.

The algorithm was inspired by image-to-image translation, “where you take a picture of a cat and make it look like a dog,” Everett said. “The same type of idea happens here where you take one image that looks like a map of the world, and turn it into this other image that looks like the map of the world but now is colored based on how close different points of the map are to the end goal.”
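Concretely, the learned component is an image-to-image network: a (partial) semantic map image goes in, and a cost-to-go image comes out. The PyTorch snippet below is a toy encoder-decoder standing in for that translator; the architecture and channel counts are illustrative assumptions, not the network from the paper:

```python
import torch
import torch.nn as nn

class MapToCostToGo(nn.Module):
    """Toy encoder-decoder that 'translates' a semantic gridmap image into a
    cost-to-go image (an illustrative stand-in for the paper's learned model)."""
    def __init__(self, in_channels=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, semantic_map):
        return self.decoder(self.encoder(semantic_map))

model = MapToCostToGo()
fake_map = torch.rand(1, 3, 64, 64)   # RGB-encoded semantic gridmap
cost_map = model(fake_map)            # values near 0 = estimated close to goal
print(cost_map.shape)                 # torch.Size([1, 1, 64, 64])
```

In the cat-to-dog analogy, the semantic map plays the role of the cat photo, and the cost-to-go map is the dog photo the network learns to produce.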

The research will be presented at the upcoming International Conference on Intelligent Robots and Systems (IROS). The research paper, titled “Planning Beyond the Sensing Horizon Using a Learned Context” [PDF], is a finalist for the Best Paper Award on Cognitive Robotics.


This article focuses on a major advancement in the field of artificial intelligence (AI).

AI, which refers to software technologies that make computers or robots think and behave like human beings, is becoming more common in business and production processes.

The term was coined by American computer scientist John McCarthy (1927-2011), who received the Turing Award in 1971 for his contributions to AI.