Whatever automaker marketing may claim, researchers agree that current self-driving technology is still not safe enough. Many research and development projects continue to improve autonomous driving, and one of them claims to have identified a key ingredient for making it more reliable and safer.
Heng “Hank” Yang, a graduate student at the Massachusetts Institute of Technology (MIT), is working with Luca Carlone, the Leonardo Career Development Associate Professor in Engineering, on what is known as “certifiable perception,” a project aimed at developing algorithms that can certify the correctness of a robot’s perception.
The premise is that robotic systems that interpret their environment (as driverless cars do) rely on algorithms to make estimates, but there is currently no way to tell whether those estimates are correct. A “certificate” of correctness would therefore be valuable.
For example, a self-driving car takes a snapshot of an approaching car and then tries to “match” each key point in that image with the labeled key points of a 3D car model; the key points themselves are detected by a neural network, a machine-learning model. The algorithm developed by Yang’s team then searches for the correct match: if a candidate match is wrong, it knows to keep trying, and once no better solution exists, it issues a certificate.
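The certify-or-reject idea can be sketched in a few lines. The snippet below is a minimal illustration, not the team's actual method: it stands in a simple 2D rigid alignment (the Kabsch least-squares fit) for the real 3D optimization, and the function names and the residual tolerance are hypothetical choices made for the example.

```python
import numpy as np

def kabsch_2d(src, dst):
    """Best-fit rotation and translation aligning src points to dst (least squares)."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:          # keep a proper rotation (no reflection)
        Vt[-1] *= -1
        R = (U @ Vt).T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

def certify_match(model_pts, image_pts, tol=1e-3):
    """Fit the best transform for a candidate matching; certify it only if it fits well."""
    R, t = kabsch_2d(model_pts, image_pts)
    residual = np.max(np.linalg.norm((model_pts @ R.T + t) - image_pts, axis=1))
    return residual, residual < tol

# Hypothetical model key points and their observed image positions (rotated + shifted).
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
model = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [1.5, 1.0]])
image = model @ R_true.T + np.array([0.5, -0.2])

res, ok = certify_match(model, image)                 # correct matching: certified
res_bad, ok_bad = certify_match(model[[1, 0, 2, 3]], image)  # two points swapped: rejected
```

A wrong matching (the swapped key points) cannot be explained by any rigid motion, so its residual stays large and no certificate is issued, which is the behavior the article describes.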
Ultimately, the perception system should “recognize” when it has failed and, in that case, alert the driver to take over.
The 3D models would also allow driverless cars to identify vehicle shapes that are not in their model library, by morphing a known model until it matches the 2D snapshot.
The Yang team’s algorithm has already won the “Best Paper Award in Robot Vision” at the International Conference on Robotics and Automation (ICRA) and was a finalist for the “Best Paper Award” at the Robotics: Science and Systems (RSS) Conference.
Next-generation algorithms like these could be the key to “trustworthy autonomy” in vehicles, says the young researcher.