Apple researchers are usually tight-lipped about their autonomous vehicle work, but a newly published paper offers some insight into what they're working on.

Ever since reports surfaced in 2014 that Apple was working on an autonomous car, reporters who normally cover car manufacturers tuned their ears to what was happening at One Infinite Loop in Cupertino. Since then, Apple has dialed back its hopes for a full autonomous car, but that doesn't mean it has stopped research altogether. Instead, the company has pivoted to autonomous technology that could potentially be used by other car manufacturers. Normally its research would be very hush-hush, but not this time. According to AutoNews.com's report on Apple's latest publicly available research, posted yesterday (Nov. 20, 2017), Apple's new approach both simplifies and outperforms traditional lidar-based methods for detecting objects in front of a vehicle.

Appearing on arXiv.org and publicly available through Cornell University, the paper is entitled "VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection." Traditionally, an autonomous vehicle uses ordinary two-dimensional cameras alongside lidar units that measure how far away the objects in those two-dimensional images actually are. The driving system then combines these two inputs into a 3D representation, normally a grid that becomes a bounding box, which is then identified as a moving object, say a cyclist or another car.
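To make that fusion step concrete, here is a minimal sketch (not Apple's code, and not from the paper) of the geometry involved: given a depth reading at a camera pixel, a standard pinhole camera model lifts the pixel into a 3D point. All the camera parameters below are illustrative assumptions.

```python
import numpy as np

# Illustrative pinhole camera intrinsics (assumed, not from the paper):
# focal lengths in pixels and a principal point at the image center.
fx = fy = 500.0
cx, cy = 320.0, 240.0

def backproject(u, v, depth):
    """Lift a pixel (u, v) with a known depth (in meters) into a 3D point.

    This is the standard pinhole back-projection used when fusing a 2D
    camera image with per-pixel depth, e.g. from a lidar return.
    """
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# A pixel at the image center, 2 m away, sits on the camera's optical axis.
print(backproject(320, 240, 2.0))  # [0. 0. 2.]
```

Doing this for every pixel with a depth value yields the 3D representation the article describes, which the detector then carves into candidate boxes.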

The proposed method, called VoxelNet, does away with the two-dimensional cameras altogether and relies on lidar alone. The results are reported to be "highly encouraging" and appear to outperform the methods already out there.

You can read up on the paper here, but from what I can understand, instead of using a two-dimensional image in conjunction with lidar units, VoxelNet is a learned network layer applied directly to the lidar point cloud, taking in information in real time. Presumably, this layer gets so good at interpreting raw lidar data and detecting objects that it outperforms the traditional methods "by a large margin."
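The "point cloud" part is easier to picture with a small sketch. VoxelNet's first step partitions the lidar point cloud into an equally spaced 3D voxel grid before any learning happens; the snippet below shows only that bucketing step, with made-up grid bounds and voxel sizes. The learned feature-encoding layers, which are the paper's actual contribution, are not reproduced here.

```python
import numpy as np

# Assumed voxel edge lengths and region-of-interest lower corner, in
# meters. These values are illustrative, not the paper's configuration.
VOXEL_SIZE = np.array([0.4, 0.2, 0.2])
RANGE_MIN = np.array([-3.0, -40.0, 0.0])

def voxelize(points):
    """Group lidar points by the 3D grid cell (voxel) each falls into.

    points: (N, 3) array of coordinates in meters.
    Returns a dict mapping voxel index tuples to arrays of member points.
    """
    indices = np.floor((points - RANGE_MIN) / VOXEL_SIZE).astype(int)
    voxels = {}
    for idx, point in zip(map(tuple, indices), points):
        voxels.setdefault(idx, []).append(point)
    return {k: np.array(v) for k, v in voxels.items()}

# Two nearby lidar returns land in the same voxel; a distant one does not.
pts = np.array([[0.10, 0.05, 1.00],
                [0.15, 0.10, 1.05],
                [1.50, 5.00, 20.0]])
print(len(voxelize(pts)))  # 2 occupied voxels
```

In the paper, each occupied voxel's points are then summarized by a learned feature encoding, so the network reasons over a sparse grid of features rather than millions of raw points.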

The NY Times reported that Apple plans to launch an autonomous shuttle service between its buildings sometime soon, and I can imagine the company will use that opportunity as a moving testbed for this new object detection method.

I plan to follow along with new research that pops up and will hopefully report to you on their progress.
