Tim Cook has publicly commented on Apple’s work in autonomous systems before, and a new research paper from two Apple research scientists dives deeper into the company’s efforts. The paper explains how deep learning can be applied directly to LiDAR point clouds for 3D object detection, an approach the authors argue outperforms existing techniques.

The paper is authored by Yin Zhou, an AI researcher at Apple, and Oncel Tuzel, a machine learning research scientist at the company; both joined Apple within the last two years. Below are some broad highlights; read the full paper here.

The paper explains that accurate detection of objects in 3D point clouds is central to applications such as autonomous navigation, housekeeping robots, and more.

It also shows how the approach performs on LiDAR-based car, pedestrian, and cyclist detection benchmarks. Specifically, the paper presents an alternative to the hand-crafted feature representations that LiDAR-based 3D detection has typically relied on:

In this work, we remove the need of manual feature engineering for 3D point clouds and propose VoxelNet, a generic 3D detection network that unifies feature extraction and bounding box prediction into a single stage, end-to-end trainable deep network.
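To make the idea concrete, here is a rough sketch of the voxel feature encoding layer the paper describes: points inside each voxel are run through a shared linear layer, an element-wise max pool summarizes the voxel, and that summary is concatenated back onto every point. The PyTorch framing, layer sizes, and names below are illustrative assumptions, not Apple’s code.

```python
import torch
import torch.nn as nn

class VFELayer(nn.Module):
    """One voxel feature encoding (VFE) layer, sketched after the paper's
    description. Layer sizes here are illustrative."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        # Pointwise output is half of out_dim so that, after concatenating
        # the max-pooled voxel feature, the final width is out_dim.
        self.units = out_dim // 2
        self.linear = nn.Linear(in_dim, self.units)
        self.bn = nn.BatchNorm1d(self.units)

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (num_voxels, max_points_per_voxel, in_dim)
        n_vox, n_pts, _ = points.shape
        x = self.linear(points)  # shared pointwise features
        x = self.bn(x.view(-1, self.units)).view(n_vox, n_pts, self.units)
        x = torch.relu(x)
        # Element-wise max over the points in each voxel -> voxel feature
        voxel_feat = x.max(dim=1, keepdim=True).values
        # Concatenate the voxel-wise summary back onto each point
        return torch.cat([x, voxel_feat.expand(-1, n_pts, -1)], dim=2)

# Example: 100 voxels, up to 35 points each, 7 input features
# (x, y, z, reflectance, plus offsets from the voxel centroid)
vfe = VFELayer(7, 32)
out = vfe(torch.randn(100, 35, 7))
print(out.shape)  # torch.Size([100, 35, 32])
```

In the full network, stacked layers like this feed into 3D convolutions and a region proposal network that predicts the bounding boxes; a real implementation would also mask the zero-padded point slots before the max pool, which is skipped here for brevity.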

Zhou and Tuzel argue that this approach represents the future of 3D object detection: on the KITTI benchmark, VoxelNet outperforms state-of-the-art LiDAR-based methods at detecting cars, cyclists, and pedestrians “by a large margin.”

Our approach can operate directly on sparse 3D points and capture 3D shape information effectively. We also present an efficient implementation of VoxelNet that benefits from point cloud sparsity and parallel processing on a voxel grid.
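The authors haven’t released code, but the efficiency claim is easy to picture: instead of filling a dense 3D grid that is mostly empty, only the occupied voxels get stored and processed. Here is a rough NumPy sketch of that grouping step; the function name, the 35-point cap, and the dictionary-based bookkeeping are illustrative assumptions, not Apple’s implementation.

```python
import numpy as np

def group_points_into_voxels(points, voxel_size, max_points=35):
    """Group a LiDAR point cloud into non-empty voxels only.

    points: (N, 4) array of x, y, z, reflectance.
    Returns (coords, voxels): integer coordinates of the occupied voxels
    and a dense buffer of up to max_points points per occupied voxel."""
    # Integer voxel coordinate for every point
    indices = np.floor(points[:, :3] / voxel_size).astype(np.int32)
    buffers = {}
    for pt, idx in zip(points, map(tuple, indices)):
        buf = buffers.setdefault(idx, [])
        if len(buf) < max_points:  # simple truncation in this sketch
            buf.append(pt)
    coords = np.array(list(buffers.keys()), dtype=np.int32)
    voxels = np.zeros((len(buffers), max_points, points.shape[1]),
                      dtype=points.dtype)
    for i, buf in enumerate(buffers.values()):
        voxels[i, :len(buf)] = buf
    return coords, voxels

# Example: 20,000 random points in a 40 m cube, 0.4 m voxels
pts = (np.random.rand(20000, 4) * [40, 40, 40, 1]).astype(np.float32)
coords, voxels = group_points_into_voxels(pts, voxel_size=0.4)
print(coords.shape, voxels.shape)  # only non-empty voxels are stored
```

The paper randomly subsamples points in over-full voxels rather than truncating, and batches these buffers for parallel processing on the GPU, but the payoff is the same: work scales with the number of occupied voxels rather than the size of the grid.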

The full paper is definitely worth a read and offers a rare insight into Apple’s work on autonomous systems. Check it out here.