Edge-Distributed Fusion of Camera-LiDAR for Robust Moving Object Localization
Peer reviewed, Journal article
Published version
Date: 2023
Original version
Andrade, J. A. N., Dayal, A., Cenkeramaddi, L. R. & Jha, A. (2023). Edge-Distributed Fusion of Camera-LiDAR for Robust Moving Object Localization. IEEE Access, 11, 73583-73598. https://doi.org/10.1109/ACCESS.2023.3295212
Abstract
Object localization plays a crucial role in computational perception, enabling applications ranging from surveillance to autonomous navigation, and it can be improved by fusing data from cameras and LiDARs (Light Detection and Ranging). However, deploying current fusion methods on edge devices while keeping the process flexible and modular remains challenging. This paper presents a method for multiple-object localization that fuses LiDAR and camera data with low latency, flexibility, and scalability. Data are obtained from four cameras providing a 360° surround view and a scanning LiDAR, distributed over embedded devices. The proposed technique: 1) discriminates multiple dynamic objects in the scene from raw point clouds and clusters their respective points to obtain a compact representation in 3D space; and 2) asynchronously fuses the cluster centroids with the outputs of per-camera object detection neural networks for detection, localization, and tracking. The proposed method provides these functionalities with low-latency fusion and an increased field of view for safer navigation, even with an intermittent flow of labels and bounding boxes from the detection models. This makes the system distributed, modular, scalable, and agnostic to the object detection model, distinguishing it from the current state of the art. Finally, the proposed method is implemented and validated both in an indoor environment and on the publicly available outdoor KITTI-360 data set. The fusion is considerably faster and more accurate than a traditional non-data-driven fusion technique, and its latency is competitive with other, non-embedded deep-learning fusion methods. The mean error is estimated to be ≈ 5 cm with a precision of 2 cm for indoor navigation over 15 m (an error of 0.3 %), and ≈ 30 cm with a precision of 3 cm for outdoor navigation over 35 m on the KITTI-360 data set (an error of 0.8 %).
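To make the two stages described in the abstract concrete, the sketch below illustrates the general idea: clustering LiDAR points of moving objects into centroids and associating those centroids with camera bounding boxes by projection. This is not the authors' implementation; the use of DBSCAN, the eps/min_samples values, and the intrinsic matrix K and extrinsic transform T_cam_lidar are illustrative assumptions only.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_dynamic_points(points, eps=0.5, min_samples=10):
    """Cluster 3-D LiDAR points assumed to belong to moving objects and
    return one centroid per cluster (DBSCAN noise points are discarded).
    eps and min_samples are placeholder values, not tuned parameters."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    return [points[labels == k].mean(axis=0) for k in set(labels) if k != -1]

def fuse_with_detections(centroids, boxes, K, T_cam_lidar):
    """Associate each LiDAR centroid with the first camera bounding box
    that contains its image projection; returns (label, centroid) pairs.
    boxes: list of (label, (x1, y1, x2, y2)) from an object detector.
    K: 3x3 camera intrinsics; T_cam_lidar: 4x4 LiDAR-to-camera transform."""
    fused = []
    for c in centroids:
        # Transform the centroid into the camera frame and project it.
        c_cam = (T_cam_lidar @ np.append(c, 1.0))[:3]
        if c_cam[2] <= 0:  # behind this camera, skip
            continue
        u, v, _ = (K @ c_cam) / c_cam[2]
        for label, (x1, y1, x2, y2) in boxes:
            if x1 <= u <= x2 and y1 <= v <= y2:
                fused.append((label, c))
                break
    return fused
```

In the paper's setting this association would run asynchronously per camera, so a centroid can still be localized and tracked even when labels and bounding boxes arrive intermittently from the detection models.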