Show simple item record

dc.contributor.author: Andrade, José Amendola Netto
dc.contributor.author: Dayal, Aveen
dc.contributor.author: Cenkeramaddi, Linga Reddy
dc.contributor.author: Jha, Ajit
dc.date.accessioned: 2024-04-16T12:05:59Z
dc.date.available: 2024-04-16T12:05:59Z
dc.date.created: 2023-08-28T21:11:19Z
dc.date.issued: 2023
dc.identifier.citation: Andrade, J. A. N., Dayal, A., Cenkeramaddi, L. R. & Jha, A. (2023). Edge-Distributed Fusion of Camera-LiDAR for Robust Moving Object Localization. IEEE Access, 11, 73583-73598. [en_US]
dc.identifier.issn: 2169-3536
dc.identifier.uri: https://hdl.handle.net/11250/3126821
dc.description.abstract: Object localization plays a crucial role in computational perception, enabling applications ranging from surveillance to autonomous navigation. It can be strengthened by fusing data from cameras and LiDARs (Light Detection and Ranging). However, current fusion methods are difficult to deploy on edge devices while keeping the process flexible and modular. This paper presents a method for multiple-object localization that fuses LiDAR and camera data with low latency, flexibility, and scalability. Data are obtained from four surround-view cameras covering 360° and a scanning LiDAR, distributed over embedded devices. The proposed technique: 1) discriminates multiple dynamic objects in the scene from raw point clouds and clusters their respective points to obtain a compact representation in 3D space; and 2) asynchronously fuses the cluster centroids with data from per-camera object detection neural networks for detection, localization, and tracking (a simplified sketch of this centroid-to-bounding-box association appears after the metadata listing below). The proposed method delivers these capabilities with low-latency fusion and an increased field of view for safer navigation, even with an intermittent flow of labels and bounding boxes from the detection models. This makes our system distributed, modular, scalable, and agnostic to the object detection model, distinguishing it from the current state of the art. Finally, the proposed method is implemented and validated both in an indoor environment and on the publicly available outdoor KITTI 360 data set. The fusion is substantially faster and more accurate than a traditional non-data-driven fusion technique, and its latency is competitive with other non-embedded deep-learning fusion methods. The mean error is estimated at ≈ 5 cm with a precision of 2 cm for indoor navigation over 15 m (error percentage of 0.3 %), and at 30 cm with a precision of 3 cm for outdoor navigation over 35 m on the KITTI 360 data set (error percentage of 0.8 %). [en_US]
dc.language.iso: eng [en_US]
dc.publisher: IEEE [en_US]
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/deed.no
dc.title: Edge-Distributed Fusion of Camera-LiDAR for Robust Moving Object Localization [en_US]
dc.title.alternative: Edge-Distributed Fusion of Camera-LiDAR for Robust Moving Object Localization [en_US]
dc.type: Peer reviewed [en_US]
dc.type: Journal article [en_US]
dc.description.version: publishedVersion [en_US]
dc.rights.holder: © 2023 The Author(s) [en_US]
dc.subject.nsi: VDP::Teknologi: 500 [en_US]
dc.source.pagenumber: 73583-73598 [en_US]
dc.source.volume: 11 [en_US]
dc.source.journal: IEEE Access [en_US]
dc.identifier.doi: https://doi.org/10.1109/ACCESS.2023.3295212
dc.identifier.cristin: 2170385
dc.relation.project: Norges forskningsråd: 287918 [en_US]
cristin.qualitycode: 1
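
The abstract's core fusion step, associating 3D cluster centroids from the LiDAR with 2D bounding boxes produced independently by per-camera detectors, can be illustrated with a minimal sketch. The code below is a hypothetical simplification, not the authors' implementation: it assumes a pinhole camera model, a known LiDAR-to-camera extrinsic transform `T_cam_lidar`, and a known intrinsic matrix `K`, projects each centroid into the image plane, and assigns it the label of the first bounding box that contains the projection.

```python
import numpy as np

def project_centroid(centroid_xyz, T_cam_lidar, K):
    """Project a 3D LiDAR-frame centroid into camera pixel coordinates.

    centroid_xyz : (3,) point in the LiDAR frame.
    T_cam_lidar  : (4, 4) extrinsic transform from LiDAR to camera frame (assumed known).
    K            : (3, 3) camera intrinsic matrix (assumed known).
    Returns (u, v) pixel coordinates, or None if the point is behind the camera.
    """
    p_lidar = np.append(centroid_xyz, 1.0)   # homogeneous coordinates
    p_cam = T_cam_lidar @ p_lidar            # point expressed in the camera frame
    if p_cam[2] <= 0:                        # behind the image plane: not visible
        return None
    uvw = K @ p_cam[:3]
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

def associate(centroids, boxes, T_cam_lidar, K):
    """Match each 3D centroid to the first 2D bounding box containing its projection.

    boxes : list of dicts with hypothetical keys 'label' and 'xyxy' = (x1, y1, x2, y2).
    Returns (centroid, label-or-None) pairs; unmatched centroids keep their 3D
    position, so localization can continue even when detections are intermittent.
    """
    matches = []
    for c in centroids:
        uv = project_centroid(c, T_cam_lidar, K)
        label = None
        if uv is not None:
            u, v = uv
            for box in boxes:
                x1, y1, x2, y2 = box["xyxy"]
                if x1 <= u <= x2 and y1 <= v <= y2:
                    label = box["label"]
                    break
        matches.append((c, label))
    return matches
```

In such a scheme, the geometric projection runs every LiDAR frame, while labels from the camera detectors are attached whenever they arrive, which is one way to keep the fusion asynchronous and independent of the particular detection model.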

