Camera-LiDAR Data Fusion for Autonomous Mooring Operation
Conference paper, peer reviewed
Accepted version
Permanent link: https://hdl.handle.net/11250/3022338
Publication date: 2020
Original version: Subedi, D., Jha, A., Tyapin, I. & Hovland, G. (2020). Camera-LiDAR Data Fusion for Autonomous Mooring Operation. 2020 15th IEEE Conference on Industrial Electronics and Applications (ICIEA) (pp. 1176-1181). IEEE. https://doi.org/10.1109/ICIEA48937.2020.9248089

Abstract
The use of camera and LiDAR sensors to sense the environment has gained increasing popularity in robotics. Individual sensors, such as cameras or LiDARs alone, fail to meet the growing challenges of complex autonomous systems. One such scenario is autonomous mooring, where the ship has to be tied to a fixed rigid structure (a bollard) to keep it safely stationary. The detection and pose estimation of the bollard based on fused camera and LiDAR data are presented here. Firstly, a single-shot extrinsic calibration of the LiDAR with the camera is presented. Secondly, a camera-LiDAR data fusion method using the camera intrinsic parameters and the camera-to-LiDAR extrinsic parameters is proposed. Finally, an image-based segmentation method for segmenting the corresponding point cloud from the fused camera-LiDAR data is developed and tailored to its application in the autonomous mooring operation.
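The fusion step described above can be sketched in a minimal form: a 3-D LiDAR point is mapped into the camera frame with the extrinsic parameters (rotation R, translation t) and projected onto the image plane with the intrinsic matrix K, which is what lets an image-based segmentation mask select the corresponding point-cloud points. This is a generic pinhole-projection illustration, not the paper's implementation; all numeric values for K, R, and t are invented for the example.

```python
import numpy as np

# Assumed intrinsic matrix K (focal lengths and principal point are made up).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Assumed LiDAR-to-camera extrinsics: identity rotation and a small
# translation, purely for illustration.
R = np.eye(3)
t = np.array([0.1, 0.0, 0.0])

def project_lidar_point(p_lidar):
    """Project a 3-D LiDAR point to pixel coordinates (u, v)."""
    p_cam = R @ p_lidar + t      # extrinsic transform into the camera frame
    uvw = K @ p_cam              # apply the intrinsic (pinhole) model
    return uvw[:2] / uvw[2]      # perspective division

# A LiDAR point 4 m in front of the sensor maps to a pixel location:
uv = project_lidar_point(np.array([0.5, 0.2, 4.0]))  # → array([440., 280.])
```

A segmentation mask over the image can then be applied to all projected points: every LiDAR point whose (u, v) falls inside the bollard mask is kept, yielding the segmented point cloud used for pose estimation.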