Most LiDAR odometry and SLAM systems construct maps as point clouds, which are discrete and sparse when viewed up close, making them unsuitable for direct use in navigation. Mesh maps offer a continuous and dense map representation with low memory consumption that can approximate complex structures with simple elements, and they have attracted significant attention from researchers in recent years. However, most implementations operate under a static-environment assumption; in practice, moving objects cause ghosting that can degrade meshing quality. To address these issues, we propose a plug-and-play meshing module that adapts to dynamic environments and can be easily integrated with various LiDAR odometry systems to improve their pose estimation accuracy. Within the meshing module, a novel two-stage coarse-to-fine dynamic removal method effectively filters dynamic objects, producing consistent, accurate, and dense mesh maps. Additionally, to benefit the Gaussian process used in mesh construction, sliding-window-based keyframe aggregation and adaptive downsampling strategies ensure the uniformity of the point cloud. We evaluate localization and mapping accuracy on five publicly available datasets. Both qualitative and quantitative results demonstrate the superiority of our method over state-of-the-art algorithms. The code and an introduction video are publicly available at https://yaepiii.github.io/CAD-Mesher.github.io/.
As the mapping module in a SLAM system, our pipeline takes as input the raw points in the LiDAR coordinate system at the current time, together with the pose transformation from the LiDAR coordinate system to the global coordinate system estimated by the odometry. A keyframe, selected by the proposed adaptive selection mechanism, is added to the database after visibility-based coarse dynamic removal. The keyframes within the sliding window are then aggregated, transformed into the world coordinate system W, and uniformly sampled by the adaptive downsampling strategy to improve system efficiency. A continuity test removes outliers and noise. The remaining points are partitioned into voxels, on which GP-based meshing is performed. In the optimization component, the pose estimated by the odometry serves as a prior for point-to-mesh registration, which aligns the current scan to the global map and outputs a refined pose. Finally, after fine dynamic removal with the voxel-based probabilistic method, the current mesh is fused into the global mesh map for publication.
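The aggregation and downsampling steps above can be sketched as follows. This is a minimal illustration: the function names are ours, and a fixed voxel size is used for simplicity, whereas the proposed adaptive strategy would vary the resolution with local point density.

```python
import numpy as np

def transform_to_world(points_lidar, T_wl):
    """Transform Nx3 LiDAR-frame points into the world frame W
    using a 4x4 homogeneous pose T_wl (rotation R, translation t)."""
    R, t = T_wl[:3, :3], T_wl[:3, 3]
    return points_lidar @ R.T + t

def voxel_downsample(points, voxel_size):
    """Keep one representative point (the first hit) per voxel so the
    aggregated cloud becomes roughly uniform before GP-based meshing.
    A fixed voxel_size stands in for the paper's adaptive choice."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]
```

In an actual pipeline these would run on the keyframes aggregated over the sliding window, with the odometry pose supplying `T_wl` for each keyframe.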
Our method achieves the highest reconstruction precision on both datasets, although its recall is not as high as that of SHINE-Mapping. We attribute this discrepancy to the coarse dynamic removal method inadvertently discarding some static points, and to the proposed consistency test filtering out slender poles along with noise. Nevertheless, our method achieves the highest F1-score, demonstrating its overall effectiveness. It is also worth noting that SHINE-Mapping is a deep learning-based offline post-processing approach that requires extensive training and cannot operate in real time.
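For context, the F1-score is the harmonic mean of precision and recall, which explains why the highest precision can outweigh a moderate recall deficit. The numbers below are purely illustrative, not results from the paper.

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall: rewards balance,
    so a large lead in one metric can offset a small deficit
    in the other."""
    return 2 * precision * recall / (precision + recall)

# Illustrative only: a high-precision method (0.95, 0.82) beats a
# higher-recall but lower-precision one (0.80, 0.90) on F1.
f1_a = f1_score(0.95, 0.82)
f1_b = f1_score(0.80, 0.90)
```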
In KITTI07, moving vehicles at intersections leave obvious ghosting in the maps of the other mesh baselines, which impedes subsequent navigation applications. Although VDBFusion mitigates dynamic effects using a space-carving technique, rough ground and residual ghosting still remain. In contrast, our CAD-Mesher effectively filters dynamic objects and preserves map consistency through the proposed two-stage coarse-to-fine dynamic removal strategy. However, a few dynamic remnants are still observed in the map, likely related to the chosen resolution and voxel size.
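As a rough sketch of how voxel-based probabilistic filtering of this kind typically works, the snippet below uses a generic log-odds occupancy update. This is not the paper's exact update rule; the class, parameters, and thresholds are illustrative assumptions.

```python
import math

class VoxelOccupancy:
    """Generic log-odds occupancy grid sketch: voxels observed
    occupied across scans accumulate positive evidence (static),
    while free-space observations drive them negative (dynamic)."""

    def __init__(self, p_hit=0.7, p_miss=0.4, p_thresh=0.5):
        self.l_hit = math.log(p_hit / (1 - p_hit))      # positive increment
        self.l_miss = math.log(p_miss / (1 - p_miss))   # negative increment
        self.l_thresh = math.log(p_thresh / (1 - p_thresh))
        self.log_odds = {}  # voxel key -> accumulated log-odds

    def update(self, key, occupied):
        """Fold one observation of a voxel into its log-odds."""
        delta = self.l_hit if occupied else self.l_miss
        self.log_odds[key] = self.log_odds.get(key, 0.0) + delta

    def is_static(self, key):
        """Points in voxels above the threshold are kept as static;
        the rest would be culled before fusing into the global mesh."""
        return self.log_odds.get(key, 0.0) > self.l_thresh
```

A coarse pass (e.g. visibility-based) can cheaply reject most moving points per scan, while an accumulator like this handles the finer, multi-scan decisions, which matches the two-stage structure described above.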
In the GroundRobot01 sequence, the sparse-channel LiDAR challenges both registration accuracy and meshing quality. Due to the sparsity of the point cloud, both VDBFusion and SLAMesh produce many holes in the ground, compromising the continuity of the mesh map. Although SHINE-Mapping mitigates the impact of sparsity and generates a dense map, it exhibits stratification in the green box owing to odometry drift. In contrast, our method provides more accurate poses for meshing by refining the odometry's pose estimates, thereby ensuring meshing consistency.
Our CAD-Mesher mapping module can seamlessly integrate with various LiDAR odometry systems to further improve localization accuracy. Additionally, the integrated system can effectively cope with highly dynamic scenes and sparse-channel LiDAR data.