Depth Based Pruning

This addresses the case where a scene undergoes a low-dynamic change between two time instances (object removal). If we map the scene individually with ORB-SLAM3 at each instance, the points corresponding to the object are present in the first map and absent in the second.
If we move through scene 2 but re-use the map from scene 1, the final map generated (after scene 2) still contains the points on the shelf from scene 1. These points remain in the map even though they are absent from the actual scene.
This is our strategy for selecting absent points. Keypoints from the map are tested against the current depth map: the depth expected at the projected pixel according to the stored map (computed geometrically) is compared with the actual depth-map value at that location, and based on the conditions above the point is marked as absent or not. At the end of the sequence, we remove all points marked as absent.
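
Below is a minimal sketch of this check, assuming access to the stored map points, the current camera pose, the camera intrinsics, and a metric depth map registered to the current frame. The struct MapPointObs, the function TestAgainstDepthMap, and the margin kDepthMargin are illustrative names rather than ORB-SLAM3 API, and the exact marking conditions referenced above are approximated here by a simple depth margin.

```cpp
#include <opencv2/core.hpp>
#include <vector>

// Illustrative bookkeeping for one map point (not an ORB-SLAM3 type).
struct MapPointObs {
    cv::Point3f posWorld;   // 3D position stored in the map
    int absentCount = 0;    // frames that voted "absent"
    int visibleCount = 0;   // frames in which the point was checked
};

// Compare the depth predicted by the stored map against the measured depth map.
void TestAgainstDepthMap(std::vector<MapPointObs>& points,
                         const cv::Matx33f& Rcw, const cv::Vec3f& tcw,   // world-to-camera pose
                         const cv::Matx33f& K,                           // pinhole intrinsics
                         const cv::Mat& depthMap /* CV_32F, metres */)
{
    const float kDepthMargin = 0.10f;   // assumed tolerance on the depth comparison

    for (auto& mp : points) {
        // Predicted depth: transform the stored point into the current camera frame.
        cv::Vec3f pc = Rcw * cv::Vec3f(mp.posWorld) + tcw;
        if (pc[2] <= 0.f) continue;                    // behind the camera

        // Project to pixel coordinates.
        float u = K(0, 0) * pc[0] / pc[2] + K(0, 2);
        float v = K(1, 1) * pc[1] / pc[2] + K(1, 2);
        if (u < 0 || v < 0 || u >= depthMap.cols || v >= depthMap.rows) continue;

        float measured = depthMap.at<float>(cvRound(v), cvRound(u));
        if (measured <= 0.f) continue;                 // invalid depth reading

        ++mp.visibleCount;
        // If the sensor sees a surface clearly behind the stored point, the
        // point should have occluded it but did not -> vote "absent".
        if (measured > pc[2] + kDepthMargin)
            ++mp.absentCount;
    }
}
```

At the end of the sequence, a point whose absent votes dominate its visible votes would be the one deleted from the map.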

Results

This is an example where we perform SLAM in the second scene after localizing against a map generated from the first scene. You will observe that large chunks of the map turn green; these correspond to objects that were present in scene 1 but are absent in scene 2.

Full Videos

Both of the following videos run SLAM on the same three sequences; the first uses depth-based pruning, the second does not. You will see that many points are marked green: these are the points that will be deleted at the end of the run. The quantitative analysis is provided below.

Quantitative Results

These are the results for the two sets of three sequences. We find that depth-based pruning is considerably more scalable, because the number of map points grows more slowly over time. The trajectory accuracy (ATE) also does not degrade by a significant margin.
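
For reference, here is a minimal sketch of how an ATE (absolute trajectory error) RMSE can be computed, assuming the estimated and ground-truth trajectories have already been time-associated and rigidly aligned (the alignment step, e.g. Umeyama, is omitted). The function name is illustrative and this is not necessarily the exact evaluation script used for the table above.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>
#include <opencv2/core.hpp>

// RMSE of the translational error between two aligned, time-associated trajectories.
float ComputeAteRmse(const std::vector<cv::Vec3f>& estimated,
                     const std::vector<cv::Vec3f>& groundTruth)
{
    const size_t n = std::min(estimated.size(), groundTruth.size());
    double sumSq = 0.0;
    for (size_t i = 0; i < n; ++i) {
        cv::Vec3f e = estimated[i] - groundTruth[i];   // translational error at pose i
        sumSq += e.dot(e);
    }
    return n ? static_cast<float>(std::sqrt(sumSq / n)) : 0.f;
}
```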