This MSCV Capstone Project is sponsored by Amazon Lab126 and advised by Prof. Kaess.
Typical visual SLAM (Simultaneous Localization and Mapping) systems track geometric features (points, lines) in the environment to derive the camera trajectory; however, these features are usually not semantically meaningful. In this project, we investigate the use of semantic features that coincide with objects or are derived from other forms of scene understanding. Understanding features allows trajectory estimation to focus on structural elements that can be assumed to be static (doorways, sinks, stoves, etc.) while separately tracking objects that might move over time (doors, chairs). One challenge is that the approach should also work in environments containing objects that have not previously been observed or learned.
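As a minimal sketch of the idea, the snippet below partitions semantically labeled landmarks into structural anchors (used for trajectory estimation), movable objects (tracked separately), and unrecognized objects (handled explicitly, since unseen classes must not break the pipeline). The class lists, the `Feature` type, and `partition_features` are all illustrative assumptions, not the project's actual interface.

```python
from dataclasses import dataclass

# Assumed class lists: which semantic labels we treat as static vs movable.
STATIC_CLASSES = {"doorway", "sink", "stove", "wall"}   # assumed static
DYNAMIC_CLASSES = {"door", "chair"}                     # may move over time

@dataclass
class Feature:
    id: int
    label: str        # semantic class from a detector; arbitrary if unrecognized
    position: tuple   # 3D landmark estimate (x, y, z)

def partition_features(features):
    """Split features into pose-estimation anchors, separately tracked
    movable objects, and previously unseen classes.

    Routing unknown labels to a third group lets the system handle objects
    that were never observed or learned, instead of silently assuming they
    are static or dynamic.
    """
    static, dynamic, unknown = [], [], []
    for f in features:
        if f.label in STATIC_CLASSES:
            static.append(f)
        elif f.label in DYNAMIC_CLASSES:
            dynamic.append(f)
        else:
            unknown.append(f)
    return static, dynamic, unknown

feats = [
    Feature(0, "doorway", (1.0, 0.2, 2.5)),
    Feature(1, "chair",   (0.5, 0.0, 1.8)),
    Feature(2, "plant",   (2.2, 1.1, 3.0)),   # class not in either list
]
anchors, movers, unseen = partition_features(feats)
```

Only `anchors` would feed the pose-estimation backend; `movers` and `unseen` would be tracked as dynamic or candidate landmarks, respectively.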