Doctoral student position at the division of Robotics, Perception and Learning (RPL) at KTH Royal Institute of Technology.
In this project we study mapping for robotics. The vision is a 3D grid representation that is maintained in real-time and capable of modeling a dynamic scene, including moving objects. We will extend our work on UFOMap, which already provides an efficient way to manage data such as occupancy and semantics. Questions to address include: How should dynamic information be modeled, and how can measurements of it from various sources (optical flow, object trackers, etc.) be incorporated? How can predictions of future states be enabled? How can we model the strong correlation between voxels, which is ignored in the standard grid formulation?