The Monado SLAM Dataset for Egocentric Visual-Inertial Tracking
Mateo de Mayo, Daniel Cremers, Taihú Pire
Humanoid robots and mixed reality headsets benefit from the use of head-mounted sensors for tracking. While advancements in visual-inertial odometry (VIO) and simultaneous localization and mapping (SLAM) have produced new and high-quality state-of-the-art tracking systems, we show that these are still unable to gracefully handle many of the challenging settings presented in the head-mounted use cases. Common scenarios like high-intensity motions, dynamic occlusions, long tracking sessions, low-textured areas, adverse lighting conditions, and sensor saturations remain underrepresented in existing datasets in the literature. In this way, systems may inadvertently overlook these crucial real-world issues. To address this, we present the Monado SLAM dataset, a set of real sequences taken from multiple virtual reality headsets. We release the dataset under the permissive CC BY 4.0 license to advance VIO/SLAM research and development.