| field | value |
|---|---|
| title | Real Time Monocular Visual SLAM |
| old_uid | 16084 |
| start_date | 2018/06/20 |
| schedule | 10h |
| online | no |
| location_info | meeting room 304 |
| summary | The SLAM (Simultaneous Localization And Mapping) paradigm endows a system with the capacity to build its own maps using only its onboard sensors, while the same sensor readings are simultaneously used to locate the system within the map it has built. When video cameras are the only sensors, we speak of Visual SLAM (VSLAM). VSLAM systems have proven crucial for robots to achieve autonomous operation and for accurately estimating the user's 3D position in AR (Augmented Reality); they are therefore also relevant to computerizing MIS (Minimally Invasive Surgery) procedures. The talk covers the challenges of visual SLAM and the methods to overcome them, through a concrete Visual SLAM system: ORB-SLAM. ORB-SLAM is a paradigmatic visual SLAM system that achieves top performance on the most popular benchmarks and whose source code is publicly available under the GPLv3 licence. It computes, in real time, the camera trajectory and a sparse 3D reconstruction of the scene in a wide variety of environments, using commodity cameras and computers. The extension of ORB-SLAM to intracorporeal endoscopy will also be considered. |
| responsibles | Baumard |