Robot Vision for the Visually Impaired

old_uid: 9465
title: Robot Vision for the Visually Impaired
start_date: 2010/12/17
schedule: 10h30
online: no
location_info: salle bleue 2
summary: Vision is one of the primary sensory modalities for humans, assisting in several life-sustaining and life-enhancing tasks, including actions such as obstacle avoidance and path planning that are necessary for independent locomotion. Visual impairment has a debilitating impact on such independence, and the visually impaired are often forced to restrict their movements to familiar locations or to employ assistive devices such as the white cane. More recently, various electronic travel aids have been proposed that incorporate electronic sensor configurations and the mechanism of sensory substitution to provide relevant information - such as obstacle locations and body position - via audio or tactile cues. By providing higher information bandwidth and greater range than the white cane, it is hypothesized that the independent mobility performance of the visually impaired can be improved. The challenge is to extract and deliver this information in a manner that keeps cognitive load at a level suitable for a human user to interpret in real time. We present a novel mobility aid for the visually impaired that consists of only a pair of cameras as input sensors and a tactile vest to deliver navigation cues. By adopting a head-mounted camera design, the system creates an implicit interaction scheme in which scene interpretation is performed in a context-driven manner, based on the head rotations and body movements of the user. Novel computer vision algorithms are designed and implemented to build a rich 3D map of the environment, estimate the current position and motion of the user, and detect obstacles in the vicinity. A multi-threaded, factored simultaneous localization and mapping (SLAM) framework ties the different software modules together so that the scene is interpreted accurately and in real time.
The system always maintains a safe path for traversal through the current map; tactile cues are generated to keep the person on this path and are delivered only when deviations are detected. With this strategy, the end user only needs to focus on making incremental adjustments to the direction of travel. We also present one of the very few computer-vision-based mobility aids that have been tested with visually impaired subjects. Standard techniques employed in the assessment of mobility for people with vision loss were used to quantify performance through an obstacle course. Experimental evidence demonstrates that the number of contacts with objects in the path is reduced with the proposed system. Qualitatively, subjects using the device also follow safer paths than white cane users in terms of proximity to obstacles.
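The deviation-triggered guidance strategy described in the abstract can be sketched as follows. This is a minimal illustration only: the function name, coordinate conventions, and deviation threshold are assumptions for the sake of the example, not details of the actual system.

```python
import math

def tactile_cue(position, heading, waypoint, deviation_threshold=0.5):
    """Return a 'left'/'right' cue when the user drifts off the safe path,
    or None while they remain within the threshold (no cue delivered).

    position, waypoint: (x, y) coordinates in metres; heading: radians.
    Illustrative sketch only; names and threshold are not from the talk.
    """
    dx = waypoint[0] - position[0]
    dy = waypoint[1] - position[1]
    bearing = math.atan2(dy, dx)
    # Signed angular error between current heading and bearing to waypoint,
    # wrapped into (-pi, pi]
    error = math.atan2(math.sin(bearing - heading), math.cos(bearing - heading))
    if abs(error) < deviation_threshold:
        return None  # on course: stay silent to keep cognitive load low
    return "left" if error > 0 else "right"
```

Delivering a cue only when the error exceeds the threshold matches the stated design goal: the user makes small, incremental corrections rather than continuously interpreting a dense signal.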
responsibles: Ponce