Deep Cross-Modal Learning for Object Grasp and Manipulation in Robotics

title: Deep Cross-Modal Learning for Object Grasp and Manipulation in Robotics
start_date: 2023/03/03
schedule: 11h
online: no
location_info: Albe-Fessard conference room
details: invited by Daniel Shulz
summary: For robots to execute tasks in unstructured environments, visuo-tactile perception plays a key role. Vision-based technologies have become essential for effective scene analysis, path planning, and observing the behavior of humans in the robot workspace. However, vision alone is often not enough to give robotic systems sufficient perception capabilities in unstructured environments, due to variable lighting conditions, occlusions in cluttered scenes, and the need for contact information between the robot and the environment. Tactile perception is of fundamental importance for robots that physically interact with the external environment, and wisely leveraging tactile information provides robots with enhanced perceptive capabilities. For these reasons, interactive tactile perception is becoming an important research direction to support visual perception. Even though tactile and visual perception have each gained a great deal of interest, the fields of active visuo-tactile interactive perception and cross-modal learning have not been extensively explored in robotics. A robotic system with active visuo-tactile perception and cross-modal learning capabilities can leverage a priori knowledge acquired with one modality and efficiently use it with the other at execution time. In this talk, I will present our recently developed, full-fledged active visuo-tactile perception and deep cross-modal learning framework, which enables robotic systems to efficiently localize cluttered objects in an unknown workspace and to recognize objects previously inspected with one modality (e.g., vision) via the tactile modality.
responsibles: Perignon