2023-07-18, 16:15-17:00 | TCIS Lecture Room 12 (A7 3F)
Speaker |
Robot active perception and motion generation for deformable object manipulation using multimodal deep learning

Robots that work alongside people and support daily tasks are increasingly in demand. We aim to realize an AI robot that autonomously learns, adapts to its environment, evolves in intelligence, and acts alongside human beings. Such a robot needs to recognize the demanded goal, perceive the current situation, and generate motions by itself. In cooking scenarios in particular, humans daily handle ingredients with complicated and fragile characteristics, which change and deform continuously under heat and force. We tackle this challenge and realize a robot that can perceive target object characteristics in real time and generate motions accordingly, which previous research has not achieved. We focus on active perception using multimodal sensorimotor data gathered while the robot interacts with objects, allowing it to recognize their extrinsic and intrinsic characteristics. We construct a deep neural network (DNN) model that learns to recognize object characteristics, acquires object–action relations, and generates motions. As an example, the robot performs an ingredient-transfer task, using a turner or a ladle to move an ingredient from a pot to a bowl. The results confirm that the robot recognizes object characteristics and serving amounts even when the target ingredients are unknown. We also examine the contributions of image, force, and tactile data, and show that learning from a variety of multimodal information results in rich perception.
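For readers curious how such a model might be wired up, the following is a minimal PyTorch sketch of a multimodal sensorimotor network in the spirit of the abstract: separate encoders for the image, force, and tactile streams feed a recurrent core that predicts the next motion command. All class names, sensor dimensions, and layer sizes here are illustrative assumptions, not the architecture presented in the talk.

```python
# Minimal sketch (assumed, not the speaker's model): image, force, and
# tactile inputs are encoded, fused with the current motor state, and
# passed through an LSTM that predicts the next motor command.
import torch
import torch.nn as nn

class MultimodalSensorimotorNet(nn.Module):
    def __init__(self, force_dim=6, tactile_dim=16, motor_dim=7, hidden=128):
        super().__init__()
        # Convolutional encoder for camera frames (assumed 3x64x64 RGB).
        self.image_enc = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64),
        )
        # Small linear encoders for force/torque and tactile readings.
        self.force_enc = nn.Linear(force_dim, 16)
        self.tactile_enc = nn.Linear(tactile_dim, 16)
        # Recurrent core integrates fused features over the interaction.
        self.rnn = nn.LSTM(64 + 16 + 16 + motor_dim, hidden, batch_first=True)
        # Head predicts the next motor command (e.g. joint angles).
        self.motor_head = nn.Linear(hidden, motor_dim)

    def forward(self, images, force, tactile, motors):
        # images: (B, T, 3, 64, 64); force: (B, T, force_dim);
        # tactile: (B, T, tactile_dim); motors: (B, T, motor_dim)
        B, T = images.shape[:2]
        img_feat = self.image_enc(images.flatten(0, 1)).view(B, T, -1)
        fused = torch.cat(
            [img_feat, self.force_enc(force), self.tactile_enc(tactile), motors],
            dim=-1,
        )
        out, _ = self.rnn(fused)
        return self.motor_head(out)  # predicted next motor commands

# Example usage: train toward a demonstrated trajectory by predicting
# each step's successor motor command from recorded sensor sequences.
model = MultimodalSensorimotorNet()
images = torch.randn(2, 10, 3, 64, 64)  # (batch, time, C, H, W)
force = torch.randn(2, 10, 6)
tactile = torch.randn(2, 10, 16)
motors = torch.randn(2, 10, 7)
pred = model(images, force, tactile, motors)
loss = nn.functional.mse_loss(pred[:, :-1], motors[:, 1:])
```

In a setup like this, the recurrent state carries what the robot has perceived so far about the object, so properties revealed only through interaction (stiffness, weight, deformability) can shape the generated motion.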