Multimodal estimation and communication of latent semantic knowledge for robust execution of robot instructions. (September 2020)
- Record Type:
- Journal Article
- Title:
- Multimodal estimation and communication of latent semantic knowledge for robust execution of robot instructions. (September 2020)
- Main Title:
- Multimodal estimation and communication of latent semantic knowledge for robust execution of robot instructions
- Authors:
- Arkin, Jacob
Park, Daehyung
Roy, Subhro
Walter, Matthew R
Roy, Nicholas
Howard, Thomas M
Paul, Rohan
- Abstract:
- The goal of this article is to enable robots to perform robust task execution following human instructions in partially observable environments. A robot's ability to interpret and execute commands is fundamentally tied to its semantic world knowledge. Commonly, robots use exteroceptive sensors, such as cameras or LiDAR, to detect entities in the workspace and infer their visual properties and spatial relationships. However, semantic world properties are often visually imperceptible. We posit the use of non-exteroceptive modalities including physical proprioception, factual descriptions, and domain knowledge as mechanisms for inferring semantic properties of objects. We introduce a probabilistic model that fuses linguistic knowledge with visual and haptic observations into a cumulative belief over latent world attributes to infer the meaning of instructions and execute the instructed tasks in a manner robust to erroneous, noisy, or contradictory evidence. In addition, we provide a method that allows the robot to communicate knowledge dissonance back to the human as a means of correcting errors in the operator's world model. Finally, we propose an efficient framework that anticipates possible linguistic interactions and infers the associated groundings for the current world state, thereby bootstrapping both language understanding and generation. We present experiments on manipulators for tasks that require inference over partially observed semantic properties, and evaluate our framework's ability to exploit expressed information and knowledge bases to facilitate convergence, and generate statements to correct declared facts that were observed to be inconsistent with the robot's estimate of object properties.
- Is Part Of:
- International journal of robotics research. Volume 39: Number 10/11 (2020)
- Journal:
- International journal of robotics research
- Issue:
- Volume 39: Number 10/11 (2020)
- Issue Display:
- Volume 39, Issue 10, Part 11 (2020)
- Year:
- 2020
- Volume:
- 39
- Issue:
- 10
- Part:
- 11
- Issue Sort Value:
- 2020-0039-0010-0011
- Page Start:
- 1279
- Page End:
- 1304
- Publication Date:
- 2020-09
- Subjects:
- Human–robot collaboration -- semantic state estimation -- Bayesian modeling -- multimodal interaction -- natural language understanding
Robots -- Periodicals
Robots, Industrial -- Periodicals
629.89205
- Journal URLs:
- http://ijr.sagepub.com/
http://www.uk.sagepub.com/home.nav
- DOI:
- 10.1177/0278364920917755
- Languages:
- English
- ISSNs:
- 0278-3649
- Deposit Type:
- Legal deposit
- View Content:
- Available online (eLD content is only available in our Reading Rooms)
- Physical Locations:
- British Library DSC - BLDSS-3PM
British Library HMNTS - ELD Digital store
- Ingest File:
- 14031.xml