Symbolic reasoning and learning of spatial relationships for robotic comprehension
Abstract
Given visual information about objects in a scene, humans can readily reason about the spatial relationships among those objects. Achieving a similar capability with robots remains a considerable challenge, despite significant advances in robotic technology. Comprehension of the spatial relationships of objects in the workspace plays an important role in Human-Robot Interaction (HRI). This paper proposes a method that enables robots to comprehend spatial relationships among objects in shared environments using visual information. A neural-symbolic learning framework is introduced for this purpose, integrating the advantages of both numerical learning and logical inference. Spatial relationships between objects are represented in the form of logic rules such as ∀a,b: λ(a,b) → ¬λ(b,a), which states that the relation λ is asymmetric: if a is, for example, to the left of b, then b cannot be to the left of a. Logic rules describing spatial relationships are mapped to a numerical data space in the form of feature vectors, and RGB-D data is fed into the numerical model to learn spatial rules for robot reasoning. An embedded RGB-D sensor collects aligned depth images for numerical learning. In the figure below, RGB-D data of objects in the scene is captured by the Zivid sensor, recognized objects are assigned labels, and the trained neural-symbolic framework is then used to reason about their spatial relationships. Robot comprehension of spatial relationships is evaluated both in simulation and through execution on the Sawyer robot and the AR10 robotic hand.
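To make the rule concrete, the following is a minimal sketch in Python (not the thesis implementation) of how a spatial predicate λ(a, b) might be grounded in object centroids extracted from RGB-D data, how an object pair maps to a feature vector for numerical learning, and how the asymmetry rule ∀a,b: λ(a,b) → ¬λ(b,a) can be checked over a labeled scene. The object names, centroid values, and the left_of predicate are illustrative assumptions.

    from itertools import permutations

    # Hypothetical object centroids (x, y, z) in the camera frame, e.g.
    # averaged per detected object from the Zivid RGB-D point cloud.
    centroids = {
        "cup":    (0.10, 0.30, 0.55),
        "box":    (0.35, 0.28, 0.60),
        "bottle": (0.60, 0.31, 0.58),
    }

    def left_of(a, b, margin=0.05):
        """lambda(a, b): object a lies left of object b along the x-axis,
        beyond a small margin to avoid labeling near-ties."""
        return centroids[a][0] + margin < centroids[b][0]

    def pair_features(a, b):
        """Map the ordered pair (a, b) to a feature vector (here, the
        centroid difference) that a numerical model could learn from."""
        ax, ay, az = centroids[a]
        bx, by, bz = centroids[b]
        return (ax - bx, ay - by, az - bz)

    # Check the asymmetry rule on every ordered pair of objects.
    for a, b in permutations(centroids, 2):
        if left_of(a, b):
            assert not left_of(b, a), f"asymmetry violated for ({a}, {b})"
            print(f"left_of({a}, {b}) holds; features = {pair_features(a, b)}")

In the actual framework, such predicates would be learned from labeled RGB-D scenes rather than hard-coded, with the logic rules acting as constraints on the numerical model's outputs.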
Description
Research completed in the Department of Electrical Engineering and Computer Science, College of Engineering
Series
v. 15

