Comprehension of spatial constraints by neural logic learning from a single RGB-D scan

Authors
He, Hongsheng
Yan, Fujian
Wang, Dali
Issue Date
2021-09-27
Type
Conference paper
Keywords
Spatial constraints, Neural-logic learning, Logic rules
Citation
F. Yan, D. Wang and H. He, "Comprehension of Spatial Constraints by Neural Logic Learning from a Single RGB-D Scan," 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021, pp. 9008-9013, doi: 10.1109/IROS51168.2021.9635939.
Abstract

Autonomous industrial assembly relies on the precise measurement of spatial constraints as designed in computer-aided design (CAD) software such as SolidWorks. This paper proposes a framework that enables an intelligent industrial robot to understand the spatial constraints required for model assembly. An extended generative adversarial network (GAN) with a 3D long short-term memory (LSTM) network was designed to composite 3D point clouds from a single RGB-D scan. The spatial constraints of the segmented point clouds are identified by a neural-logic network that incorporates general knowledge of spatial constraints in the form of first-order logic. The model was designed to comprehend a complete set of spatial constraints consistent with industrial CAD software, including left, right, above, below, front, behind, parallel, perpendicular, concentric, and coincident relations. The accuracy of 3D model composition and spatial-constraint identification was evaluated on RGB-D scans and 3D models from the ABC dataset. The proposed model achieved 57.23% intersection over union (IoU) in 3D model composition and over 99% accuracy in comprehending all spatial constraints.
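To make the relations listed above concrete, the following Python sketch (a hypothetical illustration, not the authors' implementation; the centroid/axis part representation, the tolerance values, and all function names are assumptions) shows how directional, parallel, perpendicular, and concentric constraints, as well as a voxel-based IoU score, could be checked geometrically:

import numpy as np

ANGLE_TOL_DEG = 2.0   # assumed angular tolerance for parallel/perpendicular checks
DIST_TOL = 1e-3       # assumed distance tolerance for concentric axes

def _unit(v):
    # Normalize a 3D vector.
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def directional_relations(centroid_a, centroid_b):
    # Left/right, above/below, front/behind of part A relative to part B,
    # judged from centroid offsets in a fixed world frame (x: right, y: up, z: front).
    dx, dy, dz = np.asarray(centroid_a, dtype=float) - np.asarray(centroid_b, dtype=float)
    return {"left": dx < 0, "right": dx > 0,
            "below": dy < 0, "above": dy > 0,
            "behind": dz < 0, "front": dz > 0}

def angle_deg(axis_a, axis_b):
    # Unsigned angle between two axes, folded into [0, 90] degrees.
    c = abs(np.dot(_unit(axis_a), _unit(axis_b)))
    return np.degrees(np.arccos(np.clip(c, 0.0, 1.0)))

def is_parallel(axis_a, axis_b, tol=ANGLE_TOL_DEG):
    return angle_deg(axis_a, axis_b) < tol

def is_perpendicular(axis_a, axis_b, tol=ANGLE_TOL_DEG):
    return abs(90.0 - angle_deg(axis_a, axis_b)) < tol

def is_concentric(point_a, axis_a, point_b, axis_b, tol=DIST_TOL):
    # Two rotational features are concentric when their axes are parallel and
    # one axis point lies on the other axis line (perpendicular distance ~ 0).
    if not is_parallel(axis_a, axis_b):
        return False
    offset = np.asarray(point_b, dtype=float) - np.asarray(point_a, dtype=float)
    return np.linalg.norm(np.cross(offset, _unit(axis_a))) < tol

def voxel_iou(pred, target):
    # Intersection over union of two boolean occupancy grids, the metric named
    # in the abstract for scoring 3D model composition.
    pred, target = np.asarray(pred, dtype=bool), np.asarray(target, dtype=bool)
    union = np.logical_or(pred, target).sum()
    return np.logical_and(pred, target).sum() / union if union else 1.0

For example, is_perpendicular([1, 0, 0], [0, 0, 1]) returns True, and voxel_iou on two identical occupancy grids returns 1.0; in practice the tolerances would need tuning to the noise level of a real RGB-D scan.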

Description
Click on the DOI link to view this conference paper (may not be free).
Publisher
IEEE
Series
2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS);2021
DOI
10.1109/IROS51168.2021.9635939
ISSN
2153-0858
EISSN
2153-0866