Understanding spatial constraints for autonomous robotic assembly with neural logic learning

Issue Date
2021-04-02
Authors
Yan, Fujian
Advisor
He, Hongsheng
Citation

Yan, F. 2021. Understanding spatial constraints for autonomous robotic assembly with neural logic learning -- In Proceedings: 17th Annual Symposium on Graduate Research and Scholarly Projects. Wichita, KS: Wichita State University

Abstract

Spatial constraints of objects are among the key elements required in industrial assembly. Robots deployed in conventional assembly lines follow fixed schemas in which spatial constraints are modeled in computer-aided design (CAD) software. Autonomous robotic assembly, by contrast, requires robots to learn spatial constraints intelligently, so understanding spatial constraints is critical. This work proposes a method that enables robots to comprehend spatial constraints from a single RGB-D scan. The proposed method contains two parts: the first generates 3D models that complete the missing point cloud of a single RGB-D scan of objects with an extended generative adversarial network (GAN); the second enables robots to comprehend spatial constraints with a neural-logic network. The spatial constraints include left, right, above, below, front, behind, parallel, perpendicular, concentric, and coincident. The 3D completion model achieved 57.23% intersection over union (IoU), and the neural-logic model achieved over 99% accuracy in comprehending all spatial constraints.
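For context on the quantities the abstract reports, the sketch below is illustrative only and is not the authors' implementation: it shows the standard IoU computation over binary voxel occupancy grids commonly used to evaluate 3D shape completion, plus hypothetical rule-based checks for a few of the listed spatial constraints (the work learns these with a neural-logic network rather than hard-coding them). The function names, axis conventions, and tolerances are assumptions.

```python
import numpy as np

def voxel_iou(pred: np.ndarray, target: np.ndarray) -> float:
    """IoU between two binary occupancy grids of equal shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float(inter) / float(union) if union else 1.0

# Hypothetical geometric checks for a few of the ten constraints.
# Assumed frame: x points right, z points up; centers are 3-vectors.

def is_left_of(center_a, center_b, margin=0.0):
    """Object A lies to the left of object B along the x-axis."""
    return center_a[0] + margin < center_b[0]

def is_above(center_a, center_b, margin=0.0):
    """Object A lies above object B along the z-axis."""
    return center_a[2] > center_b[2] + margin

def is_parallel(axis_a, axis_b, tol_deg=5.0):
    """Two object axes are parallel within an angular tolerance."""
    cos = abs(np.dot(axis_a, axis_b) /
              (np.linalg.norm(axis_a) * np.linalg.norm(axis_b)))
    angle = np.degrees(np.arccos(np.clip(cos, 0.0, 1.0)))
    return angle <= tol_deg
```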

Description
Presented to the 17th Annual Symposium on Graduate Research and Scholarly Projects (GRASP) held online, Wichita State University, April 2, 2021.
Research completed in the Department of Electrical Engineering and Computer Science, College of Engineering