Understanding spatial relationships in tabletop scenarios for human-robot collaboration
Type
Poster
Abstract
With recent advances in artificial intelligence (AI) models for robotics and computer vision, there is a growing emphasis on developing robust frameworks that integrate domain knowledge and physical models into AI systems. This integration is crucial for enhancing the accuracy and reliability of machine learning techniques. Our research, conducted at Wichita State University, addresses this challenge by training Spatial Bayesian Neural Networks (SBNNs) for scene analysis, specifically targeting semantic understanding while accounting for uncertainty. Unlike conventional neural networks, SBNNs model uncertainty by representing their weights as probability distributions rather than point estimates. The project comprises three key phases: (1) object detection in simulated scenes using a Faster Region-based Convolutional Neural Network (Faster R-CNN); (2) the development of an SBNN tailored for semantic understanding; and (3) the integration of the SBNN into a collaborative human-robot system. We begin by simulating Red-Green-Blue-Depth (RGB-D) camera data, capturing scenes that contain 40 household objects in the PyBullet environment. Object detection is performed with Faster R-CNN, and the spatial relationships among detected objects are recorded while a robotic arm is in motion. We then design an SBNN to assess the spatial relationships between pairs of objects. Future work will expand the SBNN's capabilities to evaluate more abstract concepts such as orderliness and task feasibility, supported by training data from a broader range of PyBullet-generated scenarios, including kitchens, garages, and other household settings.
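The abstract describes simulating RGB-D camera data of tabletop scenes in PyBullet. The following is a minimal sketch of such a capture, assuming a headless PyBullet session and illustrative camera pose, resolution, and object URDFs; it is not the authors' actual scene configuration.

```python
# Minimal sketch: render a simulated RGB-D frame of a tabletop scene in PyBullet.
# Camera placement, resolution, and loaded objects are assumptions for illustration.
import pybullet as p
import pybullet_data

p.connect(p.DIRECT)                                   # headless physics server
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.loadURDF("plane.urdf")
p.loadURDF("table/table.urdf", basePosition=[0, 0, 0])

width, height = 640, 480
view = p.computeViewMatrix(cameraEyePosition=[1.0, 0.0, 1.2],
                           cameraTargetPosition=[0.0, 0.0, 0.6],
                           cameraUpVector=[0, 0, 1])
proj = p.computeProjectionMatrixFOV(fov=60, aspect=width / height,
                                    nearVal=0.01, farVal=3.0)

# getCameraImage returns RGB pixels, a depth buffer, and a segmentation mask,
# which together form one simulated RGB-D frame for downstream object detection.
_, _, rgb, depth, seg = p.getCameraImage(width, height,
                                         viewMatrix=view,
                                         projectionMatrix=proj)
p.disconnect()
```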
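For the object-detection phase, a sketch along the following lines shows how a Faster R-CNN detector can be run over a simulated RGB frame. It uses torchvision's pretrained model as a stand-in; the project's own detector, training data, and label set are not specified in the abstract.

```python
# Minimal sketch: off-the-shelf Faster R-CNN inference on one RGB frame.
# The pretrained torchvision model is an assumed stand-in for the project's detector.
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

rgb = torch.rand(3, 480, 640)            # placeholder for a simulated RGB frame in [0, 1]
with torch.no_grad():
    detections = model([rgb])[0]          # dict of boxes, labels, scores for the image
boxes, labels, scores = detections["boxes"], detections["labels"], detections["scores"]
```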
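The SBNN itself is described only as a network whose weights are distributions. A minimal sketch of that idea is a Bayesian linear layer with Gaussian weights sampled on each forward pass (Bayes-by-backprop style reparameterization); the layer sizes, feature encoding, and relation classes below are hypothetical.

```python
# Minimal sketch: a Bayesian linear layer whose weights are Gaussian distributions,
# plus a hypothetical spatial-relation classifier built from such layers.
import torch
import torch.nn as nn

class BayesianLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.w_mu = nn.Parameter(torch.zeros(out_features, in_features))
        self.w_rho = nn.Parameter(torch.full((out_features, in_features), -3.0))
        self.b_mu = nn.Parameter(torch.zeros(out_features))
        self.b_rho = nn.Parameter(torch.full((out_features,), -3.0))

    def forward(self, x):
        # softplus keeps the standard deviations positive
        w_sigma = torch.nn.functional.softplus(self.w_rho)
        b_sigma = torch.nn.functional.softplus(self.b_rho)
        w = self.w_mu + w_sigma * torch.randn_like(w_sigma)   # sample weights
        b = self.b_mu + b_sigma * torch.randn_like(b_sigma)   # sample biases
        return torch.nn.functional.linear(x, w, b)

# Hypothetical relation classifier: pose features for an object pair in,
# probabilities over spatial relations (e.g. left-of, on-top-of) out.
# Repeated stochastic forward passes yield a predictive distribution whose
# spread can be read as the network's uncertainty about the relation.
model = nn.Sequential(BayesianLinear(12, 64), nn.ReLU(), BayesianLinear(64, 5))
pair_features = torch.randn(1, 12)
samples = torch.stack([model(pair_features).softmax(dim=-1) for _ in range(20)])
mean_prediction, uncertainty = samples.mean(dim=0), samples.std(dim=0)
```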
Description
Research project completed at the School of Computing, the Department of Chemistry and Biochemistry, and the Department of Mathematics and Statistics.