Robotic understanding of spatial relationships using neural-logic learning
Understanding the spatial relations among objects is critical in many robotic applications such as grasping, manipulation, and obstacle avoidance. Humans can readily infer objects' spatial relations from a glimpse of a scene, drawing on prior knowledge of spatial constraints. The proposed method enables a robot to comprehend spatial relationships among objects from RGB-D data. This paper proposes a neural-logic learning framework that learns and reasons about spatial relations from raw data by following logic rules on spatial constraints. The neural-logic network consists of three blocks: a grounding block, a spatial logic block, and an inference block. The grounding block extracts high-level features from the raw sensory data. The spatial logic block predicates fundamental spatial relations via a neural network trained with spatial constraints. The inference block infers complex spatial relations from the predicated fundamental relations. Simulations and robotic experiments evaluate the performance of the proposed method.
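To illustrate the pipeline the abstract describes, the following is a minimal sketch, not the authors' implementation: in the paper the predicates are learned by neural modules from RGB-D features, whereas here the fundamental predicates (`left_of`, `above`) are hand-coded on object positions, and a logic rule composes them into a complex relation (`between`), mirroring the grounding → spatial logic → inference flow. All function names and the rule itself are illustrative assumptions.

```python
# Hypothetical sketch of logic-rule composition of spatial relations.
# In the proposed framework the fundamental predicates would be outputs
# of a trained spatial logic block; here they are geometric stand-ins.

def left_of(a, b):
    # Fundamental predicate: object a lies left of object b along the x-axis.
    return a[0] < b[0]

def above(a, b):
    # Fundamental predicate: object a lies above object b along the z-axis.
    return a[2] > b[2]

def between(a, b, c):
    # Complex relation inferred by a logic rule over fundamental predicates:
    # between(a, b, c) := (left_of(a, b) AND left_of(b, c))
    #                  OR (left_of(c, b) AND left_of(b, a))
    return (left_of(a, b) and left_of(b, c)) or \
           (left_of(c, b) and left_of(b, a))

# Example: object centroids (x, y, z), e.g., estimated from an RGB-D scene.
cup, box, plate = (0.1, 0.0, 0.0), (0.4, 0.0, 0.0), (0.8, 0.0, 0.0)
assert left_of(cup, box)
assert between(cup, box, plate)       # box sits between cup and plate
assert not between(box, cup, plate)   # cup is not between box and plate
```

In the paper's setting, the point of learning the fundamental predicates rather than hand-coding them is that the same logic rules can then be applied to noisy, real sensory data.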
Click on the DOI link to access the article (may not be free).