Initiating object handover in human-robot collaboration using a multi-modal wearable and deep learning visual system
Abstract
Hybrid systems in which humans and machines work together are becoming a fundamental part of everyday life: from home automation and cell phones to grocery store pickups, humans now collaborate with machines more than ever to accomplish routine tasks. Within these tasks, situations arise in which a human and a robot must hand objects to one another to accomplish a shared goal. For robots to perform such interactive object handovers, a high degree of synergy between the human and the robot is required to match that of a human-to-human handover. Collaborative robots performing these handover operations face numerous difficulties because they lack the dexterity inherent in a human-to-human exchange. Recent advances in sensor design, in both vision and wearable technology, have helped close the gap, allowing human-robot systems to approximate human-to-human handover procedures while maintaining safety. The research presented in this thesis works to further close that gap toward safe and optimal human-robot object handovers by combining a multi-modal wearable and visual system. To equip the robot with human-like handover abilities, a web camera and a Myo armband are combined to emulate the two senses humans rely on most during a handover: vision and physical orientation. The vision sensor is used for object recognition and facial detection, while the wearable sensor provides arm orientation and rotation. Furthermore, solutions to most problems, if not all, must first address the root cause, which in this research is the initiation stage of the object handover process; the work herein is therefore restricted to that initiation stage. By analyzing the data and information needed at the front of the process, the downstream handover actions can be made increasingly effective, yielding a safe, efficient, and high-quality human-robot object handover system.