Just like us, robots can’t see through walls. Sometimes they need a little help to get where they’re going.
Engineers at Rice University have developed a method that allows humans to help robots “see” their environments and carry out tasks.
The strategy, called Bayesian Learning IN the Dark (BLIND, for short), is a novel solution to the long-standing problem of motion planning for robots that work in environments where not everything is clearly visible all the time.
The study, led by computer scientists Lydia Kavraki and Vaibhav Unhelkar with co-lead authors Carlos Quintero-Peña and Constantinos Chamzas of Rice's George R. Brown School of Engineering, was presented at the Institute of Electrical and Electronics Engineers' International Conference on Robotics and Automation in late May.
The algorithm, developed primarily by Quintero-Peña and Chamzas, both graduate students working with Kavraki, keeps a human in the loop to "augment robot perception and, importantly, prevent the execution of unsafe motion," according to the study.
To do so, they combined Bayesian inverse reinforcement learning (by which a system learns from continually updated information and experience) with established motion planning techniques to assist robots that have "high degrees of freedom" (that is, a lot of moving parts).
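The core idea of keeping a human in the loop can be illustrated in miniature: maintain a probabilistic belief about whether unseen regions of the workspace are blocked, update that belief with Bayes' rule when a human answers a query, and let the planner reject waypoints that are likely unsafe. The sketch below is an illustrative toy, not the BLIND implementation; the function names, the region labels, and the assumed human-accuracy parameter are all hypothetical.

```python
# Toy human-in-the-loop belief update (illustrative only, not the
# paper's algorithm). A robot holds a belief P(blocked) for each
# occluded region; a human's yes/no answer refines it via Bayes' rule.

def update_belief(prior_blocked: float, human_says_blocked: bool,
                  accuracy: float = 0.9) -> float:
    """Posterior P(blocked | human answer), assuming the human is
    right with probability `accuracy` (a made-up parameter here)."""
    if human_says_blocked:
        numerator = accuracy * prior_blocked
        denominator = numerator + (1 - accuracy) * (1 - prior_blocked)
    else:
        numerator = (1 - accuracy) * prior_blocked
        denominator = numerator + accuracy * (1 - prior_blocked)
    return numerator / denominator

def safe_waypoints(path, belief, threshold: float = 0.5):
    """Drop waypoints whose region is more likely blocked than free,
    mimicking how human feedback can veto unsafe motion."""
    return [wp for wp in path if belief.get(wp, 0.0) < threshold]

# With no prior knowledge (P = 0.5), one "yes, that area is blocked"
# from a 90%-accurate human pushes the belief to 0.9, and the
# planner then routes around region "B".
belief = {"A": 0.1, "B": 0.5, "C": 0.1}
belief["B"] = update_belief(belief["B"], human_says_blocked=True)
path = safe_waypoints(["A", "B", "C"], belief)
```

The real system operates over high-dimensional robot configurations rather than labeled regions, but the same principle applies: human input reshapes the robot's belief about the parts of the world it cannot perceive.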