Computer scientists’ interactive program aids motion planning for environments with obstacles


Just like us, robots can’t see through walls. Sometimes they need a little help to get where they’re going.

Engineers at Rice University have developed a method that allows humans to help robots “see” their environments and carry out tasks.

The technique, called Bayesian Learning IN the Dark — BLIND, for short — is a novel solution to the long-standing problem of motion planning for robots that work in environments where not everything is clearly visible all the time.

The peer-reviewed study, led by computer scientists Lydia Kavraki and Vaibhav Unhelkar and co-lead authors Carlos Quintero-Peña and Constantinos Chamzas of Rice’s George R. Brown School of Engineering, was presented at the Institute of Electrical and Electronics Engineers’ International Conference on Robotics and Automation in late May.

The algorithm, developed primarily by Quintero-Peña and Chamzas, both graduate students working with Kavraki, keeps a human in the loop to “augment robot perception and, importantly, prevent the execution of unsafe motion,” according to the study.

To do so, they combined Bayesian inverse reinforcement learning (by which a system learns from continually updated information and experience) with established motion planning techniques to assist robots that have “high degrees of freedom” — that is, a lot of moving parts.
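In very rough terms, the Bayesian side of that combination can be pictured as a belief over candidate ways of scoring trajectory segments that gets sharpened every time the human weighs in. The following Python snippet is a minimal sketch of that idea only, not the team’s actual algorithm; the candidate weights, the two segment features and the sigmoid likelihood are all illustrative assumptions.

    # Minimal sketch (not the authors' code): a Bayesian-IRL-style update over a few
    # candidate "preference weights" that score trajectory segments. Binary human
    # critiques act as observations; Bayes' rule concentrates belief on the weights
    # that best explain those critiques.
    import numpy as np

    # Hypothetical candidate weightings over two segment features,
    # e.g. (clearance from obstacles, progress toward the goal).
    candidates = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
    belief = np.ones(len(candidates)) / len(candidates)   # uniform prior

    def like(weights, features, approved, temp=5.0):
        """Probability of the observed binary critique under one candidate weighting."""
        score = weights @ features                          # higher score = "better" segment
        p_approve = 1.0 / (1.0 + np.exp(-temp * (score - 0.5)))
        return p_approve if approved else 1.0 - p_approve

    # Simulated critiques: (segment feature vector, did the human approve?)
    critiques = [(np.array([0.9, 0.2]), True),
                 (np.array([0.1, 0.8]), False)]

    for features, approved in critiques:
        belief *= np.array([like(w, features, approved) for w in candidates])
        belief /= belief.sum()                              # posterior after each critique

    print("posterior over candidate preferences:", np.round(belief, 3))

After the two simulated critiques, the belief shifts toward the weighting that favors obstacle clearance, which is the flavor of inference the paper pairs with a conventional motion planner.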

To test BLIND, the Rice lab directed a Fetch robot, an articulated arm with seven joints, to grab a small cylinder from one table and move it to another, but in doing so it had to move past a barrier.

“If you have more joints, instructions to the robot are complicated,” Quintero-Peña said. “If you’re directing a human, you can just say, ‘Lift up your hand.'”

But a robot’s programmers have to be specific about the motion of each joint at each point in its trajectory, especially when obstacles block the machine’s “view” of its target.

Rather than setting a trajectory up front, BLIND inserts a human mid-process to refine the choreographed options — or best guesses — suggested by the robot’s algorithm. “BLIND allows us to take information in the human’s head and compute our trajectories in this high-degree-of-freedom space,” Quintero-Peña said.

“We use a specific way of feedback called critique, basically a binary form of feedback where the human is given labels on pieces of the trajectory,” he said.

These labels appear as connected green dots that represent possible paths. As BLIND steps from dot to dot, the human approves or rejects each motion to refine the path, avoiding obstacles as efficiently as possible.
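As a toy illustration of that dot-to-dot loop, here is a short Python sketch written under the assumption that feedback arrives one segment at a time; the function names, the rejection rule and the stand-in 'replan' helper are invented for this example and are not the paper’s implementation.

    # Minimal sketch (assumptions, not the paper's code) of the dot-to-dot critique loop:
    # the planner proposes a waypoint path, the human approves or rejects each segment,
    # and rejected segments are handed back to the planner for a detour.
    from typing import Callable, List, Tuple

    Waypoint = Tuple[float, float]

    def critique_path(path: List[Waypoint],
                      approve: Callable[[Waypoint, Waypoint], bool],
                      replan: Callable[[Waypoint, Waypoint], List[Waypoint]]) -> List[Waypoint]:
        """Walk the path segment by segment, keeping approved motions and
        replacing rejected ones with a replanned detour."""
        refined = [path[0]]
        for a, b in zip(path, path[1:]):
            if approve(a, b):                 # human says "I like this"
                refined.append(b)
            else:                             # human says "I don't like that"
                refined.extend(replan(a, b)[1:])
        return refined

    # Toy usage: reject any segment whose endpoint lands too close to an obstacle at (1.0, 1.0).
    obstacle = (1.0, 1.0)
    ok = lambda a, b: abs(b[0] - obstacle[0]) + abs(b[1] - obstacle[1]) > 0.5
    detour = lambda a, b: [a, (a[0], b[1] + 1.0), b]   # crude stand-in for the motion planner

    path = [(0.0, 0.0), (0.9, 0.9), (2.0, 2.0)]
    print(critique_path(path, ok, detour))

In the real system the “approve” step is the human clicking on the green dots, and the “replan” step is the high-degree-of-freedom motion planner recomputing a segment.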

“It’s an easy interface for people to use, because we can say, ‘I like this’ or ‘I don’t like that,’ and the robot uses this information to plan,” Chamzas said. Once rewarded with an approved set of motions, the robot can carry out its task, he said.

“One of the most important things here is that human preferences are hard to describe with a mathematical formula,” Quintero-Peña said. “Our work simplifies human-robot relationships by incorporating human preferences. That’s how I think applications will get the most benefit from this work.”

“This work wonderfully exemplifies how a little, but targeted, human intervention can significantly enhance the capabilities of robots to execute complex tasks in environments where some parts are completely unknown to the robot but known to the human,” said Kavraki, a robotics pioneer whose résumé includes advanced programming for NASA’s humanoid Robonaut aboard the International Space Station.

“It shows how methods for human-robot interaction, the topic of research of my colleague Professor Unhelkar, and automated planning pioneered for years at my laboratory can blend to deliver reliable solutions that also respect human preferences.”

Rice undergraduate alumna Zhanyi Sun and Unhelkar, an assistant professor of computer science, are co-authors of the paper. Kavraki is the Noah Harding Professor of Computer Science and a professor of bioengineering, of electrical and computer engineering, and of mechanical engineering, and director of the Ken Kennedy Institute.

The National Science Foundation (2008720, 1718487) and an NSF Graduate Research Fellowship Program grant (1842494) supported the research.

Video: https://youtu.be/RbDDiApQhNo
