Author

Sulabh Kumra

Abstract

In robotics, there is a need for interactive and expeditious learning methods, as experience is expensive. In this research, we propose two different methods for teaching a humanoid robot manipulation tasks: learning by trial and error, and learning from demonstrations. In learning by trial and error, the robot learns much as a child learns a newly assigned task: by trying all possible alternatives and learning from its mistakes. We used the Q-learning algorithm, in which the robot tries all possible ways of performing a task and builds a matrix of Q-values based on the rewards it receives for the actions performed. Using this method, the robot was taught dance moves based on a music track.
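
The abstract describes tabular Q-learning only at a high level. As an illustrative sketch (not the thesis's actual code), the following Python snippet implements the standard update Q(s,a) <- Q(s,a) + alpha * [r + gamma * max Q(s',.) - Q(s,a)] with epsilon-greedy exploration; the environment, reward function, and state/action sizes here are all hypothetical placeholders.

```python
import numpy as np

# Minimal tabular Q-learning sketch (illustrative only; not the thesis code).
# States and actions are assumed discrete, e.g. dance poses and moves.
n_states, n_actions = 10, 4
alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount, exploration
Q = np.zeros((n_states, n_actions))     # the matrix of Q-values described above

def step(state, action):
    """Hypothetical environment: returns (next_state, reward)."""
    next_state = np.random.randint(n_states)
    reward = 1.0 if action == next_state % n_actions else -0.1
    return next_state, reward

for episode in range(500):
    state = np.random.randint(n_states)
    for _ in range(50):
        # Epsilon-greedy: sometimes explore a random action, otherwise exploit Q.
        if np.random.rand() < epsilon:
            action = np.random.randint(n_actions)
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward = step(state, action)
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a').
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state
```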

Robot Learning from Demonstrations (RLfD) enables a human user to add new capabilities to a robot in an intuitive manner, without explicitly reprogramming it. In this method, the robot learns a skill from demonstrations performed by a human teacher. The robot extracts features, called key-points, from each demonstration and learns a model of the demonstrated task or trajectory using a Hidden Markov Model (HMM). The learned model is then used to produce a generalized trajectory. Finally, we discuss the differences between the two developed systems and draw conclusions from the experiments performed.
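
The abstract does not detail the HMM pipeline. As a simplified sketch of the idea, assuming the hmmlearn library and synthetic one-dimensional demonstrations (both are assumptions, not part of the thesis), one could fit a Gaussian HMM to several noisy demonstrations, where each hidden state loosely plays the role of a key-point region, and read a generalized trajectory off the learned state means via the Viterbi state sequence.

```python
import numpy as np
from hmmlearn import hmm  # assumed dependency: pip install hmmlearn

# Three noisy demonstrations of a 1-D trajectory (e.g. one joint angle over time).
t = np.linspace(0, 1, 100)
demos = [np.sin(2 * np.pi * t) + 0.05 * np.random.randn(100) for _ in range(3)]
X = np.concatenate(demos).reshape(-1, 1)   # stacked observations
lengths = [len(d) for d in demos]          # per-demonstration lengths

# Fit a Gaussian HMM; hidden states stand in for key-point regions.
model = hmm.GaussianHMM(n_components=6, covariance_type="diag", n_iter=100)
model.fit(X, lengths)

# Generalize: decode one demo into its most likely state sequence (Viterbi)
# and replace each observation with the mean of its hidden state.
states = model.predict(demos[0].reshape(-1, 1))
generalized = model.means_[states].ravel()
```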

Library of Congress Subject Headings

Machine learning; Robots--Motion; Reinforcement learning; Human-robot interaction

Publication Date

7-2015

Document Type

Thesis

Student Type

Graduate

Degree Name

Electrical Engineering (MS)

Department, Program, or Center

Electrical Engineering (KGCOE)

Advisor

Ferat Sahin

Advisor/Committee Member

Gill Tsouri

Advisor/Committee Member

Sildomar T. Monteiro

Comments

Physical copy available from RIT's Wallace Library at Q325.5 .K86 2015

Campus

RIT – Main Campus

Plan Codes

EEEE-MS
