In robotics, experience is expensive, so there is a need for interactive and expedient learning methods. In this research, we propose two different methods for a humanoid robot to learn manipulation tasks: learning by trial-and-error, and learning from demonstrations. In learning by trial-and-error, the robot learns much as a child learns a new task: by trying the possible alternatives and learning from its mistakes. We used the Q-learning algorithm, in which the robot tries the possible ways to do a task and builds a matrix of Q-values based on the rewards it receives for the actions performed. Using this method, the robot was made to learn dance moves based on a music track.
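The core of the Q-value update described above can be sketched in a few lines. The thesis's actual state and action spaces, reward function, and learning parameters are not given in the abstract, so the following uses a hypothetical toy world (a 4-state line with a reward at the rightmost state) purely to illustrate how the matrix of Q-values is filled in from rewards:

```python
import random

random.seed(0)  # deterministic toy run

# Minimal tabular Q-learning sketch. Hypothetical 4-state line world:
# the agent steps left or right and is rewarded only at the goal state.
N_STATES = 4
ACTIONS = [-1, +1]        # step left or right
ALPHA, GAMMA = 0.5, 0.9   # learning rate, discount factor (assumed values)

# The "matrix of Q-values": one entry per (state, action) pair
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    s2 = min(max(s + a, 0), N_STATES - 1)
    r = 1.0 if s2 == N_STATES - 1 else 0.0  # reward only at the goal
    return s2, r

for _ in range(500):                 # training episodes
    s = 0
    while s != N_STATES - 1:
        a = random.choice(ACTIONS)   # explore randomly; Q-learning is off-policy
        s2, r = step(s, a)
        # Core update: move Q(s,a) toward reward + discounted best future value
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# Greedy policy read from the learned Q-matrix: step right from every state
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)  # → {0: 1, 1: 1, 2: 1}
```

After training, the best action in each state is recovered by taking the arg-max over the Q-matrix row for that state, which is how a learned behavior (such as a dance move sequence) would be read out.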
Robot Learning from Demonstrations (RLfD) enables a human user to add new capabilities to a robot in an intuitive manner without explicitly reprogramming it. In this method, the robot learns a skill from demonstrations performed by a human teacher. The robot extracts features from each demonstration, called key-points, and learns a model of the demonstrated task or trajectory using a Hidden Markov Model (HMM). The learned model is then used to produce a generalized trajectory. Finally, we discuss the differences between the two developed systems and draw conclusions from the experiments performed.
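The key-point extraction and generalization pipeline can be illustrated with a deliberately simplified sketch. The thesis models the demonstrations with an HMM; as a stand-in here, key-points are taken to be the local extrema of each demonstration, and the "generalized trajectory" is the pointwise mean of equal-length demonstrations. The 1-D joint-angle demonstrations below are hypothetical:

```python
# Simplified RLfD sketch (illustrative only; not the thesis's HMM method).

def key_points(traj):
    """Indices where a 1-D trajectory changes direction (local extrema),
    plus its start and end point."""
    kps = [0]
    for i in range(1, len(traj) - 1):
        if (traj[i] - traj[i - 1]) * (traj[i + 1] - traj[i]) < 0:
            kps.append(i)
    kps.append(len(traj) - 1)
    return kps

def generalize(demos):
    """Pointwise mean of equal-length demonstrations, used here as a
    stand-in for trajectory generation from a learned model."""
    return [sum(vals) / len(vals) for vals in zip(*demos)]

# Two hypothetical 1-D joint-angle demonstrations of the same reach-and-return
demos = [
    [0.0, 1.0, 2.0, 1.0, 0.0],
    [0.0, 1.5, 2.5, 1.5, 0.0],
]
print(key_points(demos[0]))  # → [0, 2, 4]: start, peak, end
print(generalize(demos))     # → [0.0, 1.25, 2.25, 1.25, 0.0]
```

In the full system, the key-points from multiple demonstrations would instead train an HMM whose most likely state sequence yields the generalized trajectory; the averaging step above only stands in for that generation step.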
Library of Congress Subject Headings
Machine learning; Robots--Motion; Reinforcement learning; Human-robot interaction
Electrical Engineering (MS)
Department, Program, or Center
Electrical Engineering (KGCOE)
Sildomar T. Monteiro
Kumra, Sulabh, "Robot Learning Dual-Arm Manipulation Tasks by Trial-and-Error and Multiple Human Demonstrations" (2015). Thesis. Rochester Institute of Technology. Accessed from
RIT – Main Campus