Abstract

As the field of robotic and humanoid systems expands, more research is being done on how best to control these systems to perform complex, intelligent tasks. Many supervised learning and classification techniques require large datasets, and they only result in the system mimicking what it was given. The sequential structure of the data used for task learning gives rise to Markov decision problems that traditional classification algorithms cannot solve. Reinforcement learning addresses these problems through a reward/punishment and exploration/exploitation methodology, without the need for labeled datasets. While this works for simple systems, complex systems are more difficult to teach using traditional reinforcement learning. These systems often have complex, non-linear, non-intuitive cost functions that are nearly impossible to model by hand. Inverse reinforcement learning, or apprenticeship learning, algorithms learn such complex cost functions from the demonstrations of an expert. Deep learning has also made a large impact on learning complex systems and has achieved state-of-the-art results in several applications. By combining methods from apprenticeship learning and deep learning, a system can be taught complex tasks by watching an expert. This work shows how well these types of networks solve a specific task, and how well they generalize and understand the task from raw pixel data provided by an expert.
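
As a rough illustration of the reward-driven, exploration/exploitation update described above (a minimal sketch, not code from the thesis; the toy environment, state/action sizes, and hyperparameters are assumptions), a tabular Q-learning loop might look like this:

```python
# Minimal tabular Q-learning sketch (illustrative only; not the thesis method).
# Demonstrates the reward/punishment and exploration/exploitation idea.
import random
import numpy as np

n_states, n_actions = 16, 4          # assumed small toy-problem sizes
alpha, gamma, epsilon = 0.1, 0.99, 0.1
Q = np.zeros((n_states, n_actions))  # action-value table

def step(state, action):
    """Placeholder environment: returns (next_state, reward, done)."""
    next_state = (state + action + 1) % n_states
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward, next_state == n_states - 1

for episode in range(500):
    state, done = 0, False
    while not done:
        # Exploration/exploitation: random action with probability epsilon,
        # otherwise exploit the current value estimates.
        if random.random() < epsilon:
            action = random.randrange(n_actions)
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward, done = step(state, action)
        # Reward-driven update toward the bootstrapped target value.
        target = reward + gamma * np.max(Q[next_state]) * (not done)
        Q[state, action] += alpha * (target - Q[state, action])
        state = next_state
```

Inverse reinforcement learning inverts this setup: rather than hand-specifying the reward, it is inferred from expert demonstrations, and a policy (here, a deep network operating on raw pixels) is then trained against that learned reward.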

Publication Date

6-2017

Document Type

Thesis

Student Type

Graduate

Degree Name

Computer Engineering (MS)

Department, Program, or Center

Computer Engineering (KGCOE)

Advisor

Raymond Ptucha

Advisor/Committee Member

Ferat Sahin

Advisor/Committee Member

Iris Asllani

Campus

RIT – Main Campus
