Abstract

Unsupervised representation learning is an important task in machine learning that identifies and models the underlying explanatory factors hidden in observed data. In recent years, it has attracted increasing attention for its ability to improve interpretability, extract useful features without expert annotations, and enhance downstream tasks, with successes across many areas of machine learning, such as computer vision, natural language processing, and anomaly detection. Unsupervised representation learning offers many desirable capabilities, including disentangling generative factors, generalizing across different domains, and accumulating knowledge incrementally. However, existing works face two critical challenges. First, unsupervised representation learning models are often designed to learn and disentangle all representations of the data at the same time, which prevents the models from learning representations in a more progressive and reasonable way (e.g., from easy to hard), resulting in poor (often blurry) generation quality and the loss of detailed information. Second, in a more realistic problem setting, continual unsupervised representation learning, existing works tend to suffer from catastrophic forgetting: they forget both the learned representations and how to disentangle them. The continual disentangling problem is very difficult without modeling the relationship between data environments, while the forgetting problem is often alleviated by generative replay.

In this dissertation, we are interested in developing advanced unsupervised representation learning methods by answering three research questions: (1) how to progressively learn representations so as to improve both the quality and the disentanglement of the learned representations; (2) how to continually learn and accumulate knowledge of representations from different data environments; and (3) how to continually reuse existing representations to facilitate learning and disentangling representations in new data environments. We first established a novel solution to the first research question, progressively learning and disentangling representations, and demonstrated its performance in a typical static data environment. Then, to answer the remaining two research questions, we extended our study to a more challenging and under-investigated setting: unsupervised continual learning and disentangling of representations in dynamic data environments, where the proposed model is capable of not only remembering old representations but also reusing them to facilitate learning and disentangling representations from a sequential data stream.

In summary, this dissertation proposes several novel unsupervised representation learning methods and their applications by drawing ideas from well-studied areas such as auto-encoders, variational inference, mixture distributions, and self-organizing maps. We demonstrated the presented methods on various benchmarks, such as dSprites, 3D Shapes, MNIST, Fashion-MNIST, and CelebA, providing quantitative and qualitative evaluations of the learned representations. We conclude by identifying the limitations of the proposed methods and discussing future research directions.
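To make the flavor of the building blocks named above (auto-encoders and variational inference) concrete, the following is a minimal, illustrative Python/PyTorch sketch of a beta-VAE-style objective, a standard approach to unsupervised disentanglement. It is not the dissertation's actual model; the architecture, layer sizes, and the beta value are assumptions chosen purely for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Minimal variational auto-encoder for flattened images (e.g., 28x28 MNIST)."""
    def __init__(self, x_dim=784, z_dim=10):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 400), nn.ReLU())
        self.mu = nn.Linear(400, z_dim)       # mean of q(z|x)
        self.logvar = nn.Linear(400, z_dim)   # log-variance of q(z|x)
        self.dec = nn.Sequential(nn.Linear(z_dim, 400), nn.ReLU(),
                                 nn.Linear(400, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return self.dec(z), mu, logvar

def beta_vae_loss(x, x_hat_logits, mu, logvar, beta=4.0):
    # Reconstruction term plus a beta-weighted KL divergence to the N(0, I) prior;
    # beta > 1 pressures the latent code toward independent (disentangled) factors.
    recon = F.binary_cross_entropy_with_logits(x_hat_logits, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl

On benchmarks like dSprites or MNIST mentioned above, raising beta typically yields more disentangled latent factors at some cost in reconstruction fidelity, which is exactly the quality/disentanglement trade-off the first research question targets.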

Library of Congress Subject Headings

Transfer learning (Machine learning); Self-organizing systems; Pattern recognition systems; Artificial intelligence

Publication Date

4-2023

Document Type

Dissertation

Student Type

Graduate

Degree Name

Computing and Information Sciences (Ph.D.)

Department, Program, or Center

Computer Science (GCCIS)

Advisor

Linwei Wang

Advisor/Committee Member

Nathan Cahill

Advisor/Committee Member

Rui Li

Campus

RIT – Main Campus

Plan Codes

COMPIS-PHD
