Abstract

Virtual cleaning is a key process that conservators use to preview the likely appearance of an artwork before it is physically cleaned. Many approaches to virtual cleaning have been proposed, but they have notable shortcomings: some require the artwork to be physically cleaned at a few locations of specific colors, some require pure black and white paint to be present on the painting, and their accuracy is generally low. These limitations motivate the deep learning-based approaches proposed in this research. We first report our work on color estimation for virtual cleaning and then describe our methods for spectral reflectance estimation. For color estimation, a deep convolutional neural network (CNN) and a deep generative network (DGN) are proposed, each estimating the RGB image of the cleaned artwork from an RGB image of the uncleaned artwork. The networks are applied to images of well-known artworks (such as the Mona Lisa and The Virgin and Child with Saint Anne) and to the Macbeth ColorChecker, and the results are compared with the only physics-based model, the first model to approach virtual cleaning from a physical standpoint and therefore our reference. Our methods outperform that model and show strong potential for real situations in which little information about the painting is available beyond an RGB image of its uncleaned state.

The methods proposed in the first part, however, cannot provide spectral reflectance information about the artwork, so the second part of the dissertation focuses on spectral estimation for virtual cleaning. Two deep learning-based approaches are proposed here as well. The first is a deep generative network that receives a hyperspectral image cube of the uncleaned artwork and outputs the corresponding virtually cleaned hyperspectral cube. The second is a 1D convolutional autoencoder (1DCA), based on a 1D convolutional neural network, which estimates the spectra of the virtually cleaned artwork from the spectra of physically cleaned artworks and their corresponding uncleaned spectra. The approaches are applied to hyperspectral images of the Macbeth ColorChecker (simulated in cleaned and uncleaned forms) and of the 'Haymakers' (real hyperspectral images of both the cleaned and uncleaned states). In terms of Euclidean distance and spectral angle between the virtually cleaned artwork and the physically cleaned one, both approaches outperform the physics-based model, with the DGN outperforming the 1DCA. The methods proposed here do not rely on first locating specific paints or colors on the painting, take advantage of the high accuracy offered by deep learning, and are applicable to other paintings.
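To make the spectral approach concrete, the sketch below shows a minimal 1D convolutional autoencoder of the kind the abstract describes: a network that maps reflectance spectra of the uncleaned artwork to spectra of the physically cleaned artwork, trained on paired spectra from physically cleaned regions. The layer sizes, number of spectral bands, and training setup are illustrative assumptions, not the dissertation's exact architecture.

```python
# Hedged sketch of a 1D convolutional autoencoder (1DCA) for spectral virtual
# cleaning. All hyperparameters (band count, channel widths, kernel sizes,
# optimizer settings) are assumptions for illustration only.
import torch
import torch.nn as nn


class SpectralAutoencoder1D(nn.Module):
    def __init__(self, n_bands: int = 100):
        super().__init__()
        # Encoder: each spectrum is treated as a 1-channel sequence of length n_bands.
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # Decoder: project back to a single reflectance channel.
        self.decoder = nn.Sequential(
            nn.Conv1d(32, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(16, 1, kernel_size=5, padding=2),
            nn.Sigmoid(),  # reflectance assumed to lie in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_bands) uncleaned spectra -> estimated cleaned spectra
        return self.decoder(self.encoder(x))


def spectral_angle(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # Spectral angle (radians) between estimated and reference spectra,
    # one of the evaluation metrics mentioned in the abstract.
    cos = (a * b).sum(dim=-1) / (a.norm(dim=-1) * b.norm(dim=-1) + 1e-8)
    return torch.acos(cos.clamp(-1.0, 1.0))


# Illustrative training loop on placeholder data standing in for paired
# uncleaned / physically cleaned spectra.
model = SpectralAutoencoder1D(n_bands=100)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

uncleaned = torch.rand(64, 1, 100)  # placeholder uncleaned spectra
cleaned = torch.rand(64, 1, 100)    # placeholder physically cleaned spectra

for epoch in range(10):
    optimizer.zero_grad()
    estimate = model(uncleaned)
    loss = loss_fn(estimate, cleaned)
    loss.backward()
    optimizer.step()
```

After training, the estimated spectra would be compared with the physically cleaned reference using Euclidean distance and the spectral angle shown above, the two metrics the dissertation reports.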

Library of Congress Subject Headings

Painting--Conservation and restoration--Technological innovations; Deep learning (Machine learning)

Publication Date

12-15-2022

Document Type

Dissertation

Student Type

Graduate

Degree Name

Imaging Science (Ph.D.)

Department, Program, or Center

Chester F. Carlson Center for Imaging Science (COS)

Advisor

David Messinger

Advisor/Committee Member

Sarah Thompson

Advisor/Committee Member

Charles Bachmann

Campus

RIT – Main Campus

Plan Codes

IMGS-PHD
