Using Convolutional Neural Networks To Compare Paintings By Style

Authors

  • Moritz Alkofer, Heidelberg University

Abstract

One of the main challenges at the intersection of Computer Vision and Art History is the extraction of a numerical representation of an artwork’s style. Calculating such a representation allows art historians to automatically analyze large digital collections of art. In this study, we aim to transfer an approach to numerical style extraction, originally developed for artistic style transfer, to the task of comparing paintings by style. The approach uses a Convolutional Neural Network (CNN) trained on object detection to derive an image’s style representation in the form of Gram matrices. We use it to compare paintings, either by clustering a set of paintings or by retrieving, for a given painting, its most similar paintings from the set. We hypothesize that using a different CNN architecture trained on artistic style detection (instead of object detection) would lead to a significant increase in comparison quality. Using an object detection network, we achieve a clustering accuracy of 22%. Using a network specifically trained on artistic style detection increases the clustering accuracy to 44%. Directly using the art detection network’s output instead of Gram matrices yields an accuracy of 42%. Overall, we conclude that the approach is suitable for comparing paintings by style. We significantly improved the approach’s accuracy by changing the network architecture and training, and we show that for the improved network, Gram matrices provide little benefit.
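To make the abstract’s core technique concrete, the following is a minimal sketch of how a Gram-matrix style representation can be computed from a CNN feature map and used to compare two paintings. The function names, the NumPy stand-in for real CNN activations, and the normalization choice are illustrative assumptions, not the paper’s exact implementation:

```python
import numpy as np

def gram_matrix(features: np.ndarray) -> np.ndarray:
    """Compute a normalized Gram matrix from one CNN layer's feature map.

    features: array of shape (C, H, W), the channel activations of one layer.
    Returns a (C, C) matrix of channel-wise feature correlations, which
    discards spatial layout and retains texture/style statistics.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)          # flatten spatial dimensions
    return f @ f.T / (c * h * w)            # normalize by map size

def style_distance(feat_a: np.ndarray, feat_b: np.ndarray) -> float:
    """Squared Frobenius distance between the two Gram matrices."""
    diff = gram_matrix(feat_a) - gram_matrix(feat_b)
    return float(np.sum(diff ** 2))

# Toy example: random arrays stand in for CNN activations of two paintings.
rng = np.random.default_rng(0)
painting_a = rng.standard_normal((8, 4, 4))
painting_b = rng.standard_normal((8, 4, 4))
print(style_distance(painting_a, painting_a))  # 0.0 -- identical style
print(style_distance(painting_a, painting_b))  # positive distance
```

Distances of this kind can then feed a clustering algorithm or a nearest-neighbor retrieval over a painting collection, as described in the abstract.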

Published

2021-12-21 — Updated on 2022-01-02

Section

Research Articles