COMPUTER VISION

ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision

Wonjae Kim (Kakao), Bokyung Son (Kakao Enterprise), Ildoo Kim (Kakao Brain)

International Conference on Machine Learning (ICML) Long Talk

2021-07-18

Abstract

Vision-and-Language Pre-training (VLP) has improved performance on various joint vision-and-language downstream tasks. Current approaches to VLP heavily rely on image feature extraction processes, most of which involve region supervision (e.g., object detection) and the convolutional architecture (e.g., ResNet). Although disregarded in the literature, we find it problematic in terms of both (1) efficiency/speed, in that simply extracting input features requires much more computation than the multimodal interaction steps; and (2) expressive power, as it is upper bounded by the expressive power of the visual encoder and its predefined visual vocabulary. In this paper, we present a minimal VLP model, Vision-and-Language Transformer (ViLT), monolithic in the sense that the processing of visual inputs is drastically simplified to the same convolution-free manner in which we process textual inputs. We show that ViLT is up to 60 times faster than previous VLP models, yet with competitive or better downstream task performance.
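To make the "convolution-free, region-free" idea concrete, below is a minimal sketch (not the authors' implementation) of a single-stream model in which image patches are linearly projected in the same way text tokens are embedded and then processed by one shared transformer. All class names, hyperparameters, and the use of PyTorch's generic nn.TransformerEncoder are assumptions for illustration; positional embeddings and the [class] token are omitted for brevity.

```python
# Illustrative sketch of a ViLT-style convolution-free input pipeline.
# Names, dimensions, and nn.TransformerEncoder are assumptions, not the paper's code.
import torch
import torch.nn as nn

class MinimalViLT(nn.Module):
    def __init__(self, vocab_size=30522, hidden=768, patch=32, layers=12, heads=12):
        super().__init__()
        self.text_embed = nn.Embedding(vocab_size, hidden)      # word embedding for text tokens
        self.patch_proj = nn.Linear(3 * patch * patch, hidden)  # linear patch projection; no CNN, no detector
        self.modality = nn.Embedding(2, hidden)                 # modality-type embedding (0 = text, 1 = image)
        self.patch = patch
        enc_layer = nn.TransformerEncoderLayer(hidden, heads, 4 * hidden, batch_first=True)
        self.transformer = nn.TransformerEncoder(enc_layer, layers)

    def forward(self, token_ids, image):
        # token_ids: (B, L) word-piece ids; image: (B, 3, H, W), H and W divisible by the patch size
        b = image.size(0)
        patches = image.unfold(2, self.patch, self.patch).unfold(3, self.patch, self.patch)
        patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(b, -1, 3 * self.patch * self.patch)
        t = self.text_embed(token_ids) + self.modality(torch.zeros_like(token_ids))
        v = self.patch_proj(patches) + self.modality.weight[1]
        # single-stream multimodal interaction over concatenated text and image sequences
        return self.transformer(torch.cat([t, v], dim=1))
```

Because the visual side is just a flatten-and-project step, nearly all computation goes into the transformer's multimodal interaction rather than into a separate feature extractor, which is the efficiency argument the abstract makes.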