βIs This an Example Image?β β Predicting the Relative Abstractness Level of Image and Text Chapter
Overview
abstract
- Successful multimodal search and retrieval requires the automatic understanding of semantic cross-modal relations, which, however, is still an open research problem. Previous work has suggested the metrics cross-modal mutual information and semantic correlation to model and predict cross-modal semantic relations of image and text. In this paper, we present an approach to predict the (cross-modal) relative abstractness level of a given image-text pair, that is, whether the image is an abstraction of the text or vice versa. For this purpose, we introduce a new metric, Abstractness Level (ABS), that captures this specific relationship between image and text. We present a deep learning approach to predict this metric, which relies on an autoencoder architecture that allows us to significantly reduce the required amount of labeled training data. A comprehensive set of publicly available scientific documents has been gathered. Experimental results on a challenging test set demonstrate the feasibility of the approach.
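To illustrate the general idea behind the abstract (not the chapter's actual architecture), the following sketch shows how unsupervised autoencoder pretraining can reduce the amount of labeled data needed for a downstream prediction task. A tiny linear autoencoder in plain NumPy stands in for the paper's deep model; all dimensions, data, and names are hypothetical.

```python
import numpy as np

# Hypothetical sketch: pretrain an encoder on plentiful unlabeled
# feature vectors, then train a small classifier head on a few
# labeled examples. This mirrors the labeled-data-reduction idea
# described in the abstract, not the authors' exact model.
rng = np.random.default_rng(0)

X_unlabeled = rng.normal(size=(500, 20))   # unlabeled pairs (plentiful)
X_labeled = rng.normal(size=(40, 20))      # small labeled set
y_labeled = (X_labeled[:, 0] > 0).astype(float)  # toy binary labels

def train_autoencoder(X, code_dim=5, lr=0.01, epochs=200):
    """Linear autoencoder trained by gradient descent on
    the mean squared reconstruction error."""
    n, d = X.shape
    W_enc = rng.normal(scale=0.1, size=(d, code_dim))
    W_dec = rng.normal(scale=0.1, size=(code_dim, d))
    for _ in range(epochs):
        Z = X @ W_enc          # encode to low-dimensional codes
        X_hat = Z @ W_dec      # decode back to input space
        err = X_hat - X        # reconstruction error
        W_dec -= lr * (Z.T @ err) / n
        W_enc -= lr * (X.T @ (err @ W_dec.T)) / n
    return W_enc

# 1) Unsupervised pretraining on unlabeled data.
W_enc = train_autoencoder(X_unlabeled)

# 2) Supervised fine-tuning: logistic-regression head on the codes
#    of the small labeled set.
Z = X_labeled @ W_enc
w = np.zeros(Z.shape[1])
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Z @ w)))
    w -= 0.1 * (Z.T @ (p - y_labeled)) / len(y_labeled)

preds = (1.0 / (1.0 + np.exp(-(Z @ w))) > 0.5).astype(float)
accuracy = (preds == y_labeled).mean()
```

The split mirrors the abstract's motivation: the reconstruction objective needs no labels, so only the small head is trained on labeled image-text pairs.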
authors
publication date
- April 7, 2019
publisher
- Springer, Cham
has restriction
- © Springer Nature Switzerland AG
published in
Identity
Digital Object Identifier (DOI)
International Standard Book Number (ISBN) 13
- 9783030157111
- 9783030157128
Additional Document Info
number of pages
- 14
start page
- 711
end page
- 725