Web Image Context Extraction (WICE) is the task of extracting the textual
information that describes an image from the content of the surrounding webpage. A
common preprocessing step before performing WICE is to render the content of
the webpage. At large scale (e.g., for search engine indexing), rendering
becomes computationally costly (up to several seconds per page). To
avoid this cost, we introduce a novel WICE approach that combines Graph Neural
Networks (GNNs) and Natural Language Processing models. Our method relies on a
graph model containing both node types and text as features. The model is fed
through several blocks of GNNs to extract the textual context. Since no labeled
WICE dataset with ground truth exists, we train and evaluate the GNNs on a
proxy task: finding the text that is semantically closest to the image caption.
We then interpret the model's importance weights to identify the most relevant text
nodes and define them as the image context. Thanks to GNNs, our model encodes
both structural and semantic information from the webpage. We show that our
approach yields promising results for addressing the large-scale WICE problem
using only HTML data.
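
To make the described pipeline concrete, the following is a minimal sketch (not the authors' implementation) of the GNN stage in PyTorch Geometric. It assumes graph attention layers (GATConv) as one plausible source of the per-node importance weights mentioned above; the feature layout (one-hot node type concatenated with a text embedding), the dimensions, and the toy DOM graph are illustrative assumptions, and the paper's actual architecture may differ.

```python
import torch
from torch_geometric.nn import GATConv


class WICEGNN(torch.nn.Module):
    """Two GNN blocks over a DOM graph. The attention coefficients of the
    last layer are read back as per-node importance weights."""

    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.conv1 = GATConv(in_dim, hid_dim)
        self.conv2 = GATConv(hid_dim, hid_dim)

    def forward(self, x, edge_index):
        h = torch.relu(self.conv1(x, edge_index))
        # return_attention_weights=True makes GATConv also return the
        # per-edge attention coefficients alpha (self-loops included,
        # hence the returned edge index ei is used below).
        h, (ei, alpha) = self.conv2(h, edge_index,
                                    return_attention_weights=True)
        return h, ei, alpha


# Toy DOM graph with 4 nodes (e.g., <img>, <p>, <div>, <figcaption>).
# Real features would be [one-hot node type | text embedding]; random here.
x = torch.randn(4, 16)
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]])  # bidirected DOM edges

model = WICEGNN(in_dim=16, hid_dim=32)
h, ei, alpha = model(x, edge_index)

# Aggregate the attention each node receives into an importance score;
# the highest-scoring text nodes would be selected as the image context.
scores = torch.zeros(x.size(0)).index_add_(0, ei[1], alpha.mean(dim=-1))
print(scores)
```

In the proxy-task setting described above, such a model would be trained so that highly weighted text nodes are those whose text is semantically closest to the image caption, e.g., via a similarity loss between node embeddings and a caption embedding.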