3 Jun 2019


Learning to Generate Grounded Image Captions without Localization Supervision

Chih-Yao Ma, Yannis Kalantidis, Ghassan AlRegib, Peter Vajda, Marcus Rohrbach, Zsolt Kira
Technical Report, 2019

[arXiv] [GitHub (coming soon)]


Abstract

When generating a sentence description for an image, it frequently remains unclear how well the generated caption is grounded in the image, or whether the model hallucinates based on priors in the dataset and/or the language model. The most common way of relating image regions to words in captioning models is through an attention mechanism over the regions that is used as input to predict the next word. The model must therefore learn to predict the attention without knowing the word it should localize. In this work, we propose a novel cyclical training regimen that forces the model to localize each word in the image after the sentence decoder generates it, and then reconstruct the sentence from the localized image region(s) to match the ground-truth caption. The initial decoder and the proposed reconstructor share parameters during training and are learned jointly with the localizer, allowing the model to regularize the attention mechanism. Our proposed framework only requires learning one extra fully-connected layer (the localizer), which can be removed at test time. We show that our model significantly improves grounding accuracy without relying on grounding supervision or introducing extra computation during inference.


Proposed Concept

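As described in the abstract, the model first decodes a sentence, a localizer then grounds each generated word back to the image regions, and a reconstructor (sharing parameters with the decoder) regenerates the sentence from the localized regions to match the ground-truth caption; only the localizer, a single fully-connected layer, is added, and it is removed at test time. The sketch below is a minimal, PyTorch-style illustration of this decode-localize-reconstruct cycle under simplifying assumptions (precomputed region features, teacher forcing, argmax word selection for the localization pass); the class and layer names (CyclicalCaptioner, localizer, etc.) are illustrative and are not the released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F


class CyclicalCaptioner(nn.Module):
    """Sketch of the decode -> localize -> reconstruct cycle over precomputed region features."""

    def __init__(self, vocab_size, region_dim=2048, embed_dim=512, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.region_proj = nn.Linear(region_dim, hidden_dim)
        # the decoder and the reconstructor share these parameters
        self.lstm = nn.LSTMCell(embed_dim + hidden_dim, hidden_dim)
        self.attn_query = nn.Linear(hidden_dim, hidden_dim)
        self.word_out = nn.Linear(hidden_dim, vocab_size)
        # the localizer: the single extra fully-connected layer (dropped at test time)
        self.localizer = nn.Linear(embed_dim, hidden_dim)

    @staticmethod
    def _attend(query, regions):
        # query: (B, H), regions: (B, R, H) -> context (B, H), attention weights (B, R)
        scores = torch.bmm(regions, query.unsqueeze(2)).squeeze(2)
        weights = F.softmax(scores, dim=1)
        context = torch.bmm(weights.unsqueeze(1), regions).squeeze(1)
        return context, weights

    def _decode(self, regions, captions, context_override=None):
        # Teacher-forced pass; when context_override is given (localized regions),
        # it replaces the freely attended context at each step (reconstruction pass).
        B, T = captions.shape
        h = regions.new_zeros(B, self.lstm.hidden_size)
        c = regions.new_zeros(B, self.lstm.hidden_size)
        logits = []
        for t in range(T):
            context, _ = self._attend(self.attn_query(h), regions)
            if context_override is not None:
                context = context_override[:, t]
            h, c = self.lstm(
                torch.cat([self.embed(captions[:, t]), context], dim=1), (h, c))
            logits.append(self.word_out(h))
        return torch.stack(logits, dim=1)  # (B, T, vocab)

    def forward(self, region_feats, captions):
        regions = self.region_proj(region_feats)            # (B, R, H)
        # 1) decoding pass: generate words with unconstrained attention
        dec_logits = self._decode(regions, captions)
        # 2) localization pass: ground each generated word back to image regions
        #    (argmax word selection is a simplification for this sketch)
        words = dec_logits.argmax(dim=2)                    # (B, T)
        queries = self.localizer(self.embed(words))         # (B, T, H)
        localized = torch.stack(
            [self._attend(queries[:, t], regions)[0] for t in range(words.size(1))],
            dim=1)                                          # (B, T, H)
        # 3) reconstruction pass: regenerate the caption from the localized regions,
        #    reusing the same decoder parameters
        rec_logits = self._decode(regions, captions, context_override=localized)
        return dec_logits, rec_logits


if __name__ == "__main__":
    # toy shapes: 4 images, 36 region proposals, captions of length 12, vocab of 1000
    model = CyclicalCaptioner(vocab_size=1000)
    feats = torch.randn(4, 36, 2048)
    caps = torch.randint(0, 1000, (4, 12))
    dec_logits, rec_logits = model(feats, caps)
    # both passes are trained to match the ground-truth caption
    loss = (F.cross_entropy(dec_logits.reshape(-1, 1000), caps.reshape(-1))
            + F.cross_entropy(rec_logits.reshape(-1, 1000), caps.reshape(-1)))
    print(loss.item())

In this sketch, both the decoding and the reconstruction pass are trained with the usual cross-entropy loss against the ground-truth caption, so the attention is regularized without any grounding annotations and the localizer can simply be dropped at inference.
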

Qualitative Results

We conduct a qualitative analysis comparing the baseline (Unsup.) and the proposed method in the figure below. Each highlighted word has a corresponding image region annotated on the original image; for each word, we show the region with the maximum attention weight. Our proposed method significantly outperforms the baseline (Unsup.) in terms of both the quality of the generated sentences and grounding accuracy.

We also show a number of correct and incorrect examples of our proposed method in the figure below.


Captioning and Grounding Performance on Flickr30k-Entities


We evaluate the proposed cyclical training regimen on the Flickr30k-Entities dataset for the image captioning task. To understand how our proposed method performs on captioning as well as visual grounding, we compare it against our strong baseline trained with and without grounding supervision. For the supervised variant, we train the attention mechanism (Attn.) of the baseline with the ground-truth grounding annotations and additionally add a region classification task (Cls.). Treating the resulting supervised baseline as an upper bound, our proposed method with cyclical training achieves relative grounding accuracy improvements of 39% and 29% on F1_all and F1_loc, respectively, while maintaining captioning performance.
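
F1_all and F1_loc measure how often the attended region for a word matches the annotated box; as in the qualitative analysis above, the region with the maximum attention weight is taken for each word. The sketch below is a rough, simplified illustration of that per-word check and not the official evaluation script; the helper names (box_iou, localization_hits), the (x1, y1, x2, y2) box format, and the 0.5 IoU threshold are assumptions for this example.

import torch

def box_iou(box_a, box_b):
    # boxes in (x1, y1, x2, y2) format
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-8)

def localization_hits(attn_weights, region_boxes, gt_boxes, iou_thresh=0.5):
    """attn_weights: (T, R) attention over R region proposals for T words.
    region_boxes: list of R proposal boxes.
    gt_boxes: dict mapping annotated word index -> ground-truth box.
    A word counts as correctly localized when the proposal with the maximum
    attention weight overlaps its ground-truth box with IoU >= iou_thresh."""
    hits = 0
    for t, gt_box in gt_boxes.items():
        best = int(attn_weights[t].argmax())
        if box_iou(region_boxes[best], gt_box) >= iou_thresh:
            hits += 1
    return hits, len(gt_boxes)

# toy example: 5 generated words, 3 region proposals, 2 annotated words
attn = torch.rand(5, 3)
proposals = [(0, 0, 50, 50), (40, 40, 100, 100), (10, 60, 80, 120)]
annotations = {1: (5, 5, 45, 45), 3: (45, 45, 95, 95)}
print(localization_hits(attn, proposals, annotations))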


Code and Paper

GitHub (coming soon)


Citation

If you find this repository useful, please cite our paper:


@article{ma2019learning,
    title={Learning to Generate Grounded Image Captions without Localization Supervision},
    author={Ma, Chih-Yao and Kalantidis, Yannis and AlRegib, Ghassan and Vajda, Peter and Rohrbach, Marcus and Kira, Zsolt},
    journal={arXiv preprint arXiv:1906.00283},
    year={2019},
    url={https://arxiv.org/abs/1906.00283},
}