[BCC17] MUTAN: Multimodal Tucker Fusion for Visual Question Answering

International peer-reviewed conference: IEEE International Conference on Computer Vision (ICCV), October 2017, pp. 2631-2639, Venice, Italy (DOI: 10.1109/ICCV.2017.285)

Keywords: Deep Learning, Visual Question Answering

Abstract: Bilinear models provide an appealing framework for mixing and merging information in Visual Question Answering (VQA) tasks. They help to learn high-level associations between question meaning and visual concepts in the image, but they suffer from huge dimensionality issues. We introduce MUTAN, a multimodal tensor-based Tucker decomposition to efficiently parametrize bilinear interactions between visual and textual representations. In addition to the Tucker framework, we design a low-rank matrix-based decomposition to explicitly constrain the interaction rank. With MUTAN, we control the complexity of the merging scheme while keeping interpretable fusion relations. We show how our MUTAN model generalizes some of the latest VQA architectures, providing state-of-the-art results.
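To make the fusion scheme concrete, here is a minimal NumPy sketch of a rank-constrained Tucker bilinear fusion in the spirit the abstract describes. All dimensions, variable names, and the final projection are illustrative assumptions, not values from the paper: question and image features are projected through factor matrices, and each slice of the Tucker core tensor is constrained to rank R via two banks of low-rank factors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative, not from the paper).
d_q, d_v = 16, 20          # question / image feature sizes
t_q, t_v, t_o = 8, 8, 10   # Tucker factor dimensions
R = 3                      # rank constraint on each core-tensor slice
n_ans = 5                  # number of answer scores (toy value)

# Factor matrices of the Tucker decomposition.
W_q = rng.normal(size=(d_q, t_q))
W_v = rng.normal(size=(d_v, t_v))
W_o = rng.normal(size=(t_o, n_ans))

# Low-rank core: each output slice T[:, :, k] = sum_r M[r, :, k] N[r, :, k]^T.
M = rng.normal(size=(R, t_q, t_o))
N = rng.normal(size=(R, t_v, t_o))

def mutan_fusion(q, v):
    """Bilinear fusion of q and v via a rank-constrained Tucker core."""
    q_t = q @ W_q  # project question features, shape (t_q,)
    v_t = v @ W_v  # project image features, shape (t_v,)
    # z_k = sum_r (q_t . M[r, :, k]) * (v_t . N[r, :, k])
    z = np.einsum('i,rik,j,rjk->k', q_t, M, v_t, N)
    return z @ W_o  # project fused vector to answer scores

q = rng.normal(size=d_q)
v = rng.normal(size=d_v)
scores = mutan_fusion(q, v)
print(scores.shape)  # (5,)
```

The rank constraint is what keeps the parameter count tractable: the dense core tensor would hold t_q * t_v * t_o entries, while the low-rank factors hold only R * (t_q + t_v) * t_o.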

BibTeX

@inproceedings{BCC17,
  title     = "{MUTAN: Multimodal Tucker Fusion for Visual Question Answering}",
  author    = "H. Ben Younes and R. Cadene and M. Cord and N. Thome",
  booktitle = "{IEEE International Conference on Computer Vision (ICCV)}",
  year      = 2017,
  month     = "October",
  pages     = "2631--2639",
  address   = "Venice, Italy",
  doi       = "10.1109/ICCV.2017.285",
}