Cross-modal knowledge reasoning for knowledge-based visual question answering. (December 2020)
- Record Type:
- Journal Article
- Title:
- Cross-modal knowledge reasoning for knowledge-based visual question answering. (December 2020)
- Main Title:
- Cross-modal knowledge reasoning for knowledge-based visual question answering
- Authors:
- Yu, Jing
Zhu, Zihao
Wang, Yujing
Zhang, Weifeng
Hu, Yue
Tan, Jianlong
- Abstract:
- Highlights: Using multiple knowledge graphs from the visual, semantic and factual views to depict the multimodal knowledge. A memory-based recurrent model for multi-step knowledge reasoning over graph-structured multimodal knowledge. Good interpretability to reveal the knowledge selection mode from different modalities. Significant improvement over state-of-the-art approaches on three benchmark datasets.
Abstract: Knowledge-based Visual Question Answering (KVQA) requires external knowledge beyond the visible content to answer questions about an image. This ability is challenging but indispensable to achieve general VQA. One limitation of existing KVQA solutions is that they jointly embed all kinds of information without fine-grained selection, which introduces unexpected noise when reasoning toward the correct answer. How to capture question-oriented and information-complementary evidence remains a key challenge. Inspired by human cognition theory, in this paper we depict an image by multiple knowledge graphs from the visual, semantic and factual views, where the visual graph and semantic graph are regarded as image-conditioned instantiations of the factual graph. On top of these new representations, we re-formulate Knowledge-based Visual Question Answering as a recurrent reasoning process for obtaining complementary evidence from multimodal information. To this end, we decompose the model into a series of memory-based reasoning steps, each performed by a Graph-based Read, Update, and Control (GRUC) module that conducts parallel reasoning over both visual and semantic information. By stacking the modules multiple times, our model performs transitive reasoning and obtains question-oriented concept representations under the constraint of different modalities. Finally, we apply graph neural networks to infer the globally optimal answer by jointly considering all the concepts. We achieve a new state-of-the-art performance on three popular benchmark datasets, including FVQA, Visual7W-KB and OK-VQA, and demonstrate the effectiveness and interpretability of our model with extensive experiments. The source code is available at: https://github.com/astro-zihao/gruc
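The recurrent Read-Update-Control reasoning the abstract describes can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation (see the linked repository for that): the attention form, the gating, and the mean fusion of the visual and semantic branches are all assumptions made for clarity, and `gruc_step` is a hypothetical name.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def gruc_step(memory, question, nodes):
    """One hypothetical Read-Update-Control step (illustrative only).

    memory:   (d,) current reasoning state
    question: (d,) question embedding acting as the control signal
    nodes:    (n, d) node features of one modality's knowledge graph
    """
    # Read: attend over graph nodes, conditioned on memory and question
    scores = nodes @ (memory + question)
    att = softmax(scores)
    evidence = att @ nodes                          # (d,) attended evidence
    # Update: scalar sigmoid gate blends old memory with new evidence
    gate = 1.0 / (1.0 + np.exp(-(memory @ evidence)))
    return gate * evidence + (1.0 - gate) * memory

rng = np.random.default_rng(0)
d, n = 8, 5
memory = rng.normal(size=d)
question = rng.normal(size=d)
visual_nodes = rng.normal(size=(n, d))    # stand-in for the visual graph
semantic_nodes = rng.normal(size=(n, d))  # stand-in for the semantic graph

# Stacking steps gives multi-step (transitive) reasoning; the two
# modality branches run in parallel and are fused (here: simple mean).
for _ in range(3):
    memory = 0.5 * (gruc_step(memory, question, visual_nodes)
                    + gruc_step(memory, question, semantic_nodes))
```

After the stacked steps, `memory` plays the role of a question-oriented representation that a downstream graph neural network would consume to score candidate answers.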
- Is Part Of:
- Pattern recognition. Volume 108(2020:Dec.)
- Journal:
- Pattern recognition
- Issue:
- Volume 108(2020:Dec.)
- Issue Display:
- Volume 108 (2020)
- Year:
- 2020
- Volume:
- 108
- Issue Sort Value:
- 2020-0108-0000-0000
- Page Start:
- Page End:
- Publication Date:
- 2020-12
- Subjects:
- Cross-modal knowledge reasoning -- Multimodal knowledge graphs -- Compositional reasoning module -- Knowledge-based visual question answering -- Explainable reasoning
Pattern perception -- Periodicals
Perception des structures -- Périodiques
Patroonherkenning
006.4
- Journal URLs:
- http://www.sciencedirect.com/science/journal/00313203
http://www.sciencedirect.com/
- DOI:
- 10.1016/j.patcog.2020.107563
- Languages:
- English
- ISSNs:
- 0031-3203
- Deposit Type:
- Legal deposit
- View Content:
- Available online (eLD content is only available in our Reading Rooms)
- Physical Locations:
- British Library DSC - BLDSS-3PM
British Library HMNTS - ELD Digital store
- Ingest File:
- 13920.xml