
Imagination-Augmented Natural Language Understanding

2022-01-16 · ACL ARR January 2022

Anonymous


Abstract

Human brains integrate linguistic and perceptual information simultaneously to understand natural language, and they possess the critical ability to form mental imagery. This ability lets us construct new abstract concepts or concrete objects, and it is essential for applying relevant knowledge to solve problems in low-resource scenarios. However, most existing methods for Natural Language Understanding (NLU) focus mainly on textual signals; they do not simulate the human ability of visual imagination, which hinders models from inferring and learning efficiently from limited data samples. We therefore introduce an Imagination-Augmented Cross-modal Encoder (iACE) that approaches natural language understanding tasks from a novel learning perspective---imagination-augmented cross-modal understanding. iACE enables visual imagination with external knowledge transferred from a powerful generative model and a pre-trained vision-and-language model. Extensive experiments on the GLUE and SWAG datasets show that iACE achieves consistent improvements over visually-supervised pre-trained models. More importantly, results in extreme and normal few-shot settings validate the effectiveness of iACE in low-resource natural language understanding.
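The abstract describes a two-stage pipeline: a generative model first "imagines" a visual representation of the input text, and a cross-modal encoder then fuses the textual and imagined-visual features for the downstream NLU task. A minimal sketch of that data flow is below; all components are hypothetical stubs (hash-based feature vectors and simple concatenation), not the paper's actual generator, vision-and-language encoder, or fusion method.

```python
# Hedged sketch of an imagination-augmented encoding pipeline.
# Every function here is a placeholder stand-in, NOT the models used by iACE.
import hashlib


def imagine(text: str, dim: int = 4) -> list[float]:
    """Stand-in for a text-to-image generative model: maps text to a
    deterministic pseudo-visual feature vector."""
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255.0 for b in digest[:dim]]


def encode_text(text: str, dim: int = 4) -> list[float]:
    """Stand-in for a pre-trained language encoder."""
    digest = hashlib.md5(text.encode()).digest()
    return [b / 255.0 for b in digest[:dim]]


def cross_modal_encode(text: str) -> list[float]:
    """Fuse textual and imagined-visual features; here fusion is plain
    concatenation, standing in for a cross-modal encoder."""
    return encode_text(text) + imagine(text)


features = cross_modal_encode("The cat sat on the mat.")
print(len(features))  # fused feature length: text dim + imagined-visual dim
```

The point of the sketch is only the shape of the computation: the same text drives both the textual branch and the imagination branch, and the downstream classifier sees their fused representation rather than text features alone.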
