
Visual Attention Model for Name Tagging in Multimodal Social Media

2018-07-01 · ACL 2018

Di Lu, Leonardo Neves, Vitor Carvalho, Ning Zhang, Heng Ji


Abstract

Every day, billions of multimodal posts containing both images and text are shared on social media sites such as Snapchat, Twitter, or Instagram. This combination of image and text in a single message allows for more creative and expressive forms of communication, and has become increasingly common on such sites. This new paradigm brings new challenges for natural language understanding, as the textual component tends to be shorter and more informal, and often can only be understood in combination with the visual context. In this paper, we explore the task of name tagging in multimodal social media posts. We start by creating two new multimodal datasets: the first based on Twitter posts and the second based on Snapchat captions (exclusively submitted to public and crowd-sourced stories). We then propose a novel model architecture based on Visual Attention that not only provides deeper visual understanding of the decisions of the model, but also significantly outperforms other state-of-the-art baseline methods for this task.
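The abstract does not spell out the attention mechanism, but the general idea of visual attention — scoring image-region features against a text representation and taking a weighted sum as the visual context — can be sketched roughly as below. All names, dimensions, and the dot-product scoring function are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def visual_attention(text_query, image_regions):
    """Attend over image regions conditioned on a text query.

    text_query:    (dim,) vector summarizing the caption (hypothetical).
    image_regions: (num_regions, dim) matrix of region features, e.g. a
                   flattened CNN feature map (hypothetical shape).
    Returns the attention distribution and the weighted visual context.
    """
    scores = image_regions @ text_query      # one score per region
    weights = softmax(scores)                # attention distribution
    context = weights @ image_regions        # weighted sum of regions
    return weights, context

# Toy example with random features standing in for real encoders.
rng = np.random.default_rng(0)
query = rng.standard_normal(8)               # caption representation
regions = rng.standard_normal((49, 8))       # e.g. a 7x7 feature grid
weights, context = visual_attention(query, regions)
print(weights.shape, context.shape)          # (49,) (8,)
```

In a full name-tagging model, the resulting visual context vector would typically be combined with word-level features before a sequence labeler (e.g. a BiLSTM-CRF); this sketch only shows the attention step.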
