Pixels to Graphs by Associative Embedding
2017-06-22 · NeurIPS 2017 · Code Available
Alejandro Newell, Jia Deng
Code
- github.com/umich-vl/px2graph — official implementation (TensorFlow)
- github.com/nexusapoorvacus/DeepVariationStructuredRL (PyTorch)
- github.com/roytseng-tw/px2graph_lab (TensorFlow)
Abstract
Graphs are a useful abstraction of image content. Not only can graphs represent details about individual objects in a scene, but they can also capture the interactions between pairs of objects. We present a method for training a convolutional neural network such that it takes in an input image and produces a full graph definition. This is done end-to-end in a single stage with the use of associative embeddings. The network learns to simultaneously identify all of the elements that make up a graph and piece them together. We benchmark on the Visual Genome dataset and demonstrate state-of-the-art performance on the challenging task of scene graph generation.
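The grouping step described in the abstract — linking each predicted relationship (edge) to the detected objects (vertices) it connects by matching learned embedding tags — can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes hypothetical 1-D embedding tags (the paper's tags can be higher-dimensional) and toy data, and simply assigns each edge endpoint to the vertex with the nearest tag.

```python
import numpy as np

def match_edges_to_vertices(vertex_tags, edge_tags):
    """Group detections into a graph via embedding-tag matching.

    vertex_tags: shape (V,), one embedding tag per detected object.
    edge_tags:   shape (E, 2), a (source, target) tag pair predicted
                 for each detected relationship.
    Returns an (E, 2) array of (source_idx, target_idx) vertex indices,
    linking each edge endpoint to the vertex with the nearest tag.
    """
    # |edge endpoint tag - vertex tag| for every (edge, endpoint, vertex) triple
    dists = np.abs(edge_tags[:, :, None] - vertex_tags[None, None, :])
    # Nearest vertex in embedding space for each endpoint
    return dists.argmin(axis=-1)

# Hypothetical example: 3 objects with well-separated tags,
# 2 relationships whose endpoint tags land near those object tags.
vertex_tags = np.array([0.1, 1.0, 2.1])
edge_tags = np.array([[0.12, 1.05],   # edge 0: vertex 0 -> vertex 1
                      [2.00, 0.08]])  # edge 1: vertex 2 -> vertex 0
print(match_edges_to_vertices(vertex_tags, edge_tags))
# -> [[0 1]
#     [2 0]]
```

The design intuition is that the network is trained so that an edge's source/target tags lie close to the tags of the vertices it should attach to, so a nearest-neighbor lookup in embedding space recovers the graph's connectivity.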