Document Classification by Inversion of Distributed Language Representations

2015-04-27 · IJCNLP 2015 · Code Available

Matt Taddy


Abstract

There have been many recent advances in the structure and measurement of distributed language models: those that map from words to a vector space that is rich in information about word choice and composition. This vector space is the distributed language representation. The goal of this note is to point out that any distributed representation can be turned into a classifier through inversion via Bayes' rule. The approach is simple and modular, in that it will work with any language representation whose training can be formulated as optimizing a probability model. In our application to 2 million sentences from Yelp reviews, we also find that it performs as well as or better than complex purpose-built algorithms.
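The inversion idea described in the abstract can be sketched with a toy example. The paper applies it to neural distributed representations; the sketch below substitutes a simple smoothed unigram language model per class, which is the minimal instance of "a language representation trained as a probability model." A class-conditional model is fit on each class's documents, and a new document is labeled via Bayes' rule: P(class | doc) ∝ P(doc | class) P(class). All function names and the tiny training corpus here are illustrative, not from the paper.

```python
import math
from collections import Counter

def train_unigram(docs, vocab):
    """Fit an add-one-smoothed unigram model P(word | class) on a list of
    tokenized documents. Stands in for any trainable probability model."""
    counts = Counter(w for d in docs for w in d)
    total = sum(counts.values())
    return {w: (counts[w] + 1) / (total + len(vocab)) for w in vocab}

def classify(doc, models, priors):
    """Invert the class-conditional models via Bayes' rule:
    argmax_c log P(doc | c) + log P(c)."""
    scores = {}
    for c, model in models.items():
        loglik = sum(math.log(model[w]) for w in doc if w in model)
        scores[c] = loglik + math.log(priors[c])
    return max(scores, key=scores.get)

# Toy "Yelp-like" training data (hypothetical).
pos = [["great", "food"], ["loved", "the", "service"]]
neg = [["terrible", "food"], ["hated", "the", "wait"]]
vocab = {w for d in pos + neg for w in d}
models = {"pos": train_unigram(pos, vocab), "neg": train_unigram(neg, vocab)}
priors = {"pos": 0.5, "neg": 0.5}

print(classify(["great", "service"], models, priors))  # → pos
```

Replacing `train_unigram` with a per-class neural language model (e.g., one embedding model fit per sentiment class, scored by sentence log-likelihood) recovers the method the paper evaluates; the Bayes-rule inversion step is unchanged.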
