
How Pre-trained Word Representations Capture Commonsense Physical Comparisons

2019-11-01 · WS 2019

Pranav Goel, Shi Feng, Jordan Boyd-Graber


Abstract

Understanding common sense is important for effective natural language reasoning. One type of common sense is how two objects compare on physical properties such as size and weight: e.g., "is a house bigger than a person?". We probe whether pre-trained representations capture such comparisons and find that they, in fact, have higher accuracy than previous approaches. They also generalize to comparisons involving objects not seen during training. We investigate how such comparisons are made: the models learn a consistent ordering over all the objects in the comparisons. Probing models have significantly higher accuracy than baseline models that exploit dataset artifacts, e.g., memorizing that some words are larger than any other word.
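The probing setup the abstract alludes to can be illustrated with a minimal sketch: freeze pre-trained word vectors, represent a comparison as the concatenation of the two objects' vectors, and train a simple classifier to predict which object is bigger. Everything below is an assumption for illustration, not the paper's exact data or architecture: toy random vectors stand in for real pre-trained embeddings (e.g., GloVe or ELMo), and the pair list and linear probe are invented for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy stand-ins for frozen pre-trained word vectors; in practice these
# would be loaded from an embedding file (e.g., GloVe). Random vectors
# only demonstrate the pipeline, not the paper's generalization result.
rng = np.random.default_rng(0)
dim = 50
vocab = ["house", "person", "ant", "mountain", "car", "cup", "whale", "coin"]
emb = {w: rng.normal(size=dim) for w in vocab}

# Illustrative supervision: (object_a, object_b, 1 if a is bigger than b).
pairs = [
    ("house", "person", 1), ("person", "ant", 1), ("mountain", "car", 1),
    ("cup", "whale", 0), ("coin", "house", 0), ("car", "cup", 1),
    ("ant", "mountain", 0), ("whale", "person", 1),
]

def features(a, b):
    # Concatenated pair representation fed to the probe; the embeddings
    # themselves are never updated.
    return np.concatenate([emb[a], emb[b]])

X = np.stack([features(a, b) for a, b, _ in pairs])
y = np.array([label for _, _, label in pairs])

# A linear probe: if the frozen vectors encode relative size, even this
# simple classifier should separate the two classes.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print("train accuracy:", probe.score(X, y))

# Query a pair combination not seen during training; with real pre-trained
# vectors this is where a consistent size ordering would show up.
print("house > ant ?", probe.predict([features("house", "ant")])[0])
```

With toy random vectors the probe can only memorize the training pairs; the paper's finding is that with real pre-trained representations such probes also answer held-out comparisons, consistent with the model learning a single ordering over objects rather than pair-specific rules.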
