SOTAVerified

How Reasonable are Common-Sense Reasoning Tasks: A Case-Study on the Winograd Schema Challenge and SWAG

2018-11-05 · IJCNLP 2019 · Code Available

Paul Trichelair, Ali Emami, Adam Trischler, Kaheer Suleman, Jackie Chi Kit Cheung

Abstract

Recent studies have significantly improved the state of the art on common-sense reasoning (CSR) benchmarks like the Winograd Schema Challenge (WSC) and SWAG. The question we ask in this paper is whether improved performance on these benchmarks represents genuine progress towards common-sense-enabled systems. We conduct case studies of both benchmarks and design protocols that clarify and qualify the results of previous work by analyzing threats to the validity of their experimental designs. Our protocols account for several properties prevalent in common-sense benchmarks, including size limitations, structural regularities, and variable instance difficulty.

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| Winograd Schema Challenge | GPT-2 Medium 774M (partial scoring) | Accuracy | 69.2 | — | Unverified |
| Winograd Schema Challenge | GPT-2 Medium 774M (full scoring) | Accuracy | 64.5 | — | Unverified |
| Winograd Schema Challenge | GPT-2 Small 117M (partial scoring) | Accuracy | 61.5 | — | Unverified |
| Winograd Schema Challenge | GPT-2 Small 117M (full scoring) | Accuracy | 55.7 | — | Unverified |
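The "partial" and "full" scoring labels in the table refer to the two standard ways a language model such as GPT-2 is used to rank WSC candidate referents: full scoring compares the model's probability of the entire sentence with each candidate substituted for the pronoun, while partial scoring compares only the probability of the tokens after the substitution point, conditioned on the prefix (as in Trinh and Le, 2018). A minimal sketch of the two schemes, with a made-up unigram table standing in for a real LM (all token probabilities below are illustrative, not from any actual model):

```python
import math

# Toy unigram "LM" standing in for GPT-2: token -> log-probability.
# A real run would instead sum the model's conditional token log-likelihoods.
TOY_LM = {"the": -1.0, "trophy": -4.0, "suitcase": -4.0, "is": -1.5,
          "too": -2.0, "big": -3.0, "because": -2.5, "it": -1.8,
          "does": -2.0, "not": -1.9, "fit": -3.5, "in": -1.2}

def log_prob(tokens):
    """Sum of per-token log-probabilities under the toy model."""
    return sum(TOY_LM.get(t, math.log(1e-6)) for t in tokens)

def score_candidate(tokens, pronoun_idx, candidate, partial):
    """Score one candidate referent substituted for the pronoun.

    full scoring    -> log P(entire resolved sentence)
    partial scoring -> log P(tokens after the substitution | prefix)
    (With this unigram stand-in, conditioning is a no-op, so the two
    schemes differ only in which tokens get summed.)
    """
    resolved = tokens[:pronoun_idx] + candidate + tokens[pronoun_idx + 1:]
    if partial:
        return log_prob(resolved[pronoun_idx + len(candidate):])
    return log_prob(resolved)

sent = ("the trophy does not fit in the suitcase "
        "because it is too big").split()
i = sent.index("it")
full = score_candidate(sent, i, ["the", "trophy"], partial=False)
part = score_candidate(sent, i, ["the", "trophy"], partial=True)
```

Whichever candidate ("the trophy" vs. "the suitcase") receives the higher score is taken as the predicted referent; the paper's protocols probe how sensitive such scores are to superficial regularities in the benchmark.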

Reproductions