
Not All Lotteries Are Made Equal

2022-01-17 · ICLR Blog Track 2022

Anonymous


Abstract

The Lottery Ticket Hypothesis (LTH) states that a reasonably sized neural network contains a subnetwork that, when trained from the same initialization, performs no worse than its dense counterpart. To the best of our knowledge, prior work on the LTH has only investigated overparameterized models, and the emergence of winning tickets is often attributed to the initial model being large, i.e., providing a dense sampling of candidate tickets. In this blog post, we present evidence that challenges this view. We investigate the effect of model size on the ease of finding winning tickets, and we show that winning tickets are easier to find for smaller models.
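Winning tickets are usually found with iterative magnitude pruning: train the network, zero out the smallest-magnitude weights, rewind the survivors to their initial values, and repeat. A minimal NumPy sketch of the prune-and-rewind step, assuming layer-wise pruning; the function names here are our own illustration, not from the post:

```python
import numpy as np

def prune_by_magnitude(weights, masks, prune_frac=0.2):
    """One round of layer-wise magnitude pruning: for each layer, zero out
    the smallest-magnitude fraction of the currently surviving weights."""
    new_masks = []
    for w, m in zip(weights, masks):
        keep = m.astype(bool)
        surviving = np.abs(w[keep])
        k = int(prune_frac * surviving.size)
        if k == 0:
            new_masks.append(m.copy())
            continue
        # Threshold at the k-th smallest surviving magnitude.
        threshold = np.sort(surviving)[k - 1]
        new_masks.append(((np.abs(w) > threshold) & keep).astype(w.dtype))
    return new_masks

def rewind(init_weights, masks):
    """Reset surviving weights to their original initialization — the
    masked initial network is the candidate 'winning ticket'."""
    return [w0 * m for w0, m in zip(init_weights, masks)]
```

In the full procedure these two steps alternate with retraining; a ticket "wins" if the rewound sparse network matches the dense network's accuracy when trained from scratch.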
