
Distributed Evaluations: Ending Neural Point Metrics

2018-06-11

Daniel Cohen, Scott M. Jordan, W. Bruce Croft


Abstract

With the rise of neural models across the field of information retrieval, numerous publications have incrementally pushed the envelope of performance on a multitude of IR tasks. However, these networks often sample data in random order, are initialized randomly, and have their success determined by a single evaluation score. These issues are aggravated by neural models achieving only incremental improvements over previous neural baselines, leading to multiple near-state-of-the-art models that are difficult to reproduce and quickly become deprecated. As neural methods are starting to be applied to low-resource and noisy collections that further exacerbate this issue, we propose evaluating neural models both over multiple random seeds and over a set of hyperparameters within a small distance of the chosen configuration for a given metric.
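The proposed evaluation can be sketched as follows: instead of reporting a single point metric, score each configuration over several random seeds and over hyperparameters perturbed around the chosen values, and report the resulting distribution. The sketch below is illustrative only; `evaluate` is a hypothetical stand-in (a synthetic, noisy score) for an actual train-and-evaluate run, and the learning-rate values and perturbation sizes are assumptions, not values from the paper.

```python
import random
import statistics

def evaluate(seed, lr):
    # Hypothetical stand-in for training and evaluating a neural ranker.
    # The noise term mimics run-to-run variance from random initialization
    # and random data ordering; a real study would train the model here.
    rng = random.Random(seed)
    base = 0.30 - 50.0 * (lr - 0.001) ** 2  # synthetic MAP-like score
    return base + rng.gauss(0, 0.01)

def distributed_evaluation(lr, seeds=range(5),
                           perturbations=(-0.0005, 0.0, 0.0005)):
    """Score a configuration over multiple seeds and nearby hyperparameter
    settings, returning the distribution of scores rather than one number."""
    scores = [evaluate(s, lr + d) for s in seeds for d in perturbations]
    return statistics.mean(scores), statistics.stdev(scores)

mean, std = distributed_evaluation(lr=0.001)
print(f"MAP = {mean:.3f} +/- {std:.3f}")
```

Reporting the mean and spread over this grid of runs exposes how sensitive a claimed improvement is to seed and configuration choice, which is the failure mode the abstract describes for single-score comparisons.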
