LSH methods for data deduplication in a Wikipedia artificial dataset

2021-12-10

Juan Ciro, Daniel Galvez, Tim Schlippe, David Kanter

Abstract

This paper illustrates locality sensitive hashing (LSH) models for the identification and removal of nearly redundant data in a text dataset. To evaluate the different models, we create an artificial dataset for data deduplication using English Wikipedia articles. Area-Under-Curve (AUC) values over 0.9 were observed for most models, with the best model reaching 0.96. Deduplication enables more effective model training by preventing the model from learning a distribution that differs from the real one as a result of the repeated data.
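As a rough illustration of the general technique the abstract describes (not the paper's own implementation), near-duplicate detection with MinHash-based LSH can be sketched as follows: each document is reduced to a set of character shingles, compressed into a MinHash signature, and then banded so that documents sharing any band hash become candidate duplicate pairs. All function names and parameter choices below (`num_perm=64`, `bands=16`, 5-character shingles) are illustrative assumptions.

```python
import hashlib

def shingles(text, k=5):
    """Character k-grams of a whitespace-normalized, lowercased string."""
    text = " ".join(text.lower().split())
    return {text[i:i + k] for i in range(len(text) - k + 1)}

def minhash_signature(shingle_set, num_perm=64):
    """MinHash signature: for each of num_perm seeded hash functions,
    keep the minimum 64-bit hash over the shingle set."""
    sig = []
    for seed in range(num_perm):
        sig.append(min(
            int.from_bytes(hashlib.sha1(f"{seed}:{s}".encode()).digest()[:8], "big")
            for s in shingle_set
        ))
    return sig

def lsh_candidate_pairs(docs, num_perm=64, bands=16):
    """Split each signature into bands; documents that agree on any
    full band land in the same bucket and become candidate duplicates."""
    rows = num_perm // bands
    buckets = {}
    for doc_id, text in docs.items():
        sig = minhash_signature(shingles(text), num_perm)
        for b in range(bands):
            key = (b, tuple(sig[b * rows:(b + 1) * rows]))
            buckets.setdefault(key, []).append(doc_id)
    pairs = set()
    for ids in buckets.values():
        for i in range(len(ids)):
            for j in range(i + 1, len(ids)):
                pairs.add(tuple(sorted((ids[i], ids[j]))))
    return pairs
```

With `bands` bands of `rows` rows each, two documents of Jaccard similarity `J` collide in at least one band with probability `1 - (1 - J^rows)^bands`, which is the S-curve that lets LSH trade precision against recall.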
