
Locality Preserving Loss: Neighbors that Live together, Align together

2020-04-07 · EACL (AdaptNLP) 2021

Ashwinkumar Ganesan, Francis Ferraro, Tim Oates


Abstract

We present a locality preserving loss (LPL) that improves the alignment between vector space embeddings while separating uncorrelated representations. Given two pretrained embedding manifolds, LPL optimizes a model to project an embedding while maintaining its local neighborhood, aligning one manifold to the other. This reduces the amount of training data required to align the two spaces in tasks such as cross-lingual word alignment. We show that LPL-based alignment between input vector spaces acts as a regularizer, leading to higher and more consistent accuracy than the baseline, especially when the training set is small. We demonstrate the effectiveness of LPL-optimized alignment on semantic text similarity (STS), natural language inference (SNLI), multi-genre natural language inference (MNLI), and cross-lingual word alignment (CLA), showing consistent improvements, with gains of up to 16% over our baseline in lower-resource settings.
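One plausible reading of the abstract is that LPL combines a supervised alignment term, which maps embeddings from one space onto their counterparts in the other, with a locality term that keeps each projected point close to a reconstruction from its projected neighbors. The sketch below illustrates such a combined objective in PyTorch; it is an illustration only, not the paper's implementation. The names (lpl_alignment_loss, neighbor_idx, neighbor_wts, lam) are hypothetical, and the neighbor reconstruction weights are assumed to be precomputed in the source space (e.g., in the style of locally linear embedding), a detail the abstract does not specify.

import torch
import torch.nn.functional as F

def lpl_alignment_loss(src, tgt, proj, neighbor_idx, neighbor_wts, lam=1.0):
    # src: (n, d) source-space embeddings for the supervised pairs
    # tgt: (n, d) target-space embeddings, row-aligned with src
    # proj: module mapping source space to target space, e.g. torch.nn.Linear(d, d)
    # neighbor_idx: (n, k) long tensor of each point's k nearest neighbors in source space
    # neighbor_wts: (n, k) precomputed reconstruction weights (assumed LLE-style)
    # lam: weight on the locality term (hypothetical hyperparameter)
    z = proj(src)                                   # projected embeddings, (n, d)
    align = F.mse_loss(z, tgt)                      # pull supervised pairs together
    nbrs = z[neighbor_idx]                          # projected neighbors, (n, k, d)
    recon = (neighbor_wts.unsqueeze(-1) * nbrs).sum(dim=1)  # neighbor reconstruction, (n, d)
    locality = F.mse_loss(z, recon)                 # preserve the local neighborhood
    return align + lam * locality

Under this reading, the locality term constrains the projection even for points that lack supervised pairs, which is consistent with the regularizing effect and low-resource gains the abstract reports.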
