The Fast Johnson-Lindenstrauss Transform is Even Faster
Ora Nova Fandina, Mikael Møller Høgsgaard, Kasper Green Larsen
Abstract
The seminal Fast Johnson-Lindenstrauss (Fast JL) transform by Ailon and Chazelle (SICOMP'09) embeds a set of $n$ points in $d$-dimensional Euclidean space into optimal $k = O(\varepsilon^{-2} \ln n)$ dimensions, while preserving all pairwise distances to within a factor $(1 \pm \varepsilon)$. The Fast JL transform supports computing the embedding of a data point in $O(d \ln d + k \varepsilon^2 \ln n)$ time, where the $d \ln d$ term comes from multiplication with a $d \times d$ Hadamard matrix and the $k \varepsilon^2 \ln n$ term comes from multiplication with a sparse $k \times d$ matrix. Despite the Fast JL transform being more than a decade old, it is one of the fastest dimensionality reduction techniques for many tradeoffs between $\varepsilon$, $d$ and $n$. In this work, we give a surprising new analysis of the Fast JL transform, showing that the $k \varepsilon^2 \ln n$ term in the embedding time can be improved to $(k \varepsilon^2 \ln n)/\alpha$ for an $\alpha = \Omega(\min\{\varepsilon^{-1} \ln(1/\varepsilon), \ln n\})$. The improvement follows by using an even sparser matrix. We also complement our improved analysis with a lower bound showing that our new analysis is in fact tight.
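The two-stage structure described in the abstract, a randomized Hadamard rotation followed by multiplication with a sparse projection matrix, can be sketched as below. This is a minimal illustration, not the paper's construction: the function name, the sparsity parameter `q`, and the Gaussian choice of nonzero entries are illustrative assumptions, and the Hadamard multiply is done densely for clarity rather than via the $O(d \ln d)$ fast Walsh-Hadamard transform.

```python
import numpy as np
from scipy.linalg import hadamard

def fast_jl_embed(x, k, q, rng):
    """Sketch of a Fast JL-style embedding x -> P H D x.

    D: random +/-1 diagonal, H: normalized d x d Hadamard matrix,
    P: sparse k x d matrix whose entries are nonzero with probability q.
    Requires len(x) to be a power of two.
    """
    d = len(x)
    # D: random sign flips spread out the mass of x across Hadamard coordinates.
    Dx = x * rng.choice([-1.0, 1.0], size=d)
    # H: dense Hadamard multiply for clarity; the fast Walsh-Hadamard
    # transform computes the same product in O(d log d) time.
    HDx = (hadamard(d) @ Dx) / np.sqrt(d)
    # P: each entry is nonzero with probability q; nonzero entries are
    # Gaussian, scaled so the squared norm is preserved in expectation.
    mask = rng.random((k, d)) < q
    P = np.where(mask, rng.normal(0.0, 1.0 / np.sqrt(q * k), size=(k, d)), 0.0)
    return P @ HDx
```

The cost of the final multiply scales with the number of nonzeros of $P$, roughly $q k d$, which is why a sparser matrix (smaller $q$) directly speeds up the embedding; the paper's contribution is showing that $q$ can be taken smaller than the classical analysis allows while still preserving distances.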