Displacement-Sparse Neural Optimal Transport
Peter Chen, Yue Xie, Qingpeng Zhang
Abstract
Optimal Transport (OT) theory seeks to determine the map T: X → Y that transports a source measure P to a target measure Q, minimizing the cost c(x, T(x)) between x and its image T(x). Building upon the Input Convex Neural Network OT solver and incorporating the concept of displacement-sparse maps, we introduce a sparsity penalty into the minimax Wasserstein formulation, promote sparsity in the displacement vectors Δ(x) := T(x) − x, and enhance the interpretability of the resulting map. However, increasing sparsity often reduces feasibility, causing T#(P) to deviate more significantly from the target measure. In low-dimensional settings, we propose a heuristic framework to balance the trade-off between sparsity and feasibility by dynamically adjusting the sparsity intensity parameter during training. For high-dimensional settings, we directly constrain the dimensionality of displacement vectors by enforcing dim(Δ(x)) ≤ l, where l < d for X ⊆ R^d. Among maps satisfying this constraint, we aim to identify the most feasible one. This goal can be effectively achieved by adapting our low-dimensional heuristic framework without resorting to dimensionality reduction. We validate our method on both synthesized sc-RNA and real 4i cell perturbation datasets, demonstrating improvements over existing methods.
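The two ingredients named above — a sparsity penalty on the displacements Δ(x) = T(x) − x, and a dynamically adjusted sparsity intensity λ — can be illustrated with a minimal numpy sketch. This is a toy reading of the abstract, not the authors' implementation: the ℓ1 choice for the penalty, the multiplicative λ schedule, and the names `displacement_sparsity_penalty`, `update_lambda`, `feasibility_err`, and `tol` are all assumptions introduced here for illustration.

```python
import numpy as np

def displacement_sparsity_penalty(x, Tx):
    """Mean l1 norm of the displacements Delta(x) = T(x) - x.

    An l1 penalty is one standard surrogate for promoting sparse
    displacement vectors (assumed here; the paper's exact penalty
    may differ).
    """
    return np.abs(Tx - x).sum(axis=1).mean()

def update_lambda(lam, feasibility_err, tol, up=2.0, down=0.5):
    """Hypothetical dynamic schedule for the sparsity intensity.

    Sketch of the trade-off described in the abstract: tighten the
    sparsity pressure while T#(P) still fits the target measure well
    (feasibility error below a tolerance), and relax it once the fit
    degrades. The multiplicative factors are arbitrary placeholders.
    """
    return lam * up if feasibility_err < tol else lam * down

# Toy usage: a map that translates every point by (1, 0, 0), so each
# displacement has l1 norm 1 and only one active coordinate.
x = np.zeros((4, 3))
Tx = x + np.array([1.0, 0.0, 0.0])
print(displacement_sparsity_penalty(x, Tx))  # mean l1 displacement: 1.0
```

In a training loop, `feasibility_err` would come from whatever divergence the solver monitors between T#(P) and Q; here it is left abstract on purpose.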