Data Collaboration Analysis with Orthonormal Basis Selection and Alignment
Keiyu Nosaka, Yuichi Takano, Akiko Yoshise
Abstract
Data Collaboration (DC) analysis offers a privacy-preserving approach to multi-source machine learning by enabling participants to train a shared model without revealing their raw data. Instead, each participant shares only linearly transformed data through a non-iterative communication protocol, thereby mitigating both privacy risks and communication overhead. The core idea of DC is that while each participant obfuscates their data with a secret linear transformation (or basis), the aggregator aligns these secret bases to a chosen target basis without knowing the secret bases. Although DC theory suggests that any target basis spanning the same subspace as the secret bases should suffice, empirical evidence reveals that the choice of target basis can substantially influence model performance. To address this discrepancy, we propose Orthonormal DC (ODC), a novel framework that enforces orthonormal constraints during the basis selection and alignment phases. Unlike conventional DC -- which allows arbitrary target bases -- ODC restricts the target to orthonormal bases, rendering the specific choice of basis negligible with respect to model performance. Furthermore, the alignment step in ODC reduces to the Orthogonal Procrustes Problem, which admits a closed-form solution with favorable computational properties. Empirical evaluations demonstrate that ODC achieves higher accuracy and improved efficiency compared to existing DC methods, in line with our theoretical findings. Additional evaluations assess performance in non-ideal scenarios with heterogeneous distributions, where our method again shows the best overall performance. These findings position ODC as a direct and effective enhancement to current DC frameworks that compromises neither privacy nor communication overhead when orthonormality constraints are applicable.
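The abstract notes that ODC's alignment step reduces to the Orthogonal Procrustes Problem, which has a closed-form solution via the singular value decomposition (Schönemann, 1966). The sketch below, written with NumPy, illustrates that generic solution only; it is not the paper's implementation, and the matrix names (`A` for a participant's shared representation, `B` for the target basis representation) are illustrative assumptions.

```python
import numpy as np

def orthogonal_procrustes(A, B):
    """Closed-form solution of min_Q ||A Q - B||_F subject to Q^T Q = I.

    The optimal orthogonal Q is U V^T, where U S V^T is the SVD of A^T B.
    """
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

# Synthetic check: rotate A by a random orthogonal matrix, then recover it.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))          # stand-in for one party's shared data
Q_true, _ = np.linalg.qr(rng.standard_normal((5, 5)))  # hidden orthogonal map
B = A @ Q_true                            # stand-in for the target representation

Q = orthogonal_procrustes(A, B)
print(np.allclose(Q, Q_true))             # the rotation is recovered exactly
```

Because the solution is a single SVD of a small (dimension-by-dimension) matrix, the alignment cost does not grow with the number of samples once `A.T @ B` is formed, which is consistent with the favorable computational properties claimed above.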