
Enhancing Unsupervised Feature Selection via Double Sparsity Constrained Optimization

2025-01-01

Xianchao Xiu, Anning Yang, Chenyi Huang, Xinrong Li, Wanquan Liu

Abstract

Unsupervised feature selection (UFS) is widely applied in machine learning and pattern recognition. However, most existing methods impose only a single type of sparsity, which makes it difficult to select valuable and discriminative feature subsets from the original high-dimensional feature set. In this paper, we propose a new UFS method, DSCOFS, which embeds double sparsity constrained optimization into the classical principal component analysis (PCA) framework. Double sparsity refers to constraining the variables with the ℓ2,0-norm and the ℓ0-norm simultaneously; combining these two different types of sparsity improves the accuracy of identifying discriminative features. The key idea is that the ℓ2,0-norm removes irrelevant and redundant features, while the ℓ0-norm filters out irregular noisy features, so the two norms complement each other to improve discrimination. An effective proximal alternating minimization method is proposed to solve the resulting nonconvex, nonsmooth model. Theoretically, we rigorously prove that the sequence generated by our method globally converges to a stationary point. Numerical experiments on three synthetic datasets and eight real-world datasets demonstrate the effectiveness, stability, and convergence of the proposed method. In particular, the average clustering accuracy (ACC) and normalized mutual information (NMI) improve by at least 3.34% and 3.02%, respectively, over state-of-the-art methods. Moreover, two common statistical tests and a new feature similarity metric verify the advantages of double sparsity. All results suggest that DSCOFS provides a new perspective for feature selection.
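To make the double sparsity idea concrete, the sketch below shows the two projections the abstract refers to: an ℓ2,0-norm constraint keeps at most s nonzero rows (whole features), while an ℓ0-norm constraint keeps at most k nonzero entries (hard thresholding). This is only an illustration of the two constraint types, not the authors' DSCOFS implementation; the function names and the toy matrix are assumptions for demonstration.

```python
import numpy as np

def project_l20(W, s):
    """Project W onto {W : ||W||_{2,0} <= s}: keep the s rows
    with the largest Euclidean norms and zero out the rest.
    Zeroed rows correspond to discarded features."""
    row_norms = np.linalg.norm(W, axis=1)
    keep = np.argsort(row_norms)[-s:]   # indices of the s largest rows
    out = np.zeros_like(W)
    out[keep] = W[keep]
    return out

def project_l0(W, k):
    """Project W onto {W : ||W||_0 <= k}: keep the k entries with
    the largest magnitudes (elementwise hard thresholding),
    which suppresses irregular noisy entries."""
    flat = np.abs(W).ravel()
    keep = np.argsort(flat)[-k:]        # indices of the k largest entries
    out = np.zeros_like(W).ravel()
    out[keep] = W.ravel()[keep]
    return out.reshape(W.shape)

# Toy example: a 5-feature x 3-component projection matrix
rng = np.random.default_rng(0)
W = rng.normal(size=(5, 3))
W_rows = project_l20(W, s=2)    # row sparsity: 2 features survive
W_elems = project_l0(W, k=4)    # element sparsity: 4 entries survive
print(np.count_nonzero(W_rows.any(axis=1)))  # 2
print(np.count_nonzero(W_elems))             # 4
```

In a proximal alternating minimization scheme of the kind the abstract describes, projections like these would be applied to the respective variables after each gradient step; here they are shown in isolation to highlight how the two sparsity types differ.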
