
RoBoSS: A Robust, Bounded, Sparse, and Smooth Loss Function for Supervised Learning

2023-09-05

Mushir Akhtar, M. Tanveer, Mohd. Arshad


Abstract

In the domain of machine learning algorithms, the significance of the loss function is paramount, especially in supervised learning tasks. It serves as a fundamental pillar that profoundly influences the behavior and efficacy of supervised learning algorithms. Traditional loss functions, while widely used, often struggle to handle noisy and high-dimensional data, impede model interpretability, and lead to slow convergence during training. In this paper, we address the aforementioned constraints by proposing a novel robust, bounded, sparse, and smooth (RoBoSS) loss function for supervised learning. Further, we incorporate the RoBoSS loss function within the framework of the support vector machine (SVM) and introduce a new robust algorithm named L_rbss-SVM. For the theoretical analysis, the classification-calibrated property and generalization ability are also presented. These investigations are crucial for gaining deeper insights into the performance of the RoBoSS loss function in classification tasks and its potential to generalize well to unseen data. To empirically demonstrate the effectiveness of the proposed L_rbss-SVM, we evaluate it on 88 real-world UCI and KEEL datasets from diverse domains. Additionally, to exemplify the effectiveness of the proposed L_rbss-SVM within the biomedical realm, we evaluate it on two medical datasets: the electroencephalogram (EEG) signal dataset and the breast cancer (BreaKHis) dataset. The numerical results substantiate the superiority of the proposed L_rbss-SVM model, both in terms of its remarkable generalization performance and its efficiency in training time.
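The abstract does not state the RoBoSS formula itself, but the general idea of swapping a bounded, smooth surrogate into the SVM objective can be sketched. The snippet below is a minimal illustration, not the paper's actual loss: it uses a generic bounded smooth surrogate `1 - exp(-a * hinge^2)` (an assumption for demonstration purposes only) inside a gradient-descent linear classifier, to show how saturation caps the influence of label-noise outliers.

```python
import numpy as np

def bounded_smooth_loss(margin, a=1.0):
    """Illustrative bounded, smooth surrogate (NOT the RoBoSS formula):
    1 - exp(-a * hinge^2). Zero for margins >= 1, smooth everywhere,
    and saturating at 1, so a single outlier cannot dominate training."""
    hinge = np.maximum(0.0, 1.0 - margin)
    return 1.0 - np.exp(-a * hinge ** 2)

def fit_linear(X, y, a=1.0, C=1.0, lr=0.1, epochs=500):
    """Minimize 0.5*||w||^2 + C * mean(loss) by full-batch gradient descent."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        margin = y * (X @ w + b)
        hinge = np.maximum(0.0, 1.0 - margin)
        # d(loss)/d(margin) = -2*a*hinge*exp(-a*hinge^2); chain rule via margin = y*(w.x + b)
        dL_dm = -2.0 * a * hinge * np.exp(-a * hinge ** 2)
        grad_w = w + C * ((dL_dm * y) @ X) / n
        grad_b = C * np.mean(dL_dm * y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy data: two separable blobs plus one deliberately mislabeled outlier.
rng = np.random.default_rng(0)
X_pos = rng.normal(loc=2.0, scale=0.5, size=(50, 2))
X_neg = rng.normal(loc=-2.0, scale=0.5, size=(50, 2))
X = np.vstack([X_pos, X_neg, [[2.5, 2.5]]])
y = np.concatenate([np.ones(50), -np.ones(50), [-1.0]])  # last label flipped

w, b = fit_linear(X, y)
preds = np.sign(X[:100] @ w + b)          # accuracy on the clean points
acc = np.mean(preds == y[:100])
losses = bounded_smooth_loss(y * (X @ w + b))
```

Because the surrogate saturates, the mislabeled point contributes at most a loss of 1, so the decision boundary is still driven by the clean blobs; an unbounded loss such as the plain hinge would let the outlier pull the boundary further.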
