
On the Complexity of Labeled Datasets

2019-11-13

Rodrigo Fernandes de Mello


Abstract

Statistical Learning Theory (SLT) provides the foundation to ensure that a supervised algorithm generalizes the mapping f: X → Y, given that f is selected from its search space bias F. SLT depends on the Shattering coefficient function N(F, n) to upper bound the empirical risk minimization principle, from which one can estimate the necessary training sample size to ensure probabilistic learning convergence and, most importantly, characterize the capacity of F, including its underfitting and overfitting abilities while addressing specific target problems. However, the analytical solution of the Shattering coefficient has remained an open problem since the first studies by Vapnik and Chervonenkis in 1962. In this paper, we address it for specific datasets by employing equivalence relations from Topology, the data separability results of Har-Peled and Jones, and combinatorics. Our approach computes the Shattering coefficient for both binary and multi-class datasets, leading to the following additional contributions: (i) the estimation of the number of hyperplanes required in the worst- and best-case classification scenarios, together with the respective Ω and O complexities; (ii) the estimation of the training sample sizes required to ensure supervised learning; and (iii) the comparison of dataset embeddings, given that they (re)organize samples into some new space configuration. All results introduced and discussed throughout this paper are supported by the R package shattering (https://cran.r-project.org/web/packages/shattering).
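To make the role of the Shattering coefficient concrete, the sketch below uses two standard SLT tools rather than the paper's own construction: the Sauer–Shelah upper bound on N(F, n) for a class of VC dimension d, and Vapnik's classical uniform-convergence bound 4·N(F, 2n)·exp(−ε²n/8), searched for a sample size n at which the bound drops below a confidence level δ. The function names and the choice of bound are illustrative assumptions, not the method or API of the shattering package.

```python
from math import comb, exp

def sauer_bound(n, d):
    """Sauer-Shelah upper bound on the Shattering coefficient N(F, n)
    for a hypothesis class F with VC dimension d (illustrative stand-in
    for the dataset-specific coefficient computed in the paper)."""
    return sum(comb(n, i) for i in range(min(n, d) + 1))

def sufficient_sample_size(d, eps, delta, n_max=10**7):
    """Return a sample size n (found by doubling, so not necessarily the
    smallest) at which Vapnik's classical bound
        P(sup_f |R(f) - R_emp(f)| > eps) <= 4 * N(F, 2n) * exp(-eps^2 * n / 8)
    falls below delta, using the Sauer-Shelah bound in place of N(F, 2n)."""
    n = 1
    while n <= n_max:
        bound = 4 * sauer_bound(2 * n, d) * exp(-eps**2 * n / 8)
        if bound <= delta:
            return n
        n *= 2
    return None  # bound not met within n_max

# Example: a class of VC dimension 3 (e.g. affine hyperplanes in R^2),
# deviation eps = 0.1, confidence delta = 0.05.
n_suff = sufficient_sample_size(d=3, eps=0.1, delta=0.05)
print(sauer_bound(10, 3), n_suff)
```

Because the Sauer–Shelah bound grows only polynomially in n while the exponential term decays, the bound always crosses δ for fixed d, which is the probabilistic learning convergence the abstract refers to; the paper's contribution is replacing the generic VC-based bound with a Shattering coefficient computed for the dataset at hand, which yields tighter sample-size estimates.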
