
A Generalization Bound for Nearly-Linear Networks

2024-07-09

Eugene Golikov


Abstract

We consider nonlinear networks as perturbations of linear ones. Based on this approach, we present novel generalization bounds that become non-vacuous for networks that are close to linear. The main advantage over previous works that propose non-vacuous generalization bounds is that our bounds are a priori: evaluating them does not require performing the actual training. To the best of our knowledge, these are the first non-vacuous generalization bounds for neural nets possessing this property.
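To illustrate the viewpoint of a nonlinear network as a perturbation of a linear one (a toy sketch, not code from the paper), consider a two-layer net whose activation is the identity plus a small nonlinear term, `phi(z) = z + eps * tanh(z)`. As `eps → 0` the net collapses to the linear map `W2 W1`, and one can empirically measure how far the output deviates from that linear part. All names and the choice of perturbation here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: two-layer net with a nearly-linear activation.
d, h, k = 10, 50, 1
W1 = rng.standard_normal((h, d)) / np.sqrt(d)
W2 = rng.standard_normal((k, h)) / np.sqrt(h)

def net(x, eps):
    """Network with activation phi(z) = z + eps * tanh(z)."""
    z = W1 @ x
    return W2 @ (z + eps * np.tanh(z))

def linear_net(x):
    """The eps -> 0 limit: a purely linear map W2 @ W1."""
    return W2 @ (W1 @ x)

# Measure the worst-case relative deviation from the linear part
# over a batch of random inputs, for several perturbation strengths.
xs = rng.standard_normal((100, d))
for eps in (0.0, 0.1, 0.5):
    dev = max(
        np.linalg.norm(net(x, eps) - linear_net(x))
        / max(np.linalg.norm(linear_net(x)), 1e-12)
        for x in xs
    )
    print(f"eps={eps}: max relative deviation {dev:.3f}")
```

For small `eps` the deviation is small, which is the regime where, by the abstract's claim, bounds of this kind would become non-vacuous.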
