Regularized deep learning with nonconvex penalties
2019-09-11
Sujit Vettam, Majnu John
Abstract
Regularization methods are often employed in deep neural networks (DNNs) to prevent overfitting. For penalty-based DNN regularization methods, convex penalties are typically considered because of their optimization guarantees. Recent theoretical work has shown that nonconvex penalties satisfying certain regularity conditions are also guaranteed to perform well with standard optimization algorithms. In this paper, we examine new and currently existing nonconvex penalties for DNN regularization. We provide theoretical justifications for the new penalties and also assess the performance of all penalties with DNN analyses of seven datasets.
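To make the penalty-based setup concrete, here is a minimal sketch of how a nonconvex penalty can be added to a training objective. The log penalty used below is one classical nonconvex alternative to the convex L1 penalty; it is an illustrative choice, not necessarily one of the penalties proposed in the paper, and the function names and the scale parameter `a` are assumptions.

```python
import numpy as np

def log_penalty(w, a=1.0):
    # Nonconvex log penalty: sum(log(1 + |w|/a)).
    # Illustrative example; the paper's specific penalties may differ.
    return np.sum(np.log1p(np.abs(w) / a))

def regularized_loss(data_loss, weights, lam=1e-3, a=1.0):
    # Total objective = data-fitting loss + lambda * penalty,
    # summed over every weight array of the network.
    return data_loss + lam * sum(log_penalty(w, a) for w in weights)

# Toy usage: one weight array, unregularized loss of 0.5.
weights = [np.array([0.0, 1.0, -1.0])]
total = regularized_loss(0.5, weights, lam=0.1, a=1.0)
```

During training, this regularized objective (rather than the data loss alone) is what the optimizer minimizes; the nonconvexity of the penalty is what the paper's regularity conditions address.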