Why and When Can Deep -- but Not Shallow -- Networks Avoid the Curse of Dimensionality: a Review
2016-11-02
Tomaso Poggio, Hrushikesh Mhaskar, Lorenzo Rosasco, Brando Miranda, Qianli Liao
Abstract
The paper characterizes classes of functions for which deep learning can be exponentially better than shallow learning. Deep convolutional networks are a special case of these conditions, though weight sharing is not the main reason for their exponential advantage over shallow architectures.