SOTAVerified

Learning Theory

Papers

Showing 251-300 of 852 papers

Title | Status | Hype
Graph Neural Networks Provably Benefit from Structural Information: A Feature Learning Perspective | | 0
Scaling MLPs: A Tale of Inductive Bias | Code | 1
A primal-dual data-driven method for computational optical imaging with a photonic lantern | Code | 0
Exact Count of Boundary Pieces of ReLU Classifiers: Towards the Proper Complexity Measure for Classification | | 0
Learning to Defend by Attacking (and Vice-Versa): Transfer of Learning in Cybersecurity Games | | 0
On the Sample Complexity of Imitation Learning for Smoothed Model Predictive Control | | 0
Improving Energy Conserving Descent for Machine Learning: Theory and Practice | Code | 0
Hypothesis Transfer Learning with Surrogate Classification Losses: Generalization Bounds through Algorithmic Stability | | 0
Benign Overfitting in Deep Neural Networks under Lazy Training | | 0
Embedding Inequalities for Barron-type Spaces | | 0
How Does Information Bottleneck Help Deep Learning? | Code | 1
Improving Generalization of Complex Models under Unbounded Loss Using PAC-Bayes Bounds | | 0
How many samples are needed to leverage smoothness? | | 0
Data-driven Mixed Integer Optimization through Probabilistic Multi-variable Branching | | 0
Uniform-in-Time Wasserstein Stability Bounds for (Noisy) Stochastic Gradient Descent | | 0
How Spurious Features Are Memorized: Precise Analysis for Random and NTK Features | Code | 0
Supervised learning with probabilistic morphisms and kernel mean embeddings | | 0
Learnability with Time-Sharing Computational Resource Concerns | | 0
The ART of Transfer Learning: An Adaptive and Robust Pipeline | | 0
Optimal PAC Bounds Without Uniform Convergence | | 0
Challenges of learning multi-scale dynamics with AI weather models: Implications for stability and one solution | Code | 0
Depth Separation with Multilayer Mean-Field Networks | | 0
Bayesian Free Energy of Deep ReLU Neural Network in Overparametrized Cases | | 0
Double Descent Demystified: Identifying, Interpreting & Ablating the Sources of a Deep Learning Puzzle | Code | 1
Type-II Saddles and Probabilistic Stability of Stochastic Gradient Descent | | 0
Uniform Risk Bounds for Learning with Dependent Data Sequences | | 0
Lower Generalization Bounds for GD and SGD in Smooth Stochastic Convex Optimization | | 0
Distribution-free Deviation Bounds and The Role of Domain Knowledge in Learning via Model Selection with Cross-validation Risk Estimation | | 0
ERUDITE: Human-in-the-Loop IoT for an Adaptive Personalized Learning System | | 0
Margin theory for the scenario-based approach to robust optimization in high dimension | | 0
Near Optimal Memory-Regret Tradeoff for Online Learning | | 0
Learning curves for deep structured Gaussian feature models | | 0
Exponential Hardness of Reinforcement Learning with Linear Function Approximation | | 0
Generative Models of Huge Objects | | 0
New Guarantees for Learning Revenue Maximizing Menus of Lotteries and Two-Part Tariffs | | 0
Heterogeneous Neuronal and Synaptic Dynamics for Spike-Efficient Unsupervised Learning: Theory and Design Principles | | 0
Kernel-Based Distributed Q-Learning: A Scalable Reinforcement Learning Approach for Dynamic Treatment Regimes | | 0
Statistical Inference for Linear Functionals of Online SGD in High-dimensional Linear Regression | | 0
Stability-based Generalization Analysis for Mixtures of Pointwise and Pairwise Learning | | 0
Computational Complexity of Learning Neural Networks: Smoothness and Degeneracy | | 0
Quantum Learning Theory Beyond Batch Binary Classification | | 0
Variational Bayesian Neural Networks via Resolution of Singularities | Code | 0
On the Complexity of Computing Gödel Numbers | | 0
Generalization Bounds with Data-dependent Fractal Dimensions | Code | 0
Beyond Statistical Similarity: Rethinking Metrics for Deep Generative Models in Engineering Design | | 0
A Comprehensive Survey of Continual Learning: Theory, Method and Application | Code | 1
Compression, Generalization and Learning | | 0
Sampling-based Nyström Approximation and Kernel Quadrature | Code | 0
Learning stability of partially observed switched linear systems | | 0
Stretched and measured neural predictions of complex network dynamics | | 0
Page 6 of 18

No leaderboard results yet.