
Model Selection

Given a set of candidate models, the goal of model selection is to choose the one that best approximates the observed data and captures its underlying regularities. Model selection criteria are designed to strike a balance between goodness of fit and model complexity, which governs how well the chosen model generalizes.

Source: Kernel-based Information Criterion
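The fit-versus-complexity trade-off described above is what penalized information criteria formalize. As a minimal sketch (using the classical BIC rather than the kernel-based criterion cited as the source), the snippet below fits polynomial models of increasing degree to noisy quadratic data and selects the degree that minimizes BIC; the data-generating function and noise level are illustrative assumptions.

```python
import numpy as np

def bic(n, rss, k):
    # BIC under Gaussian residuals: n*ln(RSS/n) + k*ln(n).
    # The k*ln(n) term penalizes model complexity.
    return n * np.log(rss / n) + k * np.log(n)

# Synthetic data from a quadratic model (illustrative assumption)
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 100)
y = 1.0 + 2.0 * x - 1.5 * x**2 + rng.normal(scale=0.1, size=x.size)

# Candidate models: polynomials of degree 0..5
scores = {}
for degree in range(6):
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    rss = float(resid @ resid)
    scores[degree] = bic(x.size, rss, degree + 1)

# Higher degrees always lower the RSS, but the complexity
# penalty stops BIC from preferring them once fit saturates.
best = min(scores, key=scores.get)
print(best)
```

Because the penalty grows with the parameter count while the fit improvement from degrees above 2 is negligible here, the criterion recovers the true quadratic order rather than the best-fitting (highest-degree) model.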

Papers

Showing 1351–1400 of 2050 papers

Learning Dynamic Hierarchical Models for Anytime Scene Labeling
Learning for Multi-Model and Multi-Type Fitting
Learning from Domain Complexity
Learning from missing data with the Latent Block Model
Learning Gaussian Graphical Models via Multiplicative Weights
Learning high-dimensional probability distributions using tree tensor networks
Learning Latent Variable Models via Jarzynski-adjusted Langevin Algorithm
Learning manifold to regularize nonnegative matrix factorization
Learning of networked spreading models from noisy and incomplete data
Learning Sparse Neural Networks through L_0 Regularization
Learning Structural Kernels for Natural Language Processing
Learning the hypotheses space from data through a U-curve algorithm
Learning the Hypotheses Space from data: Learning Space and U-curve Property
Learning the Hypotheses Space from data Part II: Convergence and Feasibility
Learning the Markov order of paths in a network
Learning the Number of Autoregressive Mixtures in Time Series Using the Gap Statistics
Learning to Rank Pre-trained Vision-Language Models for Downstream Tasks
Learning under Singularity: An Information Criterion improving WBIC and sBIC
Learning Vine Copula Models For Synthetic Data Generation
Learning with many experts: model selection and sparsity
Learning with tree tensor networks: complexity estimates and model selection
Learning Word-Level Confidence For Subword End-to-End ASR
Least Angle Regression in Tangent Space and LASSO for Generalized Linear Models
(β, )-stability for cross-validation and the choice of the number of folds
Leveraging free energy in pretraining model selection for improved fine-tuning
Leveraging LLMs for MT in Crisis Scenarios: a blueprint for low-resource languages
LEVIS: Large Exact Verifiable Input Spaces for Neural Networks
Lexical Bias In Essay Level Prediction
Lifelong Bayesian Optimization
A Geometric Modeling of Occam's Razor in Deep Learning
Likelihood Adaptively Modified Penalties
Limits of Model Selection under Transfer Learning
LIMSI@CoNLL'17: UD Shared Task
Linear Bandits with Memory: from Rotting to Rising
Linearised Laplace Inference in Networks with Normalisation Layers and the Neural g-Prior
LLM4DS: Evaluating Large Language Models for Data Science Code Generation
LLMProxy: Reducing Cost to Access Large Language Models
LM-BIC Model Selection in Semiparametric Models
Logistic principal component analysis via non-convex singular value thresholding
Logits-Constrained Framework with RoBERTa for Ancient Chinese NER
Lookback for Learning to Branch
"Look Ma, No Hands!" A Parameter-Free Topic Model
Loss function based second-order Jensen inequality and its application to particle variational inference
Loss-guided Stability Selection
Lossy Compression with Distortion Constrained Optimization
Lower Bounds on Active Learning for Graphical Model Selection
Luria-Delbrück, revisited: The classic experiment does not rule out Lamarckian evolution
Machine Learning: a Lecture Note
Machine Learning-Assisted Analysis of Small Angle X-ray Scattering
Machine learning-based conditional mean filter: a generalization of the ensemble Kalman filter for nonlinear data assimilation
Page 28 of 41

No leaderboard results yet.