
Model Selection

Given a set of candidate models, the goal of Model Selection is to choose the model that best approximates the observed data and captures its underlying regularities. Model Selection criteria are defined to strike a balance between goodness of fit and model complexity, so that the selected model generalizes beyond the observed data rather than merely fitting it.

Source: Kernel-based Information Criterion
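The fit-versus-complexity trade-off described above is what classical information criteria such as AIC and BIC formalize: both reward low residual error and penalize the number of free parameters. A minimal sketch with NumPy, comparing polynomial fits on noisy linear data (the data, degrees, and helper function here are illustrative assumptions, not from the source):

```python
import numpy as np

def aic_bic(y, y_pred, k):
    """AIC and BIC for a Gaussian model with k free parameters.

    Uses the Gaussian log-likelihood up to an additive constant,
    which is sufficient for comparing models on the same data.
    """
    n = len(y)
    rss = np.sum((y - y_pred) ** 2)   # residual sum of squares
    ll_term = n * np.log(rss / n)     # -2 * log-likelihood, up to a constant
    aic = ll_term + 2 * k             # complexity penalty: 2 per parameter
    bic = ll_term + k * np.log(n)     # stronger penalty for larger n
    return aic, bic

# Noisy samples from a linear ground truth (illustrative example).
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 100)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=x.size)

# Higher-degree polynomials fit the noise better (lower RSS),
# but the complexity penalty should favor the simple model.
scores = {}
for degree in (1, 3, 9):
    coeffs = np.polyfit(x, y, degree)
    y_pred = np.polyval(coeffs, x)
    k = degree + 1  # one coefficient per polynomial term
    scores[degree] = aic_bic(y, y_pred, k)

best = min(scores, key=lambda d: scores[d][1])  # lowest BIC wins
```

Here BIC selects the degree-1 model: the tiny reduction in residual error from the degree-3 and degree-9 fits does not offset their per-parameter penalty of log(100) ≈ 4.6.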

Papers

Showing 51–75 of 2,050 papers

| Title | Status | Hype |
| --- | --- | --- |
| When Heterophily Meets Heterogeneity: Challenges and a New Large-Scale Graph Benchmark | Code | 1 |
| Team up GBDTs and DNNs: Advancing Efficient and Effective Tabular Prediction with Tree-hybrid MLPs | Code | 1 |
| AutoBencher: Creating Salient, Novel, Difficult Datasets for Language Models | Code | 1 |
| A Data-Centric Perspective on Evaluating Machine Learning Models for Tabular Data | Code | 1 |
| Statistical Uncertainty in Word Embeddings: GloVe-V | Code | 1 |
| A Scoping Review of Earth Observation and Machine Learning for Causal Inference: Implications for the Geography of Poverty | Code | 1 |
| Movie Revenue Prediction using Machine Learning Models | Code | 1 |
| Benchmark Self-Evolving: A Multi-Agent Framework for Dynamic LLM Evaluation | Code | 1 |
| GeoGalactica: A Scientific Large Language Model in Geoscience | Code | 1 |
| A General Model for Aggregating Annotations Across Simple, Complex, and Multi-Object Annotation Tasks | Code | 1 |
| BIVDiff: A Training-Free Framework for General-Purpose Video Synthesis via Bridging Image and Video Diffusion Models | Code | 1 |
| Machine-Guided Discovery of a Real-World Rogue Wave Model | Code | 1 |
| BarcodeBERT: Transformers for Biodiversity Analysis | Code | 1 |
| RoboLLM: Robotic Vision Tasks Grounded on Multimodal Large Language Models | Code | 1 |
| Towards Robust Multi-Modal Reasoning via Model Selection | Code | 1 |
| Rethinking Model Selection and Decoding for Keyphrase Generation with Pre-trained Sequence-to-Sequence Models | Code | 1 |
| Towards Last-layer Retraining for Group Robustness with Fewer Annotations | Code | 1 |
| Saturn: An Optimized Data System for Large Model Deep Learning Workloads | Code | 1 |
| Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers | Code | 1 |
| LCE: An Augmented Combination of Bagging and Boosting in Python | Code | 1 |
| Foundation Model is Efficient Multimodal Multitask Model Selector | Code | 1 |
| Cal-SFDA: Source-Free Domain-adaptive Semantic Segmentation with Differentiable Expected Calibration Error | Code | 1 |
| Self-Compatibility: Evaluating Causal Discovery without Ground Truth | Code | 1 |
| Deep learning for dynamic graphs: models and benchmarks | Code | 1 |
| ProbVLM: Probabilistic Adapter for Frozen Vision-Language Models | Code | 1 |
Page 3 of 82

No leaderboard results yet.