
Model Selection

Given a set of candidate models, the goal of Model Selection is to choose the model that best approximates the observed data and captures its underlying regularities. Model Selection criteria are defined to strike a balance between goodness of fit and model complexity (generalizability).

Source: Kernel-based Information Criterion
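The fit-versus-complexity trade-off described above is commonly operationalized with information criteria such as the BIC, which adds a complexity penalty that grows with the number of fitted parameters. The sketch below is illustrative only (it is not from the cited source): it fits polynomials of increasing degree to noisy synthetic data and selects the degree with the lowest BIC.

```python
import numpy as np

def bic(y, y_hat, k):
    """Bayesian Information Criterion for Gaussian residuals:
    n*log(RSS/n) + k*log(n). Lower is better; the k*log(n) term
    penalizes model complexity."""
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + k * np.log(n)

def select_polynomial_degree(x, y, max_degree=6):
    """Fit polynomials of degree 1..max_degree and return the degree
    whose BIC is lowest, i.e. the best fit/complexity trade-off."""
    scores = {}
    for d in range(1, max_degree + 1):
        coeffs = np.polyfit(x, y, d)
        y_hat = np.polyval(coeffs, x)
        scores[d] = bic(y, y_hat, k=d + 1)  # a degree-d fit has d+1 parameters
    best = min(scores, key=scores.get)
    return best, scores

# Synthetic data: cubic ground truth plus Gaussian noise.
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)
y = 1.0 + 2.0 * x - 0.5 * x**3 + rng.normal(scale=1.0, size=x.size)

best, scores = select_polynomial_degree(x, y)
```

Here a degree-2 fit underfits (poor goodness of fit) while degree 5 or 6 fits the noise; the BIC penalty steers the selection toward the true cubic. Other criteria (AIC, cross-validation, or kernel-based criteria as in the cited source) differ mainly in how the penalty term is defined.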

Papers

Showing 1–25 of 2050 papers

Title | Status | Hype
HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face | Code | 6
M-Prometheus: A Suite of Open Multilingual LLM Judges | Code | 5
aeon: a Python toolkit for learning from time series | Code | 5
MOSPAT: AutoML based Model Selection and Parameter Tuning for Time Series Anomaly Detection | Code | 5
TabReD: Analyzing Pitfalls and Filling the Gaps in Tabular Deep Learning Benchmarks | Code | 4
ChainForge: A Visual Toolkit for Prompt Engineering and LLM Hypothesis Testing | Code | 4
INTERS: Unlocking the Power of Large Language Models in Search with Instruction Tuning | Code | 3
Uni-QSAR: an Auto-ML Tool for Molecular Property Prediction | Code | 3
Router-R1: Teaching LLMs Multi-Round Routing and Aggregation via Reinforcement Learning | Code | 2
AD-AGENT: A Multi-agent Framework for End-to-end Anomaly Detection | Code | 2
FinTSB: A Comprehensive and Practical Benchmark for Financial Time Series Forecasting | Code | 2
Optimizing Model Selection for Compound AI Systems | Code | 2
Foundational Large Language Models for Materials Research | Code | 2
LHRS-Bot-Nova: Improved Multimodal Large Language Model for Remote Sensing Vision-Language Interpretation | Code | 2
BSD: a Bayesian framework for parametric models of neural spectra | Code | 2
Peeling Back the Layers: An In-Depth Evaluation of Encoder Architectures in Neural News Recommenders | Code | 2
Source-Free Domain Adaptation for YOLO Object Detection | Code | 2
Encoder vs Decoder: Comparative Analysis of Encoder and Decoder Language Models on Multilingual NLU Tasks | Code | 2
The future of cosmological likelihood-based inference: accelerated high-dimensional parameter estimation and model comparison | Code | 2
The CAST package for training and assessment of spatial prediction models in R | Code | 2
Idea23D: Collaborative LMM Agents Enable 3D Model Generation from Interleaved Multimodal Inputs | Code | 2
LoTa-Bench: Benchmarking Language-oriented Task Planners for Embodied Agents | Code | 2
Efficient and Effective Time-Series Forecasting with Spiking Neural Networks | Code | 2
Specializing Smaller Language Models towards Multi-Step Reasoning | Code | 2
Out-of-sample scoring and automatic selection of causal estimators | Code | 2

No leaderboard results yet.