SOTA Verified

Hyperparameter Optimization

Hyperparameter optimization is the problem of choosing a set of optimal hyperparameters for a learning algorithm. Hyperparameters determine how well the algorithm suits the data and directly influence whether a model overfits or underfits. Different types of data, and different loss functions, call for different model assumptions, weightings, and training speeds, so the best hyperparameter values vary from problem to problem.

Source: Data-driven model for fracturing design optimization: focus on building digital database and production forecast
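The search for good hyperparameters described above is often carried out with simple strategies such as grid or random search. A minimal sketch of random search is below; `validation_loss`, the hyperparameter names `lr` and `reg`, and their search ranges are illustrative assumptions, standing in for actually training a model and measuring its validation error.

```python
import random

def validation_loss(lr, reg):
    # Hypothetical stand-in for training a model with the given
    # hyperparameters and returning its validation loss; a real
    # HPO loop would fit and evaluate the model here.
    return (lr - 0.1) ** 2 + (reg - 0.01) ** 2

def random_search(n_trials=200, seed=0):
    """Sample hyperparameters at random and keep the best trial."""
    rng = random.Random(seed)
    best_loss, best_params = None, None
    for _ in range(n_trials):
        lr = 10 ** rng.uniform(-4, 0)   # learning rate, log-uniform in [1e-4, 1]
        reg = 10 ** rng.uniform(-4, 0)  # regularization strength, log-uniform
        loss = validation_loss(lr, reg)
        if best_loss is None or loss < best_loss:
            best_loss, best_params = loss, {"lr": lr, "reg": reg}
    return best_loss, best_params

if __name__ == "__main__":
    loss, params = random_search()
    print(loss, params)
```

Sampling on a log scale is the usual choice for hyperparameters like learning rates, whose plausible values span several orders of magnitude; many of the papers listed below study more sample-efficient alternatives (Bayesian optimization, successive halving, gradient-based tuning).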

Papers

Showing 476-500 of 813 papers

Title (each paper below is listed with an empty Status and a Hype score of 0):

Scheduling the Learning Rate Via Hypergradients: New Insights and a New Algorithm
Scientific machine learning in ecological systems: A study on the predator-prey dynamics
Scilab-RL: A software framework for efficient reinforcement learning and cognitive modeling research
Searching in the Forest for Local Bayesian Optimization
Selecting for Less Discriminatory Algorithms: A Relational Search Framework for Navigating Fairness-Accuracy Trade-offs in Practice
Self-adaptive PSRO: Towards an Automatic Population-based Game Solver
Sentence Transformers and Bayesian Optimization for Adverse Drug Effect Detection from Twitter
Sequential vs. Integrated Algorithm Selection and Configuration: A Case Study for the Modular CMA-ES
SHADHO: Massively Scalable Hardware-Aware Distributed Hyperparameter Optimization
Short-answer scoring with ensembles of pretrained language models
SigOpt Mulch: An Intelligent System for AutoML of Gradient Boosted Trees
Simple and Effective Gradient-Based Tuning of Sequence-to-Sequence Models
Simple and Scalable Parallelized Bayesian Optimization
Simple Hack for Transformers against Heavy Long-Text Classification on a Time- and Memory-Limited GPU Service
Skip Connections in Spiking Neural Networks: An Analysis of Their Effect on Network Training
Software Engineering for Fairness: A Case Study with Hyperparameter Optimization
Speeding up the Hyperparameter Optimization of Deep Convolutional Neural Networks
Spend More to Save More (SM2): An Energy-Aware Implementation of Successive Halving for Sustainable Hyperparameter Optimization
Stacking ensemble with parsimonious base models to improve generalization capability in the characterization of steel bolted components
Statistical Mechanics of Dynamical System Identification
Strategies for Optimizing End-to-End Artificial Intelligence Pipelines on Intel Xeon Processors
Structuring a Training Strategy to Robustify Perception Models with Realistic Image Augmentations
Tübingen-Oslo at SemEval-2018 Task 2: SVMs perform better than RNNs in Emoji Prediction
Takeuchi's Information Criteria as Generalization Measures for DNNs Close to NTK Regime
Target Variable Engineering
Page 20 of 33

No leaderboard results yet.