SOTAVerified

Bayesian Optimisation

Expensive-to-evaluate black-box functions arise in many disciplines, from tuning the parameters of machine learning algorithms to robotics and other engineering design problems. Bayesian Optimisation is a principled and efficient technique for the global optimisation of such functions. The idea is to place a prior distribution over the target function and then update that prior with a set of "true" observations, obtained by expensively evaluating the target function, to produce a posterior predictive distribution. The posterior then informs where to make the next observation through an acquisition function, which balances the exploitation of regions known to perform well with the exploration of regions where little is known about the function's response.

Source: A Bayesian Approach for the Robust Optimisation of Expensive-to-Evaluate Functions
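The prior-update-acquire loop described above can be sketched in a few dozen lines. This is a minimal illustration, not the method of the cited paper: it assumes a zero-mean Gaussian-process prior with a fixed RBF kernel, a made-up 1-D target with its minimum at x = 0.6, and Expected Improvement maximised over a grid of candidates.

```python
import math
import numpy as np

# Hypothetical expensive black-box target (assumed for illustration);
# its true minimum is at x = 0.6.
def target(x):
    return (x - 0.6) ** 2

def rbf_kernel(a, b, length_scale=0.2):
    """Squared-exponential (RBF) kernel matrix between two 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_obs, y_obs, x_query, jitter=1e-6):
    """Posterior predictive mean/std of a zero-mean GP prior after
    conditioning on the observations (the prior-to-posterior update)."""
    K = rbf_kernel(x_obs, x_obs) + jitter * np.eye(len(x_obs))
    K_s = rbf_kernel(x_obs, x_query)
    K_inv = np.linalg.inv(K)
    mu = K_s.T @ K_inv @ y_obs
    var = 1.0 - np.sum((K_inv @ K_s) * K_s, axis=0)  # diag of K_s^T K^-1 K_s
    return mu, np.sqrt(np.clip(var, 1e-12, None))

_pdf = lambda z: np.exp(-0.5 * z ** 2) / math.sqrt(2 * math.pi)
_cdf = np.vectorize(lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2))))

def expected_improvement(mu, sigma, y_best):
    """EI acquisition for minimisation: large where the posterior mean is
    low (exploitation) or the posterior std is high (exploration)."""
    z = (y_best - mu) / sigma
    return (y_best - mu) * _cdf(z) + sigma * _pdf(z)

grid = np.linspace(0.0, 1.0, 201)       # candidate locations
x_obs = np.array([0.1, 0.5, 0.9])       # initial design
y_obs = target(x_obs)

for _ in range(10):                     # BO loop: fit, acquire, evaluate
    mu, sigma = gp_posterior(x_obs, y_obs, grid)
    ei = expected_improvement(mu, sigma, y_obs.min())
    x_next = grid[np.argmax(ei)]        # next "true" observation
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, target(x_next))

best_x = x_obs[np.argmin(y_obs)]
best_y = y_obs.min()
```

Each iteration spends one expensive evaluation where EI is highest, so the search concentrates near the minimum while still probing uncertain regions; practical implementations also fit the kernel hyperparameters and use a proper optimiser for the acquisition step.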

Papers

Showing 201–210 of 221 papers

Title | Status | Hype
Intrinsic Bayesian Optimisation on Complex Constrained Domain | | 0
Accelerated Bayesian Optimization through Weight-Prior Tuning | | 0
Large Language Models for Human-Machine Collaborative Particle Accelerator Tuning through Natural Language | | 0
Large Language Models Orchestrating Structured Reasoning Achieve Kaggle Grandmaster Level | | 0
Learning to Explore with Pleasure | | 0
Learning to Race through Coordinate Descent Bayesian Optimisation | | 0
Long-run Behaviour of Multi-fidelity Bayesian Optimisation | | 0
Machine Learning-Assisted Discovery of Flow Reactor Designs | | 0
Machine Learning-based Regional Cooling Demand Prediction with Optimised Dataset Partitioning | | 0
Maximizing Uncertainty for Federated learning via Bayesian Optimisation-based Model Poisoning | | 0
Page 21 of 23

No leaderboard results yet.