
Bayesian Optimisation

Optimising expensive black-box functions is a common problem in many disciplines, from tuning the hyperparameters of machine learning algorithms to robotics and other engineering design problems. Bayesian Optimisation is a principled, sample-efficient technique for the global optimisation of such functions. The idea behind Bayesian Optimisation is to place a prior distribution over the target function and then update that prior with a set of “true” observations, obtained by expensively evaluating the target function, to produce a posterior predictive distribution. The posterior then informs where to make the next observation of the target function through an acquisition function, which balances the exploitation of regions known to have good performance with the exploration of regions where there is little information about the function’s response.

Source: A Bayesian Approach for the Robust Optimisation of Expensive-to-Evaluate Functions
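The loop described above (fit a prior, observe, update to a posterior, pick the next point via an acquisition function) can be sketched in plain numpy. This is a minimal illustration, not the method of the cited paper: it assumes a squared-exponential kernel, the expected-improvement acquisition for minimisation, and a hypothetical 1D `target` function optimised over a fixed grid of candidates.

```python
import numpy as np
from math import erf

# Hypothetical expensive black-box target (illustrative only).
def target(x):
    return np.sin(3 * x) + 0.5 * x**2

def rbf_kernel(a, b, length=0.5):
    # Squared-exponential covariance between point sets a and b.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # Standard GP regression: posterior mean and variance at test points Xs,
    # given "true" observations (X, y) of the target function.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = np.diag(rbf_kernel(Xs, Xs) - Ks.T @ Kinv @ Ks)
    return mu, np.maximum(var, 1e-12)

def expected_improvement(mu, var, best):
    # EI for minimisation: low predicted mean rewards exploitation,
    # high predictive variance rewards exploration.
    sigma = np.sqrt(var)
    z = (best - mu) / sigma
    cdf = 0.5 * (1 + np.vectorize(erf)(z / np.sqrt(2)))
    pdf = np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)
    return (best - mu) * cdf + sigma * pdf

grid = np.linspace(-2, 2, 200)      # candidate points
X = np.array([-1.5, 0.0, 1.5])      # initial observations
y = target(X)
for _ in range(10):                 # BO loop: fit, acquire, evaluate
    mu, var = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, var, y.min()))]
    X = np.append(X, x_next)
    y = np.append(y, target(x_next))

print(f"best x = {X[np.argmin(y)]:.3f}, f(x) = {y.min():.3f}")
```

Each iteration spends one expensive evaluation at the point the acquisition function rates most promising, which is why Bayesian Optimisation typically needs far fewer evaluations than grid or random search.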

Papers

Showing 111–120 of 221 papers

| Title | Status | Hype |
| --- | --- | --- |
| Bayesian Optimisation for Constrained Problems | — | 0 |
| Benchmarking the Performance of Bayesian Optimization across Multiple Experimental Materials Science Domains | Code | 1 |
| AutoLRS: Automatic Learning-Rate Schedule by Bayesian Optimization on the Fly | Code | 1 |
| Bayesian Optimistic Optimisation with Exponentially Decaying Regret | — | 0 |
| How Bayesian Should Bayesian Optimisation Be? | Code | 0 |
| OCTIS: Comparing and Optimizing Topic models is Simple! | Code | 1 |
| Bayesian Optimisation for a Biologically Inspired Population Neural Network | — | 0 |
| What Makes an Effective Scalarising Function for Multi-Objective Bayesian Optimisation? | — | 0 |
| Approximate Bayesian inference from noisy likelihoods with Gaussian process emulated MCMC | — | 0 |
| Think Global and Act Local: Bayesian Optimisation over High-Dimensional Categorical and Mixed Search Spaces | Code | 1 |
