SOTAVerified

Bayesian Optimisation

Expensive-to-evaluate black-box functions arise in many disciplines, from tuning the hyperparameters of machine learning algorithms to robotics and other engineering design problems. Bayesian Optimisation is a principled and efficient technique for the global optimisation of such functions. The idea is to place a prior distribution over the target function and update that prior with a set of “true” observations, obtained by expensively evaluating the function, to produce a posterior predictive distribution. The posterior then informs where to make the next observation through an acquisition function, which balances exploitation of regions known to perform well against exploration of regions where little is known about the function’s response.

Source: A Bayesian Approach for the Robust Optimisation of Expensive-to-Evaluate Functions
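The loop described above can be sketched in a minimal, self-contained way with NumPy alone: a Gaussian-process prior (RBF kernel), a posterior conditioned on observations, and expected improvement as the acquisition function, maximised here by simple grid search. The target function, kernel length-scale, noise jitter, and grid size are illustrative assumptions, not the method of the cited paper.

```python
import numpy as np
from math import erf


def f(x):
    """Stand-in for an expensive black-box function; minimise over [0, 1]."""
    return (x - 0.6) ** 2


def rbf_kernel(A, B, length=0.2):
    # Squared-exponential kernel; length-scale 0.2 is an assumed value.
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / length) ** 2)


def gp_posterior(X, y, Xs, noise=1e-6):
    """Posterior mean and std of a zero-mean GP at test points Xs."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))   # jitter for stability
    L = np.linalg.cholesky(K)
    Ks = rbf_kernel(X, Xs)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - np.sum(v ** 2, axis=0)              # k(x,x) = 1 for RBF
    return mu, np.sqrt(np.maximum(var, 1e-12))


def Phi(z):  # standard normal CDF, vectorised via math.erf
    return 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))


def phi(z):  # standard normal PDF
    return np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)


def bayes_opt(f, n_iters=8):
    X = np.array([0.05, 0.5, 0.95])                 # initial design
    y = f(X)
    cand = np.linspace(0.0, 1.0, 201)               # acquisition grid
    for _ in range(n_iters):
        mu, sigma = gp_posterior(X, y, cand)
        best = y.min()
        z = (best - mu) / sigma
        # Expected improvement for minimisation: exploit low mean,
        # explore high posterior uncertainty.
        ei = (best - mu) * Phi(z) + sigma * phi(z)
        x_next = cand[np.argmax(ei)]
        X = np.append(X, x_next)
        y = np.append(y, f(x_next))                 # expensive evaluation
    i = np.argmin(y)
    return X[i], y[i]
```

Running `bayes_opt(f)` concentrates evaluations near the minimiser at x = 0.6 after a handful of iterations; in practice the grid search would be replaced by a continuous optimiser of the acquisition function.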

Papers

Showing 141–150 of 221 papers

Title | Status | Hype
Optimal Use of Multi-spectral Satellite Data with Convolutional Neural Networks | — | 0
Sequential Subspace Search for Functional Bayesian Optimization Incorporating Experimenter Intuition | — | 0
Sub-linear Regret Bounds for Bayesian Optimisation in Unknown Search Spaces | — | 0
Preferential Bayesian optimisation with Skew Gaussian Processes | — | 0
Bayesian Optimization for Developmental Robotics with Meta-Learning by Parameters Bounds Reduction | — | 0
Automatic Tuning of Stochastic Gradient Descent with Bayesian Optimisation | — | 0
Randomised Gaussian Process Upper Confidence Bound for Bayesian Optimisation | Code | 0
Bayesian Optimisation vs. Input Uncertainty Reduction | — | 0
BOP-Elites, a Bayesian Optimisation algorithm for Quality-Diversity search | — | 0
What do you Mean? The Role of the Mean Function in Bayesian Optimisation | Code | 0
