
Off-policy evaluation

Off-Policy Evaluation (OPE), or offline evaluation more generally, estimates the performance of hypothetical policies using only offline log data. It is particularly useful in applications where online interaction is high-stakes or expensive, such as precision medicine and recommender systems.
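As a concrete illustration (not tied to any particular paper listed below), here is a minimal sketch of the inverse propensity scoring (IPS) estimator, the basic OPE building block for bandit logs. All function and variable names are illustrative assumptions, not from the source.

```python
import numpy as np


def ips_value(rewards, pi_e, pi_b):
    """IPS estimate of a target policy's value from logged bandit data.

    rewards : observed rewards r_t for the logged actions
    pi_e    : target-policy probabilities pi_e(a_t | x_t)
    pi_b    : logging-policy propensities pi_b(a_t | x_t)
    """
    w = pi_e / pi_b  # importance weights reweight logged rewards
    return float(np.mean(w * rewards))


# Toy usage: 5 logged rounds with known logging propensities.
rewards = np.array([1.0, 0.0, 1.0, 0.0, 1.0])
pi_b = np.array([0.5, 0.5, 0.25, 0.25, 0.5])  # logging policy
pi_e = np.array([0.9, 0.1, 0.8, 0.2, 0.9])    # hypothetical policy
print(ips_value(rewards, pi_e, pi_b))
```

IPS is unbiased when the logging propensities are known and nonzero wherever the target policy acts, but its variance grows with the importance weights; a self-normalized variant divides by the mean weight to reduce variance at the cost of a small bias.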

Papers

Showing 191–200 of 265 papers

| Title | Status | Hype |
| --- | --- | --- |
| Reliable Off-policy Evaluation for Reinforcement Learning | | 0 |
| Harnessing Distribution Ratio Estimators for Learning Agents with Quality and Diversity | Code | 0 |
| Deep Jump Learning for Off-Policy Evaluation in Continuous Treatment Settings | Code | 0 |
| Off-Policy Interval Estimation with Lipschitz Value Iteration | | 0 |
| Off-Policy Evaluation of Bandit Algorithm from Dependent Samples under Batch Update Policy | | 0 |
| A Practical Guide of Off-Policy Evaluation for Bandit Problems | | 0 |
| CoinDICE: Off-Policy Confidence Interval Estimation | | 0 |
| Optimal Off-Policy Evaluation from Multiple Logging Policies | Code | 1 |
| Deep Jump Q-Evaluation for Offline Policy Evaluation in Continuous Action Space | | 0 |
| Open Bandit Dataset and Pipeline: Towards Realistic and Reproducible Off-Policy Evaluation | Code | 1 |
