
Off-policy evaluation

Off-policy evaluation (OPE), or offline evaluation more generally, estimates the performance of hypothetical policies using only logged offline data. It is particularly useful in applications where online interaction is high-stakes or expensive, such as precision medicine and recommender systems.
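For context, a minimal sketch of one standard OPE baseline, the inverse propensity scoring (IPS) estimator; the function and the logged data below are hypothetical illustrations, not taken from any paper listed on this page.

import numpy as np

def ips_estimate(rewards, logging_propensities, target_propensities):
    # Importance weights re-weight rewards logged under the logging policy
    # toward the target (evaluation) policy.
    weights = target_propensities / logging_propensities
    # Unbiased when the logged propensities are correct and the target
    # policy's actions have support under the logging policy.
    return float(np.mean(weights * rewards))

# Hypothetical logged bandit data: observed rewards, plus the probability of
# each logged action under the logging policy and under the target policy.
rewards = np.array([1.0, 0.0, 1.0, 1.0])
logging_propensities = np.array([0.5, 0.25, 0.5, 0.25])
target_propensities = np.array([0.9, 0.05, 0.9, 0.05])
print(ips_estimate(rewards, logging_propensities, target_propensities))

IPS trades bias for variance: it needs no reward model, but the estimate's variance grows as the two policies diverge, which is why many of the papers below study variance reduction and robustness.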

Papers

Showing 61–70 of 265 papers

Title | Status | Hype
Robust Offline Reinforcement Learning with Heavy-Tailed Rewards | Code | 0
State-Action Similarity-Based Representations for Off-Policy Evaluation | Code | 0
Counterfactual-Augmented Importance Sampling for Semi-Offline Policy Evaluation | Code | 0
Off-Policy Evaluation for Large Action Spaces via Policy Convolution | | 0
Sample Complexity of Preference-Based Nonparametric Off-Policy Evaluation with Deep Networks | | 0
Robust Multi-Agent Reinforcement Learning by Mutual Information Regularization | | 0
Off-Policy Evaluation for Human Feedback | | 0
Distributional Shift-Aware Off-Policy Interval Estimation: A Unified Error Quantification Framework | | 0
Wasserstein Distributionally Robust Policy Evaluation and Learning for Contextual Bandits | | 0
Statistically Efficient Variance Reduction with Double Policy Estimation for Off-Policy Evaluation in Sequence-Modeled Reinforcement Learning | | 0
Page 7 of 27

Leaderboard

No leaderboard results yet.