
Off-policy evaluation

Off-policy Evaluation (OPE), or offline evaluation more generally, estimates the performance of hypothetical policies using only offline log data. It is particularly useful in applications where online interaction is high-stakes and expensive, such as precision medicine and recommender systems.
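To make the idea concrete, below is a minimal sketch of the classic inverse propensity scoring (IPS) estimator on synthetic bandit logs. The data, policies, and variable names are all illustrative assumptions, not taken from any paper or library listed on this page; the point is only to show how logged rewards can be reweighted to estimate the value of a policy that never ran online.

```python
import numpy as np

# Minimal IPS sketch on synthetic logs (all quantities are illustrative).
rng = np.random.default_rng(0)
n_rounds, n_actions = 10_000, 5

# Logged data: a uniform-random behavior policy chose the actions.
behavior_probs = np.full(n_actions, 1.0 / n_actions)
actions = rng.choice(n_actions, size=n_rounds, p=behavior_probs)

# Bernoulli rewards; action 0 has the highest expected reward.
true_means = np.linspace(0.6, 0.2, n_actions)
rewards = rng.binomial(1, true_means[actions])

# Hypothetical (evaluation) policy we want to evaluate offline:
# it mostly picks action 0 and rarely explores.
eval_probs = np.full(n_actions, 0.05)
eval_probs[0] = 1.0 - 0.05 * (n_actions - 1)

# IPS reweights each logged reward by the ratio of the evaluation
# policy's action probability to the behavior policy's.
weights = eval_probs[actions] / behavior_probs[actions]
ips_estimate = np.mean(weights * rewards)

print(f"IPS estimate of eval-policy value: {ips_estimate:.3f}")
print(f"Ground-truth eval-policy value:    {eval_probs @ true_means:.3f}")
```

Because the behavior policy's propensities are known here, the IPS estimate is unbiased; much of the work listed below (e.g., doubly robust and embedding-based estimators) targets the variance and large-action-space problems that plain IPS runs into.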

Papers

Showing 1–10 of 265 papers

Title | Status | Hype
Doubly Robust Off-Policy Evaluation for Ranking Policies under the Cascade Behavior Model | Code | 2
Off-Policy Evaluation for Large Action Spaces via Embeddings | Code | 2
Counterfactual Evaluation of Slate Recommendations with Sequential Reward Interactions | Code | 1
A Policy-Guided Imitation Approach for Offline Reinforcement Learning | Code | 1
Benchmarks for Deep Off-Policy Evaluation | Code | 1
COptiDICE: Offline Constrained Reinforcement Learning via Stationary Distribution Correction Estimation | Code | 1
Open Bandit Dataset and Pipeline: Towards Realistic and Reproducible Off-Policy Evaluation | Code | 1
A Deep Reinforcement Learning Approach to Marginalized Importance Sampling with the Successor Representation | Code | 1
Active Offline Policy Selection | Code | 1
Anytime-valid off-policy inference for contextual bandits | Code | 1

Leaderboard

No leaderboard results yet.