
Off-policy evaluation

Off-policy Evaluation (OPE), or offline evaluation more generally, estimates the performance of hypothetical policies using only offline log data. It is particularly useful in applications where online interaction is high-stakes or expensive, such as precision medicine and recommender systems.
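As a concrete illustration of the idea, below is a minimal sketch of the inverse propensity scoring (IPS) estimator, the most basic OPE method for contextual bandits. It assumes a log where both the logging policy's and the target policy's action probabilities are known; the function name and the toy numbers are hypothetical, chosen only for illustration.

import numpy as np

def ips_estimate(rewards, logging_propensities, target_propensities):
    """Inverse propensity scoring (IPS) estimate of a target policy's value.

    rewards:              observed rewards for the logged actions
    logging_propensities: pi_0(a|x), probability the logging policy gave
                          each logged action in its context
    target_propensities:  pi_e(a|x), probability the target policy would
                          choose the same action in the same context
    """
    # Reweight each logged reward by how much more (or less) likely the
    # target policy is to take the logged action than the logging policy was.
    weights = target_propensities / logging_propensities
    # Unbiased for the target policy's value if pi_0 has full support.
    return float(np.mean(weights * rewards))

# Toy log of 5 interactions (illustrative values only).
rewards = np.array([1.0, 0.0, 1.0, 1.0, 0.0])
logging = np.array([0.5, 0.25, 0.5, 0.25, 0.5])
target = np.array([0.8, 0.1, 0.8, 0.1, 0.8])
print(ips_estimate(rewards, logging, target))  # -> 0.72

IPS trades bias for variance: it is unbiased under full support, but the importance weights can explode when the two policies diverge, which is what the doubly robust estimators listed below aim to mitigate.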

Papers

Showing 1–10 of 265 papers

Title | Status | Hype
Off-Policy Evaluation for Large Action Spaces via Embeddings | Code | 2
Doubly Robust Off-Policy Evaluation for Ranking Policies under the Cascade Behavior Model | Code | 2
Trajectory World Models for Heterogeneous Environments | Code | 1
SCOPE-RL: A Python Library for Offline Reinforcement Learning and Off-Policy Evaluation | Code | 1
Towards Assessing and Benchmarking Risk-Return Tradeoff of Off-Policy Evaluation | Code | 1
Off-Policy Evaluation of Ranking Policies under Diverse User Behavior | Code | 1
Anytime-valid off-policy inference for contextual bandits | Code | 1
A Policy-Guided Imitation Approach for Offline Reinforcement Learning | Code | 1
COptiDICE: Offline Constrained Reinforcement Learning via Stationary Distribution Correction Estimation | Code | 1
Doubly Robust Distributionally Robust Off-Policy Evaluation and Learning | Code | 1
Page 1 of 27

No leaderboard results yet.