
Off-policy evaluation

Off-policy Evaluation (OPE), or offline evaluation more generally, estimates the performance of hypothetical policies using only offline log data. It is particularly useful in applications where online interaction is high-stakes or expensive, such as precision medicine and recommender systems.
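
For intuition, here is a minimal sketch of one standard OPE estimator, inverse propensity scoring (IPS), which reweights logged rewards by the ratio of target-policy to logging-policy action probabilities. The function name and the synthetic bandit log below are illustrative assumptions, not taken from any paper listed on this page.

```python
import numpy as np

def ips_estimate(rewards, logging_probs, target_probs):
    """IPS estimate of a target policy's value from logged bandit data.

    rewards[i]       -- reward observed for the logged action
    logging_probs[i] -- probability the logging policy gave that action
    target_probs[i]  -- probability the target policy gives the same action
    Unbiased when the logging policy has support wherever the target does.
    """
    w = target_probs / logging_probs        # importance weights
    return float(np.mean(w * rewards))

# Illustrative synthetic log (all numbers are made up):
rng = np.random.default_rng(0)
n, k = 100_000, 3
true_reward = np.array([0.1, 0.5, 0.3])     # mean reward per action
logging_policy = np.full(k, 1.0 / k)        # uniform logging policy
target_policy = np.array([0.1, 0.8, 0.1])   # hypothetical policy to evaluate

actions = rng.choice(k, size=n, p=logging_policy)
rewards = rng.binomial(1, true_reward[actions]).astype(float)

v_hat = ips_estimate(rewards, logging_policy[actions], target_policy[actions])
print(f"IPS estimate: {v_hat:.3f}  (true value: {target_policy @ true_reward:.3f})")
```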

Papers

Showing 31–40 of 265 papers

Title | Status | Hype
Enhancing Offline Model-Based RL via Active Model Selection: A Bayesian Optimization Perspective | - | 0
Off-Policy Evaluation for Recommendations with Missing-Not-At-Random Rewards | - | 0
Model Selection for Off-policy Evaluation: New Algorithms and Experimental Protocol | - | 0
Off-policy Evaluation for Payments at Adyen | - | 0
Off-Policy Evaluation and Counterfactual Methods in Dynamic Auction Environments | - | 0
CANDOR: Counterfactual ANnotated DOubly Robust Off-Policy Evaluation | - | 0
Two-way Deconfounder for Off-policy Evaluation in Causal Reinforcement Learning | Code | 0
Concept-driven Off Policy Evaluation | - | 0
Logarithmic Neyman Regret for Adaptive Estimation of the Average Treatment Effect | - | 0
Off-policy estimation with adaptively collected data: the power of online learning | - | 0

No leaderboard results yet.