
Off-policy evaluation

Off-Policy Evaluation (OPE), or offline evaluation more generally, estimates the performance of hypothetical policies using only offline log data. It is particularly useful in applications where online interaction is high-stakes or expensive, such as precision medicine and recommender systems.
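
As a minimal illustration of the idea, the sketch below evaluates a hypothetical target policy from logged bandit data using the classic inverse propensity scoring (IPS) estimator. It assumes a synthetic three-armed bandit with known logging propensities; the policies and reward means are made up for this example and are not tied to any paper listed here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic logged bandit data (contexts omitted for brevity):
# the logging policy chose among 3 actions with known propensities.
n, n_actions = 10_000, 3
logging_probs = np.array([0.5, 0.3, 0.2])        # behavior policy pi_b(a)
true_means = np.array([0.1, 0.5, 0.8])           # hypothetical true reward means
actions = rng.choice(n_actions, size=n, p=logging_probs)
rewards = rng.binomial(1, p=true_means[actions])

# Hypothetical target policy pi_e(a) we want to evaluate offline.
target_probs = np.array([0.1, 0.2, 0.7])

# IPS: reweight each logged reward by the ratio pi_e(a_i) / pi_b(a_i).
# Unbiased whenever the logging policy has full support over actions.
weights = target_probs[actions] / logging_probs[actions]
ips_estimate = np.mean(weights * rewards)

# Ground-truth value of pi_e, computable here only because this is a
# simulator with known reward means.
true_value = np.dot(target_probs, true_means)
print(f"IPS estimate: {ips_estimate:.3f}  (true value: {true_value:.3f})")
```

The estimate converges to the true policy value as the log grows, though its variance blows up when the target policy takes actions the logging policy rarely chose; much of the literature below addresses exactly that trade-off.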

Papers

Showing 251–265 of 265 papers

Title | Hype
Minimax-Optimal Off-Policy Evaluation with Linear Function Approximation | 0
Minimax Weight and Q-Function Learning for Off-Policy Evaluation | 0
Robust Multi-Agent Reinforcement Learning by Mutual Information Regularization | 0
Model Selection for Off-policy Evaluation: New Algorithms and Experimental Protocol | 0
More Efficient Off-Policy Evaluation through Regularized Targeted Learning | 0
More Robust Doubly Robust Off-policy Evaluation | 0
Non-asymptotic Confidence Intervals of Off-policy Evaluation: Primal and Dual Bounds | 0
Offline Comparison of Ranking Functions using Randomized Data | 0
Offline Policy Evaluation and Optimization under Confounding | 0
Offline Reinforcement Learning for Human-Guided Human-Machine Interaction with Private Information | 0
Off-policy Confidence Sequences | 0
Off-policy estimation with adaptively collected data: the power of online learning | 0
Off-Policy Evaluation and Counterfactual Methods in Dynamic Auction Environments | 0
Off-Policy Evaluation and Learning for the Future under Non-Stationarity | 0
Off-Policy Evaluation and Learning from Logged Bandit Feedback: Error Reduction via Surrogate Policy | 0
