
Off-policy evaluation

Off-policy Evaluation (OPE), or offline evaluation more generally, estimates the performance of hypothetical policies using only offline log data. It is particularly useful in applications where online interaction is high-stakes or expensive, such as precision medicine and recommender systems.
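As a concrete illustration (not taken from this page), one of the most common OPE estimators is inverse propensity scoring (IPS): reweight each logged reward by the ratio of the evaluation policy's action probability to the logging policy's. The sketch below is a minimal Python example with synthetic bandit data; all names and the simulated setup are illustrative assumptions.

```python
import numpy as np

def ips_value(rewards, pi_e_probs, pi_b_probs):
    """Inverse propensity scoring (IPS) estimate of a target policy's value.

    rewards    : observed rewards for the logged actions
    pi_e_probs : probability the evaluation policy assigns to each logged action
    pi_b_probs : probability the logging (behavior) policy assigned to it
    """
    weights = pi_e_probs / pi_b_probs          # importance weights
    return float(np.mean(weights * rewards))   # unbiased if the logging policy has full support

# Toy example: 2-action bandit logged under a uniform policy (hypothetical data).
rng = np.random.default_rng(0)
n = 10_000
actions = rng.integers(0, 2, size=n)                         # logged actions, uniform
rewards = rng.binomial(1, np.where(actions == 1, 0.7, 0.3))  # action 1 pays off more often
pi_b = np.full(n, 0.5)                                       # uniform logging probabilities
pi_e = np.where(actions == 1, 0.9, 0.1)                      # target policy prefers action 1

print(ips_value(rewards, pi_e, pi_b))  # ~0.66 = 0.9 * 0.7 + 0.1 * 0.3
```

IPS is unbiased but can have high variance when importance weights are large; much of the literature listed below studies variance reduction, confidence intervals, and minimax-optimal alternatives to this basic estimator.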

Papers

Showing 181–190 of 265 papers

Title | Status | Hype
Off-Policy Risk Assessment in Contextual Bandits |  | 0
Infinite-Horizon Offline Reinforcement Learning with Linear Function Approximation: Curse of Dimensionality and Algorithm |  | 0
Non-asymptotic Confidence Intervals of Off-policy Evaluation: Primal and Dual Bounds |  | 0
Minimax Model Learning |  | 0
Towards Automatic Evaluation of Dialog Systems: A Model-Free Off-Policy Evaluation Approach |  | 0
Off-policy Confidence Sequences |  | 0
Bootstrapping Fitted Q-Evaluation for Off-Policy Inference |  | 0
Finite Sample Analysis of Minimax Offline Reinforcement Learning: Completeness, Fast Rates and First-Order Efficiency |  | 0
Minimax Off-Policy Evaluation for Multi-Armed Bandits |  | 0
Smoothed functional-based gradient algorithms for off-policy reinforcement learning: A non-asymptotic viewpoint |  | 0

No leaderboard results yet.