
Off-policy evaluation

Off-policy Evaluation (OPE), or offline evaluation more generally, estimates the performance of a hypothetical policy using only logged data collected by a different policy. It is particularly useful in applications where online interaction is high-stakes or expensive, such as precision medicine and recommender systems.
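A standard baseline for OPE in the contextual-bandit setting is the inverse propensity scoring (IPS) estimator, which reweights each logged reward by the ratio of the evaluated policy's action probability to the logging policy's. Below is a minimal sketch on synthetic logged data; the data, policies, and function names are illustrative assumptions, not drawn from any paper listed here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic logged bandit data (illustrative): contexts, actions chosen by a
# uniform logging policy, the logger's propensities, and observed rewards.
n, n_actions = 10_000, 3
contexts = rng.normal(size=(n, 5))
logging_probs = np.full((n, n_actions), 1.0 / n_actions)
actions = rng.integers(0, n_actions, size=n)
rewards = rng.binomial(1, 0.2 + 0.1 * actions)  # action 2 has the highest mean reward

def ips_estimate(target_probs, actions, rewards, logging_probs):
    """IPS estimate of the target policy's value.

    target_probs[i, a] is the probability the evaluated policy assigns to
    action a in context i; each reward is weighted by the propensity ratio.
    """
    idx = np.arange(len(actions))
    weights = target_probs[idx, actions] / logging_probs[idx, actions]
    return np.mean(weights * rewards)

# Hypothetical target policy that always plays action 2 (true value ~0.4).
target_probs = np.zeros((n, n_actions))
target_probs[:, 2] = 1.0

value = ips_estimate(target_probs, actions, rewards, logging_probs)
print(f"Estimated value of target policy: {value:.3f}")
```

IPS is unbiased when the logging propensities are known and nonzero for every action the target policy can take, but its weights can blow up its variance; clipped, self-normalized, and doubly robust variants are common refinements in this literature.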

Papers

Showing 171–180 of 265 papers

| Title | Status | Hype |
| --- | --- | --- |
| Deep Proxy Causal Learning and its Application to Confounded Bandit Policy Evaluation | Code | 0 |
| Off-Policy Evaluation via Adaptive Weighting with Data from Contextual Bandits | Code | 1 |
| Deeply-Debiased Off-Policy Interval Estimation | Code | 0 |
| Autoregressive Dynamics Models for Offline Policy Evaluation and Optimization | | 0 |
| Universal Off-Policy Evaluation | Code | 0 |
| Discovering an Aid Policy to Minimize Student Evasion Using Offline Reinforcement Learning | | 0 |
| Off-Policy Risk Assessment in Contextual Bandits | | 0 |
| Benchmarks for Deep Off-Policy Evaluation | Code | 1 |
| Infinite-Horizon Offline Reinforcement Learning with Linear Function Approximation: Curse of Dimensionality and Algorithm | | 0 |
| Non-asymptotic Confidence Intervals of Off-policy Evaluation: Primal and Dual Bounds | | 0 |
