Off-policy evaluation

Off-policy evaluation (OPE), or offline evaluation more generally, estimates the performance of hypothetical policies using only offline log data. It is particularly useful in applications where online interaction is high-stakes or expensive, such as precision medicine and recommender systems.
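A common starting point is the inverse propensity scoring (IPS) estimator, which reweights each logged reward by the ratio of the target policy's probability of the logged action to the logging policy's. The sketch below is a minimal illustration on synthetic bandit logs; the uniform logging policy, the reward means, and the target policy are all made-up assumptions, not taken from any paper listed here.

```python
# Minimal IPS sketch for off-policy evaluation on synthetic bandit logs.
import numpy as np

rng = np.random.default_rng(0)
n, k = 10_000, 3  # number of logged rounds, number of actions

# Logging policy: uniform over the k actions (assumed for illustration).
logging_probs = np.full(k, 1.0 / k)
actions = rng.integers(0, k, size=n)

# True (unknown in practice) expected reward per action; Bernoulli rewards.
true_means = np.array([0.2, 0.5, 0.8])
rewards = rng.binomial(1, true_means[actions])

# Target policy to evaluate offline: mostly plays action 2.
target_probs = np.array([0.1, 0.1, 0.8])

# IPS: reweight each logged reward by target_prob / logging_prob
# for the action that was actually taken, then average.
weights = target_probs[actions] / logging_probs[actions]
v_ips = np.mean(weights * rewards)

print(f"IPS estimate: {v_ips:.3f}")
print(f"True value:   {np.dot(target_probs, true_means):.3f}")
```

When the target policy favors actions the logging policy rarely took, the importance weights, and hence the estimator's variance, can blow up; much of the OPE literature is about taming exactly that, e.g. via doubly robust or clipped-weight variants.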

Papers

Showing 101–125 of 265 papers

Title | Status | Hype
Consistent On-Line Off-Policy Evaluation | - | 0
Offline Policy Evaluation and Optimization under Confounding | - | 0
Loss Functions for Discrete Contextual Pricing with Observational Data | - | 0
Generalizing Off-Policy Evaluation From a Causal Perspective For Sequential Decision-Making | - | 0
A Principled Path to Fitted Distributional Evaluation | - | 0
Off-policy estimation with adaptively collected data: the power of online learning | - | 0
Counterfactual Analysis in Dynamic Latent State Models | - | 0
HOPE: Human-Centric Off-Policy Evaluation for E-Learning and Healthcare | - | 0
Balancing Immediate Revenue and Future Off-Policy Evaluation in Coupon Allocation | - | 0
Hybrid Value Estimation for Off-policy Evaluation and Offline Reinforcement Learning | - | 0
Beyond the Return: Off-policy Function Estimation under User-specified Error-measuring Distributions | - | 0
Hyperparameter Optimization Can Even be Harmful in Off-Policy Learning and How to Deal with It | - | 0
Logarithmic Neyman Regret for Adaptive Estimation of the Average Treatment Effect | - | 0
Infinite-Horizon Offline Reinforcement Learning with Linear Function Approximation: Curse of Dimensionality and Algorithm | - | 0
Interpretable Off-Policy Evaluation in Reinforcement Learning by Highlighting Influential Transitions | - | 0
IntOPE: Off-Policy Evaluation in the Presence of Interference | - | 0
Bayesian Counterfactual Mean Embeddings and Off-Policy Evaluation | - | 0
Off-policy evaluation for learning-to-rank via interpolating the item-position model and the position-based model | - | 0
Marginalized Operators for Off-policy Reinforcement Learning | - | 0
Large-scale Validation of Counterfactual Learning Methods: A Test-Bed | - | 0
Bayesian Off-Policy Evaluation and Learning for Large Action Spaces | - | 0
Off-Policy Evaluation with Online Adaptation for Robot Exploration in Challenging Environments | - | 0
Deep Jump Q-Evaluation for Offline Policy Evaluation in Continuous Action Space | - | 0
Limit Order Book Simulation and Trade Evaluation with K-Nearest-Neighbor Resampling | - | 0
Debiasing Samples from Online Learning Using Bootstrap | - | 0
Page 5 of 11

Leaderboard

No leaderboard results yet.