
Off-policy evaluation

Off-policy Evaluation (OPE), or offline evaluation more generally, estimates the performance of hypothetical policies using only offline log data. It is particularly useful in applications where online interaction is high-stakes or expensive, such as precision medicine and recommender systems.
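
As an illustrative sketch only (not tied to any specific paper listed below), the simplest OPE estimator for logged bandit data is inverse propensity scoring (IPS): each logged reward is reweighted by the ratio of the evaluation policy's probability of the logged action to the logging policy's propensity. The array names and toy numbers below are hypothetical.

```python
import numpy as np

def ips_estimate(rewards, logging_probs, eval_probs):
    """Inverse propensity scoring (IPS) estimate of an evaluation policy's value.

    rewards       : observed rewards r_i for the logged actions
    logging_probs : pi_b(a_i | x_i), behavior (logging) policy propensities (must be > 0)
    eval_probs    : pi_e(a_i | x_i), evaluation policy probabilities of the same logged actions
    """
    weights = np.asarray(eval_probs) / np.asarray(logging_probs)  # importance weights
    return float(np.mean(weights * np.asarray(rewards)))          # unbiased when the logging policy covers pi_e

# Hypothetical toy log: three rounds, uniform logging policy over two actions.
rewards = [1.0, 0.0, 1.0]
logging_probs = [0.5, 0.5, 0.5]
eval_probs = [0.9, 0.1, 0.9]  # evaluation policy's probability of each logged action
print(ips_estimate(rewards, logging_probs, eval_probs))  # 1.2
```

Many of the papers listed below study refinements of this basic idea (marginalized importance sampling, doubly robust estimators, variance reduction, confidence intervals), which trade off the bias and variance of the plain IPS estimate.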

Papers

Showing 101–150 of 265 papers

Title / Status / Hype — each paper below is listed with no verification status and a Hype score of 0:

Optimal discharge of patients from intensive care via a data-driven policy learning framework
Optimal Mixture Weights for Off-Policy Evaluation with Multiple Behavior Policies
Towards Optimal Off-Policy Evaluation for Reinforcement Learning with Marginalized Importance Sampling
Practical Marginalized Importance Sampling with the Successor Representation
Primal-Dual Spectral Representation for Off-policy Evaluation
Privacy Preserving Off-Policy Evaluation
Probabilistic Offline Policy Ranking with Approximate Bayesian Computation
Quantile Off-Policy Evaluation via Deep Conditional Generative Learning
Reliable Off-policy Evaluation for Reinforcement Learning
RL in Latent MDPs is Tractable: Online Guarantees via Off-Policy Evaluation
Debiased Off-Policy Evaluation for Recommendation Systems
Safe Evaluation For Offline Learning: Are We Ready To Deploy?
Sample Complexity of Nonparametric Off-Policy Evaluation on Low-Dimensional Manifolds using Deep Networks
Sample Complexity of Preference-Based Nonparametric Off-Policy Evaluation with Deep Networks
Scalable and Robust Self-Learning for Skill Routing in Large-Scale Conversational AI Systems
Scalable and Safe Remediation of Defective Actions in Self-Learning Conversational Systems
Scaling Marginalized Importance Sampling to High-Dimensional State-Spaces via State Abstraction
Semi-gradient DICE for Offline Constrained Reinforcement Learning
STEEL: Singularity-aware Reinforcement Learning
Smoothed functional-based gradient algorithms for off-policy reinforcement learning: A non-asymptotic viewpoint
Stabilizing Temporal Difference Learning via Implicit Stochastic Recursion
Stateful Offline Contextual Policy Evaluation and Learning
Statistical Bootstrapping for Uncertainty Estimation in Off-Policy Evaluation
Statistical Estimation of Confounded Linear MDPs: An Instrumental Variable Approach
Statistically Efficient Variance Reduction with Double Policy Estimation for Off-Policy Evaluation in Sequence-Modeled Reinforcement Learning
STITCH-OPE: Trajectory Stitching with Guided Diffusion for Off-Policy Evaluation
Task Selection Policies for Multitask Learning
Taylor Expansion Policy Optimization
Cramming Contextual Bandits for On-policy Statistical Evaluation
The Optimal Approximation Factors in Misspecified Off-Policy Value Function Estimation
Towards A Unified Policy Abstraction Theory and Representation Learning Approach in Markov Decision Processes
Towards Robust Off-Policy Evaluation via Human Inputs
Triply Robust Off-Policy Evaluation
Unbiased Offline Evaluation for Learning to Rank with Business Rules
Understanding the Curse of Horizon in Off-Policy Evaluation via Conditional Importance Sampling
Variance-Aware Off-Policy Evaluation with Linear Function Approximation
Wasserstein Distributionally Robust Policy Evaluation and Learning for Contextual Bandits
Weighted model estimation for offline model-based reinforcement learning
Why Should I Trust You, Bellman? Evaluating the Bellman Objective with Off-Policy Data
Data-Driven Off-Policy Estimator Selection: An Application in User Marketing on An Online Content Delivery Service
Data Poisoning Attacks on Off-Policy Policy Evaluation Methods
Debiasing Samples from Online Learning Using Bootstrap
Deep Jump Q-Evaluation for Offline Policy Evaluation in Continuous Action Space
Defining Admissible Rewards for High Confidence Policy Evaluation
Designing an Interpretable Interface for Contextual Bandits
Development and Validation of Heparin Dosing Policies Using an Offline Reinforcement Learning Algorithm
Discovering an Aid Policy to Minimize Student Evasion Using Offline Reinforcement Learning
Distributional Shift-Aware Off-Policy Interval Estimation: A Unified Error Quantification Framework
Double/Debiased Machine Learning for Dynamic Treatment Effects via g-Estimation
Double Reinforcement Learning for Efficient and Robust Off-Policy Evaluation
