SOTAVerified

Off-policy evaluation

Off-Policy Evaluation (OPE), or offline evaluation more generally, estimates the performance of a hypothetical policy using only offline log data. It is particularly useful in applications where online interaction is high-stakes or expensive, such as precision medicine and recommender systems.
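As a minimal illustration of the idea, the sketch below estimates a target policy's value from logged bandit data with inverse propensity scoring (IPS), one of the simplest OPE estimators. All data here is synthetic and the names (`logging_probs`, `target_probs`, etc.) are illustrative, not taken from any particular paper listed below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical logged bandit feedback: a uniform logging policy chose
# among 3 actions, and we observed a binary reward for the chosen action.
n, n_actions = 10_000, 3
actions = rng.integers(0, n_actions, size=n)        # logged actions
logging_probs = np.full(n, 1.0 / n_actions)         # behavior-policy propensities
true_means = np.array([0.2, 0.5, 0.8])              # unknown to the estimator
rewards = rng.binomial(1, true_means[actions]).astype(float)

# Target (evaluation) policy: deterministically play action 2.
target_probs = (actions == 2).astype(float)

# IPS estimate: V_hat = mean( pi_e(a|x) / pi_b(a|x) * r ).
# Unbiased, but its variance grows as the two policies diverge --
# which is what the doubly robust and minimax papers below address.
ips_value = np.mean(target_probs / logging_probs * rewards)
print(ips_value)  # close to 0.8, the true mean reward of action 2
```

The estimator never interacts with the environment: it reweights logged rewards by how much more (or less) likely the target policy was to take each logged action.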

Papers

Showing 201–250 of 265 papers

Title | Status | Hype
Deep Jump Q-Evaluation for Offline Policy Evaluation in Continuous Action Space | — | 0
Accountable Off-Policy Evaluation With Kernel Bellman Statistics | — | 0
Statistical Bootstrapping for Uncertainty Estimation in Off-Policy Evaluation | — | 0
Off-policy Evaluation in Infinite-Horizon Reinforcement Learning with Latent Confounders | — | 0
Off-Policy Evaluation via the Regularized Lagrangian | — | 0
Off-Policy Exploitability-Evaluation in Two-Player Zero-Sum Markov Games | — | 0
Strictly Batch Imitation Learning by Energy-based Distribution Matching | Code | 0
Confident Off-Policy Evaluation and Selection through Self-Normalized Importance Weighting | Code | 0
A maximum-entropy approach to off-policy evaluation in average-reward MDPs | — | 0
Confidence Interval for Off-Policy Evaluation from Dependent Samples via Bandit Algorithm: Approach from Standardized Martingales | — | 0
Efficient Evaluation of Natural Stochastic Policies in Offline Reinforcement Learning | — | 0
Causality and Batch Reinforcement Learning: Complementary Approaches To Planning In Unknown Domains | — | 0
Taylor Expansion Policy Optimization | — | 0
Batch Stationary Distribution Estimation | Code | 0
Off-Policy Evaluation and Learning for External Validity under a Covariate Shift | Code | 0
Minimax-Optimal Off-Policy Evaluation with Linear Function Approximation | — | 0
Debiased Off-Policy Evaluation for Recommendation Systems | — | 0
Adaptive Estimator Selection for Off-Policy Evaluation | Code | 0
Double/Debiased Machine Learning for Dynamic Treatment Effects via g-Estimation | — | 0
Confounding-Robust Policy Evaluation in Infinite-Horizon Reinforcement Learning | — | 0
Interpretable Off-Policy Evaluation in Reinforcement Learning by Highlighting Influential Transitions | — | 0
Minimax Value Interval for Off-Policy Evaluation and Policy Optimization | — | 0
Safe Exploration for Optimizing Contextual Bandits | Code | 0
Asymptotically Efficient Off-Policy Evaluation for Tabular Reinforcement Learning | — | 0
Double Reinforcement Learning for Efficient and Robust Off-Policy Evaluation | — | 0
Accountable Off-Policy Evaluation via a Kernelized Bellman Statistics | — | 0
More Efficient Off-Policy Evaluation through Regularized Targeted Learning | — | 0
Triply Robust Off-Policy Evaluation | — | 0
Minimax Weight and Q-Function Learning for Off-Policy Evaluation | — | 0
From Importance Sampling to Doubly Robust Policy Gradient | Code | 0
Adaptive Trade-Offs in Off-Policy Learning | — | 0
Doubly Robust Bias Reduction in Infinite Horizon Off-Policy Estimation | — | 0
Understanding the Curse of Horizon in Off-Policy Evaluation via Conditional Importance Sampling | — | 0
Efficiently Breaking the Curse of Horizon in Off-Policy Evaluation with Double Reinforcement Learning | — | 0
Off-Policy Evaluation in Partially Observable Environments | — | 0
Efron-Stein PAC-Bayesian Inequalities | — | 0
Double Reinforcement Learning for Efficient Off-Policy Evaluation in Markov Decision Processes | Code | 0
Doubly robust off-policy evaluation with shrinkage | Code | 0
Task Selection Policies for Multitask Learning | — | 0
Expected Sarsa(λ) with Control Variate for Variance Reduction | — | 0
Intrinsically Efficient, Stable, and Bounded Off-Policy Evaluation for Reinforcement Learning | Code | 0
Balanced off-policy evaluation in general action spaces | Code | 0
Towards Optimal Off-Policy Evaluation for Reinforcement Learning with Marginalized Importance Sampling | — | 0
Off-Policy Evaluation via Off-Policy Classification | — | 0
Defining Admissible Rewards for High Confidence Policy Evaluation | — | 0
Semi-Parametric Efficient Policy Learning with Continuous Actions | Code | 0
Combining Parametric and Nonparametric Models for Off-Policy Evaluation | — | 0
Counterfactual Off-Policy Evaluation with Gumbel-Max Structural Causal Models | Code | 0
Privacy Preserving Off-Policy Evaluation | — | 0
Off-Policy Evaluation of Probabilistic Identity Data in Lookalike Modeling | — | 0
Page 5 of 6

No leaderboard results yet.