
Off-policy evaluation

Off-Policy Evaluation (OPE), or offline evaluation more generally, estimates the performance of hypothetical policies using only offline log data. It is particularly useful in applications where online interaction is high-stakes and expensive, such as precision medicine and recommender systems.
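
For orientation, here is a minimal sketch of the classical inverse propensity scoring (IPS) estimator, the simplest OPE baseline for logged bandit data; the function name and the toy numbers are illustrative assumptions, not taken from any paper listed below.

```python
import numpy as np

def ips_estimate(rewards, logging_propensities, target_propensities):
    """Inverse propensity scoring (IPS) estimate of a target policy's value.

    Each array holds one entry per logged interaction: the observed reward,
    the probability the logging policy assigned to the action it took, and
    the probability the candidate (target) policy assigns to that same action.
    """
    # Importance weight: how much more (or less) likely the target policy
    # is to take the logged action than the logging policy was.
    weights = target_propensities / logging_propensities
    # Reweighting logged rewards gives an unbiased value estimate, assuming
    # the logging policy has full support over the target policy's actions.
    return float(np.mean(weights * rewards))

# Hypothetical logged data from a two-action bandit.
rewards = np.array([1.0, 0.0, 1.0, 0.0])
log_p   = np.array([0.5, 0.25, 0.5, 0.25])   # logging policy propensities
tgt_p   = np.array([0.9, 0.10, 0.9, 0.10])   # target policy propensities
print(ips_estimate(rewards, log_p, tgt_p))   # 0.9: estimated target-policy value
```

Many of the papers below refine this basic estimator, for example with doubly robust variants that reduce the variance of the importance weights, or with extensions to the sequential (RL) setting where naive importance weighting suffers from the curse of horizon.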

Papers

Showing 151–200 of 265 papers

Title | Status | Hype
(every paper on this page is listed with no status and a Hype score of 0)

STITCH-OPE: Trajectory Stitching with Guided Diffusion for Off-Policy Evaluation
Task Selection Policies for Multitask Learning
Taylor Expansion Policy Optimization
Cramming Contextual Bandits for On-policy Statistical Evaluation
The Optimal Approximation Factors in Misspecified Off-Policy Value Function Estimation
Towards A Unified Policy Abstraction Theory and Representation Learning Approach in Markov Decision Processes
Towards Robust Off-Policy Evaluation via Human Inputs
Triply Robust Off-Policy Evaluation
Unbiased Offline Evaluation for Learning to Rank with Business Rules
Understanding the Curse of Horizon in Off-Policy Evaluation via Conditional Importance Sampling
Variance-Aware Off-Policy Evaluation with Linear Function Approximation
Wasserstein Distributionally Robust Policy Evaluation and Learning for Contextual Bandits
Weighted model estimation for offline model-based reinforcement learning
Why Should I Trust You, Bellman? Evaluating the Bellman Objective with Off-Policy Data
Accelerating Offline Reinforcement Learning Application in Real-Time Bidding and Recommendation: Potential Use of Simulation
Accountable Off-Policy Evaluation via a Kernelized Bellman Statistics
Accountable Off-Policy Evaluation With Kernel Bellman Statistics
Adaptive Trade-Offs in Off-Policy Learning
A maximum-entropy approach to off-policy evaluation in average-reward MDPs
An Instrumental Variable Approach to Confounded Off-Policy Evaluation
A Practical Guide of Off-Policy Evaluation for Bandit Problems
A Principled Path to Fitted Distributional Evaluation
A Review of Off-Policy Evaluation in Reinforcement Learning
A Spectral Approach to Off-Policy Evaluation for POMDPs
Asymptotically Efficient Off-Policy Evaluation for Tabular Reinforcement Learning
A Fast Convergence Theory for Offline Decision Making
A Unified Off-Policy Evaluation Approach for General Value Function
Automated Off-Policy Estimator Selection via Supervised Learning
Autoregressive Dynamics Models for Offline Policy Evaluation and Optimization
Balancing Immediate Revenue and Future Off-Policy Evaluation in Coupon Allocation
Bayesian Counterfactual Mean Embeddings and Off-Policy Evaluation
Bayesian Off-Policy Evaluation and Learning for Large Action Spaces
Bellman Residual Orthogonalization for Offline Reinforcement Learning
Beyond the Return: Off-policy Function Estimation under User-specified Error-measuring Distributions
Bootstrapping Fitted Q-Evaluation for Off-Policy Inference
Bootstrapping with Models: Confidence Intervals for Off-Policy Evaluation
CANDOR: Counterfactual ANnotated DOubly Robust Off-Policy Evaluation
Causality and Batch Reinforcement Learning: Complementary Approaches To Planning In Unknown Domains
Characterization of Efficient Influence Function for Off-Policy Evaluation Under Optimal Policies
CoinDICE: Off-Policy Confidence Interval Estimation
Combining Parametric and Nonparametric Models for Off-Policy Evaluation
Concept-driven Off Policy Evaluation
Confidence Interval for Off-Policy Evaluation from Dependent Samples via Bandit Algorithm: Approach from Standardized Martingales
Confident Natural Policy Gradient for Local Planning in q_π-realizable Constrained MDPs
Conformal Off-Policy Evaluation in Markov Decision Processes
Conformal Off-Policy Prediction in Contextual Bandits
Confounding-Robust Policy Evaluation in Infinite-Horizon Reinforcement Learning
Consistent On-Line Off-Policy Evaluation
Counterfactual Analysis in Dynamic Latent State Models
Counterfactual Learning with General Data-generating Policies
Page 4 of 6

Leaderboard

No leaderboard results yet.