Off-policy evaluation

Off-Policy Evaluation (OPE), or offline evaluation more generally, estimates the performance of hypothetical policies using only offline log data. It is particularly useful in applications where online interaction is high-stakes or expensive, such as precision medicine and recommender systems.
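
To make the setting concrete, below is a minimal sketch of the classic inverse propensity scoring (IPS) estimator for contextual-bandit OPE. It is an illustration only; the function name and toy data are hypothetical and not drawn from any of the papers listed here. IPS reweights each logged reward by the ratio of the target policy's action probability to the logging policy's, which gives an unbiased value estimate when the logging propensities are known and nonzero wherever the target policy places probability.

```python
import numpy as np

def ips_estimate(rewards, logging_probs, target_probs):
    """Inverse propensity scoring (IPS) estimate of a target policy's value.

    Each logged interaction contributes its observed reward, reweighted by
    how much more (or less) likely the target policy pi_e was to take the
    logged action than the logging policy pi_0.
    """
    weights = target_probs / logging_probs  # importance weights pi_e / pi_0
    return float(np.mean(weights * rewards))

# Hypothetical log of 5 interactions from a uniform logging policy.
rewards       = np.array([1.0, 0.0, 1.0, 0.0, 1.0])
logging_probs = np.array([0.5, 0.5, 0.5, 0.5, 0.5])  # pi_0(a_i | x_i)
target_probs  = np.array([0.8, 0.2, 0.9, 0.1, 0.7])  # pi_e(a_i | x_i)

print(ips_estimate(rewards, logging_probs, target_probs))  # 0.96
```

Much of the literature below refines this basic idea, for example by reducing the variance of the importance weights (doubly robust estimators) or extending the estimator to sequential, infinite-horizon settings.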

Papers

Showing 51–100 of 265 papers

An Instrumental Variable Approach to Confounded Off-Policy Evaluation
Off-Policy Evaluation with Online Adaptation for Robot Exploration in Challenging Environments
Counterfactual Analysis in Dynamic Latent State Models
Balancing Immediate Revenue and Future Off-Policy Evaluation in Coupon Allocation
CoinDICE: Off-Policy Confidence Interval Estimation
Characterization of Efficient Influence Function for Off-Policy Evaluation Under Optimal Policies
Counterfactual Learning with General Data-generating Policies
Bayesian Counterfactual Mean Embeddings and Off-Policy Evaluation
Asymptotically Efficient Off-Policy Evaluation for Tabular Reinforcement Learning
Limit Order Book Simulation and Trade Evaluation with K-Nearest-Neighbor Resampling
Loss Functions for Discrete Contextual Pricing with Observational Data
Data-Driven Off-Policy Estimator Selection: An Application in User Marketing on An Online Content Delivery Service
Hyperparameter Optimization Can Even be Harmful in Off-Policy Learning and How to Deal with It
A Review of Off-Policy Evaluation in Reinforcement Learning
Inference on Time Series Nonparametric Conditional Moment Restrictions Using General Sieves
Bootstrapping with Models: Confidence Intervals for Off-Policy Evaluation
Designing an Interpretable Interface for Contextual Bandits
Accountable Off-Policy Evaluation With Kernel Bellman Statistics
Infinite-Horizon Offline Reinforcement Learning with Linear Function Approximation: Curse of Dimensionality and Algorithm
Defining Admissible Rewards for High Confidence Policy Evaluation
Bootstrapping Fitted Q-Evaluation for Off-Policy Inference
Development and Validation of Heparin Dosing Policies Using an Offline Reinforcement Learning Algorithm
Discovering an Aid Policy to Minimize Student Evasion Using Offline Reinforcement Learning
CANDOR: Counterfactual ANnotated DOubly Robust Off-Policy Evaluation
A Spectral Approach to Off-Policy Evaluation for POMDPs
Distributional Shift-Aware Off-Policy Interval Estimation: A Unified Error Quantification Framework
Causality and Batch Reinforcement Learning: Complementary Approaches To Planning In Unknown Domains
Double/Debiased Machine Learning for Dynamic Treatment Effects via g-Estimation
Generalizing Off-Policy Evaluation From a Causal Perspective For Sequential Decision-Making
Double Reinforcement Learning for Efficient and Robust Off-Policy Evaluation
Doubly Robust Bias Reduction in Infinite Horizon Off-Policy Estimation
A Principled Path to Fitted Distributional Evaluation
Combining Parametric and Nonparametric Models for Off-Policy Evaluation
A Fast Convergence Theory for Offline Decision Making
Beyond the Return: Off-policy Function Estimation under User-specified Error-measuring Distributions
Concept-driven Off Policy Evaluation
Doubly-Robust Off-Policy Evaluation with Estimated Logging Policy
Effective Off-Policy Evaluation and Learning in Contextual Combinatorial Bandits
Confident Natural Policy Gradient for Local Planning in q_π-realizable Constrained MDPs
Efficient Counterfactual Learning from Bandit Feedback
Efficient Evaluation of Natural Stochastic Policies in Offline Reinforcement Learning
Efficiently Breaking the Curse of Horizon in Off-Policy Evaluation with Double Reinforcement Learning
Efron-Stein PAC-Bayesian Inequalities
Emphatic TD Bellman Operator is a Contraction
Empowering Clinicians with Medical Decision Transformers: A Framework for Sepsis Treatment
Enhancing Offline Model-Based RL via Active Model Selection: A Bayesian Optimization Perspective
Generalized Emphatic Temporal Difference Learning: Bias-Variance Analysis
Expected Sarsa(λ) with Control Variate for Variance Reduction
Finite Sample Analysis of Minimax Offline Reinforcement Learning: Completeness, Fast Rates and First-Order Efficiency
HOPE: Human-Centric Off-Policy Evaluation for E-Learning and Healthcare
