
Off-policy evaluation

Off-policy evaluation (OPE), or offline evaluation more generally, estimates the performance of hypothetical policies using only offline log data. It is particularly useful in applications where online interaction is high-stakes and expensive, such as precision medicine and recommender systems.
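
To make the setting concrete, the sketch below shows the classic inverse propensity scoring (IPS) estimator for contextual bandit logs, one of the simplest OPE methods. It is an illustrative example, not the method of any paper listed below; the function and variable names are hypothetical.

```python
import numpy as np

def ips_estimate(rewards, logging_probs, target_probs):
    """IPS estimate of a target policy's value from logged bandit data.

    rewards       -- observed rewards r_i from the logged interactions
    logging_probs -- pi_0(a_i | x_i): probability the logging policy chose a_i
    target_probs  -- pi_e(a_i | x_i): probability the target policy would choose a_i
    """
    # Reweight each logged reward by how much more (or less) likely the
    # target policy is to take the logged action than the logging policy was.
    weights = np.asarray(target_probs) / np.asarray(logging_probs)
    return float(np.mean(weights * np.asarray(rewards)))

# Toy log of four interactions (hypothetical numbers).
rewards = [1.0, 0.0, 1.0, 0.0]
logging_probs = [0.5, 0.25, 0.5, 0.25]
target_probs = [0.9, 0.05, 0.9, 0.05]

print(ips_estimate(rewards, logging_probs, target_probs))  # 0.9
```

IPS is unbiased when the logging propensities are known and nonzero for every action the target policy might take, but its variance grows with the importance weights; much of the literature below develops lower-variance or more robust alternatives.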

Papers

Showing 101–150 of 265 papers

Title | Status | Hype
Off-Policy Evaluation for Large Action Spaces via Conjunct Effect Modeling | - | 0
Learning Action Embeddings for Off-Policy Evaluation | Code | 0
Conformal Off-Policy Evaluation in Markov Decision Processes | - | 0
On the Sample Complexity of Vanilla Model-Based Offline Reinforcement Learning with Dependent Samples | - | 0
Hallucinated Adversarial Control for Conservative Offline Policy Evaluation | Code | 0
Balanced Off-Policy Evaluation for Personalized Pricing | Code | 0
HOPE: Human-Centric Off-Policy Evaluation for E-Learning and Healthcare | - | 0
Post Reinforcement Learning Inference | Code | 0
STEEL: Singularity-aware Reinforcement Learning | - | 0
Variational Latent Branching Model for Off-Policy Evaluation | Code | 0
Off-Policy Evaluation for Action-Dependent Non-Stationary Environments | Code | 0
Off-Policy Evaluation with Out-of-Sample Guarantees | Code | 0
Inference on Time Series Nonparametric Conditional Moment Restrictions Using General Sieves | - | 0
An Instrumental Variable Approach to Confounded Off-Policy Evaluation | - | 0
Quantile Off-Policy Evaluation via Deep Conditional Generative Learning | - | 0
Offline Reinforcement Learning for Human-Guided Human-Machine Interaction with Private Information | - | 0
Safe Evaluation For Offline Learning: Are We Ready To Deploy? | - | 0
Scaling Marginalized Importance Sampling to High-Dimensional State-Spaces via State Abstraction | - | 0
A Review of Off-Policy Evaluation in Reinforcement Learning | - | 0
Doubly Robust Kernel Statistics for Testing Distributional Treatment Effects | Code | 0
Low Variance Off-policy Evaluation with State-based Importance Sampling | Code | 0
Counterfactual Learning with General Data-generating Policies | - | 0
Offline Policy Evaluation and Optimization under Confounding | - | 0
Policy-Adaptive Estimator Selection for Off-Policy Evaluation | Code | 0
Counterfactual Learning with Multioutput Deep Kernels | Code | 0
Bayesian Counterfactual Mean Embeddings and Off-Policy Evaluation | - | 0
Beyond the Return: Off-policy Function Estimation under User-specified Error-measuring Distributions | - | 0
Local Metric Learning for Off-Policy Evaluation in Contextual Bandits with Continuous Actions | Code | 0
Off-policy evaluation for learning-to-rank via interpolating the item-position model and the position-based model | - | 0
Off-Policy Evaluation for Episodic Partially Observable Markov Decision Processes under Non-Parametric Models | - | 0
Towards Robust Off-Policy Evaluation via Human Inputs | - | 0
Towards A Unified Policy Abstraction Theory and Representation Learning Approach in Markov Decision Processes | - | 0
On the Reuse Bias in Off-Policy Reinforcement Learning | Code | 0
Statistical Estimation of Confounded Linear MDPs: An Instrumental Variable Approach | - | 0
Future-Dependent Value-Based Off-Policy Evaluation in POMDPs | Code | 0
Conformal Off-policy Prediction | Code | 0
Conformal Off-Policy Prediction in Contextual Bandits | - | 0
Sample Complexity of Nonparametric Off-Policy Evaluation on Low-Dimensional Manifolds using Deep Networks | - | 0
Markovian Interference in Experiments | - | 0
Hybrid Value Estimation for Off-policy Evaluation and Offline Reinforcement Learning | - | 0
Counterfactual Analysis in Dynamic Latent State Models | - | 0
Scalable and Robust Self-Learning for Skill Routing in Large-Scale Conversational AI Systems | - | 0
Off-Policy Evaluation with Online Adaptation for Robot Exploration in Challenging Environments | - | 0
Model-Free and Model-Based Policy Evaluation when Causality is Uncertain | Code | 0
Marginalized Operators for Off-policy Reinforcement Learning | - | 0
Bellman Residual Orthogonalization for Offline Reinforcement Learning | - | 0
Off-Policy Evaluation in Embedded Spaces | - | 0
Off-Policy Evaluation with Policy-Dependent Optimization Response | - | 0
A Multi-Agent Reinforcement Learning Framework for Off-Policy Evaluation in Two-sided Markets | Code | 0
Off-Policy Fitted Q-Evaluation with Differentiable Function Approximators: Z-Estimation and Inference Theory | - | 0
