SOTAVerified

Off-policy evaluation

Off-Policy Evaluation (OPE), or offline evaluation more generally, estimates the performance of hypothetical policies using only offline log data. It is particularly useful in applications where online interaction is high-stakes or expensive, such as precision medicine and recommender systems.
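As a concrete illustration of the idea (not taken from any listed paper), a minimal sketch of the classic inverse propensity scoring (IPS) estimator for a contextual-bandit setting: logged rewards collected under a behavior policy are reweighted by the ratio of target to behavior action probabilities, yielding an unbiased estimate of the target policy's value. The policies and rewards below are hypothetical synthetic data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical logged bandit data: the behavior policy pi_b chose actions
# and we recorded the resulting binary rewards along with propensities.
n_actions = 3
behavior_probs = np.array([0.5, 0.3, 0.2])   # logging policy pi_b (assumed known)
target_probs = np.array([0.2, 0.3, 0.5])     # hypothetical target policy pi_e

n = 100_000
actions = rng.choice(n_actions, size=n, p=behavior_probs)
true_reward = np.array([0.1, 0.5, 0.8])      # expected reward per action (synthetic)
rewards = rng.binomial(1, true_reward[actions])

# IPS: reweight each logged reward by pi_e(a) / pi_b(a).
weights = target_probs[actions] / behavior_probs[actions]
ips_estimate = float(np.mean(weights * rewards))

# Ground-truth value of pi_e, available here only because the data is synthetic.
true_value = float(target_probs @ true_reward)
print(f"IPS estimate: {ips_estimate:.4f}, true value: {true_value:.4f}")
```

With 100k logged samples the IPS estimate lands close to the true policy value; its variance grows as the target and behavior policies diverge, which motivates the doubly robust, self-normalized, and shrinkage estimators that appear throughout the list below.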

Papers

Showing 201–250 of 265 papers

Title | Status | Hype
Distributional Off-Policy Evaluation for Slate Recommendations | Code | 0
Distributional Off-policy Evaluation with Bellman Residual Minimization | Code | 0
Robust Generalization despite Distribution Shift via Minimum Discriminating Information | Code | 0
DOLCE: Decomposing Off-Policy Evaluation/Learning into Lagged and Current Effects | Code | 0
Robust Offline Reinforcement Learning with Heavy-Tailed Rewards | Code | 0
Double Reinforcement Learning for Efficient Off-Policy Evaluation in Markov Decision Processes | Code | 0
Variational Latent Branching Model for Off-Policy Evaluation | Code | 0
Off-Policy Evaluation with Out-of-Sample Guarantees | Code | 0
Counterfactual Learning with Multioutput Deep Kernels | Code | 0
Doubly Robust Estimator for Off-Policy Evaluation with Large Action Spaces | Code | 0
Doubly Robust Kernel Statistics for Testing Distributional Treatment Effects | Code | 0
Counterfactual Evaluation of Peer-Review Assignment Policies | Code | 0
Doubly robust off-policy evaluation with shrinkage | Code | 0
Towards Automatic Evaluation of Dialog Systems: A Model-Free Off-Policy Evaluation Approach | Code | 0
Conformal Off-policy Prediction | Code | 0
Efficient and Sharp Off-Policy Evaluation in Robust Markov Decision Processes | Code | 0
Safe Exploration for Optimizing Contextual Bandits | Code | 0
Towards Hyperparameter-free Policy Selection for Offline Reinforcement Learning | Code | 0
Adaptive Estimator Selection for Off-Policy Evaluation | Code | 0
Off-Policy Evaluation and Learning for External Validity under a Covariate Shift | Code | 0
On (Normalised) Discounted Cumulative Gain as an Off-Policy Evaluation Metric for Top-n Recommendation | Code | 0
Strictly Batch Imitation Learning by Energy-based Distribution Matching | Code | 0
Off-Policy Evaluation for Action-Dependent Non-Stationary Environments | Code | 0
Counterfactual-Augmented Importance Sampling for Semi-Offline Policy Evaluation | Code | 0
On the Reuse Bias in Off-Policy Reinforcement Learning | Code | 0
Subgaussian and Differentiable Importance Sampling for Off-Policy Evaluation and Learning | Code | 0
Off-policy Evaluation with Deeply-abstracted States | Code | 0
From Importance Sampling to Doubly Robust Policy Gradient | Code | 0
Future-Dependent Value-Based Off-Policy Evaluation in POMDPs | Code | 0
Confident Off-Policy Evaluation and Selection through Self-Normalized Importance Weighting | Code | 0
Abstract Reward Processes: Leveraging State Abstraction for Consistent Off-Policy Evaluation | Code | 0
Hallucinated Adversarial Control for Conservative Offline Policy Evaluation | Code | 0
Harnessing Distribution Ratio Estimators for Learning Agents with Quality and Diversity | Code | 0
Hindsight-DICE: Stable Credit Assignment for Deep Reinforcement Learning | Code | 0
Supervised Off-Policy Ranking | Code | 0
Human Choice Prediction in Language-based Persuasion Games: Simulation-based Off-Policy Evaluation | Code | 0
Optimal and Adaptive Off-policy Evaluation in Contextual Bandits | Code | 0
Balanced Off-Policy Evaluation for Personalized Pricing | Code | 0
Importance Sampling Policy Evaluation with an Estimated Behavior Policy | Code | 0
A Minimax Learning Approach to Off-Policy Evaluation in Confounded Partially Observable Markov Decision Processes | Code | 0
Semi-Parametric Efficient Policy Learning with Continuous Actions | Code | 0
Balanced off-policy evaluation in general action spaces | Code | 0
Off-policy evaluation for slate recommendation | Code | 0
Intrinsically Efficient, Stable, and Bounded Off-Policy Evaluation for Reinforcement Learning | Code | 0
Kernel Metric Learning for In-Sample Off-Policy Evaluation of Deterministic RL Policies | Code | 0
K-Nearest-Neighbor Resampling for Off-Policy Evaluation in Stochastic Control | Code | 0
Policy-Adaptive Estimator Selection for Off-Policy Evaluation | Code | 0
Learning Action Embeddings for Off-Policy Evaluation | Code | 0
Off-policy Evaluation in Doubly Inhomogeneous Environments | Code | 0
Leveraging Factored Action Spaces for Off-Policy Evaluation | Code | 0
Page 5 of 6

No leaderboard results yet.