SOTAVerified

Off-policy evaluation

Off-policy Evaluation (OPE), or offline evaluation in general, evaluates the performance of hypothetical policies using only offline log data. It is particularly useful in applications where online interaction is high-stakes or expensive, such as precision medicine and recommender systems.
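As a concrete illustration (not part of this page), the most basic OPE estimator for contextual bandit logs is inverse propensity scoring (IPS): each logged reward is reweighted by the ratio of the target policy's action probability to the logging (behavior) policy's. The sketch below assumes a hypothetical log format of `(context, action, reward, behavior_prob)` tuples; the names `logs` and `target_policy` are illustrative.

```python
def ips_estimate(logs, target_policy):
    """Inverse propensity scoring (IPS) off-policy value estimate.

    logs: iterable of (context, action, reward, behavior_prob) tuples,
          where behavior_prob is the probability the logging policy
          assigned to the logged action.
    target_policy: function (context, action) -> probability that the
          policy being evaluated takes `action` in `context`.
    """
    logs = list(logs)
    total = 0.0
    for context, action, reward, behavior_prob in logs:
        # Importance weight: how much more (or less) likely the target
        # policy is to take the logged action than the behavior policy.
        weight = target_policy(context, action) / behavior_prob
        total += weight * reward
    return total / len(logs)


# Usage: logs collected by a uniform-random behavior policy over two
# actions (behavior_prob = 0.5), evaluating a deterministic policy that
# always picks action == context.
logs = [
    (0, 0, 1.0, 0.5),
    (0, 1, 0.0, 0.5),
    (1, 1, 1.0, 0.5),
    (1, 0, 0.0, 0.5),
]
matching_policy = lambda c, a: 1.0 if a == c else 0.0
print(ips_estimate(logs, matching_policy))  # -> 1.0
```

IPS is unbiased when the behavior probabilities are known and nonzero wherever the target policy has support, but its variance grows with the mismatch between the two policies; many papers listed below (doubly robust, distribution-ratio, and DICE-style methods) address exactly that variance problem.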

Papers

Showing 226–250 of 265 papers

Title | Status | Hype
State Relevance for Off-Policy Evaluation | Code | 0
Off-Policy Evaluation and Learning for External Validity under a Covariate Shift | Code | 0
Counterfactual Evaluation of Peer-Review Assignment Policies | Code | 0
A Multi-Agent Reinforcement Learning Framework for Off-Policy Evaluation in Two-sided Markets | Code | 0
Safe Exploration for Optimizing Contextual Bandits | Code | 0
Off-policy Evaluation with Deeply-abstracted States | Code | 0
From Importance Sampling to Doubly Robust Policy Gradient | Code | 0
Future-Dependent Value-Based Off-Policy Evaluation in POMDPs | Code | 0
Off-Policy Evaluation for Action-Dependent Non-Stationary Environments | Code | 0
On (Normalised) Discounted Cumulative Gain as an Off-Policy Evaluation Metric for Top-n Recommendation | Code | 0
Hallucinated Adversarial Control for Conservative Offline Policy Evaluation | Code | 0
Harnessing Distribution Ratio Estimators for Learning Agents with Quality and Diversity | Code | 0
Hindsight-DICE: Stable Credit Assignment for Deep Reinforcement Learning | Code | 0
When is Off-Policy Evaluation (Reward Modeling) Useful in Contextual Bandits? A Data-Centric Perspective | Code | 0
Human Choice Prediction in Language-based Persuasion Games: Simulation-based Off-Policy Evaluation | Code | 0
Conformal Off-policy Prediction | Code | 0
On the Reuse Bias in Off-Policy Reinforcement Learning | Code | 0
Importance Sampling Policy Evaluation with an Estimated Behavior Policy | Code | 0
Variational Latent Branching Model for Off-Policy Evaluation | Code | 0
Abstract Reward Processes: Leveraging State Abstraction for Consistent Off-Policy Evaluation | Code | 0
Towards Hyperparameter-free Policy Selection for Offline Reinforcement Learning | Code | 0
Strictly Batch Imitation Learning by Energy-based Distribution Matching | Code | 0
Intrinsically Efficient, Stable, and Bounded Off-Policy Evaluation for Reinforcement Learning | Code | 0
Kernel Metric Learning for In-Sample Off-Policy Evaluation of Deterministic RL Policies | Code | 0
K-Nearest-Neighbor Resampling for Off-Policy Evaluation in Stochastic Control | Code | 0
Page 10 of 11

No leaderboard results yet.