
Off-policy evaluation

Off-policy evaluation (OPE), or offline evaluation more generally, estimates the performance of hypothetical policies using only offline log data. It is particularly useful in applications where online interaction is high-stakes or expensive, such as precision medicine and recommender systems.
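To make the setup concrete, below is a minimal sketch of the classic inverse propensity scoring (IPS) estimator for contextual bandits, one of the simplest OPE estimators. The function name and the toy data are illustrative assumptions, not taken from any paper listed here.

```python
import numpy as np

def ips_estimate(rewards, logging_propensities, target_propensities):
    """Inverse propensity scoring (IPS) estimate of a target policy's value.

    rewards:              observed rewards r_i from the logged data
    logging_propensities: pi_0(a_i | x_i), probability the logging policy
                          assigned to the action it actually took
    target_propensities:  pi(a_i | x_i), probability the target policy
                          would assign to that same logged action
    """
    # Reweight each logged reward by how much more (or less) likely
    # the target policy is to take the logged action than the logger was.
    weights = target_propensities / logging_propensities
    return float(np.mean(weights * rewards))

# Toy logged bandit data (illustrative values only).
rewards = np.array([1.0, 0.0, 1.0, 0.0])
logging_propensities = np.array([0.5, 0.5, 0.25, 0.25])
target_propensities = np.array([0.8, 0.2, 0.5, 0.1])

print(ips_estimate(rewards, logging_propensities, target_propensities))
```

IPS is unbiased when the logging propensities are known and nonzero wherever the target policy has support, but its variance can blow up when the two policies diverge; reducing that variance is the focus of several of the papers listed below.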

Papers

Showing 251–265 of 265 papers

Title | Status | Hype
Post Reinforcement Learning Inference | Code | 0
Local Metric Learning for Off-Policy Evaluation in Contextual Bandits with Continuous Actions | Code | 0
Universal Off-Policy Evaluation | Code | 0
Logarithmic Smoothing for Pessimistic Off-Policy Evaluation, Selection and Learning | Code | 0
Long-term Off-Policy Evaluation and Learning | Code | 0
Predictive Performance Comparison of Decision Policies Under Confounding | Code | 0
Low Variance Off-policy Evaluation with State-based Importance Sampling | Code | 0
Marginal Density Ratio for Off-Policy Evaluation in Contextual Bandits | Code | 0
Two-way Deconfounder for Off-policy Evaluation in Causal Reinforcement Learning | Code | 0
SOPE: Spectrum of Off-Policy Estimators | Code | 0
When is Off-Policy Evaluation (Reward Modeling) Useful in Contextual Bandits? A Data-Centric Perspective | Code | 0
Proximal Reinforcement Learning: Efficient Off-Policy Evaluation in Partially Observed Markov Decision Processes | Code | 0
Causal Deepsets for Off-policy Evaluation under Spatial or Spatio-temporal Interferences | Code | 0
Off-Policy Evaluation of Slate Bandit Policies via Optimizing Abstraction | Code | 0
State-Action Similarity-Based Representations for Off-Policy Evaluation | Code | 0
