
Off-policy evaluation

Off-Policy Evaluation (OPE), or offline evaluation more generally, estimates the performance of a hypothetical policy using only logged data collected by a different policy. It is particularly useful in applications where online interaction is high-stakes or expensive, such as precision medicine and recommender systems.
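A common starting point for OPE is the inverse propensity scoring (IPS) estimator, which reweights logged rewards by the ratio of the target policy's action probabilities to the logging policy's propensities. The sketch below is a minimal illustration assuming logged contextual-bandit feedback with known propensities; the function names and synthetic data are hypothetical and not taken from any paper listed here.

```python
import numpy as np

def ips_estimate(rewards, actions, contexts, logging_propensities, target_policy):
    """IPS estimate of a target policy's value from logged bandit feedback.

    target_policy(x, a) must return the probability the target policy
    assigns to action a in context x.
    """
    target_probs = np.array(
        [target_policy(x, a) for x, a in zip(contexts, actions)]
    )
    # Importance weight: pi_target(a|x) / pi_logging(a|x).
    weights = target_probs / np.asarray(logging_propensities)
    return float(np.mean(weights * np.asarray(rewards)))

# Synthetic check: logs from a uniform-random policy over 3 actions,
# where action 0 always yields reward 1 and the others yield 0.
rng = np.random.default_rng(0)
n, n_actions = 10_000, 3
contexts = rng.normal(size=(n, 2))
actions = rng.integers(n_actions, size=n)
propensities = np.full(n, 1.0 / n_actions)
rewards = (actions == 0).astype(float)

# Hypothetical target policy that always plays action 0; its true value is 1.
always_zero = lambda x, a: 1.0 if a == 0 else 0.0
print(ips_estimate(rewards, actions, contexts, propensities, always_zero))  # ~1.0
```

IPS is unbiased when the logging propensities are known and non-zero for every action the target policy can take, but its variance grows with the mismatch between the two policies; much of the work listed below targets exactly this bias-variance trade-off.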

Papers

Showing 251–265 of 265 papers (page 11 of 11)

Title | Status | Hype
Subgaussian and Differentiable Importance Sampling for Off-Policy Evaluation and Learning | Code | 0
Learning Action Embeddings for Off-Policy Evaluation | Code | 0
Balanced Off-Policy Evaluation for Personalized Pricing | Code | 0
Leveraging Factored Action Spaces for Off-Policy Evaluation | Code | 0
Off-policy evaluation for slate recommendation | Code | 0
Local Metric Learning for Off-Policy Evaluation in Contextual Bandits with Continuous Actions | Code | 0
Supervised Off-Policy Ranking | Code | 0
Logarithmic Smoothing for Pessimistic Off-Policy Evaluation, Selection and Learning | Code | 0
Long-term Off-Policy Evaluation and Learning | Code | 0
Off-policy Evaluation in Doubly Inhomogeneous Environments | Code | 0
Low Variance Off-policy Evaluation with State-based Importance Sampling | Code | 0
Marginal Density Ratio for Off-Policy Evaluation in Contextual Bandits | Code | 0
Batch Stationary Distribution Estimation | Code | 0
Policy-Adaptive Estimator Selection for Off-Policy Evaluation | Code | 0
Post Reinforcement Learning Inference | Code | 0

No leaderboard results yet.