
Off-policy evaluation

Off-Policy Evaluation (OPE), or offline evaluation more generally, estimates the performance of hypothetical policies using only offline log data. It is particularly useful in applications where online interaction is high-stakes and expensive, such as precision medicine and recommender systems.
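As a minimal illustration of what an OPE estimator does, below is a sketch of inverse propensity scoring (IPS), the textbook baseline: logged rewards are reweighted by the ratio of the evaluation policy's action probabilities to the logging policy's. This is a generic sketch, not the method of any paper listed here; the function name and the toy numbers are illustrative.

```python
import numpy as np

def ips_estimate(rewards, logging_probs, eval_probs):
    """IPS estimate of an evaluation policy's value from logged bandit data.

    rewards       : observed rewards r_i collected under the logging policy pi_0
    logging_probs : pi_0(a_i | x_i), propensity of each logged action
    eval_probs    : pi_e(a_i | x_i), evaluation policy's probability of that action
    """
    # Importance weights correct for the mismatch between pi_e and pi_0.
    weights = np.asarray(eval_probs) / np.asarray(logging_probs)
    # Unbiased for the value of pi_e when pi_0 > 0 wherever pi_e > 0 (overlap).
    return float(np.mean(weights * np.asarray(rewards)))

# Hypothetical logged data from four bandit rounds (illustrative values only).
rewards       = [1.0, 0.0, 1.0, 0.0]
logging_probs = [0.5, 0.25, 0.5, 0.25]
eval_probs    = [0.8, 0.10, 0.8, 0.10]
print(ips_estimate(rewards, logging_probs, eval_probs))  # -> 0.8
```

Many of the papers below refine this basic recipe, e.g. with control variates, distribution ratio estimation, or doubly robust corrections, to reduce the variance that plain importance weighting incurs.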

Papers

Showing 51–60 of 265 papers

| Title | Status | Hype |
| --- | --- | --- |
| Control Variates for Slate Off-Policy Evaluation | Code | 0 |
| Harnessing Distribution Ratio Estimators for Learning Agents with Quality and Diversity | Code | 0 |
| Adaptive Estimator Selection for Off-Policy Evaluation | Code | 0 |
| Counterfactual-Augmented Importance Sampling for Semi-Offline Policy Evaluation | Code | 0 |
| Distributional Off-Policy Evaluation for Slate Recommendations | Code | 0 |
| Causal Deepsets for Off-policy Evaluation under Spatial or Spatio-temporal Interferences | Code | 0 |
| Batch Stationary Distribution Estimation | Code | 0 |
| Counterfactual Learning with Multioutput Deep Kernels | Code | 0 |
| Distributional Off-policy Evaluation with Bellman Residual Minimization | Code | 0 |
| Deeply-Debiased Off-Policy Interval Estimation | Code | 0 |
