
Off-policy evaluation

Off-Policy Evaluation (OPE), or offline evaluation more generally, estimates the performance of a hypothetical policy using only logged data collected under a different policy. It is particularly useful in applications where online interaction is high-stakes or expensive, such as precision medicine and recommender systems.
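
To ground the definition, below is a minimal sketch of inverse propensity scoring (IPS), one of the simplest OPE estimators: it reweights logged rewards by how much more (or less) likely the target policy was to take each logged action. The function name, array names, and the synthetic two-action bandit are illustrative assumptions, not taken from any paper on this page.

```python
import numpy as np

def ips_estimate(rewards, logging_probs, target_probs):
    """Inverse propensity scoring (IPS) estimate of a target policy's value.

    rewards[i]       : observed reward for logged interaction i
    logging_probs[i] : probability the logging policy assigned to the logged action
    target_probs[i]  : probability the target policy assigns to that same action
    """
    weights = target_probs / logging_probs  # importance weights
    return float(np.mean(weights * rewards))

# Synthetic illustration: two actions, a uniform logging policy,
# and a target policy that plays action 1 with probability 0.9.
rng = np.random.default_rng(0)
n = 100_000
actions = rng.integers(0, 2, size=n)                      # logged uniformly at random
rewards = rng.binomial(1, np.where(actions == 1, 0.7, 0.3))
logging_probs = np.full(n, 0.5)
target_probs = np.where(actions == 1, 0.9, 0.1)           # target policy's prob of the logged action

print(ips_estimate(rewards, logging_probs, target_probs))
# ≈ 0.66, the target policy's true value: 0.9 * 0.7 + 0.1 * 0.3
```

IPS is unbiased when the logging propensities are known and nonzero wherever the target policy has support, but its variance grows with the mismatch between the two policies; several of the papers listed below (e.g., the doubly robust and shrinkage estimators) address exactly that variance problem.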

Papers

Showing 31–40 of 265 papers

Title | Status | Hype
DOLCE: Decomposing Off-Policy Evaluation/Learning into Lagged and Current Effects | Code | 0
Doubly robust off-policy evaluation with shrinkage | Code | 0
Deeply-Debiased Off-Policy Interval Estimation | Code | 0
A Minimax Learning Approach to Off-Policy Evaluation in Confounded Partially Observable Markov Decision Processes | Code | 0
Deep Proxy Causal Learning and its Application to Confounded Bandit Policy Evaluation | Code | 0
RoME: A Robust Mixed-Effects Bandit Algorithm for Optimizing Mobile Health Interventions | Code | 0
Cross-Validated Off-Policy Evaluation | Code | 0
Deep Jump Learning for Off-Policy Evaluation in Continuous Treatment Settings | Code | 0
Distributional Off-Policy Evaluation for Slate Recommendations | Code | 0
Causal Deepsets for Off-policy Evaluation under Spatial or Spatio-temporal Interferences | Code | 0
Page 4 of 27

Leaderboard

No leaderboard results yet.