
Off-policy evaluation

Off-policy Evaluation (OPE), or offline evaluation more generally, estimates the performance of hypothetical policies using only offline log data. It is particularly useful in applications where online interaction is high-stakes or expensive, such as precision medicine and recommender systems.
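
The idea can be made concrete with inverse propensity scoring (IPS), one of the simplest OPE estimators. The sketch below is illustrative only and not drawn from any paper on this page; the behavior policy, target policy, and synthetic log are all hypothetical, and the example uses a context-free bandit for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

n_actions = 5
n_logs = 10_000

# Hypothetical logged data: actions drawn from a uniform behavior policy.
behavior_probs = np.full(n_actions, 1.0 / n_actions)
actions = rng.integers(0, n_actions, size=n_logs)

# Synthetic binary rewards: action 3 has the highest success rate.
rewards = rng.binomial(1, 0.2 + 0.1 * (actions == 3))

# Hypothetical target policy to evaluate: mostly plays action 3.
target_probs = np.full(n_actions, 0.05)
target_probs[3] = 1.0 - 0.05 * (n_actions - 1)

# IPS estimate: reweight each logged reward by pi_target(a) / pi_behavior(a),
# so the offline log mimics data collected under the target policy.
weights = target_probs[actions] / behavior_probs[actions]
ips_value = np.mean(weights * rewards)

# Ground-truth value of the target policy under the synthetic reward model.
true_value = np.dot(target_probs, 0.2 + 0.1 * (np.arange(n_actions) == 3))

print(f"IPS estimate of target policy value: {ips_value:.3f}")
print(f"True target policy value:            {true_value:.3f}")
```

Doubly robust estimators, which appear in several titles below, combine this importance-weighting idea with a learned reward model to reduce variance while remaining unbiased when either component is correct.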

Papers

Showing 71–80 of 265 papers (page 8 of 27)

Title | Status | Hype
Off-policy Evaluation with Deeply-abstracted States | Code | 0
Future-Dependent Value-Based Off-Policy Evaluation in POMDPs | Code | 0
Harnessing Distribution Ratio Estimators for Learning Agents with Quality and Diversity | Code | 0
Distributional Off-Policy Evaluation for Slate Recommendations | Code | 0
Distributional Off-policy Evaluation with Bellman Residual Minimization | Code | 0
Causal Deepsets for Off-policy Evaluation under Spatial or Spatio-temporal Interferences | Code | 0
DOLCE: Decomposing Off-Policy Evaluation/Learning into Lagged and Current Effects | Code | 0
Intrinsically Efficient, Stable, and Bounded Off-Policy Evaluation for Reinforcement Learning | Code | 0
Doubly Robust Estimator for Off-Policy Evaluation with Large Action Spaces | Code | 0
Doubly Robust Kernel Statistics for Testing Distributional Treatment Effects | Code | 0

Leaderboard

No leaderboard results yet.