SOTAVerified

Off-policy evaluation

Off-policy evaluation (OPE), or offline evaluation more generally, estimates the performance of hypothetical policies using only offline log data. It is particularly useful in applications where online interaction is high-stakes or expensive, such as precision medicine and recommender systems.
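To make the idea concrete, here is a minimal sketch of one classic OPE estimator, inverse propensity scoring (IPS), on a synthetic bandit log. All policy values and data below are made up for illustration; the behavior policy is assumed to be known, as IPS requires.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_actions = 10_000, 3

# Logged data: actions drawn from a known behavior policy pi_b,
# with binary rewards whose true means are hidden from the estimator.
pi_b = np.array([0.5, 0.3, 0.2])          # behavior (logging) policy
actions = rng.choice(n_actions, size=n, p=pi_b)
true_reward = np.array([0.1, 0.5, 0.9])   # unknown mean reward per action
rewards = rng.binomial(1, true_reward[actions])

# Target policy pi_e whose value we want to estimate offline.
pi_e = np.array([0.1, 0.2, 0.7])

# IPS: reweight each logged reward by pi_e(a) / pi_b(a).
weights = pi_e[actions] / pi_b[actions]
v_ips = np.mean(weights * rewards)

# Ground-truth value of pi_e, for comparison (unavailable in practice).
v_true = float(pi_e @ true_reward)
print(f"IPS estimate: {v_ips:.3f}, true value: {v_true:.3f}")
```

The estimator is unbiased when the behavior policy is known and covers every action the target policy can take, but its variance grows with the mismatch between the two policies.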

Papers

Showing 171–180 of 265 papers

Title | Status | Hype
Limit Order Book Simulation and Trade Evaluation with K-Nearest-Neighbor Resampling | | 0
Logarithmic Neyman Regret for Adaptive Estimation of the Average Treatment Effect | | 0
Loss Functions for Discrete Contextual Pricing with Observational Data | | 0
Marginalized Operators for Off-policy Reinforcement Learning | | 0
Markovian Interference in Experiments | | 0
Methodology for Interpretable Reinforcement Learning for Optimizing Mechanical Ventilation | | 0
Minimax Value Interval for Off-Policy Evaluation and Policy Optimization | | 0
Minimax Model Learning | | 0
Minimax Off-Policy Evaluation for Multi-Armed Bandits | | 0
Minimax-Optimal Off-Policy Evaluation with Linear Function Approximation | | 0
Page 18 of 27

No leaderboard results yet.