
Off-policy evaluation

Off-policy evaluation (OPE), or offline evaluation more generally, estimates the performance of hypothetical policies using only offline log data. It is particularly useful in applications where online interaction is high-stakes and expensive, such as precision medicine and recommender systems.
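
As an illustration (not taken from any paper listed below), the simplest OPE estimator is inverse propensity scoring (IPS), which reweights each logged reward by the ratio of the target policy's action probability to the logging policy's. Below is a minimal sketch, assuming a contextual-bandit log where each round records the observed reward and both policies' propensities for the logged action; the function name `ips_estimate` and the toy data are hypothetical:

```python
import numpy as np

def ips_estimate(rewards, logging_probs, target_probs):
    """Inverse propensity scoring (IPS) estimate of a target policy's value.

    rewards:        observed rewards for the logged actions
    logging_probs:  probability the logging policy assigned to each logged action
    target_probs:   probability the target (hypothetical) policy assigns to the
                    same logged action in the same context
    """
    weights = target_probs / logging_probs       # importance weights
    return float(np.mean(weights * rewards))     # unbiased if logging_probs > 0
                                                 # wherever target_probs > 0

# Hypothetical toy log: five rounds of a two-action bandit.
rewards = np.array([1.0, 0.0, 1.0, 1.0, 0.0])
logging_probs = np.array([0.5, 0.5, 0.8, 0.2, 0.5])
target_probs = np.array([0.9, 0.1, 0.9, 0.9, 0.1])
print(ips_estimate(rewards, logging_probs, target_probs))
```

IPS trades bias for variance: the estimate is unbiased under full support of the logging policy, but the importance weights can blow up its variance, which is what motivates the weight-clipping, self-normalized, and doubly robust variants studied in this literature.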

Papers

Showing 21–30 of 265 papers (page 3 of 27)

| Title | Status | Hype |
|---|---|---|
| Primal-Dual Spectral Representation for Off-policy Evaluation | | 0 |
| Abstract Reward Processes: Leveraging State Abstraction for Consistent Off-Policy Evaluation | Code | 0 |
| Development and Validation of Heparin Dosing Policies Using an Offline Reinforcement Learning Algorithm | | 0 |
| Designing an Interpretable Interface for Contextual Bandits | | 0 |
| Limit Order Book Simulation and Trade Evaluation with K-Nearest-Neighbor Resampling | | 0 |
| IntOPE: Off-Policy Evaluation in the Presence of Interference | | 0 |
| Effective Off-Policy Evaluation and Learning in Contextual Combinatorial Bandits | | 0 |
| Empowering Clinicians with Medical Decision Transformers: A Framework for Sepsis Treatment | | 0 |
| Causal Deepsets for Off-policy Evaluation under Spatial or Spatio-temporal Interferences | Code | 0 |
| Balancing Immediate Revenue and Future Off-Policy Evaluation in Coupon Allocation | | 0 |

Leaderboard

No leaderboard results yet.