
Off-policy evaluation

Off-policy Evaluation (OPE), or offline evaluation more generally, estimates the performance of hypothetical policies using only logged offline data (see the sketch below). It is particularly useful in applications where online interaction is high-stakes or expensive, such as precision medicine and recommender systems.

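As a concrete illustration of the basic idea, here is a minimal sketch of the classic inverse propensity scoring (IPS) estimator for contextual-bandit OPE, along with its self-normalized variant. The log format (per-record reward, logging propensity, target propensity), the function names, and all numbers are illustrative assumptions, not the method of any particular paper listed below.

```python
import numpy as np

def ips_estimate(rewards, logging_probs, target_probs):
    """Inverse propensity scoring (IPS) estimate of a target policy's value.

    rewards[i]       : reward observed for the logged action in record i
    logging_probs[i] : probability the logging policy assigned to that action
    target_probs[i]  : probability the target (evaluation) policy assigns
                       to the same action in the same context
    """
    weights = target_probs / logging_probs     # importance weights
    return float(np.mean(weights * rewards))   # unbiased given full support

def snips_estimate(rewards, logging_probs, target_probs):
    """Self-normalized IPS: lower variance at the cost of a small bias."""
    weights = target_probs / logging_probs
    return float(np.sum(weights * rewards) / np.sum(weights))

# Toy log (made-up numbers): a uniform logging policy over two actions,
# evaluated against a hypothetical target policy with different propensities.
rewards       = np.array([1.0, 0.0, 1.0, 0.0, 1.0])
logging_probs = np.array([0.5, 0.5, 0.5, 0.5, 0.5])
target_probs  = np.array([0.9, 0.1, 0.1, 0.9, 0.1])

print(ips_estimate(rewards, logging_probs, target_probs))    # ~0.44
print(snips_estimate(rewards, logging_probs, target_probs))  # ~0.52
```

Both estimators only reweight logged rewards, so they require the logging policy to have non-zero probability on every action the target policy can take; much of the literature listed below is about reducing the variance of these weights or relaxing that support assumption.
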
Papers

Showing 141–150 of 265 papers

| Title | Status | Hype |
| --- | --- | --- |
| Off-Policy Evaluation Using Information Borrowing and Context-Based Switching | Code | 0 |
| Optimal discharge of patients from intensive care via a data-driven policy learning framework | | 0 |
| BCORLE(λ): An Offline Reinforcement Learning and Evaluation Framework for Coupons Allocation in E-commerce Market | Code | 1 |
| Weighted model estimation for offline model-based reinforcement learning | | 0 |
| Subgaussian and Differentiable Importance Sampling for Off-Policy Evaluation and Learning | Code | 0 |
| Loss Functions for Discrete Contextual Pricing with Observational Data | | 0 |
| A Minimax Learning Approach to Off-Policy Evaluation in Confounded Partially Observable Markov Decision Processes | Code | 0 |
| SOPE: Spectrum of Off-Policy Estimators | Code | 0 |
| Proximal Reinforcement Learning: Efficient Off-Policy Evaluation in Partially Observed Markov Decision Processes | Code | 0 |
| Towards Hyperparameter-free Policy Selection for Offline Reinforcement Learning | Code | 0 |

Leaderboard

No leaderboard results yet.