
Off-policy evaluation

Off-policy Evaluation (OPE), or offline evaluation more generally, estimates the performance of hypothetical policies using only offline log data. It is particularly useful in applications where online interaction is high-stakes or expensive, such as precision medicine and recommender systems.
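
To make the setting concrete, the sketch below estimates a target policy's value from logged bandit data with the inverse propensity scoring (IPS) estimator, a standard OPE baseline. This is a minimal illustration in plain NumPy under assumed names (ips_value and all variables are hypothetical for this example, not an API from any paper listed below).

    import numpy as np

    def ips_value(rewards, pi0_probs, pie_probs):
        # Importance weight per logged interaction: pi_e(a|x) / pi_0(a|x).
        # The weighted mean of rewards is unbiased for the target policy's
        # value when the logging policy pi_0 puts positive probability on
        # every action the target policy pi_e can take.
        w = pie_probs / pi0_probs
        return float(np.mean(w * rewards))

    # Synthetic logged data: a uniform logging policy over two actions.
    rng = np.random.default_rng(0)
    n = 10_000
    actions = rng.integers(0, 2, size=n)
    rewards = rng.binomial(1, np.where(actions == 1, 0.7, 0.3))
    pi0_probs = np.full(n, 0.5)                   # pi_0(a|x) of logged action
    pie_probs = np.where(actions == 1, 0.9, 0.1)  # pi_e(a|x) of logged action

    # Expected value under pi_e is 0.9*0.7 + 0.1*0.3 = 0.66.
    print(ips_value(rewards, pi0_probs, pie_probs))

IPS trades bias for variance: it is unbiased under full support, but the importance weights blow up when the logging policy rarely takes actions the target policy favors, which is the failure mode that doubly robust and embedding-based estimators among the papers below aim to mitigate.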

Papers

Showing 131-140 of 265 papers

Title | Status | Hype
Bellman Residual Orthogonalization for Offline Reinforcement Learning | - | 0
Off-Policy Evaluation in Embedded Spaces | - | 0
Off-Policy Evaluation with Policy-Dependent Optimization Response | - | 0
A Multi-Agent Reinforcement Learning Framework for Off-Policy Evaluation in Two-sided Markets | Code | 0
Doubly Robust Distributionally Robust Off-Policy Evaluation and Learning | Code | 1
Off-Policy Evaluation for Large Action Spaces via Embeddings | Code | 2
Off-Policy Fitted Q-Evaluation with Differentiable Function Approximators: Z-Estimation and Inference Theory | - | 0
Doubly Robust Off-Policy Evaluation for Ranking Policies under the Cascade Behavior Model | Code | 2
Generalizing Off-Policy Evaluation From a Causal Perspective For Sequential Decision-Making | - | 0
On Well-posedness and Minimax Optimal Rates of Nonparametric Q-function Estimation in Off-policy Evaluation | - | 0

No leaderboard results yet.