SOTAVerified

Policy Gradient Methods

Papers

Showing 276–300 of 382 papers

| Title | Status | Hype |
| --- | --- | --- |
| Variance Reduction for Policy-Gradient Methods via Empirical Variance Minimization | | 0 |
| Variance Reduction for Policy Gradient with Action-Dependent Factorized Baselines | | 0 |
| Variance Reduction for Reinforcement Learning in Input-Driven Environments | | 0 |
| Variance Reduction in Actor Critic Methods (ACM) | | 0 |
| When Do Off-Policy and On-Policy Policy Gradient Methods Align? | | 0 |
| Diversity-Inducing Policy Gradient: Using Maximum Mean Discrepancy to Find a Set of Diverse Policies | | 0 |
| Zeroth-Order Supervised Policy Improvement | | 0 |
| 2D or not 2D? Adaptive 3D Convolution Selection for Efficient Video Recognition | | 0 |
| Accelerated Reinforcement Learning | | 0 |
| Accelerating Policy Gradient by Estimating Value Function from Prior Computation in Deep Reinforcement Learning | | 0 |
| Action-dependent Control Variates for Policy Optimization via Stein Identity | | 0 |
| Actor-Critic Policy Optimization in a Large-Scale Imperfect-Information Game | | 0 |
| Actor-Critic Reinforcement Learning with Phased Actor | | 0 |
| AdaFrame: Adaptive Frame Selection for Fast Video Recognition | | 0 |
| Confidence-Controlled Exploration: Efficient Sparse-Reward Policy Learning for Robot Navigation | | 0 |
| Adaptive Batch Size for Safe Policy Gradients | | 0 |
| Momentum-Based Policy Gradient with Second-Order Information | | 0 |
| Adaptive Policy Learning to Additional Tasks | | 0 |
| Adaptive Step-Size for Policy Gradient Methods | | 0 |
| Ad Headline Generation using Self-Critical Masked Language Model | | 0 |
| Adversarial Policy Gradient for Alternating Markov Games | | 0 |
| A Hybrid Approach Between Adversarial Generative Networks and Actor-Critic Policy Gradient for Low Rate High-Resolution Image Compression | | 0 |
| A K-fold Method for Baseline Estimation in Policy Gradient Algorithms | | 0 |
| A Large Deviations Perspective on Policy Gradient Algorithms | | 0 |
| All-Action Policy Gradient Methods: A Numerical Integration Approach | | 0 |
Page 12 of 16

No leaderboard results yet.