
Safe Exploration

Safe Exploration is an approach to collecting ground-truth data by interacting with the environment while respecting safety constraints throughout the learning process.

Source: Chance-Constrained Trajectory Optimization for Safe Exploration and Learning of Nonlinear Systems
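To make the definition concrete, here is a minimal sketch of the core idea shared by many of the papers listed below: the agent explores freely, but a conservative safety model vetoes any candidate action it cannot certify, falling back to a known-safe action otherwise. All names here (`safe_explore`, `toy_safety_model`, the threshold) are illustrative assumptions, not taken from any specific paper on this page.

```python
import random

def safe_explore(state, candidate_actions, safety_model, fallback_action, threshold=0.95):
    """Pick a random exploratory action, restricted to those the
    (conservative) safety model certifies; otherwise use the fallback."""
    certified = [a for a in candidate_actions if safety_model(state, a) >= threshold]
    return random.choice(certified) if certified else fallback_action

# Toy example: states are integers 0..10; leaving that range is unsafe.
def toy_safety_model(state, action):
    next_state = state + action
    return 1.0 if 0 <= next_state <= 10 else 0.0

# At the boundary state 10, only stepping back down is certified.
action = safe_explore(state=10, candidate_actions=[-1, +1],
                      safety_model=toy_safety_model, fallback_action=0)
assert action == -1
```

In practice the safety model is usually learned (e.g. a Gaussian process posterior, as in several GP-based papers below) rather than known in closed form, and the conservatism of its confidence bounds is what guarantees constraint satisfaction during exploration.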

Papers

Showing 101–135 of 135 papers

Title | Status | Hype
Safe Exploration for Efficient Policy Evaluation and Comparison | - | 0
Safe Exploration for Identifying Linear Systems via Robust Optimization | - | 0
Safe Exploration for Interactive Machine Learning | - | 0
Safe Exploration Incurs Nearly No Additional Sample Complexity for Reward-free RL | - | 0
Safe Exploration in Linear Equality Constraint | - | 0
Safe Exploration in Markov Decision Processes with Time-Variant Safety using Spatio-Temporal Gaussian Process | - | 0
Safe Exploration in Markov Decision Processes | - | 0
Handling Long-Term Safety and Uncertainty in Safe Reinforcement Learning | Code | 0
A comparison of RL-based and PID controllers for 6-DOF swimming robots: hybrid underwater object tracking | Code | 0
Infinite Time Horizon Safety of Bayesian Neural Networks | Code | 0
GoSafeOpt: Scalable Safe Exploration for Global Optimization of Dynamical Systems | Code | 0
Information-Theoretic Safe Exploration with Gaussian Processes | Code | 0
Safe Exploration for Optimizing Contextual Bandits | Code | 0
Learning-based Model Predictive Control for Safe Exploration | Code | 0
Learning-based Model Predictive Control for Safe Exploration and Reinforcement Learning | Code | 0
Confidence-Guided Human-AI Collaboration: Reinforcement Learning with Distributional Proxy Value Propagation for Autonomous Driving | Code | 0
CUP: A Conservative Update Policy Algorithm for Safe Reinforcement Learning | Code | 0
AI Safety Gridworlds | Code | 0
Safe Exploration in Finite Markov Decision Processes with Gaussian Processes | Code | 0
The Pump Scheduling Problem: A Real-World Scenario for Reinforcement Learning | Code | 0
Concrete Problems in AI Safety | Code | 0
Atlas: Automate Online Service Configuration in Network Slicing | Code | 0
Safe and Sample-efficient Reinforcement Learning for Clustered Dynamic Environments | Code | 0
Safe Policy Optimization with Local Generalized Linear Function Approximations | Code | 0
Safe Continuous Control with Constrained Model-Based Policy Optimization | Code | 0
Safe reinforcement learning for probabilistic reachability and safety specifications: A Lyapunov-based approach | Code | 0
Exterior Penalty Policy Optimization with Penalty Metric Network under Constraints | Code | 0
Benefits of Monotonicity in Safe Exploration with Gaussian Processes | Code | 0
Safe Reinforcement Learning in Black-Box Environments via Adaptive Shielding | Code | 0
DOPE: Doubly Optimistic and Pessimistic Exploration for Safe Reinforcement Learning | Code | 0
Effects of Safety State Augmentation on Safe Exploration | Code | 0
Enforcing Almost-Sure Reachability in POMDPs | Code | 0
Curiosity Killed or Incapacitated the Cat and the Asymptotically Optimal Agent | Code | 0
Safe Exploration Method for Reinforcement Learning under Existence of Disturbance | Code | 0
Probabilistic Counterexample Guidance for Safer Reinforcement Learning (Extended Version) | Code | 0
Page 3 of 3

No leaderboard results yet.