SOTAVerified

Neural Lyapunov Function Approximation with Self-Supervised Reinforcement Learning

2025-03-19 · Code Available

Luc McCutcheon, Bahman Gharesifard, Saber Fallah



Abstract

Control Lyapunov functions are traditionally used to design controllers that ensure convergence to a desired state, yet deriving these functions for nonlinear systems remains a complex challenge. This paper presents a novel, sample-efficient method for neural approximation of nonlinear Lyapunov functions, leveraging self-supervised Reinforcement Learning (RL) to enhance training data generation, particularly in regions of the state space that are inaccurately represented. The proposed approach employs a data-driven World Model to train Lyapunov functions from off-policy trajectories. The method is validated on both standard and goal-conditioned robotic tasks, demonstrating faster convergence and higher approximation accuracy than the state-of-the-art neural Lyapunov approximation baseline. The code is available at: https://github.com/CAV-Research-Lab/SACLA.git
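To give a concrete flavor of the kind of objective such methods optimize, the sketch below fits a quadratic Lyapunov candidate from sampled state transitions by penalizing violations of the decrease condition. It is a minimal illustration under stated assumptions (a known stable linear system and a simple numerical-gradient optimizer), not the paper's SACLA algorithm: the candidate form `V(x) = ||Lx||^2`, the dynamics matrix `A`, and the margin term are all illustrative choices.

```python
import numpy as np

# Illustrative sketch (NOT the paper's method): learn a Lyapunov
# candidate V(x) = ||L x||^2 for a stable discrete-time linear system
# x_{t+1} = A x_t by penalizing sampled transitions where V fails to
# strictly decrease. The sampled states stand in for off-policy
# trajectory data that the paper would obtain via its World Model.

rng = np.random.default_rng(0)
A = np.array([[0.9, 0.2], [0.0, 0.8]])  # stable dynamics (assumption)

def V(L, x):
    # Positive semi-definite by construction: V(x) = ||L x||^2.
    return np.sum((x @ L.T) ** 2, axis=-1)

def loss(L, xs):
    # Hinge penalty on violations of V(Ax) <= V(x) - margin.
    xn = xs @ A.T
    margin = 1e-3 * np.sum(xs ** 2, axis=-1)  # forbid the trivial V = 0
    return np.mean(np.maximum(V(L, xn) - V(L, xs) + margin, 0.0))

xs = rng.normal(size=(256, 2))  # sampled states
L = rng.normal(size=(2, 2))
loss_init = loss(L, xs)

# Plain gradient descent with finite-difference gradients, to keep the
# sketch dependency-free (a real implementation would use autodiff).
lr, eps = 0.1, 1e-5
for _ in range(300):
    g = np.zeros_like(L)
    for i in range(2):
        for j in range(2):
            Lp = L.copy(); Lp[i, j] += eps
            Lm = L.copy(); Lm[i, j] -= eps
            g[i, j] = (loss(Lp, xs) - loss(Lm, xs)) / (2 * eps)
    L -= lr * g

loss_final = loss(L, xs)
print(loss_init, loss_final)
```

The margin term scaled by `||x||^2` matters: without it, the optimizer could collapse to `L = 0`, which satisfies the decrease condition vacuously but is not a useful certificate.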
