Scalable Gradients for Stochastic Differential Equations

2020-01-05 · Code Available

Xuechen Li, Ting-Kam Leonard Wong, Ricky T. Q. Chen, David Duvenaud

Abstract

The adjoint sensitivity method scalably computes gradients of solutions to ordinary differential equations. We generalize this method to stochastic differential equations, allowing time-efficient and constant-memory computation of gradients with high-order adaptive solvers. Specifically, we derive a stochastic differential equation whose solution is the gradient, a memory-efficient algorithm for caching noise, and conditions under which numerical solutions converge. In addition, we combine our method with gradient-based stochastic variational inference for latent stochastic differential equations. We use our method to fit stochastic dynamics defined by neural networks, achieving competitive performance on a 50-dimensional motion capture dataset.
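
As an illustration of the approach the abstract describes, below is a minimal sketch of fitting a neural SDE with adjoint-based gradients using the torchsde library released alongside this paper. The network architecture, solver choice, step count, and training loop are illustrative assumptions, not the authors' exact configuration.

```python
# A minimal sketch (assumed setup, not the authors' exact code) of training a
# neural SDE with stochastic adjoint gradients via torchsde.
import torch
import torchsde


class NeuralSDE(torch.nn.Module):
    noise_type = "diagonal"    # diffusion acts elementwise on the state
    sde_type = "stratonovich"  # the paper derives the adjoint in Stratonovich form

    def __init__(self, dim):
        super().__init__()
        self.drift_net = torch.nn.Sequential(
            torch.nn.Linear(dim, 64), torch.nn.Tanh(), torch.nn.Linear(64, dim))
        self.diffusion_net = torch.nn.Sequential(
            torch.nn.Linear(dim, 64), torch.nn.Tanh(), torch.nn.Linear(64, dim))

    def f(self, t, y):
        # Drift term, parameterized by a neural network.
        return self.drift_net(y)

    def g(self, t, y):
        # Diagonal diffusion term; returns the same shape as y.
        return self.diffusion_net(y)


dim, batch = 3, 32
sde = NeuralSDE(dim)
y0 = torch.zeros(batch, dim)
ts = torch.linspace(0.0, 1.0, 20)
target = torch.randn(len(ts), batch, dim)  # placeholder observations

opt = torch.optim.Adam(sde.parameters(), lr=1e-3)
for _ in range(10):
    opt.zero_grad()
    # sdeint_adjoint backpropagates by solving a backward adjoint SDE,
    # giving constant memory in the number of solver steps.
    ys = torchsde.sdeint_adjoint(sde, y0, ts, method="midpoint")
    loss = torch.mean((ys - target) ** 2)
    loss.backward()
    opt.step()
```

On the backward pass, torchsde reconstructs the Brownian motion sample on demand rather than storing the full path, which corresponds to the memory-efficient noise-caching scheme the abstract mentions.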

Benchmark Results

Dataset       Model       Metric      Claimed  Verified  Status
CMU Mocap-2   Latent SDE  Test Error  4.03     —         Unverified
CMU Mocap-2   Latent ODE  Test Error  5.98     —         Unverified
