
Cost Function Unrolling in Unsupervised Optical Flow

2020-11-30 · Code Available

Gal Lifshitz, Dan Raviv


Abstract

Steepest descent algorithms, which are commonly used in deep learning, use the gradient as the descent direction, either as-is or after a direction shift via preconditioning. In many scenarios, computing the gradient is numerically hard due to complex or non-differentiable cost functions, particularly near singular points. In this work we focus on differentiating the Total Variation (TV) semi-norm commonly used in unsupervised cost functions. Specifically, we derive a differentiable proxy to the hard L1 smoothness constraint through a novel iterative scheme which we refer to as Cost Unrolling. By producing more accurate gradients during training, our method enables finer predictions from a given DNN model through improved convergence, without modifying its architecture or increasing computational complexity. We demonstrate our method on the unsupervised optical flow task. Replacing the L1 smoothness constraint with our unrolled cost during the training of a well-known baseline, we report improved results on both the MPI Sintel and KITTI 2015 unsupervised optical flow benchmarks. In particular, we report EPE reduced by up to 15.82% on occluded pixels, where the smoothness constraint is dominant, enabling the detection of much sharper motion edges.
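The difficulty the abstract points to is that the hard L1 (TV) smoothness term has an ill-defined gradient wherever a flow difference is exactly zero. The paper derives its proxy through an unrolled iterative scheme; the minimal sketch below is not the authors' method but a simple illustrative stand-in (a Charbonnier surrogate on a 1D field, with hypothetical function names) showing why a smooth proxy yields better-behaved gradients near the singular point:

```python
import numpy as np

def tv_l1_grad(u):
    """Subgradient of the hard L1 TV penalty sum_i |u[i+1] - u[i]|.

    Relies on sign(), which is undefined at zero differences
    (NumPy returns 0 there) -- the source of unstable gradients.
    """
    d = np.diff(u)
    s = np.sign(d)
    g = np.zeros_like(u)
    g[:-1] -= s   # each difference d_i = u[i+1] - u[i] contributes -sign(d_i) to u[i]
    g[1:] += s    # and +sign(d_i) to u[i+1]
    return g

def tv_smooth_grad(u, eps=1e-3):
    """Gradient of the smooth proxy sum_i sqrt(d_i^2 + eps^2).

    Approximates |d_i| while staying differentiable everywhere;
    the weight d / sqrt(d^2 + eps^2) is a smooth surrogate for sign(d).
    """
    d = np.diff(u)
    w = d / np.sqrt(d**2 + eps**2)
    g = np.zeros_like(u)
    g[:-1] -= w
    g[1:] += w
    return g
```

The hard subgradient jumps discontinuously between -1 and +1 as a difference crosses zero, while the proxy gradient varies smoothly through it, which is what allows more accurate descent directions near motion edges.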

Benchmark Results

Dataset       Model         Metric                   Claimed  Verified  Status
KITTI 2015    UnrolledCost  Fl-all                   10.81    —         Unverified
Sintel-clean  UnrolledCost  Average End-Point Error  4.69     —         Unverified
Sintel-final  UnrolledCost  Average End-Point Error  5.80     —         Unverified

Reproductions