Jacobian-Free Backprop
TL;DR - We propose an easy-to-code scheme for training implicit models.
JFB: Jacobian-Free Backpropagation for Implicit Networks
Abstract
A promising trend in deep learning replaces traditional feedforward networks with implicit networks. Unlike traditional networks, the inferences of implicit networks are solutions to fixed point equations. This enables the output to be defined as, for example, a solution to an optimization problem. Solving for the fixed point varies in complexity, depending on provided data and an error tolerance. Importantly, implicit networks may be trained with fixed memory costs in stark contrast to feedforward networks, whose memory requirements scale linearly with depth. However, there is no free lunch — backpropagation through implicit networks often requires solving a costly Jacobian-based equation arising from the implicit function theorem. This work proposes Jacobian-Free Backpropagation (JFB), a fixed-memory approach that circumvents the need to solve Jacobian-based equations. JFB makes implicit networks faster to train and significantly easier to implement, without sacrificing test accuracy. Implicit networks trained with JFB are competitive with feedforward networks and prior implicit networks given the same number of parameters.
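To make the idea concrete, here is a minimal PyTorch sketch of the JFB training scheme as described in the abstract: the fixed point z* = T(z*, x) is found without tracking gradients, and the backward pass then flows through a single application of T at z*, skipping the Jacobian-based linear solve from the implicit function theorem. The class name, the specific map T(z, x) = tanh(Wz + x), and the solver parameters (max_iter, tol) are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn


class JFBLayer(nn.Module):
    """Implicit layer trained with Jacobian-Free Backprop (JFB) -- a sketch.

    The output approximates the fixed point z* = T(z*, x). Gradients are
    obtained by backpropagating through ONE application of T at z*, so the
    memory cost does not grow with the number of fixed-point iterations.
    """

    def __init__(self, dim, max_iter=50, tol=1e-4):
        super().__init__()
        # Illustrative contractive map: T(z, x) = tanh(W z + x).
        self.lin = nn.Linear(dim, dim, bias=False)
        self.max_iter = max_iter
        self.tol = tol

    def T(self, z, x):
        return torch.tanh(self.lin(z) + x)

    def forward(self, x):
        z = torch.zeros_like(x)
        # 1) Solve for the fixed point without building a computation graph.
        with torch.no_grad():
            for _ in range(self.max_iter):
                z_new = self.T(z, x)
                if (z_new - z).norm() < self.tol:
                    z = z_new
                    break
                z = z_new
        # 2) JFB backward pass: one differentiable application of T at z*,
        #    instead of solving the Jacobian-based equation.
        return self.T(z.detach(), x)


if __name__ == "__main__":
    layer = JFBLayer(dim=8)
    x = torch.randn(4, 8)
    loss = layer(x).pow(2).mean()
    loss.backward()  # gradients for layer.lin via the single JFB step
```

Because the solver loop runs under `torch.no_grad()`, its cost in memory is fixed regardless of how many iterations the fixed-point solve takes; only the final application of T is stored for backpropagation.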