
Understanding Feature Selection and Feature Memorization in Recurrent Neural Networks

2019-03-03

Bokang Zhu, Richong Zhang, Dingkun Long, Yongyi Mao


Abstract

In this paper, we propose a test, called Flagged-1-Bit (F1B) test, to study the intrinsic capability of recurrent neural networks in sequence learning. Four different recurrent network models are studied both analytically and experimentally using this test. Our results suggest that in general there exists a conflict between feature selection and feature memorization in sequence learning. Such a conflict can be resolved either using a gating mechanism as in LSTM, or by increasing the state dimension as in Vanilla RNN. Gated models resolve this conflict by adaptively adjusting their state-update equations, whereas Vanilla RNN resolves this conflict by assigning different dimensions different tasks. Insights into feature selection and memorization in recurrent networks are given.
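To make the setup concrete, here is a minimal, hypothetical sketch of generating F1B-style data. The details are assumptions, not taken from the abstract: we suppose each time step carries a random bit plus a flag channel, exactly one step is flagged, and the target is the bit at the flagged position — so the model must both *select* the flagged feature and *memorize* it until the end of the sequence.

```python
import numpy as np

def make_f1b_example(T=20, rng=None):
    """Generate one hypothetical Flagged-1-Bit (F1B) style example.

    Assumed format: input x has shape (T, 2) — a bit channel and a
    flag channel; exactly one step has flag == 1, and the label y is
    the bit at that flagged step.
    """
    rng = rng or np.random.default_rng()
    bits = rng.integers(0, 2, size=T)      # random bit stream
    flags = np.zeros(T, dtype=int)
    t_star = int(rng.integers(0, T))       # the single flagged step
    flags[t_star] = 1
    x = np.stack([bits, flags], axis=1)    # shape (T, 2)
    y = int(bits[t_star])                  # label: the flagged bit
    return x, y

x, y = make_f1b_example(T=20, rng=np.random.default_rng(0))
```

Under this assumed formulation, a recurrent model trained on such sequences has to detect the flag (feature selection) and then carry the selected bit in its state for the remaining steps (feature memorization), which is the tension the paper analyzes.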
