
Deep Learning using Rectified Linear Units (ReLU)

2018-03-22

Abien Fred Agarap


Abstract

We introduce the use of rectified linear units (ReLU) as the classification function in a deep neural network (DNN). Conventionally, ReLU is used as an activation function in DNNs, with the Softmax function as their classification function. However, there have been several studies on using classification functions other than Softmax, and this study is an addition to those. We accomplish this by taking the activation of the penultimate layer h_{n-1} in a neural network, then multiplying it by weight parameters to get the raw scores o_i. Afterwards, we threshold the raw scores o_i at 0, i.e. f(o) = max(0, o_i), where f(o) is the ReLU function. We provide class predictions y through the argmax function, i.e. argmax f(o).
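The computation described in the abstract can be sketched as follows. This is an illustrative NumPy example with made-up names and random weights, not the paper's actual implementation: it takes a batch of penultimate activations h, multiplies by weights W to get raw scores o, applies the ReLU threshold f(o) = max(0, o), and predicts classes via argmax.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: batch of 4 samples, 128-unit penultimate
# layer, 10 output classes. Weights are random for illustration only.
h = rng.standard_normal((4, 128))   # penultimate-layer activations h_{n-1}
W = rng.standard_normal((128, 10))  # weight parameters
b = np.zeros(10)                    # bias

o = h @ W + b                       # raw scores o_i
f_o = np.maximum(0.0, o)            # f(o) = max(0, o_i), the ReLU function
y = np.argmax(f_o, axis=1)          # class predictions via argmax

print(y.shape)                      # (4,)
```

Note that because max(0, ·) is monotone non-decreasing, thresholding the scores does not change which index is largest whenever at least one raw score is positive; the difference from the conventional setup lies mainly in how the loss is computed during training.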
