Using Deep Convolutional Networks for Gesture Recognition in American Sign Language
2017-10-18
Vivek Bheda, Dianna Radpour
Abstract
In the realm of multimodal communication, sign language is, and continues to be, one of the most understudied areas. In line with recent advances in the field of deep learning, neural networks have far-reaching implications and applications for sign language interpretation. In this paper, we present a method for using deep convolutional networks to classify images of both the letters and digits in American Sign Language.
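The core idea, classifying a hand-sign image into one of the letter/digit classes with a convolutional network, can be illustrated with a minimal forward-pass sketch. This is an assumption-laden toy (the image size, filter counts, and the 36-class output of 26 letters plus 10 digits are illustrative choices, not the paper's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, w):
    """Valid 2-D cross-correlation of a single-channel image x with filters w.
    x: (H, W); w: (n_filters, kh, kw). Returns (n_filters, H-kh+1, W-kw+1)."""
    n, kh, kw = w.shape
    H, W = x.shape
    out = np.empty((n, H - kh + 1, W - kw + 1))
    for f in range(n):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[f, i, j] = np.sum(x[i:i + kh, j:j + kw] * w[f])
    return out

def max_pool(x, k=2):
    """k-by-k max pooling applied to each feature map independently."""
    n, H, W = x.shape
    trimmed = x[:, :H - H % k, :W - W % k]
    return trimmed.reshape(n, H // k, k, W // k, k).max(axis=(2, 4))

# Hypothetical tiny pipeline: 16x16 grayscale hand image -> 36 class scores
image = rng.standard_normal((16, 16))              # stand-in for an ASL image
filters = rng.standard_normal((4, 3, 3)) * 0.1     # 4 learned 3x3 filters
W_out = rng.standard_normal((36, 4 * 7 * 7)) * 0.1 # linear classifier weights

feats = np.maximum(conv2d(image, filters), 0.0)    # conv + ReLU -> (4, 14, 14)
pooled = max_pool(feats)                           # 2x2 pool   -> (4, 7, 7)
scores = W_out @ pooled.reshape(-1)                # 36 scores: 26 letters + 10 digits
predicted_class = int(np.argmax(scores))
```

A real model would stack several such conv/pool layers and learn the filter and classifier weights by backpropagation on labeled ASL images; here the weights are random and serve only to show the shape of the computation.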