SignSpeak: Open-Source Time Series Classification for ASL Translation

2024-06-27

Aditya Makkar, Divya Makkar, Aarav Patel, Liam Hebert

Abstract

The lack of fluency in sign language remains a barrier to seamless communication for hearing- and speech-impaired communities. In this work, we propose a low-cost, real-time ASL-to-speech translation glove and an exhaustive training dataset of sign language patterns. We then benchmarked this dataset with supervised learning models, such as LSTMs, GRUs, and Transformers, with our best model achieving 92% accuracy. The SignSpeak dataset has 7200 samples encompassing 36 classes (A-Z, 1-10) and aims to capture realistic signing patterns by using five low-cost flex sensors to measure finger positions at each time step at 36 Hz. Our open-source dataset, models, and glove designs provide an accurate and efficient ASL translator while maintaining cost-effectiveness, establishing a framework for future work to build on.
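The classification setup described above can be sketched as a sequence model over the five flex-sensor channels. The following is a minimal illustrative example, not the authors' released code: the hidden size, sequence length, and model structure are assumptions chosen only to show the input/output shapes implied by the abstract (5 sensors per time step at 36 Hz, 36 output classes).

```python
import torch
import torch.nn as nn

# Assumed constants from the paper's setup: 5 flex sensors per time
# step, 36 target classes (A-Z, 1-10).
NUM_SENSORS = 5
NUM_CLASSES = 36

class SignClassifier(nn.Module):
    """Hypothetical LSTM baseline for glove sensor sequences."""

    def __init__(self, hidden_size=64):  # hidden size is an assumption
        super().__init__()
        self.lstm = nn.LSTM(NUM_SENSORS, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, NUM_CLASSES)

    def forward(self, x):
        # x: (batch, time_steps, NUM_SENSORS)
        _, (h_n, _) = self.lstm(x)
        # Classify the whole sequence from the final hidden state.
        return self.head(h_n[-1])

model = SignClassifier()
# e.g. 2 seconds of readings at 36 Hz -> 72 time steps
batch = torch.randn(8, 72, NUM_SENSORS)
logits = model(batch)  # shape: (8, 36), one score per class
```

A GRU or Transformer variant, as benchmarked in the paper, would swap the recurrent layer while keeping the same per-time-step input of five sensor values.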
