GPT-Neo for commonsense reasoning -- a theoretical and practical lens

2022-11-28

Rohan Kashyap, Vivek Kashyap, Narendra C. P.

Abstract

Recent work has demonstrated substantial gains from pre-training large language models (LLMs) followed by supervised fine-tuning on downstream tasks. In this paper, we evaluate the performance of the GPT-Neo models on six commonsense reasoning benchmark tasks. We compare these smaller GPT-Neo models against several larger baselines, such as GPT-3, Llama-2, MPT, and Falcon. With an appropriate choice of fine-tuning hyperparameters, our model achieves competitive accuracy on several tasks. We also investigate and substantiate our results using attention-head visualization to better understand model behavior. Finally, we conduct robustness tests to gauge model performance under a variety of settings.
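Commonsense benchmarks of this kind are commonly scored as multiple-choice tasks: each candidate answer is appended to the question and the model picks the completion with the highest (length-normalized) log-likelihood. The sketch below illustrates that scoring scheme only; the `toy_loglik` stand-in and the `pick_answer` helper are illustrative assumptions, and a real run would obtain log-likelihoods from a GPT-Neo checkpoint rather than a stub.

```python
# Hypothetical sketch of multiple-choice log-likelihood scoring.
# The language model is stubbed out; only the selection logic is shown.
from typing import Callable, List


def pick_answer(question: str, choices: List[str],
                loglik: Callable[[str], float]) -> int:
    """Return the index of the choice whose continuation of the question
    scores the highest log-likelihood, normalized by choice length."""
    scores = [
        loglik(f"{question} {c}") / max(len(c.split()), 1)
        for c in choices
    ]
    return max(range(len(choices)), key=scores.__getitem__)


# Toy stand-in for a language model's log-likelihood: it simply prefers
# texts whose length is close to 40 characters. A real evaluation would
# sum token log-probabilities from the fine-tuned model instead.
def toy_loglik(text: str) -> float:
    return -abs(len(text) - 40)


idx = pick_answer(
    "The man opened the umbrella because",
    ["it was raining", "the sun exploded"],
    toy_loglik,
)
print(idx)  # → 0
```

Length normalization matters in practice because longer choices accumulate more negative log-probability mass; dividing by the number of tokens (here, words, for simplicity) keeps the comparison fair across choices of different lengths.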
