SOTAVerified

YoungSheldon at SemEval-2021 Task 7: Fine-tuning Is All You Need

2021-08-01 · SemEval 2021 · Code Available

Mayukh Sharma, Ilanthenral Kandasamy, W. B. Vasantha


Abstract

In this paper, we describe our system for SemEval 2021 Task 7: HaHackathon: Detecting and Rating Humor and Offense. We used a simple fine-tuning approach with different Pre-trained Language Models (PLMs) to evaluate their performance on humor and offense detection. For the regression tasks, we averaged the scores of several models, which performed better than any individual model. We participated in all SubTasks. Our best-performing system ranked 4th in SubTask 1-b, 8th in SubTask 1-c, and 12th in SubTask 2, and performed well in SubTask 1-a. We also report comprehensive results with different pre-trained language models, which can serve as baselines for future work.
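The score-averaging ensemble mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation; the model names and prediction values are hypothetical placeholders.

```python
import numpy as np

# Hypothetical humor-rating predictions from three fine-tuned PLMs
# (placeholder values, not the authors' actual outputs)
preds = {
    "plm_a": np.array([2.1, 3.4, 0.9]),
    "plm_b": np.array([2.3, 3.0, 1.1]),
    "plm_c": np.array([2.2, 3.2, 1.0]),
}

# Ensemble regression score: unweighted mean across models,
# per example, as described for the regression SubTasks
ensemble = np.mean(list(preds.values()), axis=0)
print(ensemble)
```

An unweighted mean is the simplest way to combine regression outputs; it tends to cancel uncorrelated per-model errors, which is consistent with the abstract's claim that the average outperformed the individual models.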
