
GlossGPT: GPT for Word Sense Disambiguation using Few-shot Chain-of-Thought Prompting

2025-03-01 · Procedia Computer Science 2025 · Code Available

Deshan Sumanathilaka, Nicholas Micallef, Julian Hough


Abstract

Lexical ambiguity is a major challenge in computational linguistic tasks, as failures in proper sense identification lead to inefficient translation and question answering. General-purpose Large Language Models (LLMs) are commonly used for Natural Language Processing (NLP) tasks; however, adapting general-purpose LLMs to specific tasks is challenging, and fine-tuning has become a critical requirement for task specialization. In this work, we craft advanced prompts with different contextual parameters that guide the model's inference towards accurate sense prediction for Word Sense Disambiguation (WSD). We present a few-shot Chain-of-Thought (CoT) prompting technique using GPT-4-Turbo with a knowledge base as a retriever, which requires no fine-tuning of the model for WSD; the retrieved sense definitions are augmented with synonyms to broaden the lexical meaning. Our approach achieves performance comparable to prior work on the SemEval and Senseval datasets. More importantly, we set a new state-of-the-art result on the few-shot FEWS dataset, breaking through the 90% F1 score barrier.
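The abstract describes a pipeline that retrieves candidate sense definitions (with synonyms) from a knowledge base and embeds them in a few-shot CoT prompt. A minimal sketch of how such a prompt might be assembled is shown below; the function name, prompt wording, and data shapes are illustrative assumptions, not the authors' exact prompt.

```python
# Hypothetical sketch of few-shot CoT prompt construction for WSD.
# The prompt template and field names are assumptions for illustration;
# the paper's actual prompt wording may differ.

def build_wsd_prompt(word, sentence, senses, examples):
    """Assemble a few-shot Chain-of-Thought prompt for sense disambiguation.

    senses: list of (sense_id, gloss, synonyms) tuples retrieved from a
        knowledge base; glosses are broadened with synonyms, as described.
    examples: list of (sentence, word, reasoning, answer) few-shot
        demonstrations that model the step-by-step reasoning.
    """
    lines = ["Task: choose the sense of the target word that best fits the sentence.", ""]
    # Few-shot demonstrations with explicit reasoning steps (the CoT part).
    for ex_sentence, ex_word, reasoning, answer in examples:
        lines.append(f'Sentence: "{ex_sentence}"')
        lines.append(f"Target word: {ex_word}")
        lines.append(f"Reasoning: {reasoning}")
        lines.append(f"Answer: {answer}")
        lines.append("")
    # The query instance, with retrieved candidate senses and their synonyms.
    lines.append(f'Sentence: "{sentence}"')
    lines.append(f"Target word: {word}")
    lines.append("Candidate senses:")
    for sense_id, gloss, synonyms in senses:
        lines.append(f"  {sense_id}: {gloss} (synonyms: {', '.join(synonyms)})")
    lines.append("Reasoning:")
    return "\n".join(lines)
```

The resulting string would then be sent to the model (e.g. via the GPT-4-Turbo chat API), whose generated reasoning ends in a sense-ID answer.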
