SOTAVerified

Word Sense Disambiguation

The task of Word Sense Disambiguation (WSD) consists of associating words in context with their most suitable entry in a pre-defined sense inventory. The de facto sense inventory for English WSD is WordNet. For example, given the word “mouse” and the following sentence:

“A mouse consists of an object held in one's hand, with one or more buttons.”

we would assign “mouse” its electronic-device sense (the 4th sense in the WordNet sense inventory).
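A classic baseline for this task (not one of the systems listed below) is the simplified Lesk algorithm: choose the sense whose dictionary gloss shares the most words with the sentence. A minimal sketch, using a tiny hypothetical two-sense inventory for “mouse” with glosses paraphrased from WordNet:

```python
import re

# Hypothetical mini sense inventory for illustration; a real system
# would draw senses and glosses from WordNet itself.
SENSE_INVENTORY = {
    "mouse": {
        "mouse.n.01": "any of numerous small rodents with pointed snouts and long slender tails",
        "mouse.n.04": "a hand-operated electronic device with buttons held in one's hand",
    }
}

def tokens(text):
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z']+", text.lower()))

def simplified_lesk(word, context):
    """Return the sense of `word` whose gloss overlaps most with `context`."""
    context_words = tokens(context)
    best_sense, best_overlap = None, -1
    for sense, gloss in SENSE_INVENTORY[word].items():
        overlap = len(context_words & tokens(gloss))
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

sentence = "A mouse consists of an object held in one's hand, with one or more buttons."
print(simplified_lesk("mouse", sentence))  # → mouse.n.04
```

On the example sentence the electronic-device gloss shares words like “held”, “hand”, and “buttons” with the context, so the device sense wins over the rodent sense.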

Papers

Showing 1–10 of 1035 papers

Title | Status | Hype
Semantic similarity estimation for domain specific data using BERT and other techniques | — | 0
On Self-improving Token Embeddings | — | 0
SANDWiCH: Semantical Analysis of Neighbours for Disambiguating Words in Context ad Hoc | Code | 0
GlossGPT: GPT for Word Sense Disambiguation using Few-shot Chain-of-Thought Prompting | Code | 0
Probing Semantic Routing in Large Mixture-of-Expert Models | — | 0
TreeMatch: A Fully Unsupervised WSD System Using Dependency Knowledge on a Specific Domain | — | 0
Fietje: An open, efficient LLM for Dutch | Code | 2
Word Sense Linking: Disambiguating Outside the Sandbox | — | 0
Can LLMs assist with Ambiguity? A Quantitative Evaluation of various Large Language Models on Word Sense Disambiguation | — | 0
Astro-HEP-BERT: A bidirectional language model for studying the meanings of concepts in astrophysics and high energy physics | — | 0
Page 1 of 104

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | COSINE + Transductive Learning | Accuracy | 85.3 | — | Unverified
2 | PaLM 540B (fine-tuned) | Accuracy | 78.8 | — | Unverified
3 | ST-MoE-32B 269B (fine-tuned) | Accuracy | 77.7 | — | Unverified
4 | DeBERTa-Ensemble | Accuracy | 77.5 | — | Unverified
5 | Vega v2 6B (fine-tuned) | Accuracy | 77.4 | — | Unverified
6 | UL2 20B (fine-tuned) | Accuracy | 77.3 | — | Unverified
7 | Turing NLR v5 XXL 5.4B (fine-tuned) | Accuracy | 77.1 | — | Unverified
8 | T5-XXL 11B | Accuracy | 76.9 | — | Unverified
9 | DeBERTa-1.5B | Accuracy | 76.4 | — | Unverified
10 | ST-MoE-L 4.1B (fine-tuned) | Accuracy | 74 | — | Unverified