SOTAVerified

Word Sense Disambiguation

The task of Word Sense Disambiguation (WSD) consists of associating words in context with their most suitable entry in a pre-defined sense inventory. The de facto sense inventory for English WSD is WordNet. For example, given the word “mouse” and the following sentence:

“A mouse consists of an object held in one's hand, with one or more buttons.”

we would assign “mouse” its electronic device sense (the 4th sense in the WordNet sense inventory).
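The dictionary-based approach sketched above can be illustrated with a simplified Lesk-style algorithm: pick the sense whose gloss shares the most words with the target word's context. The miniature sense inventory below is a hypothetical stand-in for WordNet, not its actual API or glosses.

```python
import re

# Hypothetical mini sense inventory (sense ID, gloss) standing in for WordNet.
SENSE_INVENTORY = {
    "mouse": [
        ("mouse.n.01", "any of numerous small rodents with pointed snouts"),
        ("mouse.n.04", "a hand-operated electronic device with buttons that controls a cursor"),
    ],
}

def tokenize(text):
    """Lowercase and split on word characters, dropping punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def disambiguate(word, context):
    """Return the sense ID whose gloss overlaps most with the context words."""
    context_words = tokenize(context)
    best_sense, best_overlap = None, -1
    for sense_id, gloss in SENSE_INVENTORY.get(word, []):
        overlap = len(context_words & tokenize(gloss))
        if overlap > best_overlap:
            best_sense, best_overlap = sense_id, overlap
    return best_sense

sentence = "A mouse consists of an object held in one's hand, with one or more buttons."
print(disambiguate("mouse", sentence))  # → mouse.n.04
```

The gloss of the device sense shares “a”, “hand”, “with”, and “buttons” with the sentence, so it wins over the rodent sense; modern systems replace this bag-of-words overlap with contextual embeddings, as several of the papers listed below do.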

Papers

Showing 1–10 of 1,035 papers

Title | Status | Hype
Semantic similarity estimation for domain specific data using BERT and other techniques | — | 0
On Self-improving Token Embeddings | — | 0
SANDWiCH: Semantical Analysis of Neighbours for Disambiguating Words in Context ad Hoc | Code | 0
GlossGPT: GPT for Word Sense Disambiguation using Few-shot Chain-of-Thought Prompting | Code | 0
Probing Semantic Routing in Large Mixture-of-Expert Models | — | 0
TreeMatch: A Fully Unsupervised WSD System Using Dependency Knowledge on a Specific Domain | — | 0
Fietje: An open, efficient LLM for Dutch | Code | 2
Word Sense Linking: Disambiguating Outside the Sandbox | — | 0
Can LLMs assist with Ambiguity? A Quantitative Evaluation of various Large Language Models on Word Sense Disambiguation | — | 0
Astro-HEP-BERT: A bidirectional language model for studying the meanings of concepts in astrophysics and high energy physics | — | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Human Benchmark | Accuracy | 0.81 | — | Unverified
2 | ruT5-large-finetune | Accuracy | 0.74 | — | Unverified
3 | RuBERT conversational | Accuracy | 0.73 | — | Unverified
4 | RuBERT plain | Accuracy | 0.73 | — | Unverified
5 | ruRoberta-large finetune | Accuracy | 0.72 | — | Unverified
6 | ruBert-base finetune | Accuracy | 0.71 | — | Unverified
7 | Multilingual Bert | Accuracy | 0.69 | — | Unverified
8 | ruT5-base-finetune | Accuracy | 0.68 | — | Unverified
9 | ruBert-large finetune | Accuracy | 0.68 | — | Unverified
10 | SBERT_Large_mt_ru_finetuning | Accuracy | 0.66 | — | Unverified