SOTAVerified

Lemmatization

Lemmatization is the process of determining the base or dictionary form (lemma) of a given surface form. Especially for languages with rich morphology, it is important to normalize words into their base forms to better support applications such as search engines and linguistic studies. The main difficulties in lemmatization arise from previously unseen words at inference time and from ambiguous surface forms, which can be inflected variants of several different base forms depending on the context.

Source: Universal Lemmatizer: A Sequence to Sequence Model for Lemmatizing Universal Dependencies Treebanks
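The two difficulties named above can be made concrete with a toy lookup-based lemmatizer. This is an illustrative sketch, not any system from the papers below: the hand-picked lexicon entries and the `lemmatize` helper are invented for the example. It shows why a plain dictionary needs context (here approximated by a POS tag) to resolve ambiguous forms like "saw", and how out-of-vocabulary words defeat lookup entirely, which is the gap that neural sequence-to-sequence lemmatizers fill by generating lemmas character by character.

```python
# Toy lexicon keyed on (surface form, POS tag). The form "saw" can be
# an inflected variant of the verb "see" or the noun "saw"; the form
# alone cannot decide, so context (POS) is part of the key.
# Entries are hand-picked examples, not a real lexicon.
LEXICON = {
    ("saw", "VERB"): "see",
    ("saw", "NOUN"): "saw",
    ("better", "ADJ"): "good",
    ("better", "VERB"): "better",   # as in "to better oneself"
    ("running", "VERB"): "run",
    ("running", "NOUN"): "running",
}

def lemmatize(form: str, pos: str) -> str:
    """Return the lemma for a (surface form, POS tag) pair.

    Falls back to the surface form itself for out-of-vocabulary
    words -- the "unseen word" problem described above, which
    lookup-based approaches cannot solve.
    """
    return LEXICON.get((form.lower(), pos), form.lower())

print(lemmatize("saw", "VERB"))       # see
print(lemmatize("saw", "NOUN"))       # saw
print(lemmatize("xylotomy", "NOUN"))  # xylotomy (OOV fallback)
```

The fallback branch is where lookup breaks down: a sequence-to-sequence model instead treats the lemma as a string to be generated from the characters of the input form plus its morphological context, so it can produce plausible lemmas even for words it has never seen.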

Papers

Showing 1–25 of 351 papers

Title | Status | Hype
Open-Source Web Service with Morphological Dictionary-Supplemented Deep Learning for Morphosyntactic Analysis of Czech | Code | 3
DadmaTools: Natural Language Processing Toolkit for Persian Language | Code | 2
Top2Vec: Distributed Representations of Topics | Code | 2
ParsiPy: NLP Toolkit for Historical Persian Texts in Python | Code | 1
A State-of-the-Art Morphosyntactic Parser and Lemmatizer for Ancient Greek | Code | 1
One Model is All You Need: ByT5-Sanskrit, a Unified Model for Sanskrit NLP Tasks | Code | 1
Opera Graeca Adnotata: Building a 34M+ Token Multilayer Corpus for Ancient Greek | Code | 1
Advancing Hungarian Text Processing with HuSpaCy: Efficient and Accurate NLP Pipelines | Code | 1
Sentence Embedding Models for Ancient Greek Using Multilingual Knowledge Distillation | Code | 1
Hybrid lemmatization in HuSpaCy | Code | 1
Exploring Large Language Models for Classical Philology | Code | 1
HuSpaCy: an industrial-strength Hungarian natural language processing toolkit | Code | 1
ELIT: Emory Language and Information Toolkit | Code | 1
Lemmatization of Historical Old Literary Finnish Texts in Modern Orthography | Code | 1
Neural Morphology Dataset and Models for Multiple Languages, from the Large to the Endangered | Code | 1
Trankit: A Light-Weight Transformer-based Toolkit for Multilingual Natural Language Processing | Code | 1
KLPT – Kurdish Language Processing Toolkit | Code | 1
TopicModel4J: A Java Package for Topic Models | Code | 1
Stanza: A Python Natural Language Processing Toolkit for Many Human Languages | Code | 1
NeoN: A Tool for Automated Detection, Linguistic and LLM-Driven Analysis of Neologisms in Polish | | 0
Breaking the Fake News Barrier: Deep Learning Approaches in Bangla Language | | 0
Context Aware Lemmatization and Morphological Tagging Method in Turkish | | 0
GliLem: Leveraging GliNER for Contextualized Lemmatization in Estonian | | 0
SinaTools: Open Source Toolkit for Arabic Natural Language Processing | | 0
A Comparative Study of Hybrid Models in Health Misinformation Text Classification | | 0

No leaderboard results yet.