SOTAVerified

Natural Language Understanding

Natural Language Understanding (NLU) is a major subfield of Natural Language Processing that spans tasks such as text classification, natural language inference, and story comprehension. Applications enabled by NLU range from question answering to automated reasoning.

Source: Find a Reasonable Ending for Stories: Does Logic Relation Help the Story Cloze Test?

Papers

Showing 1501–1550 of 1978 papers

Title | Status | Hype
Meaning and understanding in large language models | — | 0
Meaning Beyond Truth Conditions: Evaluating Discourse Level Understanding via Anaphora Accessibility | — | 0
Meaning Representation of Null Instantiated Semantic Roles in FrameNet | — | 0
Measure More, Question More: Experimental Studies on Transformer-based Language Models and Complement Coercion | — | 0
Measuring and Mitigating Local Instability in Deep Neural Networks | — | 0
Measuring and Reducing Gendered Correlations in Pre-trained Models | — | 0
Med-Bot: An AI-Powered Assistant to Provide Accurate and Reliable Medical Information | — | 0
Medical mT5: An Open-Source Multilingual Text-to-Text LLM for The Medical Domain | — | 0
MediTOD: An English Dialogue Dataset for Medical History Taking with Comprehensive Annotations | — | 0
Meta Auxiliary Learning for Low-resource Spoken Language Understanding | — | 0
Meta-Learning with MAML on Trees | — | 0
Meta-Reflection: A Feedback-Free Reflection Learning Framework | — | 0
Meta Semantics: Towards better natural language understanding and reasoning | — | 0
MET-Bench: Multimodal Entity Tracking for Evaluating the Limitations of Vision-Language and Reasoning Models | — | 0
Mining Cross-Cultural Differences and Similarities in Social Media | — | 0
Mitigating Shortcuts in Language Models with Soft Label Encoding | — | 0
MixUp Training Leads to Reduced Overfitting and Improved Calibration for the Transformer Architecture | — | 0
mmT5: Modular Multilingual Pre-Training Solves Source Language Hallucinations | — | 0
Modality: logic, semantics, annotation and machine learning | — | 0
Model-Agnostic Meta-Learning for Natural Language Understanding Tasks in Finance | — | 0
Modeling Biological Processes for Reading Comprehension | — | 0
Modeling Feature Representations for Affective Speech using Generative Adversarial Networks | — | 0
Modeling meaning: computational interpreting and understanding of natural language fragments | — | 0
Modern French Poetry Generation with RoBERTa and GPT-2 | — | 0
ModernGBERT: German-only 1B Encoder Model Trained from Scratch | — | 0
MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation | — | 0
MoEC: Mixture of Expert Clusters | — | 0
MolX: Enhancing Large Language Models for Molecular Learning with A Multi-Modal Extension | — | 0
Monash-Summ@LongSumm 20 SciSummPip: An Unsupervised Scientific Paper Summarization Pipeline | — | 0
Monte-Carlo Planning and Learning with Language Action Value Estimates | — | 0
Motion-R1: Chain-of-Thought Reasoning and Reinforcement Learning for Human Motion Generation | — | 0
Discourse-level Relation Extraction via Graph Pooling | — | 0
MULTI3NLU++: A Multilingual, Multi-Intent, Multi-Domain Dataset for Natural Language Understanding in Task-Oriented Dialogue | — | 0
Multi-class Text Classification using BERT-based Active Learning | — | 0
Multi-level Distillation of Semantic Knowledge for Pre-training Multilingual Language Model | — | 0
Multi-Level Policy and Reward Reinforcement Learning for Image Captioning | — | 0
Multilingual Argument Mining: Datasets and Analysis | — | 0
Multilingual Few-Shot Learning via Language Model Retrieval | — | 0
Multi-lingual Intent Detection and Slot Filling in a Joint BERT-based Model | — | 0
Multilingual Pre-training with Universal Dependency Learning | — | 0
Multilingual Text Representation | — | 0
MultiLoRA: Democratizing LoRA for Better Multi-Task Learning | — | 0
Multimodal Audio-textual Architecture for Robust Spoken Language Understanding | — | 0
Multi-modal embeddings using multi-task learning for emotion recognition | — | 0
Multi-Perspective Context Aggregation for Semi-supervised Cloze-style Reading Comprehension | — | 0
Multi-Prompting Decoder Helps Better Language Understanding | — | 0
Multi-resolution Annotations for Emoji Prediction | — | 0
Multi-round, Chain-of-thought Post-editing for Unfaithful Summaries | — | 0
Distilling Multi-Scale Knowledge for Event Temporal Relation Extraction | — | 0
Multi-step Natural Language Understanding | — | 0
Page 31 of 40

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | HNN | Accuracy | 90 | — | Unverified
2 | UDSSM-II (ensemble) | Accuracy | 78.3 | — | Unverified
3 | BERT-large 340M | Accuracy | 78.3 | — | Unverified
4 | UDSSM-I (ensemble) | Accuracy | 76.7 | — | Unverified
5 | DSSM | Accuracy | 75 | — | Unverified
6 | UDSSM-II | Accuracy | 75 | — | Unverified
7 | BERT-base 110M + MAS | Accuracy | 68.3 | — | Unverified
8 | USSM + Supervised Deepnet + 3 Knowledge Bases | Accuracy | 66.7 | — | Unverified
9 | Word-level CNN+LSTM (full scoring) | Accuracy | 60 | — | Unverified
10 | Subword-level Transformer LM | Accuracy | 58.3 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | BERT (pred POS/lemmas) | Tags (Full) Acc | 82.5 | — | Unverified
2 | BERT (none) | Tags (Full) Acc | 82 | — | Unverified
3 | BERT (gold POS/lemmas) | Tags (Full) Acc | 81 | — | Unverified
4 | GloVe (gold POS/lemmas) | Tags (Full) Acc | 79.3 | — | Unverified
5 | RoBERTa + Linear | Full F1 (Preps) | 78.2 | — | Unverified
6 | GloVe (none) | Tags (Full) Acc | 77.5 | — | Unverified
7 | GloVe (pred POS/lemmas) | Tags (Full) Acc | 77.1 | — | Unverified
8 | SVM (feature-rich, gold syntax) | Role F1 (Preps) | 62.2 | — | Unverified
9 | BiLSTM + MLP (gold syntax) | Role F1 (Preps) | 62.2 | — | Unverified
10 | SVM (feature-rich, auto syntax) | Role F1 (Preps) | 58.2 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | CaseLaw-BERT | CaseHOLD | 75.6 | — | Unverified
2 | Legal-BERT | CaseHOLD | 75.1 | — | Unverified
3 | DeBERTa | CaseHOLD | 72.1 | — | Unverified
4 | Longformer | CaseHOLD | 72 | — | Unverified
5 | RoBERTa | CaseHOLD | 71.7 | — | Unverified
6 | BERT | CaseHOLD | 70.7 | — | Unverified
7 | BigBird | CaseHOLD | 70.4 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | ConvBERT-DG | Average | 74.6 | — | Unverified
2 | ConvBERT-DG + Pre + Multi | Average | 73.8 | — | Unverified
3 | mslm | Average | 73.49 | — | Unverified
4 | ConvBERT + Pre + Multi | Average | 68.22 | — | Unverified
5 | BanLanGen | Average | 39.16 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | ConvBERT + Pre + Multi | Average | 86.89 | — | Unverified
2 | mslm | Average | 85.83 | — | Unverified
3 | ConvBERT-DG + Pre + Multi | Average | 85.34 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | MT-DNN-SMART | Average | 89.9 | — | Unverified
2 | BERT-LARGE | Average | 82.1 | — | Unverified