SOTAVerified

Natural Language Understanding

Natural Language Understanding (NLU) is a core area of Natural Language Processing that encompasses tasks such as text classification, natural language inference, and story comprehension. Applications enabled by natural language understanding range from question answering to automated reasoning.
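As a concrete illustration of one of these tasks, a natural language inference instance pairs a premise with a hypothesis and a gold label. The sketch below is a toy example only (the sentences and the helper are illustrative, not from any of the listed papers); the label set follows the common entailment/neutral/contradiction convention:

```python
# Toy natural language inference (NLI) instance: a premise, a hypothesis,
# and a gold label drawn from a small fixed label set.
LABELS = {"entailment", "neutral", "contradiction"}

example = {
    "premise": "A man is playing a guitar on stage.",
    "hypothesis": "A musician is performing.",
    "label": "entailment",
}

def is_valid(instance):
    """Check that an NLI instance has both sentences and a known label."""
    return (
        bool(instance.get("premise"))
        and bool(instance.get("hypothesis"))
        and instance.get("label") in LABELS
    )

print(is_valid(example))  # True
```

Models on NLI benchmarks are scored by how often their predicted label matches the gold label, which is the "Accuracy" metric reported in several of the result tables below.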

Source: Find a Reasonable Ending for Stories: Does Logic Relation Help the Story Cloze Test?

Papers

Showing 651-700 of 1978 papers

Title | Status | Hype
Low-resource Bilingual Dialect Lexicon Induction with Large Language Models | Code | 0
Scaling Transformer to 1M tokens and beyond with RMT | Code | 2
Exploring the State of the Art in Legal QA Systems | Code | 1
LINGO: Visually Debiasing Natural Language Instructions to Support Task Diversity | | 0
User Adaptive Language Learning Chatbots with a Curriculum | | 0
Towards preserving word order importance through Forced Invalidation | Code | 0
Uncertainty-Aware Natural Language Inference with Stochastic Weight Averaging | Code | 0
Evaluating the Logical Reasoning Ability of ChatGPT and GPT-4 | Code | 1
How to Design Translation Prompts for ChatGPT: An Empirical Study | Code | 5
Form-NLU: Dataset for the Form Natural Language Understanding | Code | 1
PEACH: Pre-Training Sequence-to-Sequence Multilingual Models for Translation with Semi-Supervised Pseudo-Parallel Document Generation | Code | 0
DERA: Enhancing Large Language Model Completions with Dialog-Enabled Resolving Agents | | 0
Evaluation of ChatGPT for NLP-based Mental Health Applications | | 0
ChatGPT as a Factual Inconsistency Evaluator for Text Summarization | | 0
Error Analysis Prompting Enables Human-Like Translation Evaluation in Large Language Models | Code | 1
SwissBERT: The Multilingual Language Model for Switzerland | Code | 1
Is BERT Blind? Exploring the Effect of Vision-and-Language Pretraining on Visual Language Understanding | Code | 1
Capabilities of GPT-4 on Medical Challenge Problems | Code | 1
PanGu-Σ: Towards Trillion Parameter Language Model with Sparse Heterogeneous Computing | | 0
CTRAN: CNN-Transformer-based Network for Natural Language Understanding | Code | 1
A Comprehensive Capability Analysis of GPT-3 and GPT-3.5 Series Models | | 0
A Deep Learning System for Domain-specific Speech Recognition | | 0
Trustera: A Live Conversation Redaction System | | 0
Can ChatGPT Replace Traditional KBQA Models? An In-depth Analysis of the Question Answering Performance of the GPT LLM Family | Code | 1
Let's Get Personal: Personal Questions Improve SocialBot Performance in the Alexa Prize | | 0
Comprehensive Event Representations using Event Knowledge Graphs and Natural Language Processing | | 0
A Hybrid Architecture for Out of Domain Intent Detection and Intent Discovery | Code | 1
Model-Agnostic Meta-Learning for Natural Language Understanding Tasks in Finance | | 0
MathPrompter: Mathematical Reasoning using Large Language Models | Code | 1
Understanding Natural Language Understanding Systems. A Critical Analysis | | 0
How Robust is GPT-3.5 to Predecessors? A Comprehensive Study on Language Understanding Tasks | | 0
A Persian Benchmark for Joint Intent Detection and Slot Filling | Code | 1
Knowledge-enhanced Visual-Language Pre-training on Chest Radiology Images | Code | 1
Few-shot Multimodal Multitask Multilingual Learning | | 0
Bag of Tricks for Effective Language Model Pretraining and Downstream Adaptation: A Case Study on GLUE | | 0
Scalable Prompt Generation for Semi-supervised Learning with Language Models | | 0
Role of Bias Terms in Dot-Product Attention | | 0
Is Multimodal Vision Supervision Beneficial to Language? | Code | 0
EvoText: Enhancing Natural Language Generation Models via Self-Escalation Learning for Up-to-Date Knowledge and Improved Performance | | 0
Reliable Natural Language Understanding with Large Language Models and Answer Set Programming | | 0
GLADIS: A General and Large Acronym Disambiguation Benchmark | Code | 1
The Impacts of Unanswerable Questions on the Robustness of Machine Reading Comprehension Models | | 0
Can We Use Probing to Better Understand Fine-tuning and Knowledge Distillation of the BERT NLU? | | 0
Call for Papers -- The BabyLM Challenge: Sample-efficient pretraining on a developmentally plausible corpus | Code | 1
ViDeBERTa: A powerful pre-trained language model for Vietnamese | Code | 1
Probing Taxonomic and Thematic Embeddings for Taxonomic Information | | 0
Neural Architecture Search: Insights from 1000 Papers | Code | 0
A Cohesive Distillation Architecture for Neural Language Models | | 0
Counteracts: Testing Stereotypical Representation in Pre-trained Language Models | | 0
Cross-Model Comparative Loss for Enhancing Neuronal Utility in Language Understanding | Code | 0
Page 14 of 40

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | HNN | Accuracy | 90 | | Unverified
2 | BERT-large 340M | Accuracy | 78.3 | | Unverified
3 | UDSSM-II (ensemble) | Accuracy | 78.3 | | Unverified
4 | UDSSM-I (ensemble) | Accuracy | 76.7 | | Unverified
5 | DSSM | Accuracy | 75 | | Unverified
6 | UDSSM-II | Accuracy | 75 | | Unverified
7 | BERT-base 110M + MAS | Accuracy | 68.3 | | Unverified
8 | USSM + Supervised Deepnet + 3 Knowledge Bases | Accuracy | 66.7 | | Unverified
9 | Word-level CNN+LSTM (full scoring) | Accuracy | 60 | | Unverified
10 | Subword-level Transformer LM | Accuracy | 58.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | BERT (pred POS/lemmas) | Tags (Full) Acc | 82.5 | | Unverified
2 | BERT (none) | Tags (Full) Acc | 82 | | Unverified
3 | BERT (gold POS/lemmas) | Tags (Full) Acc | 81 | | Unverified
4 | GloVe (gold POS/lemmas) | Tags (Full) Acc | 79.3 | | Unverified
5 | RoBERTa + Linear | Full F1 (Preps) | 78.2 | | Unverified
6 | GloVe (none) | Tags (Full) Acc | 77.5 | | Unverified
7 | GloVe (pred POS/lemmas) | Tags (Full) Acc | 77.1 | | Unverified
8 | SVM (feature-rich, gold syntax) | Role F1 (Preps) | 62.2 | | Unverified
9 | BiLSTM + MLP (gold syntax) | Role F1 (Preps) | 62.2 | | Unverified
10 | SVM (feature-rich, auto syntax) | Role F1 (Preps) | 58.2 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | CaseLaw-BERT | CaseHOLD | 75.6 | | Unverified
2 | Legal-BERT | CaseHOLD | 75.1 | | Unverified
3 | DeBERTa | CaseHOLD | 72.1 | | Unverified
4 | Longformer | CaseHOLD | 72 | | Unverified
5 | RoBERTa | CaseHOLD | 71.7 | | Unverified
6 | BERT | CaseHOLD | 70.7 | | Unverified
7 | BigBird | CaseHOLD | 70.4 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | ConvBERT-DG | Average | 74.6 | | Unverified
2 | ConvBERT-DG + Pre + Multi | Average | 73.8 | | Unverified
3 | mslm | Average | 73.49 | | Unverified
4 | ConvBERT + Pre + Multi | Average | 68.22 | | Unverified
5 | BanLanGen | Average | 39.16 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | ConvBERT + Pre + Multi | Average | 86.89 | | Unverified
2 | mslm | Average | 85.83 | | Unverified
3 | ConvBERT-DG + Pre + Multi | Average | 85.34 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | MT-DNN-SMART | Average | 89.9 | | Unverified
2 | BERT-LARGE | Average | 82.1 | | Unverified