SOTAVerified

Natural Language Understanding

Natural Language Understanding is a core subfield of Natural Language Processing, covering tasks such as text classification, natural language inference, and story comprehension. Applications enabled by natural language understanding range from question answering to automated reasoning.

Source: Find a Reasonable Ending for Stories: Does Logic Relation Help the Story Cloze Test?
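The Story Cloze Test mentioned above frames story comprehension as choosing the more coherent of two candidate endings for a short story context. As a hedged illustration of the task's input/output shape only (real systems use trained neural models, not this heuristic), a toy baseline might score endings by word overlap with the context:

```python
# Toy sketch of the Story Cloze Test setup: given a story context and two
# candidate endings, pick the more plausible ending. This word-overlap
# scorer is purely illustrative, not any published system's method.

def tokenize(text):
    """Lowercase, strip basic punctuation, return the set of words."""
    return {w.strip(".,!?").lower() for w in text.split()}

def pick_ending(context, endings):
    """Return the index of the ending sharing the most words with the context."""
    ctx = tokenize(context)
    scores = [len(ctx & tokenize(e)) for e in endings]
    return scores.index(max(scores))

context = ("Karen was assigned a roommate her first year of college. "
           "Her roommate asked her to go to a nearby city for a concert. "
           "Karen agreed happily. The show was absolutely exhilarating.")
endings = ["Karen became good friends with her new college roommate.",
           "Karen hated her roommate."]
print(pick_ending(context, endings))  # prints 0
```

The benchmark is hard precisely because surface overlap like this fails in general; both endings typically mention the same entities, so models must capture logical and commonsense relations between context and ending.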

Papers

Showing 951–1000 of 1978 papers

Title | Status | Hype
Differentiable Reasoning over Long Stories -- Assessing Systematic Generalisation in Neural Models | - | 0
CrossAligner & Co: Zero-Shot Transfer Methods for Task-Oriented Cross-lingual Natural Language Understanding | Code | 0
RoMe: A Robust Metric for Evaluating Natural Language Generation | Code | 1
An Analysis of Negation in Natural Language Understanding Corpora | Code | 0
Things not Written in Text: Exploring Spatial Commonsense from Visual Signals | Code | 1
On the Calibration of Pre-trained Language Models using Mixup Guided by Area Under the Margin and Saliency | - | 0
PERT: Pre-training BERT with Permuted Language Model | Code | 2
SciNLI: A Corpus for Natural Language Inference on Scientific Text | Code | 1
What Makes Reading Comprehension Questions Difficult? | Code | 0
Learning Discriminative Representations and Decision Boundaries for Open Intent Detection | - | 0
CoDA21: Evaluating Language Understanding Capabilities of NLP Models With Context-Definition Alignment | Code | 0
HyperMixer: An MLP-based Low Cost Alternative to Transformers | Code | 1
SkillNet-NLU: A Sparsely Activated Model for General-Purpose Natural Language Understanding | - | 0
Input-Tuning: Adapting Unfamiliar Inputs to Frozen Pretrained Models | - | 0
HyperPrompt: Prompt-based Task-Conditioning of Transformers | - | 0
TableFormer: Robust Transformer Modeling for Table-Text Encoding | Code | 0
MERIt: Meta-Path Guided Contrastive Learning for Logical Reasoning | Code | 1
Bi-directional Joint Neural Networks for Intent Classification and Slot Filling | - | 0
PromDA: Prompt-based Data Augmentation for Low-Resource NLU Tasks | Code | 1
Pretraining without Wordpieces: Learning Over a Vocabulary of Millions of Words | - | 0
Learning to Merge Tokens in Vision Transformers | Code | 0
Decorrelate Irrelevant, Purify Relevant: Overcome Textual Spurious Correlations from a Feature Perspective | Code | 1
Aspect Based Sentiment Analysis Using Spectral Temporal Graph Neural Network | - | 0
Generating Training Data with Language Models: Towards Zero-Shot Language Understanding | Code | 1
FedQAS: Privacy-aware machine reading comprehension with federated learning | Code | 0
CALM: Contrastive Aligned Audio-Language Multirate and Multimodal Representations | - | 0
Learnings from Federated Learning in the Real world | - | 0
data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language | Code | 1
No Parameters Left Behind: Sensitivity Guided Adaptive Learning Rate for Training Large Transformer Models | Code | 1
RescoreBERT: Discriminative Speech Recognition Rescoring with BERT | - | 0
Examining Scaling and Transfer of Language Model Architectures for Machine Translation | - | 0
When Do Flat Minima Optimizers Work? | Code | 1
Cross-Lingual Dialogue Dataset Creation via Outline-Based Generation | - | 0
ScaLA: Accelerating Adaptation of Pre-Trained Transformer-Based Language Models via Efficient Large-Batch Adversarial Noise | - | 0
Pair-Level Supervised Contrastive Learning for Natural Language Inference | - | 0
Convex Polytope Modelling for Unsupervised Derivation of Semantic Structure for Data-efficient Natural Language Understanding | - | 0
Language Generation for Broad-Coverage, Explainable Cognitive Systems | - | 0
Revisiting the Roles of “Text” in Text Games | - | 0
AraBART: a Pretrained Arabic Sequence-to-Sequence Model for Abstractive Summarization | - | 0
Curriculum: A Broad-Coverage Benchmark for Linguistic Phenomena in Natural Language Understanding | - | 0
Event Linking: Grounding Event Mentions to Wikipedia | - | 0
RGL: A Simple yet Effective Relation Graph Augmented Prompt-based Tuning Approach for Few-Shot Learning | Code | 0
Imagination-Augmented Natural Language Understanding | - | 0
A Transformer-based Threshold-Free Framework for Multi-Intent NLU | - | 0
Causal Distillation for Language Models | - | 0
Exploiting Topic Information for Joint Intent Detection and Slot Filling | - | 0
Improved and Efficient Conversational Slot Labeling through Question Answering | - | 0
Label-guided Data Augmentation for Prompt-based Few Shot Learners | - | 0
Learning To Retrieve Prompts for In-Context Learning | - | 0
MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation | - | 0
Page 20 of 40

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | HNN | Accuracy | 90 | - | Unverified
2 | UDSSM-II (ensemble) | Accuracy | 78.3 | - | Unverified
3 | BERT-large 340M | Accuracy | 78.3 | - | Unverified
4 | UDSSM-I (ensemble) | Accuracy | 76.7 | - | Unverified
5 | DSSM | Accuracy | 75 | - | Unverified
6 | UDSSM-II | Accuracy | 75 | - | Unverified
7 | BERT-base 110M + MAS | Accuracy | 68.3 | - | Unverified
8 | USSM + Supervised Deepnet + 3 Knowledge Bases | Accuracy | 66.7 | - | Unverified
9 | Word-level CNN+LSTM (full scoring) | Accuracy | 60 | - | Unverified
10 | Subword-level Transformer LM | Accuracy | 58.3 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | BERT (pred POS/lemmas) | Tags (Full) Acc | 82.5 | - | Unverified
2 | BERT (none) | Tags (Full) Acc | 82 | - | Unverified
3 | BERT (gold POS/lemmas) | Tags (Full) Acc | 81 | - | Unverified
4 | GloVe (gold POS/lemmas) | Tags (Full) Acc | 79.3 | - | Unverified
5 | RoBERTa + Linear | Full F1 (Preps) | 78.2 | - | Unverified
6 | GloVe (none) | Tags (Full) Acc | 77.5 | - | Unverified
7 | GloVe (pred POS/lemmas) | Tags (Full) Acc | 77.1 | - | Unverified
8 | SVM (feature-rich, gold syntax) | Role F1 (Preps) | 62.2 | - | Unverified
9 | BiLSTM + MLP (gold syntax) | Role F1 (Preps) | 62.2 | - | Unverified
10 | SVM (feature-rich, auto syntax) | Role F1 (Preps) | 58.2 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | CaseLaw-BERT | CaseHOLD | 75.6 | - | Unverified
2 | Legal-BERT | CaseHOLD | 75.1 | - | Unverified
3 | DeBERTa | CaseHOLD | 72.1 | - | Unverified
4 | Longformer | CaseHOLD | 72 | - | Unverified
5 | RoBERTa | CaseHOLD | 71.7 | - | Unverified
6 | BERT | CaseHOLD | 70.7 | - | Unverified
7 | BigBird | CaseHOLD | 70.4 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | ConvBERT-DG | Average | 74.6 | - | Unverified
2 | ConvBERT-DG + Pre + Multi | Average | 73.8 | - | Unverified
3 | mslm | Average | 73.49 | - | Unverified
4 | ConvBERT + Pre + Multi | Average | 68.22 | - | Unverified
5 | BanLanGen | Average | 39.16 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | ConvBERT + Pre + Multi | Average | 86.89 | - | Unverified
2 | mslm | Average | 85.83 | - | Unverified
3 | ConvBERT-DG + Pre + Multi | Average | 85.34 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | MT-DNN-SMART | Average | 89.9 | - | Unverified
2 | BERT-LARGE | Average | 82.1 | - | Unverified