SOTAVerified

Natural Language Understanding

Natural Language Understanding (NLU) is a core subfield of Natural Language Processing that encompasses tasks such as text classification, natural language inference, and story comprehension. Applications enabled by NLU range from question answering to automated reasoning.

Source: Find a Reasonable Ending for Stories: Does Logic Relation Help the Story Cloze Test?
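The Story Cloze Test referenced above asks a system to pick the more plausible of two candidate endings for a short story. As a minimal illustration (a toy lexical-overlap heuristic, not the cited paper's method; the example story and endings are invented), such a chooser might look like:

```python
def choose_ending(story: str, endings: list[str]) -> int:
    """Pick the ending that shares the most words with the story context.

    A toy lexical-overlap baseline for the Story Cloze Test; real systems
    score endings with trained language models instead.
    """
    context = set(story.lower().split())
    overlaps = [len(context & set(ending.lower().split())) for ending in endings]
    return overlaps.index(max(overlaps))

story = "Anna trained for months. She ran the marathon on Sunday."
endings = [
    "She finished the marathon exhausted but proud.",
    "The cake recipe called for three eggs.",
]
print(choose_ending(story, endings))  # prints 0 (the on-topic ending)
```

Surface-overlap baselines like this are exactly what logic-relation and language-model approaches aim to beat, since the wrong ending can share just as many words with the context.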

Papers

Showing 201–250 of 1978 papers

Title | Status | Hype
AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning | Code | 1
The Importance of Being Parameters: An Intra-Distillation Method for Serious Gains | Code | 1
Vector-Quantized Input-Contextualized Soft Prompts for Natural Language Understanding | Code | 1
TreeMix: Compositional Constituency-based Data Augmentation for Natural Language Understanding | Code | 1
Learning to Split for Automatic Bias Detection | Code | 1
NLU++: A Multi-Label, Slot-Rich, Generalisable Dataset for Natural Language Understanding in Task-Oriented Dialogue | Code | 1
Anno-MI: A Dataset of Expert-Annotated Counselling Dialogues | Code | 1
KALA: Knowledge-Augmented Language Model Adaptation | Code | 1
ALBETO and DistilBETO: Lightweight Spanish Language Models | Code | 1
Back to the Future: Bidirectional Information Decoupling Network for Multi-turn Dialogue Modeling | Code | 1
Imagination-Augmented Natural Language Understanding | Code | 1
MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation | Code | 1
CalBERT - Code-mixed Adaptive Language representations using BERT | Code | 1
Generative Biomedical Entity Linking via Knowledge Base-Guided Pre-training and Synonyms-Aware Fine-tuning | Code | 1
BioBART: Pretraining and Evaluation of A Biomedical Generative Language Model | Code | 1
RoMe: A Robust Metric for Evaluating Natural Language Generation | Code | 1
Things not Written in Text: Exploring Spatial Commonsense from Visual Signals | Code | 1
SciNLI: A Corpus for Natural Language Inference on Scientific Text | Code | 1
HyperMixer: An MLP-based Low Cost Alternative to Transformers | Code | 1
MERIt: Meta-Path Guided Contrastive Learning for Logical Reasoning | Code | 1
PromDA: Prompt-based Data Augmentation for Low-Resource NLU Tasks | Code | 1
Decorrelate Irrelevant, Purify Relevant: Overcome Textual Spurious Correlations from a Feature Perspective | Code | 1
Generating Training Data with Language Models: Towards Zero-Shot Language Understanding | Code | 1
data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language | Code | 1
No Parameters Left Behind: Sensitivity Guided Adaptive Learning Rate for Training Large Transformer Models | Code | 1
When Do Flat Minima Optimizers Work? | Code | 1
Learning To Retrieve Prompts for In-Context Learning | Code | 1
Causal Distillation for Language Models | Code | 1
Tiny-NewsRec: Effective and Efficient PLM-based News Recommendation | Code | 1
Systematic Generalization with Edge Transformers | Code | 1
RoBERTuito: a pre-trained language model for social media text in Spanish | Code | 1
On Transferability of Prompt Tuning for Natural Language Processing | Code | 1
TaCL: Improving BERT Pre-training with Token-aware Contrastive Learning | Code | 1
CLUES: Few-Shot Learning Evaluation in Natural Language Understanding | Code | 1
Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models | Code | 1
A Large-scale Comprehensive Abusiveness Detection Dataset with Multifaceted Labels from Reddit | Code | 1
KNOT: Knowledge Distillation using Optimal Transport for Solving NLP Tasks | Code | 1
LexGLUE: A Benchmark Dataset for Legal Language Understanding in English | Code | 1
FewNLU: Benchmarking State-of-the-Art Methods for Few-Shot Natural Language Understanding | Code | 1
Text-based NP Enrichment | Code | 1
Types of Out-of-Distribution Texts and How to Detect Them | Code | 1
CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation | Code | 1
How to Select One Among All? An Extensive Empirical Study Towards the Robustness of Knowledge Distillation in Natural Language Understanding | Code | 1
Extracting Event Temporal Relations via Hyperbolic Geometry | Code | 1
Temporal Pyramid Transformer with Multimodal Interaction for Video Question Answering | Code | 1
Graph Based Network with Contextualized Representations of Turns in Dialogue | Code | 1
Debiasing Methods in Natural Language Understanding Make Bias More Accessible | Code | 1
Active Learning by Acquiring Contrastive Examples | Code | 1
CREAK: A Dataset for Commonsense Reasoning over Entity Knowledge | Code | 1
SLIM: Explicit Slot-Intent Mapping with BERT for Joint Multi-Intent Detection and Slot Filling | Code | 1
Page 5 of 40

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | HNN | Accuracy | 90 | | Unverified
2 | BERT-large 340M | Accuracy | 78.3 | | Unverified
3 | UDSSM-II (ensemble) | Accuracy | 78.3 | | Unverified
4 | UDSSM-I (ensemble) | Accuracy | 76.7 | | Unverified
5 | DSSM | Accuracy | 75 | | Unverified
6 | UDSSM-II | Accuracy | 75 | | Unverified
7 | BERT-base 110M + MAS | Accuracy | 68.3 | | Unverified
8 | USSM + Supervised Deepnet + 3 Knowledge Bases | Accuracy | 66.7 | | Unverified
9 | Word-level CNN+LSTM (full scoring) | Accuracy | 60 | | Unverified
10 | Subword-level Transformer LM | Accuracy | 58.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | BERT (pred POS/lemmas) | Tags (Full) Acc | 82.5 | | Unverified
2 | BERT (none) | Tags (Full) Acc | 82 | | Unverified
3 | BERT (gold POS/lemmas) | Tags (Full) Acc | 81 | | Unverified
4 | GloVe (gold POS/lemmas) | Tags (Full) Acc | 79.3 | | Unverified
5 | RoBERTa + Linear | Full F1 (Preps) | 78.2 | | Unverified
6 | GloVe (none) | Tags (Full) Acc | 77.5 | | Unverified
7 | GloVe (pred POS/lemmas) | Tags (Full) Acc | 77.1 | | Unverified
8 | SVM (feature-rich, gold syntax) | Role F1 (Preps) | 62.2 | | Unverified
9 | BiLSTM + MLP (gold syntax) | Role F1 (Preps) | 62.2 | | Unverified
10 | SVM (feature-rich, auto syntax) | Role F1 (Preps) | 58.2 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | CaseLaw-BERT | CaseHOLD | 75.6 | | Unverified
2 | Legal-BERT | CaseHOLD | 75.1 | | Unverified
3 | DeBERTa | CaseHOLD | 72.1 | | Unverified
4 | Longformer | CaseHOLD | 72 | | Unverified
5 | RoBERTa | CaseHOLD | 71.7 | | Unverified
6 | BERT | CaseHOLD | 70.7 | | Unverified
7 | BigBird | CaseHOLD | 70.4 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | ConvBERT-DG | Average | 74.6 | | Unverified
2 | ConvBERT-DG + Pre + Multi | Average | 73.8 | | Unverified
3 | mslm | Average | 73.49 | | Unverified
4 | ConvBERT + Pre + Multi | Average | 68.22 | | Unverified
5 | BanLanGen | Average | 39.16 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | ConvBERT + Pre + Multi | Average | 86.89 | | Unverified
2 | mslm | Average | 85.83 | | Unverified
3 | ConvBERT-DG + Pre + Multi | Average | 85.34 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | MT-DNN-SMART | Average | 89.9 | | Unverified
2 | BERT-LARGE | Average | 82.1 | | Unverified