SOTAVerified

Natural Language Understanding

Natural Language Understanding (NLU) is a core subfield of Natural Language Processing that covers tasks such as text classification, natural language inference, and story comprehension. Applications enabled by natural language understanding range from question answering to automated reasoning.

Source: Find a Reasonable Ending for Stories: Does Logic Relation Help the Story Cloze Test?
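Natural language inference, one of the tasks named above, asks whether a hypothesis follows from a premise. The sketch below is a toy illustration of the task's input/output shape only; the lexical-overlap heuristic and the 0.6 threshold are assumptions for demonstration, not how any of the listed systems work.

```python
# Toy sketch of natural language inference (NLI): map a (premise, hypothesis)
# pair to one of the standard labels. Real NLU systems use trained models;
# this illustrative heuristic only checks a negation cue and lexical overlap.

def nli_heuristic(premise: str, hypothesis: str) -> str:
    p = set(premise.lower().split())
    h = set(hypothesis.lower().split())
    # Assumed cue: "not" appearing on only one side suggests contradiction.
    if "not" in (h - p) or "not" in (p - h):
        return "contradiction"
    # Assumed threshold: high word overlap suggests entailment.
    overlap = len(p & h) / max(len(h), 1)
    return "entailment" if overlap >= 0.6 else "neutral"

print(nli_heuristic("a man is playing a guitar",
                    "a man is playing an instrument"))  # prints "entailment"
```

A trained NLI model would replace the heuristic with learned sentence-pair scoring, but the three-way label interface is the same.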

Papers

Showing 1101-1150 of 1978 papers

Title | Status | Hype
Natural language understanding for logical games | Code | 0
A Review of Text Style Transfer using Deep Learning | - | 0
Structural Persistence in Language Models: Priming as a Window into Abstract Language Representations | Code | 0
Logic Pre-Training of Language Models | - | 0
Sparse Attention with Learning to Hash | - | 0
Variance Pruning: Pruning Language Models via Temporal Neuron Variance | - | 0
Pseudo Knowledge Distillation: Towards Learning Optimal Instance-specific Label Smoothing Regularization | - | 0
Call Larisa Ivanovna: Code-Switching Fools Multilingual NLU Models | Code | 0
Using Pause Information for More Accurate Entity Recognition | - | 0
FewNLU: Benchmarking State-of-the-Art Methods for Few-Shot Natural Language Understanding | Code | 1
Text-based NP Enrichment | Code | 1
Knowledge Distillation with Noisy Labels for Natural Language Understanding | - | 0
Training Dynamic based data filtering may not work for NLP datasets | - | 0
What Makes Reading Comprehension Questions Difficult? Investigating Variation in Passage Sources and Question Types | - | 0
FlipDA: Effective and Robust Data Augmentation for Few-Shot Learning | - | 0
CoDA21: Evaluating Language Understanding Capabilities of NLP Models With Context-Definition Alignment | - | 0
Remixers: A Mixer-Transformer Architecture with Compositional Operators for Natural Language Understanding | - | 0
Semi-Supervised Few-Shot Intent Classification and Slot Filling | - | 0
Fine-Tuned Transformers Show Clusters of Similar Representations Across Layers | - | 0
Self-training with Few-shot Rationalization: Teacher Explanations Aid Student in Few-shot NLU | - | 0
Can Machines Read Coding Manuals Yet? -- A Benchmark for Building Better Language Models for Code Understanding | Code | 0
The Unreasonable Effectiveness of the Baseline: Discussing SVMs in Legal Text Classification | - | 0
ARCH: Efficient Adversarial Regularized Training with Caching | Code | 0
Task-adaptive Pre-training and Self-training are Complementary for Natural Language Understanding | - | 0
Types of Out-of-Distribution Texts and How to Detect Them | Code | 1
CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation | Code | 1
How to Select One Among All? An Extensive Empirical Study Towards the Robustness of Knowledge Distillation in Natural Language Understanding | Code | 1
GradTS: A Gradient-Based Automatic Auxiliary Task Selection Method Based on Transformer Networks | - | 0
Extracting Event Temporal Relations via Hyperbolic Geometry | Code | 1
Prior Omission of Dissimilar Source Domain(s) for Cost-Effective Few-Shot Learning | - | 0
Exophoric Pronoun Resolution in Dialogues with Topic Regularization | Code | 0
Temporal Pyramid Transformer with Multimodal Interaction for Video Question Answering | Code | 1
Graph Based Network with Contextualized Representations of Turns in Dialogue | Code | 1
Debiasing Methods in Natural Language Understanding Make Bias More Accessible | Code | 1
Continuous Entailment Patterns for Lexical Inference in Context | Code | 0
Active Learning by Acquiring Contrastive Examples | Code | 1
Proto: A Neural Cocktail for Generating Appealing Conversations | - | 0
End-to-End Self-Debiasing Framework for Robust NLU Training | - | 0
Error Detection in Large-Scale Natural Language Understanding Systems Using Transformer Models | - | 0
CREAK: A Dataset for Commonsense Reasoning over Entity Knowledge | Code | 1
InFoBERT: Zero-Shot Approach to Natural Language Understanding Using Contextualized Word Embedding | - | 0
Approximating a Zulu GF concrete syntax with a neural network for natural language understanding | - | 0
Does Knowledge Help General NLU? An Empirical Study | - | 0
ASR-GLUE: A New Multi-task Benchmark for ASR-Robust Natural Language Understanding | - | 0
WALNUT: A Benchmark on Semi-weakly Supervised Learning for Natural Language Understanding | - | 0
Integrating Heuristics and Learning in a Computational Architecture for Cognitive Trading | - | 0
SLIM: Explicit Slot-Intent Mapping with BERT for Joint Multi-Intent Detection and Slot Filling | Code | 1
SAUCE: Truncated Sparse Document Signature Bit-Vectors for Fast Web-Scale Corpus Expansion | - | 0
A New Sentence Ordering Method Using BERT Pretrained Model | - | 0
Using BERT Encoding and Sentence-Level Language Model for Sentence Ordering | - | 0
Page 23 of 40

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | HNN | Accuracy | 90 | - | Unverified
2 | UDSSM-II (ensemble) | Accuracy | 78.3 | - | Unverified
3 | BERT-large 340M | Accuracy | 78.3 | - | Unverified
4 | UDSSM-I (ensemble) | Accuracy | 76.7 | - | Unverified
5 | DSSM | Accuracy | 75 | - | Unverified
6 | UDSSM-II | Accuracy | 75 | - | Unverified
7 | BERT-base 110M + MAS | Accuracy | 68.3 | - | Unverified
8 | USSM + Supervised Deepnet + 3 Knowledge Bases | Accuracy | 66.7 | - | Unverified
9 | Word-level CNN+LSTM (full scoring) | Accuracy | 60 | - | Unverified
10 | Subword-level Transformer LM | Accuracy | 58.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BERT (pred POS/lemmas) | Tags (Full) Acc | 82.5 | - | Unverified
2 | BERT (none) | Tags (Full) Acc | 82 | - | Unverified
3 | BERT (gold POS/lemmas) | Tags (Full) Acc | 81 | - | Unverified
4 | GloVe (gold POS/lemmas) | Tags (Full) Acc | 79.3 | - | Unverified
5 | RoBERTa + Linear | Full F1 (Preps) | 78.2 | - | Unverified
6 | GloVe (none) | Tags (Full) Acc | 77.5 | - | Unverified
7 | GloVe (pred POS/lemmas) | Tags (Full) Acc | 77.1 | - | Unverified
8 | SVM (feature-rich, gold syntax) | Role F1 (Preps) | 62.2 | - | Unverified
9 | BiLSTM + MLP (gold syntax) | Role F1 (Preps) | 62.2 | - | Unverified
10 | SVM (feature-rich, auto syntax) | Role F1 (Preps) | 58.2 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CaseLaw-BERT | CaseHOLD | 75.6 | - | Unverified
2 | Legal-BERT | CaseHOLD | 75.1 | - | Unverified
3 | DeBERTa | CaseHOLD | 72.1 | - | Unverified
4 | Longformer | CaseHOLD | 72 | - | Unverified
5 | RoBERTa | CaseHOLD | 71.7 | - | Unverified
6 | BERT | CaseHOLD | 70.7 | - | Unverified
7 | BigBird | CaseHOLD | 70.4 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ConvBERT-DG | Average | 74.6 | - | Unverified
2 | ConvBERT-DG + Pre + Multi | Average | 73.8 | - | Unverified
3 | mslm | Average | 73.49 | - | Unverified
4 | ConvBERT + Pre + Multi | Average | 68.22 | - | Unverified
5 | BanLanGen | Average | 39.16 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ConvBERT + Pre + Multi | Average | 86.89 | - | Unverified
2 | mslm | Average | 85.83 | - | Unverified
3 | ConvBERT-DG + Pre + Multi | Average | 85.34 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | MT-DNN-SMART | Average | 89.9 | - | Unverified
2 | BERT-LARGE | Average | 82.1 | - | Unverified