SOTAVerified

Natural Language Understanding

Natural Language Understanding is a major subfield of Natural Language Processing that spans tasks such as text classification, natural language inference, and story comprehension. Applications enabled by natural language understanding range from question answering to automated reasoning.
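To make the task format concrete, natural language inference is typically framed as three-way classification over (premise, hypothesis) pairs. The sketch below is a toy word-overlap heuristic standing in for a trained model; it is purely illustrative and does not correspond to any system on this leaderboard.

```python
# Toy illustration of the NLI task format: classify a (premise, hypothesis)
# pair as entailment, contradiction, or neutral. The overlap/negation
# heuristic is a stand-in for a trained model, not a real NLI system.

NEGATIONS = {"not", "no", "never", "n't"}

def toy_nli(premise: str, hypothesis: str) -> str:
    """Return one of 'entailment', 'contradiction', 'neutral'."""
    p = set(premise.lower().split())
    h = set(hypothesis.lower().split())
    # Contradiction heuristic: the pair shares content words but differs
    # by a negation word.
    if (p ^ h) & NEGATIONS and len((p & h) - NEGATIONS) >= 2:
        return "contradiction"
    # Entailment heuristic: every hypothesis token appears in the premise.
    if h <= p:
        return "entailment"
    return "neutral"

print(toy_nli("a man is playing a guitar", "a man is playing"))               # entailment
print(toy_nli("a man is playing a guitar", "a man is not playing a guitar"))  # contradiction
print(toy_nli("a man is playing a guitar", "a woman is singing"))             # neutral
```

Real benchmarks (e.g. the NLI and story-cloze tasks listed below) replace the heuristic with a fine-tuned pretrained language model, but the input/output contract is the same.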

Source: Find a Reasonable Ending for Stories: Does Logic Relation Help the Story Cloze Test?

Papers

Showing 251-300 of 1978 papers

Title | Status | Hype
FlipDA: Effective and Robust Data Augmentation for Few-Shot Learning | Code | 1
Break, Perturb, Build: Automatic Perturbation of Reasoning Paths Through Question Decomposition | Code | 1
Bridging the Gap between Spatial and Spectral Domains: A Unified Framework for Graph Neural Networks | Code | 1
FewCLUE: A Chinese Few-shot Learning Evaluation Benchmark | Code | 1
Trusting RoBERTa over BERT: Insights from CheckListing the Natural Language Inference Task | Code | 1
FaVIQ: FAct Verification from Information-seeking Questions | Code | 1
CLINE: Contrastive Learning with Semantic Negative Examples for Natural Language Understanding | Code | 1
Efficient Deep Learning: A Survey on Making Deep Learning Models Smaller, Faster, and Better | Code | 1
Programming Puzzles | Code | 1
Text-to-SQL in the Wild: A Naturally-Occurring Dataset Based on Stack Exchange Data | Code | 1
Relative Importance in Sentence Processing | Code | 1
Event Time Extraction and Propagation via Graph Attention Networks | Code | 1
KLUE: Korean Language Understanding Evaluation | Code | 1
News Headline Grouping as a Challenging NLU Task | Code | 1
Towards General Natural Language Understanding with Probabilistic Worldbuilding | Code | 1
PanGu-α: Large-scale Autoregressive Pretrained Chinese Language Models with Auto-parallel Computation | Code | 1
X-METRA-ADA: Cross-lingual Meta-Transfer Learning Adaptation to Natural Language Understanding and Question Answering | Code | 1
SciCo: Hierarchical Cross-Document Coreference for Scientific Concepts | Code | 1
On the Importance of Effectively Adapting Pretrained Language Models for Active Learning | Code | 1
Empowering News Recommendation with Pre-trained Language Models | Code | 1
Lattice-BERT: Leveraging Multi-Granularity Representations in Chinese Pre-trained Language Models | Code | 1
XTREME-R: Towards More Challenging and Nuanced Multilingual Evaluation | Code | 1
K-PLUG: Knowledge-injected Pre-trained Language Model for Natural Language Understanding and Generation in E-Commerce | Code | 1
Adversarial Regularization as Stackelberg Game: An Unrolled Optimization Approach | Code | 1
Intent Detection and Slot Filling for Vietnamese | Code | 1
How Certain is Your Transformer? | Code | 1
Structure Inducing Pre-Training | Code | 1
Multilingual Code-Switching for Zero-Shot Cross-Lingual Intent Prediction and Slot Filling | Code | 1
Empathetic BERT2BERT Conversational Model: Learning Arabic Language Generation with Little Data | Code | 1
Syntax-BERT: Improving Pre-trained Transformers with Syntax Trees | Code | 1
Chess as a Testbed for Language Model State Tracking | Code | 1
Evolving Attention with Residual Convolutions | Code | 1
Training Vision Transformers for Image Retrieval | Code | 1
VisualMRC: Machine Reading Comprehension on Document Images | Code | 1
LSOIE: A Large-Scale Dataset for Supervised Open Information Extraction | Code | 1
RomeBERT: Robust Training of Multi-Exit BERT | Code | 1
TextGNN: Improving Text Encoder via Graph Neural Network in Sponsored Search | Code | 1
BanglaBERT: Language Model Pretraining and Benchmarks for Low-Resource Language Understanding Evaluation in Bangla | Code | 1
K-PLUG: Knowledge-Injected Pre-trained Language Model for Natural Language Understanding and Generation | Code | 1
Robustness Testing of Language Understanding in Task-Oriented Dialog | Code | 1
UnNatural Language Inference | Code | 1
Code Summarization with Structure-induced Transformer | Code | 1
MeDAL: Medical Abbreviation Disambiguation Dataset for Natural Language Understanding Pretraining | Code | 1
CSKG: The CommonSense Knowledge Graph | Code | 1
ParsiNLU: A Suite of Language Understanding Challenges for Persian | Code | 1
Infusing Finetuning with Semantic Dependencies | Code | 1
Braid: Weaving Symbolic and Neural Knowledge into Coherent Logical Explanations | Code | 1
GLGE: A New General Language Generation Evaluation Benchmark | Code | 1
Fact-level Extractive Summarization with Hierarchical Graph Mask on BERT | Code | 1
A Sequence-to-Sequence Approach to Dialogue State Tracking | Code | 1
Page 6 of 40

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | HNN | Accuracy | 90 | | Unverified
2 | UDSSM-II (ensemble) | Accuracy | 78.3 | | Unverified
3 | BERT-large 340M | Accuracy | 78.3 | | Unverified
4 | UDSSM-I (ensemble) | Accuracy | 76.7 | | Unverified
5 | DSSM | Accuracy | 75 | | Unverified
6 | UDSSM-II | Accuracy | 75 | | Unverified
7 | BERT-base 110M + MAS | Accuracy | 68.3 | | Unverified
8 | USSM + Supervised Deepnet + 3 Knowledge Bases | Accuracy | 66.7 | | Unverified
9 | Word-level CNN+LSTM (full scoring) | Accuracy | 60 | | Unverified
10 | Subword-level Transformer LM | Accuracy | 58.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | BERT (pred POS/lemmas) | Tags (Full) Acc | 82.5 | | Unverified
2 | BERT (none) | Tags (Full) Acc | 82 | | Unverified
3 | BERT (gold POS/lemmas) | Tags (Full) Acc | 81 | | Unverified
4 | GloVe (gold POS/lemmas) | Tags (Full) Acc | 79.3 | | Unverified
5 | RoBERTa + Linear | Full F1 (Preps) | 78.2 | | Unverified
6 | GloVe (none) | Tags (Full) Acc | 77.5 | | Unverified
7 | GloVe (pred POS/lemmas) | Tags (Full) Acc | 77.1 | | Unverified
8 | SVM (feature-rich, gold syntax) | Role F1 (Preps) | 62.2 | | Unverified
9 | BiLSTM + MLP (gold syntax) | Role F1 (Preps) | 62.2 | | Unverified
10 | SVM (feature-rich, auto syntax) | Role F1 (Preps) | 58.2 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | CaseLaw-BERT | CaseHOLD | 75.6 | | Unverified
2 | Legal-BERT | CaseHOLD | 75.1 | | Unverified
3 | DeBERTa | CaseHOLD | 72.1 | | Unverified
4 | Longformer | CaseHOLD | 72 | | Unverified
5 | RoBERTa | CaseHOLD | 71.7 | | Unverified
6 | BERT | CaseHOLD | 70.7 | | Unverified
7 | BigBird | CaseHOLD | 70.4 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | ConvBERT-DG | Average | 74.6 | | Unverified
2 | ConvBERT-DG + Pre + Multi | Average | 73.8 | | Unverified
3 | mslm | Average | 73.49 | | Unverified
4 | ConvBERT + Pre + Multi | Average | 68.22 | | Unverified
5 | BanLanGen | Average | 39.16 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | ConvBERT + Pre + Multi | Average | 86.89 | | Unverified
2 | mslm | Average | 85.83 | | Unverified
3 | ConvBERT-DG + Pre + Multi | Average | 85.34 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | MT-DNN-SMART | Average | 89.9 | | Unverified
2 | BERT-LARGE | Average | 82.1 | | Unverified