SOTAVerified

Natural Language Understanding

Natural Language Understanding is a core area of Natural Language Processing that covers tasks such as text classification, natural language inference, and story comprehension. Applications enabled by natural language understanding range from question answering to automated reasoning.
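Two of the task formats named above can be illustrated with minimal records. This is a hypothetical sketch of the typical input/output shape of these tasks; the field names and example texts are our own, not drawn from any specific benchmark's schema.

```python
# Natural language inference: given a premise and a hypothesis,
# predict one of three labels.
nli_example = {
    "premise": "A man is playing a guitar on stage.",
    "hypothesis": "A man is performing music.",
    "label": "entailment",  # one of: entailment, neutral, contradiction
}

# Story comprehension (Story Cloze style): given a short story,
# pick the more plausible of two candidate endings.
story_cloze_example = {
    "story": [
        "Karen was assigned a roommate her first year of college.",
        "Her roommate asked her to go to a nearby concert.",
        "Karen agreed happily.",
        "The show was absolutely exhilarating.",
    ],
    "endings": [
        "Karen became good friends with her roommate.",
        "Karen hated her roommate.",
    ],
    "correct_ending": 0,  # index of the plausible ending
}

# Sanity checks on the record structure.
NLI_LABELS = {"entailment", "neutral", "contradiction"}
assert nli_example["label"] in NLI_LABELS
assert 0 <= story_cloze_example["correct_ending"] < len(story_cloze_example["endings"])
```

A system for either task maps the input fields to the label field; benchmarks in the list below score that mapping, typically by accuracy.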

Source: Find a Reasonable Ending for Stories: Does Logic Relation Help the Story Cloze Test?

Papers

Showing 851–900 of 1978 papers

| Title | Status | Hype |
|---|---|---|
| Compressing Pre-trained Transformers via Low-Bit NxM Sparsity for Natural Language Understanding | | 0 |
| Not Cheating on the Turing Test: Towards Grounded Language Learning in Artificial Intelligence | | 0 |
| Solving Quantitative Reasoning Problems with Language Models | Code | 2 |
| ZoDIAC: Zoneout Dropout Injection Attention Calculation | Code | 0 |
| Endowing Language Models with Multimodal Knowledge Graph Representations | Code | 1 |
| Meta Auxiliary Learning for Low-resource Spoken Language Understanding | | 0 |
| PLATON: Pruning Large Transformer Models with Upper Confidence Bound of Weight Importance | Code | 1 |
| Unified BERT for Few-shot Natural Language Understanding | | 0 |
| Why Robust Natural Language Understanding is a Challenge | | 0 |
| Alexa Teacher Model: Pretraining and Distilling Multi-Billion-Parameter Encoders for Natural Language Understanding Systems | | 0 |
| CHQ-Summ: A Dataset for Consumer Healthcare Question Summarization | Code | 0 |
| ProcTHOR: Large-Scale Embodied AI Using Procedural Generation | | 0 |
| Transformer based Urdu Handwritten Text Optical Character Reader | | 0 |
| TCE at Qur’an QA 2022: Arabic Language Question Answering Over Holy Qur’an Using a Post-Processed Ensemble of BERT-based Models | Code | 1 |
| Strategy-level Entrainment of Dialogue System Users in a Creative Visual Reference Resolution Task | | 0 |
| A Large Interlinked Knowledge Graph of the Italian Cultural Heritage | | 0 |
| DTW at Qur’an QA 2022: Utilising Transfer Learning with Transformers for Question Answering in a Low-resource Domain | Code | 0 |
| Dialogue Act and Slot Recognition in Italian Complex Dialogues | | 0 |
| BasqueGLUE: A Natural Language Understanding Benchmark for Basque | Code | 0 |
| BaSCo: An Annotated Basque-Spanish Code-Switching Corpus for Natural Language Understanding | | 0 |
| Question Modifiers in Visual Question Answering | | 0 |
| The Robotic Surgery Procedural Framebank | Code | 0 |
| Negation Detection in Dutch Spoken Human-Computer Conversations | | 0 |
| Exploring Text Recombination for Automatic Narrative Level Detection | | 0 |
| Towards Building a Spoken Dialogue System for Argument Exploration | | 0 |
| JGLUE: Japanese General Language Understanding Evaluation | Code | 2 |
| SenticNet 7: A Commonsense-based Neurosymbolic AI Framework for Explainable Sentiment Analysis | | 0 |
| Étiquetage ou génération de séquences pour la compréhension automatique du langage en contexte d’interaction? (Sequence tagging or sequence generation for Natural Language Understanding?) | | 0 |
| A Multi-level Supervised Contrastive Learning Framework for Low-Resource Natural Language Inference | | 0 |
| E2S2: Encoding-Enhanced Sequence-to-Sequence Pretraining for Language Understanding and Generation | Code | 0 |
| NLU for Game-based Learning in Real: Initial Evaluations | | 0 |
| IGLU 2022: Interactive Grounded Language Understanding in a Collaborative Environment at NeurIPS 2022 | Code | 0 |
| Automatic question generation based on sentence structure analysis using machine learning approach | | 0 |
| InstructDial: Improving Zero and Few-shot Generalization in Dialogue through Instruction Tuning | Code | 1 |
| GisPy: A Tool for Measuring Gist Inference Score in Text | Code | 1 |
| AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning | Code | 1 |
| When More Data Hurts: A Troubling Quirk in Developing Broad-Coverage Natural Language Understanding Systems | Code | 0 |
| A Survey on Neural Open Information Extraction: Current Status and Future Directions | | 0 |
| Vector-Quantized Input-Contextualized Soft Prompts for Natural Language Understanding | Code | 1 |
| The Importance of Being Parameters: An Intra-Distillation Method for Serious Gains | Code | 1 |
| Contrastive Representation Learning for Cross-Document Coreference Resolution of Events and Entities | | 0 |
| Training Efficient CNNs: Tweaking the Nuts and Bolts of Neural Networks for Lighter, Faster and Robust Models | Code | 0 |
| Revisiting Pre-trained Language Models and their Evaluation for Arabic Natural Language Understanding | | 0 |
| Calibration of Natural Language Understanding Models with Venn-ABERS Predictors | Code | 0 |
| Down and Across: Introducing Crossword-Solving as a New NLP Benchmark | Code | 0 |
| Enhancing Slot Tagging with Intent Features for Task Oriented Natural Language Understanding using BERT | | 0 |
| Are Prompt-based Models Clueless? | | 0 |
| PromptDA: Label-guided Data Augmentation for Prompt-based Few-shot Learners | Code | 0 |
| A Fast Attention Network for Joint Intent Detection and Slot Filling on Edge Devices | | 0 |
Page 18 of 40

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | HNN | Accuracy | 90 | | Unverified |
| 2 | BERT-large 340M | Accuracy | 78.3 | | Unverified |
| 3 | UDSSM-II (ensemble) | Accuracy | 78.3 | | Unverified |
| 4 | UDSSM-I (ensemble) | Accuracy | 76.7 | | Unverified |
| 5 | DSSM | Accuracy | 75 | | Unverified |
| 6 | UDSSM-II | Accuracy | 75 | | Unverified |
| 7 | BERT-base 110M + MAS | Accuracy | 68.3 | | Unverified |
| 8 | USSM + Supervised Deepnet + 3 Knowledge Bases | Accuracy | 66.7 | | Unverified |
| 9 | Word-level CNN+LSTM (full scoring) | Accuracy | 60 | | Unverified |
| 10 | Subword-level Transformer LM | Accuracy | 58.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | BERT (pred POS/lemmas) | Tags (Full) Acc | 82.5 | | Unverified |
| 2 | BERT (none) | Tags (Full) Acc | 82 | | Unverified |
| 3 | BERT (gold POS/lemmas) | Tags (Full) Acc | 81 | | Unverified |
| 4 | GloVe (gold POS/lemmas) | Tags (Full) Acc | 79.3 | | Unverified |
| 5 | RoBERTa + Linear | Full F1 (Preps) | 78.2 | | Unverified |
| 6 | GloVe (none) | Tags (Full) Acc | 77.5 | | Unverified |
| 7 | GloVe (pred POS/lemmas) | Tags (Full) Acc | 77.1 | | Unverified |
| 8 | SVM (feature-rich, gold syntax) | Role F1 (Preps) | 62.2 | | Unverified |
| 9 | BiLSTM + MLP (gold syntax) | Role F1 (Preps) | 62.2 | | Unverified |
| 10 | SVM (feature-rich, auto syntax) | Role F1 (Preps) | 58.2 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | CaseLaw-BERT | CaseHOLD | 75.6 | | Unverified |
| 2 | Legal-BERT | CaseHOLD | 75.1 | | Unverified |
| 3 | DeBERTa | CaseHOLD | 72.1 | | Unverified |
| 4 | Longformer | CaseHOLD | 72 | | Unverified |
| 5 | RoBERTa | CaseHOLD | 71.7 | | Unverified |
| 6 | BERT | CaseHOLD | 70.7 | | Unverified |
| 7 | BigBird | CaseHOLD | 70.4 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ConvBERT-DG | Average | 74.6 | | Unverified |
| 2 | ConvBERT-DG + Pre + Multi | Average | 73.8 | | Unverified |
| 3 | mslm | Average | 73.49 | | Unverified |
| 4 | ConvBERT + Pre + Multi | Average | 68.22 | | Unverified |
| 5 | BanLanGen | Average | 39.16 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ConvBERT + Pre + Multi | Average | 86.89 | | Unverified |
| 2 | mslm | Average | 85.83 | | Unverified |
| 3 | ConvBERT-DG + Pre + Multi | Average | 85.34 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MT-DNN-SMART | Average | 89.9 | | Unverified |
| 2 | BERT-LARGE | Average | 82.1 | | Unverified |