SOTAVerified

Natural Language Understanding

Natural Language Understanding (NLU) is a subfield of Natural Language Processing that covers tasks such as text classification, natural language inference, and story comprehension. Applications enabled by NLU range from question answering to automated reasoning.

Source: Find a Reasonable Ending for Stories: Does Logic Relation Help the Story Cloze Test?

Papers

Showing 1551–1600 of 1978 papers

Title | Status | Hype
SAPPHIRE: Simple Aligner for Phrasal Paraphrase with Hierarchical Representation | — | 0
Inference Annotation of a Chinese Corpus for Opinion Mining | — | 0
Corpus Generation for Voice Command in Smart Home and the Effect of Speech Synthesis on End-to-End SLU | — | 0
Mandarinograd: A Chinese Collection of Winograd Schemas | — | 0
Is Language Modeling Enough? Evaluating Effective Embedding Combinations | — | 0
Dialogue-AMR: Abstract Meaning Representation for Dialogue | — | 0
Handling Noun-Noun Coreference in Tamil | — | 0
Intermediate-Task Transfer Learning with Pretrained Models for Natural Language Understanding: When and Why Does It Work? | — | 0
KLEJ: Comprehensive Benchmark for Polish Language Understanding | Code | 1
Language (Re)modelling: Towards Embodied Language Understanding | — | 0
Mind the Trade-off: Debiasing NLU Models without Degrading the In-distribution Performance | Code | 1
Enriched Pre-trained Transformers for Joint Slot Filling and Intent Detection | — | 0
TAVAT: Token-Aware Virtual Adversarial Training for Language Understanding | Code | 1
RikiNet: Reading Wikipedia Pages for Natural Question Answering | — | 0
Towards Unsupervised Language Understanding and Generation by Joint Dual Learning | Code | 1
Hierarchical Encoders for Modeling and Interpreting Screenplays | — | 0
Learning to Rank Intents in Voice Assistants | — | 0
Lexical Semantic Recognition | Code | 1
End-to-End Slot Alignment and Recognition for Cross-Lingual NLU | Code | 1
Benchmarking Robustness of Machine Reading Comprehension Models | Code | 1
FitChat: Conversational Artificial Intelligence Interventions for Encouraging Physical Activity in Older Adults | — | 0
Semantics-Aware Inferential Network for Natural Language Understanding | — | 0
ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT | Code | 2
Dual Learning for Semi-Supervised Natural Language Understanding | Code | 1
Beyond 512 Tokens: Siamese Multi-depth Transformer-based Hierarchical Encoder for Long-Form Document Matching | Code | 0
Data Annealing for Informal Language Understanding Tasks | — | 0
Probabilistically Masked Language Model Capable of Autoregressive Generation in Arbitrary Word Order | Code | 1
A Review of Winograd Schema Challenge Datasets and Approaches | — | 0
Adversarial Training for Large Neural Language Models | — | 0
Show Us the Way: Learning to Manage Dialog from Demonstrations | — | 0
TOD-BERT: Pre-trained Natural Language Understanding for Task-Oriented Dialogue | Code | 1
Cross-Lingual Semantic Role Labeling with High-Quality Translated Training Corpus | Code | 1
PALM: Pre-training an Autoencoding&Autoregressive Language Model for Context-conditioned Generation | Code | 1
CLUE: A Chinese Language Understanding Evaluation Benchmark | Code | 2
Unsupervised Commonsense Question Answering with Self-Talk | Code | 1
Identifying Distributional Perspective Differences from Colingual Groups | Code | 0
LadaBERT: Lightweight Adaptation of BERT through Hybrid Model Compression | — | 0
Are Natural Language Inference Models IMPPRESsive? Learning IMPlicature and PRESupposition | Code | 1
KorNLI and KorSTS: New Benchmark Datasets for Korean Natural Language Understanding | Code | 1
CG-BERT: Conditional Text Generation with BERT for Generalized Few-shot Intent Detection | — | 0
Benchmarking Machine Reading Comprehension: A Psychological Perspective | — | 0
XGLUE: A New Benchmark Dataset for Cross-lingual Pre-training, Understanding and Generation | Code | 1
Sum-product networks: A survey | Code | 1
ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators | Code | 1
Prior Knowledge Driven Label Embedding for Slot Filling in Natural Language Understanding | — | 0
Temporal Embeddings and Transformer Models for Narrative Text Understanding | — | 0
Ecological Semantics: Programming Environments for Situated Language Understanding | — | 0
Zero-Shot Cross-Lingual Transfer with Meta Learning | Code | 1
Convo: What does conversational programming need? An exploration of machine learning interface design | — | 0
UniLMv2: Pseudo-Masked Language Models for Unified Language Model Pre-Training | Code | 1
Page 32 of 40

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | HNN | Accuracy | 90 | — | Unverified
2 | UDSSM-II (ensemble) | Accuracy | 78.3 | — | Unverified
3 | BERT-large 340M | Accuracy | 78.3 | — | Unverified
4 | UDSSM-I (ensemble) | Accuracy | 76.7 | — | Unverified
5 | DSSM | Accuracy | 75 | — | Unverified
6 | UDSSM-II | Accuracy | 75 | — | Unverified
7 | BERT-base 110M + MAS | Accuracy | 68.3 | — | Unverified
8 | USSM + Supervised Deepnet + 3 Knowledge Bases | Accuracy | 66.7 | — | Unverified
9 | Word-level CNN+LSTM (full scoring) | Accuracy | 60 | — | Unverified
10 | Subword-level Transformer LM | Accuracy | 58.3 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | BERT (pred POS/lemmas) | Tags (Full) Acc | 82.5 | — | Unverified
2 | BERT (none) | Tags (Full) Acc | 82 | — | Unverified
3 | BERT (gold POS/lemmas) | Tags (Full) Acc | 81 | — | Unverified
4 | GloVe (gold POS/lemmas) | Tags (Full) Acc | 79.3 | — | Unverified
5 | RoBERTa + Linear | Full F1 (Preps) | 78.2 | — | Unverified
6 | GloVe (none) | Tags (Full) Acc | 77.5 | — | Unverified
7 | GloVe (pred POS/lemmas) | Tags (Full) Acc | 77.1 | — | Unverified
8 | SVM (feature-rich, gold syntax) | Role F1 (Preps) | 62.2 | — | Unverified
9 | BiLSTM + MLP (gold syntax) | Role F1 (Preps) | 62.2 | — | Unverified
10 | SVM (feature-rich, auto syntax) | Role F1 (Preps) | 58.2 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | CaseLaw-BERT | CaseHOLD | 75.6 | — | Unverified
2 | Legal-BERT | CaseHOLD | 75.1 | — | Unverified
3 | DeBERTa | CaseHOLD | 72.1 | — | Unverified
4 | Longformer | CaseHOLD | 72 | — | Unverified
5 | RoBERTa | CaseHOLD | 71.7 | — | Unverified
6 | BERT | CaseHOLD | 70.7 | — | Unverified
7 | BigBird | CaseHOLD | 70.4 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | ConvBERT-DG | Average | 74.6 | — | Unverified
2 | ConvBERT-DG + Pre + Multi | Average | 73.8 | — | Unverified
3 | mslm | Average | 73.49 | — | Unverified
4 | ConvBERT + Pre + Multi | Average | 68.22 | — | Unverified
5 | BanLanGen | Average | 39.16 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | ConvBERT + Pre + Multi | Average | 86.89 | — | Unverified
2 | mslm | Average | 85.83 | — | Unverified
3 | ConvBERT-DG + Pre + Multi | Average | 85.34 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | MT-DNN-SMART | Average | 89.9 | — | Unverified
2 | BERT-LARGE | Average | 82.1 | — | Unverified