SOTAVerified

Natural Language Understanding

Natural Language Understanding (NLU) is a core subfield of Natural Language Processing that encompasses tasks such as text classification, natural language inference, and story comprehension. Applications enabled by natural language understanding range from question answering to automated reasoning.

Source: Find a Reasonable Ending for Stories: Does Logic Relation Help the Story Cloze Test?

Papers

Showing 1451–1500 of 1978 papers

Title | Status | Hype
Leveraging Large Language Models for Semantic Query Processing in a Scholarly Knowledge Graph | | 0
Leveraging Semantic Representations Combined with Contextual Word Representations for Recognizing Textual Entailment in Vietnamese | | 0
Leveraging Sentence-level Information with Encoder LSTM for Semantic Slot Filling | | 0
Leveraging Syntactic Constructions for Metaphor Identification | | 0
Lexicon-Free Conversational Speech Recognition with Neural Networks | | 0
LIDSNet: A Lightweight on-device Intent Detection model using Deep Siamese Network | | 0
LightMBERT: A Simple Yet Effective Method for Multilingual BERT Distillation | | 0
Lightweight Transformers for Conversational AI | | 0
LiLiuM: eBay's Large Language Models for e-commerce | | 0
LINGO : Visually Debiasing Natural Language Instructions to Support Task Diversity | | 0
Linguistic features for sentence difficulty prediction in ABSA | | 0
LLaMAntino: LLaMA 2 Models for Effective Text Generation in Italian Language | | 0
LLM+AL: Bridging Large Language Models and Action Languages for Complex Reasoning about Actions | | 0
LLM-assisted Vector Similarity Search | | 0
LLM-based Weak Supervision Framework for Query Intent Classification in Video Search | | 0
LLM for SoC Security: A Paradigm Shift | | 0
LLM-GAN: Construct Generative Adversarial Network Through Large Language Models For Explainable Fake News Detection | | 0
LLMs' Reading Comprehension Is Affected by Parametric Knowledge and Struggles with Hypothetical Statements | | 0
Local Structure Matters Most in Most Languages | | 0
Local Structure Matters Most: Perturbation Study in NLU | | 0
Logical analysis of natural language semantics to solve the problem of computer understanding | | 0
Logical forms complement probability in understanding language model (and human) performance | | 0
Logically Consistent Language Models via Neuro-Symbolic Integration | | 0
Logic, Language, and Calculus | | 0
Logic Pre-Training of Language Models | | 0
Long Short-Term Memory Over Tree Structures | | 0
LoNLI: An Extensible Framework for Testing Diverse Logical Reasoning Capabilities for NLI | | 0
Can You Tell Me How to Get Past Sesame Street? Sentence-Level Pretraining Beyond Language Modeling | | 0
LoRTA: Low Rank Tensor Adaptation of Large Language Models | | 0
LoSparse: Structured Compression of Large Language Models based on Low-Rank and Sparse Approximation | | 0
Lost in Inference: Rediscovering the Role of Natural Language Inference for Large Language Models | | 0
Low-Resource Adaptation of Neural NLP Models | | 0
LRC-BERT: Latent-representation Contrastive Knowledge Distillation for Natural Language Understanding | | 0
Machine Reading, Fast and Slow: When Do Models “Understand” Language? | | 0
Machine Translation with Large Language Models: Prompt Engineering for Persian, English, and Russian Directions | | 0
MaCP: Minimal yet Mighty Adaptation via Hierarchical Cosine Projection | | 0
Magnitude Pruning of Large Pretrained Transformer Models with a Mixture Gaussian Prior | | 0
Making Language Models Robust Against Negation | | 0
Making Neural Machine Reading Comprehension Faster | | 0
Making the Most of your Model: Methods for Finetuning and Applying Pretrained Transformers | | 0
MaLLaM -- Malaysia Large Language Model | | 0
Mandarinograd: A Chinese Collection of Winograd Schemas | | 0
Markov Logic Networks for Situated Incremental Natural Language Understanding | | 0
MARRS: Multimodal Reference Resolution System | | 0
MasonTigers at SemEval-2024 Task 9: Solving Puzzles with an Ensemble of Chain-of-Thoughts | | 0
Matrix-Transformation Based Low-Rank Adaptation (MTLoRA): A Brain-Inspired Method for Parameter-Efficient Fine-Tuning | | 0
Maximizing Signal in Human-Model Preference Alignment | | 0
MCScript: A Novel Dataset for Assessing Machine Comprehension Using Script Knowledge | | 0
mCSQA: Multilingual Commonsense Reasoning Dataset with Unified Creation Strategy by Language Models and Humans | | 0
Page 30 of 40

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | HNN | Accuracy | 90 | | Unverified
2 | UDSSM-II (ensemble) | Accuracy | 78.3 | | Unverified
3 | BERT-large 340M | Accuracy | 78.3 | | Unverified
4 | UDSSM-I (ensemble) | Accuracy | 76.7 | | Unverified
5 | DSSM | Accuracy | 75 | | Unverified
6 | UDSSM-II | Accuracy | 75 | | Unverified
7 | BERT-base 110M + MAS | Accuracy | 68.3 | | Unverified
8 | USSM + Supervised Deepnet + 3 Knowledge Bases | Accuracy | 66.7 | | Unverified
9 | Word-level CNN+LSTM (full scoring) | Accuracy | 60 | | Unverified
10 | Subword-level Transformer LM | Accuracy | 58.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | BERT (pred POS/lemmas) | Tags (Full) Acc | 82.5 | | Unverified
2 | BERT (none) | Tags (Full) Acc | 82 | | Unverified
3 | BERT (gold POS/lemmas) | Tags (Full) Acc | 81 | | Unverified
4 | GloVe (gold POS/lemmas) | Tags (Full) Acc | 79.3 | | Unverified
5 | RoBERTa + Linear | Full F1 (Preps) | 78.2 | | Unverified
6 | GloVe (none) | Tags (Full) Acc | 77.5 | | Unverified
7 | GloVe (pred POS/lemmas) | Tags (Full) Acc | 77.1 | | Unverified
8 | SVM (feature-rich, gold syntax) | Role F1 (Preps) | 62.2 | | Unverified
9 | BiLSTM + MLP (gold syntax) | Role F1 (Preps) | 62.2 | | Unverified
10 | SVM (feature-rich, auto syntax) | Role F1 (Preps) | 58.2 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | CaseLaw-BERT | CaseHOLD | 75.6 | | Unverified
2 | Legal-BERT | CaseHOLD | 75.1 | | Unverified
3 | DeBERTa | CaseHOLD | 72.1 | | Unverified
4 | Longformer | CaseHOLD | 72 | | Unverified
5 | RoBERTa | CaseHOLD | 71.7 | | Unverified
6 | BERT | CaseHOLD | 70.7 | | Unverified
7 | BigBird | CaseHOLD | 70.4 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | ConvBERT-DG | Average | 74.6 | | Unverified
2 | ConvBERT-DG + Pre + Multi | Average | 73.8 | | Unverified
3 | mslm | Average | 73.49 | | Unverified
4 | ConvBERT + Pre + Multi | Average | 68.22 | | Unverified
5 | BanLanGen | Average | 39.16 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | ConvBERT + Pre + Multi | Average | 86.89 | | Unverified
2 | mslm | Average | 85.83 | | Unverified
3 | ConvBERT-DG + Pre + Multi | Average | 85.34 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | MT-DNN-SMART | Average | 89.9 | | Unverified
2 | BERT-LARGE | Average | 82.1 | | Unverified