SOTAVerified

Natural Language Understanding

Natural Language Understanding (NLU) is a core area of Natural Language Processing that spans tasks such as text classification, natural language inference, and story comprehension. Applications enabled by NLU range from question answering to automated reasoning.
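To make one of the listed tasks concrete, here is a minimal, purely illustrative sketch of text classification: a toy bag-of-words intent classifier. The labels, keyword sets, and example utterances are invented for this sketch; real NLU systems use learned models rather than keyword overlap.

```python
# Toy intent classifier: assigns the label whose keyword set overlaps
# most with the utterance's tokens. Labels and keywords are made up.
KEYWORDS = {
    "weather": {"rain", "sunny", "forecast", "temperature"},
    "music":   {"play", "song", "album", "artist"},
}

def classify(utterance: str) -> str:
    tokens = set(utterance.lower().split())
    # Score each intent by keyword overlap; fall back to "unknown".
    scores = {label: len(tokens & kws) for label, kws in KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(classify("play the new album"))    # music
print(classify("what is the forecast"))  # weather
```

Tasks like natural language inference follow the same input-to-label shape (here, a premise-hypothesis pair mapped to entailment/neutral/contradiction), which is why they are grouped together under NLU.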

Source: Find a Reasonable Ending for Stories: Does Logic Relation Help the Story Cloze Test?

Papers

Showing 1151–1200 of 1978 papers

Title | Hype
Slugbot: An Application of a Novel and Scalable Open Domain Socialbot Framework | 0
SOCCER: An Information-Sparse Discourse State Tracking Collection in the Sports Commentary Domain | 0
Solving Hard Coreference Problems | 0
SpaLLM: Unified Compressive Adaptation of Large Language Models with Sketching | 0
Sparse associative memory based on contextual code learning for disambiguating word senses | 0
Sparse Attention with Learning to Hash | 0
Speaker-Sensitive Dual Memory Networks for Multi-Turn Slot Tagging | 0
Spectral decomposition method of dialog state tracking via collective matrix factorization | 0
Speech2Slot: An End-to-End Knowledge-based Slot Filling from Speech | 0
Speech-language Pre-training for End-to-end Spoken Language Understanding | 0
Speech To Semantics: Improve ASR and NLU Jointly via All-Neural Interfaces | 0
SplitLLM: Collaborative Inference of LLMs for Model Placement and Throughput Optimization | 0
Spoken Language Understanding for Conversational AI: Recent Advances and Future Direction | 0
SQLfuse: Enhancing Text-to-SQL Performance through Comprehensive LLM Synergy | 0
SQuARE: Semantics-based Question Answering and Reasoning Engine | 0
Stable Natural Language Understanding via Invariant Causal Constraint | 0
State and Memory is All You Need for Robust and Reliable AI Agents | 0
Statistically Profiling Biases in Natural Language Reasoning Datasets and Models | 0
Statistical Model Compression for Small-Footprint Natural Language Understanding | 0
Stochastic Parrots or ICU Experts? Large Language Models in Critical Care Medicine: A Scoping Review | 0
Story Comprehension for Predicting What Happens Next | 0
StraGo: Harnessing Strategic Guidance for Prompt Optimization | 0
Strategies to Improve Few-shot Learning for Intent Classification and Slot-Filling | 0
Strategy-level Entrainment of Dialogue System Users in a Creative Visual Reference Resolution Task | 0
StreamLink: Large-Language-Model Driven Distributed Data Engineering System | 0
Stress Test Evaluation of Transformer-based Models in Natural Language Understanding Tasks | 0
StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding | 0
Structure-aware Sentence Encoder in Bert-Based Siamese Network | 0
Structured Knowledge Discovery from Massive Text Corpus | 0
Structured Prompting and Feedback-Guided Reasoning with LLMs for Data Interpretation | 0
Submodularity-Inspired Data Selection for Goal-Oriented Chatbot Training Based on Sentence Embeddings | 0
Summary Level Training of Sentence Rewriting for Abstractive Summarization | 0
Supervised Domain Enablement Attention for Personalized Domain Classification | 0
SVFit: Parameter-Efficient Fine-Tuning of Large Pre-Trained Models Using Singular Values | 0
SVIP: Towards Verifiable Inference of Open-source Large Language Models | 0
SV-LLM: An Agentic Approach for SoC Security Verification using Large Language Models | 0
SymGPT: Auditing Smart Contracts via Combining Symbolic Execution with Large Language Models | 0
Synergizing Machine Learning & Symbolic Methods: A Survey on Hybrid Approaches to Natural Language Processing | 0
Syntactic Structure Distillation Pretraining For Bidirectional Encoders | 0
Syntax-Infused Transformer and BERT models for Machine Translation and Natural Language Understanding | 0
Synthesize, Partition, then Adapt: Eliciting Diverse Samples from Foundation Models | 0
TableFormer: Robust Transformer Modeling for Table-Text Encoding | 0
TaCL: Improving BERT Pre-training with Token-aware Contrastive Learning | 0
TALKPLAY: Multimodal Music Recommendation with Large Language Models | 0
Taming the Beast: Learning to Control Neural Conversational Models | 0
TapWeight: Reweighting Pretraining Objectives for Task-Adaptive Pretraining | 0
Targeted Adversarial Training for Natural Language Understanding | 0
Targeted Aspect-Based Sentiment Analysis via Embedding Commonsense Knowledge into an Attentive LSTM | 0
Target Model Agnostic Adversarial Attacks with Query Budgets on Language Understanding Models | 0
Task-adaptive Pre-training and Self-training are Complementary for Natural Language Understanding | 0
Page 24 of 40

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | HNN | Accuracy | 90 | | Unverified
2 | BERT-large 340M | Accuracy | 78.3 | | Unverified
3 | UDSSM-II (ensemble) | Accuracy | 78.3 | | Unverified
4 | UDSSM-I (ensemble) | Accuracy | 76.7 | | Unverified
5 | DSSM | Accuracy | 75 | | Unverified
6 | UDSSM-II | Accuracy | 75 | | Unverified
7 | BERT-base 110M + MAS | Accuracy | 68.3 | | Unverified
8 | USSM + Supervised Deepnet + 3 Knowledge Bases | Accuracy | 66.7 | | Unverified
9 | Word-level CNN+LSTM (full scoring) | Accuracy | 60 | | Unverified
10 | Subword-level Transformer LM | Accuracy | 58.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | BERT (pred POS/lemmas) | Tags (Full) Acc | 82.5 | | Unverified
2 | BERT (none) | Tags (Full) Acc | 82 | | Unverified
3 | BERT (gold POS/lemmas) | Tags (Full) Acc | 81 | | Unverified
4 | GloVe (gold POS/lemmas) | Tags (Full) Acc | 79.3 | | Unverified
5 | RoBERTa + Linear | Full F1 (Preps) | 78.2 | | Unverified
6 | GloVe (none) | Tags (Full) Acc | 77.5 | | Unverified
7 | GloVe (pred POS/lemmas) | Tags (Full) Acc | 77.1 | | Unverified
8 | SVM (feature-rich, gold syntax) | Role F1 (Preps) | 62.2 | | Unverified
9 | BiLSTM + MLP (gold syntax) | Role F1 (Preps) | 62.2 | | Unverified
10 | SVM (feature-rich, auto syntax) | Role F1 (Preps) | 58.2 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | CaseLaw-BERT | CaseHOLD | 75.6 | | Unverified
2 | Legal-BERT | CaseHOLD | 75.1 | | Unverified
3 | DeBERTa | CaseHOLD | 72.1 | | Unverified
4 | Longformer | CaseHOLD | 72 | | Unverified
5 | RoBERTa | CaseHOLD | 71.7 | | Unverified
6 | BERT | CaseHOLD | 70.7 | | Unverified
7 | BigBird | CaseHOLD | 70.4 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | ConvBERT-DG | Average | 74.6 | | Unverified
2 | ConvBERT-DG + Pre + Multi | Average | 73.8 | | Unverified
3 | mslm | Average | 73.49 | | Unverified
4 | ConvBERT + Pre + Multi | Average | 68.22 | | Unverified
5 | BanLanGen | Average | 39.16 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | ConvBERT + Pre + Multi | Average | 86.89 | | Unverified
2 | mslm | Average | 85.83 | | Unverified
3 | ConvBERT-DG + Pre + Multi | Average | 85.34 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | MT-DNN-SMART | Average | 89.9 | | Unverified
2 | BERT-LARGE | Average | 82.1 | | Unverified
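For reference, the Accuracy values claimed in the leaderboards above are simple percentages of correct predictions. A minimal sketch, with toy predictions and labels invented for illustration:

```python
# Accuracy as a percentage: the fraction of predictions matching the
# gold labels, times 100. Inputs here are toy data for illustration.
def accuracy(predictions, gold):
    assert len(predictions) == len(gold) and gold
    correct = sum(p == g for p, g in zip(predictions, gold))
    return 100.0 * correct / len(gold)

# 3 of 4 correct -> 75.0
print(accuracy(["a", "b", "a", "c"], ["a", "b", "a", "b"]))  # 75.0
```

Averaged metrics ("Average" in the later tables) are typically the mean of several such per-task scores, which is why they can be non-round values like 73.49.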