SOTAVerified

Language Modelling

A language model is a probabilistic model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on larger datasets (frequently using words scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as the word n-gram language model.

Source: Wikipedia
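
To make the "purely statistical" family concrete, here is a minimal word-bigram language model sketch in Python with add-alpha smoothing. The toy corpus and smoothing constant are illustrative assumptions, not from the source above.

from collections import Counter

# Toy training corpus; a real n-gram model is estimated from a large text collection.
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the log".split(),
]

# Count unigrams and bigrams, with sentence-boundary markers.
unigrams = Counter()
bigrams = Counter()
for sentence in corpus:
    tokens = ["<s>"] + sentence + ["</s>"]
    unigrams.update(tokens)
    bigrams.update(zip(tokens, tokens[1:]))

vocab_size = len(unigrams)

def bigram_prob(prev, word, alpha=0.1):
    """P(word | prev) with add-alpha smoothing."""
    return (bigrams[(prev, word)] + alpha) / (unigrams[prev] + alpha * vocab_size)

print(bigram_prob("the", "cat"))  # seen bigram: relatively high probability
print(bigram_prob("cat", "dog"))  # unseen bigram: falls back to the smoothing mass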

Papers

Showing 7601–7650 of 17610 papers

Title | Status | Hype
Supporting Vision-Language Model Inference with Causality-pruning Knowledge Prompt | - | 0
Supportiveness-based Knowledge Rewriting for Retrieval-augmented Language Modeling | - | 0
Suppressing Pink Elephants with Direct Principle Feedback | - | 0
Surface Realization Using Pretrained Language Models | - | 0
Surf at MEDIQA 2019: Improving Performance of Natural Language Inference in the Clinical Domain by Adopting Pre-trained Language Model | - | 0
Surfer100: Generating Surveys From Web Resources, Wikipedia-style | - | 0
Surgical, Cheap, and Flexible: Mitigating False Refusal in Language Models via Single Vector Ablation | - | 0
Surgical-LVLM: Learning to Adapt Large Vision-Language Model for Grounded Visual Question Answering in Robotic Surgery | - | 0
SurgVLM: A Large Vision-Language Model and Systematic Evaluation Benchmark for Surgical Intelligence | - | 0
SurveillanceVQA-589K: A Benchmark for Comprehensive Surveillance Video-Language Understanding with Large Models | - | 0
Surveying Generative AI's Economic Expectations | - | 0
Survey of different Large Language Model Architectures: Trends, Benchmarks, and Challenges | - | 0
Survey on Large Language Model-Enhanced Reinforcement Learning: Concept, Taxonomy, and Methods | - | 0
SuryaKiran at MEDIQA-Sum 2023: Leveraging LoRA for Clinical Dialogue Summarization | - | 0
Susceptibility to Influence of Large Language Models | - | 0
Sustainable Modular Debiasing of Language Models | - | 0
SUTA-LM: Bridging Test-Time Adaptation and Language Model Rescoring for Robust ASR | - | 0
SUTRA: Scalable Multilingual Language Model Architecture | - | 0
SUV: Scalable Large Language Model Copyright Compliance with Regularized Selective Unlearning | - | 0
SVD-Softmax: Fast Softmax Approximation on Large Vocabulary Neural Networks | - | 0
SWAGex at SemEval-2020 Task 4: Commonsense Explanation as Next Event Prediction | - | 0
SwahBERT: Language Model of Swahili | - | 0
Swan and ArabicMTEB: Dialect-Aware, Arabic-Centric, Cross-Lingual, and Cross-Cultural Embedding Models and Benchmarks | - | 0
SWAN-GPT: An Efficient and Scalable Approach for Long-Context Language Modeling | - | 0
SWAN: SGD with Normalization and Whitening Enables Stateless LLM Training | - | 0
Swarm Intelligence in Geo-Localization: A Multi-Agent Large Vision-Language Model Collaborative Framework | - | 0
Swinv2-Imagen: Hierarchical Vision Transformer Diffusion Models for Text-to-Image Generation | - | 0
SyCoCa: Symmetrizing Contrastive Captioners with Attentive Masking for Multimodal Alignment | - | 0
Syllable and language model based features for detecting non-scorable tests in spoken language proficiency assessment applications | - | 0
Syllable-level Neural Language Model for Agglutinative Language | - | 0
Symbolic Representation for Any-to-Any Generative Tasks | - | 0
Symmetric Pattern Based Word Embeddings for Improved Word Similarity Prediction | - | 0
SymNoise: Advancing Language Model Fine-tuning with Symmetric Noise | - | 0
Sympathy over Polarization: A Computational Discourse Analysis of Social Media Posts about the July 2024 Trump Assassination Attempt | - | 0
SyncMask: Synchronized Attentional Masking for Fashion-centric Vision-Language Pretraining | - | 0
Syncretism and How to Deal with it in a Morphological Analyzer: a German Example | - | 0
SynDARin: Synthesising Datasets for Automated Reasoning in Low-Resource Languages | - | 0
SynDL: A Large-Scale Synthetic Test Collection for Passage Retrieval | - | 0
Synergizing In-context Learning with Hints for End-to-end Task-oriented Dialog Systems | - | 0
Synergizing Unsupervised and Supervised Learning: A Hybrid Approach for Accurate Natural Language Task Modeling | - | 0
SynerGPT: In-Context Learning for Personalized Drug Synergy Prediction and Drug Design | - | 0
Synergy of Large Language Model and Model Driven Engineering for Automated Development of Centralized Vehicular Systems | - | 0
Synocene, Beyond the Anthropocene: De-Anthropocentralising Human-Nature-AI Interaction | - | 0
Synslator: An Interactive Machine Translation Tool with Online Learning | - | 0
Syntactically Guided Neural Machine Translation | - | 0
Syntactic and Lexical Complexity in Italian Noncanonical Structures | - | 0
Syntactic and Semantic Features For Code-Switching Factored Language Models | - | 0
Syntactic Learnability of Echo State Neural Language Models at Scale | - | 0
Syntactic Relevance XLNet Word Embedding Generation in Low-Resource Machine Translation | - | 0
Syntactic Structure Distillation Pretraining For Bidirectional Encoders | - | 0
Page 153 of 353

Benchmark Results
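
The tables below report perplexity or bits per character; lower is better in both cases. Perplexity is the exponentiated average negative log-likelihood per token. A minimal computation sketch in Python, where the per-token log-probabilities are hypothetical values for illustration:

import math

# Natural-log probabilities a model assigns to each token of a held-out text
# (hypothetical values for illustration).
log_probs = [-2.1, -0.7, -3.4, -1.2, -0.9]

# Average negative log-likelihood (cross-entropy in nats per token), then perplexity.
avg_nll = -sum(log_probs) / len(log_probs)
perplexity = math.exp(avg_nll)
print(f"perplexity = {perplexity:.2f}")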

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | - | Unverified
2 | GRU | Validation perplexity | 53.78 | - | Unverified
3 | LSTM | Validation perplexity | 52.73 | - | Unverified
4 | LSTM | Test perplexity | 48.7 | - | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | - | Unverified
6 | TCN | Test perplexity | 45.19 | - | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | - | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | - | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | - | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | - | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | - | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | - | Unverified
4 | R-Transformer | Test perplexity | 84.38 | - | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | - | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | - | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | - | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | - | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | - | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bits per character (BPC) | 1.67 | - | Unverified
2 | Hypernetworks | Bits per character (BPC) | 1.34 | - | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bits per character (BPC) | 1.33 | - | Unverified
4 | LN HM-LSTM | Bits per character (BPC) | 1.32 | - | Unverified
5 | ByteNet | Bits per character (BPC) | 1.31 | - | Unverified
6 | Recurrent Highway Networks | Bits per character (BPC) | 1.27 | - | Unverified
7 | Large FS-LSTM-4 | Bits per character (BPC) | 1.25 | - | Unverified
8 | Large mLSTM | Bits per character (BPC) | 1.24 | - | Unverified
9 | AWD-LSTM (3 layers) | Bits per character (BPC) | 1.23 | - | Unverified
10 | Cluster-Former (#C=512) | Bits per character (BPC) | 1.22 | - | Unverified
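
Bits per character, as reported above, is the character-level cross-entropy expressed in bits; it relates to character-level perplexity by perplexity = 2^BPC. A minimal conversion sketch in Python, where the loss value is a hypothetical placeholder:

import math

# Average negative log-likelihood per character in nats (hypothetical value).
nll_nats_per_char = 0.86

bpc = nll_nats_per_char / math.log(2)  # convert nats to bits
char_perplexity = 2 ** bpc             # equivalently math.exp(nll_nats_per_char)
print(f"BPC = {bpc:.2f}, character-level perplexity = {char_perplexity:.2f}")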

# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | - | Unverified
2 | OPT 125M | Test perplexity | 32.26 | - | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | - | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | - | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | - | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | - | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | - | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | - | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | - | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | - | Unverified