SOTAVerified

Language Modelling

A language model is a probabilistic model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (producing human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on large datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had in turn superseded purely statistical models such as the word n-gram language model.

Source: Wikipedia
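
As background for the statistical baselines that appear in the benchmark tables below: a word n-gram model estimates the probability of the next word purely from corpus counts. Here is a minimal bigram sketch in Python; the toy corpus and function name are illustrative assumptions, not taken from any paper listed on this page:

```python
from collections import Counter

# Toy corpus; real n-gram models are estimated from much larger text collections.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count each bigram and each context word.
bigram_counts = Counter(zip(corpus, corpus[1:]))
context_counts = Counter(corpus[:-1])

def bigram_prob(prev: str, word: str) -> float:
    """Maximum-likelihood estimate of P(word | prev) from raw counts."""
    if context_counts[prev] == 0:
        return 0.0
    return bigram_counts[(prev, word)] / context_counts[prev]

# "the" is followed once each by cat, mat, dog, rug, so P(cat | the) = 1/4.
print(bigram_prob("the", "cat"))  # 0.25
```

Practical n-gram systems smooth these counts (e.g. with Kneser-Ney smoothing) so that unseen n-grams do not receive zero probability.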

Papers

Showing 9001–9050 of 17,610 papers

Title | Status | Hype
EventChat: Implementation and user-centric evaluation of a large language model-driven conversational recommender system for exploring leisure events in an SME context | - | 0
Event Extraction in Basque: Typologically motivated Cross-Lingual Transfer-Learning Analysis | - | 0
Event participant modelling with neural networks | - | 0
Event-Priori-Based Vision-Language Model for Efficient Visual Understanding | - | 0
Event Segmentation Applications in Large Language Model Enabled Automated Recall Assessments | - | 0
EventVL: Understand Event Streams via Multimodal Large Language Model | - | 0
Evidence-Based Temporal Fact Verification | - | 0
Evidence from fMRI Supports a Two-Phase Abstraction Process in Language Models | - | 0
EvidenceMap: Learning Evidence Analysis to Unleash the Power of Small Language Models for Biomedical Question Answering | - | 0
E-ViLM: Efficient Video-Language Model via Masked Video Modeling with Semantic Vector-Quantized Tokenizer | - | 0
EVLM: An Efficient Vision-Language Model for Visual Understanding | - | 0
EVLM: Self-Reflective Multimodal Reasoning for Cross-Dimensional Visual Editing | - | 0
Evolutionary Contrastive Distillation for Language Model Alignment | - | 0
Evolutionary Multi-Objective Optimization of Large Language Model Prompts for Balancing Sentiments | - | 0
Evolutionary optimization of contexts for phonetic correction in speech recognition systems | - | 0
Evolutionary Prompt Optimization Discovers Emergent Multimodal Reasoning Strategies in Vision-Language Models | - | 0
Evolution through Large Models | - | 0
Evolution without Large Models: Training Language Model with Task Principles | - | 0
Evolving AI Collectives to Enhance Human Diversity and Enable Self-Regulation | - | 0
Evolving Code with A Large Language Model | - | 0
Evolving Deeper LLM Thinking | - | 0
Evolving Interpretable Visual Classifiers with Large Language Models | - | 0
Personalized Large Language Model Assistant with Evolving Conditional Memory | - | 0
EvoMerge: Neuroevolution for Large Language Models | - | 0
Exact and Efficient Unlearning for Large Language Model-based Recommendation | - | 0
Exact Decoding for Phrase-Based Statistical Machine Translation | - | 0
Exact Decoding with Multi Bottom-Up Tree Transducers | - | 0
EXACT-Net: EHR-guided lung tumor auto-segmentation for non-small cell lung cancer radiotherapy | - | 0
Exact Sampling and Decoding in High-Order Hidden Markov Models | - | 0
Examination and Extension of Strategies for Improving Personalized Language Modeling via Interpolation | - | 0
Examining Multilingual Embedding Models Cross-Lingually Through LLM-Generated Adversarial Examples | - | 0
Examining Scaling and Transfer of Language Model Architectures for Machine Translation | - | 0
Examining the Influence of Political Bias on Large Language Model Performance in Stance Classification | - | 0
EXAONE 3.0 7.8B Instruction Tuned Language Model | - | 0
Existential Conversations with Large Language Models: Content, Community, and Culture | - | 0
Exosense: A Vision-Based Scene Understanding System For Exoskeletons | - | 0
Expand BERT Representation with Visual Information via Grounded Language Learning with Multimodal Partial Alignment | - | 0
Expanding Abbreviations in a Strongly Inflected Language: Are Morphosyntactic Tags Sufficient? | - | 0
Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling | - | 0
Expanding the Language model in a low-resource hybrid MT system | - | 0
Expediting and Elevating Large Language Model Reasoning via Hidden Chain-of-Thought Decoding | - | 0
Experimental Evaluation of Machine Learning Models for Goal-oriented Customer Service Chatbot with Pipeline Architecture | - | 0
When Trust Collides: Decoding Human-LLM Cooperation Dynamics through the Prisoner's Dilemma | - | 0
Experimenting with Power Divergences for Language Modeling | - | 0
Experiments in Medical Translation Shared Task at WMT 2014 | - | 0
Experiments of ASR-based mispronunciation detection for children and adult English learners | - | 0
ExpertAF: Expert Actionable Feedback from Video | - | 0
ExpertRAG: Efficient RAG with Mixture of Experts -- Optimizing Context Retrieval for Adaptive LLM Responses | - | 0
Performance Characterization of Expert Router for Scalable LLM Inference | - | 0
Experts Don't Cheat: Learning What You Don't Know By Predicting Pairs | - | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | - | Unverified
2 | GRU | Validation perplexity | 53.78 | - | Unverified
3 | LSTM | Validation perplexity | 52.73 | - | Unverified
4 | LSTM | Test perplexity | 48.7 | - | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | - | Unverified
6 | TCN | Test perplexity | 45.19 | - | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | - | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | - | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | - | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | - | Unverified
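
For reference, the perplexity reported in these tables is the exponentiated mean negative log-likelihood the model assigns to held-out tokens; lower is better. A minimal sketch of the computation follows; the probability values are invented for illustration and do not correspond to any entry above:

```python
import math

def perplexity(token_probs):
    """exp of the mean negative log-likelihood over the evaluation tokens."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# Probabilities a hypothetical model assigned to four ground-truth tokens.
print(round(perplexity([0.1, 0.02, 0.3, 0.05]), 1))  # 13.5
```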

# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | - | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | - | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | - | Unverified
4 | R-Transformer | Test perplexity | 84.38 | - | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | - | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | - | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | - | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | - | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | - | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bits per character (BPC) | 1.67 | - | Unverified
2 | Hypernetworks | Bits per character (BPC) | 1.34 | - | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bits per character (BPC) | 1.33 | - | Unverified
4 | LN HM-LSTM | Bits per character (BPC) | 1.32 | - | Unverified
5 | ByteNet | Bits per character (BPC) | 1.31 | - | Unverified
6 | Recurrent Highway Networks | Bits per character (BPC) | 1.27 | - | Unverified
7 | Large FS-LSTM-4 | Bits per character (BPC) | 1.25 | - | Unverified
8 | Large mLSTM | Bits per character (BPC) | 1.24 | - | Unverified
9 | AWD-LSTM (3 layers) | Bits per character (BPC) | 1.23 | - | Unverified
10 | Cluster-Former (#C=512) | Bits per character (BPC) | 1.22 | - | Unverified
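
The character-level table above uses bits per character (BPC), the mean negative base-2 log-probability per character; a model at b BPC has character-level perplexity 2**b. A minimal sketch, reusing two claimed values from the table for the conversion:

```python
import math

def bpc(char_probs):
    """Bits per character: mean negative log2-probability over evaluation characters."""
    return -sum(math.log2(p) for p in char_probs) / len(char_probs)

def bpc_to_char_perplexity(bits: float) -> float:
    """Character-level perplexity corresponding to a given BPC."""
    return 2.0 ** bits

print(round(bpc_to_char_perplexity(1.22), 2))  # 2.33, best entry above (Cluster-Former)
print(round(bpc_to_char_perplexity(1.67), 2))  # 3.18, the 7-layer LSTM baseline
```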

# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | - | Unverified
2 | OPT 125M | Test perplexity | 32.26 | - | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | - | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | - | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | - | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | - | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | - | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | - | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | - | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | - | Unverified