SOTAVerified

Language Modelling

A language model is a model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on larger datasets (frequently using words scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded the purely statistical models, such as the word n-gram language model.

Source: Wikipedia
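
The word n-gram models mentioned above are the simplest purely statistical approach: they estimate the probability of each word from counts of short preceding word sequences. As a point of reference, here is a minimal sketch of a bigram model with add-one smoothing; the toy corpus and the smoothing choice are illustrative assumptions, not taken from any paper or benchmark listed on this page.

```python
import math
from collections import Counter, defaultdict

class BigramLM:
    """Minimal word bigram language model with add-one (Laplace) smoothing.

    Illustrative sketch only: the corpus, tokenization, and smoothing
    scheme are assumptions, not drawn from any listed paper.
    """

    def __init__(self, sentences):
        self.unigrams = Counter()
        self.bigrams = defaultdict(Counter)
        for sent in sentences:
            tokens = ["<s>"] + sent.lower().split() + ["</s>"]
            self.unigrams.update(tokens)
            for prev, cur in zip(tokens, tokens[1:]):
                self.bigrams[prev][cur] += 1
        self.vocab_size = len(self.unigrams)

    def prob(self, prev, cur):
        # P(cur | prev) with add-one smoothing over the observed vocabulary.
        return (self.bigrams[prev][cur] + 1) / (self.unigrams[prev] + self.vocab_size)

    def sentence_logprob(self, sentence):
        # Log-probability of a sentence as a sum of bigram log-probabilities.
        tokens = ["<s>"] + sentence.lower().split() + ["</s>"]
        return sum(math.log(self.prob(p, c)) for p, c in zip(tokens, tokens[1:]))


lm = BigramLM(["the cat sat on the mat", "the dog sat on the rug"])
print(lm.prob("the", "cat"))                          # P(cat | the) = 2/13
print(lm.sentence_logprob("the cat sat on the rug"))  # natural-log probability
```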

Papers

Showing 1651–1700 of 17610 papers

Title | Status | Hype
Can Large Language Model Agents Balance Energy Systems? | Code | 1
Has My System Prompt Been Used? Large Language Model Prompt Membership Inference | | 0
From Markov to Laplace: How Mamba In-Context Learns Markov Chains | Code | 0
DeltaProduct: Improving State-Tracking in Linear RNNs via Householder Products | | 0
A Survey on LLM-based News Recommender Systems | | 0
Co-designing Large Language Model Tools for Project-Based Learning with K12 Educators | | 0
Large Language Models and Provenance Metadata for Determining the Relevance of Images and Videos in News Stories | | 0
Improve LLM-based Automatic Essay Scoring with Linguistic Features | | 0
Escaping Collapse: The Strength of Weak Data for Large Language Model Training | | 0
Structured Convergence in Large Language Model Representations via Hierarchical Latent Space Folding | | 0
InfiniteHiP: Extending Language Model Context Up to 3 Million Tokens on a Single GPU | | 0
MorphNLI: A Stepwise Approach to Natural Language Inference Using Text Morphing | | 0
Vision-Language In-Context Learning Driven Few-Shot Visual Inspection Model | Code | 0
Theoretical Benefit and Limitation of Diffusion Language Model | | 0
Unleashing the Power of Large Language Model for Denoising Recommendation | | 0
Two-Stage Representation Learning for Analyzing Movement Behavior Dynamics in People Living with Dementia | | 0
Logical forms complement probability in understanding language model (and human) performance | | 0
Reinforced Large Language Model is a formal theorem prover | Code | 0
AIDE: Agentically Improve Visual Language Model with Domain Experts | | 0
On Mechanistic Circuits for Extractive Question-Answering | | 0
LLM4GNAS: A Large Language Model Based Toolkit for Graph Neural Architecture Search | | 0
E2LVLM: Evidence-Enhanced Large Vision-Language Model for Multimodal Out-of-Context Misinformation Detection | | 0
TANTE: Time-Adaptive Operator Learning via Neural Taylor Expansion | | 0
SelfElicit: Your Language Model Secretly Knows Where is the Relevant Evidence | Code | 1
ViLa-MIL: Dual-scale Vision-Language Multiple Instance Learning for Whole Slide Image Classification | Code | 2
Can a Single Model Master Both Multi-turn Conversations and Tool Use? CALM: A Unified Conversational Agentic Language Model | | 0
Examining Multilingual Embedding Models Cross-Lingually Through LLM-Generated Adversarial Examples | | 0
Contextual Subspace Manifold Projection for Structural Refinement of Large Language Model Representations | | 0
Lexical Manifold Reconfiguration in Large Language Models: A Novel Architectural Approach for Contextual Modulation | | 0
LLM Pretraining with Continuous Concepts | | 0
QA-Expand: Multi-Question Answer Generation for Enhanced Query Expansion in Information Retrieval | | 0
AI-VERDE: A Gateway for Egalitarian Access to Large Language Model-Based Resources For Educational Institutions | | 0
MetaSC: Test-Time Safety Specification Optimization for Language Models | Code | 0
ETimeline: An Extensive Timeline Generation Dataset based on Large Language Model | | 0
Recursive Inference Scaling: A Winning Path to Scalable Inference in Language and Multimodal Systems | | 0
MGPATH: Vision-Language Model with Multi-Granular Prompt Learning for Few-Shot WSI Classification | Code | 1
JamendoMaxCaps: A Large Scale Music-caption Dataset with Imputed Metadata | Code | 1
Small Language Model Makes an Effective Long Text Extractor | Code | 1
RomanLens: Latent Romanization and its role in Multilinguality in LLMs | | 0
DrugImproverGPT: A Large Language Model for Drug Optimization with Fine-Tuning via Structured Policy Optimization | Code | 0
Auditing Prompt Caching in Language Model APIs | Code | 0
Mask-Enhanced Autoregressive Prediction: Pay Less Attention to Learn More | Code | 0
Implicit Language Models are RNNs: Balancing Parallelization and Expressivity | Code | 1
AppVLM: A Lightweight Vision Language Model for Online App Control | | 0
Steel-LLM: From Scratch to Open Source -- A Personal Journey in Building a Chinese-Centric LLM | Code | 4
K-ON: Stacking Knowledge On the Head Layer of Large Language Model | | 0
RALLRec: Improving Retrieval Augmented Large Language Model Recommendation with Representation Learning | Code | 1
Structural Reformation of Large Language Model Neuron Encapsulation for Divergent Information Aggregation | | 0
Recent Advances in Discrete Speech Tokens: A Review | | 0
Rationalization Models for Text-to-SQL | | 0
Page 34 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | | Unverified
2 | GRU | Validation perplexity | 53.78 | | Unverified
3 | LSTM | Validation perplexity | 52.73 | | Unverified
4 | LSTM | Test perplexity | 48.7 | | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | | Unverified
6 | TCN | Test perplexity | 45.19 | | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | | Unverified
4 | R-Transformer | Test perplexity | 84.38 | | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bit per Character (BPC) | 1.67 | | Unverified
2 | Hypernetworks | Bit per Character (BPC) | 1.34 | | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bit per Character (BPC) | 1.33 | | Unverified
4 | LN HM-LSTM | Bit per Character (BPC) | 1.32 | | Unverified
5 | ByteNet | Bit per Character (BPC) | 1.31 | | Unverified
6 | Recurrent Highway Networks | Bit per Character (BPC) | 1.27 | | Unverified
7 | Large FS-LSTM-4 | Bit per Character (BPC) | 1.25 | | Unverified
8 | Large mLSTM | Bit per Character (BPC) | 1.24 | | Unverified
9 | AWD-LSTM (3 layers) | Bit per Character (BPC) | 1.23 | | Unverified
10 | Cluster-Former (#C=512) | Bit per Character (BPC) | 1.22 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | | Unverified
2 | OPT 125M | Test perplexity | 32.26 | | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | | Unverified
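
Both metrics in the tables above are derived from a model's average negative log-likelihood on held-out text: perplexity exponentiates the per-token cross-entropy, while bits per character (BPC) divides the total negative log-likelihood, converted to bits, by the number of characters. A minimal sketch of the two computations, assuming per-token natural-log probabilities are already available from whatever model is being evaluated:

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp(average negative log-likelihood per token).

    `token_logprobs` are natural-log probabilities, one per predicted token;
    how they are obtained depends on the model being evaluated.
    """
    avg_nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_nll)

def bits_per_character(token_logprobs, num_characters):
    """BPC = total negative log-likelihood in bits, divided by character count."""
    total_nll_bits = -sum(token_logprobs) / math.log(2)
    return total_nll_bits / num_characters

# Toy example: four predicted tokens covering 20 characters of text.
logprobs = [math.log(0.10), math.log(0.20), math.log(0.05), math.log(0.25)]
print(perplexity(logprobs))              # ≈ 7.95
print(bits_per_character(logprobs, 20))  # ≈ 0.60
```

Note that character-level results (BPC) and word- or subword-level results (perplexity) are computed over different unit counts, so numbers from different tables are not directly comparable.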