SOTAVerified

Language Modelling

A language model is a model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on larger datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had in turn superseded purely statistical models such as the word n-gram language model.

Source: Wikipedia
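
The word n-gram models mentioned above estimate the probability of each word from counts of the short word sequences that precede it. A minimal bigram sketch in Python; the toy corpus, whitespace tokenization, and add-one smoothing are illustrative choices, not anything specified on this page:

```python
from collections import Counter, defaultdict

# Toy corpus; a real model would be trained on far more text.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count bigrams (prev -> next word) and unigrams.
bigram_counts = defaultdict(Counter)
unigram_counts = Counter(corpus)
for prev, word in zip(corpus, corpus[1:]):
    bigram_counts[prev][word] += 1

vocab_size = len(unigram_counts)

def bigram_prob(prev: str, word: str) -> float:
    """P(word | prev) with add-one (Laplace) smoothing."""
    return (bigram_counts[prev][word] + 1) / (unigram_counts[prev] + vocab_size)

print(bigram_prob("the", "cat"))  # seen bigram: (1+1)/(4+8) ~ 0.167
print(bigram_prob("the", "dog"))  # same count, same probability
print(bigram_prob("cat", "rug"))  # unseen bigram, smoothed: 1/9 ~ 0.111
```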

Papers

Showing 7851–7900 of 17610 papers

Title | Status | Hype
The Capacity for Moral Self-Correction in Large Language Models | | 0
The CAP Principle for LLM Serving: A Survey of Long-Context Large Language Model Serving | | 0
The cell as a token: high-dimensional geometry in language models and cell embeddings | | 0
The Challenges of HTR Model Training: Feedback from the Project Donner le goût de l'archive à l'ère numérique | | 0
The CLC-UKET Dataset: Benchmarking Case Outcome Prediction for the UK Employment Tribunal | | 0
The CMU Machine Translation Systems at WMT 2013: Syntax, Synthetic Translation Options, and Pseudo-References | | 0
The Complexity of Learning Sparse Superposed Features with Feedback | | 0
The Consensus Game: Language Model Generation via Equilibrium Search | | 0
The Contemporary Art of Image Search: Iterative User Intent Expansion via Vision-Language Model | | 0
The Context-Dependent Additive Recurrent Neural Net | | 0
The Contribution of Lyrics and Acoustics to Collaborative Understanding of Mood | | 0
The Counterfeit Conundrum: Can Code Language Models Grasp the Nuances of Their Incorrect Generations? | | 0
The COVID That Wasn't: Counterfactual Journalism Using GPT | | 0
The CRINGE Loss: Learning what language not to model | | 0
The Dark Side of Human Feedback: Poisoning Large Language Models via User Inputs | | 0
The Dark Side of the Language: Pre-trained Transformers in the DarkNet | | 0
The DCU-ICTCAS MT system at WMT 2014 on German-English Translation Task | | 0
The design and implementation of Language Learning Chatbot with XAI using Ontology and Transfer Learning | | 0
The Detection of Distributional Discrepancy for Text Generation | | 0
The Developmental Landscape of In-Context Learning | | 0
The Differences Between Direct Alignment Algorithms are a Blur | | 0
The Dream Within Huang Long Cave: AI-Driven Interactive Narrative for Family Storytelling and Emotional Reflection | | 0
Emergent social conventions and collective bias in LLM populations | | 0
The Economic Implications of Large Language Model Selection on Earnings and Return on Investment: A Decision Theoretic Model | | 0
The Edinburgh/JHU Phrase-based Machine Translation Systems for WMT 2015 | | 0
The Effectiveness of Intermediate-Task Training for Code-Switched Natural Language Understanding | | 0
The Effect of Dependency Representation Scheme on Syntactic Language Modelling | | 0
The Effect of Translationese on Tuning for Statistical Machine Translation | | 0
The effects of data size on Automated Essay Scoring engines | | 0
The Empirical Impact of Data Sanitization on Language Models | | 0
The Empty Chair: Using LLMs to Raise Missing Perspectives in Policy Deliberations | | 0
The Evolution of RWKV: Advancements in Efficient Language Modeling | | 0
The Expressive Capacity of State Space Models: A Formal Language Perspective | | 0
The Eye of Sherlock Holmes: Uncovering User Private Attribute Profiling via Vision-Language Model Agentic Framework | | 0
The Fair Language Model Paradox | | 0
The Fellowship of the Authors: Disambiguating Names from Social Network Context | | 0
The fifth 'CHiME' Speech Separation and Recognition Challenge: Dataset, task and baselines | | 0
The Fine Line: Navigating Large Language Model Pretraining with Down-streaming Capability Analysis | | 0
The Fixed-Size Ordinally-Forgetting Encoding Method for Neural Network Language Models | | 0
The Forest Convolutional Network: Compositional Distributional Semantics with a Neural Chart and without Binarization | | 0
The Foundations of Tokenization: Statistical and Computational Concerns | | 0
The future is different: Large pre-trained language models fail in prediction tasks | | 0
The Future of ChatGPT-enabled Labor Market: A Preliminary Study in China | | 0
The Future of Combating Rumors? Retrieval, Discrimination, and Generation | | 0
The future of document indexing: GPT and Donut revolutionize table of content processing | | 0
The Future of Large Language Model Pre-training is Federated | | 0
The Future of Scientific Publishing: Automated Article Generation | | 0
The Future of Spoken Dialogue Systems is in their Past: Long-Term Adaptive, Conversational Assistants | | 0
The Generative Energy Arena (GEA): Incorporating Energy Awareness in Large Language Model (LLM) Human Evaluations | | 0
Page 158 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | | Unverified
2 | GRU | Validation perplexity | 53.78 | | Unverified
3 | LSTM | Validation perplexity | 52.73 | | Unverified
4 | LSTM | Test perplexity | 48.7 | | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | | Unverified
6 | TCN | Test perplexity | 45.19 | | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | | Unverified
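
The perplexity figures in these tables follow the standard definition: the exponential of the average negative log-likelihood per token on the held-out (validation or test) split, so lower is better. A minimal sketch; the per-token probabilities are invented purely to show the arithmetic:

```python
import math

# Model's probability for each token of a tiny held-out sequence (made up).
token_probs = [0.10, 0.25, 0.02, 0.30]

# Perplexity = exp(mean negative log-likelihood per token).
nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
print(f"perplexity = {math.exp(nll):.2f}")  # ~9.04, the inverse geometric mean
```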
# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | | Unverified
4 | R-Transformer | Test perplexity | 84.38 | | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bits per Character (BPC) | 1.67 | | Unverified
2 | Hypernetworks | Bits per Character (BPC) | 1.34 | | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bits per Character (BPC) | 1.33 | | Unverified
4 | LN HM-LSTM | Bits per Character (BPC) | 1.32 | | Unverified
5 | ByteNet | Bits per Character (BPC) | 1.31 | | Unverified
6 | Recurrent Highway Networks | Bits per Character (BPC) | 1.27 | | Unverified
7 | Large FS-LSTM-4 | Bits per Character (BPC) | 1.25 | | Unverified
8 | Large mLSTM | Bits per Character (BPC) | 1.24 | | Unverified
9 | AWD-LSTM (3 layers) | Bits per Character (BPC) | 1.23 | | Unverified
10 | Cluster-Former (#C=512) | Bits per Character (BPC) | 1.22 | | Unverified
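
Bits per character (BPC) is the same cross-entropy quantity, expressed in bits per character rather than as a per-word perplexity; 2 raised to the BPC gives the per-character perplexity. A small conversion sketch, using the claimed value from the Large mLSTM row above as an example:

```python
import math

# BPC is the average negative log2-probability per character, i.e. the
# character-level cross-entropy in bits. Value from the Large mLSTM row.
bpc = 1.24

print(f"{bpc * math.log(2):.3f} nats per character")  # same quantity in nats
print(f"per-character perplexity = {2 ** bpc:.2f}")   # 2**BPC ~ 2.36
```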
# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | | Unverified
2 | OPT 125M | Test perplexity | 32.26 | | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | | Unverified