SOTAVerified

Language Modelling

A language model is a model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on large datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as word n-gram language models.

Source: Wikipedia
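The word n-gram models mentioned above estimate next-word probabilities directly from corpus counts. A minimal bigram sketch in Python (the toy corpus and function names are illustrative, not taken from any listed paper):

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count unigram and bigram frequencies from a token sequence."""
    unigrams = Counter(tokens)
    bigrams = defaultdict(Counter)
    for prev, cur in zip(tokens, tokens[1:]):
        bigrams[prev][cur] += 1
    return unigrams, bigrams

def bigram_prob(unigrams, bigrams, prev, cur):
    """Maximum-likelihood estimate P(cur | prev) = count(prev, cur) / count(prev)."""
    if unigrams[prev] == 0:
        return 0.0
    return bigrams[prev][cur] / unigrams[prev]

tokens = "the cat sat on the mat".split()
uni, bi = train_bigram(tokens)
print(bigram_prob(uni, bi, "the", "cat"))  # 0.5: "the" occurs twice, once followed by "cat"
```

Real n-gram systems add smoothing (e.g. Kneser-Ney) so unseen n-grams do not receive zero probability; the unsmoothed estimate above is only the starting point.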

Papers

Showing 50 of 17,610 papers

Title | Status | Hype
Data Augmentations for Improved (Large) Language Model Generalization | — | 0
Identifying and Adapting Transformer-Components Responsible for Gender Bias in an English Language Model | Code | 0
Character-level Chinese Backpack Language Models | Code | 1
GestureGPT: Toward Zero-Shot Free-Form Hand Gesture Understanding with Large Language Model Agents | Code | 0
Exploring In-Context Learning of Textless Speech Language Model for Speech Classification Tasks | — | 0
CLAIR: Evaluating Image Captions with Large Language Models | — | 0
Efficient Long-Range Transformers: You Need to Attend More, but Not Necessarily at Every Layer | — | 0
A Systematic Study of Performance Disparities in Multilingual Task-Oriented Dialogue Systems | — | 0
ICU: Conquering Language Barriers in Vision-and-Language Modeling by Dividing the Tasks into Image Captioning and Language Understanding | Code | 0
Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture | Code | 2
Position Interpolation Improves ALiBi Extrapolation | Code | 2
Solving the multiplication problem of a large language model system using a graph-based method | — | 0
Solving Hard Analogy Questions with Relation Embedding Chains | Code | 0
Preference Optimization for Molecular Language Models | Code | 0
Pseudointelligence: A Unifying Framework for Language Model Evaluation | — | 0
Fast Multipole Attention: A Divide-and-Conquer Attention Mechanism for Long Sequences | Code | 0
Harnessing Dataset Cartography for Improved Compositional Generalization in Transformers | Code | 0
Document-Level Language Models for Machine Translation | — | 0
Zero-shot Faithfulness Evaluation for Text Summarization with Foundation Language Model | Code | 1
ChatGPT-guided Semantics for Zero-shot Learning | Code | 0
Generative error correction for code-switching speech recognition using large language models | — | 0
Large Language Model Prediction Capabilities: Evidence from a Real-World Forecasting Tournament | — | 0
Multi-stage Large Language Model Correction for Speech Recognition | — | 0
Iterative Shallow Fusion of Backward Language Model for End-to-End Speech Recognition | — | 0
Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging | Code | 1
Revealing the Unwritten: Visual Investigation of Beam Search Trees to Address Language Model Prompting Challenges | — | 0
Quantifying Language Models' Sensitivity to Spurious Features in Prompt Design or: How I learned to start worrying about prompt formatting | Code | 1
Learn Your Tokens: Word-Pooled Tokenization for Language Modeling | Code | 0
Leveraging Large Language Model for Automatic Evolving of Industrial Data-Centric R&D Cycle | — | 0
EvalCrafter: Benchmarking and Evaluating Large Video Generation Models | Code | 1
BitNet: Scaling 1-bit Transformers for Large Language Models | Code | 2
EXMODD: An EXplanatory Multimodal Open-Domain Dialogue dataset | Code | 0
Emulating Human Cognitive Processes for Expert-Level Medical Question-Answering with Large Language Models | — | 0
Correction Focused Language Model Training for Speech Recognition | — | 0
ChapGTP, ILLC's Attempt at Raising a BabyLM: Improving Data Efficiency by Automatic Task Formation | — | 0
Watermarking LLMs with Weight Quantization | Code | 1
Utilising a Large Language Model to Annotate Subject Metadata: A Case Study in an Australian National Research Data Catalogue | — | 0
ViSoBERT: A Pre-Trained Language Model for Vietnamese Social Media Text Processing | Code | 0
DavIR: Data Selection via Implicit Reward for Large Language Models | — | 0
SD-HuBERT: Sentence-Level Self-Distillation Induces Syllabic Organization in HuBERT | Code | 1
Swap and Predict -- Predicting the Semantic Changes in Words across Corpora by Context Swapping | Code | 0
EconAgent: Large Language Model-Empowered Agents for Simulating Macroeconomic Activities | Code | 1
Learning to Rank Context for Named Entity Recognition Using a Synthetic Dataset | Code | 0
MechGPT, a language-based strategy for mechanics and materials modeling that connects knowledge across scales, disciplines and modalities | — | 0
RegaVAE: A Retrieval-Augmented Gaussian Mixture Variational Auto-Encoder for Language Modeling | Code | 1
Navigation with Large Language Models: Semantic Guesswork as a Heuristic for Planning | — | 0
Llemma: An Open Language Model For Mathematics | Code | 3
Untying the Reversal Curse via Bidirectional Language Model Editing | Code | 1
Use of probabilistic phrases in a coordination game: human versus GPT-4 | — | 0
Bootstrap Your Own Skills: Learning to Solve New Tasks with Large Language Model Guidance | — | 0
Page 168 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | — | Unverified
2 | GRU | Validation perplexity | 53.78 | — | Unverified
3 | LSTM | Validation perplexity | 52.73 | — | Unverified
4 | LSTM | Test perplexity | 48.7 | — | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | — | Unverified
6 | TCN | Test perplexity | 45.19 | — | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | — | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | — | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | — | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | — | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | — | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | — | Unverified
4 | R-Transformer | Test perplexity | 84.38 | — | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | — | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | — | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | — | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | — | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | — | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bits per character (BPC) | 1.67 | — | Unverified
2 | Hypernetworks | Bits per character (BPC) | 1.34 | — | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bits per character (BPC) | 1.33 | — | Unverified
4 | LN HM-LSTM | Bits per character (BPC) | 1.32 | — | Unverified
5 | ByteNet | Bits per character (BPC) | 1.31 | — | Unverified
6 | Recurrent Highway Networks | Bits per character (BPC) | 1.27 | — | Unverified
7 | Large FS-LSTM-4 | Bits per character (BPC) | 1.25 | — | Unverified
8 | Large mLSTM | Bits per character (BPC) | 1.24 | — | Unverified
9 | AWD-LSTM (3 layers) | Bits per character (BPC) | 1.23 | — | Unverified
10 | Cluster-Former (#C=512) | Bits per character (BPC) | 1.22 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | — | Unverified
2 | OPT 125M | Test perplexity | 32.26 | — | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | — | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | — | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | — | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | — | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | — | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | — | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | — | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | — | Unverified
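The benchmark tables above report perplexity and bits per character (BPC). Both are transformations of a model's average per-token negative log-likelihood: perplexity exponentiates it, while BPC converts it to base-2 logarithms per character. A minimal sketch (the uniform-probability toy values are illustrative only):

```python
import math

def perplexity(neg_log_likelihoods):
    """Perplexity = exp(mean per-token negative log-likelihood, natural log)."""
    return math.exp(sum(neg_log_likelihoods) / len(neg_log_likelihoods))

def bits_per_character(neg_log_likelihoods):
    """BPC = mean per-character negative log-likelihood, converted from nats to bits."""
    return (sum(neg_log_likelihoods) / len(neg_log_likelihoods)) / math.log(2)

# A model that assigns every token probability 0.25 (uniform over 4 outcomes)
# has perplexity 4 and, read per-character, log2(4) = 2 bits per character.
nlls = [-math.log(0.25)] * 10
print(perplexity(nlls))         # ≈ 4.0
print(bits_per_character(nlls)) # ≈ 2.0
```

Lower is better for both metrics: a perplexity of 37.5 means the model is, on average, as uncertain as a uniform choice over 37.5 tokens at each step.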