SOTAVerified

Language Modelling

A language model is a probabilistic model of natural language: it assigns probabilities to sequences of words or tokens. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on large datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had in turn superseded purely statistical models such as word n-gram language models.

Source: Wikipedia
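
The contrast drawn above between n-gram models and neural language models can be made concrete with a small example. The following Python sketch (a toy illustration, not code from any paper listed below) estimates a word-bigram model with add-one smoothing and uses it to score a sentence:

```python
from collections import Counter
import math

# Toy corpus; in practice an n-gram model is estimated from a large text collection.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count bigrams and unigrams.
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)
vocab = set(corpus)

def bigram_prob(prev, word):
    """P(word | prev) with add-one (Laplace) smoothing."""
    return (bigrams[(prev, word)] + 1) / (unigrams[prev] + len(vocab))

def sentence_logprob(words):
    """Log-probability of a word sequence under the bigram model."""
    return sum(math.log(bigram_prob(p, w)) for p, w in zip(words, words[1:]))

print(sentence_logprob("the cat sat on the rug .".split()))
```

A neural language model plays the same role, but replaces the count-based conditional probabilities with probabilities produced by a trained network conditioned on the preceding context.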

Papers

Showing 7901-7950 of 17,610 papers

Title | Status | Hype
The Go Transformer: Natural Language Modeling for Game Play | | 0
The GPT Surprise: Offering Large Language Model Chat in a Massive Coding Class Reduced Engagement but Increased Adopters Exam Performances | | 0
The GUA-Speech System Description for CNVSRC Challenge 2023 | | 0
The Hydra Effect: Emergent Self-repair in Language Model Computations | | 0
The IBM 2015 English Conversational Telephone Speech Recognition System | | 0
The IBM 2016 English Conversational Telephone Speech Recognition System | | 0
The ILSP/ARC submission to the WMT 2018 Parallel Corpus Filtering Shared Task | | 0
The Impact of Auxiliary Patient Data on Automated Chest X-Ray Report Generation and How to Incorporate It | | 0
The Impact of Depth on Compositional Generalization in Transformer Language Models | | 0
The Impact of Explanations on AI Competency Prediction in VQA | | 0
The Impact of Large Language Models on Scientific Discovery: a Preliminary Study using GPT-4 | | 0
The Impact of Multiple Parallel Phrase Suggestions on Email Input and Composition Behaviour of Native and Non-Native English Writers | | 0
The Impact of Token Granularity on the Predictive Power of Language Model Surprisal | | 0
The Importance of Context in Very Low Resource Language Modeling | | 0
The Importance of Generation Order in Language Modeling | | 0
The Importance of Prompt Tuning for Automated Neuron Explanations | | 0
The Importance of the Current Input in Sequence Modeling | | 0
The INCOMSLAV Platform: Experimental Website with Integrated Methods for Measuring Linguistic Distances and Asymmetries in Receptive Multilingualism | | 0
The Influence of ChatGPT on Artificial Intelligence Related Crypto Assets: Evidence from a Synthetic Control Analysis | | 0
The Information of Large Language Model Geometry | | 0
The Inside-Outside Recursive Neural Network model for Dependency Parsing | | 0
The Intelius Nickname Collection: Quantitative Analyses from Billions of Public Records | | 0
The Zero Resource Speech Challenge 2021: Spoken language modelling | | 0
The Invalsi Benchmarks: measuring Linguistic and Mathematical understanding of Large Language Models in Italian | | 0
The IOIT English ASR system for IWSLT 2016 | | 0
The JHU Machine Translation Systems for WMT 2017 | | 0
The JHU Machine Translation Systems for WMT 2016 | | 0
The JHU Parallel Corpus Filtering Systems for WMT 2018 | | 0
The Karlsruhe Institute of Technology Translation Systems for the WMT 2015 | | 0
The Karlsruhe Institute of Technology Translation Systems for the WMT 2012 | | 0
The Karlsruhe Institute of Technology Translation Systems for the WMT 2014 | | 0
The KIT-LIMSI Translation System for WMT 2014 | | 0
The Language Model Understood the Prompt was Ambiguous: Probing Syntactic Uncertainty Through Generation | | 0
The Large Language Model GreekLegalRoBERTa | | 0
The LIG system for the English-Czech Text Translation Task of IWSLT 2019 | | 0
The Limitations of Limited Context for Constituency Parsing | | 0
The Lipschitz Constant of Self-Attention | | 0
THELMA: Task Based Holistic Evaluation of Large Language Model Applications-RAG Question Answering | | 0
The LMU Munich Unsupervised Machine Translation Systems | | 0
The LMU Munich Unsupervised Machine Translation System for WMT19 | | 0
The Magnitude of Categories of Texts Enriched by Language Models | | 0
The Marchex 2018 English Conversational Telephone Speech Recognition System | | 0
Thematic Analysis with Large Language Models: does it work with languages other than English? A targeted test in Italian | | 0
Theme-driven Keyphrase Extraction to Analyze Social Media Discourse | | 0
The Method for Storing Patterns in Neural Networks-Memorization and Recall of QR code Patterns- | | 0
The MGB-2 Challenge: Arabic Multi-Dialect Broadcast Media Recognition | | 0
The Microsoft 2016 Conversational Speech Recognition System | | 0
The Microsoft 2017 Conversational Speech Recognition System | | 0
The Minimum Wage as an Anchor: Effects on Determinations of Fairness by Humans and AI | | 0
The ML4HMT Workshop on Optimising the Division of Labour in Hybrid Machine Translation | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | | Unverified
2 | GRU | Validation perplexity | 53.78 | | Unverified
3 | LSTM | Validation perplexity | 52.73 | | Unverified
4 | LSTM | Test perplexity | 48.7 | | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | | Unverified
6 | TCN | Test perplexity | 45.19 | | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | | Unverified
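
The perplexity figures reported in these tables are the exponential of a model's average negative log-likelihood per token on the held-out split, so lower is better. A minimal sketch of that computation, with made-up token probabilities rather than numbers from any entry above:

```python
import math

def perplexity(log_probs):
    """Perplexity = exp(-1/N * sum of natural-log token probabilities)."""
    return math.exp(-sum(log_probs) / len(log_probs))

# Toy per-token log-probabilities produced by some language model.
token_log_probs = [math.log(p) for p in (0.2, 0.05, 0.1, 0.3)]
print(perplexity(token_log_probs))  # ~7.6 for this toy example
```

A claimed value of 37.5, for example, corresponds to an average per-token cross-entropy of ln(37.5) ≈ 3.62 nats.
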
# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | | Unverified
4 | R-Transformer | Test perplexity | 84.38 | | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bit per Character (BPC) | 1.67 | | Unverified
2 | Hypernetworks | Bit per Character (BPC) | 1.34 | | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bit per Character (BPC) | 1.33 | | Unverified
4 | LN HM-LSTM | Bit per Character (BPC) | 1.32 | | Unverified
5 | ByteNet | Bit per Character (BPC) | 1.31 | | Unverified
6 | Recurrent Highway Networks | Bit per Character (BPC) | 1.27 | | Unverified
7 | Large FS-LSTM-4 | Bit per Character (BPC) | 1.25 | | Unverified
8 | Large mLSTM | Bit per Character (BPC) | 1.24 | | Unverified
9 | AWD-LSTM (3 layers) | Bit per Character (BPC) | 1.23 | | Unverified
10 | Cluster-Former (#C=512) | Bit per Character (BPC) | 1.22 | | Unverified
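
Bit per Character (BPC) is the character-level analogue of perplexity: the average negative log-likelihood per character expressed in base 2, i.e. the natural-log loss divided by ln 2. A small conversion sketch (the loss value is illustrative only):

```python
import math

def bits_per_character(mean_nll_nats):
    """Convert a mean per-character negative log-likelihood in nats to bits per character."""
    return mean_nll_nats / math.log(2)

print(bits_per_character(1.15))  # ~1.66 BPC
```

Equivalently, a model at 1.22 BPC assigns each character a probability of about 2^-1.22 ≈ 0.43 on average.
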
# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | | Unverified
2 | OPT 125M | Test perplexity | 32.26 | | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | | Unverified
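
Claimed values for publicly released checkpoints such as OPT and GPT-Neo can in principle be re-measured. The sketch below, assuming the Hugging Face transformers and torch packages, scores a single placeholder sentence with the public EleutherAI/gpt-neo-125M checkpoint; reproducing a table entry would require running the same loss computation over the benchmark's actual test set, which is omitted here, so the printed value is not comparable to the table.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-neo-125M"  # public checkpoint; dataset handling omitted
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

text = "Language models assign probabilities to sequences of tokens."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # Causal LM loss is the mean negative log-likelihood per predicted token.
    outputs = model(**inputs, labels=inputs["input_ids"])

print("perplexity:", math.exp(outputs.loss.item()))
```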