SOTAVerified

Language Modelling

A language model is a model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on larger datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as word n-gram language models.

Source: Wikipedia
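
The contrast drawn above can be made concrete. Below is a minimal sketch of the "purely statistical" baseline: a word-bigram model with add-one (Laplace) smoothing. The toy corpus and the smoothing choice are illustrative assumptions, not any particular paper's setup.

    # Minimal word-bigram language model with add-one (Laplace) smoothing.
    # The toy corpus is a made-up stand-in for real training data.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat . the dog sat on the rug .".split()
    vocab = set(corpus)

    bigram_counts = defaultdict(Counter)
    for prev, word in zip(corpus, corpus[1:]):
        bigram_counts[prev][word] += 1

    def bigram_prob(prev, word):
        # P(word | prev), smoothed so unseen pairs keep nonzero probability
        counts = bigram_counts[prev]
        return (counts[word] + 1) / (sum(counts.values()) + len(vocab))

    print(bigram_prob("the", "cat"))  # seen bigram: (1+1)/(4+8) ≈ 0.167
    print(bigram_prob("the", "sat"))  # unseen bigram, still > 0: 1/12 ≈ 0.083

A model of this kind conditions only on the previous n-1 words, which is the limitation that the recurrent and transformer models mentioned above were built to overcome.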

Papers

Showing 13601–13650 of 17610 papers

Title | Status | Hype
The Role of Foundation Models in Neuro-Symbolic Learning and Reasoning | – | 0
The Role of n-gram Smoothing in the Age of Neural Networks | – | 0
The RWTH Aachen German-English Machine Translation System for WMT 2015 | – | 0
The RWTH Aachen LVCSR system for IWSLT-2016 German Skype conversation recognition task | – | 0
The RWTH Aachen Machine Translation System for WMT 2013 | – | 0
The RWTH Aachen Machine Translation System for WMT 2012 | – | 0
The RWTH Aachen University English-Romanian Machine Translation System for WMT 2016 | – | 0
The RWTH Aachen University Machine Translation Systems for WMT 2019 | – | 0
The Same But Different: Structural Similarities and Differences in Multilingual Language Modeling | – | 0
The Seemingly (Un)systematic Linking Element in Danish | – | 0
The Self-Perception and Political Biases of ChatGPT | – | 0
The Sharpness Disparity Principle in Transformers for Accelerating Language Model Pre-Training | – | 0
Optimal Inflationary Potentials | – | 0
The SI TEDx-UM speech database: a new Slovenian Spoken Language Resource | – | 0
The Slovak Categorized News Corpus | – | 0
The SMarT Classifier for Arabic Fine-Grained Dialect Identification | – | 0
The Sociolinguistic Foundations of Language Modeling | – | 0
The Sogou-TIIC Speech Translation System for IWSLT 2018 | – | 0
The Solution for CVPR2024 Foundational Few-Shot Object Detection Challenge | – | 0
The Sound of Populism: Distinct Linguistic Features Across Populist Variants | – | 0
The State of Large Language Models for African Languages: Progress and Challenges | – | 0
The TALP--UPC Spanish--English WMT Biomedical Task: Bilingual Embeddings and Char-based Neural Language Model Rescoring in a Phrase-based System | – | 0
The Task-oriented Queries Benchmark (ToQB) | – | 0
The Task Shield: Enforcing Task Alignment to Defend Against Indirect Prompt Injection in LLM Agents | – | 0
The Trade-offs of Domain Adaptation for Neural Language Models | – | 0
The Turking Test: Can Language Models Understand Instructions? | – | 0
The Typing Cure: Experiences with Large Language Model Chatbots for Mental Health Support | – | 0
The (Un)faithful Machine Translator | – | 0
The University of Cambridge Russian-English System at WMT13 | – | 0
The University of Edinburgh's English-Tamil and English-Inuktitut Submissions to the WMT20 News Translation Task | – | 0
The University of Edinburgh's Submissions to the WMT19 News Translation Task | – | 0
The University of Edinburgh's Submission to the IWSLT21 Simultaneous Translation Task | – | 0
The University of Illinois submission to the WMT 2015 Shared Translation Task | – | 0
The UPC Submission to the WMT 2012 Shared Task on Quality Estimation | – | 0
The Use of a Large Language Model for Cyberbullying Detection | – | 0
The Value of Nothing: Multimodal Extraction of Human Values Expressed by TikTok Influencers | – | 0
The Volcspeech system for the ICASSP 2022 multi-channel multi-party meeting transcription challenge | – | 0
The Vulnerability of Language Model Benchmarks: Do They Accurately Reflect True LLM Performance? | – | 0
The Xiaomi Text-to-Text Simultaneous Speech Translation System for IWSLT 2022 | – | 0
Thick-Net: Parallel Network Structure for Sequential Modeling | – | 0
Think Before You Act: Unified Policy for Interleaving Language Reasoning with Actions | – | 0
Think Big, Generate Quick: LLM-to-SLM for Fast Autoregressive Decoding | – | 0
Thinking Aloud: Dynamic Context Generation Improves Zero-Shot Reasoning Performance of GPT-2 | – | 0
Thinking in Directivity: Speech Large Language Model for Multi-Talker Directional Speech Recognition | – | 0
Thinking Like an Annotator: Generation of Dataset Labeling Instructions | – | 0
Thinking Tokens for Language Modeling | – | 0
Thinking with Many Minds: Using Large Language Models for Multi-Perspective Problem-Solving | – | 0
Think Twice Before Trusting: Self-Detection for Large Language Models through Comprehensive Answer Reflection | – | 0
Thoroughly Modeling Multi-domain Pre-trained Recommendation as Language | – | 0
Thought Space Explorer: Navigating and Expanding Thought Space for Large Language Model Reasoning | – | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | – | Unverified
2 | GRU | Validation perplexity | 53.78 | – | Unverified
3 | LSTM | Validation perplexity | 52.73 | – | Unverified
4 | LSTM | Test perplexity | 48.7 | – | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | – | Unverified
6 | TCN | Test perplexity | 45.19 | – | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | – | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | – | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | – | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | – | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | – | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | – | Unverified
4 | R-Transformer | Test perplexity | 84.38 | – | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | – | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | – | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | – | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | – | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | – | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | – | Unverified
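
For reference, the perplexity reported in these tables is the exponentiated average negative log-likelihood the model assigns to each token of the evaluation text (lower is better). A minimal sketch, with made-up per-token probabilities standing in for a real model's outputs:

    # Perplexity = exp(mean negative log-likelihood per token), in nats.
    import math

    token_probs = [0.10, 0.02, 0.30, 0.05]  # assumed P(token_i | preceding tokens)
    nll = [-math.log(p) for p in token_probs]
    perplexity = math.exp(sum(nll) / len(nll))
    print(perplexity)  # ≈ 13.5: as uncertain as a uniform choice among ~13 tokens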

# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bits per Character (BPC) | 1.67 | – | Unverified
2 | Hypernetworks | Bits per Character (BPC) | 1.34 | – | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bits per Character (BPC) | 1.33 | – | Unverified
4 | LN HM-LSTM | Bits per Character (BPC) | 1.32 | – | Unverified
5 | ByteNet | Bits per Character (BPC) | 1.31 | – | Unverified
6 | Recurrent Highway Networks | Bits per Character (BPC) | 1.27 | – | Unverified
7 | Large FS-LSTM-4 | Bits per Character (BPC) | 1.25 | – | Unverified
8 | Large mLSTM | Bits per Character (BPC) | 1.24 | – | Unverified
9 | AWD-LSTM (3 layers) | Bits per Character (BPC) | 1.23 | – | Unverified
10 | Cluster-Former (#C=512) | Bits per Character (BPC) | 1.22 | – | Unverified
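
Bits per character (BPC), the metric in the table above, is the same average negative log-likelihood but for character-level models, measured in bits (log base 2) per character; character-level perplexity is 2^BPC. A small conversion sketch, with an assumed loss value:

    # BPC = mean negative log2-likelihood per character; char-level ppl = 2**BPC.
    import math

    nll_nats_per_char = 0.85               # assumed character-level loss in nats
    bpc = nll_nats_per_char / math.log(2)  # nats -> bits
    print(bpc)       # ≈ 1.23, in the range of the table above
    print(2 ** bpc)  # ≈ 2.34, the equivalent character-level perplexity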

# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | – | Unverified
2 | OPT 125M | Test perplexity | 32.26 | – | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | – | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | – | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | – | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | – | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | – | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | – | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | – | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | – | Unverified
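
Numbers like those above are typically produced by scoring a benchmark's test split with the model. The sketch below shows one common recipe using the Hugging Face transformers library with non-overlapping context windows; the file path, windowing strategy, and model name are assumptions (this is not the site's verification pipeline, and exact figures also depend on tokenization and context handling).

    # Rough test-perplexity evaluation for a pretrained causal LM.
    import math
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # e.g. GPT-2 Small, as in the first table
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()

    text = open("test.txt").read()  # placeholder path for the test split
    ids = tokenizer(text, return_tensors="pt").input_ids

    max_len = model.config.max_position_embeddings  # 1024 for GPT-2
    nll_sum, token_count = 0.0, 0
    with torch.no_grad():
        for start in range(0, ids.size(1) - 1, max_len):
            chunk = ids[:, start : start + max_len]
            if chunk.size(1) < 2:  # need at least one next-token prediction
                continue
            # labels == inputs: the model shifts them internally for next-token loss
            loss = model(chunk, labels=chunk).loss  # mean NLL in nats over the chunk
            n = chunk.size(1) - 1
            nll_sum += loss.item() * n
            token_count += n

    print("test perplexity:", math.exp(nll_sum / token_count))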