SOTAVerified

Language Modelling

A language model is a model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on larger datasets (frequently using words scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded the purely statistical models, such as word n-gram language models.

Source: Wikipedia
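
The "purely statistical" word n-gram models mentioned above estimate the probability of each word from counts of short word sequences in a corpus. Below is a minimal sketch of a bigram (word 2-gram) language model with add-one smoothing; the toy corpus, function names, and smoothing choice are illustrative assumptions, not taken from any paper or benchmark on this page.

```python
# Minimal bigram (word 2-gram) language model with add-one smoothing.
# The toy corpus and all names here are illustrative assumptions.
from collections import Counter, defaultdict
import math

corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the log".split(),
]

unigram_counts = Counter()
bigram_counts = defaultdict(Counter)
for sentence in corpus:
    tokens = ["<s>"] + sentence + ["</s>"]
    unigram_counts.update(tokens)
    for prev, word in zip(tokens, tokens[1:]):
        bigram_counts[prev][word] += 1

vocab_size = len(unigram_counts)

def bigram_prob(prev, word):
    # P(word | prev) with add-one (Laplace) smoothing so unseen pairs
    # still receive non-zero probability.
    return (bigram_counts[prev][word] + 1) / (unigram_counts[prev] + vocab_size)

def sentence_log_prob(sentence):
    # Log-probability of a whole sentence under the bigram model.
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    return sum(math.log(bigram_prob(p, w)) for p, w in zip(tokens, tokens[1:]))

print(sentence_log_prob("the cat sat on the log"))
```

A neural or transformer language model plays the same role, replacing the count-based estimate of the next-word distribution with one computed by a learned network.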

Papers

Showing 9201–9250 of 17610 papers

Title | Status | Hype
Learning Reward for Physical Skills using Large Language Model | | 0
Learning Rich Image Region Representation for Visual Question Answering | | 0
Learning Semantic Information from Raw Audio Signal Using Both Contextual and Phonetic Representations | | 0
Learning Semantic Representations in a Bigram Language Model | | 0
Learning Semantic Word Embeddings based on Ordinal Knowledge Constraints | | 0
Learning Simpler Language Models with the Differential State Framework | | 0
Learning Sparse Structured Ensembles with SG-MCMC and Network Pruning | | 0
Learning Spatial-Semantic Context with Fully Convolutional Recurrent Network for Online Handwritten Chinese Text Recognition | | 0
Learning structures of the French clinical language: development and validation of word embedding models using 21 million clinical reports from electronic health records | | 0
Learning Summary-Worthy Visual Representation for Abstractive Summarization in Video | | 0
Learning the hyperparameters to learn morphology | | 0
Learning the Language of NVMe Streams for Ransomware Detection | | 0
Learning the Latent Rules of a Game from Data: A Chess Story | | 0
Learning The Sequential Temporal Information with Recurrent Neural Networks | | 0
Learning to Answer by Learning to Ask: Getting the Best of GPT-2 and BERT Worlds | | 0
Learning to Capitalize with Character-Level Recurrent Neural Networks: An Empirical Study | | 0
Learning to Compile Programs to Neural Networks | | 0
Learning to Compute Word Embeddings On the Fly | | 0
Learning to Create and Reuse Words in Open-Vocabulary Neural Language Modeling | | 0
Learning to Decompose: Hypothetical Question Decomposition Based on Comparable Texts | | 0
Learning to Define Terms in the Software Domain | | 0
Learning to Deliver: a Foundation Model for the Montreal Capacitated Vehicle Routing Problem | | 0
Learning to Diversify Neural Text Generation via Degenerative Model | | 0
Learning to Extract Attribute Value from Product via Question Answering: A Multi-task Approach | | 0
Learning to Complete Code with Sketches | | 0
Learning to Generate Long-term Future Narrations Describing Activities of Daily Living | | 0
Learning to Generate Text in Arbitrary Writing Styles | | 0
Learning to Generate Word Representations using Subword Information | | 0
Learning to Ground VLMs without Forgetting | | 0
Learning To Guide Human Decision Makers With Vision-Language Models | | 0
Learning to Guide Human Experts via Personalized Large Language Models | | 0
Learning to Initialize: Can Meta Learning Improve Cross-task Generalization in Prompt Tuning? | | 0
Learning to Interpret and Describe Abstract Scenes | | 0
Learning to Interpret Natural Language Instructions | | 0
Learning to Keep a Promise: Scaling Language Model Decoding Parallelism with Learned Asynchronous Decoding | | 0
Learning Tokenization in Private Federated Learning with Sub-Word Model Sampling | | 0
Learning to Learn Weight Generation via Local Consistency Diffusion | | 0
LEARNING TO ORGANIZE KNOWLEDGE WITH N-GRAM MACHINES | | 0
Learning to Plan Long-Term for Language Modeling | | 0
Learning to Predict from Textual Data | | 0
Learning to Prune: Context-Sensitive Pruning for Syntactic MT | | 0
Learning to Rank for Multiple Retrieval-Augmented Models through Iterative Utility Maximization | | 0
Learning To Rank Resources with GNN | | 0
Learning to Reason at the Frontier of Learnability | | 0
Learning to Reason over Scene Graphs: A Case Study of Finetuning GPT-2 into a Robot Language Model for Grounded Task Planning | | 0
Learning to Reduce: Optimal Representations of Structured Data in Prompting Large Language Models | | 0
Learning to Reduce: Towards Improving Performance of Large Language Models on Structured Data | | 0
Learning to Represent Image and Text with Denotation Graph | | 0
Learning To Retrieve Prompts for In-Context Learning | | 0
Learning to Sample Replacements for ELECTRA Pre-Training | | 0
Page 185 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | | Unverified
2 | GRU | Validation perplexity | 53.78 | | Unverified
3 | LSTM | Validation perplexity | 52.73 | | Unverified
4 | LSTM | Test perplexity | 48.7 | | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | | Unverified
6 | TCN | Test perplexity | 45.19 | | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | | Unverified
4 | R-Transformer | Test perplexity | 84.38 | | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bit per Character (BPC) | 1.67 | | Unverified
2 | Hypernetworks | Bit per Character (BPC) | 1.34 | | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bit per Character (BPC) | 1.33 | | Unverified
4 | LN HM-LSTM | Bit per Character (BPC) | 1.32 | | Unverified
5 | ByteNet | Bit per Character (BPC) | 1.31 | | Unverified
6 | Recurrent Highway Networks | Bit per Character (BPC) | 1.27 | | Unverified
7 | Large FS-LSTM-4 | Bit per Character (BPC) | 1.25 | | Unverified
8 | Large mLSTM | Bit per Character (BPC) | 1.24 | | Unverified
9 | AWD-LSTM (3 layers) | Bit per Character (BPC) | 1.23 | | Unverified
10 | Cluster-Former (#C=512) | Bit per Character (BPC) | 1.22 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | | Unverified
2 | OPT 125M | Test perplexity | 32.26 | | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | | Unverified
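
All of the perplexity and BPC figures above are derived from a model's average cross-entropy (negative log-likelihood) on a held-out set: perplexity is the exponential of the per-word cross-entropy in nats, and bits per character is the per-character cross-entropy divided by ln 2. The sketch below shows these standard conversions; the loss values are made-up placeholders, not results for any model listed here.

```python
# Standard conversions behind the metrics in the tables above.
# The loss values are made-up placeholders, not measured results.
import math

# Word-level benchmark: average negative log-likelihood per word, in nats.
avg_nll_per_word = 3.62                       # placeholder value
perplexity = math.exp(avg_nll_per_word)       # test perplexity
print(f"test perplexity = {perplexity:.2f}")  # ~37.3

# Character-level benchmark: average negative log-likelihood per character,
# in nats, reported as bits per character (BPC).
avg_nll_per_char = 0.85                       # placeholder value
bpc = avg_nll_per_char / math.log(2)
print(f"BPC = {bpc:.2f}")                     # ~1.23

# Per-character perplexity is equivalently 2 ** BPC.
print(f"character-level perplexity = {2 ** bpc:.2f}")
```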