SOTAVerified

Language Modelling

A language model is a probabilistic model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on large datasets, frequently text scraped from the public internet. They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as the word n-gram language model.

Source: Wikipedia
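
To make the statistical approach concrete, here is a minimal sketch of a word bigram model (an n-gram model with n = 2), estimated by simple maximum likelihood. The toy corpus and function names are illustrative, not taken from any of the systems listed below.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count word bigrams and estimate P(word | previous word) by
    maximum likelihood, with <s> and </s> as sentence boundary markers."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        for prev, cur in zip(tokens, tokens[1:]):
            counts[prev][cur] += 1
    # Normalize the counts into conditional probabilities.
    return {
        prev: {w: c / sum(nxt.values()) for w, c in nxt.items()}
        for prev, nxt in counts.items()
    }

model = train_bigram(["the cat sat", "the dog sat", "the cat ran"])
print(model["the"])  # {'cat': 0.667, 'dog': 0.333} (approximately)
```

A real n-gram model would add smoothing for unseen bigrams; that limitation is part of what the neural models described above were developed to address.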

Papers

Showing 13651–13700 of 17610 papers

Title | Status | Hype
MetaXT: Meta Cross-Task Transfer between Disparate Label Spaces | - | 0
Non-autoregressive End-to-end Speech Translation with Parallel Autoregressive Rescoring | - | 0
Sustainable Modular Debiasing of Language Models | - | 0
Memory and Knowledge Augmented Language Models for Inferring Salience in Long-Form Stories | Code | 0
NSP-BERT: A Prompt-based Few-Shot Learner Through an Original Pre-training Task--Next Sentence Prediction | - | 0
RefineCap: Concept-Aware Refinement for Image Captioning | - | 0
Text-Free Prosody-Aware Generative Spoken Language Modeling | - | 0
Sequential Attention Module for Natural Language Processing | - | 0
Rare Tokens Degenerate All Tokens: Improving Neural Text Generation via Adaptive Gradient Gating for Rare Token Embeddings | - | 0
Generate & Rank: A Multi-task Framework for Math Word Problems | - | 0
Infusing Future Information into Monotonic Attention Through Language Models | - | 0
GPT-3 Models are Poor Few-Shot Learners in the Biomedical Domain | Code | 0
Enhancing Natural Language Representation with Large-Scale Out-of-Domain Commonsense | Code | 0
You should evaluate your language model on marginal likelihood over tokenisations | - | 0
Teaching Autoregressive Language Models Complex Tasks By Demonstration | Code | 0
No Need to Know Everything! Efficiently Augmenting Language Models With External Knowledge | - | 0
Language Modeling, Lexical Translation, Reordering: The Training Process of NMT through the Lens of Classical SMT | - | 0
Skim-Attention: Learning to Focus via Document Layout | Code | 0
Multimodal Conditionality for Natural Language Generation | - | 0
LegaLMFiT: Efficient Short Legal Text Classification with LSTM Language Model Pre-Training | - | 0
Pre-training Language Model Incorporating Domain-specific Heterogeneous Knowledge into A Unified Representation | - | 0
An Empirical Exploration in Quality Filtering of Text Data | - | 0
ConQX: Semantic Expansion of Spoken Queries for Intent Detection based on Conditioned Text Generation | - | 0
BPoMP: The Benchmark of Poetic Minimal Pairs – Limericks, Rhyme, and Narrative Coherence | - | 0
Behavior of Modern Pre-trained Language Models Using the Example of Probing Tasks | - | 0
Developing a Clinical Language Model for Swedish: Continued Pretraining of Generic BERT with In-Domain Data | - | 0
IRCologne at GermEval 2021: Toxicity Classification | - | 0
Domain-Specific Japanese ELECTRA Model Using a Small Corpus | - | 0
Does Knowledge Help General NLU? An Empirical Study | - | 0
Improving Character-Aware Neural Language Model by Warming up Character Encoder under Skip-gram Architecture | - | 0
Unsupervised Text Style Transfer with Content Embeddings | - | 0
Watching a Language Model Learning Chess | - | 0
Towards a Language Model for Temporal Commonsense Reasoning | - | 0
On Reducing Repetition in Abstractive Summarization | - | 0
Low-Resource ASR with an Augmented Language Model | - | 0
Neural Borrowing Detection with Monolingual Lexical Models | - | 0
Split-and-Rephrase in a Cross-Lingual Manner: A Complete Pipeline | - | 0
Masked Adversarial Generation for Neural Machine Translation | - | 0
LightNER: A Lightweight Tuning Paradigm for Low-resource NER via Pluggable Prompting | Code | 0
Effectiveness of Deep Networks in NLP using BiDAF as an example architecture | - | 0
How Does Adversarial Fine-Tuning Benefit BERT? | - | 0
On the Multilingual Capabilities of Very Large-Scale English Language Models | Code | 0
The effects of data size on Automated Essay Scoring engines | - | 0
Representation Memorization for Fast Learning New Knowledge without Forgetting | - | 0
Self-training Improves Pre-training for Few-shot Learning in Task-oriented Dialog Systems | Code | 0
Exploring Retraining-Free Speech Recognition for Intra-sentential Code-Switching | - | 0
Injecting Text in Self-Supervised Speech Pretraining | - | 0
Exploring the Capacity of a Large-scale Masked Language Model to Recognize Grammatical Errors | - | 0
Improving callsign recognition with air-surveillance data in air-traffic communication | - | 0
Position-Invariant Truecasing with a Word-and-Character Hierarchical Recurrent Neural Network | - | 0
Page 274 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | - | Unverified
2 | GRU | Validation perplexity | 53.78 | - | Unverified
3 | LSTM | Validation perplexity | 52.73 | - | Unverified
4 | LSTM | Test perplexity | 48.7 | - | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | - | Unverified
6 | TCN | Test perplexity | 45.19 | - | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | - | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | - | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | - | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | - | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | - | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | - | Unverified
4 | R-Transformer | Test perplexity | 84.38 | - | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | - | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | - | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | - | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | - | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | - | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | - | Unverified
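
The perplexity values reported in these tables are the exponential of a model's average per-token negative log-likelihood on the held-out set, so lower is better. A minimal sketch of the computation (the per-token probabilities are made-up numbers for illustration):

```python
import math

def perplexity(token_log_probs):
    """Perplexity = exp of the mean negative log-likelihood per token."""
    return math.exp(-sum(token_log_probs) / len(token_log_probs))

# Hypothetical probabilities a model assigned to four test tokens.
log_probs = [math.log(p) for p in [0.2, 0.1, 0.05, 0.3]]
print(perplexity(log_probs))  # ~7.6
```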

# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bits per character (BPC) | 1.67 | - | Unverified
2 | Hypernetworks | Bits per character (BPC) | 1.34 | - | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bits per character (BPC) | 1.33 | - | Unverified
4 | LN HM-LSTM | Bits per character (BPC) | 1.32 | - | Unverified
5 | ByteNet | Bits per character (BPC) | 1.31 | - | Unverified
6 | Recurrent Highway Networks | Bits per character (BPC) | 1.27 | - | Unverified
7 | Large FS-LSTM-4 | Bits per character (BPC) | 1.25 | - | Unverified
8 | Large mLSTM | Bits per character (BPC) | 1.24 | - | Unverified
9 | AWD-LSTM (3 layers) | Bits per character (BPC) | 1.23 | - | Unverified
10 | Cluster-Former (#C=512) | Bits per character (BPC) | 1.22 | - | Unverified
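
Bits per character (BPC) is the character-level analogue: the average number of bits the model needs to encode each character, i.e. the cross-entropy in base 2. Since losses are usually accumulated in nats, the conversion is a division by ln 2; a sketch with hypothetical numbers:

```python
import math

def bits_per_character(total_nll_nats, num_chars):
    """Convert a summed negative log-likelihood (in nats) over a
    character sequence into bits per character."""
    return total_nll_nats / (num_chars * math.log(2))

# Hypothetical: 860 nats of total loss over a 1,000-character test string.
print(bits_per_character(860.0, 1000))  # ~1.24 BPC
```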

# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | - | Unverified
2 | OPT 125M | Test perplexity | 32.26 | - | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | - | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | - | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | - | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | - | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | - | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | - | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | - | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | - | Unverified
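
Checkpoints such as GPT-Neo 125M are publicly available through the Hugging Face transformers library, so a claimed test perplexity can in principle be re-measured along the following lines. This is a sketch, not the evaluation protocol behind the numbers above: the exact figure depends on the test corpus, tokenization, and context-window handling used by the original authors.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-neo-125M"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# Stand-in text; a real run would iterate over the held-out test set.
text = "The quick brown fox jumps over the lazy dog."
ids = tokenizer(text, return_tensors="pt").input_ids

with torch.no_grad():
    # Passing labels makes the model return the mean cross-entropy
    # (in nats) over all predicted tokens.
    loss = model(ids, labels=ids).loss

print("perplexity:", math.exp(loss.item()))
```

A faithful evaluation would also stride over the corpus in fixed-length windows, since the model's context length is limited and results on short snippets are not comparable to corpus-level numbers.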