SOTAVerified

Language Modelling

A language model is a probabilistic model of natural language: it assigns a probability to a sequence of words. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly transformers trained on very large datasets (frequently text scraped from the public internet). They have superseded recurrent neural network-based models, which had in turn superseded purely statistical models such as the word n-gram language model.

Source: Wikipedia
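
To make the definition concrete: a word n-gram model of the kind mentioned above estimates the probability of each word from counts of short word sequences in a training corpus. Below is a minimal add-one-smoothed bigram model in Python, a toy sketch for illustration only (the corpus, names, and smoothing constant are invented for this example):

    from collections import Counter

    def train_bigram_lm(tokens, alpha=1.0):
        """Return P(word | prev) under an add-alpha smoothed bigram model."""
        contexts = Counter(tokens[:-1])             # counts of each word as a left context
        bigrams = Counter(zip(tokens, tokens[1:]))  # counts of adjacent word pairs
        vocab_size = len(set(tokens))

        def prob(prev, word):
            # Add-alpha (Laplace) smoothing so unseen pairs get nonzero probability.
            return (bigrams[(prev, word)] + alpha) / (contexts[prev] + alpha * vocab_size)

        return prob

    corpus = "the cat sat on the mat and the cat slept".split()
    p = train_bigram_lm(corpus)
    print(p("the", "cat"))  # seen twice after "the": relatively high (0.3)
    print(p("the", "sat"))  # never seen after "the": small but nonzero (0.1)

Modern transformer-based LLMs predict the same kind of conditional distribution, but with a neural network trained on vastly more text instead of raw counts.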

Papers

Showing 14101–14150 of 17610 papers

Title | Status | Hype
SunBear at WNUT-2020 Task 2: Improving BERT-Based Noisy Text Classification with Knowledge of the Data domain | - | 0
BioBERTpt - A Portuguese Neural Language Model for Clinical Named Entity Recognition | - | 0
Evaluation of Transfer Learning for Adverse Drug Event (ADE) and Medication Entity Extraction | - | 0
Enhancing Automated Essay Scoring Performance via Fine-tuning Pre-trained Language Models with Combination of Regression and Ranking | - | 0
BERT-MK: Integrating Graph Contextualized Knowledge into Pre-trained Language Models | - | 0
An Empirical Exploration of Local Ordering Pre-training for Structured Prediction | - | 0
Analysing Word Representation from the Input and Output Embeddings in Neural Network Language Models | Code | 0
A Semi-supervised Approach to Generate the Code-Mixed Text using Pre-trained Encoder and Transfer Learning | - | 0
FENAS: Flexible and Expressive Neural Architecture Search | - | 0
Adapting Open Domain Fact Extraction and Verification to COVID-FACT through In-Domain Language Modeling | - | 0
Cross-Lingual Dependency Parsing by POS-Guided Word Reordering | - | 0
LIMIT-BERT : Linguistics Informed Multi-Task BERT | Code | 0
Looking inside Noun Compounds: Unsupervised Prepositional and Free Paraphrasing | - | 0
The Amazing World of Neural Language Generation | - | 0
Tri-Train: Automatic Pre-Fine Tuning between Pre-Training and Fine-Tuning for SciNER | - | 0
Hybrid Emoji-Based Masked Language Models for Zero-Shot Abusive Language Detection | - | 0
Revisiting Representation Degeneration Problem in Language Modeling | - | 0
Learning Physical Common Sense as Knowledge Graph Completion via BERT Data Augmentation and Constrained Tucker Factorization | - | 0
PALM: Pre-training an Autoencoding&Autoregressive Language Model for Context-conditioned Generation | - | 0
TESA: A Task in Entity Semantic Aggregation for Abstractive Summarization | - | 0
Personal Information Leakage Detection in Conversations | Code | 0
Learn to Cross-lingual Transfer with Meta Graph Learning Across Heterogeneous Languages | - | 0
Centering-based Neural Coherence Modeling with Hierarchical Discourse Segments | - | 0
"I'd rather just go to bed": Understanding Indirect Answers | - | 0
From Zero to Hero: On the Limitations of Zero-Shot Language Transfer with Multilingual Transformers | - | 0
Explainable Clinical Decision Support from Text | - | 0
Connecting the Dots: Event Graph Schema Induction with Path Language Modeling | - | 0
Coding Textual Inputs Boosts the Accuracy of Neural Networks | Code | 0
ControlVAE: Tuning, Analytical Properties, and Performance Analysis | Code | 4
Understanding Pre-trained BERT for Aspect-based Sentiment Analysis | Code | 1
VECO: Variable and Flexible Cross-lingual Pre-training for Language Understanding and Generation | - | 0
SLM: Learning a Discourse Language Representation with Sentence Unshuffling | - | 0
Semantic Labeling Using a Deep Contextualized Language Model | Code | 0
Topic-Preserving Synthetic News Generation: An Adversarial Deep Reinforcement Learning Approach | - | 0
Phoneme Based Neural Transducer for Large Vocabulary Speech Recognition | - | 0
Seq2Mol: Automatic design of de novo molecules conditioned by the target protein sequences through deep neural networks | - | 0
Memory Attentive Fusion: External Language Model Integration for Transformer-based Sequence-to-Sequence Model | - | 0
Contextual BERT: Conditioning the Language Model Using a Global State | - | 0
One In A Hundred: Select The Best Predicted Sequence from Numerous Candidates for Streaming Speech Recognition | - | 0
Fusion Models for Improved Visual Captioning | - | 0
Effective Decoder Masking for Transformer Based End-to-End Speech Recognition | - | 0
Effective FAQ Retrieval and Question Matching With Unsupervised Knowledge Injection | - | 0
Multitask Training with Text Data for End-to-End Speech Recognition | - | 0
Probing Task-Oriented Dialogue Representation from Language Models | - | 0
Semi-Supervised Spoken Language Understanding via Self-Supervised Speech and Language Model Pretraining | Code | 1
Improved Neural Language Model Fusion for Streaming Recurrent Neural Network Transducer | - | 0
Accelerating Training of Transformer-Based Language Models with Progressive Layer Dropping | Code | 0
Dutch Humor Detection by Generating Negative Examples | - | 0
Automatically Identifying Words That Can Serve as Labels for Few-Shot Text Classification | Code | 2
CLPLM: Character Level Pretrained Language Model for Extracting Support Phrases for Sentiment Labels | - | 0
Page 283 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | - | Unverified
2 | GRU | Validation perplexity | 53.78 | - | Unverified
3 | LSTM | Validation perplexity | 52.73 | - | Unverified
4 | LSTM | Test perplexity | 48.7 | - | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | - | Unverified
6 | TCN | Test perplexity | 45.19 | - | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | - | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | - | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | - | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | - | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | - | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | - | Unverified
4 | R-Transformer | Test perplexity | 84.38 | - | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | - | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | - | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | - | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | - | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | - | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bits per Character (BPC) | 1.67 | - | Unverified
2 | Hypernetworks | Bits per Character (BPC) | 1.34 | - | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bits per Character (BPC) | 1.33 | - | Unverified
4 | LN HM-LSTM | Bits per Character (BPC) | 1.32 | - | Unverified
5 | ByteNet | Bits per Character (BPC) | 1.31 | - | Unverified
6 | Recurrent Highway Networks | Bits per Character (BPC) | 1.27 | - | Unverified
7 | Large FS-LSTM-4 | Bits per Character (BPC) | 1.25 | - | Unverified
8 | Large mLSTM | Bits per Character (BPC) | 1.24 | - | Unverified
9 | AWD-LSTM (3 layers) | Bits per Character (BPC) | 1.23 | - | Unverified
10 | Cluster-Former (#C=512) | Bits per Character (BPC) | 1.22 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | - | Unverified
2 | OPT 125M | Test perplexity | 32.26 | - | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | - | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | - | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | - | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | - | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | - | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | - | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | - | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | - | Unverified
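
Both metrics in the tables above are transforms of the model's average cross-entropy on held-out text: perplexity exponentiates the per-token negative log-likelihood (a perplexity of k means the model is, on average, as uncertain as a uniform choice among k tokens), while bits per character divides the total negative log-likelihood, measured in bits, by the number of characters. A minimal sketch with toy numbers, not tied to any row above:

    import math

    def perplexity(token_log_probs):
        """exp(average negative log-likelihood per token, in nats)."""
        nll = -sum(token_log_probs) / len(token_log_probs)
        return math.exp(nll)

    def bits_per_character(token_log_probs, num_chars):
        """Total negative log-likelihood converted to bits, per character."""
        total_bits = -sum(token_log_probs) / math.log(2)
        return total_bits / num_chars

    # Toy example: a model assigns probability 0.1 to each of four tokens.
    lps = [math.log(0.1)] * 4
    print(perplexity(lps))             # 10.0 -- as uncertain as a uniform 10-way choice
    print(bits_per_character(lps, 8))  # ~1.66 if those four tokens span eight characters

Lower is better for both metrics. Note that perplexities computed over different tokenizations are not directly comparable, which is worth keeping in mind when reading claimed numbers across tables.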