SOTAVerified

Language Modelling

A language model is a probabilistic model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on larger datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as the word n-gram language model.

Source: Wikipedia
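
To make the contrast with purely statistical models concrete, here is a minimal sketch of a word bigram language model estimated by counting, with add-one smoothing. The toy corpus, variable names, and function names are illustrative only and are not taken from any paper listed on this page.

    from collections import Counter

    # Toy corpus; a real n-gram model would be counted over a large text corpus.
    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    # Count each bigram, and each word's occurrences as a left context.
    bigrams = Counter(zip(corpus, corpus[1:]))
    contexts = Counter(corpus[:-1])
    vocab_size = len(set(corpus))

    def bigram_prob(prev, word):
        # P(word | prev) with add-one (Laplace) smoothing, so unseen
        # bigrams still receive nonzero probability.
        return (bigrams[(prev, word)] + 1) / (contexts[prev] + vocab_size)

    print(bigram_prob("the", "cat"))  # seen bigram: ~0.17
    print(bigram_prob("the", "sat"))  # unseen bigram: smoothed down to ~0.08

A model like this captures only local co-occurrence statistics, which is exactly the limitation that recurrent and then transformer-based models were built to overcome.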

Papers

Showing 11451–11500 of 17,610 papers

Title | Status | Hype
Chunk-aware Alignment and Lexical Constraint for Visual Entailment with Natural Language Explanations | Code | 0
Zero-Shot Video Captioning with Evolving Pseudo-Tokens | Code | 1
PanGu-Coder: Program Synthesis with Function-Level Language Modeling | Code | 0
Language Model Cascades | Code | 2
Leveraging Natural Supervision for Language Representation Learning and Generation | Code | 1
Language models of protein sequences at the scale of evolution enable accurate structure prediction | | 0
The Birth of Bias: A case study on the evolution of gender bias in an English language model | Code | 0
Unsupervised pre-training of graph transformers on patient population graphs | Code | 1
Integrating Linguistic Theory and Neural Language Models | Code | 0
Word Play for Playing Othello (Reverses) | | 0
Training Large-Vocabulary Neural Language Models by Private Federated Learning for Resource-Constrained Devices | | 0
Label2Label: A Language Modeling Framework for Multi-Attribute Learning | Code | 1
Towards the Human Global Context: Does the Vision-Language Model Really Judge Like a Human Being? | | 0
STT: Soft Template Tuning for Few-Shot Adaptation | | 0
Natural language processing for clusterization of genes according to their functions | | 0
An Overview of Distant Supervision for Relation Extraction with a Focus on Denoising and Pre-training Methods | | 0
ELECTRA is a Zero-Shot Learner, Too | Code | 0
Clover: Towards A Unified Video-Language Alignment and Fusion Model | Code | 1
Multimodal Dialog Systems with Dual Knowledge-enhanced Generative Pretrained Language Model | | 0
A No-Code Low-Code Paradigm for Authoring Business Automations Using Natural Language | | 0
Learning Flexible Translation between Robot Actions and Language Descriptions | | 0
Combing for Credentials: Active Pattern Extraction from Smart Reply | | 0
BERTIN: Efficient Pre-Training of a Spanish Language Model using Perplexity Sampling | | 0
Beware the Rationalization Trap! When Language Model Explainability Diverges from our Mental Models of Language | | 0
Confident Adaptive Language Modeling | | 0
Language Modelling with Pixels | Code | 2
Language models show human-like content effects on reasoning tasks | Code | 0
Layout-Aware Information Extraction for Document-Grounded Dialogue: Dataset, Method and Demonstration | | 0
Neural Data-to-Text Generation Based on Small Datasets: Comparing the Added Value of Two Semi-Supervised Learning Approaches on Top of a Large Language Model | | 0
Recurrent Memory Transformer | Code | 2
Scene Text Recognition with Permuted Autoregressive Sequence Models | Code | 2
Multilinguals at SemEval-2022 Task 11: Complex NER in Semantically Ambiguous Settings for Low Resource Languages | Code | 0
TRIE++: Towards End-to-End Information Extraction from Visually Rich Documents | | 0
A Transfer Learning Based Model for Text Readability Assessment in German | | 0
N-Grammer: Augmenting Transformers with latent n-grams | Code | 4
Text-driven Emotional Style Control and Cross-speaker Style Transfer in Neural TTS | | 0
A Novel DeBERTa-based Model for Financial Question Answering Task | | 0
LM-Nav: Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action | Code | 2
Internal Language Model Estimation based Language Model Fusion for Cross-Domain Code-Switching Speech Recognition | | 0
The Harvard USPTO Patent Dataset: A Large-Scale, Well-Structured, and Multi-Purpose Corpus of Patent Applications | Code | 1
Hidden Schema Networks | | 0
Predicting Opinion Dynamics via Sociologically-Informed Neural Networks | Code | 1
Meta-Learning the Difference: Preparing Large Language Models for Efficient Adaptation | Code | 0
Neural Language Models are not Born Equal to Fit Brain Data, but Training Helps | | 0
A Large Scale Search Dataset for Unbiased Learning to Rank | Code | 1
Aspect-Based Sentiment Analysis using Local Context Focus Mechanism with DeBERTa | | 0
Gender Biases and Where to Find Them: Exploring Gender Bias in Pre-Trained Transformer-based Language Models Using Movement Pruning | Code | 0
Text Enriched Sparse Hyperbolic Graph Convolutional Networks | | 0
SimLM: Pre-training with Representation Bottleneck for Dense Passage Retrieval | | 0
MIA 2022 Shared Task Submission: Leveraging Entity Representations, Dense-Sparse Hybrids, and Fusion-in-Decoder for Cross-Lingual Question Answering | | 0
Page 230 of 353

Benchmark Results
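
All results below are claimed numbers that have not yet been independently verified. Both metrics in the tables reduce to an average negative log-likelihood on held-out text; for reference, here is a minimal sketch (assuming a model that yields one natural-log probability per token) of how test perplexity and Bit per Character (BPC) are computed:

    import math

    def perplexity(token_log_probs):
        # Perplexity = exp of the mean negative log-likelihood per token.
        return math.exp(-sum(token_log_probs) / len(token_log_probs))

    def bits_per_character(token_log_probs, num_chars):
        # BPC = total negative log2-likelihood divided by character count.
        return -sum(token_log_probs) / math.log(2) / num_chars

    # Example: three tokens with probabilities 0.1, 0.2, 0.05.
    lp = [math.log(0.1), math.log(0.2), math.log(0.05)]
    print(perplexity(lp))              # exactly 10.0 for this toy case
    print(bits_per_character(lp, 15))  # ~0.66 bits over 15 characters

Lower is better for both metrics; BPC is the per-character analogue of per-token log-loss, so character-level and word-level results are not directly comparable.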

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | | Unverified
2 | GRU | Validation perplexity | 53.78 | | Unverified
3 | LSTM | Validation perplexity | 52.73 | | Unverified
4 | LSTM | Test perplexity | 48.7 | | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | | Unverified
6 | TCN | Test perplexity | 45.19 | | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | | Unverified
4 | R-Transformer | Test perplexity | 84.38 | | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bit per Character (BPC) | 1.67 | | Unverified
2 | Hypernetworks | Bit per Character (BPC) | 1.34 | | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bit per Character (BPC) | 1.33 | | Unverified
4 | LN HM-LSTM | Bit per Character (BPC) | 1.32 | | Unverified
5 | ByteNet | Bit per Character (BPC) | 1.31 | | Unverified
6 | Recurrent Highway Networks | Bit per Character (BPC) | 1.27 | | Unverified
7 | Large FS-LSTM-4 | Bit per Character (BPC) | 1.25 | | Unverified
8 | Large mLSTM | Bit per Character (BPC) | 1.24 | | Unverified
9 | AWD-LSTM (3 layers) | Bit per Character (BPC) | 1.23 | | Unverified
10 | Cluster-Former (#C=512) | Bit per Character (BPC) | 1.22 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | | Unverified
2 | OPT 125M | Test perplexity | 32.26 | | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | | Unverified