SOTAVerified

Language Modelling

A language model is a probabilistic model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on larger datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had in turn superseded purely statistical models such as word n-gram language models.

Source: Wikipedia
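
For concreteness, the "purely statistical" models mentioned above assign probabilities from raw co-occurrence counts. Below is a minimal sketch of a word bigram model with add-one (Laplace) smoothing; the toy corpus is invented for illustration and is not tied to any paper or benchmark on this page.

```python
from collections import Counter

# Toy corpus (illustrative only).
corpus = "the cat sat on the mat . the dog sat on the rug .".split()
vocab = set(corpus)

unigram_counts = Counter(corpus)
bigram_counts = Counter(zip(corpus, corpus[1:]))

def bigram_prob(prev: str, word: str) -> float:
    """P(word | prev) with add-one smoothing over the vocabulary."""
    return (bigram_counts[(prev, word)] + 1) / (unigram_counts[prev] + len(vocab))

print(bigram_prob("the", "cat"))  # seen bigram: relatively likely
print(bigram_prob("the", "sat"))  # unseen bigram: small but nonzero
```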

Papers

Showing 12051–12100 of 17610 papers

Title | Status | Hype
What Changed? Detecting and Evaluating Instruction-Guided Image Edits with Multimodal Large Language Models | - | 0
Unsupervised Paraphrasability Prediction for Compound Nominalizations | - | 0
What does BERT Learn from Arabic Machine Reading Comprehension Datasets? | - | 0
Zero-Shot Cross-lingual Aphasia Detection using Automatic Speech Recognition | - | 0
What Does BERT with Vision Look At? | - | 0
What Does it Mean for a Language Model to Preserve Privacy? | - | 0
What do Language Model Probabilities Represent? From Distribution Estimation to Response Prediction | - | 0
What do Language Representations Really Represent? | - | 0
Zero-shot cross-lingual Meaning Representation Transfer: Annotation of Hungarian using the Prague Functional Generative Description | - | 0
What do LLMs Know about Financial Markets? A Case Study on Reddit Market Sentiment Analysis | - | 0
Unsupervised Neural Machine Translation with Generative Language Models Only | - | 0
Zero-Shot Cross-Lingual Sentiment Classification under Distribution Shift: an Exploratory Study | - | 0
What do RNN Language Models Learn about Filler–Gap Dependencies? | - | 0
Understanding Language Model Circuits through Knowledge Editing | - | 0
Unsupervised neural and Bayesian models for zero-resource speech processing | - | 0
What do we need to know about an unknown word when parsing German | - | 0
Unsupervised Natural Question Answering with a Small Model | - | 0
What goes into a word: generating image descriptions with top-down spatial knowledge | - | 0
What Happens When Small Is Made Smaller? Exploring the Impact of Compression on Small Data Pretrained Language Models | - | 0
WHAT-IF: Exploring Branching Narratives by Meta-Prompting Large Language Models | - | 0
What is not where: the challenge of integrating spatial representations into deep learning architectures | - | 0
Unsupervised Multi-View Post-OCR Error Correction With Language Models | - | 0
Unsupervised Multiview Contrastive Language-Image Joint Learning with Pseudo-Labeled Prompts Via Vision-Language Model for 3D/4D Facial Expression Recognition | - | 0
What Kind of Language Is Hard to Language-Model? | - | 0
What Kinds of Tokens Benefit from Distant Text? An Analysis on Long Context Language Modeling | - | 0
Unsupervised Multilingual Sentence Embeddings for Parallel Corpus Mining | - | 0
Unsupervised morph segmentation and statistical language models for vocabulary expansion | - | 0
What Languages are Easy to Language-Model? A Perspective from Learning Probabilistic Regular Languages | - | 0
Zero-shot cross-lingual transfer in instruction tuning of large language models | - | 0
Unsupervised Morphology-Based Vocabulary Expansion | - | 0
ZiGong 1.0: A Large Language Model for Financial Credit | - | 0
Unsupervised Morphological Tree Tokenizer | - | 0
What represents "style" in authorship attribution? | - | 0
What Should Baby Models Read? Exploring Sample-Efficient Data Composition on Model Performance | - | 0
What's in a Measurement? Using GPT-3 on SemEval 2021 Task 8 – MeasEval | - | 0
What's in a Name? Beyond Class Indices for Image Recognition | - | 0
Unified Text Structuralization with Instruction-tuned Language Models | - | 0
What's in Your Head? Emergent Behaviour in Multi-Task Transformer Models | - | 0
Adapting Long Context NLM for ASR Rescoring in Conversational Agents | - | 0
ZeroShotDataAug: Generating and Augmenting Training Data with ChatGPT | - | 0
Unsupervised Method for Improving Arabic Speech Recognition Systems | - | 0
What Syntactic Structures block Dependencies in RNN Language Models? | - | 0
What the [MASK]? Making Sense of Language-Specific BERT Models | - | 0
Unsupervised Melody Segmentation Based on a Nested Pitman-Yor Language Model | - | 0
Unsupervised Learning on an Approximate Corpus | - | 0
What Will My Model Forget? Forecasting Forgotten Examples in Language Model Refinement | - | 0
What Works and Doesn't Work, A Deep Decoder for Neural Machine Translation | - | 0
Knowledge Injection into Dialogue Generation via Language Models | - | 0
Page 242 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | - | Unverified
2 | GRU | Validation perplexity | 53.78 | - | Unverified
3 | LSTM | Validation perplexity | 52.73 | - | Unverified
4 | LSTM | Test perplexity | 48.7 | - | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | - | Unverified
6 | TCN | Test perplexity | 45.19 | - | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | - | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | - | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | - | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | - | Unverified
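
The perplexity reported in these tables is the exponentiated average negative log-likelihood a model assigns to held-out tokens (lower is better). A minimal sketch of the computation, with invented per-token probabilities standing in for a real model's outputs:

```python
import math

# Illustrative probabilities P(token_i | preceding context); a real
# evaluation would read these from a trained language model.
token_probs = [0.20, 0.05, 0.50, 0.10]

# Average negative log-likelihood (cross-entropy in nats), then exponentiate.
nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
perplexity = math.exp(nll)
print(f"perplexity = {perplexity:.2f}")  # ≈ 6.69 for these toy values
```
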
# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | - | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | - | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | - | Unverified
4 | R-Transformer | Test perplexity | 84.38 | - | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | - | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | - | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | - | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | - | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | - | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bit per Character (BPC) | 1.67 | - | Unverified
2 | Hypernetworks | Bit per Character (BPC) | 1.34 | - | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bit per Character (BPC) | 1.33 | - | Unverified
4 | LN HM-LSTM | Bit per Character (BPC) | 1.32 | - | Unverified
5 | ByteNet | Bit per Character (BPC) | 1.31 | - | Unverified
6 | Recurrent Highway Networks | Bit per Character (BPC) | 1.27 | - | Unverified
7 | Large FS-LSTM-4 | Bit per Character (BPC) | 1.25 | - | Unverified
8 | Large mLSTM | Bit per Character (BPC) | 1.24 | - | Unverified
9 | AWD-LSTM (3 layers) | Bit per Character (BPC) | 1.23 | - | Unverified
10 | Cluster-Former (#C=512) | Bit per Character (BPC) | 1.22 | - | Unverified
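
Bit per Character (BPC) is the same cross-entropy quantity expressed in bits per character, so a BPC of b corresponds to a character-level perplexity of 2^b. A toy illustration with invented per-character probabilities:

```python
import math

# Illustrative probabilities P(char_i | preceding context).
char_probs = [0.5, 0.25, 0.5, 0.125]

bpc = -sum(math.log2(p) for p in char_probs) / len(char_probs)
print(f"BPC = {bpc:.2f}")                   # 1.75 for these toy values
print(f"char perplexity = {2 ** bpc:.2f}")  # 2 ** BPC ≈ 3.36
```
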
# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | - | Unverified
2 | OPT 125M | Test perplexity | 32.26 | - | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | - | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | - | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | - | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | - | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | - | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | - | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | - | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | - | Unverified