SOTAVerified

Language Modelling

A language model is a probabilistic model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on large datasets (frequently text scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as the word n-gram language model.

Source: Wikipedia
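
The word n-gram models mentioned above estimate each word's probability from counts of the short context that precedes it. Below is a minimal sketch of a smoothed bigram model; the toy corpus, function names, and add-alpha smoothing are assumptions chosen purely for illustration, not the method of any paper listed here.

    # Illustrative only: a tiny add-alpha-smoothed word bigram model.
    from collections import Counter

    def train_bigram(corpus):
        """Count context (unigram) and bigram frequencies."""
        unigrams, bigrams = Counter(), Counter()
        for sentence in corpus:
            tokens = ["<s>"] + sentence.split() + ["</s>"]
            unigrams.update(tokens[:-1])
            bigrams.update(zip(tokens[:-1], tokens[1:]))
        return unigrams, bigrams

    def prob(unigrams, bigrams, prev, word, alpha=1.0):
        """Add-alpha smoothed estimate of P(word | prev)."""
        vocab = len(unigrams)
        return (bigrams[(prev, word)] + alpha) / (unigrams[prev] + alpha * vocab)

    unigrams, bigrams = train_bigram(["the cat sat", "the dog sat"])
    print(prob(unigrams, bigrams, "the", "cat"))   # seen pair: ~0.29
    print(prob(unigrams, bigrams, "the", "ran"))   # unseen pair: ~0.14

Neural models replace these count tables with learned parameters, which is what allows RNNs, and later transformers, to condition on far longer contexts than a fixed n-gram window.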

Papers

Showing 11301–11350 of 17610 papers

Title | Status | Hype
From Dogwhistles to Bullhorns: Unveiling Coded Rhetoric with Language Models | | 0
Zero-shot Visual Question Answering with Language Model Feedback | Code | 0
Tokenization Impacts Multilingual Language Modeling: Assessing Vocabulary Allocation and Overlap Across Languages | Code | 0
Leveraging Domain Knowledge for Inclusive and Bias-aware Humanitarian Response Entry Classification | Code | 0
SQL-PaLM: Improved Large Language Model Adaptation for Text-to-SQL (extended) | | 0
Large language models improve Alzheimer's disease diagnosis using multi-modality data | | 0
Slide, Constrain, Parse, Repeat: Synchronous SlidingWindows for Document AMR Parsing | | 0
Masked and Permuted Implicit Context Learning for Scene Text Recognition | Code | 0
RewriteLM: An Instruction-Tuned Large Language Model for Text Rewriting | | 0
BookGPT: A General Framework for Book Recommendation Empowered by Large Language Model | | 0
Improving Scheduled Sampling for Neural Transducer-based ASR | | 0
VioLA: Unified Codec Language Models for Speech Recognition, Synthesis, and Translation | | 0
Textless Speech-to-Speech Translation With Limited Parallel Data | Code | 0
Winner-Take-All Column Row Sampling for Memory Efficient Adaptation of Language Model | Code | 0
Just CHOP: Embarrassingly Simple LLM Compression | | 0
Self-Evolution Learning for Discriminative Language Model Pretraining | Code | 0
Neural Summarization of Electronic Health Records | | 0
Alt-Text with Context: Improving Accessibility for Images on Twitter | | 0
Trade-Offs Between Fairness and Privacy in Language Modeling | Code | 0
Lexinvariant Language Models | | 0
AutoPlan: Automatic Planning of Interactive Decision-Making Tasks With Large Language Models | Code | 0
Spoken Question Answering and Speech Continuation Using Spectrogram-Powered LLM | Code | 0
Structural Ambiguity and its Disambiguation in Language Model Based Parsers: the Case of Dutch Clause Relativization | | 0
PURR: Efficiently Editing Language Model Hallucinations by Denoising Language Model Corruptions | | 0
Getting MoRE out of Mixture of Language Model Reasoning Experts | | 0
Towards Adaptive Prefix Tuning for Parameter-Efficient Language Model Fine-tuning | | 0
Towards Few-shot Entity Recognition in Document Images: A Graph Neural Network Approach Robust to Image Manipulation | Code | 0
Large Language Models are Few-Shot Health Learners | | 0
This Land is Your, My Land: Evaluating Geopolitical Biases in Language Models | Code | 0
Allies: Prompting Large Language Model with Beam Search | | 0
Eliciting the Translation Ability of Large Language Models via Multilingual Finetuning with Translation Instructions | | 0
Calc-X and Calcformers: Empowering Arithmetical Chain-of-Thought through Interaction with Symbolic Systems | Code | 0
Chain-of-Questions Training with Latent Answers for Robust Multistep Question Answering | | 0
Estimating class separability of text embeddings with persistent homology | | 0
How Predictable Are Large Language Model Capabilities? A Case Study on BIG-bench | Code | 0
Estimating Large Language Model Capabilities without Labeled Test Data | Code | 0
Drafting Event Schemas using Language Models | | 0
Emergent inabilities? Inverse scaling over the course of pretraining | | 0
In-Context Demonstration Selection with Cross Entropy Difference | | 0
Focus Your Attention (with Adaptive IIR Filters) | | 0
Dynamic Masking Rate Schedules for MLM Pretraining | | 0
A Monte Carlo Language Model Pipeline for Zero-Shot Sociopolitical Event Extraction | | 0
EmbodiedGPT: Vision-Language Pre-Training via Embodied Chain of Thought | | 0
Enhancing Black-Box Few-Shot Text Classification with Prompt-Based Data Augmentation | | 0
APPLS: Evaluating Evaluation Metrics for Plain Language Summarization | Code | 0
GenSpectrum Chat: Data Exploration in Public Health Using Large Language Models | | 0
Acquiring Frame Element Knowledge with Deep Metric Learning for Semantic Frame Induction | | 0
Do All Languages Cost the Same? Tokenization in the Era of Commercial Language Models | | 0
Discrete Prompt Optimization via Constrained Generation for Zero-shot Re-ranker | Code | 0
Faithful and Efficient Explanations for Neural Networks via Neural Tangent Kernel Surrogate Models | Code | 0
Page 227 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | | Unverified
2 | GRU | Validation perplexity | 53.78 | | Unverified
3 | LSTM | Validation perplexity | 52.73 | | Unverified
4 | LSTM | Test perplexity | 48.7 | | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | | Unverified
6 | TCN | Test perplexity | 45.19 | | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | | Unverified
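
The perplexity metric in these tables is the exponential of the model's average negative log-likelihood per token, so lower is better: a perplexity of k means the model is on average as uncertain as a uniform choice among k tokens. A minimal sketch, with hypothetical log-probability values:

    # Illustrative only: perplexity from per-token log-probabilities.
    import math

    def perplexity(log_probs):
        """exp of the average negative log-likelihood (natural logs)."""
        return math.exp(-sum(log_probs) / len(log_probs))

    # A model that assigns every token probability 1/50 scores a
    # perplexity of 50: as uncertain as a uniform 50-way choice.
    print(perplexity([math.log(1 / 50)] * 100))   # ~50.0
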
# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | | Unverified
4 | R-Transformer | Test perplexity | 84.38 | | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bits per Character (BPC) | 1.67 | | Unverified
2 | Hypernetworks | Bits per Character (BPC) | 1.34 | | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bits per Character (BPC) | 1.33 | | Unverified
4 | LN HM-LSTM | Bits per Character (BPC) | 1.32 | | Unverified
5 | ByteNet | Bits per Character (BPC) | 1.31 | | Unverified
6 | Recurrent Highway Networks | Bits per Character (BPC) | 1.27 | | Unverified
7 | Large FS-LSTM-4 | Bits per Character (BPC) | 1.25 | | Unverified
8 | Large mLSTM | Bits per Character (BPC) | 1.24 | | Unverified
9 | AWD-LSTM (3 layers) | Bits per Character (BPC) | 1.23 | | Unverified
10 | Cluster-Former (#C=512) | Bits per Character (BPC) | 1.22 | | Unverified
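
Bits per character (BPC), used in the table above, is the same cross-entropy quantity measured in base-2 logs per character rather than natural logs per word; the equivalent character-level perplexity is 2**BPC. A minimal sketch with hypothetical per-character log2-probabilities:

    # Illustrative only: BPC from per-character log2-probabilities.
    def bits_per_character(log2_probs):
        """Average negative log2-probability per character."""
        return -sum(log2_probs) / len(log2_probs)

    # A model assigning each character probability 2**-1.24 scores
    # 1.24 BPC (cf. the Large mLSTM row above); its character-level
    # perplexity is 2**1.24, about 2.36.
    print(bits_per_character([-1.24] * 1000))   # 1.24
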
# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | | Unverified
2 | OPT 125M | Test perplexity | 32.26 | | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | | Unverified