SOTAVerified

Language Modelling

A language model is a probabilistic model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on large datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as word n-gram language models.

Source: Wikipedia
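
For a concrete sense of the "purely statistical" models mentioned above, here is a minimal sketch of a word bigram language model with add-one smoothing. The toy corpus, function names, and smoothing choice are illustrative assumptions, not taken from any paper listed below.

    import math
    from collections import Counter

    # Toy corpus; in practice an n-gram model is estimated from a large text corpus.
    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    unigram_counts = Counter(corpus)
    bigram_counts = Counter(zip(corpus, corpus[1:]))
    vocab_size = len(unigram_counts)

    def bigram_prob(prev_word, word):
        # P(word | prev_word) with add-one (Laplace) smoothing, so unseen
        # bigrams still receive non-zero probability.
        return (bigram_counts[(prev_word, word)] + 1) / (unigram_counts[prev_word] + vocab_size)

    def sequence_logprob(words):
        # Log-probability of a word sequence under the bigram model.
        return sum(math.log(bigram_prob(p, w)) for p, w in zip(words, words[1:]))

    print(sequence_logprob("the dog sat on the mat .".split()))

Neural language models replace these count-based conditional probabilities with a learned distribution over the next token, but the chain-rule factorization of sequence probability is the same.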

Papers

Showing 3701–3750 of 17610 papers

Title | Status | Hype
MAGDi: Structured Distillation of Multi-Agent Interaction Graphs Improves Reasoning in Smaller Language Models | Code | 1
Dealing with Typos for BERT-based Passage Retrieval and Ranking | Code | 1
MAgIC: Investigation of Large Language Model Powered Multi-Agent in Cognition, Adaptability, Rationality and Collaboration | Code | 1
LasUIE: Unifying Information Extraction with Latent Adaptive Structure-aware Generative Language Model | Code | 1
Counterfactual Token Generation in Large Language Models | Code | 1
The Woman Worked as a Babysitter: On Biases in Language Generation | Code | 1
SentenceMIM: A Latent Variable Language Model | Code | 1
Machine learning as a model for cultural learning: Teaching an algorithm what it means to be fat | Code | 1
MAGIC: Generating Self-Correction Guideline for In-Context Text-to-SQL | Code | 1
Latin BERT: A Contextual Language Model for Classical Philology | Code | 1
M^3GPT: An Advanced Multimodal, Multitask Framework for Motion Comprehension and Generation | Code | 1
M-ABSA: A Multilingual Dataset for Aspect-Based Sentiment Analysis | Code | 1
Automatic Label Sequence Generation for Prompting Sequence-to-sequence Models | Code | 1
Data-to-Text Generation with Iterative Text Editing | Code | 1
Debiasing Methods in Natural Language Understanding Make Bias More Accessible | Code | 1
Making AI Less "Thirsty": Uncovering and Addressing the Secret Water Footprint of AI Models | Code | 1
Luna: Linear Unified Nested Attention | Code | 1
Learning distributed representations of graphs with Geo2DR | Code | 1
Lumen: Unleashing Versatile Vision-Centric Capabilities of Large Multimodal Models | Code | 1
CoVR-2: Automatic Data Construction for Composed Video Retrieval | Code | 1
LeaPformer: Enabling Linear Transformers for Autoregressive and Simultaneous Tasks via Learned Proportions | Code | 1
LXMERT: Learning Cross-Modality Encoder Representations from Transformers | Code | 1
BERT got a Date: Introducing Transformers to Temporal Tagging | Code | 1
CPLLM: Clinical Prediction with Large Language Models | Code | 1
LUMA: A Benchmark Dataset for Learning from Uncertain and Multimodal Data | Code | 1
CPM: A Large-scale Generative Chinese Pre-trained Language Model | Code | 1
LyricWhiz: Robust Multilingual Zero-shot Lyrics Transcription by Whispering to ChatGPT | Code | 1
BERT Goes Shopping: Comparing Distributional Models for Product Representations | Code | 1
MGeo: Multi-Modal Geographic Pre-Training Method | Code | 1
CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation | Code | 1
An Empirical Study of Metrics to Measure Representational Harms in Pre-Trained Language Models | Code | 1
A Multimodal In-Context Tuning Approach for E-Commerce Product Description Generation | Code | 1
Data Movement Is All You Need: A Case Study on Optimizing Transformers | Code | 1
LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention | Code | 1
M2D2: A Massively Multi-domain Language Modeling Dataset | Code | 1
Data Efficient Masked Language Modeling for Vision and Language | Code | 1
LQ-LoRA: Low-rank Plus Quantized Matrix Decomposition for Efficient Language Model Finetuning | Code | 1
CrAM: A Compression-Aware Minimizer | Code | 1
Low-Rank Adapting Models for Sparse Autoencoders | Code | 1
Learning Compact Metrics for MT | Code | 1
Automatic Prompt Augmentation and Selection with Chain-of-Thought from Labeled Data | Code | 1
TOD-BERT: Pre-trained Natural Language Understanding for Task-Oriented Dialogue | Code | 1
LSBert: A Simple Framework for Lexical Simplification | Code | 1
LoRAShear: Efficient Large Language Model Structured Pruning and Knowledge Recovery | Code | 1
Data Augmentation using Pre-trained Transformer Models | Code | 1
Learning Domain Invariant Prompt for Vision-Language Models | Code | 1
Learning Fine-Grained Visual Understanding for Video Question Answering via Decoupling Spatial-Temporal Modeling | Code | 1
Top1 Solution of QQ Browser 2021 Ai Algorithm Competition Track 1 : Multimodal Video Similarity | Code | 1
Sample Efficient Reinforcement Learning via Large Vision Language Model Distillation | Code | 1
Lossless Acceleration of Large Language Model via Adaptive N-gram Parallel Decoding | Code | 1
Page 75 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | - | Unverified
2 | GRU | Validation perplexity | 53.78 | - | Unverified
3 | LSTM | Validation perplexity | 52.73 | - | Unverified
4 | LSTM | Test perplexity | 48.7 | - | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | - | Unverified
6 | TCN | Test perplexity | 45.19 | - | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | - | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | - | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | - | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | - | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | - | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | - | Unverified
4 | R-Transformer | Test perplexity | 84.38 | - | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | - | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | - | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | - | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | - | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | - | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bits per Character (BPC) | 1.67 | - | Unverified
2 | Hypernetworks | Bits per Character (BPC) | 1.34 | - | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bits per Character (BPC) | 1.33 | - | Unverified
4 | LN HM-LSTM | Bits per Character (BPC) | 1.32 | - | Unverified
5 | ByteNet | Bits per Character (BPC) | 1.31 | - | Unverified
6 | Recurrent Highway Networks | Bits per Character (BPC) | 1.27 | - | Unverified
7 | Large FS-LSTM-4 | Bits per Character (BPC) | 1.25 | - | Unverified
8 | Large mLSTM | Bits per Character (BPC) | 1.24 | - | Unverified
9 | AWD-LSTM (3 layers) | Bits per Character (BPC) | 1.23 | - | Unverified
10 | Cluster-Former (#C=512) | Bits per Character (BPC) | 1.22 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | - | Unverified
2 | OPT 125M | Test perplexity | 32.26 | - | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | - | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | - | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | - | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | - | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | - | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | - | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | - | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | - | Unverified
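
For reference, both leaderboard metrics above are simple transforms of a model's average cross-entropy loss: test perplexity is the exponential of the per-token loss in nats, and bits per character is the per-character loss expressed in base 2. A minimal sketch follows; the function names and the example loss value are our own for illustration, not drawn from the rows above.

    import math

    def perplexity(avg_nll_nats):
        # Perplexity = exp(average negative log-likelihood per token, in nats).
        return math.exp(avg_nll_nats)

    def bits_per_character(avg_nll_nats):
        # BPC = average negative log-likelihood per character, converted to bits.
        return avg_nll_nats / math.log(2)

    # Example: a character-level loss of 0.85 nats/char is about 1.23 BPC,
    # in the range of the strongest entries in the BPC table above.
    print(bits_per_character(0.85))  # ~1.226
    print(perplexity(0.85))          # ~2.340 (per-character perplexity)

Because both transforms are monotone, ranking models by perplexity or BPC is equivalent to ranking them by cross-entropy loss; lower is better in all three.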