
Language Modelling

A language model is a model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (producing more human-like text), optical character recognition, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on larger datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as the word n-gram language model.

Source: Wikipedia
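
The word n-gram model mentioned above estimates the probability of a word from counts of short word sequences in a corpus. As a rough sketch of the idea (the toy corpus, the bigram_prob helper, and the absence of smoothing are illustrative assumptions, not taken from any source on this page):

```python
from collections import Counter

# Minimal maximum-likelihood word-bigram model:
# P(w_i | w_{i-1}) = count(w_{i-1}, w_i) / count(w_{i-1}).
# Toy corpus; real systems train on far larger text and add smoothing
# (e.g. Kneser-Ney) so unseen pairs do not get zero probability.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

pair_counts = Counter(zip(corpus, corpus[1:]))  # counts of (w_{i-1}, w_i)
context_counts = Counter(corpus[:-1])           # counts of contexts w_{i-1}

def bigram_prob(prev_word, word):
    """Maximum-likelihood estimate of P(word | prev_word)."""
    if context_counts[prev_word] == 0:
        return 0.0
    return pair_counts[(prev_word, word)] / context_counts[prev_word]

print(bigram_prob("the", "cat"))  # 0.25: "the" is followed by "cat" in 1 of 4 cases
```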

Papers

Showing 3101–3150 of 17610 papers

Title | Status | Hype
Composable Text Controls in Latent Space with ODEs | Code | 1
Aggretriever: A Simple Approach to Aggregate Textual Representations for Robust Dense Passage Retrieval | Code | 1
CrAM: A Compression-Aware Minimizer | Code | 1
Contextual Information and Commonsense Based Prompt for Emotion Recognition in Conversation | Code | 1
Training Effective Neural Sentence Encoders from Automatically Mined Paraphrases | Code | 1
Improving Mandarin Speech Recogntion with Block-augmented Transformer | Code | 1
Zero-Shot Video Captioning with Evolving Pseudo-Tokens | Code | 1
Unsupervised pre-training of graph transformers on patient population graphs | Code | 1
Leveraging Natural Supervision for Language Representation Learning and Generation | Code | 1
Label2Label: A Language Modeling Framework for Multi-Attribute Learning | Code | 1
Clover: Towards A Unified Video-Language Alignment and Fusion Model | Code | 1
The Harvard USPTO Patent Dataset: A Large-Scale, Well-Structured, and Multi-Purpose Corpus of Patent Applications | Code | 1
A Large Scale Search Dataset for Unbiased Learning to Rank | Code | 1
Predicting Opinion Dynamics via Sociologically-Informed Neural Networks | Code | 1
Open-Vocabulary Multi-Label Classification via Multi-Modal Knowledge Transfer | Code | 1
Probing via Prompting | Code | 1
ConfliBERT: A Pre-trained Language Model for Political Conflict and Violence | Code | 1
CL-ReLKT: Cross-lingual Language Knowledge Transfer for Multilingual Retrieval Question Answering | Code | 1
CoMPM: Context Modeling with Speaker’s Pre-trained Memory Tracking for Emotion Recognition in Conversation | Code | 1
Forecasting Future World Events with Neural Networks | Code | 1
Improving Visual Grounding by Encouraging Consistent Gradient-based Explanations | Code | 1
CC-Riddle: A Question Answering Dataset of Chinese Character Riddles | Code | 1
Few-Shot Fine-Grained Entity Typing with Automatic Label Interpretation and Instance Generation | Code | 1
Distilling a Pretrained Language Model to a Multilingual ASR Model | Code | 1
Protoformer: Embedding Prototypes for Transformers | Code | 1
Self-Supervised Learning of Brain Dynamics from Broad Neuroimaging Data | Code | 1
Automatic Controllable Product Copywriting for E-Commerce | Code | 1
SemMAE: Semantic-Guided Masking for Learning Masked Autoencoders | Code | 1
Questions Are All You Need to Train a Dense Passage Retriever | Code | 1
BenchCLAMP: A Benchmark for Evaluating Language Models on Syntactic and Semantic Parsing | Code | 1
SSM-DTA: Breaking the Barriers of Data Scarcity in Drug-Target Affinity Prediction | Code | 1
Towards Unified Conversational Recommender Systems via Knowledge-Enhanced Prompt Learning | Code | 1
Zero-Shot Video Question Answering via Frozen Bidirectional Language Models | Code | 1
Write and Paint: Generative Vision-Language Models are Unified Modal Learners | Code | 1
LAVENDER: Unifying Video-Language Understanding as Masked Language Modeling | Code | 1
Memory-Based Model Editing at Scale | Code | 1
JiuZhang: A Chinese Pre-trained Language Model for Mathematical Problem Understanding | Code | 1
Efficient recurrent architectures through activity sparsity and sparse back-propagation through time | Code | 1
A Multi-Task Benchmark for Korean Legal Language Understanding and Judgement Prediction | Code | 1
SsciBERT: A Pre-trained Language Model for Social Science Texts | Code | 1
Traditional and context-specific spam detection in low resource settings | Code | 1
OrdinalCLIP: Learning Rank Prompts for Language-Guided Ordinal Regression | Code | 1
Improving Contrastive Learning of Sentence Embeddings with Case-Augmented Positives and Retrieved Negatives | Code | 1
Pretrained Models for Multilingual Federated Learning | Code | 1
Cross-View Language Modeling: Towards Unified Cross-Lingual Cross-Modal Pre-training | Code | 1
On Reinforcement Learning and Distribution Matching for Fine-Tuning Language Models with no Catastrophic Forgetting | Code | 1
PoliBERTweet: A Pre-trained Language Model for Analyzing Political Content on Twitter | Code | 1
ViHealthBERT: Pre-trained Language Models for Vietnamese in Health Text Mining | Code | 1
hmBERT: Historical Multilingual Language Models for Named Entity Recognition | Code | 1
Page 63 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | - | Unverified
2 | GRU | Validation perplexity | 53.78 | - | Unverified
3 | LSTM | Validation perplexity | 52.73 | - | Unverified
4 | LSTM | Test perplexity | 48.7 | - | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | - | Unverified
6 | TCN | Test perplexity | 45.19 | - | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | - | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | - | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | - | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | - | Unverified
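
A note on the metric in these tables: test perplexity is the exponentiated average negative log-likelihood a model assigns to held-out tokens, so lower is better, and figures are only comparable when the dataset and tokenization match. A minimal sketch of the computation, with made-up per-token probabilities (none of these values come from the rows above):

```python
import math

def perplexity(token_log_probs):
    """Perplexity from natural-log probabilities assigned to each test token:
    exp of the average negative log-likelihood."""
    avg_nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(avg_nll)

# Hypothetical per-token probabilities for a four-token sequence.
log_probs = [math.log(p) for p in [0.1, 0.25, 0.05, 0.2]]
print(round(perplexity(log_probs), 2))  # 7.95
```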
# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | - | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | - | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | - | Unverified
4 | R-Transformer | Test perplexity | 84.38 | - | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | - | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | - | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | - | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | - | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | - | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bit per Character (BPC) | 1.67 | - | Unverified
2 | Hypernetworks | Bit per Character (BPC) | 1.34 | - | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bit per Character (BPC) | 1.33 | - | Unverified
4 | LN HM-LSTM | Bit per Character (BPC) | 1.32 | - | Unverified
5 | ByteNet | Bit per Character (BPC) | 1.31 | - | Unverified
6 | Recurrent Highway Networks | Bit per Character (BPC) | 1.27 | - | Unverified
7 | Large FS-LSTM-4 | Bit per Character (BPC) | 1.25 | - | Unverified
8 | Large mLSTM | Bit per Character (BPC) | 1.24 | - | Unverified
9 | AWD-LSTM (3 layers) | Bit per Character (BPC) | 1.23 | - | Unverified
10 | Cluster-Former (#C=512) | Bit per Character (BPC) | 1.22 | - | Unverified
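
Bits per character (BPC), the metric in the table above, is the same cross-entropy quantity expressed per character in base 2: it equals the natural-log cross-entropy divided by ln 2, and character-level perplexity is 2^BPC. A small sketch of the conversion (only the 1.22 figure is taken from the table; the helper names are mine):

```python
import math

def nats_to_bpc(avg_nll_nats):
    """Convert an average per-character negative log-likelihood in nats
    to bits per character."""
    return avg_nll_nats / math.log(2)

def bpc_to_char_perplexity(bpc):
    """Character-level perplexity implied by a BPC figure."""
    return 2.0 ** bpc

# The best row above reports BPC = 1.22, i.e. a character-level
# perplexity of roughly 2.33.
print(round(bpc_to_char_perplexity(1.22), 2))  # 2.33
```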
# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | - | Unverified
2 | OPT 125M | Test perplexity | 32.26 | - | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | - | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | - | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | - | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | - | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | - | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | - | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | - | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | - | Unverified