SOTAVerified

Language Modelling

A language model is a probabilistic model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on larger datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had in turn superseded purely statistical models such as word n-gram language models.

Source: Wikipedia
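
As a concrete illustration of the word n-gram approach mentioned above, here is a minimal sketch of a bigram model with add-one smoothing. The toy corpus and the `bigram_prob` helper are illustrative assumptions, not taken from any particular paper:

```python
from collections import Counter

# Toy corpus; real n-gram models are estimated over millions of words.
corpus = "the cat sat on the mat the cat ate".split()
vocab = set(corpus)

unigram_counts = Counter(corpus)
bigram_counts = Counter(zip(corpus, corpus[1:]))

def bigram_prob(prev, word):
    """P(word | prev) with add-one (Laplace) smoothing over the vocabulary."""
    return (bigram_counts[(prev, word)] + 1) / (unigram_counts[prev] + len(vocab))

print(bigram_prob("the", "cat"))  # ~0.33: "the cat" occurs twice in the corpus
print(bigram_prob("the", "ate"))  # ~0.11: "ate" never follows "the" here
```

Smoothing is what keeps unseen bigrams from receiving zero probability; it is one of the main design knobs in statistical language models.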

Papers

Showing 7001–7050 of 17610 papers

Title | Status | Hype
CATCH: Complementary Adaptive Token-level Contrastive Decoding to Mitigate Hallucinations in LVLMs | | 0
Catching the Common Cause: Extraction and Annotation of Causal Relations and their Participants | | 0
Positional Artefacts Propagate Through Masked Language Model Embeddings | | 0
Cat, Rat, Meow: On the Alignment of Language Model and Human Term-Similarity Judgments | | 0
Causal Distillation for Language Models | | 0
Causal Distillation: Transferring Structured Explanations from Large to Compact Language Models | | 0
Causal Graph in Language Model Rediscovers Cortical Hierarchy in Human Narrative Processing | | 0
Causal Inference with Large Language Model: A Survey | | 0
Causal Language Model for Zero-shot Constrained Keyphrase Generation | | 0
Causal Prompting: Debiasing Large Language Model Prompting based on Front-Door Adjustment | | 0
Data Augmentations for Improved (Large) Language Model Generalization | | 0
caWaC -- A web corpus of Catalan and its application to language modeling and machine translation | | 0
CBAG: Conditional Biomedical Abstract Generation | | 0
CBT-LLM: A Chinese Large Language Model for Cognitive Behavioral Therapy-based Mental Health Question Answering | | 0
FactLLaMA: Optimizing Instruction-Following Language Models with External Knowledge for Automated Fact-Checking | | 0
Factored Agents: Decoupling In-Context Learning and Memorization for Robust Tool Use | | 0
Factored Language Model based on Recurrent Neural Network | | 0
Facts as Experts: Adaptable and Interpretable Neural Memory over Symbolic Knowledge | | 0
FACTTRACK: Time-Aware World State Tracking in Story Outlines | | 0
Factual and Personalized Recommendations using Language Models and Reinforcement Learning | | 0
Factual Consistency Oriented Speech Recognition | | 0
Factual Dialogue Summarization via Learning from Large Language Models | | 0
A Normative Framework for Benchmarking Consumer Fairness in Large Language Model Recommender System | | 0
FairHome: A Fair Housing and Fair Lending Dataset | | 0
Understanding the Role of Cross-Entropy Loss in Fairly Evaluating Large Language Model-based Recommendation | | 0
Fairness in Representation for Multilingual NLP: Insights from Controlled Experiments on Conditional Language Modeling | | 0
FairT2I: Mitigating Social Bias in Text-to-Image Generation via Large Language Model-Assisted Detection and Attribute Rebalancing | | 0
FairyLandAI: Personalized Fairy Tales utilizing ChatGPT and DALLE-3 | | 0
Faithful Explanations of Black-box NLP Models Using LLM-generated Counterfactuals | | 0
Faithful Path Language Modeling for Explainable Recommendation over Knowledge Graph | | 0
Fake News Detection and Manipulation Reasoning via Large Vision-Language Models | | 0
FakeReasoning: Towards Generalizable Forgery Detection and Reasoning | | 0
Fake Sentence Detection as a Training Task for Sentence Encoding | | 0
Falcon2-11B Technical Report | | 0
FALCON: Fine-grained Activation Manipulation by Contrastive Orthogonal Unalignment for Large Language Model | | 0
Falcon Mamba: The First Competitive Attention-free 7B Language Model | | 0
FANAL -- Financial Activity News Alerting Language Modeling Framework | | 0
FANTAstic SEquences and Where to Find Them: Faithful and Efficient API Call Generation through State-tracked Constrained Decoding and Reranking | | 0
ReAttention: Training-Free Infinite Context with Finite Attention Scope | | 0
FARM: Functional Group-Aware Representations for Small Molecules | | 0
FarSSiBERT: A Novel Transformer-based Model for Semantic Similarity Measurement of Persian Social Networks Informal Texts | | 0
Farzi Data: Autoregressive Data Distillation | | 0
FashionM3: Multimodal, Multitask, and Multiround Fashion Assistant based on Unified Vision-Language Model | | 0
FAS-LLM: Large Language Model-Based Channel Prediction for OTFS-Enabled Satellite-FAS Links | | 0
FASST: Fast LLM-based Simultaneous Speech Translation | | 0
FastAdaBelief: Improving Convergence Rate for Belief-based Adaptive Optimizers by Exploiting Strong Convexity | | 0
Fast and accurate factorized neural transducer for text adaption of end-to-end speech recognition models | | 0
Fast and Accurate Preordering for SMT using Neural Networks | | 0
Fast and Memory-Efficient Neural Code Completion | | 0
Page 141 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | | Unverified
2 | GRU | Validation perplexity | 53.78 | | Unverified
3 | LSTM | Validation perplexity | 52.73 | | Unverified
4 | LSTM | Test perplexity | 48.7 | | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | | Unverified
6 | TCN | Test perplexity | 45.19 | | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | | Unverified
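
The perplexity figures above are the exponentiated average negative log-likelihood a model assigns to held-out tokens; lower is better. A minimal sketch of the computation, using made-up token probabilities rather than output from any listed model:

```python
import math

# Hypothetical probabilities a model assigned to four held-out tokens.
token_probs = [0.10, 0.02, 0.30, 0.05]

# Test perplexity = exp(mean negative log-likelihood over the test tokens).
nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
perplexity = math.exp(nll)
print(f"perplexity = {perplexity:.2f}")  # lower is better
```

A model that assigned uniform probability over a vocabulary of size V would score a perplexity of exactly V, which is why the claimed values shrink as models improve.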
# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | | Unverified
4 | R-Transformer | Test perplexity | 84.38 | | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bit per Character (BPC) | 1.67 | | Unverified
2 | Hypernetworks | Bit per Character (BPC) | 1.34 | | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bit per Character (BPC) | 1.33 | | Unverified
4 | LN HM-LSTM | Bit per Character (BPC) | 1.32 | | Unverified
5 | ByteNet | Bit per Character (BPC) | 1.31 | | Unverified
6 | Recurrent Highway Networks | Bit per Character (BPC) | 1.27 | | Unverified
7 | Large FS-LSTM-4 | Bit per Character (BPC) | 1.25 | | Unverified
8 | Large mLSTM | Bit per Character (BPC) | 1.24 | | Unverified
9 | AWD-LSTM (3 layers) | Bit per Character (BPC) | 1.23 | | Unverified
10 | Cluster-Former (#C=512) | Bit per Character (BPC) | 1.22 | | Unverified
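
Bit per Character (BPC) is the character-level analogue of perplexity: the average negative base-2 log-probability per character, so character-level perplexity equals 2^BPC. A sketch under the same assumption of made-up probabilities:

```python
import math

# Hypothetical probabilities a model assigned to four held-out characters.
char_probs = [0.5, 0.25, 0.8, 0.1]

# BPC = mean of -log2 p(char); character-level perplexity is then 2**BPC.
bpc = -sum(math.log2(p) for p in char_probs) / len(char_probs)
print(f"BPC = {bpc:.3f}, char perplexity = {2 ** bpc:.2f}")
```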
# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | | Unverified
2 | OPT 125M | Test perplexity | 32.26 | | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | | Unverified