SOTAVerified

Language Modelling

A language model is a model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on large datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as the word n-gram language model.

Source: Wikipedia
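
To make the word n-gram idea mentioned above concrete, here is a minimal bigram language model sketch in Python. The toy corpus, the add-one smoothing choice, and the function name `bigram_prob` are illustrative assumptions, not any benchmarked system.

```python
from collections import defaultdict

# Toy corpus; real n-gram models are estimated from far larger text.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

bigram_counts = defaultdict(lambda: defaultdict(int))
context_counts = defaultdict(int)
for prev, word in zip(corpus, corpus[1:]):
    bigram_counts[prev][word] += 1
    context_counts[prev] += 1

vocab = set(corpus)

def bigram_prob(prev: str, word: str) -> float:
    """P(word | prev) with add-one (Laplace) smoothing over the vocabulary."""
    return (bigram_counts[prev][word] + 1) / (context_counts[prev] + len(vocab))

print(bigram_prob("the", "cat"))  # a seen pair scores higher...
print(bigram_prob("the", "sat"))  # ...than an unseen one
```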

Papers

Showing 15201–15250 of 17610 papers

Title | Status | Hype
Sameness Entices, but Novelty Enchants in Fanfiction Online | Code | 0
Learning to Learn Words from Visual Scenes | Code | 0
Investigating Prior Knowledge for Challenging Chinese Machine Reading Comprehension | Code | 0
Sample Efficient Text Summarization Using a Single Pre-Trained Transformer | Code | 0
Learning to Infer from Unlabeled Data: A Semi-supervised Learning Approach for Robust Natural Language Inference | Code | 0
Probing Linguistic Information For Logical Inference In Pre-trained Language Models | Code | 0
Joint processing of linguistic properties in brains and language models | Code | 0
Large Scale Language Modeling: Converging on 40GB of Text in Four Hours | Code | 0
Learning to Generate Compositional Color Descriptions | Code | 0
The Factuality Tax of Diversity-Intervened Text-to-Image Generation: Benchmark and Fact-Augmented Intervention | Code | 0
More Experts Than Galaxies: Conditionally-overlapping Experts With Biologically-Inspired Fixed Routing | Code | 0
StrucTexT: Structured Text Understanding with Multi-Modal Transformers | Code | 0
Sarcasm Detection in a Less-Resourced Language | Code | 0
Probing BERT's priors with serial reproduction chains | Code | 0
SART - Similarity, Analogies, and Relatedness for Tatar Language: New Benchmark Datasets for Word Embeddings Evaluation | Code | 0
SAS: Self-Augmentation Strategy for Language Model Pre-training | Code | 0
StrucTexTv2: Masked Visual-Textual Prediction for Document Image Pre-training | Code | 0
SATA: A Paradigm for LLM Jailbreak via Simple Assistive Task Linkage | Code | 0
Prix-LM: Pretraining for Multilingual Knowledge Base Construction | Code | 0
Satori: Towards Proactive AR Assistant with Belief-Desire-Intention User Modeling | Code | 0
SATURN: SAT-based Reinforcement Learning to Unleash Language Model Reasoning | Code | 0
SaudiBERT: A Large Language Model Pretrained on Saudi Dialect Corpora | Code | 0
Track the Answer: Extending TextVQA from Image to Video with Spatio-Temporal Clues | Code | 0
LLM-GEm: Large Language Model-Guided Prediction of People’s Empathy Levels towards Newspaper Article | Code | 0
Privacy Ripple Effects from Adding or Removing Personal Information in Language Model Training | Code | 0
MOOCRep: A Unified Pre-trained Embedding of MOOC Entities | Code | 0
Say What You Mean! Large Language Models Speak Too Positively about Negative Commonsense Knowledge | Code | 0
Mono vs Multilingual Transformer-based Models: a Comparison across Several Language Tasks | Code | 0
SBI-RAG: Enhancing Math Word Problem Solving for Students through Schema-Based Instruction and Retrieval-Augmented Generation | Code | 0
Toward Open-Set Human Object Interaction Detection | Code | 0
Scaffolded input promotes atomic organization in the recurrent neural network language model | Code | 0
LLM-enhanced Self-training for Cross-domain Constituency Parsing | Code | 0
SCA: Improve Semantic Consistent in Unrestricted Adversarial Attacks via DDPM Inversion | Code | 0
Understanding the Vulnerability of CLIP to Image Compression | Code | 0
Learning to Explore and Select for Coverage-Conditioned Retrieval-Augmented Generation | Code | 0
Monotonic Paraphrasing Improves Generalization of Language Model Prompting | Code | 0
Scalable Educational Question Generation with Pre-trained Language Models | Code | 0
Structural Language Models of Code | Code | 0
Priors for symbolic regression | Code | 0
LLM-based Rewriting of Inappropriate Argumentation using Reinforcement Learning from Machine Feedback | Code | 0
Larger Probes Tell a Different Story: Extending Psycholinguistic Datasets Via In-Context Learning | Code | 0
Primer: Searching for Efficient Transformers for Language Modeling | Code | 0
Tradeoffs Between Alignment and Helpfulness in Language Models with Representation Engineering | Code | 0
Structural Self-Supervised Objectives for Transformers | Code | 0
Pre-train, Prompt and Recommendation: A Comprehensive Survey of Language Modelling Paradigm Adaptations in Recommender Systems | Code | 0
Trade-Offs Between Fairness and Privacy in Language Modeling | Code | 0
Large Product Key Memory for Pretrained Language Models | Code | 0
Pretrain like Your Inference: Masked Tuning Improves Zero-Shot Composed Image Retrieval | Code | 0
Monolingual and Multilingual Reduction of Gender Bias in Contextualized Representations | Code | 0
Large Memory Layers with Product Keys | Code | 0
Page 305 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | - | Unverified
2 | GRU | Validation perplexity | 53.78 | - | Unverified
3 | LSTM | Validation perplexity | 52.73 | - | Unverified
4 | LSTM | Test perplexity | 48.7 | - | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | - | Unverified
6 | TCN | Test perplexity | 45.19 | - | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | - | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | - | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | - | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | - | Unverified
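
Perplexity, the metric in these tables, is the exponential of the average negative log-likelihood a model assigns to held-out text; lower is better. A minimal sketch of the computation, assuming per-token probabilities are already in hand (the probability values below are made up):

```python
import math

# Perplexity = exp(mean negative log-likelihood over tokens).
# The per-token probabilities below are made-up illustrative values.
token_probs = [0.10, 0.05, 0.30, 0.02, 0.15]

nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
perplexity = math.exp(nll)
print(f"perplexity = {perplexity:.2f}")
```
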
# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | - | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | - | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | - | Unverified
4 | R-Transformer | Test perplexity | 84.38 | - | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | - | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | - | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | - | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | - | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | - | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bits per character (BPC) | 1.67 | - | Unverified
2 | Hypernetworks | Bits per character (BPC) | 1.34 | - | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bits per character (BPC) | 1.33 | - | Unverified
4 | LN HM-LSTM | Bits per character (BPC) | 1.32 | - | Unverified
5 | ByteNet | Bits per character (BPC) | 1.31 | - | Unverified
6 | Recurrent Highway Networks | Bits per character (BPC) | 1.27 | - | Unverified
7 | Large FS-LSTM-4 | Bits per character (BPC) | 1.25 | - | Unverified
8 | Large mLSTM | Bits per character (BPC) | 1.24 | - | Unverified
9 | AWD-LSTM (3 layers) | Bits per character (BPC) | 1.23 | - | Unverified
10 | Cluster-Former (#C=512) | Bits per character (BPC) | 1.22 | - | Unverified
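
Bits per character (BPC), used in the table above, is the character-level counterpart of perplexity: the average negative base-2 log-probability per character, where lower is again better. A minimal sketch of the conversion from an average cross-entropy loss in nats to BPC (the loss value is a made-up number):

```python
import math

# BPC = average negative log2-probability per character,
# i.e. cross-entropy in nats divided by ln(2).
# The loss value below is a made-up illustrative number.
avg_char_nll_nats = 0.85  # mean -ln p(char) over a held-out set

bpc = avg_char_nll_nats / math.log(2)
print(f"BPC = {bpc:.2f}")

# The implied per-character perplexity is 2 ** BPC.
print(f"per-character perplexity = {2 ** bpc:.2f}")
```
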
# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | - | Unverified
2 | OPT 125M | Test perplexity | 32.26 | - | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | - | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | - | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | - | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | - | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | - | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | - | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | - | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | - | Unverified
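
As a rough sketch of how a claimed test-perplexity number like those above could be checked, here is one way to score a causal language model with the Hugging Face transformers library. The checkpoint name and the evaluation text are illustrative assumptions; the tables do not specify which checkpoints, corpora, or tokenizations the claims used.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative checkpoint; the table rows do not name exact checkpoints.
name = "EleutherAI/gpt-neo-125M"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)
model.eval()

# Made-up evaluation text; a real check would score the benchmark's test set.
text = "Language models assign probabilities to sequences of tokens."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # With labels set, the model returns the mean cross-entropy in nats.
    loss = model(**inputs, labels=inputs["input_ids"]).loss

print(f"perplexity = {math.exp(loss.item()):.2f}")
```

Note that perplexity numbers are only comparable under matching test text and tokenization; across different tokenizers the values in a table like this cannot be compared directly.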