SOTAVerified

Language Modelling

A language model is a model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on larger datasets (frequently using words scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as word n-gram language models.

Source: Wikipedia
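
For readers unfamiliar with the n-gram models mentioned above, here is a minimal, self-contained sketch of a word bigram model. The toy corpus and add-one smoothing are our own illustrative choices, not anything described on this page:

    # Toy word bigram (2-gram) language model: count word pairs in a tiny
    # corpus and estimate P(word | previous word) with add-one smoothing.
    from collections import Counter

    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    unigram_counts = Counter(corpus)
    bigram_counts = Counter(zip(corpus, corpus[1:]))
    vocab_size = len(set(corpus))

    def bigram_prob(prev: str, word: str) -> float:
        """P(word | prev) with add-one (Laplace) smoothing."""
        return (bigram_counts[(prev, word)] + 1) / (unigram_counts[prev] + vocab_size)

    print(bigram_prob("the", "cat"))  # seen pair: relatively high probability
    print(bigram_prob("the", "on"))   # unseen pair: small but nonzero

Neural language models replace these count tables with learned parameters, but the task is the same: assign a probability to the next token given the context.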

Papers

Showing 1401–1450 of 17,610 papers

Title | Status | Hype
RARR: Researching and Revising What Language Models Say, Using Language Models | Code | 1
FreeEval: A Modular Framework for Trustworthy and Efficient Evaluation of Large Language Models | Code | 1
FontCLIP: A Semantic Typography Visual-Language Model for Multilingual Font Applications | Code | 1
FonBund: A Library for Combining Cross-lingual Phonological Segment Data | Code | 1
Fool Your (Vision and) Language Model With Embarrassingly Simple Permutations | Code | 1
FOLIO: Natural Language Reasoning with First-Order Logic | Code | 1
FocusLLM: Precise Understanding of Long Context by Dynamic Condensing | Code | 1
Follow-Up Differential Descriptions: Language Models Resolve Ambiguities for Image Classification | Code | 1
Forcing Diffuse Distributions out of Language Models | Code | 1
AD-DROP: Attribution-Driven Dropout for Robust Language Model Fine-Tuning | Code | 1
Fly-Swat or Cannon? Cost-Effective Language Model Choice via Meta-Modeling | Code | 1
Attribution Analysis Meets Model Editing: Advancing Knowledge Correction in Vision Language Models with VisEdit | Code | 1
Fluent dreaming for language models | Code | 1
FOCUS: Effective Embedding Initialization for Monolingual Specialization of Multilingual Models | Code | 1
Forecasting Future World Events with Neural Networks | Code | 1
Frequency Explains the Inverse Correlation of Large Language Models' Size, Training Data Amount, and Surprisal's Fit to Reading Times | Code | 1
LawInstruct: A Resource for Studying Language Model Adaptation to the Legal Domain | Code | 1
Accelerating Vision-Language Pretraining with Free Language Modeling | Code | 1
A Multi-Task Semantic Decomposition Framework with Task-specific Pre-training for Few-Shot NER | Code | 1
A Multi-Task Benchmark for Korean Legal Language Understanding and Judgement Prediction | Code | 1
Flacuna: Unleashing the Problem Solving Power of Vicuna using FLAN Fine-Tuning | Code | 1
Enhancing Monocular 3D Scene Completion with Diffusion Model | Code | 1
FIRE: Fact-checking with Iterative Retrieval and Verification | Code | 1
FinVis-GPT: A Multimodal Large Language Model for Financial Chart Analysis | Code | 1
A Multimodal In-Context Tuning Approach for E-Commerce Product Description Generation | Code | 1
MGeo: Multi-Modal Geographic Pre-Training Method | Code | 1
Accelerating Toeplitz Neural Network with Constant-time Inference Complexity | Code | 1
ADCNet: a unified framework for predicting the activity of antibody-drug conjugates | Code | 1
A Multi-Modal Context Reasoning Approach for Conditional Inference on Joint Textual and Visual Clues | Code | 1
FineZip: Pushing the Limits of Large Language Models for Practical Lossless Text Compression | Code | 1
Fine-Tuning with Divergent Chains of Thought Boosts Reasoning Through Self-Correction in Language Models | Code | 1
PAINT: Paying Attention to INformed Tokens to Mitigate Hallucination in Large Vision-Language Model | Code | 1
FLEX: Unifying Evaluation for Few-Shot NLP | Code | 1
From Allies to Adversaries: Manipulating LLM Tool-Calling through Adversarial Injection | Code | 1
Finetuning Large Language Model for Personalized Ranking | Code | 1
AdaVAE: Exploring Adaptive GPT-2s in Variational Auto-Encoders for Language Modeling | Code | 1
Fine-tuning Large Language Models for Adaptive Machine Translation | Code | 1
Fine-Tuning InstructPix2Pix for Advanced Image Colorization | Code | 1
Fine-Tuning Language Models via Epistemic Neural Networks | Code | 1
Fine-Tuning Pre-Trained Language Models Effectively by Optimizing Subnetworks Adaptively | Code | 1
Fine-tuning a Large Language Model for Automating Computational Fluid Dynamics Simulations | Code | 1
Fine-Tuning CLIP's Last Visual Projector: A Few-Shot Cornucopia | Code | 1
A Multi-Granularity-Aware Aspect Learning Model for Multi-Aspect Dense Retrieval | Code | 1
A Multi-Grained Self-Interpretable Symbolic-Neural Model For Single/Multi-Labeled Text Classification | Code | 1
Safety Tax: Safety Alignment Makes Your Large Reasoning Models Less Reasonable | Code | 1
Fine-Tuning Discrete Diffusion Models with Policy Gradient Methods | Code | 1
Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self-Training Approach | Code | 1
Fine-grained Audible Video Description | Code | 1
Find What You Want: Learning Demand-conditioned Object Attribute Space for Demand-driven Navigation | Code | 1
AMR Parsing via Graph-Sequence Iterative Inference | Code | 1
Page 29 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | - | Unverified
2 | GRU | Validation perplexity | 53.78 | - | Unverified
3 | LSTM | Validation perplexity | 52.73 | - | Unverified
4 | LSTM | Test perplexity | 48.7 | - | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | - | Unverified
6 | TCN | Test perplexity | 45.19 | - | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | - | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | - | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | - | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | - | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | - | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | - | Unverified
4 | R-Transformer | Test perplexity | 84.38 | - | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | - | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | - | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | - | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | - | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | - | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | - | Unverified
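
The two tables above report perplexity, the standard intrinsic language-modelling metric: the exponential of the average negative log-likelihood the model assigns to held-out text, so lower is better. A minimal sketch of the computation (the per-token probabilities are invented for illustration):

    # Perplexity = exp(mean negative log-likelihood over held-out tokens).
    import math

    def perplexity(token_probs: list[float]) -> float:
        """token_probs: the model's probability for each observed token."""
        nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
        return math.exp(nll)

    # Hypothetical per-token probabilities from some model:
    print(perplexity([0.10, 0.20, 0.05, 0.15]))  # ≈ 9.0; lower is better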

# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bits per Character (BPC) | 1.67 | - | Unverified
2 | Hypernetworks | Bits per Character (BPC) | 1.34 | - | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bits per Character (BPC) | 1.33 | - | Unverified
4 | LN HM-LSTM | Bits per Character (BPC) | 1.32 | - | Unverified
5 | ByteNet | Bits per Character (BPC) | 1.31 | - | Unverified
6 | Recurrent Highway Networks | Bits per Character (BPC) | 1.27 | - | Unverified
7 | Large FS-LSTM-4 | Bits per Character (BPC) | 1.25 | - | Unverified
8 | Large mLSTM | Bits per Character (BPC) | 1.24 | - | Unverified
9 | AWD-LSTM (3 layers) | Bits per Character (BPC) | 1.23 | - | Unverified
10 | Cluster-Former (#C=512) | Bits per Character (BPC) | 1.22 | - | Unverified
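
The table above uses bits per character (BPC) rather than word-level perplexity: the average negative log base 2 probability the model assigns to each character, the usual metric for character-level benchmarks. It relates to perplexity via perplexity = 2^BPC. A brief sketch, with probabilities again invented for illustration:

    # Bits per character = mean negative log2 probability per character.
    import math

    def bits_per_character(char_probs: list[float]) -> float:
        return -sum(math.log2(p) for p in char_probs) / len(char_probs)

    print(bits_per_character([0.5, 0.25, 0.5]))  # (1 + 2 + 1) / 3 ≈ 1.33 BPC

    # Relation to perplexity: a BPC of 1.22 (the best entry above)
    # corresponds to a per-character perplexity of 2 ** 1.22 ≈ 2.33.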

# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | - | Unverified
2 | OPT 125M | Test perplexity | 32.26 | - | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | - | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | - | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | - | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | - | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | - | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | - | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | - | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | - | Unverified