SOTAVerified

Language Modelling

A language model is a model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on larger datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as the word n-gram language model.

Source: Wikipedia
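
A word n-gram model, as referenced above, estimates each word's probability from counts of short word sequences in a corpus. Below is a minimal sketch of a bigram model with add-one smoothing; the toy corpus and all identifiers are illustrative assumptions, not taken from any listed paper.

```python
from collections import Counter
import math

# Minimal word-bigram language model with add-one (Laplace) smoothing.
# The toy corpus is an assumption, for illustration only.
corpus = "the cat sat on the mat . the dog sat on the log .".split()

unigram_counts = Counter(corpus)
bigram_counts = Counter(zip(corpus, corpus[1:]))
vocab_size = len(unigram_counts)

def bigram_prob(prev: str, word: str) -> float:
    """P(word | prev) with add-one smoothing over the vocabulary."""
    return (bigram_counts[(prev, word)] + 1) / (unigram_counts[prev] + vocab_size)

def sentence_log_prob(words: list[str]) -> float:
    """Sum of log P(w_i | w_{i-1}) across the sentence."""
    return sum(math.log(bigram_prob(p, w)) for p, w in zip(words, words[1:]))

print(sentence_log_prob("the cat sat on the log".split()))
```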

Papers

Showing 10651–10700 of 17610 papers

Title | Status | Hype
Quantifying Long Range Dependence in Language and User Behavior to improve RNNs |  | 0
Quantifying reliance on external information over parametric knowledge during Retrieval Augmented Generation (RAG) using mechanistic analysis |  | 0
Quantifying Semantics using Complex Network Analysis |  | 0
Quantifying the Effectiveness of Student Organization Activities using Natural Language Processing |  | 0
Quantifying the Role of Textual Predictability in Automatic Speech Recognition |  | 0
Quantifying Uncertainties in Natural Language Processing Tasks |  | 0
Quantifying Uncertainty in Answers from any Language Model and Enhancing their Trustworthiness |  | 0
Quantized-Dialog Language Model for Goal-Oriented Conversational Systems |  | 0
Quantized Embedding Vectors for Controllable Diffusion Language Models |  | 0
Quantized Neural Network Inference with Precision Batching |  | 0
Quantized Transformer Language Model Implementations on Edge Devices |  | 0
Quantum-Enhanced Parameter-Efficient Learning for Typhoon Trajectory Forecasting |  | 0
Quantum Graph Transformer for NLP Sentiment Classification |  | 0
Quantum Language Model with Entanglement Embedding for Question Answering |  | 0
Quantum Natural Language Processing on Near-Term Quantum Computers |  | 0
Quantum State Preparation via Large-Language-Model-Driven Evolution |  | 0
Quantum Transfer Learning for Acceptability Judgements |  | 0
QUART-Online: Latency-Free Large Multimodal Language Model for Quadruped Robot Learning |  | 0
QueEn: A Large Language Model for Quechua-English Translation |  | 0
Query-Aware Learnable Graph Pooling Tokens as Prompt for Large Language Models |  | 0
Query-based summarization using MDL principle |  | 0
Query Expansion Using Contextual Clue Sampling with Language Models |  | 0
Query Generation with External Knowledge for Dense Retrieval |  | 0
Querying as Prompt: Parameter-Efficient Learning for Multimodal Language Model |  | 0
Query of CC: Unearthing Large Scale Domain-Specific Knowledge from Public Corpora |  | 0
Query Performance Explanation through Large Language Model for HTAP Systems |  | 0
Query Rewriting for Retrieval-Augmented Large Language Models |  | 0
QuesBELM: A BERT based Ensemble Language Model for Natural Questions |  | 0
QuesNet: A Unified Representation for Heterogeneous Test Questions |  | 0
Question Answering and Question Generation for Finnish |  | 0
Question Answering based Clinical Text Structuring Using Pre-trained Language Model |  | 0
Question Answering over Knowledge Base using Language Model Embeddings |  | 0
Question Aware Vision Transformer for Multimodal Reasoning |  | 0
Question-focused Summarization by Decomposing Articles into Facts and Opinions and Retrieving Entities |  | 0
Quick Dense Retrievers Consume KALE: Post Training Kullback Leibler Alignment of Embeddings for Asymmetrical dual encoders |  | 0
Quranic Verses Semantic Relatedness Using AraBERT |  | 0
QURIOUS: Question Generation Pretraining for Text Generation |  | 0
QWENDY: Gene Regulatory Network Inference Enhanced by Large Language Model and Transformer |  | 0
QwenLong-CPRS: Towards ∞-LLMs with Dynamic Context Optimization |  | 0
Qwen vs. Gemma Integration with Whisper: A Comparative Study in Multilingual SpeechLLM Systems |  | 0
R1dacted: Investigating Local Censorship in DeepSeek's R1 Language Model |  | 0
R2GenCSR: Retrieving Context Samples for Large Language Model based X-ray Medical Report Generation |  | 0
R2H: Building Multimodal Navigation Helpers that Respond to Help Requests |  | 0
R^3AG: First Workshop on Refined and Reliable Retrieval Augmented Generation |  | 0
R^3Mem: Bridging Memory Retention and Retrieval via Reversible Compression |  | 0
RACER: Rich Language-Guided Failure Recovery Policies for Imitation Learning |  | 0
Racing Thoughts: Explaining Contextualization Errors in Large Language Models |  | 0
Radar Spectra-Language Model for Automotive Scene Parsing |  | 0
RadBARTsum: Domain Specific Adaption of Denoising Sequence-to-Sequence Models for Abstractive Radiology Report Summarization |  | 0
RadFlag: A Black-Box Hallucination Detection Method for Medical Vision Language Models |  | 0

Page 214 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 |  | Unverified
2 | GRU | Validation perplexity | 53.78 |  | Unverified
3 | LSTM | Validation perplexity | 52.73 |  | Unverified
4 | LSTM | Test perplexity | 48.7 |  | Unverified
5 | Temporal CNN | Test perplexity | 45.2 |  | Unverified
6 | TCN | Test perplexity | 45.19 |  | Unverified
7 | GCNN-8 | Test perplexity | 44.9 |  | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 |  | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 |  | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 |  | Unverified
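
Perplexity, the metric used in the tables above and below, is the exponentiated average negative log-likelihood per token; lower is better. The sketch below shows the computation; the per-token probabilities are invented purely for illustration.

```python
import math

# Perplexity = exp( -(1/N) * sum_i log p(w_i | context) ).
# These per-token probabilities are made-up values for illustration.
token_probs = [0.10, 0.25, 0.05, 0.40, 0.12]

avg_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
perplexity = math.exp(avg_nll)
print(f"perplexity = {perplexity:.2f}")
```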

# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 |  | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 |  | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 |  | Unverified
4 | R-Transformer | Test perplexity | 84.38 |  | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 |  | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 |  | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 |  | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 |  | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 |  | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bits per character (BPC) | 1.67 |  | Unverified
2 | Hypernetworks | Bits per character (BPC) | 1.34 |  | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bits per character (BPC) | 1.33 |  | Unverified
4 | LN HM-LSTM | Bits per character (BPC) | 1.32 |  | Unverified
5 | ByteNet | Bits per character (BPC) | 1.31 |  | Unverified
6 | Recurrent Highway Networks | Bits per character (BPC) | 1.27 |  | Unverified
7 | Large FS-LSTM-4 | Bits per character (BPC) | 1.25 |  | Unverified
8 | Large mLSTM | Bits per character (BPC) | 1.24 |  | Unverified
9 | AWD-LSTM (3 layers) | Bits per character (BPC) | 1.23 |  | Unverified
10 | Cluster-Former (#C=512) | Bits per character (BPC) | 1.22 |  | Unverified
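
The character-level results above are reported in bits per character (BPC): the average cross-entropy per character, measured in bits rather than nats. Dividing a per-character negative log-likelihood in nats by ln 2 converts it to BPC, as in this sketch (the NLL value is assumed for illustration):

```python
import math

# BPC = -(1/N) * sum_i log2 p(c_i | context) = NLL_nats / ln(2).
# A model at 1.23 BPC assigns each character probability 2**-1.23 ~ 0.43 on average.
nll_nats_per_char = 0.8526  # assumed average NLL in nats, for illustration only
bpc = nll_nats_per_char / math.log(2)
print(f"{bpc:.2f} BPC")  # prints "1.23 BPC"
```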

# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 |  | Unverified
2 | OPT 125M | Test perplexity | 32.26 |  | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 |  | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 |  | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 |  | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 |  | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 |  | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 |  | Unverified
9 | Transformer 125M | Test perplexity | 10.7 |  | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 |  | Unverified