SOTAVerified

Language Modelling

A language model is a probabilistic model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on large datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had in turn superseded purely statistical models such as the word n-gram language model.

Source: Wikipedia
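
For contrast with the transformer-based LLMs described above, the following is a minimal sketch of the purely statistical approach: a word bigram model with add-one (Laplace) smoothing. The toy corpus, the smoothing choice, and the function names (bigram_logprob, sentence_logprob) are illustrative assumptions, not taken from any paper or benchmark listed on this page.

```python
from collections import Counter
import math

# Toy training corpus (illustrative only).
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

unigrams = Counter(corpus)                    # word counts
bigrams = Counter(zip(corpus, corpus[1:]))    # adjacent word-pair counts
vocab_size = len(unigrams)

def bigram_logprob(prev: str, word: str) -> float:
    """log P(word | prev), smoothed so unseen bigrams get nonzero mass."""
    return math.log((bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size))

def sentence_logprob(words: list[str]) -> float:
    """Sum the conditional log-probabilities of each word given its predecessor."""
    return sum(bigram_logprob(p, w) for p, w in zip(words, words[1:]))

print(sentence_logprob("the cat sat on the rug".split()))
```

Transformer-based LLMs replace the count table with a learned network conditioned on the whole preceding context, but they are evaluated the same way: by the probability they assign to held-out text, reported as perplexity in the benchmark tables below.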

Papers

Showing 8251–8300 of 17610 papers

Title | Status | Hype
Towards Trustworthy Knowledge Graph Reasoning: An Uncertainty Aware Perspective | – | 0
Towards Understanding Multi-Round Large Language Model Reasoning: Approximability, Learnability and Generalizability | – | 0
Towards Understanding Task-agnostic Debiasing Through the Lenses of Intrinsic Bias and Forgetfulness | – | 0
Towards Understanding the Influence of Reward Margin on Preference Model Performance | – | 0
Towards Unified Facial Action Unit Recognition Framework by Large Language Models | – | 0
Towards Unified Prompt Tuning for Few-shot Learning | – | 0
Towards Unified Prompt Tuning for Few-shot Text Classification | – | 0
Towards Unifying Multi-Lingual and Cross-Lingual Summarization | – | 0
Towards Universality: Studying Mechanistic Similarity Across Language Model Architectures | – | 0
Towards Unsupervised Image Captioning with Shared Multimodal Embeddings | – | 0
Towards Unsupervised Speech-to-Text Translation | – | 0
Towards Using Context-Dependent Symbols in CTC Without State-Tying Decision Trees | – | 0
Towards Using EEG to Improve ASR Accuracy | – | 0
Toward Sustainable GenAI using Generation Directives for Carbon-Friendly Large Language Model Inference | – | 0
Towards Visual-Prompt Temporal Answering Grounding in Medical Instructional Video | – | 0
Towards Visual Syntactical Understanding | – | 0
Towards Visual Text Grounding of Multimodal Large Language Model | – | 0
Towards Wireless Native Big AI Model: The Mission and Approach Differ From Large Language Model | – | 0
Towards Zero-Shot Code-Switched Speech Recognition | – | 0
Towards Zero-shot Human-Object Interaction Detection via Vision-Language Integration | – | 0
Towards Zero-shot Language Modeling | – | 0
Toward Tree Substitution Grammars with Latent Annotations | – | 0
Toward Tweets Normalization Using Maximum Entropy | – | 0
Toward Understanding In-context vs. In-weight Learning | – | 0
Toxicity Detection with Generative Prompt-based Inference | – | 0
Toxic Subword Pruning for Dialogue Response Generation on Large Language Models | – | 0
TPC: Cross-Temporal Prediction Connection for Vision-Language Model Hallucination Reduction | – | 0
TPD: Enhancing Student Language Model Reasoning via Principle Discovery and Guidance | – | 0
TPPoet: Transformer-Based Persian Poem Generation using Minimal Data and Advanced Decoding Techniques | – | 0
TP-RAG: Benchmarking Retrieval-Augmented Large Language Model Agents for Spatiotemporal-Aware Travel Planning | – | 0
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | – | 0
TPTU-v2: Boosting Task Planning and Tool Usage of Large Language Model-based Agents in Real-world Systems | – | 0
Trace and Edit Relation Associations in GPT | – | 0
Tracing Influence at Scale: A Contrastive Learning Approach to Linking Public Comments and Regulator Responses | – | 0
Tracing the Genealogies of Ideas with Large Language Model Embeddings | – | 0
Tracking Changes in ESG Representation: Initial Investigations in UK Annual Reports | – | 0
Tracking Universal Features Through Fine-Tuning and Model Merging | – | 0
TrackVLA: Embodied Visual Tracking in the Wild | – | 0
Train & Constrain: Phonologically Informed Tongue-Twister Generation from Topics and Paraphrases | – | 0
TrainerAgent: Customizable and Efficient Model Training through LLM-Powered Multi-Agent System | – | 0
Training a Bilingual Language Model by Mapping Tokens onto a Shared Character Space | – | 0
Training a code-switching language model with monolingual data | – | 0
Training and Analysing Deep Recurrent Neural Networks | – | 0
Training a T5 Using Lab-sized Resources | – | 0
Training a Tokenizer for Free with Private Federated Learning | – | 0
Training a Vision Language Model as Smartphone Assistant | – | 0
Training Compact Models for Low Resource Entity Tagging using Pre-trained Language Models | – | 0
Training Data for Large Language Model | – | 0
Training Data is More Valuable than You Think: A Simple and Effective Method by Retrieving from Training Data | – | 0
Training Deep Networks with Stochastic Gradient Normalized by Layerwise Adaptive Second Moments | – | 0
Page 166 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | – | Unverified
2 | GRU | Validation perplexity | 53.78 | – | Unverified
3 | LSTM | Validation perplexity | 52.73 | – | Unverified
4 | LSTM | Test perplexity | 48.7 | – | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | – | Unverified
6 | TCN | Test perplexity | 45.19 | – | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | – | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | – | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | – | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | – | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | – | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | – | Unverified
4 | R-Transformer | Test perplexity | 84.38 | – | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | – | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | – | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | – | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | – | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | – | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bits per Character (BPC) | 1.67 | – | Unverified
2 | Hypernetworks | Bits per Character (BPC) | 1.34 | – | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bits per Character (BPC) | 1.33 | – | Unverified
4 | LN HM-LSTM | Bits per Character (BPC) | 1.32 | – | Unverified
5 | ByteNet | Bits per Character (BPC) | 1.31 | – | Unverified
6 | Recurrent Highway Networks | Bits per Character (BPC) | 1.27 | – | Unverified
7 | Large FS-LSTM-4 | Bits per Character (BPC) | 1.25 | – | Unverified
8 | Large mLSTM | Bits per Character (BPC) | 1.24 | – | Unverified
9 | AWD-LSTM (3 layers) | Bits per Character (BPC) | 1.23 | – | Unverified
10 | Cluster-Former (#C=512) | Bits per Character (BPC) | 1.22 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | – | Unverified
2 | OPT 125M | Test perplexity | 32.26 | – | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | – | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | – | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | – | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | – | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | – | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | – | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | – | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | – | Unverified
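
As a key for reading these tables: validation/test perplexity and bits per character (BPC) are both transformations of a model's average negative log-likelihood (NLL) on held-out text, and lower is better for both. Below is a minimal sketch of the conventional definitions; the loss values are invented purely for illustration.

```python
import math

def perplexity(nll_nats_per_token: list[float]) -> float:
    """Perplexity = exp(mean NLL in nats per token); lower is better."""
    return math.exp(sum(nll_nats_per_token) / len(nll_nats_per_token))

def bits_per_character(total_nll_nats: float, num_chars: int) -> float:
    """BPC = total NLL converted from nats to bits, divided by character count."""
    return total_nll_nats / (num_chars * math.log(2))

token_losses = [3.6, 3.8, 3.5, 3.7]   # per-token NLL in nats (made up)
print(perplexity(token_losses))        # exp(3.65) ≈ 38.5
char_losses = [0.9, 0.8, 0.85, 0.95]   # per-character NLL in nats (made up)
print(bits_per_character(sum(char_losses), len(char_losses)))  # ≈ 1.26 BPC
```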