SOTAVerified

Language Modelling

A language model is a probabilistic model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on large datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as word n-gram language models.
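
As a minimal illustration of the purely statistical approach mentioned above, the sketch below builds a word bigram (2-gram) model with add-one smoothing. The toy corpus and every name in the code are illustrative only, not taken from any particular library:

```python
from collections import defaultdict

# Toy corpus for illustration; real n-gram models are trained on large corpora.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count bigrams and the contexts they extend.
bigram_counts = defaultdict(int)
context_counts = defaultdict(int)
for prev, word in zip(corpus, corpus[1:]):
    bigram_counts[(prev, word)] += 1
    context_counts[prev] += 1

vocab = set(corpus)

def bigram_prob(prev: str, word: str) -> float:
    """P(word | prev) with add-one (Laplace) smoothing."""
    return (bigram_counts[(prev, word)] + 1) / (context_counts[prev] + len(vocab))

print(bigram_prob("the", "cat"))  # seen bigram: relatively high probability
print(bigram_prob("cat", "rug"))  # unseen bigram: smoothed, small but nonzero
```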

Source: Wikipedia

Papers

Showing 151–200 of 17,610 papers

Title | Status | Hype
MegaScale: Scaling Large Language Model Training to More Than 10,000 GPUs | Code | 5
Repetition Improves Language Model Embeddings | Code | 5
MobileVLM V2: Faster and Stronger Baseline for Vision Language Model | Code | 5
Audio Flamingo: A Novel Audio Language Model with Few-Shot Learning and Dialogue Abilities | Code | 5
MEIA: Multimodal Embodied Perception and Interaction in Unknown Environments | Code | 5
Executable Code Actions Elicit Better LLM Agents | Code | 5
Dolma: an Open Corpus of Three Trillion Tokens for Language Model Pretraining Research | Code | 5
Large Language Model based Multi-Agents: A Survey of Progress and Challenges | Code | 5
Unlocking Efficiency in Large Language Model Inference: A Comprehensive Survey of Speculative Decoding | Code | 5
DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models | Code | 5
Exploring Large Language Model based Intelligent Agents: Definitions, Methods, and Prospects | Code | 5
StarVector: Generating Scalable Vector Graphics Code from Images and Text | Code | 5
PowerInfer: Fast Large Language Model Serving with a Consumer-grade GPU | Code | 5
CogAgent: A Visual Language Model for GUI Agents | Code | 5
Weakly Supervised Detection of Hallucinations in LLM Activations | Code | 5
CogVLM: Visual Expert for Pretrained Language Models | Code | 5
Zephyr: Direct Distillation of LM Alignment | Code | 5
CacheGen: KV Cache Compression and Streaming for Fast Large Language Model Serving | Code | 5
Ferret: Refer and Ground Anything Anywhere at Any Granularity | Code | 5
Efficient Streaming Language Models with Attention Sinks | Code | 5
DeepSpeed-VisualChat: Multi-Round Multi-Image Interleave Chat via Multi-Modal Causal Attention | Code | 5
The Rise and Potential of Large Language Model Based Agents: A Survey | Code | 5
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | Code | 5
Chatlaw: A Multi-Agent Collaborative Legal Assistant with Knowledge Graph Enhanced Mixture-of-Experts Large Language Model | Code | 5
Tree of Thoughts: Deliberate Problem Solving with Large Language Models | Code | 5
CodeGen2: Lessons for Training LLMs on Programming and Natural Languages | Code | 5
Assessing Language Model Deployment with Risk Cards | Code | 5
LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention | Code | 5
FlexGen: High-Throughput Generative Inference of Large Language Models with a Single GPU | Code | 5
Speak Foreign Languages with Your Own Voice: Cross-Lingual Neural Codec Language Modeling | Code | 5
Self-Instruct: Aligning Language Models with Self-Generated Instructions | Code | 5
Fast Inference from Transformers via Speculative Decoding | Code | 5
InstructPix2Pix: Learning to Follow Image Editing Instructions | Code | 5
LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale | Code | 5
OPT: Open Pre-trained Transformer Language Models | Code | 5
WeNet 2.0: More Productive End-to-End Speech Recognition Toolkit | Code | 5
GigaAM: Efficient Self-Supervised Learner for Speech Recognition | Code | 4
ImgEdit: A Unified Image Editing Dataset and Benchmark | Code | 4
Partition Generative Modeling: Masked Modeling Without Masks | Code | 4
Scaling Up Biomedical Vision-Language Models: Fine-Tuning, Instruction Tuning, and Multi-Modal Learning | Code | 4
lmgame-Bench: How Good are LLMs at Playing Games? | Code | 4
VITA-Audio: Fast Interleaved Cross-Modal Token Generation for Efficient Large Speech-Language Model | Code | 4
Fin-R1: A Large Language Model for Financial Reasoning through Reinforcement Learning | Code | 4
Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models | Code | 4
R1-Onevision: An Open-Source Multimodal Large Language Model Capable of Deep Reasoning | Code | 4
Steel-LLM: From Scratch to Open Source -- A Personal Journey in Building a Chinese-Centric LLM | Code | 4
Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach | Code | 4
LLMDet: Learning Strong Open-Vocabulary Object Detectors under the Supervision of Large Language Models | Code | 4
Beyond Reward Hacking: Causal Rewards for Large Language Model Alignment | Code | 4
Tarsier2: Advancing Large Vision-Language Models from Detailed Video Description to Comprehensive Video Understanding | Code | 4
Page 4 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | – | Unverified
2 | GRU | Validation perplexity | 53.78 | – | Unverified
3 | LSTM | Validation perplexity | 52.73 | – | Unverified
4 | LSTM | Test perplexity | 48.7 | – | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | – | Unverified
6 | TCN | Test perplexity | 45.19 | – | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | – | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | – | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | – | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | – | Unverified
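
For context, test (or validation) perplexity is the exponential of the model's average per-token negative log-likelihood, so lower is better. A minimal sketch of the computation, using made-up token probabilities rather than values from these tables:

```python
import math

# Probabilities a model assigned to each correct next token
# (made-up values; not taken from the tables on this page).
token_probs = [0.20, 0.10, 0.40, 0.25, 0.05]

# Average negative log-likelihood per token, then exponentiate.
nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
perplexity = math.exp(nll)

print(f"perplexity = {perplexity:.2f}")  # 1.0 would be a perfect model
```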

# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | – | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | – | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | – | Unverified
4 | R-Transformer | Test perplexity | 84.38 | – | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | – | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | – | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | – | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | – | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | – | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bits per character (BPC) | 1.67 | – | Unverified
2 | Hypernetworks | Bits per character (BPC) | 1.34 | – | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bits per character (BPC) | 1.33 | – | Unverified
4 | LN HM-LSTM | Bits per character (BPC) | 1.32 | – | Unverified
5 | ByteNet | Bits per character (BPC) | 1.31 | – | Unverified
6 | Recurrent Highway Networks | Bits per character (BPC) | 1.27 | – | Unverified
7 | Large FS-LSTM-4 | Bits per character (BPC) | 1.25 | – | Unverified
8 | Large mLSTM | Bits per character (BPC) | 1.24 | – | Unverified
9 | AWD-LSTM (3 layers) | Bits per character (BPC) | 1.23 | – | Unverified
10 | Cluster-Former (#C=512) | Bits per character (BPC) | 1.22 | – | Unverified
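
Bits per character (BPC), used in the table above, is the same cross-entropy measured in base 2 at the character level, and corresponds to a per-character perplexity of 2^BPC. A short sketch, assuming a per-character loss already computed in nats (the loss value here is made up):

```python
import math

# Average cross-entropy per character in nats (made-up value).
loss_nats = 0.86

# Convert nats to bits: divide by ln(2).
bpc = loss_nats / math.log(2)
print(f"BPC = {bpc:.3f}")

# Equivalent per-character perplexity:
print(f"per-character perplexity = {2 ** bpc:.3f}")
```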

# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | – | Unverified
2 | OPT 125M | Test perplexity | 32.26 | – | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | – | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | – | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | – | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | – | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | – | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | – | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | – | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | – | Unverified