SOTAVerified

Language Modelling

A language model is a model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on larger datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded the purely statistical models, such as the word n-gram language model.

Source: Wikipedia
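
The excerpt above mentions word n-gram models as the purely statistical predecessors of today's neural language models. As a minimal illustration of that idea only: a toy bigram model with add-one smoothing, where the corpus, function names, and smoothing choice are assumptions made for this sketch and are not drawn from any paper listed below.

```python
from collections import Counter, defaultdict

# Toy word-bigram language model (illustrative assumptions: tiny corpus,
# add-one smoothing, no sentence-boundary handling).
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

unigram_counts = Counter(corpus)
bigram_counts = defaultdict(Counter)
for prev, cur in zip(corpus, corpus[1:]):
    bigram_counts[prev][cur] += 1

vocab_size = len(set(corpus))

def bigram_prob(prev, cur):
    # P(cur | prev) with add-one (Laplace) smoothing.
    return (bigram_counts[prev][cur] + 1) / (unigram_counts[prev] + vocab_size)

# Score a new sequence with the chain rule over bigrams.
sentence = "the dog sat on the mat".split()
prob = 1.0
for prev, cur in zip(sentence, sentence[1:]):
    prob *= bigram_prob(prev, cur)
print(prob)
```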

Papers

Showing 151–200 of 17,610 papers

Title | Status | Hype
SpeechAlign: Aligning Speech Generation to Human Preferences | Code | 5
Show-o2: Improved Native Unified Multimodal Models | Code | 5
4th PVUW MeViS 3rd Place Report: Sa2VA | Code | 5
DeepSpeed-VisualChat: Multi-Round Multi-Image Interleave Chat via Multi-Modal Causal Attention | Code | 5
DeTikZify: Synthesizing Graphics Programs for Scientific Figures and Sketches with TikZ | Code | 5
Exploring Large Language Model based Intelligent Agents: Definitions, Methods, and Prospects | Code | 5
LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention | Code | 5
Self-Instruct: Aligning Language Models with Self-Generated Instructions | Code | 5
StarVector: Generating Scalable Vector Graphics Code from Images and Text | Code | 5
VisionLLM v2: An End-to-End Generalist Multimodal Large Language Model for Hundreds of Vision-Language Tasks | Code | 5
DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models | Code | 5
RLHF Workflow: From Reward Modeling to Online RLHF | Code | 5
Datasets for Large Language Models: A Comprehensive Survey | Code | 5
Large Language Model based Multi-Agents: A Survey of Progress and Challenges | Code | 5
Audio Flamingo: A Novel Audio Language Model with Few-Shot Learning and Dialogue Abilities | Code | 5
Rethinking LLM Language Adaptation: A Case Study on Chinese Mixtral | Code | 5
KBLaM: Knowledge Base augmented Language Model | Code | 5
LAB: Large-Scale Alignment for ChatBots | Code | 5
Randomized Autoregressive Visual Generation | Code | 5
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | Code | 5
MobileVLM V2: Faster and Stronger Baseline for Vision Language Model | Code | 5
Dolma: an Open Corpus of Three Trillion Tokens for Language Model Pretraining Research | Code | 5
R1-Omni: Explainable Omni-Multimodal Emotion Recognition with Reinforcement Learning | Code | 5
Repetition Improves Language Model Embeddings | Code | 5
Sa2VA: Marrying SAM2 with LLaVA for Dense Grounded Understanding of Images and Videos | Code | 5
InstructPix2Pix: Learning to Follow Image Editing Instructions | Code | 5
Assessing Language Model Deployment with Risk Cards | Code | 5
CogAgent: A Visual Language Model for GUI Agents | Code | 5
CogVLM: Visual Expert for Pretrained Language Models | Code | 5
Prometheus 2: An Open Source Language Model Specialized in Evaluating Other Language Models | Code | 5
Codec-SUPERB @ SLT 2024: A lightweight benchmark for neural audio codec models | Code | 5
Executable Code Actions Elicit Better LLM Agents | Code | 5
FlexGen: High-Throughput Generative Inference of Large Language Models with a Single GPU | Code | 5
CodeGen2: Lessons for Training LLMs on Programming and Natural Languages | Code | 5
Interpretable Preferences via Multi-Objective Reward Modeling and Mixture-of-Experts | Code | 5
PowerInfer: Fast Large Language Model Serving with a Consumer-grade GPU | Code | 5
Choices are More Important than Efforts: LLM Enables Efficient Multi-Agent Exploration | Code | 4
Optimizing Prompts for Text-to-Image Generation | Code | 4
Groma: Localized Visual Tokenization for Grounding Multimodal Large Language Models | Code | 4
ChatDoctor: A Medical Chat Model Fine-Tuned on a Large Language Model Meta-AI (LLaMA) Using Medical Domain Knowledge | Code | 4
GLIPv2: Unifying Localization and Vision-Language Understanding | Code | 4
ChatHaruhi: Reviving Anime Character in Reality via Large Language Model | Code | 4
G-LLaVA: Solving Geometric Problem with Multi-Modal Large Language Model | Code | 4
OLMoE: Open Mixture-of-Experts Language Models | Code | 4
GigaAM: Efficient Self-Supervised Learner for Speech Recognition | Code | 4
Generative Representational Instruction Tuning | Code | 4
On the Contribution of Per-ICD Attention Mechanisms to Classify Health Records in Languages with Fewer Resources than English | Code | 4
Osprey: Pixel Understanding with Visual Instruction Tuning | Code | 4
FoundationPose: Unified 6D Pose Estimation and Tracking of Novel Objects | Code | 4
Multimodal Chain-of-Thought Reasoning in Language Models | Code | 4

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | – | Unverified
2 | GRU | Validation perplexity | 53.78 | – | Unverified
3 | LSTM | Validation perplexity | 52.73 | – | Unverified
4 | LSTM | Test perplexity | 48.7 | – | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | – | Unverified
6 | TCN | Test perplexity | 45.19 | – | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | – | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | – | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | – | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | – | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | – | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | – | Unverified
4 | R-Transformer | Test perplexity | 84.38 | – | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | – | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | – | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | – | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | – | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | – | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bit per Character (BPC) | 1.67 | – | Unverified
2 | Hypernetworks | Bit per Character (BPC) | 1.34 | – | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bit per Character (BPC) | 1.33 | – | Unverified
4 | LN HM-LSTM | Bit per Character (BPC) | 1.32 | – | Unverified
5 | ByteNet | Bit per Character (BPC) | 1.31 | – | Unverified
6 | Recurrent Highway Networks | Bit per Character (BPC) | 1.27 | – | Unverified
7 | Large FS-LSTM-4 | Bit per Character (BPC) | 1.25 | – | Unverified
8 | Large mLSTM | Bit per Character (BPC) | 1.24 | – | Unverified
9 | AWD-LSTM (3 layers) | Bit per Character (BPC) | 1.23 | – | Unverified
10 | Cluster-Former (#C=512) | Bit per Character (BPC) | 1.22 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | – | Unverified
2 | OPT 125M | Test perplexity | 32.26 | – | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | – | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | – | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | – | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | – | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | – | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | – | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | – | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | – | Unverified
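
The tables above report either perplexity or bits per character (BPC); lower is better for both, and each is a simple transformation of the model's average negative log-likelihood on held-out text. Below is a minimal sketch of the two conversions, assuming per-token natural-log probabilities from some model are already available; the function names are illustrative, not part of any listed benchmark.

```python
import math

def perplexity(log_probs):
    # log_probs: natural-log probability of each token under the model.
    # Perplexity = exp(average negative log-likelihood per token).
    return math.exp(-sum(log_probs) / len(log_probs))

def bits_per_character(log_probs):
    # Same average negative log-likelihood, per character and in base 2.
    return -sum(log_probs) / (len(log_probs) * math.log(2))

# Sanity check: a uniform model over a 10,000-word vocabulary
# has perplexity 10,000 regardless of sequence length.
uniform = [math.log(1 / 10_000)] * 200
print(perplexity(uniform))          # ~10000.0
print(bits_per_character(uniform))  # ~13.29 bits
```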