SOTAVerified

Language Modelling

A language model is a model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on larger datasets (frequently using words scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded the purely statistical models, such as word n-gram language models.

Source: Wikipedia
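To make the contrast between the model families mentioned above concrete, a word n-gram model simply counts how often each word follows a given context and normalizes the counts into conditional probabilities. Below is a minimal bigram (n = 2) sketch in Python; the toy corpus, function names, and add-alpha smoothing constant are illustrative choices, not taken from any particular system.

```python
from collections import Counter, defaultdict

def train_bigram_lm(corpus, alpha=1.0):
    """Count bigrams and return an add-alpha smoothed conditional probability."""
    unigrams = Counter()
    bigrams = defaultdict(Counter)
    for sentence in corpus:
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        unigrams.update(tokens)
        for prev, curr in zip(tokens, tokens[1:]):
            bigrams[prev][curr] += 1
    vocab = len(unigrams)

    def prob(prev, curr):
        # P(curr | prev) with add-alpha (Laplace) smoothing over the vocabulary
        return (bigrams[prev][curr] + alpha) / (sum(bigrams[prev].values()) + alpha * vocab)

    return prob

# Toy corpus, purely illustrative
p = train_bigram_lm(["the cat sat", "the dog sat", "a cat ran"])
print(p("the", "cat"), p("the", "ran"))  # "cat" is more likely after "the"
```

The smoothing step matters because unseen bigrams would otherwise receive zero probability, making any sentence that contains one impossible under the model.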

Papers

Showing 951–1000 of 17,610 papers

Title | Status | Hype
Towards Visual Text Grounding of Multimodal Large Language Model | - | 0
Unleashing the Power of LLMs in Dense Retrieval with Query Likelihood Modeling | - | 0
Collab-RAG: Boosting Retrieval-Augmented Generation for Complex Question Answering via White-Box and Black-Box LLM Collaboration | Code | 1
'Neural howlround' in large language models: a self-reinforcing bias phenomenon, and a dynamic attenuation solution | - | 0
A Taxonomy of Self-Handover | - | 0
The Dream Within Huang Long Cave: AI-Driven Interactive Narrative for Family Storytelling and Emotional Reflection | - | 0
Large Language Model (LLM) for Software Security: Code Analysis, Malware Analysis, Reverse Engineering | - | 0
CO-Bench: Benchmarking Language Model Agents in Algorithm Search for Combinatorial Optimization | Code | 1
DDPT: Diffusion-Driven Prompt Tuning for Large Language Model Code Generation | - | 0
Thanos: A Block-wise Pruning Algorithm for Efficient Large Language Model Compression | Code | 0
ZeroED: Hybrid Zero-shot Error Detection through Large Language Model Reasoning | - | 0
Hessian of Perplexity for Large Language Models by PyTorch autograd (Open Source) | Code | 1
Large Language Model-Based Knowledge Graph System Construction for Sustainable Development Goals: An AI-Based Speculative Design Perspective | - | 0
Psychological Health Knowledge-Enhanced LLM-based Social Network Crisis Intervention Text Transfer Recognition Method | - | 0
SLOs-Serve: Optimized Serving of Multi-SLO LLMs | - | 0
MSL: Not All Tokens Are What You Need for Tuning LLM as a Recommender | Code | 1
Language Models Are Implicitly Continuous | Code | 1
Toward a digital twin of U.S. Congress | - | 0
MORAL: A Multimodal Reinforcement Learning Framework for Decision Making in Autonomous Laboratories | - | 0
Noise Augmented Fine Tuning for Mitigating Hallucinations in Large Language Models | Code | 0
Shape My Moves: Text-Driven Shape-Aware Synthesis of Human Motions | - | 0
Efficient Dynamic Clustering-Based Document Compression for Retrieval-Augmented-Generation | Code | 1
Beyond the Next Token: Towards Prompt-Robust Zero-Shot Classification via Efficient Multi-Token Prediction | Code | 1
SARLANG-1M: A Benchmark for Vision-Language Modeling in SAR Image Understanding | Code | 1
Distillation and Refinement of Reasoning in Small Language Models for Document Re-ranking | Code | 1
LightPROF: A Lightweight Reasoning Framework for Large Language Model on Knowledge Graph | - | 0
Towards Effective EU E-Participation: The Development of AskThePublic | - | 0
IPA-CHILDES & G2P+: Feature-Rich Resources for Cross-Lingual Phonology and Phonemic Language Modeling | Code | 1
QID: Efficient Query-Informed ViTs in Data-Scarce Regimes for OCR-free Visual Document Understanding | - | 0
Low Rank Factorizations are Indirect Encodings for Deep Neuroevolution | Code | 0
FlowKV: A Disaggregated Inference Framework with Low-Latency KV Cache Transfer and Load-Aware Scheduling | - | 0
Noiser: Bounded Input Perturbations for Attributing Large Language Models | - | 0
JailDAM: Jailbreak Detection with Adaptive Memory for Vision-Language Model | Code | 1
LLM Social Simulations Are a Promising Research Method | - | 0
MG-MotionLLM: A Unified Framework for Motion Comprehension and Generation across Multiple Granularities | Code | 1
A Memory-Augmented LLM-Driven Method for Autonomous Merging of 3D Printing Work Orders | - | 0
Prompt Optimization with Logged Bandit Data | - | 0
Scaling Video-Language Models to 10K Frames via Hierarchical Differential Distillation | Code | 2
STING-BEE: Towards Vision-Language Model for Real-World X-ray Baggage Security Inspection | Code | 1
Design of AI-Powered Tool for Self-Regulation Support in Programming Education | - | 0
Deep learning for music generation. Four approaches and their comparative evaluation | - | 0
When Reasoning Meets Compression: Benchmarking Compressed Large Reasoning Models on Complex Reasoning Tasks | - | 0
TiC-LM: A Web-Scale Benchmark for Time-Continual LLM Pretraining | Code | 1
A Survey of Scaling in Large Language Model Reasoning | - | 0
STPNet: Scale-aware Text Prompt Network for Medical Image Segmentation | Code | 1
Biomedical Question Answering via Multi-Level Summarization on a Local Knowledge Graph | - | 0
Prompt-Reverse Inconsistency: LLM Self-Inconsistency Beyond Generative Randomness and Prompt Paraphrasing | - | 0
TransforMerger: Transformer-based Voice-Gesture Fusion for Robust Human-Robot Communication | - | 0
BioAtt: Anatomical Prior Driven Low-Dose CT Denoising | - | 0
Investigating and Scaling up Code-Switching for Multilingual Language Model Pre-Training | Code | 0
Page 20 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | - | Unverified
2 | GRU | Validation perplexity | 53.78 | - | Unverified
3 | LSTM | Validation perplexity | 52.73 | - | Unverified
4 | LSTM | Test perplexity | 48.7 | - | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | - | Unverified
6 | TCN | Test perplexity | 45.19 | - | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | - | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | - | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | - | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | - | Unverified
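For reading these tables: perplexity is the exponential of the model's average negative log-likelihood per token, so lower is better, and a model guessing uniformly over a vocabulary of V words scores exactly V. A minimal sketch, with made-up per-token probabilities:

```python
import math

def perplexity(token_log_probs):
    """exp of the mean negative log-likelihood (in nats) over the tokens."""
    return math.exp(-sum(token_log_probs) / len(token_log_probs))

# Made-up probabilities a model might assign to each token of a 4-token sentence
log_probs = [math.log(p) for p in (0.2, 0.1, 0.25, 0.05)]
print(perplexity(log_probs))  # ~7.95; equivalently the inverse geometric mean probability
```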
# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | - | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | - | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | - | Unverified
4 | R-Transformer | Test perplexity | 84.38 | - | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | - | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | - | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | - | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | - | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | - | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bits per character (BPC) | 1.67 | - | Unverified
2 | Hypernetworks | Bits per character (BPC) | 1.34 | - | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bits per character (BPC) | 1.33 | - | Unverified
4 | LN HM-LSTM | Bits per character (BPC) | 1.32 | - | Unverified
5 | ByteNet | Bits per character (BPC) | 1.31 | - | Unverified
6 | Recurrent Highway Networks | Bits per character (BPC) | 1.27 | - | Unverified
7 | Large FS-LSTM-4 | Bits per character (BPC) | 1.25 | - | Unverified
8 | Large mLSTM | Bits per character (BPC) | 1.24 | - | Unverified
9 | AWD-LSTM (3 layers) | Bits per character (BPC) | 1.23 | - | Unverified
10 | Cluster-Former (#C=512) | Bits per character (BPC) | 1.22 | - | Unverified
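The table above reports bits per character (BPC) instead of word-level perplexity: the average negative log-likelihood per character, measured in bits. A model with BPC b assigns each character probability 2^-b on average, so its per-character perplexity is 2^b. A small conversion sketch, reusing the first and last BPC values from the table above:

```python
import math

def bits_per_character(total_nll_nats, num_chars):
    """Convert a total negative log-likelihood (in nats) into bits per character."""
    return total_nll_nats / (num_chars * math.log(2))

# A model with BPC b assigns each character probability 2**-b on average,
# so its per-character perplexity is 2**b:
for bpc in (1.67, 1.22):
    print(f"BPC {bpc:.2f} -> per-character perplexity {2**bpc:.2f}")
```

Character-level numbers look far smaller than the word-level perplexities in the other tables only because each prediction step ranges over a small character set rather than a word vocabulary.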
# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | - | Unverified
2 | OPT 125M | Test perplexity | 32.26 | - | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | - | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | - | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | - | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | - | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | - | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | - | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | - | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | - | Unverified