SOTAVerified

Language Modelling

A language model is a probabilistic model of natural language that assigns probabilities to sequences of words. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on large datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as word n-gram language models.

Source: Wikipedia
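As a concrete illustration of the statistical models mentioned above, a word-bigram model predicts each word from its immediate predecessor. The sketch below is a minimal, illustrative implementation (the corpus and function names are invented for the example) using add-one smoothing:

```python
from collections import Counter

def train_bigram(corpus):
    """Count unigrams and bigrams over a tokenized corpus (list of token lists)."""
    unigrams, bigrams = Counter(), Counter()
    for sent in corpus:
        toks = ["<s>"] + sent + ["</s>"]  # sentence boundary markers
        unigrams.update(toks)
        bigrams.update(zip(toks, toks[1:]))
    return unigrams, bigrams

def bigram_prob(unigrams, bigrams, prev, word, vocab_size):
    """P(word | prev) with add-one (Laplace) smoothing."""
    return (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)

corpus = [["the", "cat", "sat"], ["the", "dog", "sat"]]
uni, bi = train_bigram(corpus)
V = len(uni)  # vocabulary size including boundary markers
print(bigram_prob(uni, bi, "the", "cat", V))  # 0.25
```

With smoothing, "cat" and "dog" are equally likely after "the" (each seen once out of two occurrences of "the"), and unseen continuations still receive nonzero probability; n-gram counts like these were the backbone of language modelling before RNNs and transformers.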

Papers

Showing 901–950 of 17,610 papers

Title | Status | Hype
GNN-ACLP: Graph Neural Networks based Analog Circuit Link Prediction | — | 0
SegEarth-R1: Geospatial Pixel Reasoning via Large Language Model | Code | 2
Domain-Adaptive Continued Pre-Training of Small Language Models | — | 0
Kongzi: A Historical Large Language Model with Fact Enhancement | — | 0
Vision-Language Model for Object Detection and Segmentation: A Review and Evaluation | Code | 2
ClinicalGPT-R1: Pushing reasoning capability of generalist disease diagnosis with large language model | Code | 2
UXAgent: A System for Simulating Usability Testing of Web Design with LLM Agents | — | 0
Structure-Accurate Medical Image Translation via Dynamic Frequency Balance and Knowledge Guidance | — | 0
AgentDynEx: Nudging the Mechanics and Dynamics of Multi-Agent Simulations | — | 0
AgentA/B: Automated and Scalable Web A/B Testing with Interactive LLM Agents | — | 0
Fine-tuning a Large Language Model for Automating Computational Fluid Dynamics Simulations | Code | 1
Parameterized Synthetic Text Generation with SimpleStories | Code | 1
PACT: Pruning and Clustering-Based Token Reduction for Faster Visual Language Models | Code | 2
Large Language Model Empowered Recommendation Meets All-domain Continual Pre-Training | — | 0
Spatial Audio Processing with Large Language Model on Wearable Devices | — | 0
ELSA: A Style Aligned Dataset for Emotionally Intelligent Language Generation | — | 0
SWAN-GPT: An Efficient and Scalable Approach for Long-Context Language Modeling | — | 0
TP-RAG: Benchmarking Retrieval-Augmented Large Language Model Agents for Spatiotemporal-Aware Travel Planning | — | 0
MedRep: Medical Concept Representation for General Electronic Health Record Foundation Models | Code | 0
Bringing Structure to Naturalness: On the Naturalness of ASTs | — | 0
SpecEE: Accelerating Large Language Model Inference with Speculative Early Exiting | — | 0
AstroLLaVA: towards the unification of astronomical data and natural language | — | 0
EO-VLM: VLM-Guided Energy Overload Attacks on Vision Models | — | 0
Data Metabolism: An Efficient Data Design Schema For Vision Language Model | — | 0
JEPA4Rec: Learning Effective Language Representations for Sequential Recommendation via Joint Embedding Predictive Architecture | — | 0
Investigating Vision-Language Model for Point Cloud-based Vehicle Classification | — | 0
Beyond LLMs: A Linguistic Approach to Causal Graph Generation from Narrative Texts | — | 0
VLM-R1: A Stable and Generalizable R1-style Large Vision-Language Model | Code | 9
Cat, Rat, Meow: On the Alignment of Language Model and Human Term-Similarity Judgments | — | 0
Synthetic Fluency: Hallucinations, Confabulations, and the Creation of Irish Words in LLM-Generated Translations | — | 0
An LLM-Driven Multi-Agent Debate System for Mendelian Diseases | — | 0
LauraTSE: Target Speaker Extraction using Auto-Regressive Decoder-Only Language Models | Code | 1
GLUS: Global-Local Reasoning Unified into A Single Large Language Model for Video Segmentation | Code | 2
DeepGreen: Effective LLM-Driven Green-washing Monitoring System Designed for Empirical Testing -- Evidence from China | — | 0
Token Level Routing Inference System for Edge Devices | — | 0
OLMoTrace: Tracing Language Model Outputs Back to Trillions of Training Tokens | — | 0
The Method for Storing Patterns in Neural Networks - Memorization and Recall of QR code Patterns | — | 0
RuOpinionNE-2024: Extraction of Opinion Tuples from Russian News Texts | Code | 0
A Multi-Phase Analysis of Blood Culture Stewardship: Machine Learning Prediction, Expert Recommendation Assessment, and LLM Automation | — | 0
Language Modeling for the Future of Finance: A Quantitative Survey into Metrics, Tasks, and Data Opportunities | — | 0
PAYADOR: A Minimalist Approach to Grounding Language Models on Structured Data for Interactive Storytelling and Role-playing Games | Code | 0
MovSAM: A Single-image Moving Object Segmentation Framework Based on Deep Thinking | Code | 0
Q-Agent: Quality-Driven Chain-of-Thought Image Restoration Agent through Robust Multimodal Large Language Model | — | 0
TASTE: Text-Aligned Speech Tokenization and Embedding for Spoken Language Modeling | Code | 2
Societal Impacts Research Requires Benchmarks for Creative Composition Tasks | — | 0
Skywork R1V: Pioneering Multimodal Reasoning with Chain-of-Thought | Code | 7
InstructMPC: A Human-LLM-in-the-Loop Framework for Context-Aware Control | — | 0
Simplifying Data Integration: SLM-Driven Systems for Unified Semantic Queries Across Heterogeneous Databases | — | 0
DoCIA: An Online Document-Level Context Incorporation Agent for Speech Translation | Code | 0
Evaluating Knowledge Graph Based Retrieval Augmented Generation Methods under Knowledge Incompleteness | — | 0
Page 19 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | — | Unverified
2 | GRU | Validation perplexity | 53.78 | — | Unverified
3 | LSTM | Validation perplexity | 52.73 | — | Unverified
4 | LSTM | Test perplexity | 48.7 | — | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | — | Unverified
6 | TCN | Test perplexity | 45.19 | — | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | — | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | — | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | — | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | — | Unverified
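The perplexity figures in these tables are the exponential of a model's average negative log-likelihood per token, so lower is better; a perplexity of 50 means the model is, on average, as uncertain as a uniform choice among 50 tokens. A minimal sketch (the function name is illustrative):

```python
import math

def perplexity(token_log_probs):
    """Perplexity = exp of the average negative natural-log probability per token."""
    nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(nll)

# A model that assigns every token probability 1/50 has perplexity 50.
print(perplexity([math.log(1 / 50)] * 10))  # ≈ 50.0
```

In practice the log-probabilities come from the model's softmax over its vocabulary, averaged over a held-out validation or test set.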
# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | — | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | — | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | — | Unverified
4 | R-Transformer | Test perplexity | 84.38 | — | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | — | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | — | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | — | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | — | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | — | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bit per Character (BPC) | 1.67 | — | Unverified
2 | Hypernetworks | Bit per Character (BPC) | 1.34 | — | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bit per Character (BPC) | 1.33 | — | Unverified
4 | LN HM-LSTM | Bit per Character (BPC) | 1.32 | — | Unverified
5 | ByteNet | Bit per Character (BPC) | 1.31 | — | Unverified
6 | Recurrent Highway Networks | Bit per Character (BPC) | 1.27 | — | Unverified
7 | Large FS-LSTM-4 | Bit per Character (BPC) | 1.25 | — | Unverified
8 | Large mLSTM | Bit per Character (BPC) | 1.24 | — | Unverified
9 | AWD-LSTM (3 layers) | Bit per Character (BPC) | 1.23 | — | Unverified
10 | Cluster-Former (#C=512) | Bit per Character (BPC) | 1.22 | — | Unverified
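Bits per character (BPC), the metric in the table above, is the character-level analogue of perplexity: the average negative base-2 log-probability per character, i.e. the per-character perplexity is 2 raised to the BPC. A small illustrative sketch (the function name is invented for the example):

```python
import math

def bpc(char_probs):
    """Bits per character: average negative log2-probability per character."""
    return -sum(math.log2(p) for p in char_probs) / len(char_probs)

# Assigning probability 1/2 to every character costs exactly 1 bit per character.
print(bpc([0.5, 0.5, 0.5]))  # 1.0

# A BPC of 1.23 (AWD-LSTM above) implies a per-character perplexity of 2**1.23.
print(2 ** 1.23)  # ≈ 2.35
```

Character-level benchmarks report BPC rather than perplexity because character vocabularies are tiny, which makes raw perplexity values uninformatively small.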
# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | — | Unverified
2 | OPT 125M | Test perplexity | 32.26 | — | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | — | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | — | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | — | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | — | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | — | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | — | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | — | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | — | Unverified