SOTAVerified

Language Modelling

A language model is a model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on larger datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as the word n-gram language model.

Source: Wikipedia
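
The shift described above is easy to ground concretely: a word n-gram model estimates the probability of the next word purely from counts of the preceding n-1 words. A minimal bigram sketch in Python, where the toy corpus and add-one smoothing are illustrative assumptions rather than anything taken from the papers below:

```python
from collections import Counter

# Toy corpus; a real model would be trained on a large text collection.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

unigram_counts = Counter(corpus)
bigram_counts = Counter(zip(corpus, corpus[1:]))
vocab_size = len(unigram_counts)

def bigram_prob(prev: str, word: str) -> float:
    # P(word | prev) with Laplace (add-one) smoothing, so unseen
    # pairs still receive nonzero probability.
    return (bigram_counts[(prev, word)] + 1) / (unigram_counts[prev] + vocab_size)

print(bigram_prob("the", "cat"))   # seen pair:   (1+1)/(4+8) = 0.1667
print(bigram_prob("the", "sofa"))  # unseen pair: (0+1)/(4+8) = 0.0833
```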

Papers

Showing 6251–6300 of 17610 papers

Title | Status | Hype
CLaSP: Learning Concepts for Time-Series Signals from Natural Language Supervision | - | 0
VALTEST: Automated Validation of Language Model Generated Test Cases | - | 0
Towards Optimizing a Retrieval Augmented Generation using Large Language Model on Academic Data | - | 0
Polymetis: Large Language Modeling for Multiple Material Domains | - | 0
Theoretical Analysis of Byte-Pair Encoding | - | 0
Language-Model Prior Overcomes Cold-Start Items | Code | 0
Leveraging LLMs for Predictive Insights in Food Policy and Behavioral Interventions | - | 0
TIPS: Threat Actor Informed Prioritization of Applications using SecEncoder | - | 0
LLM App Squatting and Cloning | - | 0
Prompt-enhanced Network for Hateful Meme Classification | Code | 0
Likelihood as a Performance Gauge for Retrieval-Augmented Generation | Code | 0
SecEncoder: Logs are All You Need in Security | - | 0
Retrieval, Reasoning, Re-ranking: A Context-Enriched Framework for Knowledge Graph Completion | - | 0
Model Stealing for Any Low-Rank Language Model | - | 0
Towards Low-bit Communication for Tensor Parallel LLM Inference | - | 0
Training Data for Large Language Model | - | 0
ASER: Activation Smoothing and Error Reconstruction for Large Language Model Quantization | - | 0
Contrastive Language Prompting to Ease False Positives in Medical Anomaly Detection | Code | 0
World Models: The Safety Perspective | - | 0
What Should Baby Models Read? Exploring Sample-Efficient Data Composition on Model Performance | - | 0
Zeroth-Order Adaptive Neuron Alignment Based Pruning without Re-Training | Code | 0
Reverse Prompt Engineering | - | 0
OpenThaiGPT 1.5: A Thai-Centric Open Source Large Language Model | - | 0
More Expressive Attention with Negative Weights | Code | 0
Large Language Model in Medical Informatics: Direct Classification and Enhanced Text Representations for Automatic ICD Coding | - | 0
Music Discovery Dialogue Generation Using Human Intent Analysis and Large Language Models | Code | 0
Training Neural Networks as Recognizers of Formal Languages | Code | 0
Model Fusion through Bayesian Optimization in Language Model Fine-Tuning | Code | 0
The Backpropagation of the Wave Network | - | 0
Towards Characterizing Cyber Networks with Large Language Models | - | 0
A Clinical Trial Design Approach to Auditing Language Models in Healthcare Setting | - | 0
A Text Classification Model Combining Adversarial Training with Pre-trained Language Model and neural networks: A Case Study on Telecom Fraud Incident Texts | - | 0
Automatically Detecting Online Deceptive Patterns in Real-time | - | 0
Building a Taiwanese Mandarin Spoken Language Model: A First Attempt | Code | 0
CapeLLM: Support-Free Category-Agnostic Pose Estimation with Multimodal Large Language Models | - | 0
Contextualized Evaluations: Taking the Guesswork Out of Language Model Evaluations | - | 0
Accelerating Large Language Model Training with 4D Parallelism and Memory Consumption Estimator | - | 0
Hermes: A Large Language Model Framework on the Journey to Autonomous Networks | - | 0
CTC-Assisted LLM-Based Contextual ASR | - | 0
LProtector: An LLM-driven Vulnerability Detection System | - | 0
TourSynbio-Search: A Large Language Model Driven Agent Framework for Unified Search Method for Protein Engineering | Code | 0
Target-driven Attack for Large Language Models | - | 0
ViTOC: Vision Transformer and Object-aware Captioner | - | 0
Zyda-2: a 5 Trillion Token High-Quality Dataset | - | 0
A Survey of Emerging Approaches and Advances in Video Generation | - | 0
BreakGPT: Leveraging Large Language Models for Predicting Asset Price Surges | - | 0
Clustering Algorithms and RAG Enhancing Semi-Supervised Text Classification with Large LLMs | - | 0
Aquila-plus: Prompt-Driven Visual-Language Models for Pixel-Level Remote Sensing Image Understanding | - | 0
Aquila: A Hierarchically Aligned Visual-Language Model for Enhanced Remote Sensing Image Comprehension | - | 0
Evaluating Large Language Model Capability in Vietnamese Fact-Checking Data Generation | - | 0
Page 126 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | - | Unverified
2 | GRU | Validation perplexity | 53.78 | - | Unverified
3 | LSTM | Validation perplexity | 52.73 | - | Unverified
4 | LSTM | Test perplexity | 48.7 | - | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | - | Unverified
6 | TCN | Test perplexity | 45.19 | - | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | - | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | - | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | - | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | - | Unverified
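
For reference, the perplexity reported in these tables is the exponential of the model's average negative log-likelihood per token on the held-out split; lower is better, and a perplexity of k means the model is on average as uncertain as a uniform choice among k tokens. A minimal sketch (the uniform-probability example is purely illustrative):

```python
import math

def perplexity(token_log_probs):
    # token_log_probs: natural-log probabilities the model assigned to
    # each held-out token. Perplexity = exp(mean negative log-likelihood).
    nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(nll)

# A model that assigns every token probability 1/50 has perplexity 50:
print(perplexity([math.log(1 / 50)] * 100))  # -> 50.0
```
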
# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | - | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | - | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | - | Unverified
4 | R-Transformer | Test perplexity | 84.38 | - | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | - | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | - | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | - | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | - | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | - | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bits per character (BPC) | 1.67 | - | Unverified
2 | Hypernetworks | Bits per character (BPC) | 1.34 | - | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bits per character (BPC) | 1.33 | - | Unverified
4 | LN HM-LSTM | Bits per character (BPC) | 1.32 | - | Unverified
5 | ByteNet | Bits per character (BPC) | 1.31 | - | Unverified
6 | Recurrent Highway Networks | Bits per character (BPC) | 1.27 | - | Unverified
7 | Large FS-LSTM-4 | Bits per character (BPC) | 1.25 | - | Unverified
8 | Large mLSTM | Bits per character (BPC) | 1.24 | - | Unverified
9 | AWD-LSTM (3 layers) | Bits per character (BPC) | 1.23 | - | Unverified
10 | Cluster-Former (#C=512) | Bits per character (BPC) | 1.22 | - | Unverified
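
Bits per character is the character-level analogue of the same quantity, expressed in base 2: the average number of bits the model needs to encode each character, i.e. the mean negative log2-probability. A BPC of 1.22 corresponds to about 2^1.22 ≈ 2.33 effective choices per character. A minimal sketch (the inputs are hypothetical):

```python
import math

def bits_per_character(char_log_probs):
    # char_log_probs: natural-log probabilities per character; dividing
    # by ln(2) converts nats to bits.
    return -sum(char_log_probs) / (len(char_log_probs) * math.log(2))

# A model assigning every character probability 0.5 needs exactly 1 bit each:
print(bits_per_character([math.log(0.5)] * 100))  # -> 1.0
```
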
# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | - | Unverified
2 | OPT 125M | Test perplexity | 32.26 | - | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | - | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | - | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | - | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | - | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | - | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | - | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | - | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | - | Unverified