SOTAVerified

Language Modelling

A language model is a probabilistic model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on large datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as word n-gram language models.

Source: Wikipedia
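The word n-gram models mentioned above estimate the probability of each word from counts of short word sequences in a training corpus. A minimal bigram sketch, using a toy two-sentence corpus (the sentences and counts here are purely illustrative):

```python
from collections import Counter

# Toy corpus; a real word n-gram model would be trained on a large text collection.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
]

# Count bigrams and their context unigrams, with a start-of-sentence marker.
bigrams = Counter()
unigrams = Counter()
for sentence in corpus:
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    unigrams.update(tokens[:-1])          # every token that can act as a context
    bigrams.update(zip(tokens[:-1], tokens[1:]))

def bigram_prob(prev, word):
    """Maximum-likelihood estimate P(word | prev) = count(prev, word) / count(prev)."""
    if unigrams[prev] == 0:
        return 0.0
    return bigrams[(prev, word)] / unigrams[prev]

print(bigram_prob("the", "cat"))  # 1 of the 4 occurrences of "the" is followed by "cat"
```

A production n-gram model would add smoothing (e.g. Kneser-Ney) so that unseen bigrams do not get zero probability; this sketch uses raw maximum-likelihood counts for clarity.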

Papers

Showing 7301–7350 of 17610 papers

Title | Status | Hype
DePrompt: Desensitization and Evaluation of Personal Identifiable Information in Large Language Model Prompts |  | 0
Collaborative Cross-modal Fusion with Large Language Model for Recommendation |  | 0
Evaluating the Validity of Word-level Adversarial Attacks with Large Language Models | Code | 0
Autonomous Behavior Planning For Humanoid Loco-manipulation Through Grounded Language Model |  | 0
Enhancing Large Language Model-based Speech Recognition by Contextualization for Rare and Ambiguous Words |  | 0
DaRec: A Disentangled Alignment Framework for Large Language Model and Recommender System |  | 0
DM2RM: Dual-Mode Multimodal Ranking for Target Objects and Receptacles Based on Open-Vocabulary Instructions |  | 0
General-purpose Clothes Manipulation with Semantic Keypoints |  | 0
When Raw Data Prevails: Are Large Language Model Embeddings Effective in Numerical Data Representation for Medical Machine Learning Applications? |  | 0
Penny-Wise and Pound-Foolish in Deepfake Detection | Code | 0
P/D-Serve: Serving Disaggregated Large Language Model at Scale |  | 0
Leveraging Web-Crawled Data for High-Quality Fine-Tuning | Code | 0
LLM4DSR: Leveraing Large Language Model for Denoising Sequential Recommendation |  | 0
Toward a Dialogue System Using a Large Language Model to Recognize User Emotions with a Camera |  | 0
ONSEP: A Novel Online Neural-Symbolic Framework for Event Prediction Based on Large Language Model |  | 0
Kraken: Inherently Parallel Transformers For Efficient Multi-Device Inference |  | 0
Training Overhead Ratio: A Practical Reliability Metric for Large Language Model Training Systems |  | 0
Abstract Operations Research Modeling Using Natural Language Inputs |  | 0
Cropper: Vision-Language Model for Image Cropping through In-Context Learning |  | 0
Development of a Large Language Model-based Multi-Agent Clinical Decision Support System for Korean Triage and Acuity Scale (KTAS)-Based Triage and Treatment Planning in Emergency Departments |  | 0
DataVisT5: A Pre-trained Language Model for Jointly Understanding Text and Data Visualization | Code | 0
Do GPT Language Models Suffer From Split Personality Disorder? The Advent Of Substrate-Free Psychometrics |  | 0
Bridging Information Asymmetry in Text-video Retrieval: A Data-centric Approach |  | 0
CROME: Cross-Modal Adapters for Efficient Multimodal LLM |  | 0
Diversity Empowers Intelligence: Integrating Expertise of Software Engineering Agents |  | 0
Casper: Prompt Sanitization for Protecting User Privacy in Web-Based Large Language Models |  | 0
Evaluating Cultural Adaptability of a Large Language Model via Simulation of Synthetic Personas | Code | 0
DyG-Mamba: Continuous State Space Modeling on Dynamic Graphs |  | 0
IFShip: Interpretable Fine-grained Ship Classification with Domain Knowledge-Enhanced Vision-Language Models | Code | 0
SparkRA: A Retrieval-Augmented Knowledge Service System Based on Spark Large Language Model |  | 0
Response Wide Shut: Surprising Observations in Basic Vision Language Model Capabilities |  | 0
MGH Radiology Llama: A Llama 3 70B Model for Radiology |  | 0
SceneGPT: A Language Model for 3D Scene Understanding |  | 0
Style-Talker: Finetuning Audio Language Model and Style-Based Text-to-Speech Model for Fast Spoken Dialogue Generation |  | 0
Unlocking Efficiency: Adaptive Masking for Gene Transformer Models | Code | 0
Vision Language Model for Interpretable and Fine-grained Detection of Safety Compliance in Diverse Workplaces |  | 0
XCompress: LLM assisted Python-based text compression toolkit | Code | 0
Space-LLaVA: a Vision-Language Model Adapted to Extraterrestrial Applications |  | 0
Creating Arabic LLM Prompts at Scale |  | 0
Building Decision Making Models Through Language Model Regime |  | 0
Global-to-Local Support Spectrums for Language Model Explainability |  | 0
AGE: Amharic, Ge’ez and English Parallel Dataset |  | 0
Towards Autonomous Agents: Adaptive-planning, Reasoning, and Acting in Language Models |  | 0
On Effects of Steering Latent Representation for Large Language Model Unlearning | Code | 0
LipidBERT: A Lipid Language Model Pre-trained on METiS de novo Lipid Library |  | 0
LUT Tensor Core: A Software-Hardware Co-Design for LUT-Based Low-Bit LLM Inference |  | 0
Speculative Diffusion Decoding: Accelerating Language Generation through Diffusion |  | 0
Large Language Model-based Role-Playing for Personalized Medical Jargon Extraction |  | 0
Path-LLM: A Shortest-Path-based LLM Learning for Unified Graph Representation |  | 0
Improving Whisper's Recognition Performance for Under-Represented Language Kazakh Leveraging Unpaired Speech and Text |  | 0
Page 147 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 |  | Unverified
2 | GRU | Validation perplexity | 53.78 |  | Unverified
3 | LSTM | Validation perplexity | 52.73 |  | Unverified
4 | LSTM | Test perplexity | 48.7 |  | Unverified
5 | Temporal CNN | Test perplexity | 45.2 |  | Unverified
6 | TCN | Test perplexity | 45.19 |  | Unverified
7 | GCNN-8 | Test perplexity | 44.9 |  | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 |  | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 |  | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 |  | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 |  | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 |  | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 |  | Unverified
4 | R-Transformer | Test perplexity | 84.38 |  | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 |  | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 |  | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 |  | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 |  | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 |  | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 |  | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bit per Character (BPC) | 1.67 |  | Unverified
2 | Hypernetworks | Bit per Character (BPC) | 1.34 |  | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bit per Character (BPC) | 1.33 |  | Unverified
4 | LN HM-LSTM | Bit per Character (BPC) | 1.32 |  | Unverified
5 | ByteNet | Bit per Character (BPC) | 1.31 |  | Unverified
6 | Recurrent Highway Networks | Bit per Character (BPC) | 1.27 |  | Unverified
7 | Large FS-LSTM-4 | Bit per Character (BPC) | 1.25 |  | Unverified
8 | Large mLSTM | Bit per Character (BPC) | 1.24 |  | Unverified
9 | AWD-LSTM (3 layers) | Bit per Character (BPC) | 1.23 |  | Unverified
10 | Cluster-Former (#C=512) | Bit per Character (BPC) | 1.22 |  | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 |  | Unverified
2 | OPT 125M | Test perplexity | 32.26 |  | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 |  | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 |  | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 |  | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 |  | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 |  | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 |  | Unverified
9 | Transformer 125M | Test perplexity | 10.7 |  | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 |  | Unverified
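The perplexity and bits-per-character figures reported in the tables above are both averages of a model's negative log-likelihood on held-out text: perplexity exponentiates the per-token average in nats, while BPC is the per-character average in bits (lower is better for both). A minimal sketch with hypothetical probabilities (not taken from any model above):

```python
import math

# Hypothetical per-token probabilities a model assigns to a held-out sequence.
token_probs = [0.2, 0.1, 0.25, 0.05]

# Perplexity = exp of the average negative log-likelihood per token,
# equivalently the inverse geometric mean of the token probabilities.
nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
perplexity = math.exp(nll)

# For character-level models the same quantity is reported in bits:
# BPC = average negative log2-likelihood per character.
char_probs = [0.5, 0.25, 0.5, 0.125]
bpc = -sum(math.log2(p) for p in char_probs) / len(char_probs)

print(round(perplexity, 2), bpc)  # perplexity ≈ 7.95, BPC = 1.75
```

This is why a perplexity of 37.5 (GPT-2 Small, first table) means the model is, on average, as uncertain as if it were choosing uniformly among about 37.5 tokens at each step.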