SOTAVerified

Language Modelling

A language model is a probabilistic model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on very large datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as word n-gram language models.

Source: Wikipedia
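
As a concrete illustration of the word n-gram models mentioned above, here is a minimal bigram language model with add-one smoothing. This is an illustrative sketch only, not any particular system's implementation; the toy corpus and the names `bigram_prob` and `sentence_prob` are invented for the example.

```python
# Minimal sketch of a word bigram language model with add-one (Laplace) smoothing.
# The toy corpus and function names are invented purely for illustration.
from collections import Counter

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

vocab = set(corpus)
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def bigram_prob(w1: str, w2: str) -> float:
    """P(w2 | w1) with add-one smoothing over the vocabulary."""
    return (bigrams[(w1, w2)] + 1) / (unigrams[w1] + len(vocab))

def sentence_prob(words: list[str]) -> float:
    """Score a sentence as the product of its conditional bigram probabilities."""
    p = 1.0
    for w1, w2 in zip(words, words[1:]):
        p *= bigram_prob(w1, w2)
    return p

print(sentence_prob("the cat sat on the rug".split()))
```

Real n-gram models use higher orders, backoff or Kneser-Ney smoothing, and far larger corpora, but the estimation principle is the same.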

Papers

Showing 9701–9750 of 17610 papers

Title | Status | Hype
Make Your AUV Adaptive: An Environment-Aware Reinforcement Learning Framework For Underwater Tasks | – | 0
Making a Computational Attorney | – | 0
Making Convolutional Networks Recurrent for Visual Sequence Learning | – | 0
Making first order linear logic a generating grammar | – | 0
Making Large Language Models Better Knowledge Miners for Online Marketing with Progressive Prompting Augmentation | – | 0
Making the Most Out of the Limited Context Length: Predictive Power Varies with Clinical Note Type and Note Section | – | 0
Making Your Dreams A Reality: Decoding the Dreams into a Coherent Video Story from fMRI Signals | – | 0
MaLA-500: Massive Language Adaptation of Large Language Models | – | 0
mALBERT: Is a Compact Multilingual BERT Model Still Worth It? | – | 0
Malicious and Unintentional Disclosure Risks in Large Language Models for Code Generation | – | 0
Malicious Path Manipulations via Exploitation of Representation Vulnerabilities of Vision-Language Navigation Systems | – | 0
MaLLaM -- Malaysia Large Language Model | – | 0
MALLM-GAN: Multi-Agent Large Language Model as Generative Adversarial Network for Synthesizing Tabular Data | – | 0
MALM: Mixing Augmented Language Modeling for Zero-Shot Machine Translation | – | 0
MALMM: Multi-Agent Large Language Models for Zero-Shot Robotics Manipulation | – | 0
MambaByte: Token-free Selective State Space Model | – | 0
MammothModa: Multi-Modal Large Language Model | – | 0
ManipLLM: Embodied Multimodal Large Language Model for Object-Centric Robotic Manipulation | – | 0
Manipulating the Label Space for In-Context Classification | – | 0
Manipulation and the AI Act: Large Language Model Chatbots and the Danger of Mirrors | – | 0
MANTa: Efficient Gradient-Based Tokenization for Robust End-to-End Language Modeling | – | 0
MANTIS at TSAR-2022 Shared Task: Improved Unsupervised Lexical Simplification with Pretrained Encoders | – | 0
Many-Shot Regurgitation (MSR) Prompting | – | 0
Maoqin @ DravidianLangTech-EACL2021: The Application of Transformer-Based Model | – | 0
MapColorAI: Designing Contextually Relevant Choropleth Map Color Schemes Using a Large Language Model | – | 0
MAPLE: A Framework for Active Preference Learning Guided by Large Language Models | – | 0
MAPLE: Enhancing Review Generation with Multi-Aspect Prompt LEarning in Explainable Recommendation | – | 0
MAPO: Boosting Large Language Model Performance with Model-Adaptive Prompt Optimization | – | 0
Mapping Brains with Language Models: A Survey | – | 0
Mapping High-level Semantic Regions in Indoor Environments without Object Recognition | – | 0
Mapping Local News Coverage: Precise location extraction in textual news content using fine-tuned BERT based language model | – | 0
Mapping Researcher Activity based on Publication Data by means of Transformers | – | 0
Mapping Rules for Building a Tunisian Dialect Lexicon and Generating Corpora | – | 0
Mapping the Timescale Organization of Neural Language Models | – | 0
MapQA: Open-domain Geospatial Question Answering on Map Data | – | 0
MAP's not dead yet: Uncovering true language model modes by conditioning away degeneracy | – | 0
MARCO: Multi-Agent Real-time Chat Orchestration | – | 0
Marconi: Prefix Caching for the Era of Hybrid LLMs | – | 0
Maritime Mission Planning for Unmanned Surface Vessel using Large Language Model | – | 0
Markov Constraint as Large Language Model Surrogate | – | 0
MARM: Unlocking the Future of Recommendation Systems through Memory Augmentation and Scalable Complexity | – | 0
MARS6: A Small and Robust Hierarchical-Codec Text-to-Speech Model | – | 0
"Mask and Infill": Applying Masked Language Model to Sentiment Transfer | – | 0
Mask and Regenerate: A Classifier-based Approach for Unpaired Sentiment Transformation of Reviews for Electronic Commerce Websites. | – | 0
MAS-KCL: Knowledge component graph structure learning with large language model-based agentic workflow | – | 0
Masked Adversarial Generation for Neural Machine Translation | – | 0
Masked Audio Text Encoders are Effective Multi-Modal Rescorers | – | 0
Masked Clinical Modelling: A Framework for Synthetic and Augmented Survival Data Generation | – | 0
Masked Diffusion Models are Secretly Time-Agnostic Masked Models and Exploit Inaccurate Categorical Sampling | – | 0
Masked ELMo: An evolution of ELMo towards fully contextual RNN language models | – | 0
Page 195 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | – | Unverified
2 | GRU | Validation perplexity | 53.78 | – | Unverified
3 | LSTM | Validation perplexity | 52.73 | – | Unverified
4 | LSTM | Test perplexity | 48.7 | – | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | – | Unverified
6 | TCN | Test perplexity | 45.19 | – | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | – | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | – | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | – | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | – | Unverified
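
The Claimed column above reports perplexity, the standard language-modelling metric: the exponential of the model's average per-token negative log-likelihood on the evaluation text, so lower is better. A minimal sketch of the computation follows; the per-token probabilities are invented purely for illustration.

```python
# Minimal sketch: perplexity is the exponential of the mean
# negative log-likelihood per token. The probabilities below are
# invented purely for illustration.
import math

token_probs = [0.20, 0.05, 0.50, 0.10]  # model probability assigned to each observed token

nll = [-math.log(p) for p in token_probs]   # per-token negative log-likelihood (nats)
perplexity = math.exp(sum(nll) / len(nll))  # exp of the mean NLL

print(f"perplexity = {perplexity:.2f}")
```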
# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | – | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | – | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | – | Unverified
4 | R-Transformer | Test perplexity | 84.38 | – | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | – | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | – | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | – | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | – | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | – | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bit per Character (BPC) | 1.67 | – | Unverified
2 | Hypernetworks | Bit per Character (BPC) | 1.34 | – | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bit per Character (BPC) | 1.33 | – | Unverified
4 | LN HM-LSTM | Bit per Character (BPC) | 1.32 | – | Unverified
5 | ByteNet | Bit per Character (BPC) | 1.31 | – | Unverified
6 | Recurrent Highway Networks | Bit per Character (BPC) | 1.27 | – | Unverified
7 | Large FS-LSTM-4 | Bit per Character (BPC) | 1.25 | – | Unverified
8 | Large mLSTM | Bit per Character (BPC) | 1.24 | – | Unverified
9 | AWD-LSTM (3 layers) | Bit per Character (BPC) | 1.23 | – | Unverified
10 | Cluster-Former (#C=512) | Bit per Character (BPC) | 1.22 | – | Unverified
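
Bit per Character (BPC), used in the table above, is the character-level analogue of perplexity: the model's cross-entropy on the test text measured in bits per character. The two are related by per-character perplexity = 2^BPC. A minimal sketch of the conversion; the 1.22 simply reuses the Cluster-Former entry above for illustration.

```python
# Minimal sketch of the BPC <-> perplexity relationship.
# BPC is character-level cross-entropy in bits, so the implied
# per-character perplexity is 2 ** BPC. The 1.22 reuses the
# Cluster-Former entry above purely for illustration.
import math

bpc = 1.22
char_perplexity = 2 ** bpc
print(f"BPC {bpc} -> per-character perplexity {char_perplexity:.3f}")

# Inverse direction: recover BPC from a per-character perplexity.
print(f"log2({char_perplexity:.3f}) = {math.log2(char_perplexity):.2f} bits/char")
```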
# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | – | Unverified
2 | OPT 125M | Test perplexity | 32.26 | – | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | – | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | – | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | – | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | – | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | – | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | – | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | – | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | – | Unverified
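
Perplexity figures like those claimed for the OPT and GPT-Neo entries are typically obtained by running the pretrained causal language model over held-out test text and exponentiating its mean token-level cross-entropy. Below is a minimal sketch using the Hugging Face transformers library, assuming it is installed; the model id and the one-sentence input are illustrative stand-ins, and a real evaluation strides a fixed-length window over the entire test set.

```python
# Minimal sketch of measuring test perplexity with a pretrained causal LM.
# Assumes the Hugging Face `transformers` library; the model id and the
# toy input text are stand-ins, and real leaderboard evaluations stride
# a fixed-length window over the whole test set rather than one sentence.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-neo-125m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

text = "Language modelling assigns probabilities to sequences of words."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # With labels == input_ids, the model returns the mean cross-entropy
    # over predicted tokens (the inputs are shifted internally).
    outputs = model(**inputs, labels=inputs["input_ids"])

perplexity = torch.exp(outputs.loss).item()
print(f"perplexity = {perplexity:.2f}")
```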