SOTAVerified

Language Modelling

A language model is a model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on larger datasets (frequently using words scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded the purely statistical models, such as the word n-gram language model.

Source: Wikipedia
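
To make the excerpt concrete, the sketch below implements the word n-gram approach it mentions, in its simplest bigram form. This is a minimal illustration, not code from any paper listed below; the toy corpus, function names, and add-alpha smoothing are assumptions chosen for clarity.

```python
from collections import defaultdict

def train_bigram_model(corpus):
    """Count word bigrams from a corpus given as a list of token lists."""
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        tokens = ["<s>"] + sentence + ["</s>"]  # sentence boundary markers
        for prev, curr in zip(tokens, tokens[1:]):
            counts[prev][curr] += 1
    return counts

def bigram_prob(counts, prev, curr, vocab_size, alpha=1.0):
    """P(curr | prev) with add-alpha (Laplace) smoothing."""
    total = sum(counts[prev].values())
    return (counts[prev][curr] + alpha) / (total + alpha * vocab_size)

# Hypothetical toy corpus.
corpus = [["the", "cat", "sat"], ["the", "dog", "sat"]]
counts = train_bigram_model(corpus)
vocab = {w for s in corpus for w in s} | {"<s>", "</s>"}
print(bigram_prob(counts, "the", "cat", len(vocab)))  # P(cat | the) = 0.25
```

Smoothing keeps unseen bigrams from receiving zero probability, which matters when such a model is evaluated by perplexity on held-out text, as in the benchmark tables below.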

Papers

Showing 11001–11050 of 17610 papers

Title | Status | Hype
PromptCrafter: Crafting Text-to-Image Prompt through Mixed-Initiative Dialogue with LLM | — | 0
Multimodal LLMs for health grounded in individual-specific data | — | 0
Promoting Exploration in Memory-Augmented Adam using Critical Momenta | Code | 0
SLMGAN: Exploiting Speech Language Model Representations for Unsupervised Zero-Shot Voice Conversion in GANs | — | 0
Linearized Relative Positional Encoding | Code | 0
Integration of Large Language Models and Federated Learning | — | 0
ChatSpot: Bootstrapping Multimodal LLMs via Precise Referring Instruction Tuning | — | 0
Domain Knowledge Distillation from Large Language Model: An Empirical Study in the Autonomous Driving Domain | — | 0
Creating Image Datasets in Agricultural Environments using DALL.E: Generative AI-Powered Large Language Model | — | 0
Gender mobility in the labor market with skills-based matching models | — | 0
Abductive Reasoning with the GPT-4 Language Model: Case studies from criminal investigation, medical practice, scientific research | — | 0
Using an LLM to Help With Code Understanding | — | 0
Cross-Lingual NER for Financial Transaction Data in Low-Resource Languages | — | 0
Fast Quantum Algorithm for Attention Computation | — | 0
The Potential and Pitfalls of using a Large Language Model such as ChatGPT or GPT-4 as a Clinical Assistant | — | 0
Transformers are Universal Predictors | — | 0
Intuitive Access to Smartphone Settings Using Relevance Model Trained by Contrastive Learning | — | 0
Improving BERT with Hybrid Pooling Network and Drop Mask | — | 0
MorphPiece: A Linguistic Tokenizer for Large Language Models | — | 0
Population Expansion for Training Language Models with Private Federated Learning | — | 0
Mega-TTS 2: Boosting Prompting Mechanisms for Zero-Shot Speech Synthesis | — | 0
Making the Most Out of the Limited Context Length: Predictive Power Varies with Clinical Note Type and Note Section | — | 0
Electoral Agitation Data Set: The Use Case of the Polish Election | Code | 0
Does Collaborative Human-LM Dialogue Generation Help Information Extraction from Human Dialogues? | — | 0
Instruction Mining: Instruction Data Selection for Tuning Large Language Models | — | 0
Self-Adaptive Large Language Model (LLM)-Based Multiagent Systems | — | 0
PolyLM: An Open Source Polyglot Large Language Model | — | 0
Transformers in Reinforcement Learning: A Survey | — | 0
Lightweight reranking for language model generations | — | 0
SuryaKiran at MEDIQA-Sum 2023: Leveraging LoRA for Clinical Dialogue Summarization | — | 0
Model Card and Evaluations for Claude Models | — | 0
SimpleMTOD: A Simple Language Model for Multimodal Task-Oriented Dialogue with Symbolic Scene Representation | — | 0
KU-DMIS-MSRA at RadSum23: Pre-trained Vision-Language Model for Radiology Report Summarization | — | 0
Text Descriptions are Compressive and Invariant Representations for Visual Learning | — | 0
Enhancing Biomedical Text Summarization and Question-Answering: On the Utility of Domain-Specific Pre-Training | — | 0
FILM: How can Few-Shot Image Classification Benefit from Pre-Trained Language Models? | — | 0
Assessing the efficacy of large language models in generating accurate teacher responses | — | 0
Natural Language Instructions for Intuitive Human Interaction with Robotic Assistants in Field Construction Work | — | 0
On decoder-only architecture for speech-to-text and large language model integration | — | 0
Can LLMs be Good Financial Advisors?: An Initial Study in Personal Decision Making for Optimized Outcomes | — | 0
Bidirectional Attention as a Mixture of Continuous Word Experts | Code | 0
A Side-by-side Comparison of Transformers for English Implicit Discourse Relation Classification | — | 0
Procedurally generating rules to adapt difficulty for narrative puzzle games | — | 0
Masked Latent Semantic Modeling: an Efficient Pre-training Alternative to Masked Language Modeling | Code | 0
S2vNTM: Semi-supervised vMF Neural Topic Modeling | — | 0
RecallM: An Adaptable Memory Mechanism with Temporal Understanding for Large Language Models | Code | 0
Large Language Models Empowered Autonomous Edge AI for Connected Intelligence | — | 0
Agency and Telicity in GilBERTo: Cognitive Implications | — | 0
Can ChatGPT's Responses Boost Traditional Natural Language Processing? | Code | 0
UniCoRN: Unified Cognitive Signal ReconstructioN bridging cognitive signals and human language | — | 0
Page 221 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | — | Unverified
2 | GRU | Validation perplexity | 53.78 | — | Unverified
3 | LSTM | Validation perplexity | 52.73 | — | Unverified
4 | LSTM | Test perplexity | 48.7 | — | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | — | Unverified
6 | TCN | Test perplexity | 45.19 | — | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | — | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | — | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | — | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | — | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | — | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | — | Unverified
4 | R-Transformer | Test perplexity | 84.38 | — | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | — | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | — | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | — | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | — | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | — | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bits per Character (BPC) | 1.67 | — | Unverified
2 | Hypernetworks | Bits per Character (BPC) | 1.34 | — | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bits per Character (BPC) | 1.33 | — | Unverified
4 | LN HM-LSTM | Bits per Character (BPC) | 1.32 | — | Unverified
5 | ByteNet | Bits per Character (BPC) | 1.31 | — | Unverified
6 | Recurrent Highway Networks | Bits per Character (BPC) | 1.27 | — | Unverified
7 | Large FS-LSTM-4 | Bits per Character (BPC) | 1.25 | — | Unverified
8 | Large mLSTM | Bits per Character (BPC) | 1.24 | — | Unverified
9 | AWD-LSTM (3 layers) | Bits per Character (BPC) | 1.23 | — | Unverified
10 | Cluster-Former (#C=512) | Bits per Character (BPC) | 1.22 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | — | Unverified
2 | OPT 125M | Test perplexity | 32.26 | — | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | — | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | — | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | — | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | — | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | — | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | — | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | — | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | — | Unverified
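
The tables above report two metrics. Perplexity is the exponentiated average negative log-likelihood per token (lower is better); bits per character (BPC) is the average negative log2-likelihood per character. Both are views of the same cross-entropy, with 2^BPC giving a character-level perplexity. A minimal sketch of how each is computed, assuming a hypothetical sequence of model-assigned probabilities:

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the average negative log-likelihood (nats per token)."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

def bits_per_character(char_probs):
    """BPC = average negative log2-likelihood per character."""
    return -sum(math.log2(p) for p in char_probs) / len(char_probs)

# Hypothetical probabilities a model assigned to a short held-out sequence.
probs = [0.2, 0.1, 0.25, 0.05]
print(perplexity(probs))           # ~7.95
print(bits_per_character(probs))   # ~2.99 bits; 2**2.99 ~ 7.95, the same quantity
```

Note the benchmark results compare perplexities only within a table: perplexity depends on the evaluation corpus and tokenization, so numbers from different tables are not directly comparable.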