SOTAVerified

Language Modelling

A language model is a probabilistic model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on large datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as the word n-gram language model.

Source: Wikipedia
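
Every model family mentioned above, from n-gram counts to transformers, produces the same object: a probability distribution over the next token. As a concrete illustration of the "purely statistical" end of that spectrum, here is a minimal sketch of a word-bigram model with add-one smoothing. The class name and toy corpus are invented for this example, not taken from any paper listed below:

```python
from collections import defaultdict

# Minimal word-bigram language model with add-one (Laplace) smoothing.
# Illustrative sketch only; names and corpus are invented for this example.
class BigramLM:
    def __init__(self):
        self.bigram_counts = defaultdict(lambda: defaultdict(int))
        self.context_counts = defaultdict(int)
        self.vocab = set()

    def train(self, sentences):
        for words in sentences:
            tokens = ["<s>"] + words + ["</s>"]
            self.vocab.update(tokens)
            for prev, cur in zip(tokens, tokens[1:]):
                self.bigram_counts[prev][cur] += 1
                self.context_counts[prev] += 1

    def prob(self, prev, cur):
        # Add-one smoothing keeps unseen bigrams from getting zero probability.
        num = self.bigram_counts[prev][cur] + 1
        den = self.context_counts[prev] + len(self.vocab)
        return num / den

lm = BigramLM()
lm.train([["the", "cat", "sat"], ["the", "dog", "sat"]])
print(lm.prob("the", "cat"))  # P(cat | the) under the smoothed model
```

Transformer-based LLMs replace these count tables with learned parameters, but they are evaluated the same way: by how much probability they assign to held-out text, which is what the perplexity and BPC numbers further down measure.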

Papers

Showing 14301–14350 of 17610 papers

Title | Status | Hype
Intentional Biases in LLM Responses | — | 0
InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | — | 0
Interacting Large Language Model Agents: Interpretable Models and Social Learning | — | 0
Interactive Attention AI to translate low light photos to captions for night scene understanding in women safety | — | 0
Interactive Design by Integrating a Large Pre-Trained Language Model and Building Information Modeling | — | 0
Generative AI Agents with Large Language Model for Satellite Networks via a Mixture of Experts Transmission | — | 0
Interactively Robot Action Planning with Uncertainty Analysis and Active Questioning by Large Language Model | — | 0
Interactive Machine Teaching by Labeling Rules and Instances | — | 0
Interactive Model with Structural Loss for Language-based Abductive Reasoning | — | 0
Interactive Multi-fidelity Learning for Cost-effective Adaptation of Language Model with Sparse Human Supervision | — | 0
Interactive Navigation in Environments with Traversable Obstacles Using Large Language and Vision-Language Models | — | 0
Interactive Robot Learning from Verbal Correction | — | 0
Interactive Task Planning with Language Models | — | 0
Inter-document Contextual Language Model | — | 0
InterDreamer: Zero-Shot Text to 3D Dynamic Human-Object Interaction | — | 0
Interim Report on Human-Guided Adaptive Hyperparameter Optimization with Multi-Fidelity Sprints | — | 0
Inter-KD: Intermediate Knowledge Distillation for CTC-Based Automatic Speech Recognition | — | 0
Interleaved Speech-Text Language Models are Simple Streaming Text to Speech Synthesizers | — | 0
Interlocking Phrases in Phrase-based Statistical Machine Translation | — | 0
Intermediate Loss Regularization for CTC-based Speech Recognition | — | 0
Intermediate Self-supervised Learning for Machine Translation Quality Estimation | — | 0
Internal and External Impacts of Natural Language Processing Papers | — | 0
Internal Language Model Adaptation with Text-Only Data for End-to-End Speech Recognition | — | 0
Internal Language Model Estimation based Language Model Fusion for Cross-Domain Code-Switching Speech Recognition | — | 0
Internal Language Model Estimation based Adaptive Language Model Fusion for Domain Adaptation | — | 0
Internal Language Model Estimation for Domain-Adaptive End-to-End Speech Recognition | — | 0
Internal Language Model Estimation Through Explicit Context Vector Learning for Attention-based Encoder-decoder ASR | — | 0
Internal Language Model Training for Domain-Adaptive End-to-End Speech Recognition | — | 0
Internet-augmented language models through few-shot prompting for open-domain question answering | — | 0
InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output | — | 0
InternLM-XComposer2: Mastering Free-form Text-Image Composition and Comprehension in Vision-Language Large Model | — | 0
InternVL3: Exploring Advanced Training and Test-Time Recipes for Open-Source Multimodal Models | — | 0
Interpolated Dirichlet Class Language Model for Speech Recognition Incorporating Long-distance N-grams | — | 0
Interpolated Spectral NGram Language Models | — | 0
Interpretable Bilingual Multimodal Large Language Model for Diverse Biomedical Tasks | — | 0
Refine Large Language Model Fine-tuning via Instruction Vector | — | 0
Interpretable Emoji Prediction via Label-Wise Attention LSTMs | — | 0
Interpretable Face Anti-Spoofing: Enhancing Generalization with Multimodal Large Language Models | — | 0
Interpretable Math Word Problem Solution Generation Via Step-by-step Planning | — | 0
Interpretable Network Structure for Modeling Contextual Dependency | — | 0
Interpretable Sentence Representation with Variational Autoencoders and Attention | — | 0
Interpretation Gaps in LLM-Assisted Comprehension of Privacy Documents | — | 0
Interpreting A Pre-trained Model Is A Key For Model Architecture Optimization: A Case Study On Wav2Vec 2.0 | — | 0
Interpreting the linear structure of vision-language model embedding spaces | — | 0
Interpreting Word-Level Hidden State Behaviour of Character-Level LSTM Language Models | — | 0
Intersectional Bias in Causal Language Models | — | 0
Intertwining CP and NLP: The Generation of Unreasonably Constrained Sentences | — | 0
In-the-loop Hyper-Parameter Optimization for LLM-Based Automated Design of Heuristics | — | 0
Into-TTS: Intonation Template Based Prosody Control System | — | 0
INTRA: Interaction Relationship-aware Weakly Supervised Affordance Grounding | — | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | — | Unverified
2 | GRU | Validation perplexity | 53.78 | — | Unverified
3 | LSTM | Validation perplexity | 52.73 | — | Unverified
4 | LSTM | Test perplexity | 48.7 | — | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | — | Unverified
6 | TCN | Test perplexity | 45.19 | — | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | — | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | — | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | — | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | — | Unverified
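
All perplexity figures in these tables are "lower is better": test perplexity is the exponential of the average negative log-likelihood the model assigns to held-out tokens. A minimal sketch of the computation, assuming a list of per-token probabilities produced by some model (the values below are illustrative):

```python
import math

# Perplexity = exp(average negative log-likelihood per token).
# `token_probs` stands in for the probabilities a trained model assigns
# to each test token; the values used below are illustrative only.
def perplexity(token_probs):
    avg_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_nll)

print(perplexity([0.25, 0.25, 0.25, 0.25]))  # uniform over 4 choices -> 4.0
```

Intuitively, a test perplexity of 37.5 means the model is, on average, as uncertain as if it were choosing uniformly among about 37.5 words at every step.
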
# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | — | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | — | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | — | Unverified
4 | R-Transformer | Test perplexity | 84.38 | — | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | — | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | — | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | — | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | — | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | — | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bits per character (BPC) | 1.67 | — | Unverified
2 | Hypernetworks | Bits per character (BPC) | 1.34 | — | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bits per character (BPC) | 1.33 | — | Unverified
4 | LN HM-LSTM | Bits per character (BPC) | 1.32 | — | Unverified
5 | ByteNet | Bits per character (BPC) | 1.31 | — | Unverified
6 | Recurrent Highway Networks | Bits per character (BPC) | 1.27 | — | Unverified
7 | Large FS-LSTM-4 | Bits per character (BPC) | 1.25 | — | Unverified
8 | Large mLSTM | Bits per character (BPC) | 1.24 | — | Unverified
9 | AWD-LSTM (3 layers) | Bits per character (BPC) | 1.23 | — | Unverified
10 | Cluster-Former (#C=512) | Bits per character (BPC) | 1.22 | — | Unverified
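
The table above reports bits per character (BPC) rather than word-level perplexity: the average negative log2-likelihood per character, the standard metric for character-level corpora. A minimal sketch of the metric and its relation to per-character perplexity (the probabilities are illustrative stand-ins for a character-level model's outputs):

```python
import math

# Bits per character = average negative log2-likelihood per character.
# Per-character perplexity is 2 ** BPC. Probabilities are illustrative.
def bits_per_character(char_probs):
    return -sum(math.log2(p) for p in char_probs) / len(char_probs)

bpc = bits_per_character([0.5, 0.25, 0.5])
print(bpc)       # (1 + 2 + 1) / 3 = 1.33... bits per character
print(2 ** bpc)  # equivalent per-character perplexity, about 2.52
```

On this scale the best entry above, 1.22 BPC, corresponds to a per-character perplexity of 2 ** 1.22, roughly 2.33.
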
# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | — | Unverified
2 | OPT 125M | Test perplexity | 32.26 | — | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | — | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | — | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | — | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | — | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | — | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | — | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | — | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | — | Unverified