SOTAVerified

Language Modelling

A language model is a probabilistic model of natural language: it assigns probabilities to sequences of words. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating human-like text), optical character recognition, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on large datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as word n-gram language models.

Source: Wikipedia
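
Concretely, a language model assigns a probability to a token sequence, conventionally factored left to right by the chain rule. This is the standard textbook formulation, not specific to any model listed below:

$$P(w_1, \dots, w_n) \;=\; \prod_{i=1}^{n} P(w_i \mid w_1, \dots, w_{i-1})$$

The "purely statistical" approach mentioned above estimates these conditionals from raw co-occurrence counts. A minimal sketch of a bigram (2-gram) model, using hypothetical toy data for illustration only:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Estimate P(next | prev) by relative frequency of bigram counts."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    # Normalize each row of counts into a conditional distribution.
    return {
        prev: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
        for prev, nxts in counts.items()
    }

# Toy corpus (hypothetical data, for illustration only).
model = train_bigram(["the cat sat", "the cat ran", "the dog sat"])
print(model["cat"])  # {'sat': 0.5, 'ran': 0.5}
```

Each conditional here depends only on the immediately preceding token; relaxing that fixed-window limitation is precisely what RNN-based and then transformer-based models achieved.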

Papers

Showing 10401–10450 of 17610 papers

Title | Status | Hype
The ROOTS Search Tool: Data Transparency for LLMs | Code | 1
Duration-aware pause insertion using pre-trained language model for multi-speaker text-to-speech | - | 0
Choice Fusion as Knowledge for Zero-Shot Dialogue State Tracking | Code | 0
Topic-Selective Graph Network for Topic-Focused Summarization | - | 0
Toward Fairness in Text Generation via Mutual Information Minimization based on Importance Sampling | - | 0
Leveraging Large Language Model and Story-Based Gamification in Intelligent Tutoring System to Scaffold Introductory Programming Courses: A Design-Based Research Study | - | 0
NoPPA: Non-Parametric Pairwise Attention Random Walk Model for Sentence Representation | Code | 0
Factual Consistency Oriented Speech Recognition | - | 0
Automatic Prompt Augmentation and Selection with Chain-of-Thought from Labeled Data | Code | 1
An Independent Evaluation of ChatGPT on Mathematical Word Problems (MWP) | Code | 0
Generative Sentiment Transfer via Adaptive Masking | - | 0
Vision-Language Generative Model for View-Specific Chest X-ray Generation | Code | 1
EVJVQA Challenge: Multilingual Visual Question Answering | - | 0
What makes a language easy to deep-learn? Deep neural networks and humans similarly benefit from compositional structure | Code | 0
Side Adapter Network for Open-Vocabulary Semantic Segmentation | Code | 2
On the Generalization Ability of Retrieval-Enhanced Transformers | Code | 0
Language Model Crossover: Variation through Few-Shot Prompting | Code | 2
BadGPT: Exploring Security Vulnerabilities of ChatGPT via Backdoor Attacks to InstructGPT | - | 0
Hyena Hierarchy: Towards Larger Convolutional Language Models | Code | 2
Playing the Werewolf game with artificial intelligence for language understanding | - | 0
kNN-Adapter: Efficient Domain Adaptation for Black-Box Language Models | - | 0
Emphasizing Unseen Words: New Vocabulary Acquisition for End-to-End Speech Recognition | - | 0
Federated Learning for ASR based on Wav2vec 2.0 | Code | 1
Can discrete information extraction prompts generalize across language models? | Code | 0
STOA-VLP: Spatial-Temporal Modeling of Object and Action for Video-Language Pre-training | - | 0
Towards Universal Fake Image Detectors that Generalize Across Generative Models | Code | 2
Language-Specific Representation of Emotion-Concept Knowledge Causally Supports Emotion Inference | Code | 1
BBT-Fin: Comprehensive Construction of Chinese Financial Domain Pre-trained Language Model, Corpus and Benchmark | Code | 2
Bag of Tricks for Effective Language Model Pretraining and Downstream Adaptation: A Case Study on GLUE | - | 0
A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT | - | 0
Prompting Large Language Models With the Socratic Method | - | 0
Massively Multilingual Shallow Fusion with Large Language Models | - | 0
Privately Customizing Prefinetuning to Better Match User Data in Federated Learning | - | 0
Entry Separation using a Mixed Visual and Textual Language Model: Application to 19th century French Trade Directories | Code | 0
Multiperiodic Processes: Ergodic Sources with a Sublinear Entropy | - | 0
GPT4MIA: Utilizing Generative Pre-trained Transformer (GPT-3) as A Plug-and-Play Transductive Model for Medical Image Analysis | - | 0
Bridge the Gap between Language models and Tabular Understanding | - | 0
Adaptable End-to-End ASR Models using Replaceable Internal LMs and Residual Softmax | - | 0
Role of Bias Terms in Dot-Product Attention | - | 0
Pretraining Language Models with Human Preferences | Code | 1
JEIT: Joint End-to-End Model and Internal Language Model Training for Speech Recognition | - | 0
What A Situated Language-Using Agent Must be Able to Do: A Top-Down Analysis | - | 0
FOSI: Hybrid First and Second Order Optimization | Code | 0
Learning to Initialize: Can Meta Learning Improve Cross-task Generalization in Prompt Tuning? | - | 0
LabelPrompt: Effective Prompt-based Learning for Relation Classification | - | 0
Reanalyzing L2 Preposition Learning with Bayesian Mixed Effects and a Pretrained Language Model | Code | 0
Platform-Independent and Curriculum-Oriented Intelligent Assistant for Higher Education | - | 0
Confidence Score Based Speaker Adaptation of Conformer Speech Recognition Systems | Code | 0
Augmented Language Models: a Survey | - | 0
Learning Performance-Improving Code Edits | Code | 1
Page 209 of 353

Benchmark Results
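
Most of the tables below report perplexity: the exponentiated average negative log-likelihood a model assigns to held-out text, where lower is better. As a standard definition (not taken from the source page):

$$\mathrm{PPL} \;=\; \exp\!\left(-\frac{1}{N}\sum_{i=1}^{N} \ln P(w_i \mid w_{<i})\right)$$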

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | - | Unverified
2 | GRU | Validation perplexity | 53.78 | - | Unverified
3 | LSTM | Validation perplexity | 52.73 | - | Unverified
4 | LSTM | Test perplexity | 48.7 | - | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | - | Unverified
6 | TCN | Test perplexity | 45.19 | - | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | - | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | - | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | - | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | - | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | - | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | - | Unverified
4 | R-Transformer | Test perplexity | 84.38 | - | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | - | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | - | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | - | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | - | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | - | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bits per Character (BPC) | 1.67 | - | Unverified
2 | Hypernetworks | Bits per Character (BPC) | 1.34 | - | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bits per Character (BPC) | 1.33 | - | Unverified
4 | LN HM-LSTM | Bits per Character (BPC) | 1.32 | - | Unverified
5 | ByteNet | Bits per Character (BPC) | 1.31 | - | Unverified
6 | Recurrent Highway Networks | Bits per Character (BPC) | 1.27 | - | Unverified
7 | Large FS-LSTM-4 | Bits per Character (BPC) | 1.25 | - | Unverified
8 | Large mLSTM | Bits per Character (BPC) | 1.24 | - | Unverified
9 | AWD-LSTM (3 layers) | Bits per Character (BPC) | 1.23 | - | Unverified
10 | Cluster-Former (#C=512) | Bits per Character (BPC) | 1.22 | - | Unverified
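
The table above reports bits per character (BPC): character-level cross-entropy measured in bits. By the standard definitions (again not taken from the source page), a character-level perplexity follows directly as 2 raised to the BPC:

$$\mathrm{BPC} \;=\; -\frac{1}{N}\sum_{i=1}^{N} \log_2 P(c_i \mid c_{<i}), \qquad \mathrm{PPL}_{\mathrm{char}} \;=\; 2^{\mathrm{BPC}}$$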

# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | - | Unverified
2 | OPT 125M | Test perplexity | 32.26 | - | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | - | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | - | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | - | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | - | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | - | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | - | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | - | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | - | Unverified
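
As a practical note, test perplexities like those above are typically computed from a model's per-token log-probabilities on the test set. A minimal sketch, assuming a generic `log_prob(token, context)` scoring function (hypothetical, for illustration; not the evaluation code behind these numbers):

```python
import math

def perplexity(log_prob, tokens):
    """Perplexity = exp of the average negative natural-log probability."""
    total = 0.0
    for i, token in enumerate(tokens):
        total += log_prob(token, tokens[:i])  # ln P(token | preceding context)
    return math.exp(-total / len(tokens))

# Sanity check: a hypothetical uniform model over a 10-word vocabulary
# should score a perplexity of exactly 10 on any input.
uniform = lambda token, context: math.log(1 / 10)
print(perplexity(uniform, "the cat sat on the mat".split()))  # ~10.0
```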