SOTAVerified

Language Modelling

A language model is a probabilistic model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on larger datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as the word n-gram language model.

Source: Wikipedia
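To make the contrast above concrete, here is a minimal sketch of the simplest purely statistical approach: a word bigram (n = 2) model with add-one (Laplace) smoothing. The function names and toy corpus are illustrative placeholders, not drawn from any paper below.

```python
from collections import Counter

def train_bigram_lm(sentences):
    """Count unigrams and bigrams over pre-tokenized sentences."""
    unigrams, bigrams = Counter(), Counter()
    for tokens in sentences:
        tokens = ["<s>"] + tokens + ["</s>"]  # sentence boundary markers
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
    return unigrams, bigrams

def bigram_prob(unigrams, bigrams, prev, word):
    """P(word | prev) with add-one (Laplace) smoothing."""
    vocab = len(unigrams)  # vocabulary size, including boundary markers
    return (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab)

# Toy corpus; a real n-gram model would be counted over millions of tokens.
corpus = [["the", "cat", "sat"], ["the", "dog", "sat"]]
unigrams, bigrams = train_bigram_lm(corpus)
print(bigram_prob(unigrams, bigrams, "the", "cat"))  # seen pair: 0.25
print(bigram_prob(unigrams, bigrams, "cat", "dog"))  # unseen pair: ~0.14
```

Neural language models play the same role, producing a probability distribution over the next token, but replace the count table with learned parameters and a much longer context.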

Papers

Showing 2651–2700 of 17610 papers

Title | Status | Hype
Tokenization with Factorized Subword Encoding | Code | 1
Global and Local Semantic Completion Learning for Vision-Language Pre-training | Code | 1
Waffling around for Performance: Visual Classification with Random Words and Broad Concepts | Code | 1
Gradient Ascent Post-training Enhances Language Model Generalization | Code | 1
GKD: A General Knowledge Distillation Framework for Large-scale Pre-trained Language Model | Code | 1
Are Intermediate Layers and Labels Really Necessary? A General Language Model Distillation Method | Code | 1
QUERT: Continual Pre-training of Language Model for Query Understanding in Travel Domain Search | Code | 1
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon | Code | 1
Large Language Models Are Semi-Parametric Reinforcement Learning Agents | Code | 1
Aladdin: Zero-Shot Hallucination of Stylized 3D Assets from Abstract Scene Descriptions | Code | 1
PoET: A generative model of protein families as sequences-of-sequences | Code | 1
Multi-Modal Classifiers for Open-Vocabulary Object Detection | Code | 1
Hexatagging: Projective Dependency Parsing as Tagging | Code | 1
Privately generating tabular data using language models | Code | 1
On the Difference of BERT-style and CLIP-style Text Encoders | Code | 1
LLMZip: Lossless Text Compression using Large Language Models | Code | 1
Q: How to Specialize Large Vision-Language Models to Data-Scarce VQA Tasks? A: Self-Train on Unlabeled Images! | Code | 1
AutoScrum: Automating Project Planning Using Large Language Models | Code | 1
Improving Conversational Recommendation Systems via Counterfactual Data Simulation | Code | 1
COMET: Learning Cardinality Constrained Mixture of Experts with Trees and Local Search | Code | 1
Sequential Monte Carlo Steering of Large Language Models using Probabilistic Programs | Code | 1
Revisiting the Role of Language Priors in Vision-Language Models | Code | 1
Enhancing the Protein Tertiary Structure Prediction by Multiple Sequence Alignment Generation | Code | 1
The Information Pathways Hypothesis: Transformers are Dynamic Self-Ensembles | Code | 1
Evaluating Language Models for Mathematics through Interactions | Code | 1
Log Parsing: How Far Can ChatGPT Go? | Code | 1
Training-free Neural Architecture Search for RNNs and Transformers | Code | 1
Vocabulary-free Image Classification | Code | 1
Preference-grounded Token-level Guidance for Language Model Fine-tuning | Code | 1
Faster Causal Attention Over Large Sequences Through Sparse Flash Attention | Code | 1
ACLM: A Selective-Denoising based Generative Data Augmentation Approach for Low-Resource Complex NER | Code | 1
Multilingual Multi-Figurative Language Detection | Code | 1
Perception and Semantic Aware Regularization for Sequential Confidence Calibration | Code | 1
IDAS: Intent Discovery with Abstractive Summarization | Code | 1
Structure-Aware Language Model Pretraining Improves Dense Retrieval on Structured Data | Code | 1
Red Teaming Language Model Detectors with Language Models | Code | 1
Primal-Attention: Self-attention through Asymmetric Kernel SVD in Primal Representation | Code | 1
Preserving Pre-trained Features Helps Calibrate Fine-tuned Language Models | Code | 1
LANCE: Stress-testing Visual Models by Generating Language-guided Counterfactual Images | Code | 1
Likelihood-Based Diffusion Language Models | Code | 1
InstructEdit: Improving Automatic Masks for Diffusion-based Image Editing With User Instructions | Code | 1
Large Language Models are not Fair Evaluators | Code | 1
The Rise of AI Language Pathologists: Exploring Two-level Prompt Learning for Few-shot Weakly-supervised Whole Slide Image Classification | Code | 1
Test-Time Training on Nearest Neighbors for Large Language Models | Code | 1
PaLI-X: On Scaling up a Multilingual Vision and Language Model | Code | 1
FuseCap: Leveraging Large Language Models for Enriched Fused Image Captions | Code | 1
Learning a Structural Causal Model for Intuition Reasoning in Conversation | Code | 1
Rethinking Masked Language Modeling for Chinese Spelling Correction | Code | 1
Query-Efficient Black-Box Red Teaming via Bayesian Optimization | Code | 1
Matrix Information Theory for Self-Supervised Learning | Code | 1
Page 54 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | – | Unverified
2 | GRU | Validation perplexity | 53.78 | – | Unverified
3 | LSTM | Validation perplexity | 52.73 | – | Unverified
4 | LSTM | Test perplexity | 48.7 | – | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | – | Unverified
6 | TCN | Test perplexity | 45.19 | – | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | – | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | – | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | – | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | – | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | – | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | – | Unverified
4 | R-Transformer | Test perplexity | 84.38 | – | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | – | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | – | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | – | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | – | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | – | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bits per character (BPC) | 1.67 | – | Unverified
2 | Hypernetworks | Bits per character (BPC) | 1.34 | – | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bits per character (BPC) | 1.33 | – | Unverified
4 | LN HM-LSTM | Bits per character (BPC) | 1.32 | – | Unverified
5 | ByteNet | Bits per character (BPC) | 1.31 | – | Unverified
6 | Recurrent Highway Networks | Bits per character (BPC) | 1.27 | – | Unverified
7 | Large FS-LSTM-4 | Bits per character (BPC) | 1.25 | – | Unverified
8 | Large mLSTM | Bits per character (BPC) | 1.24 | – | Unverified
9 | AWD-LSTM (3 layers) | Bits per character (BPC) | 1.23 | – | Unverified
10 | Cluster-Former (#C=512) | Bits per character (BPC) | 1.22 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | – | Unverified
2 | OPT 125M | Test perplexity | 32.26 | – | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | – | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | – | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | – | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | – | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | – | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | – | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | – | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | – | Unverified
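
For reference, both metrics in the tables above are derived from a model's average negative log-likelihood on held-out data: perplexity exponentiates the per-token value (natural log), while bits per character is the per-character value in base 2. A minimal sketch of the two computations, using made-up placeholder probabilities rather than outputs of any model listed above:

```python
import math

def perplexity(token_probs):
    """exp of the average negative log-likelihood per token."""
    avg_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_nll)

def bits_per_character(char_probs):
    """Average negative log2-likelihood per character."""
    return -sum(math.log2(p) for p in char_probs) / len(char_probs)

# Placeholder probabilities a model might assign to each ground-truth symbol.
token_probs = [0.10, 0.25, 0.05, 0.20]
char_probs = [0.50, 0.40, 0.30, 0.60]
print(f"perplexity: {perplexity(token_probs):.2f}")  # lower is better
print(f"BPC: {bits_per_character(char_probs):.2f}")  # lower is better
```

The two are interconvertible only for a fixed tokenization (perplexity = 2 raised to the bits per token), so character-level BPC numbers and word-level perplexity numbers are not directly comparable across the tables.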