SOTAVerified

Language Modelling

A language model is a probabilistic model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on larger datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as the word n-gram language model.

Source: Wikipedia
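
The n-gram models mentioned above assign probabilities to word sequences directly from corpus counts. As a minimal sketch of the idea (a toy illustration, not code from any paper listed below), a bigram model with add-one smoothing can be written in a few lines of Python:

```python
from collections import Counter

def train_bigram_lm(tokens):
    """Collect unigram and bigram counts from a token sequence."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    return unigrams, bigrams

def bigram_prob(unigrams, bigrams, prev, word, vocab_size, alpha=1.0):
    """P(word | prev) with add-alpha (Laplace) smoothing, so unseen
    bigrams still receive nonzero probability."""
    return (bigrams[(prev, word)] + alpha) / (unigrams[prev] + alpha * vocab_size)

# Toy corpus; a real model would be estimated from far more text.
tokens = "the cat sat on the mat and the cat ran".split()
unigrams, bigrams = train_bigram_lm(tokens)
vocab_size = len(unigrams)
print(bigram_prob(unigrams, bigrams, "the", "cat", vocab_size))  # seen bigram: 0.3
print(bigram_prob(unigrams, bigrams, "the", "ran", vocab_size))  # unseen bigram: 0.1
```

The neural models that replaced this approach learn the conditional distribution over the next token rather than counting fixed-length contexts, which is what lets them generalize to word sequences never seen in training.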

Papers

Showing 5201–5250 of 17610 papers

Title | Status | Hype
Improving Instruction Following in Language Models through Proxy-Based Uncertainty Estimation | Code | 0
Improving Interpersonal Communication by Simulating Audiences with Language Models | Code | 0
Improving Language Generation with Sentence Coherence Objective | Code | 0
Avoiding exp(R_max) scaling in RLHF through Preference-based Exploration | Code | 0
Improving language models by retrieving from trillions of tokens | Code | 0
Improving Large Language Model Safety with Contrastive Representation Learning | Code | 0
Clinical Flair: A Pre-Trained Language Model for Spanish Clinical Natural Language Processing | Code | 0
Dual Learning for Machine Translation | Code | 0
Improving Lemmatization of Non-Standard Languages with Joint Learning | Code | 0
Improving Lexical Embeddings with Semantic Knowledge | Code | 0
Improving Low Compute Language Modeling with In-Domain Embedding Initialisation | Code | 0
All Should Be Equal in the Eyes of Language Models: Counterfactually Aware Fair Text Generation | Code | 0
Improving Low-Resource Neural Machine Translation with Filtered Pseudo-Parallel Corpus | Code | 0
Improving LSTM-based Video Description with Linguistic Knowledge Mined from Text | Code | 0
Improving Machine Reading Comprehension with General Reading Strategies | Code | 0
Autoregressive Pre-Training on Pixels and Texts | Code | 0
Improving Medical Multi-modal Contrastive Learning with Expert Annotations | Code | 0
Improving Natural Language Capability of Code Large Language Model | Code | 0
A Few-shot Approach to Resume Information Extraction via Prompts | Code | 0
Improving Neural Language Modeling via Adversarial Training | Code | 0
Improving Neural Language Models by Segmenting, Attending, and Predicting the Future | Code | 0
Improving Neural Language Models with a Continuous Cache | Code | 0
Affective-NLI: Towards Accurate and Interpretable Personality Recognition in Conversation | Code | 0
Improving Neural Network Quantization without Retraining using Outlier Channel Splitting | Code | 0
A Weakly Supervised Dataset of Fine-Grained Emotions in Portuguese | Code | 0
CLIN-X: pre-trained language models and a study on cross-task transfer for concept extraction in the clinical domain | Code | 0
Improving Pre-trained Language Model Sensitivity via Mask Specific losses: A case study on Biomedical NER | Code | 0
A Programmable Approach to Neural Network Compression | Code | 0
Dwell in the Beginning: How Language Models Embed Long Documents for Dense Retrieval | Code | 0
DynaBERT: Dynamic BERT with Adaptive Width and Depth | Code | 0
ALMANACS: A Simulatability Benchmark for Language Model Explainability | Code | 0
Accelerating Training of Transformer-Based Language Models with Progressive Layer Dropping | Code | 0
ALoFTRAG: Automatic Local Fine Tuning for Retrieval Augmented Generation | Code | 0
Improving Retrieval Augmented Open-Domain Question-Answering with Vectorized Contexts | Code | 0
CLIP-PCQA: Exploring Subjective-Aligned Vision-Language Modeling for Point Cloud Quality Assessment | Code | 0
Improving Segmentation for Technical Support Problems | Code | 0
Dynamic Demonstrations Controller for In-Context Learning | Code | 0
Dynamic Entity Representations in Neural Language Models | Code | 0
Dynamic Evaluation of Neural Sequence Models | Code | 0
Dynamic Evaluation of Transformer Language Models | Code | 0
AX-MABSA: A Framework for Extremely Weakly Supervised Multi-label Aspect Based Sentiment Analysis | Code | 0
Improving SSVEP BCI Spellers With Data Augmentation and Language Models | Code | 0
A Low-Resource Approach to the Grammatical Error Correction of Ukrainian | Code | 0
AxomiyaBERTa: A Phonologically-aware Transformer Model for Assamese | Code | 0
Improving the Efficiency of Visually Augmented Language Models | Code | 0
Improving the Gating Mechanism of Recurrent Neural Networks | Code | 0
CLMSM: A Multi-Task Learning Framework for Pre-training on Procedural Text | Code | 0
AlphaZero Neural Scaling and Zipf's Law: a Tale of Board Games and Power Laws | Code | 0
Improving LLM Unlearning Robustness via Random Perturbations | Code | 0
Improving the Sample Efficiency of Prompt Tuning with Domain Adaptation | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | | Unverified
2 | GRU | Validation perplexity | 53.78 | | Unverified
3 | LSTM | Validation perplexity | 52.73 | | Unverified
4 | LSTM | Test perplexity | 48.7 | | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | | Unverified
6 | TCN | Test perplexity | 45.19 | | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | | Unverified
4 | R-Transformer | Test perplexity | 84.38 | | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bits per Character (BPC) | 1.67 | | Unverified
2 | Hypernetworks | Bits per Character (BPC) | 1.34 | | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bits per Character (BPC) | 1.33 | | Unverified
4 | LN HM-LSTM | Bits per Character (BPC) | 1.32 | | Unverified
5 | ByteNet | Bits per Character (BPC) | 1.31 | | Unverified
6 | Recurrent Highway Networks | Bits per Character (BPC) | 1.27 | | Unverified
7 | Large FS-LSTM-4 | Bits per Character (BPC) | 1.25 | | Unverified
8 | Large mLSTM | Bits per Character (BPC) | 1.24 | | Unverified
9 | AWD-LSTM (3 layers) | Bits per Character (BPC) | 1.23 | | Unverified
10 | Cluster-Former (#C=512) | Bits per Character (BPC) | 1.22 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | | Unverified
2 | OPT 125M | Test perplexity | 32.26 | | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | | Unverified
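
Both metrics in the tables above are monotone transforms of a model's average negative log-likelihood on held-out text: test perplexity exponentiates the per-token cross-entropy, while bits per character converts the total negative log-likelihood to base 2 and divides by the number of characters. A minimal sketch of both computations, assuming per-token log-probabilities from some model are already in hand (the function names and toy values are illustrative, not from any listed paper):

```python
import math

def perplexity(token_log_probs):
    """exp of the average negative log-likelihood per token (natural log)."""
    avg_nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(avg_nll)

def bits_per_character(token_log_probs, num_chars):
    """Total negative log-likelihood converted to bits, per character of text."""
    total_bits = -sum(token_log_probs) / math.log(2)
    return total_bits / num_chars

# Toy values: log-probabilities a model assigned to three held-out tokens
# spanning 12 characters of text.
log_probs = [math.log(0.2), math.log(0.1), math.log(0.25)]
print(perplexity(log_probs))              # ~5.85
print(bits_per_character(log_probs, 12))  # ~0.64
```

Lower is better for both. The two scales are directly related per character: a BPC of 1.22, the best value in the third table, corresponds to a per-character perplexity of 2^1.22, roughly 2.33.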