SOTAVerified

Language Modelling

A language model is a probabilistic model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.
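
Concretely, a language model assigns a probability to a whole token sequence; the standard way to write this is the chain-rule factorization, one next-token prediction per position:

```latex
P(w_1, \dots, w_n) = \prod_{i=1}^{n} P(w_i \mid w_1, \dots, w_{i-1})
```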

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on large datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as the word n-gram language model.

Source: Wikipedia
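
For the word n-gram models mentioned above, here is a minimal sketch of a bigram variant with add-one smoothing. The toy corpus and function names are illustrative only; real n-gram systems use larger n, better smoothing (e.g. Kneser-Ney), and orders of magnitude more data.

```python
from collections import defaultdict
import math

def train_bigram(corpus):
    """Count context (unigram) and bigram occurrences over sentences."""
    unigrams = defaultdict(int)
    bigrams = defaultdict(int)
    vocab = set()
    for sentence in corpus:
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        vocab.update(tokens)
        for prev, curr in zip(tokens, tokens[1:]):
            unigrams[prev] += 1
            bigrams[(prev, curr)] += 1
    return unigrams, bigrams, len(vocab)

def log_prob(sentence, unigrams, bigrams, vocab_size):
    """Log-probability of a sentence under the smoothed bigram model."""
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    total = 0.0
    for prev, curr in zip(tokens, tokens[1:]):
        # Add-one smoothing keeps unseen bigrams from zeroing the product.
        p = (bigrams[(prev, curr)] + 1) / (unigrams[prev] + vocab_size)
        total += math.log(p)
    return total

corpus = ["the cat sat", "the dog sat", "the cat ran"]
unigrams, bigrams, vocab_size = train_bigram(corpus)
print(log_prob("the dog ran", unigrams, bigrams, vocab_size))
```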

Papers

Showing 2751–2800 of 17610 papers

Title | Status | Hype
Empower Large Language Model to Perform Better on Industrial Domain-Specific Question Answering | Code | 1
Emergent Representations of Program Semantics in Language Models Trained on Programs | Code | 1
DinoSR: Self-Distillation and Online Clustering for Self-supervised Speech Representation Learning | Code | 1
AD-KD: Attribution-Driven Knowledge Distillation for Language Model Compression | Code | 1
A Better Way to Do Masked Language Model Scoring | Code | 1
PMC-VQA: Visual Instruction Tuning for Medical Visual Question Answering | Code | 1
Pre-Training to Learn in Context | Code | 1
MPI-rical: Data-Driven MPI Distributed Parallelism Assistance with Transformers | Code | 1
Dual-Alignment Pre-training for Cross-lingual Sentence Embedding | Code | 1
SatLM: Satisfiability-Aided Language Models Using Declarative Prompting | Code | 1
Knowledge Rumination for Pre-trained Language Models | Code | 1
Mobile-Env: Building Qualified Evaluation Benchmarks for LLM-GUI Interaction | Code | 1
Watermarking Text Generated by Black-Box Language Models | Code | 1
Improving End-to-End SLU performance with Prosodic Attention and Distillation | Code | 1
Pre-trained Language Model with Prompts for Temporal Knowledge Graph Completion | Code | 1
The Machine Psychology of Cooperation: Can GPT models operationalise prompts for altruism, cooperation, competitiveness and selfishness in economic games? | Code | 1
Is ChatGPT Fair for Recommendation? Evaluating Fairness in Large Language Model Recommendation | Code | 1
ArtGPT-4: Towards Artistic-understanding Large Vision-Language Models with Enhanced Adapter | Code | 1
LeXFiles and LegalLAMA: Facilitating English Multinational Legal Language Model Development | Code | 1
Self-Chained Image-Language Model for Video Localization and Question Answering | Code | 1
Automatic Evaluation of Attribution by Large Language Models | Code | 1
Bot or Human? Detecting ChatGPT Imposters with A Single Question | Code | 1
Toeplitz Neural Network for Sequence Modeling | Code | 1
A Multi-Modal Context Reasoning Approach for Conditional Inference on Joint Textual and Visual Clues | Code | 1
PromptRank: Unsupervised Keyphrase Extraction Using Prompt | Code | 1
Unified Demonstration Retriever for In-Context Learning | Code | 1
Generative Pretrained Autoregressive Transformer Graph Neural Network applied to the Analysis and Discovery of Novel Proteins | Code | 1
T-SciQ: Teaching Multimodal Chain-of-Thought Reasoning via Mixed Large Language Model Signals for Science Question Answering | Code | 1
MindGames: Targeting Theory of Mind in Large Language Models with Dynamic Epistemic Modal Logic | Code | 1
LMEye: An Interactive Perception Network for Large Language Models | Code | 1
Sentence Embedding Leaks More Information than You Expect: Generative Embedding Inversion Attack to Recover the Whole Sentence | Code | 1
Cognitive Reframing of Negative Thoughts through Human-Language Model Interaction | Code | 1
Masked Structural Growth for 2x Faster Language Model Pre-training | Code | 1
On the Expressivity Role of LayerNorm in Transformers' Attention | Code | 1
Entity Tracking in Language Models | Code | 1
The Benefits of Bad Advice: Autocontrastive Decoding across Model Layers | Code | 1
Working Memory Capacity of ChatGPT: An Empirical Study | Code | 1
How does GPT-2 compute greater-than?: Interpreting mathematical abilities in a pre-trained language model | Code | 1
Outline, Then Details: Syntactically Guided Coarse-To-Fine Code Generation | Code | 1
CCpdf: Building a High Quality Corpus for Visually Rich Documents from Web Crawl Data | Code | 1
Towards autonomous system: flexible modular production system enhanced with large language model agents | Code | 1
Search-in-the-Chain: Interactively Enhancing Large Language Models with Search for Knowledge-intensive Tasks | Code | 1
Enhancing Large Language Model with Self-Controlled Memory Framework | Code | 1
The Parrot Dilemma: Human-Labeled vs. LLM-augmented Data in Classification Tasks | Code | 1
Generation-driven Contrastive Self-training for Zero-shot Text Classification with Instruction-following LLM | Code | 1
SAILER: Structure-aware Pre-trained Language Model for Legal Case Retrieval | Code | 1
LaMP: When Large Language Models Meet Personalization | Code | 1
CB-Conformer: Contextual biasing Conformer for biased word recognition | Code | 1
SkillGPT: a RESTful API service for skill extraction and standardization using a Large Language Model | Code | 1
Pretrained Language Models as Visual Planners for Human Assistance | Code | 1
Page 56 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | – | Unverified
2 | GRU | Validation perplexity | 53.78 | – | Unverified
3 | LSTM | Validation perplexity | 52.73 | – | Unverified
4 | LSTM | Test perplexity | 48.7 | – | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | – | Unverified
6 | TCN | Test perplexity | 45.19 | – | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | – | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | – | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | – | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | – | Unverified
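
A note on reading the perplexity columns: perplexity is the exponential of the average per-token negative log-likelihood on the evaluation set, so lower is better. A minimal sketch of the computation (the per-token log-probabilities below are made-up illustrative values):

```python
import math

def perplexity(token_log_probs):
    """exp of the mean negative log-likelihood per token (natural log)."""
    mean_nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(mean_nll)

# Hypothetical per-token natural-log probabilities on a held-out set.
log_probs = [-3.2, -1.1, -4.7, -0.9, -2.5]
print(perplexity(log_probs))  # mean NLL 2.48 -> perplexity ≈ 11.9
```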

# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | – | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | – | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | – | Unverified
4 | R-Transformer | Test perplexity | 84.38 | – | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | – | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | – | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | – | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | – | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | – | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bits per character (BPC) | 1.67 | – | Unverified
2 | Hypernetworks | Bits per character (BPC) | 1.34 | – | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bits per character (BPC) | 1.33 | – | Unverified
4 | LN HM-LSTM | Bits per character (BPC) | 1.32 | – | Unverified
5 | ByteNet | Bits per character (BPC) | 1.31 | – | Unverified
6 | Recurrent Highway Networks | Bits per character (BPC) | 1.27 | – | Unverified
7 | Large FS-LSTM-4 | Bits per character (BPC) | 1.25 | – | Unverified
8 | Large mLSTM | Bits per character (BPC) | 1.24 | – | Unverified
9 | AWD-LSTM (3 layers) | Bits per character (BPC) | 1.23 | – | Unverified
10 | Cluster-Former (#C=512) | Bits per character (BPC) | 1.22 | – | Unverified
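
A note on reading the BPC column: bits per character is the average negative base-2 log-likelihood per character, so lower is better, and character-level perplexity is 2 raised to the BPC (1.22 BPC corresponds to a per-character perplexity of roughly 2.3):

```latex
\mathrm{BPC} = -\frac{1}{N}\sum_{i=1}^{N} \log_2 P(c_i \mid c_{<i}),
\qquad \mathrm{PPL}_{\mathrm{char}} = 2^{\mathrm{BPC}}
```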

# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | – | Unverified
2 | OPT 125M | Test perplexity | 32.26 | – | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | – | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | – | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | – | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | – | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | – | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | – | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | – | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | – | Unverified