SOTAVerified

Language Modelling

A language model is a probabilistic model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on large datasets (frequently consisting of text scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as word n-gram language models.

Source: Wikipedia
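
For illustration, the purely statistical word n-gram models mentioned above can be sketched in a few lines. Below is a minimal bigram model with add-one (Laplace) smoothing; the toy corpus and the helper name bigram_prob are assumptions made for this example, not part of any system listed on this page.

```python
from collections import Counter

# A minimal word-bigram language model with add-one (Laplace) smoothing.
# The tiny corpus here is illustrative only.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

unigrams = Counter(corpus)                  # word counts
bigrams = Counter(zip(corpus, corpus[1:]))  # adjacent word-pair counts
vocab_size = len(unigrams)                  # number of distinct word types

def bigram_prob(prev_word: str, word: str) -> float:
    """P(word | prev_word) with add-one smoothing."""
    return (bigrams[(prev_word, word)] + 1) / (unigrams[prev_word] + vocab_size)

print(bigram_prob("the", "cat"))  # higher: "the cat" occurs in the corpus
print(bigram_prob("the", "sat"))  # lower: "the sat" never occurs
```

Smoothing matters here: without the add-one terms, any bigram unseen in training would get probability zero, which is exactly the brittleness that neural language models later addressed.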

Papers

Showing 10901–10950 of 17610 papers

Title | Status | Hype
Generating Individual Trajectories Using GPT-2 Trained from Scratch on Encoded Spatiotemporal Data | - | 0
Diagnostic Reasoning Prompts Reveal the Potential for Large Language Model Interpretability in Medicine | - | 0
AutoConv: Automatically Generating Information-seeking Conversations with Large Language Models | - | 0
MT4CrossOIE: Multi-stage Tuning for Cross-lingual Open Information Extraction | Code | 0
Learning to Guide Human Experts via Personalized Large Language Models | - | 0
LittleMu: Deploying an Online Virtual Teaching Assistant via Heterogeneous Sources Integration and Chain of Teach Prompts | Code | 0
A Large Language Model Enhanced Conversational Recommender System | - | 0
Improving Zero-Shot Text Matching for Financial Auditing with Large Language Models | - | 0
Testing GPT-4 with Wolfram Alpha and Code Interpreter plug-ins on math and science problems | - | 0
Multi-modal Multi-view Clustering based on Non-negative Matrix Factorization | - | 0
Slot Induction via Pre-trained Language Model Probing and Multi-level Contrastive Learning | Code | 0
TextPainter: Multimodal Text Image Generation with Visual-harmony and Text-comprehension for Poster Design | - | 0
MetRoBERTa: Leveraging Traditional Customer Relationship Management Data to Develop a Transit-Topic-Aware Language Model | - | 0
Answering Unseen Questions With Smaller Language Models Using Rationale Generation and Dense Retrieval | - | 0
"Generate" the Future of Work through AI: Empirical Evidence from Online Labor Markets | - | 0
Emotion-Conditioned Text Generation through Automatic Prompt Optimization | - | 0
Exploring Multilingual Text Data Distillation | Code | 0
Ahead of the Text: Leveraging Entity Preposition for Financial Relation Extraction | - | 0
Hybrid-RACA: Hybrid Retrieval-Augmented Composition Assistance for Real-time Text Prediction | - | 0
On Monotonic Aggregation for Open-domain QA | Code | 0
I-WAS: a Data Augmentation Method with GPT-2 for Simile Detection | - | 0
PTransIPs: Identification of phosphorylation sites enhanced by protein PLM embeddings | Code | 0
Large Language Model Prompt Chaining for Long Legal Document Classification | - | 0
Mondrian: Prompt Abstraction Attack Against Large Language Models for Cheaper API Pricing | - | 0
RecycleGPT: An Autoregressive Language Model with Recyclable Module | - | 0
RCMHA: Relative Convolutional Multi-Head Attention for Natural Language Modelling | Code | 0
MedMine: Examining Pre-trained Language Models on Medication Mining | Code | 0
TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | - | 0
Heterogeneous Knowledge Fusion: A Novel Approach for Personalized Recommendation via LLM | - | 0
Coupling Symbolic Reasoning with Language Modeling for Efficient Longitudinal Understanding of Unstructured Electronic Medical Records | - | 0
ViLP: Knowledge Exploration using Vision, Language, and Pose Embeddings for Video Action Recognition | Code | 0
PromptSum: Parameter-Efficient Controllable Abstractive Summarization | - | 0
Embedding-based Retrieval with LLM for Effective Agriculture Information Extracting from Unstructured Data | - | 0
LaDA: Latent Dialogue Action For Zero-shot Cross-lingual Neural Network Language Modeling | - | 0
ParaFuzz: An Interpretability-Driven Technique for Detecting Poisoned Samples in NLP | - | 0
Specious Sites: Tracking the Spread and Sway of Spurious News Stories at Scale | Code | 0
Large Language Model Displays Emergent Ability to Interpret Novel Literary Metaphors | - | 0
Evaluating ChatGPT text-mining of clinical records for obesity monitoring | - | 0
InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | - | 0
Is GPT-4 a reliable rater? Evaluating Consistency in GPT-4 Text Ratings | - | 0
Arithmetic with Language Models: from Memorization to Computation | - | 0
Contextual Emotion Recognition Using Transformer-Based Models | Code | 0
Knowledge-aware Collaborative Filtering with Pre-trained Language Model for Personalized Review-based Rating Prediction | Code | 0
Teaching Smaller Language Models To Generalise To Unseen Compositional Questions | Code | 0
JIANG: Chinese Open Foundation Language Model | - | 0
CodeBPE: Investigating Subtokenization Options for Large Language Model Pretraining on Source Code | - | 0
Detecting Cloud Presence in Satellite Images Using the RGB-based CLIP Vision-Language Model | - | 0
Adapt and Decompose: Efficient Generalization of Text-to-SQL via Domain Adapted Least-To-Most Prompting | - | 0
ChatMOF: An Autonomous AI System for Predicting and Generating Metal-Organic Frameworks | - | 0
HouYi: An open-source large language model specially designed for renewable energy and carbon neutrality field | - | 0
Page 219 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | - | Unverified
2 | GRU | Validation perplexity | 53.78 | - | Unverified
3 | LSTM | Validation perplexity | 52.73 | - | Unverified
4 | LSTM | Test perplexity | 48.7 | - | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | - | Unverified
6 | TCN | Test perplexity | 45.19 | - | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | - | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | - | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | - | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | - | Unverified
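
For reference, test perplexity, the metric claimed above, is the exponential of the average negative log-likelihood a model assigns to the test tokens; lower is better. A minimal sketch of the computation, with placeholder log-probabilities rather than real model output:

```python
import math

# Perplexity = exp( -(1/N) * sum of log-probabilities over N test tokens ).
# The log-probs below are illustrative placeholders, not real model output.
token_log_probs = [-2.1, -0.4, -3.7, -1.2, -0.9]  # natural-log probabilities

avg_nll = -sum(token_log_probs) / len(token_log_probs)  # average negative log-likelihood
perplexity = math.exp(avg_nll)
print(f"perplexity = {perplexity:.2f}")  # e^1.66, approximately 5.26
```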
# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | - | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | - | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | - | Unverified
4 | R-Transformer | Test perplexity | 84.38 | - | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | - | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | - | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | - | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | - | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | - | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bit per Character (BPC) | 1.67 | - | Unverified
2 | Hypernetworks | Bit per Character (BPC) | 1.34 | - | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bit per Character (BPC) | 1.33 | - | Unverified
4 | LN HM-LSTM | Bit per Character (BPC) | 1.32 | - | Unverified
5 | ByteNet | Bit per Character (BPC) | 1.31 | - | Unverified
6 | Recurrent Highway Networks | Bit per Character (BPC) | 1.27 | - | Unverified
7 | Large FS-LSTM-4 | Bit per Character (BPC) | 1.25 | - | Unverified
8 | Large mLSTM | Bit per Character (BPC) | 1.24 | - | Unverified
9 | AWD-LSTM (3 layers) | Bit per Character (BPC) | 1.23 | - | Unverified
10 | Cluster-Former (#C=512) | Bit per Character (BPC) | 1.22 | - | Unverified
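
Bits per character, the metric in this table, is the average per-character negative log-likelihood expressed in base 2, so it corresponds to a per-character perplexity of 2^BPC. A minimal conversion sketch, with a placeholder cross-entropy value:

```python
import math

# BPC converts a per-character cross-entropy from nats to bits.
# nats -> bits: divide by ln(2). The value below is a placeholder.
cross_entropy_nats = 0.85  # average NLL per character, in nats

bpc = cross_entropy_nats / math.log(2)
char_perplexity = 2 ** bpc  # equivalently math.exp(cross_entropy_nats)
print(f"BPC = {bpc:.3f}")                              # ~1.226
print(f"per-char perplexity = {char_perplexity:.3f}")  # ~2.340
```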
# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | - | Unverified
2 | OPT 125M | Test perplexity | 32.26 | - | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | - | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | - | Unverified
5 | OPT 2.7B | Test perplexity | 17.81 | - | Unverified
6 | GPT-Neo 125M | Test perplexity | 17.83 | - | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | - | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | - | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | - | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | - | Unverified
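
For context, numbers like those above are typically produced by scoring a held-out test set with the model and exponentiating the average cross-entropy. The sketch below shows that standard recipe using the Hugging Face transformers library; the model name and sample text are assumptions chosen for illustration, and this is not SOTAVerified's own verification pipeline.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch: measure perplexity of a small causal LM on a text sample.
# In practice the full test set would be scored, not a single sentence.
model_name = "EleutherAI/gpt-neo-125m"  # example model; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

text = "Language models assign probabilities to sequences of words."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # With labels provided, the model returns the average cross-entropy
    # (in nats) over the predicted tokens.
    loss = model(**inputs, labels=inputs["input_ids"]).loss

print(f"perplexity = {math.exp(loss.item()):.2f}")
```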