SOTAVerified

Linguistic Acceptability

Linguistic Acceptability is the task of determining whether a sentence is grammatical or ungrammatical.
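
Acceptability judgments are usually cast as binary sentence classification with a fine-tuned encoder, which is how most of the papers listed below approach the task. Below is a minimal sketch using the Hugging Face transformers pipeline; the checkpoint name is an illustrative assumption, not something taken from this page.

```python
# Minimal sketch: scoring sentences with a CoLA-style acceptability classifier.
# Assumes the `transformers` library and a publicly available CoLA-finetuned
# checkpoint; the model name below is an assumption for illustration only.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="textattack/bert-base-uncased-CoLA",  # assumed checkpoint
)

sentences = [
    "The cat sat on the mat.",  # acceptable
    "The cat sat the mat on.",  # unacceptable word order
]

for sent, pred in zip(sentences, classifier(sentences)):
    # Each prediction is a dict like {"label": ..., "score": ...};
    # the label names depend on the checkpoint's config.
    print(f"{sent!r} -> {pred['label']} ({pred['score']:.2f})")
```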

Image source: Warstadt et al.

Papers

Showing 1–50 of 72 papers

Title | Status | Hype
LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale | Code | 5
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | Code | 3
ERNIE 2.0: A Continual Pre-training Framework for Language Understanding | Code | 3
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | Code | 2
Fietje: An open, efficient LLM for Dutch | Code | 2
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations | Code | 2
DeBERTa: Decoding-enhanced BERT with Disentangled Attention | Code | 2
Q8BERT: Quantized 8Bit BERT | Code | 1
A Statistical Framework for Low-bitwidth Training of Deep Neural Networks | Code | 1
Big Bird: Transformers for Longer Sequences | Code | 1
Charformer: Fast Character Transformers via Gradient-based Subword Tokenization | Code | 1
ChatGPT: Jack of all trades, master of none | Code | 1
data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language | Code | 1
DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter | Code | 1
Entailment as Few-Shot Learner | Code | 1
FNet: Mixing Tokens with Fourier Transforms | Code | 1
GeDi: Generative Discriminator Guided Sequence Generation | Code | 1
How to Train BERT with an Academic Budget | Code | 1
RealFormer: Transformer Likes Residual Attention | Code | 1
JCoLA: Japanese Corpus of Linguistic Acceptability | Code | 1
Learning to Encode Position for Transformer with Continuous Dynamical Model | Code | 1
LM-CPPF: Paraphrasing-Guided Data Augmentation for Contrastive Prompt-Based Few-Shot Fine-Tuning | Code | 1
On the Robustness of Language Encoders against Grammatical Errors | Code | 1
Masked Language Model Scoring | Code | 1
RoBERTa: A Robustly Optimized BERT Pretraining Approach | Code | 1
RuCoLA: Russian Corpus of Linguistic Acceptability | Code | 1
ScandEval: A Benchmark for Scandinavian Natural Language Processing | Code | 1
SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | Code | 1
Synthesizer: Rethinking Self-Attention in Transformer Models | Code | 1
tasksource: A Dataset Harmonization Framework for Streamlined NLP Multi-Task Learning and Evaluation | Code | 1
Towards Debiasing Sentence Representations | Code | 1
Language Models Use Monotonicity to Assess NPI Licensing | Code | 0
CUE: An Uncertainty Interpretation Framework for Text Classifiers Built on Pre-Trained Language Models | Code | 0
MELA: Multilingual Evaluation of Linguistic Acceptability | Code | 0
Monolingual and Cross-Lingual Acceptability Judgments with the Italian CoLA corpus | Code | 0
TinyBERT: Distilling BERT for Natural Language Understanding | Code | 0
Multi-Task Deep Neural Networks for Natural Language Understanding | Code | 0
Can BERT eat RuCoLA? Topological Data Analysis to Explain | Code | 0
Natural Language Generation for Effective Knowledge Distillation | Code | 0
Neural Network Acceptability Judgments | Code | 0
General Cross-Architecture Distillation of Pretrained Language Models into Matrix Embeddings | Code | 0
Domain Adversarial Fine-Tuning as an Effective Regularizer | Code | 0
NoCoLA: The Norwegian Corpus of Linguistic Acceptability | Code | 0
VALUE: Understanding Dialect Disparity in NLU | Code | 0
ERNIE: Enhanced Language Representation with Informative Entities | Code | 0
SpanBERT: Improving Pre-training by Representing and Predicting Spans | Code | 0
SqueezeBERT: What can computer vision teach NLP about efficient neural networks? | Code | 0
Revisiting Acceptability Judgements | Code | 0
Acceptability Judgements via Examining the Topology of Attention Maps | Code | 0
Rating Distributions and Bayesian Inference: Enhancing Cognitive Models of Spatial Language Use | - | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | En-BERT + TDA + PCA | Accuracy | 88.6 | - | Unverified
2 | BERT+TDA | Accuracy | 88.2 | - | Unverified
3 | RoBERTa+TDA | Accuracy | 87.3 | - | Unverified
4 | deberta-v3-base+tasksource | Accuracy | 87.15 | - | Unverified
5 | RoBERTa-large 355M + Entailment as Few-shot Learner | Accuracy | 86.4 | - | Unverified
6 | LTG-BERT-base 98M | Accuracy | 82.7 | - | Unverified
7 | ELC-BERT-base 98M | Accuracy | 82.6 | - | Unverified
8 | En-BERT + TDA | Accuracy | 82.1 | - | Unverified
9 | FNet-Large | Accuracy | 78 | - | Unverified
10 | LTG-BERT-small 24M | Accuracy | 77.6 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Ru-RoBERTa+TDA | MCC | 0.59 | - | Unverified
2 | ruRoBERTa | MCC | 0.53 | - | Unverified
3 | Ru-BERT+TDA | MCC | 0.48 | - | Unverified
4 | RemBERT | MCC | 0.44 | - | Unverified
5 | ruBERT | MCC | 0.42 | - | Unverified
6 | ruGPT-3 | MCC | 0.3 | - | Unverified
7 | ruT5 | MCC | 0.25 | - | Unverified
8 | mBERT | MCC | 0.15 | - | Unverified
9 | XLM-R | MCC | 0.13 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | En-BERT + TDA | Accuracy | 88.6 | - | Unverified
2 | XLM-R (pre-trained) + TDA | Accuracy | 73 | - | Unverified
3 | DeBERTa (large) | Accuracy | 69.5 | - | Unverified
4 | TinyBERT-6 67M | Accuracy | 54 | - | Unverified
5 | Synthesizer (R+V) | Accuracy | 53.3 | - | Unverified
6 | En-BERT (pre-trained) + TDA | MCC | 0.42 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | XLM-R + TDA | MCC | 0.68 | - | Unverified
2 | XLM-R | MCC | 0.52 | - | Unverified
3 | It-BERT (pre-trained) + TDA | MCC | 0.48 | - | Unverified
4 | mBERT | MCC | 0.36 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Sw-BERT + H0M | Accuracy | 76.9 | - | Unverified
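
Note on metrics: several of the tables above report MCC (Matthews correlation coefficient) rather than accuracy. MCC is the standard metric for CoLA-style acceptability corpora because the class distribution is typically imbalanced toward acceptable sentences; it ranges from -1 to 1 and stays near 0 for a degenerate majority-class predictor. A minimal sketch of the computation, assuming scikit-learn and toy labels rather than benchmark data:

```python
# MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN))
from sklearn.metrics import matthews_corrcoef

y_true = [1, 1, 1, 0, 1, 0, 1, 1]  # 1 = acceptable, 0 = unacceptable (toy labels)
y_pred = [1, 1, 0, 0, 1, 1, 1, 1]

# Here TP=5, TN=1, FP=1, FN=1, so MCC = (5*1 - 1*1) / sqrt(6*6*2*2) = 4/12 ≈ 0.33
print(matthews_corrcoef(y_true, y_pred))
```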