SOTAVerified

Linguistic Acceptability

Linguistic Acceptability is the task of determining whether a sentence is grammatical or ungrammatical, i.e. whether a native speaker would judge it well-formed (e.g. "The cat sat on the mat." vs. "*Cat the mat on sat.").

Image source: Warstadt et al.

Papers

Showing 1–25 of 72 papers

Title | Status | Hype
LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale | Code | 5
ERNIE 2.0: A Continual Pre-training Framework for Language Understanding | Code | 3
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | Code | 3
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations | Code | 2
Fietje: An open, efficient LLM for Dutch | Code | 2
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | Code | 2
DeBERTa: Decoding-enhanced BERT with Disentangled Attention | Code | 2
Masked Language Model Scoring | Code | 1
JCoLA: Japanese Corpus of Linguistic Acceptability | Code | 1
Q8BERT: Quantized 8Bit BERT | Code | 1
How to Train BERT with an Academic Budget | Code | 1
ChatGPT: Jack of all trades, master of none | Code | 1
A Statistical Framework for Low-bitwidth Training of Deep Neural Networks | Code | 1
RealFormer: Transformer Likes Residual Attention | Code | 1
Charformer: Fast Character Transformers via Gradient-based Subword Tokenization | Code | 1
FNet: Mixing Tokens with Fourier Transforms | Code | 1
LM-CPPF: Paraphrasing-Guided Data Augmentation for Contrastive Prompt-Based Few-Shot Fine-Tuning | Code | 1
Big Bird: Transformers for Longer Sequences | Code | 1
data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language | Code | 1
GeDi: Generative Discriminator Guided Sequence Generation | Code | 1
Learning to Encode Position for Transformer with Continuous Dynamical Model | Code | 1
DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter | Code | 1
On the Robustness of Language Encoders against Grammatical Errors | Code | 1
Entailment as Few-Shot Learner | Code | 1
RoBERTa: A Robustly Optimized BERT Pretraining Approach | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | En-BERT + TDA + PCA | Accuracy | 88.6 | — | Unverified
2 | BERT+TDA | Accuracy | 88.2 | — | Unverified
3 | RoBERTa+TDA | Accuracy | 87.3 | — | Unverified
4 | deberta-v3-base+tasksource | Accuracy | 87.15 | — | Unverified
5 | RoBERTa-large 355M + Entailment as Few-shot Learner | Accuracy | 86.4 | — | Unverified
6 | LTG-BERT-base 98M | Accuracy | 82.7 | — | Unverified
7 | ELC-BERT-base 98M | Accuracy | 82.6 | — | Unverified
8 | En-BERT + TDA | Accuracy | 82.1 | — | Unverified
9 | FNet-Large | Accuracy | 78 | — | Unverified
10 | LTG-BERT-small 24M | Accuracy | 77.6 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Ru-RoBERTa+TDA | MCC | 0.59 | — | Unverified
2 | ruRoBERTa | MCC | 0.53 | — | Unverified
3 | Ru-BERT+TDA | MCC | 0.48 | — | Unverified
4 | RemBERT | MCC | 0.44 | — | Unverified
5 | ruBERT | MCC | 0.42 | — | Unverified
6 | ruGPT-3 | MCC | 0.3 | — | Unverified
7 | ruT5 | MCC | 0.25 | — | Unverified
8 | mBERT | MCC | 0.15 | — | Unverified
9 | XLM-R | MCC | 0.13 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | En-BERT + TDA | Accuracy | 88.6 | — | Unverified
2 | XLM-R (pre-trained) + TDA | Accuracy | 73 | — | Unverified
3 | DeBERTa (large) | Accuracy | 69.5 | — | Unverified
4 | TinyBERT-6 67M | Accuracy | 54 | — | Unverified
5 | Synthesizer (R+V) | Accuracy | 53.3 | — | Unverified
6 | En-BERT (pre-trained) + TDA | MCC | 0.42 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | XLM-R + TDA | MCC | 0.68 | — | Unverified
2 | XLM-R | MCC | 0.52 | — | Unverified
3 | It-BERT (pre-trained) + TDA | MCC | 0.48 | — | Unverified
4 | mBERT | MCC | 0.36 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Sw-BERT + H0M | Accuracy | 76.9 | — | Unverified
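The tables above report two metrics: Accuracy and MCC (Matthews correlation coefficient). MCC is the standard metric for CoLA-style acceptability benchmarks because the acceptable/unacceptable classes are imbalanced; it ranges from −1 to 1, with 0 meaning no better than chance. As a minimal stdlib-only sketch (the toy labels below are invented for illustration):

```python
import math

def matthews_corrcoef(y_true, y_pred):
    """MCC for binary labels, where 1 = acceptable and 0 = unacceptable."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # Convention: return 0 when any confusion-matrix margin is empty.
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

# Hypothetical judgments over 8 sentences (not from any benchmark above).
y_true = [1, 1, 0, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
print(matthews_corrcoef(y_true, y_pred))  # 0.5
```

Note how the same predictions score 0.75 accuracy but only 0.5 MCC; on a skewed label distribution the gap widens, which is why a classifier that always predicts "acceptable" can have high accuracy yet an MCC of 0.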