
Linguistic Acceptability

Linguistic Acceptability is the task of determining whether a sentence is grammatical or ungrammatical.

Image source: Warstadt et al.
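
In practice, acceptability judgment is usually framed as binary sentence classification with a fine-tuned encoder. Below is a minimal sketch using the Hugging Face transformers library; the checkpoint name is an illustrative assumption (any CoLA-fine-tuned classifier would work), not a specific entry from the leaderboards below.

```python
# Minimal sketch of binary acceptability classification.
# Assumption: a CoLA-fine-tuned checkpoint such as
# "textattack/bert-base-uncased-CoLA" (illustrative choice).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "textattack/bert-base-uncased-CoLA"  # hypothetical checkpoint choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

sentences = [
    "The cat sat on the mat.",   # acceptable
    "The cat sat mat the on.",   # unacceptable
]

inputs = tokenizer(sentences, padding=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# For CoLA-style labels, 1 = acceptable and 0 = unacceptable.
for sentence, pred in zip(sentences, logits.argmax(dim=-1)):
    print(pred.item(), sentence)
```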

Papers

Showing 1–50 of 72 papers

| Title | Status | Hype |
| --- | --- | --- |
| LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale | Code | 5 |
| BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | Code | 3 |
| ERNIE 2.0: A Continual Pre-training Framework for Language Understanding | Code | 3 |
| Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | Code | 2 |
| Fietje: An open, efficient LLM for Dutch | Code | 2 |
| ALBERT: A Lite BERT for Self-supervised Learning of Language Representations | Code | 2 |
| DeBERTa: Decoding-enhanced BERT with Disentangled Attention | Code | 2 |
| Q8BERT: Quantized 8Bit BERT | Code | 1 |
| A Statistical Framework for Low-bitwidth Training of Deep Neural Networks | Code | 1 |
| Big Bird: Transformers for Longer Sequences | Code | 1 |
| Charformer: Fast Character Transformers via Gradient-based Subword Tokenization | Code | 1 |
| ChatGPT: Jack of all trades, master of none | Code | 1 |
| data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language | Code | 1 |
| DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter | Code | 1 |
| Entailment as Few-Shot Learner | Code | 1 |
| FNet: Mixing Tokens with Fourier Transforms | Code | 1 |
| GeDi: Generative Discriminator Guided Sequence Generation | Code | 1 |
| How to Train BERT with an Academic Budget | Code | 1 |
| RealFormer: Transformer Likes Residual Attention | Code | 1 |
| JCoLA: Japanese Corpus of Linguistic Acceptability | Code | 1 |
| Learning to Encode Position for Transformer with Continuous Dynamical Model | Code | 1 |
| LM-CPPF: Paraphrasing-Guided Data Augmentation for Contrastive Prompt-Based Few-Shot Fine-Tuning | Code | 1 |
| On the Robustness of Language Encoders against Grammatical Errors | Code | 1 |
| Masked Language Model Scoring | Code | 1 |
| RoBERTa: A Robustly Optimized BERT Pretraining Approach | Code | 1 |
| RuCoLA: Russian Corpus of Linguistic Acceptability | Code | 1 |
| ScandEval: A Benchmark for Scandinavian Natural Language Processing | Code | 1 |
| SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | Code | 1 |
| Synthesizer: Rethinking Self-Attention in Transformer Models | Code | 1 |
| tasksource: A Dataset Harmonization Framework for Streamlined NLP Multi-Task Learning and Evaluation | Code | 1 |
| Towards Debiasing Sentence Representations | Code | 1 |
| Linguistic Analysis of Pretrained Sentence Encoders with Acceptability Judgments | – | 0 |
| Grammatical Analysis of Pretrained Sentence Encoders with Acceptability Judgments | – | 0 |
| Grammaticality and Language Modelling | – | 0 |
| StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding | – | 0 |
| How well can machine-generated texts be identified and can language models be trained to avoid identification? | – | 0 |
| Robust ASR Error Correction with Conservative Data Filtering | – | 0 |
| Using Integrated Gradients and Constituency Parse Trees to explain Linguistic Acceptability learnt by BERT | – | 0 |
| Learning Phonotactics from Linguistic Informants | – | 0 |
| What Would Elsa Do? Freezing Layers During Transformer Fine-Tuning | – | 0 |
| Not all layers are equally as important: Every Layer Counts BERT | – | 0 |
| CLEAR: Contrastive Learning for Sentence Representation | – | 0 |
| Cross-Architecture Distillation Using Bidirectional CMOW Embeddings | – | 0 |
| DaLAJ - a dataset for linguistic acceptability judgments for Swedish: Format, baseline, sharing | – | 0 |
| DaLAJ – a dataset for linguistic acceptability judgments for Swedish | – | 0 |
| Data-Free Distillation of Language Model by Text-to-Text Transfer | – | 0 |
| Defense of Adversarial Ranking Attack in Text Retrieval: Benchmark and Baseline via Detection | – | 0 |
| Dissecting Bias in LLMs: A Mechanistic Interpretability Perspective | – | 0 |
| Q-BERT: Hessian Based Ultra Low Precision Quantization of BERT | – | 0 |
| Rating Distributions and Bayesian Inference: Enhancing Cognitive Models of Spatial Language Use | – | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | En-BERT + TDA + PCA | Accuracy | 88.6 | – | Unverified |
| 2 | BERT+TDA | Accuracy | 88.2 | – | Unverified |
| 3 | RoBERTa+TDA | Accuracy | 87.3 | – | Unverified |
| 4 | deberta-v3-base+tasksource | Accuracy | 87.15 | – | Unverified |
| 5 | RoBERTa-large 355M + Entailment as Few-shot Learner | Accuracy | 86.4 | – | Unverified |
| 6 | LTG-BERT-base 98M | Accuracy | 82.7 | – | Unverified |
| 7 | ELC-BERT-base 98M | Accuracy | 82.6 | – | Unverified |
| 8 | En-BERT + TDA | Accuracy | 82.1 | – | Unverified |
| 9 | FNet-Large | Accuracy | 78 | – | Unverified |
| 10 | LTG-BERT-small 24M | Accuracy | 77.6 | – | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Ru-RoBERTa+TDA | MCC | 0.59 | – | Unverified |
| 2 | ruRoBERTa | MCC | 0.53 | – | Unverified |
| 3 | Ru-BERT+TDA | MCC | 0.48 | – | Unverified |
| 4 | RemBERT | MCC | 0.44 | – | Unverified |
| 5 | ruBERT | MCC | 0.42 | – | Unverified |
| 6 | ruGPT-3 | MCC | 0.3 | – | Unverified |
| 7 | ruT5 | MCC | 0.25 | – | Unverified |
| 8 | mBERT | MCC | 0.15 | – | Unverified |
| 9 | XLM-R | MCC | 0.13 | – | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | En-BERT + TDA | Accuracy | 88.6 | – | Unverified |
| 2 | XLM-R (pre-trained) + TDA | Accuracy | 73 | – | Unverified |
| 3 | DeBERTa (large) | Accuracy | 69.5 | – | Unverified |
| 4 | TinyBERT-6 67M | Accuracy | 54 | – | Unverified |
| 5 | Synthesizer (R+V) | Accuracy | 53.3 | – | Unverified |
| 6 | En-BERT (pre-trained) + TDA | MCC | 0.42 | – | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | XLM-R + TDA | MCC | 0.68 | – | Unverified |
| 2 | XLM-R | MCC | 0.52 | – | Unverified |
| 3 | It-BERT (pre-trained) + TDA | MCC | 0.48 | – | Unverified |
| 4 | mBERT | MCC | 0.36 | – | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Sw-BERT + H0M | Accuracy | 76.9 | – | Unverified |
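
Note on metrics: several of the leaderboards above report the Matthews correlation coefficient (MCC) rather than accuracy. Acceptability corpora such as CoLA are skewed toward acceptable sentences, and MCC, defined as (TP·TN - FP·FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN)), ranges from -1 to 1 and stays at 0 for chance-level predictors regardless of class balance. A minimal sketch of how such a score is computed, using scikit-learn with made-up labels:

```python
# Minimal sketch of computing MCC with scikit-learn;
# the labels below are fabricated for illustration only.
from sklearn.metrics import matthews_corrcoef

y_true = [1, 1, 0, 1, 0, 0, 1, 1]  # gold labels: 1 = acceptable, 0 = unacceptable
y_pred = [1, 0, 0, 1, 0, 1, 1, 1]  # hypothetical model predictions

# 1.0 = perfect agreement, 0.0 = chance level, -1.0 = total disagreement
print(matthews_corrcoef(y_true, y_pred))
```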