
Text Infilling

Text infilling is the task of predicting missing spans of text that are consistent with both the preceding and subsequent text. It generalizes the cloze task, which historically refers to infilling individual words.

Source: Enabling Language Models to Fill in the Blanks
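The cited source paper frames infilling as reordering the missing span behind its masked context. A minimal sketch of that data format, with hypothetical token names (`[blank]`, `[sep]`, `[answer]` stand in for whatever special tokens a given implementation uses):

```python
BLANK, SEP, ANS = "[blank]", "[sep]", "[answer]"

def make_example(text: str, span: str) -> str:
    """Mask `span` in `text` and build an infilling training string.

    Sketch of the blank-then-answer format described in
    "Enabling Language Models to Fill in the Blanks"; the exact
    special tokens here are illustrative assumptions.
    """
    start = text.index(span)
    masked = text[:start] + BLANK + text[start + len(span):]
    return f"{masked} {SEP} {span} {ANS}"

def reconstruct(example: str) -> str:
    """Invert make_example: splice the generated answer back into the blank."""
    masked, answer = example.split(f" {SEP} ")
    answer = answer.removesuffix(f" {ANS}").strip()
    return masked.replace(BLANK, answer, 1)

ex = make_example("She ate cereal for breakfast.", "cereal")
# ex == "She ate [blank] for breakfast. [sep] cereal [answer]"
assert reconstruct(ex) == "She ate cereal for breakfast."
```

At training time a language model sees the full concatenated string; at inference time it is given everything up to `[sep]` and generates the span, which `reconstruct` splices back into place.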

Papers

Showing 43 of 43 papers

Title | Code | Hype
The Devil behind the mask: An emergent safety vulnerability of Diffusion LLMs | Code | 2
Flexible-length Text Infilling for Discrete Diffusion Models | — | 0
LaViDa: A Large Diffusion Language Model for Multimodal Understanding | Code | 3
Insertion Language Models: Sequence Generation with Arbitrary-Position Insertions | — | 0
Enhancing Spoken Discourse Modeling in Language Models Using Gestural Cues | — | 0
TrajGPT: Controlled Synthetic Trajectory Generation Using a Multitask Transformer-Based Spatiotemporal Model | Code | 0
Empowering Character-level Text Infilling by Eliminating Sub-Tokens | Code | 0
Towards Probabilistically-Sound Beam Search with Masked Language Models | Code | 0
A Benchmark for Text Expansion: Datasets, Metrics, and Baselines | — | 0
A Simple yet Effective Framework for Few-Shot Aspect-Based Sentiment Analysis | Code | 1
Having Beer after Prayer? Measuring Cultural Bias in Large Language Models | Code | 1
Sequence-to-Sequence Pre-training with Unified Modality Masking for Visual Document Understanding | — | 0
MAGVLT: Masked Generative Vision-and-Language Transformer | Code | 1
Model-tuning Via Prompts Makes NLP Models Adversarially Robust | Code | 0
Don't Prompt, Search! Mining-based Zero-Shot Learning with Language Models | — | 0
Generative Prompt Tuning for Relation Classification | Code | 1
MetaFill: Text Infilling for Meta-Path Generation on Heterogeneous Information Networks | Code | 0
Reprogramming Pretrained Language Models for Antibody Sequence Infilling | Code | 1
Coordination Generation via Synchronized Text-Infilling | — | 0
A-TIP: Attribute-aware Text Infilling via Pre-trained Language Model | — | 0
Building a Knowledge-Based Dialogue System with Text Infilling | — | 0
Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models | Code | 1
On the Role of Bidirectionality in Language Model Pre-Training | — | 0
BiTIIMT: A Bilingual Text-infilling Method for Interactive Machine Translation | — | 0
Predicting scalar diversity with context-driven uncertainty over alternatives | — | 0
CTRLEval: An Unsupervised Reference-Free Metric for Evaluating Controlled Text Generation | Code | 1
Language modeling via stochastic processes | Code | 1
Generative Prompt Tuning for Relation Classification | — | 0
Conformal prediction for text infilling and part-of-speech prediction | Code | 0
Show Me How To Revise: Improving Lexically Constrained Sentence Generation with XLNet | Code | 0
LOT: A Story-Centric Benchmark for Evaluating Chinese Long Text Understanding and Generation | Code | 1
Nutri-bullets Hybrid: Consensual Multi-document Summarization | — | 0
Nutribullets Hybrid: Multi-document Health Summarization | Code | 0
InFillmore: Frame-Guided Language Generation with Bidirectional Context | — | 0
Improving Sequence-to-Sequence Pre-training via Sequence Span Rewriting | Code | 1
Reflective Decoding: Beyond Unidirectional Generation with Off-the-Shelf Language Models | — | 0
Back to the Future: Unsupervised Backprop-based Decoding for Counterfactual and Abductive Commonsense Reasoning | Code | 1
Enabling Language Models to Fill in the Blanks | Code | 1
Decoding As Dynamic Programming For Recurrent Autoregressive Models | — | 0
Keep Calm and Switch On! Preserving Sentiment and Fluency in Semantic Text Exchange | Code | 0
"Mask and Infill": Applying Masked Language Model to Sentiment Transfer | — | 0
TIGS: An Inference Algorithm for Text Infilling with Gradient Search | Code | 0
Text Infilling | Code | 0

No leaderboard results yet.