BERT has a Mouth, and It Must Speak: BERT as a Markov Random Field Language Model
2019-02-11 · WS 2019
Alex Wang, Kyunghyun Cho
Code
- github.com/nyu-dl/bert-gen (official, in paper; PyTorch) ★ 0
- github.com/kyunghyuncho/bert-gen (official, in paper; PyTorch) ★ 0
- github.com/MS-P3/code6 (MindSpore) ★ 0
- github.com/gandharvsuri/Conditional-Generative-model-on-BERT (no framework listed) ★ 0
- github.com/FrogTravel/PMLDL (PyTorch) ★ 0
- github.com/swastishreya/BERT_as_a_conditional_generative_model (PyTorch) ★ 0
- github.com/2023-MindSpore-1/ms-code-12/tree/main/Bert (MindSpore) ★ 0
- github.com/JuanJoseMV/neuraltextgen (PyTorch) ★ 0
- github.com/woailaosang/repo_treasure (no framework listed) ★ 0
- github.com/MS-P3/code7/tree/main/visual_bert (MindSpore) ★ 0
Abstract
We show that BERT (Devlin et al., 2018) is a Markov random field language model. This formulation gives way to a natural procedure to sample sentences from BERT. We generate from BERT and find that it can produce high-quality, fluent generations. Compared to the generations of a traditional left-to-right language model, BERT generates sentences that are more diverse but of slightly worse quality.
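The "natural procedure" the abstract refers to is a Gibbs-sampling-style loop: repeatedly mask a position and resample it from BERT's conditional distribution over the vocabulary. Below is a minimal sketch of such a sampler, assuming the Hugging Face `transformers` API rather than the code in the repositories above; the model name, sequence length, iteration count, and the uniformly random position schedule are all illustrative choices, not the paper's exact configuration.

```python
# Sketch: sample a sentence from BERT by Gibbs sampling, per the MRF view.
# Assumes the Hugging Face `transformers` library; hyperparameters are illustrative.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

seq_len, n_iters = 10, 200  # illustrative: 10 sampled tokens, 200 Gibbs steps
mask_id = tokenizer.mask_token_id

# Initialize with all [MASK] tokens between [CLS] and [SEP].
ids = torch.full((1, seq_len), mask_id, dtype=torch.long)
ids = torch.cat([
    torch.tensor([[tokenizer.cls_token_id]]),
    ids,
    torch.tensor([[tokenizer.sep_token_id]]),
], dim=1)

with torch.no_grad():
    for _ in range(n_iters):
        # Pick a position uniformly at random (excluding [CLS]/[SEP]),
        # re-mask it, and resample it from BERT's conditional distribution.
        pos = torch.randint(1, seq_len + 1, (1,)).item()
        ids[0, pos] = mask_id
        logits = model(input_ids=ids).logits
        probs = torch.softmax(logits[0, pos], dim=-1)
        ids[0, pos] = torch.multinomial(probs, 1).item()

print(tokenizer.decode(ids[0], skip_special_tokens=True))
```

Resampling positions in uniformly random order is one option; the paper also considers sequential (left-to-right) masking schedules, which trade some diversity for fluency.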