PICARD: Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models
Torsten Scholak, Nathan Schucher, Dzmitry Bahdanau
Code
- github.com/ElementAI/picard (official) ★ 378
- github.com/servicenow/picard ★ 378
Abstract
Large pre-trained language models for textual data have an unconstrained output space; at each decoding step, they can produce any of tens of thousands of sub-word tokens. When fine-tuned to target constrained formal languages like SQL, these models often generate invalid code, rendering it unusable. We propose PICARD (code and trained models available at https://github.com/ElementAI/picard), a method for constraining auto-regressive decoders of language models through incremental parsing. PICARD helps to find valid output sequences by rejecting inadmissible tokens at each decoding step. On the challenging Spider and CoSQL text-to-SQL translation tasks, we show that PICARD transforms fine-tuned T5 models with passable performance into state-of-the-art solutions.
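The core idea of rejecting inadmissible tokens at each decoding step can be illustrated with a toy sketch. This is not PICARD's actual implementation (which incrementally parses SQL during beam search); here a hypothetical balanced-parentheses check stands in for the incremental parser, and a made-up scoring function stands in for the language model.

```python
# Toy sketch of parser-constrained greedy decoding (all names hypothetical).
# At each step, candidate tokens are ranked by model score, and any token
# that would make the prefix invalid is rejected before selection.

def is_valid_prefix(tokens):
    """Stand-in for an incremental parser: accept prefixes whose
    parentheses never close more than they have opened."""
    depth = 0
    for t in tokens:
        if t == "(":
            depth += 1
        elif t == ")":
            depth -= 1
            if depth < 0:
                return False
    return True

def constrained_greedy_decode(score_fn, vocab, max_len):
    """Greedy decoding that only considers tokens keeping the prefix valid."""
    out = []
    for _ in range(max_len):
        # Rank candidate tokens by model score, best first.
        ranked = sorted(vocab, key=lambda t: score_fn(out, t), reverse=True)
        # Reject inadmissible tokens; pick the best admissible one.
        chosen = next((t for t in ranked if is_valid_prefix(out + [t])), None)
        if chosen is None or chosen == "<eos>":
            break
        out.append(chosen)
    return out

# Toy "model" that always prefers ")": an unconstrained greedy decoder
# would emit it immediately and produce an invalid sequence.
def toy_score(prefix, token):
    return {")": 3.0, "(": 2.0, "x": 1.0, "<eos>": 0.5}[token]

seq = constrained_greedy_decode(toy_score, ["(", ")", "x", "<eos>"], max_len=4)
# The constrained decoder is forced to open before it closes.
```

In practice this check would run on every hypothesis in the beam, and the parser would accept or reject partial SQL rather than parentheses.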
Tasks
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| Spider | T5-3B + PICARD | Accuracy | 71.9 | — | Unverified |