SOTAVerified

Data-to-Text Generation

A classic problem in natural-language generation (NLG) is to take structured data, such as a table, as input and produce text that adequately and fluently describes that data. Unlike machine translation, which aims for complete transduction of the source sentence, this form of NLG is usually taken to involve (at least) two separate challenges: what to say, the selection of an appropriate subset of the input data to discuss, and how to say it, the surface realization of the selected content.

(Image credit: Data-to-Text Generation with Content Selection and Planning)
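The two challenges above can be illustrated with a toy pipeline; the record fields, salience scores, and templates below are illustrative assumptions, not taken from any system listed on this page:

```python
# Toy data-to-text pipeline: content selection ("what to say")
# followed by template-based surface realization ("how to say it").
# All field names, scores, and templates are hypothetical.

RECORD = {  # one row of a made-up sports box-score table
    "team": "Rockets",
    "points": 108,
    "rebounds": 44,
    "turnovers": 19,
    "arena_capacity": 18055,
}

# "What to say": keep only fields judged salient enough to mention.
SALIENCE = {"team": 1.0, "points": 0.9, "rebounds": 0.5,
            "turnovers": 0.4, "arena_capacity": 0.1}

def select_content(record, threshold=0.3):
    """Return the subset of fields whose salience clears the threshold."""
    return {k: v for k, v in record.items() if SALIENCE.get(k, 0) >= threshold}

# "How to say it": map each selected field to a text fragment.
TEMPLATES = {
    "points": "scored {} points",
    "rebounds": "grabbed {} rebounds",
    "turnovers": "committed {} turnovers",
}

def realize(selected):
    """Stitch per-field fragments into one sentence."""
    team = selected.pop("team", "The team")
    fragments = [TEMPLATES[k].format(v) for k, v in selected.items() if k in TEMPLATES]
    return f"The {team} " + ", ".join(fragments) + "."

print(realize(select_content(RECORD)))
# → The Rockets scored 108 points, grabbed 44 rebounds, committed 19 turnovers.
```

Real systems replace both hand-written pieces with learned components (e.g. a content planner and a neural decoder), but the two-stage decomposition is the same.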

Papers

Showing 1-10 of 219 papers

Title | Status | Hype
Large Language Models as Span Annotators | - | 0
SCOPE: A Self-supervised Framework for Improving Faithfulness in Conditional Text Generation | - | 0
Evaluation of NMT-Assisted Grammar Transfer for a Multi-Language Configurable Data-to-Text System | - | 0
Curriculum Learning for Cross-Lingual Data-to-Text Generation With Noisy Data | - | 0
An Extensive Evaluation of Factual Consistency in Large Language Models for Data-to-Text Generation | - | 0
Ontology-Free General-Domain Knowledge Graph-to-Text Generation Dataset Synthesis using Large Language Model | Code | 1
Impact of Model Size on Fine-tuned LLM Performance in Data-to-Text Generation: A State-of-the-Art Investigation | - | 0
Modeling Comparative Logical Relation with Contrastive Learning for Text Generation | - | 0
SPOR: A Comprehensive and Practical Evaluation Method for Compositional Generalization in Data-to-Text Generation | Code | 0
Bridging the Gap between Different Vocabularies for LLM Ensemble | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | T5-3B | BLEU | 49.5 | - | Unverified
2 | LATTICE (T5-base) | BLEU | 48.4 | - | Unverified
3 | BERT-to-BERT | BLEU | 44 | - | Unverified
4 | Pointer Generator | BLEU | 41.6 | - | Unverified
5 | NCP+CC (Puduppully et al., 2019) | BLEU | 19.2 | - | Unverified
6 | T5 | METEOR | 0.36 | - | Unverified
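The BLEU scores above combine modified (clipped) n-gram precision with a brevity penalty. A minimal single-sentence sketch in pure Python, with naive smoothing rather than the standard tokenization and smoothing of tools like sacrebleu:

```python
# Didactic single-sentence BLEU: clipped n-gram precisions up to 4-grams,
# uniform weights, and a brevity penalty. Not a replacement for sacrebleu.
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    cand, ref = candidate.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clip each candidate n-gram count by its count in the reference.
        clipped = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        # Crude smoothing so a zero precision does not send log() to -inf.
        log_precisions.append(math.log(max(clipped, 1e-9) / total))
    # Brevity penalty: punish candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(log_precisions) / max_n)

print(bleu("the cat sat on the mat", "the cat sat on the mat"))  # → 1.0
```

Leaderboard BLEU is computed at the corpus level (pooling n-gram statistics over all sentences before combining), so per-sentence scores like this sketch do not average to the reported numbers.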