SOTAVerified

Data-to-Text Generation

A classic problem in natural-language generation (NLG) is to take structured data, such as a table, as input and produce text that adequately and fluently describes that data as output. Unlike machine translation, which aims for complete transduction of the sentence to be translated, this form of NLG is usually taken to involve (at least) two separate challenges: what to say, the selection of an appropriate subset of the input data to discuss, and how to say it, the surface realization of the selected content.

(Image credit: Data-to-Text Generation with Content Selection and Planning)
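A minimal sketch of the two stages described above, using a hypothetical record format (subject/relation/value triples with precomputed salience scores) and a simple template-based realizer; real systems learn both stages from data:

```python
# Hypothetical two-stage data-to-text sketch:
# content selection picks a subset of records, surface realization verbalizes it.

def select_content(records, max_facts=2):
    """What to say: keep the highest-salience records.
    (Salience scores here are assumed to come with the input.)"""
    ranked = sorted(records, key=lambda r: r["salience"], reverse=True)
    return ranked[:max_facts]

def realize(records):
    """How to say it: a toy template-based surface realizer."""
    clauses = [f'{r["subject"]} {r["relation"]} {r["value"]}' for r in records]
    return (" and ".join(clauses) + ".").capitalize()

# Example input table, flattened to records:
table = [
    {"subject": "the team", "relation": "scored", "value": "102 points", "salience": 0.9},
    {"subject": "attendance", "relation": "was", "value": "18,000", "salience": 0.4},
    {"subject": "the arena", "relation": "opened in", "value": "1995", "salience": 0.1},
]

print(realize(select_content(table)))
# → "The team scored 102 points and attendance was 18,000."
```

The low-salience record (the arena's opening year) is dropped by content selection, illustrating that faithful generation does not require verbalizing every input fact.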

Papers

Showing 1–10 of 219 papers

| Title | Status | Hype |
|---|---|---|
| Large Language Models as Span Annotators | — | 0 |
| SCOPE: A Self-supervised Framework for Improving Faithfulness in Conditional Text Generation | — | 0 |
| Evaluation of NMT-Assisted Grammar Transfer for a Multi-Language Configurable Data-to-Text System | — | 0 |
| Curriculum Learning for Cross-Lingual Data-to-Text Generation With Noisy Data | — | 0 |
| An Extensive Evaluation of Factual Consistency in Large Language Models for Data-to-Text Generation | — | 0 |
| Ontology-Free General-Domain Knowledge Graph-to-Text Generation Dataset Synthesis using Large Language Model | Code | 1 |
| Impact of Model Size on Fine-tuned LLM Performance in Data-to-Text Generation: A State-of-the-Art Investigation | — | 0 |
| Modeling Comparative Logical Relation with Contrastive Learning for Text Generation | — | 0 |
| SPOR: A Comprehensive and Practical Evaluation Method for Compositional Generalization in Data-to-Text Generation | Code | 0 |
| Bridging the Gap between Different Vocabularies for LLM Ensemble | Code | 1 |
Page 1 of 22

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Fact-aware embedding with mT5 | BLEU-4 | 29.27 | — | Unverified |
| 2 | Bi-lingual mT5 | BLEU-4 | 25.88 | — | Unverified |
| 3 | mT5 | BLEU-4 | 25 | — | Unverified |
| 4 | Vanilla Transformer | BLEU-4 | 19.9 | — | Unverified |
| 5 | Translate-Output mT5 | BLEU-4 | 18.91 | — | Unverified |
| 6 | Graph Attention Network Encoder + Transformer Decoder | BLEU-4 | 18.3 | — | Unverified |
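The scores above are BLEU-4: the geometric mean of clipped 1- to 4-gram precisions times a brevity penalty, scaled to 0–100. A minimal single-reference sketch (a simplified stand-in for standard implementations such as NLTK's `sentence_bleu`; the epsilon smoothing for zero n-gram matches is an assumption, not part of the original metric):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu4(reference, hypothesis):
    """Sentence-level BLEU-4 against a single reference, on a 0-100 scale."""
    ref, hyp = reference.split(), hypothesis.split()
    log_prec_sum = 0.0
    for n in range(1, 5):
        hyp_counts = Counter(ngrams(hyp, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clipped precision: each hypothesis n-gram counts at most as
        # often as it appears in the reference.
        clipped = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        total = max(sum(hyp_counts.values()), 1)
        # Tiny epsilon avoids log(0) when no n-gram matches (assumed smoothing).
        log_prec_sum += math.log(max(clipped, 1e-9) / total)
    # Brevity penalty discourages overly short hypotheses.
    bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    return 100 * bp * math.exp(log_prec_sum / 4)

print(bleu4("the team scored 102 points", "the team scored 102 points"))
# → 100.0 (perfect match)
```

Corpus-level BLEU, as reported in leaderboards like the one above, aggregates n-gram counts over the whole test set before taking precisions, so sentence-level scores from a sketch like this will not match reported numbers exactly.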