SOTAVerified

Few-shot fine-tuning SOTA summarization models for medical dialogues

2022-07-01 · NAACL (ACL) 2022

David Fraile Navarro, Mark Dras, Shlomo Berkovsky


Abstract

Abstractive summarization of medical dialogues presents a challenge for standard training approaches, given the paucity of suitable datasets. We explore the performance of state-of-the-art models with zero-shot and few-shot learning strategies, and measure the impact of pretraining on general-domain and dialogue-specific text on summarization performance.
