AdaptEval: Evaluating Large Language Models on Domain Adaptation for Text Summarization
Anum Afzal, Ribin Chalumattu, Florian Matthes, Laura Mascarell
Code: github.com/anum94/adapteval
Abstract
Despite the advances in abstractive summarization using Large Language Models (LLMs), there is a lack of research assessing their ability to adapt to different domains. We evaluate the domain adaptation abilities of a wide range of LLMs on the summarization task across various domains in both fine-tuning and in-context learning settings. We also present AdaptEval, the first domain adaptation evaluation suite. AdaptEval includes a domain benchmark and a set of metrics to facilitate the analysis of domain adaptation. Our results demonstrate that LLMs exhibit comparable performance in the in-context learning setting, regardless of their parameter scale.