
Multi-FAct: Assessing Factuality of Multilingual LLMs using FActScore

2024-02-28 · Code Available

Sheikh Shafayat, Eunsu Kim, Juhyun Oh, Alice Oh


Abstract

Evaluating the factuality of long-form text generated by large language models (LLMs) is an important challenge. There has recently been a surge of interest in factuality evaluation for English, but little is known about factuality evaluation for multilingual LLMs, especially when it comes to long-form generation. This paper systematically evaluates multilingual LLMs' factual accuracy across languages and geographic regions. We introduce a simple pipeline for multilingual factuality evaluation by applying FActScore (Min et al., 2023) to diverse languages. In addition to evaluating multilingual factual generation, we evaluate the factual accuracy of long-form generation on topics that reflect regional diversity. We also examine the feasibility of running the FActScore pipeline with non-English Wikipedia and provide comprehensive guidelines on multilingual factual evaluation for regionally diverse topics.
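At its core, FActScore decomposes a generation into atomic facts and reports the fraction of those facts supported by a knowledge source (e.g. a Wikipedia dump in the target language). Below is a minimal sketch of that final scoring step; the function name, the toy knowledge source, and the exact-match support check are illustrative assumptions, not the paper's implementation (which uses an LLM-based fact decomposer and verifier):

```python
def factscore(atomic_facts, is_supported):
    """Fraction of atomic facts supported by the knowledge source.

    atomic_facts: list of short, self-contained claims extracted from
                  a long-form generation.
    is_supported: callable mapping a fact to True/False; in the real
                  pipeline this is an LLM verifier grounded in Wikipedia.
    """
    if not atomic_facts:
        return 0.0
    return sum(1 for fact in atomic_facts if is_supported(fact)) / len(atomic_facts)


# Toy stand-in for a (possibly non-English) Wikipedia knowledge source.
knowledge = {
    "Marie Curie was born in Warsaw.",
    "Marie Curie won two Nobel Prizes.",
}

facts = [
    "Marie Curie was born in Warsaw.",
    "Marie Curie won two Nobel Prizes.",
    "Marie Curie was born in 1900.",  # unsupported claim
]

score = factscore(facts, lambda f: f in knowledge)
print(round(score, 3))  # → 0.667
```

Swapping the knowledge source (e.g. English vs. regional-language Wikipedia) while holding the scorer fixed is what makes per-language comparisons possible.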
