SOTAVerified

Improving Estonian Text Simplification through Pretrained Language Models and Custom Datasets

2025-01-26

Eduard Barbu, Meeri-Ly Muru, Sten Marcus Malva

Abstract

This study presents an approach to Estonian text simplification using two model architectures: a neural machine translation model and a fine-tuned large language model (LLaMA). Because resources for Estonian are limited, we built a new corpus, the Estonian Simplification Dataset, which combines translated data with GPT-4.0-generated simplifications. We benchmarked OpenNMT, a neural machine translation framework that frames text simplification as a translation task, and fine-tuned LLaMA on our dataset to tailor it specifically to Estonian simplification. Manual evaluation on the test set shows that the LLaMA model consistently outperforms OpenNMT in readability, grammaticality, and meaning preservation. These findings underscore the potential of large language models for low-resource languages and provide a basis for further research on Estonian text simplification.
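Framing simplification as translation, as the abstract describes, implies a parallel corpus of complex→simple sentence pairs. The sketch below (file names, function name, and the example pair are illustrative, not taken from the paper's dataset) shows one conventional way to lay such pairs out as aligned one-sentence-per-line source/target files, the format OpenNMT-style seq2seq toolkits consume:

```python
from pathlib import Path

def write_parallel_files(pairs, src_path, tgt_path):
    """Write aligned source (complex) and target (simple) files,
    one sentence per line, as NMT toolkits such as OpenNMT expect."""
    # Newlines inside a sentence would break line alignment, so flatten them.
    src_lines = [complex_s.replace("\n", " ") for complex_s, _ in pairs]
    tgt_lines = [simple_s.replace("\n", " ") for _, simple_s in pairs]
    Path(src_path).write_text("\n".join(src_lines) + "\n", encoding="utf-8")
    Path(tgt_path).write_text("\n".join(tgt_lines) + "\n", encoding="utf-8")
    return len(src_lines)

# Illustrative Estonian pair (invented for this sketch, not from the dataset):
pairs = [
    ("Hoolimata keerulistest ilmastikutingimustest jätkati tööd.",
     "Ilm oli halb, aga töö jätkus."),
]
n = write_parallel_files(pairs, "train.src", "train.tgt")
```

The two files stay usable by any seq2seq trainer because line *i* of `train.src` is always paired with line *i* of `train.tgt`.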
