
JAPAGEN: Efficient Few/Zero-shot Learning via Japanese Training Dataset Generation with LLM

2024-12-09

Takuro Fujii, Satoru Katsumata


Abstract

Recently, several studies have highlighted the potential of Large Language Models (LLMs) as effective generators of supervised training data, offering advantages such as improved inference efficiency and reduced data-collection costs. However, these studies have focused predominantly on English tasks. In this paper, we address the fundamental research question: can LLMs serve as proficient training data generators for tasks in other languages? Specifically, we leverage LLMs to synthesize supervised training data under few-shot and zero-shot learning scenarios across six diverse Japanese downstream tasks, and we then use this synthetic data to train compact models (e.g., BERT). We term this methodology JAPAGEN. Our experiments show that JAPAGEN performs robustly on classification tasks that require formal text inputs, achieving results competitive with conventional LLM prompting strategies.
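The abstract outlines a two-stage pipeline: prompt an LLM to synthesize labeled training examples, then fine-tune a compact encoder on the synthetic set. Below is a minimal sketch of that shape, assuming an OpenAI-style chat API and the Hugging Face transformers/datasets libraries; the prompt template, generator model, label set, and the cl-tohoku/bert-base-japanese-v3 checkpoint are illustrative placeholders, not the paper's exact setup.

```python
# Minimal sketch of a JAPAGEN-style pipeline (illustrative, not the authors'
# exact implementation). Stage 1: prompt an LLM to synthesize labeled
# Japanese training examples. Stage 2: fine-tune a compact Japanese BERT on
# the synthetic set. Assumes `openai`, `datasets`, and `transformers` are
# installed (the Japanese tokenizer also needs `fugashi` and `unidic-lite`).
from openai import OpenAI
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

LABELS = ["positive", "negative"]  # hypothetical binary sentiment task

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_examples(label: str, n: int) -> list[dict]:
    """Zero-shot: ask the LLM for n Japanese reviews with the given label."""
    examples = []
    for _ in range(n):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder generator model
            messages=[{
                "role": "user",
                "content": (
                    "Write one short Japanese product review whose sentiment "
                    f"is '{label}'. Output only the review text."
                ),
            }],
            temperature=1.0,  # sampling diversity matters for synthetic data
        )
        examples.append({
            "text": resp.choices[0].message.content.strip(),
            "label": LABELS.index(label),
        })
    return examples

# Stage 1: synthesize a small supervised training set with the LLM.
synthetic = [ex for lab in LABELS for ex in generate_examples(lab, 100)]
train_ds = Dataset.from_list(synthetic)

# Stage 2: fine-tune a compact model on the synthetic data.
model_name = "cl-tohoku/bert-base-japanese-v3"  # one common Japanese BERT
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=len(LABELS)
)

def tokenize(batch):
    return tokenizer(
        batch["text"], truncation=True, padding="max_length", max_length=128
    )

train_ds = train_ds.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="japagen-sketch",
        num_train_epochs=3,
        per_device_train_batch_size=16,
    ),
    train_dataset=train_ds,
)
trainer.train()
```

The paper varies this recipe across six Japanese downstream tasks and both few-shot and zero-shot prompting; the sketch fixes a single zero-shot classification task for brevity.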
