
VinaLLaMA: LLaMA-based Vietnamese Foundation Model

2023-12-18

Quan Nguyen, Huy Pham, Dung Dao



Abstract

In this technical report, we present VinaLLaMA, an open-weight, state-of-the-art (SOTA) Large Language Model for the Vietnamese language, built upon LLaMA-2 with an additional 800 billion training tokens. VinaLLaMA not only demonstrates fluency in Vietnamese but also exhibits a deep understanding of Vietnamese culture, making it a truly indigenous model. VinaLLaMA-7B-chat, trained on 1 million high-quality synthetic samples, achieves SOTA results on key benchmarks, including VLSP, VMLU, and Vicuna Benchmark Vietnamese. This marks a significant advancement in the Vietnamese AI landscape and offers a versatile resource for a wide range of applications.
