Self-QA: Unsupervised Knowledge Guided Language Model Alignment

2023-05-19

Xuanyu Zhang, Qing Yang


Abstract

Large-scale language models such as ChatGPT and GPT-4 have drawn attention for their impressive conversational and generative capabilities. However, creating supervised paired question-answering data for instruction tuning is a formidable challenge: it requires substantial human annotation effort and raises concerns about data quality, diversity, and accuracy. To overcome these obstacles, we introduce Self-QA, a framework that replaces human-written instruction seeds with a large volume of unsupervised knowledge, enabling the model to generate a greater quantity of correct, domain-specific instruction data. Experiments on unsupervised corpora from various domains demonstrate the effectiveness of our method.
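
To make the idea concrete, below is a minimal sketch of the kind of pipeline the abstract describes: for each unsupervised document, the model is first prompted to generate questions grounded in that text, and then to answer each question from the same text. The `self_qa` function, the `generate` callable, and the prompt wording are illustrative assumptions, not the authors' exact prompts or implementation; `generate` stands in for any text-in/text-out language model call.

```python
from typing import Callable, Dict, List

def self_qa(
    documents: List[str],
    generate: Callable[[str], str],  # any text-in/text-out LLM call (assumption)
    questions_per_doc: int = 3,
) -> List[Dict[str, str]]:
    """Sketch of knowledge-guided instruction generation: derive
    question-answer pairs from unsupervised text instead of
    human-written instruction seeds."""
    pairs: List[Dict[str, str]] = []
    for doc in documents:
        # Stage 1: generate self-contained questions from the raw text.
        q_prompt = (
            f"Background knowledge:\n{doc}\n\n"
            f"Write {questions_per_doc} self-contained questions that can be "
            "answered from the background knowledge above, one per line. "
            "Do not refer to 'the text' or 'the passage'."
        )
        questions = [q.strip() for q in generate(q_prompt).splitlines() if q.strip()]
        # Stage 2: answer each question using only the same source text.
        for question in questions[:questions_per_doc]:
            a_prompt = (
                f"Background knowledge:\n{doc}\n\n"
                f"Question: {question}\n"
                "Answer the question using only the background knowledge."
            )
            pairs.append({"instruction": question, "output": generate(a_prompt).strip()})
    return pairs

# Hypothetical usage: pairs = self_qa(domain_corpus, my_model_call)
```

Keeping the source document in both prompts is the key design point: it anchors the generated instructions and answers in real domain knowledge, which is how this approach aims to avoid the quality and accuracy problems of free-form self-generated data.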
