SOTAVerified

Meta-Learning for Efficient Fine-Tuning of Large Language Models

2024-05-08 · International Journal of Scientific and Research Publication (IJSRP), 2024

Shriyansh Singh, Pramit Saha

Code Available


Abstract

In this project, we developed a parameter-efficient text-generation model that produces text in the style of Reddit TIFU posts. We used the Reddit TIFU dataset, which contains long-form posts with titles and summaries, and explored and preprocessed the data for training. We then designed and trained a BERT-based text-generation model using PyTorch and the Hugging Face Transformers library, tuned its hyperparameters to improve performance, and evaluated the model with multiple metrics. The final model achieved a high level of accuracy and can be applied to a range of natural language processing tasks.
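The abstract describes a standard supervised fine-tuning loop: tokenized posts are fed to a Transformer language model, a cross-entropy loss is computed against shifted targets, and hyperparameters are tuned to drive the loss down. As a rough illustration only, here is a self-contained PyTorch sketch of that kind of training loop on toy data. It substitutes a tiny causal Transformer for the paper's BERT-based model, and random token IDs for the Reddit TIFU dataset; all sizes and names here are hypothetical, not taken from the paper.

```python
import math
import torch
import torch.nn as nn

# Toy stand-in for the fine-tuning loop described in the abstract.
# A real run would load the Reddit TIFU dataset and a pretrained
# Hugging Face model instead of this tiny from-scratch network.
torch.manual_seed(0)
VOCAB, DIM, SEQ = 100, 32, 16  # hypothetical vocabulary/model/sequence sizes

class TinyLM(nn.Module):
    """Minimal causal Transformer language model (illustration only)."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, DIM)
        layer = nn.TransformerEncoderLayer(
            d_model=DIM, nhead=4, dim_feedforward=64, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, ids):
        # Causal mask so each position attends only to earlier tokens.
        mask = nn.Transformer.generate_square_subsequent_mask(ids.size(1))
        hidden = self.encoder(self.emb(ids), mask=mask)
        return self.head(hidden)

model = TinyLM()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Fake "corpus": random token IDs standing in for tokenized TIFU posts.
data = torch.randint(0, VOCAB, (8, SEQ))

for step in range(20):
    logits = model(data[:, :-1])                      # predict next token
    loss = loss_fn(logits.reshape(-1, VOCAB),         # (batch*seq, vocab)
                   data[:, 1:].reshape(-1))           # shifted targets
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

final_loss = loss.item()
```

In practice the same shape of loop is what `transformers.Trainer` runs internally; the evaluation metrics mentioned in the abstract would be computed on held-out posts after training.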
