Can Low-Rank Knowledge Distillation in LLMs be Useful for Microelectronic Reasoning?
2024-06-19
Nirjhor Rouf, Fin Amin, Paul D. Franzon
Abstract
In this work, we present empirical results on the feasibility of using offline large language models (LLMs) in the context of electronic design automation (EDA). The goal is to investigate and evaluate a contemporary language model's (Llama-2-7B) ability to function as a microelectronic Q&A expert, as well as its reasoning and generation capabilities in solving microelectronic-related problems. Llama-2-7B was tested across a variety of adaptation methods, including a novel low-rank knowledge distillation (LoRA-KD) scheme. Our experiments produce both qualitative and quantitative results.
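The abstract does not spell out the LoRA-KD scheme itself, but the two ingredients it names are standard: low-rank adaptation (LoRA), which freezes the base weights and trains a small low-rank update, and knowledge distillation, which trains a student against a teacher's softened output distribution. The sketch below is a minimal, generic illustration of combining the two in PyTorch; the class and function names, the rank/alpha/temperature values, and the way the pieces are wired together are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRALinear(nn.Module):
    """A frozen base linear layer plus a trainable low-rank update.

    Only lora_A and lora_B receive gradients; the base weights stay fixed,
    which is what makes LoRA cheap enough for on-device / offline adaptation.
    Hypothetical rank/alpha defaults; the paper does not report its settings here.
    """
    def __init__(self, in_features: int, out_features: int, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)
        self.base.bias.requires_grad_(False)
        # B is zero-initialized, so the LoRA update starts as a no-op.
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """Soft-label KD loss: KL divergence between temperature-scaled distributions."""
    t = temperature
    p_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_p_student = F.log_softmax(student_logits / t, dim=-1)
    # The t^2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (t * t)
```

In a LoRA-KD training loop of this shape, only the low-rank adapter parameters would be passed to the optimizer, and `distillation_loss` would be computed between the adapted student's logits and a (possibly larger) teacher's logits on domain-specific, e.g. microelectronics, text.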