
Parameter Efficient Fine Tuning Llama 3.1 for Answering Arabic Legal Questions: A Case Study on Jordanian Laws

2025-06-02 · 1st International Conference on Computational Intelligence Approaches and Applications (ICCIAA) 2025

Mohammed Fasha; Bassam Hammo; Bilal Sowan; Husam Barham; Esam Al-Nsour
Business Intelligence and Data Analytics Department, University of Petra, Amman, Jordan


Abstract

This study uses Jordanian law as a case study to explore fine-tuning the Llama-3.1 large language model for Arabic question answering. Two versions of the model, Llama-3.1-8B-bnb-4bit and Llama-3.1-8B-Instruct-bnb-4bit, were fine-tuned using parameter-efficient fine-tuning (PEFT) with LoRA adapters on 4-bit quantized weights, leveraging the Unsloth framework for accelerated and resource-efficient training. A custom dataset of 6,000 legal question-answer pairs was curated from Jordanian laws and formatted into structured prompts. Performance was evaluated using the BLEU and ROUGE metrics to compare the fine-tuned models against their respective base versions. Results demonstrated improved legal reasoning and accuracy while achieving resource efficiency through quantization and optimized fine-tuning strategies. This work underscores the potential of adapting large language models to Arabic legal domains and highlights effective techniques for fine-tuning on domain-specific tasks.
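The abstract mentions formatting the curated question-answer pairs into structured prompts before fine-tuning. As a minimal sketch of what that step could look like, assuming an Alpaca-style instruction template (the exact template, field names, and sample record below are illustrative, not taken from the paper):

```python
# Hypothetical prompt-formatting step for the legal QA dataset.
# The template layout and the sample record are assumptions for illustration.

PROMPT_TEMPLATE = """### Instruction:
You are a legal assistant specialized in Jordanian law. Answer the question below.

### Question:
{question}

### Answer:
{answer}"""


def format_example(pair: dict) -> str:
    """Render one question-answer pair into a structured training prompt."""
    return PROMPT_TEMPLATE.format(question=pair["question"], answer=pair["answer"])


# Example usage with a made-up record:
sample = {
    "question": "What notice period applies when terminating an employment contract?",
    "answer": "Jordanian Labour Law requires written notice of at least one month.",
}
print(format_example(sample))
```

In practice each formatted string would then be tokenized and passed to the PEFT/LoRA training loop (e.g. via Unsloth's trainer utilities), with the template kept identical at inference time so the model sees prompts in the same structure it was trained on.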
