SOTAVerified

Fine-tuning Large Language Models for Entity Matching

2024-09-12 · Code Available

Aaron Steiner, Ralph Peeters, Christian Bizer


Abstract

Generative large language models (LLMs) are a promising alternative to pre-trained language models for entity matching due to their high zero-shot performance and their ability to generalize to unseen entities. Existing research on using LLMs for entity matching has focused on prompt engineering and in-context learning. This paper explores the potential of fine-tuning LLMs for entity matching. We analyze fine-tuning along two dimensions: 1) the representation of training examples, where we experiment with adding different types of LLM-generated explanations to the training set, and 2) the selection and generation of training examples using LLMs. In addition to the matching performance on the source dataset, we investigate how fine-tuning affects the model's ability to generalize to other in-domain datasets as well as across topical domains. Our experiments show that fine-tuning significantly improves the performance of the smaller models, while the results for the larger models are mixed. Fine-tuning also improves generalization to in-domain datasets while hurting cross-domain transfer. We show that adding structured explanations to the training set has a positive impact on the performance of three out of four LLMs, while the proposed example selection and generation methods only improve the performance of Llama 3.1 8B and decrease the performance of GPT-4o-mini.

Benchmark Results

Dataset | Model | Metric | Claimed | Verified | Status
--- | --- | --- | --- | --- | ---
Abt-Buy | gpt-4o-mini-2024-07-18 | F1 (%) | 87.68 | - | Unverified
Abt-Buy | Meta-Llama-3.1-8B-Instruct_fine_tuned | F1 (%) | 87.34 | - | Unverified
Abt-Buy | Meta-Llama-3.1-70B-Instruct | F1 (%) | 79.12 | - | Unverified
Abt-Buy | Meta-Llama-3.1-8B-Instruct | F1 (%) | 56.57 | - | Unverified
Abt-Buy | gpt-4o-mini-2024-07-18_fine_tuned | F1 (%) | 94.09 | - | Unverified
Abt-Buy | gpt-4o-2024-08-06 | F1 (%) | 92.2 | - | Unverified
Amazon-Google | gpt-4o-mini-2024-07-18_fine_tuned | F1 (%) | 80.25 | - | Unverified
Amazon-Google | Meta-Llama-3.1-8B-Instruct_fine_tuned | F1 (%) | 50 | - | Unverified
Amazon-Google | gpt-4o-2024-08-06 | F1 (%) | 63.45 | - | Unverified
Amazon-Google | Meta-Llama-3.1-8B-Instruct | F1 (%) | 49.16 | - | Unverified
Amazon-Google | gpt-4o-mini-2024-07-18 | F1 (%) | 59.2 | - | Unverified
Amazon-Google | Meta-Llama-3.1-70B-Instruct | F1 (%) | 51.44 | - | Unverified
WDC Products | gpt-4o-2024-08-06_fine_tuned_wdc_small | F1 (%) | 87.07 | - | Unverified
WDC Products-80%cc-seen-medium | gpt-4o-mini-2024-07-18_structured_explanations | F1 (%) | 84.38 | - | Unverified
WDC Products-80%cc-seen-medium | gpt-4o-2024-08-06_fine_tuned_wdc_small | F1 (%) | 87.1 | - | Unverified
WDC Products-80%cc-seen-medium | Llama3.1_8B | F1 (%) | 53.36 | - | Unverified
WDC Products-80%cc-seen-medium | gpt-4o-mini-2024-07-18 | F1 (%) | 81.61 | - | Unverified
WDC Products-80%cc-seen-medium | Llama3.1_70B_structured_explanations | F1 (%) | 76.7 | - | Unverified
WDC Products-80%cc-seen-medium | Llama3.1_70B | F1 (%) | 75.2 | - | Unverified
WDC Products-80%cc-seen-medium | Llama3.1_8B_error-based_example_selection | F1 (%) | 74.37 | - | Unverified
WDC Products-80%cc-seen-medium | Llama3.1_8B_structured_explanations | F1 (%) | 74.13 | - | Unverified
