
Web Page Classification using LLMs for Crawling Support

2025-05-11

Yuichi Sasazawa, Yasuhiro Sogawa


Abstract

A web crawler is a system that collects web pages, and efficiently crawling newly published pages requires appropriate algorithms. While website features such as XML sitemaps and the frequency of past page updates provide important clues for finding new pages, applying them universally across diverse website conditions is challenging. In this study, we propose a method to efficiently collect new pages by using a large language model (LLM) to classify web pages into two types, "Index Pages" and "Content Pages," and leveraging the classification results to select index pages as starting points for accessing new pages. We construct a dataset with automatically annotated web page types and evaluate our approach from two perspectives: page type classification performance and coverage of new pages. Experimental results demonstrate that the LLM-based method outperforms baseline methods on both metrics.
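The two-stage idea described in the abstract — classify each fetched page, then keep only index pages as starting points for discovering new pages — can be sketched as follows. This is a minimal illustration, not the paper's implementation: `classify_page_type` is a hypothetical stand-in for the paper's LLM classifier (whose prompt is not given here), and all function names, URLs, and the link-count threshold are illustrative assumptions.

```python
from typing import Literal

PageType = Literal["index", "content"]


def classify_page_type(title: str, body: str) -> PageType:
    # Stand-in for the paper's LLM classifier: in the actual method, the
    # page text would be sent to an LLM that labels the page as an
    # "Index Page" or a "Content Page". A trivial link-count heuristic
    # (threshold chosen arbitrarily) keeps this sketch runnable offline.
    return "index" if body.count("<a ") >= 5 else "content"


def select_seed_urls(pages: dict[str, tuple[str, str]]) -> list[str]:
    # Keep only pages classified as index pages; these serve as the
    # crawl seeds from which new content pages are discovered.
    return [
        url
        for url, (title, body) in pages.items()
        if classify_page_type(title, body) == "index"
    ]


# Hypothetical crawl snapshot: a link-heavy listing page and one article.
pages = {
    "https://example.com/news/": (
        "News",
        "<a href=1><a href=2><a href=3><a href=4><a href=5>",
    ),
    "https://example.com/news/article-42": (
        "Article 42",
        "Long article text with few links.",
    ),
}
print(select_seed_urls(pages))  # only the listing page is kept as a seed
```

In the paper's setting the classifier's output would steer the crawler's frontier: index pages are revisited to find newly linked pages, while content pages are not used as starting points.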
