
GPT Editors, Not Authors: The Stylistic Footprint of LLMs in Academic Preprints

2025-05-22

Soren DeHaan, Yuanze Liu, Johan Bollen, Saúl A. Blanco


Abstract

The proliferation of Large Language Models (LLMs) since late 2022 has impacted academic writing, threatening credibility and causing institutional uncertainty. We seek to determine the degree to which LLMs are used to generate critical text, as opposed to being used for editing tasks such as checking for grammatical errors or inappropriate phrasing. In our study, we analyze arXiv papers for stylistic segmentation, which we measure by varying a PELT threshold against a Bayesian classifier trained on GPT-regenerated text. We find that LLM-attributed language is not predictive of stylistic segmentation, suggesting that when authors use LLMs, they do so uniformly, reducing the risk of hallucinations being introduced into academic preprints.
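The change-point step named in the abstract can be sketched with a minimal PELT (Pruned Exact Linear Time) implementation run over a toy series of per-sentence "style scores". This is an illustration only, assuming an L2 piecewise-constant-mean cost; the paper's actual features, cost model, and classifier outputs are not specified here.

```python
import numpy as np

def pelt(signal, pen):
    """Minimal PELT change-point detection with an L2 cost.

    Returns segment end indices (the final one equals len(signal)).
    A sketch under assumed settings, not the paper's implementation.
    """
    x = np.asarray(signal, dtype=float)
    n = len(x)
    # Prefix sums give O(1) segment cost: sum of squared residuals
    # around the mean of the segment [a, b).
    s1 = np.concatenate([[0.0], np.cumsum(x)])
    s2 = np.concatenate([[0.0], np.cumsum(x ** 2)])

    def cost(a, b):
        seg_sum = s1[b] - s1[a]
        return (s2[b] - s2[a]) - seg_sum ** 2 / (b - a)

    F = [-pen] + [np.inf] * n   # optimal cost of x[:t]
    last = [0] * (n + 1)        # last change point before t
    candidates = [0]            # pruned candidate change points
    for t in range(1, n + 1):
        vals = [F[s] + cost(s, t) + pen for s in candidates]
        best = int(np.argmin(vals))
        F[t] = vals[best]
        last[t] = candidates[best]
        # PELT pruning: discard candidates that can never win again.
        candidates = [s for s, v in zip(candidates, vals) if v - pen <= F[t]]
        candidates.append(t)

    # Backtrack the optimal change points.
    cps, t = [], n
    while t > 0:
        cps.append(t)
        t = last[t]
    return sorted(cps)

# Toy score series with one abrupt stylistic shift at index 60.
sig = np.concatenate([np.full(60, 0.2), np.full(40, 0.8)])
print(pelt(sig, pen=1.0))  # → [60, 100]
```

Sweeping the penalty `pen` plays the role of the threshold varied in the study: a low penalty yields many small stylistic segments, while a high penalty treats the whole document as uniform.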
