
CodeVision: Detecting LLM-Generated Code Using 2D Token Probability Maps and Vision Models

2025-01-06

Zhenyu Xu, Victor S. Sheng


Abstract

The rise of large language models (LLMs) such as ChatGPT has significantly improved automated code generation, enhancing software development efficiency. However, it also creates challenges in academia, particularly in distinguishing human-written from LLM-generated code, complicating the enforcement of academic integrity. Existing detection methods, such as pre-trained classifiers and watermarking, are limited in adaptability and computational efficiency. In this paper, we propose a novel detection method that combines 2D token probability maps with vision models, preserving spatial code structure such as indentation and brackets. By transforming code into log-probability matrices and applying vision models such as Vision Transformers (ViT) and ResNet, we capture both content and structure for more accurate detection. Our method is robust across multiple programming languages and improves upon traditional detectors, offering a scalable and computationally efficient solution for identifying LLM-generated code.
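The paper's preprocessing code is not reproduced here, but the core idea of a 2D token probability map can be sketched as follows. This is a minimal illustration under assumptions: we assume per-token log probabilities have already been obtained from some scoring model, and that each character of a token inherits that token's log probability so that indentation and brackets keep their row/column positions. The function name `build_logprob_map` and the input format (a list of per-line `(token, logprob)` pairs) are hypothetical, not from the paper.

```python
def build_logprob_map(lines, pad=0.0):
    """Arrange per-token log probabilities into a 2D grid that mirrors
    the code's spatial layout: rows are source lines, columns are
    character positions.

    `lines` is a list (one entry per source line) of lists of
    (token_text, logprob) pairs. Each character of a token repeats the
    token's log probability, so whitespace and punctuation occupy the
    same grid cells they occupy in the source file. Ragged rows are
    right-padded with `pad` to form a rectangular matrix.
    """
    rows = []
    for tokens in lines:
        row = []
        for text, logprob in tokens:
            # Spread the token's log probability over its characters.
            row.extend([logprob] * len(text))
        rows.append(row)
    width = max((len(r) for r in rows), default=0)
    return [r + [pad] * (width - len(r)) for r in rows]


# Hypothetical example: two lines of code with made-up log probabilities.
snippet = [
    [("    ", -0.1), ("return", -2.3)],  # indented line, 10 characters
    [("}", -0.5)],                        # closing brace, 1 character
]
logprob_map = build_logprob_map(snippet)
# logprob_map is a 2 x 10 matrix: the indentation cells carry -0.1,
# the `return` cells carry -2.3, and the short row is padded with 0.0.
```

In practice the resulting matrix would be resized or tiled to the fixed input resolution a ViT or ResNet expects, but that step depends on the chosen vision backbone and is omitted here.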
