
Frontiers in Intelligent Colonoscopy

2024-10-22

Ge-Peng Ji, Jingyi Liu, Peng Xu, Nick Barnes, Fahad Shahbaz Khan, Salman Khan, Deng-Ping Fan


Abstract

Colonoscopy is currently one of the most sensitive screening methods for colorectal cancer. This study investigates the frontiers of intelligent colonoscopy techniques and their prospective implications for multimodal medical applications. With this goal, we begin by assessing the current data-centric and model-centric landscapes through four tasks for colonoscopic scene perception, including classification, detection, segmentation, and vision-language understanding. This assessment enables us to identify domain-specific challenges and reveals that multimodal research in colonoscopy remains open for further exploration. To embrace the coming multimodal era, we establish three foundational initiatives: a large-scale multimodal instruction tuning dataset ColonINST, a colonoscopy-designed multimodal language model ColonGPT, and a multimodal benchmark. To facilitate ongoing monitoring of this rapidly evolving field, we provide a public website for the latest updates: https://github.com/ai4colonoscopy/IntelliScope.

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| ColonINST-v1 (Unseen) | ColonGPT (w/ LoRA, w/o extra data) | Accuracy | 80.18 | — | Unverified |
| ColonINST-v1 (Unseen) | ColonGPT (w/ LoRA, w/o extra data) | Accuracy | 83.24 | — | Unverified |
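The accuracy figures above are typically computed as exact-match accuracy between the model's generated answer and the reference label. The sketch below illustrates that style of metric; the function name, normalization, and sample data are illustrative assumptions, not taken from the ColonINST/ColonGPT codebase.

```python
# Hypothetical sketch of an exact-match accuracy metric, as commonly
# used for multimodal instruction-following benchmarks. Names and
# normalization choices are assumptions, not the authors' code.
def exact_match_accuracy(predictions, references):
    """Percentage of predictions matching the reference answer
    after basic normalization (lowercase, stripped whitespace)."""
    norm = lambda s: s.strip().lower()
    correct = sum(norm(p) == norm(r) for p, r in zip(predictions, references))
    return 100.0 * correct / len(references)

# Illustrative example with made-up labels:
preds = ["polyp", "adenoma", "Polyp "]
refs = ["polyp", "polyp", "polyp"]
print(round(exact_match_accuracy(preds, refs), 2))  # 66.67
```

Real evaluation pipelines often apply more aggressive answer normalization (punctuation stripping, synonym mapping) before matching, which can shift the reported number by a few points.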
