SOTAVerified

YOLO-Former: YOLO Shakes Hand With ViT

2024-01-11

Javad Khoramdel, Ahmad Moori, Yasamin Borhani, Armin Ghanbarzadeh, Esmaeil Najafi



Abstract

The proposed YOLO-Former method seamlessly integrates the ideas of transformer and YOLOv4 to create a highly accurate and efficient object detection system. The method leverages the fast inference speed of YOLOv4 and incorporates the advantages of the transformer architecture through the integration of convolutional attention and transformer modules. The results demonstrate the effectiveness of the proposed approach, with a mean average precision (mAP) of 85.76% on the Pascal VOC dataset, while maintaining high prediction speed with a frame rate of 10.85 frames per second. The contribution of this work lies in the demonstration of how the innovative combination of these two state-of-the-art techniques can lead to further improvements in the field of object detection.
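The page does not include the authors' code. As a rough illustration of the general idea the abstract describes, applying transformer-style self-attention over the spatial positions of a convolutional feature map, here is a minimal NumPy sketch; the function name, shapes, and weight matrices are assumptions for illustration, not the YOLO-Former implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention_on_feature_map(fmap, wq, wk, wv):
    """Scaled dot-product self-attention over the spatial positions
    of a conv feature map (C, H, W) -> (C, H, W).

    Generic transformer-style attention, not the authors' exact module."""
    c, h, w = fmap.shape
    tokens = fmap.reshape(c, h * w).T            # (HW, C): one token per position
    q, k, v = tokens @ wq, tokens @ wk, tokens @ wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))  # (HW, HW) attention weights
    out = scores @ v                             # (HW, C) attended features
    return out.T.reshape(c, h, w)

# Toy example on a random 8-channel, 4x4 feature map.
rng = np.random.default_rng(0)
c, h, w = 8, 4, 4
fmap = rng.standard_normal((c, h, w))
wq, wk, wv = (rng.standard_normal((c, c)) for _ in range(3))
out = self_attention_on_feature_map(fmap, wq, wk, wv)
print(out.shape)  # (8, 4, 4)
```

In a hybrid detector of this kind, a block like the above would typically be inserted between convolutional stages so each spatial location can aggregate context from the whole feature map before the detection heads run.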

Benchmark Results

Dataset          Model        Metric  Claimed  Verified  Status
PASCAL VOC 2007  YOLO-Former  mAP     86.01    —         Unverified
PASCAL VOC 2012  YOLO-Former  mAP     86.01    —         Unverified
