SOTAVerified

Lifting Multi-View Detection and Tracking to the Bird's Eye View

2024-03-19 · Code Available

Torben Teepe, Philipp Wolters, Johannes Gilg, Fabian Herzog, Gerhard Rigoll


Abstract

Taking advantage of multi-view aggregation presents a promising solution to tackle challenges such as occlusion and missed detections in multi-object tracking and detection. Recent advancements in multi-view detection and 3D object recognition have significantly improved performance by strategically projecting all views onto the ground plane and performing detection from a Bird's Eye View. In this paper, we compare modern lifting methods, both parameter-free and parameterized, for multi-view aggregation. Additionally, we present an architecture that aggregates the features of multiple time steps to learn robust detection and that combines appearance- and motion-based cues for tracking. Most current tracking approaches focus on either pedestrians or vehicles. In our work, we combine both branches and add new challenges to multi-view detection with cross-scene setups. Our method generalizes to three public datasets across two domains: (1) pedestrian: Wildtrack and MultiviewX, and (2) roadside perception: Synthehicle, achieving state-of-the-art performance in detection and tracking. Code: https://github.com/tteepe/TrackTacular
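The parameter-free lifting described in the abstract (and the "Bilinear Sampling" variant listed in the results) can be sketched as follows: each cell of a ground-plane (BEV) grid is projected into the camera image with a 3x4 pinhole projection matrix, and the image feature map is bilinearly sampled at the projected location. This is an illustrative NumPy sketch under those assumptions, not the authors' implementation; the function name, grid layout, and `grid_res` parameter are hypothetical.

```python
import numpy as np

def lift_to_bev(feat, P, bev_x, bev_y, grid_res=1.0):
    """Illustrative parameter-free lifting of one camera's feature map to
    the ground plane (z = 0): project each BEV cell center through the
    3x4 projection matrix P, then bilinearly sample the (H, W, C) feature
    map `feat`. Returns a (bev_y, bev_x, C) BEV feature grid."""
    H, W, C = feat.shape
    # World coordinates of each BEV cell center on the ground plane.
    xs = (np.arange(bev_x) + 0.5) * grid_res
    ys = (np.arange(bev_y) + 0.5) * grid_res
    gx, gy = np.meshgrid(xs, ys)                           # (bev_y, bev_x)
    world = np.stack([gx, gy, np.zeros_like(gx), np.ones_like(gx)], -1)
    # Project to homogeneous pixel coordinates and dehomogenize.
    pix = world @ P.T                                      # (bev_y, bev_x, 3)
    z = pix[..., 2]
    u = pix[..., 0] / np.where(z != 0, z, 1.0)
    v = pix[..., 1] / np.where(z != 0, z, 1.0)
    # Cells projecting behind the camera or outside the image get zeros.
    valid = (z > 0) & (u >= 0) & (u <= W - 1) & (v >= 0) & (v <= H - 1)
    u = np.clip(u, 0, W - 1)
    v = np.clip(v, 0, H - 1)
    # Bilinear sampling of the feature map at the projected (u, v).
    u0, v0 = np.floor(u).astype(int), np.floor(v).astype(int)
    u1, v1 = np.minimum(u0 + 1, W - 1), np.minimum(v0 + 1, H - 1)
    du, dv = (u - u0)[..., None], (v - v0)[..., None]
    out = (feat[v0, u0] * (1 - du) * (1 - dv) + feat[v0, u1] * du * (1 - dv)
         + feat[v1, u0] * (1 - du) * dv + feat[v1, u1] * du * dv)
    return out * valid[..., None]
```

In a multi-view setup, this sampling would be repeated per camera and the resulting BEV grids aggregated (e.g. summed or concatenated) before the detection head; in practice the paper's code uses differentiable GPU sampling rather than NumPy.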

Benchmark Results

Dataset      Model                              Metric   Claimed   Verified   Status
MultiviewX   TrackTacular (Bilinear Sampling)   IDF1     85.6      —          Unverified
Wildtrack    TrackTacular (Bilinear Sampling)   IDF1     95.3      —          Unverified
