
Combining visibility analysis and deep learning for refinement of semantic 3D building models by conflict classification

2023-03-10

Olaf Wysocki, Eleonora Grilli, Ludwig Hoegner, Uwe Stilla


Abstract

Semantic 3D building models are widely available and used in numerous applications. Such 3D building models display rich semantics but no facade openings, chiefly owing to their aerial acquisition techniques. Hence, refining models' facades using dense, street-level, terrestrial point clouds is a promising strategy. In this paper, we propose a method combining visibility analysis and neural networks to enrich 3D models with window and door features. In the method, occupancy voxels are fused with classified point clouds, which provides semantics to the voxels. The voxels are also used to identify conflicts between laser observations and 3D models. The semantic voxels and conflicts are combined in a Bayesian network to classify and delineate facade openings, which are reconstructed using a 3D model library. Unaffected building semantics are preserved while updated semantics are added, thereby upgrading the building model to LoD3. Moreover, the Bayesian network results are back-projected onto the point clouds to improve the points' classification accuracy. We tested our method on a municipal CityGML LoD2 repository and the open point cloud datasets TUM-MLS-2016 and TUM-FACADE. Validation results revealed that the method improves the accuracy of point cloud semantic segmentation and upgrades buildings with facade elements. The method can be applied to enhance the accuracy of urban simulations and facilitate the development of semantic segmentation algorithms.
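The core idea of the pipeline can be sketched in miniature: per voxel, compare the laser observation against the LoD2 model surface to obtain a conflict state, then fuse that state with the neural network's point-level semantics in a Bayesian update. The sketch below is illustrative only and is not the authors' implementation; the function names, prior, and likelihood values are invented placeholders, and the full method uses a Bayesian network over neighboring voxels rather than an independent per-voxel update.

```python
def conflict_state(model_occupied: bool, laser_hit: bool) -> str:
    """Compare the LoD2 model surface against the laser observation for one voxel.

    A conflict arises when the ray passes through a surface the model
    claims is solid (e.g., a laser return from inside a window opening).
    """
    if model_occupied and laser_hit:
        return "confirmed"   # return on the modeled wall surface
    if model_occupied and not laser_hit:
        return "conflict"    # ray traversed the modeled surface
    return "unknown"         # no modeled surface to test against


def opening_posterior(state: str, p_opening_semantics: float) -> float:
    """Naive Bayesian fusion of conflict evidence with semantic evidence.

    p_opening_semantics: fraction of points in the voxel that the neural
    network classified as window/door (assumed input in [0, 1]).
    All numeric values below are illustrative, not from the paper.
    """
    prior = 0.1  # assumed prior probability that a voxel is an opening
    # Likelihoods P(state | opening), P(state | wall) -- placeholders.
    likelihood = {
        "confirmed": (0.1, 0.8),
        "conflict":  (0.9, 0.1),
        "unknown":   (0.5, 0.5),
    }[state]
    p_open = prior * likelihood[0] * p_opening_semantics
    p_wall = (1 - prior) * likelihood[1] * (1 - p_opening_semantics)
    return p_open / (p_open + p_wall)
```

In this toy version, a "conflict" voxel backed by strong window semantics yields a high opening probability, while a "confirmed" wall voxel stays low; in the paper these fused labels then drive both the LoD3 reconstruction and the back-projection that refines the point-cloud labels.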
