Construction Information Technology Laboratory

CURRENT RESEARCH

RECIPROCAL RECONSTRUCTION AND RECOGNITION FOR MODELING OF CONSTRUCTED FACILITIES

The motivation behind this research project stems from the need for viable methods to map and label existing infrastructure, one of the grand challenges of engineering in the 21st century as noted in the “Restoring and Improving Urban Infrastructure” report of the National Academy of Engineering (NAE, 2008). Over two thirds of the effort needed to model even simple infrastructure is spent on manually converting surface data to a 3D model. As a result, very few constructed facilities today have a complete record of as-built information, and as-built models are not produced for the vast majority of new construction and retrofit projects, which leads to rework and design changes that cost up to 10% of installed costs. Any effort toward automating the modeling process will increase the percentage of infrastructure projects being modeled; given that construction is a $958 billion industry and that rework costs up to 10% of installed costs, each 1% increase in projects modeled can yield up to $958 million in savings.


Doctoral student Abbas Rashidi and CEE Assistant Professor Ioannis Brilakis, with support from the U.S. National Science Foundation (Grant #1031329), are validating the reciprocal use of 3D reconstruction and 2D object recognition techniques for 3D modeling of constructed facilities. The objective of this research project is to evaluate whether a novel framework proposed by the researchers can progressively reconstruct a reinforced concrete frame structure into an object-oriented geometric model, with the aim of automating the Building Information Model (BIM) creation process for constructed facilities in a cost-effective manner.

According to the proposed framework, the modeler videotapes the structure from all accessible angles to minimize occlusions. During this stage, the structural members (concrete columns and beams in this study) are detected in the resulting stream of images and the region each occupies is marked in all images. These regions are used to establish correspondence at the object level across images and to solve the rough registration problem efficiently. Line-based structure from motion is then applied to the result to produce a rendered 3D view of the structure with the recognized regions marked. This loops back to the detection of structural members, which can now also be performed on the spatial data covered by the visually marked regions. The result is more robust element detection (by combining visual and spatial detection results) and, consequently, improved element matching and reconstruction.

The resulting object-oriented model is expected to be an accurate 3D representation of the structure with the load-bearing linear members detected. This model is provided to the modeler, who can then use it to complete the model-making process. The key intellectual merit of this framework therefore lies in its reciprocal use of results: the video recognition of building elements assists the 3D reconstruction of their spatial data, while the 3D reconstruction provides the spatial data needed for spatial recognition, enabling more robust element detection.
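To make the detect-reconstruct-redetect loop concrete, the sketch below outlines one possible structure for the pipeline in Python with OpenCV. It is not the researchers' implementation: the 2D member detection is approximated here by finding long near-vertical and near-horizontal line segments, and the line-based structure-from-motion step (line_based_sfm) is only a hypothetical placeholder standing in for the matching, rough registration, and triangulation described above.

    # Illustrative sketch of the reciprocal detection/reconstruction loop.
    # detect_member_regions, line_based_sfm, and reciprocal_modeling are
    # hypothetical names used only for this example.
    import cv2
    import numpy as np

    def detect_member_regions(frame):
        """Crude 2D stand-in for column/beam detection: long straight edges."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)
        lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                                minLineLength=frame.shape[0] // 3, maxLineGap=10)
        regions = []
        if lines is not None:
            for x1, y1, x2, y2 in lines[:, 0]:
                angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
                # near-vertical segments suggest columns, near-horizontal suggest beams
                if angle > 80 or angle < 10:
                    regions.append(((x1, y1), (x2, y2)))
        return regions

    def line_based_sfm(frames, regions_per_frame):
        """Placeholder for line-based structure from motion: a real implementation
        would match the marked regions across frames (rough registration) and
        triangulate 3D line segments from the camera motion."""
        return []  # list of 3D member hypotheses

    def reciprocal_modeling(video_path, n_passes=2):
        cap = cv2.VideoCapture(video_path)
        frames = []
        ok, frame = cap.read()
        while ok:
            frames.append(frame)
            ok, frame = cap.read()
        cap.release()

        regions = [detect_member_regions(f) for f in frames]   # 2D visual detection
        model = []
        for _ in range(n_passes):
            model = line_based_sfm(frames, regions)             # reconstruction guided by detections
            # In the full framework the recovered 3D spatial data would now be
            # fed back to refine the 2D detections (visual + spatial fusion);
            # here the loop simply re-runs detection as a placeholder for that step.
            regions = [detect_member_regions(f) for f in frames]
        return model

The key point the sketch tries to capture is the feedback arrow: each pass uses the marked 2D regions to guide reconstruction, and the reconstructed spatial data in turn constrains where members are searched for in the next pass.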

Abbas Rashidi joined the Construction Information Technology Laboratory (CITL) at the Georgia Institute of Technology in Fall 2009. Before coming to the US, he worked as a lecturer at Islamic Azad University in Iran from 2004 to 2009. His research interests include the application of image processing and computer vision techniques to 3D modeling of built environments.