5th International/11th Construction Specialty Conference
5e International/11e Conférence spécialisée sur la construction

Vancouver, British Columbia
June 8 to June 10, 2015 / 8 juin au 10 juin 2015

AN IMAGE-BASED FRAMEWORK FOR AUTOMATED DISCREPANCY QUANTIFICATION AND REALIGNMENT OF INDUSTRIAL ASSEMBLIES

Mohammad Nahangi1, Thomas Czerniawski1, Jamie Yeung1, Carl T. Haas1, Scott Walbridge1, Jeffrey West1
1 Civil Infrastructure Sensing Laboratory, Civil and Environmental Engineering Department, University of Waterloo, Waterloo, ON, Canada

Abstract: Image-based frameworks for automated as-built modeling and infrastructure 3D reconstruction are increasingly being used in the construction industry. The growing use of image-based technologies in construction processes is due to their ease of application, low cost, time effectiveness, and high level of accuracy and automation. Automating the tasks involved in inspection, quality control, and quality assurance (QA/QC) processes is a promising avenue for utilizing such frameworks. This paper presents an image-based approach for acquiring the built status of fabricated assemblies and describes a framework for realigning defective segments by borrowing concepts from robotics and forward kinematics. A set of laboratory experiments is then conducted to measure the accuracy and performance of the proposed framework for the realignment of industrial facilities in general and pipe spools in particular. Results demonstrate that the framework can be used in real cases while providing the required level of accuracy.

1 INTRODUCTION

Inspection and monitoring of civil infrastructure are critical tasks that must be performed in various construction and maintenance phases. Inspection processes guarantee that construction segments and components are reliably fabricated, transported, and erected. They also determine the as-built status and dimensions required to decide what corrective measures defective segments need. These tasks are difficult to perform on construction sites. Quality control and quality assurance (QA/QC) inspection becomes even more critical in staged fabrication, which requires sequential fabrication and installation. In the construction industry in particular, staged fabrication includes, but is not limited to, prefabricated steel columns, concrete segments, and large pipe spools and modules. Although construction segments are typically fabricated correctly given recent advances in the fabrication industry, they still require continuous inspection because dimensional errors are introduced during transportation and shipment. As-built status acquisition is therefore necessary to assess the current state of construction elements and determine the potential measures for repair and realignment.

Manual and conventional approaches to acquiring the as-built status are inherently prone to error and therefore often ineffective and inaccurate. Craft workers use tape measures and paper-based methods to inspect and control fabrication quality. The acquired as-built status is therefore often unreliable and may cause profoundly unfavorable effects on construction sites, such as schedule delays and rework. For example, if a defective segment is erected on site and discrepancies are not detected in a timely and accurate manner, the segment must be either repaired or replaced, and both cases involve rework.
Therefore, the as-built status of construction elements must be acquired in a timely manner and electronically transferred to the various interfaces involved in order to provide managers and decision makers with an effective framework. In the last couple of decades, adequate accuracy and speed for as-built status acquisition and assessment have become possible using various sensing technologies, including ultra-wide band (UWB) tags, the Global Positioning System (GPS), radio frequency identification (RFID), and 3D imaging techniques. Among these sensing technologies, 3D imaging, which provides spatial and detailed geometric information about objects, is a reliable method for detailed geometric as-built status acquisition. 3D imaging is the general name for the processes used to generate three-dimensional images, also called point clouds; it includes LADAR (laser detection and ranging), range imaging, videogrammetry, traditional photogrammetry, and modern photogrammetry, which is also known as image-based sensing. Among these technologies, laser scanning is the most common technique in the related industry because of its ease of use and the level of accuracy it provides. However, laser-based methods have major limitations that make them less applicable under some conditions, including the time required for setup, data preprocessing, and occlusions in busy construction environments. Purchase cost is an additional barrier that makes laser scanners less accessible to many contractors and project managers. Developing a cheaper, widely affordable framework with a high level of accuracy that addresses these deficiencies of laser scanners is therefore necessary.

In recent years, some research efforts have attempted to address these limitations of laser scanners. Image- and video-based techniques developed during the past several years are emerging solutions that are significantly less expensive than laser scanning. It has been stated that image-based methods can provide the desired level of accuracy and automation if applied well. According to Dai et al. (2013), Golparvar-Fard et al. (2011), and Zhu and Brilakis (2009), image-based 3D reconstruction has provided a level of accuracy comparable to laser-based methods without the previously discussed limitations of laser scanners.

One of the key advantages of utilizing image-based techniques to assess the as-built status of construction elements is that they require no additional procedure on construction sites, since images are already taken on a regular basis by inspectors (Golparvar-Fard et al. 2011). Such random and unordered images can be imported to a cloud server on a daily or weekly basis for further processing in order to generate 3D images for assessing the as-built status. This paper presents an image-based framework to accurately detect, characterize, and identify the discrepancies between the as-built status and the as-designed data (Figure 1). Most research to date is relatively limited in its approach to the automated detection of visible damage and defects. There is still a significant lack of knowledge in civil infrastructure research on effectively and efficiently assessing construction processes in their different phases in order to realign defective assemblies in a timely manner.
Figure 1: Proposed framework for image-based 3D model generation and realignment strategy (unordered images taken by inspectors are uploaded to a cloud database, a 3D model is reconstructed, 3D dimensions are extracted and compared with the original drawings, and feedback is returned to inspectors).

This research is directed toward damage prevention by localizing and quantifying fabrication errors and generating a potential solution for repair and rehabilitation in a systematic way. In order to comprehensively determine the knowledge gap and the need in the construction industry, the related background is first investigated. The research methodology for developing the proposed framework is then presented, and the required functions and metrics are formulated. A set of experiments is then designed to validate the proposed framework and evaluate its accuracy by comparing the results with previously acquired laser-based data.

2 RESEARCH BACKGROUND

The key contribution of the framework presented in this work is an image-based reconstruction technique for the as-built status assessment of industrial facilities, compared against laser-based status acquisition. Recent studies on inspecting, monitoring, and assessing the built status of civil infrastructure in general are also briefly reviewed. The related background is investigated in order to determine the current state of these methods, so that the knowledge gap can be comprehensively identified.

2.1 Monitoring and Inspecting Civil Infrastructure for Built Status Assessment

Using automated tools for monitoring and inspection purposes in construction provides a superior level of accuracy. Automated tools and techniques for reliable and accurate 3D measurement are also being used increasingly because of their electronic communication and integration with other interfaces in the construction management system. 3D spatial locating devices such as ultra-wide band (UWB) and radio frequency identification (RFID) tags have been used for automated material location on heavy industrial construction facilities (Razavi and Haas 2010). They have been found to improve productivity and to reduce risk substantially. Additionally, 3D sensing and imaging technologies such as laser scanning, image/video-based reconstruction methods, and range imaging are used for a wide range of applications in construction. Laser scanning was first introduced as a tool for structural health monitoring (Park et al. 2007). Automating the tasks involved in processing the point clouds generated by laser scanners improved the applicability of laser-based techniques on real-sized and complex construction sites. Bosche et al. (2009) developed a method for automated recognition of CAD objects in cluttered point clouds that was later employed for automated progress tracking of construction components (Turkan et al. 2012). Laser scanning can also be employed for compliance control for QA/QC purposes; an automated framework was used for compliance control of industrial assemblies using laser-scanned point clouds (Nahangi and Haas 2014).
A video-based 3D point cloud generation technique (videogrammetry) was also employed to acquire the as-built status (Brilakis et al. 2011; Koch et al. 2013), to be used for various purposes such as the progress tracking and status assessment discussed earlier. Recently, 3D sensing technologies have been actively employed for tracking the progress of industrial construction and mechanical, electrical, and piping (MEP) components (Ahmed et al. 2012; Dimitrov and Golparvar-Fard 2015; Lee et al. 2013; Son et al. 2014). The use of image-based techniques and their application in the construction industry are discussed in the following section.

2.2 Image-Based 3D Reconstruction State of the Art in the Construction Industry

Photogrammetry is well known as a 3D reconstruction technique. It was originally developed for generating stereo elevation models from aerial photos (Mikhail et al. 2001). In recent decades, advances in the camera production industry have provided the accuracy required for close-range applications such as industrial inspection and architectural documentation. The challenge associated with traditional photogrammetry techniques is the manual labour and computationally intensive processing involved; when high accuracy is desired for further investigation, computational cost and time also increase. This drawback has been a limiting factor over the years and makes traditional photogrammetry inapplicable for near-real-time modeling. Later advances in processing hardware significantly improved the preprocessing time involved in common feature detection; however, achieving the required accuracy was still time consuming (Mikhail et al. 2001). With the rise of digital images, traditional techniques were replaced with modern, digital photogrammetry, which resulted in profound improvements in accuracy relative to the required processing time (Brilakis et al. 2012; Jahanshahi et al. 2009; Liu et al. 2014). In the construction industry, the use of modern photogrammetry thus became more significant, as it was much less expensive than laser scanning, the most common technique in the AEC industry for as-built status acquisition mentioned earlier. A study by El-Omari and Moselhi (2008) showed the benefit of integrating photogrammetry and laser scanning for progress tracking, which proved more time effective. Researchers have also extended the use of close-range photogrammetry to pavement pothole localization and progress tracking in pipe-works construction (Ahmed et al. 2011; Ahmed et al. 2012). Most of this research showed the time effectiveness of photogrammetry in the related industry; however, major limitations, such as limited accuracy and the intensive preprocessing required (e.g., camera calibration), severely diminished the method's practical utility compared to its competitor, laser scanning. New techniques have since emerged, however, that detect common features with the desired accuracy without requiring camera calibration. Such automated feature detection techniques include the scale-invariant feature transform (SIFT) and speeded-up robust features (SURF). Golparvar-Fard et al. (2009) introduced an automated reconstruction technique that uses time-lapsed photographs to generate the as-built status of construction projects in order to measure progress on construction sites.
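To make the role of such detectors concrete, the short sketch below extracts SIFT keypoints and descriptors from a single site photo using OpenCV's Python bindings. This is an illustrative sketch rather than part of the authors' pipeline; the file name is a placeholder, SIFT is available in the main cv2 module only in OpenCV 4.4 and later, and SURF resides in the opencv-contrib xfeatures2d module because of patent restrictions.

```python
import cv2

# Illustration of automated feature detection (no camera calibration required):
# detect scale-invariant keypoints and compute their descriptors for one photo.
image = cv2.imread("site_photo.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file name

sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(image, None)
print(len(keypoints), "keypoints; descriptor array shape:", descriptors.shape)

# Descriptors extracted from overlapping photos can then be matched to establish
# the common features that drive image-based 3D reconstruction (see Section 3.1).
```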
Zhu and Brilakis (2009) investigated the accuracy of optical-based methods compared to laser-based techniques for reconstructing the built status of civil infrastructure. They concluded that laser scanning has a larger data collection range and a higher level of accuracy, which makes it more reliable, though also more costly. They suggested efficient optical-based techniques for different applications in the construction industry under various circumstances; these methods include videogrammetry and 3D range imaging, which can be employed depending on the level of accuracy required and the intended application.

In the construction industry, several research efforts have attempted to evaluate the accuracy, and to investigate the time and cost effectiveness, of image-based methods compared to laser scanning. Golparvar-Fard et al. (2011) evaluated the accuracy of an image-based 3D point cloud for determining the as-built status of construction elements against a laser-based 3D point cloud. They concluded that the accuracy provided by laser scanning is slightly higher than that of image-based techniques; however, image-based reconstruction is quicker and less costly. Bhatla et al. (2012) evaluated the accuracy of an image-based reconstruction framework for assessing the built status of bridge girders. According to this study, image-based 3D reconstruction using free, promising commercial software packages provided the desired level of accuracy; however, their study also revealed that laser scanning is more appropriate for highly accurate modeling purposes. Dai et al. (2013) comprehensively compared image-based techniques with time-of-flight techniques on various civil infrastructure. The comparison covers the accuracy of reconstruction, the density of the generated point cloud, the cost of purchasing and operating the equipment, and the time required for setup and processing. Their study revealed that both photogrammetry and videogrammetry can provide sufficiently dense and accurate point clouds for applications such as visualization, and that photogrammetry is sufficiently accurate to be used for as-built quality control on construction sites.

In summary, image-based 3D reconstruction, originally known as photogrammetry, now provides the desired accuracy within a reasonable processing time thanks to the recent advances in the related industry discussed previously. Such improvements in image-based reconstruction provide the opportunity to employ related techniques in different inspection phases of civil infrastructure. Despite the significant impact of automated tools in general, and image-based techniques in particular, in the construction industry, their use is mostly directed toward object recognition for status assessment. A preventive approach that measures discrepancies in a time-effective way and thereby avoids rework is needed. Nahangi et al. (2015a) showed the effectiveness of laser-based point clouds for discrepancy detection and a realignment strategy. This paper aims to generalize those developments by employing an image-based framework for as-built status acquisition and by measuring the performance of the resulting realignment strategies. The detailed methodology, along with the required functions and metrics, is explained in the following section.

3 PROPOSED METHODOLOGY

An image-based framework is used to generate the 3D model required for as-built status identification and discrepancy quantification. As shown in Figure 2, the image-based algorithm for automated discrepancy detection and quantification has two primary steps: (1) image-based 3D point cloud generation, which provides the as-built status with a reasonable level of accuracy, and (2) processing, which enables detection, localization, and quantification of incurred discrepancies based on the original drawings. The primary steps and the required components are explained in the following sections.
Figure 2: Algorithm for automated discrepancy quantification using an image-based framework (inspectors and QA/QC managers take photos, the photos are uploaded to a cloud server, the 3D point cloud of the assemblies is generated, noise is removed, the 3D image is registered with the original CAD drawings, discrepancies are detected and quantified, and a realignment plan is developed).

3.1 Image-Based 3D Point Cloud Generation

Inspectors and QA/QC managers use off-the-shelf digital cameras; cellphone and tablet cameras can also be used to acquire the required images. Once a sufficient number of images with sufficient overlap are captured, they are imported to a cloud server/database for point cloud generation. Image processing toolboxes for image stitching and 3D point cloud generation can be employed, including the MATLAB Image Processing Toolbox and open-source libraries such as OpenCV (an open-source computer vision library) and PCL (the Point Cloud Library). In this study, Autodesk 123D Catch (www.123dapp.com/catch) is employed. The 3D reconstruction of construction assemblies from images consists of the following steps (illustrated by the sketch at the end of this subsection):

• Finding common features in the images taken,
• Matching the common features detected, and
• Transforming the images into a global coordinate space based on the common features.

Given the processing steps above, taking more images improves the applicability and performance of the algorithm. Furthermore, a sufficient level of overlap between images must exist in order to find a reliable set of matching features. Once a 3D model of the construction environment is roughly generated, unwanted objects and the existing noise are filtered. Noise removal is performed manually by removing objects outside the region of interest; this can be automated in the future.
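For a single pair of overlapping photos, the three steps above can be illustrated with the minimal sketch below, assuming OpenCV's Python bindings and a known (or approximated) camera intrinsic matrix K. A full structure-from-motion pipeline, such as the 123D Catch service used in this study, repeats this over many image pairs and triangulates the matches into a point cloud; the function name and file paths here are hypothetical.

```python
import cv2
import numpy as np

def relative_pose(img_path1, img_path2, K):
    """Match SIFT features between two overlapping photos and recover their
    relative camera pose. K is the 3x3 intrinsic matrix (assumed known or
    approximated from the image size and focal length)."""
    img1 = cv2.imread(img_path1, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img_path2, cv2.IMREAD_GRAYSCALE)

    # Step 1: find common (local) features in each image.
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Step 2: match the detected features; Lowe's ratio test rejects ambiguous matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = [m for m, n in matcher.knnMatch(des1, des2, k=2)
               if m.distance < 0.7 * n.distance]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Step 3: estimate the essential matrix and recover the relative rotation and
    # translation that place both views in a common coordinate space.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```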
3.2 Processing

Once an appropriate 3D model is generated using the proposed image-based framework, a set of processes is required to quantify the incurred discrepancies, based on the framework developed by Nahangi et al. (2015b). These processes include (a sketch of the registration and local-frame steps follows this list):

• Registration: the filtered 3D model is registered with the CAD drawings, converted to an appropriate format (e.g. a point cloud sampled from the STL model), in order to enable a comparison. A modified iterative closest point (ICP) algorithm is used for registration in this study.
• Kinematics chain development: the geometric relationships are established using a robotics analogy. The kinematics chain development results in a set of transformations required to identify the geometry of the assemblies.
• Local registration: a sliding cube moves along the assembly and quantifies the discrepancy where it occurs. A local registration is performed on the points contained in the cube at each location where discrepancy is investigated. The resulting discrepancy is then transformed to the local coordinate system defined by the previously developed kinematics chain.

The quantified discrepancy can be fed into a realignment planning framework to generate potential realignment strategies (Nahangi et al. 2015a).
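As a rough illustration of the registration and local-frame steps, the sketch below implements a plain point-to-point ICP iteration and the transformation of a globally expressed discrepancy into a local frame obtained by composing a chain of homogeneous transforms. It is a generic numpy/scipy sketch under stated assumptions, not the modified (constrained) ICP or the specific kinematics chain formulated by Nahangi et al. (2015a, 2015b); the function names and the transform convention are assumptions made for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_point_to_point(source, target, iterations=30):
    """Rigidly align `source` (N x 3) to `target` (M x 3) with plain point-to-point
    ICP. Returns a 4x4 homogeneous transform mapping source into the target frame."""
    src = source.copy()
    tree = cKDTree(target)
    T_total = np.eye(4)
    for _ in range(iterations):
        # 1. Nearest-neighbour correspondences.
        _, idx = tree.query(src)
        corr = target[idx]
        # 2. Best rigid transform via SVD of the cross-covariance matrix.
        c_src, c_corr = src.mean(axis=0), corr.mean(axis=0)
        H = (src - c_src).T @ (corr - c_corr)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:      # guard against a reflection solution
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = c_corr - R @ c_src
        # 3. Apply the incremental transform and accumulate it.
        src = src @ R.T + t
        T_step = np.eye(4)
        T_step[:3, :3], T_step[:3, 3] = R, t
        T_total = T_step @ T_total
    return T_total

def to_local_frame(D_global, chain):
    """Express a 4x4 discrepancy transform, measured in the global (origin) frame,
    in the local frame reached by composing a kinematic chain of 4x4 link transforms."""
    T_0l = np.eye(4)
    for T_link in chain:
        T_0l = T_0l @ T_link          # forward kinematics: origin -> local cube frame
    return np.linalg.inv(T_0l) @ D_global @ T_0l
```

In the full framework, the same alignment would be run locally on the points captured by the sliding cube at each position along the assembly, and the recovered rotation and translation would then be expressed in the link frame defined by the kinematics chain.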
4 EXPERIMENTAL VERIFICATION

A set of experiments is designed to verify and validate the performance of the image-based algorithm for discrepancy quantification described above.

4.1 Design of Experiments

The experiments are performed on a small-scale pipe spool assembly in the Civil Infrastructure Sensing Laboratory at the University of Waterloo. The pipe spool is designed so that the connections and joints allow reconfigurability. In other words, the pipe spool can be reconfigured by introducing arbitrary alterations, so that the algorithm for discrepancy quantification can be tested. The reconfigurable pipe spool used for the experimental tests is shown in Figure 3. An off-the-shelf digital camera, a Canon SX40 HS at 12.2-megapixel resolution, is used for this experimental study. Technical details of the camera can be found in Table 1.

Figure 3: The experimental specimen (reconfigurable pipe spool) with adjustable connections, and the investigated branch.

[...] applications. Moreover, using a finer mesh for 3D reconstruction will provide a more accurate point cloud, although it is computationally more intensive.

5 CONCLUSIONS AND RECOMMENDATIONS

An image-based framework was presented for automated discrepancy detection and realignment planning. The scope of this study was industrial assemblies in general and pipe spool assemblies in particular. Typical users, such as QA/QC managers or inspectors, use cellphones, tablets, or digital cameras to take unordered photos and upload them to a cloud server for processing. The processing step results in a 3D point cloud representing the built status, which is compared with the original CAD drawings. For this comparison, 3D image registration is employed. Once the two representative states are appropriately registered, a robotics analogy is used to develop the geometry of the assemblies. Such an approach provides localized discrepancy feedback to be used for realignment strategies. Some remarks on the experimental verification follow:

• The proposed image-based framework is significantly cheaper than laser-based techniques. This makes it more applicable, particularly in cases where a lower level of accuracy is acceptable.
• The data acquisition step for the proposed image-based framework requires minimal prior knowledge and training.
• Image-based discrepancy measurements are reasonably reliable and accurate; however, laser-based point clouds are more precise, which concurs with previous evaluation studies (Bhatla et al. 2012; Dai et al. 2013; Golparvar-Fard et al. 2011).

A limitation of the proposed methodology is the additional processing time required to retrieve the images and generate the 3D point cloud.

Acknowledgements

The authors would like to thank Ian Bowes for assisting in the development of the framework during his co-op term at the University of Waterloo. This research is funded by the Natural Sciences and Engineering Research Council (NSERC) of Canada (partially through an NSERC CRD grant with Aecon and SNC-Lavalin and an NSERC Discovery Grant).

Figure 5: Experimental results for (a) rotational discrepancy and (b) translational discrepancy: error (degrees and cm, respectively) versus test number for laser-based (Nahangi et al. 2015b) and image-based quantification.

References

Ahmed, M., Haas, C. T., Haas, R., 2012. "Using Digital Photogrammetry for Pipe-Works Progress Tracking." Canadian Journal of Civil Engineering, 39(9), 1062-1071.
Ahmed, M., Haas, C. T., Haas, R., 2011. "Toward Low-Cost 3D Automatic Pavement Distress Surveying: The Close Range Photogrammetry Approach." Canadian Journal of Civil Engineering, 38(12), 1301-1313.
Bhatla, A., Choe, S. Y., Fierro, O., Leite, F., 2012. "Evaluation of Accuracy of as-Built 3D Modeling from Photos Taken by Handheld Digital Cameras." Autom. Constr., 28, 116-127.
Bosche, F., Haas, C. T., Akinci, B., 2009. "Automated Recognition of 3D CAD Objects in Site Laser Scans for Project 3D Status Visualization and Performance Control." J. Comput. Civ. Eng., 23(6), 311-318.
Brilakis, I., Dai, F., Radopoulou, S., 2012. "Achievements and Challenges in Recognizing and Reconstructing Civil Infrastructure." Lecture Notes in Computer Science, 7474 LNCS, 151-176.
Brilakis, I., Fathi, H., Rashidi, A., 2011. "Progressive 3D Reconstruction of Infrastructure with Videogrammetry." Autom. Constr., 20(7), 884-895.
Dai, F., Rashidi, A., Brilakis, I., Vela, P., 2013. "Comparison of Image-Based and Time-of-Flight-Based Technologies for Three-Dimensional Reconstruction of Infrastructure." J. Constr. Eng. Manage., 139(1), 69-79.
Dimitrov, A., and Golparvar-Fard, M., 2015. "Segmentation of Building Point Cloud Models Including Detailed Architectural/Structural Features and MEP Systems." Autom. Constr., 51, 32-45.
El-Omari, S., and Moselhi, O., 2008. "Integrating 3D Laser Scanning and Photogrammetry for Progress Measurement of Construction Work." Autom. Constr., 18(1), 1-9.
Golparvar-Fard, M., Bohn, J., Teizer, J., Savarese, S., Peña-Mora, F., 2011. "Evaluation of Image-Based Modeling and Laser Scanning Accuracy for Emerging Automated Performance Monitoring Techniques." Autom. Constr., 20(8), 1143-1155.
Golparvar-Fard, M., Peña-Mora, F., Arboleda, C. A., Lee, S., 2009. "Visualization of Construction Progress Monitoring with 4D Simulation Model Overlaid on Time-Lapsed Photographs." J. Comput. Civ. Eng., 23(6), 391-404.
Jahanshahi, M. R., Kelly, J. S., Masri, S. F., Sukhatme, G. S., 2009. "A Survey and Evaluation of Promising Approaches for Automatic Image-Based Defect Detection of Bridge Structures." Structure and Infrastructure Engineering, 5(6), 455-486.
Koch, C., Jog, G., Brilakis, I., 2013. "Automated Pothole Distress Assessment using Asphalt Pavement Video Data." J. Comput. Civ. Eng., 27(4), 370-378.
Lee, J., Son, H., Kim, C., Kim, C., 2013. "Skeleton-Based 3D Reconstruction of as-Built Pipelines from Laser-Scan Data." Autom. Constr., 35, 199-207.
Liu, Y., Cho, S., Spencer Jr., B., Fan, J., 2014. "Concrete Crack Assessment using Digital Image Processing and 3D Scene Reconstruction." J. Comput. Civ. Eng.
Mikhail, E. M., Bethel, J. S., McGlone, J. C., 2001. Introduction to Modern Photogrammetry, John Wiley & Sons Inc.
Nahangi, M., Haas, C. T., West, J., Walbridge, S., 2015a. "Automatic Realignment of Defective Assemblies using an Inverse Kinematics Analogy." ASCE Journal of Computing in Civil Engineering.
Nahangi, M., Yeung, J., Haas, C. T., Walbridge, S., West, J., 2015b. "Automated Assembly Discrepancy Feedback using 3D Imaging and Forward Kinematics." Automation in Construction, in press.
Nahangi, M., and Haas, C. T., 2014. "Automated 3D Compliance Checking in Pipe Spool Fabrication." Advanced Engineering Informatics, 28(4), 360-369.
Park, H. S., Lee, H. M., Adeli, H., Lee, I., 2007. "A New Approach for Health Monitoring of Structures: Terrestrial Laser Scanning." Computer-Aided Civil and Infrastructure Engineering, 22(1), 19-30.
Razavi, S. N., and Haas, C. T., 2010. "Multisensor Data Fusion for on-Site Materials Tracking in Construction." Autom. Constr., 19(8), 1037-1046.
Son, H., Kim, C., Kim, C., 2014. "Fully Automated as-Built 3D Pipeline Extraction Method from Laser-Scanned Data Based on Curvature Computation." J. Comput. Civ. Eng., B4014003.
Turkan, Y., Bosche, F., Haas, C. T., Haas, R., 2012. "Automated Progress Tracking using 4D Schedule and 3D Sensing Technologies." Autom. Constr., 22, 414-421.
Zhu, Z., and Brilakis, I., 2009. "Comparison of Optical Sensor-Based Spatial Data Collection Techniques for Civil Infrastructure Modeling." J. Comput. Civ. Eng., 23(3), 170-177.
"A New Approach for Health Monitoring of Structures: Terrestrial Laser Scanning." Computer-Aided Civil and Infrastructure Engineering, 22(1), 19-30. Razavi, S. N., and Haas, C. T., 2010. "Multisensor Data Fusion for on-Site Materials Tracking in Construction." Autom. Constr., 19(8), 1037-1046. Son, H., Kim, C., Kim, C., 2014. "Fully Automated as-Built 3D Pipeline Extraction Method from Laser-Scanned Data Based on Curvature Computation." J. Comput. Civ. Eng., , B4014003. Turkan, Y., Bosche, F., Haas, C. T., Haas, R., 2012. "Automated Progress Tracking using 4D Schedule and 3D Sensing Technologies." Autom. Constr., 22(0), 414-421. 047-9 Zhu, Z., and Brilakis, I., 2009. "Comparison of Optical Sensor-Based Spatial Data Collection Techniques for Civil Infrastructure Modeling." J. Comput. Civ. Eng., 23(3), 170-177.      047-10  AN IMAGE-BASED FRAMEWORK FOR AUTOMATED DISCREPANCY QUANTIFICATION AND REALIGNMENT OF INDUSTRIAL ASSEMBLIES Mohammad Nahangi  & Thomas  Czern iawski ,  Jamie  Yeung,  Car l  Haas ,  Scot t  Walbr idge ,  Je ff rey  West  OUTLINE •  Problem statement and motivation •  Related background •  Proposed framework •  Experimental results •  Conclusions and outlook Civil Infrastructure Sensing (CIS) Laboratory 2	  PROBLEM STATEMENT Civil Infrastructure Sensing (CIS) Laboratory 3	  PROBLEM STATEMENT Civil Infrastructure Sensing (CIS) Laboratory 4	  RELATED BACKGROUND Civil Infrastructure Sensing (CIS) Laboratory •  Discrepancy quantification "   Deviation analysis "   Our previous research 5	  Detected Deviation Connecting Piece Unconstrained registration Constrained registration Anil et al. 2013 RELATED BACKGROUND Civil Infrastructure Sensing (CIS) Laboratory •  Image-based 3D reconstruction  6	  Liu et al. 2014 Golparvar-Fard et al. 2011 Ahmed et al. 2013 CONTRIBUTION Developing an accurate and reliable framework for quick discrepancy quantification between the built and designed states for industrial assemblies Civil Infrastructure Sensing (CIS) Laboratory 7	  PROPOSED FRAMEWORK Civil Infrastructure Sensing (CIS) Laboratory •  3D model generation and fabrication/assembly control Unordered images taken by inspectors Cloud databaseuploadGenerate 3D modelreconstructionExtract 3D dimensions and compare with original drawingsprocessFeedback for inspectors8	  METHODOLOGY Civil Infrastructure Sensing (CIS) Laboratory 9	  Inspectors and QA/AC managers take photosUpload photos to a cloud serverGenerate the 3D point cloud of assembliesNoise removal 3D image registration with orginal CAD drawingsDiscrepancy detection and quantificationRealignment plan developmentSTEP 1: IMAGE-BASED 3D RECONSTRUCTION Civil Infrastructure Sensing (CIS) Laboratory •  Potential tools for 3D reconstruction "   MATLAB image processing toolbox "   Point Cloud Library (PCL) " OpenCV (Computer Vision)-University of Washington "   Online servers (Autodesk 123D Catch-Microsoft Photosynth) 10	  STEP 1: IMAGE-BASED 3D RECONSTRUCTION Civil Infrastructure Sensing (CIS) Laboratory •  Summary of 3D reconstruction "   Finding common features in the images taken, "   Matching the common features detected, and "   Transforming the images into a global coordinate system based on the previously matched features 11	  Jahanshahi et al. 
STEP 2: DISCREPANCY ANALYSIS
• Preprocessing
  - Clutter removal
  - 3D CAD model format conversion
• Point cloud registration - Iterative Closest Point (ICP)
  - A modified (constrained) registration is developed to account for real, on-site assembly and erection situations.
[Figure: registration results (a) and (b), each showing the resulting discrepancy; the connecting platform is fixed, the user identifies the fixed frames as the origin platform, and the registration is refined by matching the selected fixed frames.]
• Kinematics chain development - robotics analogy
[Figure: kinematic chain assigned along the assembly, with local frames (x_i, z_i), link lengths L_1–L_5 and offsets H_1, H_2; the global frame {G} sits at the origin of the assembly where it connects to the adjacent segment, and a local frame {l} sits at the cube position where the discrepancy is investigated.]
• Quantification of local discrepancies and transformation into the global coordinate system: within each local frame $\{l\}_i$, the discrepancy is quantified as $(r, t)_i^l \leftarrow \mathrm{ICP}(m_i, s_i)$ and then transformed through $T_g^l$ into the global frame $\{g\}$, giving $(r, t)_i^g$ (a minimal composition sketch is given after the slide outline below).
[Figure: cube located at a critical location; fixed frame.]

EXPERIMENTAL RESULTS
• Design of experiments
[Figure: reconfigurable pipe spool with adjustable connections.]
• Camera properties (Canon SX40-HS)

  Camera type:              Digital camera
  Image resolution (size):  4000×3000 [L]*, 2816×2112 [M1]*, 1600×1200 [M2]*, 640×480 [S]*
  Shutter speed:            1/3200 s (min) – 15 s (max)
  Focal length:             24–840 mm
  * [L]: Large, [M]: Medium, [S]: Small.

[Figure: reconstruction results (a), (b), (c).]

• Reconstruction statistics

  Size of images taken [L]:                        4000×3000
  Number of images processed:                      59
  Total number of points retrieved:                134,742
  Number of points retrieved from the pipe spool:  15,152
  Total processing time:                           26 min

[Charts: rotational error (degree) and translational error (cm) versus test number, comparing the laser-based results (Nahangi et al. 2015b) with the image-based results.]

CONCLUSIONS
• Remarked observations
  - Accuracy (concurs with previous studies)
  - Time-related aspects
  - Applicability
• Path forward
  - Time effectiveness improvement
  - Real-time assembly
  - Smart and guided quality control

Question & Discussion
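The following minimal sketch illustrates the local-to-global transformation idea referenced in the Step 2 slides: a discrepancy quantified locally (for example by ICP within a cube around a joint) is expressed as a homogeneous transform and carried into the assembly's origin frame by composing the kinematic chain. The chain construction, the helper functions rot_z and trans, and the numeric values are illustrative assumptions, not the authors' implementation or measured data.

```python
# Sketch of the local-to-global idea from Step 2: a discrepancy quantified
# locally (e.g. by ICP between designed and as-built points inside a cube) is
# written as a 4x4 homogeneous transform and carried to the assembly origin
# frame by composing the kinematic chain from the origin to that cube.
# The chain and the numeric values below are illustrative, not measured data.
import numpy as np

def rot_z(theta):
    """Homogeneous rotation about the local z axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0, 0.0],
                     [s,  c, 0.0, 0.0],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def trans(x, y, z):
    """Homogeneous translation (e.g. a link length between consecutive frames)."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def local_to_global(chain, local_disc):
    """Forward kinematics: compose origin->cube transforms, then express the
    local discrepancy transform in the global (origin) frame by conjugation."""
    T_gl = np.eye(4)
    for T in chain:
        T_gl = T_gl @ T
    return T_gl @ local_disc @ np.linalg.inv(T_gl)

if __name__ == "__main__":
    # Illustrative chain: a riser, a 90-degree elbow, and a horizontal run.
    chain = [trans(0.0, 0.0, 0.5), rot_z(np.pi / 2.0), trans(0.4, 0.0, 0.0)]
    # Illustrative local discrepancy: 2 degrees of rotation and a 1 cm offset.
    local_disc = rot_z(np.radians(2.0)) @ trans(0.01, 0.0, 0.0)
    print(local_to_global(chain, local_disc))
```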
