International Construction Specialty Conference of the Canadian Society for Civil Engineering (ICSC) (5th : 2015)

An exploration of image-based walk through technologies
Bradley, David Cody; Rankohi, Sara; Rankin, Jeff H.; Waugh, Lloyd M.
Jun 30, 2015



5th International/11th Construction Specialty Conference
5e International/11e Conférence spécialisée sur la construction
Vancouver, British Columbia
June 8 to June 10, 2015 / 8 juin au 10 juin 2015

AN EXPLORATION OF IMAGE-BASED WALK THROUGH TECHNOLOGIES

David Cody Bradley 1, Sara Rankohi 2,5, Jeff H. Rankin 3, Lloyd M. Waugh 4
1 O'Kane Consultants Inc., Fredericton, NB, Canada
2 Canam Group/Groupe Canam, Montreal, QC, Canada
3 University of New Brunswick, Fredericton, NB, Canada
4 University of New Brunswick, Fredericton, NB, Canada
5 sara.rankohi@groupecanam.com

Abstract: Construction sites contain vast amounts of information. Recent advances in image-based visualization techniques enable monitoring of construction progress through interactive, visual approaches. Photographs capture construction information in great detail while allowing users to absorb as much information as they need; however, as-built construction images exclude portions of the site that may later become of interest to project participants. To overcome the limitations of existing image-based monitoring techniques, this research focuses on an image-based virtual walk-through approach to monitoring construction sites. Using a 3D model to create a virtual walk-through enables a comprehensive record and delivers the information in an intuitive manner. A pilot study was conducted to create several as-built 3D models from construction photographs; these 3D models were then visualized in a 3D walk-through model. Within such an environment, the as-built construction objects are visualized to convey the status of construction progress. The study shows that this 3D image-based walk-through system enables the user to gain a realistic understanding of the construction site.

1 INTRODUCTION

Research has been conducted on image-based walk-through virtual reality technologies for monitoring construction projects (Rankohi et al. 2013).
Dang et al. (2011) proposed a panorama-based semi-interactive 3D reconstruction framework for indoor scenes. Their framework overcomes the problem of limited field of view in indoor scenes and has the desired properties of robustness, efficiency, and accuracy. Bradley et al. (2005) present a system for virtual navigation in real environments using image-based panorama rendering. Roh et al. (2011) proposed an object-based 3D walk-through model for interior construction progress monitoring. Virtual Reality Documentation, or VR Doc (www.vrdoc.ca), records construction progress through high-resolution 360-degree panoramas (Rankohi et al. 2014). An intuitive VR Doc user interface allows the user to view panoramas from a specific date and location. Although VR Doc panoramas provide great detail through their zoom capability, items out of view may become important as construction progresses. Increasing VR Doc coverage beyond predetermined locations would therefore create a comprehensive photographic record of construction, of value to the entire construction team. However, using VR panoramas to capture an entire construction site would produce an overwhelming amount of data and require a significant amount of time to capture and process. Using photogrammetry techniques to compile site photographs into a single 3D walk-through environment provides a simple method of displaying and navigating a large, detailed data set.

The walk-through is seen as a "global" tool to be used at all levels of management and through all stages of construction. Before construction even begins, the walk-through could assist with site planning and disaster management. During construction, inspectors would be able to perform quick assessments of the work being performed, safety personnel could perform routine safety audits, and subcontractors could better predict their schedules by checking the status of the work preceding them.
Use also continues long after construction is complete, for example by locating hidden items when planning upgrades and renovations. The technology could even be extended to perform measurements, incorporating surveying and material quantity calculations. This paper investigates generating a low-resolution walk-through model that retains sufficient detail to visualize and document progress, complementing current VR Doc features. To reach this goal, different software packages and photography techniques were tested and the results compared. Finally, based on the results achieved, a first-person walk-through model of an interior hallway was developed.

2 GOPRO CAMERA

A GoPro camera (www.gopro.com) was chosen to incorporate 3D model photography into the existing VR Doc workflow. The GoPro Hero 2 is an 11-megapixel camera capable of capturing both HD video and still images. The camera measures 42 mm x 60 mm x 30 mm and has many mounting options that allow attachment to the photographer's body, a helmet, and various other objects or vehicles. The lens provides a 170-degree horizontal field of view, captured by a 1.016 cm CMOS sensor. Its size and portability make it ideal to integrate into the current VR Doc workflow. A time-lapse option lets the user specify a capture interval of 0.5, 1, 2, 5, 10, 30, or 60 seconds. An interval of 2 seconds was chosen, as it allows sufficient time to pass at a walking pace between images. Using this mode, the camera can be mounted on the photographer during VR Doc capture to simultaneously capture 3D model data.

Using the GoPro for photogrammetry purposes poses some problems. While it captures a large field of view and so minimizes the number of photographs needed to cover a scene, the lens produces images with significant optical imperfections that affect the construction of a 3D model. The most visible imperfection is barrel distortion: overlapping images are difficult to match because objects take on different shapes in each image.
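Barrel distortion of this kind is commonly described by a polynomial in the squared radius of a point in normalized image coordinates. As a minimal pure-Python sketch, with illustrative coefficients rather than calibrated values for the GoPro lens, the model can be applied and then inverted by fixed-point iteration; this is the same family of radial corrections that dedicated tools apply internally:

```python
def distort(x, y, k1, k2):
    """Brown radial model: a point at radius r moves by the factor 1 + k1*r^2 + k2*r^4.
    Negative k1 gives barrel distortion (points pulled toward the image centre)."""
    r2 = x * x + y * y
    f = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * f, y * f

def undistort(xd, yd, k1, k2, iters=20):
    """Invert the radial model by fixed-point iteration: repeatedly divide the
    distorted coordinates by the distortion factor evaluated at the current guess."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        f = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / f, yd / f
    return x, y

# Illustrative coefficients only -- NOT calibrated values for the GoPro Hero 2 lens.
k1, k2 = -0.25, 0.05
xu, yu = 0.6, 0.4                      # a point in normalized image coordinates
xd, yd = distort(xu, yu, k1, k2)       # where the distorting lens would image it
xr, yr = undistort(xd, yd, k1, k2)     # recovered undistorted position
```

For moderate distortion the iteration converges quickly, so every pixel of an image can be remapped this way; real tools add further terms (tangential distortion, vignetting) on top of this radial core.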
Figure 1 compares the barrel distortion produced by the GoPro to an image taken with a Canon EOS 60D (a 17.9-megapixel camera with a 22.3 x 14.9 mm sensor, 1.6x crop factor). Corrections were applied to reduce the imperfections found in the GoPro images. Table 1 summarizes the corrections applied using DxO OpticsPro 10 (www.dxo.com).

Table 1: A summary of corrections applied using DxO

  DxO optical corrections:
  - Lens distortion
  - Vignetting
  - Chromatic aberration
  - Lens softness
  - Denoising

3 BUILDING A 3D MODEL

3.1 Photography

A 3D model generated from photographs has different photography and post-processing requirements than VR Doc panoramas. While capturing the twelve photographs for a VR Doc panorama the camera

3.3 Virtual Walk-through

A first-person walk-through is an intuitive method of exploring the model in detail. The user is free to move around the model as if walking on foot, with control of the direction of view as well as forward, backward, and side-to-side movement. Many video games employ this type of viewpoint, which is achieved using a video game engine. A game engine is used to design and publish multi-platform video games; in this case, the video game is simply the 3D model.

The Unity (www.unity3d.com) game engine was chosen to build the walk-through, as it has a free version with sufficient capabilities. After the model is built in Photoscan, it is exported in .fbx format with textures exported as .png. These two files are then imported into a Unity project. Before adding the model to the scene, the normal vectors were set to calculate; otherwise the model was displayed inside out. Colliders were also enabled to form a rigid surface over the entire mesh, creating boundaries on floors, walls, and objects. Textures are a separate component from the model and must be applied to the mesh. Unity has many options to apply a texture and introduce artificial lighting.
For this application we wish to preserve the original scene, so the unlit texture option is used. Once rendered, the walk-through can be exported as a standalone executable file to be viewed on a personal computer. There is also a web delivery option, which would facilitate delivery to a client.

3.4 Description of Variables/Challenges

Various challenges were encountered during the photography stage. Errors typically become evident only after photography is complete, when building the 3D model. It is also difficult to determine the exact cause of an error, as different causes tend to produce similar results; it is therefore important to pay close attention to all details during photography. Table 2 highlights the main challenges and expected solutions associated with photography.

Table 2: The main challenges and expected solutions associated with photography

  Challenge: Model will not build, contains holes, or is severely distorted
  Solutions: increase the number of photos; avoid moving objects in the scene; take more photos from other angles; photograph objects with texture

  Challenge: Low detail in the model
  Solution: increase photo resolution

The number of photographs required to build a 3D model depends on the size of the scene, the lens used, and the amount of overlap between images. A wide-angle lens is recommended for interior photography in order to obtain sufficient overlap while minimizing the number of images. A wide-angle lens also helps in scenes with low texture, providing more opportunities for key point detection. Holes will appear in the model if portions of the scene are not sufficiently covered during photography. In extreme cases, entire sections of the model will be left out, or multiple sub-models will be created if there is not enough information to link sections together. Distortions or holes will also appear if objects move or if the scene contains repeating patterns; repeating patterns have similar key points that can be confused during matching. Several camera settings also influence the outcome of the model.
Interiors typically have lower light levels than exteriors and require increasing the ISO. High ISOs introduce noise into the image, making key point identification difficult. Wide apertures are also often used in low light and have a large impact on the depth of field. Many photography factors contribute to a sharp image, including ISO, shutter speed, and focus point. A sharp image is critical to key point identification, so attention to focus point and shutter speed is essential, especially while hand-holding the camera. A properly photographed scene will proceed through post-processing with little difficulty.

Another challenge in post-processing is the time required to build the model. Projects containing hundreds of images with high quality settings can take upwards of twelve hours to process. More processing resources would reduce this time and allow more trials to be performed. The post-processing challenges encountered are identified in Table 3.

Table 3: Post-processing challenges

  Challenge: Low key point matching
  Solutions: apply distortion correction; reduce noise in images; check the focus point; increase shutter speed; decrease ISO

  Challenge: Slow processing
  Solution: use a faster computer

4 MODEL BUILDING SOFTWARE

The following section examines each variable independently using panoramas taken in a laboratory setting. Each variable has a base image against which its effects are compared. Base images were taken in Room H135, Head Hall, using artificial overhead lighting with one window providing exterior light. The software packages used in this pilot study are Agisoft PhotoScan, Photomodeler, Autodesk 123D Catch, Visual SFM, and Unity 3D.

Agisoft Photoscan (www.agisoft.com) is a fully automated, commercially available software package. Photoscan is a complete 3D model building package, creating 3D point clouds, meshes, and textured models.
Limited tools are available for point cloud and mesh editing, although it is possible to export either for further refinement. Photoscan performs very well with large unordered photo collections.

Similar to Photoscan, Photomodeler (www.photomodeler.com) is a complete 3D model building package. Photomodeler gives the user a great amount of control throughout the building process. Large unordered photo collections of a building interior were not one of Photomodeler's strengths: the models generated were very distorted. Photomodeler appears to be better suited to stereo-pair photography with the use of coded targets. While it is possible to create dense point clouds without targets, this type of work is largely seen in aerial imagery.

Autodesk 123D Catch (www.123dapp.com/catch) is a free web-based service provided by Autodesk. As many as seventy images, three megapixels or larger, can be uploaded to its servers, where all of the processing takes place. Model construction is fast, as local computing resources are not a limiting factor. Poor results were obtained when building 3D models of interiors: models were very distorted and incomplete, and many of the photos could not be matched.

Visual SFM (www.ccwu.me/vsfm) is an open-source program that wraps a collection of scripts developed by various researchers in a GUI. There is no limit on the number of images, although the largest image dimension must be no greater than 3,200 pixels. Once a point cloud is created, it can be exported for further processing or for use in other software to continue with 3D model generation.

Unity 3D (www.unity3d.com) was the only video game engine tested and produced satisfactory results. Standalone and web-based delivery options are possible and fit well with the current workflow of VR Doc.

5 PILOT STUDY

5.1 Initial Model Testing

Testing was originally performed to evaluate each software package.
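One preprocessing detail worth noting before the tests: any image set destined for Visual SFM must respect its 3,200-pixel cap on the largest dimension. A minimal sketch of the downscaling arithmetic (the helper name is mine, and in practice the actual resampling would be done by an image tool such as Pillow or ImageMagick):

```python
def fit_max_dimension(width, height, max_dim=3200):
    """Scale (width, height) down so the larger side is at most max_dim,
    preserving aspect ratio. Images already within the limit are unchanged."""
    longest = max(width, height)
    if longest <= max_dim:
        return width, height
    scale = max_dim / longest
    return round(width * scale), round(height * scale)

# A Canon 60D frame (nominally 5184 x 3456) exceeds Visual SFM's 3,200-pixel cap:
print(fit_max_dimension(5184, 3456))   # -> (3200, 2133)
```

Downscaling trades resolution for compliance with the cap, which matters here because, as the results below show, lower-resolution inputs yield sparser point clouds.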
The three packages evaluated are Photoscan, 123D Catch, and Visual SFM. Several other dimensions of building a 3D model became evident throughout testing, including the camera used (further divided into post-processing steps) and the location of photography. To compare results, tests were conducted using different cameras and in different locations.

The GoPro Hero 2 was the first camera used, as it fits well within the current VR Doc workflow. The Canon 60D was added to the tests because the GoPro was producing poor results; a comparison would help direct efforts in determining sources of error. Initial testing was done in Room H135 to remain consistent with previous testing on change detection in panoramas. Two other locations were added to overcome the low texture and contrast in Room H135. With the original goal of evaluating the software in mind, the following discussion is organized by location, with the other dimensions discussed within each section.

5.2 Room H135

Low-texture walls combined with the repeating patterns of the desks and chairs presented significant challenges in this location. Seventy-nine images were taken with the GoPro camera in Room H135 and loaded into Photoscan. Using a medium alignment setting with pair selection disabled, 20 images were aligned and included in the point cloud. Photoscan was unable to proceed further due to the low number of key points; the point cloud did not resemble any specific section of the room. Using the Canon 60D, 99 photographs were taken and loaded into Photoscan. With a medium alignment setting, a model could be produced from 81 images, although significant distortions were present.

Fifty-seven of the 79 images from the GoPro camera could not be matched in 123D Catch. The images that were matched produced a distorted, hole-filled model, indicating that the matches were poor.
Slight improvements were noted using images from the 60D, although key point matching was still poor, with only 32 of the 102 images matched. 123D Catch shows great difficulty matching key points in interior photographs.

The point cloud generated by Visual SFM from the GoPro images was broken into three models. Without sufficient key points to match images, Visual SFM creates sub-models from the images it can match. Thirty-nine images were matched, with the sub-models containing 18, 12, and 9 images. All of the sub-models included components of the same objects and excluded the same portions of the room; the excluded portions contained little texture and were highly repetitive. Using the images from the 60D, Visual SFM produced four separate models containing components similar to those in the GoPro models. The geometry appears to be correct in both the GoPro and 60D models. The 60D model has notably more points than the GoPro model: 124,413 vs. 28,899 respectively. The different 3D models created for Room H135 are shown in Figure 2.

5.3 Room H229B

Room H229B (an office) is much smaller than Room H135 (a classroom), but contains more texture and randomness. The number of photos taken with the GoPro was increased to 102 in an attempt to capture occlusions generated by the furniture. The arrangement of the furniture and the size of the room made photography difficult, so only two walls were captured for reconstruction. Photoscan was expected to produce better results than in Room H135 due to the added texture. One hundred images were aligned in the point cloud, but Photoscan was unable to continue with the model due to a lack of key points. A significant improvement was noted with the 63 images taken using the Canon 60D: all 63 images aligned and produced a clean textured model with minimal holes or distortions.

123D Catch produced a slightly better model with the GoPro images from Room H229B than from Room H135.
While there were very few holes, the geometry was visibly incorrect, as noted by the angle between walls. The texture was also distorted, indicating poor key point matching. The improvement over Room H135 demonstrates the necessity of texture. The added texture also benefited the images taken with the Canon 60D, which produced a significantly better model than in Room H135. The textured model did contain holes, but the geometry was visibly correct.

Multiple models were again created with the GoPro images in Visual SFM, indicating a disconnect between key points. One main model contained the majority of the images, and its geometry was visually correct. A significant improvement was noted with the images from the 60D: Visual SFM created a single model with a dense point cloud of 365,970 points, compared to 24,976 points in the main GoPro model. Comparing the results between Room H229B and Room H135 across all software reveals

system could be expanded to capture spherical video instead of photographs, providing complete coverage of the site. These methods would require software development, as there are no off-the-shelf solutions.

7 CONCLUSION

Construction sites contain vast amounts of information. Photographs capture this in great detail while allowing the user to absorb as much information as they need. The photographic record provided by VR Doc panoramas captures great detail yet excludes portions of the site that may become of interest. The use of a 3D model to create a virtual walk-through enables a comprehensive record and delivers the information in an intuitive manner. With further refinements, the virtual walk-through could become a key tool for construction personnel. Future efforts would be well spent investigating photography workflows and automating post-processing. Improvements in these areas would reduce the workload involved and make integrating the technology into live construction sites simpler.
Methods of cleaning up the model are also an avenue worth exploring.

This paper investigated different image-based modeling methods to improve VR Doc applications. Pilot studies were conducted to test different hardware (GoPro Hero 2 and Canon 60D) and software (Agisoft PhotoScan, Photomodeler, Autodesk 123D Catch, Visual SFM, and Unity 3D) available for creating a 3D walk-through model. The results show that although the GoPro is a portable, light, and handy camera for construction sites, it produces significant distortions due to its wide-angle lens. Although corrections can be applied in Photoscan, better results were obtained when distortions were corrected in DxO before attempting to build the model. The built-in corrections were sufficient when using the Canon 60D with a Sigma 20 mm lens. Moreover, the results show that the Unity walk-through, with its web delivery option, fits well with the current delivery of VR Doc and could be easily integrated into the workflow. However, the current walk-through model contains distortions that would need to be resolved in future studies before implementation in the construction industry.

References

Agisoft Photoscan (2015), Agisoft LLC, http://agisoft.com, last visited 2015 Feb. 15.
Autodesk 123D Catch (2015), Autodesk Inc., https://123dapp.com/catch, last visited 2015 Feb. 15.
Bradley, D., Brunton, A., Fiala, M., Roth, G. (2005), Image-based Navigation in Real Environments Using Panoramas, HAVE-IEEE International Workshop on Haptic Audio Visual Environments and their Applications, Ottawa, Ontario, Canada.
Dang, T. K., Worring, M., Bui, T. D. (2011), A Semi-Interactive Panorama Based 3D Reconstruction Framework for Indoor Scenes, Journal of Computer Vision and Image Understanding, 115: 1516-1524.
Photomodeler (2015), Eos Systems Inc., https://photomodeler.com, last visited 2015 Feb. 15.
Rankohi, S., Waugh, L. M., Bradley, D. C. (2014), The Efficacy of Virtual Reality Technologies Relative to Traditional Methods of Assessing Project Status: An Experimental Study, 13th International Conference on Construction Applications of Virtual Reality, London, UK.
Rankohi, S., Waugh, L. M. (2013), Review and Analysis of Augmented Reality Literature for the Construction Industry, Visualization in Engineering, 1:9.
Roh, S., Aziz, Z., Peña-Mora, F. (2011), An Object-Based 3D Walk-Through Model for Interior Construction Progress Monitoring, Journal of Automation in Construction, Elsevier, 20: 66-75.
UNB CEM (2015), Virtual Reality Documentation, University of New Brunswick Construction Engineering and Management Program, http://vrdoc.ca, last visited 2015 Feb. 15.
Unity 3D (2015), Unity Inc., https://unity3d.com, last visited 2015 Feb. 15.
Visual SFM (2015), https://ccwu.me/vsfm, last visited 2015 Feb. 15.
