5th International/11th Construction Specialty Conference
5e International/11e Conférence spécialisée sur la construction
Vancouver, British Columbia
June 8 to June 10, 2015 / 8 juin au 10 juin 2015

HYBRID OBJECT DETECTION AND MARKER RECOGNITION SYSTEM TO MONITOR PERFORMANCE OF THE HAULING DUMP TRUCKS

Ehsan Rezazadeh Azar (1,2)
(1) Lakehead University, Canada
(2) eazar@lakeheadu.ca

Abstract: Various sensing technologies have been developed for real-time monitoring of the earthmoving fleet on construction and surface mining jobsites. Computer vision-based methods are among the most recent techniques employed to track earthmoving machines in a construction field. All the research efforts in this area investigated computer vision algorithms to detect and track different types of equipment, but they were unable to identify individual machines within the fleet. This paper introduces a hybrid system which uses a combination of an object detection method and a marker recognition algorithm to identify individual dump trucks using specific markers attached to them. Background subtraction and the Histogram of Oriented Gradients (HOG) algorithm were used to detect candidates in the video frames, and the system then zooms on the detected bounding boxes to obtain a better resolution for marker detection. Next, the marker recognition module searches the zoomed frame for a marker and, in case of successful identification, it verifies the detection and records a trip for that individual truck. The results showed promising performance, in which the system identified 83% of the hauling trips made by the marked machines without producing any false positives.

1 INTRODUCTION

Real-time monitoring of construction equipment is beneficial for proactive equipment management and productivity measurement and, in addition, the collected productivity data can be used for simulation, planning, and cost estimation of future projects. Manual monitoring is costly, labour-intensive, and non-real-time; therefore, different automated data collection systems have been developed to monitor earthmoving machines, notably GPS, which has been the main tool for locating the equipment fleet in construction and open-pit mining fields. GPS-based systems can only provide spatiotemporal data of a machine at predetermined time intervals as a beacon on the map of the jobsite, so methods have been developed to analyze these raw data and provide productivity information such as cycle times (Montaser and Moselhi 2014; Hildreth et al. 2005; Navon et al. 2004). In addition, records of the weight sensors installed on earthmoving equipment, such as dump trucks, loaders, and excavators, were considered in the analysis to enrich the collected productivity data (Ibrahim and Moselhi 2014; Akhavian and Behzadan 2013). Other radio-based sensing devices, including ultra-wideband (UWB) (Cheng et al. 2011) and radio-frequency identification (RFID) (Montaser and Moselhi 2012), were also tested for tracking construction equipment. The radio-based systems can only provide spatiotemporal data, i.e. time and location, which cannot precisely represent machine actions.

Computer vision-based monitoring systems are among the most recent techniques tested for monitoring construction equipment performance. It has been argued that vision-based techniques are appropriate options as they are non-intrusive, cost-effective, and can successfully operate in open-field jobsites where clear sightlines can be selected.
A number of research projects studied the feasibility of using cutting-edge image and video processing algorithms for recognition (Memarzadeh et al. 2013; Rezazadeh Azar and McCabe 2012a; Rezazadeh Azar and McCabe 2012b; Chi and Caldas 2011), tracking (Rezazadeh Azar et al. 2013; Park et al. 2012; Brilakis et al. 2011; Gong and Caldas 2011), and activity identification purposes (Golparvar-Fard et al. 2013; Rezazadeh Azar et al. 2013; Gong and Caldas 2011). All the mentioned research efforts, however, have a major shortcoming in that they cannot identify a certain machine among the in-class equipment. The developed recognition methods are only able to distinguish machine types, e.g. backhoe and dump truck, and fail to identify individual machines, e.g. dump truck #8. Therefore, these vision-based monitoring systems are unable to track the productivity of individual machines, which is a major disadvantage compared to radio-based sensing technologies such as GPS and UWB. As a result, a main research gap is to investigate vision-based identification methods to address this problem.

Vision-based identification is an evolving field of computer vision and can be grouped into model- and marker-based identification. Model-based identification algorithms try to recognize a certain target based on the unique visual features of that individual. Face identification is the most active topic in this area and is heavily used for security purposes; related algorithms use certain features of the existing model(s) of the target to identify it in surveillance videos and images. The model-based methods are only useful for identification of targets that have unique visual features within their class, e.g. the human face. This approach, however, is impractical for identifying individual machines in a mining or construction jobsite, because most in-class machines are virtually identical in shape, unless they are painted in different colors, which is uncommon in most large heavy construction and mining jobsites.

Therefore, the second identification approach, the marker-based method, is the practical choice to identify individual machines. There are several robust marker recognition techniques that are able to identify tags under challenging visual conditions. These algorithms use a specific type of marker, called a fiducial marker, which is made of a set of easy-to-detect features for robust recognition performance. For example, ARToolkit (Kato and Billinghurst 1999), ARTag (Fiala 2005), and AprilTag (Olson 2011) are popular marker recognition algorithms, which were mainly developed to facilitate augmented reality generation processes. This research project aims to utilize a fiducial marker-based approach to identify labeled dump trucks in real-time construction videos and estimate their hauling trips. This paper introduces a pipeline framework to monitor construction equipment using attached markers. First, it describes the architecture of the system, and then it explains the modules of the framework. Next, the performance of the system is evaluated using a number of test videos and finally, the potential applications and shortcomings of this hybrid system are discussed.

2 SYSTEM ARCHITECTURE

A straightforward approach is to simply search for the attached fiducial markers using a marker recognition algorithm in real-time frames. This brute-force approach, however, has several shortcomings: 1) markers should have a certain resolution (in pixels) to be robustly detectable in frames. This requires optical zooming on the work zone, which limits the field of view and eventually results in missing the big picture of the operations; in addition, the work zones often change and someone would have to adjust the viewfinder accordingly. 2) This brute-force approach is computationally intensive. Therefore, a pipeline framework is developed to detect the most probable candidates, and then zoom on these potential targets to get a high-resolution view for the marker recognition process. Next, it passes detected machines to a tracking module which monitors the truck until it exits the view. This way, the system is able to count hauling trips of individual dump trucks. Figure 1 shows the entire process of this system, in which the different modules of the framework are highlighted by color.

Figure 1: Flowchart of the system

First, the system grabs 1280x960 (pixels) frames and creates a 640x480 (pixels) resized copy of each frame. The higher resolution frame is kept and only used for marker recognition, and the lower resolution 640x480 frame is used for the other processes, including background subtraction and object recognition. The background subtraction module isolates moving particles (see Figure 2.a) and checks whether any of the moving particles are larger than a certain size, which triggers the object recognition module. The object detection module searches the frame for dump trucks (Figure 2.b), and detection of a dump truck triggers the zoom control element, which changes the focal length of the lens to optically zoom on the target (Figure 2.c). Afterwards, the system applies an autofocus function to obtain a high contrast view (Figure 2.d), and then employs the marker recognition module to search the high resolution 1280x960 frame for a marker (Figure 2.e). Successfully identified dump trucks are passed to the tracking module and are tracked until they exit the scene, which confirms the trip of that individual dump truck. Details of each module are described in the next section.

Figure 2: a) Background subtraction; b) object detection; c) zooming on target; d) autofocus; e) marker recognition
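To make the control flow concrete, the following is a minimal C++ sketch of the dual-resolution pipeline rather than the actual implementation: an OpenCV cv::VideoCapture source stands in for the IC Imaging Control interface used in the real system, and largeMovingBlob, detectTruck, zoomAndFocusOn, and recognizeMarker are hypothetical stubs for the modules described in Section 3.

```cpp
#include <opencv2/opencv.hpp>

// Hypothetical placeholders for the pipeline elements described in Section 3.
static bool largeMovingBlob(const cv::Mat&)        { return false; } // background subtraction + blob size gate
static bool detectTruck(const cv::Mat&, cv::Rect&) { return false; } // HOG side-view dump truck detector
static void zoomAndFocusOn(const cv::Rect&)        {}                // motorized lens zoom + autofocus
static int  recognizeMarker(const cv::Mat&)        { return -1; }    // AprilTag search, returns tag ID or -1

int main() {
    cv::VideoCapture cap(0);                          // camera source (stand-in for IC Imaging Control)
    cap.set(CV_CAP_PROP_FRAME_WIDTH, 1280);
    cap.set(CV_CAP_PROP_FRAME_HEIGHT, 960);

    cv::Mat full, small;
    while (cap.read(full)) {                          // 1280x960 frame, kept for marker recognition
        cv::resize(full, small, cv::Size(640, 480));  // low-resolution copy for the cheaper stages

        cv::Rect truckBox;
        if (largeMovingBlob(small) && detectTruck(small, truckBox)) {
            zoomAndFocusOn(truckBox);                 // optical zoom + autofocus on the candidate
            if (cap.read(full)) {
                int tagId = recognizeMarker(full);    // search the zoomed high-resolution frame
                if (tagId >= 0) {
                    // identified: hand over to the tracking module; a hauling trip is recorded
                    // when the tracked truck exits the scene
                }
            }
        }
    }
    return 0;
}
```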
3 MODULES OF THE SYSTEM

As described before, this system is made of several components, which are described in detail in the following subsections.

3.1 Background Subtraction

Background subtraction methods are used to isolate moving particles from a static background in videos captured by a stationary camera. A Mixture of Gaussians method (Stauffer and Grimson 1999), a Codebook-based method (Kim et al. 2005), and a Bayesian model-based method (Li et al. 2003) were tested on construction videos, in which the Codebook-based and Bayesian-based methods showed promising performance in detecting moving objects (Gong and Caldas 2011). Therefore, the Bayesian-based background subtraction method was selected as the first filter of this pipeline framework. In addition, a process called connected-components analysis (Bradski and Kaehler 2008) was used to remove random noise from the filtered frame and combine the correlated particles, designated "blobs". The presence of a blob indicates the appearance of a moving object in the video, which might be one of the targets. The system, however, only selects moving objects for the next step, object recognition, which are larger than a pre-defined threshold. The threshold was defined given the minimum size (in terms of pixels) of a truck that could be detected by the object recognition algorithm. The object detection algorithm uses a sliding window approach with a minimum size of 128x80 pixels to detect trucks; therefore, it would be useless to accept a moving particle smaller than 128x80 pixels, as it simply cannot be picked up by the object recognition. As a result, the width and height of the bounding box of a blob to be qualified for the next step should be larger than 128 and 80 pixels, respectively.
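A minimal sketch of this first filter is shown below, written against the OpenCV 2.4 C++ API; OpenCV's built-in MOG2 subtractor is used here as a stand-in for the Bayesian model (Li et al. 2003) adopted in the paper, while the contour-based blob extraction and the 128x80 size gate follow the description above.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// First pipeline filter: foreground segmentation followed by a blob-size gate.
class MovingBlobGate {
public:
    // Returns blobs whose bounding boxes are at least as large as the smallest
    // window the HOG truck detector can handle (128x80 pixels).
    std::vector<cv::Rect> largeBlobs(const cv::Mat& frame640x480) {
        cv::Mat fgMask;
        subtractor_(frame640x480, fgMask);                        // foreground mask (MOG2 stand-in)
        cv::threshold(fgMask, fgMask, 200, 255, cv::THRESH_BINARY); // drop shadow pixels (marked 127)
        cv::medianBlur(fgMask, fgMask, 5);                        // suppress salt-and-pepper noise

        std::vector<std::vector<cv::Point> > contours;            // connected components as contours
        cv::findContours(fgMask.clone(), contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

        std::vector<cv::Rect> blobs;
        for (size_t i = 0; i < contours.size(); ++i) {
            cv::Rect box = cv::boundingRect(contours[i]);
            if (box.width >= 128 && box.height >= 80)             // size gate before object recognition
                blobs.push_back(box);
        }
        return blobs;
    }
private:
    cv::BackgroundSubtractorMOG2 subtractor_;
};
```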
3.2 Object Detection

The Histogram of Oriented Gradients (HOG) (Dalal and Triggs 2005) algorithm was used in this research, because this popular object recognition method has provided promising results in the detection of construction equipment (Memarzadeh et al. 2013; Rezazadeh Azar and McCabe 2012a; Rezazadeh Azar and McCabe 2012b). Details of the background effort for the object detection module of this framework to recognize dump trucks are described in Rezazadeh Azar and McCabe (2012a). Implementation of this algorithm using parallel computing on a Graphics Processing Unit (GPU) can dramatically improve the computation runtime (Rezazadeh Azar et al. 2013). The next step in identifying individual trucks is to recognize the attached markers; therefore, object recognition searches only for the views in which the marker could possibly be visible. The markers were attached to the sides of the dump trucks in this research, thus the object recognition engine searches for side viewpoints.

The HOG algorithm generally employs a binary classifier, SVMlight (Joachims 1999), in which the classifier uses a threshold to classify candidates. Therefore, the detection threshold of the classifier was lowered in this system to reduce the risk of false negatives (i.e., failing to select views in which a dump truck was visible), although it could also result in more false positives (i.e., selecting windows in which the machine does not exist). This is not a major concern, as the next module of this cascade system, marker recognition, is designed to reject the possible false positives.
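The sketch below illustrates such a lowered-threshold HOG stage with the OpenCV 2.4 C++ API; the window geometry and the negative hitThreshold are illustrative assumptions, and the svmWeights argument stands for the side-view dump truck model trained with SVMlight in the cited work. OpenCV's cv::gpu::HOGDescriptor provides the GPU-accelerated variant mentioned above.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Sketch of the lowered-threshold HOG stage. svmWeights is the linear SVM weight vector
// trained offline on side views of dump trucks; the geometry and threshold are assumptions.
std::vector<cv::Rect> detectTrucks(const cv::Mat& frame640x480,
                                   const std::vector<float>& svmWeights) {
    cv::HOGDescriptor hog(cv::Size(128, 80),   // detection window matching the 128x80 blob gate
                          cv::Size(16, 16),    // block size
                          cv::Size(8, 8),      // block stride
                          cv::Size(8, 8),      // cell size
                          9);                  // orientation bins
    hog.setSVMDetector(svmWeights);

    std::vector<cv::Rect> trucks;
    // hitThreshold is deliberately lowered (negative here) so that fewer true side views are
    // missed; the marker recognition stage later rejects the extra false positives.
    hog.detectMultiScale(frame640x480, trucks, /*hitThreshold=*/-0.3,
                         cv::Size(8, 8), cv::Size(0, 0), /*scale=*/1.05);
    return trucks;
}
```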
3.3 Fiducial Marker Recognition

Fiducial marker identification is a well-investigated topic in the field of computer science, and several methods have been developed to address this issue. Marker recognition algorithms were initially developed for superimposing virtual objects in augmented reality, but they have gradually been adopted in other fields, notably pose estimation in robotics. Fiducial markers are artificial tags designed with a set of black and white features that form simple geometric shapes, such as straight lines and sharp angles and edges, so that they can be robustly detected in low-resolution, rotated, or unevenly lit conditions, or in the corner of a partially occluded image. Fiducial marker recognition methods essentially differ from general 2D barcode detection techniques, such as QR codes (Figure 3.a), because they also have to provide the camera position and orientation relative to the detected marker. ARToolkit (Kato and Billinghurst 1999), ARTag (Fiala 2005), and AprilTag (Olson 2011) are among the most popular marker recognition methods. Figure 3 illustrates instances of ARToolkit (Figure 3.b), ARTag (Figure 3.c), and AprilTag (Figure 3.d) markers. The AprilTag algorithm outperformed the other methods (including ARToolkit and ARTag) in detection rate and localization accuracy under various visual conditions, including distance and the angle between the camera's optical axis and the tag's normal (Olson 2011). Therefore, this algorithm was selected as the last step of the identification process.

AprilTag is a sequential algorithm which first computes the gradient at every pixel, including magnitude and direction. Then, pixels with similar gradient magnitude and direction are clustered using a graph-based algorithm. Afterwards, it fits line segments to the boundary of each cluster and finally, the resulting quadrilaterals are detected and decoded to identify a candidate tag ID.

Figure 3: Samples of a) QR code, b) ARToolkit, c) ARTag, d) AprilTag
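For illustration, the sketch below reads a tag ID from the zoomed 1280x960 frame using the AprilTag reference C library; this is a stand-in for the cv2cg implementation actually used in this framework, and the 36h11 tag family is an assumption.

```cpp
#include <opencv2/opencv.hpp>
#include <cstring>
extern "C" {
#include "apriltag.h"   // AprilTag reference C library (used here in place of cv2cg)
#include "tag36h11.h"
}

// Searches the full-resolution 1280x960 frame for an AprilTag and returns its ID, or -1.
int recognizeMarker(const cv::Mat& frame1280x960) {
    cv::Mat gray;
    cv::cvtColor(frame1280x960, gray, CV_BGR2GRAY);

    apriltag_detector_t* td = apriltag_detector_create();
    apriltag_family_t* tf = tag36h11_create();             // assumed tag family
    apriltag_detector_add_family(td, tf);

    // Wrap the grayscale pixels in the library's image type (row-by-row copy handles stride).
    image_u8_t* im = image_u8_create(gray.cols, gray.rows);
    for (int r = 0; r < gray.rows; ++r)
        std::memcpy(&im->buf[r * im->stride], gray.ptr(r), gray.cols);

    int tagId = -1;
    zarray_t* detections = apriltag_detector_detect(td, im);
    if (zarray_size(detections) > 0) {
        apriltag_detection_t* det;
        zarray_get(detections, 0, &det);                    // first (normally only) marker in view
        tagId = det->id;                                    // maps to an individual dump truck
    }

    apriltag_detections_destroy(detections);
    image_u8_destroy(im);
    tag36h11_destroy(tf);
    apriltag_detector_destroy(td);
    return tagId;
}
```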
3.4 Zoom Control

As mentioned above, detected trucks have low resolutions in the wide-angle frames, which are not sufficient for marker recognition; therefore, the system needs to zoom on the potential candidates. The size of the marker (in pixels) is the key factor in robustly detecting it, and it depends on both the distance and the orientation of the marker (Olson 2011). Application of optical zooming can overcome these limitations. A set of experiments was carried out to determine the size (in pixels) at which an AprilTag marker can be reliably recognized (>98%). The results are presented in Figure 4, which shows that an AprilTag marker should be at least 18x18 pixels to achieve more than a 98% detection rate. It should be mentioned that no false positives were observed in these tests. Based on this result, an automated zoom control module is necessary to zoom on the targets to achieve a reliable marker resolution. This module changes the focal length of a motorized lens to provide a resolution sufficient for marker identification.

Figure 4: Detection rate of various resolutions of AprilTag markers

First, a set of tests was done to determine a linear relationship between the focal length and the magnification factor of the camera lens. For this purpose, the length of a specific target (in pixels) was measured in frames captured from the same viewpoint but with altered focal lengths. Second, the module calculates the magnification factor required to achieve sufficient resolution based on the size of the detected dump truck (in pixels). Then, it finds the corresponding focal length and sets the focal length of the lens accordingly. The magnification factor is determined using the ratio of the actual length of the marker to the actual length of the machine (e.g. 0.6 m : 8 m) and the minimum detectable marker size (18x18 pixels). For example, if the size of a detected dump truck is 200x125 pixels, the expected size of the marker would be 15x15 pixels (200x(0.6/8) = 15), which is not sufficient for robust marker recognition. Therefore, the magnification factor should be 18/15 = 1.2. Moreover, the calculated magnification factor is increased by 20% to compensate for variations in the size of the detected bounding box and variations in the size of different models of dump trucks. Thus, the scale factor of the example would be adjusted to 1.2x1.2 = 1.44.

This module should also examine whether the detected target will remain in the frame after magnification. For example, an object might be detected in the corner of the frame, but it might not remain in view after zooming, which makes the zoom useless. A scale affine transformation is used to calculate the coordinates of the object in the zoomed frame and analyze whether it will appear within the zoomed view. This process is illustrated in Figure 5. First, the origin of the coordinate system is moved to the center of the frame, as most computer vision applications use the frame's top-left corner as the origin (see Figure 5.b). This change is made because the lens zooms toward the center of the view. The coordinates of the detected dump truck (see Figure 5.c) are then multiplied by the calculated magnification factor (see Figure 5.d) to examine whether the target will appear in the view after zooming. The magnified frames are usually blurred by the change of focal length, thus an autofocus process (a function provided by the camera manufacturer) is employed to enhance the contrast of the view. Then, the AprilTag algorithm searches the frame for a marker.

Figure 5: a) Origin of the coordinates in the top-left corner, b) origin moved to the center of the frame, c) coordinates of the detected candidate, d) expected coordinates of the target after zooming
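The zoom-control arithmetic can be condensed into a short sketch; planZoom below is a hypothetical helper that assumes the values given above (0.6 m markers, roughly 8 m machine length, 18-pixel minimum marker size, 20% safety margin, and the 640x480 detection frame).

```cpp
#include <opencv2/opencv.hpp>

// Hypothetical zoom planner following the arithmetic described above.
struct ZoomDecision {
    bool   targetStaysInFrame;   // will the truck remain visible after zooming?
    double magnification;        // factor translated into a focal length via the lens calibration
};

ZoomDecision planZoom(const cv::Rect& truck, cv::Size frame = cv::Size(640, 480)) {
    const double markerToTruck = 0.6 / 8.0;                 // marker length : machine length
    const double minMarkerPx   = 18.0;                      // minimum reliably detectable marker size

    double expectedMarkerPx = truck.width * markerToTruck;  // e.g. 200 px truck -> 15 px marker
    double mag = (minMarkerPx / expectedMarkerPx) * 1.2;    // reach 18 px, plus a 20% safety margin

    // Scale affine transform about the frame centre, since the lens zooms toward the centre.
    double cx = frame.width / 2.0, cy = frame.height / 2.0;
    double tlx = cx + mag * (truck.x - cx);
    double tly = cy + mag * (truck.y - cy);
    double brx = cx + mag * (truck.x + truck.width  - cx);
    double bry = cy + mag * (truck.y + truck.height - cy);

    ZoomDecision d;
    d.targetStaysInFrame = tlx >= 0 && tly >= 0 && brx <= frame.width && bry <= frame.height;
    d.magnification = mag;
    return d;
}
```

For the 200x125-pixel truck of the example above, this sketch returns a magnification of 1.44, matching the manual calculation.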
3.5 Tracking

The system passes identified dump trucks to the tracking module for monitoring purposes. A hybrid tracking algorithm was previously developed that provided promising performance in tracking slow-moving objects, such as dump trucks, in construction videos (Rezazadeh Azar et al. 2013). This hybrid tracking method uses a combination of HOG object recognition and KLT feature tracking (Tomasi and Kanade 1991), and its hybrid structure allows it to handle orientation and scale changes of a slow-moving target. The tracking module has two main applications in this system: first, it continually locates the dump truck after identification until it exits the frame, thereby preventing double identification of a dump truck. The system compares the location of new candidates with the locations of tracked machines and ignores candidates which have more than 50% overlapping area with a tracked machine. Second, it confirms a hauling trip for an individual dump truck when it exits the scene.
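As a small illustration of the double-identification guard, the following sketch checks a new candidate against the bounding boxes of the machines already being tracked, using the 50% overlap rule stated above; cv::Rect's intersection operator does the area computation.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Returns true when a new candidate should be ignored because more than half of its
// area overlaps a machine that is already being tracked.
bool alreadyTracked(const cv::Rect& candidate, const std::vector<cv::Rect>& tracked) {
    for (size_t i = 0; i < tracked.size(); ++i) {
        double overlap = (candidate & tracked[i]).area();   // intersection rectangle area
        if (overlap > 0.5 * candidate.area())
            return true;                                     // skip: >50% covered by a tracked truck
    }
    return false;
}
```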
4 EXPERIMENTAL RESULTS

This framework was developed in Visual Studio Express 2010 using three open source libraries: 1) the IC Imaging Control C++ Class Library for frame acquisition and camera control (The Imaging Source 2014), 2) the OpenCV 2.4.4 library for background subtraction and HOG object recognition (OpenCV 2013), and 3) the cv2cg library for AprilTag marker recognition (Feng 2013).

This system can only process real-time videos, as it must change the focal length of the camera based on the real-time information provided by the other modules. Therefore, all the experiments had to be carried out on-site, which limited the testing opportunities. The experiments were done in two stages: first the performance of the system was evaluated using videos of scale model equipment to validate the concept, and then it was tested using real-world cases. Altogether, eight videos with a total length of 25 minutes and seven videos with a total length of 43 minutes were processed for scale model and actual dump trucks, respectively. Three unique AprilTag markers were attached to three dump trucks in the actual test cases. The camera was positioned in overlooking locations to obtain a clear sightline of the hauling operations. Four criteria, namely runtime efficiency, precision, recall, and identified trips rate, were used to evaluate the performance of the system.

4.1 Runtime Efficiency

This system should be able to operate on a regular computer in real time. Therefore, all the experiments were carried out using an ordinary laptop with a 2.2 GHz Intel Core i7 CPU, 8 GB RAM, and an NVIDIA GeForce GT 525M GPU. This system has a pipeline structure, which consists of a chain of processing elements in which the output of each element is the input of the next. Table 1 presents the process times of the compute-intensive elements of this chain. The rest of the elements, such as frame grabbing and zoom control, are simple and near-instantaneous processes. The aim was to achieve a runtime efficiency of five frames per second (5 Hz), which is sufficient for real-time monitoring of hauling operations.

Table 1: Runtimes of the processes

Operation                                                Frame size (pixels)   Average process time (ms)   Average frames per second (Hz)
Background subtraction + connected-components analysis   640 x 480             45                          22
HOG object recognition using GPU                         640 x 480             300                         3.3
AprilTag identification                                  1280 x 960            400                         2.5

The background subtraction and component segmentation are quick processes and meet the requirement, but the HOG object detection and AprilTag identification processes are quite slow. These runtimes, however, did not delay the average five frames per second performance of the entire system, as they are only occasionally used in this pipeline. For instance, object recognition is only triggered when a large moving object is detected, and AprilTag is subsequently used only when the HOG object detector locates a candidate in a possible zooming region. Moreover, an interval constraint (four seconds) was set to avoid pointless searches. For example, if the background subtraction detects a moving entity but the HOG algorithm cannot identify a dump truck (e.g. the moving object was a grader), the HOG will not search the subsequent frame; rather, it waits for the interval constraint and then searches for candidates, if there are any. This way, the system maintained an average of five frames per second throughout all the real-time experiments of this research project.

4.2 Identification Performance

The experiments were carried out in two stages: scale model machines and actual equipment. The size of the AprilTag labels installed on the real machines was 60 cm x 60 cm, and the same ratio was used for the 1:32 scale models, which resulted in 1.9 cm x 1.9 cm markers. Table 2 provides the overall performance of the system in terms of three evaluation parameters:

• Recall: True positives / (True positives + False negatives)
• Precision: True positives / (True positives + False positives)
• Identified trips: Number of identified hauling trips made by individual machines / total number of hauling trips

Recall and identified trips are two different evaluation criteria, as identified trips only considers detection of dump trucks' trips whereas recall also considers unsuccessful identification attempts (i.e., false negatives).
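To make these measures concrete, here is a tiny worked sketch using the counts reported for the actual dump trucks in Table 2 below (20 identified machines out of 24 that appeared and 28 recognition attempts, read here as true positives plus false negatives, with no false positives); the numbers come from the table, while the program itself is only illustrative.

```cpp
#include <cstdio>

int main() {
    // Counts for the actual-equipment tests (Table 2); the 28 attempts are read as TP + FN.
    int truePositives  = 20;
    int falsePositives = 0;
    int falseNegatives = 28 - 20;
    int tripsMade = 24, tripsIdentified = 20;

    double recall    = 100.0 * truePositives / (truePositives + falseNegatives);   // 20/28 -> 71.4%
    double precision = 100.0 * truePositives / (truePositives + falsePositives);   // 20/20 -> 100%
    double trips     = 100.0 * tripsIdentified / tripsMade;                        // 20/24 -> 83.3%

    std::printf("recall %.1f%%  precision %.1f%%  identified trips %.1f%%\n",
                recall, precision, trips);
    return 0;
}
```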
Table 2: Performance of the system in test videos

Experiment             No. appeared machines   No. identified machines   No. attempts   Precision   Recall   Identified trips
RC model dump truck    28                      25                        32             100%        78.1%    89.3%
Actual dump truck      24                      20                        28             100%        71.4%    83.3%

A noticeable outcome of these experiments is the precision rate, which was 100% in all test cases. This indicates that the results did not include any false positives, which was mainly due to the robust performance of the marker recognition module in rejecting all false positives. The rates of identified trips were higher than the recall rates in both cases, because the system had several failed marker recognition attempts, in which the marker recognition could not detect the marker in the zoomed frames. In particular, the system was unable to identify a few fast-moving trucks, because it could not zoom and focus on the targets in time. The system requires about 1.6 seconds (0.045 s for background subtraction + 0.3 s for HOG recognition + 0.25 s for changing the focal length + 1 s for autofocus) to grab a zoomed frame for marker recognition, which proved to be too long to identify a marker attached to a fast-moving dump truck. In addition, the system was not able to identify dump trucks with obscured markers, e.g. markers occluded by other equipment. This is an inherent disadvantage of any vision-based method compared to radio-based devices, which do not require a clear sightline.

5 CONCLUSION

Vision-based systems have shown promising performance in the detection and tracking of key resources, such as heavy equipment, on construction jobsites. These systems, however, have not been able to identify individual machines, which prevents them from providing detailed performance data for each machine. A hybrid vision-based system has been developed which uses a pipeline combination of background subtraction, an object detection method, and a marker recognition algorithm to identify tagged dump trucks, and then passes them to a tracking module. This system includes a smart zooming feature to zoom on the potential targets for marker recognition. The framework identified 83.3% of the hauling trips made by the marked dump trucks in a construction jobsite. In addition, application of a robust marker recognition algorithm, AprilTag, proved to be effective in rejecting all false positives. Despite the promising performance, some shortcomings, such as the limited coverage area and the inability to detect fast-moving targets, remain with this system. Thus, future research aims at addressing these issues by adding smart panning and tilting features to the system, and tracking of other equipment-intensive operations will also be investigated.

Acknowledgment

This research was supported by the SRC Research Development Fund from the Office of Research Services at Lakehead University.

References

Akhavian, R. and Behzadan, A. 2013. Knowledge-Based Simulation Modeling of Construction Fleet Operations Using Multimodal-Process Data Mining. Journal of Construction Engineering and Management, 139(11), 04013021.
Bradski, G. and Kaehler, A. 2008. Learning OpenCV. O'Reilly, Sebastopol, CA, 287-294.
Brilakis, I., Park, M.W. and Jog, G. 2011. Automated Vision Tracking of Project Related Entities. Advanced Engineering Informatics, 25(4), 713-724.
Cheng, T., Venugopal, M., Teizer, J. and Vela, P.A. 2011. Performance evaluation of ultra wideband technology for construction resource location tracking in harsh environments. Automation in Construction, 20(8), 1173-1184.
Chi, S. and Caldas, C.H. 2011. Automated Object Identification Using Optical Video Cameras on Construction Sites. Journal of Computer-Aided Civil and Infrastructure Engineering, 26(5), 368-380.
Dalal, N. and Triggs, B. 2005. Histograms of Oriented Gradients for Human Detection. Conference on Computer Vision and Pattern Recognition, IEEE, San Diego, CA, USA, 2, 886-893.
Feng, C. 2013. Computer Vision and Computer Graphics Interactions. <https://code.google.com/p/cv2cg/> (Sep. 25, 2013).
Fiala, M. 2005. ARTag, a fiducial marker system using digital techniques. 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), San Diego, CA, 2, 590-596.
Golparvar-Fard, M., Heydarian, A. and Niebles, J.C. 2013. Vision-based action recognition of earthmoving equipment using spatio-temporal features and support vector machine classifiers. Advanced Engineering Informatics, 27(4), 652-663.
Gong, J. and Caldas, C.H. 2011. An object recognition, tracking, and contextual reasoning-based video interpretation method for rapid productivity analysis of construction operations. Automation in Construction, 20(8), 1211-1226.
Hildreth, J., Vorster, M. and Martinez, J. 2005. Reduction of Short-Interval GPS Data for Construction Operations Analysis. Journal of Construction Engineering and Management, ASCE, 131(8), 920-927.
Ibrahim, M. and Moselhi, O. 2014. Automated productivity assessment of earthmoving operations. ITcon, 19, 169-184, http://www.itcon.org/2014/9
Joachims, T. 1999. Making large-scale SVM learning practical. Advances in Kernel Methods: Support Vector Learning, B. Schölkopf, C. Burges, and A. Smola (eds.), The MIT Press, Cambridge, MA, USA.
Kato, H. and Billinghurst, M. 1999. Marker tracking and HMD calibration for a video-based augmented reality conferencing system. 2nd IEEE and ACM International Workshop on Augmented Reality, 85-94.
Kim, K., Chalidabhongse, T.H., Harwood, D. and Davis, L. 2005. Real-time foreground-background segmentation using codebook model. Real-Time Imaging, 11(3), 172-185.
Li, L., Huang, W., Gu, I.Y.H. and Tian, Q. 2003. Foreground Object Detection from Videos Containing Complex Background. MULTIMEDIA '03: Proceedings of the Eleventh ACM International Conference on Multimedia, Berkeley, CA, USA, 2-10.
Memarzadeh, M., Golparvar-Fard, M. and Niebles, J.C. 2013. Automated 2D detection of construction equipment and workers from site video streams using histograms of oriented gradients and colors. Automation in Construction, 32, 24-37.
Montaser, A. and Moselhi, O. 2012. RFID+ for Tracking Earthmoving Operations. Construction Research Congress, West Lafayette, IN, USA, 1011-1020.
Montaser, A. and Moselhi, O. 2014. Truck+ for earthmoving operations. ITcon, 19, 412-433, http://www.itcon.org/2014/25
Navon, R., Goldschmidt, E. and Shpatnisky, Y. 2004. A concept proving prototype of automated earthmoving control. Automation in Construction, 13(2), 225-239.
Olson, E. 2011. AprilTag: a robust and flexible visual fiducial system. IEEE International Conference on Robotics and Automation (ICRA'11), Shanghai, China, 3400-3407.
OpenCV. 2013. Open Source Computer Vision. <http://opencv.org/> (May 18, 2013).
Park, M., Koch, C. and Brilakis, I. 2012. Three-Dimensional Tracking of Construction Resources Using an On-Site Camera System. Journal of Computing in Civil Engineering, ASCE, 26(4), 541-549.
Rezazadeh Azar, E. and McCabe, B. 2012a. Automated visual recognition of dump trucks in construction videos. Journal of Computing in Civil Engineering, ASCE, 26(6), 769-781.
Rezazadeh Azar, E. and McCabe, B. 2012b. Part based model and spatial-temporal reasoning to recognize hydraulic excavators in construction images and videos. Automation in Construction, 24, 194-202.
Rezazadeh Azar, E., Dickinson, S. and McCabe, B. 2013. Server-Customer Interaction Tracker: Computer Vision-Based System to Estimate Dirt-Loading Cycles. Journal of Construction Engineering and Management, ASCE, 139(7), 785-794.
Stauffer, C. and Grimson, W.E.L. 1999. Adaptive background mixture models for real-time tracking. Computer Vision and Pattern Recognition, Fort Collins, CO.
The Imaging Source. 2014. Software Development Kits (SDKs) for Microsoft Windows. <http://www.theimagingsource.com/en_US/support/downloads/> (July 17, 2014).
Tomasi, C. and Kanade, T. 1991. Detection and tracking of point features. Technical Report CMU-CS-91-132, Carnegie Mellon University.
