UBC Theses and Dissertations


Computer image based scaling of logs Finnighan, Grant Adam 1987

COMPUTER IMAGE BASED SCALING OF LOGS

by

GRANT ADAM FINNIGHAN

B.A.Sc. (EE), The University of Toronto, 1984

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE STUDIES, ELECTRICAL ENGINEERING

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
October, 1987
© Grant Adam Finnighan 1987

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Electrical Engineering
The University of British Columbia
1956 Main Mall
Vancouver, Canada
V6T 1Y3

Abstract

Individual log scaling for the forest industry is a time consuming operation. Presented here are the design and prototype test results of an automated technique that will improve on the current speed of this operation, while still achieving the required accuracy. It is based on a technique that uses a television camera and graphics monitor to enable the operator to spot logs in images, which an attached processor can automatically scale. The system must first be calibrated, however. Additional to the time savings are the advantages that the accuracy will be maintained, if not improved, and the operation may now be performed from a sheltered location.
Contents

Abstract
List of Tables
List of Figures
Acknowledgements
1 Introduction
2 Overview of the Problem and Related Work
  2.1 Log Scaling Background
  2.2 Problem Formulation
  2.3 Practical Considerations
  2.4 Related Works
    2.4.1 Scope of the Survey
    2.4.2 Modelling
    2.4.3 Information Acquisition Techniques
3 Development of the Scaling Method
  3.1 Camera Modelling
  3.2 Camera Calibration
  3.3 Log Recognition
  3.4 Log Scaling Calculations
  3.5 Sources of Error
  3.6 Simulation
  3.7 Simplifying Assumptions
  3.8 Advantages and Disadvantages of the System
4 Experiments
  4.1 Equipment
  4.2 Calibration
    4.2.1 Calibration Procedure
    4.2.2 Calibration Observations
    4.2.3 Calibration Results
  4.3 Scaling
    4.3.1 Procedure
    4.3.2 Scaling Results
    4.3.3 Scaling Conclusions
5 Conclusions
6 Recommendations
References
A Experimental Results - Simulation
B Experimental Results - Calibration
C Experimental Results - Scaling

List of Tables

4.1 Summary of a Sample of the Calibration Test Results
4.2 Summary of the Scaling Tests from the Calibration Models - Radius
4.3 Summary of the Scaling Tests from the Calibration Models - Length

List of Figures

3.1 Pin-Hole Camera Model
3.2 Two-Plane Camera Calibration
3.3 Perspective Transformation
3.4 Image Output Device Transformation
3.5 Log Scaling Geometry
3.6 Log Scaling Simulation Results
4.1 Laboratory Scaling Apparatus
4.2 Mean RMS Error, Φ, for the Calibration Process
4.3 Standard Deviation of Φ, σΦ, for the Calibration Process
4.4 Result Distribution for θx - Test Image
4.5 Result Distribution for θx - Six Images
4.6 Result Distribution for h0 - Test Image
4.7 Result Distribution for h0 - Six Images
4.8 Result Distribution for Y0 - Test Image
4.9 Result Distribution for Y0 - Six Images
4.10 Log Scaling Radius Distribution - Centre of the Image
4.11 Log Scaling Radius Distribution - Corners of the Image
4.12 Log Scaling Radius Distribution - Corners of the Image
4.13 Log Scaling Radius Distribution - Corner Model Fit to the Centre
4.14 Log Scaling Radius Distribution - Corner Model Fit to the Opposite Corner
4.15 Log Scaling Length Distribution - Centre of the Image
4.16 Radius Scaling Accuracy - All Tests Combined
4.17 Length Scaling Accuracy - All Tests Combined
A.1 Simulation Results: Radius Error vs. Log Angle
A.2 Simulation Results: Standard Deviation vs. Log Angle
A.3 Simulation Results: Radius Error vs. Log Radius
A.4 Simulation Results: Standard Deviation vs. Log Radius
A.5 Simulation Results: Radius Error vs. Normal Distance
A.6 Simulation Results: Standard Deviation vs. Normal Distance
A.7 Simulation Results: Radius Error vs. Diagonal Distance
A.8 Simulation Results: Standard Deviation vs. Diagonal Distance
A.9 Simulation Results: Radius Error vs. Zoom Limit
A.10 Simulation Results: Standard Deviation vs. Zoom Limit
B.1 Calibration Results: Convergence of Φ - Test Image
B.2 Calibration Results: Convergence of σΦ - Test Image
B.3 Calibration Results: fMv Estimation Histogram - Test
B.4 Calibration Results: fMv Estimation Histogram - Centre
B.5 Calibration Results: fMv Estimation Histogram - Corner
B.6 Calibration Results: fMv Estimation Histogram - Total
B.7 Calibration Results: Mratio Estimation Histogram - Test
B.8 Calibration Results: Mratio Estimation Histogram - Centre
B.9 Calibration Results: Mratio Estimation Histogram - Corner
B.10 Calibration Results: Mratio Estimation Histogram - Total
B.11 Calibration Results: θx Estimation Histogram - Test
B.12 Calibration Results: θx Estimation Histogram - Centre
B.13 Calibration Results: θx Estimation Histogram - Corner
B.14 Calibration Results: θx Estimation Histogram - Total
B.15 Calibration Results: θy Estimation Histogram - Test
B.16 Calibration Results: θy Estimation Histogram - Centre
B.17 Calibration Results: θy Estimation Histogram - Corner
B.18 Calibration Results: θy Estimation Histogram - Total
B.19 Calibration Results: θz Estimation Histogram - Test
B.20 Calibration Results: θz Estimation Histogram - Centre
B.21 Calibration Results: θz Estimation Histogram - Corner
B.22 Calibration Results: θz Estimation Histogram - Total
B.23 Calibration Results: h0 Estimation Histogram - Test
B.24 Calibration Results: h0 Estimation Histogram - Centre
B.25 Calibration Results: h0 Estimation Histogram - Corner
B.26 Calibration Results: h0 Estimation Histogram - Total
B.27 Calibration Results: v0 Estimation Histogram - Test
B.28 Calibration Results: v0 Estimation Histogram - Centre
B.29 Calibration Results: v0 Estimation Histogram - Corner
B.30 Calibration Results: v0 Estimation Histogram - Total
B.31 Calibration Results: X0 Estimation Histogram - Test
B.32 Calibration Results: X0 Estimation Histogram - Centre
B.33 Calibration Results: X0 Estimation Histogram - Corner
B.34 Calibration Results: X0 Estimation Histogram - Total
B.35 Calibration Results: Y0 Estimation Histogram - Test
B.36 Calibration Results: Y0 Estimation Histogram - Centre
B.37 Calibration Results: Y0 Estimation Histogram - Corner
B.38 Calibration Results: Y0 Estimation Histogram - Total
B.39 Calibration Results: Z0 Estimation Histogram - Test
B.40 Calibration Results: Z0 Estimation Histogram - Centre
B.41 Calibration Results: Z0 Estimation Histogram - Corner
B.42 Calibration Results: Z0 Estimation Histogram - Total
C.1 Scaling Results: Radius Distribution for Image Centre - Test Model
C.2 Scaling Results: Radius Distribution for Image Corners - Test Model
C.3 Scaling Results: Radius Distribution from All Images - Test Model
C.4 Scaling Results: Radius Distribution for Corner Images - Same Corner Models
C.5 Scaling Results: Radius Distribution for Centre Image - Corner Models
C.6 Scaling Results: Radius Distribution for Corner Images - Opposite Corner Models
C.7 Scaling Results: Radius Distribution for All Images Combined
C.8 Scaling Results: Length Distribution for Image Centre - Test Model
C.9 Scaling Results: Length Distribution for Image Corners - Test Model
C.10 Scaling Results: Length Distribution from All Images - Test Model
C.11 Scaling Results: Length Distribution for Corner Images - Same Corner Models
C.12 Scaling Results: Length Distribution for Centre Image - Corner Models
C.13 Scaling Results: Length Distribution for Corner Images - Opposite Corner Models
C.14 Scaling Results: Length Distribution for All Images Combined
C.15 Scaling Results: Radius of Log at 40° Angle to the Image Plane - Test Model
C.16 Scaling Results: Length of Log at 40° Angle to the Image Plane - Test Model
C.17 Scaling Results: Radius of Smaller Cylinder - Test Model
C.18 Scaling Results: Length of Smaller Cylinder - Test Model
C.19 Scaling Results: Radius of Real Log - Test Model
C.20 Scaling Results: Length of Real Log - Test Model

Acknowledgements

I would like to thank my supervisor, Dr.
Peter Lawrence, for the guidance and encouragement that he provided throughout this project. His technical advice and opinions have been crucial to the outcome of this work. I would also like to thank the fellow graduate students who provided extremely valuable technical and moral support, in particular James Reimer, Derek Hutchinson, Joe Poon and Nader Riahi. This research would not have been possible without the financial support of the Natural Sciences and Engineering Research Council.

Chapter 1

Introduction

The forest industry in British Columbia currently employs large, relatively flat fields as sorting yards where, amongst other things, delimbed logs are brought in to be measured. A group of logs is laid out, roughly parallel to each other, on the surface of the sort yard. There are numerous formulae for computing the volume of a log, based on the length and radius, in use in industry today. The process of determining the log volume is called scaling. One process of scaling these logs involves a scaler with a calibrated stick who measures the dimensions of the log. The measurement of the length and diameter of a log is a relatively slow technique, and thus the goal of this project was to investigate a more automated process. At the same time, the accuracy (modelling of reality) and precision (repeatability) of the measurements must not suffer. This problem, as it turns out, is fairly complicated if one turns to computational vision techniques using cameras and signal processing algorithms. The problem is also not very amenable to laser-based or other active sensing techniques. Initially, a survey of possible solutions to this industrial problem is presented. The second section of this report contains a new design, based on satisfying the problem constraints. This consists of the considerations which led to the design equations, and those decisions which were made about the actual process and the
A simulation was performed of the si tuation, using some idealiza-tions. This simulation worked well enough to warrant testing of a prototype. The final major section of the body of this report describes the experimental work that was performed wi th a laboratory prototype in order to verify design theory, justify the maintenance of accuracy wi th a substantial t ime savings, and provide insight into future work on this topic. In addit ion, this insight led to a section of practical recommendations based on this 'hands-on' experience. Appendices are also included which contain more complete listings of the results determined experimentally. 2 Chapter 2 Overview of the Problem and Related Work 2 . 1 Log Scaling Background In the logging industry, sort yards are used as intermediate destinations for cut trunks on their way to the mil ls . These large and relatively flat, fields are generally located near water so that the logs may be brought in by truck or water (without l imbs, but w i th bark) in order to be bundled into booms for water-based shipment to a saw-mil l . In these sort yards, each individual log is graded according to its quality and species so that proper mi l l ing may take place [37]. Further, the sort yard is required to measure the volume of the logs in order to calculate payments (to the Crown and logging contractors) and as an industrial measurement technique (for monitoring resource flow, as well as the unit productivi ty) [29]. Thus, the process becomes one of tremendous financial importance when considering the value of annual product ion in Br i t i sh Co lumbia from this primary industry. Under the current system, log volumes are measured either by weighing whole truckloads and making gross assumptions about the uniformity of the load moisture content and species, or by the more accurate method involving individual log, stick scaling [29]. 
In yards implementing this latter technique, qualified log scalers actually measure the individual pieces with the aid of a graduated stick (1-2 m long). While the former method is quicker and statistically correct over a long period of time, it does lead to errors due to the assumptions being made [29]. The individual log scaling technique provides more accurate values for the diameters of the end faces and the length. However, this process is seen as being somewhat time consuming and a bottle-neck to the overall productivity of the logging industry [29]. As the size of the individual pieces being harvested drops in attempts to stretch productivity in areas of diminishing returns, this bottle-neck becomes even more predominant, as more scaling will be required to process similar volumes.

2.2 Problem Formulation

With the foregoing discussion in mind then, it becomes immediately apparent that the alleviation of the log scaling time bottle-neck without a loss in accuracy would be a valuable contribution to the industry. To be more technically specific, it is desired to find, by means of automation, a method of determining the length and radius (or diameter) of a log. The ground, which is going to be composed of grass, dirt and bark, will be of varied colour, but may be assumed to be planar. The logs themselves may be modelled as tapered cylinders (hence of a characteristic length and radius) in order to extract the volume measurement. No rigorous surface or volume calculation is requested, as the formulae currently employed to calculate the size of the log are dependent on length and diameter measurements only. This is not to say that the volume could not be calculated more directly, just that, by the current industrial standards, no need exists. As described in the "Forestry Handbook of B.C." [37], there are primarily two scaling formulae used. The first of these is the B.C.
Cubic Scale, which measures "the actual solid wood contents of a log in cubic feet without deducting for slabs, edgings, or saw kerf" [37]. This is a fairly straightforward method that requires the two end diameters and the length:

    V = ((A1 + A2) / 2) l                                        (2.1)

where:
    V      is the log's cubic scale volume
    A1, A2 are the end face areas determined from d1, d2
    l      is the length of the log, measured to within 10 cm
    d1, d2 are the two end face diameters, measured inside the bark to the nearest inch (2.54 cm)

The other commonly used scale is the B.C. Board Foot Scale, which measures "the number of inch-thick boards that may be cut from logs of various lengths and diameters" [37]. This one is not as intuitively appealing a measure, but rather is more practically oriented. The formula for it is correspondingly more complex:

    V = (π (D - 1.5)² / 4) (l / 12)                              (2.2)

where:
    D is the top diameter, measured inside the bark to the nearest inch

There are some important facts that should be noticed from the above two formulae. Both volume calculations are based on the length and the square of the radius, as they are cylindrical-type modellings. The first one is a tapered cylinder, while the second formula makes no allowance for taper in logs of up to forty feet in length. For the purpose of automating log scaling, the outside diameters could be measured and a constant correction for the bark could then be subtracted. Finally, it is noticed that while the length accuracy is not very stringent, that of the diameter, which is both easier to measure and more dominant in the volume formulae, is required to within one inch. While the diameters are used here for ease of measurement, these formulae could both be thought of in terms of the end face radii instead, if this should prove to be a more relevant way of thinking. The accuracy desired would then become one-half inch (1.27 cm) for this measurement.
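The B.C. Cubic Scale calculation is simple enough to state directly in code. The following is a minimal sketch of Equation 2.1, not part of the thesis apparatus; the function and variable names are illustrative, and all inputs are assumed to share one consistent unit system:

```python
import math

def cubic_scale_volume(d1, d2, length):
    """B.C. Cubic Scale (Eq. 2.1): average the two end-face areas
    computed from the end diameters, then multiply by the length.
    All arguments must be in the same unit system (e.g. metres)."""
    a1 = math.pi * (d1 / 2.0) ** 2   # end-face area from diameter d1
    a2 = math.pi * (d2 / 2.0) ** 2   # end-face area from diameter d2
    return (a1 + a2) / 2.0 * length

# With equal end diameters (no taper) the result reduces to the
# ordinary cylinder volume pi * r^2 * l.
print(cubic_scale_volume(0.4, 0.4, 10.0))
```

For a tapered log the formula is the familiar Smalian-style average of the two end-face areas times the length.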
2.3 Practical Considerations

Even without any preconceptions as to what form of automation might be employed for the log scaling process, several practical considerations should be stated. Log sorting yards tend to be relatively busy places. A great deal of heavy machinery is constantly on the move as shipments of logs are simultaneously brought in, scaled, graded and parcelled for shipment to the saw mill. This makes for work conditions where each individual involved must be constantly on the alert for the safety of themselves and everyone else around. The log scalers are no exception to this. It is probable that automating their task, and possibly also removing them from the scene (on foot) at least part of the time, would prevent inaccuracies in measurements due to haste or distraction. Someone will still be required to grade and mark the logs on foot, but without the burden of a measuring stick. Thus, an ideal automation scenario would see the log scaling operator located away from traffic (and the natural elements as well). The log scaling apparatus itself should preferably be a single unit device. It should be no more obtrusive a factor in the sort yard than the scalers on foot currently are and, in fact, it would be a great advantage to everyone involved if it could be placed somewhere out of the way or attached to existing equipment. Finally, the log scaling apparatus should be sturdy and accurate enough to withstand weather elements (rain, snow, fog, wind). It would be advantageous if the device were easily protected, both while in use and when not in use. Further, it should ideally have little in the way of moving parts.

2.4 Related Works

2.4.1 Scope of the Survey

A survey of works that might provide useful direction for solving this problem is a wide ranging one, encompassing the fields of computer science, electrical and civil engineering, as well as forestry.
While the goal of the project is to contribute a practical method of solving an industrial problem, there is theoretical literature of equal importance in assessing its possible impact on the solution. The first step in the analysis of this problem is that of bringing the real world into a more digestible form by modelling. While some of this was given already in the problem formulation, a statement and previous work in this field are initially reviewed here. Next, the crux of the problem, that of sensing the input data, is attacked. Literature is surveyed on various means of acquiring the information necessary to produce the log dimensions. Sensors may be categorized as either active or passive [15]. Active sensors rely on being able to transmit and receive a beam of energy (light, sound, etc.), where the desired information is somehow encoded in the relation between the two beams. Passive sensors rely solely on ambient conditions in order to derive information.

2.4.2 Modelling

As was mentioned in Section 2.2, for the purpose of accounting for the amount of usable product that passes through a log sort yard, logs are assumed to be fundamentally cylindrical. From there, different formulae are based on different assumptions about the nature of the departure from this ideal modelling and the amount of wood that is actually usable within the derived volume. Many computational vision techniques have been proposed for the extraction of three dimensional surfaces from two dimensional images. Among these is a modelling system, very close to that suggested above, based on what is known as a generalized cylinder. This technique was first proposed by T.O. Binford [3] in 1971 and may also be referred to as that of generalized cones [18,2]. One of the earliest applications of generalized cylinders was implemented by Agin and Binford in 1973 [1].
In their experimental apparatus, a laser beam was deflected through a glass rod to achieve a horizontal beam of light incident on the subject. Using a television camera to spatially sample the scene and the process of triangulation, the scene was scanned and non-zero brightness points were stored. This data was processed to perform line detection, and the axes of the generalized cylindrical representations were extracted from the image. When the software had located a part of a cylinder, it created a characterization of the cross-section there and attempted to extend the cylinder in both directions. They proposed the scenario where the outline was defined as:

    RADIUS(n) = RADIUS(0) + M * n                                (2.3)

"where RADIUS(0) and M are parameters of the function, and n corresponds to the order of points along the axis" [1]. This could very easily be a modelling equation for a log, seen to be the frustum of a cone. In practice, the algorithm utilized to trace out each possible cylinder proceeded initially to arrive at cross-sectional planes normal to the points on the axis. Then, for each plane, points were located on the actual surface of the object and the diameter of each resulting cylindrical slice was calculated. A linear radius function was then fit to the set of values obtained as the diameters. Conical cross-sections resulting from this function were then retro-fit to the surface data to justify the validity of the original fit. Possible problems arose when the curve fit to the diametric data was not very good, as this tended to become a divergent process. The ends of the cylinder were particularly susceptible to this problem and they were, therefore, tested appropriately [1]. In actual operation, this system required some operator control.
For example, while all of the initial cylinder groupings and rankings (according to the length-to-width ratios for their likelihood of being cylindrical) were automatically performed, the operator must still specify which of these groups are to be analyzed further. The cylinder-tracing algorithm presented above was then run and the operator was required to decide on the success or failure of the operation. The above work was also utilized as an input medium for some work on "Structured Descriptions of Complex Objects" by Nevatia and Binford [25] in order to develop specific object recognition techniques for robotic applications. Generalized cylinders may be used to represent or approximate a three-dimensional object or any part thereof. As an example of their applicability, generalized cylinders may be used for a hierarchical recognition system for some three-dimensional objects characterizing, say, the human body, with the individual models being used for each of the head, trunk, arms and legs [19]. This sort of recognition was also performed on artificially generated, three-dimensional data by Soroka and Bajcsy [32]; the end goal of their work was tomographic reconstruction, however. Another three-dimensional, model-driven recognition system was developed by Shani in 1981 [28]. In this system, generalized cylinders were used as geometric models for the abdominal anatomy. Model descriptions for generalized cones may take on a few different forms when parameterized. One such representation, that of a linear radius function, has already been presented. This is a relatively simple reduction of information done by assuming a circular cross-section that is perpendicular to the axis. While it is not necessary to restrict oneself to thinking solely of a circular template sweeping out the volume of a generalized cone, it is certainly the simplest and most mathematically tractable.
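The linear radius fit of Equation 2.3 amounts to an ordinary least-squares line fit to the per-slice radii. The following sketch is illustrative only (it is not the Agin and Binford implementation); the function name and sample data are assumptions:

```python
import numpy as np

def fit_linear_radius(radii):
    """Least-squares fit of RADIUS(n) = RADIUS(0) + M * n (Eq. 2.3)
    to radii sampled at successive cross-sections along the axis.
    Returns (radius0, M)."""
    n = np.arange(len(radii))
    M, radius0 = np.polyfit(n, np.asarray(radii, dtype=float), 1)
    return radius0, M

# A gently tapering log: the radius shrinks 0.5 cm per slice from 30 cm.
samples = [30.0 - 0.5 * k for k in range(20)]
r0, m = fit_linear_radius(samples)
print(r0, m)
```

With noisy slice diameters the same call returns the best-fitting frustum parameters, and the residuals give a direct check on how cone-like the traced surface actually is.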
For the purpose of log scaling, this reduction in information is both attractive and consistent with current practice. The simplest case of this linear radius model is obviously that of a constant radius cone - a cylinder. The radius of the cross-section does not necessarily have to be an analytic function at all, however. With the example of log scaling, it could conceivably become a sampled function of arc length along the principal axis. It also becomes necessary to describe the individual object's principal axis. One choice would be as three, linearly independent functions of the arc length, such as:

    a(s) = ( x(s), y(s), z(s) )                                  (2.4)

where:
    s    is the arc length along the principal axis
    a(s) is the object's principal axis [2]

In this manner, a Cartesian coordinate system is defined and the projection function of the object's cross-section is determined along each of the axes. Further, this idea can expand to include, by definition, the ability for the cross-sectional template (and principal axes) to rotate as it moves through space, as is the case with a screw. In contrast, the simplest case of axis representation involves a linear assumption [36]. For this situation, it may then be characterized by means of a point in the image plane, an angle relative to the horizontal (θ) and an angle relative to the line of sight (α). These may be found by various fitting techniques dealing with such things as the symmetry of the surface normal data about a proposed axis. In a work by Walker and Kanade, the object's coordinates were represented by two parameters: s and t. s is defined to be the normalized distance along the cylinder's axis and t is defined to be the normalized distance (angular, in the right-handed sense) around the axis in the normal plane, starting at the point on the far side of the cylinder that is coplanar with both the axis and the line of sight [36].
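As a concrete illustration of such an (s, t) parameterization, a straight circular cylinder (the degenerate case of a linear axis) maps the two normalized coordinates to a surface point. This is a simplified sketch, not Walker and Kanade's formulation: it assumes the axis lies along one coordinate axis and ignores their phase convention for t = 0:

```python
import math

def cylinder_point(s, t, length, radius):
    """Surface point of a straight circular cylinder in normalized
    (s, t) coordinates: s in [0, 1] measures distance along the axis,
    t in [0, 1) measures angular distance around it. The axis is
    assumed to lie along the x direction."""
    angle = 2.0 * math.pi * t
    return (s * length,                # position along the axis
            radius * math.cos(angle),  # cross-section, first normal axis
            radius * math.sin(angle))  # cross-section, second normal axis

# Halfway along a 4 m cylinder, a quarter turn around a 0.3 m radius:
print(cylinder_point(0.5, 0.25, 4.0, 0.3))
```

Sweeping s and t over their ranges generates the whole surface, which is what makes the representation convenient for fitting surface data.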
A somewhat different manner of parametrically representing the principal axis is that of a "Frenet frame". This system uses points along the object's axis as the origins of local coordinate systems and defines these coordinate systems with the aid of three orthogonal unit vectors: ξ, ν and ς. ξ is the unit vector that is tangent to the principal axis, ν is defined to be in the direction of the centre of curvature, and ς is called the centre of torsion of the axis. While these three vectors are a more meaningful description of the axis' activity in three dimensions, they do have their drawbacks. For one, the centre of torsion of the axis is determined as the vector which is orthogonal to both the tangent and the radius of curvature. When the curvature approaches zero (which is a quite common occurrence), the torsion is ill-defined. Further, if one is to allow the cross-sectional template to twist as it passes along the axis, then an additional parameter is required to deal with this [2]. On the positive side for the use of generalized cylinders is their modularity when being used to represent a portion of an object. In addition, cross-sectional area and object volume may be analytically derived from the extracted model. There are further problems not yet mentioned that arise from this technique, however. If the axis of the cylinder curves sharply, then the volume and surface area calculations will be in error due to an overlap of successive slices. Further, if one must match the cylinder derived from the image to a model, then the axis should, in general, be parallel to the image plane for optimal shape extraction. Should any other orientation occur, then a foreshortening effect will result. The perspective projection of objects onto an image plane causes those farther away to be smaller than those which are closer. This can result in the far end of the log appearing smaller than the near end.
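This foreshortening follows directly from the pin-hole model (Figure 3.1): the projected size of an object scales inversely with its distance from the camera. A toy sketch, with illustrative numbers only:

```python
def projected_size(focal_length, size, depth):
    """Pin-hole projection: an object of physical extent `size` at
    distance `depth` from the camera images with extent
    focal_length * size / depth on the image plane."""
    return focal_length * size / depth

# The same 0.30 m radius seen at the near and far ends of a slanted log
# (50 mm lens, ends at 10 m and 14 m):
near = projected_size(0.05, 0.30, 10.0)
far = projected_size(0.05, 0.30, 14.0)
print(near > far)  # True: the far end images smaller
```

The ratio of the two projected sizes equals the inverse ratio of the depths, which is exactly the ambiguity a scaling system must resolve when the log axis is not parallel to the image plane.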
Additionally, the overall length of the projection varies and will be maximized only when the log axis is parallel to the image plane. This could possibly be detected from clues provided by a cooperative process [20], but in general, will lead to an ambiguity in the image to three-dimensional model transformation.

2.4.3 Information Acquisition Techniques

As has been mentioned, data acquisition techniques for this problem may be grouped into two categories: active and passive [15]. Active techniques are those which rely on being able to receive a transmitted beam of energy from the log target, the desired information being somehow encoded in that beam. This includes methods utilizing lasers, structured lighting and ultrasound. On the other side of these are techniques which depend solely on ambient conditions. Cameras are the most common transducers for this situation, possibly also utilizing optical filters. Active sensing is not new to the logging industry. Lasers have been used in saw mills to determine the diameter of a log moving along a conveyor belt [10,35]. This is done by detecting a broken beam of laser-generated light as the log passes between the source and the detectors. The reason that this can be done is that the conveyor is located in a controlled industrial environment where the geometry of the measuring system can be fixed and the laser light intensity may be readily detected amid the ambient light. These conditions, primarily the latter, impose a severe restriction on most light-oriented data acquisition systems. However, this restriction is not so severe that lasers have not been a popular data acquisition tool for other tasks in the past. A good treatment of their applications to various industrial tasks may be found in Harry [11] and Maurer [22]. Civil engineering has seen lasers used in the out of doors with great success [11,30,27,38].
For surveying purposes, electronic distance metering equipment (laser range finders) is available that is able to measure distances that typically run up to ten kilometres with a standard deviation of 5 mm + 5 ppm. In addition, they may be mounted on highly accurate optical or electronic theodolites in order to determine the angles of the measurements. If measurements of this type could be made in a sort yard, then any linear dimensions of the logs could be extracted from two distance measurements and the angle between them. Unfortunately, it is not that simple. The bulk of this equipment is designed such that the laser beam is reflected off of a flat prism face (or a bank of prisms for farther distances) and received at the source. In this manner, there is sufficient received power to isolate the monochromatic (usually infrared) light from the broad-spectrum, atmospheric light. Laser range finders operate based on a number of different techniques depending on the application and measuring distance. For short, high-accuracy work, such as machine tool calibration, interferometers are used to receive the incident light (directly or off of a mirror) and determine the distance measurement [13]. This is highly unsuitable for the log sort yard, as it would be far too labour intensive to spot each measured point with the receiver. For longer distances, where the error may be of a higher absolute value, the most common technique, as with the surveying equipment, is to modulate the outgoing light intensity and perform phase detection on the received signal [15,11]. The difficulties, as were alluded to above, are that the received power will be insufficient off of a log target and that it is too time consuming to place a reflector at each of the desired measurement points. A further long distance technique that has been implemented involves the measurement of the time of flight of a pulse of laser light off of an adequately reflective target [11,15].
This is known as lidar, and requires highly sensitive and fast photodetectors combined with state-of-the-art electronics to be able to find the return pulse and accurately measure the time lag amid pulse dispersion and ambient light noise. In addition to this, the same limitations apply as were mentioned for the phase modulation scheme. Laser range finders have been used many times before in controlled environments for the purpose of generating a range image, where each element of the image is a range value rather than a light intensity level [26,15]. With the case of a known scanning-laser position and a planar lateral effect photodiode, range images may also be calculated by triangulation [9].

Laser range finders offer the possibility of a variety of techniques for determining individual distances by active sensing. Still, problems exist with the use of this technology for this application. The logs (and their background of grass and mud) are very poor reflectors and will not return sufficient power to overcome ambient light. To place a reflector at each desired measurement location would be far too labour intensive. In addition, while the low-end laser range finder is relatively inexpensive, it would also be ineffective. The increases in cost towards more sophisticated equipment are quite substantial if the unit is to be outfitted with scanning mirrors, planar photodiodes, time of flight measuring electronics, or any other equipment that would be required to perform the actual task.

Other active methods of range finding have also been used in industrial situations. A more common of these for eyesight-range measurements is ultrasound. However, ultrasonic beams are not directive enough to be able to make the pinpoint measurements required here [15,31].
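The two laser ranging principles just described reduce to simple arithmetic once a phase shift or a round-trip time has been measured. A minimal sketch (the function names and the 15 MHz modulation frequency are illustrative assumptions, not values taken from the equipment cited):

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def distance_from_phase(phase_rad, f_mod_hz, n_cycles=0):
    """Phase-detection ranging: the outgoing intensity is modulated at
    f_mod_hz and the phase shift of the received signal is measured.
    The beam travels out and back, so half a modulation wavelength of
    distance produces one full cycle of phase.  n_cycles resolves the
    whole-cycle ambiguity (typically from a coarser modulation)."""
    wavelength = C / f_mod_hz
    return (wavelength / 2.0) * (n_cycles + phase_rad / (2.0 * math.pi))

def distance_from_time_of_flight(t_round_trip_s):
    """Pulsed (lidar) ranging: the pulse covers twice the distance."""
    return C * t_round_trip_s / 2.0
```

At a 15 MHz modulation frequency the unambiguous range is only about 10 m, and in the pulsed scheme a one-nanosecond timing error corresponds to roughly 15 cm of distance, which is why the text calls for state-of-the-art electronics.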
Structured lighting provides another example, where the use of a light pattern such as stripes [1,2,14,39] can provide sufficient depth and orientation clues to a television camera for a post-processor to deduce the desired information. In light striping, a plane of light is scanned across the scene that the camera is viewing, and the illuminated image coordinates for each scan position are stored. As the three-dimensional coordinates of the light plane are known, and each of these stored image coordinates transforms to a line of sight in the real world, the intersection of the above information leads to each of the real-world points on the surface of an object shown in the image. This is a triangulation technique, and the distance between the source and camera is referred to as the baseline. While each stripe also adds surface continuity information to the image, problems arise where concavities in the image occur. If the concavity is deep enough, the camera and the light source will not both be able to see into it; however, this is not a severe problem in dealing with cylinders. This technique is also less expensive than most of the other active sensing schemes. The limiting factor here, however, as with the lasers, is that the structured light will not be visible enough to the camera in daylight. Much the same could be said of any variation on the above structured lighting technique. The feasibility of these was also rejected by Clark [6] in dealing with the log scaling problem.

Passive data acquisition techniques offer alternatives that should be more feasible for an industrial problem that is out of doors in a log sort yard environment. The volume scaling of stacked pulpwood was automated by Miller and Tardiff, who photographed end views of uniform length, evenly piled pulpwood [12]. This photograph was viewed with a 729 line television camera. Binary thresholding was then applied to the camera image.
This is the technique of classifying as '1' those pixels that possess a grey-scale (intensity) value above some predetermined value, and as '0' otherwise. Following this, the number of '1' pixels, indicating the presence of a log end face at that point in the image, was electronically counted to arrive at a volume figure. This was estimated to be within 2% for these uniform loads.

One study on automating the log scaling process was performed by Demaerschalk, Cottell and Zobeiry in 1980 [7]. With the use of a two-camera data acquisition system, their technique involved photographing an end and a side view of the logs while still on the truck. They investigated a system to improve on weigh scaling by enlarging these two views and measuring the logs directly from the images. In this manner, weigh scaling could be replaced by stick scaling a sample of incoming logs and applying a regression technique to estimate the total volume and species volume. This study concluded that the efficiency of total volume prediction could be increased thus.

A very similar problem was studied by Clark [5,6], who investigated the possibility of using stereo vision techniques to automate the stick scaling process. In his 1985 Ph.D. dissertation, he provided a theoretical design of a system based on the Marr-Poggio stereo matching algorithm [19,20]. In this system, he sought to provide a method of edge detection in order to derive the outline of the logs for recognition and subsequent measurement purposes. A basic difficulty which complicated this problem is that the surface which the logs are lying on (the background in a camera image) is going to be open ground composed of mud, wood pieces, etc. This is unpredictably close in appearance to the actual logs themselves. Binary thresholding failed for the reason that the logs did not have a uniform intensity distribution, but rather are highly textured.
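The pulpwood counting scheme described above, which fails on textured logs over open ground, can be sketched in a few lines (NumPy is assumed for the pixel array; the threshold level and toy image are arbitrary illustrative values):

```python
import numpy as np

def threshold_and_count(image, level):
    """Binary thresholding: pixels with grey-scale value above `level`
    become '1' (log end face present at that point), all others '0'.
    The count of '1' pixels, times the area seen by one pixel,
    estimates the total end-face area."""
    binary = (np.asarray(image) > level).astype(np.uint8)
    return binary, int(binary.sum())

# Toy 2x3 image: exactly two pixels exceed a threshold of 100.
binary, count = threshold_and_count([[10, 200, 40], [150, 90, 60]], 100)
```

For uniform, evenly piled pulpwood ends this count converts directly to a volume figure; on textured logs the grey levels of log and background overlap and no single threshold separates them.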
He too precluded the ideas that involved control over either the lighting on the logs or the characteristics of the surfaces of the logs (or their background), as these techniques were not practical for the open environment of a logging sort yard. The conclusion eventually reached was that a computational vision method of edge detection was required for this task. This is a process whereby the occluding contours of objects in the image are extracted, thus reducing the scene to a simpler line drawing with more tractable information for this end goal. In aid of this technique is the fact that logs are simply connected (i.e., they can be assumed to contain no holes). It was discovered from there that the problem was not all that simple, because the two-dimensional spatial filtering operator (a Laplacian of a Gaussian, ∇²G) was producing many extraneous edges (corresponding to zero-crossings in the filtered image). This filter was chosen because it performs both a derivative function on the image, thus extracting sharp discontinuities (the Laplacian), and an adjustable bandpass function, thus providing it with a tunable edge frequency analyzer (the Gaussian). It has been argued that the ∇²G filter, which is a rotationally symmetric function, is an optimum choice for this purpose [19].

It was at this point that the concept of stereo vision entered into the picture, as it may take advantage of the rich image texture of both the logs and the irregular background to assemble disparity maps of the two-dimensional images. Stereo vision uses a triangulation technique, where identical points in two images are matched and their displacement relative to each other (the disparity) is taken as inversely proportional to the normal distance from the cameras to the world point.
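For parallel cameras, the disparity-to-distance relation underlying this triangulation is Z = f·b/d; a one-line sketch (the parallel-axis geometry and the symbol names are assumptions here, since the text does not fix the camera arrangement):

```python
def stereo_depth(disparity_px, focal_len_px, baseline_m):
    """Depth from stereo disparity for parallel cameras: matched points
    displaced by `disparity_px` pixels, in cameras separated by the
    baseline, lie at Z = f*b/d, so distance is inversely proportional
    to disparity as stated above."""
    return focal_len_px * baseline_m / disparity_px
```

Halving the disparity doubles the recovered distance, which is why the apparatus dimensions involve a trade-off between disparity size and matchability.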
The surfaces of the logs are closer to the cameras than the surface that the logs are lying on, thus enabling the occluding contours to be readily extracted from the depth values (by thresholding these values). While the depth values acquired may also be determined by means of other commonly used industrial methods, these have been pointed out to require an environment that may be tightly constrained.

For Clark's proposal, experimental apparatus consisting of two television cameras suspended from a horizontal track above the ground was set up. The dimensions of the apparatus involved a trade-off between having the image disparities large enough to provide a useful measurement, and small enough to allow the matching algorithm to succeed. While it was intended to use a hierarchical system of filtering (four different scales for the filters to analyze in succession, coarse to fine), only one level was implemented. In fact, it was recommended that scale space techniques, which are those implementing a continuously ranging filter scale factor, would remove false stereo matches and allow for the optimum accuracy. Following the disparity map thresholding, a filling-in technique was used to eliminate the holes in the log image and isolate the occluding contour of the target object. This was done by scanning the thresholded image and setting to one all pixels that have at least one pixel on both sides (within a fifteen pixel range) that is set to one. This scanning was repeated for five different angular directions between zero and ninety degrees. From this occluding contour and an application of a discrete version of Green's theorem, the centroid and axes of the ellipse that will produce equivalent moments of inertia were computed. Then, with the assumption that the log's axis is perfectly straight, the major and minor axes of the ellipse were used as the directions along which the length and width were calculated, respectively.
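The equivalent-moment ellipse can be sketched from the second moments of the filled binary region (computed here directly from the region's pixels, rather than by the discrete Green's theorem contour integral the dissertation uses; NumPy assumed):

```python
import numpy as np

def equivalent_ellipse(mask):
    """Centroid, semi-axis lengths and axis directions of the ellipse
    having the same second moments as the binary region in `mask`."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()
    cov = np.cov(np.vstack([xs - cx, ys - cy]))
    evals, evecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    semi_axes = 2.0 * np.sqrt(evals)        # minor semi-axis, then major
    return (cx, cy), semi_axes, evecs

# A 3-pixel-high, 21-pixel-wide bar: the major axis should lie along x.
mask = np.zeros((11, 31), dtype=np.uint8)
mask[4:7, 5:26] = 1
(cx, cy), semi, axes = equivalent_ellipse(mask)
```

The column of `axes` paired with the larger eigenvalue gives the direction along which the length would be measured; the other column gives the width direction.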
Using this proposed technique, an estimated volume accuracy of 10% was arrived at, although errors as high as 20% were exhibited for some volume calculating formulae. Aside from these general estimates, no mention was made of the actual values derived for the length and width of the log, and their comparison to reality. This would have de-coupled these measurements and provided a slightly more useful yardstick by which to gauge the accuracy of the technique. It would also have shed more light on whether the assumption of a straight log axis is valid, although essentially this same assumption is being used in practice with the current manual technique.

A further source of error arises with the use of a stereo technique at all. One of the disadvantages of stereo matching is that it suffers at hidden edges; i.e., those points in the image which are visible to one camera, but not to the other. In this case, there is no stereo match and any attempt to find one will lead to error. This exact situation will occur at the rounded, non-uniform log edges, as the rays that project from the occluding edges back to the image planes do not actually touch the logs at points on their surface that are common to both of the images.

Finally, the above algorithm is very computationally intensive, especially if more than one filtering resolution is required, as it currently appears. While it could be sped up considerably with the aid of parallel architecture hardware for the filtering, and a pipelined system of handling each of the filter resolutions simultaneously, this will still quite likely result in a delay for the operator in awaiting the measurements. A further concern is the cost of the required hardware, which would be quite substantial.

Other computational vision techniques exist for object recognition from camera images [20,2,14], based on such things as motion, shading or texture.
Some have even designed systems based on a cooperation between several of these techniques in order to identify objects from models. Unfortunately, these are computationally sophisticated and would require expensive hardware to implement with any realistic turnaround time.

The difficulty with the ideas discussed so far is that they either require a constrained environment, in the case of most of the active sensors, or spend too long performing object recognition, in the case of the passive techniques. One method of overcoming these hurdles is to use a passive technique (television camera) which creates a situation where the operator can quickly perform the recognition. This allows for an immediate calculation of the measurements. The next section will lay out a system of this nature, beginning with a mathematical analysis of the transducer itself, the camera.

Chapter 3
Development of the Scaling Method

This chapter presents the theory behind the design of the log scaling system to be proposed. It uses, as its means of data acquisition, a single television camera. As far as the computational power required to do the processing is concerned, only a small amount of on-line calculation is required. The hurdle of object recognition is overcome by prompting the operator for a few easily found points on a graphics monitor. Initially, the camera will be modelled mathematically in order to determine the relationship between the image plane and the real world. Then, the basic algorithm and physical layout will be described as proposed for a simulation. Finally, results of this simulation will be discussed.

3.1 Camera Modelling

The simplest and most commonly used model of a camera is that known as a "pin-hole" model [21,33]. As shown in Figure 3.1, all lines of sight in the pin-hole model pass through a single point (the lens centre) before intersecting with the image plane.
In the image plane, which is a focal length (f) in distance behind the lens centre, objects in the real world are inverted.

Figure 3.1: Pin-Hole Camera Model

For this reason, the situation is generally treated as if the image plane were actually in front of the lens centre by a distance equal to f. Here, objects do not appear inverted, but the world-to-image transformation is the same. The transformation from world coordinates to image coordinates has the effect of shrinking the x and y dimensions by a factor equal to the depth, z, divided by the focal length, f:

x_c = x / (z/f)    (3.1)
y_c = y / (z/f)    (3.2)

If homogeneous coordinates are being used, this could also be represented by a transformation matrix; however, the treatment used here will not describe it strictly as such. This model requires seven parameters to fully define orientation (three angular degrees of freedom), location (three positional degrees of freedom) and the focal length.

Another model exists that is more general than the above linear model. This is the two-plane model [4,21]. Instead of having all of the lines of sight project to a central point, this model associates each image point with a pre-determined line of sight that need not pass through any other point in particular. The two-plane model is so named because the technique of deriving the lines of sight involves measuring points in each of two planes that correspond to the same image point (refer to Figure 3.2). Not every image point need be measured. Rather, a regular grid of points is calibrated and the remainder of the lines of sight are determined by an interpolation technique. Martins, et al. [21] did this using linear, quadratic and spline interpolation schemes, with results that verified an improvement in accuracy over the strict pin-hole model for a variety of different camera/lens combinations.

Figure 3.2: Two-Plane Camera Calibration

This technique was used by Chen, et al.
[4] in 1980, with an 8 × 8 grid of calibration data. They found that, for ten image test points, the reverse process of calculating the two points in the calibration planes that correspond to a given image point (strictly from the interpolation formulae) produced points that, when viewed in the image, differed by an average of 0.2 pixels. This figure is quite good; however, the calibration process involved beforehand is quite extensive and time-consuming. An operator in the field would have great difficulty in carrying it out. It could be done in a laboratory, prior to field installation of the equipment, but might then be subject to such factors as temperature change, change resulting from any movement or blows during shipping, and parameter drift with time. The calibration process itself was carried out with the aid of a robot arm capable of 0.001 inch placement accuracy. If the sampling grid was reduced to only 6 × 6, the average error rose to about 0.6 pixels, and similarly to a full pixel when the grid was 2 × 2. Thus, this process is neither simple nor quick enough for field calibration, although it may become necessary to perform it in a laboratory before installation should the pin-hole model not prove accurate enough.

3.2 Camera Calibration

The pin-hole model, being the simplest and most commonly used, shall be the one selected for the initial design, with the awareness that it is merely a good linearization of the situation. Greater accuracy could possibly be attained at the expense of time, a loss of flexibility and the cost of more precise calibration equipment than that described herein. The task of rigorous camera calibration using this camera model was performed by Sobel [33]. His system was designed to allow a computer controlled television camera to guide robot manipulators. In order to calibrate a television camera, one must first be able to analyze all of its internal and external parameters.
In addition to the six degrees of freedom specifying the world-coordinate position and orientation of the camera in some reference position, Sobel described the camera as having variable PAN (rotation about the unrotated, world-coordinate y-axis) and TILT (rotation about the once-rotated, world-coordinate x-axis). SWING (rotation about the twice-rotated, world-coordinate z-axis) was held constant. This introduced two further parameters to be solved for, as each of PAN and TILT was assumed to be measured by a linearly varying potentiometer. In addition, his camera lens centre was specified by three more parameters (a vector from the centre of rotation of the camera to the lens centre) rather than just the three given above (the point), as this point itself will not be stationary while the camera pans and tilts. This brings the total number of parameters determining the external position of the model up to eleven.

The internal geometry of this camera also required derivation. The image reference (centre) coordinates needed to be determined, as did the ratio of the quantization factors (Mratio = Mx/My) which scale the x and y values from the image plane to the quantized video output (see Figure 3.4). Finally, both the zoom and the focus of the camera were variable. The product of the focal length and the vertical quantization factor (fMy) was linearly dependent on the potentiometer measuring the focus control (k1(focus) + k2) and hyperbolically dependent on that measuring the zoom control (c1 + c2/(c3 − (zoom))). This led to four more values to be determined, bringing the total up to eighteen for this particular situation.

Analytic equations for each of the horizontal and vertical image coordinates may be derived that are functions of the world coordinates and each of the four potentiometer settings. To determine these equations, which are not linear, the above eighteen parameters had to be determined.
A system of two equations in eighteen parameters each should theoretically require nine data sets, each consisting of the real-world coordinates of a point, the associated potentiometer readings and the resulting image coordinates. In fact, to produce sufficient independent information in order to solve for the parameters, ten data sets spread appropriately over a total of four images were required as a minimum. Since the model used for the camera is a simplification of reality, the more information that can be applied to the optimization of this model via its parameters, the better. It is best to provide enough calibration data to not only solve for the required parameters, but also to exercise as much of the input (world, image and potentiometer reading) space as possible in doing so. In this way, a regression technique will better fit the model to reality.

In reconsidering the original problem of the log sort yard, an additional factor also becomes important in the design of a camera-based system. For the system to remain flexible, it should be possible to quickly and easily calibrate it in a field situation, by an operator with a minimum of training. While the above model calibration may still seem quite daunting, it will be shown to be feasible with the aid of certain simplifications.

For each world coordinate point located, a transformation, P, is involved that converts from world coordinates to camera coordinates [12]. This may first be described in homogeneous coordinates as rotations about the y-, x- and z-axes, in that order (analogous to the operations of pan, tilt and swing). The rotation angles of the transformation are the negative values of those angles, θx, θy, θz, that describe the orientation of the camera in world coordinates. θx, θy and θz are derived in the real world as the angles that are required to map the world-coordinate axes onto the orientation of the camera coordinate axes.
P' = Rot_y(−θ_y) · Rot_x(−θ_x) · Rot_z(−θ_z)    (3.3)

= \begin{pmatrix} \cos\theta_y & 0 & -\sin\theta_y & 0 \\ 0 & 1 & 0 & 0 \\ \sin\theta_y & 0 & \cos\theta_y & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta_x & \sin\theta_x & 0 \\ 0 & -\sin\theta_x & \cos\theta_x & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \cos\theta_z & \sin\theta_z & 0 & 0 \\ -\sin\theta_z & \cos\theta_z & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}    (3.4)

= \begin{pmatrix} \cos\theta_y & \sin\theta_x\sin\theta_y & -\cos\theta_x\sin\theta_y & 0 \\ 0 & \cos\theta_x & \sin\theta_x & 0 \\ \sin\theta_y & -\sin\theta_x\cos\theta_y & \cos\theta_x\cos\theta_y & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \cos\theta_z & \sin\theta_z & 0 & 0 \\ -\sin\theta_z & \cos\theta_z & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}    (3.5)

= \begin{pmatrix} \cos\theta_y\cos\theta_z - \sin\theta_x\sin\theta_y\sin\theta_z & \cos\theta_y\sin\theta_z + \sin\theta_x\sin\theta_y\cos\theta_z & -\cos\theta_x\sin\theta_y & 0 \\ -\cos\theta_x\sin\theta_z & \cos\theta_x\cos\theta_z & \sin\theta_x & 0 \\ \sin\theta_y\cos\theta_z + \sin\theta_x\cos\theta_y\sin\theta_z & \sin\theta_y\sin\theta_z - \sin\theta_x\cos\theta_y\cos\theta_z & \cos\theta_x\cos\theta_y & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}    (3.6)

where:

P' is the rotation transformation portion of P, the world-to-camera transformation matrix
Rot_i(θ_j) is a homogeneous rotation transformation matrix, describing a right-handed rotation, θ_j, about the i-axis [12,2]

Following this, a translation transformation is applied in order to shift the coordinates to a camera frame of reference. The translation components consist of the negatives of the camera's world coordinates. This makes the entire homogeneous transformation equal to the following:

P = P' · Trans(−X_0, −Y_0, −Z_0)    (3.7)

= Rot_y(−θ_y) · Rot_x(−θ_x) · Rot_z(−θ_z) · Trans(−X_0, −Y_0, −Z_0)    (3.8)

= Rot_y(−θ_y) · Rot_x(−θ_x) · Rot_z(−θ_z) · \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ -X_0 & -Y_0 & -Z_0 & 1 \end{pmatrix}    (3.9)

= \begin{pmatrix} \cos\theta_y\cos\theta_z - \sin\theta_x\sin\theta_y\sin\theta_z & \cos\theta_y\sin\theta_z + \sin\theta_x\sin\theta_y\cos\theta_z & -\cos\theta_x\sin\theta_y & 0 \\ -\cos\theta_x\sin\theta_z & \cos\theta_x\cos\theta_z & \sin\theta_x & 0 \\ \sin\theta_y\cos\theta_z + \sin\theta_x\cos\theta_y\sin\theta_z & \sin\theta_y\sin\theta_z - \sin\theta_x\cos\theta_y\cos\theta_z & \cos\theta_x\cos\theta_y & 0 \\ -X_0 & -Y_0 & -Z_0 & 1 \end{pmatrix}    (3.10)

where:

X_0, Y_0, Z_0 are the camera's x, y, z coordinates in the world frame of reference
Trans(d_x, d_y, d_z) is a homogeneous transformation matrix producing a translation by the vector (d_x, d_y, d_z)^t [12,2]

For the purpose of deriving analytic expressions with a minimal number of parameters, the above homogeneous relations were expanded out when the complete system transformation equations were required:

(u v t 1) = (X Y Z 1) · P    (3.11)

          = (X Y Z 1) · Rot_y(−θ_y) · Rot_x(−θ_x) · Rot_z(−θ_z) · Trans(−X_0, −Y_0, −Z_0)    (3.12)

u = X(cos θy cos θz − sin θx sin θy sin θz) + Y(−cos θx sin θz) + Z(sin θy cos θz + sin θx cos θy sin θz) − X_0    (3.13)

v = X(cos θy sin θz + sin θx sin θy cos θz) + Y(cos θx cos θz) + Z(sin θy sin θz − sin θx cos θy cos θz) − Y_0    (3.14)

t = X(−cos θx sin θy) + Y(sin θx) + Z(cos θx cos θy) − Z_0    (3.15)

where:

u, v, t are the x, y, z coordinates in the camera-based coordinate system
X, Y, Z are the x, y, z coordinates in the world coordinate system
θx, θy, θz are the rotation angles that transform the world coordinate axes into the camera's orientation
X_0, Y_0, Z_0 are the translations required to shift the world coordinate origin onto that of the camera

Figure 3.3: Perspective Transformation

Left-handed coordinate systems were used for both coordinate systems in order to keep the depth, t, in the camera-based coordinates positive, while avoiding having to invert one of the axes in the transformation. Next, a transformation exists that maps a point in the camera-based coordinate system to the image plane via its line of sight. This is the perspective transformation:

u' = (f/t) u    (3.16)

v' = (f/t) v    (3.17)

where:

u', v' are the x and y image transducer plane coordinates, as shown in Figure 3.3.
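Equations 3.13 to 3.17 can be checked numerically by coding the explicit expressions directly (NumPy assumed; the angle and position values fed in are arbitrary test inputs, not calibration results):

```python
import numpy as np

def world_to_image_plane(X, Y, Z, tx, ty, tz, X0, Y0, Z0, f):
    """Camera-frame coordinates (u, v, t) from Equations 3.13-3.15,
    followed by the perspective transformation of Equations 3.16-3.17,
    giving the image transducer plane coordinates (u', v')."""
    sx, cx = np.sin(tx), np.cos(tx)
    sy, cy = np.sin(ty), np.cos(ty)
    sz, cz = np.sin(tz), np.cos(tz)
    u = X*(cy*cz - sx*sy*sz) + Y*(-cx*sz) + Z*(sy*cz + sx*cy*sz) - X0
    v = X*(cy*sz + sx*sy*cz) + Y*(cx*cz) + Z*(sy*sz - sx*cy*cz) - Y0
    t = X*(-cx*sy) + Y*sx + Z*(cx*cy) - Z0
    return (f/t)*u, (f/t)*v, t
```

With all three angles zero and the camera at the origin, (u, v, t) reduces to (X, Y, Z) and the image-plane point is simply (fX/Z, fY/Z), the pin-hole relation of Section 3.1; the expressions also agree with the row-vector matrix product (X Y Z 1)·P.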
Figure 3.4: Image Output Device Transformation

The output image from the system is not a continuous image, but rather a spatially-sampled graphic. This leads to one more transformation, as shown below and in Figure 3.4:

h = Mx u' + h_0 = Mratio (fMy)(u/t) + h_0    (3.18)

v = My v' + v_0 = (fMy)(v/t) + v_0    (3.19)

Thus, from the original world coordinates, the output image may be calculated if ten parameters are known:

• fMy, the focal length/vertical quantization factor product
• Mratio = Mx/My, the quantization factor ratio
• θx, θy, θz, the angles that transform the world coordinate axes into the camera coordinate axes
• h_0, v_0, the output image coordinates when u = 0 and v = 0, respectively
• X_0, Y_0, Z_0, the camera's location in world coordinates

An advantage of this model is the ease with which the world-to-image transformation may be inverted. By inverting Equations 3.18 and 3.19, (u/t) and (v/t) may be found from h and v. By setting t arbitrarily to unity and reversing the initial translation and rotation, line of sight vector components arising from image points may be calculated.

As can be seen from the analytic expressions derived above, the relations that are to be used to determine the above parameters are quite non-linear. If one thinks of this problem as that of finding the peak (optimum) in a ten-dimensional, multi-modal surface, then the difficulty of the problem can be better understood, and it becomes apparent that a standard least squares technique is not applicable. In attempting to model the movements of celestial bodies, Gauss was confronted with a similar type of optimization problem in the nineteenth century [8,17]. His relations too involved non-linear, trigonometric terms, not unlike those expressions discussed above. The technique derived to solve situations of this nature is named for him: the Gaussian least squares, differential correction, parameter estimation technique.
This is still a least squares regression technique, but operates on a linearizing assumption to overcome the barrier. A standard least squares technique seeks to minimize the squared error sum directly. This method seeks to minimize the predicted sum of the squared residuals as the parameter estimates are varied. While a more complete description of this (with an example similar to this situation) may be found in Junkins [17], a short presentation of the theory, as applied here, follows.

It is known that the image measurements may be modelled as a function of the world data:

y = (h, v)^t = f(β) = (f_h(β), f_v(β))^t    (3.20)

where:

y is the vector of image plane coordinates
h, v are the horizontal and vertical image plane coordinates, respectively
f_h, f_v are functions involving the ten camera parameters to be determined
β is the vector of camera parameters

As this system of equations is non-linear, the least squares technique of taking the weighted pseudo-inverse will not work directly. Instead, it must first be assumed that a reasonably good starting estimate of the parameters is available. This is no problem for the log scaling situation, as most of the parameters are directly measurable to a reasonable degree of accuracy. The sum of the squared residuals, φ, is derived from the difference between the measured image data, y, and that calculated from the transformation equations, y_c:

φ_c = Δy_c^t W Δy_c = (y − y_c)^t W (y − y_c)    (3.21)

where:

y is the vector of measured image coordinates, h_i and v_i
y_c is the vector of corresponding image coordinates calculated from the world coordinate input and the transformation equations, f(β)
W is a weighting matrix related to the accuracy of the measurements

Using a linearizing assumption (the first term of the Taylor series), the residual vector for the situation where there is a local variation in the parameters, Δβ, about the current estimates, β_c, can be predicted:

Δy_p = Δy_c − A Δβ    (3.22)

where:

Δy_c is the current residual vector
Δy_p is the linearly predicted residual vector
A = ∂f/∂β (β_c) is the partial derivative matrix of the transformation functions with respect to the camera parameters, evaluated at the current estimates for the parameters

Now the situation is exactly analogous to that of standard least squares. With the weighting set equal to the identity matrix, as the measurements are all taken to the same accuracy, the pseudo-inverse solution is derived as follows. The square of the linearly-predicted residuals is:

φ_p = Δy_p^t Δy_p = (Δy_c − A Δβ)^t (Δy_c − A Δβ)    (3.23)

Taking the derivative and setting it equal to zero will minimize this expression:

∂φ_p/∂Δβ = −2 A^t Δy_c + 2 A^t A Δβ = 0    (3.24)

Hence:

Δβ = (A^t A)^{−1} A^t Δy_c    (3.25)

Therefore, the predicted sum of the squares of the residuals may be minimized with the application of the parameter change, Δβ. This is based on the linear assumption. Turning the above into an iterative process, the parameter estimates may be improved until the model optimum is achieved or the desired accuracy is reached, provided that the initial parameter estimates are good enough that the first derivatives will lead them to the local optimum that is desired.

Computing the inverse of a matrix is prone to numerical errors [17]. Fortunately, other techniques exist for solving the above equation. One of these is known as Householder reduction. It is a process by which the matrix A is reduced to upper triangular form by means of application of a series of Householder transforms [17].
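The differential-correction loop of Equations 3.22 to 3.25, with each step's linear system solved by Householder reduction and back substitution, can be sketched as follows (NumPy assumed; the Jacobian is formed here by finite differences, whereas analytic partials would be used in practice, and the toy trigonometric model fitted at the end is purely illustrative):

```python
import numpy as np

def householder_solve(A, b):
    """Reduce A to upper triangular form with Householder reflections,
    apply the same reflections to b, then back-substitute."""
    A, b = A.astype(float).copy(), b.astype(float).copy()
    m, n = A.shape
    for k in range(n):
        x = A[k:, k]
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])
        v /= np.linalg.norm(v)
        A[k:, k:] -= 2.0 * np.outer(v, v @ A[k:, k:])  # reflect columns
        b[k:] -= 2.0 * v * (v @ b[k:])                 # reflect b too
    dbeta = np.zeros(n)
    for i in range(n - 1, -1, -1):                     # back substitution
        dbeta[i] = (b[i] - A[i, i+1:] @ dbeta[i+1:]) / A[i, i]
    return dbeta

def differential_correction(f, y_meas, beta0, n_iter=25, eps=1e-6):
    """Gaussian least squares differential correction: linearize about
    the current parameter estimates, solve for the correction that
    minimizes the predicted squared residuals, and iterate."""
    beta = np.asarray(beta0, dtype=float)
    for _ in range(n_iter):
        dy = y_meas - f(beta)                # current residual vector
        A = np.empty((dy.size, beta.size))   # Jacobian, by differencing
        for j in range(beta.size):
            bj = beta.copy()
            bj[j] += eps
            A[:, j] = (f(bj) - f(beta)) / eps
        beta = beta + householder_solve(A, dy)
        if dy @ dy < 1e-20:
            break
    return beta

# Toy non-linear model with trigonometric terms, fitted from a nearby
# starting estimate (as the text requires for convergence).
xdata = np.linspace(0.0, 3.0, 20)
model = lambda beta: beta[0] * np.sin(beta[1] * xdata)
beta_fit = differential_correction(model, model(np.array([2.0, 0.5])),
                                   beta0=[1.6, 0.6])
```

Starting far from the true parameters can send the iteration to a different local optimum of the multi-modal surface, which is why good initial estimates are stressed in the text.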
Backing up a step, it is clear that (analogous to the standard least squares technique) Δβ is a solution of:

A Δβ = Δy_c    (3.26)

where:

A, Δy_c are known

A may be reduced to upper triangular form and the same transformations may be applied to Δy_c, leading to:

R Δβ = C    (3.27)

where:

R = QA is the upper triangular version of A
C = Q Δy_c is the corresponding measurement vector
Q is the lower triangular matrix containing the Householder transforms

Therefore, Δβ may be solved for by simple back substitution, starting with its last element.

To improve upon the estimates for the image centre coordinates before the main optimization process began, a further adjustment was made. This involved adding the mean error of the initial calculated image coordinate estimates back to the values of h_0 and v_0. In this way, a constant correction can initially be added to the constant offsets in each of the image transformation equations.

h_0 ← h_0 + (1/n) Σ_{i=0}^{n−1} (h(i) − h_c(i))    (3.28)

v_0 ← v_0 + (1/n) Σ_{i=0}^{n−1} (v(i) − v_c(i))    (3.29)

where:

n is the number of measurements involved in the calibration. Each image point consists of two measurements (h and v)
h(i), v(i) are the measured image coordinates
h_c(i), v_c(i) are the image coordinates computed from the initial camera transformations

The calibration technique described was then iterated. For each iteration, the value of φ_p was tested against a minimum threshold for termination, unless an iteration limit was reached. This figure, when divided by the number of measurements, led to an estimate of the convergence behaviour of the model parameters. The parameters thus derived were written out to a disk file for the scaling process to utilize in determining the inverse camera transforms. While the value of φ_p itself is dependent upon the number of measurements used for the minimization, another value, derived from it, provides a more useful figure of merit.
This value, to be called Φ, is a root-mean-square (rms) error for the coordinate measurements used to fit the model:

Φ = √(φ_p / 2n)    (3.30)

where:

n is the number of measurement points used; each point consists of two measurements (h and v)

The value of Φ indicates the rms number of pixels by which the model will be in error in the calculation of an image plane coordinate value (h or v), given the actual world coordinates of the imaged point.

3.3 Log Recognition

With a camera that has been calibrated, the problem now posed is that of finding the log in the image. Here is where the operator is required. With the use of a graphics screen display of the camera image, the operator will easily be able to visually discern the log in the image. A light pen may then be used to quickly find a few simple features from which, with the use of the inverse camera transform, a floating-point processor may calculate the desired real-world measurements.

An operator is required for any log scaling implementation because of the grading and species identification tasks. By using an operator to perform quick and simple operations in scaling the log, it can be better guaranteed that the operation is proceeding without error, as the attention will be directed to the task at hand. As a result of having to look only at a graphics screen attached to a remote camera, this operation may be located in a shelter away from any noise and discomfort caused by the weather and the large machinery operating in the vicinity. This step too should increase the reliability of the measurement process. By performing the object recognition manually, a tremendous computational burden is alleviated, thus lowering the cost of the system and the time required to make measurements.

3.4 Log Scaling Calculations

The idea, then, is to arrange it such that the salient features that the operator must locate are both simple to find and small in number.
Reverting to the cylindrical model of a log, it would be simplest to have the operator select the four corners of the log's projection onto the image plane. These points could then be used to derive the length and radius of the equivalent cylinder that would fit between the four lines of sight that result from inverting the camera model transformation.

An algorithm for computing this has been derived and will now be presented, based on the situation of Figure 3.5. The two image points that are found on the underside of the log's projection in the image plane correspond to lines of sight that are tangent to the presumed cylindrical shape. These lines form a plane which passes through the camera's lens centre:

A_1 x + B_1 y + C_1 z + D_1 = 0    (3.31)

Similarly, the two points on the topside of the log's image projection lead to an upper tangential plane:

A_2 x + B_2 y + C_2 z + D_2 = 0    (3.32)

It is assumed that the ground plane is described as:

A_G x + B_G y + C_G z + D_G = 0    (3.33)

The log description may then be derived as the largest cylinder which will fit between these three planes, truncated at each of the end points. The unit normal vectors for each of these planes may be derived from their first three coefficients:

n_1 = (1 / sqrt(A_1^2 + B_1^2 + C_1^2)) (A_1, B_1, C_1)^T    (3.34)

n_2 = (1 / sqrt(A_2^2 + B_2^2 + C_2^2)) (A_2, B_2, C_2)^T    (3.35)

n_G = (1 / sqrt(A_G^2 + B_G^2 + C_G^2)) (A_G, B_G, C_G)^T    (3.36)

where: n_1, n_2, n_G are the unit normal vectors for the three planes, and B_1, B_2, B_G > 0. This constraint is applied in order to ensure that all of the vectors point in the "up" direction (positive y-component) in Figure 3.5.

For each of these planes, the points of intersection with the log cylinder form a line. These lines are parallel to the axis of the cylinder and at a distance from it equal to the radius, r.
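The sign constraint on the normals is easy to enforce in code. A small helper (the name is invented for this sketch) normalizes a plane's first three coefficients and flips the result so that B > 0:

```python
import math

def unit_normal(A, B, C):
    """Unit normal of the plane Ax + By + Cz + D = 0, flipped if necessary so
    that its y-component is positive, as required by Eqs. 3.34-3.36."""
    P = math.sqrt(A * A + B * B + C * C)
    n = (A / P, B / P, C / P)
    return n if n[1] > 0 else (-n[0], -n[1], -n[2])
```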
The vectors from each of these lines to the cylinder axis, at normal incidence, are:

r n_1 = (r / sqrt(A_1^2 + B_1^2 + C_1^2)) (A_1, B_1, C_1)^T    (3.37)

-r n_2 = (-r / sqrt(A_2^2 + B_2^2 + C_2^2)) (A_2, B_2, C_2)^T    (3.38)

r n_G = (r / sqrt(A_G^2 + B_G^2 + C_G^2)) (A_G, B_G, C_G)^T    (3.39)

These lines may be represented parametrically by three linearly dependent variables, as follows:

l_1 = x_1 (1, 0, 0)^T + y_1 (0, 1, 0)^T + z_1 (0, 0, 1)^T    (3.40)

l_2 = x_2 (1, 0, 0)^T + y_2 (0, 1, 0)^T + z_2 (0, 0, 1)^T    (3.41)

l_G = x_G (1, 0, 0)^T + y_G (0, 1, 0)^T + z_G (0, 0, 1)^T    (3.42)

where:
each set {x_i, y_i, z_i ; i = 1, 2, G} are linearly dependent parameters that describe points along the lines of intersection
l_1, l_2, l_G are the equations for the lines of intersection between the planes and the cylinder

Knowing the equations of the planes applies one constraint on the above equations, thus:

l_1 = x_1 (1, -A_1/B_1, 0)^T + z_1 (0, -C_1/B_1, 1)^T + (0, -D_1/B_1, 0)^T    (3.43)

l_2 = x_2 (1, -A_2/B_2, 0)^T + z_2 (0, -C_2/B_2, 1)^T + (0, -D_2/B_2, 0)^T    (3.44)

l_G = x_G (1, -A_G/B_G, 0)^T + z_G (0, -C_G/B_G, 1)^T + (0, -D_G/B_G, 0)^T    (3.45)

Therefore, knowing the equations for the three lines parallel to the log axis, and the normal vectors from these lines to it, three equivalent descriptions of the infinite line containing the log's principal axis may be determined.
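The constrained parametric form of Equations 3.43-3.45 amounts to eliminating y with the plane equation. As a quick sketch (the helper name is invented here):

```python
def line_point(x, z, plane):
    """Point of the parametric line of Eqs. 3.43-3.45: for chosen parameters
    x and z, y follows from the plane Ax + By + Cz + D = 0 (B must be
    nonzero, which the B > 0 convention guarantees)."""
    A, B, C, D = plane
    return (x, -(A * x + C * z + D) / B, z)
```

Any point produced this way satisfies the plane equation exactly, so the line traced out as x and z vary lies entirely in the plane.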
R_1 = l_1 + r n_1 = x_1 (1, -A_1/B_1, 0)^T + z_1 (0, -C_1/B_1, 1)^T + (0, -D_1/B_1, 0)^T + (r/P_1)(A_1, B_1, C_1)^T    (3.46)

R_2 = l_2 - r n_2 = x_2 (1, -A_2/B_2, 0)^T + z_2 (0, -C_2/B_2, 1)^T + (0, -D_2/B_2, 0)^T - (r/P_2)(A_2, B_2, C_2)^T    (3.47)

R_G = l_G + r n_G = x_G (1, -A_G/B_G, 0)^T + z_G (0, -C_G/B_G, 1)^T + (0, -D_G/B_G, 0)^T + (r/G)(A_G, B_G, C_G)^T    (3.48)

where:
P_1 = sqrt(A_1^2 + B_1^2 + C_1^2)
P_2 = sqrt(A_2^2 + B_2^2 + C_2^2)
G = sqrt(A_G^2 + B_G^2 + C_G^2)
R_1, R_2, R_G are descriptions of the infinite line containing the log's principal axis
A_i, B_i, C_i, D_i are the constant coefficients of the known plane equations
x_i, z_i are linearly dependent parameters which describe points along the lines of intersection and the log's axis

The following equalities must then hold:

x_1 + rA_1/P_1 = x_2 - rA_2/P_2 = x_G + rA_G/G    (3.49)

-(A_1/B_1)x_1 - (C_1/B_1)z_1 - D_1/B_1 + rB_1/P_1 = -(A_2/B_2)x_2 - (C_2/B_2)z_2 - D_2/B_2 - rB_2/P_2 = -(A_G/B_G)x_G - (C_G/B_G)z_G - D_G/B_G + rB_G/G    (3.50)

z_1 + rC_1/P_1 = z_2 - rC_2/P_2 = z_G + rC_G/G    (3.51)

Therefore:

-(A_1/B_1)x_1 - (C_1/B_1)z_1 - D_1/B_1 + rB_1/P_1 = -(A_G/B_G)(x_1 + rA_1/P_1 - rA_G/G) - (C_G/B_G)(z_1 + rC_1/P_1 - rC_G/G) - D_G/B_G + rB_G/G    (3.52)

x_1 = z_1 (B_1C_G - B_GC_1)/(A_1B_G - A_GB_1) + (r/P_1) B_1(A_1A_G + B_1B_G + C_1C_G - GP_1)/(A_1B_G - A_GB_1) + (B_1D_G - B_GD_1)/(A_1B_G - A_GB_1)    (3.53)

Substituting back into Equation 3.46, the three orthogonal components of R_1 may be solved for with only the one parameter:

R_1x = z_1 (B_1C_G - B_GC_1)/(A_1B_G - A_GB_1) + (r/P_1)[(B_1/(A_1B_G - A_GB_1))(A_1A_G + B_1B_G + C_1C_G - GP_1) + A_1] + (B_1D_G - B_GD_1)/(A_1B_G - A_GB_1)    (3.54)

Therefore:

R_1x = z_1 (B_1C_G - B_GC_1)/(A_1B_G - A_GB_1) + (r/P_1)(B_G(A_1^2 + B_1^2) - B_1(GP_1 - C_1C_G))/(A_1B_G - A_GB_1) + (B_1D_G - B_GD_1)/(A_1B_G - A_GB_1)    (3.55)

R_1y = z_1 (-(A_1/B_1)(B_1C_G - B_GC_1)/(A_1B_G - A_GB_1) - C_1/B_1) + (r/P_1)[-(A_1/(A_1B_G - A_GB_1))(A_1A_G + B_1B_G + C_1C_G - GP_1) + B_1] - (A_1/B_1)(B_1D_G - B_GD_1)/(A_1B_G - A_GB_1) - D_1/B_1    (3.56)

Therefore:

R_1y = z_1 (A_GC_1 - A_1C_G)/(A_1B_G - A_GB_1) + (r/P_1)(-A_G(A_1^2 + B_1^2) + A_1(GP_1 - C_1C_G))/(A_1B_G - A_GB_1) + (A_GD_1 - A_1D_G)/(A_1B_G - A_GB_1)    (3.57)

And:

R_1z = z_1 (1) + (r/P_1)(C_1)    (3.58)

In a similar manner, the expressions for the three components of R_2 may be derived:

-(A_2/B_2)x_2 - (C_2/B_2)z_2 - D_2/B_2 - rB_2/P_2 = -(A_G/B_G)(x_2 - rA_2/P_2 - rA_G/G) - (C_G/B_G)(z_2 - rC_2/P_2 - rC_G/G) - D_G/B_G + rB_G/G    (3.59)

x_2 = z_2 (B_2C_G - B_GC_2)/(A_2B_G - A_GB_2) - (r/P_2) B_2(A_2A_G + B_2B_G + C_2C_G + GP_2)/(A_2B_G - A_GB_2) + (B_2D_G - B_GD_2)/(A_2B_G - A_GB_2)    (3.60)
The three orthogonal components of R_2 are thus:

R_2x = z_2 (B_2C_G - B_GC_2)/(A_2B_G - A_GB_2) + (r/P_2)[-(B_2/(A_2B_G - A_GB_2))(A_2A_G + B_2B_G + C_2C_G + GP_2) - A_2] + (B_2D_G - B_GD_2)/(A_2B_G - A_GB_2)    (3.61)

Therefore:

R_2x = z_2 (B_2C_G - B_GC_2)/(A_2B_G - A_GB_2) - (r/P_2)(B_G(A_2^2 + B_2^2) + B_2(GP_2 + C_2C_G))/(A_2B_G - A_GB_2) + (B_2D_G - B_GD_2)/(A_2B_G - A_GB_2)    (3.62)

R_2y = z_2 (-(A_2/B_2)(B_2C_G - B_GC_2)/(A_2B_G - A_GB_2) - C_2/B_2) + (r/P_2)[(A_2/(A_2B_G - A_GB_2))(A_2A_G + B_2B_G + C_2C_G + GP_2) - B_2] - (A_2/B_2)(B_2D_G - B_GD_2)/(A_2B_G - A_GB_2) - D_2/B_2    (3.63)

Therefore:

R_2y = z_2 (A_GC_2 - A_2C_G)/(A_2B_G - A_GB_2) + (r/P_2)(A_G(A_2^2 + B_2^2) + A_2(GP_2 + C_2C_G))/(A_2B_G - A_GB_2) + (A_GD_2 - A_2D_G)/(A_2B_G - A_GB_2)    (3.64)

And:

R_2z = z_2 (1) + (r/P_2)(-C_2)    (3.65)

Therefore, the equations for the line containing the axis of the cylinder may be reduced to equivalent forms involving only one parameter and the unknown radius, r:

R_1 = z_1 (v_1, v_2, 1)^T + (r/P_1)(α_1, α_2, α_3)^T + (β_1, β_2, β_3)^T    (3.66)

R_2 = z_2 (v_4, v_5, 1)^T + (r/P_2)(α_4, α_5, α_6)^T + (β_4, β_5, β_6)^T    (3.67)

where:
v_1 = (B_1C_G - B_GC_1)/(A_1B_G - A_GB_1)
v_2 = (A_GC_1 - A_1C_G)/(A_1B_G - A_GB_1)
α_1 = (B_G(A_1^2 + B_1^2) - B_1(GP_1 - C_1C_G))/(A_1B_G - A_GB_1)
α_2 = (-A_G(A_1^2 + B_1^2) + A_1(GP_1 - C_1C_G))/(A_1B_G - A_GB_1)
α_3 = C_1
β_1 = (B_1D_G - B_GD_1)/(A_1B_G - A_GB_1)
β_2 = (A_GD_1 - A_1D_G)/(A_1B_G - A_GB_1)
β_3 = 0
v_4 = (B_2C_G - B_GC_2)/(A_2B_G - A_GB_2)
v_5 = (A_GC_2 - A_2C_G)/(A_2B_G - A_GB_2)
α_4 = -(B_G(A_2^2 + B_2^2) + B_2(GP_2 + C_2C_G))/(A_2B_G - A_GB_2)
α_5 = (A_G(A_2^2 + B_2^2) + A_2(GP_2 + C_2C_G))/(A_2B_G - A_GB_2)
α_6 = -C_2
β_4 = (B_2D_G - B_GD_2)/(A_2B_G - A_GB_2)
β_5 = (A_GD_2 - A_2D_G)/(A_2B_G - A_GB_2)
β_6 = 0

A similar equation could also be derived for R_G, but it is not necessary. These equations are in the form of a fixed point, which is a function of the constant, r, and a vector. As the vector portions must all have the same orientation, and their z-components are equal to unity, their x- and y-components must also be the same. That is:

v_1 = v_4    (3.68)

v_2 = v_5    (3.69)

By equating two of the three line expression components, R_x and R_z, an analytic expression for the radius may be derived.

z_1 v_1 + r α_1/P_1 + β_1 = z_2 v_4 + r α_4/P_2 + β_4    (3.70)

Therefore:

z_1 = (1/v_1)(z_2 v_4 + r(α_4/P_2 - α_1/P_1) + (β_4 - β_1))    (3.71)
Also:

z_1 + r α_3/P_1 + β_3 = z_2 + r α_6/P_2 + β_6    (3.72)

Therefore:

z_1 = z_2 + r(α_6/P_2 - α_3/P_1) + (β_6 - β_3)    (3.73)

Equating 3.71 and 3.73 leads to:

(1/v_1)(z_2 v_4 + r(α_4/P_2 - α_1/P_1) + (β_4 - β_1)) = z_2 + r(α_6/P_2 - α_3/P_1) + (β_6 - β_3)    (3.74)

But v_1 = v_4, therefore:

r((α_6/P_2 - α_3/P_1) - (1/v_1)(α_4/P_2 - α_1/P_1)) = (1/v_1)(β_4 - β_1) - (β_6 - β_3)    (3.75)

Therefore:

r = (v_1(β_6 - β_3) + (β_1 - β_4)) / ((α_4/P_2 - α_1/P_1) - v_1(α_6/P_2 - α_3/P_1))    (3.76)

This may, by back substitution for some of the symbols, be reduced to:

r = N/D    (3.77)

where:
N = P_2 [D_G(A_2B_1 - A_1B_2) + D_1(A_GB_2 - A_2B_G) + D_2(A_1B_G - A_GB_1)]
D = (A_2B_1 - A_1B_2)(P_2 G + C_2C_G) + (A_GB_2 - A_2B_G)(P_2 P_1 + C_2C_1) - (A_1B_G - A_GB_1)(P_2^2 - C_2^2)

Thus, with the above formula, the radius may be calculated with merely nineteen floating-point multiplications and ten floating-point additions. Therefore, by picking out the four corners of the projection of a cylinder (log) onto an image plane, the radius of the cylinder may be derived. This assumes the initial knowledge of the inverse camera transformation and the ground plane.

Following the calculation of the radius, the length too must be derived. The infinite line that contains the axis is fully known, with the solution for the radius given above. What remains is to find out where along this cylinder the lines of sight touch. One of the lines of sight that is tangent to the bottom side of the log may be described by:

L_11 = κ (γ_11, γ_12, γ_13)^T + (X_0, Y_0, Z_0)^T    (3.78)

where:
κ is the varying parameter of the line
γ_1i are the three orthogonal direction components of the line of sight corresponding to the selected image point, as derived from the inverse camera transformation
X_0, Y_0, Z_0 is the camera's origin, in world coordinates

Where this line touches the cylinder, κ takes on a fixed (but as yet unknown) value, κ_11. By projecting along a radial vector of the cylinder, an axial point is reached that determines where this line of sight truncates this cylinder.
Here:

κ_11 (γ_11, γ_12, γ_13)^T + (X_0, Y_0, Z_0)^T + (r/P_1)(A_1, B_1, C_1)^T = z_1 (v_1, v_2, 1)^T + (η_1, η_2, η_3)^T    (3.79)

where:
κ_11 is an unknown constant which determines the tangent point of the line of sight with the cylinder surface
η_i = (r/P_1)α_i + β_i, as determined in Equation 3.66

κ_11 may be solved for by equating the x- and z-components of the above expressions.

κ_11 γ_11 + X_0 + rA_1/P_1 = z_1 v_1 + η_1    (3.80)

z_1 = (1/v_1)(κ_11 γ_11 + X_0 + rA_1/P_1 - η_1)    (3.81)

Substituting into the z-component expression from above:

κ_11 γ_13 + Z_0 + rC_1/P_1 = (1/v_1)(κ_11 γ_11 + X_0 + rA_1/P_1 - η_1) + η_3    (3.82)

Therefore:

κ_11 = ((X_0 + rA_1/P_1 - η_1) - v_1(Z_0 + rC_1/P_1 - η_3)) / (v_1 γ_13 - γ_11)    (3.83)

Similarly, κ_12, the constant corresponding to the other bottom intersection with the cylinder, may be determined with the aid of the other line of sight, given by:

L_12 = κ (γ_14, γ_15, γ_16)^T + (X_0, Y_0, Z_0)^T    (3.84)

The length of this log segment, as determined from the bottom lines of sight, is therefore given by the distance between the two points of intersection with the cylinder.

l' = | κ_12 (γ_14, γ_15, γ_16)^T - κ_11 (γ_11, γ_12, γ_13)^T |    (3.85)

where:
l' is the length of the cylinder determined from the bottom lines of sight

As it is unlikely that the points that the operator has picked out from the image are exactly diametrically opposed on the log surface, an averaging of the length as determined from both the top and bottom planes is more accurate. In this case, the length becomes:

l = (1/2)( l' + | κ_22 (γ_24, γ_25, γ_26)^T - κ_21 (γ_21, γ_22, γ_23)^T | )    (3.86)

where:
κ_2i are the constants determining where, on the top lines of sight, intersection with the cylinder occurs, as determined in the same manner as κ_11 and κ_12
γ_2i are the constants describing the vectors of the top lines of sight

3.5 Sources of Error

These two analytic equations involving the length and radius of the log form the model description to be used and provide the desired measurements directly. The errors involved in these calculations may be classified into three types.
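The closed-form radius of Equation 3.77 is straightforward to implement. In the sketch below the function and variable names are this illustration's, not the thesis's; the three planes follow the conventions of Section 3.4 (B > 0, the axis above the lower tangent and ground planes and below the upper tangent plane):

```python
import math

def log_radius(p1, p2, pg):
    """Equation 3.77: radius of the largest cylinder fitting between the lower
    tangent plane p1, the upper tangent plane p2 and the ground plane pg.
    Each plane is given as (A, B, C, D) for Ax + By + Cz + D = 0, with B > 0
    so that all three normals point upward."""
    A1, B1, C1, D1 = p1
    A2, B2, C2, D2 = p2
    AG, BG, CG, DG = pg
    P1 = math.sqrt(A1 * A1 + B1 * B1 + C1 * C1)
    P2 = math.sqrt(A2 * A2 + B2 * B2 + C2 * C2)
    G = math.sqrt(AG * AG + BG * BG + CG * CG)
    N = P2 * (DG * (A2 * B1 - A1 * B2)
              + D1 * (AG * B2 - A2 * BG)
              + D2 * (A1 * BG - AG * B1))
    D = ((A2 * B1 - A1 * B2) * (P2 * G + C2 * CG)
         + (AG * B2 - A2 * BG) * (P2 * P1 + C2 * C1)
         - (A1 * BG - AG * B1) * (P2 * P2 - C2 * C2))
    return N / D
```

The result is unchanged if any plane's four coefficients are rescaled by a positive constant, since only the planes themselves matter.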
While the length is a function of the calculated radius, it is the accuracy of the radius that is more of a concern. The radius will be much smaller than the length and, therefore, its measurement will be more limited by the resolution of the imaging system.

What will be called here a type 1 error arises from the spatial quantization of the image. This will add a fixed-sized term to the error budget of linear measurements that are derived from a subtended angle in the real world. The smaller the measurement, the higher the error becomes as a fraction of it. The same could be said of a type 2 error, arising from the feature point not being located exactly by the operator, due to such factors as haste, image noise or partial obscurity by grass or mud.

While the above error situations will also affect the length calculation, the effect is not as severe, because the length value is going to be on the order of ten to one hundred times that of the radius. The length measurement will be affected more by such things as errors in the radius calculation and errors in the assumption of a straight axis. The former of these problems is unavoidable; however, the latter may be at least partially remedied by allowing the operator to locate particularly bent logs as a series of straight cylindrical segments, each with a radius and length. The overall length and radius may then be determined as the sum and weighted sum, respectively, of those of the individual segments.

Finally, type 3 errors will be introduced as a result of the fact that the models used are not perfect representations of reality. This includes the pin-hole model for the camera, the cylindrical model for the log, and the planar assumption for the ground. All three are intuitively appealing and mathematically tractable, however, and it was not felt that this type of error would be too large.
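For crooked logs scaled as a chain of straight segments, the combination rule described above might look as follows. The thesis does not spell out the weighting for the overall radius; a length-weighted mean is assumed in this sketch:

```python
def combine_segments(segments):
    """Combine per-segment (length, radius) pairs for a crooked log scaled as
    straight cylindrical segments: total length is the plain sum, and the
    overall radius is taken as the length-weighted mean of the segment radii."""
    total = sum(length for length, _ in segments)
    mean_radius = sum(length * radius for length, radius in segments) / total
    return total, mean_radius
```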
3.6 Simulation

In order to verify that useful information about logs could in fact be derived from a single-image system at a reasonable distance, a simulation was performed. It consisted of a program which fabricated a 512 x 512 pixel log image. At the outset of the simulation, the operator is prompted for the log's length, radius, location and angle relative to the camera, the ground slope, and the camera's height off of the ground. Using an ideal, pin-hole camera model, the software generated on a graphics screen the image that this camera would see. Potential errors of types 1 and 2 were incorporated in the simulation, while type 3 errors were not.

The operator was prompted to pick out the log corners with the aid of a movable cursor and combined keyboard/knob box input. This allowed control, not only of the cursor motion, but of the ability to re-draw the image following panning, tilting or zooming (up to 200 mm) of the camera.

The result of all of this was a radius figure arrived at by the derived equations. This figure turned out to be quite accurate through a number of trials (see Figure 3.6). The average error was only about 0.2 cm for a log located fifteen metres from a camera placed five metres above the ground. For a more complete description of the results that were obtained with the aid of this simulation software, refer to Appendix A.

As can be seen from the results of Figure 3.6, and expanded upon in Appendix A, some trends are prevalent for this measuring scheme. The error tended to worsen as the log angle increased. The optimum angle for measurements is to have the principal axis parallel to the image plane (0 degrees). For situations other than this, there is a practical difficulty in locating the "corners", as the projection is no longer essentially square, but becomes rounded at the ends. When the radius of the log was varied, no clear trend resulted.
It is expected that the absolute magnitude of the error in a real system will not be a function of radius (at a constant distance), but that it will remain constant. This follows from the fact that this error will arise from spatial quantization and from mistakes in locating the exact pixel closest to the desired feature. Both of these errors will lead to inaccuracies that are independent of the other feature locations, and therefore will not be tied to the size of the log itself.

The simulation allowed the camera model to zoom in on the fabricated scene up to a certain limit. This allowed for greater accuracy, as both the quantization and mis-location errors were decreased at the maximum zoom position (focal length). This trend is quite clearly seen from the experimental results.

Finally, the distance of the target from the camera played a role in the accuracy of the measurement system. This too would be expected, as linear errors in the image plane correspond to angular errors in the real world. The farther away the log is, the larger the distance between the two arms of this angle becomes. The magnitude of the error in radial measurements should vary linearly with distance. Despite all of this, the results were certainly accurate enough to continue the development.

While this simulation does provide a reasonable assurance that the derived calculations will be accurate enough under ideal conditions, it makes no allowance for errors introduced by the camera model. For this, an actual test apparatus is required, the experimentation and discussion of which is the basis for the next chapter. Although the pin-hole model was used to produce the results of the simulation, it was also used to generate the original image; therefore, no conclusion could yet be drawn on its viability.
However, as it is by far the simplest and most commonly used approximation, it was implemented in the test apparatus as, at worst, a starting point.

[Figure 3.6: Effect of Log Angle and Log Radius on the Scaling Accuracy of a Simulated Log; radius = 10 cm, distance to log from camera base = 15 m, height of camera above base = 5 m. Two plots: measurement error versus log angle (degrees), and measurement error versus actual radius (cm).]

3.7 Simplifying Assumptions

In the camera calibration phase, the parameters of a model are found such that the world-to-image transformation is known and the image-to-world coordinate transformation may be determined to within one degree of freedom (the line of sight). Using a pin-hole camera model, where all lines of sight must pass through the centre of the lens, a model may be optimized to fit a set of test data. This test data consists of sets of known world-coordinate points, corresponding image points, and the values of any camera variables.

It would require up to eighteen parameters to completely specify a pin-hole camera model [33]. However, certain simplifications will allow for a reduction in this number and a corresponding decrease in the complexity of the calibration and the model's inverse. If the camera can be assumed, to a good approximation, to rotate about its lens centre (which could be arranged with a carefully designed apparatus), then three degrees of freedom are eliminated from the external geometry parameters. As the logs are not going to be located in the near field of the camera's vision, they may be imaged with a camera set up with constant zoom and focus, as these are only required to vary when the deviation in the depth of field is a large proportion of the depth. This eliminates a non-linear equation and three degrees of freedom in the internal geometry.
For the test apparatus, the pan and tilt functions of the camera will be held constant, thus eliminating two further external degrees of freedom and bringing the total number of parameters down to ten. In the field, the change in these angles may be measured to a very high degree of accuracy. For example, electronic theodolites used for surveying, which utilize an opto-electronic scanning system of a precise, graduated ring, can achieve accuracies of better than 0.5 seconds (0.15 mgon) [38]. This exceeds the quantization accuracy of the image resolution itself.

Thus, what remains is a camera with three orientational (θ_x, θ_y, θ_z) and three positional (X_0, Y_0, Z_0) degrees of freedom. There are also four internal parameters, consisting of the centre coordinates of the image (h_0, v_0), the ratio of the resolution scale factors (M_ratio = M_x/M_y), and the focal length/vertical resolution scale factor product (fM_y).

To solve for these parameters, only one image is required; however, an absolute minimum of five real-world points are required in that image. These may be pointed out on the graphics screen by the operator in the same manner as was suggested for the scaling feature extraction. At least one of these points should be non-coplanar with the others, so as to introduce sufficiently independent information, and the data should ideally span the full range of the camera's image space. Clearly, a regression technique involving as many points as possible would be the best means to optimize the camera's model. Still, this operation may be carried out quickly in the field by a user with little training, if the equipment is in a well-designed framework.

Having solved for the camera parameters, the operator can then scale logs. This is done by aiming the camera at the logs by remote control and picking out, with the aid of a light pen, the end corners of their projections onto the image plane, as displayed on a local graphics screen.
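The ten-parameter model enumerated above can be sketched as a world-to-image projection. The rotation ordering and exact equation layout below are assumptions made for illustration (the thesis does not list them in this section); only the parameter set matches the text:

```python
import math

def project(point, tx, ty, tz, X0, Y0, Z0, h0, v0, M_ratio, fMy):
    """Pin-hole projection of a world point under a ten-parameter model:
    translate to the lens centre, rotate about x, then y, then z, and
    perspective-divide onto the image plane."""
    x, y, z = point[0] - X0, point[1] - Y0, point[2] - Z0
    y, z = (y * math.cos(tx) - z * math.sin(tx),
            y * math.sin(tx) + z * math.cos(tx))
    x, z = (x * math.cos(ty) + z * math.sin(ty),
            -x * math.sin(ty) + z * math.cos(ty))
    x, y = (x * math.cos(tz) - y * math.sin(tz),
            x * math.sin(tz) + y * math.cos(tz))
    h = h0 + fMy * M_ratio * x / z    # horizontal pixel coordinate
    v = v0 + fMy * y / z              # vertical pixel coordinate
    return h, v
```

A point straight ahead of an un-rotated camera lands at the image centre (h_0, v_0), which is why h_0 and v_0 behave as constant offsets in the output coordinates.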
In addition, logs that deviate from the straight cylindrical model used may be scaled as a series of straight segments, by picking out diametrically opposed break points where curvature becomes noticeable.

This scenario allows the operator to perform these measurements quickly and in a relatively comfortable setting. Most importantly, the measuring time will be greatly shortened. The calculations are simple, and the processor could perform them rapidly. As with any new equipment, there would be a period of getting used to it, but with a bit of practice and experience in situations such as spotting edges occluded by grass and the effects of some small degree of foreshortening, this system could be very accurate.

3.8 Advantages and Disadvantages of the System

One major advantage of this system is speed. This technique for individual log measurement can scale a log in less than ten seconds. It would take little more than the length of time for the operator to touch the screen four times with a light pen in order to scale a log. Compared with just the time that it takes to walk from one end of a log to the other, this will be a savings.

Another major advantage of this system is its simplicity. It requires nothing more sophisticated than a high resolution imaging system. The processing power required is not great. The actual processes involved are very simple to perform, even more so once a little practice has been gained.

The accuracy of the measurements made with a system of this nature is, at least from the simulation results, able to meet the radius accuracy set out for stick scaling.

Relative to the other possible automation techniques looked at, this system will be inexpensive. No sophisticated timing electronics or opto-electronics are required. Only one television camera is needed, and the processor need only be able to do some simple floating-point calculations.
There are few moving parts, which also increases the reliability of the equipment, especially since it is in an outdoor environment. Further along the lines of reliability, this equipment may be portable, as the calibration is quick enough that it may be done every time that set-up occurs. Portability means that the apparatus need not be exposed to the elements when not in use. Portability also means that the same equipment may scale at several locations in the same or even different sort yards, if desired. For that matter, it could be taken anywhere that scaling logs with these measurements would be useful.

Finally, this system could isolate the user from the outdoor setting of the log sort yard, allowing a more comfortable work environment for the scaling process.

As with most things, there are drawbacks to this design. The fact that it does require operator input leaves room for human error and for some time spent in the interaction process. Additionally, the system will take some getting used to, as judgement may be required for partial occlusion of a bottom edge by grass, or for rounded end segments with no clear-cut corner in the projection. These factors must be evaluated in the field.

The main drawbacks, however, deal with the errors involved. This error primarily results from the assumptions made. The ground is not perfectly flat, although log sort yards are reasonably so. The logs are not perfectly cylindrical, although it is felt that with the ability to break especially crooked logs down into a series of smaller log segments, this can be mitigated. Finally, any practical camera model is going to be only an approximation to reality. The price that is paid for the simplicity of the pin-hole model is that it quite likely leads to the most error. Just how much error this and the other factors introduce to corrupt the log scaling measurements will be discussed in Chapter 4.
Chapter 4

Experiments

The purpose of this section is to describe a series of experiments that took place to verify and analyze the design of the automated log scaling technique described previously.

4.1 Equipment

The hardware used for the processing of both phases of the test consisted of a 1024 x 768 pixel colour graphics screen (HP98700) attached to an HP9050 computer. The processes were called and controlled from a standard display terminal, although the cursor used for locating the feature points was manipulated with the aid of two optically-encoded knobs. The 480 x 512 pixel images, which were displayed in seven-bit grey-scale intensities, were generated by a Dage television camera attached to an Imaging Technology IP512 frame grabber board.

4.2 Calibration

4.2.1 Calibration Procedure

The calibration portion of the experiment was performed very similarly to that described in Chapter 3. The television camera was hung from the ceiling of a laboratory at an angle of roughly thirty degrees to the horizontal. A calibration object, a cube, was put into the field of view of the camera, and the known world coordinate points of its vertices were used as input data (see Figure 4.1). The cube was chosen for this because it provided a set of distinct and attached points that had easily calculated positions relative to each other. Other objects, or for that matter any set of available world coordinate points, could be used, however.

The calibration technique began with actual measurements of the six external parameters. fM_y and M_ratio were both estimated based on a knowledge of the imaging system being used. The image centre, h_0 and v_0, may be seen as constant (or dc) offsets in the graphic output coordinates, and the initial estimates were set to half the number of pixels in the horizontal and vertical outputs, respectively. Once set up, the physical procedure was similar to the scaling simulation.
The user was provided with the image of the calibration object and then asked to locate the relevant feature points with the aid of knob box cursor controls and keyboard data input. At each point location, the world coordinates were entered, either from the keyboard or from a previously created text file. As there were no camera parameters that varied, the two image coordinates and the three world coordinates alone formed an input data set for each point.

After the seven visible vertices of the cube were located and entered, the calibration routines were invoked. This performed the least squares fit of the input data to the pin-hole camera model.

[Figure 4.1: Laboratory Scaling Apparatus]

For the calibration tests, six images were used. Each one was an image of the same cube; however, it was placed in six different locations. For the first case, referred to from here on as the test case, the cube (which was 38 cm on edge) was centred in the image space at a distance from the camera that was so close that it filled most of the image. This provided information to the regression process about as much of the image space as possible, and should, as a result, have been the "best" model that could be derived. A second image, referred to as the centre image, was taken in which the cube was centred in the image plane, however at a greater distance (about 4.5 m) from the camera. This meant that information was provided to the calibration about only the centre portion (about 10%) of the image plane. This is where the imaging system is the most linear and may best be described by the pin-hole model [21]. However, the fit derived by calibrating the camera with this image was not as useful for other portions of the image space. Finally, the cube was placed in each of the four corners of the image space.
These are the least linear portions of the imaging system, and these models should therefore, in all likelihood, have been the least useful for describing other portions of the image space. For each of the six images described, the calibration procedure was repeated five times. The repetition was performed in order to ensure that the model fit was not dependent on extremely accurate feature point spotting, and it should provide a feel for the precision (repeatability) of this operation, averaging out any particularly sloppy trials.

4.2.2 Calibration Observations

It was observed at this time that all of the objects of interest in the image were distinct and clear, without any variation of the zoom or focus controls. This verified that these two controls did not have to be calibrated either.

The calibration process, as described previously, led to a problem. While there are ten parameters to be optimized, they are not independent. In fact, there turned out to be four pairs of dependent parameters amongst them. X_0 and Y_0 were dependent on h_0 and v_0, respectively. The scale factor, fM_y, was dependent on the distance along the camera's normal axis, Z_0. Finally, two of the angles, θ_y and θ_z, were not independent of each other. The problem with this is that, in trying to optimize the model to fit all of these parameters simultaneously, the calibration technique did not arrive at a unique solution. Instead, it would converge on one of an infinite number of solutions which locally minimized the sum of the squares of the residuals, and it became a highly data dependent process.

Fortunately, a technique was developed which remedied this problem. Most of the parameters being optimized were directly measurable, and very good estimates could be obtained for their values. The calibration was broken down into two stages. In stage 1, one of each of the four pairs of dependent parameters (θ_y, h_0, v_0, Z_0) was held constant.
These were considered to be quite well known from direct measurement (θ_y, Z_0) or from the initial equation offset process carried out (h_0, v_0), as mentioned in Section 3.2. The remaining six parameters in the model were then allowed to vary by means of the described least squares technique. This generally reduced the sum of the squares of the residual errors by two orders of magnitude. Following this, the other four dependent parameters of the model were fixed at the values derived from stage 1, and optimization continued on six free camera parameters again. Stage 2 reduced the sum of the squares of the residuals by only about 25%. This stage could be thought of as almost a fine tuning; however, it proved extremely valuable in increasing the repeatability of the solutions obtained, as borne out partly by the decrease in the standard deviation of the rms error, σ_Φ, over a number of trials of the same test.

Two variations of this two stage calibration technique were also tried. First, if a different set of variables was held constant for each successive iteration (instead of iterating one set to minimization, and then the other), the solution oscillated for a short period before settling on a value that did not quite minimize the sum of the squares of the residuals. Second, if a third stage, in which the same parameters were held constant as in stage 1, was performed, no improvement was derived. Thus these efforts were abandoned in favour of the two stage method above.

4.2.3 Calibration Results

With this technique in mind, some results may be seen in Figures 4.2-4.9, and a more complete list of results may be found in Appendix B. Figures 4.2-4.3 show the value of the mean and standard deviation of Φ as the calibration from the test image was repeated over five trials. The test image involved the seven visible vertices of a cube that spanned a great deal of the image space.
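The two-stage idea, that is, fixing one member of each dependent pair, fitting, then fixing the counterparts at their stage-1 values and refitting, can be illustrated on a toy model with a single dependent pair. All names here are invented for the illustration:

```python
def fit_line(xs, ys):
    """Ordinary least squares fit of y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def two_stage_fit(xs, ys, c_estimate):
    """Toy model y = a*x + b + c, where b and c are a dependent pair (only
    their sum is identifiable, much as h0 and X0 are dependent in the camera
    model).  Stage 1 pins c at its measured estimate and fits (a, b); stage 2
    pins b at the stage-1 value and refits (a, c)."""
    a, b = fit_line(xs, [y - c_estimate for y in ys])   # stage 1: c fixed
    a, c = fit_line(xs, [y - b for y in ys])            # stage 2: b fixed
    return a, b, c
```

Because each stage solves a well-posed sub-problem, the procedure converges to one consistent member of the solution family instead of wandering over it.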
The starting estimates for X_0, Y_0, Z_0, θ_x, θ_y and θ_z were determined by linear measurements (with the aid of a tape measure) that had an estimated accuracy of one inch (2.54 cm). The method for determining the starting estimates for h_0 and v_0 was described in Section 3.2. fM_y was estimated using some "ballpark" figures for the focal length, aperture and image pixel density, as was M_ratio. These last two parameters were seen as being the least accurate of the starting parameter estimates.

From the logarithmic plots of the rms error, the convergence behaviour of the calibration may be seen. Convergence for both stages required only two to three iterations.

[Figure 4.2: RMS Error, Φ, for the Camera Calibration; test image, averaged over five trials. Plotted against iteration number.]

[Figure 4.3: Standard Deviation of the RMS Error, σ_Φ, for the Camera Calibration; test image, averaged over five trials. Plotted against iteration number.]

Figures 4.4 and 4.5 illustrate the statistical distribution of one of the parameters, θ_x, through a number of calibration trials. θ_x was not fixed in any stage of the calibration. In Figure 4.4, θ_x is taken from the calibration based on the test image data. This was expected to yield the best fit for the values of each of the parameters. Additionally, five other images were used to calibrate the same camera, placing a smaller version of the cube in each of the four corners of the image and in the centre. For each of these test images, the calibration was repeated. The combined results of θ_x for all six of the images are shown in Figure 4.5. While the mean remains almost the same, the standard deviation has risen by a factor of four. This indicates that the model calibration is most repeatable (precise) when a spanning set of test data is used.

h_0 was held constant in stage 1 only. The plots for the same two sets of tests just discussed are shown in Figures 4.6 and 4.7.
While the same two observations can also be made here, Figure 4.7 brings out even better the variation in a parameter's value when different calibration data are used. Here, distinct clusters occur around false means located ±20 pixels (±3.8% of 480 pixels) away from the true mean. This is a side effect of the camera model being a linearization of reality: locating all of the calibration data in one corner of the image tends to skew the results obtained by the optimization.

These conclusions are once more borne out by Figures 4.8 and 4.9, which histogram the results of the calibration of Y0, which was held constant only in stage 2. Here, the mean is roughly the same, while the standard deviation rises by two orders of magnitude.

The results just presented describe a situation where a camera was successively calibrated with data that spanned either the entire image space or a distinct subset of it.

Figure 4.4: Result Distribution for θx; test image calibration, 5 trials (mean = -0.6642 radians, st. dev. = 0.0028 radians)

Figure 4.5: Result Distribution for θx; total from all 6 image calibration tests, 30 trials (mean = -0.6682 radians, st. dev. = 0.0111 radians)

Figure 4.6: Result Distribution for h0; test image calibration, 5 trials (mean = 520.8256 pixels, st. dev. = 0.6664 pixels)

Figure 4.7: Result Distribution for h0; total from all 6 image calibration tests, 30 trials (mean = 524.3388 pixels, st. dev. = 16.4811 pixels)

Figure 4.8: Result Distribution for Y0; test image calibration, 5 trials (mean = 2.481 m, st. dev. = 0.0003 m)

Figure 4.9: Result Distribution for Y0; total from all 6 image calibration tests, 30 trials (mean = 2.4875 m, st. dev. = 0.0214 m)

Sample Calibration Result Statistics

  Parameter           Mean           St. Dev.        % of F.S./Mean
  θx (test image)     -.6642 rad     .0028 rad       .04% of 2π
  θx (total)          -.6682 rad     .0111 rad       .17% of 2π
  h0 (test image)     520.8 pixels   .6664 pixels    .17% of mean
  h0 (total)          524.3 pixels   16.48 pixels    3.1% of mean
  Y0 (test image)     2.481 m        .0003 m         .01% of mean
  Y0 (total)          2.488 m        .0214 m         .86% of mean

Table 4.1: Summary of a Sample of the Calibration Test Results

A summary of the above statistics may be seen in Table 4.1. Each of the six camera models derived was repeatable, although more so for the spanning set of data (the test image). Those model parameters derived from just the centre of the image (the centre image) or from just a corner of the image plane (the corner images) tend to have their mean calibration value located slightly away from that derived from the more spanning data. The values derived from the test image can be seen to be not necessarily the exact physical values, but rather the mean of the values that would be determined from each of the less spanning information sets (the calibration images). This was expected, as the regression technique fits a linear model to a slightly non-linear system.

The value of the rms error at convergence for the test image is 2.1388 pixels, with a standard deviation of 0.4634 pixels.
This indicates the rms error in horizontal or vertical image coordinate values that can be expected from the camera transformation equations. The horizontal scale was 480 pixels at full scale; the rms error thus amounts to 0.44% of it. The low standard deviation points to the high repeatability of the model derived from the tests.

These results, along with those included in Appendix B, may be seen as a partial justification of the calibration technique. What is still lacking, however, is some sense of how accurate this technique is when coupled to the log scaling calculation routines. This is the subject of the next subsection. For this, all thirty sets of camera parameters (derived from the six images, tested five times apiece) will be used to scale a cylinder.

4.3 Scaling

4.3.1 Procedure

In order to test the accuracy of both an operator at locating points and the camera model used, cylindrical objects were imaged and scaled with the same apparatus as that used for the calibration. The process and calculation algorithms were exactly those presented in Chapter 3 for the radius and length of the objects.

Cylindrical objects were used for these tests instead of real logs because they have readily determinable dimensions that provide a direct measure of how accurate the system is. This provides a real-world situation in which type 1 and 2 errors are present. Type 3 errors are also present, but only in the form of camera model non-idealities. Finally, an actual log was imaged and scaled in order to include the type 3 errors that arise from the cylindrical-log assumption. The only error not included is the type 3 error arising from the log sort yard not being perfectly flat. This, as well as further tests on actual logs, should be examined in field tests.
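In the result discussion that follows, errors are quoted both in world units and in pixel equivalents. The conversions rest on two figures given in Section 4.3.2 — the 480-pixel horizontal span of the image, and the 2.432 m of world width it covers at the depth of the test cylinder — and can be reproduced with two one-line helpers (a sketch; the function names are ours, not thesis routines):

```python
def cm_to_pixels(err_cm, width_px=480.0, width_cm=243.2):
    """World-space error at the target depth -> pixel equivalent,
    using the 480 px ~ 2.432 m correspondence quoted in the text."""
    return err_cm * width_px / width_cm

def pixels_to_percent_full_scale(err_px, width_px=480.0):
    """Pixel error as a percentage of the 480-pixel full scale."""
    return 100.0 * err_px / width_px

# e.g. the 0.15 cm standard deviation of the centre-image radius works
# out to about 0.30 px, i.e. roughly 0.06% of full scale, and the
# 2.1388-pixel calibration rms error to about 0.45% of full scale
```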
It is known that a model fit to the camera may be derived, and it has been shown that an algorithm exists that will calculate the dimensions of a log, given an ideal camera model. What remained to be seen was whether the two could be combined to accurately scale logs in a real-world camera situation.

In the last section, thirty sets of camera parameters were extracted from six different images with the aid of a regression technique. Some of the derived models were not as complete a description of the entire image space as others. All of them were then used to scale a cylinder of known dimensions in order to provide conclusive evidence about system accuracy.

The images used for scaling were taken by placing a cylinder with a radius of 12.86 cm and a length of 50.48 cm in view of the camera. Five different images of this object were used, with the cylinder viewed in each of the four corners of the image and once in the centre. Thus, if a particular set of camera parameters was derived from only a portion of the image plane, that model's scaling accuracy in the same section (or any other section) can be isolated. Similarly, the parameters calibrated from a more spanning set of data can be tested on various portions of the image space. Each of the tests was run five times, as with the simulation, in order to verify its precision and to allow for quantization error and mis-location of the feature points due to human limitations.

Both the length and the radius were calculated using the design equations. The radius, however, is the more important of the two calculations. The length is derived as a function of the radius, and so is partially dependent on its accuracy. The radius is by far the smaller of the two measurements, so the same magnitude of error will be a larger proportion of it.
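Because the volume of a cylindrical fit is V = πr²L, a first-order, worst-case error budget counts the relative radius error twice and the relative length error once — one more reason the radius is the critical measurement. A small sketch of that propagation (our helper, not a thesis routine):

```python
def relative_volume_error(r, dr, length, dl):
    """First-order worst-case bound for V = pi * r**2 * L:
    dV/V = 2*dr/r + dl/L, so radius error counts double."""
    return 2.0 * dr / r + dl / length

# with the overall figures reported in Section 4.3.2 (r = 12.86 cm,
# dr = 0.52 cm, L = 50.48 cm, dl = 1.24 cm) this bounds the relative
# volume error at roughly 10%
```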
With the current stick scaling process, the radius may be determined by one simple measurement, whereas the length must be determined by going end-over-end with the stick, which is inherently less accurate. Finally, in calculating the size of the log, the formulae used depend on the square of the radius, while the length enters only as a linear factor.

4.3.2 Scaling Results

Figures 4.10 and 4.11 display the results of scaling with the parameters derived from the test case (where the calibration input information spans the entire image plane). In Figure 4.10, the target was located in the centre of the image and the mean of the results was correct to five digits. Just as importantly, the standard deviation of the measurements was only 0.15 cm. The full-scale value for the number of horizontal pixels in the image is 480, and this image size corresponds, at the depth of the target, to 2.432 m in the world. Thus the standard deviation of the measurements, indicating their precision, was only 0.06% of full scale, or 0.29 pixels. This figure is very favourable, especially when considered together with the mean; it is, however, obtained under quite favourable conditions.

For the measurements of Figure 4.11, the same object was placed in each of the four corners of the image and the test model was used again. In this case, both the error and the standard deviation of the mean radius worsened, owing to the non-linearity of the image transducer. Still, the mean radius is in error by less than 0.4 cm (0.16% of full scale = 0.77 pixels).

When each of the 20 calibration parameter sets derived from the 4 corner images was used to scale the "log" located in the same corner as the respective calibration model, the results shown in Figure 4.12 were obtained. The distribution of results for this case is quite accurate, as would be expected: the calibration model has been fit as accurately as possible to this space.
The mean radius is out by an equivalent of only 0.06 pixels, with a standard deviation of 0.71 pixels.

Figure 4.13 shows the histogram of results of scaling in the centre of the image with all of the parameter sets derived from the corners, combined with the results of scaling the object in each of the four corners with camera parameters derived from just the centre of the image. Here, the mean radius is still very accurate, but the distribution of results has flattened out somewhat, with results occurring ±1 cm from the mean. The standard deviation of the mean radius has risen to an equivalent of 1.00 pixels.

Figure 4.10: Log Scaling Radius Distribution for a Log Placed in the Centre of the Image; test model parameters, actual radius = 12.86 cm, 5 trials (mean = 12.8603 cm, st. dev. = 0.1498 cm)

Figure 4.11: Log Scaling Radius Distribution for Logs Placed in the 4 Corners of the Image; test model parameters, actual radius = 12.86 cm, 5 trials x 4 images (mean = 13.2486 cm, st. dev. = 0.3398 cm)

Figure 4.12: Log Scaling Radius Distribution for Logs Placed in the 4 Corners of the Image; same-corner model parameters, actual radius = 12.86 cm, 5 trials x 4 images (mean = 12.8320 cm, st. dev. = 0.3576 cm)

Another "worst case" situation uses the parameters from the corner models to scale a log in the corner opposite the one in which they were derived (see Figure 4.14). While the mean radius is closer to 12.86 cm, the standard deviation of the mean is higher than in any of the other cases, indicating that the result of any one scaling operation is more likely to be in error.
Still, the results are not that bad when one considers that this is essentially a worst-case situation for a simple linear model.

As expected, the length was not as accurate as the radius. Figure 4.15 shows the length histogram corresponding to the standard tests of Figure 4.10. The mean length is in error by 1.12 cm (2.21 pixels), with a standard deviation of 0.89 cm (1.76 pixels). This may sound like a great deal compared to the accuracy of the radius exhibited above; however, it is not bad compared to the error from stick scaling.

Figures 4.16 and 4.17 show the radius and length result distributions for all of the tests carried out. The test calibration parameter set was used to scale a cylinder in the centre of the image and in each of the four corners of the image plane (25 tests total). The set of parameters derived from strictly the centre portion of the image plane was used to scale the cylinder in each of the four corners (20 tests total), and each of the corner parameter sets was used in the same corner, in the centre and in the opposite corner (60 tests total). This provided a good collection of results under conditions that were not optimal. Still, the mean radius is within 0.6 mm (0.12 pixels) of the true value, with a standard deviation of 0.5234 cm (1.03 pixels). The same distribution of measurements for the length shows a mean that is high by about 0.58 cm (1.14 pixels), with a standard deviation of 1.24 cm (2.45 pixels). A tabular summary of the above scaling tests may be seen in Tables 4.2 and 4.3.

Some further tests were also run on other objects using the test calibration data, which was deemed to be the best fit over the entire image.
A small cylinder of radius 2.54 cm and length 13.65 cm was scaled. The result for the mean radius was 2.9436 cm (mean radius error = 0.80 pixels), with a standard deviation of 0.3062 cm (0.60 pixels). Both of these results are quite similar in magnitude to those derived for the larger object, attesting further that the magnitude of the error is independent of the size of the object, down to a certain point at least.

Figure 4.13: Log Scaling Radius Distribution for Logs Placed in the 4 Corners of the Image and in the Centre; corner model parameters for the centre image, centre model parameters for the corner images, actual radius = 12.86 cm, 5 trials x 4 corner images + 20 trials x 1 centre image

Figure 4.14: Log Scaling Radius Distribution for Logs Placed in the 4 Corners of the Image; opposite-corner model parameters for each corner image, actual radius = 12.86 cm, 5 trials x 4 images (mean = 12.7507 cm)

Figure 4.15: Log Scaling Length Distribution for a Log Placed in the Centre of the Image; test model parameters, actual length = 50.48 cm, 5 trials (mean = 52.3750 cm, st. dev. = 0.3483 cm)

Scaling Radius Result Statistics

  Model Source     "Log" Position   Mean Error (cm)   Mean Error (pixels)   St. Dev. (cm)   St. Dev. (pixels)
  test image       centre           .0003             .00                   .15             .29
  test image       corners          .39               .77                   .34             .67
  test image       total            .30               .59                   .35             .68
  corner images    same corners     -.03              -.06                  .36             .71
  corner images    centre           -.29              -.58                  .51             1.00
  corner images    opp. corners     -.11              -.22                  .65             1.29
  all 6 images     all 5 places     -.05              -.11                  .52             1.03

Table 4.2: Summary of the Scaling Tests from the Calibration Models - Radius

Figure 4.16: Log Scaling Radius Distribution - All Tests Combined; all 6 parameter models, all 5 images, actual radius = 12.86 cm, 105 trials (mean = 12.8056 cm, st. dev. = 0.5234 cm)

Figure 4.17: Log Scaling Length Distribution - All Tests Combined; all 6 parameter models, all 5 images, actual length = 50.48 cm, 105 trials (mean = 51.0614 cm, st. dev. = 1.2398 cm)

Scaling Length Result Statistics

  Model Source     "Log" Position   Mean Error (cm)   Mean Error (pixels)   St. Dev. (cm)   St. Dev. (pixels)
  test image       centre           1.12              2.21                  .89             1.76
  test image       corners          1.90              3.74                  .34             .69
  test image       total            1.31              2.60                  .88             1.73
  corner images    same corners     .63               1.24                  1.11            2.19
  corner images    centre           .20               .39                   1.34            2.64
  corner images    opp. corners     .34               .67                   1.21            2.40
  all 6 images     all 5 places     .58               1.14                  1.23            2.44

Table 4.3: Summary of the Scaling Tests from the Calibration Models - Length

Less accurate results were obtained when the original cylinder was placed at an angle of about 40° to the image plane. The mean of the results was high by 2.33 cm (4.60 pixels) for the radius and low by 2.23 cm (4.40 pixels) for the length. This decrease in accuracy is attributed primarily to human error in properly spotting the end points, which should be reduced with practice. The large error in the radius would also have affected the length calculations.

Lastly, a real log was scaled. The mean radius of its end points was measured (to within 0.1 cm) to be 4.20 cm, while that measured by the log scaling system was 4.08 cm (mean radius error = 0.24 pixels). The length of the log (measured to within 0.25 cm) was 1.017 m, while that measured by the automation was 1.034 m (3.36
A s could be predicted, the results for a real log are in greater disagreement w i t h the design calculations than those for any of the str ict ly cyl indr ical objects due to this type 3 modelling error. It is felt, however, that this departure does not corrupt the accuracy by so much as to invalidate the technique. Whi l e al l of the previous tests were performed by the author, a subsequent series of tests were performed on two other individuals . They had neither seen the scaling tests before, nor knew the actual dimensions of the test cylinder. Each of the individuals used those 5 models derived from the test cal ibrat ion images to scale the cylinder in the centre of the image. The mean value for the radius derived from the ten trials was 12.6615 cm, wi th a standard deviation of 0.3530 cm. Thus , their mean radial error is 0.39 pixels, w i th a standard deviation of 0.70 pixels. These figures are quite comparable wi th those derived by the author, and lead to the conclusion that no personal bias has artificially improved the accuracy or precision of the results determined experimentally. 4.3.3 Scaling Conclusions A l l of these results are sufficiently accurate to lead to the conclusion that this system does provide an accurate enough result, given that the logs' are not going to be placed at too large an angle w i th the image plane. Using the results of the tests shown in Figure 4.16, the mean is convergent on the true value of the radius. This is true of the weigh scaling technique currently employed [7], and quite likely also of the stick scaling technique. However, the precision here is in the pixel range (ar = 1.03 pixels for a l l of the tests combined). The expected value of the error term contributed by the quantization error would be 0.25 pixels, which leaves approximately 0.78 pixels of error to the type 2 and 3 errors. 
The calibration procedure provides a linearized model of the image extraction system, so that feature points spotted fairly quickly by a human operator lead to measurements of the length and radius of a target cylinder (log). The computer system used for these calculations required 1 second to perform them once the features had been located, so the entire operation will produce a saving in time over stick scaling.

Chapter 5

Conclusions

The scaling system developed in this thesis utilizes a simple, camera-based imaging system attached to a high-resolution graphics screen with a floating-point processor. Instead of having the log scaler walk around the logs with a measuring stick, the desired dimensions (and, in fact, an entire cylindrical fit) may be derived from camera image data.

Two phases are required in the implementation. The first is the camera calibration, which defines the parameters of a simple camera model from non-linear, real-world data. This requires a Gaussian least-squares regression technique that minimizes the linearly-predicted residuals to overcome the non-linearities. This calibration process was seen to converge repeatably and to produce parameter value distributions that were precise. The greater the portion of the image space spanned by the input data, the better the overall fit of the model. The rms error of the world-to-camera transformation equations was 2.14 pixels, with a standard deviation of 0.46 pixels.

The second phase of the process is the scaling operation, in which the operator picks out simple features of the log from an image. This is quick (less than 10 seconds with a light pen implementation), simple, accurate and precise, as determined herein. Statistically, the correct value for the radius of the target was achieved in the mean,
The length was slightly high in the mean, due to difficulty in spott ing the exact end point of the object, and carry through error from the radius calculation. The standard deviation of the length was just less than 1.24 c m (2.45 pixels), which is st i l l at least as accurate as what could be determined by the current stick scaling technique. Thus , w i th this system analyzed for its accuracy, and precision, the next stage should be field tests before full implementation. One major strength of this system is its simplici ty, as no complicated manual operations or sophisticated electronics are involved. Another predominant advantage is the speed wi th which a log may now be scaled (less than 10 seconds) accurately. In addi t ion, the operator may now perform this task from a closed hut or the cab of a vehicle. Reports based on this information could easily be generated using the same processor and an attached printer. This step would save a l i t t le more time and any possible transcription error. Addi t iona l ly , having this computing power available could allow for direct communicat ion wi th a data base. 81 Chapter 6 Recommendations Having analyzed and verified this log scaling system under laboratory prototype conditions, the next stage is that of field trials. This involves installing an actual imaging system in a log sort yard and testing it there for such things as accuracy (as compared wi th these previous results), precision, time savings and operator acceptance. Whi l e the first three have been examined already, the final point is worthy of mention, as it may be the human interface that ult imately l imits this system. A recommended design scenario wi l l now be presented that satisfies these requirements best. For this system, a high-resolution camera, frame grabber and graphics screen w i l l be required to present the scene to the operator. In addit ion, a floating-point processor and a light pen wi l l be needed for the actual scaling process. 
This structure should be sturdy enough to be unaffected by wind. If it is mounted on, for example, the top of the cab of a pick-up truck, then it becomes portable. This means that the scaling may be done at various locations within the sort yard or, for that matter, anywhere that it is desired. Portability also means that the device may be stored in a dry place where deterioration of the equipment is minimized.

The camera will have to be calibrated each time the vehicle is moved. The mounting structure should be of well-known dimensions in order to provide accurate starting estimates of the camera's position for the calibration procedure. In addition, some accurate means, such as optical encoders, may be used to measure changes in the pan and tilt of the camera once it is mounted.

The calibration procedure may be broken down in order to improve its reliability. Using a laboratory test jig, the external position and orientation of the camera and the calibration data may be made constant to whatever accuracy is desired. The internal parameters may then be calibrated to provide extremely good starting estimates for the field. In fact, these values may be accurate enough never to require further adjustment.

The external parameters of the camera may also be fixed by the careful design of a mounting structure. If it is not convenient or simple enough to mount the camera so that it rotates about its lens centre, then the vector arm between the lens centre and the centre of rotation will have to be calculated or calibrated (once) and included in the design equations as a further constant transformation of coordinate frames. This may be done when the mounting structure is being designed and need not be updated unless the lens centre is changed (remember that zoom and focus are held constant).
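Folding a fixed lens-centre-to-rotation-centre arm into the equations amounts to one extra constant frame transformation: rotate the once-calibrated arm vector by the current pan and tilt, then add it to the mount position. A sketch under assumed axis conventions (pan about the vertical axis, tilt about the horizontal; the function name is ours):

```python
import math

def lens_centre(mount, pan, tilt, arm):
    """Recover the lens-centre position when the camera rotates about a
    mount point offset from the lens centre by the fixed vector `arm`."""
    ax, ay, az = arm
    # pan: rotation about the vertical (y) axis
    c, s = math.cos(pan), math.sin(pan)
    ax, az = c * ax + s * az, -s * ax + c * az
    # tilt: rotation about the horizontal (x) axis
    c, s = math.cos(tilt), math.sin(tilt)
    ay, az = c * ay - s * az, s * ay + c * az
    return (mount[0] + ax, mount[1] + ay, mount[2] + az)
```

With zero pan and tilt this reduces to mount + arm, so the arm need only be measured or calibrated once at a reference orientation.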
Having derived accurate starting estimates for the imaging system's internal parameters in a laboratory, and for the external parameters by careful design and construction, the calibration of the two need not be done simultaneously. Each time the imaging system is moved, the external calibration will have to be performed; the internal calibration, however, need not occur each time. In fact, the only time it need be repeated is if the measurements are becoming suspect, or if a factor such as a temperature change may have affected them.

The internal calibration may be achieved by having a set or test position into which the camera snaps. In this position, if the camera is looking at a portion of its own mounting structure or of the vehicle upon which it is located, fixed points with exact positions relative to the camera may be placed in sight for calibration purposes. If these points are painted spots, then the operator may perform the internal calibration by locating them, just as was done in the simulation. If they are lights, then their positions may be derived automatically by locating the centre of mass of the intensity peaks in the image; this may, however, be a needless expense in both monetary and computational terms. The world coordinates of these marked points are part of the design structure too, and their values may be contained within the software so that this operation is quick. The calibration software will need only one stage for this, as the external parameters are considered constant here.

Having calibrated the internal parameters, they may be held constant while the external parameters are likewise found in the field by optimizing the model over some test data. This test data will not be as accurate as that used for the internal calibration, as these points will have to be located independently of the accurately designed supporting structure, and relative to the ground plane.
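The centre-of-mass step for light targets is just an intensity-weighted average of pixel coordinates over the peak's neighbourhood. A minimal sketch (grey values given as a list of rows; the function name is ours):

```python
def intensity_centroid(image):
    """Intensity-weighted centroid (centre of mass) of an image patch;
    returns (row, col) in pixel coordinates, to sub-pixel precision."""
    total = wr = wc = 0.0
    for r, row in enumerate(image):
        for c, val in enumerate(row):
            total += val
            wr += r * val
            wc += c * val
    return wr / total, wc / total
```

Run over a small window around each detected peak, this locates a light spot to a fraction of a pixel, which is what makes the automatic variant attractive despite its cost.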
The points used for the external calibration may be placed in a number of ways. The placement of the origin is somewhat arbitrary. As the ground is not a perfectly flat plane, the farther apart the calibration points are located (in the directions parallel to the ground), the better the averaging effect on its discontinuities. One technique for providing calibration data would be a set of fixed-length sticks joined, both top and bottom, by fixed-length string. These could be stuck in the ground in a regular pattern, such as a triangle, providing two fixed points per stick: one on the ground and one at the top. If this pattern were laid out and some information about its position entered, the operator could then calibrate the camera's external position. Laying out the external calibration points on the ground has the advantage that they may be farther apart and thus better average any unevenness in the ground; it does, however, require some time whenever the camera has been moved. An alternative would be to mount the points (painted spots or lights) on another vehicle, such as the one that transports the logs in the first place, or the one that places them on the ground. These points may be fixed and their positions well known, relative to some fairly arbitrary origin.

Once calibrated, scaling is just as simple as the process described for the experiments here. The log placement is somewhat constrained, and recommendations may be made based on the results of the simulation. As the zoom and focus are to be constant, the logs will have to be placed beyond a certain depth, although this will likely be satisfied by the camera-to-ground distance by itself. The logs will have to be placed on the ground such that no log obscures the camera's view of any other.
This will entail a minimum distance between their locations on the ground, based on the camera's height and the maximum distance at which they may be placed relative to it. Optimally, the log axes will be parallel to the image plane, so that the ends are simple for the operator to pick out, and parallel to each other, so that the logs may be packed as closely together as possible. The logs will be constrained to lie within a certain distance of the imaging system in order to maintain accuracy; this distance will be a function of the resolution of the imaging system, and possibly of the logs' minimum radius.

The operator should be able to light up chosen pixels with a light pen, selecting or de-selecting points with single buttons or simple keyboard entries. Once a log has been completely spotted, it might be useful to see its outline drawn on the screen over top of the image in order to verify the fit. This might require too much computing time to be invoked automatically, but it could be a useful option. If the fit is unsuitable, points may be moved.

The camera will not likely be able to see both ends of the logs at once. Therefore, once a group of logs has been laid out, the scaling technique should consist of looking at one end of the logs first, storing all of the end points in a set order, and then doing the same for the other end. All of the end points on one side will likely fit into a single image frame, although they need not. The amount of
This would lessen the paperwork a lit t le and allow the operator to just fill in the remaining blanks as the logs are surveyed on foot for knots, species, etc. The user interface for this system shal l , as expected, be as simple as possible. N o computer knowledge should be necessary and as l i t t le of the calibration input information as possible should be manually entered. For the purpose of t ra ining operators, some test cylinders and pre-scaled logs should be available. The process of accurately locating the actual end points is not a difficult one, but accuracy may be improved if one has some standard to compare the results against at the start. If scaling is practised on these test objects in a variety of positions for a while, the difficulties encountered from spotting the rounded edges and estimating corners that may be part ial ly obscured by grass may be overcome. 86 References G . J . A g i n and T . O . Binford , "Computer descriptions of curved objects", Third International Joint Conference on Artificial Intelligence, Stanford, 1973. D . H . Ba l l a rd and C . M . B r o w n , "Computer V i s i o n " , Prentice-Hall, Inc., Englewood Cliffs, N . J . , 1982. T . O . Binford , "Visua l perception by computer," IEEE Conference on Systems and Control, M i a m i , 1971. N . Y . Chen , J . R . B i r k and R . B . Ke l ly , "Est imat ing Workpiece Pose Using the Feature Points Method , " IEEE Trans. Automatic Control, vol . 25, no. 6, pp. 1027-1041, 1980. J . J . C la rk , "Mul t i -Resolu t ion Stereo Vi s ion wi th Appl ica t ion to the Auto -mated Measurement of Logs," P h . D . thesis, University of British Columbia, Vancouver, B . C , 1985. J . J . Clark and P . D . Lawrence, " A Systolic Parallel Processor for the Rap id Computa t ion of Mul t i -Resolu t ion Edge Images U«ing the V 2 G Operator," Journal of Parallel and Distributed Computing, 1985. J . P . Demaerschalk, P . L . Cote l l and M . 
Zobeiry, "Photographs improve sta-t is t ical efficiency of t ruckload scaling," Canadian Journal of Forest Research, vol . 10, no. 3, pp. 269-277, 1980. K . F . Gauss, "Theory of the M o t i o n of the Heavenly Bodies M o v i n g About the Sun in Conic Sections," Dover Publications, Inc., reprinted 1963. S. Gower, P . Kennedy and A . Holzer, " A New Approach to 3-D Art i f ic ia l V i s i o n , " Proc. 1986 Australian Robotics Conference, pp. 163-174, 1986. D . E . Hand , "Scanners can be Simple" , Proc. 5th Sawmill Clinic, vol . 5, pp. 187-197, 1975. J . E . Harry, "Industrial Lasers and their Appl icat ions ," McGraw-Hill, London, 1974. D . Hearn and P . Baker , "Computer Graphics" , Addison- Wesley, 1985. 87 [13] Hewlet t-Packard Laser Measurement Equipment Specifications: a. 5526A Laser Measurement System, b. 5527A Laser Posi t ion Transducer, c. 5528A Laser Measurement System, d. 55288S Dimensional Metrology Analysis Sys-tem, Hewlett-Packard (Canada) Ltd., Mississauga. [14] B . K . P. Horn , "Robot V i s i o n " , MIT Press, Cambridge, Mass . , 1986. [15] R . A . Jarvis , " A Perspective on Range F ind ing Techniques for Computer V i s i o n , " IEEE Trans. Pattern Analysis and Machine Intelligence, vo l . 5, no. 2, pp. 122-139, 1983. [16] R . A . Jarvis , " A Laser Time-of-Flight Range Scanner for Robot ic V i s ion , " IEEE Trans. Pattern Analysis and Machine Intelligence, vo l . 5, no. 5, pp. 505-512, 1983. [17] J . L . Junkins , " A n Introduction to Op t ima l Est imat ion of Dynamica l Sys-tems," Sijthoff & Noordhoff International Publishers, A lphen A n n der R i j n , The Netherlands, 1978. [18] D . M a r r and H . K . Nishihara , "Representation and recognition of the spa-t ia l organization of three-dimensional shapes," Proc. Royal Society of London B200 , pp . 269-294, 1978. [19] D . M a r r and E . Hi ldre th , "Theory of edge detection," Proc. Royal Society of London B207, pp. 187-217, 1980. [20] D . M a r r , "V i s ion , " W. H. 
Freeman and Company, N . Y . , 1982. [21] H . A . Mar t in s , J . R . B i r k and R . B . Ke l ly , "Camera Models Based on Da ta from T w o Cal ibra t ion Planes," Computer Graphics and Image Processing, vo l . 17, pp. 173-180, 1981. [22] A . Maurer , "Lasers: Light Wave of the Future," Arco Publishing, New York , 1982. [23] D . G . M i l l e r and Y . Tardiff, " A video technique for measuring the solid volume of stacked pulpwood," Pulp and Paper Magazine of Canada, vol . 71, no. 8, pp. 40-41, 1970. [24] W . T . M i l l e r 111, "Video Image Stereo Match ing Using Phase-Locked Loop Techniques," Proc. IEEE International Conference on Robotics and Automa-tion, San Francisco, pp. 112-117, 1986. [25] R . Nevat ia and T . O. Binford , "Structured descriptions of complex objects," Third International Joint Conference on Artificial Intelligence, Stanford, 1973. [26] D . N i t z a n , A . E . Bra in and R . O . Duda , "The Measurement and Use of Registered Reflectance and Range Da ta in Scene Analysis ," Proc. IEEE, vol . 65, no. 2, pp. 206-220, 1977. [27] Pentax Electronic Distance Meter Specifications: a. E D M Theodolite P X -0 6 D / P X - 1 0 D , b. E D M P M - 8 1 , Asahi Precision Co. Ltd., Tokyo. 88 [28] U . Shani , " A 3-D model-driven system for the recognition of abdominal anatomy from C T scans," Proc. Fifth International Conference on Pattern Recognition, M i a m i Beach, 1980. [29] A . W . J . Sinclair , "Evaluat ion and economic analysis of twenty-six log sorting operations on the coast of Br i t i sh Columbia ," Forest Engineering Research Institute of Canada, Technical Note T N - 3 9 , 1980. [30] Sokkisha Electronic Distance Meter Specifications: R E D 2 E D M , Sokkisha Co. Ltd., Tokyo. [31] Sonic Tape Specifications: F C O l Sonic Tape, Sonic Tape PLC. [32] B . I. Soroka and R . K . Bajcsy, "Generalized cylinders from serial sections," Proc. Third International Joint Conference on Pattern Recognition, Coronado, C A , 1976. [33] I. 
Sobel, " O n Cal ibra t ing Computer Controlled Cameras for Perceiving 3-D Scenes," Artificial Intelligence, vo l . 5, pp. 55-64, 1974. [34] G . G . Vainshtein , N . V . Zavalishin and I. B . Muchn ik , "Visua l Information Processing by Robots (Survey)," Automation and Remote Control, vo l . 35, no. 6, pp. 959-986, 1974. [35] R . V i t , "Electronic Log Scaler and its application in the logging industry," Canadian Pulp and Paper Association, Woodlands Section, Index no. 2125 (B-6) , pp. 1-2, 1962. [36] E . L . Walker and T . Kanade, "Shape recovery of a solid of revolution from ap-parent distortions of patterns," Carnegie-Mellon University, Technical Report C M U - C S - 8 4 - 1 5 7 , 1984. [37] S. B . Watts (ed.), "Forestry Handbook of B . C , " Forestry Undergraduate Society, University of British Columbia, Vancouver, B . C , 1983. [38] W i l d Heerbrugg Distance Meter Equipment Specifications: a. General cata-logue, b. Dis tomat D120, c. Theomat W i l d T1000, d. Distomat W i l d DI 1000, e. W i l d T2000, TC2000 , T2000S, f. C . A . T . 2000, g. Distomat W i l d DI 3000, h. W i l d G R E 3 , Wild Heerbrugg Ltd., CH-9435 Heerbrugg, Switzerland. [39] K . K . Yeung, " A Low Cost Three-dimensional Vis ion System Using Space-encoded Spot Projections," SPIE, vo l . 728. 89 Appendix A Experimental Results - Simulation In chapter 3, a description of a simulation was presented. It was designed to be a tool that would verify the design equations, to provide some insight into how well a system like this would work wi th 512 x 512 pixel images, and to assess its drawbacks. For the simulation to run , the operator was queried for some input parameters w i th which the scaling scene would be generated. 
These were:

• log radius
• log length
• x- and z-coordinates of the log's centre of mass
• angles that the ground plane makes with the x- and z-axes
• height of the camera off of the ground

As a reference case, a log was 'placed' 15 metres away from the camera, which was raised 5 metres off of the ground (x = 0, y = 5, z = 0). This log was 10 centimetres in radius. A test was conducted in which this log was scaled ten times in succession. The mean radius over all of these tests was 10.2190 cm, and the standard deviation was 0.1086 cm. This result verified the design equations, and was to stand as the reference case for further tests. The next step was to vary some of the input parameters, in order to determine whether the accuracy just achieved could be affected.

Each time a parameter was fixed to a new value, it was tested five times in succession in order to try to overcome the effects caused by difficulty in locating the exact feature point and by quantization round-off. The measurements were conducted at a reasonable pace (less than five seconds per point), in order to avoid false accuracy arising from spending an excessive amount of time locating the features, and any possible 'learning' of where the best results were obtained from any particular cursor placement. It was, nonetheless, noticed that as time progressed, the spotting process became slightly faster and more accurate as the proper points became easier to recognize with the eye and familiarity with the process controls increased.

Figures A.1-A.10 show the results obtained from five experiments in graphical form. For each test, the mean error of the radius, r̄, and the standard deviation of this mean, σr, are plotted as a function of the control variable tested.
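The batch statistics quoted for each test (the mean of the repeated scalings and the standard deviation about that mean) are straightforward to reproduce. A minimal sketch in Python; the ten measurement values below are invented for illustration and are not the thesis's data:

```python
import math

def batch_stats(measurements):
    """Mean and sample standard deviation of a batch of repeated scalings."""
    n = len(measurements)
    mean = sum(measurements) / n
    # sample (n - 1) variance; with a handful of trials per batch this is
    # the conventional estimator for the spread about the mean
    var = sum((m - mean) ** 2 for m in measurements) / (n - 1) if n > 1 else 0.0
    return mean, math.sqrt(var)

# ten hypothetical radius measurements of a 10 cm log, in cm
radii = [10.1, 10.3, 10.2, 10.4, 10.1, 10.2, 10.3, 10.2, 10.2, 10.2]
mean, sd = batch_stats(radii)
```

The same two numbers, computed per setting of the control variable, are what the plots of r̄ and σr display.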
As well, the mean of both of these figures for each complete experiment is drawn in, to provide a better visual idea of whether the accuracy (error) or precision (standard deviation) of the measurements is dependent on the control variable. The error plots for each indicate just how accurate the radius measurements made with the use of the design equations are. The standard deviation plots, on the other hand, indicate how reliable the error plots are. If the standard deviation figure increases, it can be expected that the error measurements are less reliable and subject to greater fluctuation.

It is not expected that the plots will conform to any simply-described, analytic relations. In addition to the statistical variation arising from repeated testing, it is felt that the errors involved are going to be the result of a combination of factors. The control variable may not even become one of these factors until it exceeds a certain threshold, below which the error it generates is not noticeable above the error "noise floor". The "noise floor" is considered to be that error which is essentially constant, due to such things as quantization noise and statistical errors resulting from the repetition of the spotting operation. In addition to the spatial quantization of the image plane, it should be pointed out that the log itself was generated as an eighty-sided polygon, rather than as a pure cylinder. While this did not produce a very perceptible error to the eye, it may have been a further quantization type of error.

Figures A.1 and A.2 deal with the angle that the log's axis makes with the plane x = 0. As this angle is increased, only a slight overall decrease in accuracy is exhibited as a trend. On the other hand, the precision of the results is dramatically altered at angles above 40°.
This results primarily from the difficulty in positioning the cursor over what are perceived to be the "corners" of the log in the image plane at the rounded ends. It would be recommended, as a result, that the logs being scaled in the sort yard be placed sufficiently parallel to the image plane to avoid this.

The error, as a function of the log radius, did not exhibit a clear trend, as shown in Figures A.3 and A.4. This leads to the conclusion that this error is roughly constant. As was mentioned in Chapter 4, the spotting of each of the feature points is independent of the others. Therefore, the proximity of these points to each other has no effect on the accuracy or precision. Should they, for some reason, be so close to each other that the ensuing calculations are subject to numerical errors, or that the resolution of the image is insufficient to provide any real accuracy, then the radius would become a factor. However, this is unlikely for logs of an economically viable size located sufficiently close to the imaging device. If this does become a problem, then a higher resolution imaging system, or a greater zoom capability, will be required.

Figures A.5-A.8 deal with the proximity of the log to the image plane. These cases present very similar information to that derived above for the varying radius case, and in fact a great deal of the above paragraph could be repeated here. What these experiments do bring out more clearly is that the resolution of the imaging system is a factor in determining the accuracy of the measurements. For example, if the actual corner point to be spotted is exactly half-way between the two nearest pixels, then the quantization error subtends an angle that corresponds to one-half pixel in the image plane, as seen from the focal point. The arms of this angle are much farther apart at fifteen or twenty metres, however, and lead to a resolution error.
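The growth of this resolution error with range is easy to make concrete. In the sketch below, the focal length and pixel pitch are invented round numbers (they are not values fixed by the thesis); only the linear growth with target distance matters:

```python
import math

def half_pixel_error_cm(distance_m, focal_len_mm=50.0, pixel_pitch_mm=0.02):
    """Worst-case lateral error, in cm, when a feature point lies exactly
    half-way between the two nearest pixels of an ideal pin-hole camera.

    The half-pixel offset subtends a fixed angle at the focal point, so
    the real-world error it causes grows linearly with target distance.
    """
    angle = math.atan2(0.5 * pixel_pitch_mm, focal_len_mm)
    return 100.0 * distance_m * math.tan(angle)
```

With these assumed numbers the error at 5 m is a third of that at 15 m, and doubling the focal length (zooming in) halves it, which is the same trade-off seen in the focal-length experiments of Figures A.9 and A.10.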
Similarly, if the operator spots a feature point in error by n pixels, that will contribute a greater absolute measurement error at longer target distances. In A.5 and A.6, the log's centre is moved along a line at z = 0, while in A.7 and A.8 this point was moved along a line at x = 2z, removing some of the symmetry from the corner spotting process. The log's axis was always held parallel to the image plane, however. In both cases, the accuracy and precision of the measurements decreased as the log target was located further away. This result indicates that there will be an upper bound on the distance that logs may be from the camera. This limit will be dictated by the resolution of the imaging system and the maximum focal length of the camera.

Finally, in Figures A.9 and A.10, the maximum focal length allowed (by the zoom operation) was varied. The lower this value was constrained to be, the higher the mean error and standard deviation of the radius measurement tests were. This relates very closely to the above arguments for a higher resolution imaging system, as increasing the distance from the focal point to the image plane effectively spreads the same number of pixels over a smaller, real-world viewing area. For a real system, there will be a limit on how much zoom should be allowed, however, as too magnified a view of the scene will mean that more time has to be spent panning and tilting the camera around in search of the feature points. Further, it might have some effect on just how close the logs can get to the camera before they go out of focus.

The above is an analysis of the errors arising from scaling logs with an ideal pin-hole camera model. All of these comments and recommendations could equally well apply to a real-world situation.
As this decoupled the errors of the camera modelling process from those of the scaling process, distinct trends were noticeable and conclusions could be drawn. However, the error values themselves are optimistic, as only the use of an actual imaging system will be able to provide a complete system error description.

[Figure A.1: Simulation Results: Mean Log Radius Error vs. Log Angle with respect to the Image Plane; radius = 10 cm, distance to log from camera base point = 15 m, camera height above base point = 5 m]
[Figure A.2: Simulation Results: Standard Deviation of the Mean Log Radius Error vs. Log Angle with respect to the Image Plane; radius = 10 cm, distance to log from camera base point = 15 m, camera height above base point = 5 m]
[Figure A.3: Simulation Results: Mean Log Radius Error vs. Log Radius; distance to log from camera base point = 15 m, camera height above base point = 5 m]
[Figure A.4: Simulation Results: Standard Deviation of the Mean Log Radius Error vs. Log Radius; distance to log from camera base point = 15 m, camera height above base point = 5 m]
[Figure A.5: Simulation Results: Mean Log Radius Error vs. Distance to the Log along the Line z = 0; radius = 10 cm, camera height above base point = 5 m]
[Figure A.6: Simulation Results: Standard Deviation of the Mean Log Radius Error vs. Distance to the Log along the Line z = 0; radius = 10 cm, camera height above base point = 5 m]
[Figure A.7: Simulation Results: Mean Log Radius Error vs. Distance to the Log along the Line x = 2z; radius = 10 cm, camera height above base point = 5 m]
[Figure A.8: Simulation Results: Standard Deviation of the Mean Log Radius Error vs. Distance to the Log along the Line x = 2z; radius = 10 cm, camera height above base point = 5 m]
[Figure A.9: Simulation Results: Mean Log Radius Error vs. Maximum Allowed Focal Length (Zoom); radius = 10 cm, distance to log from camera base point = 15 m, camera height above base point = 5 m]
[Figure A.10: Simulation Results: Standard Deviation of the Mean Log Radius Error vs. Maximum Allowed Focal Length (Zoom); radius = 10 cm, distance to log from camera base point = 15 m, camera height above base point = 5 m]

Appendix B

Experimental Results - Calibration

In Chapter 4, the results of some tests run to analyze the camera calibration process were described. This section includes a complete listing of those results. The mean value of the rms error as a function of the number of iterations carried out is shown in Figure B.1, and the standard deviation of this value is shown in Figure B.2. The image used for those tests consisted of a cube with seven visible vertices that covered a large portion of the image plane (the test image).
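The shape of such an iterative fit can be reproduced in miniature. The sketch below refines only three parameters of a toy pin-hole model (a focal-length scale and two image-centre offsets) by crude coordinate descent on the rms reprojection residual, using seven synthetic feature points in loose analogy to the seven-vertex test image. The thesis's actual two-stage optimization over all ten parameters is considerably more elaborate, and every numeric value here is invented for illustration:

```python
import math

def project(pt, f, cx, cy):
    """Toy pin-hole projection of a camera-frame point to pixel coordinates."""
    X, Y, Z = pt
    return f * X / Z + cx, f * Y / Z + cy

def rms_error(params, world_pts, spotted_px):
    """RMS reprojection residual (pixels) over the spotted feature points."""
    f, cx, cy = params
    sq = 0.0
    for P, (u, v) in zip(world_pts, spotted_px):
        pu, pv = project(P, f, cx, cy)
        sq += (pu - u) ** 2 + (pv - v) ** 2
    return math.sqrt(sq / len(spotted_px))

def refine(params, world_pts, spotted_px, steps, n_iter=25):
    """Keep any single-parameter step that lowers the residual."""
    best = rms_error(params, world_pts, spotted_px)
    for _ in range(n_iter):
        for i in range(len(params)):
            for delta in (steps[i], -steps[i]):
                trial = list(params)
                trial[i] += delta
                e = rms_error(trial, world_pts, spotted_px)
                if e < best:
                    params, best = trial, e
    return params, best

# seven synthetic feature points, 'spotted' with the true parameters
pts = [(1, 1, 5), (-1, 1, 5), (1, -1, 4), (-1, -1, 4),
       (2, 0, 6), (0, 2, 6), (0, 0, 5)]
spotted = [project(P, 700.0, 256.0, 256.0) for P in pts]
start = [650.0, 240.0, 270.0]
fitted, residual = refine(start, pts, spotted, steps=(5.0, 2.0, 2.0))
```

As with the convergence plotted in Figure B.1, the residual collapses from tens of pixels to a small fraction of its starting value within a handful of sweeps.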
In the first stage of the calibration, the rms error dropped from a value of over 24 pixels to a value around 2.5 pixels. This drop occurred very sharply (2 iterations), and the standard deviation exhibited a similarly sharp drop at the same time. After two iterations, the first stage exhibited no further optimization of the parameters. Stage 2 produced more modest improvements, with a decrease in the rms error down to about 2.14 pixels. The standard deviation of this value over all of the tests decreased to around 0.46 pixels, indicating that the result was quite consistent and not overly dependent on the spotting process involving the operator.

This is not to say that the optimization was independent of the image used to calibrate the camera. The image used for these plots contained information about a great deal of the image plane and thus provided the best model fit to reality. While the actual camera parameters that one might physically measure might be slightly different from those derived here, a model based on them would not be as accurate on the whole as the model derived by this calibration. This calibration technique fits model-based coordinate calculations to real-world data. Any portion of the image space which is not within the span of the input space will not necessarily be well described by the model.

It was for the purpose of illustrating this fact that a group of five further images were taken and the calibration tests re-run. In these images, a smaller version of the cube was located in each of the four corners of the image and in the centre, thus intentionally calibrating only a portion of the image space. This tended to skew the results of the model fit to values that, while being just as accurate (if not more so) for that portion of the image, were not as conclusive for the rest of the image space. Figures B.3-B.42 show histograms of the values derived from these tests for all ten parameters.
In the first plot for each of the camera parameters, the distribution of the parameter values derived from the test image is shown. These values are the closest to those desired from the optimization, which is further borne out by the fact that their mean is generally very close to the mean derived from the aggregate of all of the tests performed here (shown in the fourth plot for each parameter). The second and third plots show the distribution of values derived for each of the parameters under more limited conditions. The image used for the second plot contained the scaled-down cube placed in the centre of the image. This is where the model is the most linear. It is not the best fit to the model, however, due to the non-linearity towards the edge of the image. The third plot shows the results obtained for each parameter upon calibration of the camera with just data from the corners of the image plane.

The results here show a calibration process fitting a simple, linear model of a camera to a non-linear, real-world situation with consistent results. The residual behaviour converges quickly and repeatably. The individual parameter values derived produce the best fit of the camera model to the information provided to it. The question of whether this is accurate enough is best answered by using the parameter sets derived here for the scaling process. This is the subject of Section 4.3.3 and Appendix C.

[Figure B.1: RMS Error, Convergence Behaviour Using Data from the Test Image; 7 feature points per image, 5 trials]
[Figure B.2: Standard Deviation of the RMS Error, Convergence Behaviour Using Data from the Test Image; 7 feature points per image, 5 trials]
[Figures B.3-B.6: Result Distributions for fMv Estimation Using Data from the Test Image, the Centre Image, the Corner Images, and all 6 Images; 7 feature points per image]
[Figures B.7-B.10: Result Distributions for Mratio Estimation Using Data from the Test Image, the Centre Image, the Corner Images, and all 6 Images; 7 feature points per image]
[Figures B.11-B.14: Result Distributions for θx Estimation Using Data from the Test Image, the Centre Image, the Corner Images, and all 6 Images; 7 feature points per image]
[Figures B.15-B.18: Result Distributions for θy Estimation Using Data from the Test Image, the Centre Image, the Corner Images, and all 6 Images; 7 feature points per image]
[Figures B.19-B.22: Result Distributions for θz Estimation Using Data from the Test Image, the Centre Image, the Corner Images, and all 6 Images; 7 feature points per image]
[Figures B.23-B.26: Result Distributions for h0 Estimation Using Data from the Test Image, the Centre Image, the Corner Images, and all 6 Images; 7 feature points per image]
[Figures B.27-B.30: Result Distributions for v0 Estimation Using Data from the Test Image, the Centre Image, the Corner Images, and all 6 Images; 7 feature points per image]
[Figures B.31-B.34: Result Distributions for X0 Estimation Using Data from the Test Image, the Centre Image, the Corner Images, and all 6 Images; 7 feature points per image]
[Figures B.35-B.38: Result Distributions for Y0 Estimation Using Data from the Test Image, the Centre Image, the Corner Images, and all 6 Images; 7 feature points per image]
[Figures B.39-B.42: Result Distributions for Z0 Estimation Using Data from the Test Image, the Centre Image, the Corner Images, and all 6 Images; 7 feature points per image]

Appendix C

Experimental Results - Scaling

This section contains a more complete listing of the results derived from scaling a test cylinder (radius = 12.86 cm, length = 50.48 cm), using the parameters derived from the calibration process of Appendix B. In Figures C.1-C.3, the parameters determined from the test image (which contained information about most of the image plane) were used to scale the object. In C.1, the object was located in the centre of the image plane, and the results are correspondingly good. The mean radius is as accurate as could ever be hoped, while the standard deviation of the results (which is perhaps more telling in this case) is only about 0.15 cm. This corresponded to 0.06% of full scale, or 0.29 pixels. When the same camera model was used to scale the object in each of the four corners of the image, the result was not as good. This portion of the image is the least linear, which leads to a decrease in both accuracy and precision [21].
Figure C.3 shows the aggregate distribution for all of the measurements with this model.

In Figure C.4, the twenty data sets derived from the four corners of the image space are used to scale the object in the same corner of the image space that the model was derived for. This led to quite accurate results. When these same models were used to scale the object, first in the centre of the image (Figure C.5), and then in the opposite corner of the image space (Figure C.6), the results were degraded. Figure C.5 also contains the results of using the model derived from solely the centre portion of the image on objects located in each of the four corners. Finally, Figure C.7 shows the radius measurement distribution for all of the data. The mean is in error by less than 0.06 cm (0.12 pixels), while the standard deviation is slightly more than 0.5 cm (1.03 pixels).

The same seven result distributions are shown for the length calculations in Figures C.8 - C.14. The accuracy achieved is not nearly as good in absolute value as that of the radius, but is still quite acceptable. A consistently high bias on the calculation of the length would tend to point to a human error in estimating the exact end points of the target's profile, or a camera modelling error in the scale factor of one of the image's axes. Still, the mean length for all of the tests is within 0.6 cm (1.14 pixels) of reality. The precision of the results indicates that the distribution of the calculations has a standard deviation of 1.2398 cm (almost one-half inch). Considered with the fairly accurate mean length derived, one can see that this will quite likely be more accurate than what one could do for a log in a sort yard with a measuring stick.
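The diagnosis that a consistently high length bias would implicate the scale factor of one image axis can be illustrated with a toy back-projection. The per-axis scale factors, the pixel pitch of 0.5 cm/pixel, and the end point coordinates below are all hypothetical stand-ins for the thesis's full camera model.

```python
import math

def length_from_endpoints(p1_px, p2_px, sx_cm_per_px, sy_cm_per_px):
    # Back-project two image end points to a length on a plane parallel to
    # the image plane, using one scale factor per image axis.
    dx_cm = (p2_px[0] - p1_px[0]) * sx_cm_per_px
    dy_cm = (p2_px[1] - p1_px[1]) * sy_cm_per_px
    return math.hypot(dx_cm, dy_cm)

# A horizontal target 100 pixels long at an assumed 0.5 cm/pixel:
true_len = length_from_endpoints((100, 200), (200, 200), 0.5, 0.5)
# A 1% overestimate of the x-axis scale factor biases every such
# length high by the same 1%, exactly the consistent bias described:
biased_len = length_from_endpoints((100, 200), (200, 200), 0.505, 0.5)
```

A random end-point location error, by contrast, would inflate the spread of the lengths without shifting their mean, which is why the bias specifically points to the scale factor or to systematic end-point misjudgement.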
A 50 cm target may be simple enough to scale with a measuring stick, but it must be remembered that (as determined in the simulation portion of the test results) the measurements of each feature point are largely independent of the proximity of each of the others. Therefore, the accuracy and precision derived for the cylinder here are going to be independent of its size.

When the same cylinder was placed at about a 40° angle to the image plane, the scaling accuracy decreased substantially. The mean measured radius was high by 2.33 cm (see Figure C.15), while the length was low by 2.23 cm (see Figure C.16). It is felt that the only reason for this is the difficulty in locating the end points on the rounded log projection. While this magnitude of error, when combined with real-world non-idealities, may lead to an inadequate solution, it can be simply prevented by restricting the placement of the logs such that they are reasonably parallel to the image plane, as was suggested in the simulation results.

A smaller object (radius = 2.54 cm, length = 13.65 cm) was scaled to test that the magnitude of the error would remain largely independent of the size of the object. The results, which have an accuracy and precision on par with the standard results, verify this in Figures C.17 and C.18.

Finally, a real log was measured, as shown by the distributions in Figures C.19 and C.20. The actual measured values for the mean end point radius and length were 4.24 cm (±0.1 cm) and 1.017 m (±0.25 cm). These values are less accurate due to the inclusion of a non-cylindrical object to be scaled with a cylindrical model. The mean error in the radius and the standard deviation of this value are 0.24 pixels and 3.36 pixels, respectively.

Figure C.1: Distribution of the Log Radius Scaling Experiments; cylinder in the centre of the image, test model, 5 trials

Figure C.2: Distribution of the Log Radius Scaling Experiments; cylinder in the corners of the image, test model, 20 trials

Figure C.3: Distribution of the Log Radius Scaling Experiments; cylinder in the corners (4) and the centre (1) of the image, test model, 25 trials

Figure C.4: Distribution of the Log Radius Scaling Experiments; cylinder in the corners of the image, matching corner models, 20 trials

Figure C.5: Distribution of the Log Radius Scaling Experiments; cylinder in the centre of the image with corner models and in the corners of the image, 40 trials

Figure C.6: Distribution of the Log Radius Scaling Experiments; cylinder in the corners of the image, opposite corner models, 20 trials

Figure C.7: Distribution of the Log Radius Scaling Experiments; aggregate of all of the experiments, 105 trials

Figure C.8: Distribution of the Log Length Scaling Experiments; cylinder in the centre of the image, test model, 5 trials

Figure C.9: Distribution of the Log Length Scaling Experiments; cylinder in the corners of the image, test model, 20 trials

Figure C.10: Distribution of the Log Length Scaling Experiments; cylinder in the corners (4) and the centre (1) of the image, test model, 25 trials

Figure C.11: Distribution of the Log Length Scaling Experiments; cylinder in the corners of the image, matching corner models, 20 trials

Figure C.12: Distribution of the Log Length Scaling Experiments; cylinder in the centre of the image with corner models and in the corners of the image, 40 trials

Figure C.13: Distribution of the Log Length Scaling Experiments; cylinder in the corners of the image, opposite corner models, 20 trials

Figure C.14: Distribution of the Log Length Scaling Experiments; aggregate of all of the experiments, 105 trials

Figure C.15: Distribution of the Log Radius for a Log at 40° to the Image Plane; test model, 5 trials

Figure C.16: Distribution of the Log Length for a Log at 40° to the Image Plane; test model, 5 trials

Figure C.17: Distribution of the Log Radius for a Smaller Cylinder; test model, 5 trials, actual radius = 2.54 cm

Figure C.18: Distribution of the Log Length for a Smaller Cylinder; test model, 5 trials, actual length = 13.65 cm

Figure C.19: Distribution of the Real Log Radius; test model, 5 trials, actual mean end radius = 4.20 cm

Figure C.20: Distribution of the Real Log Length; test model, 5 trials, actual mean length = 1.017 m
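The loss of accuracy for the inclined log (Figures C.15 and C.16) is consistent with foreshortening making the end points of the rounded profile hard to locate. A minimal parallel-projection sketch, not the thesis's perspective camera model, shows how much a 40° tilt compresses the projected profile; the 0.5 cm/pixel pitch is an assumed value.

```python
import math

def projected_length_px(length_cm, angle_deg, cm_per_px=0.5):
    # Apparent length, in pixels, of a log inclined at angle_deg to the
    # image plane, under a simple parallel-projection approximation.
    return length_cm * math.cos(math.radians(angle_deg)) / cm_per_px

flat   = projected_length_px(50.48, 0.0)   # cylinder parallel to the image plane
tilted = projected_length_px(50.48, 40.0)  # the 40-degree case of Figures C.15-C.16
```

The tilted profile spans only about three-quarters of the pixels of the parallel one, so each pixel of end-point uncertainty costs proportionally more length accuracy, which supports the recommendation to keep logs reasonably parallel to the image plane.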
