UBC Theses and Dissertations
Identification of salmon can-filling defects using machine vision O’Dor, Matthew Arnold 1998


Full Text

Identification of Salmon Can-Filling Defects using Machine Vision

by

Matthew Arnold O'Dor

B.A.Sc., University of Toronto, 1995

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE STUDIES (Department of Mechanical Engineering)

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA

February 1998

© Matthew Arnold O'Dor, 1998

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Mechanical Engineering
The University of British Columbia
Vancouver, Canada

Abstract

During the salmon can-filling process, a number of can-filling defects can result from the incorrect insertion of the salmon meat into the cans. These can-filling defects must be repaired before sealing the cans. Thus, in the existing industrial process, every can is manually inspected to identify the defective cans. This thesis details a research project on the use of machine vision for the inspection of filled cans of salmon. The types of can-filling defects were identified and defined through consultations with salmon canning quality assurance experts. Images of can-filling defects were acquired at a production facility. These images were examined and feature extraction algorithms were developed to extract the features necessary for the identification of two types of can-filling defects.
Radial basis function networks and fuzzy logic methods for classifying the extracted features were developed. These classification methods are evaluated and compared. A research prototype was designed and constructed to evaluate the machine vision algorithms on-line.

Contents

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgements

1 Introduction
1.1 Machine Vision in Industrial Automation
1.2 The Industrial Problem
1.2.1 The Salmon Can-Filling Process
1.2.2 Quality Assurance
1.3 Benefits of Machine Vision Inspection
1.4 Project Scope
1.5 Research Objectives
1.6 Thesis Outline

2 Literature Review
2.1 Machine Vision Grading
2.2 Applications of Machine Vision Grading
2.2.1 Motivation
2.2.2 Product Quality Evaluation
2.2.3 Machine Vision Apparatus
2.2.4 Image Features
2.2.5 Classifiers
2.3 Previous Work in Fish Processing Automation
2.4 Industrial Food Inspection Systems

3 Can-Filling Defects
3.1 Quality Control Survey
3.2 Can-Filling Defect Distribution
3.3 On-Site Experimentation
3.3.1 Plant Interviews
3.3.2 Image Acquisition
3.4 Can-Filling Defect Definitions
3.4.1 Flange Defects
3.4.2 Cross-Pack Defects
3.5 Can-Filling Defect Data Sets
3.5.1 Flange Defects
3.5.2 Cross-Pack Defects
3.6 Summary

4 Feature Extraction
4.1 Precise Can Location
4.1.1 Canny Edge Detection
4.1.2 Hough Transform
4.1.3 Robust Statistics
4.1.4 Detailed Description
4.2 Flange Defect Features
4.2.1 Flange Profile Extraction
4.2.2 HSI Colour Space
4.3 Cross-Pack Defect Features
4.3.1 Can Fill Extraction
4.3.2 HSI Thresholding
4.3.3 Region Labelling
4.3.4 Erosion and Dilation
4.3.5 Region Property Calculation
4.4 Summary

5 Feature Classification
5.1 Radial Basis Function Networks
5.1.1 Network Architecture
5.1.2 Training Algorithm
5.2 Fuzzy Logic
5.2.1 Fuzzy Reasoning
5.2.2 Rule Generation
5.3 Flange Defect Classification
5.3.1 RBFN Approach
5.3.2 Fuzzy Logic Approach
5.3.3 Flange Defect Classification Examples
5.4 Cross-Pack Defect Classification
5.4.1 Property Selection
5.4.2 Classify and Sum Approach
5.4.3 Label and Sum Approach
5.5 Summary

6 Evaluation
6.1 Feature Extraction
6.1.1 Can Location
6.1.2 Flange Profile Extraction
6.1.3 Cross-Pack Extraction
6.1.4 Execution Time
6.2 Flange Defect Classification
6.2.1 Flange Defect Width Error
6.2.2 Classification Accuracy
6.2.3 Execution Time
6.2.4 A Comparison of RBFN and Fuzzy Logic Classification
6.3 Cross-Pack Defect Classification
6.3.1 Cross-Pack Area Error
6.3.2 Classification Accuracy
6.3.3 Execution Time
6.3.4 A Comparison of RBFN Classification and Label and Sum
6.4 Semi-Automated Quality Assurance Workcell Integration
6.5 Potential Optimizations

7 Can Defect Detection System
7.1 Requirements
7.2 System Design
7.2.1 Imaging Hardware
7.2.2 Hygienic Design
7.2.3 Supporting Frame
7.2.4 Can Detector
7.2.5 Illumination
7.2.6 Software Architecture

8 Concluding Remarks
8.1 Conclusions
8.2 Recommendations

References

Appendices
A Quality Control Survey
B Can Defect Detection System User's Guide
B.1 System Requirements
B.2 Equipment Setup
B.2.1 Supporting Frame
B.2.2 Convergent Beam Sensor
B.2.3 Parallel Port Interface
B.2.4 TCi-Ultra Framegrabber
B.2.5 JVC CCD Camera
B.2.6 Illumination
B.3 Software Installation
B.3.1 TCi-Ultra Driver
B.3.2 Tiny Port Driver
B.3.3 CDDS Program
B.4 Configuring and Running the CDDS
B.4.1 Main Window
B.4.2 Dialog Boxes
B.4.3 Camera Focusing and Calibration
B.4.4 CDDS Configuration File: cdds.ini
B.4.5 CDDS Automatic Mode
C Can Defect Detection System Developer's Manual
C.1 Windows NT Development
C.1.1 User Interface
C.1.2 Machine Vision Code
C.2 Unix Development
C.2.1 Machine Vision Testing Program
C.3 Using the Parallel Port for I/O

List of Tables

2.1 Comparison of machine vision food product grading
3.1 Can-filling defect descriptions
3.2 Can filling survey results
3.3 Can-filling defect production rates
3.4 Can-filling defect production rates
4.1 HSI thresholds
5.1 Fuzzy logic rules used for the production images
5.2 Fuzzy logic rules used for the laboratory images
5.3 RBFN classification accuracy for light and dark regions
5.4 RBFN evaluation data set classification accuracy for light and dark regions
6.1 Flange position errors in pixels at four points on the flange
6.2 Execution times for feature extraction algorithms
6.3 Defect width RMS errors for the production data
6.4 Defect width RMS errors for the laboratory data
6.5 Comparison of meat defect classification for production data
6.6 Comparison of meat defect classification for laboratory data
6.7 Comparison of skin defect classification for laboratory data
6.8 Overall classification accuracy for production data
6.9 Average flange defect classification execution times
6.10 Mean cross-pack area measurement errors for the classification approaches
6.11 Cross-pack classification accuracy for each defect type
6.12 Overall cross-pack defect classification accuracy
B.1 JVC TK-1070U camera settings
C.1 CDDS user interface source code file descriptions
C.2 Machine vision source code file descriptions

List of Figures

1.1 Schematic of the patching table
3.1 Salmon can-filling defects
3.2 Experimental setup
3.3 Example images from production flange defect data set
3.4 Example images from laboratory flange defect data set
3.5 Example images from cross-pack defect data set
4.1 Estimating the location of a can
4.2 Edge detection steps
4.3 Example of the Hough transform
4.4 Histograms of accumulated Hough transform parameters
4.5 Example of the Hough transform using a thinner annulus
4.6 Radius and position adjustments
4.7 Flange profile examples
4.8 HSI colour space
4.9 Region extraction process flow chart
4.10 Examples of HSI thresholding
4.11 Labelled image examples
4.12 Region labelling algorithm flow chart
4.13 Erosion and dilation diagrams
4.14 Examples of regions after smoothing
5.1 Structure of a radial basis function network
5.2 HSI profile of a meat flange defect
5.3 A diagram illustrating Mamdani's direct fuzzy reasoning method
5.4 Flange defect identification process flow chart
5.5 Optimization of RBFN spread constant
5.6 Optimization of RBFN SERR exit condition
5.7 Examples of skin flange defects
5.8 Examples of bone flange defects
5.9 Output membership functions
5.10 Production data set membership functions
5.11 Laboratory data set membership functions
5.12 Production data set flange pixel classification examples
5.13 Laboratory data set flange pixel classification examples
5.14 Feature histograms for dark regions
5.15 Feature histograms for light regions
6.1 Position error evaluation
7.1 Semi-automated quality assurance work-cell with the CDDS
7.2 Schematic drawing of the frame and lighting tunnel mounted over a can conveyor
7.3 Photograph of the CDDS mounted on the demonstration setup
7.4 Flange profiles using different illumination
7.5 Main window of the CDDS user interface
B.1 Photograph of the CDDS mounted on the demonstration unit
B.2 TCi-Ultra configuration program main window
B.3 CDDS program main window
B.4 Framegrabber selection dialog box
B.5 System configuration dialog box
B.6 Automatic image saving configuration dialog box
B.7 Defect definition dialog box
C.1 Parallel port interface box schematic

Acknowledgements

I would like to thank Dr. Elizabeth Croft for her supervision and guidance through this research. I would also like to thank Dr. Clarence de Silva for his support provided through the NSERC - BC Packers Industrial Research Chair in Fish Processing Automation and the UBC Industrial Automation Laboratory. Additional support was provided by the Garfield Weston Foundation, BC Packers and by NSERC through an Industrial Postgraduate Scholarship. This work would not have been possible without the help and information provided by Mr. Fred Nolte, Mr. Tom Coleman and the other BC Packers employees who assisted us during our visit to the Prince Rupert Cannery. Thanks to Boyd Allin and everyone else in the Industrial Automation Laboratory for providing a friendly and productive working environment. I would like to thank all my housemates for putting up with me during the stressful periods. Last but not least, I would like to thank Cathie Kessler and my parents for their constant love, support and encouragement.

Chapter 1

Introduction

1.1 Machine Vision in Industrial Automation

Much of the development of machine vision systems for industrial automation began in the early 1980's [1]. The companies involved in this early work were primarily in the automotive, electronics and pharmaceutical industries.
Initial tasks involved size measurement, inspection of uniform products¹, such as printed circuit boards, and the recognition of machine printed characters. The large number of articles published in the Transactions of the American Society of Agricultural Engineers (ASAE) indicates that there is a great deal of research interest in extending machine vision to the inspection of non-uniform food products. For example, Gunasekaran [2] states that machine vision for food quality assurance currently has limited-market applications. However, machine vision is moving towards widespread acceptance in the food processing industry. In order to achieve this acceptance, machine vision technology must become transparent to the end-user, so that the food industry does not have to understand its inner workings. The type of system required for transparency is unlikely to be developed in the near future. As a result, there are still opportunities to research specific problems, such as the inspection of canned salmon, with the view to developing a system which, within its own niche, is as transparent and easy to use as possible.

¹ Uniform products are products that are required to be geometrically and spectrally identical within some small tolerances.

1.2 The Industrial Problem

Canned salmon products are important to the economy of British Columbia, Canada, and are valued at approximately 115 million dollars per year [3]. The process for preparing the salmon and packing it in cans has not changed significantly over the past 60 years. Thus, the current salmon canning industry requires a great deal of manual labor for preparing, sorting, packing and inspecting canned salmon products. The large year-to-year fluctuations in the salmon stocks, and the seasonal nature of the industry, make it difficult to maintain an efficient, well-trained workforce in this industry.
These pressures have led local industry representatives to identify highly automated, high-speed canning as an industrial research problem which must be addressed in order to remain competitive in the global marketplace. In order to reduce costs and increase product recovery efficiencies, the UBC Industrial Automation Laboratory, in collaboration with BC Packers Ltd., a major BC based fish processor, has undertaken to evaluate, improve, and in some cases re-design the canning process.

1.2.1 The Salmon Can-Filling Process

In general, can-filling equipment is designed for a specific species of fish. Salmon filling machines were introduced by American Can and Continental Can in the 1930's [4]. These machines have remained essentially the same for the last 60 years. Salmon is packed raw and without the addition of water or oil. This, combined with the fragility of the salmon meat, makes it difficult to pack into cans. The salmon canning process begins by cutting raw "cannery dressed" (gutted, tailed, headed and finned) salmon into cross-sectional steaks. The steaks are fed into the filling machine on a conveyor where they are volumetrically metered and pressed into cans. Typically 5% to 25% of the filled cans have can-filling defects, depending on the salmon species, the quality of the fish and the state of the filling machine. To meet quality control standards, all the cans must be inspected and these in-process can-filling defects must be repaired prior to moving to the next stage of the canning process. After inspection, the cans are vacuum-sealed by the seamer and cooked in a retort.

1.2.2 Quality Assurance

Quality assurance occurs at the patching station, which is composed of a rectangular table bisected by a number of conveyors as shown in Figure 1.1. The table is fed by a single line of cans exiting the filling machine at up to 4 cans per second.
Underweight cans are ejected from the main conveyor line by a can ejector controlled by a high-speed weighing machine. Workers stand along both sides of the table. The last worker on the table is called the "spotter" and is responsible for inspecting every can before it leaves the table. The remaining 3 to 6 workers alternate between inspecting incoming cans and repairing defective cans.

Figure 1.1: Schematic of the patching table. (Diagram legend: W# = worker, Sp = spotter; drawing not reproduced in this extraction.)

A final quality assurance check for container integrity is performed on a second inspection line called the can screening line. Container defects result in a poor seal which leads to spoilage. This final inspection for defects occurs after the sealed cans have been in storage a sufficient time to allow for such defects to become apparent through loss of vacuum in the can. The screening process is regulated by government guidelines [5], while the patching table is unregulated. The screening line is highly automated and consists of a high-speed weighing machine and a double-dud detector. The double-dud detector measures the deflection of the can ends. A can with a poor seal will have a low deflection due to the reduced vacuum inside the can and may also have a low weight due to material escaping from the can during cooking. Container defects typically occur at a rate of 13 per 100,000 cans [6], of which 47% are caused by can-filling defects missed by the patching table.

1.3 Benefits of Machine Vision Inspection

Machine vision inspection of unsealed cans, to detect and identify can-filling defects, is required for the development of a semi-automated quality assurance workcell. The selection of the patching tasks for automation and the design of the workcell is discussed by Allin [7].
The use of machine vision in the workcell offers a number of potential benefits to the salmon can-filling industry. At the current patching table, the spotter is responsible for the inspection of all the cans. The filling machine runs constantly and, when the spotter looks away from the stream, defects may go by undetected. A machine vision system will not look away and can guarantee that 100% of the cans are inspected. This does not mean that it will detect 100% of the defects, but it may have a higher defect detection rate than the spotter.

Machine vision can offer greater flexibility to redefine the evaluation of can-filling quality. Currently the evaluation of quality is made by a number of different workers with different interpretations of quality. To change their definitions of quality, they must be retrained. On the other hand, with a machine vision system the quality definition would be contained within the system knowledge base and could be redefined by adding or removing information from the knowledge base and by adjusting the parameters governing the decision making process.

On-line data collection would also be possible with the addition of machine vision, since the decisions made could be recorded automatically. This information could be used to close the control loop of the filling machine. An increased production rate of filling defects would signal the need for filling machine adjustments. Detailed knowledge of the filling defects could also be used in marketing and sales to increase the value of the product.

Labour costs would be reduced by eliminating the need for the spotter. Re-processing costs would also be reduced, since defects would be detected early in the production process where repairs are less expensive.
At higher production speeds (i.e., greater than 4 cans per second and up to 10 cans per second) more filling defects would be missed if manual inspection continues to be used, because the speed of manual inspection is fixed by the abilities of the workers. Machine vision based quality control would facilitate the industry's stated desire for higher production speeds.

1.4 Project Scope

This work is part of a project to develop a semi-automated quality assurance workcell for the salmon canning industry. The workcell is referred to as semi-automated because it may not be economical to automate all aspects of inspecting and repairing filled salmon cans. Allin's work [7] focussed on the analysis of the current patching table to determine which tasks are technically and economically suitable for automation. A key component of this workcell is an automated machine vision inspection system. This thesis focuses on the research of salmon can-filling defects and the development of a machine vision system to inspect filled salmon cans on-line.

1.5 Research Objectives

The objective of this project is to design and build a prototype machine vision system to be incorporated into the semi-automated quality assurance workcell. The specific objectives of this thesis are detailed below:

• To develop an understanding of salmon can-filling defects through consultations with quality control experts. Quantify these defects and prioritize them to identify which ones would be most suitable for machine vision inspection.

• To observe the production and quality assurance processes at a typical salmon cannery. Collect images of defects produced by the filling machine during regular operation to be used to evaluate the machine vision system.

• To develop and evaluate machine vision algorithms to detect and classify filling defects.

• To select and design the hardware required to implement the machine vision system for the semi-automated quality assurance workcell.
This system will have to make the quality assurance decisions on-line.

1.6 Thesis Outline

The structure of this thesis is summarized in the outline below:

Chapter 1 Introduction: This introductory chapter.

Chapter 2 Literature Review: A review of other industrial applications of machine vision to food quality assurance and grading applications.

Chapter 3 Can-Filling Defects: Can-filling defects are identified and defined. The results of consultations with quality control experts and the collected images of can-filling defects are presented.

Chapter 4 Feature Extraction: The development of machine vision algorithms to extract the image features necessary for the detection and classification of can-filling defects.

Chapter 5 Feature Classification: The use of neural networks and fuzzy logic to aid in the classification of the features extracted in Chapter 4.

Chapter 6 Evaluation: The evaluation of the feature extraction and classification algorithms is presented and discussed.

Chapter 7 Can Defect Detection System: Discussion of the design and the selection of components for the Can Defect Detection System (CDDS).

Chapter 8 Conclusions and Recommendations: A summary of the results detailed in this work. Suggestions for future improvements to the CDDS are also given.

Chapter 2

Literature Review

2.1 Machine Vision Grading

Machine vision refers to a system that uses a video camera (CCD camera) or a similar device to receive a representative image of a real scene, coupled with a computer system that interprets this image to produce data for the purpose of controlling a machine or process [8]. Grading is the process of separating raw materials into categories of different quality [9]. Machine vision grading involves the combination of these two processes. Specifically, machine vision grading involves the measurement of product quality with machine vision in order to automate product quality assessment, sorting and management.
2.2 Applications of Machine Vision Grading

Machine vision grading has been applied to the inspection of a wide range of food products for quality determination. Although the products differ greatly, these applications have some similarities. The details of a selected number of applications are tabulated in Table 2.1. The important details of these applications are: (i) the motivation, (ii) the product quality evaluation, (iii) the image features, (iv) the classifier, and (v) the apparatus. These are discussed in the sub-sections below.

[Table 2.1: Comparison of machine vision food product grading. For each application (Steinmetz [10], Yeh [11], Patel [12], Rehkugler [13], Yang [14], Okamura [15], Shearer [16], Croft [17], Cao [18], Unklesbay [19], Choi [20] and Recce [21]), the table lists the product type and features, number of grades, image feature types, classifier type and accuracy, and apparatus (illumination, camera, processor and placement). The table body is not legible in this extraction.]

2.2.1 Motivation

The motivation behind many of the applications in Table 2.1 was the need to increase inspection consistency and reduce inspection costs by eliminating manual inspection and increasing processing speeds [10-13,15-18,21]. Manual grading inconsistencies are both short term and long term in nature [11]. Short term inconsistencies are generally due to worker fatigue. Long term inconsistencies are due to variations in workers' perceptions or interpretations of quality standards. Most of the applications listed in Table 2.1 do not provide the magnitude of the manual inspection inconsistencies. The data given by Okamura et al. indicates that the disparity among raisin graders ranges from 2% to 28% [15]. The disparity range among herring roe graders reported by Beatty was between 11% and 22% [22]. In terms of processing costs and speeds, Recce and Taylor state that the inspection and grading of oranges is the only non-automated task in the citrus packaging plant and limits the processing speed [21]. Steinmetz et al. indicate that the grading and packaging of roses accounts for 50% of the total production cost.
Such bottlenecks and costs are typical of most operations where manual quality grading is involved.

A few of the applications had other motivations. The works by Yang [14] and Choi et al. [20] were motivated by the need to make grading decisions for research into the improvement of apple and tomato cultivation. Unklesbay et al. used their steak doneness application to compare crisp and fuzzy versions of the C-means algorithm [19].

Most of the machine vision applications reported are related to quality assurance; therefore, the grading information is used to control the packaging process of the products, rather than the production process. However, feedback of quality information to the process can be beneficial. For example, Yeh et al. considered the control of a biscuit baking process in addition to biscuit quality assurance [11].

2.2.2 Product Quality Evaluation

The evaluation of product quality depends on two factors: (i) the features on which the quality judgment is made and (ii) the levels or grades of quality which are defined. These grades may either be strictly defined, e.g., by government regulations, or be a loosely defined concept with a particular definition in the mind of each grader. The product features used in the applications in Table 2.1
were particular to each product. Generally, only one product feature was used by a human grader in his/her quality evaluation. The products were sorted into between 2 and 6 quality grades. In all the applications listed in Table 2.1, the products were classified into grades by grading experts. In five of the applications, grading decisions were made based on the expert's interpretation of detailed government codes describing each grade [12,13,15,16,20]. The other applications did not have any quality definition other than the grading decisions made by the experts.

2.2.3 Machine Vision Apparatus

For the most part, the applications in Table 2.1 used similar apparatus. Most of the applications that specified the type of processor used a PC host computer with a framegrabber. If colour was one of the features of interest, an RGB camera was used. CCD array cameras were used in all but one of the applications. In that specific case, Rehkugler rotated apples in front of a line-scan camera in order to inspect their entire surface area [13]. All but two of the applications used uniform diffuse white light to illuminate the products; Patel et al. used high intensity back lighting to examine the interior of eggs [12], and Yang used structured light to determine the 3D shape of apples [14]. Several applications used a framegrabber with on-board processing capabilities in the form of a specialized image processor or a digital signal processing (DSP) board. Many of the applications were targeted for industrial use; however, only four of the applications had a mechanism or trigger to automatically position the product in the camera's field of view.

2.2.4 Image Features

The feature most commonly used in the reported grading applications was colour. Many of the authors stated that the HSI (hue, saturation and intensity) colour space was used because it closely resembles how humans understand colour [23] (explained in more detail in Section 4.2.2). The mean hue, saturation and intensity of the product was the most typical colour scheme used for grading. Colour and grey scale histograms were also used in many applications. In a few cases, edges in the image were used for sizing [10], locating [20] and evaluating the shape of the product [18]. Edges were detected using the boundaries in a binarized image [18] and with the Sobel operators [15]. Binary shape analysis and image texture were used in [17] and [15]. Although many of these applications are motivated by the need for high speed inspection, very few actually detail the speed of their feature extraction and classification algorithms.
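The HSI colour features mentioned above are computed from RGB with standard formulas. The sketch below uses the common arccos-based formulation; it is illustrative only and is not the conversion routine from the thesis (whose own treatment appears in its Section 4.2.2) or from any of the cited systems:

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert normalized RGB (each in [0, 1]) to HSI.

    Hue is returned in degrees [0, 360); saturation and intensity
    are in [0, 1]. Uses the standard arccos-based formulation.
    """
    i = (r + g + b) / 3.0
    if i == 0.0:                     # pure black: hue and saturation undefined
        return 0.0, 0.0, 0.0
    s = 1.0 - min(r, g, b) / i       # equivalent to 1 - 3*min/(r+g+b)
    denom = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if denom == 0.0:                 # grey pixel: hue undefined, report 0
        return 0.0, s, i
    cos_h = 0.5 * ((r - g) + (r - b)) / denom
    cos_h = max(-1.0, min(1.0, cos_h))        # clamp against float error
    theta = math.degrees(math.acos(cos_h))
    h = theta if b <= g else 360.0 - theta    # reflect for hues past 180°
    return h, s, i
```

For example, pure red (1, 0, 0) maps to hue 0°, full saturation, and intensity 1/3, while any grey maps to zero saturation.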
Processing speeds were given by Croft et al. as 2 Hz [17] and by Recce et al. as 5 to 10 Hz [21].

2.2.5 Classifiers

Three types of classifiers were used in the reported applications: statistical methods, artificial neural networks (ANN) [24,25] and fuzzy logic methods [26]. The statistical methods included the Bayes classifier [27] and regression methods. All of the ANNs were multi-layer feed-forward networks. The applications that specified a training algorithm used backpropagation. The fuzzy methods ranged from knowledge-based rules [18] to a fuzzy C-means algorithm [19].

In many of the applications, the classifier was evaluated using the classification accuracy of the algorithm. The classification accuracy refers to the percentage of the products correctly graded. Comparison of the classification algorithms is difficult except in cases where two methods are directly compared [10,19]. This difficulty stems from the fact that classification accuracy is more highly correlated to the relationship between the measured image features and the product grades, and the crispness of the grade definitions, than to the classification algorithm itself. Few applications cited clear reasons for the choice of a specific classification algorithm. In most cases, the reason for using a particular classification algorithm was that it had worked in the past. Cao et al. used fuzzy logic because it mimics the human expert decision-making process [18]. Recce stated that ANNs were used for their efficiency and speed of classification. Unklesbay et al. found that the fuzzy C-means algorithm was more accurate than the crisp version and provided a measure of the confidence in a particular classification [19].

2.3 Previous Work in Fish Processing Automation

There are a number of examples of fish processing automation using machine vision. An automated fish deheader, which used machine vision to detect the gill position, was developed by de Silva et al. [28].
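As an aside to the classifier survey in Section 2.2.5: the fuzzy C-means algorithm that Unklesbay et al. compared against its crisp counterpart [19] can be sketched in a few lines. This is the generic textbook formulation (alternating membership and centre updates), not an implementation from any of the cited works:

```python
import numpy as np

def fuzzy_c_means(x, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy C-means. Returns (centres, membership matrix).

    x: (n_samples, n_features) array. m > 1 is the fuzziness exponent;
    membership[i, k] is the degree to which sample k belongs to cluster i,
    with each column summing to 1.
    """
    rng = np.random.default_rng(seed)
    u = rng.random((n_clusters, len(x)))
    u /= u.sum(axis=0)                          # normalize memberships
    for _ in range(n_iter):
        um = u ** m
        # weighted cluster centres
        centres = um @ x / um.sum(axis=1, keepdims=True)
        # distances of every sample to every centre: shape (c, n)
        d = np.linalg.norm(x[None, :, :] - centres[:, None, :], axis=2)
        d = np.fmax(d, 1e-12)                   # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=0)               # membership update rule
    return centres, u
```

The confidence measure noted by Unklesbay et al. falls out naturally: the membership column for a sample indicates how strongly it belongs to each cluster, rather than a hard assignment.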
A number of researchers have worked on the automation of herring roe grading using shape, colour and texture measured with machine vision [17,18,22,29]. A system for measuring the length of fish was developed by Strachan [30]. Pertusson used the optical spectra of fish flesh to measure its quality [31]. Although these products are more closely related to filled cans of salmon than other food and agricultural products, the shapes, colours and textures of these products still differ to some extent; therefore, the results are not directly transferable.

2.4 Industrial Food Inspection Systems

There are a number of machine vision-based food inspection systems currently being used in industry. The specific details of these systems are unavailable because they are proprietary. Lumetech A/S of Denmark manufactures the Fisheye Waterjet Portion Cutter for trimming and portioning fish fillets [32]. This system uses machine vision to measure the volume of a fillet using structured light strips. The 3D model is then used to control a waterjet cutter to cut the fillet appropriately. Lullebelle Foods Ltd., which processes blueberries, uses a machine vision system to eject unripened berries from the processing line [33]. Berries pass through an illumination box in multiple streams at very high speeds. The vision system fires air jets to eject green berries. Key Technologies manufactures the Tegra system for grading agricultural products according to size and colour. In their catalog [34], the description of the system states that it is capable of inspecting products such as peas, french fries and peanuts at speeds of up to a million objects per minute.

Chapter 3

Can-Filling Defects

Many quality evaluations made in the food industry are subjective because they depend on the judgment of different individuals.
In order to automate quality evaluation on the can-filling line, the features and defects which affect the quality of a filled can must be understood. Namely, fill features and fill defects must be identified and used to create a quantitative description of a can of salmon. Without clear, quantitative measures of quality it is difficult, if not impossible, to maintain a product-line notion of quality, much less company-wide or industry-wide standards. Documentation of product quality features is possible, but such documentation is difficult to maintain, update and disseminate. For example, interviews with quality control (QC) personnel at BC Packers indicated that many quality definitions are based on the training and knowledge of experts and are not explicitly documented. Thus, in effect, the notion of quality is being determined by the grading personnel on the canning lines, and varies from shift to shift and season to season, as the personnel change.

Using a survey, defect distribution data, and on-site experiments, the types of defects produced by the filling machine were identified, quantified, and prioritized. From this list of defects, the most important were defined quantitatively in order to enable machine vision grading. Data sets consisting of images of these defects were collected for developing and evaluating machine vision algorithms.

3.1 Quality Control Survey

In meetings with a salmon canning quality control (QC) expert (Mr. F. Nolte, BC Packers Ltd.), the filling defects listed in Table 3.1 were identified [35].

Table 3.1: Can-filling defect descriptions.

  Filling Defect        Description                                             Category
  Cross-pack            Steaks inserted sideways with skin showing.             Appearance
  Flange obstructions   Meat, skin or bones over the flange.                    Safety
  Underweight           Can does not satisfy government weight regulations.     Regulatory
  Overweight            Can is overfilled and may not be properly sealed.       Safety
  Poor coloration       Meat is not a uniform colour.                           Appearance
  Flange damage         Can is scratched or dented during the filling process.  Safety

These defects can be subdivided into three categories: (i) safety defects, (ii) regulatory defects and (iii) appearance defects. Safety defects include all cases where obstructions or damage to the flange interfere with vacuum sealing of the can. Such defects can result in seam defects which could lead to spoilage and the possibility of botulism contamination. Regulatory defects are those which violate government regulations. Appearance defects are any defects that would cause a customer to believe the product is of low quality. Table 3.1 gives examples of each defect category.

To gather further information on filling defects, a series of questions was developed as a 21-page survey (Appendix A) for salmon canning QC experts. This survey consisted of three major sections which elicited information on:

• The frequency and severity of quality problems.
• The characteristics of well-filled and poorly-filled cans.
• The identification of filling problems in images of hand-packed cans.

The survey was filled out and returned by four of the six qualified QC experts at BC Packers. The problems identified as most severe were flange obstructions (Figure 3.1(b)) and flange damage. The problems identified as occurring most frequently were cross-pack (Figure 3.1(c)), flange obstructions and underweight cans. More detailed results are listed in Table 3.2. Initially there was some concern as to whether or not the hand-packed cans used in the survey would accurately represent machine-filled cans. However, one of the QC experts used the images from the survey to create a training document for patching table workers, indicating that the images used in the survey were accurate representations of machine-filled cans.

Table 3.2: Can-filling survey results.
  Problem              Severity(a)  Frequency(b)
  Cross-pack           4.0          3.75
  Flange obstructions  5.0          3.25
  Underweight          4.0          2.25
  Overweight           3.75         2.00
  Poor coloration      3.75         2.00
  Flange damage        4.50         2.00

  (a) 5 = most severe, 1 = least severe. (b) 5 = most frequent, 1 = least frequent.

Using the experts' comments, the following descriptions of well-filled cans and poorly-filled cans were developed. A well-filled can (Figure 3.1(a)) consists of one to three clean cut, transverse salmon steaks with a uniform bright colour. A poorly-filled can contains any number of the following problems: ragged cuts, too many small pieces (Figure 3.1(d)), too much visible skin, bones on the surface, bruises, blood spots (Figure 3.1(e)) or mixed colours (Figure 3.1(f)). These appearance problems are in addition to flange defects and incorrect weight.

The identification of can-filling problems in the images in the third section of the survey was somewhat inconsistent. The experts disagreed on the filling problems in 8 of the 36 images used in the survey. This suggests that the characteristics of a well-filled can are not crisply defined. The inconsistent answers could also be due to the open-ended nature of this question.

Figure 3.1: Salmon can-filling defects. (a) Well-filled can. (b) Bone flange defect. (c) Cross-pack defect. (d) Too many pieces. (e) Blood spot. (f) Mixed colours.

3.2 Can-Filling Defect Distribution

The can-filling defect distribution given in Table 3.3 is based on QC data collected at the BC Packers cannery in Prince Rupert, BC, during August 1995 [36]. The data was collected over 120 one-minute intervals at various times by QC inspection personnel. The data was collected at the patching table while the cans were moving. As a result, the data may not reflect the actual defect distribution because defects may have been missed. The observations were made on a wide range of species and qualities.
Table 3.3: Can-filling defect production rates.

                      % of all cans    % of all
  Defect type         Mean     Std     defective cans
  Flange obstruction  4.9      1.9     30.9
  Cross-pack          6.2      2.9     39.5
  Underweight         2.5      2.4     16.0
  Overweight          0.6      0.5     3.5
  Other               1.6      1.7     10.1
  Non-defective       84.2     -       -

3.3 On-Site Experimentation

A number of experiments were conducted at the BC Packers cannery in Prince Rupert, BC, in September 1996 (during the canning season). The objectives for the experiments were:

• To interview plant QC workers.
• To acquire images to test can feature extraction algorithms.
• To measure the defect production rate and distribution exiting the filling machine.
• To measure the failure rate of the patching table.
• To observe the operation of the patching table.
• To collect data from the high speed weighing machine.

Only the first three objectives are relevant to this thesis; the remaining objectives are discussed in detail by Allin [7]. The plant interviews and the acquired images are discussed in the following subsections. The defect production rate using the acquired images is discussed in Section 3.4 because it depends on the defect definitions.

3.3.1 Plant Interviews

A number of the QC personnel were interviewed at the Prince Rupert plant. The results of these interviews were mixed. Flange and cross-pack defects were confirmed as the most severe and frequent problems. However, there was no clear consensus among the QC personnel as to the features or characteristics of a cross-pack defect or a flange defect that requires patching.

From the interviews it was determined that a dark skin cross-pack is worse than a light skin cross-pack. Visible skin at the centre of a can is worse than skin near the flange. There is an acceptable level of cross-pack but it is not well defined. BC Packers processes six grades of canned salmon. The area of acceptable cross-pack decreases with higher quality grades.
Flange defects seem to be more clearly defined. The presence of skin or bones on the flange is unacceptable. There is an acceptable amount of meat which can remain on the flange since it will be dispersed by the sealing process. The maximum acceptable amount of meat could not be clearly defined. The plant manager also stated that detecting physical damage to the flange is desirable, but this is not necessarily done by the patching table staff.

3.3.2 Image Acquisition

An image acquisition system was built and is shown in Figure 3.2. This system was designed to acquire images during production and to minimize disruptions to the can-filling line. The system could be moved to different locations on the processing line in a matter of minutes. Only the optical sensor had to be directly attached to the conveyor guide-ways. The image acquisition was triggered by the sensor every seven cans so that cans from all six pockets in the filling machine were imaged. On each trigger, the framegrabber digitized a frame from the RGB camera and stored it on the computer's hard disk. The components of this system were reused for the Can Defect Detection System which is detailed in Chapter 7.

Figure 3.2: Experimental setup used at the Prince Rupert plant.

Images were acquired before and after the patching table on a 1/2 pound canning line which was packing previously-frozen pink salmon. Two sets of 300 images were taken at each location for a total of 1200. The resolution of the images was limited to 240 by 320 pixels for each of the three colour channels because the cans were moving.¹ The acquired images provide a broad set of test images to ensure that the machine vision algorithms developed are applicable to the widest possible set of fill defects. Only the images acquired before the patching table were used because the cans in this set of images had not been modified by the patching table. Examples of these images are shown in Figures 3.3 and 3.5.
3.4 Can-Filling Defect Definitions

On the basis of the information gained from the quality control survey and the plant interviews, cross-pack and flange defects (obstructions or damage) were identified as the best candidates for machine vision detection. This selection was made because these defects were the most frequent and the most well-defined. Flange defects were also selected because they are safety defects which must be repaired to prevent container defects. These defects must be quantitatively defined for use in the machine vision system. The definitions must distinguish between critical can-filling defects, which result in a defective can that must be repaired, and non-critical defects.

3.4.1 Flange Defects

The interviews with the QC personnel clearly show that the presence of bones or skin on the flange results in a critical flange defect. This leaves the acceptable amount of meat on the flange as the only undefined parameter for determining the severity of a flange defect. Flange defects due to flange damage were not considered because no data concerning the frequency of this damage, and no images of the damage, were provided or acquired. The acceptable amount of meat on the flange could not be defined precisely by the QC personnel. The width of the meat defects was selected as a potential measure of severity because it is well defined and measurable. The meat toughness may also affect the acceptable amount of meat but is difficult to measure in an image. In order to decide if a meat defect is critical, the critical width threshold must be established. In the presence of conflicting information from QC personnel and in the absence of any clearly

¹The horizontal lines in a standard video signal are interlaced and consist of two fields taken 1/60 second apart, which means a moving object will be in a different location in the second field.
defined metric, the critical width was defined as 4.2% for the purposes of this thesis. This threshold could be easily redefined if more suitable data were available. In the case of multiple defects on a flange, only the most severe defect is considered. Bone and skin defects are considered to be more severe than meat defects, and only the largest meat defect is considered. This definition results in three grades of flange defects: critical defects, which consist of all cans with at least one skin or bone defect and cans with at least one meat defect with a width greater than 4.2% of the flange; non-critical defects, consisting of cans with meat defects less than 4.2% in width; and non-defective cans, with no flange defects at all.

3.4.2 Cross-Pack Defects

The severity of a cross-pack defect depends on the size, position and colour of the piece of skin. It was not possible to determine a detailed definition of cross-pack severity from the survey and plant interviews. There was a difference between on-line and off-line grading of cross-pack defects. Given unlimited time to scrutinize a can, any amount of visible skin was considered a defect by some QC personnel. The definition was therefore simplified, and a critical cross-pack area threshold was defined to distinguish between cross-pack defects that require patching and those that do not. Dark and light cross-pack were considered to have equal severity.

The cross-pack area distribution before and after the patching table was estimated from the acquired images. The area of cross-pack in a can was estimated by comparing the cross-pack size to circular templates of different areas. Table 3.4 shows the cross-pack area distribution for all 1200 images acquired at the cannery. It is clear from this table that there is a drop in the rate of cans exiting the patching table with a cross-pack area above 12.5%. Therefore, the critical cross-pack threshold was set to this value.
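The definitions above reduce to a small set of threshold rules. The sketch below is illustrative only (the thesis gives no code); the function names and the defect representation are assumptions, but the thresholds are the ones defined in Sections 3.4.1 and 3.4.2:

```python
# Illustrative sketch of the defect definitions; not the thesis's software.
CRITICAL_MEAT_WIDTH = 4.2       # % of flange (Section 3.4.1)
CRITICAL_CROSSPACK_AREA = 12.5  # % of can fill area (Section 3.4.2)

def grade_flange(defects):
    """defects: list of (kind, width_percent), kind in {'skin', 'bone', 'meat'}.
    Skin and bone defects are always critical; for meat, only the widest
    defect on the flange is considered."""
    if not defects:
        return "non-defective"
    if any(kind in ("skin", "bone") for kind, _ in defects):
        return "critical"
    widest_meat = max(width for kind, width in defects if kind == "meat")
    return "critical" if widest_meat > CRITICAL_MEAT_WIDTH else "non-critical"

def grade_cross_pack(skin_area_percent):
    """Cross-pack is critical when the visible skin area exceeds the threshold."""
    return "critical" if skin_area_percent > CRITICAL_CROSSPACK_AREA else "acceptable"
```

A can needing repair is then simply one whose flange grade or cross-pack grade is "critical".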
Table 3.4: Cross-pack area distribution (% of cans in each range).

                         Cross-pack area range (% of can fill area)
  Camera position        < 0.083  < 0.125  < 0.167  < 0.250  < 0.333  > 0.333
  Before patching table  14.86    5.68     4.34     2.67     0.835    0.0
  After patching table   13.40    5.36     1.84     1.17     0.168    0.0

3.5 Can-Filling Defect Data Sets

The acquired images were reviewed to generate the data sets used to develop and evaluate defect detection algorithms.

3.5.1 Flange Defects

Two data sets were used for the development and evaluation of the flange defect detection algorithms. The first set of 300 images, acquired before the cans passed through the patching table, was selected due to its superior illumination and image quality. This set was reduced to 272 images because 28 images contained incomplete cans or more than one can. This data set is referred to as the production data set. It contains 5 skin defects, 7 bone defects, 27 critical meat defects, 219 non-critical meat defects and 12 non-defective cans. Example images from this data set are shown in Figure 3.3.

Figure 3.3: Example images from the production flange defect data set. (a) Skin flange defect. (b) Bone flange defect. (c) Meat flange defect. The background is the checkweigher conveyor.

Figure 3.4: Example images from the laboratory flange defect data set. (a) Skin flange defect. (b) Bone flange defect. (c) Meat flange defect. Note that the 1/4 pound cans had a coloured coating which makes the flange a gold colour.

A second set of images was acquired in the laboratory to provide more data on skin and bone defects. These images were acquired using the Can Defect Detection System (Chapter 7). For these images, previously-frozen pink salmon was hand-packed into 1/4 pound cans.
The laboratory data set contained 125 images: 25 light skin defects, 25 dark skin defects, 25 bone defects, 18 critical meat defects, 7 non-critical meat defects and 25 non-defective cans. Example images from this data set are shown in Figure 3.4.

3.5.2 Cross-Pack Defects

One data set was used for cross-pack defect detection algorithm development and evaluation. This data set contained the same 272 images used for the production flange defect data set, comprising 16 defective cans and 256 non-defective cans. Examples of cross-pack defects requiring patching are shown in Figure 3.5.

Figure 3.5: Example images from the cross-pack defect data set. (a) Light skin cross-pack. (b) Dark skin cross-pack.

3.6 Summary

A survey of and interviews with QC personnel were used to better understand can-filling defects. Flange and cross-pack defects were identified as the best candidates for automated inspection. Simple and usable definitions for these defects were developed using the available data. These definitions used a width threshold for flange defects and an area threshold for cross-pack defects. Example images of these defects were acquired for developing and evaluating the machine vision algorithms.

Chapter 4

Feature Extraction

In the previous chapter, flange defects and cross-pack defects were identified as can-filling defects suitable for automated machine vision inspection. To achieve this goal, features that can assist in the identification of these defects must be extracted from the images of the filled salmon cans. For flange defect identification, the pixels that make up the flange must be extracted from the background of the image. Similarly, the can fill must be extracted from the image for cross-pack detection. The segmentation of light and dark skin from the can fill is required to measure the size of a cross-pack defect.
However, before the extraction of any of these features, the can centre must be precisely located in the image.

4.1 Precise Can Location

Finding the precise location of the can's centre is important to the extraction of can-image features, especially for the flange. The flange of the can is of particular interest in this application. For the given image-acquisition configuration, with an image resolution of 320 by 240 pixels, the flange is normally only 3 pixels wide. Thus, the algorithm must locate the centre of the can to within ±1 pixel to ensure that the can flange is correctly identified and extracted from the image, using a priori knowledge of the can radius. Since it is desirable for the machine vision system to be retrofitted into an existing salmon canning line, it may not be possible to manipulate the background of the images to make the can-locating algorithm as simple as might otherwise be possible (e.g., by using a background with a uniform contrasting colour and no strong edges). Thus, the can-locating algorithm must be as independent as possible from the image background.

A feature that is consistently and clearly present in images of filled cans is the inside edge of the flange. The magnitude of the inside edge gradient is a function of the contrast between the shadow created by the flange and the flange itself. The colour of the salmon meat can vary, but it does not change sufficiently to affect the magnitude of this edge. The magnitude of the outside edge of the flange, on the other hand, depends on the contrast with the background. The algorithm described in the following subsections uses the Canny edge detector [37] in combination with the Hough transform for locating circles [38]. Robust statistics are used to enhance the output of the Hough transform to precisely locate the centre of cans in images with non-uniform backgrounds.
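The combination just described (edge pixels voting for a circle centre of known radius, with a robust median over the votes) can be illustrated on synthetic data. This is a minimal sketch, not the thesis's implementation; the pixel list, radius and outliers below are invented for illustration:

```python
import math
from statistics import median

def vote_centres(edge_pixels, radius):
    """Each edge pixel (x, y, theta) votes for a candidate circle centre a
    known radius away, opposite its gradient direction (dark-object case).
    The median of the votes is robust to votes from background edges."""
    xs, ys = [], []
    for x, y, theta in edge_pixels:
        xs.append(x - radius * math.cos(theta))
        ys.append(y - radius * math.sin(theta))
    return median(xs), median(ys)

# Synthetic flange: edge pixels on a circle of radius 90 centred at (100, 120),
# with gradients pointing radially outward, plus spurious background edges.
R, cx, cy = 90.0, 100.0, 120.0
pixels = []
for i in range(36):
    t = 2 * math.pi * i / 36
    pixels.append((cx + R * math.cos(t), cy + R * math.sin(t), t))
pixels += [(5.0, 5.0, 1.0), (300.0, 10.0, 2.0)]  # outliers from the background

xc, yc = vote_centres(pixels, R)  # recovers (100, 120) despite the outliers
```

With a mean instead of a median, the two outlier votes would shift the recovered centre by several pixels, which in this application is enough to misplace a 3-pixel-wide flange.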
4.1.1 Canny Edge Detection

Canny [37] derived an optimal edge detector using numerical optimization. This optimization uses three criteria: good edge detection, good edge localization and a single response to an edge. It was found that the derived detector could be closely approximated by the derivative of a Gaussian function. The output of the detector is further enhanced using noise estimation and hysteresis thresholding. For this thesis, the Canny edge detector, implemented in Vista [39], is used. The Canny edge detector was selected because the single response to each edge was easier to work with than the multiple responses to an edge found with the Sobel operator [40]. The Canny detector was also faster than the Marr-Hildreth edge detector [41], which produces edges similar to those produced by the Canny detector.

The Canny detector essentially finds edges using the maxima of the gradient (∇) magnitude of an image (I) smoothed with a Gaussian (G):

    |∇(G * I)|,    (4.1)

where * represents a convolution and

    G(x, y) = (1 / (2πσ²)) exp(−(x² + y²) / (2σ²))    (4.2)

is the 2-D Gaussian function, where x and y represent the distance from the centre of the Gaussian function. Increasing the value of σ, the width of the Gaussian, will increase the smoothing of the image. This will improve the performance of the edge detector in the presence of noise, but it will also limit the detection of high spatial frequency edges. Because the 2-D Gaussian operator is separable into two 1-D functions Gx and Gy, Equation 4.1 can be rewritten as:

    |∇(Gy * (Gx * I))|.    (4.3)

This significantly reduces the required number of computations.

The noise in the output of the detector is estimated by averaging the response over a neighbourhood at each point in the image. Taking the local maximum of the gradient magnitude in each neighbourhood eliminates multiple responses to an edge. Next, hysteresis thresholding is applied in conjunction with edge linking.
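The saving behind Equation 4.3 comes from the separability of the 2-D Gaussian: G(x, y) factors into the product of two 1-D Gaussians, so an n × n kernel (n² multiplies per pixel) can be replaced by two 1-D passes (2n multiplies per pixel). A quick numerical check of this factorization, assuming the normalized forms of the 1-D and 2-D Gaussians:

```python
import math

def g1(x, sigma):
    """Normalized 1-D Gaussian."""
    return math.exp(-x * x / (2 * sigma * sigma)) / (math.sqrt(2 * math.pi) * sigma)

def g2(x, y, sigma):
    """Normalized 2-D Gaussian, as in Equation 4.2."""
    return math.exp(-(x * x + y * y) / (2 * sigma * sigma)) / (2 * math.pi * sigma * sigma)

# G(x, y) == G(x) * G(y) at every sample point of a 7x7 kernel,
# up to floating-point round-off.
sigma = 1.5
err = max(abs(g2(x, y, sigma) - g1(x, sigma) * g1(y, sigma))
          for x in range(-3, 4) for y in range(-3, 4))
```

Because convolution is associative, this pointwise factorization is exactly what lets the single 2-D convolution in Equation 4.1 be replaced by the row pass followed by the column pass in Equation 4.3.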
Edge linking generates edge segments by associating adjacent edge pixels with each other. Hysteresis thresholding is applied to prevent large edge segments from being divided into smaller edge segments. This thresholding scheme uses two gradient magnitude thresholds, a high threshold and a low threshold. Pixels with magnitudes above the high threshold are automatically considered to be part of an edge segment, while pixels above the low threshold are only considered if they are adjacent to a pixel above the high threshold.

4.1.2 Hough Transform

The Hough transform [42] was originally developed to detect straight edges in images. It has also been extended to circles [38,40] and ellipses [43], and generalized to arbitrary shapes [44,45]. The Hough transform and its variants use the position and/or orientation of edge pixels to solve systems of equations. The solutions from each edge pixel, or group of pixels, are accumulated in a parameter space. Peaks in the parameter space are taken as the location of a feature (line, circle or ellipse). Since the cans are circular, the Hough transform for circles is used for locating cans. The transform for locating ellipses was also considered, since the image pixels are not perfectly square¹. However, this version of the transform is unstable when the eccentricity of the ellipse is very close to 1, which is the case in this application.

Using the position (x, y) and orientation (θ) of an edge pixel, the centre of a circle (xc, yc) of known radius (R) can be calculated using:

    θ = arctan(gy / gx),    (4.4)

where gx and gy are the x and y components of the edge pixel gradient. For dark objects on a light background (i.e., gradient away from the object centre):

    xc = x − R cos(θ),    (4.5)
    yc = y − R sin(θ).    (4.6)

For light objects on a dark background (i.e., gradient towards the object centre):

    xc = x + R cos(θ),    (4.7)
    yc = y + R sin(θ).    (4.8)

¹In this case the pixels have a ratio of 1:1.02.
The xc and yc for each edge pixel are calculated and accumulated. As long as there is only one circle in the image and any noise has a Gaussian distribution, the mean value in the xc and yc parameter space will accurately identify the centre of the circle. The use of robust statistics can improve the accuracy of the Hough transform in the presence of non-Gaussian noise.

4.1.3 Robust Statistics

Statistical measurements such as the mean and standard deviation of a sample are based on the assumption that the error in a sample has a Gaussian distribution. A specialized field of statistics known as robust statistics [46] provides a number of procedures that are less sensitive to changes in the assumed error model. The pixel value noise distribution in images is typically Gaussian, but other errors often have non-Gaussian distributions [40]. Examples of parameters which may have non-Gaussian distributions are the vertical and horizontal estimates of the centre of a circle computed in a Hough transform parameter accumulation. These parameters may not have a Gaussian distribution when non-feature edges are inadvertently included in the accumulation.

The median is an example of a robust alternative to the mean. This can be shown by comparing their breakdown points. The breakdown point is the largest number of samples that can be outliers before the result is significantly affected. Even with a large sample size, a single outlier can result in an incorrect mean. On the other hand, (50% − 1) of a sample may be outlying and not affect the median of the sample [40]. Computing the median requires only slightly more operations than the mean because, like the mean, the number of operations required for the median scales linearly with the number of samples [47].

4.1.4 Detailed Description

The actual algorithm to precisely locate a can in an image is based on the Hough transform described in Section 4.1.2.
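The breakdown-point argument above can be seen directly on a small sample of hypothetical centre votes (the numbers below are invented for illustration, loosely echoing the shape of the accumulations in this chapter):

```python
from statistics import mean, median

# Most votes cluster near the true coordinate (96); a few votes from
# background edges land far away.
votes = [95, 96, 96, 97, 96, 95, 97, 96] + [180, 175, 10]

m = mean(votes)    # 103.0 -- dragged away from the cluster by the outliers
md = median(votes) # 96    -- unaffected by the three outlying votes
```

Here just 3 of 11 votes shift the mean by 7 pixels, far beyond the ±1 pixel tolerance required for the flange, while the median stays on the cluster.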
A number of additional steps, which are detailed below, were added to improve the robustness and performance (i.e., speed) of the algorithm.

1. Initial location estimation. An estimate of the can's centre location is made using the following assumptions:

• The cans move horizontally through the camera field of view.
• There is only one can per image and it is contained completely within the image.
• The contents of a can are significantly different from the background.
• The approximate vertical centre of the can is known a priori from the conveyor location in the image.

Using these assumptions, a 21-pixel tall horizontal slice of a "template" background image, taken with no can present, is subtracted from the corresponding slice of an acquired image containing a can. The absolute value of the difference is then thresholded to eliminate pixels which are not significantly different. The centroid of all the significant pixels is taken as the horizontal centre of the can. An example of this estimation is shown in Figure 4.1. This estimate is sensitive to changes in the amount of meat and cross-pack in a can. Using the outside edges of the flange in a horizontal slice was also considered as a method of estimating the location of the centre, but this method was much less reliable because the edges were not always visible in the slice. The estimate is insensitive to the threshold value used. The threshold value was empirically determined to be 50, where the pixel value range is 0 to 255.

Figure 4.1: Estimating the location of a can. (a) Original image. (b) Background slice (21 pixels tall). (c) Horizontal slice of (a) with (b) subtracted and thresholded at a pixel value of 50. 'x' is the estimated centre.

2. Detection of edges. Edges in the image are detected using the Canny edge detector. The ∇G operator and noise suppression are applied to the entire image.
To eliminate most of the extraneous edges, the hysteresis thresholding and edge linking procedures are only applied to an annulus around the centre calculated in Step 1. The outer and inner radii of the annulus are determined as:

    rout = rflange + e,    (4.9)
    rin = rflange − e,    (4.10)

where rflange is the radius of the inside edge of the flange (determined a priori) and e is the maximum error of Step 1. In 100 test images, this error was less than 11 pixels. An example of the annulus of interest and the edges detected are shown in Figure 4.2(a) and Figure 4.2(b). Ideally, if the flange is clean, the inside edge of the flange should be one long closed edge. Therefore, short edges are less likely to be part of the inside edge of the flange. Edges less than 50 pixels long were eliminated in the edge linking step of the Canny edge detector. An example of the linked edges is shown in Figure 4.2(c).

Figure 4.2: Edge detection steps. (a) Annulus copied from Figure 4.1(a). (b) Canny edge detector applied to Figure 4.2(a). (c) Edge segments in Figure 4.2(b) with lengths > 50 pixels.

3. Applying the Hough transform. The Hough transform described in Section 4.1.2 is applied to the linked edge segments. The flange is lighter than the shadow cast on the fill inside the can; thus, Equations 4.5 and 4.6 are used. The coordinates of the circle's centre (xc, yc) for each edge pixel are only accumulated if they lie within the rectangle bounding the annulus (Figure 4.3). Despite these constraints, it is not possible to eliminate all of the edges which do not belong to the inside edge of the flange. Thus, the median of the candidate centre coordinates is selected. The selection of this robust statistic gives a good result given the non-Gaussian noise in the accumulated parameter space (Figure 4.4).

Figure 4.3: Hough transform using all segments of the inside flange edge in Figure 4.2(c).
Legend: '-' Accepted (inside edge); '- -' Rejected (outside edge); 'x' Estimated can centre.

Figure 4.4: Histograms of xc and yc for the points plotted in Figure 4.3. (a) Mean xc = 98.1. Median xc = 96.3. (b) Mean yc = 105.5. Median yc = 107.1.

4. Second iteration of the Hough transform. The Hough transform is repeated using a thinner (e′ < e) annulus centred at the new estimate of the can's location determined in Step 3. The half thickness (e′) for this annulus was set to 4 pixels to ensure that the flange is not clipped due to camera distortion. An example of this step is shown in Figure 4.5. This step usually does not alter the centre estimate significantly, but is performed in case the initial can centre estimate in Step 1 was poor.

Figure 4.5: (a) Thinner annulus copied from Figure 4.1(a). (b) Second iteration of the Hough transform using the edge segments found in the thinner annulus. Legend: '-' Accepted (inside edge); '- -' Rejected (outside edge); 'x' Estimated can centre.

5. Adjusting the can radius estimate. In the final step of this algorithm, the position and radius of the located circle are adjusted to better fit the flange of the can. This step was added due to the combined distortion of the lens, camera and framegrabber, which causes the flange to appear elliptical in the image. Calibration would yield a transformation from distorted to undistorted image coordinates to correct for this distortion, making this step unnecessary. However, the test data images were not correctly calibrated, and as a result there was a difference in the x and y scales of the images of approximately 2%, which is quite significant over a diameter of 186 pixels.
The adjustments are made by evaluating the median errors between the flange edge pixels and the circumference of the circle located with the HT. The errors were evaluated in four 45° sectors centred at 0° (right), 90° (top), 180° (left) and 270° (bottom). The x and y radii are adjusted separately to transform the circle into an ellipse as required:

ΔR_x = (e_left + e_right) / 2, (4.11)
ΔR_y = (e_top + e_bottom) / 2. (4.12)

The position is adjusted using:

Δx = ΔR_x − e_right, (4.13)
Δy = ΔR_y − e_bottom. (4.14)

An example of these adjustments is shown in Figure 4.6. If no edge pixels are found in any one of the four sectors, it is assumed that the algorithm failed to locate the can. When the algorithm fails to precisely locate a can, the can is assumed to be defective. Many defects, such as no meat in the can or too many flange obstructions, would adversely affect the edge detection and result in failure of the can locating algorithm.

Figure 4.6: Adjustments made to the calculated flange radius and position (—) to better fit the flange edge pixels (-). (a) No adjustment. (b) Better fit to edge pixels after adjusting by ΔR_x = 1.36, ΔR_y = 0.29, ΔC_x = 0.30 and ΔC_y = −0.28.

4.2 Flange Defect Features

Using the calculated can location and radius, the image can be sampled around the flange of a can. The sample pixels acquired in the RGB colour space are converted to the HSI colour space to simplify the analysis of these flange profiles. The HSI values of the profile pixels are used as classification features in Chapter 5.

4.2.1 Flange Profile Extraction

The can location algorithm returns the (C_x, C_y) coordinates of the centre and the radii R_x and R_y of the inside edge of the flange. Therefore, a method of sampling points along the circumference of an ellipse is necessary. Fast computer graphics algorithms for drawing ellipses [23] were considered,
but they were limited to integer values. Instead, sample points (P) are determined by evaluating:

P_x = (R_x + d) cos(θ) + C_x, (4.15)
P_y = (R_y + d) sin(θ) + C_y, (4.16)

at a number of equally spaced values of θ. To sample the centre line of the flange, d is set to W_f/2, half the width of the flange. The value of the profile at each point is determined by bi-linearly interpolating the value from the neighbouring pixels [47]. Since the flange is an ellipse, the length of the circumference represented by each sample would not be equal. However, it is assumed that the length represented by each sample is equal. In this application, the typical ellipse has an eccentricity of 0.98 (under this measure, a circle has an eccentricity of 1.0). Therefore, errors introduced by this assumption are negligible.

To average out noise in the flange profile, the flange is sampled at three values of d: W_f/2 − 1, W_f/2 and W_f/2 + 1. At each radius the flange is sampled at 720 points. The three profiles were averaged to produce the profiles shown in Figure 4.7.

Figure 4.7: Flange profiles of the can shown in Figure 4.1. (a) RGB colour space. (b) HSI colour space.

4.2.2 HSI Colour Space

Many of the applications presented in Chapter 2 indicate that the HSI colour space is more useful for the analysis of images. An example of a flange profile converted to the HSI colour space is shown in Figure 4.7(b). An algorithm for converting from the RGB colour space to the HSI colour space can be found in [23]. The HSI colour space is defined below for reference. The HSI colour space is usually represented by a cone, as shown in Figure 4.8. The hue, saturation and intensity of a pixel define a point in this cone.
Hue (H) refers to the colour of a pixel and defines the angular position in the cone. Because the hue is circular, hues of 0.0 and 1.0 represent the same colour, red. Saturation (S) refers to the purity of the colour. A saturation of 1.0 is a pure colour, while a saturation of 0.0 is the absence of colour (black or white). This results in the hue being undefined for a saturation of 0.0. For example, the hue profile in Figure 4.7 is very noisy because the flange is almost white; thus, the hue is poorly defined. Intensity (I) defines how light or dark a pixel is. It is represented by the vertical axis of the cone.

Figure 4.8: HSI colour space. (a) HSI cone. (b) Hue profile with S = 1.0 and I = 1.0. (c) Saturation profile with H = 0.0 and I = 1.0. (d) Intensity profile with H = 0.0 and S = 1.0.

4.3 Cross-Pack Defect Features

Cross-pack is skin visible on the surface of a filled can of salmon and appears as light and dark regions in an image of the can fill. To determine if a can contains a cross-pack defect, the total area of these regions must be determined. Unfortunately, not all light and dark regions are pieces of skin: there are sometimes dark shadows and bright pieces of meat inside the can. Therefore, a number of properties of these regions are calculated below and used to classify the regions as cross-pack or not cross-pack, as discussed in Chapter 5. Figure 4.9 outlines the process of extracting these regions. Each step of the process is detailed in the following sub-sections.

Can Fill Extraction → HSI Thresholding → Region Labelling → Erosion and Dilation → Region Property Calculation

Figure 4.9: Region extraction process flow chart.

4.3.1 Can Fill Extraction

An image of the fill inside a can is extracted from the image of the can using the calculated location of the can's centre and the known fill radius.
This circular region of the image is then converted from the RGB colour space to the HSI colour space using the algorithm given in [23].

4.3.2 HSI Thresholding

Thresholding an image allows objects in an image to be separated from the background, if the objects have a uniform parameter such as colour or brightness. Image thresholding is usually binary: pixels above a threshold are assigned one value, while pixels below are assigned another. In the case of detecting cross-pack, skin can be separated from the salmon meat by thresholding the saturation channel of the image. The skin pixels have low saturations, while meat pixels have higher saturations. The distinction between light and dark skin can be made by further thresholding the intensity channel for pixels with saturations below the saturation threshold. This is not necessary for the current definition of a cross-pack defect because light and dark skin have the same severity. However, this information may be useful for a future, more complete, definition of cross-pack. The optimum values for the above-mentioned thresholds were determined empirically. The values in Table 4.1 appeared to best separate skin from meat and light skin from dark, while minimizing the size of the non-skin regions. Examples of two binary images, produced using these thresholds, are shown in Figure 4.10.

Table 4.1: HSI thresholds.

Channel     Value
Saturation  0.30
Intensity   0.75

Figure 4.10: Examples of images binarized using the thresholds in Table 4.1. (a) Original image. (b) Low saturation pixels with low intensity (dark skin). (c) Low saturation pixels with high intensity (light skin).

4.3.3 Region Labelling

Thresholding an image separates pixels with similar parameter values from the background. To identify the regions in an image, neighbouring pixels must be grouped together. This is done by labelling the image. Each group is given a different pixel value, which is referred to as a label.
There are two types of neighbouring pixels: "8-connected" and "4-connected". In an image with rectangular pixels, each pixel is surrounded by eight other pixels. A pixel is "8-connected" to any one of these surrounding pixels. The "4-connected" neighbours of a pixel are the surrounding pixels that are not diagonally adjacent. For this application, the "8-connected" definition was used due to the sparseness of the thresholded image. Using the "4-connected" definition would unnecessarily break up the regions into smaller ones. A labelling algorithm based on the one described by Davies [40] was developed. A flow chart of this algorithm is shown in Figure 4.12. Examples of output from the labelling algorithm can be seen in Figure 4.11. The labelling process can be broken down into three steps:

1. Propagate labels: The binary image is searched, from the top-left corner to the bottom-right corner, for foreground pixels. When a foreground pixel is found, the previous row is checked for neighbouring pixels. If a neighbouring pixel is found, its label is used; otherwise, a new unused label is assigned. This label is then propagated to all the neighbouring pixels in the same row of the image. When no neighbouring pixel is found, the search for unlabelled foreground pixels continues at the next pixel in the row. If the end of the row is reached, the search continues at the beginning of the next row. This search continues until the end of the image is reached. As pixels are labelled, the bounding box around a region is recorded.

2. Find neighbouring regions: The perimeter of each bounding box is examined to see if any pixel has a neighbour with a different label. If a neighbour is found, the pixels in the smaller region are relabelled with the label of the larger region.

3. Remove small regions: Regions below a size threshold are deleted to reduce the number of regions. The smaller regions are more likely to be noise.
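A minimal sketch of the thresholding and labelling steps is given below. It uses a stack-based flood fill in place of the row-propagation and relabelling scheme described above (the resulting 8-connected regions are equivalent). The saturation threshold follows Table 4.1; the image values and the minimum region size are made up for illustration.

```python
# Sketch: threshold the saturation channel, label 8-connected regions,
# and remove small (noise) regions. Not the thesis implementation.
import numpy as np

def label_regions(binary, min_size=3):
    """Label 8-connected foreground regions; drop regions below min_size."""
    labels = np.zeros(binary.shape, dtype=int)
    next_label = 1
    for seed in zip(*np.nonzero(binary)):
        if labels[seed]:
            continue                       # already part of a region
        stack, region = [seed], [seed]
        labels[seed] = next_label
        while stack:
            y, x = stack.pop()
            for dy in (-1, 0, 1):          # 8-connected neighbourhood
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < binary.shape[0] and 0 <= nx < binary.shape[1]
                            and binary[ny, nx] and not labels[ny, nx]):
                        labels[ny, nx] = next_label
                        stack.append((ny, nx))
                        region.append((ny, nx))
        if len(region) < min_size:         # remove small (noise) regions
            for p in region:
                labels[p] = 0
        else:
            next_label += 1
    return labels, next_label - 1

# Hypothetical 8x8 saturation channel: two low-saturation (skin) blobs,
# one of them only diagonally attached, plus an isolated noise pixel.
sat = np.ones((8, 8))
sat[1:3, 1:3] = 0.1                # blob 1
sat[3, 3] = 0.1                    # diagonally attached to blob 1
sat[5:7, 5:8] = 0.2                # blob 2
sat[0, 7] = 0.1                    # single noise pixel
binary = sat < 0.30                # Table 4.1 saturation threshold

labels, n = label_regions(binary, min_size=3)
print(n)  # → 2: two regions survive; the noise pixel is removed
```

The diagonally attached pixel joins blob 1 under the 8-connected definition; under the 4-connected definition it would form a separate region, which is the fragmentation the thesis avoids.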
Figure 4.11: Labelled image examples. Labelled versions of Figure 4.10: (a) found one dark region; (b) found two light regions.

Figure 4.12: Region labelling algorithm flow chart.

4.3.4 Erosion and Dilation

As can be seen in Figure 4.11, even after labelling, the regions are still very sparse. The number of pixels with a particular label will not give a good measurement of the size if the holes are not filled in. The missing pixels can be filled in using erosion and dilation. Erosion and dilation of a region in a binary image can be thought of as a form of smoothing. The algorithms are illustrated in Figure 4.13. A region is eroded by deleting all the pixels in a region with fewer than eight neighbours. A region is dilated by adding pixels until all the original pixels have eight neighbours.

Figure 4.13: Erosion and dilation diagrams. (a) Original region pixels. (b) Region erosion: pixels to be deleted. (c) Region dilation: pixels to be added.

For the purpose of filling in the holes, the regions are dilated twice and eroded once. The two dilations fill the pixels in and the erosion removes the extra pixels added around the perimeter of the region. The results of this process can be seen in Figure 4.14.

Figure 4.14: Examples of regions after smoothing with two dilations and one erosion. Dark regions outlined in blue; light regions outlined in green. (a) Regions: 1 - light skin, 2 - shadow, 3 - dark skin. (b) Regions: 1 - light skin, 2 - light meat, 3 - dark skin.

4.3.5 Region Property Calculation

After regions have been labelled and smoothed, their various properties can be calculated accurately:

• Size: Size of the region as a percentage of the total fill area.
• Mean hue, saturation and intensity: Mean value of each channel of the HSI image for all the pixels in a region.
• Standard deviation of hue, saturation and intensity: Standard deviation of each channel of the HSI image for all the pixels in a region.
• Radial position: Radial position of a region's centroid relative to the centre of the can.
• Undefined hue: The percentage of pixels in a region with a saturation value of 0.0.

4.4 Summary

In this chapter, a number of features were extracted from images of cans filled with salmon. A robust algorithm for precisely locating a can's centre in an image with a non-uniform background was developed using the Canny edge detector and the Hough transform for circles. The calculated centre of a can is used to extract a profile of the flange. Light and dark regions in the can fill are separated from the surrounding salmon meat using HSI thresholding and region labelling. The flange pixel and region properties will be used in Chapter 5 to classify flange defects and cross-pack defects. The performance of the algorithms presented in this chapter is evaluated in Chapter 6.

Chapter 5

Feature Classification

This chapter discusses the use of the extracted can-image features, described in Chapter 4, for the classification of flange defects and cross-pack defects. Radial basis function networks (RBFNs) and fuzzy logic classifiers are considered as possible methods for defect classification. A brief introduction to RBFNs and fuzzy logic precedes the discussion of the classification approaches used.

5.1 Radial Basis Function Networks

Radial basis function networks have been used successfully in a number of classification applications. RBFNs have been used for the analysis of Landsat-4 data [48], for visual autonomous road following [49], and for phoneme classification for speech recognition [50].
In these papers, RBFNs are used as an alternative to multilayer perceptrons (MLPs) trained with the backpropagation learning algorithm. In all of the above-mentioned cases, RBFNs had equal or superior classification accuracy compared to MLPs. RBFNs also offered faster training times; in some cases, the training times were two or three orders of magnitude faster [50]. There are a number of reasons for the superior performance of RBFNs, due mainly to their architecture [51,52]. MLPs create boundaries between classes by combining lines or curves (or hyper-surfaces in higher dimensional space), whereas RBFNs produce circular boundaries (or hyper-shells in higher dimensional space). Hence, fewer hidden layer nodes are required to separate one class from another which completely surrounds it. The decreased training times for RBFNs are attributed to the decoupling of the training of non-linear parameters from the training of other parameters. Thus, each parameter can be trained efficiently with a specialized algorithm. Since RBFNs are locally tuned, their response is concentrated over the range of the input values.

5.1.1 Network Architecture

The structure of an RBFN is shown in Figure 5.1. RBFNs are considered to be a special two-layer network. The weights connecting the input layer to the hidden layer have values of 1. These weights perform no computation and require no training, so the input layer is not counted. The response of the hidden nodes to the inputs is governed by a Gaussian activation function. The hidden layer is connected to the output layer by a set of trainable linear weights. Therefore, the values of the output nodes are weighted sums of hidden node responses.

Figure 5.1: Structure of a radial basis function network.

The primary difference between MLPs and RBFNs is the activation function used in the hidden layer nodes. MLPs usually use a sigmoid activation function. The output of this function varies monotonically from 0 to 1 over the range from −∞ to ∞. In an RBFN, the sigmoid function is replaced by a Gaussian function:

p_j(x_t) = exp( −(x_t − c_j)^T (x_t − c_j) / (2σ_j²) ), (5.1)

where p_j is the hidden node response, c_j is the centre of the basis function, σ_j² is the width of the basis function and x_t is the tth input vector. This activation function results in a non-zero response in a localized region of the input space. The response decreases as the distance from the basis function centre increases. The output nodes are connected to the hidden nodes by a set of
MLPs usually use a sigmoid activation function. The output of this function varies monotonically from 0 to 1 over the range from — oo to oo. In an RBFN, the sigmoid function is replaced by a Gaussian function: (Xf - Cj)T(x t - Cj)" Pj(xt) = exp 2a] (5.1) where pj is the hidden node response, Cj is the centre of the basis function, <T| is the width of the basis function and x$ is the ith input vector. This activation function results in a non-zero response in a localized region of the input space. The response decreases as the distance from the basis function centre increases. The output nodes are connected to the hidden nodes by a set of CHAPTER 5. FEATURE CLASSIFICATION 47 linear weights, 9j^. The value of the kth. output (o )^ node is given by: dfc(xt) = p(xt)0fc, (5.2) where p(xt) is a row vector containing the outputs of the hidden nodes and 6k is a column vector containing the weights for the fcth output node. 5.1.2 Training Algorithm The orthogonal least squares (OLS) training algorithm developed by Chen et al. [53] offers a number of advantages over traditional RBFN training algorithms. Other training algorithms [52,54] select basis function centres arbitrarily or use a clustering algorithm. This often led to networks that are larger than necessary, or to numerically ill-conditioned problems due to the basis function centres lying too close together. The OLS algorithm is a systematic approach to the problem of basis function centre selection. The OLS algorithm treats the training of an RBFN as a linear regression problem in which E must be minimized for L training samples. The training equation becomes: D = P 6 + E, (5.3) where D (the output) is a matrix of dependent variables, P (the hidden layer output) is the set of regressors, O (the weight matrix) is the set of parameters and E is the error matrix. 
These variables have the following forms:

D[L×Q] = [d_1 · · · d_L]^T, d_t = [d_1 · · · d_Q], 1 ≤ t ≤ L, (5.4)
P[L×N] = [p_1 · · · p_N], p_i = [p_i(x_1) · · · p_i(x_L)]^T, 1 ≤ i ≤ N, (5.5)
Θ[N×Q] = [θ_1 · · · θ_N]^T, θ_j = [θ_1 · · · θ_Q], 1 ≤ j ≤ N, (5.6)
E[L×Q] = [e_1 · · · e_L]^T, e_t = [e_1 · · · e_Q], 1 ≤ t ≤ L, (5.7)

where Q is the number of output nodes, L is the number of training samples, and N is the number of hidden nodes. The regressors, P, form a set of basis function centres, which can be decomposed into a set of orthogonal basis functions, W. These orthogonal basis functions span the same space as P, and can be found using the classical Gram-Schmidt method [53]. The set of orthogonal basis functions satisfies:

P = WA. (5.8)

Here, A is an N × N upper triangular matrix. The Gram-Schmidt method computes one column of A at a time to make the jth column of W orthogonal to the previous j − 1 columns. Typically, the input vectors are selected as the basis function centres. The space spanned by the N basis functions in P can be approximately spanned by a subset of N_s significant basis functions, P'. The OLS algorithm uses P' to approximate P and adds orthogonal basis functions, w_i, to P' until E drops below some threshold, where w_i is the basis function which reduces E the most. The widths of the basis functions are kept constant in order to simplify the training algorithm. The selection of the fixed width is important to the performance of the RBFN. If too small a width is used, then the network may require many more nodes than necessary. If the width is too large, then the basis functions may overlap and the separation of classes close to each other may not be possible.
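The forward pass defined by Equations 5.1 and 5.2 can be sketched as below. The centres and weights are hypothetical (untrained) values placed in the three-dimensional HSI input space; they are chosen only to show how a Gaussian hidden layer and linear output weights combine.

```python
# A minimal sketch of an RBFN forward pass (Equations 5.1 and 5.2):
# Gaussian hidden units with a fixed width, linear output weights.
import numpy as np

def rbfn_forward(X, centres, sigma, theta):
    # Squared distances between each input and each basis-function centre
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    P = np.exp(-d2 / (2.0 * sigma ** 2))       # hidden responses, Eq. 5.1
    return P @ theta                            # weighted sums, Eq. 5.2

# Two hypothetical basis functions in the (H, S, I) input space: one near
# typical meat-defect pixel values, one near clean-flange values.
centres = np.array([[0.02, 0.40, 0.60],    # "meat" centre (assumed)
                    [0.10, 0.05, 0.90]])   # "clean" centre (assumed)
theta = np.array([[1.0],                   # meat node drives the output
                  [0.0]])                  # clean node does not
sigma = 0.15

X = np.array([[0.02, 0.40, 0.60],          # pixel at the meat centre
              [0.10, 0.05, 0.90]])         # pixel at the clean centre
out = rbfn_forward(X, centres, sigma, theta)
```

A pixel at the "meat" centre produces an output near 1, while a pixel at the "clean" centre produces an output near 0, since the Gaussian response decays with distance from each centre.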
MATLAB implementation

The MATLAB Neural Network Toolbox, utilized in this work, implements the OLS algorithm as described above, but the width (σ) is redefined as a spread constant (SC):

SC = σ √(−2 ln(1/2)), (5.9)

i.e., the distance at which the response of a basis function falls to 0.5. The training exit condition is the sum of the squared errors (SERR), which is defined as:

SERR = Σ(t=1..L) Σ(k=1..Q) e²_{t,k}. (5.10)

5.2 Fuzzy Logic

Fuzzy logic is an extension of conventional set theory that uses fuzzy sets. First proposed by Zadeh [55], fuzzy sets have an additional parameter, a real number value, that expresses the degree to which an object belongs to a set. Conventional sets are referred to as crisp sets and have a degree of membership of 1 or 0. Fuzzy relations extend crisp set operations such as union and intersection to fuzzy sets. Fuzzy reasoning uses rules to make inferences. Fuzzy logic has gained popularity for use in control systems [56-58], but it is also used for classification problems [17,18] as discussed in Chapter 2. Essentially, fuzzy logic is another method of representing a non-linear mapping between inputs and outputs. One advantage of using fuzzy logic is that it represents, and computes with, uncertainty in a straightforward and natural manner. This is important since most practical applications contain some uncertainties [59]. It also offers a number of other advantages over other methods, which are detailed below. The variables and rules relating fuzzy sets can be described in linguistic terms. This makes it simpler to explain them to the non-specialist user (i.e., a technician or quality control operator). It also allows expert knowledge to be formalized. For example, considering the flange defect profile shown in Figure 5.2, some observations about the flange are possible. The flange is white while the salmon meat is red; or, the flange has a low saturation while the meat has a higher saturation with a red hue.
The hue of the white flange is not meaningful because it has a low saturation (Section 4.2.2). This knowledge can be expressed as:

IF hue IS red AND saturation IS high THEN defect IS meat. (5.11)

Figure 5.2: HSI profile of a meat flange defect. The pixels of the meat defect have a red hue (≈0.0) and a high saturation (≈0.4).

Fuzzy logic is also flexible. Namely, once the basic relationships have been defined in a rule-base, the membership functions, defining terms such as red or high, can be tuned to improve the performance of the fuzzy system.

5.2.1 Fuzzy Reasoning

A common method of reasoning with fuzzy logic is Mamdani's direct method [60]. This is implemented in the MATLAB Fuzzy Logic Toolbox [59] and is discussed in detail by Tanaka [26]. Mamdani's method uses min- and max-operators to make inferences with a set of fuzzy rules expressed in an IF-THEN format:

IF x IS A1 AND y IS B1 THEN z IS C1, Weight 1.0. (5.12)
IF x IS A2 OR y IS B2 THEN z IS C2, Weight 0.33. (5.13)

The fuzzy rules (5.12) and (5.13) describe relationships between the inputs x and y, and the output z. The input variables x and y and the output variable z are referred to as universes of discourse. The membership functions (μ_A1(x), μ_B1(y), ...) describing the fuzzy sets (A1, B1, ...) are defined in Figure 5.3. In the example, membership functions are assigned letters, but words or phrases could be substituted.

Figure 5.3: A diagram illustrating Mamdani's direct fuzzy reasoning method (antecedent and consequent).

The five steps of the reasoning process are described below:

1. Fuzzification: The input variables x0 and y0 are fuzzified by determining the degree to which they belong to A1, A2, B1 and B2. Evaluating the membership functions at x0 and y0 results in w_A1, w_A2, w_B1 and w_B2.

2. Application of Operators: The antecedent of a rule may contain one or more premises, such as x IS A1.
When a rule contains more than one premise, an operator must be applied. The operators defined in Mamdani's method are AND and OR. The AND operator is defined as the minimum degree of membership of all the premises in a rule, while OR is defined as the maximum degree of membership. In the example, the result of the AND operation is w_B1 for (5.12), while the result of the OR operation is w_B2 for (5.13). It is also possible to define AND and OR using methods such as the product of the degrees of membership [59], but the minimum and maximum method is the most commonly found in the literature [26].

3. Implication: The output of the operators is the implication, which is used to shape the output membership function. The output membership function is assigned a degree of membership equal to the implication of the rule. The weight of a rule may be used to further shape the membership. The weight is applied by clipping the implication at the weight value. As shown in Figure 5.3 for the second rule (5.13), the implication (w_C2) is clipped at the weight value of 0.33. This allows one rule to be more significant than another.

4. Aggregation: The aggregation of all the implications is produced by taking the maximum degree of membership for all values in the output universe z. It could also be produced by summing the degrees of membership for each rule, for all values of z.

5. Defuzzification: Typically, defuzzification is accomplished by calculating the centroid of the area under the aggregate curve. In this case the centroid is z0. Defuzzification is also possible using other methods such as the bisector of the aggregated curve.

The above reasoning process is generally implemented by discretizing the universes of discourse; the MATLAB toolbox implements the Mamdani method in this way. An algebraic implementation, in which the membership functions are limited to linear functions, is discussed by Tanaka [26].
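The five steps above can be sketched for a two-rule system on a discretized output universe. The triangular consequent sets and the firing strengths (which stand in for the already-fuzzified and combined antecedents) are illustrative, not the thesis's tuned values.

```python
# Sketch of Mamdani's direct method: clip each consequent at its
# (weighted) firing strength, aggregate with max, defuzzify by centroid.
import numpy as np

def tri(u, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    return np.maximum(np.minimum((u - a) / (b - a), (c - u) / (c - b)), 0.0)

z = np.linspace(0.0, 1.0, 201)          # discretized output universe
C1 = tri(z, 0.5, 0.8, 1.0)              # consequent set of rule (5.12)
C2 = tri(z, 0.0, 0.2, 0.5)              # consequent set of rule (5.13)

def infer(w1, w2, weight2=0.33):
    """Inference for two rules with firing strengths w1 and w2."""
    imp1 = np.minimum(C1, w1)                   # implication, weight 1.0
    imp2 = np.minimum(C2, min(w2, weight2))     # clipped at rule weight
    agg = np.maximum(imp1, imp2)                # max aggregation
    return float((z * agg).sum() / agg.sum())   # centroid defuzzification

z_strong = infer(w1=0.7, w2=0.9)   # rule 1 fires strongly: centroid high
z_weak = infer(w1=0.1, w2=0.9)     # rule 2 dominates: centroid low
```

Note how the 0.33 weight caps rule 2's influence regardless of how strongly it fires, which is exactly the clipping behaviour described in Step 3.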
5.2.2 Rule Generation

Automatic methods such as ANFIS (Adaptive Neuro-Fuzzy Inference System) [61] and FCM (Fuzzy C-Means) clustering [62] are useful for generating rules when starting with no a priori knowledge. However, many successful applications of fuzzy logic to classification use expert knowledge to develop rules [17,18]. Since there exists expert knowledge concerning the classification of flange defects, this knowledge was used to generate the rules implemented in this work.

5.3 Flange Defect Classification

In Section 4.2.1, a method for extracting flange profiles from filled can-images was presented. Using the flange profile, a series of classification steps is used to identify the flange as defective or non-defective. The flange defect identification process is outlined in Figure 5.4 and the three final steps are discussed below:

Acquire Image → Extract Flange Profile → Classify Profile Pixels → Determine Defect Width → Apply Defect Definitions

Figure 5.4: Flange defect identification process flow chart.

Classify flange pixels: Individual flange profile pixels are classified as belonging to a particular defect type or as part of the clean flange. Two classification approaches were implemented: one using RBFNs and the other using fuzzy logic. The output of these classifications is the probability or confidence (rather than a crisp classification decision) that the pixel belongs to a particular defect type. Both of the approaches were applied to both the production images (images acquired at the Prince Rupert plant) and the laboratory images (images acquired from laboratory tests). The classification approaches are discussed in the following sections.

Determine defect widths: This is accomplished using hysteresis thresholding, similar to the method discussed in Section 4.1.1. For each defect type, high and low confidence thresholds were determined manually.
All the profile pixels with a confidence greater than the high threshold are grouped together as one defect. This defect is extended to neighbouring pixels with confidences greater than the low threshold. The width of a defect in pixels is the distance along the flange between the first and last pixels of the defect. Although the width of a defect is only necessary for meat defects, the width can also be useful for suppressing noise, which might produce a result similar to a defect with a small width. The two classification approaches (RBFN and fuzzy logic) can be directly compared because they have the same type of output, are applied to the same data and use the same thresholding scheme. Examples of flanges analyzed with both approaches are given in Section 5.3.3.

Apply defect definitions: The measured widths are compared with the defect definitions, established in Chapter 3, to determine whether a can is defective. This step is discussed in Chapter 6, where the RBFN and fuzzy logic approaches are compared and evaluated.

5.3.1 RBFN Approach

The inputs to the RBFN were the hue, saturation and intensity of a flange pixel. This results in an input vector with three values ranging from 0 to 1. There is one output node for each defect type considered. Output values greater than 1 and less than 0 are clipped. The number of hidden nodes was determined by the training algorithm. A training data set and an evaluation data set were generated for each application of the RBFN approach. Each data set consisted of 400 samples containing input vectors and the desired output vectors. The time required to train a network with 400 samples was typically 5 to 10 minutes on a Pentium 133 MHz based PC. Training time increases as the square of the number of training samples due to the related increase in the computational and memory requirements of the training algorithm. The input vectors for the training samples were generated by randomly selecting an
equal number of pixels from groups of manually classified pixels. There was one group of pixels for each defect type and one containing clean flange pixels. The desired output vectors were generated by assigning a value of 1 to the output node corresponding to the defect type associated with the input values, and 0s to all the other nodes. The output vectors for a clean flange pixel were 0s for all nodes. The spread of the RBF and the exit condition of the training algorithm are not determined automatically and must be specified. The optimum value of the spread parameter depends on the number of inputs and the numerical range of input values. The sum of the squared errors depends on the number of output nodes, the number of training samples and the numerical range of the output values. There exists a relationship among these parameters, but it is complex and highly non-linear. For this reason, the SC and SERR were optimized by training multiple networks with different parameter values. The optimum parameter values were selected as those which produced a network with the minimum number of nodes and maximum classification accuracy.

Production Images

Since there were very few bone and skin defects in the production images, only meat defects were considered in this data set. This resulted in a single output node. The training and evaluation data sets each contained 200 clean flange pixels and 200 meat defect pixels. To optimize the RBFN for size and classification accuracy, the spread constant and the SERR exit condition were varied and optimum values were selected. As shown in Figure 5.5, the classification accuracy remained essentially constant over the range evaluated, while the number of nodes in the network varied a great deal. An optimum spread constant value of 0.15 was selected to minimize the network size, while maintaining the classification accuracy of 97.5%.
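The selection criterion described above (maximum classification accuracy, then minimum network size) can be sketched with hypothetical optimization results; the (spread, nodes, accuracy) triples below are made up for illustration and are not the thesis's measured values.

```python
# Sketch: among candidate networks trained with different spread
# constants, keep those with the highest evaluation accuracy and,
# of these, the one with the fewest hidden nodes.
candidates = [
    # (spread constant, hidden nodes, evaluation accuracy) - hypothetical
    (0.05, 38, 0.975),
    (0.15, 7, 0.975),
    (0.50, 21, 0.975),
    (1.00, 33, 0.970),
]

best_acc = max(acc for _, _, acc in candidates)
best = min((c for c in candidates if c[2] == best_acc),
           key=lambda c: c[1])    # fewest nodes at peak accuracy
print(best)  # → (0.15, 7, 0.975)
```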
In Figure 5.6, the classification accuracy remained constant for all values of the SERR exit condition, while the number of nodes levelled off at 7 for SERR values around 8.5. Thus, an SERR of 8.5 was selected as the optimum exit condition. Generally, the defect widths were insensitive to the threshold values used because the confidence that a pixel was a meat defect was either high or low. As a result, the high and low thresholds for the meat defect confidence were selected to be 0.9 and 0.5 respectively.

Figure 5.5: Optimization of the RBFN spread constant. (a) Number of nodes in the network and (b) classification accuracy of the network as the spread constant was varied from 0.05 to 2, by increments of 0.05, while the SERR was held constant at 4.

Figure 5.6: Optimization of the RBFN SERR exit condition. (a) Number of nodes in the network and (b) classification accuracy of the network as the SERR exit condition was varied from 0.5 to 10, by increments of 0.5, while the spread constant was held constant at 0.15.

Laboratory Images

In this application of the RBFN approach, meat, dark skin and light skin defects were considered. Skin defects were divided into two classes, light skin and dark skin, because the two types have significantly different pixel values, as shown in Figure 5.7. Bone defects were not separately classified in this approach because their small width and transparency make them difficult, if not impossible, to detect using the flange profile (Figure 5.8). Some bones may be detected as meat defects because bones are often attached to the meat. Thicker bones are less transparent and may be classified as white skin.
Figure 5.7: Examples of skin flange defects. (a) Light skin pixels have a yellow-orange hue, low saturation and high intensity. (b) Dark skin pixels have a blue hue, low saturation and low intensity.

Figure 5.8: Examples of bone flange defects. (a) A thin, almost undetectable bone and a thicker bone with light skin properties. (b) A bone attached to a large piece of meat.

Using the same optimization process as for the production images, the optimum spread constant was found to be 0.15, while the optimum exit condition was found to be 25. The spread constant was varied from 0.05 to 2 by increments of 0.05, and the exit condition was varied from 5 to 100 by increments of 5. The optimum exit condition for these images was three times larger than the production image value because there were three output categories in this case (meat defects, light skin defects and dark skin defects). The optimum spread constant was the same because the number of inputs and the input ranges were identical. The optimum values produced a network with 45 nodes.

The same high and low thresholds of 0.9 and 0.5 were used for dark skin and light skin. However, the distinction between a clean flange pixel and a meat pixel is not as clear as that between clean and skin defect pixels. This is probably due to the golden coating on the flange of the cans used for the laboratory images. Therefore, the low threshold for the meat output was raised to 0.7.

5.3.2 Fuzzy Logic Approach

In this approach, the input variables were also the hue, saturation and intensity of a pixel. Membership functions were defined for clean flange pixels and for each defect type. There was one output variable for each defect type. There was no output for clean pixels because this output would be the inverse of the others.
The value of an output was considered to be the confidence that a pixel belonged to a particular defect type. The rules were generated using expert knowledge. The membership functions were tuned using histograms of the pixel values.

There were two sets of primary rules used in the fuzzy logic approach. First, if a pixel's hue, saturation and intensity all matched those of a particular defect class, it was considered to have a high confidence of being that type of defect. A version of this rule was defined for all the pixel types except the clean flange pixels, because there was no output for this type. Second, if a pixel's hue, saturation and intensity all matched those of a defect type, it was considered to have a low confidence of being the other defect types. A version of this rule was defined for all the defect types and for the clean flange type. These two sets of rules were the primary rules and were given a weight of 1.0.

For a given pixel, if one input had a low degree of membership while the other two had a high degree of membership for a particular defect, then the implication of the corresponding rule would be insignificant because it would have a low membership in the consequent set. This led to incorrect defect width measurements because a defect could be broken up into a number of smaller defects. Therefore, a set of secondary rules was defined. For these rules, if any two of the hue, saturation or intensity values of a pixel matched those of a defect type, then the pixel was considered to be that type of defect with a medium confidence. These rules were given a weight of 0.5.

The membership functions for the consequent fuzzy sets were a mix of triangular and trapezoidal functions. The same set of functions was used for all outputs.
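The primary/secondary rule scheme can be sketched as follows, under simplifying assumptions: singleton consequents (low = 0.1, med = 0.5, high = 0.9) stand in for the thesis's triangular and trapezoidal output sets, the triangular antecedent sets are made-up values rather than the histogram-derived functions, min() implements the AND, and the rule weights enter a weighted-average defuzzification.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Illustrative antecedent sets for 'meat' and 'clean' pixels (hue, sat, int);
# the real functions were derived from histograms of classified pixels.
MF = {
    "meat":  {"hue": (0.0, 0.08, 0.2), "sat": (0.4, 0.7, 1.0), "int": (0.3, 0.6, 0.9)},
    "clean": {"hue": (0.1, 0.25, 0.5), "sat": (0.0, 0.1, 0.3), "int": (0.6, 0.85, 1.0)},
}
LOW, MED, HIGH = 0.1, 0.5, 0.9   # singleton consequent values (an assumption)

def meat_confidence(hue, sat, inten):
    """Weighted-average defuzzification of the primary and secondary meat rules."""
    deg = {t: {k: tri(v, *MF[t][k]) for k, v in
               zip(("hue", "sat", "int"), (hue, sat, inten))} for t in MF}
    m, c = deg["meat"], deg["clean"]
    rules = [  # (firing strength, consequent value, rule weight)
        (min(m["hue"], m["sat"], m["int"]), HIGH, 1.0),  # primary: is meat
        (min(c["hue"], c["sat"], c["int"]), LOW, 1.0),   # primary: is clean
        (min(m["hue"], m["sat"]), MED, 0.5),             # secondary rules:
        (min(m["sat"], m["int"]), MED, 0.5),             # any two inputs
        (min(m["int"], m["hue"]), MED, 0.5),             # match meat
    ]
    num = sum(s * w * v for s, v, w in rules)
    den = sum(s * w for s, v, w in rules)
    return num / den if den > 0 else 0.0
```

With these made-up sets, a meat-like pixel scores above 0.5 and a clean-flange-like pixel well below it; note that with weighted-average defuzzification the secondary rules pull fully-fired primary confidences toward the medium value, one reason confidences near 0 and 1 are unreachable.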
The functions describing the membership of the low, medium and high confidence levels are shown in Figure 5.9.

Figure 5.9: Output membership functions defining the low (- -), medium (—) and high (- •) confidence levels for each defect type.

The membership functions defining the antecedent (input) hue, saturation and intensity fuzzy sets for each defect type were created using histograms. Typically, 700 manually selected pixels of each defect type were used to generate the histograms. The frequency of a particular value in a histogram was interpreted as the degree of membership in the corresponding set. The hue, saturation (sat) and intensity (int) histograms for each defect type were normalized independently. In some cases, a membership function had to be tuned manually. The reasoning for these adjustments is discussed below.

Production Images

As with the RBFN approach, only meat defects were considered for the production images. Figure 5.10 shows the membership functions defining the hue, saturation and intensity of meat and clean flange pixels. This application of the fuzzy logic approach required no tuning of the membership functions. The rules used to classify the pixels are given in Table 5.1. The high and low confidence thresholds were set to 0.7 and 0.5 for this application.

Table 5.1: Fuzzy logic rules used for the production images.
#   Rule                                                                              Weight
1   IF hue IS meat-hue AND sat IS meat-sat AND int IS meat-int THEN meat IS high      1.0
2   IF hue IS clean-hue AND sat IS clean-sat AND int IS clean-int THEN meat IS low    1.0
3   IF hue IS meat-hue AND sat IS meat-sat THEN meat IS med                           0.5
4   IF sat IS meat-sat AND int IS meat-int THEN meat IS med                           0.5
5   IF int IS meat-int AND hue IS meat-hue THEN meat IS med                           0.5

Figure 5.10: Membership functions defining the (a) hue, (b) saturation and (c) intensity for meat (- •) and clean (••) pixels in the production images.

Laboratory Images

As in the RBFN approach, meat, dark skin and light skin were classified. The rules used to classify the laboratory flange pixels are given in Table 5.2, while the membership functions are defined in Figure 5.11. The high and low confidence thresholds used were 0.6 and 0.45, respectively.

The intensity membership function for light skin had to be tuned. As can be seen in Figure 5.11, the original membership function had a very narrow range (0.95-1.0) with a high degree of membership. This means that the implication of the rule was rarely significant in the defuzzification. Since no other membership function had a high degree of membership in the intensity range from 0.8 to 1.0, the light-skin membership function could be extended to cover this range without affecting the classification of the other defects. This modification improved the classification of light skin pixels.

Table 5.2: Fuzzy logic rules used for the laboratory images.
#    Rule                                                                              Weight
1    IF hue IS meat-hue AND sat IS meat-sat AND int IS meat-int THEN meat IS high      1.0
2    IF hue IS meat-hue AND sat IS meat-sat AND int IS meat-int THEN dskin IS low      1.0
3    IF hue IS meat-hue AND sat IS meat-sat AND int IS meat-int THEN lskin IS low      1.0
4    IF hue IS dskin-hue AND sat IS dskin-sat AND int IS dskin-int THEN dskin IS high  1.0
5    IF hue IS dskin-hue AND sat IS dskin-sat AND int IS dskin-int THEN meat IS low    1.0
6    IF hue IS dskin-hue AND sat IS dskin-sat AND int IS dskin-int THEN lskin IS low   1.0
7    IF hue IS lskin-hue AND sat IS lskin-sat AND int IS lskin-int THEN lskin IS high  1.0
8    IF hue IS lskin-hue AND sat IS lskin-sat AND int IS lskin-int THEN meat IS low    1.0
9    IF hue IS lskin-hue AND sat IS lskin-sat AND int IS lskin-int THEN dskin IS low   1.0
10   IF hue IS clean-hue AND sat IS clean-sat AND int IS clean-int THEN meat IS low    1.0
11   IF hue IS clean-hue AND sat IS clean-sat AND int IS clean-int THEN dskin IS low   1.0
12   IF hue IS clean-hue AND sat IS clean-sat AND int IS clean-int THEN lskin IS low   1.0
13   IF hue IS meat-hue AND sat IS meat-sat THEN meat IS med                           0.5
14   IF sat IS meat-sat AND int IS meat-int THEN meat IS med                           0.5
15   IF int IS meat-int AND hue IS meat-hue THEN meat IS med                           0.5
16   IF hue IS lskin-hue AND sat IS lskin-sat THEN lskin IS med                        0.5
17   IF sat IS lskin-sat AND int IS lskin-int THEN lskin IS med                        0.5
18   IF int IS lskin-int AND hue IS lskin-hue THEN lskin IS med                        0.5
19   IF hue IS dskin-hue AND sat IS dskin-sat THEN dskin IS med                        0.5
20   IF sat IS dskin-sat AND int IS dskin-int THEN dskin IS med                        0.5
21   IF int IS dskin-int AND hue IS dskin-hue THEN dskin IS med                        0.5

Figure 5.11: Membership functions defining the (a) hue, (b) saturation and (c) intensity for meat (- •), dark skin (—), light skin (- -) and clean (••) pixels in the laboratory images. The modified light skin intensity membership function is shown as (-x-).
5.3.3 Flange Defect Classification Examples

Figures 5.12 and 5.13 contain examples of flanges that have been analyzed using both of the approaches discussed above. Each row of these figures contains two sub-figures of the same flange. The left-hand one was generated using RBFN classification, while the right-hand one was generated using fuzzy logic classification. At the top of each sub-figure is an image of the flange profile. Below the images are plots of the confidence level for each defect type. The blue bars on the images mark the limits of the manually measured width, while the bars on the plots mark the limits of the defects found using hysteresis thresholding of the confidence levels for each defect type.

In Figure 5.12, both approaches show high contrast between meat and clean flange pixels. The range of the fuzzy logic confidence levels is smaller due to the defuzzification method, but this limitation does not affect the results. Confidence values close to 0 and 1 were not possible because the centroids of the high and low confidence membership functions lie within this range.

The separation between meat pixels and clean flange pixels in Figure 5.13 was not as good as in Figure 5.12. This is probably due to the fact that the flange in the laboratory images had a golden hue. There was better contrast between skin pixels and clean flange pixels in this case. Shadows created by some defects were interpreted as dark skin defects with small widths.

Figure 5.12: Examples of pixel classifications and defect width measurements from the production data set. Meat defect with a width of 67 pixels: (a) RBFN width = 73; (b) Fuzzy logic width = 71.
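The hysteresis thresholding used to turn per-pixel confidence levels into defect widths can be sketched as follows. This is an assumed reconstruction, not the thesis code: a run of pixels above the low threshold is kept as a defect only if it also contains at least one pixel above the high threshold.

```python
def defect_widths(confidence, low=0.5, high=0.9):
    """Return (start, width) for each defect in a 1-D list of confidence values."""
    defects, start, peak = [], None, 0.0
    for i, c in enumerate(confidence + [0.0]):   # sentinel terminates a final run
        if c > low:
            if start is None:
                start, peak = i, c
            peak = max(peak, c)
        elif start is not None:
            if peak > high:                      # run confirmed by the high threshold
                defects.append((start, i - start))
            start = None
    return defects

profile = [0.1, 0.6, 0.95, 0.8, 0.6, 0.2, 0.6, 0.7, 0.1]
# The first run (indices 1-4) peaks at 0.95 and is kept, with width 4;
# the second run (indices 6-7) never exceeds the high threshold and is rejected.
```

Keeping the low threshold well below the high one is what prevents a single defect with a dip in confidence from being split into several narrower defects.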
Figure 5.13: Examples of pixel classifications and defect width measurements from the laboratory data set. Meat defect with a width of 25 pixels: (a) RBFN width = 31; (b) Fuzzy logic width = 31. Dark skin defect with a width of 49 pixels: (c) RBFN width = 53; (d) Fuzzy logic width = 52. Light skin defect with a width of 31 pixels: (e) RBFN width = 36; (f) Fuzzy logic width = 35.

5.4 Cross-Pack Defect Classification

In Section 4.3, a technique for separating and labelling dark and light regions with low saturations in the can fill was presented. Ideally, the area of these regions would be directly related to the area of the can fill covered with cross-packed skin. Unfortunately, the dark regions could be either dark cross-pack or shadows, and the light regions could be either light cross-pack or low saturation (whitish) meat. To accurately measure the cross-pack area, the labelled regions must be classified into two groups: regions which are cross-packed (CP) and regions which are not (NCP). The total area of cross-pack is the sum of the light and dark cross-pack region areas.

Two approaches to measuring the cross-pack area were considered. In the first approach, "Classify and Sum", RBFNs were trained to classify the regions based on their properties.
In the second approach, "Label and Sum", it was assumed that the NCP regions were insignificant in area when compared to the CP regions, and all the labelled regions were used to compute the total area of the cross-pack. The performance of these two approaches is evaluated and compared in Chapter 6.

5.4.1 Property Selection

Nine region properties were calculated for each region, as discussed in Section 4.3.5. Although the NCP regions were labelled using the same thresholds as the CP regions, some differences in the properties were anticipated, since the two were distinguishable to the human eye. All the regions (726 in total) in the cross-pack image data were manually classified as CP or NCP. The region property histograms of each region type (dark and light) were compared to determine which properties were significantly different.

As shown in Figure 5.14, four region properties had significantly different histograms for dark regions. Dark CP regions were larger than shadow regions. Regions created by shadows were concentrated near the flange. Dark CP regions had higher mean intensities and had few pixels with an undefined hue. Only two properties had significantly different histograms for light regions (Figure 5.15): light CP regions were significantly larger than NCP regions, and the histogram showing the percentage of undefined hue also shows some differences. The other region property histograms for each region type were essentially identical.

Figure 5.14: Feature histograms for dark regions: (a) size of region; (b) percentage of region pixels with an undefined hue; (c) mean intensity of region pixels; (d) radial position of the region centroid. Regions manually classified as dark CP (- -) and as dark NCP (—). (x)'s are the centers of the histogram bins.
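The property-selection step above can be sketched as follows. As an assumption, a histogram-intersection score stands in for the thesis's visual comparison of the histograms, with low overlap marking a property as discriminative; the region property samples are synthetic.

```python
import numpy as np

def histogram_overlap(a, b, bins=10, value_range=(0.0, 1.0)):
    """Intersection of two normalized histograms: 1 = identical, 0 = disjoint."""
    ha, _ = np.histogram(a, bins=bins, range=value_range)
    hb, _ = np.histogram(b, bins=bins, range=value_range)
    return float(np.minimum(ha / ha.sum(), hb / hb.sum()).sum())

rng = np.random.default_rng(2)
# "Size" separates CP from NCP well here; a made-up "eccentricity" does not.
size_cp, size_ncp = rng.uniform(0.05, 0.30, 200), rng.uniform(0.00, 0.06, 200)
ecc_cp, ecc_ncp = rng.uniform(0.2, 0.8, 200), rng.uniform(0.2, 0.8, 200)

overlap_size = histogram_overlap(size_cp, size_ncp)   # low -> discriminative
overlap_ecc = histogram_overlap(ecc_cp, ecc_ncp)      # high -> uninformative
```

Properties whose CP and NCP histograms overlap heavily would be excluded from the classifier inputs, mirroring the selection of four dark-region and two light-region properties above.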
5.4.2 Classify and Sum Approach

Using the properties selected for each region type as inputs, one RBFN was trained to distinguish between dark CP and dark NCP regions, while another network was trained to make the same classification for light regions. Each network had a single output, with a desired value of 1 for CP regions and 0 for NCP regions. Training and evaluation data sets, with 56 training samples each, were generated for the dark and light region networks. An equal number of CP and NCP regions were randomly selected from the set of all regions for each data set.

The optimum values for the spread constant and the SERR were determined using the same optimization procedure used for the flange defect RBFNs. The optimum parameter values, the range of values evaluated and the resulting network sizes are tabulated in Table 5.3. When compared to the dark regions, the distinction between the light CP and light NCP regions was not very good and required many nodes with small spread constants.

Figure 5.15: Feature histograms for light regions: (a) size of region; (b) percentage of region pixels with an undefined hue. Regions manually classified as light CP (- -) and as light NCP (—). (x)'s are the centers of the histogram bins.

Table 5.3: RBFN optimization parameter values and network sizes for light and dark regions.

                  RBFN for Dark Regions          RBFN for Light Regions
                  SC        SERR     # Nodes     SC        SERR      # Nodes
Optimum Value     0.5       3        6           0.06      5.6       20
Evaluated Range   0.05-2.0  0.1-4.0  -           0.01-0.4  0.4-16.0  -
Step Size         0.05      0.1      -           0.01      0.4       -

The classification accuracy of the networks trained using the optimum parameter values is given in Table 5.4. The output of the networks was thresholded at 0.5 to determine whether a region was CP or NCP.
Correct classification refers to CP classified as CP and NCP classified as NCP; false positive refers to NCP classified as CP; and false negative refers to CP classified as NCP. As expected, the classification accuracy of the dark region network was greater than that of the light region network. The light region network was significantly better at classifying NCP regions.

The next step of this approach is to sum the areas of the regions classed as CP to determine the total cross-pack area in a can. This area is compared to the cross-pack area threshold determined in Chapter 3, to determine whether a can is defective.

Table 5.4: RBFN evaluation data set classification accuracy for light and dark regions.

Classification    RBFN for Dark Regions    RBFN for Light Regions
Correct           84.6%                    75%
False Positive    7.7%                     5.8%
False Negative    7.7%                     19.2%

5.4.3 Label and Sum Approach

If all the labelled regions are assumed to be CP, then the total area of cross-pack can be determined by summing the labelled areas. A preliminary investigation in the previous section showed that using the calculated region properties would make it difficult to achieve a high classification accuracy. Overall, CP regions had larger areas than NCP regions; therefore, the incorrect classification of large light and dark CP regions could result in a defective can being classified as non-defective. Assuming that all labelled regions are CP will not introduce much error, since on average NCP regions only make up 12.4% of the total area of the regions in cans considered to contain a cross-pack defect.

5.5 Summary

In this chapter, RBFN and fuzzy logic classification were introduced and applied to the classification of flange pixels. A form of hysteresis thresholding was applied to the classification results to determine the widths of flange defects. The measured widths will be used in the next chapter to determine whether a can is defective.
The performance of the two classification approaches will also be compared. RBFN classification was also considered as a method of classifying light and dark regions as CP or NCP, to more accurately determine the total CP area. This approach will be compared, in the next chapter, with simply assuming that all the labelled regions are CP.

Chapter 6

Evaluation

In this chapter, the feature extraction algorithms presented in Chapter 4 and the feature classification approaches presented in Chapter 5 are evaluated. The success rate, calibration and execution time of the feature extraction algorithms are evaluated and discussed, as are the classification accuracy and computational cost of each classification approach. The suitability of each classification approach for integration into the Can Defect Detection System is also discussed, and potential optimizations to all the algorithms are considered.

6.1 Feature Extraction

This section evaluates the feature extraction algorithms presented in Chapter 4. Only the success rate of the can-locating algorithm was formally evaluated; for the remainder of the algorithms, the calibration issues are discussed and the execution times are reported. All these algorithms were implemented in the C programming language, which allowed for the measurement of on-line execution times.

6.1.1 Can Location

The success rate of the can-locating algorithm was determined by manually measuring the error at four points A, B, C and D, as shown in Figure 6.1. The error is defined as the radial distance from the calculated position of the flange to the actual centre-line of the flange. For a 100-can sample from the production data set, the mean error at the four positions is given in Table 6.1.

Figure 6.1: Position error evaluation points A, B, C and D. The magenta line is the calculated flange position. For this can the errors at each point are: A = 1, B = 0, C = 0 and D = 0.
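The error measure and failure criterion used here can be sketched as follows, assuming the error at each of A, B, C and D is the signed radial distance from the calculated flange circle to a manually marked point on the actual flange centre-line; the coordinates below are made up.

```python
import math

def radial_error(center, radius, point):
    """Signed radial distance (pixels) from the calculated circle to a point."""
    return math.hypot(point[0] - center[0], point[1] - center[1]) - radius

def locates_ok(center, radius, points, tol=1.0):
    """A can counts as a failure if the error magnitude exceeds 1 pixel anywhere."""
    return all(abs(radial_error(center, radius, p)) <= tol for p in points)

# Hypothetical calculated circle and four marked flange points (A, B, C, D).
center, radius = (320.0, 240.0), 100.0
abcd = [(320.0, 140.5), (420.2, 240.0), (320.0, 340.0), (219.0, 240.0)]
located = locates_ok(center, radius, abcd)
```

The 99% success rate reported below is then just the fraction of sampled cans for which `locates_ok` would hold.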
Table 6.1: Flange position errors in pixels at four points on the flange.

Flange point    A       B       C       D
Mean error      0.09    0.31    0.61    0.20

These errors resulted in one failure of the can-locating algorithm, where a failure is defined as an error of magnitude larger than 1 pixel at any of the four points. The single failure was due to many meat flange defects breaking up the inside edge of the flange into smaller edges, which were then rejected by the algorithm. The algorithm did not fail for any non-defective cans and had a 99% success rate overall.

The success rate of the algorithm was not evaluated for the laboratory data set. However, during the development of the Can Defect Detection System a success rate of approximately 95% was observed. These failures were attributed to a number of causes. A large cross-pack can cause a large error in the initial can location estimate, which results in an offset of the first annulus. A flange defect at point A, B, C or D can cause the radius adjustment step, which corrects for non-square pixels, to fail. The roundness of a can also affects the success rate: cans which had been dropped or dented were not located properly. Since, in general, failure of the algorithm is related to can defects, cans which fail the algorithm could simply be classified automatically as defective. Currently this algorithm cannot detect all failure modes, such as small can centre location errors, but this functionality could be added.

Can Location Algorithm Calibration

In order for the algorithm to function optimally, it must be properly calibrated. Changes to the system that vary the size of a can in an image require recalibration of a number of parameters, including the inside flange radius and the flange thickness. For example, the laboratory images used a different type of can than those in the production images, i.e., one with a different inside flange radius. Careful measurement of this value was required for the Hough transform.
The inside flange radius was determined by taking half the mean of the vertical (from A to C) and horizontal (from B to D) diameters for five cans. It was not necessary to change the annulus radii for small changes in can image size (1%-3%), but this parameter would require modification for larger changes. The Canny edge detector parameters did not require adjustment for different setups. However, a new background image was required whenever the background changed. Each of these adjustments could be automated through a calibration procedure performed at the beginning of a production run.

6.1.2 Flange Profile Extraction

Extraction of the flange profile depends on accurate location of the can centre. If the can-locating algorithm fails, the flange extraction fails also. The only additional parameter required for flange profile extraction is the width of the flange. The calibration of this parameter is performed manually using the same method used for the inside flange radius, described in Section 6.1.1.

6.1.3 Cross-Pack Extraction

The cross-pack extraction also depends on the accuracy of the can-location algorithm, but not as heavily as the flange extraction. The fill radius was defined to be 2 pixels less than the radius of the inside flange in order to decrease the area of shadows. Thus, this algorithm can tolerate a greater error in the can position. The parameters of this algorithm that required calibration were the thresholds used to separate high and low saturation, and the thresholds used to separate light and dark pixels. These thresholds were optimized manually in Chapter 4 and would require modification for different salmon species. A more robust segmentation algorithm that would be less sensitive to changes and require less calibration would probably be more computationally intensive, and would not necessarily improve the final classification of defective cans.
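The inside-flange-radius calibration just described can be sketched as a few lines of arithmetic; the five diameter measurements are illustrative values in pixels, not thesis data.

```python
def inside_flange_radius(diameters):
    """Half the mean of the per-can mean (vertical, horizontal) diameters."""
    mean_diameters = [(v + h) / 2.0 for v, h in diameters]
    return sum(mean_diameters) / (2.0 * len(mean_diameters))

# Hypothetical A-C (vertical) and B-D (horizontal) diameters for five cans.
five_cans = [(200.4, 199.8), (200.0, 200.2), (199.6, 200.0),
             (200.2, 199.8), (200.0, 200.0)]
flange_radius = inside_flange_radius(five_cans)
```

Averaging both axes over several cans smooths out the non-square-pixel effect that the radius adjustment step otherwise has to correct.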
The error in the cross-pack area measurement was not evaluated, due to the large amount of time required to manually label each pixel in a number of images.

6.1.4 Execution Time

The execution time of the individual steps of the feature extraction algorithms was measured. The times in Table 6.2 were measured using C code compiled under Linux using gcc (version 2.7.2 with options -O2 -m486) and executed on a Pentium 133 MHz computer. The times shown are the average for 10 cans. Possible optimizations and other methods for reducing the execution time are discussed in Chapter 7.

Table 6.2: Execution times for feature extraction algorithms.

Operation               Time (s)    %
Position Estimate       0.009       0.5
Canny Edge Detector     1.366       79.0
Hough Transform 1       0.033       1.9
Hough Transform 2       0.031       1.8
Stretch Circle          0.003       0.2
Flange Profile          0.035 (a)   2.0
Can fill RGB to HSI     0.098       5.7
Region Labelling        0.118 (b)   6.8
Region Properties       0.036 (b)   2.1
Total                   1.729       100.0

(a) Time to extract a 3-line profile; the 21-line profile used for many figures takes 7 times as long.
(b) Time depends on the number of regions.

6.2 Flange Defect Classification

In Chapter 5, flange profile pixels were classified using RBFN and fuzzy logic approaches. These two approaches can be directly compared because they produce the same type of output. The performance of these approaches is evaluated on two levels. First, errors in the calculated defect widths are compared. Second, and more importantly, the critical flange defect classification accuracies are compared. Since a significant error in the calculated width could result in an incorrect defect classification, the latter is used as the sole performance evaluator.

6.2.1 Flange Defect Width Error

The root-mean-square (RMS) width errors for the production data (meat defects only) are given in Table 6.3; those for the laboratory data (meat, light skin and dark skin defects) are given in Table 6.4.
The width error is defined as the difference between the calculated width and the manually measured width. The RMS error was used because the errors were both positive and negative and there was a wide range of errors; squaring the errors weights the large, more significant errors more heavily. The larger errors were caused by one large defect being split into two smaller defects. The error values were mixed, and neither classification approach had consistently lower error values.

Table 6.3: Defect width RMS errors for the production data (pixels).

               Flange Classification Approach
Defect Type    RBFN    Fuzzy
Meat           8.8     13.3

Table 6.4: Defect width RMS errors for the laboratory data (pixels).

               Flange Classification Approach
Defect Type    RBFN    Fuzzy
Meat           15.7    9.9
Light skin     7.4     12.7
Dark skin      10.2    8.6

6.2.2 Classification Accuracy

Table 6.5 gives the classification accuracy of each approach using the production data set, while Tables 6.6 and 6.7 give the accuracies for the laboratory data set. The entries in the tables are the percentage of cans in each class that were correctly classified. The first column of each table is the width above which a defect was considered critical. In all cases the clean flange definition was a
An increase in width error for meat defects resulted in decreased classification accuracy of cans with critical meat flange defects. For the production data set, RBFN had lower width errors and higher classification accuracy, while for the laboratory data set the same was true for the fuzzy logic approach. Decreasing the critical width increased the critical flange defect classification accuracy, but also increased the number of false positives (non-defective cans classified as defective). A decreased width error did not increase skin defect classification accuracy. The RBFN had better classification accuracy for the skin defects. Although the fuzzy logic approach incorrectly classified three skin defects, these errors did not result in the incorrect classification of defective cans because the skin defects were classified as other critical defects. There were other false positives, such as classifying the shadow caused by a meat defect as dark skin (Figure 5.13), but this did not result in the incorrect classification of any non-defective cans. No cans with completely clean flanges were incorrectly classified. Therefore, overall, the two approaches had the same classification accuracy of defective cans containing skin flange defects. Table 6.5: Comparison of meat defect classification for production data. Width Threshold (pixels) Defective Non-Defective Critical Flange Defect 24 cans Dirty Flange 219 cans Clean Flange 12 cans RBFN Fuzzy RBFN Fuzzy RBFN Fuzzy >30 91.7% 79.2% 98.2% 96.8% 100.0% 100.0% >25 95.8% 79.2% 93.2% 89.5% 100.0% 100.0% >20 100.0% 87.5% 79.9% 79.0% 100.0% 100.0% CHAPTER 6. EVALUATION 74 Table 6.6: Comparison of meat defect classification for laboratory data. 
Table 6.6: Comparison of meat defect classification for laboratory data.

Width              Defective                  Non-Defective
Threshold          Critical Flange Defect     Dirty Flange        Clean Flange
(pixels)           18 cans                    7 cans              25 cans
                   RBFN      Fuzzy            RBFN      Fuzzy     RBFN      Fuzzy
>30                83.3%     94.4%            57.1%     42.9%     100.0%    100.0%
>25                83.3%     94.4%            57.1%     42.9%     100.0%    100.0%
>20                88.9%     100.0%           57.1%     42.9%     100.0%    100.0%

Table 6.7: Comparison of skin defect classification for laboratory data.

Width Threshold    Dark Skin Defect (25 cans)    Light Skin Defect (25 cans)
(pixels)           RBFN       Fuzzy              RBFN       Fuzzy
>0                 100.0%     96.0% (a)          100.0%     92.0% (b)

(a) One defect was classed as light skin.
(b) One defect was classed as dark skin and one was classed as meat.

6.2.3 Execution Time

The flange defect classification approaches were implemented in MATLAB, so the execution times in Table 6.9 cannot be directly compared with the C-code execution times in Table 6.2. The execution times in Table 6.9 are the average times to classify all 720 pixels in a flange profile. The fuzzy logic execution times are significantly longer than those for the RBFN approach because the fuzzy logic implementation uses many for-loops, which are inefficient in MATLAB. However, the two approaches can be compared using the flops command, which returns the number of floating-point operations (FLOPS) executed. Using this command, it was found that the fuzzy logic approach required 4 to 5 times as many operations. Both approaches required more operations for the laboratory images because classifying three defect types required more nodes (RBFN) and more rules (fuzzy logic).

One of the reasons cited for using fuzzy logic was its computational efficiency. The inefficiency observed herein is due to the discretization of the membership functions: MATLAB appears to sample each membership function at 181 values, and no method for changing this number was found. Reducing the number of samples would increase the efficiency and may not greatly affect the inferences made.
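The discretization point can be illustrated with a sketch: centroid defuzzification of a clipped triangular output set sampled at 181 points (as MATLAB appears to do) versus a much coarser 19 points. The set's shape and clipping level here are assumptions, chosen only to show that the computed centroid barely moves.

```python
import numpy as np

def centroid(n_samples, clip=0.7, peak=0.9):
    """Centroid of a triangular set on [0,1] peaking at `peak`, clipped at `clip`."""
    x = np.linspace(0.0, 1.0, n_samples)
    mu = np.minimum(np.where(x < peak, x / peak, (1.0 - x) / (1.0 - peak)), clip)
    return float((x * mu).sum() / mu.sum())

fine = centroid(181)       # MATLAB-like resolution
coarse = centroid(19)      # roughly a tenth of the samples
difference = abs(fine - coarse)
```

Since defuzzification cost scales with the number of sample points, a coarser grid would cut the per-pixel work substantially for a small shift in the inferred confidence.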
Table 6.8: Overall classification accuracy for production data.

Width        Classification         False                False
Threshold    Accuracy               Positive             Negative
(pixels)     RBFN      Fuzzy        RBFN      Fuzzy      RBFN      Fuzzy
>30          97.7%     95.3%        1.6%      2.0%       0.8%      2.0%
>25          93.7%     89.0%        5.9%      9.8%       0.4%      2.0%
>20          82.8%     80.8%        17.3%     18.0%      0.0%      1.2%

Table 6.9: Average flange defect classification execution times.

                      Production Images      Laboratory Images
                      RBFN      Fuzzy        RBFN      Fuzzy
MFLOPS (a)            0.118     0.540        0.883     2.52
Execution Time (s)    0.186     33.7         0.765     94.5

(a) Millions of FLOPS.

6.2.4 A Comparison of RBFN and Fuzzy Logic Classification

Although the fuzzy logic approach had superior performance on the more complex laboratory data set, it is not necessarily the best approach when the other factors are considered. The fuzzy logic approach will require significantly more computation unless it can be implemented more efficiently. While tuning the membership functions led to higher classification accuracies, the tuning process requires more user intervention than training the RBFNs. Each approach will likely require training or tuning for salmon of different species or quality. Therefore, overall, the RBFN approach is superior for this application.

6.3 Cross-Pack Defect Classification

Light and dark regions in the can fill were classified as CP or NCP, as described in Chapter 5, using the RBFN and "Label and Sum" approaches. These two approaches are compared using the error in the calculated area of the cross-pack regions. The overall accuracies in classifying cans with serious cross-pack defects are also compared.

6.3.1 Cross-Pack Area Error

Table 6.10 contains the average CP area error, as a percentage of the total fill area, for the RBFN and "Label and Sum" approaches. In general, the "Label and Sum" approach has smaller area errors. The "Label and Sum" approach always overestimated the CP area because all regions were assumed to be CP, while incorrect classification of the larger CP regions consistently caused the RBFN approach to underestimate the total CP area.
Table 6.10: Mean cross-pack area measurement errors for the classification approaches.

                 RBFN Classification       Label and Sum
Can Type         Light CP    Dark CP     Light CP    Dark CP
Non-defective      -0.9%      -1.0%        0.4%       1.6%
Defective          -1.4%      -2.2%        0.4%       2.1%

6.3.2 Classification Accuracy

The accuracies of classifying the cans in the cross-pack data set, both cans with serious cross-pack defects and non-defective cans, are shown in Table 6.11, while the overall classification accuracy is shown in Table 6.12. In each case the cans are classified by comparing the total CP area, as determined by the region classification approach used, with the threshold value in the first column. The CP area is measured as a percentage of the total can fill area. The columns labelled "Manual" refer to the manual classification of the light and dark regions as CP and NCP, not the manual estimation of CP area used to determine the cross-pack area threshold in Chapter 3.

Applying the threshold value of 12.5%, only 68.8% of the defective cans were correctly classified using manual classification of the regions. This indicates that this threshold value was probably too large. Therefore, the CP area threshold was reduced to 10.0%, and further to 7.5%, in an attempt to improve the classification accuracy of the defective cans. Reducing the threshold had the side effect of increasing the number of false positives; however, the overall classification accuracy of the RBFN approach was maximized at 10.0%. The label and sum approach was better for classifying defective cans because it overestimated the CP area, while RBFN classification had better overall classification with fewer false positives.

Table 6.11: Cross-pack classification accuracy for each defect type.
Cross-Pack    Cross-Pack Defects (16 Cans)          Non-Defective (256 Cans)
Threshold     RBFN    Label and Sum   Manual      RBFN    Label and Sum   Manual
  12.5%       50.0%       75.0%       68.8%      99.6%       96.9%        99.6%
  10.0%       75.0%       93.8%       87.5%      98.4%       91.8%        98.1%
   7.5%       87.5%      100.0%       93.8%      96.5%       80.5%        95.7%

Table 6.12: Overall cross-pack defect classification accuracy.

CP Thresh.    Class. Acc.           False Positive
              RBFN    L and S       RBFN    L and S
  12.5%       96.7%    95.6%        0.4%     2.9%
  10.0%       97.1%    91.9%        1.5%     7.7%
   7.5%       96.0%    81.6%        3.3%    18.4%

6.3.3 Execution Time

On average, only 2.7 regions had to be classified per can. The computational cost of classifying the regions was not considered because it would be insignificant in comparison to the number of computations required for the RGB-to-HSI transformation and region labelling. Thus, the execution time for the region classification is not significant.

6.3.4 A Comparison of RBFN Classification and Label and Sum

Overall, the RBFN classification is superior to the "Label and Sum" approach because it produces fewer false positives. However, the "Label and Sum" approach would be better if it were desirable to select out as high a proportion of the defective cans as possible. Fuzzy logic may be more convenient than RBFN if the definition of a cross-pack defect were modified to include a measure of severity, which would incorporate other features used by QC experts, such as the position and colour of the cross-pack. In this case, generating an unbiased set of training samples representative of the full range of cross-pack feature values and corresponding severities would be difficult. Generating a rulebase based on more detailed consultations with industry workers may be easier and could perhaps lead to improved cross-pack defect detection.

6.4 Semi-Automated Quality Assurance Workcell Integration

It is important to evaluate the classification approaches in the context of the semi-automated quality assurance workcell.
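As a reference point for the integration discussion, the per-can decision rule evaluated in Tables 6.11 and 6.12, and the summary statistics of Tables 6.8 and 6.12, reduce to simple comparisons. A minimal sketch (the function names, and the assumption that rates are reported as fractions of all cans inspected, are mine, not the thesis code):

```c
#include <assert.h>

/* A can is flagged as cross-pack defective when its measured CP area,
 * expressed as a percentage of the total fill area, meets or exceeds
 * the chosen threshold (12.5%, 10.0% or 7.5% in Table 6.11). */
static int is_cp_defective(double cp_area_pct, double threshold_pct)
{
    return cp_area_pct >= threshold_pct;
}

/* Per-can summary rates from confusion counts (tp = defective cans
 * flagged, fp = non-defective cans flagged, and so on), assuming the
 * rates are expressed as fractions of all cans inspected. */
struct rates { double accuracy, false_pos, false_neg; };

static struct rates summarize(int tp, int fp, int tn, int fn)
{
    double n = (double)(tp + fp + tn + fn);
    struct rates r;
    r.accuracy  = (tp + tn) / n;
    r.false_pos = fp / n;
    r.false_neg = fn / n;
    return r;
}
```

Lowering the threshold moves cans from the false-negative column to the flagged side, which is why the tables show the false-positive rate rising as the threshold drops.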
Allin [7] simulated and evaluated the current patching table and developed a number of automation alternatives for a quality assurance workcell. The most economically viable alternative was a vision/trimmer system. The trimming machine, developed by Marco Marine, is already in use on a few canning lines. It cleans the flanges of all cans leaving the filling machine. The proposed vision/trimmer system would use a vision system to detect flange defects and route defective cans from two canning lines to a single trimming machine, thus saving the cost of an additional trimming machine. All flange defects must be detected and repaired. This is possible for meat and skin flange defects if the meat width threshold is lowered to 20 pixels from the 30-pixel threshold (4.2% of the flange) defined in Chapter 3. Lowering the threshold would result in more false positives, but this would not be a problem, as the trimmer has the capacity to handle up to 50% of the cans from each line; therefore, an increase in the false positive rate could be absorbed by the system. The RBFN classification only detects meat and skin flange defects; the vision/trimmer system would also require the detection of bone defects and flange damage.

Cross-pack defects are appearance defects and are not a good candidate for automated repair due to the dexterity required to correct the defects [7]. However, automated detection and routing of these defects to the workers could result in savings of worker time spent scanning the production line for such defects. Detecting 100% of the defective cans would require lowering the CP area threshold to 7.5% and using the "Label and Sum" approach. This approach would produce a large number of false positives and would increase the workload at the patching table. Using the RBFN approach with a threshold of 10.0% would be a better choice.
Using this approach resulted in 75% classification accuracy for defective cans, compared to the 60% patching rate observed on the current patching table [7].

6.5 Potential Optimizations

Including flange pixel classification, the total process would require more than 2 seconds per can using the current algorithms and hardware. A number of possible optimizations for reducing the time required to detect flange and cross-pack defects are detailed below:

Edge detection optimizations: Currently, the Canny edge detector is applied to the entire image, but only the edge information from an annulus around the flange is used. The largest annulus used was approximately 20% of the image. Designing the edge detector so that it is only applied to the annulus would reduce the number of computations required by a factor of five. Since this operation takes almost 80% of the total execution time, the savings could be significant.

Flange defect detection optimizations: Since bone defects were not considered and the other defects typically had widths greater than 10 pixels, the angular resolution of the flange profiles could be reduced by a factor of four without adversely affecting the detection of defective cans. This would decrease the required number of computations and time by the same factor.

Cross-pack defect detection optimizations: At the resolution used, the can fill area is discretized into approximately 25000 pixels. This is much higher precision than required for this application. Reducing the can fill image by a factor of two in each dimension would reduce the number of computations related to labelling the regions by a factor of four, and is unlikely to affect the overall classification accuracy of defective cans.

Hardware improvements: Since this project started, x86-based processors have increased in speed. Using the iCOMP Index 2.0 benchmark figures from Intel's website, a Pentium II 300 MHz is three times faster than the Pentium 133 MHz.
Therefore, using a Pentium II 300 MHz computer would result in a threefold increase in speed, assuming there are no memory bottlenecks or other factors that would limit the performance of the faster processor. It may also be possible to use a DSP board to do some or all of the vision computations. The performance increase resulting from using a DSP board will not be considered because it is difficult to estimate due to the different processor architecture.

Code cleanup and general improvement: Other improvements, such as precomputing values where possible and reducing type conversions, are estimated to result in a 10% performance increase overall.

Taking all of these improvements into account, the process would take 0.141 seconds per can using the production flange defect classification and 0.189 seconds using the laboratory classification. This would be sufficient for the current line speed of four cans per second, but may not be enough for future high speed canning systems. Higher speeds may be possible with a DSP board or more processors.

Chapter 7

Can Defect Detection System

A research prototype of the Can Defect Detection System (CDDS) has been designed and built. It will serve as a means to evaluate and demonstrate the machine vision algorithms that have been developed. The prototype will also be used as a test bed for future improvements. This chapter outlines the system's functional requirements, design and implementation.

7.1 Requirements

Figure 7.1 is a schematic diagram which shows the CDDS in relation to the proposed semi-automated salmon canning quality assurance workcell. In the future, the CDDS will be required to interface with the high speed weighing machine, control the can sorting mechanism and provide performance feedback to the filling machine. After consultations with a plant manager and other QC experts, the following requirements for the CDDS have been identified:

• Operate at line speed.
This is currently 4 cans/s, and in the future it may be as high as 10 cans/s.

• Identify all safety defects. These include flange obstructions and flange damage.

• Identify the more frequent appearance defects. The most frequent appearance defect is cross-pack; other appearance defects such as poor fill will not be addressed in the prototype system.

• Retrofit easily into the existing can-filling line.

• Meet health and safety standards for food processing equipment.

Figure 7.1: Semi-automated quality assurance workcell with the CDDS.

7.2 System Design

The CDDS consists of four major components: (i) the imaging hardware, (ii) the illumination system, (iii) the supporting frame, and (iv) the software. The imaging hardware for the CDDS was also used for the experimental data collection system. Most of the improvements made for the CDDS were in the illumination and software components of the system. The design and selection process for the components is outlined below; the specific details and various settings used are detailed in Appendix B.

7.2.1 Imaging Hardware

Three components were necessary for the imaging system: a host computer, a framegrabber and a CCD camera. Cost, flexibility, speed and the need for colour (RGB) images influenced the selection of the imaging hardware. A PC-based system was chosen since such systems generally cost less than other computer systems. A mid-range PC system was purchased for the project, and it is described in Appendix B. The requirement for flexibility in the image processing routines implied that a hardware-based image processing system with a fixed set of image processing functions was unsuitable for this project. The new PCI bus standard offered the potential to transfer live video to the host processor or another image processing board for processing.
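The line-speed requirement above fixes a hard per-can budget for the whole acquire-process-classify cycle; a small sketch of the arithmetic (the helper function is mine, not part of the CDDS code):

```c
#include <assert.h>
#include <math.h>

/* Per-can processing budget implied by the line speed: at r cans/s
 * the entire acquire-process-classify cycle must complete within
 * 1/r seconds, i.e. 250 ms at 4 cans/s and 100 ms at 10 cans/s. */
static double per_can_budget_ms(double cans_per_second)
{
    return 1000.0 / cans_per_second;
}
```

The optimized execution times estimated in Section 6.5 (0.141 s and 0.189 s per can) fit the 250 ms budget at 4 cans/s but exceed the 100 ms budget at 10 cans/s, which is consistent with the earlier conclusion that higher speeds would need a DSP board or more processors.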
At the time of the project's initiation (Summer of 1995), the only framegrabber available for purchase that supported RGB cameras and acquisition to host memory was the Coreco TCi-Ultra. This board also has many other advanced features which are not used. Since that time, many other framegrabber manufacturers have released similar products. The RGB camera selected was a JVC TK-1070U, as it was already available in the laboratory. The camera was equipped with a Cosmicar 16 mm 1:1.4 lens, a good general purpose lens for machine vision applications as it has a focal range of 0.25 m to infinity.

7.2.2 Hygienic Design

In [63], Timperley summarizes various standards and guidelines for designing food processing equipment. Most of the guidelines do not apply to this application because no part of the machine vision system comes in contact with the food. The guidelines state that the equipment should not cause product contamination, such as insufficiently secured nuts and bolts falling into the product. A possible source of contamination from the CDDS is glass from a broken light bulb. This contamination risk was eliminated by placing all light bulbs below the open cans.

Although the machine vision system does not come in contact with food, all surfaces in the cannery are washed (hosed down) regularly. For this reason, all water-sensitive devices must be protected in an industrial version of this vision system (NEMA 4 standard containment). The issue of waterproofing the components was not considered in the design of the research prototype.

7.2.3 Supporting Frame

A mounting frame was required to position the camera, can detector and illumination. Since this frame was intended for a prototype, it was designed to maintain flexibility and adjustability. An industrial version of this system should be designed for minimum size in order to integrate easily into existing can-filling lines. The frame shown in Figure 7.2 was designed to be placed over a conveyor.
The camera mount allows the camera to be adjusted from 25 to 33 cm from the base of the frame. This adjustment is required to change the scale of the can in the field of view. The size of the frame is larger than required, to allow for the possibility of redesigning the illumination. The photo in Figure 7.3 shows the frame mounted above the demonstration setup. In this setup, a slide is used in place of a conveyor to make the system more portable for demonstration purposes.

Figure 7.2: Schematic drawing of the frame and lighting tunnel mounted over a can conveyor.

Figure 7.3: Photograph of the CDDS mounted on the demonstration setup.

7.2.4 Can Detector

A convergent beam optical sensor was used to detect passing cans. For the most part, the response of this sensor was independent of an object's optical properties or surface orientation. These characteristics were required for the experimental setup. Typically, magnetic or inductive sensors are used to detect passing cans, but these sensors must be mounted under or on the side of a can conveyor and would have required modifications to the can-filling line. The convergent beam sensor could be mounted anywhere. In the demonstration configuration, the convergent beam optical sensor is mounted on the side of the slide. The sensor is connected to the host computer via the parallel port to trigger the acquisition and processing of an image.

7.2.5 Illumination

Illumination is an important aspect of any machine vision application. Visual enhancement of important object features via illumination can reduce the complexity of the computations required to detect those features. Although filters and coloured light have been used successfully in a number of machine vision applications [64], they were not considered for this application due to the wide range of features of interest.
For example, a colour filter could enhance the image of the red meat on the flange but would degrade the image of the skin. Diffuse uniform white light enhances the features equally and yields the greatest amount of information in a single image. Most of the machine vision applications reported in Chapter 2 used diffuse uniform white light. The applications that discuss how this illumination was achieved used empirical methods suited to their products. Davies [40] discusses some theoretical considerations for achieving uniform diffuse illumination. When the optical properties of the surface are of interest, diffuse uniform light can make the formation of the image less dependent on the geometry of the light sources by reducing shadows and the highlights caused by specularities. This form of light can be achieved with n point sources: if light from one source is blocked at some region, there will still be n - 1 sources illuminating that region. Narrow concavities could still block most of the light sources, but such regions would be small. A fluorescent ring light can be used to approximate n point sources. The use of a ring light was not considered because it would not provide enough light and does not satisfy some of the other design constraints discussed below.

A number of factors dictated that high intensity lighting would be required. Colour CCD cameras are more than one order of magnitude less sensitive than monochrome CCD cameras (10-15 lux versus <0.5 lux). To prevent blurring of the acquired images due to motion, the cans should move less than 1 pixel while the shutter is open. The can conveyors at BC Packers had speeds ranging from 0.6 m/s to 0.8 m/s. At the scale used, 1 pixel is approximately 0.5 mm square. This required the shutter to be open for less than 0.625 milliseconds, which corresponds to a shutter speed of 1/2000 second on the JVC camera.
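The exposure limit above follows directly from the pixel size and conveyor speed; a sketch of the calculation (the function name is mine):

```c
#include <assert.h>
#include <math.h>

/* Longest exposure for which a can moves less than one pixel:
 * t_max = pixel size / conveyor speed. With 0.5 mm pixels and the
 * fastest 0.8 m/s conveyor this gives 0.625 ms, which is why the
 * JVC camera's 1/2000 s (0.5 ms) shutter setting was used. */
static double max_exposure_ms(double pixel_size_mm,
                              double conveyor_speed_m_per_s)
{
    /* 1 m/s is 1 mm/ms, so the units cancel directly. */
    return pixel_size_mm / conveyor_speed_m_per_s;
}
```

The same relation shows why higher line speeds demand more light: doubling the conveyor speed halves the permissible exposure.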
The higher conveyor speed required for 10 cans/s would increase the light requirement because the shutter speed would also have to be increased. As mentioned earlier, due to safety requirements, the light bulbs must either be located below the cans or be encased in a transparent shatterproof enclosure. The second option leads to further difficulties in terms of keeping the enclosure clean and avoiding shadow and highlight effects. Diffuse, indirect illumination, on the other hand, would reduce any highlights caused by salmon scales and water droplets and minimize shadows. However, the losses incurred in reflecting and diffusing the light also increase the amount of light required. Generally, as long as the illumination is diffuse, the higher the intensity of the light the better, since this allows the aperture of the camera to be set to a higher f-stop, increasing the camera's depth of field.

To reflect the light emanating from the sources located below the level of the cans onto the viewing area, a reflective surface is required. Typical materials for reflective light-diffusing umbrellas and discs found in local photographic supply stores fall into two broad categories: those with a metallic coating on the fabric and those with a metallic backing. There are several varieties in each category, each with a different type of coating: gold, silver, or mixed. A white translucent fabric with a silver backing was selected and tested for this application based on the recommendation of a photographic materials sales consultant. On testing, this material produced diffuse light with no bright spots. As the size and geometry of the umbrellas and discs were not well suited to this application, a similar material was fabricated in the lab using the same layered design.
Figure 7.4: Comparison of HSI intensity profiles (intensity versus flange position in degrees) using (a) direct illumination (experimental setup) and (b) indirect illumination (demonstration setup). Profile (b) has a 57% lower peak-to-peak amplitude.

The new illumination setup reflects light from a cylindrical tunnel covered with a thin white polyester fabric backed with aluminum foil, as shown in Figure 7.3. The improvement over the direct illumination used in the experimental setup at Prince Rupert can be seen by comparing the flange HSI value profiles in Figure 7.4. Although the light bulbs used in the demonstration setup are not below the flange of the open cans, this modification could be made without any major design changes. A hemispherical reflective surface would likely have provided further improvement, but the cylindrical tunnel was significantly easier to build and gives good results. A wettable material with similar reflectance properties would be required for an industrial implementation, since the current design cannot withstand being hosed down. Choi et al. [20] used a similar illumination design, but their tunnel was painted with a flat white paint. Simply using flat white paint would make the tunnel washable, but the illumination it provides was not evaluated.

Four 50 W, 12 V DC halogen light bulbs were used for the illumination of the demonstration setup. These bulbs were selected for their steady light output, low cost, and high efficiency relative to incandescent bulbs. Standard fluorescent bulbs were not considered because their oscillating light output is noticeable at the shutter speed used. High frequency fluorescent illumination would not suffer from this problem, but high frequency power supplies were deemed to be too expensive: $340 for 8 W, as compared to $12 for 50 W with the halogen bulbs selected.
After the construction of the CDDS, a lower cost source of high frequency fluorescent power supplies was found: $30 for 32 W. The halogen bulbs cause the air temperature in the tunnel to rise to 40 °C. This does not significantly raise the temperature of a can because each can spends less than 0.5 seconds in the tunnel. The high temperature would cause problems if a can were to get stuck in the tunnel. The use of high frequency fluorescent lights would significantly reduce this problem because they are 3 to 4 times more efficient than halogen bulbs.

7.2.6 Software Architecture

The software written for the CDDS will not be discussed in detail here. More details about the software can be found in the CDDS Users Manual (Appendix B) and the CDDS Developers Manual (Appendix C). The general architecture of the software was designed to keep the machine vision code separate from the user and hardware interfaces so they could be reused in future versions of the system. The machine vision code and the user interface are briefly described in the following sub-sections.

Machine Vision Code

The vision algorithms were written in ANSI C and use the Vista computer vision library [39]. The algorithms were developed under Linux to take advantage of various tools included with Vista. The Vista library was ported to Windows NT so the machine vision code could be compiled and used with the user interface. Most of the features of the machine vision code were described in Chapter 4. The major features of this code are listed here:

• Binary image labelling function.
• Can locating algorithm.
• Flange sampling function.
• Functions for calculating region properties.
• Functions to manipulate circular, elliptical and annular regions of interest.
• Graphics functions for drawing circles and outlining regions.
• Hough transform for circles.
• RGB to HSI transformation.

The RBFN classification is not implemented in this version of the machine vision code.
Only the label and sum approach for classifying cans with cross-pack defects is included.

User Interface

The user interface was written in C++ using the Microsoft Foundation Classes. The program for the prototype version of the CDDS runs under Microsoft Windows NT 4.0. A screen capture of the main window is shown in Figure 7.5. All of the controls for the system and the configuration dialog boxes are accessible from buttons on the main window. The window displays the details of any defects found in the most recently processed can. The total number of defects of each type is also recorded. The system can be configured to save the images of all the cans passing through the system for later processing.

Figure 7.5: Main window of the CDDS user interface.

Chapter 8

Concluding Remarks

In this work, the following research objectives were considered:

• To identify and quantify can-filling defects suitable for machine vision inspection.
• To select and design the hardware required to implement the machine vision system for the semi-automated quality assurance workcell.
• To collect images of defects produced by the filling machine during regular operation and use them to evaluate the machine vision system.
• To develop and evaluate machine vision algorithms to detect and classify filling defects.

Each of these objectives was considered and met with varying degrees of success.
Salmon can-filling defects were identified and two defect types were selected for machine vision inspection. Algorithms were developed for extracting the features required to detect these defects. These features were classified using fuzzy logic and RBFNs. The classifications were evaluated using can images acquired in the cannery and in the laboratory. Many of these algorithms were integrated into the research prototype machine vision inspection system. However, a substantial amount of future work still remains to develop a complete industrial version of this vision system for the semi-automated salmon can-filling quality assurance workcell.

8.1 Conclusions

The investigation of can-filling defects showed that there are many different types of defects, occurring with varying frequency and severity. Practical quantitative definitions for flange and cross-pack defects were developed using width and area thresholds. As well, a survey of quality control experts led to the development of related industry documentation in the form of a training manual for patch table workers.

Algorithms were developed to extract the features required to detect flange and cross-pack defects. A robust algorithm for precisely locating the can centres was developed using the Canny edge detector and the Hough transform. Sampling the elliptical flange to produce a linear profile allowed for the classification of the flange pixels and the measurement of defect widths. HSI thresholding and region labelling together formed an effective method for separating cross-pack from salmon meat. The region properties limited the classification accuracy of the light and dark regions.

Fuzzy logic and RBFN classification of flange defects were compared. The RBFN approach was superior because it was computationally more efficient and training the networks was more straightforward than generating the fuzzy rules and tuning the membership functions.
The poor computational performance of the fuzzy logic approach was due to the inefficiency of the MATLAB fuzzy logic toolbox implementation. The RBFN classification approach had an overall classification accuracy, for the production data, in the range from 82.8% to 97.7% for flange defects and from 96.0% to 97.1% for cross-pack defects. In the context of the proposed semi-automated quality assurance workcell, these classification accuracies would be sufficient to replace manual inspection of these defects with machine vision inspection.

The CDDS did not meet all the requirements for the industrial system but was satisfactory for laboratory use. Although the system is not currently washable, it was designed with safety and hygienic design constraints in mind. It provides an on-line testbed for the feature extraction algorithms. Using the CDDS, it was found that indirect illumination provided more uniform illumination of the cans than direct illumination, but a higher total power was required.

8.2 Recommendations

The evaluation and testing of the CDDS in this thesis was limited to images of a single species acquired during a single day and images of cans hand-packed in the laboratory. The system should be evaluated on-site with feedback from QC personnel. The sensitivity of the vision algorithms to changing can-fill parameters could then be evaluated. This would also serve to refine the defect definitions. For machine vision inspection to completely replace manual inspection, the CDDS must be able to detect more types of can-filling defects, in particular bone flange obstruction and flange damage. One possible method of detecting bones would be to look for edges which cross the flange. The can locating algorithm's sensitivity to dented cans suggests that it could be used to detect flange damage. The RBFN network classification should also be integrated into the system.
Many of the algorithms used rely on threshold values which were determined manually. Automated methods of setting these thresholds would be essential in an industrial prototype. The training of the RBFN classifiers should also be streamlined.

Some changes to the equipment should be considered for an industrial version of the CDDS. The use of a DSP board should be investigated. Although a faster host computer would likely be sufficient to run an optimized version of the current algorithms at a speed of four cans per second, operating at a higher speed and detecting more defect types would not be possible using only the host computer's processor. Using a colour CCD camera with higher sensitivity would simplify the illumination design because less light would be required. A lower cost framegrabber would probably be sufficient because only the host memory acquisition feature of the current framegrabber is used.

Some safety and hygienic design issues were considered in this work, but there are a number of issues that could not be addressed. These include the design of a user interface that is simple and effective enough for use in the noisy and dirty environment of the cannery. Also, the protection of the camera, computer, and other sensitive equipment was not considered. These issues must be addressed to build an effective and reliable machine vision system for salmon canning quality assurance.

References

[1] C. Loughlin, "A close inspection of vision systems," Sensor Review, pp. 135-142, July 1987.

[2] S. Gunasekaran, "Computer vision technology for food quality assurance," Trends in Food Science and Technology, pp. 245-256, Aug. 1996.

[3] Province of BC Ministry of Agriculture, Fisheries and Food, "BC seafood industry year in review," 1992-1994.

[4] A. W. Timperley, "Filling operations," in Canning of Fish and Meat (R. J. Footitt and A. S. Lewis, eds.), pp. 136-158, London: Blackie Academic & Professional, 1995.
[5] Department of Fisheries and Oceans, Ottawa, Ontario, Canada, Metal Can Defects: Identification and Classification Manual, 1989.

[6] British Columbia Institute of Technology, Vancouver, BC, Canned Salmon: Screening Line Theory and Operation, 1993.

[7] B. Allin, "Analysis of industrial automation of a food processing quality assurance workcell," Master's thesis, University of British Columbia, Dept. of Mechanical Engineering, 1997.

[8] N. Zuech, Applying Machine Vision. New York: John Wiley and Sons, 1988.

[9] J. G. Brennan, J. R. Butters, N. D. Cowell, and A. E. V. Lilly, Food Engineering Operations. London: Applied Science Publishers Limited, 2 ed., 1976.

[10] V. Steinmetz, M. J. Delwiche, D. K. Giles, and R. Evans, "Sorting cut roses with machine vision," Transactions of the American Society of Agricultural Engineers, vol. 37, no. 4, pp. 1347-1353, 1994.

[11] J. C. H. Yeh, L. G. C. Harney, T. Westcott, and S. K. Y. Sung, "Colour bake inspection system using hybrid artificial neural networks," Proceedings of the 1995 IEEE International Conference on Neural Networks, pp. 37-42, 1995.

[12] V. C. Patel, R. W. McClendon, and J. W. Goodrum, "Detection of blood spots and dirt stains in eggs using computer vision and neural networks," Applied Engineering in Agriculture, pp. 253-258, 1996.

[13] G. E. Rehkugler and J. A. Troop, "Apple sorting with machine vision," Transactions of the American Society of Agricultural Engineers, pp. 1388-1397, 1986.

[14] Q. Yang, "Apple stem and calyx identification with machine vision," Agricultural Engineering Research, pp. 229-236, 1996.

[15] N. K. Okamura, M. J. Delwiche, and J. F. Thompson, "Raisin grading by machine vision," Transactions of the American Society of Agricultural Engineers, pp. 485-492, 1993.

[16] S. A. Shearer and F. A. Payne, "Colour bake inspection system using hybrid artificial neural networks," Transactions of the American Society of Agricultural Engineers, pp. 2045-2050, 1990.

[17] E. A. Croft, C. W.
de Silva, and S. Kurnianto, "Sensor technology integration in an intelligent machine for herring roe grading," IEEE/ASME Transactions on Mechatronics, vol. 1, no. 3, pp. 204-215, 1996. L. X. Cao, C. W. de Silva, and R. G. Gosine, "A knowledge-based fuzzy classification system for hearing roe grading," Intelligent Control Systems, pp. 47-55, 1993. K. Unklesbay, J. Keller, N. Unklesbay, and D. Subhangkasen, "Determination of doneness of beef steaks using fuzzy pattern recognition," Journal of Food Engineering, pp. 79-90, 1988. K. Choi, G. Lee, J. L. Han, and J. M. Bunn, "Tomato maturity evaluation using color image analysis," Transactions of the American Society of Agricultural Engineers, pp. 171-176, 1995. M. Recce and J. Taylor, "High speed vision based quality grading of oranges," in Proceeding of the IEEE International workshop on neural networks for identification, control, robotics and signal image processing, pp. 136-144, 1996. D. A. Beatty, "2D contour shape analysis for automated quality by computer vision," Master's thesis, University of British Columbia, Department of Computer Science, 1993. J. D. Foley, A. van Dam, S. K. Feiner, and J. F. Hughes, Computer Graphics: Principles and Practice. Reading, Massachusetts: Addison-Wesley, 1990. R. P. Lippmann, "An introduction to computing with neural nets," IEEE Acoustics, Speech and Signal Processing Magazine, pp. 4-22, 1987. D. R. Hush and B. G. Home, "Progress in supervised neural networks," IEEE Signal Processing Magazine, pp. 8-39, Jan. 1993. K. Tanaka, An Introduction to Fuzzy Logic for Practical Applications. New York: Springer, 1997. R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis. New York: John Wiley and Sons, 1973. C. W. de Silva, R. G. Gosine, Q. M. wu, N. Wickramarachchi, and A. Beatty, "Flexible automa-tion of fish processing," Engineering Applications of Artificial Intelligence, no. 2, pp. 165-178, 1993. REFERENCES 96 [29] S. 
Kurnianto, "Design, development, and integration of an automated herring roe grading sys-tem," Master's thesis, University of British Columbia, Department of Mechanical Engineering, 1997. [30] N. J. C. Strachan, "Length measurement of fish by computer vision," Computers and Elec-tronics in Agriculture, pp. 93-104, 1997. [31] J. Pertusson, "Fish quality control by computer vision," in Optical Spectra of Fish Flesh and Quality Defects in Fish (L. F. Pau and R. Olafsson, eds.), pp. 136-168, New York: Marcel Dekker, Inc, 1991. [32] "Flexible systems for trimming and portioning," World Fishing, pp. 13-14, June 1997. [33] E. A. Croft, "Personal communication and plant tour of Lullebelle Foods Ltd.," June 1996. [34] Key Technology, Inc., "Product catalog," 1995. [35] E. A. Croft, "Personal communication and plant tour of Lullebelle Foods Ltd.," Nov. 1995. [36] R. Gibb, "Personal communication," 1996. [37] J. Canny, "A computational approach to edge detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, pp. 679-698, 1986. [38] R. O. Duda and P. E. Hart, "Use of the hough transformation to detect lines and curves in pictures," Comm. ACM, vol. 15, pp. 11-15, 1972. [39] A. Pope and D. Lowe, "Vista: A software environment for computer vision research," in Proceeding of the IEEE Computer Society conference on Computer Vision and Pattern Recog-nition, pp. 768-772, 1994. [40] E. R. Davies, Machine Vision: Theory, Algorithms, Practicalities. Academic Press, 1997. [41] D. C. Marr and E. Hildreth, "Theory of edge detection," Proceedings of the Royal Society (London), pp. 187-217, 1980. [42] P. V. C. Hough, "Method and means for recognizing complex patterns." US Patent 3069654, 1962. [43] S. Tsuji and F. Matsumoto, "Detection of ellipses by a modified Hough transform," IEEE Transactions on Comput., vol. 27, pp. 777-781, 1978. [44] P. M. Merlin and D. J. Farber, "A parallel mechanism for detecting curves in pictures," IEEE Transactions on Comput., vol. 
28, pp. 96-98, 1975. [45] D. H. Ballard, "Generalizing the Hough transform to detect arbitrary shapes," Pattern Recog-nition, vol. 13, pp. 111-122, 1981. [46] P. Hubber, Robust Statistics. John Wiley & Sons, 1981. [47] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes in C: The Art of Scientific Computing. New York: Cambridge University Press, 1994. REFERENCES 97 S.-R. Lay and J.-N. Hwang, "Robust construction of radial basis function networks for classi-fication," vol. 33, pp. 2037-2044, Nov. 1990. M. Rosenblum and L. S. Davis, "An improved radial basis function network for visual au-tonomous road following," IEEE Transactions on Neural Networks, vol. 7, no. 5, pp. 1111-1120, 1996. S. Renals and R. Rohwer, "Phoneme classification experiments using radial basis functions," in Proceedings of the International Joint Conference on Neural Networks, pp. 461-467, 1989. Y.-F. Wong, "How gaussian radial basis functions work," in Proceedings of the IEEE Interna-tional Joint Conference on Neural Networks, pp. 133-138, 1991. D. R. Hush and B. G. Home, "Progress in supervised neural networks," IEEE Signal Processing Magazine, vol. ?, pp. 8-39, Jan. 1993. S. Chen, C. F. N. Cowan, and P. M. Grant, "Orthogonal least squared learning algorithm for radial basis function networks," IEEE Transactions on Neural Networks, vol. 2, pp. 302-309, march 1991. J. Moddy and C. J. Darken, "Fast learning in networks of locally-tuned processing units," Neural Computation, vol. 1, no. 1, pp. 281-294, 1989. L. A. Zadeh, "Fuzzy sets," Information and Control, pp. 338-353, 1965. C. W. de Silva, Intelligent Control: Fuzzy Logic Applications. New York: CRC Press, 1995. C. C. Lee, "Fuzzy logic in control systems: Fuzzy logic controllerp—part I," IEEE Transactions on Systems, Man and Cybernetics, pp. 404-418, Mar. 1990. C. C. Lee, "Fuzzy logic in control systems: Fuzzy logic controllerp—part II," IEEE Transac-tions on Systems, Man and Cybernetics, pp. 419-435, Mar. 
1990. The MathWorks, Natwik, Massachusetts, Fuzzy Logic Toolbox User's Guide, 1995. E. H. Mamdani, "Application of fuzzy logic algorithms for control of a simple dynamic plant," in Proceeding of IEEE, pp. 1585-1588, 1974. J. S. R. Jang and N. Gully, "Anfis: Adaptive-network-based fuzzy inference systems," IEEE Transactions on Systems, Man and Cybernetics, pp. 665-685, May 1993. J. C. Bezdek, Pattern Recognition with Fuzzy Objective Function Algorithms. New York: Plenum Press, 1981. A. W. Timperley, "Canning factory standards," in Canning of Fish and Meat (R. J. Footitt and A. S. Lewis, eds.), pp. 60-87, London: Blackie Academic k. Professional, 1995. B. Park, Y. R. Chen, M. Nguyen, and H. Hwang, "Characterizing multispectral images of tu-morous, bruised, skin-torn, and wholesome poultry carcasses," Transactions of ASAE, vol. 39, pp. 1933-1941, oct 1996. H. Messmer, The Indispensable PC Hardware Book. Don Mills, Ontario: Addison-Wesley, 1990. Appendix A Quality Control Survey University of British Columbia Industrial Automation Laboratory Salmon Can Filling Quality Control Survey In order to help us gain a better understanding of the salmon can filling process and the factors that effect the quality of a can of salmon, the following survey has been developed. In particular, the criteria and methods for identifying and correcting patching problems before the can is sealed is being studied. In conjunction with BC Packers personnel we have developed a number of questions, contained herein. In addition, we have packed and photographed a number of cans of salmon with possible filling problems. These may not be representative of the problems that are actually produced by a real salmon canning machine. If the packed cans are not representative of the problems actually seen on the patching line, please describe the differences as best as you can. We greatly appreciate your time and input in this project. 98 APPENDIX A. QUALITY CONTROL SURVEY 99 1. 
A number of quality control problems are indicated below. Please rate them on their severity and frequency from 1 to 5: 5 meaning "critical failure" or "most common"; 1 meaning "minor problem" or "least common". Please fill in any other quality control problems which can occur on the canning line. (Each item is rated on two 5 4 3 2 1 scales, one for severity and one for frequency.)

- Cross-pack
- Bones on the top of the steak
- Too many backbones
- Skin and/or bone obstructing the edge
- Under-weight
- Over-weight
- Poor meat colouration
- Damage to can edge
- Too much air space
- Other
- Other
- Other

2. Please briefly list, in order of importance, the features which are associated with "good appearance" and "poor appearance."

Good Appearance:
Poor Appearance:

3. Can the presence of meat on the edge of the can cause a sealing problem?

- Yes ( ) If yes, when?
- Sometimes ( )
- No ( )

4. Is there a limit as to the number of small steaks with backbones which would be allowed to be in one can?

- Yes ( ), Max. # ( )
- No ( )

Comment:

5. How much head space is considered too little and too much for each of the following can sizes?

1/4 lb. Low: High:
1/2 lb. Low: High:
1 lb. Low: High:

6. The following are pictures of packed salmon cans. Please indicate which cans have problems that warrant patching by circling the problem and commenting beside each picture what steps would be required to patch the can. If the problem is extremely rare or very common, please make a note of this. In particular, we are interested in determining the following:

(a) Which cans have enough meat, skin, or bone on the edge of the can to require cleaning.
(b) Which cans have enough skin showing to be considered a cross-packed can or simply unacceptable.
Can 1 Problems:
Can 2 Problems:
Can 3 Problems:
Can 4 Problems:
Can 5 Problems:
Can 6 Problems:
Can 7 Problems:
Can 10 Problems:
Can 12 Problems:
Can 13 Problems:
Can 14 Problems:
Can 17 Problems:
Can 18 Problems:
Can 20 Problems:
Can 21 Problems:
Can 22 Problems:
Can 23 Problems:
Can 24 Problems:
Can 25 Problems:
Can 26 Problems:
Can 28 Problems:
Can 32 Problems:
Can 33 Problems:
Can 34 Problems:
Can 35 Problems:
Can 36 Problems:

Appendix B

Can Defect Detection System User's Guide

This appendix contains the User's Guide for the Can Defect Detection System (CDDS), outlining how to set up and use the system. The design of this system is discussed in Chapter 7. This system serves to demonstrate some of the can defect detection algorithms developed for this thesis in an on-line mode. The guide is organized as follows:

• System Requirements: The required equipment and software.
• Equipment Setup: How to set up the equipment.
• Software Installation: How to install the various hardware drivers and the CDDS program.
• Running the CDDS: How to configure and run the CDDS program.

B.1 System Requirements

The list of required equipment for the CDDS is derived from the equipment that is currently used for the demonstration configuration (as shown in Figure B.1).
This setup uses an angled slide to deliver the can to the image capture area at a speed near canning line speed. The can triggers an optical sensor which, in turn, triggers the camera to acquire an image of the can. Possible equipment modifications and substitutions are noted in the list below:

Figure B.1: Photograph of the CDDS mounted on the demonstration unit.

• Host Computer
- Pentium 133 MHz or higher.
- Motherboard: Many constraints are placed on the motherboard by the requirements of the Coreco TCi-Ultra framegrabber. The motherboard must have a BIOS version that is PCI-PCI bridge aware. American Megatrends (AMI) BIOS is bridge aware, but many versions of Award BIOS are not. The motherboard must also have the latest PCI chipset; the earlier version (430FX) did not support the high bandwidth required by the TCi-Ultra card. Also, there must be one full-length PCI slot. Many motherboards do not have one because the CPU fan is in the way.
- IDE hard drive: The Adaptec 2940xx SCSI interface cards are not compatible with the TCi-Ultra card. There may be problems with other SCSI cards.
- No video card in addition to the TCi-Ultra.
- Windows NT 4.0.
• Coreco TCi-Ultra framegrabber with driver version 1.15.
• JVC TK-1070U RGB CCD camera with 16 mm, 1:1.4 C-mount lens.
• Camera output cable, framegrabber input cable and four BNC barrel connectors.
• Blue parallel port adapter box. (See Appendix C for the schematic.)
• Honeywell Microswitch Convergent Beam Sensor. Any can-detecting sensor could be substituted provided the parallel port adapter box is appropriately modified.
• Supporting frame mounted on the demonstration unit. The frame could be mounted over a can conveyor with modifications to the illumination.
• 12 V car battery.

B.2 Equipment Setup

For the most part, these instructions assume that the CDDS is used in conjunction with the demonstration setup.
Some tips for modifying the setup are also given.

B.2.1 Supporting Frame

If the frame is used with the demonstration slide, no adjustments are required because everything is bolted together. If it is used with a conveyor, it should be placed so that the camera field of view is centred on the conveyor. The placement should be done with full lighting and triggering available so that images can be acquired for feedback.

B.2.2 Convergent Beam Sensor

The convergent beam sensor is currently attached to the demonstration slide. It should not need any adjustment unless it is used in a different configuration. See the sensor manual for instructions on how to adjust the range and sensitivity. The sensor should be placed 4 to 5 cm ahead of the image centre. The sensor does not trigger the camera directly, so some lead time is required to process the signal.

B.2.3 Parallel Port Interface

The blue parallel port interface box provides power to the sensor and allows the output of the sensor to be read by the host computer. There are three connectors on the box: one 4-pin plug for the sensor, one 25-pin parallel port connector and one PC-power bus connector to take power from the PC power supply. All of the connections should be made when the computer is off. If all the connections are made properly, the LEDs on the box should flash when the host computer is booted and the relay in the box should click when the beam is broken. The relay should also click when the manual trigger button is pressed.

B.2.4 TCi-Ultra Framegrabber

If there is already a video card in the computer, remove it and insert the TCi-Ultra card into the full-length PCI slot. If the computer does not boot and the motherboard meets all of the requirements above, consult the manual or contact Coreco technical support. When the computer boots, install the Windows NT video driver for it. Windows NT should autodetect the TCi-Ultra as an "S3 Compatible" video card.
In the "Display" section of the Windows NT "Control Panel", the "Color Palette" should be set to "65536 colors" and the "Refresh Frequency" should be set to "60 Hz". If this is not done, the computer will crash when an image is acquired.

B.2.5 JVC CCD Camera

Plug the camera output cable into the camera. Connect the red (red channel), green (green channel), blue (blue channel) and yellow (sync channel) wires of the camera output cable and the framegrabber input cable together using the BNC barrel connectors. Then plug the framegrabber input cable into the TCi-Ultra card. Plug the camera into the power supply and make sure the controls are set according to Table B.1.

B.2.6 Illumination

If the demonstration slide is used, no major modifications are required. The bulbs should be checked to ensure that they are firmly in their sockets. The reflector tunnel should be adjusted so that it is hemispherical and is not in contact with any of the bulbs. Ensure that the switch is in the off position and connect the power cables to the battery screw terminals. The polarity is indicated on the cables. Turn the lights on to make sure all the bulbs work. The four bulbs draw about 24 amps, so the battery is drained very quickly. Charge the battery after using the lights for any amount of time.

Table B.1: JVC TK-1070U camera settings.

Control        Setting
White Balance  Auto - Press "set" with the lights turned on.
Gain           Fixed - +6 dB
Detail         On
Gamma          1
D-Sub Out      RGB
Shutter        2000
Iris           f2.0
Focus          A little more than 0.3 m

B.3 Software Installation

This section assumes that Windows NT 4.0 has been successfully installed on the host computer and that you have administrator privileges. During the install process, the TCi-Ultra card should have been autodetected as an "S3 Compatible" video card.

B.3.1 TCi-Ultra Driver

Version 1.15 of the TCi-Ultra driver is required for the CDDS.
Version 2.xx of the driver does not seem to work as well as version 1.15. The driver can be obtained from ftp://ftp.coreco.com/clients/coreco/public/tci/nt/drv_115/ntdrv115.zip. To install the driver, simply unzip the file, run the setup program and follow the instructions to install the driver in the default location. After the installation is complete, reboot the computer and run wconfig.exe to configure the card. Make all the necessary adjustments for the configuration window to look like Figure B.2. If the host frame buffer is changed from 8-bit mono to 32-bit colour, the computer should be rebooted again.

Figure B.2: TCi-Ultra configuration program main window.

B.3.2 TinyPort Driver

TinyPort is a shareware Windows NT driver which allows easy access to a single block of I/O port addresses which are not used by another Windows NT driver. This driver was originally obtained from ftp://ftp.armory.com/pub/electronics/LPT/tinyport.zip. General installation instructions can be found in the readme file in the zip file. The specific instructions for the CDDS are:

1. Unzip tinyport.zip into a directory (e.g., c:\tinyport).

2. Open the Windows NT "Control Panel" and double click on the "Devices" icon.

3.
Stop the following devices: Parallel, Parport and ParVdm.

4. Disable all of the devices stopped in step 3. All of these devices must be disabled because they use the parallel port and would conflict with the TinyPort driver. After disabling these devices, printing to the parallel port will not be possible.

5. Copy tinyport.sys to c:\winnt\drivers.

6. Run tpconfig.exe with the following options:

tpconfig ISA 0 378 3 "UBC" <reg code> 1

(The actual registration code can be found on the IAL Lab Management webpage.) The address of the parallel port should be set to 0x378 in the computer's BIOS configuration program. This option is usually found under the "Peripheral Setup" menu item of the BIOS configuration program.

7. Type: net start tinyport.

8. Go back to the "Devices" program in the "Control Panel". There should now be an entry for the TinyPort driver in the list of devices. If there is not, look for any error messages in the "Event Viewer." Select tinyport, click on the "Startup" button and select "Automatic." The TinyPort driver will then start every time the computer boots.

B.3.3 CDDS Program

Unzip the cdds.zip file on the disk attached to the back cover into a directory such as c:\cdds. The following section goes into the details of configuring and using the CDDS program.

B.4 Configuring and Running the CDDS

This section describes how to configure and use the CDDS program. It is organized as follows:

• The functions of the controls in the main window and dialog boxes are explained.
• Instructions on how to focus and calibrate the camera.
• Instructions on how to edit the configuration file.
• Instructions on how to set the program to run in "Automatic Mode".

B.4.1 Main Window

The main window of the CDDS program is shown in Figure B.3.
It is divided into six areas:

Figure B.3: CDDS program main window.

• Image Window: The images of cans are displayed here. There are five tabs at the top of the window which select the type of image to be displayed.

- Live: With this tab selected, the buttons in the framegrabber area are activated. Triggering of the framegrabber and processing of the images are bypassed when this tab is selected. This tab should be used when focusing and testing the camera.

- Original: Selecting this tab will display an unprocessed image of a can every time the framegrabber is triggered.

- Cross-pack: Selecting this tab will display an image with the cross-pack regions and flange outlined every time the framegrabber is triggered.

- Flange: Currently, this tab is identical to the cross-pack tab. When flange defect classification is integrated into the program, it will display a flange image with the defects highlighted.

- No Image: When this tab is selected, no image is displayed. This reduces the computational overhead required to display the images; thus, higher can processing speeds can be achieved.

• Previous Can: This area of the main window displays the defects found in the previously processed can. Only the "Can Found" and "Cross-Pack Defect" fields are updated. The "Flange Defect" fields are not updated.
• Process Image: This area contains two buttons:

- The "Single" button is used to manually trigger acquisition and processing of an image. A can should be in the field of view when this button is used.

- The "Start"/"Stop" button starts and stops "Automatic Mode". When the "Start" button is clicked, an image is acquired and processed every time the sensor beam is broken. The "Start" button, which becomes the "Stop" button, can be clicked again to take the program out of "Automatic Mode".

• Framegrabber: Controls to operate the framegrabber manually:

- Snap: Acquires an image and performs no processing.

- Grab: Does not currently work. It should show live video from the camera.

• Messages: This area displays the time it took to acquire, process and display the previous image, as well as the next image file to be saved.

• Can History: This area displays statistics concerning the number and types of defects found.

• Configuration: This area contains four buttons:

- Take BG Image: Acquires an image of the background.

- System: Opens the "System Configuration" dialog box.

- Defects: Opens the "Defect Definition" dialog box.

- Auto Save: Opens the "Automatic Image Saving" dialog box.

B.4.2 Dialog Boxes

Framegrabber Selection

The "Framegrabber Selection" dialog box (as shown in Figure B.4) appears when the program is first started. Selecting the "TCi Card" option initializes the TCi-Ultra and starts the program normally. Initializing the TCi-Ultra takes a few seconds. The "Dummy Card" option can be selected to run the program without the framegrabber. When this option is selected, "Automatic Mode" and all the framegrabber buttons are disabled and a default image is loaded. This image can be processed using the "Single" button.

Figure B.4: Framegrabber selection dialog box.
System Configuration

The "System Configuration" dialog box (as shown in Figure B.5) currently has two options:

• Trigger Delay: This governs the time delay until an image is acquired after a can is detected by the sensor. Adjust this delay so that cans are centred in the images. The default value of 10 ms is appropriate for the demonstration slide configuration.

• Process Image: When this option is not chosen, the images are not processed in "Automatic Mode." This can be useful when trying to acquire and save images at high frequencies on the canning line.

Figure B.5: System configuration dialog box.

Automatic Image Saving

This dialog box is used to enable and configure the automatic saving of acquired images. The options are described below:

• Auto Save Path: Location to save the images.

• Root Filename: Root to be used to generate the image filenames. For example, entering a root of data would generate files named data0001.raw, data0002.raw, ...

• Reset File Number: Check this box to reset the file number to 0001.

• Enable Auto Save: Check this box to enable automatic saving of the images. Each image acquired in "Automatic Mode" is saved in a consecutively numbered file.

Figure B.6: Automatic image saving configuration dialog box.

Defect Definitions

This dialog box is used to edit the defect definitions. Currently, the only defect definition parameter that is configurable is the "Cross-Pack Area Threshold". Its value can range from 0.0% to 100.0% of the fill area.

Figure B.7: Defect definition dialog box.
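As a rough illustration of how such a threshold could be applied, a can may be flagged when the combined light and dark cross-pack areas, reported as percentages of the fill area in the "Previous Can" panel, meet or exceed the configured threshold. This is a hedged sketch only: the function and parameter names below are illustrative and are not taken from the CDDS source, and the actual program may combine the area measurements differently.

```c
/* Hypothetical sketch of the "Cross-Pack Area Threshold" test.  The area
 * arguments are percentages of the can's fill area, as reported in the
 * "Previous Can" panel; threshold_pct comes from the Defect Definitions
 * dialog (0.0 to 100.0).  All names here are illustrative assumptions. */
int is_cross_pack_defect(double light_area_pct, double dark_area_pct,
                         double threshold_pct)
{
    double total_area_pct = light_area_pct + dark_area_pct;
    return total_area_pct >= threshold_pct; /* nonzero = flag the can */
}
```

Under this reading, a can reporting a light area of 23.9% and a dark area of 21.6% (a total of 45.5%) would be flagged by any threshold at or below 45.5%.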
B.4.3 Camera Focusing and Calibration

To bring the image into focus, turn the lights on, click on the "Live" tab of the image window and use the "Snap" button to update the image. Adjust the focus ring of the lens until the flange is in focus. This is usually easier to do if a piece of paper with text on it is placed on top of the can. The iris should be set to approximately f2.0 when the battery is fully charged; if it is not completely charged, the iris will have to be set a little lower. If the demonstration slide configuration is used, the distance from the can to the lens should not need adjustment. If the system is used in a different configuration, the distance should be adjusted so that the can occupies about 80% of the image height. If a new configuration is used, the parameters inside_edge_radius and flange_thickness will have to be determined and changed in the cdds.ini file. To do this, follow these steps:

1. To acquire the necessary images, uncheck the "Process Images" box in the "System Configuration" dialog box.

2. Enable automatic saving of the images. Edit the other auto save options as required.

3. Click the "Start" button.

4. Place a can in the field of view and use the manual trigger on the blue interface box to acquire an image. This process should be repeated several times with different cans.

5. Click the "Stop" button.

6. Calculate the mean radius of the inside edge of the flange and the width of the flange. Measure each dimension in the horizontal and vertical directions for each can and take the mean value. It is easier to measure the diameter and divide by two. These measurements can be made by loading the raw image files into MATLAB using the cddsload.m m-file, which is in the cdds.zip file. Once loaded, the ginput function can be used to make the measurements. Put the results in the cdds.ini file. The format of this file is discussed in more detail in the next section.
The thickness and radii of the annular regions of interest (AROI) will also have to be adjusted based on the radius and thickness.

B.4.4 CDDS Configuration File: cdds.ini

The default version of the cdds.ini configuration file for use with the demonstration slide configuration is included in the cdds.zip file. The configuration file contains a list of parameter names, such as inside_edge_radius and flange_thickness, followed by numbers. The numbers with decimal points can be floating point values, while those without must be integer values. Each parameter/number pair must be on a separate line, and the parameters must appear in the current order.

B.4.5 CDDS Automatic Mode

In "Automatic Mode", an image is acquired and processed whenever a can is detected. The results of the defect detection algorithms are displayed in the "Previous Can" area and are logged in the "Can History" area. These are the steps required to enter "Automatic Mode":

1. Click on the "Live" tab.
2. Turn on the lights.
3. Click the "Snap" button and check to see if the image is in focus.
4. Click on any tab except "Live".
5. Click the "Take BG Image" button.
6. Click the "Start" button.
7. Drop cans down the slide.
8. If the can is consistently incomplete in the image, adjust the "Trigger Delay" in the "System Configuration" dialog box. This can be done without exiting "Automatic Mode". Note that the can speed is variable when using the slide, and it is normal to see some cans partially outside the image. Drop the cans from different positions to adjust the speed.
9. Click the "Stop" button to exit "Automatic Mode".

Appendix C

Can Defect Detection System Developer's Manual

This appendix discusses how to modify and recompile the user interface and the machine vision code in Windows NT. It also discusses how to modify and recompile the machine vision code in Unix and how to use the PC parallel port for input and output.
These instructions assume that the compiled version of the CDDS program discussed in Appendix B functions properly.

C.1 Windows NT Development

The following programs and libraries are required to recompile the CDDS program:

• Microsoft Visual C++ version 4.0 or 5.0.
• cddswin.zip on the attached disk.
• TinyPort driver interface library.
• Ported Vista library vista.lib, in cddswin.zip.
• TCi-Ultra interface library, which comes with the source code for the Coreco example program tcipro.exe. This program and the library can be found in the files ntprx114.zip and ntprc114.zip. (These files are registered on the IAL lab management webpage.)

Follow these instructions to set up the computer to compile the CDDS source code:

1. Install Visual C++.
2. Unzip cddswin.zip into an empty directory.
3. Place vista.lib in the library path of the compiler.
4. Place the vista directory into the compiler include path.
5. Ensure that all the libraries are in the compiler's library path and that the associated header files are in the include path.

To modify the code, open the cddtest.mdp file in Visual C++. The functions in the source files are briefly discussed in the following sections. Comments in the source code provide more detailed information.

C.1.1 User Interface

The user interface was built using the standard Microsoft Foundation Classes (MFC) template. All window controls and dialog boxes were created and edited using the Class Wizard and the Resource Editor. The functions provided by each source file are listed in Table C.1.

Table C.1: CDDS user interface source code file descriptions.

AutoSaveDlg.cpp - Automatic image saving dialog box.
Config.cpp - System configuration dialog box.
DefineDefects.cpp - Defect definitions dialog box.
FrontEnd.cpp - Main window and control functions.
MainFrm.cpp - File created by MFC.
Sensor.cpp - Can sensor interface.
StdAfx.cpp - File created by MFC.
  TCi.cpp             Interface to the TCi framegrabber driver.
  Cddtest.cpp         File created by MFC.
  CddtestDoc.cpp      File created by MFC.
  CddtestView.cpp     File created by MFC.
  dibapi.cpp          Windows bitmap manipulation.
  io.cpp              Interface to the TinyPort driver.
  sdevice.cpp         Framegrabber selection.

C.1.2 Machine Vision Code

The machine vision code implements the algorithms presented in the body of this thesis. Table C.2 lists the types of functions that can be found in each source file. This code depends on the Vista image library, which is available from ftp://ftp.cs.ubc.ca/pub/local/vista. Vista is designed for use in Unix, but is written in ANSI C and is therefore somewhat portable. The following section describes the steps required to port the code to Windows NT. It is probably easier to modify the machine vision code in Unix, because all the Vista man-pages (help files) are available there for reference.

Table C.2: Machine vision source code file descriptions.

  Source File   Description
  binarize.c    Image binarization functions.
  binops.c      Binary image operations: erosion, dilation.
  can.c         Can data type creation and manipulation functions.
  cddedges.c    Functions to create and manipulate edge data types for the Hough transform.
  copyroi.c     Functions for copying image ROIs.
  drawcirc.c    Function to draw a circle.
  eroi.c        Functions to create and manipulate EROIs and AROIs.
  findcan.c     Functions that implement the can location algorithm.
  flange.c      Flange extraction functions.
  hedge.c       Hough transform functions.
  hsvreg.c      HSI region property calculation functions.
  iops.c        Various image operation functions.
  label.c       Image labelling algorithm functions.
  outline.c     Functions to outline flanges and cross-pack areas.
  perim.c       Function to find the perimeter of a region.
  processcan.c  Main function to apply all of the feature extraction algorithms.
  region.c      Functions to create and manipulate regions.
  rgb2hsv.c     RGB to HSI transformation function.
  rprop.c       Other region property calculation functions.
  select.c      Functions required to calculate the median.
  slice.c       Can location estimation algorithm functions.
  vision.h      Header file with all the data types and function prototypes.
  xform.c       Various image coordinate transformation functions.
  xpack.c       Functions to extract the cross-pack features.

Porting the Vista Library to Windows NT

A very simple approach was used to port the Vista library: everything that did not work and was not essential was removed. These are the functions and files that had to be removed:

• All files using an X-windows library function.
• All references to X-windows header files.
• All files related to PostScript printing.
• All functions related to the fast Fourier transform (FFT).

Many of the functions used the rint rounding function, which is in the Unix C library but not in the Windows NT C library. A replacement function with the same behaviour was added so that these functions could be compiled.

For convenience, a library file was compiled from the modified Vista source code for use with Visual C++. It is included in cddswin.zip and is bound by the copyright below.

Vista Copyright

Copyright 1993, 1994 University of British Columbia

Permission to use, copy, modify, distribute, and sell this software and its documentation for any purpose is hereby granted without fee, provided that the above copyright notice appears in all copies and that both that copyright notice and this permission notice appear in supporting documentation. UBC makes no representations about the suitability of this software for any purpose. It is provided "as is" without express or implied warranty.

C.2 Unix Development

The machine vision code can also be used and modified in Unix. It was originally developed using Linux, but it will probably work in any version of Unix.
This code requires that the Vista library is installed. Installation instructions for Vista are provided with the package. The cddsunix.zip file contains all the files necessary for Unix development. All the machine vision source files in cddsunix.zip are identical to those in cddswin.zip. There is also an additional source code file, cddstest.c, included with cddsunix.zip, which contains the command-line interface for the machine vision code.

C.2.1 Machine Vision Testing Program

The cddstest program can be compiled using the included Makefile. The program options are listed below:

  Usage: cddstest <options> [infile] [outfile], where <options> includes:
    -help                    Prints usage information.
    -in <string>             Input file. Default: (none)
    -out <string>            Output file. Default: (none)
    -backgound <string>      File containing background image slice. Required.
    -config <string>         Configuration file. Required.
    -debug [ true | false ]  Prints out various parameters and writes the
                             intermediate steps to files. Default: false
    -timer [ true | false ]  Outputs execution time of each step. Default: false

The outfile will contain a processed version of the can image with the flange and cross-pack areas highlighted. Many of the files created with the debug option set were used to create the figures in Chapter 4. The times generated with the timer option set are the same as those used in Chapter 6. The included example image can be processed with this command:

  cddstest -in example.v -out outlined.v -config prod.ini -backgound prod.bg.v -debug true -timer true

C.3 Using the Parallel Port for I/O

The CDDS uses the PC parallel port for input and output. A detailed explanation of the PC parallel port can be found on page 844 of [65]. There are three registers associated with the parallel port: the data, status and control registers. For the CDDS these registers are located at 0x378, 0x379 and 0x37A.
(This could be changed by modifying the source code.) Using these registers, it is possible to get a total of 11 bits of output and 5 bits of input. The blue parallel port interface box only uses 1 bit of input (bit 3 of the status register, 0x379) and 2 bits of output (bits 0 and 1 of the data register, 0x378). It is possible to add more inputs and outputs by building a new interface box based on the schematic in Figure C.1.

Figure C.1: Parallel port interface box schematic.

In the schematic above, each parallel port pin that is used is buffered by one bit of the 74H367 hex buffer chip. This protects the parallel port and prevents the interface from drawing too much current. The relay coil is energized when the sensor is triggered, which sets the input bit high. The relay is also energized when the switch is pressed. The output bits are connected to the LEDs. Note that the second switch and the 74LS04 chip (not shown in the schematic) are left over from a previous design and are not used.
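The bit assignments above can be made concrete with a small sketch. The register addresses and bit positions come from the text; the function names, and the choice to compute register values separately from the actual port access (which in the CDDS goes through the TinyPort driver in io.cpp), are assumptions made for illustration:

```c
/* Register addresses from the text (fixed in the CDDS source). */
#define PP_DATA    0x378  /* output register: bits 0 and 1 drive the LEDs */
#define PP_STATUS  0x379  /* input register:  bit 3 is the sensor/relay   */
#define PP_CONTROL 0x37A  /* control register (unused by the interface box) */

/* Nonzero when the sensor bit (bit 3) is set in a status-register value.
 * The function name is invented for this sketch. */
int can_sensor_triggered(unsigned char status)
{
    return (status >> 3) & 1;
}

/* Return a data-register value with the two LED bits (0 and 1) set as
 * requested, leaving the other output bits unchanged. */
unsigned char with_led_bits(unsigned char data, int led0_on, int led1_on)
{
    data &= (unsigned char)~0x03;  /* clear bits 0 and 1 */
    if (led0_on) data |= 0x01;
    if (led1_on) data |= 0x02;
    return data;
}
```

An actual read would fetch a byte from PP_STATUS through the driver and pass it to can_sensor_triggered(); a write would send the value returned by with_led_bits() to PP_DATA.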

