UBC Theses and Dissertations


A model-based approach to complex contour generation for process automation using computer vision. Gamage, Lalith D. K. B., 1993.


Full Text


A MODEL-BASED APPROACH TO COMPLEX CONTOUR GENERATION FOR PROCESS AUTOMATION USING COMPUTER VISION

By

Lalith D. K. B. Gamage
BSc. (Hons), Moratuwa, Sri Lanka; MSc., Leicester, U.K.

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES (MECHANICAL ENGINEERING)

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
August 1993
© Lalith D. K. B. Gamage, 1993

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Mechanical Engineering
The University of British Columbia
Vancouver, Canada

Abstract

The research described in this thesis addresses the development of a methodology for fast extraction of complex processing information of an object, using relatively simple on-line measurements together with prior knowledge and information that can be generated off-line on the object. Even though the approaches investigated here are quite general, the techniques are developed with respect to the specific application of flexible automation of a fish processing cell.

Robotics and computer vision have been combined to produce useful results in a variety of applications. Vision guidance may be applied in at least two ways here. Cutting contours of fish could be generated off-line and used to produce the reference signals for the cutting controller. Alternatively, cameras mounted on the cutter could be used to guide the cutter in real time.
The present research concentrates on the first approach, since the objective is to generate the cutting contour fast enough to meet the process speed requirements. The complexity of the vision problem is eased to some extent by the fact that the object (fish) is already recognized, at least in a generic sense.

Low-to-medium-level computer vision techniques, involving image transformation, enhancement, and the analysis of basic features such as edges, regions, shape, colour, and texture, are reasonably well established and widely applied. Applying these techniques to directly generate the cutting contour of an arbitrary fish, however, would be computationally slow and unacceptable in terms of process speed requirements. The present research effort is directed at expediting vision-based generation of cutting contours through the use of model-based vision techniques. One goal of this research is to develop a knowledge-base and a model-base to support the complex feature extraction procedure. A further, related goal is the use of low-level image analysis techniques to obtain non-complex features at high speed from the fish images.

In the model-based approach, the fish models are generated using a representative database of measurements on fish. The data include a cutting contour and some dimensional measurements, which are referred to as attributes or features. The cutting contours are non-dimensionalized, transformed, and grouped into an appropriate number of models using a systematic grouping procedure. Each model contains a nondimensional cutting contour for the corresponding group of fish, together with a set of attributes. In real-time operation, the measured attributes have to be matched to a model. The main contribution of this research is the methodology for generating the rules for model matching, or classification.
The a priori probability distribution of attributes in each group is used to generate the rules for the model that corresponds to that group. The rules generated for all models are collected in a rule base and used for classification. A systematic method for integrating expert and heuristic knowledge is developed in order to improve the efficiency of the classification process. An extensive error analysis of the present methodology is also presented.

The techniques developed in the present research were implemented in a prototype fish processing cell that is being developed in the Industrial Automation Laboratory. This cell consists of a vision station, a knowledge-based system, a robotic cutter, and a motorized conveyor unit. In a heading (head removal) operation, each fish on the conveyor is imaged at the vision station and the corresponding cutting contour is determined with the help of the knowledge-based system. This cutting contour is transformed into a drive signal for the robotic cutter. At the cutting station, a fish is gripped and cut according to the trajectory generated for that particular fish. In the present prototype system, however, the robot draws the corresponding cutting contour on a board placed on the conveyor bed. The vision system, the cutter/gripper controls, and the conveyor controls communicate and synchronize the activities of the various components of the fish processing system. Specific ways of transferring the new technology to the Canadian fish processing industry are currently being explored.

This research also compares the performance of the developed rule-based methodology with other classification and estimation methodologies, including neural networks, Bayesian inference, the k-nearest neighbour method, and multiple regression. The results of these comparisons are presented.
Also, as a preliminary requirement for implementation, algorithms have been developed for on-line measurement of the attributes using low-level image analysis techniques.

Table of Contents

Abstract
List of Tables
List of Figures
Acknowledgement

1 Introduction
  1.1 Objectives of the Research
  1.2 The Specific Application
  1.3 Approach and Justification
  1.4 Contributions of the Research
  1.5 Overview of the Thesis

2 Literature Review
  2.1 Statistical Decision Theory and Pattern Classification
  2.2 Knowledge-Based Systems and Rule-Based Systems
  2.3 Computer Vision
  2.4 Knowledge-Based Computer Vision

3 Data Acquisition and Analysis
  3.1 The Approach
  3.2 Data Acquisition
  3.3 Data Analysis
  3.4 Model Grouping Through Mismatch Analysis
    3.4.1 Model Comparison
  3.5 Feature Selection
    3.5.1 Development of Rules

4 Rule Generation
  4.1 Formal Analysis of Rule Generation
    4.1.1 On-line Inference
    4.1.2 Error Analysis
    4.1.3 Comparison of Bayesian Error to Rule Base Error
    4.1.4 Rule Base Error for Multi-Class, Multi-Modal Case
    4.1.5 Density Estimation Kernel and Size of Subrange (h)
  4.2 Rule Generation Through Connectivity Analysis
    4.2.1 Threshold Level Selection
  4.3 Illustrative Example
    4.3.1 Rule Prioritization and Threshold Level Selection
    4.3.2 Results
  4.4 Integration of Heuristics and Expert Knowledge
    4.4.1 Integration of Knowledge into Fish Rule Base: An Illustration

5 Attribute Measurement
  5.1 Attribute Measurement Using Projection Profile Method
  5.2 Attribute Measurement Using Region Boxing
  5.3 Comparative Evaluation of the Techniques
  5.4 Measurement of Orientation
    5.4.1 Least Rectangle Method
    5.4.2 2nd Moment of Area Method
    5.4.3 Fundamental Ellipse Method
    5.4.4 Results and Comparison

6 Pattern Recognition Techniques
  6.1 Pattern Recognition in Automated Fish Processing
  6.2 Neural Networks
  6.3 A Neural Network for Fish Classification
  6.4 Multiple Regression
  6.5 Bayesian Approach
  6.6 k-Nearest Neighbor Method

7 Comparison of Recognition Techniques
  7.1 Rule-Based Method
    7.1.1 Real-Time Complexity Comparison to Bayesian Approach
  7.2 Neural Network
  7.3 Multiple Regression
  7.4 k-Nearest Neighbour Method
  7.5 Summary of Classification Results

8 Pattern Recognition for Estimation of Cutter Position
  8.1 Off-line Data Acquisition and Analysis
  8.2 Development of an Estimation Model

9 Laboratory Implementation
  9.1 "V-cut" Butchering Machine
  9.2 Contour Cutting System
  9.3 Automatic Rule Generation
  9.4 Cutter Technology
    9.4.1 Water-jet Cutter
    9.4.2 Laser Cutter
    9.4.3 Wire-Cutter
    9.4.4 Rotary Blade Cutter
    9.4.5 Reciprocating Blade Cutter
    9.4.6 Comparison and Recommendations
  9.5 Gripper Considerations
    9.5.1 Comparison and Recommendations

10 Conclusions and Future Work
  10.1 Conclusions
  10.2 Main Contributions of the Thesis
  10.3 Future Work

Nomenclature
Bibliography

A Overview of OPS/83
  A.1 Rule-Based Programming
  A.2 High Level Programming
  A.3 Working Memory
    A.3.1 Element Declarations
    A.3.2 The make, modify, and remove Statements
  A.4 Rules

B ESHED SCORBOT-ER VII Robotic Manipulator
  B.1 Robot Specifications
  B.2 Robot Controller Specifications
  B.3 Programming Considerations

C SD512 Vision System
  C.1 System Configuration and Specifications
  C.2 Available Functions
    C.2.1 Feature Enhancement
    C.2.2 Image Segmentation
    C.2.3 Feature Extraction
    C.2.4 Template Matching and Correlation
    C.2.5 Pyramid Processing
    C.2.6 Graphics
    C.2.7 Video Control and Image Acquisition
    C.2.8 IKS/VR Variables
  C.3 Application Software Development

D Interface Circuitry and Communication Software
  D.1 The Interface Circuit
  D.2 Synchronization Circuit
  D.3 Communication and Synchronization Software

List of Tables

3.1 Typical Data of Fish Dimensions
3.2 Typical Values of Index of Mismatch Compared with Fish No. 2
3.3 Indices of Mismatch between the Models
3.4 Percentages of Wastage and/or Unwanted Material Left in the Body to the Total Weight of Fish
4.1 A Sample of Data on Fish Dimensions for Group 5
4.2 The Medians y of the Subranges of the Relationships for Group 5
4.3 Probabilities of the Multinomial Distribution of Density Values of Relationships for Group 5
4.4 Probabilities of the Multinomial Distribution of Density Values of Relationships for All Groups
4.5 Probabilities of the Multinomial Distribution of Density Values of Relationships for All Groups
4.6 Rule-Based Method Classification Results for Resubstitution and Leave-One-Out Methods
5.1 Typical Times for Various Image Processing Operations
5.2 Typical Optimal Angles of Cut
5.3 Typical Results from the Three Methods of Computing the Angle of Orientation
6.1 Neural Network Classification Results for Resubstitution and Leave-One-Out Methods
6.2 Multiple Regression Estimation Results for Resubstitution and Leave-One-Out Methods
6.3 k-Nearest Neighbour Classification Results for Resubstitution and Leave-One-Out Methods
7.1 Summary of Results
8.1 Sample Test Results of Estimate Using Multiple Regression
8.2 Sample Test Results of Estimate Using a Neural Network
B.1 Robot Arm Specifications
B.2 Robot Controller Specifications
B.3 ACL Command Summary

List of Figures

3.1 The System for the Off-line Development of a Model-Base for Salmon
3.2 A Distributed Control Structure for On-line Fish Processing
3.3 Nomenclature for Geometric Data of a Fish
3.4 Nomenclature for Geometric Data on Fish Head
3.5 Reference Frames for Coordinate Transformation in the Cutting Contour Analysis
3.6 Representation of the Index of Mismatch as the Minimum Absolute Area Between Two Contours
3.7 First Stage of the Mismatch Analysis Procedure
3.8 Second Stage of the Mismatch Analysis Procedure
4.1 Probability Density Function of a Single-Mode, Two-Class Case
4.2 Probability Density Function of a Single-Mode, Two-Class Case and the Selection of Windows for Rule Generation
4.3 Graphical Illustration of the Error Difference Between the Rule Base and Bayesian Classification Methods
4.4 The Generalized Connectivity Diagram of a Model
4.5 The Connectivity Diagram for Group 5
4.6 Probability Distribution of Density Values of Relationships for Group 5
4.7 Probability Distribution of Density Values for All Groups
4.8 Illustration of a Decision Tree for the Fish Classification System
5.1 Captured Image Under Structured Lighting
5.2 The Isolated Largest Continuous Image Segment (Fish in the Image)
5.3 A Region of Interest of the Fish Image
5.4 Gaussian-Smoothed Fish Head Image
5.5 Edge-Enhanced Head Image
5.6 Projection Profile of the Head Image
5.7 Two Angles of Importance for Recovery Improvement in Head Removal in Fish Processing
5.8 The Least Rectangle Enclosing a Fish
5.9 The Case of Mouth Closed
5.10 The Case of Mouth Open
5.11 Illustration of the Parameters Used in Calculation of Second Moment of Area
5.12 Distances to a Reference Point and the Corresponding Angle
5.13 The Graph of Distance to a Reference Point vs. the Angle
5.14 An Illustration of Three Methods to Calculate the Orientation
6.1 Schematic Representation of Pattern Classification
6.2 (a) Learning the Mapping Function Using a Training Set. (b) Estimation Using the Learned Mapping Function
6.3 Schematic Representation of a Linear Discriminant
6.4 Schematic Representation of a Multiclass Linear Discriminant
6.5 Schematic Representation of a Semilinear Feedforward Net
6.6 Normal Probability Plot of Residuals for Coefficient a1
6.7 Scatter Plot of Residuals for Coefficient a1
6.8 Normal Probability Plot of Residuals for Coefficient a2
6.9 Scatter Plot of Residuals for Coefficient a2
6.10 Normal Probability Plot of Residuals for Coefficient a3
6.11 Scatter Plot of Residuals for Coefficient a3
6.12 Normal Probability Plot of Residuals for Coefficient a4
6.13 Scatter Plot of Residuals for Coefficient a4
7.1 Schematic Representation of a Node in the Neural Network
7.2 Schematic Representation of Patterns in 2-D Feature Space
7.3 Density of Patterns for Two-Class, Single-Feature Classification
8.1 Illustration of the Features of a Fish Head
8.2 Normal Probability Plot of Residuals of Point Estimate Using a Neural Network
8.3 Scatter Plot of Residuals of Point Estimate
9.1 Schematic Representation of the Model-Based Contour Cutting Workcell
9.2 An Image Captured by the Vision System
9.3 A Typical Output of the Model-Based Contour Generation
9.4 An Image of a Salmon with the Cutting Contour Superimposed
9.5 P-Rules with Density 9 (above), and Density 6 (below)
9.6 P-Rules with Density 4 (above), and Density 2 (below)
9.7 Cutting Geometry of Fish in Contour Cutting for Recovery Improvement
9.8 Illustration of Two-Stage Cutting in Head Removal
9.9 Proposed Wire-Cutting Assembly for Robotic Contour Cutting
9.10 Proposed Rotary-Blade Cutter Assembly for Robotic Contour Cutting
9.11 Proposed Reciprocating-Blade Cutter Assembly for Robotic Contour Cutting
9.12 Schematic Diagram of the Synchronous Gripper
9.13 Schematic Diagram of the Intermittent Gripper
B.1 Robot Arm Axes
B.2 Operating Range of the Robot
C.1 SD512 System Block Diagram
D.1 Interface Circuit between the Robot Controller and SD512
D.2 Synchronization Circuit of the Conveyor and the Vision Software
D.3 Complete Wiring Diagram of the Interface and the Synchronization Circuit

Acknowledgement

I would like to thank Dr. C. W. de Silva for his constant supervision and guidance throughout this research. Special thanks to my supervisory committee, Dr. A. S. Atadan, Dr. R. G. Gosine, Dr. P. D. Lawrence, and Dr. V. J. Modi, for many valuable suggestions and comments. I would also like to thank the Advanced Systems Institute of British Columbia for providing a research grant (Grant No. 90-2151C) to carry out the research described in this thesis. Additional funding has been provided by the Natural Sciences and Engineering Research Council's Industrial Automation Chair held by Dr. C. W. de Silva. Last but not least, I would like to thank my parents and my wife for their constant support and encouragement.

Chapter 1

Introduction

Generation of processing information is important in many industrial automation tasks. Processing in this context essentially refers to mechanical processing. Applications include arc welding, inspection, sealant or glue application, sewing, spray painting, and mechanical processing of fish. In many of these applications, the processing information has to be determined on-line through direct gauging of the workpiece or object. In such situations, the speed and the reliability of the information generated depend on the gauging technique or the sensing mechanism.
Fast and accurate methods of generating processing information are important in this regard.

In many of these application areas there is well-established knowledge in the form of experience or expertise. For example, an expert in welding may relate that if part A is to be welded to part B, then the rod type must be X and the gap between the two parts must be Y. An experienced shoe quality control inspector may state that if the shoe type is P, then the inspection seam trajectory must be Q.

Knowledge can thus be acquired from experienced personnel or from experts, generally known as domain experts, by interviewing or questioning them. In some instances the knowledge acquired from the domain experts may be complete, but in others the domain expert may be unable to describe his or her knowledge completely. Complete in this context means that everything relevant to the problem can be obtained from the experts or derived from the acquired knowledge. There may also be cases where knowledge can neither be acquired from the domain experts nor obtained in any other form.

In cases where complete knowledge of the problem can be captured from the domain experts, the most common approach is to build an expert system. In instances where no knowledge can be captured and where there is no algorithmic solution to the problem, a statistical approach is the usual practice. There are also methods, such as neural networks, which utilize a "learn from examples" approach. In cases where the knowledge captured from the domain experts is not complete, statistical or neural computing methods can be applied. Moreover, partial but useful knowledge about the problem can be incorporated into the statistical approach so that it enhances the reliability and the performance of the system.

1.1 Objectives of the Research

The main objective of the research here is to develop a methodology for fast extraction of complex processing information of an object.
The term complex is used because the information pertaining to the processing is difficult to extract using conventional sensors. Using an approach such as computer vision alone to accomplish this may either limit the processing speed or be impossible, owing to the nature of the processing information. The objective, therefore, is to use knowledge acquired from domain experts and/or from examples to extract complex processing information. Knowledge gained from domain experts is usually in the form of if-then rules. Statistical knowledge captured from sample data is usually in the form of analytical rules. Development of a rule-based system that utilizes knowledge acquired from both of these sources is another objective of this research. The research also explores how expert knowledge and statistical knowledge together can give rise to better performance and higher reliability.

Alternatively, the problem can be addressed as a pattern recognition problem. Comparison of several pattern recognition techniques for extraction of complex processing information of an object with the developed rule-based methodology is also an objective of this research.

Development of a practical workcell to demonstrate the feasibility of the developed methodology is the final objective of the research. This involves transformation of the processing information into the signals needed to drive the various components of the system. Control and synchronization signals must also be generated to establish proper communication and synchronization among those components.

1.2 The Specific Application

Even though the approaches investigated here are general, the techniques are developed with respect to the particular application of flexible automation of a fish processing cell. In this system, functional flexibility is achieved through knowledge-based computer vision and a robotic cutter [de Silva, 1991].
In a heading operation, each fish on a conveyor is imaged at the vision station and the corresponding cutting contour must be determined. This contour is then transformed into a drive signal for the robotic cutter. The cutting contour of a fish is essentially a complex feature, since it cannot be determined by a computer vision system alone without any a priori knowledge of the object. Ideally, the cutting contour should follow the collarbone, and in fact part of the collarbone can be estimated from the gill contour obtained from the vision data. The other part, however, is not visible and must be derived. In particular, the latter part of the contour should remove the pectoral fin. The cutting contour should also be determined by the system such that it maximizes the yield for a given fish.

With a standard throughput of 120 fish per minute, a butchering operation has to be carried out in 500 ms. A cutting contour for a fish, therefore, also has to be generated in less than 500 ms. Owing to the complexity of the contour, however, this is not feasible with current technology. Prior knowledge of fish, along with a few on-line measurements, is therefore used to extract this feature. This leads to the secondary objectives of this research, which are the generation of the knowledge-base and the establishment of the on-line measurements.

1.3 Approach and Justification

As described in the preceding section, the main objective of the proposed research is to develop a methodology to extract complex processing information of an object using simple on-line measurements and prior knowledge. The approach used in the present research is to first address a specific problem, and then attempt to generalize the procedures developed to solve it. Two major components are emphasized, namely on-line measurements and prior knowledge.

The knowledge-base contains two components: the knowledge acquired from the domain experts and the knowledge gained from examples.
The knowledge from the domain experts is captured by interviewing or questioning them, while the knowledge from examples is captured using statistical data analysis. Additional knowledge may also be gained by studying and working with the problem. The knowledge captured from the experts and the knowledge gained by working with the problem are most commonly expressed in the form of if-then rules. As mentioned earlier, this knowledge may not be sufficient to solve the problem. It can, however, be used to improve the performance and the reliability of the system. It is therefore important to integrate this knowledge with the knowledge captured from statistical data analysis. In the present approach, statistical knowledge is also represented by if-then rules so that it can be conveniently integrated into the rule base.

The development of the statistical component of the knowledge-base is divided into three subtasks. Firstly, a database of fish data is developed from still pictures and video tapes of fish, collected at a local fish processing plant. This database contains the details of the cutting contours of fish and some important dimensional measurements. These measurements are referred to as attributes or features. Secondly, the cutting contours in this database are non-dimensionalized and transformed to a reference frame that allows a comparison of contours and grouping of similar contours. Finally, a knowledge-base is developed to identify the group of a particular fish based on simple on-line measurements. Statistical analysis of the database of fish measurements is carried out to develop the rules for the knowledge-base. In particular, relationships between the fish measurements are investigated during the development of the rules.

A suitable set of fish measurements, which can also be quickly determined by conventional computer vision algorithms, must be identified.
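The non-dimensionalization and comparison steps described above can be sketched in code. The following Python fragment is an illustrative reconstruction, not the thesis implementation: the function names are hypothetical, and the mismatch measure uses the mean point-to-point distance between arc-length-resampled contours as a simple stand-in for the minimum-absolute-area index of mismatch used later in the thesis.

```python
import numpy as np

def normalize_contour(points, fish_length):
    """Non-dimensionalize a cutting contour: shift it to a common origin
    (its first point) and scale it by the fish's overall length."""
    pts = np.asarray(points, dtype=float)
    return (pts - pts[0]) / float(fish_length)

def resample(contour, n=50):
    """Resample a 2-D contour at n evenly spaced arc-length positions so
    that two contours can be compared point by point."""
    contour = np.asarray(contour, dtype=float)
    seg = np.linalg.norm(np.diff(contour, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg)])
    s = np.linspace(0.0, arc[-1], n)
    return np.column_stack([np.interp(s, arc, contour[:, k]) for k in (0, 1)])

def index_of_mismatch(c1, c2, n=50):
    """Stand-in for the area-based index of mismatch: the mean
    point-to-point distance between the resampled contours."""
    a, b = resample(c1, n), resample(c2, n)
    return float(np.mean(np.linalg.norm(a - b, axis=1)))
```

Contours whose pairwise mismatch falls below a chosen threshold would be placed in the same group, and each group then contributes one nondimensional model contour.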
These measurements are chosen such that they best represent a particular group. Low-to-medium-level image analysis techniques are used to establish these measurements. This involves image transformation, enhancement, and the analysis of basic features such as edges, regions, and shapes.

Once the cutting contour is established as the inference of the knowledge-based system, it is dimensionalized and transformed into a drive signal for the robotic cutter. At the cutting station, the fish is gripped and cut according to the trajectory generated for that particular fish. Two commonly used approaches can be adopted. The first approach requires the position, velocity, and acceleration of the manipulator's joints at selected locations along the trajectory. The second approach requires the path that the cutter must traverse as an analytical function.

Formal procedures for developing rules for the knowledge-based system from data are formulated. This is essentially the generalization of the procedures used to solve the specific problem. A systematic scheme is developed for the system to incorporate knowledge in the field, to improve efficiency and reliability. Furthermore, conflicts and redundancies may appear in the rule-base; the probabilities of the relationships between the attributes can be used to address this problem.

Finally, all components of the system are integrated to form a single processing cell. The vision system, the cutter/gripper controls, and the conveyor controls communicate and synchronize the activities of the various components through data, control, and synchronization signals.

1.4 Contributions of the Research

Generation of rules for a knowledge-based system using probability theory is the main contribution of this research. Multivariate probability density distributions of attributes are used for rule generation as well as for rule prioritization.
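As a much-simplified illustration of rule generation from attribute distributions, the sketch below histograms a single attribute for each group and turns every occupied bin into a prioritized if-then rule. The function names, the one-dimensional attribute, and the fixed-bin density estimate are simplifying assumptions made here for illustration; they are not the multivariate procedure developed in the thesis.

```python
def generate_rules(values_by_group, bins=5):
    """Build prioritized if-then rules from the per-group distribution of a
    single attribute: each occupied histogram bin of a group becomes one
    rule, prioritized by the bin's estimated probability."""
    all_vals = [v for vals in values_by_group.values() for v in vals]
    lo, hi = min(all_vals), max(all_vals)
    width = (hi - lo) / bins
    rules = []
    for group, vals in values_by_group.items():
        counts = [0] * bins
        for v in vals:
            k = min(int((v - lo) / width), bins - 1)
            counts[k] += 1
        for k, c in enumerate(counts):
            if c:
                rules.append({
                    "window": (lo + k * width, lo + (k + 1) * width),
                    "group": group,
                    "priority": c / len(vals),  # estimated bin probability
                })
    rules.sort(key=lambda r: r["priority"], reverse=True)
    return rules

def classify(rules, value):
    """Fire the highest-priority rule whose window contains the value."""
    for r in rules:
        a, b = r["window"]
        if a <= value <= b:
            return r["group"]
    return None  # no rule fires: the measurement is rejected
```

In this toy form, an on-line measurement is classified by scanning the prioritized rule list; overlapping windows are resolved by priority, which mirrors the role of rule prioritization described above.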
Combining the knowledge already available from expertise and experience with the knowledge gained from statistical analysis, to enhance the reliability and the performance of the system, is another contribution of this research.

This research also provides a set of algorithms to compute attributes for the rule-based system using image data. A comparison of knowledge representation techniques for this type of industrial application is a further contribution. The main contributions to industry are the new technology and a practical workcell for fish processing.

1.5 Overview of the Thesis

Chapter 2 reviews the literature related to this thesis; in particular, three major areas are covered, namely computer vision, statistics, and expert systems. In Chapter 3, the procedures developed for data acquisition and analysis are presented, together with the algorithms developed to group objects.

Chapter 4 describes the rule generation methodology developed in the present research. It first gives a formal analysis of the methodology and then describes rule generation through connectivity analysis; an extensive error analysis of the method and an illustrative example are also presented. Finally, Chapter 4 presents a method to integrate expert and heuristic knowledge with statistical knowledge.

Chapter 5 gives the vision algorithms developed for feature extraction; both the analytical background and examples are presented. Commonly used pattern recognition techniques, such as neural networks, the Bayesian approach, the k-nearest neighbour method, and multiple regression, are described in Chapter 6. These methods were applied to the fish processing problem and the results are shown.
Chapter 7 compares the methods described in Chapter 6 with the developed rule-based methodology.

In Chapter 8, a statistical estimation system is developed to estimate a point in the head of a fish; this point is used in the head removal operation. A neural network estimator is also developed, and the results of the two methods are compared.

The laboratory implementation of the systems developed in the present research is presented in Chapter 9. The feasibility of a prototype workcell with an integrated rule base, a model base, a vision system, and a robot is demonstrated. An automatic method to generate the rules is also developed in this chapter. Finally, Chapter 10 concludes the thesis and highlights some of the continuing work from this thesis.

Chapter 2

Literature Review

From an initial feasibility analysis of an existing indexing mechanism that uses tactile sensing of the fish anatomy, it was concluded that a tactile sensing mechanism is not suited for this application. Since the indexing mechanism did not work efficiently and resulted in considerable wastage, it was decided to investigate vision-based techniques as a sensor in the removal of fish heads prior to canning. High-speed computer vision is also attractive since the processing time for this operation is critical (~300 ms).

Further study of the problem revealed that knowledge about fish cutting can be obtained from experienced butchers. The area of knowledge-based systems was studied in order to use this knowledge to aid the computer vision procedures. Although this knowledge is very useful in developing an automated butchering system, it was incomplete and inadequate. Therefore, statistical methods were investigated to extract new knowledge that can be used with the existing knowledge to develop a system for automated fish processing.
In view of the relevant topics for this work, the literature review includes a discussion of selected papers from the areas of statistics, knowledge-based systems, general computer vision, and knowledge-based computer vision. The knowledge-based vision area was also researched to study the methodologies available for incorporating knowledge into a computer vision system.

2.1 Statistical Decision Theory and Pattern Classification

The field of statistical decision theory is very well researched, and major contributions to the field are documented in [Neyman and Pearson, 1928], [Wald, 1950], [Ferguson, 1967], and [Blackwell and Girshick, 1954]. In addition, statistics and decision theory are closely related to game theory, which developed greatly owing to [Von Neumann and Morgenstern, 1944]. The pioneering decision theory work of [Neyman and Pearson, 1928] is based on hypothesis testing and the probability of error; this was generalized by Wald [Wald, 1950] to include loss and risk.

The fundamental Bayesian approach to pattern recognition has been avoided by many statisticians, mainly because there is no reasonable way to determine the a priori probabilities for many problems. The approach assumes that the decision problem is posed in probabilistic terms and that all the relevant probability values are known [Duda and Hart, 1973].

Anderson [Anderson, 1958, 1984] develops the Bayesian theory of classification in general terms and applies it to multivariate normal distributions. He proves that the Bayesian procedure is optimal and derives a quadratic discriminant function. The use of discriminant functions for classification was introduced by [Fisher, 1936].

One of the first researchers to apply the Bayesian approach to pattern recognition was Chow [Chow, 1957], who introduced a provision for rejection and developed a fundamental relation between error and rejection rates.
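The quadratic discriminant mentioned above has a direct computational form. For a multivariate normal class with mean m, covariance S, and prior P, the score is g(x) = -(1/2) ln|S| - (1/2)(x - m)^T S^{-1} (x - m) + ln P, and the class with the largest score is selected. The sketch below is a generic implementation of this standard result, not code from the thesis.

```python
import numpy as np

def quadratic_discriminant(x, mean, cov, prior):
    """Quadratic discriminant score for a multivariate normal class:
    g(x) = -0.5*ln|S| - 0.5*(x-m)^T S^-1 (x-m) + ln P(class)."""
    d = np.asarray(x, dtype=float) - np.asarray(mean, dtype=float)
    cov = np.asarray(cov, dtype=float)
    return (-0.5 * np.log(np.linalg.det(cov))
            - 0.5 * d @ np.linalg.inv(cov) @ d
            + np.log(prior))

def bayes_classify(x, classes):
    """classes maps a label to its (mean, covariance, prior) parameters;
    the label with the largest discriminant score is returned."""
    return max(classes, key=lambda c: quadratic_discriminant(x, *classes[c]))
```

With equal covariances the quadratic terms cancel and the rule reduces to a linear discriminant, which is the connection to [Fisher, 1936] noted above.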
More recent work on Bayesian classification involves error estimation and the effects of sample size on classification performance [Raudys and Pikelis, 1980].

Jain and Hoffman [Jain and Hoffman, 1988] describe an evidence-based recognition system to recognize 3-D objects from range images. In this system, they employ a few crucial (notable) features of the object, and each object is represented as one or more rules in a rule base developed using these features. An object in the range image is identified by computing a similarity measure between the observed features and those represented in rule form. The system recognizes only those objects represented in the rule base, and any new object is rejected.

Although some of the concepts of this paper are relevant to the present research, the present problem is much more complicated in the sense that the objects in this case are naturally varying. Also, it is impossible to assign a rule for each object, let alone many rules for an object.

As previously mentioned, a statistical approach is the most common for problems with either incomplete or no heuristic knowledge. Neural networks are also commonly investigated for these types of applications.

In the present research, the generation of complex process information is treated as a pattern classification problem in which processing information for a given object is selected from one of several categories on the basis of some key measurements on the object.

2.2 Knowledge-Based Systems and Rule-Based Systems

Knowledge-based systems are programs which use a collection of facts, rules of thumb, and other knowledge about a given field, coupled with methods of applying those rules to make inferences. Expert systems have been built in a wide variety of domains [Hoffman et. al., 1986] [Hayes-Roth et. al., 1983], and some of the applications include medicine [Buchanan and Shortliffe, 1984] [Buchanan and Shortliffe, 1975], process planning [Schaefer et.
al., 1988], process diagnosis [Ishida, 1988], education [Tompsett, 1988], and mineral exploration [Duda et. al., 1979]. They differ substantially from conventional computer programs because their tasks are ill-defined, knowledge intensive, and have no algorithmic solutions.

Production rules (also called IF-THEN rules) are the most popular representation of the domain knowledge needed for an expert system. Hence, an expert system built using production rules is also known as a rule-based system. Production rules were brought into Artificial Intelligence (AI) by Newell, who had seen their power and simplicity demonstrated in Floyd's [Floyd, 1961] work on formal languages and compilers. Feigenbaum began advocating the use of production rules to encode domain-specific knowledge in DENDRAL [Feigenbaum et. al., 1971]. Waterman picked up on the suggestion but decided to work with rules and heuristics of the game of poker [Waterman, 1970].

As described in Chapter 1, the objective of the present research is to use simple on-line measurements and prior knowledge of the object to extract complex features. It is of paramount importance, therefore, to represent prior knowledge in a knowledge base. The approach in the current research is to represent knowledge in the form of rules.

Literature in the area of rule-based expert systems was primarily reviewed to study how knowledge can be represented as rules. There are numerous rule-based systems available for knowledge representation; however, rule-based systems that incorporate both statistical and heuristic or expert knowledge have not been reported.

Buchanan and Shortliffe [Buchanan and Shortliffe, 1984] describe, in detail, a consultant expert system, MYCIN, which provides medical diagnoses and prescribes drugs for bacterial infections.
Gayle and Dankel [Gayle and Dankel, 1986] present a knowledge-based system for drug interactions which helps medical personnel in identifying the potential for harmful drug-induced effects in patient care. Hoffman and Carviedes [Hoffman et. al., 1986] present a rule-based expert system for the repair domain which diagnoses defects in electronic systems, guides a technician through appropriate repair procedures, verifies diagnoses, monitors repair effectiveness, and maintains records for subsequent evaluation and quality control. A rule-based system for geometrical reasoning for the interpretation of line drawings is presented by Whitaker [Whitaker, 1986]. The system derives the object on the basis of three orthogonal two-dimensional views and attempts to represent a draftsman's knowledge in deducing the structure of the object.

A study of the structure of these rule-based systems, and in particular the arrangement of rules to improve efficiency, provides the necessary background in rule-based systems.

Gottinger [Gottinger, 1988] introduces two types of expert systems which involve statistical expertise: statistical consulting programs and pattern-finding programs in databases. Pattern-finding systems use statistical techniques such as correlations and discriminants, as well as knowledge about their domain, to search a database and discover relationships. For example, the RX program [Blum, 1982] searches through a medical database for possible causal relationships and uses medical knowledge to rule out common causes and spurious correlations. The work in this paper led to the idea of extracting relationships from a database. Birman [Birman, 1982] presents a rule-based method for ECG analysis that identifies cardiac abnormalities.
In this system, each waveform is represented as a tree of rule activations.

Howard and Rehak [Howard and Rehak, 1989] discuss the importance of interfacing expert systems with databases, and they present a knowledge-aided database management system (KADBASE). This is a flexible interface in which multiple databases and knowledge-based systems can communicate as independent, self-descriptive components. An expert system for labeling segments in infrared imaging is presented by Roberts [Roberts, 1986]. This system labels high-priority targets, low-priority targets, roads, trees, forests, and potential clearings. Watanabe et. al. [Watanabe et. al., 1989] compare ART [ART, 1984], STROBE [Lafue and Smith, 1985], LOOPS [Stefik et. al., 1983], and OPS83 [Forgy, 1985] with their CL (Conceptual network-based Language), a LISP-based tool kit to assist knowledge engineers. This paper is of interest to this research because it compares OPS83, which is the language used in the present implementation, with other languages. Carbonell, Cullingford, and Gershman [Carbonell et. al., 1981] discuss a knowledge-based system to translate languages.

2.3 Computer Vision

Although human vision has been an academic subject of interest since the seventeenth century, computer vision and digital image processing have largely evolved only during the last 30 years [Levine, 1985]. In particular, during the past 20 years there has been a considerable growth of interest in problems of computer vision [Fu, 1982]. The need has created an increasing activity in the development of theoretical methods and experimental software and hardware for use in vision systems and the design of such systems [Fu and Young, 1986].
Fu [Fu, 1982] defines computer vision as the process of extracting, characterizing, and interpreting information from images of a three-dimensional world. This process may be subdivided into six principal areas: (1) Sensing, (2) Preprocessing, (3) Segmentation, (4) Description, (5) Recognition, and (6) Interpretation. Sensing is the process that yields a visual image. Preprocessing deals with techniques such as noise reduction and enhancement of details. Segmentation is the process that partitions the image into objects of interest. Description deals with the computation of features (e.g., size, shape) suitable for differentiating one type of object from another. Recognition is the process that identifies these objects (e.g., wrench, bolt, engine block). Finally, interpretation assigns meaning to an ensemble of recognized objects.

In more recent publications [Marr, 1982], Marr and Hildreth describe computer vision in a different way. Marr's view of vision proceeds from the raw primal sketch to the full primal sketch, to the 2½-D sketch, and finally to 3-D models. The constituents of the raw primal sketch are edges, bars, blobs, and terminators, and these are characterized by orientation, contrast, length, width, and position. The full primal sketch deals with issues such as virtual contours or subjective contours in the image. The 2½-D sketch describes the properties of a surface, including spatial properties, reflectance, and illumination. Deriving the 3-D models from the 2½-D sketch is task and situation dependent, and it requires knowledge about the scene. Marr's theory of object representation is an important part of much work on computer vision and is, therefore, relevant to the research described in this thesis.

Horn [Horn, 1986] describes the computer vision problem as a two-stage process. In the first stage (generally called the image analysis stage), a symbolic description of the image is produced.
In the second stage, the scene analysis stage, a symbolic description of the scene is produced from the symbolic description of the image. This description is closely related to the present research, where an image analysis stage extracts the attributes of an object from the image, and a scene analysis stage classifies an object described by the attributes into a given class. Although pattern classification and scene analysis are generally considered as two different issues, it is also possible to consider classification as a special case of scene analysis. In classification, however, the results of image analysis are used to make a class or group assignment rather than to define a complete description as in scene analysis.

In the specific application that is considered in the present work, the generation of a cutting contour for a fish by recognizing the pattern of the gill plate in a conventional manner was initially investigated. Three methods to recognize a pattern have been discussed by [Fu(3), 1982]: (1) template matching, (2) the decision-theoretic approach, and (3) the structural and syntactic approach. In the template matching approach, a set of templates or prototypes, one for each pattern class, is stored in the machine. The input pattern with unknown classification is matched or compared with the template of each class, and the classification is based on a preselected matching criterion or similarity measure. In the decision-theoretic approach, a pattern is represented by a set of N features, and the decision-making process is based on a similarity measure (between features) which, in turn, is expressed in terms of a distance measure. The classification of patterns is based on a statistical decision rule or membership function. In the structural and syntactic approach, a pattern is often represented as a string, a tree, or a graph of pattern primitives and their relations.
The decision-making process, in general, is a syntax analysis.

Much work has been done to describe the boundaries or shapes of objects in an image [Rosenfeld, 1975]. The shape or boundary of an object can be applied in two ways to the problem of cutting contour generation in fish processing. On-line measurements can be transformed into parameters describing the shape of the object, and analysis of the boundary contour of an object can be used to determine its orientation. [Brady, 1982] describes a vision system for parts description and acquisition. The paper reports that many objects in stable configuration can be recognized from their bounding contour, and a procedure is developed for the extraction of two-dimensional shapes and contours. [Persoon and Fu, 1977] use Fourier descriptors to describe boundary curves or shapes. In this paper they deal only with nonoverlapping boundaries that are completely known. The classes are described by a few fixed prototypes that may be rotated, translated, or scaled. [Chellappa and Bagdazian, 1982] describe a transform coding technique for closed 2-D image boundaries. The boundary is represented either by the (x, y) co-ordinates of the end points of the segments or by the magnitudes of successive radii vectors that are equi-spaced in angle around the given boundary.

Moment invariant functions have the desired property that they are invariant under image translation and rotation. Sensed objects can be matched with predefined models through this process. [Dudani et. al., 1977] discuss a method to identify aircraft by moment invariants extracted from binary images. This paper is particularly concerned with the recognition of complex man-made objects rather than natural objects or simple solids. [Dirilten and Newman, 1977] address the problem of matching patterns modified by intensification, translation, magnification, and rotation-reflection.
[Reddi, 1981] derives a set of radial and angular moment invariant functions which are capable of identifying rotated, translated, reflected, and scaled images. This technique is of interest to the fish processing application since fish have a significant size variation despite a similarity in cutting contour shape.

Vision-guided inspection is also widely used in industry today. Reinhold illustrates an installation of an automatic vision inspection system to inspect sheet metal parts [Reinhold, 1982]. This paper addresses the crucial issues of transfer of technology to industry, such as packaging processing electronics for the factory environment, reliably illuminating and imaging the part under test, providing a satisfactory user interface, and designing equipment to be maintained by factory personnel. Although the primary objective of this research is to develop a knowledge-based methodology for fast extraction of complex processing information of an object, the issues described in this paper are also addressed because they are some of the goals of the Industrial Automation Laboratory.

2.4 Knowledge-Based Computer Vision

Incorporating knowledge into a vision system can help in many ways. Specifically, in the present research it is incorporated to speed up the process.

Bolles and Cain [Bolles and Cain, 1982] highlight the major problem associated with currently available machine vision systems: they only recognize and locate isolated parts against a contrasting background. These systems recognize binary patterns by measuring global features of a region, such as area, elongation, and perimeter length, and compare these values with stored models.

Darwish and Jain [Darwish and Jain, 1988] compare two methods of visual pattern inspection: image comparison and rule-based methods. The first method involves the well-known template matching technique in which a pattern under test is compared with a reference pattern.
The disadvantages of this method include the requirement for precise alignment prior to comparison and the difficulty in matching templates in the presence of tolerances in specifications. The rule-based method identifies defective patterns if they do not conform with the specified design rules.

In the popular model-based object recognition approach, the primary objective is an efficient match between features that have been extracted from sensory data and corresponding features in object models. Perkins [Perkins, 1978] presents a model-based vision system for industrial parts that can determine the position and orientation of complex curved objects. The paper, however, does not consider the speed of the processes. Kjeldsen, Bolle, and Sabbah [Kjeldsen et. al., 1988] discuss a model-based approach to recognizing complex shapes using primitive shapes such as planes, cylinders, cones, and spheres. First, partial geometric descriptions of the primitives are extracted, and based upon these partial descriptions a more complete geometric description is obtained. This information is then used to index into an object database. Hence, complex recognition is achieved in a stepwise fashion. A model-based vision system is presented by [Suzuki and Arimoto, 1988] to recognize continuous handwritten letters. The letter recognizer uses a parallelogram that covers the maximal letter and processes a sequence of letters while shifting the window. Recognition is achieved by calculating a similarity measure.

Chapter 3

Data Acquisition and Analysis

The main objective of the present research is to develop vision-based techniques for generating complex processing information of an object using measured features and a priori knowledge of the object. This will lead to a knowledge-based computer vision system for process trajectory generation in industrial automation.
Although the system will be developed with emphasis on automated fish processing, the theory can be adapted to other automated processes as well. The formal analysis of the theory is described in Chapter 4.

The majority of commonly available computer vision systems are incapable of real-time processing, particularly when the position, orientation, and other features of a moving object have to be computed for this purpose at speeds of greater than 2 Hz. Hence, a knowledge-based approach is developed to solve this problem.

The approach used in the present research is to first address a specific problem, and then attempt to generalize the procedures that are developed to solve the specific problem.

The specific approach employed has the following components:

1. Off-line analysis of fish data to develop a database of fish models. This database is developed using static pictures and video tapes of fish, collected at a local fish processing plant. The database consists of a set of geometric dimensional measurements and a cutting contour for each fish.

2. Mathematical analysis of the cutting contours to compare and group similar contours. The cutting contours represented in the database are dimensional, and therefore they should be non-dimensionalized and transformed to a reference frame in order to compare and group similar contours.

3. Identify a suitable set of attributes which can be quickly determined by conventional image analysis algorithms. These attributes have to be carefully chosen such that they best represent the corresponding model. Also, they should be simple enough for the vision system to determine them sufficiently fast. Low- to medium-level image analysis techniques involving image transformation, enhancement, and analysis of basic features such as edges, regions, and shapes are adopted.

4. Develop an attribute-driven rule-based system to identify a model of a particular fish.
Statistical analysis of the attribute data, which are in the database, is carried out to develop these rules. Specifically, relationships between attributes are examined. Also, heuristics and experience are integrated into the rule base to improve the performance of inference.

5. Investigate the possibilities of developing faster algorithms to determine the above attributes.

6. Develop a procedure to generate cutting trajectories.

3.1 The Approach

The method described below for the generation of cutting contours utilizes an integrated model-based and rule-based (knowledge-based) approach [Gamage and de Silva, 1991a]. Consider the system shown in Figure 3.1. Suppose that a series of models representing all possible variations of fish that are normally expected at a processing machine is generated. Each entry in the database will consist of a cutting contour and a set of attribute values which identifies the particular model. A set of attribute values is processed at the vision station during real-time operation of the process (Figure 3.2). These attribute values, when transmitted into the working memory (input) of a rule-based system, would fire a rule which in turn would identify a model of fish. The model provides the necessary cutting contour to the robotic cutter. The time taken to compute the characteristic attribute values is very much shorter than the time needed to compute a cutting contour from an image of a fish. The generation of the model base and the rule base is necessarily carried out off-line. Hence, computer overhead does not present a serious obstacle here.

3.2 Data Acquisition

The development of fish models was carried out by first recording data on representative samples of fish and then analyzing the data to compare and group them.

A large number of fish images, in the form of still pictures and video, was recorded at a local fish processing plant.
These images were subsequently read into a vision workstation in order to identify a suitable cutting contour and a set of geometric features (attributes) for each image. This process is schematically illustrated in Figure 3.1. The best cutting contour (the cutting contour that recovers the most meat from the head while removing the pectoral fin) was identified with the help of fish cutting experts. The cutting contour, at this stage, is in terms of pixel co-ordinates as shown in Figure 3.3.

All possible geometric features of the fish that can be identified through a vision system were also recorded. These features are highlighted in Figure 3.3 and Figure 3.4. The dimensional measurements of the samples of fish, together with the cutting contour, were stored in a database. A typical set of data of fish dimensions is shown in Table 3.1.

Figure 3.2: A Distributed Control Structure for On-line Fish Processing.

1        Nose
2        Eye
3 - 8    Cutting Contour
9 - 10   Width
11       Tail

Figure 3.3: Nomenclature for Geometric Data of a Fish.

Table 3.1: Typical Data of Fish Dimensions.

Fish No.  Nose-Eye (cm)  Nose-Gill (cm)  Nose-... (cm)  Length (cm)  Width (cm)
S1        3.4            10.2            ...            59.0         12.6
S2        3.3            10.7            ...            59.5         13.4
S3        3.9            11.7            ...            63.7         14.3
S4        3.6            10.8            ...            56.4         12.6
S5        4.5            11.6            ...            60.5         13.8
P4        3.2            9.7             ...            48.6         12.3
CH5       4.8            14.5            ...            77.8         18.4

1     Nose
2     Eye
3     Flap
4, 5  Flap Width
6     Gill
7, 8  Gill Width
9     Fin Top
10    Fin Center

Figure 3.4: Nomenclature for Geometric Data on Fish Head.

3.3 Data Analysis

Prior to the analysis, the data must be processed such that all measurements are non-dimensionalized with respect to a common reference frame.
Data processing, therefore, includes co-ordinate transformation, curve fitting, and scaling.

A cutting contour, as mentioned previously, is a set of pixel co-ordinates obtained from the fish image. In order to match and group similar contours, it is necessary to transform all contours to a common reference frame.

This transformation was accomplished by fixing a local co-ordinate frame at one end of the contour, and moving this frame to a known reference frame through a translation and a rotation as illustrated in Figure 3.5. The cutting contours were then non-dimensionalized so that the chord length of the contour is unity. This process can be mathematically expressed as follows.

Figure 3.5: Reference Frames for Coordinate Transformation in the Cutting Contour Analysis.

Suppose that the pixel co-ordinates obtained from the image are (x_1', y_1'), (x_2', y_2'), ..., (x_n', y_n'); then the translation is simply

    x_i^t = x_i' - x_1', \qquad y_i^t = y_i' - y_1'.    (3.1)

The chord length of the contour is

    L = \sqrt{(y_n' - y_1')^2 + (x_n' - x_1')^2} = \sqrt{(y_n^t)^2 + (x_n^t)^2}.    (3.2)

Now, the contour can be non-dimensionalized as follows:

    x_i^r = x_i^t / L, \qquad y_i^r = y_i^t / L.    (3.3)

The sine and cosine values of the angle of rotation (θ) can be expressed as

    \sin\theta = y_n^r \quad \text{and} \quad \cos\theta = x_n^r    (3.4)

because the chord length of the contour is unity. The process of rotation can be expressed in matrix form as follows:

    \begin{bmatrix} x_i \\ y_i \end{bmatrix} =
    \begin{bmatrix} -\sin\theta & \cos\theta \\ \cos\theta & \sin\theta \end{bmatrix}
    \begin{bmatrix} y_i^r \\ x_i^r \end{bmatrix}.    (3.5)

Since the angle of rotation has to be θ in the opposite direction, the transformation becomes

    \begin{bmatrix} y_i \\ x_i \end{bmatrix} =
    \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}
    \begin{bmatrix} y_i^r \\ x_i^r \end{bmatrix}    (3.6)

and can also be expressed as

    \begin{bmatrix} y_i \\ x_i \end{bmatrix} =
    \frac{1}{(y_n' - y_1')^2 + (x_n' - x_1')^2}
    \begin{bmatrix} (x_n' - x_1') & -(y_n' - y_1') \\ (y_n' - y_1') & (x_n' - x_1') \end{bmatrix}
    \begin{bmatrix} y_i' - y_1' \\ x_i' - x_1' \end{bmatrix}.    (3.7)

The comparison procedure is facilitated by first curve fitting each set of pixel co-ordinates into a polynomial expression to obtain an analytical cutting contour. Several methods were tried for this purpose, including spline and least squares approaches.
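The contour normalization of Eqs. (3.1)–(3.7) can be sketched in code. The function below is a minimal illustration, not the thesis implementation; the function name and array-based interface are assumptions.

```python
import numpy as np

def normalize_contour(xp, yp):
    """Translate, scale, and rotate a contour so that it starts at the
    origin and its chord lies along the x-axis with unit length,
    following Eqs. (3.1)-(3.7)."""
    xp = np.asarray(xp, dtype=float)
    yp = np.asarray(yp, dtype=float)
    # Eq. (3.1): translate so the first point is at the origin
    xt, yt = xp - xp[0], yp - yp[0]
    # Eq. (3.2): chord length between the contour end points
    L = np.hypot(xt[-1], yt[-1])
    # Eq. (3.3): non-dimensionalize by the chord length
    xr, yr = xt / L, yt / L
    # Eq. (3.4): the normalized end point gives the chord direction
    cos_t, sin_t = xr[-1], yr[-1]
    # Eq. (3.6): rotate by -theta so the chord lies on the x-axis
    x = sin_t * yr + cos_t * xr
    y = cos_t * yr - sin_t * xr
    return x, y
```

After normalization, every contour starts at (0, 0) and ends at (1, 0), so contours from fish of different sizes and orientations can be compared directly.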
The most satisfactory cutting contour was obtained by polynomial curve fitting, as shown below:

    \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix} =
    \begin{bmatrix}
    x_1^{n-1} & x_1^{n-2} & \cdots & 1 \\
    x_2^{n-1} & x_2^{n-2} & \cdots & 1 \\
    \vdots    & \vdots    &        & \vdots \\
    x_n^{n-1} & x_n^{n-2} & \cdots & 1
    \end{bmatrix}
    \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix}    (3.8)

which is of the form

    y = X a    (3.9)

where y \in R^n, a \in R^n, X \in R^{n \times n}, and (x_i, y_i) are the coordinates of the cutting contour. The analytic form of the cutting contour is then given by the polynomial

    y(x) = a_1 x^{n-1} + a_2 x^{n-2} + \cdots + a_n.    (3.10)

The comparison of cutting contours is accomplished by first picking an arbitrary contour and matching it with all remaining contours. A parameter called the index of mismatch (IM) is computed to determine how well two curves correlate with each other. This index of mismatch is the minimum absolute area between the two curves. In other words, this is the area under the two curves that is not common to both curves, as shown in Figure 3.6, and it is obtained by minimizing the absolute area between the contours through a process of frame rotation (δθ). A sample of typical values of the index of mismatch is shown in Table 3.2.

Table 3.2: Typical Values of Index of Mismatch Compared with Fish No. 2.

Fish No.  Index of Mismatch
S1        0.1177
S2        0.0000
S3        0.0295
S4        0.0207
S5        0.0527

Figure 3.6: Representation of the Index of Mismatch as the Minimum Absolute Area Between Two Contours.

This minimum value of the area is selected as the index of mismatch, and it is calculated according to the following formula:

    IM = \min_{(\delta\theta)} \int \left| y_i(x) - y_j(x) \right| \, dx    (3.11)

in which y_i denotes the ith contour, which is transformed into the closest-match configuration of the jth contour (y_j(x)) through a sequence of incremental rotations (δθ), as illustrated in Figure 3.6. A tolerance value is selected for the index of mismatch, and all data below this value are grouped into one model.
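The matching computation of Eq. (3.11) can be sketched numerically as follows. This is an illustrative sketch only: it assumes the contours are already normalized to a unit chord and sampled as point lists, and the fit degree, rotation step, and scan range are arbitrary choices, not values from the thesis.

```python
import numpy as np

def index_of_mismatch(xi, yi, xj, yj, deg=5, dtheta=0.002, steps=25):
    """Index of mismatch, Eq. (3.11): the minimum absolute area between
    two normalized contours over a scan of small frame rotations."""
    xi, yi = np.asarray(xi, dtype=float), np.asarray(yi, dtype=float)
    xs = np.linspace(0.0, 1.0, 200)          # common grid along the unit chord
    dx = xs[1] - xs[0]
    # Polynomial model of contour j, as in Eqs. (3.8)-(3.10)
    yj_s = np.polyval(np.polyfit(xj, yj, deg), xs)
    best = np.inf
    for k in range(-steps, steps + 1):
        t = k * dtheta                       # incremental frame rotation (delta-theta)
        c, s = np.cos(t), np.sin(t)
        xr, yr = c * xi - s * yi, s * xi + c * yi
        yi_s = np.polyval(np.polyfit(xr, yr, deg), xs)
        # Absolute area between the two fitted curves (rectangle rule)
        area = np.sum(np.abs(yi_s - yj_s)) * dx
        best = min(best, area)
    return best
```

A contour compared with itself yields an index of mismatch of essentially zero, mirroring the S2-versus-S2 entry of 0.0000 in Table 3.2.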
The tolerance value selected is directly related to the wastage; therefore, this value is chosen so that the wastage incurred by selecting a particular contour (model) is acceptable (less than 1% of the total weight of the fish). The grouping process is repeated with another contour, and so on, until all the data have been grouped into models. Note that some samples of fish may belong to more than one group due to the tolerance value selected. In such a case, the model which gives the lowest index of mismatch is selected.

3.4 Model Grouping Through Mismatch Analysis

Grouping data into a reasonably compact set of models is done by a procedure of mismatch analysis for the complex pattern of interest [Gamage and de Silva, 1991b]. In the present implementation, the complex pattern is a cutting contour, and each data item in the database contains this contour with some dimensional measurements related to the contour. The grouping of contours into models is done in two stages. In the first stage, each contour is compared with all the remaining contours, and a group of contours similar to that contour is determined. In the second stage, the groups of similar contours which may have common elements are compared to determine the most appropriate group to which a common contour should be assigned. In this manner, a smaller number of distinct groups with no common elements is developed.

The main steps of the first stage of the procedure are:

1. Pick a contour and compare it with all the remaining contours contained in the data.

2. Compute the index of mismatch (IM), which represents the level of deviation of one contour from the other.

3. Select a suitable threshold level for the index of mismatch and group all the data items (contours) which have their indices of mismatch below the threshold into one model.

4. Repeat steps 1, 2, and 3 for all the contours.

This process will generate a large number of groups, equal in number to the number of contours.
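The two-stage grouping can be outlined in code. The sketch below is a simplified, illustrative reading rather than the thesis implementation: it assumes a precomputed symmetric matrix `im` of pairwise indices of mismatch, picks group representatives greedily in the first stage, and resolves contours common to several groups by assigning each contour to the representative with the lowest index of mismatch.

```python
def group_contours(im, tol):
    """Group contours given `im`, where im[i][j] is the index of
    mismatch between contours i and j, and `tol` is the threshold.
    Returns a dict mapping each representative to its member indices."""
    n = len(im)
    reps = []
    # Stage 1: choose representatives so that every contour is within
    # tolerance of at least one representative
    for i in range(n):
        if not any(im[r][i] <= tol for r in reps):
            reps.append(i)
    # Stage 2: assign each contour to the representative with the lowest
    # index of mismatch, so no contour appears in more than one group
    groups = {r: [] for r in reps}
    for i in range(n):
        best = min(reps, key=lambda r: im[r][i])
        groups[best].append(i)
    return groups
```

With a suitably chosen tolerance, this collapses the initial one-group-per-contour result into a small number of distinct groups with no common elements.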
The flow chart given in Figure 3.7 illustrates this procedure.

Figure 3.7: First Stage of the Mismatch Analysis Procedure.

The second stage involves the following steps:

1. Pick a group which was generated in the first stage and examine each of its elements to check whether it is repeated in any other group.

2. If the particular contour is common to two or more groups, compare the indices of mismatch of this contour within these groups, assign the contour to the group which has the lowest index of mismatch for this contour, and delete it from the remaining groups.

3. If the contour is not repeated, there is no conflict, and the element is retained in the same group.

This procedure will, in general, reduce the number of groups considerably. The flow chart given in Figure 3.8 illustrates the second stage of the model grouping process. This grouping method was compared with a clustering method known as k-means clustering [Wilkinson, 1990], and it produced similar results. The k-means algorithm, however, does not guarantee that all the data in a particular group satisfy the specified tolerance level. Furthermore, the grouping method described here allows us to use any feature as the matching parameter. In the present case, it was the absolute area between two contours.

3.4.1 Model Comparison

As a result of the grouping procedure described above, the present fish data set was clustered into five groups, each having a representational contour for the group. This representational contour of each group is called the model.

In order to estimate the costs associated with selecting an incorrect model, it is possible to use the index of mismatch between two contours. As defined previously, the index of mismatch is the minimum absolute area between two contours.
This value, therefore, relates to the wastage of useful meat and/or unwanted material left in the body of the processed fish as a result of selecting an incorrect model. Both wastage of useful meat and leaving unwanted material in the body have their associated costs. In particular, the latter case needs further processing. Table 3.3 shows the indices of mismatch between the models for all five groups. Note that the last column of this table shows the area difference between the corresponding model and a straight "V-cut" (please refer to Figure 5.7).

Figure 3.8: Second Stage of the Mismatch Analysis Procedure.

Table 3.3: Indices of Mismatch between the Models

Model  1  2       3       4       5       Straight Cut
1      X  0.0352  0.0496  0.0277  0.0239  0.0959
2         X       0.0180  0.0294  0.0388  0.1236
3                 X       0.0226  0.0529  0.1400
4                         X       0.0314  0.1174
5                                 X       0.0986

After manually butchering several medium-size (1.5 kg – 2.0 kg) fish, it was possible to relate the indices of mismatch given in Table 3.3 to a percentage ratio of wastage and/or unwanted material in the body to the total weight of the fish. These percentage
In existing butcheringmachines, head removal is performed by a rotary chopping knife which causes about 10%,--, 15% wastage of useful meat on average.Table 3.4: Percentages of Wastage and/or Unwanted Material Left in the Body to theTotal Weight of fishModel 1 2 3 4 5 Straight Cut1 X 2.0% 2.2% 1.6% 1.4% 3.4%2 X 1.0% 1.7% 2.0% 4.2%3 X 1.3% 2.2% 4.8%4 X 1.8% 4.0%5 X 3.2%As an example, a 1.6kg fish which belongs to model 1 was manually butchered ac-cording to all five models. The following figures show the actual wastage in grams, whilein the cases of model 2 and model 3 operations, in addition to these figures, the contourscut into the head and left some gill and pieces of collarbone in the body resulting furtherChapter 3. Data Acquisition and Analysis^ 38processing.Model 2 - 26g + gill and collarbone piecesModel 3 - 30g + gill and collarbone piecesModel 4 - 25gModel 5 - 22g3.5 Feature SelectionOne of the basic problems in pattern recognition is to identify a set of features (attributes)that gives the best classification results. The purpose of feature selection is to recognizethese features for the problem concerned.One way to approach this problem is to collect all possible features that can bemeasured using the existing sensing device. When the number of features increases,however, the procedures that are analytically and computationally manageable in low-dimensional spaces can become completely impractical in large (50 ,,, 100) dimensionalspaces [Duda and Hart, 1973]. This problem is known as curse of dimensionality.A variety of methods for dimensionality reduction have been proposed in the litera-ture [Meisel, 1972] [Kendall and Stuart, 1966] [Harman, 1967], because the curse of di-mensionality poses a major problem for many pattern recognition procedures. The mostcommon methods include principal component analysis and factor analysis. The Princi-pal Component Analysis (PCA) finds a lower-dimensional representation that accountsfor the variance of the features. 
Factor Analysis (FA) finds a lower-dimensional representation that accounts for the correlation among the features. In the present research, the latter approach was employed to reduce the number of geometric features.

The process of dimensionality reduction through factor analysis can be thought of as the process of removing or combining highly correlated features. It can also be treated as a clustering procedure. For example, consider a data matrix of n rows of d-dimensional samples. On the one hand, ordinary clustering can be considered as grouping rows to form a smaller number of cluster centers which can be used to represent the data. On the other hand, dimensionality reduction can be thought of as combining columns (features) to form a reduced number of columns to represent the data.

Duda and Hart [Duda and Hart, 1973] describe a hierarchical clustering procedure to reduce the dimensionality; it uses the correlation between two features. The correlation between two features can be defined as

    ρ_ij = σ_ij / (σ_i σ_j)                                          (3.12)

where σ_ij is the covariance of features i and j, and σ_i and σ_j are the standard deviations of features i and j respectively.

Also, -1 ≤ ρ_ij ≤ 1 and, therefore, 0 ≤ ρ_ij² ≤ 1, with ρ_ij² = 0 for completely uncorrelated features and ρ_ij² = 1 for completely correlated features. Hence, it is clear that correlation is a measure of similarity between two features. For example, if the correlation between two features is large, they are good candidates to be merged into one, or one can be removed, leaving the other to represent the data. This leads to the following hierarchical dimensionality reduction procedure.

1. Initialize the dimensionality (d) to the number of features in the original data set.

2. Compute the correlation matrix of the features (ρ_ij).

3. Find the most correlated pair of features (ac_i and ac_j).

4. Merge ac_i and ac_j, or delete one of them (say ac_j).

5. Decrement d by one.

6.
Go to step 2.

Duda and Hart [Duda and Hart, 1973] describe this procedure as continuing until the desired number of features is established. In our case, however, the procedure was continued until the correlation coefficients between features fell below a selected threshold.

Geometric features such as nose-to-eye, nose-to-flap, nose-to-gill, nose-to-center of the pectoral fin, nose-to-edge of the pectoral fin, width at the flap position, width at the gill position, width at the center of the fish (width), and length of the fish were initially chosen for the problem of contour selection for fish butchering. With the hierarchical dimensionality reduction procedure described, it was possible to reduce the number of features to four: nose-to-eye length, nose-to-gill length, width, and length of the fish. The next phase is to develop a set of rules to identify a particular model using these features (characteristic attributes).

3.5.1 Development of Rules

The production of rules is considered to be one of the most difficult and time-consuming stages in the development of rule-based systems [Hand, 1985] [Rowley, 1986]. Even where there are experts willing to assist the knowledge engineer in the development of the expert system, the expert may have great difficulty in making the rules explicit [Hand, 1985].

Data analysis for database design has been advocated for all system design and has been accepted in conventional database applications [Held and Carlis, 1989]. It is still foreign to the design of rule bases, however. In the present research, the rules are developed by analyzing data to establish the relationships between attributes.
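For illustration, the hierarchical dimensionality reduction procedure described above can be sketched in code. This is a minimal sketch only: the feature names, sample values, and correlation threshold below are invented for the example and are not measurements from this work.

```python
# Sketch of the hierarchical dimensionality reduction procedure:
# repeatedly find the most correlated pair of features and delete one
# member of the pair, until the largest correlation falls below a threshold.
import math

def correlation(x, y):
    """Pearson correlation rho_ij = sigma_ij / (sigma_i * sigma_j)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def reduce_features(data, threshold=0.9):
    """data: dict mapping feature name -> list of sample values.
    Deletes one feature of the most correlated pair until the maximum
    absolute correlation falls below the threshold (steps 1-6)."""
    features = dict(data)
    while len(features) > 1:
        names = list(features)
        # Steps 2-3: correlation matrix, most correlated pair.
        best = max(
            ((a, b) for i, a in enumerate(names) for b in names[i + 1:]),
            key=lambda p: abs(correlation(features[p[0]], features[p[1]])),
        )
        if abs(correlation(features[best[0]], features[best[1]])) < threshold:
            break  # all remaining pairs are sufficiently uncorrelated
        del features[best[1]]  # step 4: delete one of the pair
    return sorted(features)

# Invented example: "width_copy" is exactly 2 * "width", so it is removed.
data = {"length": [30.0, 32.0, 35.0, 28.0],
        "width": [10.0, 11.5, 12.0, 9.0],
        "width_copy": [20.0, 23.0, 24.0, 18.0]}
kept = reduce_features(data, threshold=0.99)  # -> ["length", "width"]
```

The stopping rule here matches the variant used in this research: iteration continues not to a fixed target dimensionality, but until every remaining pairwise correlation is below the chosen threshold.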
As mentioned in the previous section, the attributes selected are nose-to-eye length, nose-to-gill length, width, and length of the fish, which are respectively represented as ac1, ac2, ac3, and ac4.

In general, the attributes ac1, ac2, ac3, and ac4 may have the most general many-to-many relationships among each other. This means that, for a particular group, all the possible values of attribute ac1 may be associated with all the possible values of attribute ac2, all the possible values of attribute ac2 may be associated with all the possible values of attribute ac3, and so on. Further examination of the data, however, reveals that the many-to-many relationships are not as general as described. The entire set of fish in a group is divided into subsets on the basis of the multivariate probability density of the candidate attributes, as described in Chapter 4. The same results (subsets) can be obtained by setting up a connectivity diagram which describes the relationships between the attributes. Although it is possible for all attribute values to participate in the relationships, there can be hidden dependencies which restrict this participation. In other words, an arbitrary combination of attribute values may not correspond to a set of valid fish data.

Once the proper relationships and the corresponding subsets are identified, they can be grouped such that a logical combination of these determines a unique model. The formal analysis of this process and the generation of rules through connectivity analysis are described in Chapter 4.

Chapter 4

Rule Generation

In the off-line analysis of rule generation, the data were grouped (clustered) through mismatch analysis as described in Chapter 3. In this chapter, the formal analysis of the rule generation methodology is first described. Secondly, a practical rule generation system using connectivity analysis is presented.
The connectivity analysis, which is a direct application of the theory developed in the formal analysis section, is illustrated by a simple example. Also in the formal analysis section, an error analysis for the developed methodology and a comparison of error between the present method and the Bayesian approach are developed. Finally, a scheme for integrating expert knowledge and heuristics into the statistical rule base is developed.

4.1 Formal Analysis of Rule Generation

After grouping data with similar patterns (features of interest) into the respective groups, it is necessary to generate rules for these groups. Possible relationships among the features within each group are identified to generate the rule base for classification.

Consider a set of m features ac_k, k = 1, 2, ..., m. For each feature in a group or class, the measured (non-dimensionalized) feature values are arranged in a column in ascending order of numerical value. A relationship is defined as the set of feature values for a data item (a tuple of features): a relationship comprises one value from each of the m columns and forms a vector in m-dimensional space. The subranges for each feature column are determined by the number of relationships falling into them. Specifically, if a particular set of subranges (subranges for ac1, ac2, ..., ac_m) has a larger number of relationships than a set of subranges slightly perturbed in their respective columns, then the set of subranges which gives the maximum number of relationships in the neighborhood is considered for rule generation. In multi-dimensional space (in this case m-dimensional space), these subranges form hyper-cubes.

The rules for a particular group are determined by the density of the relationships. The objective is to determine whether a given feature vector of an object belongs to a high-density region of the m-dimensional feature space. In order to do this, it is necessary to determine the density functions of each group.
The procedure for estimating the density functions is described below.

The probability P that a feature vector ac belongs to the region R of a particular group C_j is given by

    P = ∫_R p(ac|C_j) dac                                            (4.13)

where p(ac|C_j) is the multivariate conditional probability density function, i.e., the probability density function of ac given that the group or class is C_j. The probability P can also be described as a smoothed or averaged version of p(ac|C_j). Now, suppose that n samples of ac (ac_1, ac_2, ..., ac_n) are drawn according to the probability law p(ac|C_j). Then the probability that exactly k of the samples belong to region R is given by the binomial law

    P_k = (n choose k) P^k (1 - P)^(n-k)                             (4.14)

with E(k) = nP as the expected value of k. Again, P is the probability that a sample belongs to the region R given that the group is C_j. With the binomial law, the maximum likelihood estimate for P is

    P̂ = k/n.                                                         (4.15)

If the region R is assumed to be small, then

    ∫_R p(ac|C_j) dac ≈ p(ac|C_j) V                                  (4.16)

where ac is a point within R and V is the volume of the hyper-cube enclosed by R. Combining Equations 4.13, 4.15, and 4.16, p(ac|C_j) can be estimated as follows. If the sample size is large, then

    p̂(ac|C_j) = (k/n) / V                                            (4.17)

converges in probability to p(ac|C_j). For fixed volume V, however, p̂(ac|C_j) is an average of p(ac|C_j):

    p̂(ac|C_j) = ∫_R p(ac|C_j) dac / ∫_R dac.                         (4.18)

For the finite-sample case, V can be specified as a function of n. A general result for density estimates is that as n increases, V should decrease. Rudemo [Rudemo, 1982] states that for the univariate case the length h should decrease as n^(-1). For the m-dimensional case with n samples, the volume can be expressed as

    V = h^m.                                                         (4.19)

Here, h is the size of each subrange. Without loss of generality, h can be taken to be the same for each class, provided that the attribute values are normalized by their respective ranges. The parameter h can also be defined as the length of each side of the hyper-cube.
While the size of h can be found by cross-validation, automatic methods are also available (although this is still an active research area in statistics) for choosing h. This issue is discussed further in Section 4.1.5.

Now define the function φ(u) as follows:

    φ(u) = { 1   if |u_k| ≤ 1/2,  k = 1, 2, ..., m                   (4.20)
           { 0   otherwise.

The function φ(u) defines a unit hyper-cube in m-dimensional space with its center at the origin. Therefore, the function

    φ((ac - ac_i)/h),   where ac, ac_i ∈ C_j                         (4.21)

will be unity if ac belongs to the hyper-cube of volume V_n centered at ac_i, and is zero otherwise. Hence, the number of samples in this hyper-cube is given by

    k_n = Σ_{i=1}^{n} φ((ac - ac_i)/h),   where ac, ac_i ∈ C_j       (4.22)

Now, the estimate p̂(ac|C_j) is

    p̂_n(ac|C_j) = (1/n) Σ_{i=1}^{n} (1/V_n) φ((ac - ac_i)/h).        (4.23)

In order to find the high-density regions, it is necessary to find the local maxima of p̂_n(ac|C_j). These maxima are found by taking the partial derivatives of p̂_n(ac|C_j) with respect to each feature (ac_k) and equating the derivatives to zero. Furthermore, the second (partial) derivatives with respect to all features must be negative for a maximum. The criteria for a local maximum are

    ∂p̂_n(ac|C_j)/∂ac_k = 0       for k = 1, 2, ..., m                (4.24)

    ∂²p̂_n(ac|C_j)/∂ac_k² < 0     for all k = 1, 2, ..., m.           (4.25)

Next, the concept of rule generation is addressed. The basic idea is to center each rule at a local maximum of the probability density function p̂_n(ac|C_j)P(C_j). Note, according to Bayes' relation, that this amounts to maximizing P(C_j|ac), which is an intuitively appealing criterion for generating rules for a particular class C_j. In addition, each rule is assigned an activation region and a priority. The assignment of the activation region and the priority is best understood by considering a 1-D, single-mode distribution for a two-class case, as shown in Figure 4.1.

In establishing the rule base, certain regions of the density function are deemed significant, as illustrated in Figure 4.2.
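The estimate p̂_n(ac|C_j) of Equation 4.23 is straightforward to evaluate numerically. The sketch below uses the hyper-cube kernel of Equation 4.20 and locates the modal point on a coarse grid, a discrete stand-in for the derivative conditions of Equations 4.24 and 4.25. The sample data, bandwidth h, and grid resolution are invented for illustration and are not values from this work.

```python
# Parzen-type estimate of p(ac|Cj) with the hyper-cube (uniform) kernel.

def phi(u):
    """Unit hyper-cube kernel (Equation 4.20): 1 if |u_k| <= 1/2 for all k."""
    return 1.0 if all(abs(uk) <= 0.5 for uk in u) else 0.0

def parzen_estimate(ac, samples, h):
    """p_hat(ac|Cj) = (1/n) * sum_i (1/V_n) * phi((ac - ac_i)/h),
    with V_n = h^m as in Equation 4.19."""
    n, m = len(samples), len(ac)
    vol = h ** m
    k = sum(phi([(a - b) / h for a, b in zip(ac, s)]) for s in samples)
    return k / (n * vol)

# Invented 2-D samples: a tight cluster near (0.33, 0.30) plus one outlier.
samples = [(0.31, 0.32), (0.33, 0.29), (0.34, 0.32), (0.35, 0.27), (0.80, 0.65)]
h = 0.2

# Coarse grid search for the modal point (stand-in for Equations 4.24-4.25).
grid = [(x / 10.0, y / 10.0) for x in range(11) for y in range(11)]
mode = max(grid, key=lambda p: parzen_estimate(p, samples, h))  # near (0.3, 0.3)
```

A finer grid (or a gradient-based search on a smoothed kernel) would locate the mode more precisely; the coarse grid is enough to show how the high-density region emerges from the sample counts.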
Specifically, lobes of the density function which are above a predefined threshold are considered for rule generation. In view of this, the rule boundaries are set at the points of intersection of this threshold with the density functions, and the height of the window (rule priority) w_i is selected so as to preserve the probability within the window.

The required window height is calculated as

    w_1 = (1/(ac_2 - ac_1)) ∫_{ac_1}^{ac_2} p(ac|C_1)P(C_1) dac      (4.26)

[Figure 4.1: Probability Density Function of a Single-mode, Two-Class Case. The densities p(ac|C_1)P(C_1) and p(ac|C_2)P(C_2) are shown with a threshold level and the points ac_1, ac_2, ac_3.]

[Figure 4.2: Probability Density Function of a Single-mode, Two-Class Case and the Selection of Windows for Rule Generation.]

where ac_1 and ac_2 are selected at the points of intersection of the density function and the threshold. Similarly,

    w_2 = (1/(ac_4 - ac_3)) ∫_{ac_3}^{ac_4} p(ac|C_2)P(C_2) dac.     (4.27)

In practice, the density functions p(ac|C_j)P(C_j) will each have more than one modal point, and this will lead to a number of rules centered at the local maxima of the density functions. The priority of each rule is determined by the height of the window associated with each peak, which is calculated as in Equations 4.26 and 4.27.

For many problems, there may be several classes and several attributes associated with them. Also, there may be several peaks within the same class. For the general case (the ith peak of the jth class), therefore, Equations 4.26 and 4.27 can be expressed as

    w_ij = (1/V_ij) ∫_{R_ij} p(ac|C_j)P(C_j) dac                     (4.28)

where V_ij = Π_{k=1}^{m} (ac_kij_u - ac_kij_l) is the volume of the hyper-rectangle R_ij, and the subscripts u and l correspond to the upper and lower bounds of the region R_ij for each attribute ac_k in the attribute vector ac.

4.1.1 On-line Inference

In general, Equations 4.24 and 4.25 have many solutions, corresponding to the modal points of the density function p(ac|C_j).
Each modal point i will have an associated hyper-rectangle which corresponds to a rule in the rule base.

Define a function Ω such that

    Ω(u) = { 1   if |u_k| ≤ 1/2 for k = 1, 2, ..., m
           { 0   otherwise.

Then,

    Ω_ij = Ω((ac - η_ij)/h_ij)

defines a hyper-rectangle at η_ij, where i corresponds to the ith solution of Equations 4.24 and 4.25 for the jth class. The vectors η_ij can also be thought of as the cluster centers corresponding to each solution in class j. The length h_kij of each side of the hyper-rectangle can be expressed in terms of the corresponding attribute values as

    h_kij = ac_kij_u - ac_kij_l,   k = 1, 2, ..., m                  (4.29)

and the cluster centers η_kij can be expressed as

    η_kij = (ac_kij_u + ac_kij_l)/2,   k = 1, 2, ..., m              (4.30)

where k is the attribute number, and u and l correspond to the upper and lower bounds of the attribute in the selected cluster (cluster i). The attribute values ac_kij_u and ac_kij_l are selected such that the smallest hyper-rectangle formed by these ranges encloses the projection, onto the m-dimensional co-ordinate system, of the density function intersected at the threshold level. This process is more easily understood for the 1-D case shown in Figure 4.2. A graphical representation of this procedure is the connectivity diagram, as shown in Figure 4.4. Examination of this diagram reveals that high-density areas appear as densely concentrated relationships. It is possible, therefore, to use this diagram to form rules for the group or class. Rule generation through connectivity analysis is described in detail in Section 4.2.

Now, for a given feature vector ac, the values of Ω_ij can be computed for i = 1, 2, ..., p_j, where p_j is the number of peaks (modal points) or solutions in the jth class, and for j = 1, 2, ..., q, where q is the number of classes.

Three possible cases exist.

1. Ω_ij = 1 for unique i and j.

2. Ω_ij = 1 for more than one j.

3.
Ω_ij = 0 for all i and j.

Case 1 is straightforward, since the given feature vector belongs to the group or class C_j. In case 2, the given feature vector belongs to more than one class, and the class C_j that gives the maximum p̂_n(ac|C_j) can be selected. That is, select class C_j if

    p̂_n(ac|C_j) ≥ p̂_n(ac|C_r)   for all r ≠ j.                       (4.31)

This criterion is appropriate if all classes or groups have the same a priori probability. If the a priori probabilities are different, the following rule can be used:

    p̂_n(ac|C_j)P(C_j) ≥ p̂_n(ac|C_r)P(C_r)   for all r ≠ j            (4.32)

where P(C_r) and P(C_j) are the a priori probabilities of groups C_r and C_j respectively. Note that this rule has the form of the Bayesian decision rule.

In the present research, however, the probability values at each point of the probability distribution are not stored, because this is not practical, particularly for large-dimensional spaces. Hence, in the current approach, only the significant regions (i.e., the regions with high probability density) are represented and used for on-line inferencing. In order to determine the class when there is ambiguity, the following rule is adopted. Select class C_j if

    w_•j > w_•r   for all r ≠ j                                      (4.33)

where • stands for any i value of the corresponding group. Again, if the class a priori probabilities are different,

    w_•j P(C_j) > w_•r P(C_r)                                        (4.34)

can be used.

In case 3, the feature vector ac does not belong to any of the high-density regions. In this case, the present methodology does not classify the feature vector into any of the groups; instead, a default class is assumed. The regions of the density functions that correspond to this case are the regions that are eliminated by the threshold level. The present approach prefers a dominant default class to a less certain class.
In other words, this methodology assigns a new object whose attribute values correspond to a low-probability-density area to the class that has the highest a priori probability P(C_j). The reason is that, when there is no other evidence, assigning a feature vector to the class with the highest a priori probability reduces the probability of error, as shown in Section 4.1.4.

The height of the hyper-rectangles (w_ij), which corresponds to the relationship density in the connectivity diagram, can be used to prioritize the rules of the rule base. The rule generation procedure using the connectivity diagram is described in Section 4.2.

4.1.2 Error Analysis

Again for simplicity, a 1-D, single-mode distribution for a two-class case, as shown in Figure 4.1, is considered. As illustrated in this figure, the Bayesian error for these distributions is given by

    P(error|ac) = { P(C_2|ac)  if ac ∈ C_1                           (4.35)
                  { P(C_1|ac)  if ac ∈ C_2

and a total classification error of

    P(error) = ∫_{-∞}^{ac_B} p(ac|C_2)P(C_2) dac
             + ∫_{ac_B}^{∞} p(ac|C_1)P(C_1) dac.                     (4.36)

The total error for the rule-based representation of the distribution functions shown in Figure 4.2 is calculated as

    P(error) = ∫_{-∞}^{ac_1} p(ac) dac + ∫_{ac_1}^{ac_2} p(ac|C_2)P(C_2) dac
             + ∫_{ac_2}^{ac_3} p(ac) dac + ∫_{ac_3}^{ac_4} p(ac|C_1)P(C_1) dac
             + ∫_{ac_4}^{∞} p(ac) dac.                               (4.37)

4.1.3 Comparison of Bayesian Error to Rule Base Error

The Bayesian classification error for the two-class, 1-D case shown in Figure 4.1 can be expressed as in Equation 4.36. It can also be expanded as

    P(Bayes Error) = ∫_{-∞}^{ac_1} p(ac|C_2)P(C_2) dac + ∫_{ac_1}^{ac_2} p(ac|C_2)P(C_2) dac
                   + ∫_{ac_2}^{ac_B} p(ac|C_2)P(C_2) dac + ∫_{ac_B}^{ac_3} p(ac|C_1)P(C_1) dac
                   + ∫_{ac_3}^{ac_4} p(ac|C_1)P(C_1) dac + ∫_{ac_4}^{∞} p(ac|C_1)P(C_1) dac  (4.38)

where ac_1, ac_2, ac_3, ac_4, and ac_B are as defined in Figures 4.1 and 4.2.
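As a numerical check of Equation 4.36, one can assume a parametric form for the two class densities. The sketch below uses two unit-variance Gaussians with equal priors; these densities, means, and priors are illustrative assumptions only, since the class densities in this work are estimated non-parametrically.

```python
# Numerical check of the Bayes error (Equation 4.36) for two assumed
# unit-variance Gaussian class densities with equal priors.
import math

def gaussian_cdf(x, mu, sigma):
    """CDF of a Gaussian with mean mu and standard deviation sigma."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

mu1, mu2, sigma = 0.0, 2.0, 1.0   # assumed class means and common sigma
P1 = P2 = 0.5                     # equal a priori probabilities
ac_B = (mu1 + mu2) / 2.0          # Bayes boundary for equal priors/variances

# P(error) = int_{-inf}^{ac_B} p(ac|C2)P(C2) dac
#          + int_{ac_B}^{inf}  p(ac|C1)P(C1) dac
bayes_error = (P2 * gaussian_cdf(ac_B, mu2, sigma)
               + P1 * (1.0 - gaussian_cdf(ac_B, mu1, sigma)))
```

For these assumed parameters the two terms are equal by symmetry, and the total error reduces to 1 - Phi(1), about 0.159; moving the threshold away from ac_B, as the rule windows of Equation 4.37 do, can only increase the error, which is the point of the comparison in this section.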
From Figure 4.2, the probability of rule base classification error can be expressed as follows:

    P(Rule Base Error) = ∫_{-∞}^{ac_1} p(ac|C_2)P(C_2) dac + ∫_{-∞}^{ac_1} p(ac|C_1)P(C_1) dac
                       + ∫_{ac_1}^{ac_2} p(ac|C_2)P(C_2) dac + ∫_{ac_2}^{ac_B} p(ac|C_2)P(C_2) dac
                       + ∫_{ac_2}^{ac_B} p(ac|C_1)P(C_1) dac + ∫_{ac_B}^{ac_3} p(ac|C_2)P(C_2) dac
                       + ∫_{ac_B}^{ac_3} p(ac|C_1)P(C_1) dac + ∫_{ac_3}^{ac_4} p(ac|C_1)P(C_1) dac
                       + ∫_{ac_4}^{∞} p(ac|C_2)P(C_2) dac + ∫_{ac_4}^{∞} p(ac|C_1)P(C_1) dac.  (4.39)

The difference between the Bayesian and rule base classification errors is, therefore, the difference between Equations 4.38 and 4.39:

    P(Error Diff.) = ∫_{-∞}^{ac_1} p(ac|C_1)P(C_1) dac + ∫_{ac_2}^{ac_B} p(ac|C_1)P(C_1) dac
                   + ∫_{ac_B}^{ac_3} p(ac|C_2)P(C_2) dac + ∫_{ac_4}^{∞} p(ac|C_2)P(C_2) dac.   (4.40)

These errors are highlighted in Figure 4.3.

[Figure 4.3: Graphical Illustration of the Error Difference Between the Rule Base and Bayesian Classification Methods. The densities p(ac|C_1)P(C_1) and p(ac|C_2)P(C_2) are shown with the threshold and the points ac_1, ac_2, ac_3.]

4.1.4 Rule Base Error for Multi-Class, Multi-Modal Case

In this section, a general result for the classification error in the multi-class, multi-modal, multi-dimensional case is derived.

Theorem: For the multi-class, multi-modal case, the probability of rule base classification error is

    P(Error) = 1 - Σ_{j=1}^{q} Σ_{i=1}^{p_j} ∫_{R_ij} p(ac|C_j)P(C_j) dac
                 - Δ Σ_{k=1}^{q} Σ_{l=1}^{p_k} ∫_{R_ij,l,k} p(ac|C_k)P(C_k) dac    (4.41)

where

    i - cluster number of class j;
    j - dominant class of cluster i;
    p_j - number of clusters of class j;
    q - number of classes;
    R_ij - ith region (hyper-rectangle) of the jth class;
    l - cluster number of class k;
    k - class of a cluster intersecting with class j;
    Δ - intersection indicator:
        Δ = 1 if cluster i of class j intersects with cluster l of class k,
            and j is the dominant class of all intersecting classes;
        Δ = 0 otherwise;
    R_ij,l,k - intersection of R_ij with R_lk.

Proof: Consider a region i, R_ij, for the jth class.
This region is a hyper-rectangle in which the density function corresponding to the jth class, p(ac|C_j), has higher values than those of all the other classes. Therefore, the summation term

    Σ_{j=1}^{q} Σ_{i=1}^{p_j} ∫_{R_ij} p(ac|C_j)P(C_j) dac           (4.42)

is the summation of the probabilities of the dominant class of each hyper-rectangle. If two clusters intersect, the probability of the non-dominant class within the intersected region should also be counted toward the probability of error. Therefore the last term on the right-hand side,

    Δ Σ_{k=1}^{q} Σ_{l=1}^{p_k} ∫_{R_ij,l,k} p(ac|C_k)P(C_k) dac,

gives the total probability of all intersecting regions. This quantity is included only if cluster i of class j is dominant (has the largest window height of all intersecting clusters). Also, since the total probability is

    Σ_{j=1}^{q} ∫_{-∞}^{∞} p(ac|C_j)P(C_j) dac = 1,                  (4.43)

the probability of classification error for the general case is

    P(Error) = 1 - Σ_{j=1}^{q} Σ_{i=1}^{p_j} ∫_{R_ij} p(ac|C_j)P(C_j) dac
                 - Δ Σ_{k=1}^{q} Σ_{l=1}^{p_k} ∫_{R_ij,l,k} p(ac|C_k)P(C_k) dac.   (4.44)

As described in Section 4.1.1, when a new object does not belong to any class it is assigned a default class (the class that has the highest a priori probability, say P(C_h)). This further reduces the probability of rule base classification error.

Theorem: For a multi-class, multi-modal case with a default class, the probability of rule base classification error is

    P(Error) = 1 - P(C_h)
             - Σ_{j=1}^{q} Σ_{i=1}^{p_j} { ∫_{R_ij} [p(ac|C_j)P(C_j) - p(ac|C_h)P(C_h)] dac
             - Δ Σ_{k=1}^{q} Σ_{l=1}^{p_k} ∫_{R_ij,l,k} [p(ac|C_k)P(C_k) - p(ac|C_h)P(C_h)] dac }

where all the parameters are as previously defined.

Proof: Since all the non-peak regions (all regions below the threshold level) are assigned class C_h, the probability captured by this class must be subtracted. Now consider the following term:

    Σ_{j=1}^{q} Σ_{i=1}^{p_j} [ ∫_{R_ij} p(ac|C_h)P(C_h) dac
             - Σ_{k=1}^{q} Σ_{l=1}^{p_k} ∫_{R_ij,l,k} p(ac|C_h)P(C_h) dac ].
This term accounts for the probability of class C_h within the high-density regions. As in the previous case, the term preceded by the summation over intersections removes the overlapping regions. The above term has to be deducted from the total probability of class C_h in order to account for the probability of this class within the non-peak regions. Hence the term

    P(C_h) - Σ_{j=1}^{q} Σ_{i=1}^{p_j} [ ∫_{R_ij} p(ac|C_h)P(C_h) dac
           - Σ_{k=1}^{q} Σ_{l=1}^{p_k} ∫_{R_ij,l,k} p(ac|C_h)P(C_h) dac ]

where P(C_h) is the total probability of class C_h. Now, by subtracting the above term from the probability of error given in Expression 4.44 and rearranging the terms, the probability of error

    P(Error) = 1 - P(C_h)
             - Σ_{j=1}^{q} Σ_{i=1}^{p_j} { ∫_{R_ij} [p(ac|C_j)P(C_j) - p(ac|C_h)P(C_h)] dac
             - Δ Σ_{k=1}^{q} Σ_{l=1}^{p_k} ∫_{R_ij,l,k} [p(ac|C_k)P(C_k) - p(ac|C_h)P(C_h)] dac }

can be obtained.

For a multi-dimensional distribution with no parametric form, the probability values over the complete multi-dimensional space have to be estimated and stored in order to use the Bayesian classification rule. When inferencing on-line, these probability values must be retrieved and compared for each class in order to classify a new object. In the present approach, however, only the significant regions are stored. This improves the efficiency of the on-line inferencing process by a considerable factor. For example, consider a 3-D feature space (as in the fish classification case) whose values are sampled at intervals of 0.1 for the density representation, with each value ranging from 0 to 10. This gives 1,000,000 discretized probability values to be stored in a database for on-line classification using the Bayesian rule. If only the significant values are considered for on-line inferencing, however, only a fraction of this number has to be stored: for the fish contour classification case, only 22 significant regions were stored, resulting in 22 entries.

4.1.5 Density Estimation Kernel and Size of Subrange (h)

In the analysis of rule generation, the kernel used, φ(u), is a uniform function on the range (-1/2, 1/2). Research in the area of density estimation presents other types of kernels which may give improved performance.
An alternative to the uniform function is the Gaussian kernel, which takes all available data samples into account, giving an appropriate weighting to each, to estimate the value of the distribution at a particular point. In contrast, the uniform kernel considers only the samples falling in a particular window, and all samples in this window are given the same weighting. The Gaussian kernel has the form

    φ_h(x) = (1 / (h √(2π))) e^(-x² / (2h²))                         (4.45)

where the parameter h is the bandwidth of the estimating kernel; it controls the smoothness of the distribution estimate. It is also related to the size of the subrange and therefore to the shape of the distribution. If h is too small, the estimated relationship distribution will be ragged; if it is too large, the distribution will be oversmoothed. An empirical choice for h is discussed in [Rudemo, 1982].

4.2 Rule Generation Through Connectivity Analysis

Consider a set of m features ac_k, k = 1, 2, ..., m. For each feature (attribute) in a group or class, the measured (non-dimensionalized) feature values are arranged in a column in ascending order of numerical value. The feature values of each data item in the group are plotted in their respective columns, and the plotted attribute values in adjacent columns are joined by line segments. There are m-1 such line segments for one data item (a tuple of attributes) in a group, and the completely connected line segments over all attributes form a relationship. If there are n data items in a group, there will be n such relationships for the model that is formed by the particular group. In this manner, a connectivity diagram is obtained for the particular group, as shown in Figure 4.4. The subranges for each attribute column are determined by the window defined by φ(x) (the function φ(x) was introduced in Section 4.1) and by maximizing the number of relationships that fall within this window.
For example, if a particular set of local regions of the attribute columns (local regions of columns ac_1, ac_2, ..., ac_m) gives a peak in the occurrence of relationships, this set of local regions is taken as the subranges. The subranges are denoted by AC_kij, where the subscripts k, i, and j represent the attribute number, the subrange number, and the group number respectively. Initially, the size of the subrange is chosen to be small, and it is iteratively increased to obtain the optimum value. The optimum size is usually obtained by cross-validation; automatic methods are also available, as discussed in [Rudemo, 1982]. As previously mentioned, if the size of the subrange is too small, the relationship distribution of the connectivity diagram will be too ragged (scattered); if it is too large, it will be over-smoothed and important properties that are useful for rule generation will be obscured. Once the subranges are determined in this manner, they can be logically combined with the aid of the connectivity diagram to formulate the rules for the group.

[Figure 4.4: The Generalized Connectivity Diagram of a Model. Attribute columns ac_1, ac_2, ..., ac_m are drawn side by side with their subranges AC_1ij, AC_2ij, ..., AC_mij marked; each data item forms a relationship line joining one value in each column. Overlapped relationships (densely present) indicate high-density regions, and the number of occurrences of each relationship is noted.]

4.2.1 Threshold Level Selection

The statistical procedure used for selecting the relationships having adequate density levels is now described. First, the number of repetitions of distinct relationships (connectivities) in a connectivity diagram is determined, and the relationships are arranged in ascending order of the frequency (density) of repetition. Suppose that the corresponding density-variable values are x_1, x_2, ..., x_m.
The number of distinct relationships that have each of these density values is counted, and this frequency data is used to fit an appropriate probability distribution (such as the Poisson or the multinomial) to the data. The discrete probability function for the Poisson distribution is

    f(x_i) = e^(-λ) λ^(x_i) / x_i!                                   (4.46)

and for the multinomial distribution it is

    f(x_1, ..., x_m) = (n! / Π_i x_i!) Π_i p_i^(x_i).                (4.47)

The parameters λ or p_i are estimated from the frequency data obtained from the connectivity diagram. The Poisson distribution can be fitted to some of the models, but the more general multinomial can be fitted to most models. The subsequent steps involved are:

1. Estimate the parameters of the distribution (e.g., using the method of moments or the maximum likelihood estimate).

2. Test the goodness of the fit.

3. Select the cut-off value c for the relationship density (a stochastic variable X) such that the probability

    P(X < c) ≤ p̄                                                     (4.48)

where p̄ is the cumulative probability associated with the low-density relationships that will be discarded (typically 0.05 to 0.10).

4. Select c to be the threshold level for relationship density, i.e.,

   if X < c, disregard the corresponding relationships;
   if X ≥ c, extract the corresponding regions from the connectivity diagram for rule generation.

The purpose of disregarding the values of X less than c is (a) to improve the efficiency of on-line inferencing by considering only the significant regions, and (b) to eliminate possible noisy data that might be present in the group. This procedure of rule base generation is carried out off-line.

When new data become available, the parameters of the distribution are estimated again. Depending on the new values of the estimated parameters, a new cut-off value is computed. In this manner, the system can learn which rules should be deleted and perhaps which new rules should be included.
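Steps 1 and 3 of the procedure above can be sketched for the Poisson case. The count data below are invented for illustration, and a method-of-moments estimate of λ (the sample mean) is assumed:

```python
# Sketch of cut-off selection: fit a Poisson model to the relationship-density
# counts and pick the largest c such that P(X < c) <= p_bar (Relation 4.48).
import math

def poisson_pmf(x, lam):
    """Poisson probability f(x) = e^(-lam) * lam^x / x!  (Equation 4.46)."""
    return math.exp(-lam) * lam ** x / math.factorial(x)

def select_cutoff(densities, p_bar=0.05):
    """densities: observed relationship-density values from the diagram.
    Returns the largest c with P(X < c) <= p_bar; relationships with
    density below c are then disregarded."""
    lam = sum(densities) / len(densities)  # method-of-moments estimate
    c = 0
    while True:
        c += 1
        # P(X < c) = sum_{x=0}^{c-1} f(x)
        if sum(poisson_pmf(x, lam) for x in range(c)) > p_bar:
            return c - 1

# Invented density counts with mean 6: the cut-off works out to c = 2,
# so relationships occurring fewer than twice would be discarded.
cutoff = select_cutoff([2, 3, 5, 6, 7, 7, 8, 10], p_bar=0.05)
```

A goodness-of-fit test (step 2) would precede this in practice; if the Poisson model is rejected, the same cut-off logic applies with the fitted multinomial probabilities in place of the Poisson terms.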
The knowledge base is continuously updated by this procedure.

4.3 Illustrative Example

An example of the process of rule generation using connectivity analysis is described in this section. In this example, the fish data for a particular group (Group 5) are chosen. A sample of the measured attributes in this group is shown in Table 4.1. The attributes ac_1, ac_2, and ac_3 are Nose-Eye/Length, Nose-Gill/Length, and Width/Length of the fish respectively, and these attributes are normalized with respect to their ranges so that they vary from 0.0 to 1.0.

Table 4.1: A Sample of Data on Fish Dimensions for Group 5.

Fish No.   ac_1   ac_2   ac_3
1          0.53   0.56   0.78
2          0.35   0.27   0.31
3          0.85   0.73   0.39
4          0.34   0.32   0.32
5          0.33   0.29   0.35
6          0.31   0.32   0.36
7          0.79   0.62   0.45
8          0.32   0.37   0.41
9          0.32   0.28   0.27
10         0.78   0.68   0.49

Figure 4.5 shows the connectivity diagram for Group 5, which was drawn using the data for this group according to the procedure described in Section 4.2. In this figure, the bold-faced numbers in each attribute column represent the medians (η_kij) of the subranges AC_kij. Table 4.2 shows the medians of the subranges of the relationships for Group 5.

Table 4.2: The Medians η of the Subranges of the Relationships for Group 5.

i \ k    1      2      3
1        0.90   0.65   0.35
2        0.70   0.60   0.30
3        0.45   0.85   0.30

The subrange AC_kij is defined in terms of η as

    AC_kij = (η_kij - h_kij/2,  η_kij + h_kij/2)                     (4.49)

where h_kij is the length of a side of the hyper-rectangle, as defined in Equation 4.29.

[Figure 4.5: The Connectivity Diagram for Group 5.]

Now, the rules for Group 5 can be formulated with the aid of the connectivity diagram and Table 4.2.
Rules that assign a new fish to Group 5 are shown below.

Rule 1: AC_115 AND AC_215 AND AC_315 -> Model 5
Rule 2: AC_125 AND AC_225 AND AC_325 -> Model 5
Rule 3: AC_135 AND AC_235 AND AC_335 -> Model 5

4.3.1 Rule Prioritization and Threshold Level Selection

In Figure 4.5, the number on each relationship cluster represents the peak height of the relationship density within the highlighted region; in other words, this number is the number of relationships within the window formed by the kernel function defined in Equation 4.21. The threshold level chosen for this diagram is 2, i.e., the relationships with a density value of 1 are omitted. Table 4.3 summarizes the probabilities of the relationship density values of the connectivity diagram shown in Figure 4.5, while Figure 4.6 gives the multinomial probability distribution of the density values.

Table 4.3: Probabilities of the Multinomial Distribution of Density Values of Relationships for Group 5.

No. of Occurrences X_i   Probability P(X_i)
1                        0.094
2                        0.133
3                        0.171
4                        0.110
5                        0.120
6                        0.076
7                        0.151
8                        0.000
9                        0.146

[Figure 4.6: Probability Distribution of Density Values of Relationships for Group 5.]

Once the connectivity diagrams for all groups are drawn, a table similar to Table 4.3 and a distribution similar to Figure 4.6 can be formulated for all groups. The next step of the process is to fit a suitable probability distribution to the relationship density values. A Poisson model was attempted first, but it was found not to provide a good fit for the data. A multinomial distribution was then fitted, and Table 4.4 shows the resulting probabilities. Figure 4.7 shows the multinomial probability distribution.

Table 4.4: Probabilities of the Multinomial Distribution of Density Values of Relationships for All Groups.
No. of Occurrence X_i   Probability P(X_i)
1                       0.090
2                       0.123
3                       0.152
4                       0.157
5                       0.110
6                       0.094
7                       0.079
8                       0.035
9                       0.042
10                      0.023
11                      0.070
12                      0.010
13                      0.007
14                      0.008

Figure 4.7: Probability Distribution of Density Values for All Groups. [Bar chart of probability against relationship density values 1 through 14.]

The relationships with density values less than 2 are omitted because this gives the required value for p̄ according to Relation 4.48. Also, they do not present any consistent association of the subranges for formulating the rules. What this means is that these are the valleys or the non-peak regions of the density function p(ac|C_j). Therefore, according to the notation given in Relation 4.48, c = 2, i.e., the threshold level for inclusion of the relationships is 2. In the connectivity diagram shown in Figure 4.5, only the regions with densities higher than or equal to this value are shown.

In order to assign priorities to these regions, the window heights (w_ij) as given in Equation 4.28 must be computed. For example, the probability

    ∫_R1 p(ac|C_5) dac = 0.33

for region 1 of Group 5 (i.e., the region with a peak height 7). The volume of the hyper-rectangle formed by the subranges associated with this region is 8 × 10⁻³. Therefore,

    w_15 = 0.33 / (8 × 10⁻³) = 41.25

Since the probability used in this computation is the class-conditional probability, the above value must be multiplied by the class a priori probability to obtain the actual priority. The priority value is therefore

    Priority_15 = w_15 P(C_5) = 41.25 × 0.16 = 6.6

The priorities for the three rules of Group 5 are summarized in Table 4.5.

Table 4.5: The Priorities for the Three Rules of Group 5.

Rule (Region)   Probability   Volume       Priority
1 (R1)          0.33          8.0 × 10⁻³   6.6
2 (R2)          0.15          6.0 × 10⁻³   4.0
3 (R3)          0.43          8.0 × 10⁻³   8.6

Results

Rules for all five groups were generated in the manner described in the previous sections and implemented in a rule base.
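The priority computation and the prioritized rule firing described above can be sketched as follows. The priority arithmetic reproduces the Table 4.5 values; the subrange bounds of the two rules below are illustrative assumptions, not the thesis's actual subranges.

```python
# Hedged sketch of Section 4.3.1: a rule is a hyper-rectangle of attribute
# subranges plus a priority w_ij * P(C_j); the highest-priority matching
# rule decides the model, and a fish matching no rule is "nc".

def priority(prob, volume, prior):
    """w_ij = P(region | C_j) / volume;  priority = w_ij * P(C_j)."""
    return prob / volume * prior

def classify(fish, rules):
    matched = [r for r in rules
               if all(lo < a < hi for a, (lo, hi) in zip(fish, r["bounds"]))]
    if not matched:
        return "nc"                     # not classified
    return max(matched, key=lambda r: r["priority"])["model"]

rules_group5 = [
    {"model": 5, "priority": 6.6,       # Rule 1 (region R1, Table 4.5)
     "bounds": [(0.70, 0.95), (0.55, 0.80), (0.30, 0.55)]},   # assumed
    {"model": 5, "priority": 4.0,       # Rule 2 (region R2, Table 4.5)
     "bounds": [(0.25, 0.45), (0.20, 0.40), (0.20, 0.45)]},   # assumed
]
```

With these assumed bounds, fish 7 of Table 4.1, (0.79, 0.62, 0.45), fires Rule 1 and is assigned Model 5, while an attribute vector outside both hyper-rectangles is reported as not classified.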
The system was tested using two common methods, namely the resubstitution method and the leave-one-out method. For the resubstitution method, a test set of data was generated by selecting five samples from each group of data that were used to generate the rule base. This data set was tested with the system, and the results are shown in Table 4.6. In this table, f1, f2, and f3 represent the ratios of features (nose-eye length)/(length), (nose-gill length)/(length), and (width)/(length) of the fish, respectively. These values are normalized with their respective ranges such that they vary from 0.0 to 1.0. The group column indicates the actual group of the corresponding data item (fish). The result columns show the group number if the data item is correctly classified; otherwise they show "x" or "nc", which respectively mean that the data item is incorrectly classified or not classified.

In the leave-one-out approach, one data sample is left out and the system is trained with the rest of the data. The system is then tested with this sample. This process is repeated for all the samples in the database. The results of this approach for the test set generated for resubstitution are also shown in Table 4.6.

Table 4.6: Rule-Based Method Classification Results for Resubstitution and Leave-One-Out Methods.

Sample No.  f1    f2    f3    Group  Resub.  Lve. 1 Out
1           0.25  0.23  0.35  1      1       1
2           0.51  0.53  0.44  1      1       1
3           0.54  0.47  0.42  1      1       1
4           0.41  0.43  0.23  1      nc      nc
5           0.52  0.54  0.47  1      1       1
6           0.77  0.86  0.46  2      2       2
7           0.17  0.38  0.54  2      2       2
8           0.53  0.54  0.65  2      2       2
9           0.53  0.53  0.47  2      x       x
10          0.82  0.87  0.42  2      2       2
11          0.49  0.47  0.53  3      x       x
12          0.70  0.69  0.39  3      3       3
13          0.61  0.61  0.34  3      3       3
14          0.55  0.40  0.27  3      3       3
15          0.42  0.31  0.17  3      3       3
16          0.66  0.53  0.38  4      4       4
17          0.57  0.56  0.43  4      4       4
18          0.43  0.46  0.32  4      4       4
19          0.40  0.49  0.37  4      4       4
20          0.88  0.85  0.63  4      4       nc
21          0.53  0.56  0.78  5      5       nc
22          0.35  0.27  0.31  5      5       5
23          0.85  0.73  0.39  5      5       5
24          0.34  0.32  0.32  5      5       5
25          0.32  0.28  0.27  5      5       5
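The bookkeeping behind the two evaluation protocols can be sketched as follows. The classifier here is a deliberate stand-in (a 1-nearest-neighbour rule on the feature ratios) rather than the thesis's rule base; only the resubstitution and leave-one-out procedures themselves are the point.

```python
# Hedged sketch of resubstitution vs. leave-one-out evaluation.
# data is a list of ((f1, f2, ...), group) pairs; the classifier is an
# assumed stand-in, not the rule-based system of this chapter.

def nearest_neighbour(sample, train):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda t: dist(t[0], sample))[1]

def resubstitution_accuracy(data):
    """Test on the same set used for training (optimistic by design)."""
    correct = sum(nearest_neighbour(f, data) == g for f, g in data)
    return correct / len(data)

def leave_one_out_accuracy(data):
    """Hold out each sample in turn, train on the rest, test on it."""
    correct = 0
    for i, (features, group) in enumerate(data):
        train = data[:i] + data[i + 1:]
        correct += nearest_neighbour(features, train) == group
    return correct / len(data)
```

Note that with a 1-nearest-neighbour stand-in, resubstitution is trivially perfect (each sample is its own nearest neighbour), which illustrates why leave-one-out is the more honest of the two estimates.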
4.4 Integration of Heuristics and Expert Knowledge

Once the statistical rules are generated as described in Section 4.3, they can be implemented in an expert system shell to infer the model or the group of a new object. The efficiency of this process can be further improved by integrating heuristics or expert knowledge into the system. This process is described now.

Consider a situation where there are q groups and m attributes for each object in any group. If the rules for this system were generated as described in Section 4.3, they will have the following form:

    S_ij : (ac_1ijl < ac_1 < ac_1iju) ∧ (ac_2ijl < ac_2 < ac_2iju) ∧ ⋯ ∧ (ac_mijl < ac_m < ac_miju)

where ac are the attributes, S_ij is the ith statistical rule of the jth class, l and u are the lower and upper bounds of the subranges formed by the attributes ac, and ∧ denotes the AND operation.

Rules for all i and all j are collected in the rule-base system. Although these rules are adequate to infer the class or group of a new object, the rule search can be considerably improved, resulting in faster inference speed, if additional available knowledge about the problem is integrated into the system.

Now consider the following additional facts given by an expert in the field.

• Object type A can only be in groups j_A = {1, 3, ...}
• Object type B can only be in groups j_B = {2, 4, ...}
• ...
• Object type T can only be in groups j_T = {1, 5, ...}

These facts can be used as a hierarchical preprocessor prior to searching for a statistical rule. For example, rules can be expressed as follows:

    M_j ← Type_A ∧ S_ij

Once the left term of the right-hand side of this rule is unified, the rule will search for a statistical rule that satisfies the right term of the right-hand side in order to achieve the goal (i.e., M_j).
Since the number of elements in each of the sets j_A, j_B, ..., j_T is smaller than the total number of groups (because j_A, j_B, ..., j_T ⊂ J, where J = {1, 2, ..., q}), the search space for rules is considerably smaller in comparison to the total search space, resulting in faster inference speed.

Identification of the type of the object is common-sense knowledge; the facts considered above, however, are expert knowledge.

4.4.1 Integration of Knowledge into Fish Rule Base: An Illustration

The following facts were obtained from the fish cutting experts regarding the cutting contours.

• Sockeye Salmon can only have contour models j_s = {1, 3, 5}
• Pink Salmon can only have contour models j_p = {2, 3}
• Coho Salmon can only have contour models j_c = {1, 4}
• Chum Salmon can only have contour models j_ch = {4, 5}

A decision tree can be developed for the fish classification rule base as shown in Figure 4.8. This tree can be directly translated into Prolog rules as follows. The tree shown in Figure 4.8 is a simplified version of the actual decision tree.

Figure 4.8: The Decision Tree for the Fish Classification Rule Base.

fish(M) :- sockeye, ss(M).
fish(M) :- pink, sp(M).
fish(M) :- coho, sc(M).
fish(M) :- chum, sm(M).

ss(M) :- in_range(AC1, ac111l, ac111u), in_range(AC2, ac211l, ac211u),
         in_range(AC3, ac311l, ac311u), M = 1.

ss(M) :- in_range(AC1, ac121l, ac121u), in_range(AC2, ac221l, ac221u),
         in_range(AC3, ac321l, ac321u), M = 1.

ss(M) :- in_range(AC1, ac113l, ac113u), in_range(AC2, ac213l, ac213u),
         in_range(AC3, ac313l, ac313u), M = 3.
ss(M) :- in_range(AC1, ac115l, ac115u), in_range(AC2, ac215l, ac215u),
         in_range(AC3, ac315l, ac315u), M = 5.

in_range(X, X1, X2) :- X > X1, X < X2.

Here, functors and constants are in lower-case letters and variables are in upper-case letters.

Chapter 5

Attribute Measurement

The attributes of a fish that are necessary for model matching have to be measured on-line, quickly and accurately, by the vision system. Low-to-medium-level image analysis techniques are adopted for this purpose. These attributes serve as the context for determining a model from the model base using the rule base of the system, as described in previous chapters. The attributes selected from factor analysis of a more extensive set of features are nose-to-gill length, nose-to-eye length, length, and width of the fish. Two image processing algorithms are used for measuring the nose-to-gill length. One method, as described in [Riahi and de Silva, 1990][de Silva, Riahi, and Gamage, 1991], is based on the isolation and enhancement of an elongated edge-like structure formed by the shadow effect of the gill plate of a fish. In this approach, the blobs formed by thresholding the enhanced image and grouping edges through a connectivity procedure are boxed. The longest box with the largest aspect ratio (length/width) determines the gill coordinates. In the other method, an enhanced 2-D image of the gill region is transformed into a 1-D representation through directional averaging of pixel intensity in the direction close to the edge direction. Characteristic peaks in the transformed image (projection profile) establish the gill position and the eye position. The theory behind the projection profile method is described in Section 5.1. This method is generally faster than the boxing method.
Figure 5.1: Captured Image Under Structured Lighting.

The attributes length and width of the fish are measured by isolating the largest continuous segment (corresponding to the fish) in the image and identifying the smallest rectangle which encloses this segment. By ensuring that the length direction of the fish is parallel to a known axis (x-axis) of the image, the length and the width are determined by the co-ordinates of the rectangle.

5.1 Attribute Measurement Using Projection Profile Method

To measure the attributes nose-to-eye length and nose-to-gill length, it is necessary to locate the eye position and gill position of the fish in the image. Therefore, these features of the fish are enhanced by using structured lighting as shown in Figure 5.1. Notice in Figure 5.1 that the gill and eye of the fish are enhanced, while the other features in the head region of the fish image are washed out by the flood lights.

Figure 5.2: The Isolated Largest Continuous Image Segment (Fish in the Image).

An image captured under this lighting condition is thresholded and segmented to isolate the largest box with a high aspect ratio. This box corresponds to the fish in the image, as shown in Figure 5.2. Once the co-ordinates of this box are determined, relatively trivial processing such as run-length encoding within the box will give the length and width of the fish. Statistical surveys on salmon show that the gill of the fish lies within the first quarter from the nose. In other words, the nose-to-gill distance is less than or equal to a quarter of the length of the fish. Therefore, only the first quarter of the identified box, as shown in Figure 5.3, is processed in order to determine the location of the gill and eye features. The head image is first convolved with a Gaussian filter to eliminate the random high-frequency noise.

    g(x, y) = (1 / (2πσ²)) e^(−(x² + y²) / (2σ²))    (5.51)
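For implementation, the Gaussian of Equation 5.51 is sampled on an M × M grid. A minimal sketch (the kernel size M = 3 and σ = 1.0 are assumed illustrative values; after normalization, the 1/(2πσ²) factor cancels):

```python
# Sketch of discretizing the 2-D Gaussian of Equation 5.51 into an M x M
# smoothing kernel, normalized so that the weights sum to 1.
import math

def gaussian_kernel(size=3, sigma=1.0):
    half = size // 2
    k = [[math.exp(-(x * x + y * y) / (2.0 * sigma * sigma))
          for x in range(-half, half + 1)]
         for y in range(-half, half + 1)]
    total = sum(sum(row) for row in k)
    return [[v / total for v in row] for row in k]

kernel = gaussian_kernel()   # 3 x 3, sigma = 1.0
```

The centre weight is the largest and the kernel is radially symmetric, as expected of Equation 5.51.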
Figure 5.3: A Region of Interest of the Fish Image.

Now, let the image be i(x, y); then

    i′(x, y) = i(x, y) * g(x, y) = ∫∫ i(u, v) g(x − u, y − v) du dv    (5.50)

is the resultant Gaussian-smoothed image, as shown in Figure 5.4. The function g(x, y) is the 2-D Gaussian function given in Equation 5.51.

Figure 5.4: Gaussian-Smoothed Fish Head Image.

Features enhanced using structured lighting give rise to step changes in the intensity (edges) in the image. These intensity changes (edges) are enhanced by convolving the resultant image obtained above with a gradient operator (filter). The resultant image is shown in Figure 5.5. Let the gradient operator be d(x, y); then

    i″(x, y) = i′(x, y) * d(x, y) = ∫∫ i′(u, v) d(x − u, y − v) du dv    (5.52)

where d(x, y) is the gradient operator of the form

    d(x, y) = ∂f(x, y)/∂x    (5.53)

This type of directional gradient operator is used to enhance the edges only in a direction of interest. The resultant image i″(x, y) obtained in this manner is projected onto an axis (a 1-D signal) to give the projection profile. Since the curvature of the gill is low and its direction is approximately parallel to the y-axis of the image, the projection of the 2-D image i″(x, y) onto the x-axis gives a large peak, as shown in Figure 5.6, at the position of the gill.

    Projection Profile = p(x) = ∫ i″(x, y) dy    (5.54)

Figure 5.5: Edge-Enhanced Head Image.

Figure 5.6: Projection Profile of the Head Image.
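The smooth, differentiate, and project pipeline of Equations 5.50 to 5.54 can be sketched directly in the discrete domain. The 8 × 8 step-edge image and the simple 1-D kernels below are illustrative stand-ins for the fish image and the Gaussian and Gaussian-derivative filters; the point is that a vertical step edge (like the gill shadow) produces a peak in the x projection profile.

```python
# Hedged sketch of the projection profile pipeline. Border pixels are
# replicated so the image edges do not create spurious gradient peaks.

def convolve_rows(image, kernel):
    """1-D correlation of each row (enough for an x-direction operator)."""
    half = len(kernel) // 2
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for k, g in enumerate(kernel):
                xx = min(max(x + k - half, 0), w - 1)  # replicate borders
                acc += image[y][xx] * g
            out[y][x] = acc
    return out

def projection_profile(image):
    """p(x) = sum over y of the image column x (Equation 5.54)."""
    return [sum(row[x] for row in image) for x in range(len(image[0]))]

# Synthetic 8x8 image with a step edge between columns 3 and 4.
img = [[0.0] * 4 + [10.0] * 4 for _ in range(8)]
smooth = convolve_rows(img, [0.25, 0.5, 0.25])    # Gaussian-like smoothing
edges = convolve_rows(smooth, [-1.0, 0.0, 1.0])   # x-direction gradient
profile = [abs(v) for v in projection_profile(edges)]
peak_x = profile.index(max(profile))              # edge (gill) position
```

For this synthetic image, the profile peaks at the step edge, which is exactly how the gill and eye positions are read off p(x) in the method above.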
A secondary peak of p(x), not as large as the gill peak but large enough to distinguish from other features, in between the gill and the nose gives the eye position of the fish (Figure 5.6).

Although the above equations are given in the continuous domain to emphasize the mathematics, all computations are essentially carried out in the discrete domain as follows. If the digital image captured is I(X, Y), then

    P(X, Y) = I(X, Y) * G(X, Y) = Σ_{i=−⌊M/2⌋}^{⌊M/2⌋} Σ_{j=−⌊M/2⌋}^{⌊M/2⌋} I(X + i, Y + j) G(−i, −j)    (5.55)

where G(X, Y) is the M × M Gaussian filter and P(X, Y) is the resultant smoothed image.

Convolving with the derivative of a Gaussian will give the same result as convolving the image first with the Gaussian, and then convolving the result with the derivative operator:

    ∂/∂x (i(x, y) * g(x, y)) = (∂g(x, y)/∂x) * i(x, y)    (5.56)

This result is true because both the convolution and the derivative are linear operators. Therefore, in the actual implementation, the first derivative of the Gaussian is convolved with the digital image to enhance the edges. Since the edges of interest lie in the x-direction, the derivative of the Gaussian filter is also taken in the same direction.

    I″(X, Y) = I(X, Y) * G_x(X, Y) = Σ_{i=−⌊M/2⌋}^{⌊M/2⌋} Σ_{j=−⌊M/2⌋}^{⌊M/2⌋} I(X + i, Y + j) G_x(−i, −j)    (5.57)

where G_x(X, Y) is the first derivative of the Gaussian with respect to x. Now, the projection profile, P(X), is

    P(X) = Σ_{Y=0}^{N} I″(X, Y)    (5.58)

where I″(X, Y) is the N × N enhanced image. The features of the image are determined by the peaks of P(X).

5.2 Attribute Measurement Using Region Boxing

This method was primarily developed to compute the nose-to-gill length of a fish and, as mentioned previously, is based on the isolation and enhancement of an elongated edge-like structure formed by the shadow effect of the gill plate of a fish. As in the projection profile case, an image captured under structured lighting is first convolved with a directional edge operator (first derivative of a Gaussian).
The resultant image is thresholded, and the edges in the binary image are grouped in order to form contour segments. Edge grouping on a binarized image is much faster than on a gray-scale image. Next, the contour segments are boxed, and the longest box (in the direction of the gill) with the largest aspect ratio is chosen as the box that contains the gill contour. The coordinates of this box can be used to determine the position of the gill and, hence, the nose-to-gill length.

5.3 Comparative Evaluation of the Techniques

The edge-grouping process in the region boxing method is computationally expensive and time-consuming. This problem can, however, be reduced if a region of interest (a region which contains the gill contour) is selected. As mentioned in Section 5.1, through statistical analysis of dimensional measurements on Pacific salmon, it is estimated that the distance from the nose to the gill-plate edge is approximately 20% of the length of the fish, with ±5% variation. Edge grouping into contour segments within the region of interest reduces the computational cost by 50%.

The directional averaging method is much faster than the region boxing method. Table 5.1 shows typical processing times associated with various steps of each method. These times were obtained by implementing each algorithm on a vision station (IRI SD512) available in the Industrial Automation Laboratory. The accuracy of both methods is approximately ±1.2 mm, and the robustness of both methods is acceptable.
The directional averaging method is chosen because of its speed.

Table 5.1: Typical Times for Various Image Processing Operations.

Process                 Complete Frame (512 × 512)   Reduced Window (200 × 200)
Image Capture           33 ms                        N/A
3 × 3 Convolution       137 ms                       25 ms
Edge Grouping           N/A                          100 ms
Directional Averaging   40 ms                        8 ms

5.4 Measurement of Orientation

The algorithms described in Section 5.1 assume that the angle of the fish is known; specifically, they assume that the fish is aligned with one of the axes of the image plane of the camera. For accurate computation of the attributes, as well as for the positioning of the cutter, the orientation of the fish on the conveyer has to be established. Furthermore, if the contour cutting system is not in operation, a default angle cut has to be performed. Two angles of importance are shown as θ_v and θ_a in Figure 5.7. Specifically, θ_v represents the angle of the V-cut with associated recovery of useful meat within the head region. This angle will be fixed in this particular implementation and will be achieved by a double angular cutter. The cutting angle, θ_a, of an angle cut is chosen to recover the meat in the top region of a head while eliminating the pectoral fin. Statistics show (see Table 5.2) that the optimum value of θ_a is approximately 60°.

Table 5.2: Typical Optimal Angles of Cut.

Existing Angles (θ_a, θ_v)   New Angles (θ_a, θ_v)   Nose-Gill (cm)   Meat Saving (oz)
70, 60                       60, 35                  11.4             2.5
70, 50                       60, 35                  10.7             2.0
70, 60                       60, 35                  11.4             2.0
70, 60                       60, 35                  11.4             1.5
75, 60                       60, 35                  11.4             1.0

The concept of orientation is examined in the context of a particular application. Three techniques of measuring the orientation of fish are developed [Gamage and de Silva, 1990]. The methods are applied to samples of Pacific salmon, and the results are compared with the values obtained through direct measurement of orientation in these samples.

5.4.1 Least Rectangle Method

The first method employs a direct geometrical definition of orientation. In this method,
In this method,the nose and the tail co-ordinates in a digitized image of fish are determined throughimage enhancement and processing.The image is first thresholded to make the image binary. This eliminates the sensornoise and the background illumination variations [Jain, 1989]. Next, the image is seg-mented through a process known as labeling which groups continuous image segmentstogether. Following segmentation, each continuous segment is boxed with the least rect-angle and the largest of which is selected as the fish in the image (Figure 5.8).Chapter 5. Attribute Measurement^86Figure 5.7: Two Angles of Importance for Recovery Improvement in Head Removal inFish Processing.Chapter 5. Attribute Measurement^ 87Figure 5.8: The Least Rectangle Enclosing a Fish.Following is a high level description of the steps associated with this technique.1. Synchronize the camera with the conveyer.2. Snap an image if a fish is detected on the conveyer.3. Threshold and store the image in the frame buffer.4. Label the image regions into continuous segments (blobs) through connectivityanalysis.5. Box the continuous segments.6. Isolate the largest box and assign a unique pixel intensity value for the correspond-ing blob.7. Scan the edge region of the box and determine the tail and nose co-ordinates.Chapter 5. Attribute Measurement^ 888. Compute the angle of orientation.Once the largest box is isolated systematic scanning of the edge regions of the rectanglewill identify the co-ordinates of the nose and the tail. When the mouth of a fish is open,two nose points are obtained by this procedure. The mid point of these two locationsis used as the nose point. Similarly, two tail points are obtained for the two corners ofthe tail fin. Again the mid point of the two is assifmed the tail point. 
The orientation isthen computed as follows using the nose and tail co-ordinates.Consider the two possible cases, mouth open and mouth closed, as shown in Figure 5.9and 5.10.The Case of Mouth Closed:If the two tail co-ordinates are (x1, y1) and (x2, y2), then the mid point of the tail isand if the nose co-ordinates are((xi 4-x2) (Y1 4-Y2))2^2(5.59)N — (x3, y3)the angle of the orientation is given by= tan-1 [1Y3 (y1 +Y2)/211x3 — (xi + x2)/21] •The Case of Mouth Open:The tail co-ordinates are same as above and the nose co-ordinates are(5.60)(5.61)Chapter 5. Attribute Measurement^ 89N ((x3 + x4) (Y3 + Y4))2^2the angle of the orientation is= tan-1 1(Y3 Y4)/2 (Yi + y2)/2111(x3 + x4)/2 — (x1 + x2)/21] •(5.62)(5.63)Figure 5.9: The Case of Mouth Closed.This method assumes that the complete image of the fish is available. Structuredstrobe lighting is used to improve the contrast between the background and the objectand to avoid the blur due to the motion of the conveyer.Chapter 5. Attribute Measurement^ 90Figure 5.10: The Case of Mouth Open.5.4.2 2nd Moment of Area MethodThis method uses second moment of area to establish orientation of an image [Jain, 1989][Faber and Stokely, 1988]. In this method, first, the head region is enhanced and pro-cessed as before (Steps 1-3 of method 1) to obtain the silhouette image of the region. Thearea within this silhouette is assigned a uniform pixel intensity, and the second momentsof area about the orthogonal co-ordinate axes of a reference frame through the centroidand the corresponding cross moment of area are computed as follows (Figure 5.11).The second moment of area about an axis through the centroid is given byIC ,_ E Er2(m,n)(m,n) eRE E [(n — ii)cos(0) — (m — iff.)sin(0)]2^(5.64)(man) eR(m,n) eR1if= E E n.N(5.66)Chapter 5. 
Figure 5.11: Illustration of the Parameters Used in the Calculation of the Second Moment of Area.

where (m̄, n̄) is the mass center, with

    m̄ = (1/N) Σ_{(m,n)∈R} m    (5.65)

Note that N is the number of pixels, which represents the area of the image. It is well known that the second moments of area about orthogonal co-ordinate axes and the cross moment of area provide the directions of the major and minor principal axes with respect to the reference frame. The two principal axes are orthogonal. The angle of orientation, θ, is given by

    θ = (1/2) tan⁻¹ [ 2 M_11 / (M_20 − M_02) ]    (5.67)

where

    M_ij = Σ_{(m,n)∈R} (m − m̄)^i (n − n̄)^j    (5.68)

This method assumes that the orientation of the fish is given by the major principal axis. The assumption is sometimes questionable because, for example, for fish in the same direction, the computed moments of area will depend on many factors such as whether the mouth is open or closed, whether the fins are trimmed or not, and so on. Furthermore, the calculated moments are very sensitive to the noise present in the image. For example, if there is noise at a point distant from the center of gravity of the image, the calculated moments will not be accurate.

5.4.3 Fundamental Ellipse Method

In this method, the orientation of an object is defined as the orientation of the major axis of the fundamental ellipse that fits the border of the object. The fundamental ellipse can be extracted by the Fourier series expansion of the borderline contour of the image. Fourier descriptors of the borderline contour of an object have, in the past, been used to describe the shape of objects in an image [Lin and Chellappa, 1987]. A special feature extraction algorithm was developed to extract the bounding contour of an image. This algorithm is given below.

1. Locate any point on the boundary.
2. Go to one of the adjoining pixels.
3. Check whether this pixel is on the boundary.
4. If on the boundary
   (a) Add this pixel to the boundary.
   (b) Go to step 2.
5. Else
   (a) Check whether this pixel is in the body or out of the body of the image.
   (b) If in the body
       i. Move towards the edge.
       ii. Go to step 3.
   (c) Else
       i. Move towards the body.
       ii. Go to step 3.
6. Repeat until this pixel = starting pixel.

This array of pixels, along with the distances to a reference point in the body of the image and the corresponding angle to each pixel from a reference axis, is stored (Figure 5.12).

If we now plot these distances against their angle to the reference axis, we obtain a graph as shown in Figure 5.13. It is apparent from this figure that there are two cycles within the period of 2π. Hence, the coefficients corresponding to the first harmonic (half the fundamental period) provide the fundamental ellipse, from which the orientation of its major axis can be computed in a straightforward manner as follows:

    f(θ) = a₀/2 + Σ_{n=1}^{∞} (a_n cos(nθ) + b_n sin(nθ))    (5.69)

where

    a_n = (1/π) Σ_{j=1}^{N} f(θ_j) cos(nθ_j) δθ    (5.70)

    b_n = (1/π) Σ_{j=1}^{N} f(θ_j) sin(nθ_j) δθ    (5.71)

For the case of n = 2, we have

    a₂ = (1/π) Σ_{j=1}^{N} f(θ_j) cos(2θ_j) δθ    (5.72)

    b₂ = (1/π) Σ_{j=1}^{N} f(θ_j) sin(2θ_j) δθ    (5.73)

where N is the number of discrete points on the contour. Consequently, the angle of orientation is given by

    θ = (1/2) tan⁻¹ (b₂/a₂)    (5.74)

This approach works well even when there are unknown irregularities (due to a collapsed fin or broken tail) on the borderline of the image. These irregularities, which generally correspond to the higher harmonics of the Fourier series, will not affect the fundamental ellipse.

5.4.4 Results and Comparison

The actual angle of the fish was compared with the results obtained from the three methods. Table 5.3 shows some of the angles compared (in degrees), and Figure 5.14 illustrates the results obtained using the three methods. The three methods discussed above have approximately the following execution times.
Figure 5.12: Distances to a Reference Point and the Corresponding Angle.

Figure 5.13: The Graph of the Distance r[j] to a Reference Point vs. the Angle (N points).

Figure 5.14: An Illustration of the Three Methods to Calculate the Orientation. (a) Captured Image. (b) Least Rectangle Method. (c) Second Moment of Area Method. (d) Fundamental Ellipse Method.

Table 5.3: Typical Results from the Three Methods of Computing the Angle of Orientation.

Box     Moment   Ellipse   Actual
-2.85   -4.83    -4.04     -3.82
2.14    1.41     2.45      2.23
10.63   9.57     9.51      9.67
15.46   13.90    16.04     15.86
20.82   19.32    20.17     20.14
24.92   23.03    25.58     25.42
36.01   35.02    35.43     34.53
42.41   43.82    43.92     43.11
46.52   47.58    48.08     47.75

Region Boxing Method        - 200 ms
2nd Moment of Area Method   - 150 ms
Fundamental Ellipse Method  - 250 ms

The accuracy, speed, and robustness of the least rectangle method are acceptable, but it requires a clean, straight fish image. The moment method is the fastest method; however, it is very sensitive to the selected threshold value and to the noise present in the image. The noise problem can be reduced by convolving the image with a Gaussian filter prior to the calculation of moments, and automatic thresholding can be employed to improve the accuracy of the computed moments. It is also assumed in this method that the object under consideration (fish) is symmetric. The main advantages of this method are its speed and robustness. The ellipse method was found to be superior to the other methods due to the accuracy of its results. The robustness of this method is also acceptable. If the processing time is not critical, this method is recommended for measuring the orientation.
However, if time is critical, the moment method is recommended.

Chapter 6

Pattern Recognition Techniques

Pattern recognition, in general, is concerned with pattern classification as well as estimation of parameters. The classification procedure can be defined as a mapping from a feature space to a class-membership space [Pao, 1989]. This is illustrated in Figure 6.1.

Figure 6.1: Schematic Representation of Pattern Classification. [Instantiations x̄1, x̄2, ..., x̄k of a generic object X are represented in terms of features f(x̄), and the mapping function R(f(x̄)) yields the class index c(X).]

Here, X is the object of interest, and it may have different instantiations represented by x̄1, x̄2, ..., x̄k. The process of classification comprises two steps. In the first step, a specific appearance of the object is described in terms of features, i.e., x̄ → f(x̄), while in the second step, a mapping R(f(x̄)) is carried out to determine the class membership. For example, consider the process to be classification of apples. Let x̄1, x̄2, ..., x̄k be apples moving on a conveyor. With this setup, a suitable set of features to represent apples may be the colour, the shape, and the size. Therefore, f(x̄) will be {colour, shape, size}. With a different application, however, the optimal features may be different. The importance of the function f(), therefore, is to select the appropriate set of features for each application.

Developing a classification model involves feature extraction and learning the mapping function R(). Feature extraction deals with how and what features should best describe the object in the form of f(). Learning involves training the mapping using a set of labeled (already classified) data.

Pattern recognition also deals with estimation of an object's parameters from a given set of features.
Estimation of parameters is a more general procedure than classification. In fact, classification can be viewed as a special case of estimation in which the class index can be thought of as the only parameter being estimated. The advantage of estimation over classification is that the former gives a continuous value to the parameter being estimated, in contrast to the latter, which picks the best-suited class for the given case. Figure 6.2a illustrates how a training set can be used to train the mapping function, R(), off-line for an estimation model. Figure 6.2b shows how this mapping function is exercised to estimate the parameters of the object. The notation used here is the same as before, except that A(x̄1), ..., A(x̄k) represent the parameters (attributes) of the pattern. To illustrate this, consider the problem of determining the price, quality, and size of apples. The possible features for a particular instantiation (e.g., apples on a conveyor) may be the colour, the shape, and the weight. The attributes that are to be estimated using these features are clearly the price, the quality, and the size. According to the notation used, f(x̄) = {colour, shape, weight} and A(x̄) = {price, quality, size}.

Figure 6.2: (a) Learning the Mapping Function Using a Training Set. (b) Estimation Using the Learned Mapping Function.

6.1 Pattern Recognition in Automated Fish Processing

Real-world situations cannot be assessed in terms of isolated facts; rather, they can be described in terms of patterns of interrelated facts. This interrelation may, in some cases, be implicit. For example, it may be implicitly understood that a given set of facts corresponds to the same object or situation [Pao, 1989]. In other cases, a pattern may be meaningful only because of explicit relationships among the various features of the pattern. For example, in fish classification, since the relationships between the cutting contours and the dimensional measurements of the fish are not implicit, explicit relationships have to be established between the cutting contours and the other dimensional features.

Various combinations of features of fish were tested for possible patterns relating to the cutting contours. These features include length, width, nose-eye length, nose-gill length, and so on, and the problem was approached in terms of both the classification of contours and the estimation of the parameters of the contours. Classification is carried out using several techniques, including a rule-based method, neural networks, Bayesian inference, and k-nearest-neighbour classifiers. Multiple regression and neural network methods were used to estimate the parameters of the contours.

The theoretical development, as well as a practical approach to rule generation through connectivity analysis of the rule-base system, is described in Chapter 4. In this chapter, the alternative approaches are described in detail, and the results of these approaches are presented as well.

6.2 Neural Networks

A network of elemental processors, arranged in a manner similar to biological neural nets, that is capable of learning how to recognize patterns is known as a neural network. The elemental processors are considered to be the neurons, and the network is said to be a Perceptron [Pao, 1989][Duda and Hart, 1973]. Neural networks are also referred to as Parallel Distributed Processing or Connectionism in the literature.

The present research is based on the concept of the Generalized Delta Rule (GDR) formulated by Rumelhart, Hinton, and Williams [Rumelhart et al., 1986]. The GDR is known to be a practical way of implementing a Perceptron-like system, and its basic theory is discussed later in this section.

A simple linear discriminant network is an array of multipliers and summing junctions, as shown in Figure 6.3. The lines in the box of this figure act as multipliers, and the common node of these lines is a summing junction. The factor of multiplication for each line is given by w_i. When the pattern feature values are presented to the inputs of this two-class classification system, the output indicates whether the feature belongs to a certain class, provided that the correct weighting factors are known. These weights are determined using a training set during the initial learning phase.

Figure 6.3: Schematic Representation of a Linear Discriminant. [Input pattern x, weight vector w, output b.]

A generalized multiclass classification system consists of several outputs, as shown in Figure 6.4. In this figure, i and o represent the input and output layer nodes. Also, the values x1, ..., xn represent the input pattern, b1, ..., bk represent the output class, and w_ij represent the weights of the connections. This generates a matrix of weights, in contrast to the two-class problem, where the weights form a vector. Again, the weight matrix is learned using a training data set. To form an analytical method to determine the weights, we shall consider the two-class (c1 and c2) classification problem as modeled below.

Figure 6.4: Schematic Representation of a Multiclass Linear Discriminant.

    Xw = b    (6.75)

where X is the matrix of training-set patterns (x), w is a column vector of weights, and b is the vector of output values specified by the training set. Each row of matrix X is a feature vector x of the form x^T = (x1, x2, ..., xn), and the number of rows of this matrix is equal to the number of training data. For example, if the number of features is n and the number of training data is m, X will be an m × n matrix. Also,

    x ∈ c1 if b_i is positive (+1)    (6.76)
    x ∈ c̄1 if b_i is negative (−1)    (6.77)

where c1 is one of the two possible classes and c̄1 means not c1. Said another way, if b_i is positive, then the given feature vector is classified to be in class c1; otherwise it is not in class c1, which means it belongs to class c̄1.

Equation 6.75 can be solved using the pseudo-inverse of X to obtain the weighting factors,

    w = X† b    (6.78)

where X† is the pseudo-inverse and is defined as

    X† = (X^T X)⁻¹ X^T    (6.79)

In practice, this analytical procedure may lead to inaccurate results [Pao, 1989], and researchers formulated the following rules to find w through iterative procedures.

Let w^1 be arbitrary (the initial values of the weights); then the value of the weight vector at the kth step of iteration is

    w^(k+1) = w^k + ρ(b_k − (w^k)^T x_k) x_k    (6.80)

where b_k is the value of the output b for pattern x_k and might be +1 for patterns in a class and −1 for patterns not in that class, and b is the output vector for all patterns, i.e., b^T = (b1, b2, ..., bk, ..., bm). The term x_k is the pattern being considered at step k. Expression 6.80 is a rule for updating the weight vector until all pattern vectors are classified correctly. The procedure is to add a certain amount of x_k to w^k if x_k is not classified correctly. The proportionality factor, ρ(b_k − w^T x_k) x_k, is zero or vanishingly small if x_k is classified correctly, as is clear from Equation 6.75.

Equation 6.80 can be rewritten as

    Δw = ρδx    (6.81)

where δ = (b − w^T x). Equation 6.81 is known as the Generalized Delta Rule. The above-described algorithm for finding the weight vector w is called the linear Perceptron algorithm. In the linear case, there is no advantage in having more than one layer, because the weights of the other layers can be simplified to obtain the same results as with one layer.
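The iterative update of Equation 6.80 can be sketched in a few lines. The two-dimensional training clusters below (with a bias feature appended) are made-up illustrative data, and the learning rate ρ and epoch count are assumed values; only the update rule itself follows the text.

```python
# Hedged sketch of the linear Perceptron trained with the Generalized
# Delta Rule: w <- w + rho * (b - w.x) * x   (Equation 6.80).

def train_delta_rule(patterns, targets, rho=0.1, epochs=100):
    w = [0.0] * len(patterns[0])
    for _ in range(epochs):
        for x, b in zip(patterns, targets):
            y = sum(wi * xi for wi, xi in zip(w, x))   # w^T x
            delta = b - y                              # delta = (b - w^T x)
            w = [wi + rho * delta * xi for wi, xi in zip(w, x)]
    return w

def classify(w, x):
    """Sign of the discriminant: +1 for class c1, -1 otherwise."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1

# Two linearly separable clusters; the last component is a bias input.
patterns = [(-1.0, -1.0, 1.0), (-0.9, -1.1, 1.0),
            (1.0, 1.0, 1.0), (0.9, 1.1, 1.0)]
targets = [-1, -1, 1, 1]
w = train_delta_rule(patterns, targets)
```

After training, the sign of w^T x reproduces the ±1 labels of Equations 6.76 and 6.77 for all four training patterns, which is all a single linear layer can offer; the multilayer, nonlinear extension is what the GDR of Rumelhart et al. generalizes.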
In other cases, a pattern may bemeaningful only because of explicit relationships among the various features of the pat-tern. For example in fish classification, since the relationships between cutting contoursand the dimensional measurements of the fish are not implicit, explicit relationships haveto be established between the cutting contours and the other dimensional features.Various combinations of features of fish were tested for possible patterns relating tothe cutting contours. These features include length, width, nose-eye length, nose-gilllength, and so on, and the problem was approached in terms of both the classification ofcontours and the estimation of the parameters of the contours. Classification is carriedout using several techniques including a rule-based method, neural networks, Bayesianinference, and k-nearest neighbour classifiers. Multiple regression and neural networkmethods were used to estimate the parameters of the contours.The theoretical development as well as a practical approach to rule generation throughconnectivity analysis of the rule base system is described in Chapter 4. In this chapter,the alternative approaches are described in detail, and the results of these approachesare presented as well.6.2 Neural NetworksA network of elemental processors arranged in a manner similar to biological neural netsthat is capable of learning how to recognize patterns is known as a neural network. TheChapter 6. Pattern Recognition Techniques^ 102elemental processors are considered to be the neurons and the network is said to bea Perceptron[Pao, 1989][Duda and Hart, 1973]. Neural networks are also referred to asParallel Distributed Processing or Connectionism in the literature.The present research is based on the concept of Generalized Delta Rule (GDR) for-mulated by Rumelhart, Hinton, and Williams[Rumelhart, et. al., 19861. 
The GDR is known to be a practical way of implementing a Perceptron-like system, and its basic theory is discussed later in this section.

A simple linear discriminant network is an array of multipliers and summing junctions, as shown in Figure 6.3. The lines in the box of this figure act as multipliers, and the common node of these lines is a summing junction. The factor of multiplication for each line is given by w_i. When the pattern feature values are presented to the inputs of this two-class classification system, the output indicates whether the feature vector belongs to a certain class, provided that the correct weighting factors are known. These weights are determined using a training set during the initial learning phase.

Figure 6.3: Schematic Representation of a Linear Discriminant (input pattern x, weight vector w, output b).

A generalized multiclass classification system consists of several outputs, as shown in Figure 6.4. In this figure, i and o represent the input and output layer nodes. Also, the values x1....xn represent the input pattern, b1....bk represent the output classes, and w(i,j) represent the weights of the connections. This generates a matrix of weights, in contrast to the two-class problem where the weights form a vector. Again, the weight matrix is learned using a training data set. To form an analytical method to determine the weights, we shall consider the two-class (c1 and c2) classification problem as modeled below.

Figure 6.4: Schematic Representation of a Multiclass Linear Discriminant.

Xw = b    (6.75)

where X is the matrix of training-set patterns (x), w is a column vector of weights, and b is the vector of output values specified by the training set. Each row of matrix X is a feature vector x of the form x^T = (x1, x2, ...xn), and the number of rows of this matrix is equal to the number of training data.
For example, if the number of features is n and the number of training data is m, X will be an m x n matrix. Also,

x ∈ c1 if b_i is positive (+1)    (6.76)
x ∉ c1 if b_i is negative (−1)    (6.77)

where c1 is one of the two possible classes. Said another way, if b_i is positive, then the given feature vector is classified to be in class c1; otherwise it is not in class c1, which means it belongs to class c2.

Equation 6.75 can be solved using the pseudo-inverse of X to obtain the weighting factors,

w = X† b    (6.78)

where X† is the pseudo-inverse, defined as

X† = (X^T X)^(−1) X^T.    (6.79)

In practice, this analytical procedure may lead to inaccurate results [Pao, 1989], and researchers have formulated the following rule to find w through an iterative procedure.

Let w_1 be arbitrary (the initial values of the weights); then the value of the weight vector at the kth step of the iteration is

w_(k+1) = w_k + ρ(b_k − w_k^T x_k) x_k    (6.80)

where b_k is the value of the output b for pattern x_k, and might be +1 for patterns in a class and −1 for patterns not in that class, and b is the output vector for all patterns, i.e., b^T = (b1, b2, ....bk....bm). The term x_k is the pattern being considered at step k. Expression 6.80 is a rule for updating the weight vector until all pattern vectors are classified correctly. The procedure is to add a certain amount of x_k to w_k if x_k is not classified correctly. The correction term, ρ(b_k − w_k^T x_k) x_k, is zero or vanishingly small if x_k is classified correctly, as is clear from Equation 6.75.

Equation 6.80 can be rewritten as

Δw = ρδx    (6.81)

where δ = (b − w^T x).

Equation 6.81 is known as the Generalized Delta Rule. The above algorithm for finding the weight vector w is called the linear Perceptron algorithm. In the linear case, there is no advantage in having more than one layer, because the weights of the other layers can be simplified to obtain the same results as with one layer.
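The iterative rule of equation 6.80 can be sketched in a few lines of code. The following is an illustrative sketch only: the toy data, the learning rate ρ, and the epoch count are made-up placeholders, not the thesis implementation or fish measurements.

```python
# Sketch of the linear Perceptron (delta) rule of equation 6.80:
#   w_{k+1} = w_k + rho * (b_k - w_k . x_k) * x_k
# The correction term vanishes once a pattern is classified correctly.

def train_linear_discriminant(patterns, targets, rho=0.1, epochs=100):
    """Iteratively learn a weight vector w so that sign(w . x) matches b."""
    n = len(patterns[0])
    w = [0.0] * n                      # w_1 = arbitrary (here: zeros)
    for _ in range(epochs):
        for x, b in zip(patterns, targets):
            out = sum(wi * xi for wi, xi in zip(w, x))
            err = b - out              # (b_k - w^T x_k)
            w = [wi + rho * err * xi for wi, xi in zip(w, x)]
    return w

# Linearly separable toy example: class c1 (b = +1) vs. not c1 (b = -1).
# The last component of each pattern is a constant bias input.
X = [(1.0, 2.0, 1.0), (2.0, 1.0, 1.0), (-1.0, -2.0, 1.0), (-2.0, -1.0, 1.0)]
b = [1, 1, -1, -1]
w = train_linear_discriminant(X, b)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1 for x in X]
print(preds)   # all four toy patterns end up on the correct side
```

For separable data such as this, the learned weights reproduce the target signs; with an inconsistent training set the rule instead settles near the least-squares solution of equation 6.78.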
The linear Perceptron algorithm, however, is not capable of yielding a discriminant that can solve more general problems. For example, the "exclusive-or" or parity problem cannot be solved using a linear discriminant or Perceptron. The present research, therefore, uses a semilinear feedforward net which applies the GDR with back propagation of errors. This net uses a semilinear activation function to compute the output of each node. Although the activation function is nonlinear, it is considered to be semilinear because of its approximately linear regions. The nonlinear activation function provides a better discrimination of the input patterns. The net is called feedforward because data flow only in the forward direction. In the learning phase of this net, the errors of the weights are propagated backwards, i.e., the errors of the weights of layer i, for example, are considered when computing the weights of the i − 1 layer. The system architecture for such a net is schematically shown in Figure 6.5. While the lengthy derivation of the mathematical formulae for a semilinear feedforward net with back propagation of error is not presented here, the important results are given below.

Figure 6.5: Schematic Representation of a Semilinear Feedforward Net (input pattern, input layer, hidden layer, output layer, output pattern).

With reference to Figure 6.5, the net input to a node in layer j is

net_j = Σ_i w_ji o_i    (6.82)

The output of node j is

o_j = f(net_j)    (6.83)

where f is the activation function. For the sigmoidal activation function, the output of node j is

o_j = 1 / (1 + e^(−(net_j + θ_j)/θ_0))    (6.84)

where θ_j is a threshold or bias and θ_0 determines the shape of the function.

In the training phase, the network adjusts its weights according to the input features provided by the training set so as to reach the desired output.
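The sigmoidal activation of equation 6.84, and a single weight adjustment of the kind carried out in this phase (the momentum update of equation 6.86), can be sketched as follows. The function names and numeric inputs are illustrative placeholders only; η = 1.0 and α = 0.1 are the experimentally chosen values reported later in this chapter.

```python
import math

# Sketch of the semilinear activation (equation 6.84) and one
# momentum-style weight update (equation 6.86).  theta is the node
# bias and theta0 the shape parameter of the sigmoid.

def sigmoid(net, theta=0.0, theta0=1.0):
    """Output of a node for net input `net` (equation 6.84)."""
    return 1.0 / (1.0 + math.exp(-(net + theta) / theta0))

def gdr_step(w, delta, o, prev_dw, eta=1.0, alpha=0.1):
    """One update: dw(n+1) = eta*delta_j*o_i + alpha*dw(n)."""
    dw = eta * delta * o + alpha * prev_dw
    return w + dw, dw

print(sigmoid(0.0))                          # 0.5, the sigmoid midpoint
w, dw = gdr_step(w=0.2, delta=0.05, o=1.0, prev_dw=0.0)
print(round(w, 3), round(dw, 3))             # 0.25 0.05
```

The second term of `gdr_step` is the momentum: at the next step the same `dw` is fed back in as `prev_dw`, smoothing successive weight changes.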
The objective in this phase is to minimize the system error, given by the following expression:

E = (1/2P) Σ_p Σ_k (t_pk − o_pk)²    (6.85)

where t_pk is the target or desired output and o_pk is the observed output at the kth output node for the pth case. In the learning phase, the following iteration is carried out until the system error falls below a threshold.

Δw_ji(n + 1) = η(δ_j o_i) + αΔw_ji(n)    (6.86)

where η is known as the learning rate and α is the momentum term. The term δ_j is defined as −∂E/∂net_j, and the w_ji are the weights of the links between layers i and j. The term Δw_ji(n + 1) is the incremental (or decremental) change in weight at the (n + 1)th step. The first term on the right-hand side of Equation 6.86 changes the weight of a link in proportion to the change in error with respect to the corresponding weight, while the second term specifies that the change at the (n + 1)th step should be somewhat similar to the change undertaken at the nth step. The momentum term α carries the momentum from the nth step to the (n + 1)th step. The terms η and α are determined by a trial-and-error procedure for the case in question.

6.3 A Neural Network for Fish Classification

This section explains the application of neural network theory to the fish classification problem. In this process, a set of 200 fish was used to train a classifier that could select one of five classes of cutting contour. The procedure for grouping fish into the classes is described in Chapter 3. The five classes were determined by setting a threshold on the wastage within a particular class. Clearly, this is a multiclass classification problem; hence, the following model was formulated.

f(FW) = B    (6.87)

where F is the matrix of training fish pattern features (ac) and f is the activation function.
The feature vector ac was chosen as follows:

ac^T = (nose_eye/length, nose_gill/length, width/length).    (6.88)

W is the weight matrix that has to be learned during the training phase, and B is the target output matrix, which gives the class of the input pattern.

A neural network with one input layer, one hidden layer, and an output layer was constructed and made to converge by experimenting with the number of nodes in the hidden layer as well as the learning rate and the momentum term. Since a network with one hidden layer required a large number of hidden nodes, a second hidden layer was added. It was clear from these experiments that the number of clusters in the input feature space has a direct relationship to the number of nodes that must be present in the network. The numbers of nodes that were experimentally found to give the best convergence were 20 for the first hidden layer and 8 for the second hidden layer. The number of nodes in the input layer is the same as the number of input features, and the number of nodes in the output layer is equal to the number of classes.

Another network was developed to accommodate dimensional input features rather than the non-dimensional features used in the previous case. Here, the features were not divided by the length; rather, the length was taken as a separate feature. The number of output nodes used in this network is the same as in the previous case. The number of nodes in the first hidden layer was increased to 40 because this resulted in faster convergence. The number of nodes in the input layer was 4, as there were four input features in this case. This network seemed to converge better than the previous one; however, neither network was adaptive enough to cope with new fish. This may be due to the large number of nodes used.
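The 3-20-8-5 topology just described can be sketched as a forward pass through three weight matrices. The weights below are random placeholders purely to show the data flow; in the actual system they would be the values learned by back propagation from the 200-fish training set.

```python
import math
import random

# Forward pass through the 3-20-8-5 architecture described above.
# Weights are random stand-ins, so the outputs are meaningless numbers;
# only the shapes and the flow of data are illustrated.

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

def forward(x, layers):
    """Propagate input x through a list of weight matrices."""
    out = x
    for W in layers:
        out = [sigmoid(sum(w * o for w, o in zip(row, out))) for row in W]
    return out

random.seed(0)
sizes = [3, 20, 8, 5]          # input, hidden 1, hidden 2, output classes
layers = [[[random.uniform(-1, 1) for _ in range(m)] for _ in range(n)]
          for m, n in zip(sizes, sizes[1:])]

features = [0.25, 0.23, 0.35]  # (nose-eye, nose-gill, width) / length
outputs = forward(features, layers)
print(len(outputs))            # 5: one activation per contour class
```

The class decision would then be taken from the five output activations, e.g. by thresholding them at 0.5 as discussed in Chapter 7.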
A large number of nodes, on the one hand, may give rise to good partitioning of the input feature space, but the resulting regions may be too small or too crisp for the system to adapt to a new fish. A small number of nodes, on the other hand, generalizes better, but for the present set of data it was impossible to make the system converge to an acceptable error level.

Various learning rates and momentum terms were investigated in order to speed up the convergence. High learning rates correspond to rapid learning but may result in oscillations in convergence. A large momentum term tends to dampen the oscillations but may slow the rate of learning. The optimum values for the learning rate and the momentum term were obtained experimentally; they were 1.0 and 0.1 respectively.

The major problem encountered during training was that the system did not converge with the complete set of data. Deleting the fish that were eliminated for the rule-based system (i.e., fish data below the selected threshold level) and training the network with the remaining fish showed reasonably good performance in convergence as well as in classification. It is reasonable, therefore, to assume that the eliminated data are noisy and may have caused random variations in the feature space. The elimination process is discussed in Chapter 4.

The system was tested in two ways. The first is the resubstitution method mentioned in Chapter 4. In this method, a sample of the data that was used to train the network is tested against the system, and the results are tabulated in Table 6.1. The same sample of data that was used to test the rule-based system is used here in order to compare the results; furthermore, the same sample will be used to test the other systems as well. In Table 6.1, f1, f2, and f3, as previously mentioned, represent the feature ratios (nose-eye length)/(length), (nose-gill length)/(length), and (width)/(length) respectively.
The group column indicates the group of the corresponding features, i.e., the group of the fish. The two results columns indicate the classification results of the two methods. The corresponding cell is marked "a", "x", or "nc" if the classification of the feature vector is ambiguous, incorrect, or unsuccessful (not classified) respectively; it shows the group number of the fish if it is correctly classified. An ambiguous classification occurs when the classifier indicates that the feature vector belongs to two or more classes or groups. An incorrect classification labels the input feature vector with a wrong class, while no classification does not label the input vector with any class or group (all outputs of the classifier are zero or close to zero).

In the leave-one-out approach, one data sample is left out and the remaining data set is used to train the network. Once the network is trained, it is tested with the sample that was left out. This process is repeated for all samples in the database, and the results for the test samples are tabulated in Table 6.1.

In another approach to this problem, a neural network was trained to estimate the coefficients of the cutting contour polynomials. The model used was the same as for classification except that the output matrix now consists of the values of the coefficients. The behavior of this network is similar to that of the classification network.

6.4 Multiple Regression

Multiple regression is an extension of simple regression in that there are several independent variables, rather than just the one independent variable of simple regression, used to estimate a dependent variable [Weisberg, 1980][Draper and Smith, 1966][Rice, 1988].
For example, a regression model with two independent variables to estimate a dependent variable is as follows.

z = β_0 + β_1 x + β_2 y    (6.89)

In the present case, the cutting contours are represented by polynomial functions, and the coefficients of these polynomials can be directly estimated using multiple regression. Therefore, the dependent variables are the polynomial coefficients of the cutting contours, and the independent variables are the features of the fish. The following regression model was formed to estimate the polynomial coefficients.

a_1 = β_10 + β_11 ac_1 + β_12 ac_2 + β_13 ac_3    (6.90)
a_2 = β_20 + β_21 ac_1 + β_22 ac_2 + β_23 ac_3    (6.91)
a_3 = β_30 + β_31 ac_1 + β_32 ac_2 + β_33 ac_3    (6.92)
a_4 = β_40 + β_41 ac_1 + β_42 ac_2 + β_43 ac_3    (6.93)

in which the a_i are the polynomial coefficients and the ac_i are the features or attributes of the fish. There are four coefficients in the polynomial model of the cutting contours; hence the four expressions in the above model. All fish data in the database were initially used to compute the parameters β_10 ... β_43.

Table 6.1: Neural Network Classification Results for Resubstitution and Leave-One-Out Methods.

Sample                            Classification Result
No.   f1    f2    f3    Group     Resub.   Lve. 1 Out
1     0.25  0.23  0.35  1         1        1
2     0.51  0.53  0.44  1         1        x
3     0.54  0.47  0.42  1         1        1
4     0.41  0.43  0.23  1         1        1
5     0.52  0.54  0.47  1         1        1
6     0.77  0.86  0.46  2         2        2
7     0.17  0.38  0.54  2         2        2
8     0.53  0.54  0.65  2         2        2
9     0.53  0.53  0.47  2         a        nc
10    0.82  0.87  0.42  2         2        2
11    0.49  0.47  0.53  3         x        x
12    0.70  0.69  0.39  3         3        3
13    0.61  0.61  0.34  3         3        x
14    0.55  0.40  0.27  3         3        3
15    0.42  0.31  0.17  3         3        3
16    0.66  0.53  0.38  4         4        4
17    0.57  0.56  0.43  4         4        nc
18    0.43  0.46  0.32  4         4        4
19    0.40  0.49  0.37  4         4        4
20    0.88  0.85  0.63  4         4        a
21    0.53  0.56  0.78  5         5        x
22    0.35  0.27  0.31  5         5        5
23    0.85  0.73  0.39  5         5        5
24    0.34  0.32  0.32  5         5        5
25    0.32  0.28  0.27  5         5        5

Although the model was acceptable according to the residual analysis, the variances of the coefficients of the features were significantly large.
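The parameters β_10 ... β_43 are obtained by least squares. A minimal sketch of fitting one such equation (6.90) via the normal equations follows; the training rows and the "true" betas are made-up placeholders standing in for the fish database, chosen so the fit can be checked exactly.

```python
# Least-squares fit of a1 = b10 + b11*ac1 + b12*ac2 + b13*ac3
# (equation 6.90) using the normal equations (X^T X) b = X^T y.

def solve(A, y):
    """Gauss-Jordan elimination with partial pivoting for A b = y."""
    n = len(A)
    M = [row[:] + [yi] for row, yi in zip(A, y)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(n):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [a - f * b for a, b in zip(M[r], M[i])]
    return [M[i][n] / M[i][i] for i in range(n)]

def regress(rows, targets):
    """Return (intercept, b1, b2, b3) minimizing squared error."""
    X = [[1.0] + list(r) for r in rows]
    k = len(X[0])
    XtX = [[sum(x[i] * x[j] for x in X) for j in range(k)] for i in range(k)]
    Xty = [sum(x[i] * t for x, t in zip(X, targets)) for i in range(k)]
    return solve(XtX, Xty)

rows = [(0.25, 0.23, 0.35), (0.51, 0.53, 0.44), (0.54, 0.47, 0.42),
        (0.41, 0.43, 0.23), (0.77, 0.86, 0.46)]
# Targets generated from known betas (2, 1, -1, 0.5) so the fit is exact.
betas = [2.0, 1.0, -1.0, 0.5]
a1 = [betas[0] + betas[1]*r[0] + betas[2]*r[1] + betas[3]*r[2] for r in rows]
print([round(b, 6) for b in regress(rows, a1)])   # recovers the betas
```

Because the placeholder targets lie exactly on a linear model, the recovered coefficients match the generating betas; with real fish data the residuals discussed below are what remains after this fit.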
Some of the outliers (possible noisy data), therefore, were eliminated using the procedure described in the development of the rule-based system (this is described in Chapter 4). Again, these outliers correspond to the noisy data mentioned in the previous section. The regression, after eliminating the outliers, improved considerably, resulting in acceptable p-values, F-ratios, and variances. The p-value and the F-ratio of a regression analysis indicate the goodness of the model [Rice, 1988]. Furthermore, the validity of the fitted model was checked in several ways, including normal probability plots and scatter plots of residuals. These are shown in Figures 6.6 to 6.13. The normal probability plot is used to observe the distribution pattern of the residuals. If the residuals are normally distributed, the model cannot be further improved, for example, by introducing nonlinear terms. A straight line in a normal probability plot indicates that the residuals are normally distributed. It is clear from these figures that the above model resulted in approximately normally distributed residuals. The scatter plot also allows the randomness of the residuals to be observed graphically.

The coefficients of the features computed during the regression phase were implemented in an estimation model to estimate the cutting contour coefficients, as given below.

a_1 = −6.612 − 0.032 ac_1 + 0.147 ac_2 + 0.031 ac_3    (6.94)
a_2 = 13.000 + 0.111 ac_1 − 0.289 ac_2 − 0.108 ac_3    (6.95)
a_3 = −9.038 − 0.101 ac_1 + 0.182 ac_2 + 0.119 ac_3    (6.96)
a_4 = 2.194 + 0.049 ac_1 − 0.042 ac_2 − 0.028 ac_3    (6.97)

Figure 6.6: Normal Probability Plot of Residuals for Coefficient a1.
Figure 6.7: Scatter Plot of Residuals for Coefficient a1.
Figure 6.8: Normal Probability Plot of Residuals for Coefficient a2.
Figure 6.9: Scatter Plot of Residuals for Coefficient a2.
Figure 6.10: Normal Probability Plot of Residuals for Coefficient a3.
Figure 6.11: Scatter Plot of Residuals for Coefficient a3.
Figure 6.12: Normal Probability Plot of Residuals for Coefficient a4.
Figure 6.13: Scatter Plot of Residuals for Coefficient a4.

The system was tested with the test data set prepared earlier, and the polynomial coefficients were estimated. The polynomials constructed with the estimated coefficients were compared with the model cutting contour polynomial of the corresponding group to compute the error.
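Equations 6.94–6.97 can be implemented directly as the on-line estimation step: given the three measured features, compute the four contour coefficients. The feature values below are illustrative only, and the cubic form and coefficient ordering used in `contour` (a1·x³ + a2·x² + a3·x + a4) are an assumption made here for illustration; the thesis states only that the contour model has four coefficients.

```python
# Equations 6.94-6.97 implemented directly.  ac1..ac3 are the measured
# features of a fish; the returned a1..a4 are the estimated polynomial
# coefficients of its cutting contour.

def estimate_coefficients(ac1, ac2, ac3):
    a1 = -6.612 - 0.032*ac1 + 0.147*ac2 + 0.031*ac3
    a2 = 13.000 + 0.111*ac1 - 0.289*ac2 - 0.108*ac3
    a3 = -9.038 - 0.101*ac1 + 0.182*ac2 + 0.119*ac3
    a4 =  2.194 + 0.049*ac1 - 0.042*ac2 - 0.028*ac3
    return a1, a2, a3, a4

def contour(x, coeffs):
    """Evaluate the estimated contour; the cubic form is assumed."""
    a1, a2, a3, a4 = coeffs
    return a1*x**3 + a2*x**2 + a3*x + a4

coeffs = estimate_coefficients(10.0, 12.0, 8.0)   # placeholder features
print([round(c, 3) for c in coeffs])
```

The estimated contour would then be compared against the model contour of the fish's group to compute the index of mismatch discussed next.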
The errors, or indices of mismatch [Gamage and de Silva, 1991a][Gamage, 1990], were computed for both the resubstitution and leave-one-out methods, and the results are tabulated in Table 6.2. If the index of mismatch between the estimated contour and the model contour for the corresponding group is within the selected threshold level, the corresponding group number is marked in the results column; an "x" is marked otherwise. The tolerance level chosen when grouping fish was 0.0150. Therefore, if the index of mismatch for a particular sample is greater than this tolerance level, the estimate is considered unacceptable. About 72% of the test data were estimated within the specified tolerance level for the resubstitution case and 68% for the leave-one-out method. The fact that about 28% fall outside the limits is mainly due to the very nature of regression. Regression attempts to average two extreme values to minimize the squared error, which results in one common (average) contour for two extremely different contours. It is clear from Table 6.2 that all samples from group 1 are outside the specified tolerance level. The reason is that the number of data in group 1 was considerably smaller than the number of data in each of the other groups. The best estimated coefficients, therefore, tend to be biased towards the groups with larger numbers of data.

A very important feature of regression is that it provides a basis for understanding the importance of the features being used. For example, if the (standardized) parameter of a particular feature is low, that feature is rarely significant or insignificant for the estimate of the dependent variable. The opposite is also true: if the feature coefficient has a high value, the feature itself has a greater influence on the estimate.

Table 6.2: Multiple Regression Estimation Results for Resubstitution and Leave-One-Out Methods.

Sample                            Classification Result
No.   f1    f2    f3    Group     Resub.   Lve. 1 Out
1     0.25  0.23  0.35  1         x        x
2     0.51  0.53  0.44  1         x        x
3     0.54  0.47  0.42  1         x        x
4     0.41  0.43  0.23  1         x        x
5     0.52  0.54  0.47  1         x        x
6     0.77  0.86  0.46  2         2        2
7     0.17  0.38  0.54  2         2        2
8     0.53  0.54  0.65  2         2        2
9     0.53  0.53  0.47  2         2        2
10    0.82  0.87  0.42  2         2        2
11    0.49  0.47  0.53  3         x        x
12    0.70  0.69  0.39  3         x        x
13    0.61  0.61  0.34  3         3        x
14    0.55  0.40  0.27  3         3        3
15    0.42  0.31  0.17  3         3        3
16    0.66  0.53  0.38  4         4        4
17    0.57  0.56  0.43  4         4        4
18    0.43  0.46  0.32  4         4        4
19    0.40  0.49  0.37  4         4        4
20    0.88  0.85  0.63  4         4        4
21    0.53  0.56  0.78  5         5        5
22    0.35  0.27  0.31  5         5        5
23    0.85  0.73  0.39  5         5        5
24    0.34  0.32  0.32  5         5        5
25    0.32  0.28  0.27  5         5        5

6.5 Bayesian Approach

This approach was investigated in order to classify fish into the various groups. The objective of the Bayesian approach is to express the a posteriori probability in terms of the a priori probability and the class conditional probability. Consider the following probabilities:

P(C_i) = the a priori probability that a pattern belongs to class C_i.
P(x) = the probability that a pattern is x.
P(x|C_i) = the class conditional probability that the pattern is x given that it belongs to class C_i.
P(C_i|x) = the a posteriori conditional probability that the pattern's class membership is C_i given that the pattern is x.
P(C_i, x) = the joint probability that the pattern is x and that its class membership is C_i.

The Bayes relation is

P(C_i|x) = P(x|C_i) P(C_i) / P(x)    (6.98)

and for continuous-valued patterns it is

P(C_i|x) = p(x|C_i) P(C_i) / p(x)    (6.99)

where p(x|C_i) and p(x) are, respectively, the conditional probability density that the pattern is x given that it belongs to class C_i, and the probability density of pattern x. The classification is based on the values of the conditional probabilities for the individual classes. For example, if for classes C_i and C_j the conditional probabilities satisfy P(C_j|x) > P(C_i|x), then we conclude that the given set of features belongs to class C_j.
The major problem of Bayesian classification is to compute the values of p(x|C_i), i.e., to estimate the probability density function p(x|C_i). However, if the features are assumed to be normally distributed, the following expression, which is the multidimensional Gaussian density, can be used to compute p(x|C_i):

p(x|C_i) = ((2π)^(N/2) |Σ_i|^(1/2))^(−1) exp[−(1/2)(x − μ_i)^T Σ_i^(−1) (x − μ_i)]    (6.100)

where x is the N-component feature vector, μ_i is the N-component mean vector, and Σ_i is the N x N covariance matrix.

Even with the assumption that the features are normally distributed, Expression 6.100 is still computationally intensive to evaluate. In most cases, an additional assumption is made to simplify this task, as given in Anderson [Anderson, 1958, 1984]: Σ is assumed to be the same for all classes. Although these two conditions are not imposed by the original Bayesian relationship, they are necessary for any real-time application to be feasible. Now we shall consider the Bayesian classification as given in [Anderson, 1958, 1984] for a case similar to the fish classification problem. In this case, it is assumed that the distribution is multivariate normal with equal covariance matrices, though this was not the case for the fish data. These assumptions are made in order to compare the computational complexity of the Bayesian approach with the present rule-based approach. The Bayesian approach selects the class which gives the highest a posteriori conditional probability for a given feature vector, i.e., select class C_j if

P(C_j|x) > P(C_r|x)    ∀r = 1, ..., q; r ≠ j.    (6.101)

This is the same as

p(x|C_j) P(C_j) > p(x|C_r) P(C_r)    ∀r = 1, ..., q; r ≠ j    (6.102)

since

P(C_i|x) = p(x|C_i) P(C_i) / p(x).    (6.103)

Inequality 6.102 can also be expressed as

p(x|C_j) / p(x|C_r) > P(C_r) / P(C_j)    ∀r = 1, ..., q; r ≠ j.    (6.104)

The density function p(x|C_r) for the multivariate normal distribution with equal covariance matrices is given below.

p(x|C_r) = ((2π)^(N/2) |Σ|^(1/2))^(−1) exp[−(1/2)(x − μ_r)^T Σ^(−1) (x − μ_r)]    (6.105)

With this distribution function, Inequality 6.104 can be expressed as follows:

exp[−(1/2)(x − μ_j)^T Σ^(−1) (x − μ_j)] / exp[−(1/2)(x − μ_r)^T Σ^(−1) (x − μ_r)] > P(C_r)/P(C_j)    ∀r = 1, ..., q; r ≠ j.    (6.106)

Since the logarithmic function is monotonically increasing, Inequality 6.106 can be written in log terms as

−(1/2)[(x − μ_j)^T Σ^(−1) (x − μ_j) − (x − μ_r)^T Σ^(−1) (x − μ_r)] > log(P(C_r)/P(C_j))    ∀r = 1, ..., q; r ≠ j.    (6.107)

Now, the left-hand side of 6.107 can be expanded as

−(1/2)[x^T Σ^(−1) x − x^T Σ^(−1) μ_j − μ_j^T Σ^(−1) x + μ_j^T Σ^(−1) μ_j − x^T Σ^(−1) x + x^T Σ^(−1) μ_r + μ_r^T Σ^(−1) x − μ_r^T Σ^(−1) μ_r];

by simplifying and rearranging the terms, the following is obtained:

x^T Σ^(−1) (μ_j − μ_r) − (1/2)(μ_j + μ_r)^T Σ^(−1) (μ_j − μ_r)    (6.108)

This is the discriminant function for the multivariate normal case. Now, if the discriminant function (6.108) is represented as u_jr, the classification rule is: select class C_j if

u_jr > log(P(C_r)/P(C_j))    ∀r = 1, ..., q; r ≠ j.    (6.109)

The terms Σ^(−1)(μ_j − μ_r) and −(1/2)(μ_j + μ_r)^T Σ^(−1)(μ_j − μ_r) of 6.108 can be pre-computed and implemented in a classification system. For a system with three features, the first term is a 3 x 1 vector and the second term is a constant. Now, if there are five classes, the discriminant functions u_jr can be written as

u_12 = σ_012 + σ_112 ac_1 + σ_212 ac_2 + σ_312 ac_3    (6.110)
u_13 = σ_013 + σ_113 ac_1 + σ_213 ac_2 + σ_313 ac_3
...
u_45 = σ_045 + σ_145 ac_1 + σ_245 ac_2 + σ_345 ac_3    (6.111)

where the σ_kjr, k = 1, 2, 3, are the components of the vector Σ^(−1)(μ_j − μ_r) and the σ_0jr are the constants computed from −(1/2)(μ_j + μ_r)^T Σ^(−1)(μ_j − μ_r). Although there are twenty u_jr functions, only ten have to be computed because u_jr = −u_rj.
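The pre-computed discriminant of equations 6.108–6.109 can be sketched as follows for the simplest case Σ = I (identity covariance), so that Σ^(−1)(μ_j − μ_r) reduces to (μ_j − μ_r). The class means and priors below are illustrative placeholders, not the fitted fish statistics.

```python
import math

# Sketch of the linear discriminant u_jr of equation 6.108 with
# Sigma = I.  make_u pre-computes the vector and constant terms,
# exactly as described in the text.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def make_u(mu_j, mu_r):
    """Return u_jr(x) = x.(mu_j - mu_r) - 0.5*(mu_j + mu_r).(mu_j - mu_r)."""
    d = [a - b for a, b in zip(mu_j, mu_r)]
    const = -0.5 * dot([a + b for a, b in zip(mu_j, mu_r)], d)
    return lambda x: dot(x, d) + const

def classify(x, means, priors):
    """Select class j if u_jr(x) > log(P(C_r)/P(C_j)) for all r != j."""
    for j, mu_j in enumerate(means):
        if all(make_u(mu_j, mu_r)(x) > math.log(priors[r] / priors[j])
               for r, mu_r in enumerate(means) if r != j):
            return j
    return None

means = [(0.3, 0.3, 0.3), (0.7, 0.7, 0.5)]   # placeholder class means
priors = [0.5, 0.5]
print(classify((0.32, 0.28, 0.27), means, priors))   # 0: near first mean
print(classify((0.77, 0.86, 0.46), means, priors))   # 1: near second mean
```

With equal priors the log-prior thresholds are zero, and the rule simply assigns a feature vector to the nearer class mean; unequal priors shift the decision boundary, as equation 6.109 dictates.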
Once the u_jr are computed, the following rules will assign a new feature vector to the corresponding class:

C_1: if u_12 > ρ_12, u_13 > ρ_13, u_14 > ρ_14, u_15 > ρ_15
C_2: if u_21 > ρ_21, u_23 > ρ_23, u_24 > ρ_24, u_25 > ρ_25
...
C_5: if u_51 > ρ_51, u_52 > ρ_52, u_53 > ρ_53, u_54 > ρ_54

where

ρ_jr = log(P(C_r)/P(C_j)).    (6.112)

6.6 k-Nearest Neighbor Method

As one would expect from the name, this method classifies a feature vector x by assigning it the class most frequently represented among the k nearest samples [Therrien, 1989][Duda and Hart, 1973]. In other words, a decision is made by examining the groups of the k nearest neighbours and taking a vote. Consider a feature vector x in a feature space. The Euclidean distances from x to its k nearest neighbours are computed, and the decision is based on the groups of those neighbours. For example, consider the case where k is equal to 5, i.e., the 5 nearest neighbours are considered for classification. Now, for a particular feature vector x, suppose the groups of the 5 neighbours, arranged in ascending order of Euclidean distance, are as follows:

Neighbour   Group
1           1
2           1
3           1
4           2
5           4

It is then possible to conclude, by taking a vote, that the feature vector belongs to Group 1. In making the decision, the groups of closer neighbours can be given larger leverage than those of distant neighbours. This type of classification is known as a nonparametric technique because it does not require the parameters of the probability distributions of the training data.

The k-nearest neighbour algorithm was implemented and tested with the set of test data used previously. The results are shown in Table 6.3. It is clear from this table that the classification rate is about 76% for the resubstitution method and 68% for the leave-one-out method. The low classification rate is due to the inability to generalize from the training data. Furthermore, the classification is based on a "look-up table" approach; hence, it is sensitive to noisy data.
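The voting scheme just described can be sketched in a few lines. The training points below are illustrative stand-ins for the fish feature database, not actual thesis data.

```python
import math
from collections import Counter

# Minimal k-nearest-neighbour vote: find the k training points closest
# to x in Euclidean distance and return the most common group.

def knn_classify(x, data, k=5):
    """data is a list of (feature_vector, group) pairs."""
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    neighbours = sorted(data, key=lambda d: dist(x, d[0]))[:k]
    votes = Counter(group for _, group in neighbours)
    return votes.most_common(1)[0][0]

data = [((0.25, 0.23, 0.35), 1), ((0.41, 0.43, 0.23), 1),
        ((0.52, 0.54, 0.47), 1), ((0.77, 0.86, 0.46), 2),
        ((0.82, 0.87, 0.42), 2), ((0.42, 0.31, 0.17), 3)]
print(knn_classify((0.40, 0.40, 0.30), data, k=3))   # votes 1,1,3 -> group 1
```

Note that `data` must be scanned in full for every query, which is precisely the on-line cost discussed below; distance-weighted votes would give closer neighbours the larger leverage mentioned in the text.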
The other drawback of this method is that it needs to access the whole database on-line in order to obtain the neighbours of a given feature vector. This may considerably slow down the processing, particularly with a large database.

Table 6.3: k-Nearest Neighbour Classification Results for Resubstitution and Leave-One-Out Methods.

Sample                            Classification Result
No.   f1    f2    f3    Group     Resub.   Lve. 1 Out
1     0.25  0.23  0.35  1         1        1
2     0.51  0.53  0.44  1         1        x
3     0.54  0.47  0.42  1         x        x
4     0.41  0.43  0.23  1         x        x
5     0.52  0.54  0.47  1         1        1
6     0.77  0.86  0.46  2         2        2
7     0.17  0.38  0.54  2         2        2
8     0.53  0.54  0.65  2         2        2
9     0.53  0.53  0.47  2         x        x
10    0.82  0.87  0.42  2         2        2
11    0.49  0.47  0.53  3         x        x
12    0.70  0.69  0.39  3         3        3
13    0.61  0.61  0.34  3         3        3
14    0.55  0.40  0.27  3         3        3
15    0.42  0.31  0.17  3         3        3
16    0.66  0.53  0.38  4         4        4
17    0.57  0.56  0.43  4         x        x
18    0.43  0.46  0.32  4         x        x
19    0.40  0.49  0.37  4         4        4
20    0.88  0.85  0.63  4         4        4
21    0.53  0.56  0.78  5         5        5
22    0.35  0.27  0.31  5         5        5
23    0.85  0.73  0.39  5         5        5
24    0.34  0.32  0.32  5         5        5
25    0.32  0.28  0.27  5         5        x

Chapter 7

Comparison of Recognition Techniques

7.1 Rule-Based Method

As shown by the test results, the overall performance of the rule-based system is very good. The speed of the method is high and satisfies the real-time requirement of the application in fish processing. The real-time requirement of the system is to process two fish per second, which leaves only 500 ms for processing and control. This includes feature extraction from image analysis, classification, dimensionalization, coordinate transformation, determination of robot actuator commands, and communication. Even though the time left for classification is approximately 200 ms, the rule-based system is capable of meeting this constraint.
With 22 rules and 3 features for each rule in the present implementation, the classification time is approximately 2 ms on a PC 486 computer.

The accuracy of classification, in this context, can broadly be viewed in two ways: the percentage of correct classifications and the percentage of incorrect classifications. Apart from these two outcomes, however, there are two other possibilities: ambiguous classification and no classification. An ambiguous classification occurs when a feature vector is classified into more than one class. If the feature vector is not assigned to any class, it is considered not classified. To illustrate this, consider the following example of classifier outputs for a feature vector that belongs to class 2, for four different test cases.

Output    Case 1   Case 2   Case 3   Case 4
Class 1   0.2      0.1      0.9      0.1
Class 2   1.0      0.1      0.9      0.0
Class 3   0.0      0.2      0.1      0.1
Class 4   0.1      0.9      0.0      0.2
Class 5   0.0      0.0      0.2      0.1

Now, if a threshold level of 0.5 is selected for a classification to a particular class to be true, case 1 is clearly correctly classified, because the only output greater than 0.5 is output 2, which is the same as the class of the feature vector. Case 2 has classified the feature vector into class 4; hence, the classification is wrong. Outputs 1 and 2 are both set true in case 3, resulting in an ambiguous classification, while none of the outputs of case 4 is set above the threshold level, giving no classification.

In the cases of ambiguous and no classification, a default action (or no action) can be performed. In the case of a wrong classification, however, the system has no knowledge of its error; therefore, a wrong action is performed. In fish processing this may not be a serious matter, but in some other applications, such as hazardous environments, it may be more critical.
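The four outcome types above can be sketched as a labelling function using the 0.5 threshold from the text; the case vectors are the ones tabulated above.

```python
# Label a classifier output vector as correct / incorrect / ambiguous /
# no classification, using the 0.5 threshold discussed in the text.

def label(outputs, true_class, threshold=0.5):
    """outputs[i] is the classifier activation for class i+1."""
    above = [i + 1 for i, o in enumerate(outputs) if o > threshold]
    if len(above) > 1:
        return "ambiguous"
    if not above:
        return "no classification"
    return "correct" if above[0] == true_class else "incorrect"

# The four test cases above, each a vector of activations for classes 1-5.
cases = [[0.2, 1.0, 0.0, 0.1, 0.0],   # case 1
         [0.1, 0.1, 0.2, 0.9, 0.0],   # case 2
         [0.9, 0.9, 0.1, 0.0, 0.2],   # case 3
         [0.1, 0.0, 0.1, 0.2, 0.1]]   # case 4
print([label(c, true_class=2) for c in cases])
# ['correct', 'incorrect', 'ambiguous', 'no classification']
```

Only the "incorrect" outcome is silent to the system itself, which is why it is the critical one: the other three can trigger a default action.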
As can be seen from Tables 4.6, 6.1, and 6.3, the rule-based system gives the lowest percentage of wrong classifications for both the resubstitution and leave-one-out methods. The percentage of correct classifications of this method is also high. Ambiguous classification is not possible with this method, because the inference engine of the rule base resolves conflicts and gives only one output. The robustness of the method is also good, resulting in about an 80% classification rate for the leave-one-out test.

The learning phase of the rule-based methodology was carried out by setting up a connectivity diagram as described in Chapter 4. The methodology is somewhat similar to density estimation using Parzen windows. The advantages of the present method are: a) it uses only the significant regions of the density, resulting in efficient classification; b) it represents the high-density regions in the form of rules, which can be integrated with expert or heuristic knowledge that is already in this form; c) it allows the knowledge engineer or the designer to view the high-density regions graphically and understand the patterns or relationships in the database. This also allows a better understanding of the possible noisy data in the database. The validity of the solution is, therefore, graphically supported rather than blindly arrived at.

In order to update the rule base or learn continuously, it is necessary to collect more data, to update the connectivity diagrams of each group, and to construct new diagrams if necessary. The updated connectivity diagrams enable the engineer to add new rules, change the priorities of existing rules, and delete redundant rules.

7.1.1 Real-Time Complexity Comparison with the Bayesian Approach

A Bayesian classifier for the fish problem is given in Expressions 6.110 and 6.111 of Chapter 6.
In this system, however, it was assumed that the distributions of the features were multivariate normal with equal covariance matrices. The complexity of this system is 30 multiplications, 30 additions, 20 comparisons, and 15 logical AND operations, all in floating point. For the rule-based system with 22 rules and three features for each rule, the complexity is 132 comparisons and 66 logical AND operations.

These operations were simulated on a PC 486 computer in order to compare the speeds of the two methods. From this experiment, it was clear that the rule-based system runs about twice as fast as the Bayesian classification system.

7.2 Neural Network

The performance of the neural network classifier depends on the complexity of the network. The complexity, in this context, refers to the number of nodes and, hence, the number of interconnections in the network. To illustrate this, consider a node as shown in Figure 7.1.

Figure 7.1: Schematic Representation of a Node in the Neural Network.

As is clear from Figure 7.1, each node has to perform (n − 1) additions and n multiplications, ignoring the computations in the activation function. Here, n is the number of inputs to the node. It is therefore obvious from this example that, as the number of nodes increases, the computation overhead may become a serious obstacle for real-time applications such as head removal in fish processing. The on-line classification time for the fish contour classification case is approximately 50 ms.

The accuracy of classification of this method is better than that of the rule-based system for the resubstitution test; however, it was about 72% for the leave-one-out method. The percentage of incorrect classification is 16%, which is also high compared to the rule-based approach.

The robustness of the neural network is one of its key features, but it depends on the level of training and the number of nodes. For example, if a network is trained with a number of nodes considerably larger than the number required to partition the feature space, and if the network is overtrained to produce very sharp boundaries separating the regions of the feature space, the robustness of the network will be very poor. To illustrate this, consider the example of a 2-D feature space shown in Figure 7.2.

Figure 7.2: Schematic Representation of Patterns in 2-D Feature Space.

Now, if a network is trained with a large number of nodes, as described in the previous paragraph, to partition the feature space with boundaries as shown, a test pattern "x" will not be classified, even though one may classify it as being "a". The robustness of the fish classification network was acceptable, giving a 72% correct classification rate for the leave-one-out test.

The initial learning of the network is slow and hard because of the convergence method currently used. The method used in training the weights is the gradient descent method. In fact, most of the negative implications of the neural network methodology come from this training technique. The gradient descent method is not always guaranteed to converge to the global minimum. Instead, the solution may be trapped in a local minimum, resulting in incorrect weights. Even if it does converge to a global minimum, it may take a long winding path, resulting in a very large convergence time. A typical convergence time for the fish data was about 24 hours on a Sun SPARCstation.
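As a concrete sketch of the per-node cost discussed earlier in this section, the weighted sum below performs n multiplications and (n − 1) additions before the activation; the sigmoid activation and the input and weight values are assumptions for illustration, as the text does not specify them.

```python
import math

def node_output(inputs, weights):
    """Forward pass of one node: n multiplications and (n - 1) additions,
    followed here by a sigmoid activation (an assumed choice)."""
    total = inputs[0] * weights[0]
    for x, w in zip(inputs[1:], weights[1:]):
        total += x * w  # one multiplication and one addition per extra input
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# Illustrative three-input node:
y = node_output([0.5, -1.0, 2.0], [0.4, 0.3, 0.1])
```

For a fully connected layer, this cost is incurred once per node, which is why the computation grows quickly with the number of nodes and interconnections.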
Subsequent training can be achieved by training the weights with new data, but previous weights can be used as initial weights to improve the learning time.

7.3 Multiple Regression

The fastest of all the techniques is the multiple regression estimator, because it requires only 3 multiplications and 3 additions to estimate one of the four coefficients of the cutting contour polynomial. Therefore, the total numbers of multiplications and additions required to estimate all four coefficients are 12 and 12 respectively, as is clear from Equations 6.94, 6.95, 6.96, and 6.97. The accuracy in this case can only be viewed as the similarity, or the closeness, of the estimated contour to the actual contour. To compute the error between the estimated contour and the actual contour, the index of mismatch between the two is computed. The complete set of training data was tested initially, and the indices of mismatch between each estimated contour and the actual contour were computed. About 65% of the original data was estimated within the specified tolerance level. The reason for this low success rate is inherent to linear regression. It attempts to fit the best hyperplane to the independent variables (features) in the training data to estimate the dependent variable. For example, in the present case, there are three independent variables, (nose-eye length)/(length), (nose-gill length)/(length), and (width)/(length), to estimate the dependent variable, the cutting contour polynomial coefficient. Regression, therefore, fits the best hyperplane in 4-D space to estimate each coefficient of the polynomial function by minimizing the squared errors in each dimension. If a particular group has a considerably larger number of data than some other group, the estimated hyperplane will be biased towards the larger group, resulting in a large error for members of the smaller group.
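The operation count quoted at the start of this section can be sketched as follows; the function name and the β coefficient values are illustrative placeholders, not the fitted values of Equations 6.94 to 6.97.

```python
def estimate_contour_coefficients(features, beta):
    """Estimate the four cutting-contour polynomial coefficients from the
    three non-dimensional features: 3 multiplications and 3 additions per
    coefficient, hence 12 of each in total, as counted in the text.

    features: (nose-eye/length, nose-gill/length, width/length)
    beta: four rows of (b0, b1, b2, b3) regression coefficients.
    """
    f1, f2, f3 = features
    return [b0 + b1 * f1 + b2 * f2 + b3 * f3 for b0, b1, b2, b3 in beta]

# Illustrative (not fitted) coefficient rows:
beta = [(0.1, 0.5, -0.2, 0.3),
        (0.0, 1.0, 0.0, 0.0),
        (0.2, 0.0, 0.4, -0.1),
        (-0.3, 0.2, 0.1, 0.5)]
coeffs = estimate_contour_coefficients((0.12, 0.30, 0.25), beta)
```

Because each coefficient is a fixed linear combination of the three features, the whole estimate is a handful of floating-point operations, which is what makes this the fastest of the four techniques.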
This bias became very clear when the test set of data was tested, resulting in unacceptable indices of mismatch for the members of group 1. Here, the indices of mismatch were computed between the test sample and the model contour of the group of that sample. This comparison was essentially carried out to compare the regression model with the other techniques.

There is not much difference between the results for the two test methods. Even if two or three samples were left out and tested, the results were very consistent. The overall accuracy of the method for resubstitution was about 72%, and for leave-one-out it was 68%. Unlike all the other classification techniques, the regression model was able to come up with a reasonably good contour even when the system was presented with a completely new set of data.

The initial learning of the coefficients of the model is fairly trivial, and further improvements can be obtained with an increased number of data.

7.4 k-Nearest Neighbour Method

The speed of the k-nearest neighbour algorithm is directly dependent on the number of data in the database and the number of features for each data item. As the number of data items and the number of features grow larger, the speed of the method rapidly decreases. The algorithm has to search for the nearest neighbours by computing the Euclidean distance between each of the features of the test sample and the corresponding features of each of the data items in the database. The complexity of the algorithm therefore increases rapidly with the number of data items and the number of features per data item, resulting in poor speed performance. The on-line inference time for the fish classification case is about 55 ms.

The accuracy of the algorithm, on the other hand, increases with the number of data in the database.
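The brute-force search described above can be sketched as follows; the tiny database of (feature vector, class) pairs is invented for illustration, and squared Euclidean distance is used since ranking is unaffected by the square root.

```python
def knn_classify(sample, database, k=3):
    """Brute-force k-nearest-neighbour classification: scan the whole
    database, rank by squared Euclidean distance, and take the majority
    class among the k nearest items."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(database, key=lambda item: sq_dist(sample, item[0]))[:k]
    votes = {}
    for _, cls in nearest:
        votes[cls] = votes.get(cls, 0) + 1
    return max(votes, key=votes.get)

# Invented database of (feature vector, class) pairs:
db = [((0.30, 0.55), 1), ((0.32, 0.50), 1), ((0.60, 0.20), 2),
      ((0.62, 0.22), 2), ((0.58, 0.25), 2)]
cls = knn_classify((0.31, 0.52), db, k=3)
```

The full scan over the database on every query is exactly why the inference time grows with the number of stored items and features.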
For the test set of data given in Table 6.3, the algorithm produced only about 76% correct classification for the resubstitution method, and 68% for the leave-one-out method. The major drawback of the method is its inability to generalize from the training set of data. To illustrate this, consider the single-feature (1-D) example shown in Figure 7.3.

Now, if the unknown data point to be classified is x1, and k is chosen to be 3, the algorithm will determine the neighbours to be 2, 2, and 1. This will result in the class of the unknown data point x1 being 2, although the actual class may be 1. On the other hand, if a smoothed version of the density is used for classification, as in the rule-based method, the probability of x1 being in class 1 will be larger; hence, the classification will be class 1.

There is no initial learning phase for this algorithm, and new data can easily be added to the database with minimum effort, making even on-line learning possible.

Figure 7.3: Density of Patterns for Two-Class, Single-Feature Classification.

7.5 Summary of Classification Results

Table 7.1: Summary of Results.

Classification      Test Method   Correct   Incorrect   Not Classified
Rule-Based Method   Resub.          88%        8%            4%
                    Lve. 1 Out      80%        8%           12%
Neural Net Method   Resub.          92%        4%            4%
                    Lve. 1 Out      72%       16%           12%
KNN Method          Resub.          76%       24%            0%
                    Lve. 1 Out      68%       32%            0%
Multiple Regress.   Resub.          72%       28%            0%
                    Lve. 1 Out      68%       32%            0%

Chapter 8

Pattern Recognition for Estimation of Cutter Position

In many fish processing applications, machines employing fixed average settings are used to perform a series of operations on products which contain substantial variations.
Since such an approach leads to product wastage, the development of advanced techniques such as computer vision and adaptive pattern recognition, in order to reduce wastage and maximize yield and revenue, is important. A good example of such a process is the head removal stage in a fish canning line. By manually butchering a set of fish, and examining them in consultation with fish processing experts, the optimal location of the cutter has been established as the point of intersection of the line along the back of the roof of the mouth and the center plane of the head. Once the location of this point is established, a cutter can be moved to this position to remove the head of a fish [Gamage et al., 1993].

The direct use of image analysis to establish the location of the cutting point is not possible, because the point itself is not a physical attribute that is externally visible. This chapter describes how a pattern recognition system can be employed to estimate this crucial reference point, with a computer image processing system at the front end. The resulting information is directly used to drive the cutter in the head removal process.

A multiple regression model and a neural network were developed to estimate the reference point. The results of the two methods are also presented.

8.1 Off-line Data Acquisition and Analysis

About 500 heads of Sockeye and Pink salmon were collected at B.C. Packers and brought to the Industrial Automation Laboratory to generate the database for data analysis. Images of each fish head were captured to a vision workstation (SD512) to record the important features of the head that would be useful for estimation of the point.
Various measurements, such as eye-to-nose, eye-to-flap, eye-to-gill, eye to the center of the pectoral fin, eye to the edge of the pectoral fin, width at flap position, width at gill position, and eye to the reference point, were taken with the help of the vision system. These features are illustrated in Figure 8.1.

1 Nose
2 Eye
3 Flap
4,5 Flap Width
6 Gill
7,8 Gill Width
9 Fin Top
10 Fin Center
P Point of interest

Figure 8.1: Illustration of the Features of a Fish Head.

The data were then transferred into a database in a PC/486 computer to analyze and examine the relationships between the reference point and the other features. A statistical package (SYSTAT) was used for this purpose. Various statistical parameters, including covariances and correlations, were used to examine the dependence of the measurement of the point on the other measurements, and to compute the interdependence of the features. If, on the one hand, the correlation between a feature and the point measurement is strong (i.e., close to 1), the feature itself is a good representation of the point of interest. On the other hand, if two features are strongly correlated, then one of the features is redundant. Therefore, correlation was used to include the important features and to eliminate the redundant features from the model of the estimate.

8.2 Development of an Estimation Model

Since the problem is the estimation of a point in a real-time industrial application, a multiple regression model was initially chosen because of its speed and robustness. As mentioned in Chapter 6, multiple regression is an extension of simple regression, in that there are several independent variables, compared to the one independent variable of simple regression, to estimate a dependent variable. In the present case, the point in the head is the dependent variable and is estimated using some dimensional features of the fish.
In fact, these dimensional features are the independent variables in the regression model. Therefore, the following regression model was formed to estimate the point of interest:

p̂ = β0 + β1 f1 + β2 f2 + ... + βn fn    (8.113)

where p̂ is the estimated point, f1, ..., fn are the dimensional features, and the βs are the coefficients of the features.

After analyzing the correlation coefficients among the features, and between the features and the reference point, it was found that some of the features are redundant and some are insignificant. Therefore, the above model was reduced to the following:

p̂x = βx0 + βx1(eye-to-fin edge) + βx2(eye-to-gill)    (8.114)

p̂y = βy0 + βy1(eye-to-fin edge) + βy2(eye-to-gill)    (8.115)

The coefficients of the features computed during the regression phase were implemented in an estimation model to estimate the x and y co-ordinates of the point P as given above. The origin of the co-ordinate system was fixed at the position of the eye. The system was tested with a set of fish data, and the errors p̂x − px and p̂y − py were computed. The average error in both the x and y directions is about ±2.5 mm. This is a very small percentage (4%) in the x direction, but a large percentage (60%) in the y direction, because the estimate of y itself is around 4.0 mm. Therefore, only the distance from the eye to the point in the x direction is considered for implementation. This distance is approximately the same as the distance from the eye to the point along the center line of the fish. The following is the actual model of the estimate, while Table 8.1 shows a sample of the training data and the results of the estimate:

p̂ = 7.582 + 0.192(eye-to-fin edge) + 0.43(eye-to-gill)    (8.116)

Table 8.1: Sample Test Results of Estimate Using Multiple Regression.

Fish No.  Eye-Gill  Eye-Fin  Eye-Point  Eye-Point   Error Abs
            (mm)     (mm)    (Actual)   (Estimate)    (mm)
                               (mm)       (mm)
 1         87.11    86.24     60.64      61.60        0.959
 2         89.72    87.08     66.70      62.88        3.819
 3         84.84    84.25     55.69      60.24        4.549
 4         88.17    85.97     56.61      62.00        5.392
 5         77.72    78.11     52.46      56.00        3.539
 6         80.04    79.36     56.60      57.24        0.637
 7         84.52    79.28     53.43      59.15        5.716
 8         78.06    77.11     52.26      55.95        3.689
 9         76.66    65.31     49.41      53.09        3.672
10         78.53    77.66     56.98      56.26        0.720
11         78.15    75.63     56.33      55.71        0.619
12         72.73    75.32     51.77      53.31        1.548
13         73.65    74.82     48.08      53.61        5.538
14         69.69    67.57     47.52      50.52        2.995
15         78.15    77.42     53.66      56.05        2.392
16         75.97    76.39     52.14      54.91        2.768
17         81.59    81.25     56.96      58.27        1.300
18         83.76    82.06     58.36      59.35        0.991
19         91.53    85.42     65.23      63.34        1.896
20         84.56    82.76     59.55      59.83        0.278

The validity of the model was checked in several ways, including normal probability plots and scatter plots of residuals. The normal probability plot was used to observe the distribution pattern of the residuals. If the residuals are normally distributed, the model cannot be further improved. This was true for the above model. The scatter plot also allows the random nature of the residuals to be observed graphically. Figures 8.2 and 8.3 show the normal probability plot and the scatter plot of residuals. As is clear from these plots, the residuals are random and normally distributed, supporting the validity of the model.

Figure 8.2: Normal Probability Plot of Residuals of Point Estimate Using a Neural Network.

Figure 8.3: Scatter Plot of Residuals of Point Estimate.

A neural network was also trained to estimate the distance to the point P. The same features that were chosen for the multiple regression were used for the neural network. The results of this method are shown in Table 8.2.

Table 8.2: Sample Test Results of Estimate Using a Neural Network.

Fish No.  Eye-Gill  Eye-Fin  Eye-Point  Eye-Point   Error Abs
            (mm)     (mm)    (Actual)   (Estimate)    (mm)
                               (mm)       (mm)
 1         87.11    86.24     60.64      61.60        0.959
 2         89.72    87.08     66.70      64.89        1.806
 3         84.84    84.25     55.69      58.98        3.289
 4         88.17    85.97     56.61      63.07        6.464
 5         77.72    78.11     52.46      55.69        3.232
 6         80.04    79.36     56.60      56.75        0.152
 7         84.52    79.28     53.43      58.61        5.176
 8         78.06    77.11     52.26      55.98        3.717
 9         76.66    65.31     49.41      52.63        3.224
10         78.53    77.66     56.98      56.26        0.720
11         78.15    75.63     56.33      56.18        0.153
12         72.73    75.32     51.77      56.06        4.291
13         73.65    74.82     48.08      52.69        4.610
14         69.69    67.57     47.52      48.90        1.378
15         78.15    77.42     53.66      56.01        2.343
16         75.97    76.39     52.14      54.70        2.560
17         81.59    81.25     56.96      57.28        0.325
18         83.76    82.06     58.36      58.26        0.099
19         91.53    85.42     65.23      65.96        0.730
20         84.56    82.76     59.55      58.77        0.783

The average absolute error of the neural network estimator is about 2.1 mm. The accuracy of this method is better than that of the multiple regression method; however, the real-time computation will be more intensive than the regression. Models were also developed to estimate the point for each species separately, rather than all types being represented in one model.
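Equation 8.116 can be exercised directly; the function name below is a hypothetical label, but the coefficients are the fitted values from the text, and the first fish of Table 8.1 (eye-to-gill 87.11 mm, eye-to-fin 86.24 mm) reproduces the tabulated estimate.

```python
def estimate_cut_point(eye_to_fin_edge, eye_to_gill):
    """Eye-to-cutting-point distance (mm) along the center line of the
    fish, using the fitted coefficients of Equation 8.116."""
    return 7.582 + 0.192 * eye_to_fin_edge + 0.43 * eye_to_gill

# Fish 1 of Table 8.1: eye-to-fin 86.24 mm, eye-to-gill 87.11 mm.
p = estimate_cut_point(86.24, 87.11)  # about 61.60 mm, as tabulated
```

Since the estimator is a fixed linear combination of two measured features, it costs only two multiplications and two additions per fish, which is what makes it attractive for the real-time application.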
However, no improvements were observed in the errors of estimate with this per-species separation.

In the practical automated fish processing system, the front-end image processing system extracts the required features from the fish images that are captured on-line from a CCD camera. These features are then passed to the reference point estimator (the regression or neural network estimator) which, in turn, estimates the reference point for head removal using the features.

Chapter 9

Laboratory Implementation

As mentioned in Chapter 3, existing fish butchering machines waste about 10% to 15% of the useful meat. About 8% to 10% of this meat can be recovered by accurately placing a special double angular ("V-shaped") cutter at the gill position and removing the head. In order to recover the rest of the meat, however, it is necessary to perform a contour cut. Section 9.1 describes a "V-cut" butchering machine that is being developed in the Industrial Automation Laboratory of the University of British Columbia [de Silva, 1990] [de Silva, Riahi, and Gamage, 1991]. Section 9.2 gives a detailed description of the development and the implementation of a prototype contour cutting system, while Section 9.3 describes the automatic rule generation scheme for this system. Sections 9.4 and 9.5 respectively describe the cutter technology and the gripper considerations.

9.1 "V-cut" Butchering Machine

After manually butchering a large number of fish in consultation with fish cutting experts, it was established that most of the meat that is wasted in the head can be recovered by butchering fish according to the angles shown in Figure 5.7. There are two important angles (θv and θa), as shown in this figure. The angle θv represents the angle of the V-cut, with associated recovery of useful meat within the head region. This angle is fixed and is achieved by a double angular cutter. The cutting angle θa of an angle cut is chosen to recover the meat in the dorsal side of the head while eliminating the pectoral fin. Statistics show (see Table 5.2) that the optimum values for the angles θv and θa are approximately 35° and 60° respectively.

In the operation of this machine, each fish on the conveyor is gripped, and the head region is imaged in order to compute the gill position. Low-to-medium-level image analysis techniques are employed for this purpose. Once the gill position is determined, the double angular cutter assembly is positioned at the gill position such that the head is removed from the body just behind the gill (collarbone). Note that the fish on the conveyor is placed at a 30° angle to the motion of the conveyor in order to remove the pectoral fin while recovering most of the meat in the dorsal side of the head. Also, the thickness of the fish is measured in order to align the center of the fish with the center of the double angular cutter.

The cutter assembly is driven by a DC motor through a lead screw. The height of the fish (to align with the cutter) is adjusted by an elevating platform, which is also driven by a DC motor. Both motors are controlled by a GAUL two-axis motion controller. The cutter also has two rotary blades. These are arranged to form the angle θv, and are driven by two 3-phase motors. In order to reduce the weight of the cutter assembly, the motors are mounted separately and the blades are driven through flexible shafts.

9.2 Contour Cutting System

The rule-based methodology was used for on-line generation of cutting contours in a head removal operation for Pacific salmon [Gamage and de Silva, 1992]. The integrated system for contour generation consists of a CCD camera, a commercial image processing system, and a knowledge base computer. The model base and the rule base are located in the knowledge base computer. During operation, a fish is imaged and the features are computed by the image analyzer.
The feature values are then transmitted to the knowledge base computer, and during the rule matching process a rule is fired within the rule base and a model is matched from the model base. The model contains the required cutting contour, which is then transmitted to the controller of the fish cutting robot. Along with the contour, the length and the width of the fish are also transmitted in order to dimensionalize the non-dimensional model contour. This dimensionalization is carried out in the robot controller in the present implementation, since only the model number (not the contour itself) is sent to the robot controller, in order to save communication time. The model base is stored in the robot controller, and upon receiving the model number, the length, and the width of the fish, a program in the controller dimensionalizes the contour retrieved from the model base and drives the robot according to the trajectory generated for the corresponding model. A cutter attached to the end effector of the robot removes the head along a path described by the model contour. A suitable cutter to perform the contour cutting has yet to be developed; Section 9.4 describes the proposed cutters along with their advantages and disadvantages.

Approximately 200 recorded images were processed and analyzed by the SD512 vision workstation in order to select possible features, which were further analyzed by a computer (HP Apollo DN4500 and PC386) for model development. The specifications and system configuration of the SD512 vision station, along with its vision functions and application development, are described in Appendix C. The rules for each model are developed by setting up a connectivity diagram. An automatic method for rule generation is being developed, and it is described in Section 9.3.

A laboratory demonstration to verify the operation of the knowledge-based contour generation system, and the accuracy of the trajectory of the robot (SCORBOT-ER VII) with respect to a determined contour, was performed. A block diagram of the integrated system is shown in Figure 9.1. The specifications of the robot and robot controller, and programming considerations, are given in Appendix B.

An image obtained by the CCD camera is sent to an image buffer of the SD512 vision computer. The image analysis program on this computer reads an image from the buffer and computes the geometric features that are needed for the knowledge-based system. This program utilizes the C library and the image processing routines library in its computation of these features. In the present implementation, these features are the nose-to-eye length, the nose-to-gill length, the length, and the width of the fish. The features are then communicated to the PC486 computer through one of the serial ports of the vision workstation. The serial communication has been established using three wires and an XON/XOFF handshake. The serial communication software running on the PC486 reads the features sent by the SD512 and transfers them to the working memory of the OPS83 rule-based program. OPS83 is a programming language for developing rule-based systems, and an overview of it is presented in Appendix A. The feature values in the working memory provide the context for the rule base, which will fire a rule if satisfied. Otherwise, a default action is followed. The inference of a rule is the cutting contour model, and it is sent to the SD512 computer through the serial link, to be transferred to the SCORBOT-ER VII robot controller together with the length and the width information of the fish. The interface circuit between the SD512 and the robot controller, and the communication and synchronization software, are described in Appendix D. An image captured by the vision station is shown in Figure 9.2. A typical inference of the model-based contour generation system is shown in Figure 9.3.
The inferred contour superimposed on the corresponding image of a salmon is shown in Figure 9.4.

Figure 9.1: A Block Diagram of the Integrated System.

Figure 9.2: An Image Captured by the Vision System.

Figure 9.3: A Typical Output of the Model-Based Contour Generation.

Figure 9.4: An Image of a Salmon with the Cutting Contour Superimposed.

9.3 Automatic Rule Generation

A two-stage automatic rule generation process, which is carried out on a per-class basis, is readily executed using a set of training data stored in the database. During the first stage a set of potential rules (P-Rules) is established, and subsequently, during the second stage, the P-Rules are merged into a smaller number of final rules (F-Rules). With reference to the analytical basis for rule generation, recall that the probability density p̂(x|Cj) (or p̂(x|Cj)P(Cj)) is estimated within fixed-size hypercubes in the attribute space. We can consider the attribute space to comprise many adjacent hypercubes, each constituting a potential rule (P-Rule) defined by the bounding values of the hypercube dimensions. In addition, each fish in the training set will be contained within one or more of these hypercubes. In practice, however, multiple fish contribute to a single P-Rule, thereby decreasing the number of P-Rules and increasing the density of P-Rules containing multiple fish. The initial stage of generating P-Rules involves simply taking each fish in the training data set and either initiating a new P-Rule or incrementing the density of an existing P-Rule. The P-Rules are maintained in an ordered list according to density, and a density threshold is established in order to deactivate low-priority P-Rules. A graphical interface was developed to help understand the effect of the threshold level. Figures 9.5 and 9.6 show the P-Rules at various threshold levels.

In the second stage of rule generation, neighbouring P-Rules are merged to form the reduced set of F-Rules. This procedure starts by initializing a new F-Rule with the highest-density P-Rule. Next, the P-Rules are considered, in the order of their density, for merger with existing F-Rules or for the initiation of new F-Rules. The neighbouring hypercubes of the P-Rule having the next highest density are examined in order to determine the action to be taken. If one of the neighbouring hypercubes corresponds to an existing F-Rule, the corresponding boundary of the F-Rule is extended to include the current P-Rule, and the current P-Rule is deleted. In addition, any remaining P-Rules, including deactivated rules, that fall within the new F-Rule boundary are included in the F-Rule and are deleted. If the current P-Rule does not neighbour an existing F-Rule, the P-Rule remains on the list in order to be reconsidered. If no remaining P-Rules can contribute to an existing F-Rule, the highest-density P-Rule is used to initiate a new F-Rule, and the process is repeated.

The process described continues until all P-Rules have been incorporated into F-Rules. During this process, the number of training data and the number of hypercubes contributing to each F-Rule are recorded in order to set the priority of the F-Rule. The result of this process is a reduced set of F-Rules describing each class of cutting contour.
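The first stage of this process can be sketched as follows; the function name, cube size, threshold, and sample feature vectors are invented for illustration, and only the P-Rule stage (not the F-Rule merging) is shown.

```python
def build_p_rules(samples, cube_size, threshold):
    """Stage one of the automatic rule generation: assign each training
    sample to a fixed-size hypercube in attribute space; the count of
    samples per cube is the density of the corresponding potential rule
    (P-Rule).  Cubes with density below `threshold` are deactivated."""
    density = {}
    for features in samples:
        cube = tuple(int(f // cube_size) for f in features)
        density[cube] = density.get(cube, 0) + 1
    active = {cube: d for cube, d in density.items() if d >= threshold}
    # Highest-density P-Rules are considered first in the merging stage.
    return sorted(active.items(), key=lambda item: -item[1])

# Invented 2-D training samples; three fall in the same hypercube:
samples = [(0.12, 0.31), (0.13, 0.33), (0.11, 0.36),
           (0.52, 0.71), (0.90, 0.10)]
p_rules = build_p_rules(samples, cube_size=0.1, threshold=2)
```

Each surviving entry pairs a hypercube index (which encodes the bounding attribute values of the rule) with its density, giving the ordered list from which the F-Rule merging stage would proceed.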
Figure 9.5: P-Rules with Density 9 (above), and Density 6 (below).

Figure 9.6: P-Rules with Density 4 (above), and Density 2 (below).

9.4 Cutter Technology

Once the cutting contour for head removal is determined by the knowledge-based system, it has to be transformed into the necessary cutting signals for a robotic cutter. The robotic cutter must be able to cut the fish according to the contour generated, within the specified time limits. The cutter can be attached as the end effector of a robot, which can provide the necessary trajectory in order to follow the required path. The design of a cutter is a non-trivial task because of the cutting geometry and the material properties of fish. In particular, the cutter will encounter different types of material within a fish, as well as in different batches. For example, it has to cut flesh as well as bones within a fish, and the fish may be frozen, fresh, cultured, or wild salmon. With these constraints in mind, the feasibility of the following types of cutter was investigated:

1. Water-jet Cutter
2. Laser Cutter
3. Wire Cutter
4. Rotary Blade Cutter
5. Reciprocating Blade Cutter

9.4.1 Water-jet Cutter

Although a water-jet cutter can provide the necessary cutting power, speed, and accuracy, it is considerably expensive for this application; prices can range from $80,000 to $150,000, depending on the pressure. A low-pressure water-jet cutter uses abrasives to improve the cutting ability. This type of cutter is, therefore, not suitable for fish processing. On the other hand, a high-pressure cutter may not be economically viable for head removal of fish.
However, the prices of high-pressure water-jet cutters are falling, and this may lead to an economically feasible solution in time. Also, water-jet cutters are currently used in the fish processing industry, which demonstrates the technical feasibility of this type of application.

9.4.2 Laser Cutter

Lasers are available for a wide variety of applications. The power output, one of the key features determining cutting capacity, ranges from a low of 15 W to a high of 5 kW. Low-power lasers cannot penetrate 70 mm thick flesh, which is about the maximum thickness of the fish processed by the present machine, to remove the head in contour cutting. Although medium-power lasers (500 W to 1 kW) can provide such penetration, the speed of the operation will not be satisfactory. The only type that can provide deep cutting at the required speed is the high-power CO2 laser, with a power output of about 5 kW and a cost of about $300,000. However, the newer Nd:YAG lasers can generate beams with 3 kW power output, which may be adequate for the present purpose. These lasers are less expensive, costing about $80,000 to $100,000. Even at these prices, the use of lasers for head removal will not be economically feasible. Furthermore, laser cutting has the added disadvantage that it burns the cut surface of the fish.

9.4.3 Wire-Cutter

Wire-cutting may be another possible solution to the head removal problem. Cutting a fish according to the generated contour, together with the necessary "V" angle shown in Figure 9.7, would be a tedious task using a wire-cutter. However, a two-stage cut may ease the problem to some extent.

Figure 9.7: Cutting Geometry of Fish in Contour Cutting for Recovery Improvement.

Consider the fish as shown in Figure 9.8.
To remove most of the material in the head, a cut, as shown by line 1 in this figure, can be performed prior to the contour cutting. The second stage of the cut can then be used to achieve the required contour pattern, using a device such as that shown in Figure 9.9. Here, the wires are arranged so that they form the necessary angle for the "V-cut". The cutter can be attached to an end effector of a robot. Also, the whole assembly may have to be oscillated in a vertical plane to achieve better penetration. The technical feasibility of this type of cutter is yet to be proven by laboratory experiments.

Figure 9.8: Illustration of Two Stage Cutting in Head Removal.

Figure 9.9: Proposed Wire-Cutting Assembly for Robotic Contour Cutting.

9.4.4 Rotary Blade Cutter

Two small rotating blades can be arranged to form the necessary angle for the "V-cut", along with two more degrees of freedom to obtain the depth of the cut and the curvature of the contour pattern. This arrangement is shown in Figure 9.10. The blades are free to move in the z direction to achieve the required depth, and can also rotate about the same axis to follow the contour curvature. The blades have to be small enough to provide an adequate approximation to the curvature of the actual contour pattern. As in the previous case, this assembly will be suitable only for trimming operations, to obtain the final contour cut after the major part of the head is removed by an initial cut. The technical feasibility of this type of cutter also remains to be established.

Figure 9.10: Proposed Rotary-Blade Cutter Assembly for Robotic Contour Cutting.

9.4.5 Reciprocating Blade Cutter

Two reciprocating blades can be arranged as shown in Figure 9.11 to perform the contour cutting. Reciprocating saws are commonly available and can be modified to suit the present requirement.
In order to reduce the weight of the cutting assembly, the motors of the saws can be installed separately at a remote location, and the rotational motion can be transmitted through flexible shafts. Although the gear that converts the rotary motion to the reciprocating motion has to be mounted on the cutter assembly, this will not present a serious problem, as the weight of the gear together with the saw will be about 1 kg. If this cutter is technically viable, it will be the most economical solution to the contour cutting problem.

Figure 9.11: Proposed Reciprocating-Blade Cutter Assembly for Robotic Contour Cutting.

9.4.6 Comparison and Recommendations

Although both the high-pressure water-jet cutter and the high-power laser cutter are technically feasible, they are considerably expensive for the present application; in other words, they are not economically feasible from the industry's point of view. The costs of these devices are, however, decreasing, and eventually they may become economically feasible.

The technical feasibility of the other cutters is not yet known, but their costs are far lower than those of the water-jet or laser cutters. The continuing work of the present research, therefore, is to investigate the technical feasibility of these cutters. In particular, the reciprocating blade cutter will be experimented with first, because of its simplicity and the availability of its components.

9.5 Gripper Considerations

Proper holding of fish during the head removal process is important in order to achieve the desired cut. For example, if the fish is not properly held, it may be dragged towards the cutter and/or it may not maintain the optimal orientation with respect to the cutter, resulting in a poor cut.
Designing an appropriate gripper is, therefore, crucial, and this section describes two grippers that are being developed in the Industrial Automation Laboratory.

The first gripper, shown in Figure 9.12, is an active gripper, meaning that the holding rollers of the gripper rotate in synchronism with the conveyor. In particular, the linear speed of the rollers at the holding position is the same as the conveyor speed. With this mechanism, a firm grip can be achieved without obstructing the motion of the fish on the conveyor.

Figure 9.12: Schematic Diagram of the Synchronous Gripper.

In this gripper mechanism, the driving chain, which is linked to the main driving motor of the conveyor, drives all of the gear wheels, which in turn drive the independent rollers. The outer surfaces of the plastic rollers are grooved in order to increase grip. Also, the rollers are flexible to a certain extent, which allows them to conform to the shape of an arbitrary fish. The springs attached to each lever are adjusted so that they apply sufficient force to hold the fish firmly. Note that the fish are conveyed on pans, which constrain the rotation of the fish about a vertical axis.

The second gripper, which is being designed, is a simple hold-down mechanism to hold the fish while the head is being removed. It is primarily designed to operate with an intermittent (run, stop, run, ...) conveyor. Figure 9.13 shows a schematic diagram of this gripper.

In this gripper mechanism, the solenoid is synchronized to the intermittent motion of the conveyor. In particular, during the moving period the solenoid is activated (moved up) and, therefore, the fish is free to move to the next position.
During the stationary period, the solenoid is not activated, and the spring forces the plastic holder down so that the fish is properly gripped. There are three such holders, shaped and placed so as to achieve the optimum grip.

Figure 9.13: Schematic Diagram of the Intermittent Gripper.

9.5.1 Comparison and Recommendations

As mentioned above, the first gripper is an active gripper and, therefore, is best suited to a continuous-motion conveyor. Although it may well work with an intermittent conveyor, it is not recommended for such a system because of its high cost and moderate complexity. The second gripper is recommended for an intermittent conveyor because of its simplicity and low cost; it is, however, suited only to that type of conveyor (Figure 9.13).

One issue concerning gripping is that the initial gripping may change the position and/or the angle of the fish with respect to the camera (image) coordinates. The latter problem is minimized by introducing holding pans; the position problem, however, still remains. One way to approach this problem is to extend the gripping mechanism up to the camera position (as shown in Figure 9.12) so that the fish is gripped prior to imaging. Although the nose, the eye, and the gill are visible to the camera, so that the nose-to-eye and nose-to-gill lengths can be computed with this arrangement, the length and the width cannot be computed because the whole fish is not visible. In order to overcome this problem, it may be necessary to introduce a secondary camera or an alternative sensing mechanism. Further consideration of this problem is, however, beyond the scope of this thesis.

Chapter 10

Conclusions and Future Work

10.1 Conclusions

The developed rule-based method combines a production system with the techniques of probability and statistics.
Production systems, on the one hand, are inherently inefficient when manipulating numerical data but extremely efficient in logical deduction. Statistical systems, on the other hand, have well-established techniques for manipulating knowledge available in numerical form, but are poor at logical deduction. The methodology developed in the present research provides a method for expressing statistical knowledge in the form of production rules. These rules can be integrated with expert and heuristic knowledge, which are readily available in rule form. The integrated knowledge-based system provides more efficient and accurate classification than a statistical system or an expert system alone.

In the development of off-line knowledge, an approach similar to density estimation using Parzen windows is followed. The generation of rules is based on a priori probabilities, as in the Bayesian method; unlike Bayesian inferencing, however, only the regions with significant a priori probabilities are used. Also, in most Bayesian classification systems the a priori probabilities are determined by a parametric method. In the present approach, however, the connectivities among features are used to determine these probabilities. This is much simpler, and it allows for the examination of the significance of the features being used.

On-line inferencing of the rule-based system is computationally efficient because only the significant regions of the a priori probability density function are used to build the rule base. Also, the on-line computational complexity is linearly proportional to the number of features.

The thesis presents an extensive error analysis of the rule-based methodology, and gives a comparison of the rule-based error to the Bayesian error.
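The square-kernel (Parzen-window) density estimate referred to above can be illustrated with a short sketch; the function name and the example values are illustrative, not the thesis code. The estimate at a point is the fraction of training samples falling inside the hypercube of side h centred at that point, divided by the hypercube volume h^d.

```python
def square_kernel_density(x, samples, h):
    """Parzen-window estimate with a square (uniform) kernel: the fraction
    of samples inside the hypercube of side h centred at x, divided by the
    hypercube volume h**d."""
    d = len(x)
    inside = sum(
        all(abs(xk - sk) <= h / 2 for xk, sk in zip(x, s)) for s in samples
    )
    return inside / (len(samples) * h ** d)
```

Regions where this estimate is high correspond to the hypercubes retained as P-Rules; everything below the density threshold is deactivated.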
This thesis also discusses several pattern recognition techniques and compares them with the rule-based method. The comparison is achieved by applying these techniques and methodologies to generate cutting contours for head removal in fish processing.

The neural network method produced good results with resubstitution and was adaptable to small variations in the feature space, but it was not acceptably adaptive to new data. The performance of the rule-based system was better than that of the neural network in terms of successful classification, robustness, and speed. Although it produced good results for the leave-one-out approach, its adaptability to new fish was not adequate. This may, however, be due to a lack of training data, which may also be the reason for the poor adaptability of the neural network.

Estimation of the cutting contour polynomials using multiple regression was very robust, but the accuracy of a significant number of contours was not within the specified tolerance level. The major drawback of the k-nearest neighbour algorithm was its poor classification rate. Also, the on-line computational cost of this method is high, and it increases with the number of features and the number of data samples in the database.

All these methods were tested with the existing data of 200 fish samples. A portion of this data may be unreliable due to the poor quality of static pictures. Methodology has been developed, however, to eliminate noisy data. A prototype laboratory system incorporating a knowledge base, a vision system, and a robot was developed. The feasibility of the approach has been demonstrated in the laboratory environment.

10.2 Main Contributions of the Thesis

In summary, the main contribution of this research is a methodology for rule generation for knowledge-based systems. In particular, rules are generated and prioritized using multivariate probability density distributions of the features.
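Among the techniques compared above, the multiple-regression estimate of the cutting contour polynomial coefficients amounts to an ordinary least-squares polynomial fit. The following is a minimal sketch that solves the normal equations directly; the function name is illustrative, and in practice a library routine would be used instead.

```python
def polyfit_ls(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations A^T A c = A^T y,
    solved with Gaussian elimination (fine for the low degrees used here)."""
    n = degree + 1
    # Normal-equation matrix and right-hand side from power sums
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Forward elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution
    coeffs = [0.0] * n
    for i in reversed(range(n)):
        coeffs[i] = (b[i] - sum(A[i][j] * coeffs[j] for j in range(i + 1, n))) / A[i][i]
    return coeffs  # coeffs[i] multiplies x**i
```

For the low polynomial degrees used for cutting contours, solving the normal equations directly is adequate; for higher degrees a QR-based solver is numerically safer.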
This methodology enhances the reliability and the performance of the system by combining the knowledge available from experts, and from others with experience in the process, with the knowledge gained from statistical analysis.

This research also provides a set of algorithms for computing the features of the rule-based system from image data. A comparison of knowledge representation techniques for this type of industrial application is a further contribution of the research. The main contributions to industry are the new technology and a practical workcell for fish processing.

10.3 Future Work

The present method of rule generation uses a square kernel to estimate the density. The density estimate may be improved by using other kernels, such as a Gaussian kernel or a triangular kernel. Also, in rule prioritization a constant value is used for one region of the density function, i.e., for one rule. The accuracy of classification could be further improved by using a variable priority value that depends on the distance from the peak of the high-density region of the density function.

The transfer of technology to industry and the demonstration of the ability of the system to function as a practical workcell in fish processing are also continuing work from this research. The major component of this work involves developing a robotic cutter that is able to perform a contour cut at the speed specified by the industry.
The feasibility of employing water-jet, hot-wire, or reciprocating blade type cutters is being investigated.

Nomenclature

Symbol        Description
A(x)          Parameters of an object
a0            Fourier series constant (DC) term
a_i           Coefficients of the cutting contour polynomial
              Fourier series coefficients of the cosine terms
AC_kij        Subset i of attribute k for group j
ac_k          Feature or attribute k
b_i           Output i of a neural network
              Fourier series coefficients of the sine terms
C_h           Class with the highest a priori probability
C_j           Class j
CH            Chum salmon
              Threshold level of relationship density
              Dimensionality
d(x, y)       2-D directional derivative operator
              Total error of a neural network
E(k)          Expected value of k
f(x)          Discrete probability distribution of x
f(x)          Feature vector x
f1            Ratio (nose-eye length)/(length)
f2            Ratio (nose-gill length)/(length)
f3            Ratio (width)/(length)
G(X, Y)       Discrete Gaussian function
G_x(X, Y)     First derivative of the Gaussian function
g(x, y)       2-D Gaussian function
              Size of a subrange
h_kij         Size of a side, corresponding to feature k, of the ith hyper-rectangle of the jth class
I             Identity matrix
              Second moment of area of an image about the center
I(X, Y)       2-D digital image
IM            Index of mismatch; the minimum absolute area between two cutting contours
i(x, y)       Continuous 2-D image
              Chord length of the cutting contour
              Model j
m             Number of features
              Output of layer j of a neural network
P             Pink salmon
              Probability
              Estimated probability
              Probability density
p(ac|C_j)     Conditional probability density of feature vector ac given class C_j
P_j           Number of clusters in group j
              Estimated probability density
              Number of classes
R(f())        Mapping function between feature vector and class index
              A region in the feature space
              Real numbers
              Sockeye salmon
              Target or desired output at the kth layer
u_ij          Discriminant function between class i and class j
V             Volume of the feature space
W_h(x)        Gaussian kernel with parameter (variance) h
w_i           Window height of a selected region i in feature space
w_oi          Weights of neural network connections between layer o and layer i
X             Coordinate axis X
x             x coordinate of a point in the x-y plane
Y             Coordinate axis Y
y             y coordinate of a point in the x-y plane
              Momentum term of a neural network
              Parameters of the multiple regression model
φ(ac)         A unit hypercube centered at the origin of the feature space
              Learning rate of a neural network
η_ij          Median vector of the ith cluster of the jth class
              Intersection indicator; its value is 1 if two clusters intersect, and 0 otherwise
              Mean vector of class i
              Angle of rotation
θ_a           Cutting angle of a straight cut
              V-cut angle
              Correlation between features i and j
Σ_i           Covariance matrix of class i
σ_ii          Variance of feature i
σ_ij          Covariance of features i and j
Φ(ac)         A hyper-rectangle centered at the origin of the feature space
              ith cluster center of the jth class
L., "Scene Matching with Invariant Moments," Computer Graphics and Image Processing, Vol. 8, pp. 16-24, 1978.

[You, 1986] You Y. C., "Expert System for Model Management," Applications of Artificial Intelligence III, SPIE, Vol. 635, pp. 17-24, 1986.

Appendix A

Overview of OPS/83

OPS/83 is a programming language for developing expert systems. It supports the ordinary programming paradigm of languages such as C and PASCAL as well as the rule-based paradigm.

A.1 Rule-Based Programming

An OPS/83 program consists of two components: a collection of IF THEN rules, and a global database called working memory. Each rule contains a conditional expression called the rule's LHS (Left Hand Side) and an unconditional sequence of actions called the rule's RHS (Right Hand Side). An LHS consists of one or more patterns. An LHS is considered to be satisfied when every pattern in the LHS is matched with an element from working memory.

The rule interpreter executes a production system by performing a sequence of operations called the recognize-act cycle. The standard recognize-act cycle is:

1. Match: Evaluate the LHSs of the rules to determine which rules are satisfied given the current contents of the working memory.

2. Conflict Resolution: Select one rule from among the ones with satisfied LHSs. If no rules have satisfied LHSs, halt execution.

3. Act: Perform the operations specified in the RHS of the selected rule.

4. Go to Step 1.

A.2 High Level Programming

High level programming in OPS/83 is very similar to programming in C or PASCAL. It has all the variables, constants, operators, data structures, control structures, subprograms, and file structures which are available in a conventional high level language.

A.3 Working Memory

Working memory consists of a set of record structures called working elements. The elements are created and placed into working memory using the make statement.
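The recognize-act cycle and the working-memory operations described above can be simulated in a few lines of C. The sketch below is a minimal illustration, not OPS/83 itself: working memory is reduced to a single goal element, and the rule names, element fields, and functions are all invented for the example. Conflict resolution here simply selects the first satisfied rule.

```c
#include <assert.h>

/* A toy working memory: a single "goal" element with a status field;
 * the alive flag stands in for presence in working memory. */
enum { FIND, END };

typedef struct { int status; int alive; } Goal;

/* Each rule pairs a condition (LHS) with an action (RHS). */
typedef struct {
    int  (*lhs)(const Goal *);
    void (*rhs)(Goal *);
} Rule;

static int  lhs_advance(const Goal *g) { return g->alive && g->status == FIND; }
static void rhs_advance(Goal *g)       { g->status = END; }   /* modify the element */
static int  lhs_leave(const Goal *g)   { return g->alive && g->status == END; }
static void rhs_leave(Goal *g)         { g->alive = 0; }      /* remove the element */

/* The recognize-act cycle: match, resolve conflicts, act, repeat.
 * Returns the number of rule firings before no LHS is satisfied. */
int run_cycle(const Rule *rules, int nrules, Goal *wm) {
    int fired = 0;
    for (;;) {
        int chosen = -1;
        for (int i = 0; i < nrules; i++)                  /* match */
            if (rules[i].lhs(wm)) { chosen = i; break; }  /* conflict resolution */
        if (chosen < 0) return fired;                     /* no satisfied LHS: halt */
        rules[chosen].rhs(wm);                            /* act */
        fired++;
    }
}
```

With a goal that starts in the FIND state, the interpreter fires the advance rule, then the leave rule, and halts after two firings, mirroring the match/conflict-resolution/act loop above.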
Elements can be modified by the modify statement and deleted from working memory using the remove statement.

A.3.1 Element Declarations

Working memory elements are declared exactly like records:

    type goal = element
    (
        type   : symbol;
        id     : integer;
        object : integer;
        status : integer;
    );

A.3.2 The make, modify, and remove Statements

The make statement is used to create new elements and add them to the working memory.

    make(goal);

This makes an element called goal.

    modify(goal type=find object=fish id=sockeye);

This modifies the current contents of goal to the new specified values.

    remove(goal);

This deletes the element from the working memory.

A.4 Rules

A rule consists of:

• the rule keyword
• the name of the rule
• an opening brace ({)
• the LHS of the rule
• a right-pointing arrow (-->)
• the RHS of the rule
• a closing brace (})

Example:

    rule leave
    {
        &G (goal status=end);
        -->
        remove &G;
    };

Appendix B

ESHED SCORBOT-ER VII Robotic Manipulator

The ESHED SCORBOT-ER VII is a five degree-of-freedom, vertically articulated robot. The robot is controlled by an MC68010-based controller which is also manufactured by ESHED ROBOTEC. The robot controller has its own internal control language, ACL (Advanced Control Language), which offers a wide range of capabilities for programming the robot and/or other axes of automation. Although the robot has only five joints, the controller is able to control eight axes. The ACL is on an EPROM within the controller, and is accessed via the RS-232 port using any ASCII terminal. This appendix provides the specifications of the robot and the robot controller, and also provides a description of the ACL.

B.1 Robot Specifications

Table B.1 gives the specifications of the SCORBOT-ER VII robot. Figure B.1 shows the robot arm axes, and Figure B.2 shows the operating range without the gripper [ESHED ROBOTEC, 1990].

B.2 Robot Controller Specifications

The SCORBOT-ER VII controller has four driver cards.
There are three cards which drive the robot axes and the gripper, and the other card can be used to drive accessories or other external motors. Table B.2 gives the specifications of the SCORBOT-ER VII controller.

Table B.1: Robot Arm Specifications.

Mechanical Structure: Vertical articulated robot; five axes of motion; harmonic drive transmission on each axis
Operating range: Axis 1 (base rotation) 250°; Axis 2 (shoulder rotation) +130° to -35°; Axis 3 (elbow rotation) +130°; Axis 4 (wrist pitch) +130°; Axis 5 (wrist roll) unlimited (mechanically), +180° (electrically)
Radius of operation: 851mm (33.5in) with gripper; 691mm (27.2in) without gripper
End effector: Electrical gripper
Hard Home: Fixed reference positions on all axes
Actuators: Electrical DC servo motors
Transmission: Harmonic drives
Maximum Workload: 2000g (4.4lb), including gripper
Repeatability: 0.2mm (0.079in)
Maximum Speed: 1000mm/sec (39.37in/sec)
Weight: Approx. 11kg (24lb)
Flexible Hose: Factory installed for pneumatic end effector applications

Figure B.1: Robot Arm Axes.

Figure B.2: Operating Range of the Robot.

Table B.2: Robot Controller Specifications.

Controller Type: Real-time, multi-tasking, stand-alone, PID (a terminal is required for the programming stage)
No. of Servo Axes: Maximum 11; standard 8
Groups of Control: The 11 axes can be divided into 3 groups: A and B, and a group of independent axes (group C); each group of control is independent, giving full flexibility in operation
Axes Drive: PWM (Pulse Width Modulation), 20kHz
Path Control: PTP (Point to Point); CP (Continuous Path) - linear and circular paths calculated by mathematical formulas, e.g., parabola
Profiles: Paraboloid, trapezoid, open loop free running; with acceleration/deceleration, definable separately for each group of control
Interpolation Functions: Articulation, linear, circular, mathematical formula; axes interpolation in groups A and B
Motion Speed: Definable either by speed or by travel time between points; programmable to a percentage value within the speed range
Control Parameters: Open or closed loop; integral, proportional, differential, offset
CPU: Motorola 68010
EPROM: 384K bytes
Work RAM: 64K bytes
User RAM: 32K bytes standard; 128K bytes expanded (optional); battery backed up (up to 12,000 lines or 6300 positions)
No. of Program Lines and Positions: Standard RAM: 2960 program lines or 1450 positions (or any combination); expanded RAM: 12800 program lines or 6375 positions (or any combination), e.g., 4800 lines and 4000 positions; each position requires memory space equal to two program lines
No. of Programs in User RAM: Hundreds, depending on the length of the programs
Multi-tasking: Runs up to 20 independent programs simultaneously; the user can edit a program while another program runs
Inputs/Outputs: 16 inputs (all with LEDs; 0V-1.5VDC: input ON, 2.5V-24VDC: input OFF); 16 outputs (all with LEDs; 4 relay outputs, 12 open collector outputs, 24VDC maximum)
Home: Limit switch for each axis; up to 11 limit switches
Communication: RS-232 serial port; D25 female connector; XON/XOFF handshake
Programming Methods: Using any terminal or PC with terminal emulation software; more than 100 ACL (Advanced Control Language) commands available
Positions Teaching: Using ACL or a teach pendant; absolute and relative definitions for both
Coordinate Systems: Joint coordinates; XYZ coordinates
Sensors: Sensor interrupt capabilities; sensors are connected via input tabs
Emergency Brake: Emergency ON/OFF switch; red light when ON; the brake is activated, disconnecting power to the motors and resetting the system
Protection: Hardware: current limit on each axis (factory adjusted, user adjustable) and automatic fuse on each axis; Software: impact protection, thermal protection, and upper and lower range limits for each axis (factory set, user definable)
Trouble Shooting: Built-in test routine, activated from the terminal/computer or from the teach pendant, which tests servo axes, inputs, outputs, and limit switches; LED indicator on each servo axis and on each input/output (externally visible); commands for testing the performance of each motor in open loop ("SET ANOUT[i]") and of each encoder ("SHOW ENCO")
Power Requirements: 110VAC (+5%), 60Hz; 500W maximum
Power Switching: Power ON/OFF switch and motor ON/OFF switch, with LED indicators for both
User's Power Supply: +12VDC, 2A; regulated, external tabs for each pole
Motors Power Supply: +24VDC, 8A; not regulated
Operating Temperature: 5-40°C (41-104°F)
Weight: Approx. 19kg (42lb)
Dimensions: 490mm(L) x 445mm(W) x 150mm(H); 19.3in(L) x 17.5in(W) x 5.9in(H)

B.3 Programming Considerations

The ESHED robot is programmed using a special programming language called ACL (Advanced Control Language).
ACL provides the following functions:

• Directly executes robot commands.
• Controls input/output data.
• Enables the user to program the robot.
• Runs all selected programs concurrently (full multi-tasking support).
• Synchronizes the execution of programs.
• Provides a simple file management system.

Access to ACL is established through an RS-232 port on the controller. Table B.3 gives a list of commonly used commands and their usage. There are over 100 ACL program commands. A detailed description of these commands can be found in the ACL Reference Guide [ESHED ROBOTEC, 1988].

Table B.3: ACL Command Summary.

CLR n                 Clears the value of encoder n
CLR *                 Clears all encoders
COFF                  Turns off servo control to all axes
CON                   Turns on servo control to all axes
CONTINUE prg          Continues the run of the program prg
COPY prg1 prg2        Copies user program prg1 to a new program prg2
DEFINE v1 ... v8      Defines local variables, up to 8 at once
DEFPA p               Creates a position name p for group A
DEFPB p               Creates a position name p for group B
DELAY t               Halts the program for a specified time (t)
DELP p                Deletes the position p from the position table
DELVAR v              Deletes the variable v from the variable table
DIM v[n]              Defines a local vector of n variables
DIR                   Displays a list of all user programs
EDIT prg              Starts editing program prg
END                   End of a program
ENDIF                 End of an IF branching section
ENDFOR                End of a FOR loop
FOR v1=v2 TO v3       Loop command
GLOBAL v1 ... v8      Defines global variables, up to 8 at once
GOSUB prg             Passes control to another program (prg)
GOTO label            Continues program execution from the commands after label
HERE p                Records the actual location of the robot as an absolute position, in axes encoder coordinates
HOME                  Drives the robot to its home position
IF v1 condition v2    Checks the condition and branches accordingly
INIT CONTROL          Initializes all system parameters
LABEL n               Marks a program label n for use with the GOTO command
LIST prg              Listing of program prg
LISTP                 Displays the user position table
LISTPV p              Displays the coordinates of position p
MOVE p                Moves the robot to position p
MOVE p t              Moves the robot to position p subject to the time t
MOVED p               Same as MOVE except that the program waits until the robot reaches position p
MOVES pv s e          Moves the robot smoothly along a vector of positions pv from start s to end e
PRINT "str"           Displays a string "str" on the screen
PRINT v               Displays the value v on the screen
READ "str" v          Displays the string "str" and inputs the value v
RUN prg               Executes the program prg
SET v1=v2             Assigns value v2 to v1
SETP p1=p2            Assigns the value of position p2 to p1
SETPV p a v           Sets a joint value for axis (a) in the position p
SETPVC p c v          Sets a cartesian value for coordinate c of position p
SHIFT p BY a v        Adds value v to axis joint (a) of position p
SHIFTC p BY v c       Adds value v to cartesian coordinate (c) of p
SHOW DIN              Displays the status of all digital input lines
SHOW DOUT             Displays the status of all 16 digital output lines
SHOW ENCO             Displays the values of all encoders
SHOW SPEED            Displays the current speed setting
SPEED v               Sets the speed to value v
STOP prg              Stops execution of user program prg
STOP                  Stops execution of all running programs
TRIGGER prg BY IN/OUT n s   Sets the condition for the execution of the program prg; the condition is an I/O state s on line n
WAIT v1 condition v2  Suspends program execution until the condition is satisfied

Appendix C

SD512 Vision System

The SD512 Development Workstation is a UNIX machine, running under the REGULUS operating system. REGULUS is a UNIX-compatible operating system which, unlike standard UNIX, has a real-time kernel [IRI, 1988].

In order to facilitate vision application program development, the IRI (International Robomation Intelligence) REGULUS version is enhanced by the IKS/VR vision procedure library and its interactive equivalent, INIKS. In addition, IRI provides program development tools such as the VSL Vision Support Library to supplement IKS/VR [IRI, 1989].

C.1 System Configuration and Specifications

The basic SD512 vision system consists of three circuit boards, called the SCPU (SUPRAVISION CPU), the FBB (Frame Buffer Board), and the IPB (Iconic Processor Board).
These components of the SD512 workstation are described below, while Figure C.1 shows a logical block diagram of the main components.

(UNIX is a trademark of AT&T. REGULUS is a trademark of ALCYON, Inc.)

• The SCPU encompasses the vision system CPU with its coprocessor, the memory management unit (MMU), up to 8MB DRAM, the I/O interfaces, and all other necessary "glue logic". The CPU is a 32-bit MC68020 and the coprocessor is an MC68881. The I/O interfaces include SCSI for hard disks and streamers, a floppy interface, eight RS232 serial channels, and one parallel Centronix interface. Also, there are seven single line vectored interrupts and two independent timers.

• The FBB accommodates the frame buffer memory plus some more memory for look-up tables. In addition, the FBB contains all the special circuits needed to handle the video input, synchronize the cameras with the system, and generate the video output for monitors. There are two independent frame buffer memories on the FBB, with 1MB each, organized to store images of variable resolution. These frame buffers operate in conjunction with a variable resolution digitizer, which can be programmed to interface to matrix cameras with 256 x 256, 512 x 512, or 1024 x 1024 resolution. The content of any frame buffer or a segment of it can be displayed on a monitor screen.

• The IPB comprises three iconic processors to perform a variety of iconic operations. These include feature enhancement, image segmentation, and feature extraction.

Figure C.1: SD512 System Block Diagram.

C.2 Available Functions

C.2.1 Feature Enhancement

Monadic point transformations

• Iselmtab(s,t) Selects a monadic table; s - pixel stream, t - table.
• Imkmtab(t,f(p)) Makes a monadic table; t - table, f(p) - function of pixel values.

Diadic point transformations

• Iadd() Computes the arithmetic sum of each pair of corresponding pixel values in the Source 1 and Source 2 windows.
• Isub() Computes the arithmetic difference of each pair of corresponding pixel values in the Source 1 and Source 2 windows.
• Iand() Computes the logical AND of each pair of corresponding pixel values in the Source 1 and Source 2 windows.
• Ior() Computes the logical OR of each pair of corresponding pixel values in the Source 1 and Source 2 windows.
• Icopy() Copies the contents of the Source 1 window to the Source 2 window.

Convolution

• Iconv3() Performs 3 x 3 convolution. The coefficient kernel is specified by Icoef, e.g., C_3LAPLACE, C_3GAUSS, and so on.
• Iconv5() Performs 5 x 5 convolution.
• Iconv7() Performs 7 x 7 convolution.

C.2.2 Image Segmentation

Labeling

• Ilabel() Detects blobs (sets of 8-connected adjacent pixels with identical value) in the Source 1 window and assigns each one a label.
• Ibbox() Calculates the minimum-sized rectangle enclosing each labeled blob in the Source 1 window. The coordinates of the bounding boxes are stored in the arrays Ibbrmin[] (min row), Ibbrmax[] (max row), Ibbcmin[] (min column), and Ibbcmax[] (max column).

C.2.3 Feature Extraction

The histogram

• Ihist() Computes the frequency of occurrence of each pixel value in the Source 1 window.
• Imaskhist() Simultaneously computes a histogram in up to 16 arbitrary-shaped areas.

The profile

• Ivprof() Computes a vertical profile of the Source 1 window by summing the pixel values in each column.
• Ihprof() Computes a horizontal profile of the Source 1 window by summing the pixel values in each row.
• Ineprof() Same as above but in the North-East direction.
• Inwprof() Same as above but in the North-West direction.
• Imaskprof() Computes the profile in which a mask defines the regions where the pixel values are summed.

The pixel statistics memory

• Ipsm[i] Contains the results of several iconic processor functions. These functions are: histograms, profiles, labeling, and bounding box.
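A typical use of the histogram results left in the pixel statistics memory by Ihist() is to pick a binarization threshold for segmentation. The sketch below applies Otsu's method [Otsu, 1979], cited in the bibliography, to a 256-bin histogram assumed to have been copied out of Ipsm into an ordinary array; the function name and the copy step are illustrative assumptions, not part of IKS/VR.

```c
#include <assert.h>

/* Otsu's method: choose the threshold that maximizes the between-class
 * variance of the background/foreground classes it induces. */
int otsu_threshold(const long hist[256]) {
    long total = 0;
    double sum = 0.0;
    for (int i = 0; i < 256; i++) { total += hist[i]; sum += (double)i * hist[i]; }

    double sum_b = 0.0, max_var = -1.0;
    long w_b = 0;
    int best = 0;
    for (int t = 0; t < 256; t++) {
        w_b += hist[t];                    /* background weight */
        if (w_b == 0) continue;
        long w_f = total - w_b;            /* foreground weight */
        if (w_f == 0) break;
        sum_b += (double)t * hist[t];
        double m_b = sum_b / w_b;          /* background mean */
        double m_f = (sum - sum_b) / w_f;  /* foreground mean */
        double var = (double)w_b * w_f * (m_b - m_f) * (m_b - m_f);
        if (var > max_var) { max_var = var; best = t; }
    }
    return best;
}
```

For a bimodal histogram, such as a bright fish against a dark conveyor background, the returned threshold falls between the two modes.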
Run-length encoding

• Irunlength() Finds the transitions from black to white or white to black in a binary image and stores their coordinates in a list for further analysis.

Moments

• Imoments() Computes the moments (zero through third order) of the image in the Source 1 window.

C.2.4 Template Matching and Correlation

• Iptcorr() Computes the point correlation of the images in the Source 1 and Source 2 windows.
• Inormcorr() Computes the normalized correlation of the contents of the Source 1 and Source 2 windows.

C.2.5 Pyramid Processing

• Icomp2_1() Compresses the dimensions of an image by a factor of two. A 3 x 3 convolution reduction is performed (filtering and sampling).
• Icomp4_1() Compresses the dimensions of an image by a factor of four. A 5 x 5 convolution reduction is performed (filtering and sampling).
• Isamp4_1() Shrinks the dimensions of the Source 1 window by selecting every fourth pixel.
• Izoom() Enlarges the dimensions of an image by a factor of two.

C.2.6 Graphics

Plotting functions

• Ipixel() Changes the value of a pixel; the value and the coordinate are specified.
• Iline() Draws a line between the specified start and end coordinates; the pixel value is also specified.
• Irect() Draws an outline of a rectangle; upper-left and lower-right coordinates and the pixel value are specified.
• Ibox() Same as Irect() but draws a filled rectangle.
• Icircle() Draws a circle; center coordinates, radius, and the pixel value are specified.

Text

• Ichar() Prints a character; the font is specified.
• Itext() Prints a string of characters; the font is specified.

C.2.7 Video Control and Image Acquisition

• Ilive() Turns video going directly from the camera to the monitor on or off.
• Isnap() Captures an image from the currently selected camera video port into the currently selected frame buffer.
• Icamera() Selects the active camera port.
• Iframe() Selects the active frame buffer.
• Ivideo() Turns the video display on or off.
C.2.8 IKS/VR Variables

• Isrc1 Source 1 window upper-left corner.
• Isrc1b Source 1B window upper-left corner.
• Isrc2 Source 2 window upper-left corner.
• Idst Destination window upper-left corner.
• Idstb Destination B window upper-left corner.
• Iwrk Work window upper-left corner.
• Irows Number of rows in the window.
• Icols Number of columns in the window.
• Icoef Coefficient array index.
• Imisc Miscellaneous parameter.
• Isres1 Source 1 resolution.
• Isres1b Source 1B resolution.
• Isres2 Source 2 resolution.
• Idres Destination resolution.
• Idresb Destination B resolution.
• Iwres Work resolution.

C.3 Application Software Development

Application software on the SD512 is developed using the C language. Every program that uses an IKS/VR function or variable should include the following line in the program file:

#include <iksvr.h>

This header file declares variable and function return value data types, defines constants, and defines macros.

The most important IKS/VR function is Iopen(). This function must be the first IKS/VR function called by a program that uses any other IKS/VR function or variable. It is important to check the return value of Iopen() to make sure that the vision hardware is working correctly. A return value of zero or more indicates that everything is working correctly. A negative value means that something is wrong and the application program should terminate.

The following example illustrates how the IKS/VR functions and variables can be combined to build an application program. In particular, this example captures two images, subtracts the second image from the first, and displays the resulting image. The first image is captured into frame buffer 0, the second image is captured into frame buffer 1, and the result is stored in frame buffer 3.

#include <iksvr.h>

main()
{
    if (Iopen() < 0)
        exit(-1);

    Ilive(ON);              /* switches video on */
    printf("Press ENTER to snap the first image\n");
    getchar();
    Iframe(0);              /* selects frame 0 */
    Isnap();                /* snaps the first image */

    Ilive(ON);
    printf("Press ENTER to snap the second image\n");
    getchar();
    Iframe(1);              /* selects frame 1 */
    Isnap();                /* snaps the second image */

    Isrc1 = Iwloc(0,0,0);   /* set src1 to frame 0, row 0, column 0 */
    Isrc2 = Iwloc(1,0,0);   /* set src2 to frame 1, row 0, column 0 */
    Idst  = Iwloc(3,0,0);   /* set dst to frame 3, row 0, column 0 */
    Isub();                 /* subtract src2 from src1, put result in dst */
    Iframe(3);              /* select destination frame */
}

It is necessary to compile the program and link the object program with the required libraries in order to create the executable program. Suppose that the source program was saved in a file called main.c. The following command is used to compile and link main.c:

cc main.c -I/usr/include/iri -liksvr

If the source program is successfully compiled and linked, the executable program will be saved as a.out. Running the program is then just a matter of typing a.out.

Appendix D

Interface Circuitry and Communication Software

This appendix describes the electronic circuits and communication software designed and implemented for the prototype butchering machine. Section D.1 describes the interface circuit that was built to establish communication between the vision station and the robot controller. Section D.2 describes the circuit built to establish synchronization of the conveyor with the vision software. Software developed for communication and synchronization is discussed in Section D.3.

D.1 The Interface Circuit

An interface circuit between the vision station and the robot controller was built to establish communication between the two.
A parameter command from the vision software is converted to two 7-bit digital signals and transferred to the robot controller through this interface circuit, shown in Figure D.1.

Figure D.1: Interface Circuit between the Robot Controller and SD512. (U1 - 74LS244; U2 - 74LS244)

SD512 sends data out through its Special Input/Output Port (SPIO). The 8th bit of each data byte sent out is used as a strobe signal, and it indicates to the robot controller that a data set is ready to be read.

D.2 Synchronization Circuit

Synchronization of the vision software with the conveyor is established through the synchronization circuit shown in Figure D.2. A trigger switch mounted on the conveyor gives an ON/OFF signal depending on whether there is a fish on the conveyor to be imaged. The vision software polls for this signal and proceeds with the processing if an active signal is detected.

The signal obtained from the trigger switch is very noisy. Therefore, two retriggerable monostable multivibrators are used in series to overcome this problem. The first multivibrator is triggered by the first low-going edge and masks any other edges for a period of about 200ms. This period can be adjusted by changing the variable resistor R2 in Figure D.2. The second multivibrator is triggered by the high-going output of the first multivibrator and outputs a pulse. The duration of this pulse is about 5ms.

Figure D.2: Synchronization Circuit of the Conveyor and the Vision Software. (R1 - 510; R2 - 10k; R3 - 200k; C1 - 10µF; C2 - 0.1µF; C3 - 1µF; U3 - 74LS123)

D.3 Communication and Synchronization Software

There are three parameters that must be sent by the vision station to the robot controller:

• Length of the fish
• Width of the fish
• Model No.

The length and the width parameters are each converted to two 7-bit words. The model number is converted to one 7-bit word because it ranges only from 1 to 5. The 8th bit of each byte, as mentioned above, is used as a Strobe signal. The vision system sends the first byte of a parameter and waits for an acknowledge (ACK) signal from the robot controller. It sends the next byte of the parameter if a successful ACK is received. This process is continued until all five bytes, corresponding to the above parameters, are sent to the robot controller.

A synchronization pulse from the synchronization circuit is polled by the vision software. An active pulse from the circuit will cause the vision system to capture an image and perform the relevant analysis. This signal is input through one of the digital input channels of SPIO. A high-level description of the communication and synchronization software is as follows:

• Repeat for each fish
  1. Initialize the SPIO port
  2. Wait for the trigger signal
  3. Perform image processing
  4. Repeat for the length and width parameters
     - Compute the two words (7 bit) of the corresponding parameter
     - Activate Strobe and send the first byte
     - Wait for ACK
     - Activate Strobe and send the second byte
  5. Compute the word (7 bit) for the model number
  6. Activate Strobe and send the byte
  7. Wait for ACK

Figure D.3 shows the complete wiring diagram of the interface circuit and the synchronization circuit.

Figure D.3: Complete Wiring Diagram of the Interface and the Synchronization Circuit.

