UBC Theses and Dissertations

A methodology for the optimization of heat sterilization for rectangular food packages. Greaves, Karen F. 1990
A METHODOLOGY FOR THE OPTIMIZATION OF HEAT STERILIZATION FOR RECTANGULAR FOOD PACKAGES

By Karen F. Greaves
B.A.Sc. (Systems Design Engineering), University of Waterloo

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE STUDIES, BIO-RESOURCE ENGINEERING

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
Sept. 1990
© Karen F. Greaves, 1990

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

The University of British Columbia
2075 Wesbrook Place
Vancouver, Canada
V6T 1W5

Date:

Abstract

A new method for designing optimum package size and processing conditions for retortable flexible or microwaveable packages has been specified. The method employed the random centroid optimization technique. The method also included a computer simulation model, used to calculate process lethality and nutrient degradation, and an objective function used to calculate total process cost from the above values as well as from the processing parameters. The objective function was original in that it was economically based, rather than concentrating solely on achieving processes with high product quality.
Typical processing costs such as the cost of energy or materials were included, as well as more qualitative costs such as those associated with decreases in product quality. Most of these costs were inferred from conventional can processing costs, as information regarding processing costs for flexible or microwaveable retortable packages was often unavailable.

For the simulation model, a recently developed one dimensional finite difference technique of high accuracy (the exponential finite difference method) was extended to three dimensions and used to solve the partial differential heat flow equation. The simulation model also comprised an original algorithm to model headspace and a new numerical integration technique to calculate lethality and nutrient degradation from the time-temperature profile. The headspace model calculated headspace volume as a function of retort temperature and pressure and then used steady state heat transfer theory to model heat flow across the package headspace. The numerical integration technique fitted an exponential curve to the time-temperature profile and then integrated this curve analytically.

The accuracies of the exponential finite difference method, with and without headspace included, and of the numerical integration technique were tested by comparison with both analytical solutions and other well established numerical methods. In addition, the uncertainty associated with the assumption of uniform parameters was evaluated using a Monte Carlo technique. After the validity of the simulation model was established, it was used in conjunction with the objective function and the random centroid optimization method to search for a global cost minimum. One trial optimization using a simple objective function was made, followed by three optimization runs using the more complex objective function designed to calculate process cost.
Each optimization used a different set of decision variables, which varied in number from three to five. The results were evaluated as to whether a minimum was found, and if so, whether it constituted a global minimum.

Table of Contents

Abstract
List of Tables
List of Figures
Acknowledgement
1 INTRODUCTION
2 LITERATURE REVIEW
2.1 Optimization Approaches
2.2 Methods of Process Simulation
2.2.1 Calculating Lethality
2.2.2 Calculating Nutrient Retention
2.2.3 Inaccuracies Due to Variability
2.3 Objective Functions
2.4 Optimization Studies
2.4.1 Constant Temperature
2.4.2 Variable Temperature
3 METHODS
3.1 Mathematical Model
3.1.1 Simulation of the Time-Temperature Profile
3.1.2 Calculation of Lethality and Nutrient Degradation
3.2 Optimization Technique
3.2.1 Approach Followed
3.2.2 Objective Functions Used
3.3 Verification of Model
3.3.1 The Finite Difference Method
3.3.2 Treatment of Headspace
3.3.3 The Numerical Integration Method
3.3.4 The Assumption of Uniformity
4 RESULTS AND DISCUSSION
4.1 Simulation Model Verification
4.1.1 Polynomial Finite Difference Method
4.1.2 Headspace Model
4.1.3 Numerical Integration Method
4.1.4 Monte Carlo Runs
4.2 Optimization Runs
4.2.1 Trial Optimization
4.2.2 Version I
4.2.3 Version II
4.2.4 Version III
4.2.5 Comparison of Methods
5 CONCLUSIONS
Bibliography

List of Tables

3.1 Initial Ranges for the Optimization Runs
4.2 Analytical solution at the centrepoint for eight different processing runs
4.3 Centrepoint temperature (°C) as given using the ADI and Polynomial Finite Difference Solutions
4.4 Percent difference in the centrepoint temperatures given using the ADI and Polynomial Finite Difference methods from the analytical solution
4.5 Verification of Headspace Model
4.6 Comparison of Numerical Integration Methods
4.7 Centrepoint Lethalities (in minutes) Calculated for Four Processes
4.8 Values of the Decision Variables at Which the Minima were Found for the Optimizations Using Method I
4.9 A Cost Breakdown of Some Specified Processes

List of Figures

2.1 Schematic of the Random-Centroid Optimization Method
3.2 Extension of the Finite Difference Grid to Apply Convective Boundary Conditions
3.3 A Model of Package Headspace
3.4 Applying the Curve-Fitting Integration Method to a Lethality-Time Profile
3.5 The 1-D Infinite Plate Used in the Analytical Solution to Verify the Headspace Treatment Procedure
4.6 The Time-Temperature Profile for Run #4

Acknowledgement

I am indebted above all to my supervisor, Paul Richard, for his ideas throughout the project, his help in implementing them, and his complete understanding of my sometimes erratic work hours. I thank Dr. Nakai for allowing me the use of his optimization program, which became an integral part of the thesis. I also thank Tim Durance for his instruction and advice on all things microbiological, and Garry Sandberg for imparting his knowledge of food processing in industry. Thanks to Ed Hauptmann for his keen insights (i.e. tough questions) during my defence and his advice on technical writing. On a personal level, I'm really grateful to my family for their support (both financial and other) and my friends. Thanks Anya for great patience, understanding, and advice, as well as great dinner parties, kayak trips, and hikes. Thanks to Diane for her visits, and to Corinne and Andy for the great times in Calgary. Thanks to Kevin for being a "total sweetie", always willing to help out and completely immune to (or unaware of) the time of day or night. Thanks also to my roommates, especially Jana and Dave, for putting up with my forgetfulness and being so easy to live with.
I really appreciate you all!!

Chapter 1 INTRODUCTION

Flexible retort pouches and microwaveable packages are a fairly recent food packaging alternative to the conventional can. Comparisons have shown that dramatic heat processing differences exist between retort pouches and conventional cans (Snyder and Henderson, 1989). Despite this, there is limited data reported in the literature related to the thermal processing of either retortable pouches or microwaveable packages.

Process design for products retorted in cans, or in flexible or microwaveable packages, must ensure commercial sterility while attempting to maximize product quality and minimize overall processing cost. The process design is often done through the use of optimization techniques. Most optimization studies have involved only one or two decision variables (parameters describing the process which are altered in order to affect the processing results). The decision variables are usually chosen to be process duration and retort temperature, and all other potential design variables remain fixed.

Previous investigations have concentrated on optimization to provide a process of adequate lethality with minimum quality deterioration. Usually a single nutrient is chosen to be representative of product quality, and the objective function becomes simply the percentage destruction of that nutrient. With such an objective function, it is possible to obtain a thermal process ensuring high product quality. However, the advantages of producing a high quality product could be offset by the potentially high costs incurred by the process required.

An alternative objective function is one involving the overall net cost of the process. Such a function would have the advantage of relating more directly to the decision-making criteria employed by the food processing industry.
Product quality could still be included in the objective function provided that a cost could be associated with the product quality. In this thesis, a cost-based objective function was developed for the optimization of salmon processing in flexible or microwaveable retortable packages. The parameters of the thermal process which could be varied were not limited to process duration and retort temperature, but could include retort pressure, product initial temperature, and package shape. In order to allow for such a large number of processing parameters, the random centroid optimization method (Aishima and Nakai, 1986) was chosen to perform the optimization.

The calculation of lethality and product quality (values needed as inputs to the objective function) proceeded directly from the time-temperature profile, rather than using one of the more common formula methods. To this end, a numerical method was used to solve the heat diffusion equation in order to calculate temperature values as accurately as possible. A simulation model based on solving the heat diffusion equation was developed to calculate the time-temperature profile of a rectangular package, given its size and a description of the thermal process to which it was subjected. The model included a finite difference approach developed by Bhattacharya (1985), which was adapted for use in the simulation model by extending the method to three dimensions and introducing boundary conditions peculiar to the retorting process. An original algorithm to account for the effects of variable headspace was also developed for the simulation model. Lethality and nutrient degradation were calculated from the temperature profile given by the model.

To summarize, this study was undertaken with the objective of developing a practical method for the design and optimization of thermal processing of rectangular containers for retortable solid food.
Specific objectives are:

1) to develop a heat transfer model for rectangular packages in a retort. The development of the model includes the calculation of heat conduction through solid food and through the headspace, and the calculation of lethality and nutrient degradation from the temperature profile by numerical integration.

2) to test the model by comparing the accuracy of each of the above components with established analytical solutions, and by establishing the validity of some of the main assumptions upon which the model is based.

3) to develop a cost-based objective function for the retorting process.

4) to identify optimum sterilization processes using the above tools. This objective was met using the random-centroid optimization method developed by Aishima and Nakai (1986). The method was applied with several different sets of decision variables describing the retorting process. The optimizations were performed more than once in order to establish the consistency of the method.

Chapter 2 LITERATURE REVIEW

The following review describes how optimization has been applied to food sterilization processes by first outlining the habitual approaches taken to optimization in the food processing area. Then the mathematical methods commonly used in process simulation models and the prevalent objective functions are covered. Finally, some previous optimization studies which have used these models and the results obtained by the studies are reviewed. Published work pertaining to specific applications is reviewed in later sections.

2.1 Optimization Approaches

Optimization methods are mathematical techniques which can be used to design processes to ensure optimal results. The review of optimization theory in food processing by Teixeira and Shoemaker (1989) described optimization problems by listing several of their attributes. A performance function or objective function is used to calculate the quantity to be maximized or minimized.
For example, if the aim of the optimization is to maximize nutrient retention, then the performance function is the calculation of nutrient retention. The performance function is calculated based on the processing results, which are varied by changing parameters known as decision variables. Decision variables are any variables chosen to describe the process, such as retort temperature or process duration. Constraints are limitations which are placed upon the values assigned to the decision variables. The constraints are usually specified in the form of upper and lower limits on the variable, which are often imposed by the practical limitations of equipment. For example, the capabilities of the retort available may impose an upper limit upon retort temperature.

For any one set of decision variables, a mathematical model may be used to simulate the process. The model can be used to ascertain the effects of imposing a particular combination of decision variables on the process, and from this to calculate the value of the performance function. Finally, the overall optimization technique, or algorithm, is used to search possible combinations of the decision variables in order to find that combination which produces the optimum.

None of the early optimization studies employed formal optimization theory in the determination of an optimum (Holdsworth, 1985). Instead, the technique used involved first determining several processes by iteration or by trial and error. These processes had to involve different values of the decision variables, but be equivalent in terms of lethality. The value of the objective function (usually nutrient retention) was then calculated for each of the processes, and plotted against the values of the decision variables. The values of the decision variables which maximized (or minimized) the objective function could then be noted.
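This early plot-and-pick approach can be illustrated with a short sketch. The sketch below is my own, not from the thesis: it considers idealized constant-temperature holds of equal lethality, varies one decision variable (retort temperature), and the nutrient kinetic constants are assumed purely for illustration.

```python
import numpy as np

# Constant-temperature processes made equivalent in lethality
# (F = 6 min at 121.1 C), evaluated one decision variable at a time.
# The nutrient z and D values below are assumed for illustration only.
F_TARGET = 6.0      # target lethality, min at 121.1 C
Z_SPORE = 10.0      # z value for the spoilage organism, C
Z_NUTRIENT = 25.0   # z value for the nutrient, C (assumed)
D_NUTRIENT = 180.0  # nutrient D value at 121.1 C, min (assumed)

temps = np.arange(111.0, 132.0, 5.0)                    # candidate retort temperatures, C
times = F_TARGET / 10.0 ** ((temps - 121.1) / Z_SPORE)  # hold time giving equal lethality
retention = 10.0 ** (-times * 10.0 ** ((temps - 121.1) / Z_NUTRIENT) / D_NUTRIENT)

# Plotting retention against temps and reading off the best value is
# exactly the early graphical technique described in the text.
best = temps[np.argmax(retention)]
```

Because the nutrient's z value is larger than the spores', the hotter, shorter processes retain more nutrient, reproducing the classical high-temperature short-time result.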
This technique lends itself most easily to the analysis of only a few decision variables, one at a time, and without any considerations of possible interactions between the decision variables. The number of possible combinations of decision variables makes the technique unwieldy for optimizations involving objective functions containing many decision variables.

A method well suited for optimization problems involving several decision variables is the random-centroid optimization developed by Aishima and Nakai (1986). The complete method, comprising several subsections which are used in sequence, is shown in Figure 2.1 (after Nakai 1989).

[Figure 2.1: Schematic of the Random-Centroid Optimization Method. The flowchart steps through random search, centroid search, mapping, and simultaneous shift, with the cycle repeated as required; only the caption and block labels are recoverable from the scanned figure.]

In the random search several trial processes are formulated by the specification of values for each of the decision variables. The values are chosen randomly within the limits or constraints specified by the user. Once the results from the first randomly selected processes are known, new trial processes are specified using the centroid search. These processes are found using a simplex constructed according to the method described by Aishima and Nakai (1986). The results from the random search and centroid search operations are pooled and mapped. In order to help the user visualize the response surface, results are plotted
The complete cycle is repeated, using the altered ranges in the random search section to narrow down the optimum region. Once the optimum region has been sufficiently denned, the optimization proceeds us-ing the simultaneous shift experiment. A target value for each factor is denned, outlining the most probable location of the optimum. The target is set using individual judgement, based upon the plots of the response surfaces. The simultaneous shift program is then used to shift (or change) the value of each factor towards its target value. All factors are shifted simultaneously, to produce a new trial process, for which the result (objective function value) is expected to be lower. Different target values for the factors may be entered, allowing several simultaneous shifts to occur at once. The combination of target values giving the lowest result is noted, and further simultaneous shifts in the direction of these particular target values are made, until the result is no longer reduced by a significant amount with each further repetition. At this point a minimum has been found. 2.2 Methods of Process Simulation 2.2.1 Calculating Lethality The processing conditions required to produce sterility in food containers were usually calculated by trial and error procedures in which factors describing the process were adjusted until adequate sterility was attained. Adequate sterility, or commercial sterility, was defined using the concept of lethality, which was designated by the symbol F and Chapter 2. LITERATURE REVIEW 8 described the extent to which bacterial spores were destroyed. All of the many methods developed to perform lethality calculations shared the same basic approach. First, a temperature profile was determined for the product from the given processing conditions, and second, the extent of bacterial death was calculated at each temperature. The first method developed for determining process lethality was the General Method introduced by Bigelow et al. 
(1920). In this method, both the temperature profile and the bacterial death at temperature were determined experimentally. The thermal destruction curve was determined by heating products containing a known spore concentration for various times at various temperatures. Points on the curve were given by the time-temperature combinations which resulted in destruction of all the spores present. The thermal destruction curve was originally thought to represent the conditions necessary for 100 percent spore destruction. Currently however, a logarithmic relationship is almost invariably assumed, implying that spore destruction can come very close to, but never actually reach, 100 percent.

A graph of the time-temperature history was obtained from thermocouple measurements at the slowest heating point in the product. The slowest heating point was thought to have the lowest spore destruction in the package, thus giving the greatest safety factor. Temperature values from points on the graph were divided by the corresponding time to destruction at that temperature, which was found using the thermal destruction curve. The result was a lethal rate curve, which could be integrated over time to obtain the total lethality.

Although this method was almost exclusively based on experimentally determined data, it contained a potential inaccuracy arising from the choice of location of the thermocouple measurements. The slowest heating point in the product, taken as the package geometrical centre for conduction heating foods, may not have been the point of lowest spore destruction (Ball and Olsen, 1957). Thus the safety factor supposedly provided through use of the slowest heating point may have been lessened. After heating ends and the cooling water is applied, the temperature at the package centre continues to rise for longer than that at any other point.
This provides the centre-point with a slightly greater heat treatment than the points immediately surrounding it, which cool more quickly. Flambert and Deltour (1972a) illustrated that this phenomenon, known as overshooting, may alter the location of the critical zone (least lethality zone) in the container. The effect has been quantified for cylindrical containers by Naveh et al. (1983), but it has not been incorporated into many of the mathematical methods in use today (Stumbo 1965, Naveh et al. 1983).

An alteration to the original General Method was proposed by Ball (1923). The procedure was simplified by removing the need for an experimentally determined thermal destruction curve. Instead, logarithmic destruction was assumed, as described by a line characterized for each separate organism by a D value. The D value was the time required for a tenfold reduction in number of spores. Commercial sterility for a given spoilage organism was defined as a lethality equal to 12 D values, which implied a reduction in the number of original spores by 10^12. Since most products requiring sterilization contain a variety of potentially harmful microbes, lethality values were often determined experimentally and listed by product in reference tables, rather than being calculated from D values.

D values change with temperature, and this temperature dependence was usually incorporated by assuming that it could be described using first order kinetics. This involved plotting the log of the D value against temperature to obtain a Thermal Death Time curve. A z value denoted the negative inverse of the slope of this curve (the temperature range required for a tenfold reduction in D value). For a reference temperature θ_ref corresponding to a given D value, the lethal rate l could then be calculated at any temperature θ as:

l = 10^((θ - θ_ref)/z)    (2.1)

This was the so-called Thermal Death Time, or TDT, equation. Most methods developed subsequently employed the TDT formula to describe the temperature dependence of spore destruction.

Over small temperature intervals the TDT formula was equivalent to the Arrhenius relationship. The Arrhenius relationship used a rate constant k (equal to 2.303/D) rather than a D value to describe the rate of thermal inactivation of spores. The temperature dependence of k was then described as

k = A · exp(-Ea/RT)    (2.2)

where A is an empirical constant, Ea is the activation energy, R is the universal gas constant, and T is the absolute temperature. Several researchers (e.g. Deindoerfer and Humphrey 1959, Simpson and Williams 1974) used the Arrhenius relationship in calculating lethality for continuous flow sterilizers, and Lenz and Lund (1977a,b) developed a method based on the relationship. In general, however, the relationship was not in widespread use, as it was less straightforward than the TDT equation and its superiority was not fully established (Holdsworth, 1985). Both equations gave identical results over small temperature ranges, rendering the choice of equation largely academic (Thorne, 1985).

The improved General Method still remained cumbersome, inspiring the development of other techniques that relied upon simpler methods for obtaining the time-temperature history of the product. The most commonly used technique in commercial practice was the so-called formula method developed by Ball and Olsen (1957), or some variant thereof. The basic method used a straight-line semilogarithmic approximation to the heating and cooling curves to determine product temperature. Two parameters were used to describe the curves: the heating rate index (1/slope) fh, and the lag factor jh (heating) or jc (cooling). These parameters were obtained from processing conditions for the heating curve but were given fixed values for the cooling curve.
The cooling curve also contained a non-linear section, and experimentally determined tables were used to describe the further lethal effects imparted during this portion of cooling. A number of inaccuracies were found in Ball's tables (Merson et al., 1978), and subsequently corrected. In addition, the assumptions made in order to produce the tables only held in a small proportion of cases (Hayakawa, 1978). For example, the tables were developed using data from cylindrical containers, and assuming that the retort temperature was constant throughout the process, the product undergoing sterilization was heated by either pure convection or conduction, the cooling curve had the same slope as the heating curve, and the value of the lag factor for the cooling curve was constant at 1.41. Further details and some of the shortcomings of this method are given in the review by Merson et al. (1978).

Some improvements have been suggested to extend the applicability of the formula method. Stumbo (1965) increased the scope of the method to include products which heat first by convection followed later by conduction (for example, products in which starch gelatinizes partway through heating). Stumbo's improved method also allowed for variations in the lag factor describing the cooling curve, providing more flexibility and closer correspondence with the actual cooling curve. Hayakawa (1969) divided the cooling and heating curves into straight-line sections and curved sections, and modelled each portion independently.

Other formula methods were developed based on an analytical solution to the heat
The method introduced by Gillespy (1953) did not require retort temperature to be constant throughout the process, but did assume a food package with an infinite surface heat transfer coefficient, no headspace, contents at a uniform initial temperature, and thermophysical properties of the contents indepen-dent of location and temperature. Hayakawa (1978) has provided a good critical review of the above methods. More recently, time temperature profiles were calculated using numerical techniques (usually finite difference or finite element methods) to solve the heat diffusion equation directly. Direct solution allowed improvements in accuracy and great increases in flex-ibility. Particulars such as boundary conditions, heating pattern, or type and size of packaging used could all be modified. Finite difference approaches involved approximating the continuous temperature dis-tribution by a discrete function in which temperature was only known at certain points (nodes); either the points located at the intersections of a finite difference grid, or the points located in the centre of each volume element. Time and space were both dis-cretized for a transient temperature distribution, and temperatures were then calculated for each time step at each node point. There are several finite difference techniques, classified as either explicit or implicit. Explicit equations express the temperature at a point in terms of the known tempera-tures of surrounding points at the previous time step. They are usually straightforward to implement, but are often unstable at longer time steps. Solution of many explicit equa-tions therefore requires a very short time step, resulting in long computation times. By contrast, implicit equations are unconditionally stable, but require a less straightforward Chapter 2. LITERATURE REVIEW 13 matrix inversion in order to obtain a solution. 
Finite difference approaches define the temperature only at specific points in a solid: the node points on a finite difference grid. Most approaches assume a linear temperature profile between nodes, with discontinuities at the nodes themselves. A new approach was developed by Bhattacharya (1985, 1986) in which the temperature distribution between nodes was assumed to follow an exponential equation. This was based on the fact that the analytical solutions to the equation contain exponentials. Several explicit finite difference methods were developed using this approach, which used a partially developed exact solution to the differential equation in order to emulate the analytical solution as closely as possible.

For instance, the exact solution to the transient heat diffusion equation in one dimension is:

T(x,t) = [A·cos(wx/d) + B·sin(wx/d)] · exp(-w²αt/d²)    (2.3)

with A, B, and w to be determined from the boundary and initial conditions. If a finite difference grid is imposed on the above, with the distance between node points assigned the value d in space and τ in time, and known values assigned to the temperatures at (-d,0), (0,0), and (d,0), then the equation can be solved to provide values for w, A, and B. Substituting x = d and t = 0 in equation 2.3 provides one equation:

T_{i+1}^t = A·cos(w) + B·sin(w)    (2.4)

substituting x = -d and t = 0 provides another:

T_{i-1}^t = A·cos(-w) + B·sin(-w)    (2.5)

and a third is provided for x = 0 and t = 0:

T_i^t = A    (2.6)

These are solved to give the values:

w = arccos[(T_{i+1}^t + T_{i-1}^t) / (2·T_i^t)],  A = T_i^t,  B = 0

These values, when substituted back into equation 2.3, provide the so-called inverse cosine finite difference approximation:

T_i^{t+τ} = T_i^t · exp{-(ατ/d²) · arccos²[(T_{i+1}^t + T_{i-1}^t) / (2·T_i^t)]}    (2.7)

This solution can easily be extended to three dimensions. The exponential and polynomial solutions are similar to the above, but use a different technique to partially solve the diffusion equation and obtain an analytical solution.
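A concrete one-dimensional sketch of the inverse cosine update (equation 2.7) is given below. This is my own illustration, not the thesis code: it assumes fixed boundary temperatures, strictly positive interior values so the arccos argument is well defined, and clips the argument to guard against round-off.

```python
import numpy as np

def inverse_cosine_step(T, alpha, d, tau):
    """One explicit step of the inverse cosine finite difference method:
    T_i(t+tau) = T_i(t) * exp(-(alpha*tau/d**2) * w**2), where
    w = arccos((T_{i+1} + T_{i-1}) / (2*T_i))  (equation 2.7).
    Boundary nodes are held fixed (Dirichlet conditions)."""
    Tn = T.copy()
    ratio = (T[2:] + T[:-2]) / (2.0 * T[1:-1])
    w = np.arccos(np.clip(ratio, -1.0, 1.0))  # clip guards against round-off
    Tn[1:-1] = T[1:-1] * np.exp(-(alpha * tau / d**2) * w**2)
    return Tn
```

For a sine-shaped excess-temperature profile on a slab with zero boundary temperatures, w evaluates to πd/L at every interior node and the update reproduces the analytical decay exp(-απ²t/L²) exactly, which is the sense in which the method "emulates the analytical solution".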
The exponential equation, polynomial equation, inverse cosine equation, and conventional explicit and implicit forms were compared by Bhattacharya and Davies (1987). Error was assessed both in terms of bias error, which is proportional to the sum of the differences between the finite difference solution and the analytical solution, and the RSSD error, proportional to the root of the summed squared differences. The polynomial method was found to be more accurate at longer time steps than the others.

The finite difference approximation was first applied to problems in food processing by Teixeira et al. (1969) for modelling cylindrical containers, and by Manson et al. (1970) for modelling rectangular packages. McGinnis (1986) expanded the approximation to encompass a three dimensional solution with boundary conditions representing a heat transfer coefficient of less than infinity. All the above researchers used a conventional explicit finite difference method.

Finite element methods are more complex than finite difference methods, and more suited to the modelling of irregularly shaped solids. Finite elements were employed in the analysis of food sterilization by Naveh et al. (1983) and Tandon and Bhowmik (1986).

Analytical solutions to the heat conduction equation for simple cases were available (Luikov 1968, Carslaw and Jaeger 1948) but were normally used only to verify the approximations. Analytical solutions consist of infinite series, which require a large number of terms and consequently a lot of computing time for convergence. However, Castillo et al. (1980) developed a computer method based on an analytical solution.

None of the above mentioned studies allowed for the effects of package headspace on the heat transfer rate.
Headspace consists of a small pocket of gas left inside the package after sealing, which expands upon heating, pushing the surface of the package upwards and increasing the resistance to heat transfer. Conversely, a high retort overpressure causes inward deflection of the surface. There is some data regarding the behaviour of headspace in cans (Charm 1978), and with regard to flexible pouches, some researchers determined relationships designed to predict the amount of overpressure required to prevent bursting (Wallenberg and Jarnhall, 1957; Davis et al., 1960; Rubinate, 1964; Whitaker, 1971). Some studies also experimentally determined the effect that entrapped air would have upon the heating rate index (Weintraub et al., 1989; Ramaswamy, 1983). However, no such information was found for microwaveable packages.

2.2.2 Calculating Nutrient Retention

It was often desirable to design processes which would minimize the destruction of nutrients or of organoleptic properties such as colour or flavour in addition to providing adequate sterilization. Bacteria and nutrients or other quality attributes were usually assumed to decrease according to first order kinetics. Hence, the calculation of nutrient retention for a given process could follow the same procedure as the lethality calculation methods described above. The only difference would be in the choice of constants used in either the TDT or the Arrhenius equation to describe the temperature dependence of quality destruction.

However, there were some drawbacks to the use of these methods for calculation of nutrient retention, primarily the calculation of temperature and consequently of bacterial or nutrient reduction at only one single point in the container, generally chosen to be the slowest heating point. Designing for sterilization at this point gave a large safety factor, as all other points in the container would receive an equal or greater heat treatment.
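The first-order kinetics described above can be sketched in a few lines using the TDT model; the parameter values in the usage below are hypothetical, chosen only to make the arithmetic transparent:

```python
def d_value(T, D_ref, T_ref, z):
    # Decimal reduction time at temperature T (TDT model): the time at
    # constant T needed for a one-log (90 %) reduction.
    return D_ref * 10.0 ** ((T_ref - T) / z)

def fraction_remaining(t, T, D_ref, T_ref, z):
    # First-order kinetics at constant temperature T: N/N0 = 10**(-t/D).
    return 10.0 ** (-t / d_value(T, D_ref, T_ref, z))
```

As the section notes, the same calculation serves for either spores or nutrients; only the D and z constants change.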
However, a knowledge of nutrient retention at a single point overestimates this retention and has very little practical meaning.

A means of determining lethality which avoids this problem while allowing for improved accuracy was to calculate lethality at several points over the container, and then calculate a volume average, known as integrated lethality. Except for the methods which involved solving the lengthy and cumbersome analytical solution (Castillo et al. 1980, Flambert and Deltour 1972a, Gillespy 1953), each of the methods described above was modified in order to calculate volume integrated lethality, as described in an excellent overview by Lenz and Lund (1977b).

Additionally, a short-cut method was developed to calculate processing conditions yielding a desired quality factor retention (volume average), or degree of bacterial sterilization. The method, developed by Thijssen et al. (1978), was based on calculations relating various dimensionless parameters descriptive of the process (Fourier number, Biot number, temperature reduction, kinetic factors) to the resulting quality retention. A uniform initial product temperature and constant heating and cooling medium temperatures were assumed, but the technique can be applied to many geometrical shapes. The method was extended by Thijssen and Kochen (1980) to include variable heating and cooling medium temperatures.

2.2.3 Inaccuracies Due to Variability

One of the more pervasive assumptions underlying the many methods of calculating lethality and nutrient degradation was the assumption of uniform parameter values. The retort temperature and cooling water temperature, the initial temperature of the product, the destruction rate coefficients for microorganisms and nutrients, and the thermal properties of the product were all assumed constant over time and space. In reality, each of these factors may fluctuate, affecting the calculated result to varying degrees.
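A minimal sketch of the volume-averaged ("integrated") lethality idea described earlier in this section, assuming equal-volume grid points and the standard reference values T_ref = 121.1 °C and z = 10 °C:

```python
import numpy as np

def point_lethality(temps, dt, T_ref=121.1, z=10.0):
    # F value at one grid point: integrate the TDT lethal rate over time.
    temps = np.asarray(temps, dtype=float)
    return float(np.sum(10.0 ** ((temps - T_ref) / z)) * dt)

def integrated_lethality(histories, dt):
    # Volume average over a uniform grid; each point carries equal volume.
    return float(np.mean([point_lethality(h, dt) for h in histories]))
```

Each grid point supplies its own time-temperature history; points nearer the package surface accumulate more lethality than the slowest heating point.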
Some of the factors mentioned above are more easily manipulated (retort, cooling water, and initial product temperatures), and could be controlled more stringently if necessary. Others, such as microorganism destruction rates and food thermal properties, vary by virtue of their biological nature. Although the inaccuracies contributed by inherent variations in these biological factors cannot be controlled, they can be quantified and their importance evaluated.

Variability has been studied for some of these parameters, in the more common microbes and foodstuffs. Lund (1978) reported the standard deviation of the D value of putrefactive anaerobe (PA) 3679, expressed as a percentage of the mean, to be approximately 10 %. He used 141 data points, taken from data gathered by Bigelow and Esty (1920). Later Patino and Heil (1985) used their own experimental data to calculate the coefficient of variation (CV) of the D value for PA 3679 as 3 to 6 %. For z values characterizing this bacterium, the CV has been reported as 2% by Stumbo et al. (1950), and 9% by Kaplan et al. (1954). Lenz and Lund (1977c) analyzed the TDT equation, applying the 10% value they found to describe variability in D, and obtained a standard deviation of 11% for z. Patino and Heil (1985) found experimentally that standard deviations of z were about half of this.

Uncertainties in parameters describing thermal properties of the product may be described through the coefficient of variation of f_h, the inverse of the slope of the semi-logarithmic heating curve. This has been reported as 7% for conduction heating foods (Hicks, 1961) and 25% for convection heating foods (Herndon, 1971). More recently, Patino and Heil (1985) conducted a series of replicated tests on diced potatoes in brine and found standard deviations in f_h that ranged from 15.9% to 25.6%.
The heating curve was used when calculating process time by formula methods, but other methods of heat penetration calculation required a knowledge of the thermal diffusivity. Teixeira et al. (1975a) found a CV for thermal diffusivity of pea puree of 6.7%, while Lenz and Lund (1977c) reported a 3.3% coefficient of variation. All of the above standard deviations were calculated assuming a Gaussian distribution.

Hicks (1961) conducted an error analysis, using the formula method for process time calculations, and estimating the effect of uncertainty in z and in heat penetration parameters on calculated process time. He found that a 2% CV in the z value would result in 5% uncertainty in process time, while 9% variability would result in a CV of 23%. Powers et al. (1962) experimentally investigated the variability in process lethality for six different types of beans, and found CVs ranging from 16% to 57%. They also noted that the centrepoint lethality values rarely followed a normal distribution, even after transformation to logarithmic, exponential, or reciprocal scales.

Lenz and Lund (1977c) used a Monte Carlo statistical method in conjunction with their own Lethality Fourier method for process time calculations to investigate the effects of biological variability on process lethality. They calculated process lethalities for a 211x300 can with an initial temperature of 60 °C, processed at 121 °C for three different time periods: 42, 60, and 84 minutes. They found that an 11% CV in z and a 3.3% CV in thermal diffusivity would result in the following mean F_0 values and 95% confidence intervals: 2.47 ± 0.6 at 42 minutes, 12.6 ± 3 at 60 minutes, and 32.5 ± 3.8 at 84 minutes.

Patino and Heil (1985) used data from heat penetration experiments to calculate standard deviations in process lethality. Their results agreed with those of Lenz and Lund, with CVs ranging from 11% to 21%.
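The Monte Carlo approach of Lenz and Lund (1977c) can be sketched as follows. This simplified version perturbs only the z value and holds the temperature history fixed, whereas the original study also perturbed the thermal diffusivity and re-ran the heat transfer calculation for each draw:

```python
import numpy as np

def monte_carlo_F0(temps, dt, z_mean=10.0, z_cv=0.11, n=2000, seed=1):
    """Distribution of process lethality under variability in z.

    temps : centrepoint temperature history (deg C), one value per step
    z_cv  : coefficient of variation of z (11 % per Lenz and Lund 1977c)
    Returns (mean, standard deviation) of the sampled F0 values.
    """
    rng = np.random.default_rng(seed)
    z_samples = rng.normal(z_mean, z_cv * z_mean, n)
    temps = np.asarray(temps, dtype=float)
    F0 = np.array([np.sum(10.0 ** ((temps - 121.1) / zi)) * dt
                   for zi in z_samples])
    return float(F0.mean()), float(F0.std())
```

When the process temperature sits exactly at the 121.1 °C reference, the lethal rate is insensitive to z and the spread collapses; away from the reference the sampled F0 values disperse.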
The lower coefficients of variation were obtained at higher heating rates, indicating that variability in both heat penetration and spore destruction contributed to uncertainty at lower heating rates, while at higher heating rates the heat penetration factors dominated as the main source of variance.

2.3 Objective Functions

While the mathematical methods and simulation models used in the optimization process varied widely, other aspects of the process were less fully explored. Decision variables were usually limited to process temperature and time, with a few studies including the effects of package size and shape and product initial temperature. Objective functions other than those pertaining directly to nutrient retention were seldom, if ever, employed.

Norback (1980) pointed out that nutrient retention may not always have been an appropriate choice of objective function, as nutrient retention was seldom an overriding factor in the design of processes. He suggested that an economic objective function, based on costs or profits, could have a more direct impact upon decision making, as it would fit in more easily with the criteria employed by the decision makers. Such a function could still include nutrient retention as one of several cost factors. Lund (1982) also emphasized this point, and suggested as possible alternative choices the minimization of energy consumption, process time, nutrient destruction, or the cost of control, or a combination of these. This suggestion had not until now been undertaken, and this fact is the main justification for the present work.

Although processing cost has never been used as an objective function, there have been energy studies done which attempted to quantify the different costs involved in the sterilization process (Singh 1977, Chhinnan et al. 1980, Paulson et al. 1984, Smith and Tung 1986).
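An economic objective function of the kind Norback and Lund propose could include a steam cost term. The sketch below uses per-retort flow rates of the order reported in these energy studies; the unit steam price and the split between venting and sterilization rates are hypothetical illustration values, not figures from the thesis:

```python
def steam_cost(vent_h, process_h, n_packages,
               vent_rate=900.0, steril_rate=200.0, price_per_kg=0.02):
    """Steam cost component of an economic objective function (a sketch).

    vent_rate, steril_rate : kg steam per hour during venting and
        sterilization (order of magnitude of the published figures)
    price_per_kg : assumed steam price per kg
    Returns (cost per retort batch, cost per package).
    """
    steam_kg = vent_rate * vent_h + steril_rate * process_h
    total = steam_kg * price_per_kg
    return total, total / n_packages
```

A full objective function would add terms for packaging material, labour, and the value of degraded nutrients alongside this energy term.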
Some of the data from these studies could be used in devising an objective function based on total processing cost. For instance, in order to calculate energy costs, steam flow rates would be needed. It was difficult to extract meaningful data on steam flow rates from the literature, as pertinent information such as retort size or amount of product was often not included. Singh (1977) found that the flow rate could depend on something as peripheral to the operation as the size of the steam inlet line, but nevertheless, if measured on a per can basis, the flow rate was fairly consistent. Total steam consumption, for both venting and sterilization together, was 0.12285 kg steam per No. 2 size can. Singh (1977) reported steam flow rates from 1134 to 2722 kg/hour during come-up and venting, and 45 to 68 kg/hour once operating temperature was reached. Chhinnan et al. (1980) gave average flow rates of 2250 kg/hour during venting and 250 kg/hour during sterilization. Paulson et al. (1984) found these rates to be 900 kg/hour and 200 kg/hour respectively. Finally, Smith and Tung (1986) encountered average steam flow rates of 600 kg/hour for venting and 80 kg/hour during sterilization.

2.4 Optimization Studies

2.4.1 Constant Temperature

In general, past optimization studies were concerned with optimizing the retention of nutrients or other quality factors, while still maintaining adequate spore destruction. The processing temperature was varied and the other process parameters needed to provide adequate sterilization value were calculated for each temperature. The nutrient retention was then calculated for each specified process, and the optimum process was that which resulted in the highest retention value. It can be seen from the z values describing spore destruction and nutrient degradation that while both will accelerate at higher temperatures, spore destruction will do so at a greater rate.
It would therefore be expected that high temperature processes, requiring shorter times to attain commercial sterilization, will also result in the smallest amount of nutrient degradation. For slow heating products such as those which heat by conduction, however, other factors counteract this effect. By the time the package centre heats enough to ensure adequate lethality, the product near the outside of the package has been exposed to the retort temperature for a long period of time. At higher temperatures this exposure becomes more pronounced, sometimes resulting in excessive nutrient degradation not compensated for by the savings in overall process time.

The first optimization based on a simulation of the sterilization process was reported by Teixeira et al. (1969). They developed and used a two-dimensional finite difference technique to calculate the time needed to sterilize the product. Processing parameters other than temperature and process time were assigned fixed values for the optimization. For a can size of 307x409 and an initial uniform food temperature of 71 °C, an optimum thiamin retention was found at a process time of 90 minutes at 120 °C. This result closely resembled the conventional process, rather than the high temperature short time processes used for aseptic processing of convection heating products.

Teixeira et al. (1975b) used the same technique to compare nutrient retention for different can shapes, and found that for the same can volume, a length to diameter ratio much lower than unity (a flat disk) resulted in the greatest percentage of nutrients retained. It was noted, however, that the advantage of the greater nutrient retention conferred by this flat disk geometry may be offset by the greater surface area (and hence packaging material) required. Thijssen et al.
(1978) noted that using their calculated relationships between various dimensionless groups and quality retention produced an optimum quality retention at one single value of the Fourier number, namely 0.5. The method was valid provided that resistance to heat transfer at the surface of the container was negligible, that the initial product temperature was uniform and equal to the temperature of the cooling water, and that the temperature of the heating medium did not vary with time.

Ohlsson (1980a,b) conducted the most in-depth optimization study identified in the literature. He used an approach similar to that of Teixeira et al. (1969), expanding it to investigate how the optimum temperature was affected by changes in the package dimensions, choice of optimized nutrient, and initial product temperature. His first study (1980a) was done for cylindrical containers. His results pertaining to can geometry confirmed those of Teixeira et al. (1975b). That is, cans with a length to diameter ratio far from unity provided the highest quality retention. The calculated optima were found to be fairly wide, with a temperature difference of up to 2.5 C° making little difference to product quality. The optimum processing temperature varied from 117 to 119 °C, and was found to decrease with increasing can size.

The optimization was done for three hypothetical nutrients, each degraded according to the same TDT formula, but with a different z value. Increasing the z value of the nutrient or increasing the sterilization requirement (F_0 value) resulted in an increase in the optimum process temperature. The optimum process temperature did not change as product initial temperature was decreased, but the process time increased, and the resulting quality retention decreased. The factor with the greatest influence on optimal temperature and quality retention was the can dimensions.
The second study (1980b) was similar, but done for flat containers, using a finite difference method which modelled the heat transfer in one dimension. Packages of different shapes but similar volumes could not be compared, as only one dimension was represented. The conclusions regarding the effects of changing parameters on the optimum processing requirements resulting from this study were similar to those of the previous study.

A different approach was taken by Saguy (1988), who studied the optimization problem as it applied to aseptic processing. The problems associated with calculating the temperature profile of the product were bypassed by ignoring the heating section and assuming that the product in the holding tube was at a constant and uniform temperature. With this simplification, the equations describing sterilization and nutrient degradation could easily be linearized by taking their logarithm. Linear programming was then applied to find the optimal temperature.

The approach was tested by performing an optimization, with the objective function defined as the maximum retention of thiamin and chlorophyll (each weighted equally), subject to a minimum lethality of 5 minutes. The optimal process temperature was found to be 130.8 °C. It was postulated that the method could be extended to allow more flexibility if it were possible to calculate the time-temperature profile needed on the outside of the container which would result in the desired time-temperature combination within. Such a method could be based upon work done by Swartzel (1986), which presents a technique for describing any process by an equivalent single time and temperature.

2.4.2 Variable Temperature

If the requirement that processing temperature may not vary with time is relaxed, the optimization problem is transformed from one of simply finding the optimum processing temperature to one of finding an optimum temperature profile.
The experimental trial and error required had formerly rendered such a study impractical, but new calculation techniques allowed the problem to be tackled. Teixeira et al. (1975b) used their finite difference simulation technique to identify an optimum temperature profile. In order to limit the vast number of temperature profiles possible, only step, ramp, or sinusoidal functions were used. A single representative function resulting in the greatest nutrient retention was found by trial and error from each class of functions, and these were compared. Little difference was found between the thiamin retention of the different optimal processes.

Other researchers (as described below) used dynamic programming techniques to determine the optimum time-temperature profile. These techniques could be applied to continuous processes, where the parameters to be manipulated were not decision variables, but decision "trajectories" given by differential equations. The methods most frequently applied in food processing oriented problems were based on Pontryagin's continuous maximum principle (Pontryagin et al. 1962).

To apply the maximum principle, the process to be optimized was described by ordinary differential equations, and the objective function was defined as a linear combination of the final values of the decision variables (such as process duration or temperature). The state equations and the objective function were adjoined via a distributed costate variable vector to create a Hamiltonian function. The continuous maximum principle states that the optimal solution to the problem as defined above was a continuous control vector, found by minimizing the Hamiltonian function.

Saguy and Karel (1979) were the first researchers to apply formal optimization techniques to food processing optimization (Holdsworth, 1986).
They used the continuous maximum principle to obtain the temperature profile which provided optimal thiamin retention in cans of both pork and pea puree. They applied the maximum principle only to the thermal conduction process, assuming that other processes such as bacterial lethality could be "lumped", rather than described as continuous processes varying over time. The resulting profile was fairly complex, but gave further improvement in thiamin retention over the optimizations of Teixeira et al. (1975b) and Thijssen et al. (1978).

Hildenbrand (1980) developed a different procedure to optimize retort temperature profile. He separated the problem into two sections - a process engineering problem identifying which temperature profile is optimal, and a control engineering problem finding the best way to attain this optimal profile. Much of his paper was devoted to proving that such a separation is permissible.

Other researchers (Nadkarni and Hatton, 1985) believed the complex temperature profile found by Saguy and Karel (1979) to be a spurious result, an artifact attributable to their lumping procedure, and to the fact that they used the TDT equation rather than the Arrhenius equation to describe nutrient and microorganism destruction. They also pointed out that neither Teixeira et al. (1975b) nor Saguy and Karel (1979) accounted adequately for thiamin destruction during the cooling period. Nadkarni and Hatton (1985) repeated the optimization of pea puree using the dynamic programming procedure to describe all aspects of the sterilization process, including nutrient and microorganism changes over time. They found that the optimal profile was a simple constant temperature profile (only one heating and cooling cycle), with the heating and cooling occurring as quickly as possible. The percentage of thiamin retained by the procedure was comparable to that found by Saguy and Karel (1979), with an additional percentage point destroyed during cooling.
Chapter 3

METHODS

To carry out the optimization study, a mathematical simulation of the retorting process and a function to calculate the cost of any given process were developed. The mathematical model simulated the temperature history throughout the product during retorting, and from this calculated product lethality and nutrient degradation. This information, along with information about process parameters such as package size, retort temperature and process duration, was required for the cost function. The optimization method then used the cost function results to suggest potential optimum processes.

The development of the mathematical model used to simulate the retorting process and the use of this model to calculate process lethality and nutrient degradation are described in Section 3.1. The means by which the model was verified before use in the optimization is outlined in Section 3.3. Section 3.2 describes the optimization method itself, and the cost function developed to calculate total processing cost from the parameters describing the process.

3.1 Mathematical Model

3.1.1 Simulation of the Time-Temperature Profile

The assumptions made concerning the transfer of heat into the product were that it occurred completely by conduction, and that thermal properties such as thermal conductivity (k), specific heat (c), and thermal diffusivity (\alpha) were isotropic, uniform throughout the product, and did not vary significantly with product temperature. The dimensions of the food package approximated a regular parallelepiped, and heat was transferred through all sides of the package. The equation governing this form of heat transfer is the heat diffusion equation below:

\frac{1}{\alpha}\frac{\partial T}{\partial t} = \frac{\partial^2 T}{\partial x^2} + \frac{\partial^2 T}{\partial y^2} + \frac{\partial^2 T}{\partial z^2}   (3.8)

In this equation x, y, and z are the axes in cartesian coordinates, and T is the temperature at point (x,y,z) in the package at time t.
Finite differences were used to approximate the heat transfer equation in order to calculate the transient temperature distribution within the product. The continuous temperature distribution within the solid was approximated by a discrete function in which temperature was only known at node points located at the intersections of the finite difference grid. As the temperature distribution was transient, time was discretized as well as space, and temperatures were calculated for each time step at each node point.

Difference methods based on Taylor series approximations were used to approximate the continuous differential equation. These were found by taking the Taylor series expansion of a node point \Phi_i in terms of its surrounding points \Phi_{i+1} and \Phi_{i-1}:

\Phi_{i+1} = \Phi_i + \Delta x\,\frac{\partial\Phi}{\partial x} + \frac{\Delta x^2}{2!}\frac{\partial^2\Phi}{\partial x^2} + \frac{\Delta x^3}{3!}\frac{\partial^3\Phi}{\partial x^3} + \ldots   (3.9)

\Phi_{i-1} = \Phi_i - \Delta x\,\frac{\partial\Phi}{\partial x} + \frac{\Delta x^2}{2!}\frac{\partial^2\Phi}{\partial x^2} - \frac{\Delta x^3}{3!}\frac{\partial^3\Phi}{\partial x^3} + \ldots   (3.10)

These equations were added together and solved for \partial^2\Phi/\partial x^2 to get the following:

\frac{\partial^2\Phi}{\partial x^2} = \frac{\Phi_{i+1} - 2\Phi_i + \Phi_{i-1}}{\Delta x^2} + \ldots   (3.11)

which became the central difference approximation when the higher order terms were ignored. It is so called because it expresses the derivative at the point in terms of the node points surrounding it, i.e. the next point and the previous point. Alternatively, the equations can be rearranged before summing to obtain the forward and backward difference approximations.

The present study required a finite difference formulation in three dimensions, in order to model packages of differing size. Bhattacharya's (1985) polynomial method was therefore extended to three dimensions.
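The central difference approximation of equation 3.11 can be written compactly for a whole grid of nodes at once; the function below is a minimal sketch, and is exact for quadratic profiles:

```python
import numpy as np

def central_second_derivative(phi, dx):
    # Central difference approximation (equation 3.11) at interior nodes:
    # (phi[i+1] - 2*phi[i] + phi[i-1]) / dx**2
    phi = np.asarray(phi, dtype=float)
    return (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / dx ** 2
```

Applied to phi = x**2 on a uniform grid, it returns exactly 2 at every interior node, since the truncated higher-order terms vanish for quadratics.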
The heat diffusion equation was separated into a function of space and a function of time as follows:

T(x,y,z,t) = \Phi(x,y,z)\,\Theta(t)   (3.12)

Taking a derivative with respect to time gave:

\frac{\partial T}{\partial t} = \Phi(x,y,z)\,\frac{d\Theta}{dt}   (3.13)

and taking derivatives with respect to space gave:

\frac{\partial^2 T}{\partial x^2} = \frac{\partial^2\Phi}{\partial x^2}\,\Theta(t)   (3.14)

\frac{\partial^2 T}{\partial y^2} = \frac{\partial^2\Phi}{\partial y^2}\,\Theta(t)   (3.15)

\frac{\partial^2 T}{\partial z^2} = \frac{\partial^2\Phi}{\partial z^2}\,\Theta(t)   (3.16)

Substituting equations 3.13, 3.14, 3.15 and 3.16 for the derivatives in equation 3.8 we obtained:

\frac{1}{\Theta(t)}\,\frac{d\Theta}{dt} = \frac{\alpha}{\Phi(x,y,z)}\left[\frac{\partial^2\Phi}{\partial x^2} + \frac{\partial^2\Phi}{\partial y^2} + \frac{\partial^2\Phi}{\partial z^2}\right]   (3.17)

The left hand and right hand sides of equation 3.17 were both set equal to a constant, -R, to give two equations:

\frac{1}{\Theta(t)}\,\frac{d\Theta}{dt} = -R   (3.18)

\frac{\alpha}{\Phi(x,y,z)}\left[\frac{\partial^2\Phi}{\partial x^2} + \frac{\partial^2\Phi}{\partial y^2} + \frac{\partial^2\Phi}{\partial z^2}\right] = -R   (3.19)

Both sides of equation 3.18 were multiplied by \Phi(x,y,z):

\frac{\Phi(x,y,z)}{\Theta(t)}\,\frac{d\Theta}{dt} = -\Phi(x,y,z)\,R   (3.20)

or

\frac{\Phi(x,y,z)}{\Theta(t)\,\Phi(x,y,z)}\,\frac{d\Theta}{dt} = -R   (3.21)

Equation 3.12 was used to replace the left hand side denominator in the above equation:

\frac{\Phi(x,y,z)}{T(x,y,z,t)}\,\frac{d\Theta}{dt} = -R   (3.22)

and equation 3.13 was then used to obtain:

\frac{1}{T(x,y,z,t)}\,\frac{\partial T}{\partial t} = -R   (3.23)

The above equation can be solved through integration, yielding, for any time t = \tau:

T(x,y,z,\tau) = T(x,y,z,0)\,\exp\{-R\tau\}   (3.24)

To evaluate R, both sides of equation 3.19 were multiplied by \Theta(t):

\frac{\alpha\,\Theta(t)}{\Phi(x,y,z)}\left[\frac{\partial^2\Phi}{\partial x^2} + \frac{\partial^2\Phi}{\partial y^2} + \frac{\partial^2\Phi}{\partial z^2}\right] = -\Theta(t)\,R   (3.25)

or

\frac{\alpha\,\Theta(t)}{\Theta(t)\,\Phi(x,y,z)}\left[\frac{\partial^2\Phi}{\partial x^2} + \frac{\partial^2\Phi}{\partial y^2} + \frac{\partial^2\Phi}{\partial z^2}\right] = -R   (3.26)

Again, equation 3.12 was used to replace the left hand side denominator in the above equation:

\frac{\alpha\,\Theta(t)}{T(x,y,z,t)}\left[\frac{\partial^2\Phi}{\partial x^2} + \frac{\partial^2\Phi}{\partial y^2} + \frac{\partial^2\Phi}{\partial z^2}\right] = -R   (3.27)

which, at time t = 0 (taking \Theta(0) = 1, so that \Phi(x,y,z) = T(x,y,z,0)), equals:

\frac{\alpha}{T(x,y,z,0)}\left[\frac{\partial^2\Phi}{\partial x^2} + \frac{\partial^2\Phi}{\partial y^2} + \frac{\partial^2\Phi}{\partial z^2}\right] = -R   (3.28)

The value of R given by equation 3.28 was substituted into equation 3.24 to give:

T(x,y,z,\tau) = T(x,y,z,0)\,\exp\left\{\frac{\alpha\tau}{T(x,y,z,0)}\left[\frac{\partial^2\Phi}{\partial x^2} + \frac{\partial^2\Phi}{\partial y^2} + \frac{\partial^2\Phi}{\partial z^2}\right]\right\}   (3.29)

The solution then proceeded by difference methods. Instead of the temperature at time 0, the temperature at the previous time step was used.
Imposing a finite difference grid means that x, y, and z were replaced by node points i, j, and k, and the infinitesimal distances dx, dy, and dz were replaced by the grid spacing d. The partial differentials remaining on the right hand side of the equation were replaced by their central difference counterparts. The completed exponential solution became:

T_{i,j,k}^{t+1} = T_{i,j,k}^{t}\,\exp\left\{\frac{\alpha\tau}{d^2}\;\frac{T_{i+1,j,k}^{t} + T_{i-1,j,k}^{t} + T_{i,j+1,k}^{t} + T_{i,j-1,k}^{t} + T_{i,j,k+1}^{t} + T_{i,j,k-1}^{t} - 6T_{i,j,k}^{t}}{T_{i,j,k}^{t}}\right\}   (3.30)

The polynomial solution was essentially a modification of the above exponential solution for the purpose of faster computation. Bhattacharya (1985) derived the polynomial solution from the exponential solution by noting that the exponential e^u can be expressed as an infinite series:

e^u = 1 + u + \frac{u^2}{2!} + \frac{u^3}{3!} + \ldots   (3.31)

This series was truncated after four terms and the approximation was substituted for the exponential in equation 3.30. This modification allowed the calculations, when performed on a computer, to proceed more quickly than the exponential form, and completed the polynomial finite difference method.

To obtain the initial conditions, it was assumed that the entire package was at a uniform initial temperature. Boundary conditions of the third kind were implemented. For first kind boundary conditions, temperature was directly specified at the surface of the package. Third kind boundary conditions involved modelling heat transfer across the package surface according to Newton's law:

k\,\frac{\partial T}{\partial n} = h\,(T_\infty - T_s)   (3.32)

where h was the convective heat transfer coefficient (the inverse of the resistance), T_\infty was the environment or retort temperature, T_s was the temperature of the surface of the solid, n was the outward normal to the boundary across which heat was diffusing, and \partial T/\partial n was the temperature gradient along the normal vector. Temperature right at the package surface was calculated using temperature values at nodes just inside and just outside the package.
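One explicit step of the three-dimensional exponential scheme (equation 3.30) might be implemented as below. Holding the boundary nodes fixed is a simplification for illustration only; the thesis instead applies third-kind boundary conditions through pseudo-nodes:

```python
import numpy as np

def exponential_step_3d(T, alpha, d, tau):
    """One explicit step of the 3-D exponential scheme (equation 3.30).

    T must hold positive temperature values (the update divides by the
    local temperature). Interior nodes only; boundary nodes are held
    fixed here for brevity.
    """
    centre = T[1:-1, 1:-1, 1:-1]
    # Six-neighbour central difference numerator of the Laplacian.
    lap = (T[2:, 1:-1, 1:-1] + T[:-2, 1:-1, 1:-1] +
           T[1:-1, 2:, 1:-1] + T[1:-1, :-2, 1:-1] +
           T[1:-1, 1:-1, 2:] + T[1:-1, 1:-1, :-2] - 6.0 * centre)
    T_new = T.copy()
    # Polynomial variant: replace np.exp(u) with 1 + u + u**2/2 + u**3/6.
    T_new[1:-1, 1:-1, 1:-1] = centre * np.exp(
        alpha * tau / d ** 2 * lap / centre)
    return T_new
```

A cold centre node surrounded by hotter boundary nodes moves toward the boundary temperature on each step, as the scheme intends.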
Since any node just outside the package was outside of the domain defined by the finite difference grid, it was necessary to extend the grid to include pseudo-nodes outside the solid itself. This is illustrated in Figure 3.2.

Figure 3.2: Extension of the Finite Difference Grid to Apply Convective Boundary Conditions

As an example of how these boundary conditions were applied, take the surface defined by j = 1. The finite difference equation for temperature at the surface became:

T_{i,1,k}^{t+1} = T_{i,1,k}^{t}\,\exp\left\{\frac{\alpha\tau}{d^2}\;\frac{T_{i+1,1,k}^{t} + T_{i-1,1,k}^{t} + T_{i,2,k}^{t} + T_{i,0,k}^{t} + T_{i,1,k+1}^{t} + T_{i,1,k-1}^{t} - 6T_{i,1,k}^{t}}{T_{i,1,k}^{t}}\right\}   (3.33)

This contained a term T_{i,0,k}^{t} which was not located on the grid, and for which therefore no temperature value existed. The central difference approximation to Newton's law as applied to heat flowing across the surface defined by j = 1 was:

k\,\frac{T_{i,0,k}^{t} - T_{i,2,k}^{t}}{2d} = h\,(T_\infty - T_{i,1,k}^{t})   (3.34)

This was solved for the pseudo-node value T_{i,0,k}^{t}, which was then replaced into finite difference equation 3.33 as a pseudo-value representing an equivalent outside node temperature. This is a standard method of formulating boundary conditions of the third kind (Jaluria and Torrance 1986), which Bhattacharya (1986) proposed as a viable method for application to his novel finite difference approximations.

Equation 3.30, along with the initial and boundary conditions described above, allows a full implementation of the finite difference solution. The complete implementation also required choosing values for the finite difference grid spacing and time step, and the thermal properties of the product. The values chosen affected the method's stability, precision, and computer run time.

The basic model was extended to account for the effects of a headspace below the package surface. The headspace was modelled as dome-shaped, with the top of the dome located at the centre of the package surface.
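The pseudo-node substitution of equation 3.34 reduces to a one-line formula; the helper below is a sketch using the j = 1 surface convention from the text:

```python
def pseudo_node_temperature(T_inside, T_surface, T_retort, h, k, d):
    """Pseudo-node value T[i,0,k] from the central difference form of
    Newton's law at the j = 1 surface (equation 3.34):

        k * (T_pseudo - T_inside) / (2*d) = h * (T_retort - T_surface)

    where T_inside is the node one grid spacing inside the surface.
    """
    return T_inside + (2.0 * d * h / k) * (T_retort - T_surface)
```

When the retort and surface temperatures coincide there is no net flux, and the pseudo-node simply mirrors the interior node; when the retort is hotter, the pseudo-node exceeds the interior value so that the stencil drives heat inward.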
The dome could be considered to be the top segment of an imaginary sphere, as shown in Figure 3.3. The radius (r) of the sphere was calculated using the formula for sphere segment volume:

V = \pi \, maxh^2 \, (r - maxh/3)   (3.35)

given values for the dome volume (V) and the height of the dome at its highest point (maxh).

An inverse relationship was assumed between the pressure and the values of V and maxh. Maximum values were assigned to each of V and maxh, and then the inverse relationship could be used to calculate their values at any pressure. The maximum value for volume was chosen to be 10 % of total package volume (it is recommended that headspace in cans sealed at 54.5 °C not exceed 10 % - Brennan et al. 1976). The maximum value for height of the dome at its highest point was chosen as 0.7 cm. The equations relating these values to pressure were then:

Vol (cm³) = 35 / Press (atm)
Maxhead (cm) = 0.7 / Press (atm)

Knowing the radius of the sphere allowed us to calculate the height of the headspace bubble at any point above the package surface. This was done by first assuming a coordinate system with its origin located on the centreline of the package, at the centre of the imaginary sphere (see Figure 3.3).

Figure 3.3: A Model of Package Headspace

Any point on the domed surface of the package could be described as a point in space (a,b,c). Given a finite difference grid with spacing d, points corresponding to node points on the grid could be calculated as functions of package width (xmax) and package length (zmax):

a = (xmax/2) - d\,(i - 1)
c = (zmax/2) - d\,(k - 1)

These values were substituted into the formula for a sphere:

a^2 + b^2 + c^2 = r^2

and the result was solved for b:

b = \sqrt{r^2 - a^2 - c^2}   (3.36)

This was used to calculate the height of the headspace above the surface of the food
package as:

H = maxh - (r - b)   (3.37)

Once headspace height was known at any node point, heat transfer across it could be calculated by using the same finite difference equation as was used for the rest of the product, but with different values for thermal diffusivity and conductivity. An alternative method to calculate the heat transfer across the headspace was also developed. A steady state resistance to heat transfer, such as that described in Kreith and Black (1980), was used to approximate the transient condition. An overall heat transfer coefficient (U) was calculated from the convective resistance at the package surface and the conduction resistance across the headspace as follows:

1/U = 1/h + H/k   (3.38)

where k refers to the thermal conductivity of the headspace and H to the headspace thickness. Because the package heats from the surface downward, the warmer air in the headspace, which would normally tend to rise, is already near the top of the package. This creates a fairly stable situation and inhibits convective transfer. Free convection in the headspace was therefore ignored. This assumption could make a great difference to heat transfer across the headspace, and should be verified at some point. Studies such as that of Weintraub et al. (1989) have shown that the presence of a headspace can have a large effect on the temperature profile obtained, which supports the assumption that free convection is negligible. It was further assumed that the conductive resistance of the package material itself was already incorporated into the h value. Modified boundary conditions were then calculated by substituting the coefficient U for h in equation 3.32 to describe heat transfer across the package surface along the top boundary. This was a steady state approximation to a transient heat flow situation.
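The headspace geometry and the series-resistance coefficient can be sketched as follows. This is a sketch under the assumptions stated in the text (dome volume and peak height inversely proportional to pressure, maxima of 35 cm^3 and 0.7 cm); the function names are invented for illustration.

```python
import math

def headspace_height(press_atm, xmax, zmax, d, i, k_node):
    """Headspace height H (cm) above grid node (i, k_node), following the
    sphere-segment model of equations 3.35-3.37."""
    vol = 35.0 / press_atm            # cm^3, 10 % of a 350 cm^3 package
    maxh = 0.7 / press_atm            # cm, dome peak height
    r = vol / (math.pi * maxh ** 2) + maxh / 3.0   # eq. 3.35 solved for r
    a = xmax / 2.0 - d * (i - 1)
    c = zmax / 2.0 - d * (k_node - 1)
    b = math.sqrt(r ** 2 - a ** 2 - c ** 2)        # eq. 3.36
    return maxh - (r - b)                          # eq. 3.37

def overall_coefficient(h, k_head, H):
    """Series resistances, eq. 3.38: 1/U = 1/h + H/k."""
    return 1.0 / (1.0 / h + H / k_head)
```

At the package centreline (a = c = 0) the routine returns the full dome height maxh, and the overall coefficient U is always smaller than h, reflecting the added conductive resistance of the gas layer.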
This was appropriate for long time steps, and for conduction situations involving low heat capacity and high thermal diffusivity, as is the case with the gas in the headspace.

A high quality product of importance to the British Columbia economy is salmon. This product has recently been studied in terms of its sterilization in rectangular packages (Collins, 1989). Values for the thermo-physical parameters of the substance in the package were therefore chosen to reflect the properties of salmon. A thermal conductivity of 0.5 W/m K (Lewis 1987) and a thermal diffusivity of 1 x 10^-5 m^2/min (Smith 1978) were used. Values for the headspace properties were estimated from Weast (1980) for air. A headspace thermal conductivity of 0.03 W/m K and thermal diffusivity of 0.002 m^2/min were used. A value was also needed for the convective heat transfer coefficient (h). This value was sensitive to processing conditions and could vary by orders of magnitude (Ramaswamy, 1983). The value used here was 200 W/m^2 °C, a common value for the overall convective resistance of a microwaveable package heated with 100 % saturated pressurized steam, including convection and the conduction resistance of the package material itself.

3.1.2 Calculation of Lethality and Nutrient Degradation

Both bacterial lethality and destruction of nutrients were calculated from the temperature profile obtained. Lethality was calculated using only centrepoint temperature values. This is the standard procedure for lethality calculations, and there exist many tables outlining, for various products, the centrepoint lethality levels which must be attained in order to achieve commercial sterility. A centrepoint lethality (F_0 value) was calculated using the TDT equation (equation 2.1) at each time step, and integrating the result over time as follows:

F_0 = \int_0^{time} 10^{(T - T_{ref})/z} \, dt   (3.39)

The values chosen for T_{ref} and z were 121.1 °C and 10 °C respectively.
These are representative of Clostridium botulinum, a relatively heat resistant species, and the standard spore for which desired destruction levels are specified in the processing of low acid foods (Thorne, 1985). Nutrient retention was calculated as a volume averaged quantity, which required a knowledge of the temperature profile at several points throughout the package, increasing the computer memory needed. An initially uniform concentration of nutrients was assumed, and the reduction in concentration was calculated using the method described by Thorne (1985). This was:

C_f / C_0 = \frac{1}{V} \int_V \exp\left[ -\frac{\ln 10}{D} \int_0^{time} 10^{(T - T_{ref})/z} \, dt \right] dV   (3.40)

The D and z values used were characteristic of thiamin, one of the more commonly modelled nutrients. Holdsworth (1985) gave a table of commonly used z values for thiamin in meat products, and from these the value of 26 °C was chosen as representative. The D value used was taken from Teixeira et al. (1975a) as 178.6 minutes. Both equations required a numerical integration of the lethality values over time. Lethality values are very sensitive to changes in temperature because of the exponential term in the TDT formula. If temperature is changing rapidly with time, the even more extreme changes of lethality with time could result in inaccurate integration. It was not desirable to use smaller time intervals in the integration, as this would increase the computer run time. For this reason, in addition to Simpson's rule (Press et al. 1988), a curve fitting technique was developed to calculate the lethality integral. The curve fitting technique was based on the assumption that the values of lethality plotted against time could be approximated by an exponential curve. Given any three
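The two quantities can be sketched directly from equations 3.39 and 3.40. This is an illustrative sketch (trapezoidal and rectangle-rule quadrature, invented function names), not the thesis program, using the parameter values stated in the text (T_ref = 121.1 °C, z = 10 °C for lethality; D = 178.6 min, z = 26 °C for thiamin).

```python
import math

T_REF = 121.1  # reference temperature, degrees C

def f0_value(temps, dt, z=10.0):
    """Trapezoidal integration of the TDT lethal rate (equation 3.39)
    over a centrepoint temperature history sampled every dt minutes."""
    rates = [10.0 ** ((T - T_REF) / z) for T in temps]
    return dt * (sum(rates) - 0.5 * (rates[0] + rates[-1]))

def thiamin_retention(histories, node_vols, dt, D=178.6, z=26.0):
    """Volume-averaged Cf/C0 (equation 3.40): each node's temperature
    history is reduced with thiamin's D and z values, then the survival
    fractions are averaged with node-volume weights."""
    total = sum(node_vols)
    acc = 0.0
    for temps, v in zip(histories, node_vols):
        integral = dt * sum(10.0 ** ((T - T_REF) / z) for T in temps)
        acc += v * math.exp(-(math.log(10.0) / D) * integral)
    return acc / total
```

Holding the product at exactly 121.1 °C for ten minutes gives F_0 = 10 min, and holding it there for one D value (178.6 min) leaves a retention of 10 %, which are useful sanity checks on any implementation.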
points on the curve, a coordinate transformation was used to put the origin at the first of these points, as shown in Figure 3.4.

Figure 3.4: Applying the Curve-Fitting Integration Method to a Lethality-Time Profile

The fitted curve was then described by:

\frac{L_2 - L_1}{L_3 - L_1} = \left( \frac{t_2 - t_1}{t_3 - t_1} \right)^b   (3.41)

or L = a \cdot t^b, where:

b = \ln\left[ \frac{L_2 - L_1}{L_3 - L_1} \right] \bigg/ \ln\left[ \frac{t_2 - t_1}{t_3 - t_1} \right]   (3.42)

a = \frac{L_2 - L_1}{(t_2 - t_1)^b}   (3.43)

Integrating gave:

\int L \, dt = \left[ \frac{a}{b+1} \, t^{b+1} \right]_{t=0}^{t=time}   (3.44)

The integral was calculated from t_1 to t_3 for the first three points on the curve, then from t_2 to t_3 for each subsequent set of three, giving the area under the transformed curve. The rectangular area given by L_1 (t_3 - t_2) was then added in to include the area omitted by the transformation of coordinates. The resulting areas were summed for every set of three points to give total area under the curve, which was the F_0 value.

3.2 Optimization Technique

The random-centroid optimization method, developed by Aishima and Nakai (1986), was simple in principle and procedure, had the versatility to allow optimization involving from three to five decision variables, and incorporated simultaneous runs, rather than requiring sequential iteration. These latter points are particularly important when dealing with food processing situations. Typical parameters which describe a sterilization run may include retort pressure, temperature, initial food temperature, cooling water temperature, duration of a run, etc. These parameters are variable, controllable, and have a direct effect on product cost or quality. They should therefore be included as decision variables, which made a multivariable optimization method essential. Another advantage of using the random-centroid method was that the value to be optimized did not need to be expressed either as an explicit (algebraically expressible) function of the decision variables, or even as an absolute measurement.
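The curve-fitting integration of equations 3.41 to 3.44 can be sketched for a single three-point span. This is a simplified single-span version (the thesis accumulates overlapping three-point segments, adding the rectangle L_1(t_3 - t_2) for each); the function name is invented, and the rectangle here covers the whole span t_3 - t_1.

```python
import math

def curvefit_area(t1, L1, t2, L2, t3, L3):
    """Area under a lethality-time curve through three points, per
    equations 3.41-3.44: shift the origin to (t1, L1), fit L' = a * t'^b
    through the two remaining shifted points, integrate analytically,
    then add back the rectangle L1 * (t3 - t1) removed by the shift."""
    b = math.log((L2 - L1) / (L3 - L1)) / math.log((t2 - t1) / (t3 - t1))
    a = (L2 - L1) / (t2 - t1) ** b                      # eq. 3.43
    transformed = a / (b + 1.0) * (t3 - t1) ** (b + 1.0)  # eq. 3.44
    return transformed + L1 * (t3 - t1)
```

For data lying exactly on a power curve through the origin the result is exact, and for other smooth monotone profiles it is a close approximation, which is the property the method relies on for steep lethality-time profiles.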
An ordinal scale in which values are compared against one another was all that was required.

3.2.1 Approach Followed

Several different approaches to using the optimization method were undertaken. These were: an initial trial method using a simple objective function, and three runs involving the optimization of a more complex and comprehensive objective function. The aim of the objective function devised for the trial optimization was simply to include a term related to lethality. This term was designed to eliminate optimal solutions involving lethalities below that required for commercial sterility. Thus, lethality could be treated as a constraint, rather than as a dependent variable which had to be calculated given the retort temperature.

Decision Variables       Trial Run               Subsequent Runs
process duration         20 to 120 minutes       10 to 90 minutes
retort temperature       100 to 200 °C           100 to 165 °C
package width            2 to 8 cm               2 to 8 cm
package length           6 to 16 cm              6 to 16 cm
retort pressure          1 to 4 atmospheres      -
initial temperature      -                       20 to 65 °C

Table 3.1: Initial Ranges for the Optimization Runs

By contrast, the more complex cost function was designed to apply realistic costs to each of the decision variables involved in the optimization, rather than to simply address concerns about maximizing product quality independently of any related costs. Both the trial function and the complex objective function are thoroughly described in the next section. The optimization method allowed up to a maximum of five decision variables. The five decision variables chosen for the trial run, along with the initial ranges assigned to them, are listed in Table 3.1. The other variables in the simulation model (which could be manipulated if desired) were assigned constant values.
These included package initial temperature, which was assumed uniform and assigned a value of 21 °C, cooling water temperature, also assigned a value of 21 °C, and the duration of the cooling process, set at five minutes. As can be seen, there were only two variables describing package dimensions. The missing variable, package height, was dependent on the other two. It was calculated as:

height = (package volume) / (length x width)

This was rounded off to the nearest 2 cm in order to meet the demands imposed by the finite difference grid size. A standard package volume of 350 cubic cm was used. Different decision variables were chosen for the three optimization runs done using the more complex cost function. For run I, the maximum number of decision variables was used, as listed with their initial ranges in Table 3.1. In run II, a modification was implemented in which constraints were placed upon package size. The variables describing package size were removed from the set of decision variables, leaving only retort temperature, initial temperature, and process duration. Package length, height and width (x, y and z) were held constant at 4 cm, 14 cm, and 6 cm respectively, and the remaining variables were assigned initial ranges unchanged from those specified in Table 3.1. This was done in order to check the effect of the package size restriction that was part of run I, namely that package dimensions must be multiples of two cm. In run III, process duration was considered a variable dependent upon lethality, rather than an independent variable. As a result, the process duration was removed from the set of decision variables, and package size was reintroduced. Lethality was not treated as a constraint, but was instead removed from the cost function, and a lethality of six minutes was assured by adjusting process duration.
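The dependent-height calculation with the 2 cm grid rounding can be sketched as follows; the function name and the floor of one grid spacing are illustrative assumptions.

```python
def package_height(volume_cm3, length_cm, width_cm, grid_cm=2.0):
    """Dependent third package dimension: volume / (length * width),
    rounded to the nearest multiple of the 2 cm grid spacing required
    by the finite difference mesh (never less than one spacing)."""
    h = volume_cm3 / (length_cm * width_cm)
    return max(grid_cm, round(h / grid_cm) * grid_cm)
```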
A few modifications to the finite difference simulation became necessary in order to calculate, for any given process, the time needed to produce precisely the required lethality. The length of the cooling period was no longer assigned a constant value of five minutes. Rather, it was calculated (iteratively) so as to ensure that the desired lethality of six minutes was reached by the time the centre temperature had cooled to 115 °C or below. The effectiveness of the trial run and the three later runs was evaluated by ascertaining whether or not a minimum was found, and whether or not this was a global or local minimum. Once a minimum was found for a given run, the possibility that it was a local, rather than global, minimum was checked in one of two ways. Either the same optimization run was redone, to see if the same result was obtained each time, or the optimization results were verified against a large number of random cost results produced by a Monte Carlo simulation.

3.2.2 Objective Functions Used

Two objective functions were developed. One was a simple objective function used for the trial optimization run. The other, used with all the later optimization runs, was a more comprehensive cost function. The main goal of the objective function devised for the trial run was to incorporate lethality as a constraint by assigning a prohibitive cost for any lethality values less than that required for commercial sterilization. For fish, the required centrepoint lethality value is 6 minutes (Thorne, 1985), and any lethality less than this was assigned a cost orders of magnitude higher than any of the other costs. Lethality values greater than six minutes were also assigned a cost, as greater lethality values imply greater degradation of flavour, texture or other desirable qualities. This cost was calculated as a linear function:

Lethality cost = 5 \cdot F_0 - 30

This gave a cost of zero for a lethality value of exactly six minutes.
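The trial-run lethality term can be sketched directly; the magnitude of the prohibitive penalty is an assumed value, since the text specifies only that it be orders of magnitude larger than the other costs.

```python
def trial_lethality_cost(F0, required=6.0, penalty=1.0e6):
    """Trial-run lethality term: a prohibitive cost below commercial
    sterility, and the linear penalty 5*F0 - 30 above it, giving zero
    cost at exactly F0 = 6 minutes."""
    if F0 < required:
        return penalty
    return 5.0 * F0 - 30.0
```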
All other costs were related in a simple fashion to decision variables in the optimization. Material cost was calculated as a constant multiplied by the surface area of the package. Retort temperature and pressure and process duration were also assigned cost values linearly proportional to their magnitude. The constants were chosen to produce costs of approximately equal orders of magnitude for each variable, in order to ensure that no single variable affected the result to a disproportionate degree. Total cost was calculated by summing all the individually assigned cost values.

As can be seen from the literature review, there is a need for objective functions which reflect a more general concern than the maximization of product quality alone. The comprehensive cost function was developed to minimize total processing cost. This was calculated as a function of package material costs, cost of energy needed, cost of labour and capital required for sterilization, and cost associated with product quality. These costs were each calculated separately, and then summed to get the final cost. Cost of packaging materials was calculated by multiplying the package surface area by a cost per square cm of package material. A material cost of 0.019 cents per square cm was used (Durance, 1990). The cost of food material inside the packages was not considered, as this was not one of the factors (decision variables) which could vary in the optimization. Energy costs were determined by calculating the energy required to perform one sterilization in the batch retort. All the required energy was converted into kWh, multiplied by a cost per kWh, and then divided by the number of packages sterilized in a single run. The price of energy was taken as 4 cents/kWh, and a boiler conversion efficiency of 75% was assumed, bringing the total energy price to 5 cents/kWh (Sandberg, 1990).
The number of packages sterilized in a single retort run was calculated as:

(effective retort volume) / (package volume)

The effective retort volume was calculated assuming that the retort was not completely filled with packages, but was packed with a number of containers, called busey baskets, which are used to hold the packages. As the packages were regular in shape, complete use of the space inside the busey baskets was assumed. The effective retort volume was then:

(number of busey baskets) x (volume of one basket)

The horizontal FMC steam/air retort was selected to estimate capital cost and retort capacity. Five busey baskets, each with dimensions of 1.12 m by 1.08 m by 1.12 m, gave a total capacity of 6.7615 m^3, or 19318 packages (Sandberg, 1990). To obtain the energy required for sterilization, steam flow rates during venting, come-up, and normal retort operation were used. The values used in the cost function were 6000 kg/hour during venting (from Singh 1977) and 250 kg/hour during sterilization (from Chhinnan et al. 1980). These were in the upper range of the values quoted, but they were chosen so as to ensure (for the calculated retort capacity) a total steam consumption per package close to that reported by Singh (1977) for cans. Total mass of steam required was calculated by multiplying these flow rates by the venting and come-up time and the processing time respectively. To get a value in Joules for the energy required, this mass was multiplied by the enthalpy (E_v) of steam at the process temperature. This was calculated in Joules/g, assuming a steam quality of 100%, using the following formula from Smith et al. (1983):

E_v = ((-1.3289 \times 10^{-5} \cdot T_s + 1.2503 \times 10^{-3}) \cdot T_s + 1.7060) \cdot T_s + 2506.2   (3.45)

where T_s is the temperature of the steam in °C. Many products are retorted from an initial temperature much higher than room temperature, as they go into the retort directly from a hot-filling process.
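The enthalpy correlation and the per-package energy cost can be sketched together. This is an illustrative sketch using the flow rates, retort capacity, and 5 cents/kWh price from the text; the function names and argument layout are assumptions.

```python
def steam_enthalpy(T_s):
    """Saturated steam enthalpy in J/g from equation 3.45 (Smith et al.
    1983), with T_s the steam temperature in degrees C."""
    return ((-1.3289e-5 * T_s + 1.2503e-3) * T_s + 1.7060) * T_s + 2506.2

def energy_cost_per_package(vent_min, process_min, T_retort,
                            n_packages=19318, cents_per_kwh=5.0):
    """Steam energy cost (cents) per package, using the quoted flow rates
    of 6000 kg/h during venting/come-up and 250 kg/h during sterilization."""
    mass_g = (6000.0 * vent_min / 60.0 + 250.0 * process_min / 60.0) * 1000.0
    kwh = mass_g * steam_enthalpy(T_retort) / 3.6e6   # J -> kWh
    return kwh * cents_per_kwh / n_packages
```

A check on the correlation: at 100 °C it returns about 2676 J/g, the accepted enthalpy of saturated steam at atmospheric pressure.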
An extra energy cost associated with bringing the product up to an initial temperature was added to the total. This cost was calculated using much the same method - by the multiplication of a steam flow rate, steam enthalpy, and time. The time required to bring the product up to the initial temperature was calculated using a finite difference program. The program simulated the product being heated from a uniform temperature of 20 °C (room temperature) by a surrounding medium at 100 °C. The time required for the centre to reach the desired initial temperature was recorded. This time was multiplied by the enthalpy of steam at 100 °C, and the same steam flow rate as that used in the processing energy calculations. These values were designed not as an indication of how the product was actually brought to the initial temperature, but rather to assign a price for bringing the product to the initial temperature. Capital costs were calculated using an actual retort cost (in 1989 US dollars), spread over 10 years of retort lifetime. Retort cost generally depends not on the retort shell itself, but on the complexity of the attached control system, and on whether or not "energy-saving" devices are incorporated into the design. For the FMC horizontal steam/air retort the cost was $226400. The capital cost was calculated on a per day basis, and added to the daily labour cost. The latter was calculated assuming that four people operate the retort, working an 8 hour day at $10 per hour (Sandberg, 1990). The costs of labour and capital had to be converted from daily costs to costs per package processed before they could be added to the other costs. To do this, the number of packages processed per day had to be known, which in turn depends on processing time. For the calculation, an ideal processing time was assumed, taken to be the lower constraint on process duration in the optimization (i.e. the shortest processing time allowed).
This ideal time was used to convert the daily capital and labour costs to cost per package processed in the following manner. The costs were divided by the number of packages processed in a day, which was calculated as:

(packages in one batch) x (batches in one day)

where the number of batches per day was calculated as:

(working day duration) / (time for one process)

and the time to process a single batch was calculated as the sum of the venting time, come-up time, cooling time, and the ideal process time. Less time-consuming and more variable jobs such as emptying and filling were considered insignificant compared to the total process time, and were not included. The result represented the capital and labour cost of an ideally short process, and had to be adjusted through multiplication by a throughput value. This value was equal to the ratio of the actual process time over the ideal process time. The actual cost of labour and capital then equalled:

(ideal capital and labour cost) x (actual time / ideal time)

Product quality was assigned a cost because of its bearing on retail value. Many factors combine to define product quality, but only one was used in this study. Quality was assumed to be directly proportional to the percentage of thiamin remaining after sterilization, which makes cost inversely related to thiamin retention. This is justified by the fact that most organoleptic and nutritional attributes follow kinetics of destruction similar to thiamin (Stumbo, 1973). The values assigned to this cost were chosen arbitrarily since there was no information on such costs in the published literature. The general form of the function used was that of an inverse exponential. The exponential was used in order to make a strong differentiation (cost-wise) between products whose quality had decreased beyond the point of acceptability and products whose quality had decreased only marginally.
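The capital and labour conversion described above can be sketched as one function. The retort cost, crew size, and wage are from the text; the venting, come-up, ideal-process, and working-day durations below are illustrative assumptions, as is the function name.

```python
def capital_labour_per_package(process_min,
                               ideal_min=10.0, vent_min=10.0,
                               comeup_min=15.0, cool_min=5.0,
                               day_min=480.0, packages_per_batch=19318,
                               retort_cost=226400.0, lifetime_days=3650.0,
                               workers=4, hours=8.0, wage=10.0):
    """Per-package capital plus labour cost in dollars: daily cost is
    spread over an ideal-length day of batches, then scaled by the
    throughput ratio (actual process time / ideal process time)."""
    daily = retort_cost / lifetime_days + workers * hours * wage
    batch_min = vent_min + comeup_min + cool_min + ideal_min
    packages_per_day = packages_per_batch * (day_min / batch_min)
    ideal_cost = daily / packages_per_day
    return ideal_cost * (process_min / ideal_min)
```

The throughput scaling makes the per-package cost grow linearly with actual process time, which is the behaviour the objective function relies on to penalize long processes.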
Finally, degree of product sterility was assigned a cost, intended to act as a constraint. This was necessary because the degree of sterilization has a direct bearing upon the safety of the product. Economic studies such as those done by Todd (1985, 1988) indicated that the costs associated with foodborne disease are generally in the millions, both for the community at large and for the food industry. Any lethality less than the value required for commercial sterilization was therefore assigned a prohibitive cost, but lethality values above the requirement were assigned no cost at all.

3.3 Verification of Model

3.3.1 The Finite Difference Method

The method used to verify the finite difference solution was to compare the results from the finite difference heat transfer method against both an analytical solution and a more common finite difference method. The latter method was the ADI method, a well established implicit-explicit method. Although unconditionally stable, the ADI method required the expression of the problem in matrix form, to allow for simultaneous solution of equations describing all the nodes. This method was implemented following the formulation described in Lapidus (1982). The ADI method entailed solving separate equations in sequence - the results from one providing the input to the next. For a three dimensional model three such equations had to be solved, each one being implicit in one dimension and explicit in the other two. Because of this approach, the matrix equations describing the heat transfer were tri-diagonal and could be inverted relatively simply using the Thomas algorithm (Jaluria and Torrance, 1986). The equations used to form the matrix are given below:

\frac{T^{t+1/3}_{i,j,k} - T^t_{i,j,k}}{\alpha \Delta t / d^2} = T^{t+1/3}_{i+1,j,k} + T^{t+1/3}_{i-1,j,k} - 2T^{t+1/3}_{i,j,k} + T^t_{i,j+1,k} + T^t_{i,j-1,k} - 2T^t_{i,j,k} + T^t_{i,j,k+1} + T^t_{i,j,k-1} - 2T^t_{i,j,k}   (3.46)

\frac{T^{t+2/3}_{i,j,k} - T^{t+1/3}_{i,j,k}}{\alpha \Delta t / d^2} = T^{t+1/3}_{i+1,j,k} + T^{t+1/3}_{i-1,j,k} - 2T^{t+1/3}_{i,j,k} + T^{t+2/3}_{i,j+1,k} + T^{t+2/3}_{i,j-1,k} - 2T^{t+2/3}_{i,j,k} + T^{t+1/3}_{i,j,k+1} + T^{t+1/3}_{i,j,k-1} - 2T^{t+1/3}_{i,j,k}   (3.47)
\frac{T^{t+1}_{i,j,k} - T^{t+2/3}_{i,j,k}}{\alpha \Delta t / d^2} = T^{t+2/3}_{i+1,j,k} + T^{t+2/3}_{i-1,j,k} - 2T^{t+2/3}_{i,j,k} + T^{t+2/3}_{i,j+1,k} + T^{t+2/3}_{i,j-1,k} - 2T^{t+2/3}_{i,j,k} + T^{t+1}_{i,j,k+1} + T^{t+1}_{i,j,k-1} - 2T^{t+1}_{i,j,k}   (3.48)

The analytical solution was obtained for a three dimensional solid undergoing conduction heating with convective boundary conditions from Luikov (1968, p. 286). Given a solid with dimensions 2R_1 by 2R_2 by 2R_3 and an initial temperature T_0 throughout, put into a medium at a temperature T_\infty, the temperature at any point (x, y, z) in the solid at any time \tau can be expressed as:

\frac{T - T_0}{T_\infty - T_0} = 1 - \sum_{n=0}^{\infty} \sum_{m=0}^{\infty} \sum_{k=0}^{\infty} A_{n,1} A_{m,2} A_{k,3} \cos(\mu_{n,1} x / R_1) \cos(\mu_{m,2} y / R_2) \cos(\mu_{k,3} z / R_3) \exp\left[ -\alpha \tau \left( \frac{\mu_{n,1}^2}{R_1^2} + \frac{\mu_{m,2}^2}{R_2^2} + \frac{\mu_{k,3}^2}{R_3^2} \right) \right]   (3.49)

where the subscripts 1, 2 and 3 refer to the directions along the x, y, and z axes, and where:

A_n = \frac{2 \sin \mu_n}{\mu_n + \sin \mu_n \cos \mu_n}

and where \mu_n are the nth roots of the equation:

\cot \mu = (1/Bi) \cdot \mu, with Bi = h R / k

Figure 3.5: The 1-D Infinite Plate Used in the Analytical Solution to Verify the Headspace Treatment Procedure

3.3.2 Treatment of Headspace

In a further test, the validity of the steady state assumption used in the treatment of headspace in the model was evaluated by comparison against analytical solutions from Luikov (1968, p. 440) and from Carslaw and Jaeger (1948, p. 265). These solutions gave the transient temperature distribution for the symmetrical one dimensional system containing a central solid adjoined by a solid with different properties. The solids were designated by the subscripts 1 and 2 as shown in Figure 3.5, using thermal property values typical of food and of headspace. Each analytical solution necessitated finding the roots of a complex trigonometric equation before implementing a series solution.
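Each ADI sweep in section 3.3.1 produces a tri-diagonal system, which the Thomas algorithm inverts in linear time. A generic sketch of that solver (not the thesis code; variable names are illustrative):

```python
def thomas_solve(sub, diag, sup, rhs):
    """Thomas algorithm for a tri-diagonal system.  sub is the
    sub-diagonal (sub[0] unused), sup the super-diagonal (sup[-1] unused).
    Returns the solution vector; the input lists are left untouched."""
    n = len(diag)
    cp = [0.0] * n          # modified super-diagonal
    dp = [0.0] * n          # modified right-hand side
    cp[0] = sup[0] / diag[0]
    dp[0] = rhs[0] / diag[0]
    for i in range(1, n):   # forward elimination
        m = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m if i < n - 1 else 0.0
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):   # back-substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Because each of equations 3.46 to 3.48 is implicit in only one direction, the matrix never has bandwidth greater than three, which is what makes this direct solver sufficient.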
In order to simplify the process while still allowing for comparison between the analytical and finite difference solutions, the finite difference solution was simplified to a one dimensional domain and boundary conditions of the first kind were assumed. The solution for temperature distribution in the central solid, given an initial solid temperature T_0 and a surrounding medium at temperature T_\infty, was given by:

\frac{T - T_0}{T_\infty - T_0} = 1 - \sum_n A(\mu_n) \cos(K_A^{1/2} \mu_n x / L_2) \exp(-\mu_n^2 C^2 \tau / L_2^2)   (3.50)

where

A(\mu_n) = \left[ (1 + K_E C) \sin \mu_n \cos(\mu_n C) + (1 + C/K_E) K_E \cos \mu_n \sin(\mu_n C) \right]   (3.51)

where \mu_n were the roots of the equation:

K_E \tan \mu \tan(\mu C) = 1   (3.52)

and

K_A = \alpha_1 / \alpha_2
K_k = k_1 / k_2
K_L = L_1 / L_2
K_E = K_k \cdot K_A^{-1/2}
C = K_L \cdot K_A^{-1/2}

3.3.3 The Numerical Integration Method

As there was no analytical solution to the lethal rate equation, the curve fitting numerical integration method (equations 3.41 to 3.44) was tested using an exponential equation, similar in form, but possible to integrate analytically. Assuming a linear temperature increase with time simplified the lethality equation to the following more tractable form:

L = \int_{t_1}^{t_2} \exp(b + y t) \, dt   (3.53)

where t was in minutes and arbitrary values were assigned to the constants b and y. The general analytical solution to the above was given by:

L = \frac{\exp(b)}{y} \left[ \exp(y t_2) - \exp(y t_1) \right]   (3.54)

For two cases (b = 0.2, y = 0.01 and b = 0.2, y = 0.02), the analytical solution of equation 3.53 between a time of 1 and 50 minutes was calculated. This gave a time span of 49 minutes - a typical processing time. Numerical solutions were obtained for these same two cases, using four different time steps ranging in value from 0.1 to 2.0 minutes. At each time step, the numerical solutions were found using both Simpson's rule and the curve fitting technique. These results, for each step size and each of the two cases, were compared to the analytical solution.
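The verification in section 3.3.3 can be sketched by pairing the closed form of equation 3.54 with a composite Simpson's rule; the function names below are illustrative.

```python
import math

def analytical_integral(b, y, t1, t2):
    """Closed form of equation 3.54 for the test integrand exp(b + y*t)."""
    return math.exp(b) / y * (math.exp(y * t2) - math.exp(y * t1))

def simpson(f, t1, t2, n):
    """Composite Simpson's rule with n (even) intervals."""
    h = (t2 - t1) / n
    s = f(t1) + f(t2)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) * f(t1 + i * h)
    return s * h / 3.0
```

For the first test case (b = 0.2, y = 0.01) over the 1 to 50 minute span, Simpson's rule with a half-minute step agrees with the analytical value to well under one part per million, since the integrand is smooth and slowly varying.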
The numerical solutions were calculated for several time step sizes, as it was suspected that the length of the time step could have an effect on the accuracy of the result, with shorter time steps yielding increased accuracy. In addition, numerical results were obtained for the actual lethality integral (equation 3.39) for four different processes. Rather than assuming a linear temperature rise, temperature was calculated for each of the four processes at each time step using the simulation model. Although this could not be analytically integrated, numerical integration over the process duration was done using both the curve fitting method and Simpson's rule, and the results were checked to ensure that lethality differences were minimal. The time step used in both numerical methods was the three second time step already used in the finite difference program to ensure stability. Processes with higher retort temperatures were included as they would be expected to produce a more precipitous lethality-time profile which would be more difficult to integrate accurately by numerical means.

3.3.4 The Assumption of Uniformity

The assumption of uniform parameter values, upon which the calculation of lethality using the simulation model was based, could lead to inaccuracies due to the actual variability in these parameters. The importance of inaccuracies contributed by inherent variations in biological properties was assessed separately from the testing of the model itself, using the Monte Carlo technique. The technique involved running the simulation many times, each time using different values for the parameters of interest. These values were randomly chosen for each run from a population distribution. The mean and standard deviation of the lethalities from the runs were calculated from the simulation results, and their magnitude indicated the relative importance of variability in the input parameters on the results.
In this thesis, calculation of lethality from z and D values followed the same equation as that employed in the studies of variability described in the literature review. Therefore, the magnitude of the uncertainty in lethality resulting from variability in z and D values was likewise expected to be similar to results already described in the literature. However, the calculation of heat penetration in this work proceeded in an entirely different manner from the heat penetration calculations in previous studies on uncertainty. Uncertainties arising from the method of calculation of heat penetration were therefore investigated. Two thermal parameters were needed to calculate heat penetration in our model. These were the thermal diffusivity and, for the boundary conditions, the thermal conductivity. The effects of variability in these thermal parameters between food packages and within a single package were quantified. Higher processing temperatures were chosen for both of the Monte Carlo simulations so that the dominant source of variability would be the heat penetration parameters under study. For the investigation into the effects of variability between food packages, both thermal conductivity and diffusivity were calculated by relating them to the water content of the product. Many formulas have been put forth describing this relationship for various foodstuffs (Rha 1975, Sweat 1975, Mohsenin 1980, Jowitt et al. 1987). The relationships giving the strongest correlation between the thermal properties of fish and water content were selected. Since variability in thermal parameters is related to variability in water content, this choice allows for the greatest variability, thus testing the extreme case. For thermal diffusivity the formula chosen was:

\alpha = 10^{-8} \cdot (8.8 + 7.2 \cdot \%water) \; m^2/min   (3.55)

from Jowitt et al.
(1987), and for thermal conductivity the following relationship from Sweat (1975):

k = 0.08 + 0.52 \cdot \%water \; W/m \, °C   (3.56)

Variability in these parameters was then incorporated into the model simply by using a population distribution for water content. A mean water content of 70.24% and a standard deviation of 0.645% were used for Sockeye salmon (Anon., 1987). The heat penetration simulation using the finite difference method was run 180 times, using a process time of 70 minutes, an initial product temperature of 21 °C, and a retort temperature of 145 °C, for a package 5 by 7 by 10 cm in size. All parameters other than thermal conductivity and diffusivity were held constant. The thermal parameters were kept constant for the duration of each run, but were varied between runs. For each run these parameters were calculated from a water content value chosen randomly from the population distribution described above. Centrepoint lethality was calculated for each run, and the mean and standard deviation of all the centrepoint lethality values were then calculated. The investigation into the effects of thermal parameter variation within a single package was conducted in a similar manner. In this case, however, the thermal diffusivity was assigned a greater standard deviation (3.3%, after Lenz and Lund, 1977c). Also, instead of using a single thermal diffusivity and conductivity to describe the entire package, the assumption of internal uniformity was forgone and separate thermal parameters were calculated at each node point. The simulation involved 200 runs, and both centrepoint lethality and volume average thiamin retention were calculated. Retort temperature was 143.4 °C, process time was 29.3 minutes, initial product temperature was 43.2 °C, and the package dimensions were 4 by 8 by 10 cm.
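The between-package sampling step can be sketched as follows. This is an illustrative sketch, not the thesis program: the function names are invented, and the unit handling of %water is an assumption (percent in equation 3.55, fraction in equation 3.56), chosen so the results land near the nominal 0.5 W/m K and 10^-5 m^2/min orders used earlier in the chapter.

```python
import random
import statistics

def sample_water_contents(n, mean=70.24, sd=0.645, seed=42):
    """Package water contents (%) drawn from the Sockeye salmon
    population distribution used in the between-package study."""
    rng = random.Random(seed)
    return [rng.gauss(mean, sd) for _ in range(n)]

def thermal_diffusivity(pct_water):
    """Equation 3.55 (Jowitt et al. 1987), m^2/min, %water as a percent."""
    return 1.0e-8 * (8.8 + 7.2 * pct_water)

def thermal_conductivity(pct_water):
    """Equation 3.56 (Sweat 1975), W/m C, water content as a fraction."""
    return 0.08 + 0.52 * (pct_water / 100.0)
```

Each of the 180 runs would draw one water content, derive both thermal parameters from it, hold them fixed for that run, and record the resulting centrepoint lethality for the final mean and standard deviation.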
Chapter 4
RESULTS AND DISCUSSION

4.1 Simulation Model Verification

The complete temperature simulation consisted of three numerical approximations which could be subject to error, and which were therefore verified by comparison with standard analytical solutions. These were the finite difference technique, the means by which heat diffusion through the headspace was incorporated into the technique, and the numerical integration of lethal rate to produce an F0 value.

The results showed no substantial temperature differences between the analytical and numerical solutions to the heat diffusion equation, either with or without the inclusion of headspace in the model. Similarly, lethality results from the numerical integration techniques, Simpson's rule and the curve fitting technique, corresponded very closely to analytical solutions (see Tables 4.2 and 4.3).

As well as the solutions themselves, the assumptions underlying the model could also generate misleading results, in particular the assumption of homogeneity between food packages and within a single food package. Results from the Monte Carlo simulations intended to assess these assumptions showed that the inaccuracy in temperature attributable to the assumption of homogeneity within a single food package was large enough to have a substantial effect upon the calculation of lethality.

4.1.1 Polynomial Finite Difference Method

The polynomial method could be employed with either first or third kind boundary conditions. First kind boundary conditions, the simplest type, assume that temperatures at the outside of the package are equal to the temperature of the surrounding environment (in this case retort temperature). Third kind boundary conditions assume that temperature at the surfaces of the package is less than retort temperature, due to a convective resistance.
Often, in heat transfer situations, it is more accurate to use third kind boundary conditions, and these therefore were the boundary conditions implemented.

The accuracy of the polynomial method was tested by comparing centrepoint temperatures obtained using the method with those obtained under the same conditions using an analytical solution, as outlined in Section 3.3.1. Temperatures given by the finite difference polynomial solution were also compared to those obtained with the ADI method, to confirm that the former produced answers comparable to a standard finite difference method. Temperature values from these comparisons showed that all of the numerical methods were within 0.8 percent of the analytical solution, and for the processes at lower retort temperatures they corresponded much more closely.

Eight different test runs were made, and the centrepoint temperature was first calculated analytically. These results are shown in Table 4.2. The first 17 terms of the infinite series in the analytical solution were used, as the temperature value obtained (given to four decimal places) did not change when more terms were included. The eight test runs involved several combinations of process times, retort temperatures and solid dimensions, in order to establish that the method was equally accurate under different conditions. All the runs were done for the simplest case (no headspace, no cooling period, and a constant initial temperature of 55 °C) for which there exists an analytical solution.

    run #   time    retort temp   solid size   centrepoint temp
            (min)   (°C)          (cm)         (°C)
    1       40      120           6*6*6        112.0455
    2       35      130           6*6*6        116.9772
    3       30      140           6*6*6        119.0849
    4       25      150           6*6*6        116.9806
    5       40      120           4*6*10       114.9915
    6       35      130           4*6*10       121.4001
    7       30      140           4*6*10       125.5528
    8       25      150           4*6*10       126.2096

    Table 4.2: Analytical solution at the centrepoint for eight different processing runs

Temperatures were obtained for the same eight test runs using the two numerical methods. For the polynomial finite difference method the temperatures were calculated for two different grid sizes and two different time steps, in order to verify that the larger grid size and longer time step did not sacrifice too much accuracy. Temperatures obtained using the well established ADI method were obtained for the smaller grid size only. Table 4.3 shows centrepoint temperatures calculated for the eight test runs using the polynomial and ADI finite difference methods. Table 4.4 shows the percentage difference of the temperatures calculated using the above methods from the analytical solution.

    run #   ADI Method   Polynomial Method
            t=0.05       t=0.05     t=0.05     t=0.01
            d=0.005      d=0.01     d=0.005    d=0.01
    1       112.0717     112.1258   112.1066   112.0826
    2       117.0929     117.1755   117.0893   117.1099
    3       119.4033     119.5247   119.2861   119.4291
    4       117.7349     117.9048   117.3379   117.7733
    5       114.9566     115.0147   -          114.9228
    6       121.4060     121.4517   -          121.3531
    7       125.6731     125.6586   -          125.5935
    8       126.5886     126.4133   -          126.4744

    Table 4.3: Centrepoint temperature (°C) as given using the ADI and polynomial finite difference solutions (time step t in minutes, grid spacing d in metres)

All of the temperatures were very close in value, indicating a high accuracy for all the numerical methods. Generally, the polynomial method with the smaller grid size gave results closest in magnitude to the analytical solution, followed by the ADI method. Of the two polynomial solutions involving the larger grid size, the values obtained using the smaller time step were closest in magnitude to the analytical solution.

Temperature deviations of the polynomial method from the analytical solution should be within an acceptable range for the purposes of the optimization. The magnitude of deviation which could be considered acceptable depends on the effect that such a temperature difference would have on lethality. This in turn depended on the magnitude of the centre temperature and the duration over which the discrepancy occurred. An estimate of these magnitudes could be made using Figure 4.6, a plot of centrepoint temperature versus time for run #4, the worst case found in Table 4.4. The centrepoint temperature values were calculated at different times using the analytical solution and the finite difference polynomial solution with the larger grid size. As can be seen, the temperature differences, even for this case, were not such as to cause a significant change in lethality. It was therefore concluded that either method would give sufficient accuracy for use in the optimization method, with the polynomial method with the smallest grid size providing the most accuracy.
    run #   ADI Method   Polynomial Method
            t=0.05       t=0.05     t=0.05     t=0.01
            d=0.005      d=0.01     d=0.005    d=0.01
    1       0.023%       0.072%     0.055%     0.033%
    2       0.099%       0.170%     0.096%     0.113%
    3       0.267%       0.369%     0.154%     0.289%
    4       0.645%       0.790%     0.305%     0.678%
    5       0.030%       0.020%     -          0.060%
    6       0.005%       0.043%     -          0.039%
    7       0.096%       0.084%     -          0.032%
    8       0.300%       0.161%     -          0.210%

    Table 4.4: Percent difference of the centrepoint temperatures given using the ADI and polynomial finite difference methods from the analytical solution

[Figure 4.6: The Time-Temperature Profile for Run #4 (centrepoint temperature versus time; finite difference solution and analytical solution)]

4.1.2 Headspace Model

Being an explicit method, the polynomial finite difference solution was not unconditionally stable, but had a stability conditional upon the value of the dimensionless Fourier number, Fo = ατ/d² (where τ is the time step and d the grid spacing). If heat transfer across the headspace was modelled using the finite difference approach, two stability requirements would have to be calculated, one using values for the thermal properties of the product, and the other using the thermal properties of the headspace. The overall stability requirement would then become the more stringent of the two requirements.

The stability requirement was the same for the polynomial finite difference equation as for other standard explicit methods (Bhattacharya 1986). In three dimensions, for internal nodes, it would be 1/Fo > 6. With the incorporation of boundary conditions of the third kind, the stability requirement would become 1/Fo > 5 + 2hd/k.

Smaller grid sizes were desirable in order to increase the likelihood of a stable solution, and in order to attain greater precision, which is particularly important when modelling packages with only slightly differing sizes. However, available computer memory put a lower limit on the grid size.
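The stability bound just described is easy to evaluate numerically. The sketch below is a hypothetical illustration, not the thesis program: it computes the largest time step satisfying 1/Fo > 5 + 2hd/k. The headspace-like values used in the check (α = 0.002 m²/min, inferred from the constant term of the headspace requirement quoted in the text, and 2h/k = 13,333 per metre, quoted directly) are assumptions consistent with the coefficients given for the headspace case.

```python
def max_stable_time_step(d, alpha, two_h_over_k):
    """Largest time step (minutes) satisfying the explicit-scheme
    stability bound 1/Fo >= 5 + 2*h*d/k, where Fo = alpha*tau/d**2.

    d            -- grid spacing, m
    alpha        -- thermal diffusivity, m^2/min
    two_h_over_k -- the ratio 2h/k, 1/m
    """
    return d ** 2 / (alpha * (5.0 + two_h_over_k * d))

# Headspace-like values (assumptions inferred from the quoted coefficients):
tau_headspace = max_stable_time_step(0.01, 0.002, 13333.0)   # minutes
```

With a 0.01 m grid spacing this reproduces the roughly 0.02 second maximum stable time step discussed for the headspace, which is what makes the fully explicit treatment of the headspace impractical.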
To allow a minimal run time, larger grid sizes and longer time steps would be preferred. Generally, the smallest grid spacing compatible with memory requirements, and the longest time step allowing stability, are chosen.

Using the values for the headspace properties, the headspace stability requirement would become d²/(ατ) > (5 + 13,333·d), or d²/τ > (0.01 + 26.6·d). With this requirement, even if a relatively high value such as 0.01 m was selected for the grid spacing (d), the time step needed for stability would still have to be less than 3.615·10⁻⁴ minutes, or 0.0217 seconds. Although the solution could be implemented with such a time step, it would require an impractical amount of computational time.

For the alternative (steady state) approach to modelling heat transfer across the headspace, the stability requirement would be described by the same equation, only with the convective heat transfer coefficient, h, replaced by the overall heat transfer coefficient, U, to give the following: 1/Fo > 5 + 2Ud/k. Stability would then be assured with a grid spacing (d) of 0.005 m and a time step (τ) of 3 seconds. Therefore, this alternative approach was the one implemented in the simulation model in order to model headspace.

To verify that this assumption did not affect the accuracy of the solution, centrepoint temperatures obtained using the simulation including headspace were again compared with those obtained using an analytical solution. The temperatures showed very close correspondence, even after a long process time, and the percentage differences were no greater than those obtained when testing the basic finite difference method without the headspace.

As described in Section 3.3.2, the analytical and numerical solutions were obtained for a simplified one dimensional domain with properties for two adjoining solids representative of salmon and of air. Initial temperature was 21 °C and environment temperature 121 °C.
The inner solid was 10 cm long with 0.5 cm of headspace at each end. These dimensions were chosen so as to simplify the characteristic equation (equation 3.52) whose roots had to be found for the analytical solution. The first 6 terms of the infinite series were used to approximate the analytical solution, and checked against solutions given using only the first 5 terms. Both resulted in the same temperature to at least four decimal places (0.0001 °C).

The results, comparing the analytical and finite difference solutions for two different process times, are shown in Table 4.5. As can be seen, the centrepoint temperatures from the one dimensional solution incorporating headspace were lower than the temperatures from the three dimensional finite difference model without headspace. These lower temperatures could be attributed to the relatively large headspace thickness in the test cases, as well as the simplification of the model to one dimension. Only two comparisons were undertaken, as each analytical solution necessitated a time consuming root finding process.

    Process Duration   Centrepoint Temperature (°C)
    (min)              analytical   FD method   % difference
    40                 22.87        22.82       0.219%
    80                 29.06        28.89       0.585%

    Table 4.5: Verification of Headspace Model

4.1.3 Numerical Integration Method

As described in Section 3.3.3, equation 3.53 was analytically integrated for two different cases. The numerical integration methods were then verified by comparing the answer obtained analytically with the numerical solutions. The analytical solution obtained (by equation 3.54) was 78.0075 minutes for case I (y=0.01), and 103.7020 minutes for case II (y=0.02). Numerical approximations to the integral using Simpson's rule and the curve fitting method with different time steps are given in Table 4.6. As was expected, both methods provided answers very close to the analytical solution at shorter time steps, with accuracy decreasing at the longer time steps.
Using the time steps providing intermediate accuracy (about 1 minute), the curve fitting method gave superior results. As the time step used in the finite difference solution was 0.01 minutes, much shorter than any of the time steps used in the validation, either method was practicable. At longer time steps, however, the curve fitting method would be preferable because it would provide superior accuracy.

    y = 0.01, analytical solution = 78.0075
    time step (min)   Simpson's rule   Curve Fitting Method
    0.1               78.0075          78.0075
    0.5               78.0075          78.0075
    1.0               77.3396          78.0078
    2.0               76.0037          76.0049

    y = 0.02, analytical solution = 103.7020
    time step (min)   Simpson's rule   Curve Fitting Method
    0.1               103.7020         103.7020
    0.5               103.7020         103.7024
    1.0               102.6064         103.7037
    2.0               100.4149         100.4210

    Table 4.6: Comparison of Numerical Integration Methods

The above procedure established the accuracy of the numerical integration methods for the simplified integration involving a linear time/temperature profile. However, the methods were both further tested in actual processing simulations, situations where the time/temperature profile was not linear and for which no analytical solution existed. Lethality was calculated for four processes, using both of the numerical integration methods. Descriptions of the four processes (duration, retort temperature, and package size), along with the lethalities obtained, are given in Table 4.7.

    time    temp     x    y    z    curve     Simpson's   difference
    (min)   (°C)     (cm)           fitting   rule
    51.7    110.62   3    7.5  14   1.044     1.062       0.018
    28.75   165.10   7    4.5  12   1.063     1.064       0.001
    43.18   128.86   4    5.5  16   6.614     6.598       0.016
    74.91   107.03   5    6.5  11   0.333     0.337       0.004

    Table 4.7: Centrepoint Lethalities (in minutes) Calculated for Four Processes

In all cases, the lethalities obtained using the curve fitting technique were within 0.02 minutes of those obtained using Simpson's rule.
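To make the integration step concrete, here is a minimal sketch (not the thesis code) of computing lethality by composite Simpson's rule. The lethal-rate form 10^((T - 121.1)/z) with z = 10 °C is the standard F0 convention; the linear temperature ramp is a hypothetical profile, chosen only because its lethality integral has a closed form to check against, in the spirit of the validation above.

```python
import math

Z_VALUE = 10.0    # degC, standard z value for F0
T_REF = 121.1     # degC, reference temperature

def lethal_rate(T):
    return 10.0 ** ((T - T_REF) / Z_VALUE)

def simpson_f0(temps, dt):
    """Integrate lethal rate over a temperature history sampled at a
    uniform step dt (minutes); the history must span an even number of
    intervals, as composite Simpson's rule requires."""
    n = len(temps) - 1
    if n % 2 != 0:
        raise ValueError("Simpson's rule needs an even number of intervals")
    rates = [lethal_rate(T) for T in temps]
    total = rates[0] + rates[-1]
    total += 4.0 * sum(rates[1:-1:2])   # odd-index points
    total += 2.0 * sum(rates[2:-1:2])   # interior even-index points
    return total * dt / 3.0

# Hypothetical linear ramp: 100 -> 140 degC over 40 min (r = 1 degC/min).
dt = 0.1
temps = [100.0 + 1.0 * (i * dt) for i in range(401)]
f0_simpson = simpson_f0(temps, dt)

# Closed form for a linear ramp T(t) = T0 + r*t:
# F0 = z/(r*ln 10) * (10**((T_end - T_REF)/z) - 10**((T0 - T_REF)/z))
f0_exact = (Z_VALUE / (1.0 * math.log(10.0))) * (
    lethal_rate(140.0) - lethal_rate(100.0))
```

At this 0.1 minute step the numerical and closed-form results agree to well under Ohlsson's 0.1 minute tolerance, mirroring the behaviour reported in Table 4.6.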
The lethalities calculated for process number two, which was the highest temperature process and would therefore show the steepest temperature-time profile, corresponded particularly closely. Ohlsson (1980a) considered an error of 0.1 minutes to be acceptable for centrepoint lethality. The results using either numerical technique were well within this guideline.

4.1.4 Monte Carlo Runs

As described in Section 3.3.4, two Monte Carlo tests were performed, the first testing the effect of variability between packages, and the second designed to test the effect of variability within a single package. The two tests were run using completely different processing parameters, so the coefficients of variation, rather than the magnitudes of the mean lethalities, should be compared. For the first test, a mean lethality of 1.06 minutes with a standard deviation of 0.0032 (CV = 0.003) was found. The second Monte Carlo simulation resulted in a mean lethality of 8.0229 minutes with a standard deviation of 0.37143 (CV = 0.0463).

Ohlsson (1980a) considered an error in lethality of greater than 0.1 minutes to be unacceptable. By this measure, the standard deviation in lethality produced by inter-package variability (run #1) was not very high. This means that if a set of processing conditions was found to produce sterilization in one package, it would be likely to be adequate for any other package (of similar size and content), provided that the inter-package variability followed the assumptions used here. However, the coefficient of variation for the next run, describing the effect on lethality of variability within a single package, was about 15 times as large. At larger lethalities (such as six or eight minutes) the magnitude of variability in process lethality that could be ascribed to uncertainty in thermal parameters may in fact be outside the acceptable range put forth by Ohlsson (1980a).
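The coefficients of variation quoted above follow directly from the reported means and standard deviations; as a quick arithmetic check (all numbers taken from the text):

```python
# Coefficients of variation implied by the reported means and
# standard deviations of centrepoint lethality.
cv_between = 0.0032 / 1.06       # run #1: variability between packages
cv_within = 0.37143 / 8.0229     # run #2: variability within one package
ratio = cv_within / cv_between   # the "about 15 times" noted in the text
```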
This should be kept in mind when interpreting results obtained using the model, as additional processing may be required beyond the amount specified by the model, in order to create a safety factor and ensure proper sterilization of all packages.

4.2 Optimization Runs

4.2.1 Trial Optimization

The trial optimization, described in Section 3.2.1, yielded a minimum at the following values of the decision variables:

    process duration   : 60.3 minutes
    retort temperature : 108.6 degrees Celsius
    retort pressure    : 2.275 atmospheres
    package width      : 3.1 cm
    package length     : 14.3 cm

Values obtained for the objective function during the optimization varied from about 500 to 5000 cost units. A minimum of 484.324 was found after three random design cycles, which was reduced to 473.005 by the combination program and then to 469.473 using further simultaneous shifts. As this optimization used the trial objective function rather than the comprehensive cost function, the cost values were not related to any real world costs, and should be interpreted only by comparison to one another.

To help ascertain that a global minimum had been found, 300 simulations were carried out using randomly chosen values for the decision variables, and cost was calculated for each result. No cost value was obtained smaller than the minimum given by the optimization method, implying that a true global optimum had been found.

The apparent success of the trial optimization in finding a global minimum prompted three further optimization runs. These used the comprehensive cost function described in Section 3.2.2. Cost for these versions was calculated on a per package basis in order to minimize the influence of conditions which may vary from retort to retort or plant to plant.

4.2.2 Version I

Run I was done two times (to see if the same minimum was encountered both times), and two different minima were found. These cost minima, along with the values of the decision variables at which they were found, are shown in Table 4.8.

                               Run Ia    Run Ib
    cost (cents)               14.36     13.96
    initial temperature (°C)   26.9      48.2
    retort temperature (°C)    129.3     139.5
    x (cm)                     4         4
    z (cm)                     12        8
    process duration (min)     46.1      31.6
    F0 value (min)             6.0       7.1

    Table 4.8: Values of the Decision Variables at Which the Minima were Found for the Optimizations Using Method I

As can be seen, the minimum costs (14.36 cents and 13.96 cents per package) did not seem to differ by a significant amount, but they were found at widely disparate values of the decision variables. This discrepancy is best understood when one analyzes the optimization procedures.

In both optimizations, cost function values decreased during the three initial random design, centroid search, and mapping cycles from costs in the thousands (due to inadequate process lethalities) down to 15 or 16 cents per package. Once these initial cycles were finished, total processing cost did not decrease significantly, and the values of the decision variables likewise varied only slightly. In the first optimization (run Ia) the best result at the end of the third cycle was 14.98 cents, found at the following values for the decision variables:

    process duration    : 47.2 minutes
    retort temperature  : 130.5 degrees Celsius
    initial temperature : 29.7 degrees Celsius
    package width       : 4 cm
    package length      : 10 cm

Lethality for this process was 12.6 minutes, which was more than twice the minimum requirement. Thus, lethality cost was still a fairly high proportion of the total cost. Any shift in the values of the decision variables which would result in a lower lethality (down to a minimum of 6 minutes) would therefore also give lower cost.
This was borne out by the mapping of the response surface at this point, which indicated that total cost values were decreasing for lower retort and initial temperatures and shorter process durations. As expected, the final minimum value for this run was obtained at very similar values for the decision variables (see Table 4.8), with only a slight decrease in the retort and initial temperatures and the process duration. Lethality for this final minimum had been reduced to 6.00 minutes. Any further manipulations of the decision variables resulted in processes of inadequate lethality, and therefore in very high lethality and total costs.

Run Ib followed a similar pattern. The best cost result after three cycles of random design, centroid search and mapping was 13.96 cents, a lower value than the minimum cost found for the previous run. Again, in trying to decrease this value still further by using the simultaneous shift program, problems with the specified processes providing inadequate lethality were encountered. For example, it was suspected that increasing the length of the z dimension from 8 to 12 cm would lower the cost still further, as this would give the optimum package size found in the previous optimization. When package size was modified using the simultaneous shift program, however, the resulting total costs were much higher. The higher costs were brought about because the processes suggested by the simshift program did not provide adequate lethality, and therefore lethality cost again became a significant proportion of the total cost.

The above characteristic pattern, and the fact that two disparate minima were found for the two runs, indicated that this optimization method may have been leading to local minima, rather than a global minimum.

4.2.3 Version II

The restriction of package dimensions to increments of 2 cm, necessary because of the size of the finite difference grid, posed a potential problem.
The random-centroid optimization method assumed continuous variables, in contrast with these highly discontinuous transitions in package size. To assess the effect that this had on the optimization, run II was done for a package size fixed at 4 by 14 by 6 cm. This excluded the decision variables describing package dimensions, as described in Section 3.2.1. The minimum found for this run was 13.49 cents, at the following values for the decision variables:

    initial temperature : 57 °C
    retort temperature  : 146.3 °C
    process duration    : 23 minutes

As with run I, three random design, centroid search and mapping cycles were done, after which the lowest cost found was 13.65 cents, at the following values for the decision variables:

    initial temperature : 48.7 °C
    retort temperature  : 142.2 °C
    process duration    : 26.2 minutes

Again, the direction of shift needed to produce a lower cost process was clear from the response surfaces produced by the mapping procedure. However, when the simultaneous shift was run in order to shift the temperature and time values in the appropriate directions, the specified processes all resulted in inadequate lethality. This in turn meant very high cost values for the processes, and the failure of the simultaneous shift to produce further minima. This time four simultaneous shifts were done, each with the same direction of shift but shifting by slightly different amounts each time. Of these, one did result in a process which provided adequate lethality. However, when the shift program continued with this successful trend, the same problem of inadequate lethality leading to high cost was encountered.

4.2.4 Version III

As the problems encountered in the previous two versions were related to the lethality cost, lethality was dealt with in a different manner for optimization run III.
As described in Section 3.2.1, package dimensions were reintroduced as decision variables, while process duration was excluded, becoming instead a dependent variable. The purpose was to eliminate lethality from the cost function, thus eliminating in large part the tendency of the cost function to form many local minima.

In order to calculate process duration as required for this version, it was necessary to use the finite difference heat transfer simulation model in an iterative fashion. It was found that altering the simulation model for use in this manner increased the calculation time required to obtain processing results. However, this potential drawback was offset by the simplification of the objective function resulting from the removal of process duration as a decision variable, and the overall optimization was very efficient.

The cost minimum found for this run was 13.38 cents per package, lower than any of the minima from previous runs. The process time necessary to ensure adequate lethality was calculated as 24.6 minutes (including 5.2 minutes of cooling time). The values of the decision variables at which the minimum was found were:

    initial temperature : 65 °C
    retort temperature  : 165 °C
    package size        : x = 4 cm and z = 12 cm (4 by 8 by 12 cm)

The optimization started similarly to runs I and II, with three random design, centroid search and mapping cycles. These were followed by a simultaneous shift, during which none of the problems described in the previous sections were encountered. A final random design was done for a narrower search space around the suspected minimum, followed by a final simultaneous shift. The eventual minimum was found without any of the anomalies encountered during the other optimization runs. When the run was repeated, the same result was obtained.
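Version III's treatment of process duration as a dependent variable, iterating the heat transfer simulation until the target lethality is met, can be sketched as a simple bisection. Everything model-specific below is hypothetical: the first-order lag centre-temperature response stands in for the finite difference simulation, and only the target F0 of 6 minutes comes from the thesis.

```python
import math

def centre_temp(t, T0=57.0, T_retort=160.0, lag=15.0):
    """Hypothetical first-order centre-temperature response (degC);
    a stand-in for the full finite difference simulation."""
    return T_retort - (T_retort - T0) * math.exp(-t / lag)

def f0(duration, dt=0.002, z=10.0, T_ref=121.1):
    """Trapezoidal lethality (minutes) for a process of the given
    duration, integrating the lethal rate of the centre temperature."""
    n = int(round(duration / dt))
    rates = [10.0 ** ((centre_temp(i * dt) - T_ref) / z)
             for i in range(n + 1)]
    return (sum(rates) - 0.5 * (rates[0] + rates[-1])) * dt

def duration_for_target_f0(target=6.0, lo=0.0, hi=120.0, iters=50):
    """Bisection on process duration: lethality increases monotonically
    with duration, so the bracket [lo, hi] halves toward the target."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f0(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Because each bracket evaluation reruns the whole temperature history, this iterative use of the simulation is exactly why version III took longer per trial, the drawback noted above.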
4.2.5 Comparison of Methods

Starting with optimization run I, a characteristic pattern was evident in which the total processing cost dropped rapidly during the initial random design, centroid search and mapping cycles, then increased when the simultaneous shift was applied, as the specified processes generally resulted in inadequate lethality. This trend can be partly explained by investigating the form of the cost function. While most of the separate costs which were summed to give total cost were of approximately equal weight, lethality cost increased dramatically whenever lethality was less than six minutes, and the cost of nutrient degradation rose precipitously whenever lethality was greater than six minutes. Total cost was therefore very high for all lethality values other than those very close to six minutes.

Another pattern which can be noted in comparing the minima obtained from the different versions of the optimization is that the lower cost minima were found at a higher retort temperature. This, and the trends plotted during the mapping procedure, both indicated that higher temperatures and shorter times tended to result in processes of lower overall cost, in spite of the fact that they also produced processes with higher thiamin loss costs and energy costs.

These observations can be partly explained by again considering the lethality value, and the manner in which it varied with different processing times and temperatures. As noted by Ohlsson (1980a,b), a plot of lethality versus process parameters can have many widespread minima, with very large lethality values in between the minima. At lower temperatures and longer durations the minima tend to be wider than those resulting from high temperature, short time processes.
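The shape of the cost surface described above, a narrow trough near F0 = 6 minutes with steep walls on both sides, can be illustrated with a toy cost function. The functional forms and all coefficients here are invented for illustration; only the qualitative behaviour (a large lethality penalty below six minutes, a nutrient-degradation cost growing above it) comes from the text.

```python
def toy_total_cost(f0, f_min=6.0):
    """Illustrative per-package cost (arbitrary units). Coefficients
    are invented; only the shape mirrors the thesis cost function."""
    base = 10.0                                    # materials, labour, etc.
    # Steep penalty for under-sterilization (lethality cost):
    lethality_penalty = 0.0 if f0 >= f_min else 1000.0 * (f_min - f0)
    # Nutrient (thiamin) degradation cost rises with over-processing:
    nutrient_cost = 2.0 * max(0.0, f0 - f_min)
    return base + lethality_penalty + nutrient_cost

costs = {f0: toy_total_cost(f0) for f0 in (4.0, 5.9, 6.0, 8.0, 12.0)}
```

Even this crude sketch shows why a simultaneous shift that overshoots the six-minute trough lands on a far more expensive process, reproducing the pattern seen in runs I and II.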
Thus, out of largely random combinations of decision variables, only a very few will result in a lethality close to six minutes, and these are more likely to be the combinations involving lower temperatures and longer durations. This characteristic of lethality values could explain why, particularly for versions I and II of the optimization, the minima were found at a relatively low retort temperature and long process duration, despite the evident trend of higher temperatures and shorter times producing lower costs.

Another possible explanation for these results was related not to the lethality costs, but to the manner in which package size was modelled, with discontinuous rather than smooth transitions. The jump from one package size to the next could be large enough to require great changes in the values of the other decision variables in order to maintain the same lethality. This possibility was investigated in optimization run II, when package size was excluded. However, this did not seem to mitigate the problem.

The trend towards higher temperature processes resulting in lower overall cost was investigated by calculating the cost of a process with a retort temperature of 160 degrees (the maximum permitted in the optimization). A process duration was found that gave a lethality of 6 minutes when retort temperature was 160 degrees and initial temperature was 57 degrees. This duration, calculated using an iterative method, was 18.7 minutes, and the total cost of the process was 13.44 cents. This cost was lower than the cost minima found in optimization runs I and II, which is another indication that these optimization runs were not finding the global minimum. Run III, however, did result in a minimum with a lower total cost value, found as expected at fairly high retort and initial temperatures, with a short process duration.
This, and the repeatability of the result, indicated that the approach taken in optimization run III was the best approach. This success seemed likely due to the manner in which lethality was accounted for, treated as a dependent variable rather than as a constraint.

A complete breakdown of the separate costs is given in Table 4.9 for the minimum found using version III of the optimization, and for some of the trial processes leading to that minimum. By analyzing these costs, along with the values of the decision variables that resulted in a minimum for the most successful version (optimization run III), we can generalize about the effects of some of the costs which comprised the complete cost function.

The smaller total costs at shorter process durations indicated that the cost associated with process duration had a large influence. A short process duration could be achieved by using high initial and retort temperatures, and long thin packages. This would also lead to increased energy and material costs, as well as higher thiamin degradation costs. The fact that the minimum was found at the highest allowable initial and retort temperatures indicated that the costs associated with energy and with nutrient degradation were of lesser importance than the time cost associated with process duration. However, the eventual minimum was not found at values of x and z corresponding to the longest, thinnest package size; this indicated that the material cost outweighed the cost associated with process duration.

    Retort Temp   x      z      Initial Temp   Process Duration   Processing Cost
    (°C)          (cm)   (cm)   (°C)           (min)              (cents)
    112.1         8      10     58.7           134.6              20.94
    153.1         4      14     54.8           27.65              16.27
    151.7         4      16     54.8           29.8               15.99
    140.1         6      10     52.5           50.05              14.66
    160.3         6      10     48.2           40.45              14.53
    135.8         4      6      28.8           37.1               14.30
    143.8         6      8      44.9           49.1               14.27
    149           4      12     57.2           32.65              13.70
    165           4      12     65             24.6               13.38

    Cost (in cents per package) of:
    Thiamin Degradation   Materials   Energy   Capital & Labour   Total
    4.539                 6.022       2.064    8.317              20.94
    4.265                 6.391       1.652    1.709              16.27
    3.811                 6.576       1.660    1.841              15.99
    3.994                 5.816       1.760    3.093              14.66
    4.483                 5.816       1.728    2.499              14.53
    3.914                 6.442       1.647    2.293              14.30
    3.782                 5.693       1.756    3.034              14.27
    3.755                 6.245       1.680    2.018              13.70
    3.946                 6.246       1.671    1.521              13.38

    Table 4.9: A Cost Breakdown of Some Specified Processes

This can also be noted by looking at the cost breakdown in Table 4.9. The material cost made up the highest proportion of the total cost, except for very long process durations, when the capital and labour cost became the highest. The energy costs were the lowest of all the separate costs. Hence, the variables describing package size were the most important, as they had the greatest effect on the cost outcome. The variable with the smallest effect on total cost was the package initial temperature. Changing this variable had only a very small effect on the amount of total energy used and on total process duration.

The variables used in the cost function were realistic values, but they did not correspond to any particular situation. Using the function in a specific situation would require adjustment of the function by using the actual energy, material, labour, and capital costs. This could completely change the outcome of the optimization. A cost function with a lower material cost would engender a cost minimum at values of the decision variables reflecting a longer, thinner package. A higher energy cost or thiamin degradation cost would generally result in processes with lower retort temperatures (and hence longer durations). If these costs greatly outweighed the material cost, the package would also be longer and thinner (to allow for faster processing and greater nutrient retention).
A lower labour cost would also result in lower retort temperatures, and if labour cost was even lower than material cost the result could again be shorter and fatter package formats.

The optimization method was flexible enough to be used for batch retorting situations where the main means of heat transfer was by conduction. With some adjustments, the method could be used for almost any other food processing situation. Continuous sterilization setups could be optimized with some adjustments to the cost function, which assumed a batch process. Microwave processes, or processes involving agitation, could be optimized after some modifications to the heat transfer model. Different variables could be included, provided a cost could be associated with them. Although it may not be economically feasible to experimentally verify an optimization, the thiamin degradation and lethality of any proposed minimum cost process can and should be verified experimentally.

Chapter 5

CONCLUSIONS

With regard to the objectives of the study, the following conclusions were drawn:

1) The components of the model developed to calculate process lethality were valid numerical methods of equal or greater accuracy than similar methods already in existence.

This conclusion is supported by the facts that the temperature deviations of the exponential finite difference solution from an analytical solution and from a well established finite difference method were minimal. When the headspace model was included in the finite difference solution, temperature deviations from an analytical solution were equally low. The accuracy of the curve fitting numerical integration method (as compared to an analytical solution) depended on the time step used. At the time step in the finite difference model the curve fitting method was similar in accuracy to Simpson's rule, while at slightly longer time steps it was more accurate than Simpson's rule.
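The quantity being integrated here is the standard lethality integral, F = integral of 10^((T - 121.1)/z) dt. The sketch below shows Simpson's rule applied to a sampled temperature history; it is a minimal illustration of that standard formula, not the thesis's curve-fitting integrator, and the cold-spot history is synthetic:

```python
import math

T_REF = 121.1   # reference temperature for F0, deg C
Z_BOT = 10.0    # z-value commonly used for C. botulinum spores, deg C

def lethal_rate(temp_c, z=Z_BOT, t_ref=T_REF):
    """Lethal rate 10**((T - Tref)/z), relative to 1 min at Tref."""
    return 10.0 ** ((temp_c - t_ref) / z)

def f_value_simpson(temps_c, dt_min):
    """Integrate the lethal rate over a uniformly sampled temperature
    history using Simpson's rule (needs an even number of intervals)."""
    rates = [lethal_rate(t) for t in temps_c]
    if (len(rates) - 1) % 2:
        raise ValueError("Simpson's rule needs an even number of intervals")
    s = rates[0] + rates[-1] + 4 * sum(rates[1:-1:2]) + 2 * sum(rates[2:-1:2])
    return s * dt_min / 3.0

# Synthetic cold-spot history: heating toward 120 degC over 40 minutes.
history = [60 + 60 * (1 - math.exp(-k / 10.0)) for k in range(41)]
f0 = f_value_simpson(history, dt_min=1.0)
```

Because the lethal rate varies exponentially with temperature, recomputing `f0` at a coarser `dt_min` changes the result noticeably, which is the time-step sensitivity referred to in the conclusion above.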
From the first of the Monte Carlo runs, it was concluded that the temperature inaccuracy that can be ascribed to variations in thermal parameters between packages was quite low. From the second run, it was concluded that variations in the thermal parameters within a single package may generate fairly high uncertainties in temperature, and therefore in the lethality and nutrient degradation calculated from temperature. This implies that, in a certain number of cases, the magnitude of lethality and nutrient degradation calculated for a given process will be significantly different from values found experimentally.

2) A cost function to calculate total processing cost based upon the costs of energy, materials, capital, labour, and product quality has been developed.

3) The complete optimization method, using version III of the optimization runs, can consistently specify a package format and retorting process which minimizes the value of the above cost function. The earlier versions of the optimization found minima which may be local, rather than global.

All three versions of the optimization method resulted in a minimum, but the minimum found for version III was likely to be a global minimum, while the other two versions resulted in local minima. The tendency for the first two versions to fall into local minima was probably due to the inclusion of lethality in the cost function. The resulting function surface was strongly affected by the large fluctuations in lethality as a function of the other processing variables.

Bibliography

[1] Aishima T. and Nakai S. 1986. Centroid Mapping Optimization: A New Efficient Optimization for Food Research and Processing. J. Food Sci. Vol 51 p. 1297
[2] Anon. 1987. Agriculture Handbook No. 8. Composition of Foods: Finfish and Shellfish Products. United States Dept. of Agriculture Human Nutrition Information Service.
[3] Ball C.O. 1923. Thermal Process Time for Canned Food.
Bulletin 37, National Research Council, Washington, D.C.
[4] Ball C.O. and Olsen F.C.W. 1957. Sterilization in Food Technology. McGraw-Hill Inc., New York.
[5] Bhattacharya M.C. 1985. An Explicit Conditionally Stable Finite Difference Equation for Heat Conduction Problems. Int. J. Numer. Meth. Eng. Vol 21 No 2 p. 239
[6] Bhattacharya M.C. 1986. A New Improved Finite Difference Equation For Heat Transfer During Transient Change. Appl. Math. Modelling. Vol 10 p. 68
[7] Bhattacharya M.C. and Davies M.G. 1987. The Comparative Performance of Some Finite Difference Equations for Transient Heat Conduction. Int. J. Numer. Meth. Eng. Vol 24 No 7 p. 1317
[8] Bigelow W.D., Bohart G.S., Richardson A.C., and Ball C.O. 1920. Heat Penetration in Processing Canned Foods. Natl. Canners Assoc. Bull. No. 16-L.
[9] Bigelow W.D. and Esty J.R. 1920. The Thermal Death Point in Relation to Time of Typical Thermophilic Organisms. J. Infectious Dis. Vol 27 p. 602
[10] Brennan J.G., Butters J.R., Cowell N.D., and Lilly A.E. 1976. Food Engineering Operations. (2nd Ed.) Elsevier Publishing Co. Ltd.
[11] Carslaw H.S. and Jaeger J.C. 1948. Conduction of Heat in Solids. Oxford University Press.
[12] Castillo P.F., Barreiro J.A., and Salas G.R. 1980. Prediction of Nutrient Retention in Thermally Processed Heat Conduction Food Packaged in Retortable Pouches. J. Food Sci. Vol 45 p. 1513
[13] Charm S.E. 1978. "Fundamentals of Food Engineering" Third Edition, AVI Publishing Company Inc., Westport, Connecticut.
[14] Chhinnan M.S., Singh R.P., Pedersen L.D., Carroad P.A., Rose W.W., and Jacob N.L. 1980. Analysis of Energy Utilization in Spinach Processing. Trans. ASAE. Vol 23 p. 503
[15] Collins L.S. 1989. Quality Enhancement of Canned Late-Run Chum Salmon (Oncorhynchus keta). M.Sc. Thesis, University of British Columbia.
[16] Davis E.G., Karel M. and Proctor B.E. 1960. The Pressure-Volume Relation in Film Packages During Heat Processing. Food Technol. Vol 14 p. 165
[17] Deindoerfer F.H. and Humphrey A.C. 1959. Analytical Method For Calculating Heat Sterilization Times. Appl. Microbiol. Vol 1 p. 256
[18] Durance T. 1990. University of British Columbia. Personal Communication.
[19] Flambert F. and Deltour J. 1972a. Localization of The Critical Area In Thermally Processed Conduction Heated Canned Food. Lebensm.-Wiss. u. Technol. Vol 5 p. 7
[20] Flambert F. and Deltour J. 1972b. Exact Lethality Calculation for Sterilizing Process. I. Principles of the Method. Lebensm.-Wiss. u. Technol. Vol 5 p. 72
[21] Gillespy T.G. 1953. Packs Heating By Conduction: Complex Processing Conditions and Value of Coming-up Time of Retort. J. Sci. Food Agric. Vol 4 p. 553
[22] Hayakawa K. 1969. New Parameters For Calculating Mass Average Sterilization Value to Estimate Nutrient Retention in Thermally Conductive Food. Can. Inst. Food Tech. J. Vol 2 p. 165
[23] Hayakawa K. 1978. A Critical Review of Mathematical Procedures For Determining Proper Heat Sterilization Processes. Food Tech. Vol p. 59
[24] Herndon D.H. 1971. Population Distribution of Heat Rise Curves as a Significant Variable in Heat Sterilization Process Calculations. J. Food Sci. Vol 36 p. 299
[25] Hicks E.W. 1961. Uncertainties in Canning Process Calculations. J. Food Sci. Vol 26 p. 218
[26] Hildenbrand P. 1980. An Approach to Solving the Optimal Temperature Control Problem For Sterilization of Conduction-Heating Foods. J. Food Proc. Eng. Vol 3 p. 123
[27] Holdsworth S.D. 1985. Optimisation of Thermal Processing - A Review. J. of Food Eng. Vol 4 p. 89
[28] Jaluria Y. and Torrance K.E. 1986. Computational Heat Transfer. Hemisphere Publishing Corp.
[29] Jowitt R., Escher F., Kent M., McKenna B., and Rocques M. (eds) 1983. Physical Properties of Foods. Elsevier Science Publishing Co. Inc.
[30] Kaplan A.M., Reynolds H. and Lichtenstein H. 1954. Significance of Variations in Observed Slopes of Thermal Death Time Curves For Putrefactive Anaerobes. Food Res. Vol 19 p. 173
[31] Kreith F. and Black W.Z. 1980. Basic Heat Transfer. Harper and Row Publishers.
[32] Lapidus L. and Pinder G.F. 1982. Numerical Solutions of Partial Differential Equations in Science and Engineering. John Wiley and Sons.
[33] Lenz M.K. and Lund D.B. 1977a. The Lethality-Fourier Number Method: Experimental Verification of a Model for Calculating Temperature Profiles and Lethality in Conduction-Heating Canned Foods. J. Food Sci. Vol 42 p. 989
[34] Lenz M.K. and Lund D.B. 1977b. The Lethality-Fourier Number Method: Experimental Verification of a Model for Calculating Average Quality Factor Retention in Conduction-Heating Canned Foods. J. Food Sci. Vol 42 p. 997
[35] Lenz M.K. and Lund D.B. 1977c. The Lethality-Fourier Number Method: Confidence Intervals for Calculated Lethality and Mass Average Retention of Conduction Heating Canned Foods. J. Food Sci. Vol 42 p. 1002
[36] Lewis M.K. 1987. Physical Properties of Foods and Food Processing Systems. Ellis Horwood Series in Food Science and Technology.
[37] Luikov A.V. 1968. Analytical Heat Diffusion Theory. Academic Press.
[38] Lund D.B. 1978. Statistical Analysis of Thermal Process Calculations. Food Tech. Vol 32 p. 76
[39] Lund D.B. 1982. Applications of Optimization In Heat Processing. Food Tech. Vol 36 p. 97
[40] Manson J.E., Zahradnik J.W., and Stumbo C.R. 1970. Evaluation of Lethality and Nutrient Retentions of Conduction Heated Foods in Rectangular Containers. Food Tech. Vol 24 p. 1297
[41] Merson R.L., Singh R.P., and Carroad P.A. 1978. An Evaluation of Ball's Formula Method of Thermal Process Calculations. Food Tech. Vol 31 p. 66
[42] McGinnis D.S. 1986. Prediction of Transient Conduction Heat Transfer in Foods Packaged in Flexible Retort Pouches. Can. Inst. Food Sci. Technol. J. Vol 19 No 4 p. 148
[43] Mohsenin N.N. 1980. Thermal Properties of Foods and Other Agricultural Materials. Gordon and Breach, New York.
[44] Nadkarni M.N. and Hatton T.A. 1985. Optimal Nutrient Retention During the Thermal Processing of Conduction Heated Canned Foods: Application of the Distributed Maximum Principle. J. Food Sci. Vol 50 p. 1312
[45] Nakai S., Koide K. and Eugster K. 1984. A New Mapping Super-Simplex Optimization for Food Product and Process Development. J. Food Sci. Vol 49 p. 1143
[46] Nakai S. 1989. A New Random-Centroid Optimization Method for Food Research and Processing. In Press.
[47] Naveh D., Kopelman L., Zechman L. and Pflug J. 1983. Transient Cooling of Conduction Heating Products During Sterilization: Temperature Histories. J. of Food Proc. and Preserv. Vol 7 p. 259
[48] Norback J.P. 1980. Techniques for Optimization of Food Processes. Food Tech. Vol 34 p. 86
[49] Ohlsson T. 1980a. Optimal Sterilization Temperatures for Sensory Quality in Cylindrical Containers. J. Food Sci. Vol 45 p. 1517
[50] Ohlsson T. 1980b. Optimal Sterilization Temperatures for Flat Containers. J. Food Sci. Vol 45 p. 1522
[51] Patino H. and Heil J.R. 1985. A Statistical Approach to Error Analysis in Thermal Process Calculations. J. Food Sci. Vol 50 p. 1110
[52] Paulson A.T., Lo V.K., and Tung M.A. 1984. Conventional Steam Retort Versus Direct Flame Sterilizer in the Thermal Processing of Canned Low Acid Foods. Agriculture Canada, Engineering and Statistical Research Institute, Contract File #09SU.01843-1-EP03.
[53] Pontryagin L.S., Boltianskii V.G., Gamkrelidze R.V., and Mishchenko E.F. 1962. The Mathematical Theory of Optimal Processes. John Wiley and Sons, New York.
[54] Powers J.J., Pratt D.E., Cannon J.L., Somaatmadja D., and Fortson J.C. 1962. Application of Extreme Value Methods and Other Statistical Procedures to Heat Penetration Data. Food Tech. Vol 16 p. 80
[55] Press W.H., Flannery B.P., Teukolsky S.A. and Vetterling W.T. 1988. Numerical Recipes in C. Cambridge University Press.
[56] Ramaswamy H.S. 1983. Heat Transfer Studies of Steam/Air Mixtures For Food Processing in Retort Pouches. Ph.D. Thesis, University of British Columbia.
[57] Rha C. (ed). 1975. Theory, Determination, and Control of Physical Properties of Food Materials. D. Reidel Publishing Co., Boston.
[58] Rubinate F.J. 1964. A New Look in Packaging. Food Technol. Vol 18 p. 71
[59] Saguy I. and Karel M. 1979. Optimal Retort Temperature Profile in Optimizing Thiamin Retention in Conduction-Type Heating of Canned Foods. J. Food Sci. Vol 44 p. 1485
[60] Saguy I. 1988. Constraints to Quality Optimization in Aseptic Processing. J. Food Sci. Vol 53 p. 306
[61] Sandberg G. 1990. Lipton Puritan. Personal Communication.
[62] Simpson S.G. and Williams M.C. 1974. An Analysis of High Temperature/Short Time Sterilization During Laminar Flow. J. Food Sci. Vol 39 p. 1047
[63] Singh R.P. 1977. Energy Consumption and Conservation in Food Sterilization. Food Tech. Vol 31 p. 57
[64] Smith T. 1978. Comparison of Thermal Process Evaluation Methods for Conduction Heating Foods in Cylindrical Containers. M.Sc. Thesis, University of British Columbia.
[65] Smith T., Tung M.A. and Bennett L. 1983. Saving Energy in Conventional Steam Retorts with Emphasis on Venting Procedures. Agriculture Canada, Engineering and Statistical Research Institute, Contract File #09SU.01843-1-EP08.
[66] Smith T. and Tung M.A. 1986. Energy Reduction for Thermally Processed Food in Cans and Retort Pouches. Agriculture Canada, Engineering and Statistical Research Institute, Contract File #04SB.01916-3-EP24.
[67] Snyder C.J. and Henderson J.M. 1989. A Preliminary Study of Heating Characteristics and Quality Attributes of Products Packaged in the Retort Pouch Compared with the Conventional Can. J. of Food Proc. Eng. Vol 11 p. 221
[68] Stumbo C.R., Murphy J.R., and Cochran J. 1950. Nature of Thermal Death Time Curves For P.A. 3679 and Clostridium Botulinum. Food Tech. Vol 4 p. 321
[69] Stumbo C.R. 1965. Thermobacteriology in Food Processing. Academic Press.
[70] Stumbo C.R. 1973. Thermobacteriology in Food Processing. 2nd Ed. Academic Press.
[71] Swartzel K.R. 1986. Equivalent Point Method for Thermal Evaluation of Continuous Flow Systems. J. Agric. Food Chem. Vol 34 p. 396
[72] Sweat V.E. 1975. Modeling the Thermal Conductivity of Meats. ASAE Paper No 74-6016
[73] Tandon S. and Bhowmik S.R. 1986. Evaluation of Thermal Processing of Retortable Pouches Filled with Conduction Heated Foods Considering Their Actual Shapes. J. Food Sci. Vol 51 p. 709
[74] Teixeira A.A., Dixon J.R., Zahradnik J.W., and Zinsmeister G.E. 1969. Computer Optimization of Nutrient Retention in the Thermal Processing of Conduction Heated Foods. Food Technology Vol 23 p. 845
[75] Teixeira A.A., Zinsmeister G.E., and Zahradnik J.W. 1975a. Experimental Evaluation of Mathematical and Computer Models for Thermal Process Evaluation. J. Food Sci. Vol 40 p. 653
[76] Teixeira A.A., Zinsmeister G.E., and Zahradnik J.W. 1975b. Computer Simulation of Variable Retort Control and Container Geometry as a Possible Means of Improving Thiamin Retention in Thermally Processed Foods. J. Food Sci. Vol 40 p. 656
[77] Teixeira A.A. and Shoemaker C.F. 1989. Computerized Food Processing Operations. Van Nostrand Reinhold, New York.
[78] Thijssen H.A.C., Kerkhof P.J.A.M., and Liefkens A.A.A. 1978. Short-Cut Method for the Calculation of Sterilization Conditions Yielding Optimum Quality Retention for Conduction-Type Heating of Packaged Foods. J. Food Sci. Vol 43 p. 1096
[79] Thijssen H.A.C. and Kochen L.H.P.J.M. 1980. Calculation of Optimum Sterilization Conditions for Packed Conduction-Type Foods. J. Food Sci. Vol 45 p. 1267
[80] Thorne S. (ed.) 1985. Developments in Food Preservation, Volume Three. Elsevier Applied Science Publishers.
[81] Todd E.C.D. 1985. Economic Loss from Foodborne Disease and Non-Illness Related Recalls Because of Mishandling by Food Processors. J. of Food Protec. Vol 48 p. 621
[82] Todd E.C.D. 1988. Botulism in Native Peoples - An Economic Study. J. of Food Protec. Vol 51 p. 581
[83] Wallenberg E. and Jarnhall B. 1957. Heat Sterilization in Plastics. Modern Packaging. Vol 31 p. 165
[84] Weast R.C. (ed) 1980. CRC Handbook of Chemistry and Physics. Chemical Rubber Publishing Co.
[85] Weintraub S.E., Ramaswamy H.S. and Tung M.A. 1989. Heating Rates in Flexible Packages Containing Entrapped Air During Overpressure Processing. J. Food Sci. Vol 54 p. 1417
[86] Whitaker W.C. 1971. Processing Flexible Pouches. Modern Packaging. Vol 44 p. 83

