@prefix vivo: . @prefix edm: . @prefix ns0: . @prefix dcterms: . @prefix dc: . @prefix skos: . vivo:departmentOrSchool "Applied Science, Faculty of"@en, "Electrical and Computer Engineering, Department of"@en ; edm:dataProvider "DSpace"@en ; ns0:degreeCampus "UBCV"@en ; dcterms:creator "Jiao, Xuejun"@en ; dcterms:issued "2009-06-12T22:08:55Z"@en, "1999"@en ; vivo:relatedDegree "Master of Applied Science - MASc"@en ; ns0:degreeGrantor "University of British Columbia"@en ; dcterms:description """This thesis describes paper machine data analysis methods using wavelet and wavelet packets and their applications, aiming at improving paper machine efficiency. First, the validity and accuracy of the wavelet transform are confirmed by applying discrete wavelet transform to paper samples using both the paper machine on-line scanner data and off-line analyzer data. Results show that the wavelet transform can represent paper machine process data economically without loss of detail, and that it can also provide excellent visualization to the operator. Process monitoring and control performance assessment are then studied. By separating controllable and uncontrollable variations in the cross machine direction profile, the achieved performance and the best possible performance of the system are evaluated. A CD performance index can be calculated on-line, providing the operator with a quick assessment of the control system performance. Both wavelet and wavelet packets are used and the results are compared. Finally, the processed paper machine profiles obtained through wavelet and wavelet packet analysis are used for trim-loss optimization taking into account paper quality. Three different optimization schemes are compared and the potential savings through trim-loss optimization before and after improving control are analyzed."""@en ; edm:aggregatedCHO "https://circle.library.ubc.ca/rest/handle/2429/9065?expand=metadata"@en ; dcterms:extent "10617388 bytes"@en ; dc:format "application/pdf"@en ; skos:note "P A P E R M A C H I N E D A T A ANALYSIS A N D O P T I M I Z A T I O N USING W A V E L E T S By Xuejun Jiao B. E. (Electrical Engineering) Northeastern Heavy Mechanical Institute, P.R.China M . E. (Electrical Engineering) Northeastern Heavy Mechanical Institute, P.R.China A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF T H E REQUIREMENTS FOR T H E DE GRE E OF M A S T E R OF APPLIED SCIENCE in T H E FACULTY OF GRADUATE STUDIES DEPARTMENT OF ELECTRICAL AND COMPUTER ENGINEERING We accept this thesis as conforming to the required standard T H E UNIVERSITY OF BRITISH COLUMBIA January 1999 © Xuejun Jiao, 1999 In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission. Department of Electrical and Computer Engineering The University of British Columbia 2356 Main Mall Vancouver, BC Canada V6T 1Z4 Date: January 1999 Abstract This thesis describes paper machine data analysis methods using wavelet and wavelet packets and their applications, aiming at improving paper machine efficiency. 
First, the validity and accuracy of the wavelet transform are confirmed by applying discrete wavelet transform to paper samples using both the paper machine on-line scanner data and off-line analyzer data. Results show that the wavelet transform can represent paper machine process data economically without loss of detail, and that it can also provide excellent visualization to the operator. Process monitoring and control performance assessment are then studied. By sepa-rating controllable and uncontrollable variations in the cross machine direction profile, the achieved performance and the best possible performance of the system are evaluated. A CD performance index can be calculated on-line, providing the operator with a quick assessment of the control system performance. Both wavelet and wavelet packets are used and the results are compared. Finally, the processed paper machine profiles obtained through wavelet and wavelet packet analysis are used for trim-loss optimization taking into account paper quality. Three different optimization schemes are compared and the potential savings through trim-loss optimization before and after improving control are analyzed. ii Acknowledgment Sincere appreciations to my supervisors, Prof. Michael S. Davies and Prof. Guy A. Dumont, for their kind guidance and support during the course of my thesis work. I would also like to thank Zoran Nesic for his consistent help on my research project. Thanks to all my friends at Pulp and Paper Centre and at the Department of Electrical and Computer Engineering for helping me and making my studies at UBC enjoyable. iii Table of Contents Abstract ii Acknowledgment iii List of Tables vii List of Figures viii 1 Introduction 1 1.1 Overview of Estimation Theory 3 1.2 Motivation 3 1.3 Outline of Thesis 4 2 Wavelet and Wavelet Packet Theory 6 2.1 Wavelet Analysis 7 2.1.1 Wavelet Basis and Time-Frequency Resolution 7 2.1.2 Wavelet Basis Properties 9 2.1.3 Discrete and Inverse Discrete Wavelet Transform 10 2.2 Wavelet Packet Analysis 11 2.2.1 Wavelet Packets 11 2.2.2 Discrete Wavelet Packet Transform(DWPT) 12 2.2.3 Best Basis Algorithm 15 2.3 Multi-Resolution Analysis 16 2.4 De-Noising 17 iv 2.4.1 Thresholding 18 2.4.2 Threshold Selection Methods 20 2.5 Compression 22 2.6 Two-Dimensional Signal Analysis 23 3 Comparison of On-machine and Off-machine Measurements of Paper Properties Using Wavelet Analysis 25 3.1 Introduction . 25 3.2 On-line Scanner Data and Off-line Tapio Analyzer Data 26 3.2.1 On-line Scanner Data 26 3.2.2 Tapio Analyzer Data 26 3.2.3 Transfer of Tapio Data 27 3.3 Cross Machine Direction Analysis 27 3.3.1 On-line Scanner Data Analysis 28 3.3.2 Off-line Tapio Analyzer Data Analysis 34 3.3.3 Comparison of On-line Data Analysis and Off-line Data Analysis . 35 3.4 Machine Direction Analysis 36 4 Wavelet and Wavelet Packet Analysis of Industrial Data 38 4.1 CD Variation Separation Using Wavelet Analysis 38 4.2 CD Variation Separation Using Wavelet Packets 40 4.2.1 Frequency Order of Wavelet Packet Nodes 41 4.2.2 Restricted Basis Algorithm for Tree Selection 42 4.2.3 Profile Decomposition at Each Waveband 45 4.2.4 Controllable and Uncontrollable CD Profiles 47 4.3 Performance Assessment Using Wavelet and Wavelet Packet Analysis . . 
51 4.4 Denoising 55 v 4.5 Compression of Industrial Data 55 5 Trim-Loss Optimization 59 5.1 Introduction 59 5.2 Overview of Trim-Loss Optimization 60 5.3 Mathematical Model for Trim-Loss Problem 62 5.4 Lingo Optimization Solver 64 5.5 Trim-Loss Optimization Using Visual Basic and Lingo 66 5.6 Trim Optimization Schemes . 69 5.6.1 Scheme 1: Trim Optimization without Paper Quality Consideration 71 5.6.2 Scheme 2: Trim Optimization with Paper Quality Consideration . 73 5.6.3 Scheme 3: Trim Optimization after Improving Control 74 5.6.4 Comparison and Conclusion 76 6 Conclusions 77 6.1 Conclusions 77 6.2 Further Work 78 Bibliography 80 vi List of Tables 2.1 Minimax thresholds for various sample sizes 20 4.2 Natural order and frequency order of wavelet packet nodes 42 4.3 Summary of profile variation at each wavebancd 45 4.4 Compression of industrial data 58 5.5 Roll order information 71 5.6 Trim optimization result: Scheme 1 71 5.7 Trim optimization result: Scheme 2 74 5.8 Trim optimization result: Scheme 3 75 5.9 Result comparison for three schemes 76 vii List of Figures 1.1 A simplified diagram of paper machine (J. Ghofraniha) 1 1.2 Paper machine sensor path 2 2.3 Time-frequency plane: (a)STFT (b)WT (c)WPT 8 2.4 Block diagram of DWT and IDWT 10 2.5 The Harr wavelet packets 12 2.6 Filter bank implementation of DWPT 13 2.7 Analysis of a chirp signal(a) using wavelets (b) and wavelet packets (c) . 14 2.8 Wavelet denoising 18 2.9 Hard thresholding and soft thresholding 19 2.10 Block diagram of compression 22 2.11 Diagram of two-dimensional DWT 23 2.12 Decomposition of signal using 2-dimensional DWT 24 3.13 Original Tapio data 28 3.14 Tapio data after transfer 28 3.15 Raw profile 29 3.16 On-line scanner data analysis using wavelet 29 3.17 Multiresolution analysis and normalized wavelength 30 3.18 MD scan average 31 3.19 Raw profile 32 3.20 CD approximation: level 2 32 3.21 CD residues: level 3 32 viii 3.22 CD approximation: level 3 32 3.23 Raw profile 33 3.24 CD approximation: level 2 33 3.25 CD residues: level 3 33 3.26 CD approximation: level 3 33 3.27 Wavelet approximation and detail: level 7, Tapio data 34 3.28 Comparison of on-line scanner data and Tapio data 35 3.29 Scanner data with CD profile removed 36 3.30 Scan average and wavelet estimate of MD profile 37 4.31 Wavelet decomposition tree and wavelength at each node 39 4.32 Wavelet packet decomposition tree 41 4.33 The db2 wavelet packets 42 4.34 Wavelet packet decomposition tree and wavelength at each node 44 4.35 Profile wavelet packet decomposition at each node 46 4.36 Controllable profile estimates using wavelet and wavelet packets 48 4.37 Uncontrollable profile estimates using wavelet and wavelet packets . . . . 48 4.38 CD profile separation using wavelet and wavelet packets 49 4.39Controllable: Wavelet Packet 50 4.40Uncontrollable: Wavelet Packet 50 4.41 Controllable: Wavelet 50 4.42Uncontrollable: Wavelet 50 4.43 Performance index using wavelet and wavelet packets 51 4.44 Performance index using wavelet and wavelet packets(Caliper) 53 4.45 Wavelet decomposition tree and wavelength at each node(Caliper) . . . . 54 4.46 Wavelet packet decomposition tree(Caliper) 54 ix 4.47 Wavelet coefficients before and after thresholding 56 4.48 Scan 128: Energy distribution 57 5.49 An example with 3 products (i=l,2,3) and 3 cutting patterns (j=l,2,3) . 
63 5.50 Trim optimization diagram 66 5.51 Trim optimization interface using Visual Basic 67 5.52 Three trim optimization schemes 70 5.53 Trim optimization: Scheme 1 72 5.54 Trim optimization: Scheme 2 73 5.55 Trim optimization: Scheme 3 75 x Chapter 1 Introduction The paper machine is the final stage of paper manufacturing after the fibre pulping and bleaching process. In the headbox, the fibres and white water are mixed and are delivered into the paper machine. Moisture is then removed through drainage, mechanical pressing and drying. Finally a sheet of paper is produced at the reel. A simplified diagram of paper machine is given in Figure 1.1. Reel Figure 1.1: A simplified diagram of paper machine (J. Ghofraniha) Hundreds of functional control loops are active to ensure the uniformity and quality of the paper produced. Among these, the basis weight and moisture control loops are 1 Introduction 2 the most important. A traversing sensor mounted on an O-frame at the dry end of the machine measures the sheet properties including basis weight, moisture etc. The sensor takes up to 3000 uniformly spaced measurements across the sheet during each traverse. As a result of sheet movement in machine direction (MD) and sensor moving in cross machine direction (CD), a zigzag pattern of measurements shown in Figure 1.2 is formed. Note that the machine direction speed is much greater than the sensor travel rate. The measured sequences of values thus contain both information about CD and MD variations. The MD variation is introduced by pressure and consistency variation and is considered to be fast and time-dependent. The CD variation is considered to be relatively time-invariant or slowly time-varying. For the purpose of CD control, the CD profile is extracted from the raw measurements and used to manipulate actuators distributed across the machine to achieve a uniform distribution of sheet properties. Sensor path Figure 1.2: Paper machine sensor path Introduction 3 1.1 Overview of Estimation Theory Effective estimation is important for the success of a proper control scheme in paper mills. Since automatic control was introduced on the paper machine, many estimation methods have been developed to process the scanner data. A commonly used technique is the exponential filtering (EXPO) technique [7]. The MD profile is extracted at the end of each scan and defined as the mean value of that scan. This estimator is usually slow and does not separate MD and CD variations optimally. More advanced filtering techniques [26, 3, 14, 34, 23] use stochastic models for the profile variation. In one particular method [34, 25], the CD profile is estimated with a modified least-squares estimator, and the MD profile is estimated using a Kalman filter. Because the MD estimations are updated at each data point, this method results in an improved MD control bandwidth. Recently, a new signal processing method - wavelet estimation has been used for paper machine data processing [27, 28, 4]. This method has been shown to be superior to previous estimation methods and has many advantages, such as small estimation error, fast computation, robustness in performance, better compression and better visualization. One important application of wavelet multi-resolution analysis is the separation of the profile variations into different spatial components for performance assessment. Wavelet multi-resolution analysis of paper machine data has been used in [29]. 
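For concreteness, the EXPO approach mentioned above amounts to exponentially filtering the per-scan means. The following is a minimal sketch under stated assumptions, not the mill's implementation; the function name, the smoothing constant alpha and the simulated scan matrix are illustrative.

```python
import numpy as np

def expo_md_estimate(scans, alpha=0.3):
    """EXPO-style MD estimate: exponentially filter the per-scan means.

    scans: 2-D array, one row per scan and one column per CD data box.
    alpha: hypothetical smoothing constant (not a value from the thesis).
    """
    scan_means = scans.mean(axis=1)        # MD profile sampled once per scan
    md = np.empty_like(scan_means)
    md[0] = scan_means[0]
    for k in range(1, len(scan_means)):
        # first-order exponential filter, updated only at the end of each scan
        md[k] = alpha * scan_means[k] + (1.0 - alpha) * md[k - 1]
    return md

# Simulated basis-weight data: 130 scans of 685 data boxes each
rng = np.random.default_rng(0)
scans = 45.0 + 0.5 * rng.standard_normal((130, 685))
print(expo_md_estimate(scans)[:3])
```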
In this thesis, results when using wavelet packets are compared to results of wavelet analysis. 1.2 Motivation Advances in measurement and control systems have provided paper machines with high resolution profile data. The increased resolution potentially leads to overall improved control of paper machine. The operator benefits from a better presentation of process Introduction 4 data in order to detect and diagnose any change in the quality of the final product. Wavelet and wavelet packets filtering method can give a better and more clear picture of the profile data. Quality control and production monitoring are increasingly important for paper com-panies. High speed sensors generate large amounts of data so that advanced data com-pression techniques become more and more important for data storage and data transfer. Profile variation separation and calculation of a performance index provide a quick and direct way of process assessment. As will be shown in this thesis, wavelet packet anal-ysis provides a better separation of controllable profile variations and therefore a more accurate assessment of the control performance. Process optimization is important to reduce the cost of raw fibre and energy in the modern paper industry. Trim optimization, which is studied in this thesis, minimizes the trim-loss when slitting a paper reel into individual rolls. Traditional trim optimization solving does not address the quality issue of the paper product. In this thesis, the profile data after wavelet filtering is used for trim optimization to maximize use of high quality product. Individual roll information following the slitter can also be displayed graphically. 1.3 Outline of Thesis A brief introduction of wavelet and wavelet packets theory and their application is given in Chapter 2. Chapter 3 focuses on assessing the validity and effectiveness of the wavelet transform by applying it to both paper machine on-line scanner data and off-line analyzer data. In Chapter 4, CD profile variation separation and performance assessment are compared by using wavelet and wavelet packet analysis respectively. In Chapter 5, the processed reel profile after denoising is further used for trim-loss optimization. Three different optimization schemes are discussed and tested. Finally conclusions and some Introduction further remarks are given in Chapter 6. Chapter 2 Wavelet and Wavelet Packet Theory In recent years wavelet theory has found applications in different areas such as signal processing and statistical analysis. The basic idea of wavelet analysis dates back to work done by Littlewood-Paley in the 1930's or even earlier by A. Haar in the 1910's [18]. However it is only in 1982 that the wavelet was first proposed as a tool for signal analysis by Morlet. Later, the detailed mathematical theory of the continuous wavelet transform was carried out by Grossman and Morlet[17], followed by the detailed study on discrete wavelet transform by Daubechies, Grossmann and Meyer [9]. In 1988, Daubechies [8] gave a method for constructing compactly supported orthonormal wavelet basis functions from multi-resolution analysis, that attracted a lot of attention in many fields from theory to applications. Furthermore, wavelet packets were constructed by Coifman and Meyer in 1991 as a generalization of wavelets. More references on wavelet packets can be found in Coifman, Meyer and Wickenhausen [5, 30]. 
In this chapter, the basic theory of wavelets and wavelet packets, the discrete wavelet transform and the discrete wavelet packet transform are introduced, followed by a description of the denoising and compression schemes which are used in Chapters 3 and 4.

2.1 Wavelet Analysis

2.1.1 Wavelet Basis and Time-Frequency Resolution

The classical Fourier Transform is widely used for analyzing the frequency content of a signal by decomposing the signal into sine and cosine functions of different frequencies. However, the Fourier Transform is not suitable for dealing with non-stationary signals for which the frequency spectrum varies with time. To overcome this problem, Gabor proposed the Short-Time Fourier Transform (STFT), or windowed Fourier Transform, which is defined as

F(t, ω) = ∫ f(τ) g(τ − t) e^{−jωτ} dτ   (2.1)

The STFT is popular in time-varying signal analysis because it maps a time-domain function to a time-frequency function F(t, ω), therefore allowing changes in the spectrum with time to be traced. Because the time window size is fixed for the STFT, once a window is chosen, the resolution in both time and frequency is fixed. By comparison, the Wavelet Transform uses a shorter time window at higher frequencies and a longer window at lower frequencies. Wavelet basis functions are created by scaling and translating the same prototype ψ(x), which is known as the mother wavelet. Scaling corresponds to stretching or compressing the mother wavelet to generate new basis functions:

ψ_{j,k}(x) = 2^{−j/2} ψ(2^{−j} x − k)   (2.2)

where j is the scaling factor and k is the shift factor. The time and frequency resolutions are defined as follows:

Δt² = ∫ t² |g(t)|² dt / ∫ |g(t)|² dt   (2.3)

Δω² = ∫ ω² |G(ω)|² dω / ∫ |G(ω)|² dω   (2.4)

where g(t) and G(ω) are the basis function and its Fourier Transform respectively. According to the Heisenberg inequality, the time and frequency resolution cannot be controlled independently. That is,

Δω Δt ≥ 1/2   (2.5)

Figure 2.3 shows the time-frequency plane for the STFT, the wavelet transform (WT) and the wavelet packet transform (WPT).

Figure 2.3: Time-frequency plane: (a) STFT (b) WT (c) WPT (each panel plots frequency against time)

From Figure 2.3 it can be seen that the resolutions Δt and Δω are fixed for the STFT; however, they vary over the time-frequency plane for the wavelet transform, which leads to a coarser resolution at lower frequencies and a finer time resolution at higher frequencies. Furthermore, wavelet packets (to be discussed later) offer more flexibility because they allow Δt and Δω to change within a signal decomposition.

2.1.2 Wavelet Basis Properties

Wavelet basis functions are created by scaling and translating a mother wavelet. The choice of mother wavelet depends on the wavelet properties and should fit the particular problem. The following are some important properties of wavelets; more details on wavelet properties can be found in [21].

Compact Support If the scaling function and wavelet are compactly supported, the wavelet filters are finite impulse response filters and, as a result, the fast wavelet transform involves only finite sums.

Symmetry Symmetry is necessary for wavelet filters to have linear phase and thus to avoid phase distortion.

Smoothness A degree of smoothness M means that the Mth derivative of a function is continuous at all points. Smoothness of wavelets plays an important role in compression. A higher degree of smoothness corresponds to better frequency localization of the filters.
Number of vanishing moments The number of vanishing moments N is defined as the number of moments of the wavelet that are zero (see Equation (2.6)). It is related to the number of oscillations in the wavelet and is important in singularity detection. A larger number of vanishing moments gives a smoother wavelet.

∑_k (−1)^k k^l h_k = 0,  for 0 ≤ l ≤ N − 1   (2.6)

where h_k are the wavelet filter coefficients.

... and ∪_{j∈Z} V_j = L²(R);

5. there exists φ ∈ V_0 such that {φ(t − n)}_{n∈Z} is an orthonormal basis for V_0.

By applying the scaling and translation at each level j, we obtain a collection of functions

φ_{j,k}(t) = 2^{−j/2} φ(2^{−j} t − k)   (2.15)

2.4 De-Noising

One of the most important applications of wavelets and wavelet packets is de-noising. De-noising is the process of suppressing the unwanted noise part of the signal and recovering the underlying signal. Wavelets and wavelet packets are superior to traditional denoising methods, especially in estimating signals with jumps, spikes and other non-smooth features [27, 4]. The denoising method in the wavelet packet framework is identical to that in the wavelet framework, which is as follows.

The noise in a signal is normally the part which lacks structure or coherence. A coherent part of the signal exhibits a concentration of energy in the representation domain, while an incoherent part is spread diffusely throughout the representation domain. Consider the signal model in (2.16),

y_i = f_i + δ e_i,  i = 1, 2, ..., n   (2.16)

where y_i is the noisy measurement, f_i is the noise-free signal, e_i is Gaussian white noise N(0,1), and δ is the noise level.

The decomposition of the signal normally concentrates its energy in a small number of coefficients; however, the wavelet transform of white noise is still white noise, spread evenly over all the coefficients [13]. Based on these facts, Donoho and Johnstone proposed the following three-step denoising procedure [10]:

1. Compute the wavelet decomposition of the original signal at a given level N.
2. For each level 1 to N, apply soft thresholding to remove low-magnitude wavelet coefficients.
3. Reconstruct the signal from the original approximation and the thresholded detail coefficients.

Figure 2.8 illustrates the result of wavelet de-noising using the above procedure. It can be seen that the sharp changes in the signal are preserved after denoising.

Figure 2.8: Wavelet denoising (original and denoised signals)

In the de-noising procedure, the second step, thresholding, is the most important. Thresholding of the discrete representation is the key operation that accomplishes the suppression of noise. Typically, thresholding is only applied to the higher-resolution coefficient levels.

2.4.1 Thresholding

There are two main types of thresholding: hard thresholding and soft thresholding. Hard thresholding sets all the coefficients with magnitude less than the threshold to zero while leaving those coefficients greater than the threshold unchanged. The algorithm is given in (2.17).

c^hard_{jk} = c_{jk} if |c_{jk}| > λ, and 0 otherwise   (2.17)

Soft thresholding sets all the coefficients with magnitude smaller than the threshold to zero and shrinks the others by the threshold value. Its algorithm is given in (2.18).
c^soft_{jk} = c_{jk} − λ if c_{jk} > λ;  0 if |c_{jk}| ≤ λ;  c_{jk} + λ if c_{jk} < −λ   (2.18)

The graphical display of hard thresholding and soft thresholding is given in Figure 2.9.

Figure 2.9: Hard thresholding and soft thresholding

Due to the discontinuity of the shrink function, hard thresholding tends to have larger variance but smaller bias [2]. Soft thresholding tends to have larger bias but smaller variance because it shrinks all large wavelet coefficients towards zero by λ.

V. Solo recently reformulated the soft thresholding method as an L1-regularised least squares problem. He proposed a new iterative algorithm to deal with wavelet estimation in coloured noise and used it for transfer function estimation. A detailed reference can be found in [31].

2.4.2 Threshold Selection Methods

Donoho and Johnstone [10, 12, 22] have done extensive work on different threshold selection methods. The various threshold values can be expressed as

t = δ λ   (2.19)

where δ is the noise level estimate, which can be obtained using the Median Absolute Deviation (MAD) method [11]:

δ = Median(|c_{jk}|) / 0.6745   (2.20)

There are different criteria for choosing λ. The four most important threshold selection methods are described as follows:

(1) Minimax Thresholding applies the threshold that is optimal in terms of L2 risk. The minimax threshold depends on the sample size n and is derived to minimize the upper bound of the L2 risk in estimating a function. It does not have any closed-form expression.

Table 2.1: Minimax thresholds for various sample sizes

  n      λ        n       λ
  64     1.474    2048    2.414
  128    1.669    4096    2.594
  256    1.860    8192    2.773
  512    2.047    16384   2.952
  1024   2.231    32768   3.131

Table 2.1 gives the approximate threshold value for different sample sizes. The minimax method does a better job of picking up abrupt jumps, at the expense of smoothness.

(2) Universal Thresholding The threshold value λ is given by

λ = √(2 ln n)   (2.21)

where n is the sample size. The universal threshold value is substantially larger than its minimax counterpart, so the universal method often gives smooth estimates but does not pick up jumps very well.

(3) SURE Thresholding is based on the principle of minimizing the Stein Unbiased Risk Estimate (SURE) for the threshold estimates. It is smoothness adaptive, and the advantage of this method is evident when the underlying function has jump discontinuities on a smooth background.

(4) Hybrid SURE Thresholding When the signal-to-noise ratio is very small, the SURE estimate may be very noisy and universal thresholding is used instead; otherwise SURE thresholding is used.

The threshold selection methods described above apply to the noise model in (2.16). For correlated noise, thresholds must be rescaled by a level-dependent estimate of the noise level as in (2.22) [22]:

t_j = σ_j √(2 ln n)   (2.22)

where σ_j is the standard deviation of the wavelet coefficients at the jth level and n is the data sample size. MultiMAD, used in Chapters 3 and 4, is a resolution-dependent thresholding method that uses the MAD noise estimator (2.20) to estimate the noise strength at each resolution level.

2.5 Compression

Data compression is another important application of wavelet and wavelet packet theory. Because wavelet and wavelet packet transforms combined with thresholding concentrate the signal energy in a small number of coefficients, they can be used for data compression.
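As an illustration of the three-step procedure and of the MAD/universal threshold choices above, the sketch below soft-thresholds the detail coefficients of a noisy test signal. It uses the PyWavelets library as a stand-in for the thesis's MATLAB implementation, and a single global threshold rather than the level-by-level MultiMAD variant; the fraction of retained coefficients printed at the end is also the quantity that drives the compression idea of this section.

```python
import numpy as np
import pywt

def wavelet_denoise(y, wavelet="sym4", level=3):
    """Decompose, soft-threshold the details, reconstruct (Donoho-Johnstone)."""
    coeffs = pywt.wavedec(y, wavelet, level=level)        # [cA_L, cD_L, ..., cD_1]
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # MAD noise estimate (2.20)
    thr = sigma * np.sqrt(2.0 * np.log(len(y)))           # universal threshold (2.21)
    coeffs[1:] = [pywt.threshold(d, thr, mode="soft") for d in coeffs[1:]]
    kept = sum(np.count_nonzero(d) for d in coeffs[1:])
    total = sum(d.size for d in coeffs[1:])
    return pywt.waverec(coeffs, wavelet)[: len(y)], kept / total

y = np.sin(np.linspace(0, 8 * np.pi, 685)) + 0.2 * np.random.randn(685)
denoised, kept_fraction = wavelet_denoise(y)
print(np.std(y - denoised), kept_fraction)
```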
In the paper industry, data compression is very useful for the storage of historical data for future use and the transfer of data between different paper mills. There are two basic compression schemes: lossless compression and lossy compression. Here we only consider lossy compression, that is, we can accept some error as long as the reconstruction after compression is acceptable. The wavelet lossy compression diagram is shown in Figure 2.10. s and s are the input sequence and the recovered sequence respectively. The use of a quantizer is optional and can result in a high compression ratio at the expense of an additional error due to the quantization of wavelet coefficients. The compression procedure using wavelet packet is Transform Threshold Quantizer Encoder Compressor Decompressor Inverse Transform DeQuantizer 4 Decoder Sparse Matrix Storage Figure 2.10: Block diagram of compression identical to that of wavelet. The only new feature is the increased flexibility due to a Wavelet and Wavelet Packet Theory 23 large number of bases from the signal decomposition. A design objective can be used to choose the best representation of the original signal. Because wavelet packet coefficients can represent signals at least as efficiently as wavelet coefficients, the method normally achieves better compression. Smooth oscilla-tory signals such as speech or music can be compressed significantly better using wavelet packet bases, which can accurately single out the important frequency bands. 2.6 Two-Dimensional Signal Analysis The measured paper machine process data used in this thesis is two-dimensional data, and the two-dimensional transform is used. The two-dimensional Discrete Wavelet Trans-form(DWT) and Discrete Wavelet Packet Transform(DPWT) can be achieved by two separate one-dimensional transforms. As shown in Figure 2.11, the rows of the two-dimensional signal are decomposed first and then the columns of the signal are decom-posed. H columns s rows H columns Qcolumns Figure 2.11: Diagram of two-dimensional DWT Wavelet and Wavelet Packet Theory 24 Figure 2.12 illustrates the decomposition of a fingerprint image using two-dimensional discrete wavelet transform [24]. After the first level decomposition, there are four sets Figure 2.12: Decomposition of signal using 2-dimensional DWT of coefficients, which correspond to the approximation, the vertical detail, horizontal detail and diagonal detail at level one. At the next level, the DWT only operates on the approximation coefficients while DWPT operates on both the approximation and detail coefficients. Chapter 3 Comparison of On-machine and Off-machine Measurements of Paper Properties Using Wavelet Analysis 3.1 Introduction Improvements in measurement and control systems now provide paper machine profile data with high resolution. In recent years, wavelet filtering has been used in the paper machine data analysis and has been shown to give a superior performance in comparison to the traditional estimation methods. Wavelet processing can separate the CD and MD variations as well as remove high frequency measurement noise. Wavelets are effective for the detection of signals in the noisy data, leading to a better visualization and esti-mation. The sheet properties can be represented economically and without loss of detail. Detailed reference can be found in [27, 28, 4]. This chapter is concerned with the adap-tation of wavelet techniques to the analysis of two-dimensional paper sheet properties. 
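A minimal sketch of the separable two-dimensional transform of Section 2.6, using PyWavelets (an assumption; the thesis works in MATLAB) and a simulated profile matrix rather than real scanner data:

```python
import numpy as np
import pywt

# One level of the separable 2-D DWT: filter the rows, then the columns.
profile = 45.0 + 0.5 * np.random.randn(130, 685)   # simulated scans x data boxes
cA, (cH, cV, cD) = pywt.dwt2(profile, "sym4", mode="symmetric")

# cA is the approximation; cH, cV, cD are the horizontal, vertical and
# diagonal detail coefficients at level one.
print(profile.shape, cA.shape)
```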
The accuracy of analyzing paper machine data using discrete wavelet transform is first examined by applying wavelet filtering to both the paper machine on-line scanner data and off-line analyzer data from the same paper sheet. The Cross Machine Direction(CD) analysis is used to show the controllable and uncontrollable variation that can and cannot be removed by the CD control system. The machine direction(MD) analysis is intended to obtain clean MD profile. These results are also of interest since it is unusual for both scanner data and laboratory test data to be available for the same paper sheet. Such 25 Wavelet Analysis of Industrial Data 26 direct comparisons are of value to the paper manufacturers since they provide confirma-tion of the validity of the data used to control production. The logistics of identifying the sheet samples, connecting them and transporting the CD and MD samples to the laboratory from the mill site are formidable. 3.2 On-line Scanner Data and Off-line Tapio Analyzer Data 3.2.1 On-line Scanner Data As described in chapter 1, paper machine on-line scanner data are collected from zigzag sampling of the paper by the on-line scanner located at the dry end. The scanner has a moving head with six sensors to measure paper properties including dry weight, caliper, moisture, top side gloss and wire side gloss. In this thesis, the basis weight variations are used for study, and caliper is also studied in Chapter 4. The raw basis weight data consists of 130 scans and 685 data points per scan, this represents one entire jumbo reel, which is approximately 58 km of paper. The speed is 23 seconds per scan, and there are 130 scans so the speed is: 58000/(130 * 23) = 19.4m/s or 70km/h. 3.2.2 Tapio Analyzer Data The Tapio analyzer is an off-machine tool that measures the paper properties and cal-culates the variability at a high resolution. The Tapio software uses traditional signal processing analysis techniques to determine the variation and frequency content of the paper properties. The information from the analyzer is often used to determine how a paper machine is operating by measuring the spectral content of each paper property at a high resolution. The Tapio data consists of Cross Machine Direction(CD) samples and Machine Direction(MD) samples. Wavelet Analysis of Industrial Data 27 The CD samples are 150 CD strips taken over the entire width (7.8m) of the jumbo reel. This is approximately equivalent to two scans at the scan positions #127 and #128. Each CD strip is about 30cm wide, has about 9500 measurements and thus contains more high frequency information. During the data collection, the CD strips are taped together in order to run through the Tapio analyzer continuously, so the CD sample data has many spikes caused by the tapes following the CD strips (as shown in Figure 3.13). Two MD butt-roll samples were taken from the winder. The rolls were 8 inch wide and had a diameter of 39 inches, and were taken 25 feet from the front side of the machine. The length of MD samples are 415743 and 385535 respectively. 3.2.3 Transfer of Tapio Data Before the analysis starts, the spikes in the CD samples of Tapio data are removed. Figure 3.13 shows strips 26 to 75 of the Tapio CD sample data for basis weight property. As can be seen, the average value for basis weight is around 45 ~ 55p/m2. The spikes between the strips have much higher values(around 120g/m2) and should be removed. 
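The tape-splice removal described next can be sketched as a simple out-of-range test followed by interpolation across the removed samples; the 35-65 g/m² window and the interpolation step are illustrative assumptions, not the thesis's exact procedure.

```python
import numpy as np

def remove_tape_spikes(strip, low=35.0, high=65.0):
    """Drop samples outside a plausible basis-weight range and bridge the gaps.

    The window is a guess around the ~45-55 g/m^2 sheet values quoted in the
    text; tape splices near 120 g/m^2 fall well outside it.
    """
    strip = np.asarray(strip, dtype=float)
    good = (strip > low) & (strip < high)
    idx = np.arange(len(strip))
    # linearly interpolate over the removed (taped) samples
    return np.interp(idx, idx[good], strip[good])
```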
This is achieved by first detecting the tapes by searching the unusually high or low values and removing those values. The data shown in Figure 3.14 is the data after transfer, i.e. the spikes in Figure 3.13 have been removed. Then all the 150 CD strips are aligned together and are averaged into two scans for use in later analysis. 3.3 Cross Machine Direction Analysis The CD analysis is intended to separate the controllable and uncontrollable variations. Here in the absence of actuator response knowledge, the CD control bandwidth is assumed to be more than 2 actuator spacings. So the variations with a wavelength over this value should be removed by the CD control systems. Wavelet Analysis of Industrial Data 28 0 0.5 I 1.5 2 2.5 3 3.5 4 4.5 5 xlO 5 Figure 3.13: Original Tapio data Wavelet analysis requires the input data to be dimension(2ml, 2 m 2 ) . Paper machine measured data seldom gives this kind of size and the data should be padded to the proper size before the transformation. There are three ways of padding: zero padding, periodic extension and symmetric replication, detail reference can be found in [24, 32]. The symmetric replication is used here because it can reduce the boundary errors due to the periodic nature assumed by the DWT algorithm. After reconstruction the extra data sets are removed. 3.3.1 On-line Scanner Data Analysis Figure 3.15 shows the original raw profile from the on-line scanner. Figure 3.16 gives the diagram of on-line scanner data analysis. First two-dimensional discrete wavelet transform is performed on the raw profile. It has been found that for the paper machine data analysis, the Symlet wavelet family and Daubechies family produce better results. The Symlet wavelet(Sym4) of filter length 8 is selected for this set of data, MultiMAD Wavelet Analysis of Industrial Data Raw profile i 1 120 • : 1 i i > i i » J 100 • i I \\ • 1 • 80 • ! If! j 1; • \\ 60 • 1 \\ | T III J l i ' J F 40 -I 1 i \\ j i I • X * i t 20\"J IN , 1 . I ri 1 ' J 1 100 200 300 400 500 600 Actuator Figure 3.15: Raw profile s DWT — • Thresholding — IDWT Figure 3.16: On-line scanner data analysis using wavelet Wavelet Analysis of Industrial Data 30 resolution-dependent thresholding is used here due to its ability to adjust the thresholds at different levels. The multiresolution plot for scan 128 and the normalized wavelength is shown in Figure 3.17. The normalized wavelength is the corresponding spatial wavelength, nor-malized in terms of actuator spacing rather than absolute dimensions. The wavelength of the wavelet details from level 1 to level 3 is: 0.21 ~ 0.43, 0.43 ~ 0.86, 0.86 ~ 1.72 respectively. The decomposition level that can best separate the controllable and uncon-trollable variations is the third level, because the wavelength at this level is 1.72 which is close to 2 and is the best dividing wavelength that can be reached using wavelet analysis. Original signal Detail 0.21-0.43 0.43-0.86 0.86-1.72 . Actuators Figure 3.17: Multiresolution analysis and normalized wavelength After wavelet filtering, the thresholded details and approximation are used to re-construct the profile by Inverse Discrete Wavelet Transform (IDWT). Finally the scan average is removed and the clean CD profile is obtained. Because the zigzag sampling data contain information about both CD and MD variations, they should be separated Wavelet Analysis of Industrial Data 31 for different control and troubleshooting purposes. 
The separation is achieved in this case by subtracting the scan average after wavelet filtering to remove the MD trend. Figure 3.18 is the scan average after wavelet filtering. Figure 3.19 to Figure 3.22 are 45.6 | 1 1 1 1 1 1 1 Scan Figure 3.18: MD scan average the raw profile, CD approximation at level 2, CD residues at level 3 and CD approxima-tion at level 3 respectively. The image displays of profiles are plotted in Figure 3.23 to Figure 3.26 respectively. The controllable variations that should be removed by the control system are the streaks shown in the level 3 approximation, while the streaks in the residues are the residual variations that are inherent in the system. The profile after wavelet filtering provides better visual reference for the operators, who can quickly see if the paper being produced is within specifications. Wavelet Analysis of Industrial Data 32 Raw profile , . 5 . Actuator Figure 3.19: Raw profile Approximation at level 2(0.86—Inf) 1.5, Actuator Figure 3.20: CD approximation: level 2 Wavelet Analysis of Industrial Data 33 Riwpdik Appuiaului • level aO.SMnf) Figure 3.23: Raw profile Figure 3.24: CD approximation: level 2 Wavelet Analysis of Industrial Data 34 3.3.2 Off-line Tapio Analyzer Data Analysis The Tapio data set was averaged into two scans, 75 plies were averaged together for each scan. Then one dimensional wavelet filtering was applied to each scan. Because there is a large resolution difference between Tapio data (150 strips, 9500 measurement for each strip) and Measurex data(130 scans, 685 data points for each scan), the Tapio data is decomposed to a deeper level to compare variations for the same wavelength. The wavelength of variations for the Tapio data from level one to level eight are: 0.031,0.062,0.124,0.248,0.496,0.992,1.98,3.96 respectively. Raw Data 1 0 -1 1 1 1 1 1 1 1 Approximation 1 0 -1 V i i i i i i i A /» V 1 1 1 1 1 1 1 1 Detail 1 0 -1 V\\A i i i i i i i i i i i i i i 0 10 20 30 40 50 60 70 Actuators — Cross Direction Figure 3.27: Wavelet approximation and detail: level 7, Tapio data The decomposition level that can separate the controllable and uncontrollable varia-tions is level 7 corresponding to a wavelength of 2. Figure 3.27 show the wavelet approx-imation and detail at level 7. Wavelet Analysis of Industrial Data 3.3.3 Comparison of On-line Data Analysis and Off-line Data Analysis Figure 3.28 plots the third level wavelet approximation and detail of on-line data scan 128 and the seventh level wavelet approximation and detail for Tapio data at the same position(plies 76 to 150). Given the big difference of resolution between the two data sets, the match is excellent at the corresponding level. Raw Data Approximation 1 1 - Measurex Scan 128 l Tapio Average plies 76—150 Detail • I T 1 • i i -i i i i I I I I 1 I 1 1 _ 0 10 20 30 40 50 60 70 Actuators — Cross Direction Figure 3.28: Comparison of on-line scanner data and Tapio data The match between the on-line data and off-line data shows these two data sets are measuring the same properties and the wavelet transform does not produce any artifacts in the data. It also shows that wavelet filtering can be used to successfully separate the CD and MD variation of the on-line scanner data. The differences between the two data sets are caused by the following reasons: Wavelet Analysis of Industrial Data 36 1. In order to remove the spikes between two consecutive plies, the Tapio data transfer is carried out before the data analysis. However during this process some human errors occur. 2. 
There exists difference between the dividing wavelength for the on-line scanner data(1.72) and the Tapio data(1.98). 3.4 Machine Direction Analysis The MD direction analysis is intended to further remove noise and to obtain clean MD profile. Due to the data collection problem of the MD data set, only the on-line scanner data analysis is discussed here. The scanner data with CD profile removed is shown in Figure 3.29. After 2D wavelet filtering was performed on the raw data to remove the On-line scanner data — CD profile removed 46.5 I 1 1 1 1 l\\ 1 1 . 1 1 L _ 0 20 40 60 80 100 120 Scan Figure 3.29: Scanner data with CD profile removed high frequency MD and CD variations (as shown in the CD analysis), the resulting CD Wavelet Analysis of Industrial Data 37 MD estimation (wavelets vs scan average, level 9) 45.6] 1 1 1 . 1 140 Scan Figure 3.30: Scan average and wavelet estimate of MD profile approximation profile was subtracted from the raw data to leave only lower frequency MD variations and the high frequency residual variations in the data. Next, the data was transformed into a vector representing the MD path that the scanner has traced on the paper. Finally this vector was further decomposed to level 9 using one-D wavelet transform. The wavelet MD estimate versus the scan average is plotted in Figure 3.30. The close match between the wavelet estimation and the scan average shows that wavelet can separate the CD and MD variation of paper machine on-line scanner data successfully. Chapter 4 Wavelet and Wavelet Packet Analysis of Industrial Data In this chapter, both wavelet and wavelet packet transforms are used for industrial data analysis. One important application of multi-resolution analysis is performance moni-toring and performance assessment. Both wavelet and: wavelet packet analysis can be used for CD profile separation and control performance assessment, but wavelet packet analysis provides more flexibility for decomposition and this characteristic is used to get a better separation of CD profile variations, and thus a more accurate assessment of the system. Denoising and compression of the basis weight profile are also achieved using both wavelet and wavelet packet transforms and the results are compared. 4.1 C D Variation Separation Using Wavelet Analysis The profile measurements of paper machine contain various frequency components. Re-lating the different frequency ranges to the process, the variation can be classified as [6]: short term variation: < Is period medium term variation: Is — 200s period long term variation: Longer than 200s period Short term variation starts where formation leaves off (wavelength of 100mm) and includes all wavelengths up to the length of paper made in one second. Short term 38 Chapter 4. Wavelet and Waveiet Packet Analysis of Industrial Data 39 variation is primarily affected by hydraulic pulsation, hydraulic stability and equipment vibration. Medium term variation is primarily affected by blending, flow stability and fast control loops. Long term variation is primarily affected by system stability and slow control loops. After wavelet decomposition to a suitable level, the signal is decomposed into dif-ferent resolution levels which correspond to different wavelengths. Based on this, the process variations can then be divided into categories associated with the controllable and uncontrollable wavelengths and noise. If there exist controllable components in the profile, improved control actions are required to remove those variations. 
Here the CD control bandwidth is assumed to be wavelengths above 2 actuator spac-ings. This is a realistic assumption that may be modified in some cases when the response to an individual actuator adjustment is known. The on-line scanner data consist of 130 scans, 685 data boxes. It is already known from chapter 3 that the wavelet decomposition level that separates the CD controllable and uncontrollable variations is level 3. The wavelet decomposition tree is shown in Figure 4.31, each node is also labeled with its wavelength. According to this wavelet decomposition tree, the signal is divided (0,0) 0.21 ~lnf (1,0) 0.43~lnf (1,1)0.21-0.43 (2,0) 0.86~lnf (2,1)0.43-0.86 (3,0) 1.72~lnf (3,1)0.86-1.72 Figure 4.31: Wavelet decomposition tree and wavelength at each node Chapter 4. Wavelet and Wavelet Packet Analysis of Industrial Data 40 into the following consecutive wavelength divisions (units: actuator spacing): node(l,l): 0.21 to 0.43 node(2,l): 0.43 to 0.86 node(3,l): 0.86 to 1.72 node(3,0): 1.72 to oo Node (3,0) in the decomposition tree has a wavelength over than 1.72, and is re-garded as controllable in the wavelet analysis. However, this would include part of the uncontrollable variation, because the controllable wavelength should be longer than 2 actuator spacings. Since in the wavelet decomposition tree, only the low frequency ap-proximation component is further decomposed, the detail information is not decomposed any more. Thus 1.72 is the best dividing wavelength that can be reached using wavelet analysis. Once the wavelength A of the original data is determined, with wavelet analysis, separation is only possible at wavelength corresponding to 2nA. 4.2 CD Variation Separation Using Wavelet Packets In order to get a more accurate separation of CD variation, wavelet packet analysis is used. Wavelet packets provide more flexible decomposition, both the approximation and the detail may be decomposed each time. A design objective can be used to select the required bases to obtain an accurate separation of controllable and uncontrollable variation. Furthermore arbitrary multiples of the initial wavelength can be used for separating the controllable and uncontrollable components. In this case the wavelength division can be any value nA, where A is the original wavelength. Chapter 4. Wavelet and Wavelet Packet Analysis of Industrial Data 41 4.2.1 Frequency Order of Wavelet Packet Nodes In the process of wavelet packet decomposition, there are two ways to group the wavelet packet coefficients, frequency order and natural order [24]. Figure 4.32: Wavelet packet decomposition tree Consider the three-indexed family of wavelet packet functions WjtPtk(x) which is de-fined in Chapter 2, the natural order of a node is the same as its position in the de-composition tree. As shown in Figure 4.32, the natural order of the nodes at level 3 is 0,1,2,3,4,5,6,7 which is same as its position in the tree. The frequency order corresponds to the oscillating property, as can be seen from the db2 wavelet packets in Figure 4.33, Wn(x) oscillates approximately n times. The frequency order of the wavelet packet function is not the same as the natural order. The frequency order of the nodes can be obtained from the natural order recur-sively as in Table 4.2. Here for the convenience of calculating the wavelength at each Chapter 4. 
Wavelet and Wavelet Packet Analysis of Industrial Data 42 wO w l w2 w3 Figure 4.33: The db2 wavelet packets node and reconstructing the profile at each node, the coefficients are grouped according to the frequency order. Table 4.2: Natural order and frequency order of wavelet packet nodes Natural order 0 1 2 3 4 5 6 7 Frequency order 0 1 3 2 6 7 5 4 4.2.2 Restricted Basis Algorithm for Tree Selection The restricted basis selection algorithm is used here to search among the family of wavelet packet bases. Given the complete rectangle of wavelet packet coefficients down to some level, certain coefficients are excluded for statistical or other reasons [5]. Here the purpose of the restricted basis algorithm is to select those nodes which constitute consecutive wavelength divisions of the original signal. Chapter 4. Wavelet and Wavelet Packet Analysis of Industrial Data 43 For a given level, full wavelet packet decomposition is first performed. The wavelet packet nodes are grouped in the frequency order and the wavelength for each node is calculated. Next the wavelet packet separation node whose wavelength includes A = 2 is identified. After the dividing node is found, a group of nodes is found whose wavelengths together with the wavelength of the separation node can constitute a set of consecutive wavelength divisions of the original signal. In this way, a tree structure which satisfies our purpose can be obtained. The above procedures can also be summarized as: Given the initial wavelength A 0 , the decomposition find 2/A 0 and truncates it to an integer, then level n is found (here n = 4) which is the smallest integer that satisfies: 2 n A 0 > 2 (4.23) Then the approximation node at level n and all detail nodes above n are selected. Next decompose the detail node at level n and repeat the research until the node whose wavelength is equal to 2 (with the resolution of A0) is reached. Figure 4.34 shows the resulting wavelet packet decomposition tree based on the re-stricted basis algorithm. The wavelength and profile variance corresponding to each node are also labeled in the tree diagram. By performing the wavelet packet decomposition ac-cording to the tree in Figure 4.34, the signal can be divided into the following consecutive wavelength divisions (units: actuator spacing): node(l,l): 0.21 to 0.43 node(2,l): 0.43 to 0.86 node(3,l): 0.86 to 1.72 node (7,8): 1.72 to 1.93 Chapter 4. Wavelet and Wavelet Packet Analysis of Industrial Data 44 (0,0) 0.21 ~lnf (1,0) 0.43~lnf (1,1)0.21-0.43 | (Var 0.0046) (2,0) 0.86-lnf (2,1)0.43-0.86 I (Var 0.0111) (3,0) 1.72-lnf (3,1)0.86-1.72 (Var 0.0315) (4,0) 3.43~lnf (4,1)1.72-3.43 (Var 0.009) | (5,2) 1.72-2.57 (5,3)2.57-3.43 | (l/ar 0.0113) (6,4)1.72-2.14 (6,5)2.14-2.57 I (l/ar0.0070) (7,8) 1.72-1.93 (7,9) 1.93-2.14 (l/ar0.0074) (l/ar0.0038) Figure 4.34: Wavelet packet decomposition tree and wavelength at each node Chapter 4. Wavelet and Wavelet Packet Analysis of Industrial Data 45 node (7,9): 1.93 to 2.14 node (6,5): 2.14 to 2.57 node (5,3): 2.57 to 3.43 node (4,0): 3.43 to oo Here those nodes with wavelength shorter than 1.93 are considered to be uncontrol-lable, and those nodes with wavelength longer than 1.93 are considered to be controllable. By doing so, it can be seen that the variations of wavelength(node(7,8)) 1.72 ~ 1.93 should be classified as uncontrollable variation, whereas it is improperly included in the controllable variation when using wavelet decomposition method. 
4.2.3 Profile Decomposition at Each Waveband After the wavelet packet decomposition, the profile in each node is extracted using wavelet packet reconstruction and is shown in Figure 4.35. The variance of the profile in each waveband are summarized in Table 4.3. Table 4.3: Summary of profile variation at each wavebancd Node Wavelength STD Var Var Percentage(%) 15(4,0) 3.43 oo 0.0949 0.009 10.3 34(5,3) 2.57 3.43 0.1063 0.0113 12.9 68(6,5) 2.14 2.57 0.0837 0.0070 8.0 136(7,9) 1.93 2.14 0.0616 0.0038 4.3 135(7,8) 1.72 1.93 0.0860 0.0074 8.4 8(3,1) 0.86 1.72 0.1775 0.0315 36.1 4(2,1) 0.43 0.86 0.1054 0.0111 12.7 2(1,1) 0.21 0.43 0.0678 0.0046 5.3 The first column shows the node number in the wavelet decomposition tree, the second column is the corresponding wavelength division. The third and fourth column are the er 4. Wavelet and Wavelet Packet Analysis of Industrial Data 46 Controllable at node 15(wavelength 3.43~Inf) Controllable at node 68(wavelength 2.14~ 2.57) 600 Uncontrollable at node 135(wavelength 1.72 —1.93 ) 600 1 Uncontrollable at node 8(wavelength 0.86 ~1.72 ) 600 Uncontrollable at node 4(wavelength 0.43 -0.86 ) 600 Uncontrollable at node 2(wavelength 0.21 —0.43 ) 600 Figure 4.35: Profile wavelet packet decomposition at each node Chapter 4. Wavelet and Wavelet Packet Analysis of Industrial Data 47 standard deviation and the variance of the profile decomposition at each node. The last column shows the variance percentage at each waveband. Knowledge of this information is helpful for CD performance monitoring and diagnosis. 4.2.4 Controllable and Uncontrollable CD Profiles Combining all controllable nodes and uncontrollable nodes in Figure 4.35 respectively, gives the controllable and uncontrollable profiles. Figure 4.36 and Figure 4.37 illustrate the controllable and uncontrollable profile after separation using wavelet analysis and wavelet packet analysis respectively. Based on the analysis for each scan, the controllable and uncontrollable profiles for the entire jumbo reel are shown in Figure 4.38. (a) and (b) are the controllable and uncontrollable profiles using wavelet packet analysis for the entire reel, (c) and (d) are the controllable and uncontrollable profiles using wavelet analysis. These pictures (normally displayed in color) are very helpful for the operators, and useful for the process retrieving and monitoring. It can be seen that some streaks in (c) are actually uncontrollable, so they are included in the uncontrollable profile properly shown in (b) by using wavelet packet analysis. Figure 4.39 to 4.42 display images of the profiles in Figure 4.38. Chapter 4. Wavelet and Wavelet Packet Analysis of Industrial Data Controllable profile(scan 128) Wavelet Packet Wavelet 300 400 Measurement 500 60O Figure 4.36: Controllable profile estimates using wavelet and wavelet packets Uncontrollable profile(scan 128) 1 0.8 0.6 0.4 ; ii _ 0.2 ii 1 I ° S3 —0.2 —0.4 -0.6 --0.8 -1 300 400 Measurement 500 600 Figure 4.37: Uncontrollable profile estimates using wavelet and wavelet packets Chapter 4. Wavelet and Wavelet Packet Analysis of Industrial Data (a) Controllable: Wavelet Packet( 1.93~Inf) 1, Actuator (c) Controllable: Wavelet( 1.72~Inf) 1 Actuator (b) UncontroEable Wavelet packet(0.21~1.93) Actuator (d) Uncontrollable: Wavelet(0.21~1.72) IN Actuator Figure 4.38: CD profile separation using wavelet and wavelet packets Chapter 4. 
Figure 4.39: Controllable: Wavelet Packet. Figure 4.40: Uncontrollable: Wavelet Packet. Figure 4.41: Controllable: Wavelet (1.72~Inf). Figure 4.42: Uncontrollable: Wavelet (0.21~1.72).

4.3 Performance Assessment Using Wavelet and Wavelet Packet Analysis

After the controllable and uncontrollable separation, the potential improvement in control that can be expected for the given actuator spacing can be evaluated in terms of a performance index defined as follows.

Figure 4.43: Performance index using wavelet and wavelet packets

Let σ²_pr be the noise-free overall profile variance and σ²_ucontr be the variance of the uncontrollable component. The performance index can be calculated as [29]:

C = σ²_pr / σ²_ucontr   (4.24)

If C = 1, perfect control is achieved, meaning that all controllable variations have been removed. C < 1 is not possible, since σ²_ucontr ≤ σ²_pr. C > 1 means that controllable variations are still present in the profile. The following cases of combination of and C are considered: • If both

T_max: the maximal width that can be used in pattern j;
n_ij: the number of rolls of type i in pattern j;
N_j,max: the maximal number of rolls per pattern;
N_j,min: the minimal number of rolls per pattern;
N_i,max: the maximal demand for roll i;
N_i,min: the minimal demand for roll i;
m_j: the multiple for each cut type j.

This is a non-convex mixed-integer non-linear programming (MINLP) problem, subject to the bilinear constraints and bilinear cost function in the formulation above, (5.26)-(5.30). The problem is often solved as a two-step optimization procedure in which the first step is to generate the cutting patterns and the second step is a mixed-integer linear program [36]. Next we use the Lingo optimization modeling environment to solve this model.
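Before turning to Lingo, the first of those two steps can be sketched in a few lines: enumerate every cutting pattern that fits the usable width and record its trim loss. The roll widths, usable width and roll limit below are illustrative assumptions; the second step (choosing integer multiples of these patterns so that the demand bounds are met at minimum total loss) is the mixed-integer program that is handed to the solver.

```python
from itertools import product

def cutting_patterns(roll_widths, usable_width, max_rolls_per_pattern):
    """Step 1: enumerate patterns (n_1, ..., n_I rolls of each width) that fit."""
    ranges = [range(usable_width // w + 1) for w in roll_widths]
    patterns = []
    for n in product(*ranges):
        used = sum(count * w for count, w in zip(n, roll_widths))
        if 0 < sum(n) <= max_rolls_per_pattern and used <= usable_width:
            patterns.append((n, usable_width - used))   # (pattern, trim loss per cut)
    return sorted(patterns, key=lambda p: p[1])

# Illustrative order: three roll widths on a 250-unit usable reel width.
for pattern, loss in cutting_patterns([50, 30, 40], 250, 8)[:5]:
    print(pattern, "trim loss:", loss)
```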
Easy data handling
With Lingo, data can be stored in a variety of convenient ways: embedded directly in the model, or kept as a table or list in a separate file, which makes the data easy to access. In the trim optimization of the next section, Lingo reads the evaluation results generated by the Matlab application programs, and it saves the optimization results into a text file that Matlab can easily access.

Links to other applications
Lingo interfaces easily with databases and spreadsheets, and its callable interfaces allow it to be integrated with other Windows development tools. In this thesis, the Lingo Dynamic Link Library (Lingo.dll) is called from Visual Basic to perform the optimization, and real-time OLE (Object Linking and Embedding) is used to import information from, and export the cutting results to, Microsoft Excel spreadsheets.

5.5 Trim-Loss Optimization Using Visual Basic and Lingo

Given the entire jumbo reel, the task of the trim optimization is to cut the reel into different rolls according to the order sizes, based on the measured sheet properties. The paper quality is evaluated using the two-sigma criterion (2 standard deviations) for the sheet property (basis weight).

Figure 5.50: Trim optimization diagram, linking Visual Basic, Matlab (data analysis, 2σ evaluation, profile display), Lingo (optimization), and Excel (export, result analysis, spreadsheet result, roll information).

First, given the measured sheet property profile for the entire reel, a one-level wavelet decomposition is carried out and the very high frequency noise due to formation and measurement is removed. The average property and the 2σ value of the reel are then calculated, and the 2σ criterion is used to evaluate the paper quality. Three optimization schemes are studied: trim-loss optimization without paper quality consideration, and trim-loss optimization based on the 2σ evaluation of the sheet property before and after improving control. The results from the three schemes are compared and the potential savings from better control are analyzed.

Figure 5.51: Trim optimization interface using Visual Basic

Several different application software tools are used together to accomplish the trim optimization. One is Lingo, which provides the optimization modeling environment and the optimization solver. Another is Matlab, which implements the wavelet filtering of the sheet property profile used to evaluate the paper quality; Matlab also performs other important functions, including the 2σ evaluation, the DWT and IDWT, and the graphical display. In addition, the roll order information is handled by the user. The basic flow diagram is shown in Figure 5.50. In order to integrate the different application programs and to fully exploit the computational power and visualization tools of Matlab together with the optimization facilities of Lingo, an application interface has been designed in Visual Basic, as shown in Figure 5.51. The optimization model is written in the Lingo optimization modeling language, which specifies the constraints and the cost function. The user can enter the roll order information, such as the roll width and the minimal and maximal demand, in the input information frame of the interface. This information is passed to the Lingo solver through a Visual Basic script. The optimization is solved by the Lingo solver, and the results are saved in data files and exported to Microsoft Excel spreadsheets through Object Linking and Embedding.
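The evaluation step just described, a one-level wavelet decomposition to strip the formation and measurement noise followed by the 2σ criterion, can be sketched briefly. The thesis implements it in Matlab; the snippet below is only an illustrative Python version using PyWavelets, with an assumed synthetic basis weight trace, an assumed db4 wavelet and a hypothetical section length, none of which come from the thesis.

import numpy as np
import pywt

# Synthetic machine-direction basis weight trace (stand-in for the reel data).
rng = np.random.default_rng(1)
n = 1200
weight = 60 + 0.8 * np.sin(np.linspace(0, 20 * np.pi, n)) + 0.5 * rng.standard_normal(n)
weight[400:480] += 3.0                      # an artificial off-spec streak

# One-level wavelet decomposition; discard the detail coefficients (very high
# frequency noise) and reconstruct, as described in Section 5.5.
cA, cD = pywt.dwt(weight, 'db4')            # wavelet choice assumed
smooth = pywt.idwt(cA, np.zeros_like(cD), 'db4')[:n]

# 2-sigma quality criterion on the filtered profile.
mean, two_sigma = smooth.mean(), 2 * smooth.std()
usable = np.abs(smooth - mean) <= two_sigma

# Flag machine-direction sections (hypothetical section length) containing any
# off-spec samples; only usable sections would be offered to the optimizer.
section = 100
flags = [bool(usable[i:i + section].all()) for i in range(0, n, section)]
print('usable sections:', flags)

The resulting flags correspond to the usable and unusable areas of the reel that the optimization models of the next section either bypass or ignore, depending on the scheme.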
Several command buttons are also available; their functions are introduced briefly in the order of the general operating procedure:

Load
Load the optimization model, including:
1. Model 1: the model for traditional trim-loss optimization, in which the sheet quality is not considered in the optimization.
2. Model 2: the trim-loss optimization model based on the sheet property evaluation before improving control.
3. Model 3: the trim-loss optimization model based on the sheet property evaluation after improving control.

Calc
Calculate the 2σ value and evaluate the paper quality based on the 2σ criterion.

Solve
Solve the trim-loss optimization for the corresponding model using the Lingo solver; each time, the Lingo Dynamic Link Library (DLL) is called.

Result
Show the optimization result in an Excel spreadsheet.

Export
Plot the trim optimization result graphically from Matlab, with the sheet property displayed.

Quit
Exit from the current application.

5.6 Trim Optimization Schemes

Three optimization schemes, shown in Figure 5.52, are studied in this section. First, the traditional trim-loss problem is solved without paper quality consideration. Second, the trim-loss problem is solved based on the 2σ evaluation of the basis weight. The third scheme is similar to the second, except that the paper quality is assumed to have been improved through better control, to illustrate the impact of improved basis weight control on the trim-loss optimization results. The optimization results from the three schemes are compared in terms of the total amount of waste paper, so the potential saving from improving control can be easily observed. Three different roll order sizes are used; the roll information, including the width and the minimal and maximal demand for each roll, is specified in Table 5.5. All rolls are of the same length. Using the application programs and interface described in the last section, the three trim-loss optimization schemes in Figure 5.52 are tested.

Figure 5.52: Three trim optimization schemes. All schemes start from the original data, with a wavelet reconstruction at level 1 and identification of the 2σ limits. Scheme 1 applies the traditional trim optimization (Optimize 1, without quality consideration); Scheme 2 applies the trim optimization with sheet quality consideration (Optimize 2); Scheme 3 first simulates improved control (decompose to level 3, remove 80% of the variation, reconstruct to level 1) and then applies Optimize 2. Each scheme ends by calculating the waste paper.

Table 5.5: Roll order information

Roll     Width   Minimal Demand   Maximal Demand
Roll 1   50      7                9
Roll 2   30      5                7
Roll 3   40      4                8

5.6.1 Scheme 1: Trim Optimization without Paper Quality Consideration

In this scheme, the traditional trim-loss problem is solved without considering the paper quality.

Table 5.6: Trim optimization result: Scheme 1

Cut            Number of Roll 1   Number of Roll 2   Number of Roll 3
Cut 1          4                  0                  1
Cut 2          0                  7                  1
Cut 3          5                  0                  0
Cut 4          0                  0                  6
Total Number   9                  7                  8

This scheme appears to be the best if the paper sheet quality specification is ignored, because the numbers of produced rolls of each size are 9, 7 and 8 respectively (see Table 5.6). However, many rolls produced using this scheme need to be discarded because the quality does not meet the 2σ specification.
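The cutting patterns in Table 5.6 come from the pattern-generation step of the two-step procedure mentioned earlier. The sketch below illustrates that step only, in Python rather than Lingo, using the roll widths of Table 5.5; the usable reel width of 250 and the cap of 8 rolls per pattern are assumptions chosen to be consistent with the patterns reported in Table 5.6, not values stated by the thesis.

from itertools import product

# Roll order widths from Table 5.5. The usable jumbo width and the maximum
# number of rolls per pattern are assumed values, chosen to be consistent with
# the cutting patterns reported in Table 5.6.
widths = {'Roll 1': 50, 'Roll 2': 30, 'Roll 3': 40}
deckle = 250
max_rolls_per_pattern = 8

# Pattern generation: enumerate non-negative roll counts whose total width fits
# on the reel. Choosing how often to run each pattern is the integer program
# that the thesis solves with the Lingo solver.
patterns = []
count_ranges = [range(deckle // w + 1) for w in widths.values()]
for counts in product(*count_ranges):
    used = sum(n * w for n, w in zip(counts, widths.values()))
    if 0 < used <= deckle and sum(counts) <= max_rolls_per_pattern:
        patterns.append((counts, deckle - used))        # (roll counts, trim loss)

# List the lowest-trim patterns first.
patterns.sort(key=lambda p: p[1])
for counts, trim in patterns[:6]:
    print(dict(zip(widths, counts)), 'trim =', trim)

Zero-trim patterns such as five rolls of width 50 appear at the top of the list, matching the character of the cuts in Table 5.6.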
Figure 5.55: Trim optimization: Scheme 3 (placement of the Roll 1, Roll 2 and Roll 3 cuts along the machine direction).

5.6.4 Comparison and Conclusion

Based on the optimization results above, the total amount of waste paper is calculated and the three schemes are evaluated in terms of waste paper. The comparison of the three optimization schemes is summarized in Table 5.9, which lists the number of rolls produced for each roll order and the total saleable paper.

Table 5.9: Result comparison for three schemes

Scheme   Total   Usable   Unusable   Roll 1   Roll 2   Roll 3   Cut Rolls   Saleable   Saleable (%)
1        1000    823.8    176.2      9        7        8        980         390        39
2        1000    823.8    176.2      7        7        4        720         720        72
3        1000    917      83         8        7        6        850         850        85

It can be seen that, judging by the numbers of produced rolls alone, Scheme 1 seems satisfactory. However, because the paper sheet quality is ignored, the entire rolls containing unusable areas have to be discarded: the amount of cut rolls is 980, but the amount of saleable rolls is only 390, so the total waste (61%) is the largest among the three schemes. For Scheme 2, the paper quality is taken into consideration; instead of discarding entire rolls afterwards, the unusable areas are bypassed during the optimization. All the cut rolls are saleable, the saleable percentage is 72% and the total waste is 28%, much less than the 61% of Scheme 1. The total waste for Scheme 3 is the smallest (15%) of the three schemes: since the quality of the entire reel is improved after improving control, the amount of unusable area is reduced and thus the trim loss is minimized. This illustrates the potential benefit of improving control in terms of paper savings during trim-loss optimization.

Chapter 6
Conclusions

6.1 Conclusions

Aimed at improving paper machine efficiency, this thesis first described the use of wavelets and wavelet packets in paper machine data analysis. After wavelet decomposition, the data were analyzed to determine the controllable and uncontrollable components. Finally, trim-loss optimization was studied as an attempt to maximize production quality while minimizing trim loss.

First, the validity and effectiveness of the wavelet transform analysis were confirmed by applying the discrete wavelet transform to both the paper machine on-line scanner data and the off-line analyzer data. Then, process monitoring and control performance assessment were addressed. By separating the controllable and uncontrollable variations in the cross machine direction profile using multi-resolution analysis, the performance and the control potential of the system were evaluated. Both wavelet and wavelet packets were used and the results were compared. The wavelet and wavelet packet analyses have the following advantages:

• Wavelet filtering can separate the cross machine direction variation and the machine direction variation accurately.

• The wavelet transform can represent the paper machine process data economically without loss of detail, and it can also provide excellent visualization to the operator.

• Wavelet and wavelet packets provide high compression, which allows the efficient storage of historical data and the transfer of data between different mills.
• Wavelet packet analysis can achieve an accurate separation of the CD controllable and uncontrollable variation, and hence an accurate control performance assessment.

• The performance index can be calculated on-line to provide the operator with a quick assessment of the process.

Finally, the processed paper machine profile obtained through wavelet or wavelet packet analysis was used for trim-loss optimization in roll cutting. The method in this thesis, which takes the quality of the paper sheet into account during optimization, presents a new approach to trim-loss optimization. Three different optimization schemes were discussed and compared, and the potential savings through optimization after improving control were analyzed. The results have shown that:

• By addressing the paper quality during trim optimization, the trim loss is reduced.

• After improving control, the improved quality of the entire reel results in large savings through trim-loss optimization.

• The individual roll information, such as the 2σ variation and the mean value, is displayed, which helps to separate rolls of different quality according to different demands.

6.2 Further Work

There are several possible extensions to the research presented here; steps that might be taken to improve the methods in this thesis include the following:

• For best de-noising, adaptive waveform analysis might be used to extract the coherent features of the signal adaptively, using libraries of orthonormal waveforms.

• Before the separation of the CD controllable and uncontrollable profiles, the actuator response should be identified so that the controllable profile can be extracted exactly.

• Instead of minimizing the absolute trim loss in the trim-loss optimization, a more comprehensive cost function including other important factors, such as slitter movement, could be used.

Bibliography

[1] \"LINGO: The Modeling Language and Optimizer\". Lindo Systems Inc., 1998.

[2] A. Bruce and H. Y. Gao. \"WaveShrink: Shrinkage Functions and Thresholds\". StatSci Division, MathSoft, Inc., Seattle, WA.

[3] S. C. Chen. \"Kalman Filtering Applied to Sheet Measurement\". 7th American Control Conference, Atlanta, Georgia, pages 643-647, 1988.

[4] J. Chun. \"Estimation and Control of Paper Machine Variables Using Wavelet Packets Analysis\". M.A.Sc. Thesis, The University of British Columbia, 1997.

[5] R. Coifman and Y. Meyer. \"Signal Processing and Compression with Wavelet Packets\". Progress in Wavelet Analysis and Applications, Editions Frontieres, Toulouse, France, pages 77-93, 1992.

[6] K. A. Cutshall, G. E. Ilott, and J. H. Rogers. \"Grammage Variation - Measurement and Analysis\". Grammage Variation Subcommittee, Process Control Committee, Technical Section, CPPA, 1988.

[7] E. Dahlin. \"Computational Methods of a Dedicated Computer System for Measurement and Control on Paper Machines\". 24th Engineering Conference, TAPPI, San Francisco, USA, pages 62.1-62.42, Sept. 1969.

[8] I. Daubechies. \"Orthonormal Bases of Compactly Supported Wavelets\". Comm. Pure and Applied Math., 41:909-996, 1988.

[9] I. Daubechies, A. Grossmann, and Y. Meyer. \"Painless Non-Orthogonal Expansions\". J. Math. Phys., 27:293-309, 1986.

[10] D. L. Donoho and I. M. Johnstone. \"Minimax Estimation via Wavelet Shrinkage\". Technical Report 402, Department of Statistics, Stanford University, July 1992.

[11] D. L. Donoho and I. M. Johnstone. \"Adapting to Unknown Smoothness via Wavelet Shrinkage\". J. Am. Stat. Ass., 1993.

[12] D. L. Donoho and I. M. Johnstone.
\"Ideal Denoising in an Orthogonal Basis Chosen from a Library of Bases\". Technical Report 461, Department of Statistics, Standford University, Sept. 1994. 80 Bibliography 81 [13] D. L. Donoho and I. M . Johnstone. \"Ideal Spatial Adaptation via Wavelet Shrink-age\". Biometrika, 81:425-455, July 1994. [14] G. A. Dumont, M . S. Davies, K. Natarajan, and C. Lindeborg. \"An Improved Algorithm for Estimating Paper Machine Moisture Profiles Using Scanned Data\". 30th IEEE Conference on Decision and Control, Brighton, England, Dec. 1991. [15] H. DyckhofT. \"A New Linear Approach to the Cutting Stock Problem\". Operations Research, 29:1092-1104, 1981. [16] P. G. Gilmore and R. E. Gomory. \"A Linear Programming Approach to the Cutting-Stock Problem\". Operations Research, pages 849-859, 1961. [17] A. Grossmann and J. Morlet. \"Decomposition of Hardy Functions into Square In-tegrate Wavelets of Constant Shape\". SIAM J. Math. Anal, pages 724-736, 1983. [18] A. Haar. \"Zur Theorie der Orthogonalen Funktionensysteme\". Mathematische An-nalen, 69:331-371, 1910. [19] I. Harjunkoski and T. Westerlund. \"Different Formulations for Solving Trim Loss Problems in a Paper-converting Mill with ILP\". Computers Chem. Engng, 20, Suppl.:121-126, 1996. [20] A. I. HINXMAN. \"The Trim-loss and Assortment Problems: A Survey\". European Journal of Operational Research, 5:8-18, 1980. [21] B. Jawerth and W. Sweldens. \"An Overview of Wavelet Based Multi-Resolution Analyses\". Technical Report 1993:1, Industrial Mathematics Initiative, Department of Mathematics, University of South Carolina, 1993. [22] I. M . Johnstone and B. W. Silverman. \"Wavelet Threshold Estimators for Data with Correlated Noise\". Technical report, Statistics Department, University of Bristol, UK, Sept. 1994. [23] C. Lindeborg. \"A Nonlinear Algorithm for the Estimation of Moisture Characteris-tics in the Paper Process\". Proc. of 17th International Conference BIAS-81, Milan, Italy, 3:139-158, Oct. 1981. [24] M . Misiti and Y. Misiti. \"Wavelet Toolbox User's Guide\". The Maths Works Inc., March 1996. [25] S. T. Morgan. \"Estimation and Identification for Machine Direction Control of Basis Weight and Moisture\". Master's Thesis, The University of British Columbia, June 1994. Bibliography 82 [26] K. Natarajan, G. A. Durnont, and M . S. Davies. \"An Algorithm for Estimating Cross and Machine Direction Moisture Profiles for Paper Machines.\". IFFAC/FORS Symposium, Beijing, PRC, pages 27-31, 1988. [27] Z. Nesic. \"Paper Machine Data Analysis Using Wavelets\". M.A.Sc. Thesis, The University of British Columbia, 1996. [28] Z. Nesic, M . S. Davies, and G. A. Dumont. \"Paper Machine Data Analysis and Compression Using Wavelets\". Tappi Journal, 80:191-204, 1997. [29] Z. Nesic, M . S. Davies, G. A. Dumont, and D. Brewster. \"CD Control Diagnostics Using a Wavelet Toolbox\". International CD Symposium'97, Finland, June 1997. [30] R. T. Ogden. \"Essential Wavelets for Statistical Applications and Data Analysis\". Birkhauser, Boston, 1997. [31] V. Solo. \"Wavelet Signal Estimation in Coloured Noise with Extension to Transfer Function Estimation\". Proceedings of the 37th IEEE Conference on Decision and Control, Tampa, Florida USA, December 1998. [32] G. Strang and T. Nguyen. \"Wavelets and Filter Banks\". Wellesley-Cambridge Press, 1996. [33] P. E. Sweeney and R. W. Haessler. \"One-dimensional Cutting Stock Decisions for Rolls with Multiple Quality Grades\". European Journal of Operational Research, 44:224-231, 1990. [34] X . G. Wang, G. A. Dumont, and M . 
S. Davies. \"Modeling and Identification of Basis Weight Variations in Paper Machines\". IEEE Transactions on Control System Technology, June 1993. [35] G. Wascher. \"An LP-based Approach to Cutting Stock Problem with Multiple Objectives\". European Journal of Operational Research, 1990:175-184, 1990. [36] T. Westerlund and I. Harjunkoski. \"Solving a Production Optimization Problem in a Paper-converting Mill with MILP\". Computers Chem. Engng, 22:563-570, 1998. [37] T. Westerlund and F. Pettersson. \"An Extended Cutting Plane Method for Solving Convex MINLP Problems\". Computers Chem. Engng, 19, Suppl.: 131-136, 1995. [38] M . V. Wickerhauser. \"Lecture on Wavelet Packet Algorithm\". Department of Math-ematics, Washington University, Nov. 1991. "@en ; edm:hasType "Thesis/Dissertation"@en ; vivo:dateIssued "1999-05"@en ; edm:isShownAt "10.14288/1.0065154"@en ; dcterms:language "eng"@en ; ns0:degreeDiscipline "Electrical and Computer Engineering"@en ; edm:provider "Vancouver : University of British Columbia Library"@en ; dcterms:publisher "University of British Columbia"@en ; dcterms:rights "For non-commercial purposes only, such as research, private study and education. Additional conditions apply, see Terms of Use https://open.library.ubc.ca/terms_of_use."@en ; ns0:scholarLevel "Graduate"@en ; dcterms:title "Paper machine data analysis and optimization using wavelets"@en ; dcterms:type "Text"@en ; ns0:identifierURI "http://hdl.handle.net/2429/9065"@en .