Paper Machine Data Analysis Using Wavelets

by Zoran Nesic

B.Sc., University of Belgrade, Yugoslavia, 1987

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE STUDIES, DEPARTMENT OF ELECTRICAL ENGINEERING

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
February 1996
© Zoran Nesic, 1996

In presenting this thesis in partial fulfillment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the Head of my Department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Electrical Engineering
The University of British Columbia
2356 Main Mall
Vancouver, BC V6T 1Z4
Canada
Date: February 1996

Abstract

This thesis describes a new approach to paper machine process data analysis using one-dimensional and two-dimensional discrete wavelet transforms. These techniques have been adapted from a general theory, developed in recent years, on the application of wavelets to signal analysis; the theory was first applied in areas such as image processing and bandwidth compression for communications.

Two main applications of the discrete wavelet transform have been analyzed in this thesis. First, an analysis of the use of wavelets for processing scanned data representing basis weight and moisture variations on a paper machine has been carried out. It has been shown that wavelets are effective for the detection of process signals in noisy data, leading to better estimation and visualization of the machine direction and cross machine direction variations in process data. The second main application of the method has been to allow significant compression of the process data without diminishing the ability to reconstruct accurate profiles. It has been shown that the compression method can be embedded into the estimation algorithm, producing excellent results without a major expense in computation time.

It has been shown that, in both applications, the new methods produce results superior to the industrially accepted procedures. For an appropriate choice of wavelets, profile estimates are improved over those obtained using exponential filtering or other standard analysis methods. The data compression technique presents a new concept in paper machine data analysis and the author is not aware of any previous references to this subject. The ability to reduce data storage requirements is of importance in mill-wide process monitoring systems.

A comprehensive analysis of the proposed algorithms has been carried out on a variety of simulated data sets for which the true process variations are known. Industrial data has also been analyzed, and it is apparent that the method has many desirable characteristics.
Table of Contents

Abstract
List of Tables
List of Figures
Acknowledgments
1 Introduction
1.1 Measurement of Paper Properties
1.2 Review of Standard Estimation Methods
1.3 Motivation
1.4 Outline of Thesis
2 Wavelet Theory
2.1 Definitions
2.2 Wavelets as Basis Functions
2.3 Wavelet Properties
2.4 Implementation of the Discrete Wavelet Transform
2.5 Detection of Signals in Noise and Data Compression
2.5.1 De-Noising
2.5.1.1 Thresholding Methods
2.5.1.2 Threshold Selection
2.5.1.3 De-Noising of Two-Dimensional Signals
2.5.2 Compression
3 Analysis of Paper Machine Process Data
3.1 Matlab™ Toolbox for Wavelet Analysis of Paper Machine Data
3.1.1 Padding of the Two-Dimensional Signals
3.2 Applying Wavelet De-noising to Simulated Scanned Data
3.2.1 Edge Effects and Step Changes
3.2.2 Smooth Surfaces
3.2.3 Grade Profile Change
3.2.4 Data Created by the Moisture Variation Model (MVM)
3.2.5 Bumps and Streaks
3.3 Wavelet Analysis of Industrial Data
3.3.1 Choice of an Estimator
3.3.2 Pulp Machine Moisture Content Data
3.3.3 Grade Change for a Paper Machine
4 Other Applications of Wavelets in Paper Making Process
4.1 Compression of Paper Machine Data
4.2 Wavelet Estimation for Control Purposes
5 Conclusion
References
A Exponential Multiple Scan Trending (EXPO) Algorithm
B Estimation and Identification of Basis Weight and Moisture Content (EIBMC) Algorithm
B.1 Process Models
B.1.1 Moisture
B.1.2 Basis Weight
B.2 The EIBMC Algorithm
B.2.1 Moisture
B.2.2 Basis Weight
References
C Wavelet Filter Coefficients

List of Tables

3.1 Edge effects: thresholds and estimated noise at each resolution level
3.2 Edge effects: summary of the results
3.3 Edge effects: best filter performances
3.4 Smooth surfaces: thresholds and estimated noise at each resolution level
3.5 Smooth surfaces: summary of the results
3.6 Smooth surfaces: best filter performances
3.7 Grade change: thresholds and estimated noise at each resolution level
3.8 Grade change: summary of the results
3.9 Grade change: best filter performances
3.10 MVM test data: thresholds and estimated noise at each resolution level
3.11 MVM test data: summary of the results
3.12 MVM test data: best filter performances
3.13 Bumps and streaks: summary of the results
3.14 Bumps and streaks: best filter performances
3.15 Filters used to analyze the industrial data
4.16 Key table
4.17 Edge effects: summary of compression results
4.18 Smooth surfaces: summary of compression results
4.19 Grade change: summary of compression results
4.20 MVM test data: summary of compression results
4.21 Bump data: summary of compression results
4.22 Pulp machine moisture data: summary of compression results

List of Figures

1.1 Paper Machine
1.2 Sensor path
2.1 Block diagram of one-dimensional DWT
2.2 Block diagram of one-dimensional inverse DWT
2.3 Block diagram of two-dimensional DWT
2.4 Signal in time-domain and its wavelet coefficients
2.5 Noisy signal in time-domain and its wavelet coefficients
2.6 Block diagram of a compressor/decompressor without a quantizer
2.7 Block diagram of a compressor/decompressor with a quantizer
2.8 Quantizer mapping function
3.1 Wavelet filters used in the analysis
3.2 Edge effects: CD+MD before and after the noise has been added
3.3 Edge effects: estimated profiles
3.4 Edge effects: estimation errors as functions of time (scan number)
3.5 Edge effects: estimation errors as functions of time (scan number) for a smooth estimator
3.6 Edge effects: estimation results for a smooth estimator
3.7 Smooth surfaces: composite profiles with and without noise
3.8 Smooth surfaces: estimated profiles
3.9 Smooth surfaces: estimation errors
3.10 Grade change data set: CD+MD before and after the noise has been added
3.11 Grade change test data set: estimated profiles
3.12 Grade change test data set: estimated profiles (images)
3.13 Grade change test data set: estimation errors
3.14 MVM test data: estimation errors
3.15 MVM test data: estimated profiles
3.16 MVM test data: CD profiles
3.17 Bumps and streaks: estimated profiles
3.18 Bumps and streaks: estimated profiles (images)
3.19 Pulp machine moisture data: estimated profiles
3.20 Pulp machine moisture data: estimated profiles [1]
3.21 Pulp machine moisture data: estimated profiles [1] (images)
3.22 Pulp machine moisture data: estimated profiles [2]
3.23 Pulp machine moisture data: estimated profiles [2] (images)
3.24 Pulp machine moisture data: estimated profiles [3]
3.25 Pulp machine moisture data: estimated profiles [3] (images)
3.26 Paper machine basis weight data
3.27 Paper machine basis weight data: estimated profiles
4.1 MVM data set: energy distribution
4.2 MVM data set: distribution of nonzero wavelet coefficients
4.3 2D DWT: coefficient storage
4.4 MVM data set: compressed data
4.5 MVM data set: compressed data (images)
4.6 Pulp machine moisture data: energy distribution
4.7 Pulp machine moisture data: distribution of nonzero wavelet coefficients
4.8 Pulp machine moisture data: compressed data
4.9 Pulp machine moisture data: compressed data (images)
4.10 Smooth surfaces: high resolution MD estimation
4.11 Pulp machine moisture data: high resolution MD estimation
B.1 Data indexing

Acknowledgments

I thank my thesis supervisor, Professor Michael S. Davies, for his support and guidance. I would also like to thank Professor Guy A. Dumont for his assistance and input during the course of this work. Thanks to all my friends at the Pulp and Paper Centre and at the Department of Electrical Engineering for making my student days enjoyable, in particular Anil Tuladhar, Ian McLeod, Gino Carrese, Leo Stocco, Lahoucine Ettaleb, Othman Taha, and Roger Shirt. Thanks to Brian McMillan for making the computers work. Special thanks to Jahan Ghofraniha and Scott Morgan for their friendship and, of course, for sharing their rich libraries with me. This thesis would never have been possible without the support and understanding of my friend and boss, Professor Andy Black. I am grateful to my wonderful wife, Maja Krzic, for her encouragement and patience during all those long years.

Chapter 1 Introduction

1.1 Measurement of Paper Properties

In the final stage of paper manufacturing, after pulping and bleaching have taken place, the mixture of water and fibre (stock) is delivered to the paper machine, where water is first removed by drainage and mechanical pressing, and the sheet is then dried to produce the paper collected at the reel. A simplified diagram of a paper machine is given in Figure 1.1.
Reel Figure 1.1: Paper Machine To achieve uniformity and high quality in the paper produced, hundreds of control loops are needed as well as experienced operators. The most important control loops are those that maintain paper properties such as basis weight (mass of fibre per unit area) and moisture content (percent of water in the overall mass of paper sheet) close to the target values assigned for the grade of paper produced. 1 Chapter I: Introduction Paper Sheet Sensor path relative to sheet Cross direction Machine direction Sensor off-sheet Figure 1.2: Sensor path The sensor (see Figure 1:1) travels across the sheet (cross direction or CD). During one scan (one pass over the sheet) the sensor makes between 20 and 3000 uniformly spaced measurements (Morgan [30]). A scan period can vary from 10 to 60 seconds, depending on the installation. The speed of paper sheet is up to 25 m/s or more than a hundred times the sensor speed. The combination of the paper trajectory (in machine direction or MD) and the sensor movement (in CD) produces the zigzag pattern of measurements presented in Figure 1.2. This non-uniformly spaced sampling represents a major problem in estimation and control. The M D variations, introduced by pressure fluctuations and consistency variations in the approach system and headbox are considered to be time-dependent, fast and independent of the CD position. Control action in M D direction is normally taken at least every scan. Changes in CD profiles, coming from nonuniformity in the headbox, slice lip, pressing and drying, are considered to be relatively slow, indeed nearly time-invariant. In practice, a CD control action is taken every 2 to 4 scans (Taylor [36], Jonsson [22]). Faster detection of M D upsets is increasingly desirable with changes in paper machine design. It is obvious (see Figure 1.2) that the measured sequence of values contains information about both M D and C D variations. Decoupling the machine direction variations from the cross direction variations is not a trivial task. 1.2 Review of Standard Estimation Methods Many estimation methods have been developed for processing scanned data since automatic 2 Chapter 1: Introduction control of paper machines was introduced. Most industrial estimators are based on the Exponential Multiple Scan Trending (EXPO) algorithm (Dahlin [4]). Assuming slowly changing, zero mean C D profiles, this algorithm (see Appendix A) carries out exponential filtering of M D data at each C D position and produces CD profiles that are considered reasonably acceptable by the industry. The M D estimate, defined as the mean value of a scan, is available at the end of each scan. Most recent algorithms (Lindeborg [25], Chen [2], Natarajan [32], Dumont [15], Jonsson [22], Morgan [30], Wang [37]) use stochastic models for basis weight and moisture variations. They can estimate CD (Least Squares filter) and/or M D profiles (Kalman filter), with the M D estimations updated at each data point [30, 37], leading to improved M D control bandwidth. The results of the estimation method described in this thesis are compared to the results obtained by the E X P O algorithm and the Estimation and Identification of Basis Weight and Moisture Content (EIBMC) algorithm [37, 30]. The details of the latter are given in the Appendix B . 1.3 Motivation Recent trends in actuator design have allowed control systems for paper machines to involve an increased number of measurements and control signals. 
Increases in resolution for both sensors and actuators are the general trend. The increased resolution potentially leads to an improved overall control of paper machines i f the multivariable control issues can be addressed effectively [36]. The increased amount of information causes problems in data processing (slows down the estimators and controllers), visualization of the process information, and in data storage. While the advances in computer technology tend to provide us with enough computing power, the visualization and data storage demands are becoming increasingly important. The machine operators benefit from a better presentation of the process data in order to be able to detect any change in the quality of final product. An 'enhanced' image of the two-dimensional (CDxMD) data is an important tool as part of the operator interface. The enhancements may include data filtering to provide 'smooth' images or to extract some other features, for example sudden changes in paper quality, or streaks. This type of data processing will lead to better diagnostics and feature recognition, first by the human operators and then, possibly, by the automated systems. 3 Chapter 1: Introduction Quality control and production monitoring is increasingly important with the introduction of new standards (ISO-9001). Being able to record production and to trace back the events that caused a change in the paper quality has become essential for the paper companies. Data compression methods that provide an efficient data storage and retrieval method are desirable. In this thesis a novel way of handling scanned paper machine data is introduced. A n effort has been made to provide a framework that unites the estimation, visualizations, and data storage of the paper machine process data using wavelet analysis techniques. These techniques have been taken from the image processing literature and adopted to our purposes. 1.4 Outline of Thesis An introduction to wavelet theory and its applications is given in the second Chapter. The third Chapter presents the results achieved by using wavelet filtering on simulated and industrial data. Applications of wavelets in paper machine process data compression and in the control of paper machines are presented in Chapter 4. The conclusions and the final remarks are given in Chapter 5. Models of basis weight and moisture variations, as well as the EIBMC and E X P O algorithms are given in appendixes. 4 Chapter 2: Wavelet Theory Chapter 2 Wavelet Theory In recent years wavelet theory has been established as a unified framework for a number of techniques used in different disciplines of science: • Signal Processing (multiresolution analysis, subband coding, pyramid schemes) • Applied Mathematics (multigrids for solving partial differential equations, approximation theory, grids) • Physics (the theory of coherent states) Statistics (regression) The roots of what is today known as wavelet theory can be found in the work done in the early 1930's (Littlewood-Paley techniques) or even earlier (1910, work done by A . Haar [17]). In the early 1980's, these different techniques were unified as wavelet theory. At that time, the word wavelet was suggested by Alex Grossman and Jean Morlet. In mid 1980's, a new orthogonal wavelet expansion was constructed by Pierre-Gilles Lemarie and Yves Meyer [24] and, by the end of the decade, Ingrid Daubechies [5] gave a method for the construction of wavelets, nonzero only on a finite interval and with arbitrarily high but fixed regularity. 
Most of the work carried out in applying wavelet theory in the detection of signals in noise, and in data compression has been published in the last five years. In this Chapter an introduction to wavelet theory, one- and two-dimensional discrete wavelet transforms, wavelet filtering, and data compression is given, with an emphasis on the implementation issues. 2.1 Definitions The inner product of two elements x, y of a real or complex linear space Y is denoted (x, y) and has the following properties: (i) (x, y) > 0 , whenever x ^ 0 (ii) (x,y) = (y,x) (iii) (ax, y) — a(x,y) , a is a scalar (iv) (x + y,z) = (x, z) + (y, z) 5 Chapter 2: Wavelet Theory The norm, denoted ||y||, can be defined as ||y|| = (y, y)¥. In an inner product space, two elements x, y £ Y are defined to be orthogonal (denoted x _L y) if y) = 0. Let {y,-} be a set of mutually orthogonal ((yi,yj) — 0, i ^ j) elements in an inner product space Y. Define e,; = yi/||y»:||, then each e, has unit norm and the set {es-} is called an orthonormal set. A Hilbert space is a linear space on which an inner product has been defined, with a norm derived from the inner product and which is complete in this norm. 1? is a Hilbert space with inner product (x,y) = Jx{t)y(t)dt (2.1) where the functions x, y are defined and integrable on a domain I. I2 is a Hilbert space with inner product oo (z> v) = X]XiVi (2-2) i=i The condition number k of an operator T is defined as k = | |T| | • 1|. A n operator is unitary if T* = T~l. 2.2 Wavelets as Basis Functions Any signal in a Hilbert space can be approximated by a weighted sum of basis functions. The accuracy of the approximation will depend upon the class of signals, the choice of basis functions and the number of terms. N f(x)^^2c^i(x) (2.3) Different sets of basis functions will lead to different sets of coefficients c;. Two classes of representation are, for example, the sampling function and the sinusoid function. The signal representation by using sinusoids as the basis function is known as Fourier transform. This transform reveals information only about the signal's frequency content over the entire interval. On the other hand, the signal representation by a weighted sum of sampling functions will contain information only about the time domain behavior. This can be explained by comparing the functions' support (support is the interval over which a function is nonzero). The sampling function has infinitesimally small support and so produces precise time information but no frequency data. The sinusoids have infinite support and so provide spectral information but cannot establish time-domain variations. In 6 Chapter 2: Wavelet Theory many cases a set of basis function that is a compromise between these two extremes can be used to track both spectral and time-domain disturbances. These functions should have finite support (good time resolution) of adjustable width [18] (good frequency resolution). One such a set of basis functions is called wavelets. Wavelets are created by scaling and translating the same prototype function ^(x) known as the mother wavelet. The scale factor is normally chosen to be a power of two, yielding the desired cascade octave band-pass filter structure. For the discrete wavelet transform, all the integer shifts of *&(x) have to be considered. Therefore, the wavelet decomposition of the signal is given by (2.4) where Vjk(x) = 2iV(2jx-k) =2*V(2j(x-2-jk)) (2.5) The coefficient 2~3 k is a dyadic point. The multiplier 2 s is needed to make the basis orthonormal. 
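As a concrete instance of the scaled-and-translated family in Eq. 2.5 (reading its garbled normalising factor as the usual 2^{j/2}), the Haar mother wavelet mentioned above gives the simplest orthonormal wavelet basis. The worked form below is added here for illustration only and is not part of the original text:

\psi(x) =
\begin{cases}
 1, & 0 \le x < \tfrac{1}{2}, \\
 -1, & \tfrac{1}{2} \le x < 1, \\
 0, & \text{otherwise},
\end{cases}
\qquad
\psi_{jk}(x) = 2^{j/2}\,\psi\bigl(2^{j}x - k\bigr), \quad j, k \in \mathbb{Z}.

Each \psi_{jk} is supported on the dyadic interval [2^{-j}k, 2^{-j}(k+1)] and has unit L^2 norm, which is exactly what the 2^{j/2} factor ensures; functions with distinct (j, k) are mutually orthogonal, so the family forms an orthonormal basis.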
The wavelet coefficients Cjk are computed by the continuous wavelet transform, which is the inner product of the signal f(x) and the basis function &jk(x). Let the time and the frequency resolutions be defined (in rms sense) as At = \ - O O oo ; w)\2dt (2.6) Au J uj2\G{uj)\2du / \G{u)\2du (2.7) where g(t) and G(u) are basis function and its Fourier transform respectively. The denominators represent the energies of g(t) and G(u) respectively. The product of time and frequency resolutions is lower bounded [34] (Eq. 2.8). At Aw > - ~ 2 (2.8) Eq. 2.8 is sometimes referred to as the uncertainty principle, or Heisenberg inequality. It implies that time and frequency resolutions cannot be controlled independently. Chapter 2: Wavelet Theory 2.3 Wavelet Properties Up to this moment, the choice of the mother wavelet has not been discussed. The choice of the function ^(x) depends on the wavelet properties that are best suited to a particular problem. Some of the most important properties of wavelets are given below. For a more detailed discussion of wavelet properties see the work of Jawerth and Sweldens [20]. Orthogonality - /)) = <5/) directly links the L2 norm of a function to the norm of its wavelet coefficients by The fast wavelet transform, using orthogonal wavelets, is a unitary transformation. Orthogonality is also important in numerical calculations. An error present in the initial data will not grow under the transformation, and stable numerical computations are possible. In the multiresolution analysis orthogonality means that the projection operators into the different subspaces yield optimal approximations in the L2 sense. Compact support is needed to guarantee that the summations in the fast WT are finite (i.e. wavelet filters have finite impulse responses). Symmetry is required for the generalized linear phase filters (to prevent phase distortion). Smoothness is a very important property for wavelet filtering (de-noising) and compression applications. A function is said to have M degrees of smoothness i f its M t n derivative is continuous at all points. The reconstructed version of a function will be much smoother i f the wavelet itself has a higher degree of smoothness. A higher degree of smoothness corresponds to better frequency localization of the filters. Number of vanishing moments TY for a sequence {gk} is given by Eq. 2.10 and is connected to the smoothness of the wavelet (and vice versa). A higher number of vanishing moments corresponds to a higher degree of smoothness. k There is a trade-off among these properties and they are held to varying degrees by different families of wavelets. There are many researchers working on the wavelet design. Fundamental 8 (2.9) (2.10) Chapter 2: Wavelet Theory and pioneering work has been done by Daubechies [6, 8, 5, 7], Meyer [29, 28], Cohen [3] and Wickerhauser [38]. Having chosen the appropriate wavelet, one can move towards the implementation of the discrete wavelet transform. 2.4 Implementation of the Discrete Wavelet Transform The paper machine sensor provides a sequence of discrete measured values. Therefore, an efficient algorithm for the discrete wavelet transform (DWT) is needed. Mallat's pyramid algorithm [26] has been used in this thesis and it will be described in this section. Let the sequence Sk be HP j-2 ' 1=2 • • • HP i = LP i : Figure 2.1: Block diagram of one-dimensional DWT. of the length n = 2N. The block diagram of a one-dimensional discrete wavelet transform (DWT) is given in Figure 2.1. 
It requires two properly designed wavelet filters: G — high pass filter and H — low pass filter, both of the length M. Examples of such filters are given in Figure 3.1 and Appendix C. These two filters are not independent since: 9n = nh\-n (2.H) where gn and hn are filter coefficients of G and H respectively. The DWT is implemented by filtering the signal followed by down-sampling by a factor of 2 (Figure 2.1 and Eq. 2.12). M - l HPk = ^ 9iS2k-i, M-l LPk = ^2 hiS2k-i i=0 (2.12) If the length of the original sequence Sk is n then the lowpass (LP) and the highpass (HP) component are each of the length n/2. This procedure of filtering and down-sampling can be repeated recursively Chapter 2: Wavelet Theory on the LP components of the signal. After N iterations the length of the lowpass and highpass components becomes equal to 1. The wavelet coefficients are given by: w = {LPj=N, HPj=N, HPj=N-i, • • •, HPj=1} (2.13) HP, L P * L P M • ( t 2 ) - ^ H , - ^ f 2 ) - » > H Figure 2.2: Block diagram of one-dimensional inverse DWT. The inverse DWT is depicted in Figure 2.2. It is accomplished by upsampling of the highpass and lowpass components by inserting a zero after each element, and convolving the upsampled lowpass and highpass signals with the filters H and G respectively. The filter pair (H, (?) is related to the forward DWT filter pair (H, G) 9n= 9-n (2-14) hn — h—jj The fast DWT requires 0(n) operations. An actual algorithm in pseudo code is given in [20]. A two-dimensional DWT can be accomplished by two separate one-dimensional transforms [18] as presented in Figure 2.3. r— H(yk) 12 along yk — * - H(XK) — » > 12 along xk — * - — * • — * • G(yk) 12 along yk — • —»*- i2 along xk — » - H(yk 12 along yk G(yk 12 along yk Figure 2.3: Block diagram of two-dimensional DWT. When implementing the filtering of a signal by the filters H,G a choice has to be made regarding the signal padding at the ends of the sequence. A signal padding assumption is needed because both, 10 Chapter 2: Wavelet Theory the filter and the signal lengths are finite. When the filter passes the last signal value (s n ) h\ h-2 /13 fi4 . . . S „ _ 4 S „ _ 3 Sn-2 Sn-l Sn X\ X2 £ 3 . . . (2.15) some values {x\, X2, £ 3 , . . . } have to be added at the end of the signal. The simplest solution would be to pad zeros. Padding zeros introduces a discontinuity at that point and it is quite unlikely that zeros are a natural extensions of the signal. Knowing how sensitive the DWT is in registering discontinuities, which is a reason for using the DWT in the first place, it is likely that this artificial insertion will introduce a significant error. A better approach is to consider the function to be periodic (i.e. x\ = s\, x'% = S2, ...). Again, it will cause no error only i f the behavior of the signal at s i and sn do match. Other much more advanced approaches to this problem, are summarized in [20] (Meyer's boundary wavelets, Dyadic boundary wavelets). 2.5 Detection of Signals in Noise and Data Compression The signal representation in wavelet domain can become very compact with only a few coeffi- cients being needed to represent a complex signal (Figure 2.4). By contrast, broadband components associated with noise have their energy dispersed over the coefficients (Figure 2.5). 
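To make the pyramid step of Section 2.4 concrete before turning to de-noising, the following MATLAB sketch carries out one analysis and one synthesis stage of Figures 2.1 and 2.2 for the Haar filter pair. It is an illustrative reading of Eq. 2.12, whose index convention is partly garbled in this copy, and is not code from the thesis toolbox.

% One stage of Mallat's pyramid algorithm (Figs. 2.1-2.2) for the Haar
% filter pair h = [1 1]/sqrt(2), g = [1 -1]/sqrt(2).  Illustrative only.
s = randn(1, 2^8);                        % input sequence of length n = 2^N
h = [1  1] / sqrt(2);                     % low-pass filter H
g = [1 -1] / sqrt(2);                     % high-pass filter G

% Filter and down-sample by 2 (Eq. 2.12): n/2 approximation and n/2 detail coefficients.
LP = h(1)*s(1:2:end) + h(2)*s(2:2:end);   % low-pass (approximation) part
HP = g(1)*s(1:2:end) + g(2)*s(2:2:end);   % high-pass (detail) part

% Inverse stage (Fig. 2.2): up-sample and filter with the synthesis pair,
% which for the Haar filters reduces to these explicit formulas.
s_rec          = zeros(size(s));
s_rec(1:2:end) = (LP + HP) / sqrt(2);
s_rec(2:2:end) = (LP - HP) / sqrt(2);
max(abs(s - s_rec))                       % of the order of eps: perfect reconstruction

Applying the same pair of operations recursively to the LP part N times yields the full coefficient vector of Eq. 2.13; each stage costs a number of operations proportional to the current signal length, which is where the O(n) complexity of the fast DWT quoted above comes from.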
Function Blocks (MakeSignal.m, WaveLab) 6 1 Wavelet coefficients 0.2 0.4 0.6 0.8 Time 0 -1 -2 -3 -4 -5 -6 -7 -8 -9 0.2 0.4 0.6 0.8 Time Figure 2.4: Signal in time-domain and its wavelet coefficients 11 Chapter 2: Wavelet Theory This property makes the wavelet transform very suitable for estimation and data compression. By reducing (shrinking) the wavelet coefficients it is possible to remove the "less significant" features of a signal, most likely noise. Therefore an estimated, noiseless, signal can be obtained after the inverse DWT. The fact that the signal energy is concentrated in a relatively small percentage of its wavelet coefficients, naturally led to the implementation of wavelets in data compression. 2.5.1 De-Noising The term De-Noising describes, in an informal way, the various schemes which attempt to reject noise by damping or thresholding in the wavelet domain [11]. Different methods of wavelet filtering are very well documented in the literature. Most of the implementations are based on the work done by Donoho and Johnstone at Stanford University. They have shown [13, 12, 14, 10] that for the signal model: Vi = fi + crei, i — 1,. . . , n (2.16) where e; l~ N(0,1) is Gaussian white noise, the non-linear wavelet-coefficient thresholding produces estimates / that are nearly optimal in the I2 sense. Function Blocks with additive white noise 7. 0 0.2 0.4 0.6 Oi Time Wavelet coefficients in presence of noise 0 -1 -2 -3 -4 -5 -6 -7 -8 -9 _i_ T -Ir- T " I 1 I ' r~r ,,1. , •A—JTA- 0 0.2 0.4 0.6 Oi Time Figure 2.5: Noisy signal in time-domain and its wavelet coefficients 12 Chapter 2: Wavelet Theory The wavelet regression estimator that is suggested by the Stanford group is based on a three-step procedure: 1. Find DWT of y = (yi,...,y„) u = Wy (2.17) where operator W represents the wavelet transform; 2. Create a new set of wavelet coefficients UJ* by a non-linear modification of UJ; 3. Reconstruct an estimate of / b y applying the inverse DWT to UJ*: f = WTu* (2.18) The step 2 is the most important part of the regression. Donoho and Johnstone have proposed thresholding of wavelet coefficients as the primary method of the modification. They have established that this method works because: 1. The DWT of Gaussian white noise is again white noise. Thus, it is evenly spread over all coefficients (every empirical wavelet coefficient contributes noise of variance a2 but only a very few wavelet coefficients contribute signal [14]. 2. The real signal / is represented by a small number of nonzero coefficients. To achieve successful estimation, a threshold has to be found based on the estimated noise level. 2.5.1.1 Thresholding methods There are two main types of thresholding: Hard Thresholding and Soft Thresholding. Hard thresholding keeps all the coefficients that are greater than the threshold and sets the others to zero. Expressed in the standard Matlab™ notation, it is given by ui*hard — ui • (abs(ui) > threshold) (2-19) Soft thresholding sets all the coefficients smaller than the threshold to zero and shrinks the others by the threshold value. It is given by ui*soft = sign(u>) • (abs(uj) — threshold) • (abs(uj) > threshold) (2.20) 13 Chapter 2: Wavelet Theory It is very common to apply thresholding only on the higher-resolution coefficient levels, keeping the lower-frequency content of the signal intact. It is obvious that these methods are highly non-linear. 
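Before comparing the two rules, it may help to see them as executable MATLAB on a small vector of coefficients. This sketch simply restates Eqs. 2.19 and 2.20; the coefficient values and variable names are illustrative, not taken from the thesis toolbox.

% Hard and soft thresholding of a coefficient vector w (Eqs. 2.19-2.20).
w = [-3.1 -0.4 0.2 1.7 0.05 -2.2];                 % example detail coefficients
t = 0.5;                                           % threshold

w_hard = w .* (abs(w) > t);                        % keep-or-kill
w_soft = sign(w) .* (abs(w) - t) .* (abs(w) > t);  % shrink survivors towards zero

% As noted above, in practice only the detail (high-resolution) levels are
% thresholded, so the coarse, low-frequency content is passed through unchanged.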
The soft thresholding is the I2 optimal non-linear function in the wavelet domain to apply i f one requires the resulting function to be at least as smooth as the original, noise-free one. Hard thresholding yields better I2 performance but does not guarantee the smoothness property. In practice, hard thresholding may produce estimates containing "blips" and other irregularities that may not be visually appealing. Another method of thresholding, proposed by Lang et al. [23], uses a shift invariant DWT for the noise reduction. Authors have shown that this method, when combined with hard thresholding, gives better noise suppression without oversmoothing of the details. This method calculates the DWT for all the shifts of the signal, followed by the usual hard thresholding and inverse DWT. Then, it averages the resulting estimates. When properly implemented, shift invariant DWT requires O(NlogN) operations and storage space for NlogN coefficients compared to O(N) operations and storage space for N coefficients required for classical DWT. 2.5.1.2 Threshold Selection Donoho and Johnstone have established [13, 10] several methods of threshold selection. Universal Threshold or VisuShrink is given by tuniv = cryj2log(n) (2.21) where n is the sample size (2N) and a is the noise-level estimate. The authors have proposed median absolute deviation (MAD) method for the noise level estimation [13] from the finest scale of empirical wavelet coefficients (ojf) a = Median(\uf\)/0.67i5 (2.22) This is a robust estimator which produces visually appealing (hence the name VisuShrink), very smooth, estimates. The same threshold is applied to all the levels. It has been shown that this method tends to underfit (oversmooth) the data [31]. SureShrink is an adaptive threshold-selection procedure [13]. A different threshold is applied at each resolution level leading to smoothness-adaptive estimates. It uses Stein's Unbiased Risk 14 Chapter 2: Wavelet Theory Estimate for threshold choice. A n implementation of SURE algorithm is given in the Matlab file: ValSUREThresh.m which is part of the wavelet toolbox developed at Stanford University [1]. At each level of resolution the algorithm checks whether there is enough signal to compute the SURE threshold. If there is, tsure is used, otherwise tuniv is used. This method also tends to underfit. There are some other methods, proposed by different authors. Nason [31] has suggested two methods. One uses SURE procedure on all the coefficients simultaneously. It is called GlobalSure. This procedure results in reasonable performance with respect to mean square error. The second method is based on Cross-Validation. The same paper presents a comparative study of these methods. For all the threshold-selection procedures described above, the noise model is assumed to be as in (2.16). In case of correlated noise Johnstone and Silverman [21] have proposed a level-adaptive threshold: where n is the data sample size and Sj is the standard deviation of the wavelet coefficients at j t h level. Another method that can be used in the case of correlated noise, i f the noise structure is known, is to use a prewhitening transformation followed by universal thresholding. However, the wavelet decomposition of the signal in the original domain may possess sparsity properties that are lost in the prewhitening transformation, and the advantages of using wavelet de-noising on the prewhitened data would then be diminished [21]. 
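As a small self-contained illustration of the VisuShrink recipe (Eqs. 2.21 and 2.22, reading the garbled constant in Eq. 2.22 as the usual 0.6745), the noise level can be estimated from the finest-scale coefficients and turned into a single threshold. The toy signal, the variable names, and the per-level variant sketched in the comments are assumptions made for the example, not material from the thesis.

% MAD noise estimate (Eq. 2.22) and universal threshold (Eq. 2.21).
n      = 2^10;
y      = sin(2*pi*(1:n)/n) + 0.2*randn(1, n);   % toy noisy record, sigma = 0.2
w_fine = (y(1:2:end) - y(2:2:end)) / sqrt(2);   % finest-scale Haar detail coefficients

sigma  = median(abs(w_fine)) / 0.6745;          % robust noise-level estimate
t_univ = sigma * sqrt(2 * log(n));              % VisuShrink threshold, same at all levels

% A level-dependent variant (the MultiMAD option used in Chapter 3) repeats the
% MAD estimate on the coefficients of each resolution level j and derives a
% separate threshold t_j from sigma_j, which helps when the noise is coloured.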
2.5.1.3 De-Noising of Two-Dimensional Signals The same methods described in the preceding sections can be used for two-dimensional signals. Donoho [10] proposed the same three step procedure described in 2.5.1, but applied to two- dimensional image data tj = Sj sjl • login) (2.23) i , . . . , m i (2-24) 1,... ,m2 where zi iid N(0,1) is Gaussian white noise and the sample size n is given byn = m\m2. 15 Chapter 2: Wavelet Theory 2.5.2 Compression Data compression is one of the most common applications of wavelet theory. The ability of the DWT to compress the signal energy in a small number of coefficients makes it a natural choice for data compression. There are two basic types of data compression, lossless and lossy. Lossless compression is used when an exact reconstruction of the signal is needed. In con- trast, lossy compression allows small errors in the reconstruction, in an attempt to achieve higher compression ratios. DWT H Threshold H Encoder Compressor Sparse Matrix Storage IDWT Decoder Decompressor / Figure 2.6: Block diagram of a compressor/decompressor without a quantizer. In this thesis, only lossy.compression will be considered. The reason for this should be apparent from the previous section (2.5.1). The goal is to try to reconstruct the" signal from the noisy data. Most of the time it is only the reconstructed signal that is of interest and the "loss" should be the noise. The de-noising procedures reduce the number of nonzero elements, therefore making it an , excellent candidate for compression. There are two possible methods to implement data compression on the process data. One method is to use the procedure depicted in Figure 2.6. The encoder's role is to losslessly compress the sparse matrix of quantized coefficients. For the fast execution, simple run-length (Pratt [33]) coding of zero-valued coefficients has been proven very effective. This simple procedure, when applied to the industrial data used later in this thesis, reduces the percentage of nonzero coefficients in the DWT to 5% (i.e. the signal is represented by 5% of its DWT coefficients). The remaining coefficients can be stored in a Matlab™ sparse matrix achieving the compression ratio of 13:1. Because the wavelet transform and de-noising procedure are an integral part of the estimator, no additional computing time is required. 16 Chapter 2: Wavelet Theory Another method used in image compression is shown in Figure 2.7. CO' DWT H Threshold H Quantizer H Encoder Compressor s(\,yj' IDWT M Dequantizer— Decoder co'. Sparse Matrix Storage Decompressor Figure 2.7: Block diagram of a compressor/decompressor with a quantizer. This method adds one additional step, quantization, into the compression procedure. By quantization of nonzero DWT coefficients much higher compression ratios are achievable at the expense of an additional error. A quantizer is a function that maps many input values into a smaller set of output values. This mapping is generally a staircase function (Figure 2.8). It takes a continuous variable u and maps it into a discrete variable u* which takes values from a finite set of reconstruction levels { r i , T2, TL} (Jain [19]). tk U Figure 2.8: Quantizer mapping function. If the value of u is between two decision levels (tk,tk+i] the quantizer will map it into a single reconstruction level tk (Figure 2.8). The quantizer mapping is irreversible — for the given reconstruction level (a quantizer output) the input value cannot be uniquely determined. 
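A few lines of MATLAB make the many-to-one mapping of Figure 2.8 concrete; the step size is an assumed value and the mid-tread uniform design is used only to keep the sketch short, whereas the thesis points to the optimum mean-square (Lloyd-Max) design of [19].

% Mid-tread uniform quantizer applied to DWT coefficients (cf. Fig. 2.8).
w    = [-3.1 -0.4 0 1.7 0 -2.2];    % thresholded coefficients (zeros produced by de-noising)
step = 0.25;                        % quantizer step size (assumed)

q     = round(w / step);            % integer codes passed to the encoder
w_hat = q * step;                   % reconstruction levels recovered by the dequantizer

% All inputs inside one decision interval of width "step" map to the same code,
% so w cannot be recovered exactly from q; the error |w - w_hat| is at most step/2.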
Thus, an additional distortion of the original signal is introduced. Therefore, the quantizer should be designed in such a way that it minimizes the distortion. In the case of wavelet transform, where the significant information about the signal is carried by the high magnitude wavelet coefficients, it is obvious that the quantizer should have more reconstruction levels dedicated for those coefficients than for the low 17 Chapter 2: Wavelet Theory magnitude ones. One design method, the optimum mean square quantizer (LLOYD-MAX) design, is given in [19]. The decompressor (Figure 2.7) applies decoding (unique) and dequantization (non-unique) to come up with a set of wavelet coefficients CJ* (note: CJ* ^ CJ*). The compression of process data differs from the compression in image processing. The image processing compression should be optimized in a sense of accounting for the Human Visual System (Marr [27]) while the goal of process data compression is to preserve the information about the process. For additional information on wavelet data compression one can refer to the overview given by Hilton et al. [18]. 18 Chapter 3: Analysis of Paper Machine Process Data Chapter 3 Analysis of Paper Machine Process Data The first part of this chapter will be used to introduce the application of wavelet estimation methods to paper machine data analysis. A Matlab™ toolbox has been developed so that different algorithms, described in the previous chapter, can be tested. Test data sets have been used to verify the algorithms and to compare their results to those obtained by using the industrially accepted estimators (EXPO, ED3MC). 3.1 Matlab™ Toolbox for Wavelet Analysis of Paper Machine Data A set of Matlab™ functions and scripts has been developed to provide quick access to variety of different wavelet de-noising techniques and different test data sets. Even though this software is based on the Rice University Wavelet Toolbox [16] and WaveLab [1] only three functions from those toolboxes are used in the final version of the software (two-dimensional DWT and fDWT from the Rice Wavelet Toolbox and SURE threshold selection from the WaveLab). Most of the functions have been rewritten to adjust the algorithms to handle paper machine data (two-dimensional data with x and y dimensions of different, non-power of two, lengths). By selecting different options in the front-end interface procedures many different setups can be tried . Two different thresholding methods can be used: 1. hard thresholding, 2. soft thresholding. Four different methods of noise estimation and threshold selection are available: 1. standard deviation of the wavelet coefficients at the finest resolution multiplied by y/2 * log (n), 2. VisuShrink procedure (see Chapter 2) 3. Mul t iMAD, resolution-dependent thresholding using M A D noise estimator (Eq. 2.22) to estimate the noise standard deviation at each resolution level, 4. SureShrink (see Chapter 2) 19 Chapter 3: Analysis of Paper Machine Process Data and filters created from four different mother wavelets can be used: 1. Haar wavelet, 2. Daubeshies orthogonal wavelets, 3. Coiflets, 4. Symmlets. The above wavelet filters are depicted in Figure 3.1. For more information about the mother wavelets used the reader may refer to the original papers [8, 5, 9]. Haar Daubechies 4 Figure 3.1: Wavelet filters used in the analysis. The results of wavelet de-noising procedures are compared to the estimations obtained by the E X P O algorithm and, in some cases, to those obtained by the EIBMC algorithm. 
Only a few tests 20 Chapter 3: Analysis of Paper Machine Process Data were done using the EIBMC since, for a fair comparison, the EfBMC algorithm needs to be properly tuned and the test data sets have to be created by using the proposed models (see Appendix B). The verification process called for some extreme-case input signals. Hence, not all data sets used for the wavelet estimator testing were created by the given models. Note that both EIBMC and E X P O are recursive algorithms, whereas wavelet filtering is a batch operation at present. 3.1.1 Padding of the Two-Dimensional Signals As mentioned in the previous chapter the fast DWT and IDWT procedures need input data sets of dimensions D ( 2 L l , 2 1 , 2 ) . Paper machines rarely give this kind of output. Therefore, some kind of extrapolation is needed so data can be padded up to the proper size. The padding by the repetition of the last data point in each particular direction has been chosen because it is simple to implement and it reduces the boundary errors due to the periodic nature of the input signals assumed by the DWT algorithm. The extra data points are removed after the de-noising. This procedure caused many of the boundary wavelet coefficients to be set to zero (signal becomes constant in that direction setting high-pass signal component to zero). Thus, the original M A D algorithm (Eq. 2.22) had to be modified so that all the zero coefficients are removed before the median is calculated. 3.2 Applying Wavelet De-noising to Simulated Scanned Data Verification of the proposed algorithms has been conducted on the various sets of the simulated data. Data sets were designed in such a way that they provided signals with enough complexity so that the estimator could be properly evaluated. The test data also contained the features that commonly appear on a paper sheet (streaks, bumps and grade changes). A l l the data sets have been zigzag sampled and presented to the estimator as i f they had been coming from a paper machine scanner. The data processing has been done in batch mode. A matrix of data points of the size D(n,m) is taken, padded to the size D\ (2N, 2M), where 2N >n and 2M > ra, and sent to the estimator. 3.2.1 Edge Effects and Step Changes The first data set has been created in such a way that the edge effects (errors from the signal periodicity assumed by DWT) and the effects of sharp (sudden) changes in profiles can be studied. The target values (noiseless data) are shown in Figure 3.3. This data set' contains a very well 21 Chapter 3: Analysis of Paper Machine Process Data defined C D profile which differs greatly at the edge sensors (sensors #1 and #120) and it has a sharp discontinuity at the sensor #60. The M D profile is a sine function of time, with an amplitude of 1. At the M D position 29 the C D profile flattens up by 50% (sudden change in C D profile — the amplitude being changed from 1 to 0.5). The number of scans for all test data sets is 58 and the number of sensors is 120. Consequently, four data points were padded to each end of every C D profile and 3 scans were padded before and after the real signal. This signal is corrupted with white noise Zilti2 ~ N(0,a2) where a2 = (0.2) 2. The composite signal (CD + MD), before and after adding the noise, is shown in Figure 3.2. This noisy signal has been processed by using different wavelet estimators and the results compared to those obtained by the EXPO. For a signal containing such abrupt changes it was expected that the Haar wavelet would perform very well. 
This has been confirmed by the experiments (see Tables 3.2 and 3.3). The estimated profiles as well as the real signals are given in Figure 3.3. The M D profiles are calculated as scan averages. 22 Chapter 3: Analysis of Paper Machine Process Data MD+CD Target Profile 0 o Sensors Raw MD+CD Profile Figure 3.2: Edge effects: CD+MD before and after the noise has been added. 23 Chapter 3: Analysis of Paper Machine Process Data Chapter 3: Analysis of Paper Machine Process Data Comparison of mean scan errors Figure 3.4: Edge effects: estimation errors as functions of time (scan number). The Haar wavelet given in Table 3.3 has been used. The thresholds and the estimated noise standard deviations, used for de-noising, are given in Table 3.1. Figure 3.4 depicts the mean Levels Threshold <r(<7 = 0.2) 1 0.75 0.19 2 0.64 0.18 3 0.63 0.20 4 0.50 0.19 5 0.25 0.12 Table 3.1 Edge effects: thresholds and estimated noise at each resolution level. estimation error (per scan) as a function of time (scan position). The error calculation formula is given in Equation 3.1 CDerrij = \CDij — CDestimatedij\, i = 1... n; j = 1... m (3.1) where CDerrij is the absolute value of estimation error at the i t h scan and j t h sensor. For each scan the mean value has been plotted. I m CDerrj = — > CDerri, (3.2) j=l 25. Chapter 3? Analysis of Paper Machine Process Data J = ^ 2 2 C D 2 ( I R R ' - J (3-3) The overall errors (Equation 3.3) were: Jwav — 0.86j (3-4) Jexpo = 9.6 and the mean error at each scan was less then 1% of the signal amplitude. Therefore, the signal was successfully recovered from the noisy data. A number of tests using different methods of thresholding, threshold selections, and with a variety of different mother wavelets of different lengths, has been run for each data set. Some of the results are summarized in the Table 3.2. For the Symmlets and Coiflets the.number in the fourth column represents the wavelet type not the actual length. Two tables, summarizing the test results, are given for each data set. The first table always shows the results obtained by the same set of the wavelet filters and is used for comparison of the filter performances on different data sets. The second table displays the representatives of four different wavelet filters that have produced the best results, i.e. smallest errors, for that data set. The Haar wavelet used in this example has produced the best results, as expected, but a number of different configurations have performed very well. When very smooth wavelet filters have been used, the estimation error would tend to go up with an increased number of resolution levels. This happens when thresholding starts to affect the low-frequency part of the signal (trying to smooth out the signal made of sharp edges). Wavelet Thresholding Threshold Wavelet Resolution J J expo method estimation length/type levels Daubechies soft std 6 • 3- 0.85 Daubechies soft Mul t iMAD 6 3 1.14 Symmlet soft std 6 . 3 0.85 Symmlet soft Mul t iMAD 6 • 3 1.19 Coiflet soft std 3 3 0.83 Coiflet soft Mul t iMAD 3 3 1.27 ' Haar soft Mul t iMAD 2 2 0.46 Table 3.2 Edge effects: summary of the results 26 Chapter 3: Analysis of Paper Machine Process Data It can be seen that the errors at the edges of the signal are very small. This may be attributed to the signal padding and to the wavelet length. For a filter length of two, only one data point has to be padded to the signal at any time — consequently only one point can be wrong. Wavelet Thresholding Threshold Wavelet Resolution J t « expo. 
method estimation length/type levels Daubechies hard M A D 4 2 0.70 Symmlet hard M A D 6 3 0.72 Coiflet hard std 2 3 0.66 Haar soft Mul t iMAD 2 5 0.09 Table 3.3 Edge effects: best filter performances. To contrast the results shown in Figures 3.3 and 3.4, in Figures 3.6 and 3.5 are shown the results of the estimation obtained by using the Coiflet filter described in Table 3.3. As expected, the filter with a higher degree, of smoothness yielded much smoother estimates causing a decrease of 'sharpness' at the points of discontinuity (sensor #60 and scan #29). The overall estimation error was higher than in the first case but still significantly better than for the E X P O estimator. The filter length has also contributed to the edge effect errors. Comparison of mean scan errors Figure 3.5: Edge effects: estimation errors as functions of time (scan number) for a smooth estimator. 27 Chapter 3: Analysis of Paper Machine Process Data 28 Chapter 3: Analysis of Paper Machine Process Data 3.2.2 Smooth Surfaces The second data set, depicted in Figure 3.7, has been created with the intention of testing the wavelet estimator on some smoother surfaces with a constant signal to noise ratio. The profiles CD = 1-sin 6CD, 6CD£[0M] (3.5) MD = 1 • sin 0MD, 0MD € [0, 2TT] were corrupted by the additive white noise Zilti2 *~ 7Y(0, <J2) where a2 = (0.5) 2 which was much higher than in the first data set. The estimation has been done by the Symmlet filter using Mul t iMAD procedure (see Table 3.5). The thresholds and the estimated noise are given in Table 3.4. The estimation errors are shown in Figure 3.9 while the estimated profiles are given in Figure 3.8. Levels Threshold a(a = 0.5) 1 1.56 0.40 2 1.44 0.40 3 1.87 0.60 Table 3.4 Smooth surfaces: thresholds and estimated noise at each resolution level. Wavelet Thresholding Threshold Wavelet Resolution J J expo method estimation length/type levels Daubechies soft std 6 3 0.51 Daubechies soft Mul t iMAD 6 3 0.51 Symmlet soft . std 6 3 0.39 Symmlet soft Mul t iMAD 6 3 0.39 Coiflet soft std 3 3 0.42 Coiflet soft Mul t iMAD 3 3 0.42 Haar soft Mul t iMAD 2 2 0.95 Table 3.5 Smooth surfaces: summary of the results Even though the signal to noise ratio was low, the estimated profiles were very close to the original ones and, due to the filter smoothness, very smooth. The overall errors were: Jwav = 6.22, (3.6) J expo — 16.03 29 Chapter 3: Analysis of Paper Machine Process Data Jwav — 0.3QJ~expo (3-7) Wavelet Thresholding Threshold Wavelet Resolution J J expo method estimation length/type levels Daubechies soft std 6 3 0.51 Symmlet soft Mul t iMAD 6 3 0.39 Coiflet soft Mul t iMAD 3 3 0.42 Haar soft Mul t iMAD 2 2 0.95 Table 3.6 Smooth surfaces: best filter performances. 30 Chapter 3: Analysis of Paper Machine Process Data Figure 3.7: Smooth surfaces: composite profiles with and without noise. 31  Chapter 3: Analysis of Paper Machine Process Data Comparison of mean scan errors Figure 3.9: Smooth surfaces: estimation errors. 3.2.3 Grade Profile Change This data set, shown in Figure 3.10, has been used with the wavelet estimator to see how the estimator behaves when a step change of amplitude 2 is introduced in the M D profile. The profiles CD = 0.3 • cos 6CD, OCD € [0, 2TT] MD = 0.3 • s in6 M D, 0MD€[O,2TT] (3.8) were corrupted by additive white noise with variance that was very high compared to the profile amplitudes: a2 = (0.2) 2. The estimation has been done using the Symmlet filter and Mul t iMAD procedure (see Table 3.9). 
The thresholds and the estimated noise are given in Table 3.7. It is interesting to note that the VisuShrink threshold that would apply to this case was 0.66. Table 3.7 shows that a threshold of this size was applied only at one resolution level. A l l the other threshold levels were lower. Therefore, a VisuShrink estimator would oversmooth the signal in this case. This agrees with the theory given in Chapter 2. The estimation errors are shown in Figure 3.13 while 33 Chapter 3: Analysis of Paper Machine Process Data Levels Threshold <j(cr = 0.2) 1 0.61 0.16 2 0.60 0.17 3 0.66 0.21 4 0.49 0.19 5 0.38 0.19 Table 3.7 Grade change: thresholds and estimated noise at each resolution level. the estimated profiles are given in Figures 3.11 and 3.12. The overall error for the E X P O filter was Jexpo — 6.42. Wavelet Thresholding Threshold Wavelet Resolution J Jexpo method estimation length/type levels Daubechies soft std 6 3 0.49 Daubechies soft Mul t iMAD 6 3 0.39 Symmlet soft std 6 3 0.50. Symmlet soft Mul t iMAD 6 3 0.38 Coiflet soft std 3 3 0.51 Coiflet soft Mul t iMAD 3 3 0.38 Haar soft Mul t iMAD 2 2 0.71 Table 3.8 Grade change: summary of the results Wavelet Thresholding Threshold Wavelet Resolution j J expo method estimation length/type levels Daubechies soft Mul t iMAD 6 4 0.20 Symmlet soft Mul t iMAD 8 5 0.17 Coiflet soft Mul t iMAD 2 4 0.19 Haar soft Mul t iMAD 2 3 0.50 Table 3.9 Grade change: best filter performances. 34 Chapter 3: Analysis of Paper Machine Process Data MD+CD Target Profile 0 o Sensors Raw MD+CD Profile 0 o Sensors Figure 3.10: Grade change data set: CD+MD before and after the noise has been added. 35 Chapter 3: Analysis of Paper Machine Process Data 36 Chapter 3: Analysis of Paper Machine Process Data S O O O O O O C N >r> — ui o j" SUBOC j] o C A ed -a SUBDS SUBDS 37 Chapter 3: Analysis of Paper Machine Process Data Figure 3.13: Grade change test data set: estimation errors. 3.2.4 Data Created by the Moisture Variation Model (MVM) This data set (the profiles are shown in Figure 3.15) has been created by the MVM. The input parameters and the profiles are given by: CD = 1 • sin 6CD exp (-0.03 * k) k = 1... 58, (scan#) 6CD G [0, 6TT] I (3-9) u = 0.5, w ~ iV(0,0.12) The estimation has been done by the Coiflet filter using MultiMAD procedure (see Table 3.12). The thresholds and the estimated noise are given in Table 3.10. The VisuShrink threshold that would apply to this case was 0.27. Again, only one threshold (Table 3.10) used in this case was of that size. The threshold used on the lowest resolution level was much higher than the other two and was 38 Chapter 3: Analysis of Paper Machine Process Data the result of the estimator trying to filter out the coloured noise introduced in (3.9). The estimation Levels Threshold a 1 0.20 0.05 2 0.31 0.09 3 0.65 0.21 Table 3.10 M V M test data: thresholds and estimated noise at each resolution level. errors are shown in Figure 3.14 while the estimated profiles are given in Figure 3.15. The overall errors for the E X P O and the EJMBC estimators were Jexpo = 11.43 (3.10) Jeibmc — 0.94 • Jexpo It can be seen, from the given results, that the wavelet estimator performed very well compared to the both E X P O and EfBMC algorithms regardless of the fact that a priori knowledge of the moisture model and coloured noise has not been used. 
A possible improvement would be to include the noise model in the threshold-estimation procedures by increasing or lowering the threshold, at different resolution levels, by prescheduled coefficients based on the noise model. The wavelet estimated profiles were much smoother and without the artificial CD shapes that can be seen on the EfBMC estimated results (see Figure 3.16). Therefore, i f the estimated signals were to be used for control, the smooth profiles would prevent unnecessary CD control actions. Wavelet Thresholding Threshold Wavelet Resolution j Jexpo method estimation length/type levels Daubechies soft std 6 3 0.78 Daubechies soft Mul t iMAD 6 3 0.69 Symmlet soft std 6 3 ; 0.79 Symmlet soft Mul t iMAD 6 3 0.74 Coiflet soft std 3 3 0.76 Coiflet soft Mul t iMAD 3 3 0.67 Haar soft Mul t iMAD 2 2 0.89 Table 3.11 M V M test data: summary of the results 39 Chapter 3: Analysis of Paper Machine Process Data Wavelet Thresholding Threshold Wavelet Resolution j method estimation length/type levels Daubechies soft std 6 5 0.66 Symmlet soft std 6 5 0.68 Coiflet soft Mul t iMAD 3 3 0.67 Haar soft M A D 2 5 0.76 Table 3.12 M V M test data: best filter performances. Comparison o f mean scan errors 1 i 1 E x p W a v — E i m c I 1 I 1 I \ I y ' s . . . ;../.> _ V • ^~*y j _ i i 11 : i i 1 1 1 • ' 0 10 20 30 40 50 60 Scan Figure 3.14: M V M test data: estimation errors. 40 Chapter 3: Analysis of Paper Machine Process Data Chapter 3: Analysis of Paper Machine Process Data CD profile #30 Scan number Figure 3.16: M V M test data: CD profiles. 3.2.5 Bumps and Streaks The ability of the wavelet estimator to localize some characteristic patterns on a paper sheet has been tested using the data set given by Equation 3.11. This data set simulates a narrow streak in M D and two actuator bump changes. The simulation results are given in Figures 3.17 and 3.18, and performances of different estimators and their respective errors of estimation are shown in Tables 3.13 and 3.14. Results shown in the Figures are obtained by the Coiflet estimator whose parameters are given in Table 3.14. Hard thresholding has been used with the threshold set at 0.754 and the estimated noise standard deviation was 0.18 (true value was 0.2). This signal was a very difficult test for the estimators because of its smoothness combined with some very short and/or narrow features. If the signal had contained flat profiles with the same bumps and the streak, then the Haar wavelet would have performed the best. In this case, as in the most of the other tests, the performance of the selected group of wavelet estimators is similar to the best performances for that data set. The wavelet estimator has successfully localized the test patterns of short duration. Estimation of the streak is not as effective as by the E X P O algorithm, although the contrast between smooth surfaces, the bumps, and the streak is impressive. E X P O is effective in detecting these variations since it carries on no C D smoothing. 42 Chapter 3: Analysis of Paper Machine Process Data CD = 0.2 • cos 0CD, 6CD e [0, 2TT] MD = 0.3 • sin6MD, OMD G [0,2TT] CT = 0.2 1st bump : amplitude = 1 scan #30. . . 39 sensor #30. . . 34 2nd bump: (3.11) amplitude = — 1 scan #20. . . 23 sensor #80. . . 85 Streak : amplitude = 0.6 scan # 1 . . . 58 sensor #60. . . 
Wavelet      Thresholding   Threshold estimation   Filter length   Resolution levels   J/Jexpo
Daubechies   soft           std                    6               3                   0.82
Daubechies   soft           MultiMAD               6               3                   0.99
Symmlet      soft           std                    6               3                   0.86
Symmlet      soft           MultiMAD               6               3                   0.99
Coiflet      soft           std                    3               3                   0.85
Coiflet      soft           MultiMAD               3               3                   1.00
Haar         soft           MultiMAD               2               2                   1.04

Table 3.13  Bumps and streaks: summary of the results.

Wavelet      Thresholding   Threshold estimation   Filter length   Resolution levels   J/Jexpo
Daubechies   hard           MAD                    6               3                   0.74
Symmlet      hard           MAD                    8               3                   0.78
Coiflet      hard           std                    2               3                   0.73
Haar         hard           std                    2               2                   0.85

Table 3.14  Bumps and streaks: best filter performances.

[Figures 3.17 and 3.18: Bumps and streaks test data set: simulation results.]

3.3 Wavelet Analysis of Industrial Data

3.3.1 Choice of an Estimator

For almost all the tested cases, the selected group of wavelet estimators performed significantly better than the EXPO filter. The purpose of this testing was to show that, even though a single optimal wavelet estimator cannot be chosen for all signals, one can be chosen from the selected group (given in the tables showing the summary results) and still achieve good performance. When the features to be extracted are known (for example, when looking for certain patterns on the sheet), an appropriate wavelet estimator can be selected from the table showing the best filters for that particular job; in this way a substantially better performance (compared to the conventional estimators) can be achieved, as has been shown for the given examples. This approach to filter selection is used below when choosing the estimators to be applied to the industrial data.

3.3.2 Pulp Machine Moisture Content Data

This data set was collected from a pulp machine manufacturing thick pulp [22] with a target value of 640.6 g/m². The slice lip opening at the headbox was 15 cm. The headbox was 4.43 m wide with a design speed of 184 m/min. The CD moisture control system was a Devronizer with 25 actuators, each 15.24 cm wide. One scan took 34 seconds to complete. It has been recorded (from the actuator positions) that two CD bumps occurred:

1. sensor #20, scans #19-31,
2. sensor #87, scans #13-30.

Three different wavelet estimators were used and their results were compared to the EXPO and EIBMC estimators. The chosen estimators are given in Table 3.15. The first estimator produced the results given in Figures 3.19, 3.20 and 3.21. This filter yielded very smooth results; the MD streak that can be seen in scan #8 was picked up by the CD profile. It was concluded that this filter oversmooths data in the low-frequency range, and consequently the number of resolution levels for the other two cases was dropped to two. It was observed during the experiments that the adaptive wavelet filtering methods work more reliably when the number of resolution levels is kept low. It is well known [13, 10] that for adaptive filtering to work well, the number of resolution levels used for de-noising must be much smaller than the maximum possible number of decomposition levels. The other two filters (shown here for comparison) were more effective in the localization of the bumps and the streaks on the paper sheet.
The results obtained by the second filter are shown in Figures 3.22 and 3.23, while Figures 3.24 and 3.25 demonstrate the results achieved by the Symmlet filter.

Wavelet      Thresholding   Threshold estimation   Filter length   Resolution levels
Daubechies   soft           std                    6               3
Daubechies   soft           MultiMAD               6               2
Symmlet      soft           MultiMAD               6               2

Table 3.15  Filters used to analyze the industrial data.

The wavelet filters showed a performance superior to the conventional estimators when it came to bump localization (see Figures 3.23 and 3.25). The size of the real bumps, based on the known positions of the actuators and their widths, was 13 x 4.8 and 18 x 4.8 for the first and the second bump respectively. Both EXPO and EIBMC showed signs of dispersing the bumps over a much greater number of scans, although the width of the bumps was preserved. The wavelet estimators detected the bump amplitude much better than the other two estimators.

[Figure 3.19: Pulp machine moisture data: raw MD+CD profile and wavelet MD+CD profile estimate.]

[Figures 3.20 to 3.25: Pulp machine moisture data: MD and CD profiles estimated by the three wavelet filters and by the conventional estimators.]

3.3.3 Grade Change for a Paper Machine

This data set was collected from a newsprint machine during a grade change with the dry weight control system in auto mode. The data used for the analysis was the basis weight in grams per square metre. There were 141 sensors and data was collected during 189 scans. This data set was much larger than those previously analyzed, so it was possible to select estimators that used more resolution levels for de-noising. A Coiflet-3 filter was used and soft thresholding was applied on 5 resolution levels using the threshold

t = sigma_hat · sqrt(2 · log(n · m)),

where the standard deviation was estimated from the wavelet coefficients at the finest resolution level: sigma_hat = std(w_f). The results are shown in Figures 3.26 and 3.27. The estimated CD profiles look very smooth, and it is possible to see that some of the MD change has found its way into the CD profile. This has happened because the MD profiles were calculated (and updated) once per scan. An alternative method of MD profile estimation will be proposed in the next chapter in order to avoid this difficulty.

[Figures 3.26 and 3.27: Paper machine basis weight data: raw and estimated MD+CD profiles.]

Chapter 4
Other Applications of Wavelets in Paper Making Process

4.1 Compression of Paper Machine Data

The theoretical background needed for understanding the compression algorithms used in this chapter was given in Section 2.5.2. This chapter presents some of the more practical aspects of the process-data compression method, as well as the results achieved. Both compression methods given in Section 2.5.2 (compression with and without a quantizer) have been used. It has been assumed that the input data has been measured and stored in 16-bit format.
All the calculations have been done in double precision (Matlab's default precision). The software was written so that the quantizer resolution could assume any value; only three resolutions have been used for the tests: 8-, 12- and 16-bit. The 16-bit resolution was used as the reference, yielding a compression ratio of 1:1; the 12-bit resolution corresponds to a compression ratio of 1.33:1 and the 8-bit resolution to a compression factor of 2:1. All of these compression ratios are in addition to the compression achieved by the thresholding of the wavelet coefficients and the encoding of the data into sparse matrices.

Case   Wavelet      Threshold estimation   Filter length   Resolution levels
1      Daubechies   std                    6               3
2      Daubechies   MultiMAD               6               3
3      Symmlet      std                    6               3
4      Symmlet      MultiMAD               6               3
5      Coiflet      std                    3               3
6      Coiflet      MultiMAD               3               3
7      Haar         MultiMAD               2               2

Table 4.16  Key table: filters used in the compression tests.

The examples used in Chapter 3 have been used to test the ability of the wavelet compressor to compress different types of signals. The results are summarized in Tables 4.17 to 4.21; the wavelet filters used are given in Table 4.16.

            16-bit               12-bit               8-bit
Case    ratio   J/Jexpo      ratio   J/Jexpo      ratio    J/Jexpo
1       30.00   0.85         40.00   0.85         60.00    1.17
2       50.80   1.15         67.74   1.15         101.61   1.43
3       28.06   0.85         37.42   0.85         56.13    1.02
4       52.33   1.19         69.77   1.19         104.66   1.32
5       27.08   0.82         36.11   0.83         54.16    0.85
6       52.73   1.27         70.30   1.27         105.45   1.29
7       13.57   0.46         18.09   0.46         27.13    0.58

Table 4.17  Edge effects: summary of compression results (ratio = compression ratio, J/Jexpo = Jcompress/Jexpo).

            16-bit               12-bit               8-bit
Case    ratio   J/Jexpo      ratio   J/Jexpo      ratio    J/Jexpo
1       50.43   0.51         67.25   0.51         100.87   0.66
2       50.80   0.51         67.74   0.51         101.87   0.66
3       49.36   0.39         65.82   0.39         98.72    0.51
4       51.56   0.39         68.74   0.39         103.11   0.51
5       49.01   0.42         65.35   0.42         98.03    0.43
6       53.54   0.42         71.38   0.42         107.08   0.42
7       13.57   0.95         18.09   0.95         27.13    0.96

Table 4.18  Smooth surfaces: summary of compression results.

The actual quantization used here is given by the following four-step procedure (for a data set D):

1. find t_min = min(D) and t_max = max(D)
2. dt = (t_max - t_min) / (2^bits - 1)
3. D_i = D - t_min
4. D_c = floor(D_i / dt)

where the variable bits represents the resolution in number of bits and the function floor rounds its argument down to the nearest integer. The reconstruction (de-quantization) algorithm is given by

D' = D_c · dt + t_min                                                              (4.1)

            16-bit               12-bit               8-bit
Case    ratio   J/Jexpo      ratio   J/Jexpo      ratio    J/Jexpo
1       19.39   0.49         25.85   0.49         38.77    0.51
2       50.08   0.39         67.74   0.39         101.61   0.42
3       16.93   0.50         22.58   0.50         33.87    0.53
4       51.56   0.38         68.74   0.38         103.11   0.42
5       17.66   0.51         23.55   0.51         35.33    0.53
6       53.54   0.38         71.38   0.38         107.08   0.90
7       13.57   0.71         18.09   0.71         27.13    0.71

Table 4.19  Grade change: summary of compression results.

            16-bit               12-bit               8-bit
Case    ratio   J/Jexpo      ratio   J/Jexpo      ratio    J/Jexpo
1       6.98    0.78         9.31    0.78         13.97    0.82
2       48.00   0.69         64.00   0.69         96.00    0.74
3       6.87    0.79         9.16    0.79         13.74    0.85
4       44.38   0.74         59.18   0.74         88.77    0.80
5       6.81    0.76         9.09    0.76         13.63    0.76
6       45.96   0.67         61.28   0.67         91.91    0.67
7       12.58   0.89         16.78   0.89         25.17    0.89

Table 4.20  MVM test data: summary of compression results.
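As an illustration, the quantizer and the reconstruction of Eq. (4.1) can be written in a few lines. The sketch below is in Python even though the thesis software was written in Matlab; the (2^bits - 1) divisor reflects one reading of the scanned formula, and the compression-ratio helper counts only the nonzero coefficients, ignoring the bookkeeping of the sparse-matrix encoding.

```python
import numpy as np

def quantize(d, bits=8):
    """Four-step uniform quantizer of Section 4.1 (divisor 2**bits - 1 assumed)."""
    t_min, t_max = float(d.min()), float(d.max())
    dt = (t_max - t_min) / (2 ** bits - 1) or 1.0      # guard against constant data
    dc = np.floor((d - t_min) / dt).astype(np.uint32)  # quantization levels
    return dc, dt, t_min

def dequantize(dc, dt, t_min):
    """Reconstruction of Eq. (4.1): D' = Dc * dt + t_min."""
    return dc * dt + t_min

def rough_compression_ratio(coeffs, bits=8, raw_bits=16):
    """Raw 16-bit samples versus nonzero quantized wavelet coefficients only."""
    nonzero = np.count_nonzero(coeffs)
    return coeffs.size * raw_bits / max(nonzero * bits, 1)
```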
It can be concluded from the given results that very high compression ratios are achievable with the proposed algorithms. The additional errors introduced by the quantization of the wavelet coefficients are very small even when the resolution is halved (8-bit): in most cases the increase in error is negligible at 12-bit resolution and around 1 to 2 percent at 8-bit resolution. A significant jump in error sometimes happens when the compression ratio reaches 80 to 100. It should be noted that in almost all cases, even for the highest compression ratios, the overall relative error stayed below one, meaning that even then the accuracy of estimation after compression was better than that obtained by the EXPO algorithm without compression. The adaptive methods (MultiMAD) gave much better compression (Table 4.20) due to their ability to adjust the thresholds at different levels; the method moved to higher thresholds at some resolution levels, thus setting more wavelet coefficients to zero (see Table 3.10 for an example).

            16-bit               12-bit               8-bit
Case    ratio   J/Jexpo      ratio   J/Jexpo      ratio    J/Jexpo
1       41.18   0.86         54.91   0.86         82.37    0.87
2       51.56   1.06         68.74   1.06         103.11   1.07
3       38.88   0.91         51.84   0.91         77.77    0.91
4       51.56   1.06         68.74   1.06         103.11   1.07
5       36.63   0.89         48.84   0.89         73.26    0.91
6       53.13   1.07         70.84   1.07         106.26   1.08
7       13.51   1.11         18.02   1.11         27.03    1.11

Table 4.21  Bump data: summary of compression results.

The test data set created by the MVM is used to show the visual effects of the compression. This data set was chosen because of its richness in MD and CD frequencies, which yields relatively smaller compression ratios (Table 4.20). The results described here were achieved by filter #6 (compression ratio of 81.5:1). The energy distribution of the signal is shown in Figure 4.1(a) and is very close to a normal distribution. Figures 4.1(b) and 4.1(c) depict the wavelet coefficients before and after the thresholding, respectively (note the different scales in (a), (b) and (c)). It is obvious from these diagrams how wavelet compression operates. The wavelet coefficients are so concentrated around quantization amplitude levels 50 to 75 that the plot had to be "zoomed in" to be able to see the low-frequency (high magnitude/energy) wavelet coefficients. Quantization level 62 corresponded to a wavelet coefficient of zero magnitude; there were 2194 coefficients at this level. Even more impressive are the results after the thresholding: all the coefficients that used to be grouped around the zero value are now reduced to zero, so there is only one peak, at level 62, with 8049 coefficients at that value. This compression resulted in only 143 nonzero coefficients.

[Figure 4.1: MVM data set: energy distribution of (a) the raw data, (b) the wavelet coefficients and (c) the thresholded wavelet coefficients, all at 8-bit resolution, plotted against quantization level.]

An alternative way to show the ability of wavelets to compress a given 2D signal can be seen in Figure 4.2, which depicts the distribution of the wavelet coefficients in a two-dimensional image; each small black square represents one nonzero wavelet coefficient (a small sketch of how such a map can be computed is given below).
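A map like Figure 4.2 can be produced by packing the 2D DWT coefficients into the standard storage layout and marking the surviving entries. The sketch below uses Python/PyWavelets for illustration; the fixed threshold is only a placeholder, since the thesis's estimator selects its thresholds from the data.

```python
import numpy as np
import pywt

def nonzero_coefficient_map(sheet, wavelet="coif1", levels=3, threshold=0.3):
    """Boolean map of nonzero wavelet coefficients in the standard 2-D layout."""
    coeffs = pywt.wavedec2(sheet, wavelet, level=levels)
    arr, _ = pywt.coeffs_to_array(coeffs)      # pack all subbands into one array
    arr[np.abs(arr) < threshold] = 0.0         # stand-in for the estimator's thresholding
    return arr != 0                            # True where a coefficient is kept

# np.count_nonzero(nonzero_coefficient_map(sheet)) gives the number of retained
# coefficients, and plotting the map reproduces the kind of picture shown in Figure 4.2.
```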
The meaning of the different regions marked in Figure 4.2 can be understood by comparison with the diagram in Figure 4.3, which relates to the calculation of the two-dimensional DWT (see Figure 2.3). Prior to compression the entire rectangle would be occupied by nonzero coefficients.

[Figure 4.2: MVM data set: location of the nonzero wavelet coefficients (sensors versus scans).]

[Figure 4.3: 2D DWT coefficient storage: layout of the LH, HL, HH and LL subbands at resolution levels 1 to 3.]

The two-dimensional wavelet transform decomposes a signal into a low-frequency (average) signal (f_LL) and three high-frequency (detail) signals which are directionally sensitive [18]:

f_LH emphasizes horizontal, CD-position-dependent, features,
f_HL emphasizes vertical, time-dependent, features,
f_HH emphasizes diagonal, mutually dependent, features.

This directional sensitivity is an artifact of the frequency ranges they contain. It can be seen (Figure 4.2) that only a few nonzero coefficients lie outside the f_LL component. It can be confirmed, from Figures 4.4 and 4.5, that the errors are indeed very small; visually they appear as a kind of fuzziness surrounding the estimated profiles.

[Figures 4.4 and 4.5: MVM data set: profiles after compression and the corresponding compression errors.]

Compression has also been tested on the industrial data set of Section 3.3.2. The compression ratios and errors are given in Table 4.22; the filters are listed in the same order as in Table 3.15. In this case the errors (Eq. 3.3) were calculated with respect to the estimated signal obtained by the same filter, since the true signal is not known. The errors were very small: even the largest one (case #1) corresponds to an average per-sample error of 12.42/(60 · 118) = 0.0018, compared with a standard deviation of the estimated signal of about 0.7. Figures 4.6 to 4.9 were obtained with filter #3 from Tables 3.15 and 4.22 at 8-bit resolution. It is impossible to see any difference between the compressed and non-compressed images, and only one wavelet coefficient (Figure 4.7) is left outside of the f_LL region. Based on the results shown, it is easy to see that the proposed compression method, which is a natural part of the estimation process, yields high compression with almost no additional error. Different compression strategies may be developed to ensure the smallest possible error or the highest compression ratio, depending on the use to be made of the stored data. If the data is to be used as a simple image-record of production, good results may be achieved by simply selecting 2-5% of the largest wavelet coefficients before thresholding; this would produce a constant compression ratio. If a record showing the estimated profiles is to be kept, then the data processing described in this section can be used (estimation followed by compression, using the same wavelet coefficients). If there is a need to keep data with the high-frequency components still present, the thresholding may be done by selecting the threshold conservatively (e.g. one half of the one automatically selected by the estimator) and by using non-adaptive methods.
This filtering would remove a portion of the noise but would still achieve a useful compression ratio.

[Figure 4.6: Pulp machine moisture data: energy distribution of the raw data and of the thresholded wavelet coefficients at 8-bit resolution, plotted against quantization level.]

            16-bit               12-bit               8-bit
Case    ratio   Jcompress    ratio   Jcompress    ratio    Jcompress
1       11.13   0.06         17.77   0.19         26.66    12.42
2       13.62   0.01         18.16   0.42         27.24    2.37
3       13.72   0.00         18.24   0.06         27.44    0.89

Table 4.22  Pulp machine moisture data: summary of compression results.

[Figure 4.7: Pulp machine moisture data: location of the nonzero wavelet coefficients.]

[Figures 4.8 and 4.9: Pulp machine moisture data: estimated profiles before and after compression.]

4.2 Wavelet Estimation for Control Purposes

The wavelet estimator presented in Chapter 3 produced very good results, with low estimation errors and high execution speeds. The estimates, in the form of two- and three-dimensional images, can provide a paper machine operator with a much better view of production quality; the operator acts on the basis of this improved information, leading ultimately to better control and paper quality. The estimator, as given in Chapter 3, worked in batch mode on a relatively large number of data samples (more than 50 scans and more than 110 sensor measurements per scan) and was quite suitable for the above purposes. Some modifications to the proposed algorithms are needed for the estimation to become more appropriate for use in real-time control. These modifications concern practical issues such as the speed of the MD profile updates and the handling of edge effects, and are mainly a topic for future research. One possible approach is as follows. The CD profiles should be estimated as shown in Chapter 3; the number of scans used for each CD profile calculation depends on the filter length and should be at least in the range of 2 to 3 filter lengths. The most recent estimates of the signal are the most important, because these are the samples needed for the controller to respond at the highest rate; hence, better handling of those data samples may be needed (e.g. using Meyer's boundary wavelets or dyadic boundary wavelets). After the CD profile is calculated (once per scan, in batch mode), the MD profiles may be estimated by subtracting the CD profile from the raw data and using the result as an input to a one-dimensional wavelet de-noising procedure. This approach may lead to MD estimation at each sample point instead of once per scan; a minimal sketch of such a per-sample procedure is given below. An example of the MD profile estimation at each sample and its comparison to the scan-average method is given in Figure 4.10. The data set used in this example was given in Section 3.2.2 (Smooth Surfaces). The true MD profile, its scan-average estimate, and the high-resolution estimates achieved by the proposed method are shown in Figure 4.10(b); the raw MD profile is shown in Figure 4.10(a) for comparison. The same method was applied to the pulp machine data (Section 3.3.2) and the results are shown in Figure 4.11(a)-(b). The wavelet estimator will thus enable more frequent MD estimates.
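The per-sample MD estimation just described might be sketched as follows (Python for illustration; the thesis implementation was in Matlab). The sketch assumes a purely additive CD + MD decomposition and reads the MD residual out in simple scan order, ignoring the zig-zag path of a real scanning sensor.

```python
import numpy as np
import pywt

def md_profile_per_sample(raw, cd_profile, wavelet="db6", levels=3):
    """High-resolution MD estimate: subtract the CD profile, then 1-D de-noise.

    raw        : (scans x sensors) array of scanner measurements
    cd_profile : CD profile estimate from the 2-D procedure of Chapter 3
    """
    residual = (raw - cd_profile[None, :]).ravel()        # one value per sample
    coeffs = pywt.wavedec(residual, wavelet, level=levels)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise from the finest level
    t = sigma * np.sqrt(2.0 * np.log(residual.size))
    coeffs = [coeffs[0]] + [pywt.threshold(c, t, mode="soft") for c in coeffs[1:]]
    md = pywt.waverec(coeffs, wavelet)[: residual.size]
    return md.reshape(raw.shape)                          # MD estimate at every sample
```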
The M D controller will then be able to operate with reduced time delay and possibly with increased bandwidth. A second possible benefit is in C D control, where the profile estimate, after decomposition into wavelet components, can be matched to CD actuator spacing and response characteristic. 73 Chapter 4: Other Applications of Wavelets in Paper Making Process (a) Raw MD profile (b) MD profile comparason - l h True MD ScanAvg. - ID WAV 10 20 30 Scans 40 50 60 Figure 4.10: Smooth surfaces: high resolution MD estimation 10 (a) Raw MD profile (b) MD profile comparason • ScanAvg. 1D_WAV 10 20 30 Scans 40 50 60 Figure 4.11: Pulp machine moisture data: high resolution MD estimation 74 Chapter 5: Conclusion Chapter 5 Conclusion The thesis describes a new approach to paper machine process data analysis using one- dimensional and two-dimensional discrete wavelet transforms. These techniques have been adapted from a general theory that has been developed in recent years on the application of wavelets to signal analysis. Application areas in which the theory was first applied have included image processing and bandwidth compression for communications. Two main applications of the discrete wavelet transform have been analyzed in this thesis. First, an analysis of the use of wavelets for processing scanned data representing basis weight and moisture variations on a paper machine has been carried out. It has been shown that wavelets are effective for the detection of process signals in noisy data, so leading to better estimation and visualization of the machine direction and cross machine variations in process data. The second main application of the method has been to allow significant compression of the process data without diminishing the ability to reconstruct accurate profiles. It has been shown that the compression method can be embedded into the estimation algorithm, producing excellent results without a major expense in the computation time. It has been shown that, in both applications, the new methods produce results superior to the industrially accepted procedures. For appropriate choice of wavelets, profile estimates are improved over those obtained using exponential filtering or other standard analysis methods. The data compression technique presents a new concept in paper machine data analysis and the author is not aware of any previous references to this subject. The ability to reduce data storage requirements is of importance in mill-wide process monitoring systems. A comprehensive analysis of the proposed algorithms has been carried out on a variety of simulated data sets for which the true process variations are known. Industrial data has also been analyzed and it is apparent that the method had many desirable characteristics. The following properties have been shown for the wavelet estimator: • The estimator reduces the error of estimation compared to the existing algorithms, when applied to test data. 75 Chapter 5: Conclusion • When applied to industrial data, the method presents an excellent visual interface to the paper making process. Operator data presentations are an important part of process control systems, and it is believed that wavelet analysis gives excellent visual insight into process variations, insight significantly better than is available in current systems. • The wavelet decomposition can be closely related to cross machine actuator responses. 
It is expected that a cross machine control method based on an appropriate wavelet analysis will allow profile upsets to be matched to actuator settings. • The algorithm is computationally fast, which allows real time analysis. The need for a recursive form may be avoided. The method has been shown to be applicable to basis weight and moisture estimation using data from a scanning sensor. It is to be expected that it could be used for the analysis of other process variables. • Unlike many estimation methods based on least squares techniques, wavelet analysis is robust in its performance and does not require additional tuning once it has been properly installed. The wavelet compression algorithm has been shown to have the following advantages: • The method is very effective with compression ratios greater than 25:1 (up to 100:1) and with the error of estimation, after decompression, remaining lower than the corresponding error for an exponential filter based algorithm without compression. • Data compression is an integral part of the estimator procedure, and therefore requires very little additional computation time. • The algorithm provides an effective tool for the visualization of production variations over large time periods. It follows that it can be used to search for the possible causes of disturbances and faults. • The algorithm can be adjusted to provide a minimum error or a constant compression ratio, allowing it to be tailored to particular applications. • Data compression will become increasingly important with the introduction of new, high speed sensors that generate large volumes of data. A related advantage is the ability to reduce time for data transfers over the local area networks used in modern machines. 76 Chapter 5: Conclusion There are many possible extensions to the research presented here. Steps that might be taken to improve the effectiveness of the algorithms are described in the following: • There is a need for better handling of the edge effects for the given DWT algorithms. • For real time data analysis it is desirable that recursive versions of the algorithms replace the batch procedures. • An organized procedure for incorporating knowledge of the process and noise models into the estimation procedures is needed. As the theory associated with wavelet decomposition advances, there is a need to evaluate new approaches. For example, there is a case for exploring new wavelet techniques such as wavelet packet analysis [38] and second generation wavelets [35]. • It is likely the use of wavelet estimation algorithms in automatic control applications wil l allow sensor and actuator spatial and temporal bandwidths to be matched. • The existing threshold estimators that were used and play a critical role in data compression and noise elimination can still be improved. In some cases (for example the SURE technique), the results were not as expected and there is a need for further development of this part of the method. 77 Chapter 5: Conclusion References [1] J. Buckheit, S. Chen, D. Donoho, I. Johnstone, and J. Scargle. Wavelab 0.655, a library of matlab routines for wavelet analysis, ftp://playfair.stanford.edu/pub/wavelab, 1995. [2] S.-C. Chen. Kalman filtering applied to sheet measurement. In 7th American Control Conference, pages 643-647, Atlanta, Georgia, 1988. [3] A . Cohen, I. Daubechies, and J. C. Feauveau. Biorthogonal bases of compactly supported wavelets. Comm. Pure Applied Math., 1992. [4] E. Dahlin. 
Computational methods of a dedicated computer system for measurement and control on paper machines. In 24th Engineering Conference, TAPPI, pages 62.1—62.42, San Francisco, California, Sep 1969. [5] I. Daubechies. Orthonormal bases of compactly supported wavelets. Comm. Pure Applied Math., XLI(41):909-996, Nov. 1988. [6] I. Daubechies. Orthonormal bases of wavelets with finite support - connection with discrete filters. In J. M . Combes, A . Grossman, and P. Tchamitchian, editors, Wavelets, Time- Frequency Methods and Phase Space, pages 38-66. Springer-Verlag, Berlin, 1989. Proceedings of International Colloquium on Wavelets and Applications, Marseille, France, December 1987. [7] I. Daubechies. The wavelet transform, time-frequency localization and signal analysis, Sept. 1990. [8] I. Daubechies. Ten Lectures on Wavelets. S IAM, Philadelphia, PA, 1992. Notes from the 1990 CBMS-NSF Conference on Wavelets and Applications at Lowell, M A . [9] I. Daubechies. Orthonormal bases of compactly supported wavelets II. Variation on a theme. SIAM Journal of Mathematical Analysis, 24(2):499-519, March 1993. [10] D. L . Donoho. Nonlinear wavelet methods for recovery of signals, densities, and spectra from indirect and noisy data. In Proceedings of Symposia in Applied Mathematics, volume 00, pages 173—205. American Mathematical Society, 1993. 78 Chapter 5: Conclusion [11] D. L . Donoho. De-noising by soft-thresholding. IEEE Trans. Inform. Theory, 41(3):613—627, May 1995. [12] D. L . Donoho and I. M . Johnstone. Minimax estimation via wavelet shrinkage. Technical Report 402, Stanford University, Department of Statistics, July 1992. [13] D. L . Donoho and I. M . Johnstone. Adaptating to unknown smoothness via wavelet shrinkage. J. Am. Stat. Ass., 1993. To appear, Also Tech. Report 425, Department of Statistics, Stanford University, June, 1993. [14] D. L . Donoho and I. M . Johnstone. Ideal spatial adaptation via wavelet shrinkage. Biometrika, 81:425-455, 1994. Also Tech. Report 400, Department of Statistics, Stanford University, July, 1992. [15] G. A . Dumont, M . S. Davies, K . Natarajan, C. Lindeborg, F. Ordubadi, Y . Fu, K . Kristinsson, and A . Jonsson. An improved algorithm for estimating paper machine moisture profiles using scanned data. In 30th IEEE Conference on Decision and Control, Brighton, England, Dec 1991. [16] R. A . Gopinath, H . Guo, R. Hindman, M . Lang, J. E. Odegard, and D. Wei. Rice university wavelet toolbox, version 2.01. http://www-dsp.rice.edu/software/RWT, Apr 1994. [17] A . Haar. Zur theorie der orthogonalen funktionensysteme. Mathematische Annalen, 69:331— 371, 1910. Also in PhD thesis. [18] M . Hilton, B . Jawerth, and A . Sengupta. Compressing still and moving images with wavelets. Multimedia Systems, 3(2), 1994. To appear. [19] A . K . Jain. Fundamentals of digital image processing. Prentice Hall, Englewood Cliffs, NJ, 1989. [20] B . Jawerth and W. Sweldens. An overview of wavelet based multiresolution analyses. Technical Report 1993:1, Industrial Mathematics Initiative, Department of Mathematics, University of South Carolina, 1993. Submitted to SLAM Review. [21] I. M . Johnstone and B. W. Silverman. Wavelet threshold estimators for data with correlated noise. Technical report, University of Bristol, U K , Statistics Department, Sept. 1994. [22] I. M . Jonsson. Estimation and identification of moisture content in paper. Master's thesis, The University of British Columbia, Aug. 1991. Chapter 5: Conclusion [23] M . Lang, H . Guo, J. E. Odegard, C. S; Burrus, and R. O. Wells, Jr. 
Noise reduction using an undecimated discrete wavelet transform. In To appear in IEEE Signal Processing Letters, 1995. [24] P.-G. Lemarie and Y . Meyer. Ondelettes et bases hilbertiennes. Rev. Mat.Tberoamericana, 2:1-18, 1986. [25] C. Lindeborg. A nonlinear algorithm for the estimation of moisture variation characteristics in the paper process. In Proc. of 17th International Conference BIAS-81, volume 3, pages 139-158, Milan, Italy, Oct 1981. [26] S. Mallat. A theory for multiresolution signal decomposition: The wavelet representation. IEEE Trans, on Pattern and Machine Intelligence, 11(7):674-693, 1989. [27] D. Marr. Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. W. H . Freeman, 1982. [28] Y . Meyer. Ondelettes, functions splines et analyses graduees. Lectures at Univ. of Torino, Italy, 1986. [29] Y . Meyer. Ondelettes. filtres miroirs en quadrature et traitement numerique de l'image. Technical report, Cahiers de Mathematiques de la Decision, Paris, France, 1989. Text from a conference on 22 February 1989 at the University of Autonooma of Madrid. [30] S. T. Morgan. Estimation and identification for machine direction control of basis weight and moisture. Master's thesis, The University of British Columbia, June 1994. [31] G. P. Nason. Wavelet regression by cross-validation. Technical Report 447, Stanford University, Statistics Department, Mar. 1994. [32] K . Natarajan, G. A . Dumont, and M . S. Davies. An algorithm for estimating cross and machine direction moisture profiles for paper machines. In IFFAC/FORS Symposium, pages 27—31, Beijing, PRC, Aug 1988. [33] W. K . Pratt. Digital Image processing. Wiley, New York, 1978. [34] O. Rioul and M . Vetterli. Wavelet and signal processing. IEEE Signal Processing Magazine, 8(4):14-38, Oct. 1991. 80 [35] W. Sweldens. The lifting scheme: a construction of second generation wavelets. Technical Report 1995:6, Industrial Mathematics Initiative, Department of Mathematics, University of South Carolina, 1995. [36] B . F. Taylor. Optimum separation of machine-direction and cross-direction product variations. Tappi Journal, pages 87-92, Feb 1994. [37] X . G. Wang, G. A . Dumont, and M . S. Davies. Modelling and identification of basis weight variations in paper machines. IEEE Transactions on Control System Technologies, June 1993. [38] M . V . Wickerhauser. Smooth localized orthonormal bases. CRAS, 315, 1993. 81 Appendix A Exponential Multiple Scan Trending (EXPO) Algorithm The equations describing the algorithm (Dahlin [1]) are: sn(m) = p{Dn(m) - Y(m)) + (1 - p)sn(m - 1) (A.1) where: Dn(m) is the measured value (moisture content or basis weight) at scan m and CD position n; Y(m) is the mean value of scan m; N is the total number of samples (data boxes) in a CD; sn(m) is the filtered CD profile of scan m at data box n; and p is the exponential filter weighting factor, 0 < p < 1; The weighting factor is usually chosen from the region p < 0.3. This eliminates the M D variations from the CD profiles but also slows down the CD profile estimation. 82 Appendix B Estimation and Identification of Basis Weight and Moisture Content (EIBMC) Algorithm B.l Process Models Process model used to describe the moisture variation in the ELBMC algorithm is based on Lindeborg's [3] moisture variation model (MVM). It has been shown [6] that the same model, with some modifications, can be applied to basis weight. 
In this thesis only the necessary equations, describing the process models and the ELBMC algorithm are given, For the detailed discussion one can refer to the original papers [3, 6, 2, 4]. Machine direction wet-end " reel Cross direction k+1 k-1 Figure B . l : Data indexing B.l.l Moisture The M V M for dry-end variations is Vk=Pk + £ + BPk>k + vk (B.l) where n is data box number (CD-position); k is sample number or time instant (kT); yl is the difference between measured value at C D position n and time k, and the variable set point; B is a measure of the non-linearity between the CD and M D variations; 83 pi is pseudo-profile at position n and time k; ut is M D value at time k; vk ~ N(0, R) is the measurement noise and model mismatch; It is assumed that P k = {p\,v\, • • • ,Pk] a n d B change slowly [3]. The first order model of the M D variations is given [5] as Uk = u + & (B.2) where u is the mean moisture content in the M D ; £jk is a zero mean stochastic process given by: & = a£k-i +Wk, Wk ~ N(0, q) The parameter a and the variance q depend on the paper machine. The model in state space form is given by xk+i = Axk + Wk Vk=Pk+ C%xk + vk (B.3) where 11 ' "1 0" Xk A = h . 0 a. 0 (B.4) (B.5) Cl = [l \\{l + BVl) B.1.2 Basis Weight It has been shown [6] that the previously described moisture model ( B . l . l ) can be modified to accommodate basis weight estimation by increasing the order of the disturbance (second order A R M A ) , setting B=0 and absorbing the state u into the A R M A model. The A R M A model is given as ( l - aiq'1 - a2q~2)uk = ( l + biq'1 4- hq~2)wk (B.6) Thus, the state space model for the basis weight becomes: xk+i = Axk + Wk Uk = Cxk + Wk Vk=Pk+Uk + vk (B.7) 84 where A a\ 1 a2 0 C = [l 0] Wk B.2 The E I B M C Algorithm B.2.1 Moisture The moisture version of the ELBMC algorithm is (B.8) where Xk\k-i = Axk_i\k_i Uk\k-i = I1 l]^fc|fc-i ipk = 1 + Buk\k-l Vk\k-i = Pk-i + Cfc-i*fc|fc-i V " = -Vn - + p-s(vny avnrk(ynk-ynklk^) Vl = pg- i + i + V » = ( l + B p j ) [1 1] Sfc-i(C^)T " • q ? E f c _ 1 ( q ? ) T - r J J Wfc|fc = [1 1 ]xk\k Q = E{WkWkr] = [91 0 .0 <?J The equations are given in the order of their execution. (B.9) (B.10) ( B . l l ) (B..12) (B.13) (B.14) (B.15) (B.16) (B.17) (B.18) (B.19) (B.20) 85 B.2.2 Basis Weight The basis weight version of the EIBMC algorithm is where Xk\k-i = Axk-\\k-i uk\k-\=Cxk\k_1 Vk\k-1 = Vk-\ + Uk\k-\ V n = \yn _ Q ( y " ) 2 + 0 _ fitynf A 1 + Vn . • ' Vk — Pk-\ + _ Ki 1 + vn C T f c _ i C T + .R Xk\k = Xk\k-i + K*k {Vk ~ {Pk + Cxk\k_i)) Uk\k = Cxk Q = E{WkWkT) = The equations are given in the order of their execution. tk\k b\ hb2 hh b\ (B.21) (B.22) 03.23) (B.24) (B.25) (B.26) (B.27) (B.28) (B.29) (B.30) 86 References [1] E. Dahlin. Computational methods of a dedicated computer system for measurement and control on paper machines. In 24th Engineering Conference, TAPPI, pages 62.1—62.42, San Francisco, California, Sep 1969. [2] G. A . Dumont, M . S. Davies, K . Natarajan, C. Lindeborg, F. Ordubadi, Y . Fu, K . Kristinsson, and A . Jonsson. An improved algorithm for estimating paper machine moisture profiles using scanned data. In 30th IEEE Conference on Decision and Control, Brighton, England, Dec 1991. [3] C. Lindeborg. A process model of moisture variations. Pulp and Paper Canada, 4:T142—147, 1993. [4] S. T. Morgan. Estimation and identification for machine direction control of basis weight and moisture. Master's thesis, The University of British Columbia, June 1994. [5] K . Natarajan, G. A . Dumont, and M . S. Davies. 
An algorithm for estimating cross and machine direction moisture profiles for paper machines. In IFAC/IFORS Symposium, pages 27-31, Beijing, PRC, Aug 1988. [6] X. G. Wang, G. A. Dumont, and M. S. Davies. Modelling and identification of basis weight variations in paper machines. IEEE Transactions on Control Systems Technology, June 1993.

Appendix C
Wavelet Filter Coefficients

Haar
0.70710678118655
0.70710678118655

Daubechies(4)
0.48296291314453
0.83651630373781
0.22414386804201
-0.12940952255126

Daubechies(6)
0.33267055295008
0.80689150931109
0.45987750211849
-0.13501102001025
-0.08544127388203
0.03522629188571

Symmlet(6)
0.01540410932734
0.00349071208433
-0.11799011114841
-0.04831174258600
0.49105594192764
0.78764114102879
0.33792942172824
-0.07263752278660
-0.02106029251270
0.04472490177075
0.00176771186440
-0.00780070832477

Symmlet(8)
0.00188995033290
-0.00030292051455
-0.01495225833679
0.00380875201406
0.04913717967348
-0.02721902991681
-0.05194583810788
0.36444189483596
0.77718575169975
0.48135965125920
-0.06127335906791
-0.14329423835105
0.00760748732528
0.03169508781035
-0.00054213233164
-0.00338241595136

Coiflet(2)
0.01638733646360
-0.04146493678196
-0.06737255472228
0.38611006682299
0.81272363544940
0.41700518442367
-0.07648859907867
-0.05943441864674
0.02368017194645
0.00561143481942
-0.00182320887071
-0.00072054944537

Coiflet(3)
-0.00379351286449
0.00778259642733
0.02345269614184
-0.06577191128186
-0.06112339000267
0.40517690240961
0.79377722262562
0.42848347637762
-0.07179982161931
-0.08230192710689
0.03455502757306
0.01588054486362
-0.00900797613666
-0.00257451768875
0.00111751877089
0.00046621696011
-0.00007098330314
-0.00003459977284
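As a quick sanity check, the filters listed above should satisfy the usual conditions for an orthonormal scaling filter: coefficients summing to sqrt(2), unit energy, and orthogonality of even shifts. The fragment below (Python, for illustration only) verifies this for the Daubechies(4) filter from the table; it is a consistency check on the tabulated values, not part of the thesis's software.

```python
import numpy as np

# Daubechies(4) scaling (low-pass) filter, as tabulated in Appendix C.
DB4 = np.array([0.48296291314453, 0.83651630373781,
                0.22414386804201, -0.12940952255126])

def is_orthonormal_scaling_filter(h, tol=1e-6):
    """Checks sum(h) = sqrt(2), ||h|| = 1 and <h, h shifted by 2k> = 0 for k > 0."""
    ok = abs(h.sum() - np.sqrt(2.0)) < tol
    ok &= abs(np.dot(h, h) - 1.0) < tol
    for k in range(1, len(h) // 2):
        ok &= abs(np.dot(h[2 * k:], h[: len(h) - 2 * k])) < tol
    return bool(ok)

print(is_orthonormal_scaling_filter(DB4))   # expected: True
```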
