Wavelet Packets-based Digital Watermarking for Image Authentication

by Alexandre Paquet
B.Eng., Universite Laval, 2000

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Applied Science in THE FACULTY OF GRADUATE STUDIES (Department of Electrical & Computer Engineering)

We accept this thesis as conforming to the required standard

The University of British Columbia
July 2002
© Alexandre Paquet, 2002

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Electrical & Computer Engineering
The University of British Columbia
Vancouver, Canada

Abstract

The rapid expansion of the Internet and the overall development of digital technologies in the past years have sharply increased the availability of digital media. Digital contents can be reproduced without loss of quality, but they may also be modified easily and, sometimes, imperceptibly. In many contexts, any alteration of image, video or audio data must be detected. Security systems are therefore needed to protect the content of digital data. Watermarking is accepted as a plausible candidate for such an application, as it allows the invisible insertion of information in a host through its imperceptible modification.

This thesis is concerned with the protection of the information contained in digital images. A novel, semi-fragile watermarking technique for the authentication of images is developed. Image protection is achieved by the insertion of a secret author's identification key in an image's wavelet packet (WP) decomposition. The binary key is embedded by rounding the mean of selected regions of WP coefficients. To take maximum advantage of the host image's characteristics in the embedding process, an optimal quantization protocol is formulated. The image's verification is done without the use of the original unmarked image. The detection of unauthorized frequency or spatial tampering with the image is performed by a combined interband/intraband verification protocol. This new technique can detect malicious tampering with images, yet remains unaffected by high-quality JPEG compression.
Contents

Abstract
Contents
List of Tables
List of Figures
List of Abbreviations
Glossary
Acknowledgements

1 Introduction
  1.1 Digital Watermarking
  1.2 Problem Definition
  1.3 Organization of the Thesis

2 Wavelet Analysis
  2.1 Introduction
  2.2 Historical Perspective
  2.3 Wavelets and Multiresolution Analysis
    2.3.1 Series Expansion
    2.3.2 Filter Banks
    2.3.3 Multiresolution and Wavelet Theory
  2.4 Wavelet Packet Analysis
  2.5 Multidimensional Signals
  2.6 Summary

3 Digital Watermarking
  3.1 Introduction
    3.1.1 Definition of Watermarking
  3.2 Historical Perspective
  3.3 Background on Watermarking
    3.3.1 Host Media for Watermarking
    3.3.2 Applications of Watermarking
    3.3.3 Requirements of Watermarking Systems
    3.3.4 Embedding Domains and Decoding Procedures
  3.4 Watermarking for Copyright Protection
  3.5 Summary

4 Image Authentication
  4.1 Introduction
  4.2 Approaches to Authentication
  4.3 Requirements of Authentication Schemes
  4.4 Previous Work
    4.4.1 Fragile Watermarking in the Spatial Domain
    4.4.2 Fragile Watermarking in Transform Domains
  4.5 Our WP-Based Image Authentication
    4.5.1 Embedding Process
    4.5.2 Optimal Quantization Step
    4.5.3 Watermark Decoding Process
  4.6 Summary

5 Experimental Results
  5.1 Introduction
  5.2 Embedding, Decoding and Visibility
  5.3 Tampering Detection
  5.4 Comparison with Eikonamark
    5.4.1 Image Quality and Tampering Detection
    5.4.2 Resistance to Collage Attacks
    5.4.3 Summary of Comparisons
  5.5 Robustness to JPEG Compression
    5.5.1 Predistortion in the Spatial and Wavelet Domains
  5.6 Summary

6 Conclusions and Future Research
  6.1 Overview
  6.2 Digital Watermarking and Content Authentication
    6.2.1 Our Wavelet Packets-Based Authentication Scheme
    6.2.2 Review of Results
  6.3 Future Research
  6.4 Closing Remarks

Bibliography

Appendix A
  A.1 Fourier Analysis
  A.2 Orthonormality of Haar Basis
  A.3 Conditions of Filters H_i(z) and F_i(z)
  A.4 Definition of Multiresolution
  A.5 Steps towards Multiresolution
  A.6 Erasable Watermarking

Appendix B

List of Tables

4.1 Optimum Step Sizes for Laplacian Distribution with σ² = 1 (from [65])
5.1 Average PSNR for Different Wavelet Functions
5.2 Average Detection Rate for Different Wavelet Functions

List of Figures

2.1 Haar Scaling (φ) and Wavelet (ψ) Functions
2.2 Meyer Scaling (φ) and Wavelet (ψ) Functions
2.3 Daubechies-4 Scaling (φ) and Wavelet (ψ) Functions
2.4 Multirate Quadrature Mirror Filter Bank
2.5 Two-Channel QMF Bank
2.6 Two Levels of Wavelet Decomposition using Filter Bank Representation
2.7 Two Levels of Wavelet Recomposition using Filter Bank Representation
2.8 Ideal Spectrum Division from Wavelet Decomposition
2.9 Frequency Tiling for Fourier and Wavelet Transforms
2.10 Two Levels of Wavelet Packet Decomposition using Filter Bank Representation
2.11 Two Levels of Wavelet Packet Recomposition using Filter Bank Representation
2.12 Ideal Spectrum Division from Wavelet Packet Decomposition
3.1 Publications on Digital Watermarking per Year
3.2 Our Generic Classification of Digital Watermarking Systems
3.3 Watermarking as Communications
3.4 Watermark Embedder with Perceptual Model
3.5 Complete Watermarking Scheme
4.1 Quantization Scheme used in [39]
4.2 Our Classification of Image Authentication Techniques
4.3 Two Levels of Daubechies 12 and Coiflets 30 Wavelet Packet Decomposition and Associated Original Images
4.4 Coefficients Selection Approach (step 4)
4.5 Selected Coefficients in the WP (Coiflets 12) Domain
4.6 Spatial (a) and Frequency (b) Mapping of Selected Coefficients
4.7 Embedding Scheme Developed
4.8 Input/Output Relation in the Quantization Process
4.9 Probability Density Function of WP Coefficients: (a) Coiflets 30 and (b) Daubechies 12
4.10 Intraband/Interband Verification Scheme (steps 6 and 7)
4.11 Decoding Scheme Developed
5.1 Test Images
5.2 Test Images
5.3 Original Barbara Image
5.4 Watermarked Barbara Image (Coiflets 12 with PSNR = 41.76 dB)
5.5 Watermarked Barbara Image (Coiflets 24 with PSNR = 41.88 dB)
5.6 Watermarked Barbara Image (Daubechies 12 with PSNR = 42.72 dB)
5.7 Original Airplane Image
5.8 Watermarked Airplane Image (Coiflets 24, PSNR = 43.15 dB)
5.9 Difference between the Original and the Watermarked Airplane Images (the grayscale has been magnified for visualization, black regions referring to large differences)
5.10 PSNR Values for Different Embedding Keys
5.11 Detection Rates Achieved for Authentic Images
5.12 Tampered Watermarked Barbara Images (bookshelf added to the right of the existing one) with the Detection Results using Coiflets 12 (a,b), Coiflets 24 (c,d) and Daubechies 12 (e,f)
5.13 Compressed (3:1) Tampered Watermarked Image (Coiflets 12) and Detection of Spatial Tampering
5.14 Original (a), Watermarked (b) and LP (Watermarked) Baboon Images (c). Frequency Spectra of the Watermarked (d) and LPF Images (e). Frequency Detection of Tampering with our WP-based Approach (f)
5.15 Original Barbara Image
5.16 Eikonamarked Barbara Image (PSNR = 38.51 dB)
5.17 Original Cameraman Image
5.18 Eikonamarked Cameraman Image (PSNR = 38.37 dB)
5.19 Tampered Eikonamarked Barbara Image and Detection with Eikonamark
5.20 Compressed (3:1) Eikonamarked Barbara Image and Authenticity Detection
5.21 Eikonamarked Barbara (a) and Cameraman (b) Images with the Mixed Version (c), notice the disappearance of books from the top left corner, and the Tampering Detection Result with Eikonamark (d)
5.22 Watermarked Barbara (a) and Cameraman (b) Images with the Mixed Version (c), and the Tampering Detection Result with our WP-based Approach (d)
5.23 WP Regions of 2-level Lena Image Decomposition with Coiflets 24 and Daubechies 16 that are Unaltered by JPEG Compression (QF=85)
5.24 Overcompensation in the WP Domain
5.25 Recursive Embedding Scheme
B.1 Proportion of each Media used for Digital Watermarking
B.2 Different Applications of Digital Watermarking
B.3 Embedding Domain used for Digital Watermarking
B.4 Decoding/Detection Procedure used for Digital Watermarking
B.5 Discrete Filters used in our Implementation
B.6 Discrete Filters used in our Implementation
B.7 PSNR Values for Different Embedding Keys
B.8 Detection Rates Achieved for Authentic Images

List of Abbreviations

DCT   Discrete Cosine Transform
DVD   Digital Versatile Disk
DW    Digital Watermarking
DWT   Discrete Wavelet Transform
FB    Filter Bank
FT    Fourier Transform
HVS   Human Visual System
JPEG  Joint Photographic Experts Group
LP    Low Pass
MPEG  Moving Picture Experts Group
MSQE  Mean Square Quantization Error
QF    Quality Factor
QMF   Quadrature Mirror Filter
WFA   Windowed Fourier Analysis
WP    Wavelet Packets
WPC   Wavelet Packet Coefficients
WT    Wavelet Transform

Glossary

Cover Work: Media content that is to be watermarked. Synonym: Host.
Decomposition: Fragmentation of a signal into a known arrangement of frequency bands by the subsequent application of an analysis filter bank or by the repetitive dilation of a wavelet basis. Synonym: Wavelet Decomposition.
Decomposition Level: State of the wavelet decomposition in terms of the stage in a filter bank tree.
Frequency Band: Intervals of given width in the frequency spectrum. Synonym: Band.
Level of Detail: Group of wavelet coefficients belonging to the same frequency band and forming a category of details in the signal reconstruction.
Resolution: Number of levels applied to an original signal in its wavelet decomposition. Synonym: Scale.
Subband: Specific region of a frequency band resulting from a signal decomposition.
Telltaling: Used in the context of content authentication to identify the specific tampering process applied to an original host.

Acknowledgements

I would like to acknowledge the precious contribution of several people who have, knowingly or not, made the present thesis possible. My utmost debt of gratitude is to my supervisor, Professor Rabab K. Ward, for her continuous and kind support throughout my master's degree. She was always generous with her time and provided me with invaluable guidance in technical and professional matters. She has been particularly important in the different writing stages and in the preparation of presentations, always providing me with advice imprinted with wisdom earned from years in the field of image processing. I am also thankful to a fellow member of the image processing laboratory, Mehran Azimi, for his help on practical implications of our field of study, and to Professor Ioannis Pitas, who provided me with insightful comments on my research. In addition, I want to express my gratitude to Professor Saif Zahir, who directed the early stage of my work and who gave me the organizational skills needed for its completion. I would also like to thank Dr. Hoss Ahmadi, who was the first to introduce me to the world of wavelets. Through fruitful discussions on the subject, he is the one who, ultimately, got me interested in the field of digital watermarking. Finally, I have to thank the Natural Sciences and Engineering Research Council of Canada for its financial support, without which the completion of this degree would have been much more difficult. To all, thank you.

ALEXANDRE PAQUET
The University of British Columbia
July 2002

"There is no such thing as a long piece of work, except one that you dare not start."
- Charles Baudelaire, writer and poet (1821-1867)

Chapter 1

Introduction

"There is a single light of science, and to brighten it anywhere is to brighten it everywhere."
- Isaac Asimov, writer (1920-1992)

1.1 Digital Watermarking

In the past decade, the advent of digital cameras, both photographic and video, as well as of CD-ROMs and DVDs, has eased the creation, storage and visualization of digital multimedia. In addition, the development of faster computers, combined with increased storage capacities and transmission speeds, facilitates the overall utilization of digital technologies. This has created a real explosion in the use of digital data, and at least in terms of entertainment and media, we are now clearly living in a digital world.

Digital contents show great advantages in terms of storage and processing. Furthermore, they can be reproduced without loss of quality, and allow for easy and imperceptible modifications. This permits the wide distribution of high quality music and video contents, and the production of incredibly real visual animations by the film industry. However, it also brings some problems: intellectual properties are harder to protect, and so are original contents. Therefore, new practices must be developed in order to enhance the characteristics, to guard intellectual properties and to secure the content of digital data.

Digital watermarking is a relatively new technology that embeds hidden information in image, music, video or audio data by their imperceptible modification. It differs from cryptography in that watermarking conceals the existence of secret information, while cryptography protects its content. Although insertion procedures are designed so that humans do not notice the marks inserted, computer programs can be created to extract the original marks easily. Afterwards, the marks can be used for copyright protection, broadcast monitoring, or even, in relation with an embedding pattern, for content authentication purposes.

Due to watermarking's wide range of applications and high potential, this sub-discipline of communication security has attracted a lot of interest in the last eight years. It has now evolved into an established candidate for copyright protection, ownership identification and fingerprinting systems. Moreover, several commercial applications of watermarking for copy control devices are planned, or are already implemented. For all these contexts, a lot of effort is dedicated to the development of robust watermarking schemes that permanently mark the works. On the other hand, the use of fragile embedding schemes, ones where the embedded key, that is, the mark, is destroyed by the modification of the work, is much less investigated. Nevertheless, this kind of system shows great promise for content authentication as it allows for the validation of digital data, thus giving it legal value. As digital media are now widely employed and commonly accepted as official documents, the protection of their informative content will grow as an important issue, as with the protection of intellectual property in past years.

1.2 Problem Definition

In many applications, such as courtroom evidence and video security systems, any modification of image, video or audio data must be detected if it cannot be prevented. As digital images are widely available, online or elsewhere, and because they are so easy to modify, some work needs to be done to protect the information they contain.
As the number of images increases, the direct storage of unique reference patterns becomes impractical. Moreover, as some images need to be slightly compressed in order to be efficiently stored, authentication systems need to offer flexibility. Unfortunately, many of the approaches previously proposed lack this characteristic, while others require too much user interaction to be truly considered secure for commercial appli- cations. In this dissertation, we introduce a novel technique for the content au- thentication of digital images. The new approach is able to detect, as well as 3 localize, malicious image alterations, while offering robustness to high quality image compression. Our method is based on semi-fragile watermarking tech- nology. It uses the knowledge of characteristics of the human visual system to round discrete wavelet packet coefficients from an images' decomposition to optimal quantization levels. Tampering detection is performed using intra and inter frequency band verifications. Combined together, they allow for the de- tection of possible alterations done either in the frequency or spatial domain, while rejecting low-level perturbation resulting from storage operations. 1.3 Organization of the Thesis The goal of the thesis is to find a watermarking method that can detect, as well as localize, tampering in digital images. We first review earlier work on digital watermarking, and then develop a semi-fragile watermarking scheme for image authentication. In order to make the thesis complete, we give an overview of image processing techniques used, and a more extensive background on watermarking technologies. The thesis is organized as follows. In Chapter 2, we go over the basis of wavelets. We first give the histori- cal background and explain how wavelets came to be. The importance of mul- tiresolution concepts for wavelets is then demonstrated. Starting from series expansion principles, and using quadrature mirror filter banks implementa- tions, we explain how wavelets can be used for multiresolution decomposition of one-dimensional signals. Wavelet packets, a particular kind of wavelet de- composition that separates signals in symmetrical levels of detail, and that is 4 used in our watermarking system, are then described. Finally, we extend the use of wavelet and wavelet packets to multidimensional signals using separable transform concepts. In Chapter 3, the bases of watermarking technologies are given. In the first section, we draw a portrait of the concepts leading to the use of water- marking for digital media by retracing its history. Then, we give a background on watermarking by explaining the general concepts. In that sense, generic classifications are given based on the host media, the application intended as well as the embedding domain, and the decoding procedure used. Specific requirements of different systems are also listed. To conclude, we review cor- nerstone papers on digital watermarking that have first laid the conceptual bases of the technology. In the following chapter, specificities of content authentication systems are described, and our WP-based technique is introduced. To start, we come up with a classification of authentication approaches considered. From this, we draw the requirements that such systems should fulfill in order to be effective and efficient. Then, we detail specific methods introduced that have served as bases in the development of our own system. 
We emphasize two different families of embedding protocols: those acting in the spatial domains, and the others, acting in some transform domain. The pros and cons of each are highlighted through the examination of published work. Finally, a novel image authentication approach based on the quantization of wavelet packet coefficients is introduced in the last section of Chapter 4. 5 Afterwards, we present experimental results in Chapter 5. We first confirm the invisibility of the embedded marks, as well as the authentication capabilities of our system. Then, we prove its ability to detect and localize tampering, both in space and frequency, even in the presence of compression. Afterwards, in order to have more objective evaluation criteria, we compare our system with a commercially available watermarking tool in terms of tam- pering localization aptitude and resistance to attacks, and demonstrate that our system outperforms the commercial software. Finally, different strate- gies meant to increase the robustness of the previously presented system are examined in Section 5.5. To finish, a summary of our work and the major results obtained, as well as future possible research work in the field of digital watermarking for image authentication, are presented in Chapter 6. 6 Chapter 2 Wavelet Analysis "The problems that exist in the world today cannot be solved by the level of thinking that created them." -Albert Einstein, physicist (1879-1955) 2.1 I n t r o d u c t i o n Although the average person probably knows very little about wavelets, their impact on today's technological world is phenomenal. They represent a very powerful mathematical tool commonly used by scientists and engineers, and are currently applied in fields such as signal processing, computer vision and data compression. Several new applications of wavelets are discovered every year and will continue to be in the future. The purpose of this chapter is to provide a solid understanding of wavelet transforms. Since we wish to make the present thesis readable, we avoid going too deep into the mathematical details of the approaches developed 7 throughout the years. We first highlight the cornerstones leading to modern wavelet theory. Then, in Section 2.3 we present the theoretical fundamentals of wavelet analysis for multiresolution decomposition. Finally, we introduce the concepts of wavelet packet decomposition used in our watermarking system before we extend the use of wavelets to two-dimensional signals. 2.2 Historical Perspective The first known step toward the development of a unified wavelet theory oc- curred when a Hungarian mathematician named Alfred Haar completed his work on the orthogonal systems of functions. In 1910, he proposed the use of piecewise constant functions to form an orthogonal basis. His system uses a basis function (now referred to as the scaling function (p) as a starting point. Then, the mother's (ip), daughters', sons', granddaughters' and grandsons' (and so on) functions are obtained by the subsequent scaling and translation of the basis, or father wavelet. Haar proved that the obtained set of functions can be used to represent a signal at different levels of detail [31]. Further- more, he demonstrated that a decomposed signal can be reconstructed using the reverse operations. Although it was not called "wavelets" back then, the simplest of the wavelet families was nonetheless born, and is now named the Haar wavelet. 
Half a century later, some interest was devoted to the study of Win- dowed Fourier Analysis (WFA) in order to achieve both spatial and frequency localization in signal decomposition. As it decomposes a signal into a sum of 8 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 Figure 2.1: Haar Scaling (</>) and Wavelet (ip) Functions 9 sines and cosines, which are both infinite in time, the standard Fourier de- composition lacks the time1 localization necessary for the accurate analysis of several real signals. The idea of WFA is to study the frequencies of a signal for time-limited windows. This allows for some time localization of the frequency characteristics of a given signal. WFA concept finally allowed the examination of things in terms of both time and frequency. Nonetheless, Haar remained the only example of a wavelet, and the next major advancements did not come until later in the 1980s. Jean Morlet and Alex Grossman teamed up in 1981. Together, they discovered that a signal could be transformed into wavelet form, and then synthesized back into the original signal without any loss of information. Then, in 1984, they were the first to use the term wavelet to describe their functions [30]. More specifically, they were called Wavelets of Constant Slope. Other researchers had used the term wavelets for different signal processing applications (see [61] for example) but Morlet and Grossman were the first to use it as it is now currently referred to, which is as follows: a wavelet is a unique function, limited in time and frequency, that can be translated and dilated to form multiresolution basis used to decompose a signal at different levels. In addition, their major contribution was the finding of a simple signal recomposition method from its wavelet coefficients. They also discovered an- 1Time and space will be used alternatively throughout this thesis as time is, in fact, but only one possible space representation. However, since a lot of concepts used have first been developed for time-dependant signals, we find it helpful to use the same notation. 10 other interesting thing that is now commonly used in wavelet-based coding: a small modification in the wavelet coefficients only causes a small change in the original signal. This might not have appeared to be especially meaning- ful at the time, but when considering that modern wavelet-based compression schemes quantize wavelet coefficients, if it had been otherwise, data compres- sion would be a much more difficult task today. The real breakthrough in wavelets analysis, however, happened in the late 1980's when a lot of papers now considered classic were published. Yves Meyer and Stephane Mallat were two important contributors to this newborn field. Investigating the use of wavelets in many different applied fields, they were amongst the first to develop the concept of multiresolution analysis for wavelets [49]. This was an important step for the advancement of research on wavelets. As a result, multiresolution is now an extensively used signal decomposition approach. Mallat and Meyer were the first to mention scaling functions of wavelets, which allow researchers and mathematicians to construct their own wavelets using established criteria [80]. Around the same time, a Belgian physicist named Ingrid Daubechies employed multiresolution analysis to create her own family of wavelets. 
Using construction methods related to filter banks, she introduced in [23] a family of compactly supported orthogonal wavelet systems with arbitrarily high, but fixed regularity. These wavelets offer a number of desirable properties (such as compact support, orthogonality, regularity, and continuity) that make them trully attractive2. This is why the Daubechies Wavelets are now some of the 2 More on this in the next section. 11 -5 0 5 -5 0 5 Figure 2.2: Meyer Scaling (</>) and Wavelet (ip) Functions 12 most common ones today. ft Db4: \|/ i i 1 ""0 2 4 6 0 2 4 6 Figure 2.3: Daubechies-4 Scaling (<j>) and Wavelet (ip) Functions Daubechies' work was probably the starting point of much focused re- search on wavelets that has lead to their acceptance as a modern mathematical tool and their wide use in sciences and engineering. Of course, many other researchers have contributed to the advancement of the field in the last decade, and several applications have been found. In particular, wavelet transforms prove to be extremely effective for image coding, and upcoming image com- pression standards-such as JPEG-2000-make use of them. From this, it is clear that wavelets are definitely a tool for the future, and this is why the knowledge 13 of their historical and theoretical bases is of great interest. 2.3 Wavelets and Multiresolution Analysis In her now classic book [24], Ingrid Daubechies defines the wavelet transform as the following: a tool that cuts up data or functions or operators into different frequency components and then studies each component with a resolution matched to its scale. The wavelet transform of a signal evolving in time depends on two variables: frequency and time. Therefore, these transforms provide an accurate tool for time-frequency localization. This is the most important factor that explains why wavelet transforms have already attracted so much attention. Extensive publications on the general theory of wavelets are found in [24, 69, 80], while [68] details their specific applications to image processing. We now briefly examine the concepts linked with series expansion, and subse- quently, the theory behind filter bank analysis. Finally, we introduce the idea of multiresolution and its application to wavelet decomposition. 2.3.1 Series Expansion The goal of series expansion is to represent a signal or function as a combina- tion of bases. Essentially, it means that we want to find a set of elementary signals {(pi}iez so that we can write an original signal x, as a linear combination 14 of the following basis: i where the expansion coefficients OJJ'S can be obtained by the computation of the inner product of the basis dual set {&} with the signal x as follows: <*i = '52<Pi[n]x[n] (2.2) n When the set {<pi} is orthonormal and complete3, we have an orthonor- mal basis and the basis, and its dual are the same, that is, <fi = <Pi. Therefore, it means that the expansion coefficients can be found directly with the basis coefficients, as follows: ai = '%2<pi[n]x[n] (2.3) n This principle is used in Fourier series decomposition in order to de- scribe periodic signals by the combination of harmonically related sinusoids. This gives a perception of a signal in the frequency domain, in terms of their frequency content (see A. l ) . Fourier series representation is among the most popular series expansion techniques. 
Two main reasons motivate its use: one, it is easily implemented; and two, it yields the smallest mean square error (MSE) in power between the signal x and its series representation [3]. How- ever, Fourier series are limited to periodic signals, a fact that prevents its use in several signal-processing applications. Nevertheless, the harmonic de- composition principle is used to compute the Fourier Transform (FT), which extracts the frequency content of a signal, periodic or not (Equations A.4 and 3 A set {<pi} is considered complete if all the signals x in the representation space S can be expanded as in Equation 2.2. 15 A.5). The FT allows us to go from the time domain to the frequency domain and back, without the loss of information4. From this, signal decomposition using series expansion (and associ- ated transforms) presents great advantages. It allows for the extraction of information-in the FT example about frequency content-that would not be available from the direct examination of the signal. The choice of the basis {cpi} used determines what characteristics are scrutinized in the analysis. The FT is a very powerful tool when we are interested in the frequency spectrum of a signal. However, it lacks the time localization needed in a lot of applications. This explains the amount of work done to develop other bases. In this context, it is important to note that wavelets are a specific kind of basis that allows for signal decomposition. In the next subsection, we introduce filter banks, a concept that simplifies the notions of multiresolution presented in Subsection 2.3.3. 2.3.2 Filter Banks A multirate filter bank is a set of (M) parallel filters having either the same input or output. When the filters are used to split a common input (x), it is referred to as an analysis bank. On the other hand, it is called a synthesis bank when it is used to recombine split inputs to form one common output (x or XJT). An important feature of multirate filter banks is that they split a signal into different frequency bands. Moreover, perfect reconstruction from the de- 4This holds only as long as the entire signal is known, a fact that can be highly prob- lematic for real-time applications. 16 x[n H«j(z) U -M H t M Fd(z) Hx(z) >!/ M 1- M HM.,(z) 4-M t M K_r[n] Analysis Bank Downsamplers Upsamplers Synthesis Bank Figure 2.4: Multirate Quadrature Mirror Filter bank composition is maintained as long as specific filter requirements are fulfilled. Therefore, filter banks (FB) play an important role in the study of wavelets. In fact, wavelet transforms can be constructed and easily implemented by the use of a particular class of filter bank systems developed by Croisier et al. in [22] and known as quadrature mirror filter (QMF) banks (Figure 2.4). Thorough design of QMF is outside the scope of our project, but it is important to have a general comprehension of filter banks in order to understand how wavelets allow the decomposition and the perfect reconstruction of signals. For that reason, we highlight the main steps leading to the construction of wavelets through the design of filter banks. Since wavelets rely on octave-based decomposition, we are interested only in two-channel filter banks, hence dropping the concept of multirate. However, the reader must understand that the concepts explained in this sec- tion can be generalized to multirate systems as well. Several books are pub- 17 lished on the subject, and we invite interested people to consult [80] for more theoretical information. 
In our development, we first apply the concepts de- veloped in 2.3.1 to Haar decomposition, and explain how a filter bank can be obtained with this particular basis. Then, we present the filter's requirements for the development of other bases. <P2k[n\ ^ for n = 2k, 2k + 1 0 otherwise ^ for n = 2k for n = 2k + 1 0 otherwise (2.4) <P2k[n] = <Po[n ~ 2A;] <p2fc+iN = <Pi[n - 2A;] (2-5) Haar first came up with the bases presented in Equation 2.4, and cor- responding to the scaling function (father wavelet) and mother wavelet re- spectively. This orthonormal basis (see A.2) has been shown to be suitable for wavelet decomposition as the basis functions are translates of each other (Equation 2.5). Particularly, those functions can easily apply to FB represen- tation. First, the transform X associated with the basis has to be defined as follows: X[2k] = (<p2k,x) = ^{x[2k]+x[2k + l\) (2.6) X[2k + 1] = ((pn+i, x) = ^{x[2k} - x[2k + 1]) 18 which allows the perfect reconstruction of the signal x[n] from the following: *[n] = £ X [ % f c [ n ] (2.7) keZ x[n HrfCz) 4- 2 f 2 Hi(z) 4-2 1 I 1* 2 x_r[n] Figure 2.5: Two Channels QMF Bank Once we have an appropriate basis for signal decomposition, we need to get the proper filters to obtain the simplified Figure 2.5. From it, we have that yo[k] = ho[n] * x[n]\n=2k5 and yi[k] — hi[n] * x[n]\n=2k- Since the transform is already defined for the Haar basis as a function of X[2k] and X[2k + 1], we want y0[k] = X[2k] and yi[k] = X[2k + 1]. Fortunately, the extraction of the corresponding filters ho[n] and hi[n] is pretty straightforward from the following: X[2k] = h0[n] * x[n]\n=2k = £ h0[2k - l]x[l] = -^=x[2k]+ -]=x[2k + 1] (2.8) X[2Jfc+l] = / i i [n ]*x [n ] | „ = 2 j fc = £ > i [ 2 f t - J ] x m = -^=x[2k]—]=x[2k+l] (2.9) uz v 2 v 2 and so, the analysis filter bank is defined with the impulse response of the following associated filters: r T2 forn = - l , 0 0 otherwise h0[n] = { (2.10) 5First, the n = 2k results from the downsampling operation. Second, the filters are iden- tified Ho(z) and Hi(z) in the figure. Z-domain representation is used simply by convention. 19 hi[n] = { T2 f o r n = ° forn=-l (2-11) 0 otherwise It is clear that the impulse responses found are time-reversed versions of the basis functions (which correspond to the (Haar) scaling ((f)) and wavelet (ip) functions of Figure 2.1). This comes from the fact that convolution is an inner product with time reversal, giving the following: h0[n] = ip0[-n] and hi[n] = <pi[—n] (2-12) At this point, we are able to decompose the original signal x[n] in two half length signals yo[n] and yi[n]. However, this is only half the way since our goal is to reconstruct the signal x using synthesis filters /o[n] and fi[n] (see Figure 2.5). Keeping in mind that the downsampling operation has to be reversed (by upsampling) in order to restore the original length of the signal, and using the series expansion procedure given in 2.2, we directly have the following: x\n\ = Y yo[k]f2k[n] + y i[%2fc + i N (2-13) keZ keZ For QMF, 2.13 yields perfect reconstruction (x[n] = x[n]), and the y's are transforms of x using Haar's basis set (i.e. yo[k] = X[2k] and yi[k] = X[2k + 1]). 
Equation 2.13 thus becomes the following: x[n] - £ X[2% 2 f c[n] + ^ X[2k + l]y>2fc+i[n] (2.14) keZ kcZ Keeping in mind that the reconstruction operation includes upsampling and convolution, the previous equation (2.14) allows the extraction of the 20 following: <P2k[n] = fo[n - 2k] and (p2k+i[n] = fi[n - 2k\ (2.15) Therefore, the synthesis filter bank is defined by the following filters' impulse responses: / o N = <£oN and /i[n] = <px[n] (2-16) To sum up, we have shown that it is possible to use a simple basis in order to construct a complete two-channel filter bank, which allows decom- position and perfect restoration of the original signal. Furthermore, we have found that, for the Haar basis, Equations 2.12 and 2.16 describe the system completely. Those equations can be made more general in order to fit other systems. In fact, the two following generalizations6 (from [80]) are used as requirements for the definition of another set of basis {<̂ i}. 1. The impulse responses of the synthesis filters equal the first set of basis functions, as follows: fM = <pi[n] t = 0,1. (2-17) 2. The impulse response of the analysis filters are the time-reversed versions of the synthesis ones, as follows: lk[n] = fi[-n] z = 0,1. (2-18) 6Which, in fact, represent the first prerequisites of QMF. 21 Equations 2.17 and 2.18 can be relaxed and rewritten in the Z domain to form the perfect reconstruction condition on the following filters (see A.3): F0(z)H0(z) + F^H^z) = I (2.19) In retrospect, we have made the link between series expansion and filter bank analysis. From this, we are now ready to define more two-channel filter banks, and extend the principles to general multiresolution wavelet analysis. Nonetheless, one more aspect needs to be covered in order to have a better understanding of the physical implication of the signal decomposition. We need to know what the characteristics of the signals y0[k] and y\[k] are in relation to the original signal x[n\. Taking the Haar decomposition case again, we have the first intermediate signal as follows: y0[k] = X[2k] = -^=(x[2k] + x[2k + 1]) (2.20) v2 which corresponds to the output of an averaging-therefore low pass-filter. On the other hand, the second intermediate signal yi[k] is as follows: yi[k] = X[2k + 1] = 4=(x[2fc] - x[2k + 1]) (2.21) v2 The later gives the difference between two successive samples of the original signal x[n]. Therefore, it is a high pass filter. It means that the outputs of the analysis stage of a two channel filter bank are one, a low pass filtered version of the original signal and two, a high pass filtered version of the original signal. To summarize, two channel filter banks allow for the decomposition of a signal into two different subband signals at each stage: one that describes 22 the coarse behaviour of the original signal (low frequency), while the other contains the details of the signal (high frequency). Furthermore, it allows for the reconstruction of the original signal without the loss of information. As first Haar, then Morlet, Grossman et al. have noticed, this offers very promising capacities. In fact, these are exploited in the design and use of multiresolution systems, the subject of the following section. 2.3.3 Multiresolution and Wavelet Theory At this point, we want to use the concepts developed in 2.3.1 and 2.3.2 to extend the bases of wavelet analysis. Once again, our goal is not to go too deep in the theoretical details. 
A thorough mathematical approach to filter bank analysis and wavelet decomposition is available in [1]. The purpose of this section, and in fact of the entire chapter, is to make the present thesis self-sustained; that is, we want to be sure that the reader has a general com- prehension of wavelets and their applications in order to understand their use in our digital watermarking scheme. The first thing to be comfortable with is the concept of multiresolution. As seen in the previous section, a signal can be represented by a coarse ap- proximation y0, plus added details y\. Mallat [48] and Meyer [53] first showed that the detail signal y\[n\ is the difference between the original one (i.e. x[n]) and its coarse representation yo[n]. From this, the coarse and detail subspaces are orthogonal to each other. This permits the recursive application of suc- cessive decomposition at all resolutions, which results in the division of the 23 original signal in narrower levels of detail. A multiresolution analysis consists of a sequence of embedded closed subspace V*, such as the following7: ... C V2 C VI C Vo C V_i C V_2- (2-22) Next, we need to know what multiresolution means in terms of filter banks. From Equation 2.22 and the arguments around it, there is nothing to prevent the use of additional two-channel QMF banks to further decompose some or all of the subband signals j/j, and then again decompose the resulting signals. The idea can be applied recursively to form a filter bank with a tree structure. If only the approximation signal is decomposed into two subbands at any given resolution (using the same basic QMF and downsamplers), it yields an octave band filter bank. Furthermore, it has been shown that, if the analysis filters HQ(Z) and H\(z) satisfy certain regularity conditions [48], the filter bank representation can be used to compute a wavelet decomposition (and reconstruction) of the original signal x[n]. H<j(z) 4-2 V H 0(z) 4-2 Hi(z) 4-2 H,(z) 4-2 Figure 2.6: Two Levels of Wavelet Decomposition using Filter Bank Repre- sentation The multiresolution decomposition is described in terms of subspaces Vj and Wj, which relate to the intermediate signals yo and y\, as seen in 7 A formal definition of multiresolution is found in [80] (See also A.4). 24 t 2 w. t 2 t 2 t 2 ^ x_r[n] Figure 2.7: Two Levels of Wavelet Recomposition using Filter Bank Repre- sentation Section 2.3.2, and, hence, to the scaling (</>(£)) and wavelet (ip(t)) functions. As the number of decomposition levels used increases, the subspace number j increases as well8. The wavelet space Wj corresponds to the difference between the present scaling space Vj and previous one Vj_i. It means that Vj © Wj = Vj-\. This is shown in Figures 2.6 and 2.7, and graphically generalized in the frequency domain by Figure 2.8. 7l/(2J) 51/(2)') 7l/4 7C/2 71 Figure 2.8: Ideal Spectrum Division from Wavelet Decomposition Up to now, we have seen that the analysis side of the octave band filter 8We use the same notation as [24] and [69] but several others have been used; for example [80] employs exactly the opposite convention. 25 bank calculates the forward wavelet transform, while the synthesis side calcu- lates the inverse wavelet transform. Therefore, a two-channel QMF bank can be directly used to form wavelet decomposition. This is probably the most fre- quently employed method of designing and implementing wavelet transforms. 
It is nonetheless important to summarize the more theoretical approach to wavelet decomposition, which is more related to series expansion than filter banks. In order to account for the fact that the signal has to be decomposed into two bands at each level, two basis functions have to be defined. At each scale, a scaling function (f>(t) is defined in addition to the wavelet function ift(t). The first one acting like the low pass filter H0, while the second one is linked to the high pass filter Hi. Furthermore, in order to keep the total length of the decomposed signals equal to the length of the original signal-a fact that is taken care of by the downsampling operation in the filter bank representation-and to increase the definition at each level, a dilation operation needs to be performed (see Equation A. 13) on the original basis. Finally, given the original basis (j>, the scaling and wavelet functions at level j are as follows: oo <f^\t) = 2 h(k)(t>{2H - k) (2.23) k=—oo oo </,«(*) = 2 Yl hi(k)(f>(2H — k) (2.24) k=—oo Consequently, the appropriate initial filter ho and scaling function make possible the definition of different scaling ((f>(t)) and wavelet (ip(t)) func- tions by iteration of the dilation equations. Of course, the choice of the original 26 basis is of primordial importance, but as stated earlier, this is not the concern of the present thesis. In addition, a lot of work already exists in the field and that gives us enough material to work with. We introduce, in A.5, the six steps towards multiresolution, as proposed in [69]. Here, we present the Haar scaling and wavelet functions for V0 and Wq as they are shown in Figure 2.1: 1 0 < t < 1, (2.25) 0 otherwise 1 0 < t < \, m = \ -1 § < * < 1 , ( 2- 2 6) 0 otherwise One final point needs to be made in order to complete our discus- sion about multiresolution wavelet transform. What is the advantage of this specific form of signal decomposition over others? Figure 2.9 clearly shows that wavelet decomposition allows both frequency and time localization, while Fourier Transform does not. For FT, A / and At are fixed, even if the signal is first windowed in time. On the other hand, wavelet transform gives scalable time/frequency resolution as Aw oc 2J and A i oc 2~J, thus allowing the choice of more or less levels of decomposition in accordance with the importance of each resolution. Finally, as different bases exist, it is also possible to choose and/or adapt a particular basis according to the application considered. This section reviewed the fundamentals of multiresolution analysis and its application to wavelet theory. We explained how multiresolution concepts are applied to octave band filter bank systems for the implementation of 27 *o T ^ T Figure 2.9: Frequency Tilling for Fourier and Wavelet Transforms wavelet transforms. In addition, we highlighted the main steps leading to the computation of scaling and wavelet functions from a given basis by dila- tion operations. Finally, we showed the time-frequency localization allowed by wavelet transform, and the degree of freedom granted by the existence of a wide range of basis. As a result, the advantages provided by the relatively new wavelet transform over the more traditional Fourier analysis should be very clear. 2.4 Wavelet Packet Analysis We just showed that the wavelet transforms offer several advantages in terms of localization, resolution and flexibility compared to the Fourier transform. 
In addition, we demonstrated that wavelet transforms can be easily implemented by the use of octave band Q M F banks. This means that, by recursive filtering on the signal's coarse approximation, it is possible to achieve efficient and 28 simple standard wavelet analysis. This is of great importance as it allows for both efficient and accurate signal decomposition based on solid mathematical definitions. Here, we are interested in seeing what would happen if we generalized the discussion of Subsection 2.3.3 to other, more arbitrary, tree structures. We are particularly interested in the full-tree decomposition scheme where all the outputs of the first stage, that is the high passed as well as the low passed ones, are further decomposed. This means that, starting from a single two- channel filter bank, we decompose a one-dimensional signal in 2J bands at each j resolution level. Of course, this full tree decomposition, first proposed in [14] and known as wavelet packets (WP), can be implemented using the same filter bank-related approach [12, 13]. The wavelet packet library is produced by cascading filtering and downsampling operations in a tree-structure. Figures 2.6 and 2.7 thus become Figures 2.10 and 2.11. x[n] Hd(z) 4-2 l H0(z) 4- 2 Hx(z) 4-2 H*z) 4- 2 w, 02 H0(z) 4-2 Hi(z) 4-2 w, w, Figure 2.10: Two Levels of Wavelet Packet Decomposition using Filter Bank Representation 29 wv t 2 F<j(z) w 32 r 2 F,(z) t 2 t 2 F,(z) t 2 F«j(z) W, t 2 F,(z) x_r[n] Figure 2.11: Two Levels of Wavelet Packet Recomposition using Filter Bank Representation The wavelet packet algorithm generates a library of orthonormal func- tions that are derived from a single filter kernel. This kind of decomposition offers important advantages. In particular, wavelet packets are very promising for image compression. This comes from the fact that full tree decomposition allows for. the selection of the best basis. As the complete tree structure yields over defined decomposition, not all the outputs have to be used in the analysis or for the reconstruction. For example, from Figure 2.10, Wu, W22 and V\ can be selected as bases for the reconstruction of x[n], leaving unused V2 and Wo2- Generally speaking, the WP algorithm searches through the library of bases V '̂s and Wi's to find the least computationaly expensive set, which also provides the best compression9. In addition, from the examination of Figure 2.12, it is clear that WP 9Due to this capability, the FBI chose the use of a wavelet packets-based image compres- sion scheme for their fingerprints databank as it allows for the best performance for images with important high frequency content. ^ 30 increases the frequency resolution at higher frequencies. On the other hand, as the frequency resolution increases, the space resolution decreases. This is a problem that has to be kept in mind for the implementation of our wa- termarking scheme. There exists, however, one other advantage of wavelet packets that makes up, in our application, for the above problem: the imple- mentation of wavelet packet decomposition assures the symmetry of the final decomposition bands. It means that all the bands are of the same size, and that the translation from frequency to time domain is much more straight- forward. This proves very useful when the time comes to compute the space localization of tampering from the wavelet domain (Section 5.3). -V, V w. 
w0 2 ji/(2J) TC/Q)-1) idA idl 3n/2 n Figure 2.12: Ideal Spectrum Division from Wavelet Packet Decomposition To sum up, this short section was designed to provide the concepts of wavelet packets. The principal differences as well as the advantages involved in WP (over traditional wavelets) signal decomposition are explained. We are now confident that interested persons will agree with our choice of wavelet packets for image decomposition. In any case, more detail on our implementa- tion of wavelet packets is given in the description of our watermarking system 31 in Chapter 4. 2.5 Multidimensional Signals Since we are particularly concerned with the utilization of wavelets for im- age decomposition, we need to generalize wavelets to two-dimensional signals. Filter bank concepts are easily applied to images by considering them as two one-dimensional signals: the rows and the columns. We can simply start by utilizing a one-dimensional transform on the rows before the same is done on the columns. This can be formulated in terms of the tensor product of a one-dimensional transform having analysis and synthesis basis functions <fi and ipi. Therefore, the two-dimensional signal I(x,y) can be expressed as a combination of the following bases: i(x, y) = J2 ZX 7^, y), <fi,k(x> y)) wAx, v) (2-27) i k where the 2D basis functions are defined from the ID ones as follows: &,k{x,y) = <Pi{x) <Pk(v) and (pitk(x,y) = tpi{x) <pk(y) (2-28) As the previous transform can always be expressed as two separate one- dimensional transforms, it is referred to as separable. While being constrained in their definition and construction, separable transforms are favored because of their computational efficiency with separable filters [80]. For this reason, separable transforms are the most commonly used today. For more information on non-separable systems used for image processing applications, the reader is invited to see [66, 79]. 32 2.6 Summary In this chapter, we reviewed well-known principles of signal decomposition, and particularly, the fundamentals of wavelet transform. We were specifically interested in the implementation of wavelets with octave band quadrature mirror filters. Although no new concepts were proposed, the originality of our presentation resides mainly in its simplicity. Even if wavelets are based on fairly complex mathematical theories, we strived to make this chapter read- able. We first started from commonly known series expansion theorems and examined the implications for signal representation. Then, we explained the theories behind filter bank decomposition and described the implication of oc- tave band Q M F for the implementation of D W T . Afterwards, we expanded the use of filter banks to multiresolution analysis and focused on wavelet pack- ets, a particular type of wavelet decomposition that is used in our digital watermarking algorithm. We concluded the chapter by broadening the use of wavelet transform to multidimensional signals. 33 Chapter 3 Digital Watermarking "An invasion of armies can be resisted, but not an idea whose time has come." -Victor Hugo, writer and poet (1802-1885) 3 . 1 I n t r o d u c t i o n Digital watermarking is a relatively new technology that allows the imper- ceptible insertion of information into multimedia data. The supplementary information, called watermark, is embedded into the cover work through its slight modification. This mark is hidden from view during normal use and only becomes visible as a result of a special visualization process. 
An impor- tant point of watermarking techniques is that the embedded mark must carry information about the host in which it is hidden. The protection of currency bills is the most widely known use of water- marking. For example, Benjamin Franklin's face is embedded on 100 US$ bills 34 and matches its printed portrait on the-same bill, but is visible only when the bill is held up to a light. The watermark is used to make the illegal reproduc- tion of bills detectable, if not impossible. In this context, the paper watermark carries information about the legitimacy of the currency bill. Authentication is only one possible application of watermarking and its use on digital work offers other great possibilities. It has been foreseen as a good candidate tech- nology for enhancing multimedia data by the addition of information available to the users for content improvement, copyright protection, authentication, and so forth. The purpose of the present chapter is to provide the reader with a solid basis on watermarking technologies. First, we define watermarking and draw a portrait of the concepts leading to its use for digital media by retracing its history. Then, we give a background on watermarking by highlighting its applications. We also give a generic classification of watermarking schemes and review cornerstone papers that explicate its bases. 3.1.1 Definition of Watermarking First, we need to point out what differentiates digital watermarking from infor- mation hiding and steganography. In information hiding, the goal is to make the information imperceptible, or to keep the existence of the information se- cret [58]. Since it includes applications such as user anonymity in networks or database secrecy, information hiding is considered beyond the embedding of messages in content, and therefore, out of our scope of interest. Steganog- 35 raphy, on the other hand, is more related to the technology discussed here. The word steganography comes from two Greek words: steganos, which means covered; and graphia, which means writing. In [20], it is defined as the art of concealed communication. The hidden message, however, does not neces- sarily carry information about the cover work. For this reason, we consider steganography and watermarking as two different applications, and focus on the later for the remaining of the thesis. 3 . 2 H i s t o r i c a l P e r s p e c t i v e Watermarking was first used in the thirteenth century in Fabriano, Italy, to label pieces of hand-made paper [50]. The inventors inserted designs in the paper sheets by thinning certain regions by placing wire in the mold. After- wards, one could access the inserted design by holding the dry piece of marked paper up against a strong light. Watermarking was used to distinguish the mold used for fabrication, to identify the paper maker [20], or even simply for decorative purposes [36]. It was named watermarking because the patterns formed by the wires were perceived as watery areas on the marked articles [50]. This technique became accepted as a labeling tool for paper sheets. By the eighteenth century, papermakers were using watermarks to record infor- mation about produced paper. In that way, watermarks served-and still do-as a means of identifying paper with the members of the trade organization who manufactured it. At approximately the same time, the increasing number of 36 commercial exchanges and currency bill circulation boosted the problems of money counterfeiting. 
For that reason, watermarking quickly became an effective way to avoid the duplication of currency bills. As it has been proven to be effective, watermarking is still used as a currency protection technique.

In the mid 1950s, Emil Hembrooke, an engineer from the Muzak Corporation, was the first to extend the use of watermarking to other media. He filed a patent for the watermarking of musical works. The insertion of an ownership identification key was designed to identify the work at hand. It was performed by the intermittent application of a narrow notch filter on the audio signal using Morse-based coding. In [34], the system is described as follows:

"The present invention makes possible the positive identification of the origin of a musical presentation and thereby constitutes an effective means of preventing such piracy, i.e. it may be likened to a watermark in paper."

The connection between the insertion of undetectable information in digital content and paper watermarking technology was thus made. Watermarking would, however, have to wait longer to attract enough attention to become an active field of research. In 1988, Komatsu and Tominaga were the first to use the term digital watermarking for their image authentication system [38]. Although there were several publications in the interval, a cornerstone paper by Cox et al. [15] was the starting point of more intensified research. Figure 3.1 shows^1 that the number of publications on watermarking increased almost exponentially between 1995 and 1999. Of course, this was not only due to the paper by Cox et al., but mainly to the organization of the watermarking researchers. The first Information Hiding Workshop was held in 1996 and the Society of Photo-Optical Instrumentation Engineers, SPIE, started organizing conferences specifically on Security and Watermarking of Multimedia Contents in 1999. In addition to official efforts, individuals also contributed to the formation of a new research community. The work of Martin Kutter on the Digital Watermarking World is the first, and probably the best, example of personal efforts for the advancement of the technology. In the meantime, commercial use of digital watermarking (DW) interested companies and organizations. The music industry came up with the Secure Digital Music Initiative, SDMI, in 1999, in order to create an environment for the legitimate distribution of digital music. In addition, several companies (e.g. Digimarc Corporation, Alpvision and Alpha-Tec) specializing in digital watermarking have also been created. As a result, an increasing amount of effort and funding is dedicated to research in different areas of DW. It is therefore expected that a number of businesses will be created in the near future to deploy more applications based on this new technology [18].

^1 Thanks to Peter Meerwald from the University of Salzburg for the data.

3.3 Background on Watermarking

A lot of work has been done in order to develop reliable watermarking systems. Numerous digital media have been considered for the embedding of information to serve a wide range of applications. This has led to the investigation of several approaches, both in terms of embedding and detection procedures.

Figure 3.1: Publications on Digital Watermarking per Year
In this section, we review work done on different media as well as previously intended applications and general methodologies used for the watermarking of digital images. 3.3.1 Host Media for Watermarking Since the first use of watermarking on audio signals for identification, wa- termarking has been tried on several other digital media. There are several papers on the insertion of information in gray scale [27, 39, 43, 50] or color [7, 29, 63, 87] digital images, while others exclusively investigate the use of com- pressed [40] or printed copies [25]. Watermarking of video has also attracted some attention [10, 26, 35, 70], especially for the protection of MPEG streams [27, 33]. Of course, more research has been done on the watermarking of audio signals [32, 46], and interesting applications have been found for speech in par- ticular [11]. Text data have also been investigated by many [8, 9, 44]. Some more theoretical approaches of watermarking for combined media have been studied [45, 59, 81], but the resulting systems are limited since they do not take advantage of the specific type of host media involved. In order to sum- marize the relative importance of the investigation of each medium in terms of number of publications reviewed, we present Figure B . l in Appendix B. 40 3.3.2 Applications of Watermarking In addition to the exploitation of different hosts, researchers have been look- ing at different applications for digital watermarking. The application that attracts the most attention is copyright protection [27, 43, 44, 70, 71]. In this context, a watermark is permanently embedded in the work to identify its orig- inal owner. In order to be efficient, the embedded mark has to be robust, that is, it has to be detectable as long as the host carries its information, hence, the name of robust watermarking. Another use of robust watermarking is for the labeling or fingerprinting of digital media. This application is technically similar to the previous one except that, here, a different mark is embedded in each copy of the same work to allow its tracking. As shown in Figure B.2, there are a number of other very interesting applications that require recognition. Watermarking technologies can be used to monitor the content of television or radio broadcasting [32], or to control the access or the copying rights of digital content [8, 54, 81, 83]. The Millennium Group proposed to control the reproduction of DVD's by the addition of in- visible fingerprints in the video streams. Associated with decoders in copying devices this can either grant or forbid the duplication of the digital content. Similar work is also done under the supervision of Philips and Sony to create a new audio disk encoding standard, called Direct Stream Digital (DSD), that includes watermarking for copy protection. In addition, simple covert com- munications have been investigated by many [9, 29], and some propose to use watermarking as a feature enhancement method [74]. A number of researchers 41 have also created multipurpose watermarking systems in order to add several layers of information to the host [40, 46]. Content authentication is the last application of watermarking that needs to be highlighted. This specific watermarking approach involves the embedding of a fragile key in the host. This key is destroyed if tampering occurs with the original content, hence the commonly accepted name fragile watermarking. 
Fragile watermarking can be used to assess the validity of the work at hand, thus increasing the value of its content [5, 39, 87, 88]. For this reason, content authentication is a very promising application of DW technologies, and the next chapter investigates it in more detail.

From the above, it is clear that the use of watermarking on different media, and for different applications, makes the number of publications quite overwhelming. Fortunately, extensive surveys have already been published [4, 19, 59, 81] and provide overviews of the state-of-the-art. Nevertheless, generic classifications can be obtained by the examination of previously proposed techniques.

3.3.3 Requirements of Watermarking Systems

In order to be truly efficient, digital watermarking systems have to consider the specific application and host data intended. In fact, in DW, one size does not fit all [19]. Nevertheless, some general requirements of the technology can be extracted. The first, and most important one, is invisibility; that is, the watermarked work should not be perceptually different from the original (unmarked) one. Security is another universal consideration of watermarking. Its basis must lie in Kerckhoff's assumption: one should assume that the method used to encrypt the data is known to the unauthorized party [73]. From this, watermarking security can be viewed as encryption security, thus leading directly to the principle that it must lie mainly in the choice of the key and embedding protocol.

From the applications mentioned in Subsection 3.3.2, one can divide watermarks into two distinct types: robust and fragile. Used mainly for content authentication, fragile watermarks are meant to disappear if the image is corrupted. Robust watermarking, on the other hand, allows for the mark to still be detectable after the content has undergone tampering, and therefore, grants its permanent identification. Robustness and fragility requirements are therefore contradictory, and this is one of the most important reasons why each kind of watermarking is required.

In summary, there are three main requirements of watermarking systems: invisibility, security and robustness or fragility.

3.3.4 Embedding Domains and Decoding Procedures

Although it is quite difficult to classify all the existing methods into families, the domain of embedding, that is, the kind of operation needed to insert the mark, can be used as a common trait for different techniques. A lot of effort has been put into the development of straightforward embedding in the spatial domain [5, 72, 75, 87] because of its ease of implementation and computing efficiency. To add control to the frequency of embedding, some first transform the content using the discrete Fourier transform (DFT) to create the embedding domain [42]. Others want to match the current image compression standard and employ the discrete cosine transform (DCT) [17, 78]. A few methods are based on the Walsh-Hadamard transform [7, 28] because of the frequency spreading it grants. In the context of copyright protection, techniques are also developed in order to increase the robustness of the watermarks. Fourier-Mellin approaches allow good resistance to the geometric alteration of the host [64]. On the other hand, the use of WT [39, 46, 86, 88] or WP [27, 77] has been shown to increase the embedded marks' resistance to image processing operations. Moreover, these techniques allow for both the spatial and frequency localization of embedded marks.
To sum up, one could classify watermarking schemes in two broad categories in terms of domain of embedding: spatial domain versus transform domains.

In the same way, the decoding procedures can be used to create an informative classification protocol. The simplest way to extract a mark from its host is when the unmarked data is available. Such techniques, referred to as informed^2, were the first to be investigated because of their relative simplicity [17, 73, 86]. Informed detection is still considered in copyright protection or fingerprinting since the owner of the work is likely to have access to the original unmarked data. In contrast, access to the original unmarked data is forbidden in content authentication or copy control applications. Techniques recovering the embedded mark without the use of the unmarked data are defined as blind decoding. Within these blind decoding techniques, some require access to a reference key to extract the mark [39, 42, 75, 76, 77, 87]. These are called private. Others, on the other hand, do not require the unmarked data either, nor do they need a key for decoding purposes [46, 64, 78, 88]. These methods are called public because everyone is allowed to access the watermarked information. They are mainly considered for copy control or feature enhancement purposes, while private approaches are mainly investigated for content authentication.

^2 Although this term is sometimes used to characterize embedding procedures that tailor the mark according to the host media [18], we consider our utilization as more accurate since all insertion systems are, in fact, aware of the original content.

From this discussion, summarized in Figure 3.2, the basic categorization of watermarking techniques should become clear. From now on, the term DWT-based blind private watermarking technique^3 should be meaningful.

^3 In this case, the mark is embedded in the discrete wavelet domain and is detected with a secret key but without access to the original content.

Figure 3.2: Our Generic Classification of Digital Watermarking Systems

3.4 Watermarking for Copyright Protection

Many of the first papers published on DW are about its use for copyright and ownership protection related functions. As a result, most of the bases and theories associated with the technology are laid out in relation to this particular application. For this reason, it is logical to introduce cornerstone papers on the subject.

The paper published by Cox, Kilian, Leighton and Shamoon in 1995 constitutes an important step towards the establishment of watermarking as a technology in its own right [15]. Presenting a watermarking approach for the copyright protection of digital content, Cox et al.
capture the most important concepts of robust watermarking. The necessary characteristics of robust watermarking schemes are outlined: fidelity preservation, robustness to attacks, unambiguous identification^4 and universality^5. The authors are the first to argue that a watermark should be placed in perceptually significant components of a signal if it is to be robust to common signal distortions and malicious attacks. To compensate for the fact that this is contrary to the fidelity requirement, they adopt an embedding method similar to spread spectrum communications: that is, hiding a narrow band signal in a wideband channel that is the data to mark. In this case, the watermark, an independent identically distributed (i.i.d.) Gaussian random vector x, known to the authors, is added to the largest DCT coefficients v_i, which represent the most visually significant components of the image. To assure perceptual transparency, a weighting parameter \alpha is used to produce the watermarked DCT coefficients v'_i using the following three different strategies reflecting the degree of independence desired^6:

    v'_i = v_i + \alpha x_i \quad \text{or} \quad v'_i = v_i (1 + \alpha x_i) \quad \text{or} \quad v'_i = v_i \, e^{\alpha x_i}    (3.1)

As it distributes the mark over the entire content, the spread spectrum approach prevents attackers from jamming or detecting the embedded signal. At the same time, authorized parties can use their knowledge of the i.i.d. vector to form the originally embedded message from the comparison of the watermarked image with its original version. Cox et al. demonstrate that their technique is robust to common signal processing procedures and geometric transformations, and is able to deal with simple collusive attacks, thus ensuring good copyright protection of images. They conclude by stating, without implementing, that watermarking systems should take explicit advantage of the characteristics of the human visual system, HVS.

^4 The retrieval of the watermark should unambiguously identify the owner of a work.
^5 Once again, universality has been found to be a questionable requirement, but it was part of the original paper.
^6 The second and third methods are more robust against differences in scale between the DCT coefficients of the host and the mark to embed.

Figure 3.3: Watermarking as Communications

This paper directly led to another cornerstone paper. In [21], Cox, Miller and McKellips examine the similarities and differences between watermarking and traditional communications. Figure 3.3 describes a simple model of a watermarking system as a standard communication scheme and includes: a watermark encoder, a watermark embedder (as a modulator), a destructive channel, a watermark extractor (as a demodulator) and a watermark decoder. Using models based on Shannon's communication theories, the authors highlight the fact that the knowledge of the cover data as side information at the transmitter allows the design of more powerful watermark embedding algorithms. Cox et al. stress the importance of the use of characteristics of the HVS in the embedding process, both for maximizing the robustness and for minimizing the perceptual distortion introduced. They argue that an appropriate distortion model for watermarking applications includes a significant correlation between the distortion vectors (watermarks) and the content vectors they are applied to.
Therefore, their embedder makes use of the knowledge of the image rather than treating it as unknown noise. In addition, the inclusion of a perceptual model (Figure 3.4) assures the invisibility of the watermark based on the characteristics of the human eye [37]. The complete system of Figure 3.5 represents the underlying principles of watermarking as it takes into consideration the perception of the marked content by a potential user in parallel with the decoding procedure. The authors conclude by explaining the design of a blind optimal threshold-based detector. This discards the need to access the original image in the detection process, thus opening the field of watermarking to a wider range of applications.

Figure 3.4: Watermark Embedder with Perceptual Model

To summarize, [15] is a great step towards the development of modern robust watermarking schemes. The authors list the requirements of robust watermarking schemes, and acknowledge the challenges involved in the creation of an algorithm capable of producing watermarks that fulfill all the contradicting specifications. Then, [21] extends this work by explaining that the knowledge of the cover data as side information at the transmitter allows for the optimization of the watermark embedding algorithms, while enabling the blind recovery of the mark. Moreover, it shows that, to be fully efficient, a watermark embedding system has to take advantage of the knowledge of the host data and of the characteristics of the HVS. In conclusion, these two papers are important to highlight because they laid out the basis for watermark embedding in images. Other papers have greatly contributed to our understanding of the problems linked with watermarking systems, but the ones mentioned stand out both in their conceptualization and in their impact on our own work.

Figure 3.5: Complete Watermarking Scheme
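As a concrete illustration of the first (additive) strategy of Equation (3.1), the short sketch below adds an i.i.d. Gaussian mark to the largest-magnitude coefficients of a transform-domain vector and detects it by correlating the extracted difference with the known mark. It is a simplified stand-in rather than the exact procedure of [15]: the coefficient selection, the value of α, the similarity measure and all names here are illustrative, and the detection is informed (it uses the original coefficients).

```python
import numpy as np

rng = np.random.default_rng(seed=7)

def embed(v, x, alpha=0.1):
    """Additive spread-spectrum embedding, v'_i = v_i + alpha * x_i (Eq. 3.1, first form),
    applied to the len(x) largest-magnitude coefficients of v."""
    v_marked = v.copy()
    idx = np.argsort(np.abs(v))[-len(x):]      # perceptually significant coefficients
    v_marked[idx] += alpha * x
    return v_marked, idx

def similarity(v_suspect, v_original, x, idx):
    """Informed detection: correlate the extracted difference with the known mark."""
    extracted = v_suspect[idx] - v_original[idx]
    return extracted @ x / (np.linalg.norm(extracted) + 1e-12)

v = rng.laplace(scale=10.0, size=64 * 64)      # stand-in for an image's DCT coefficients
x = rng.standard_normal(1000)                  # secret i.i.d. Gaussian watermark
v_marked, idx = embed(v, x)
print(similarity(v_marked, v, x, idx))         # large when the mark is present
```

A threshold on this similarity value, chosen for an acceptable false-alarm rate, then decides whether the mark is present.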
3.5 Summary

In this chapter, we examined the basic concepts associated with DW. In the first place, we differentiated watermarking from other concealing technologies. Afterwards, we positioned the development of the technology by giving an overview of its history, and pinpointing the main stages of its development. Then, we explained some of the most common applications of DW. We also gave generic classifications of proposed systems. In the last section, we reviewed cornerstone papers, dedicated to the use of watermarking for copyright protection, that have clearly exposed the challenges and requirements of efficient watermarking schemes. This chapter was meant to provide a general background on digital watermarking, and to give a better understanding of the issues under consideration. The next chapter focuses on the application of digital watermarking that we are the most interested in: content authentication.

Chapter 4

Image Authentication

"When I am finishing a picture, I hold some God-made object up to it (a rock, a flower, the branch of a tree or my hand) as a final test. If the painting stands up beside a thing man cannot make, the painting is authentic."
-Marc Chagall, painter (1887-1985)

4.1 Introduction

The rapid expansion of the Internet, and the overall development of digital technologies in the past years, have sharply increased the availability of digital media. One of the great advantages of digital contents is that they can be reproduced without loss of quality. On the other hand, digital contents may be easily modified, and sometimes, imperceptibly. In cases such as courtroom evidence and video security systems, any alterations of image, video or audio data have to be detected. Therefore, some work needs to be done in order to develop security systems to protect the information contained in digital data.

As stated in the previous chapter, watermarking has come to be a widely accepted approach for copyright protection and ownership identification. Much effort has, therefore, been dedicated to the development of robust watermarking schemes. Another possible, but much less investigated, application of digital watermarking is content authentication. Since the use of a fragile watermarking system, in which the integrity of the mark is affected by tampering with its host, allows for the detection of any unauthorized modifications of an image, a video or an audio sequence, it makes the validation of multimedia data possible, thus giving it legal value.

In this chapter, we review the bases of authentication schemes, and propose a new method based on digital watermarking. In the first part, we create a classification of the authentication approaches considered. From this, we draw the requirements that such systems should fulfill in order to be efficient. Then, we detail specific methods that serve as the basis in the development of our own system, which is finally introduced in the last section (4.5). It must be kept in mind that, although we are concerned specifically with the authentication of images, many principles introduced in this chapter can generalize to other media. This is why the terms content, data, multimedia or host are used, as well as image, to refer to the digital information to protect.

4.2 Approaches to Authentication

Before we can explicitly state the requirements of image authentication schemes, we need to classify (and clarify) the different approaches considered. The first kind of system that has been proposed is named complete authentication. This family regroups techniques that treat multimedia data as integral information such that no further manipulation is allowed [41]. In that sense, it includes purely fragile watermark embedding techniques, as well as image signature methods. In the first case, a totally fragile watermark is embedded in the host data such that the key is broken if anything happens to the work [87]. In the latter signature case, data features are extracted. These are later used to verify the authenticity of the data whenever it is needed. In [57], for example, we propose a wavelet packets-based image retrieval system that can also be used in the context of image authentication. Our method uses the correlation of wavelet packet coefficients to extract features inherent to individual images and then create their unique signatures. Due to the inter-frequency cascading characteristics of wavelet coefficients [62], our system captures the salient points of the image. The signature can then be encrypted and saved in a database for future certification. Most image signature extraction procedures require the storage of additional information.
To eliminate this constraint, the signature may be embedded in the host data in a robust way that achieves complete data authentication [84].

For some applications, it is of utmost importance that not a single bit of information is changed; for example, the meaning of a text message can be completely changed by the modification of only a few letters (a handful of bits). For these cases, complete authentication is needed. For other cases, however, such as audio or video contents, the modification of a few bits does not change the information carried. For that reason, schemes with more flexibility need to be developed.

Semi-fragile authentication^1 schemes, therefore, form the last group of systems introduced and the one we are more interested in. In this kind of authentication method, the altered multimedia is treated as authentic if the manipulations are imperceptible. It means that as long as the visual, audible or readable content of the data is kept unchanged, the latter is considered authentic. The concept of image signature can be used to fulfil these requirements. For example, [6] extracts salient feature points of an image that are invariant to legitimate distortions, but not to malicious ones, and uses them to construct its semi-fragile image signature. In order to prevent the addition of side information, it is preferable to directly embed a mark in the host data in a semi-fragile way. Once inserted, the key stays unaffected by justifiable distortion. The key may also be designed so that any alterations in the data (for example, malicious versus acceptable) are made differentiable. Semi-fragile watermarking has been shown to be an effective method to grant multimedia data legal value [81]. This is why this content authentication approach has attracted the most attention since the rise of interest in digital watermarking techniques [39, 46, 72, 88].

^1 Also referred to as robust authentication.

4.3 Requirements of Authentication Schemes

From our willingness to protect digital data against forgery and tampering, and also based on semi-fragile techniques already proposed, we can extract several requirements that authentication systems must fulfill. Here are the main points to keep in mind in the development and evaluation of certification systems. In the context of image protection, an effective authentication scheme should be able to do the following:

1. Determine whether an image has been altered or not;

2. Find the location in the image where the alterations, if any, are made;

3. Integrate authentication data within the host image rather than storing the data in a separate file^2;

4. Be robust to acceptable manipulations such as lossy compression or to other content-preserving manipulations defined by the original owner of the work to protect; and

5. Include security features preventing the forgery or manipulation of the reference mark. In essence, this means that the reference key used for authentication must be securely stored. In addition, the embedding protocol must depend on the secret key in order to enhance the security of the authentication scheme.

^2 Hence ruling out image signature procedures.

Some authors have also added the recovery capability as a prerequisite of image authentication systems [41]. This means that, after the detection process, it should be possible to find out the original content of the tampered areas, and also, that the recovered data be of the same quality as the original.
This concept is interesting, and it has also led to the development of erasable watermarking systems. An erasable watermark can be removed from its associated cover work to obtain an exact copy of the original unwatermarked work. It is, however, impossible to design an erasable watermarking scheme that can be uniquely applied on all the work of a specific family of digital contents^3 [20]. Erasable watermarking schemes are still highly prototypic and this is why, in the present thesis, we have strictly been concerned with the detection and localization of alterations, and have not addressed the reconstruction of tampered regions or the deletion of embedded marks. Nevertheless, the use of digital watermarking for image authentication clearly presents some advantages.

^3 For example, it is impossible to use the same erasable watermarking scheme on all digital images (§A.6).

The advantages of watermarking approaches for content authentication are twofold. First, the direct embedding of a mark in the host data removes the need to store a separate authentication signature (point 3). Second, as the watermark undergoes the same alterations as the host, the mark is modified by the host's corruption [20]. Using a reference pattern in the embedding and decoding procedures allows for the identification and delimitation of tampered regions. This satisfies points 1 and 2 of the list of requirements at once. In addition, some basic requirements of digital watermarking (see Subsection 3.3.3) are helpful in the authentication context. The fact that the embedded mark must stay invisible allows the watermarked data to be as close as possible to the original data, therefore, preserving the original content. Furthermore, security concerns (point 5) are already considered by watermarking systems' requirements, which state that such systems must be planned under Kerckhoff's security assumption. From this, it is easy to recognize why watermarking is seen as a plausible candidate for image authentication systems, and to understand the growing interest in the subject.

4.4 Previous Work

The rising interest in content authentication has accelerated the development of fragile watermarking systems. As for other watermarking types, the fragile watermarking techniques proposed can be divided into two general categories in terms of the embedding process: the ones acting directly in the spatial domain and the others, working in different transform domains. Each has pros and cons that we highlight here.

4.4.1 Fragile Watermarking in the Spatial Domain

Fragile watermarking techniques that embed hidden information in the spatial domain, such as [5, 60, 72, 74, 87], are definitely more straightforward, and therefore, less computationally expensive, than the ones using transforms. Therefore, this kind of embedding is probably more suitable for real-time implementation. In [87], Yeung and Mintzer propose one of the first watermarking methods for high-quality color and grey-scale image verification and authentication. A watermark image is embedded into the source image in the spatial domain by the modification of the pixel values. The stamped image produced is visually identical to the original one. A verification key, stored and known only to authorized parties, is also produced and is used in the verification process in order to extract the image inserted in the host. The extraction procedure can detect and localize spatial alterations done on previously watermarked images.
The technique therefore provides a way of ensuring data integrity, adds to the security of the digital content, and allows the recipients of an image to verify the image as well as to display the ownership information of that image. The embedding process is however, fragile to unintentional image distortions introduced by basic image processing operations (e.g. compression) done for storage purposes. Another spatial embedding watermarking method is that proposed by Tefas and Pitas [72]. In addition to allowing the identification of modified re- gions in tampered images, it is able to reject small distortions introduced by high quality image compression (for which [87] is fragile). A pseudo- random watermark is embedded on randomly selected pixels using a neighbour- dependant function. In the detection process, the pixels surrounding the marked ones are used to create a mapping of false detections. The identi- fication of changes in small details of the image is based on mathematical 59 morphology; altered pixels are linked together in order to indicate tampered areas. Finally, the decision about the image's authenticity is made by com- paring the ratio of correctly detected watermark with a predefined threshold. A similar technique to further enhance the resistance of the watermarked im- ages to medium quality JPEG compression has also been proposed in [55]. However, both techniques (i.e. [87] and [72]) suffer from the following major drawback of spatial domain watermarking: the difficulty of the frequency lo- calization of modifications. In fact, because the marks are inserted in certain particular pixels, it is often impossible to localize frequency alterations applied to the entire image. The reason why the localization of frequency alterations is important is twofold: one, it is a step towards telltaling, the characteriza- tion of the specific process used for the alteration of the content; and two, it provides a measure of the relative degree of image distortion. In addition to the impossibility of identifying frequency tampering, im- age authentication systems based on the embedding of watermarks in the spa- tial domain have the drawback of being more susceptible to malicious attacks. In fact, search and collage attacks (defined below) are a threat to spatial-based- and particularly block-based-watermarking authentication approaches [20]. In a search attack, the aggressor, who has access to the watermark decoder, cre- ates altered versions of the work and processes them through the decoder by brute force, until one is declared authentic. Since the mark is embedded di- rectly in the pixel intensity values, it is possible, although lengthy, to extract a pattern from the multiple watermarked images and then use it to create 60 authentic images. On the other hand, collage attacks are much more possible and easy to realize. In that case, the attacker has access to several similarly marked works. He can therefore, use parts of different genuine images and assemble them to form a new authentic image. This is really easy to do and it allows the unregistered modifications of (supposedly) tamper-proof images. For examples, see Figures 5.21 and 5.22 in the next chapter. In summary, spatial-based authentication watermarking methods show speed advantage that can be favored for real-time implementations. This is why such techniques have often been extended to the authentication of video data [5]. 
However, for all the reasons mentioned above, more capable techniques must be developed for still image authentication.

4.4.2 Fragile Watermarking in Transform Domains

The techniques using transform domains are, of course, somewhat more complex and computationally expensive than the spatial domain ones. Yet, they offer a higher degree of robustness against common image processing operations [16]. One could wonder why robustness is important for fragile watermarking systems. This is simply because it is highly preferable that basic image processing operations (ones that are typically used for the storage of watermarked images) do not alter the embedded marks.

Some authors have proposed taking advantage of the knowledge of current image compression standards to develop semi-fragile watermarking techniques in the discrete cosine transform (DCT) domain [41, 85]. In [41], Lin
Finally, the use of the wavelet domain-as opposed to spatial or DCT domains-to embed the water- mark provides simultaneous spatial localization and a frequency spread of the watermark in the host image [39]. All these gains certainly explain why WT attracts so much attention for a wide range of image processing applications, including digital watermarking for image authentication [39, 46, 47, 88] and the upcoming image compression standard, that is, JPEG-2000. In [39], Kundur and Hatzinakos present a semi-fragile watermarking technique for the tamper proofing of still images. They propose to embed a mark in the discrete wavelet domain by the quantization of the image's cor- responding wavelet coefficients. The first operation is the decomposition of the image by the computation of its discrete wavelet transform (DWT). The 63 authors make use of the Haar wavelet exclusively, and propose an algorithm in which the changes in the wavelet coefficients guarantee integer changes in the spatial domain. Once the image is decomposed in L levels of detail, a watermark can be inserted. First, an author identification key is produced by the generation of a pseudo-random binary sequence (zeros and ones) of length Nw. This sequence is kept secret and known only by the original owner of the work. Then, a quantization map is created based on a user-defined quantization step A. The rounding of specific DWT coefficients to even or odd quantization step values embeds the zeros and ones of the watermark4 (see Figure 4.1). The selection of embedding locations is pseudo-random and well spread spatially and throughout each resolution level to be able to assess changes to all image components. The location information is stored in the coefficient selection key (ckey). In addition, an image-dependant quantiza- tion key (qkey) is introduced to improve security against forgery, and monitor specific changes to the image. The last step of the embedding process is the construction of the tamper-proofed image by the computation of the inverse discrete wavelet transform, IDWT. In the decoding process, the DWT is performed on the possibly tam- pered image and locations of original watermark embedding are selected using ckey. Then, the embedded mark is blindly extracted by the computation of quantization levels' parity using A and qkey. This allows for the comparison of the mark extracted with the originally embedded one. The approach permits tamper detection in localized spatial and frequency regions, therefore making 4Referred to even and odd quantizations. 64 A A -A 0 A 2A 3A 4A 5A i o i o i o ••• Q(f) Figure 4.1: Quantization Scheme used in [39] possible the identification of specific modified frequencies in an image. To assess the extent of tampering (the difference between the embedded mark w and the extracted one w), a tamper assessment function, TAF, is computed with the following: i Nw TAF(w,w) =—^w(i)<£>w(i) (4.1) Comparing the TAF with a predefined threshold T, allows the user to make application-dependant decisions concerning the credibility of the received data. Examining how a known embedded watermark has been changed gives the possibility to investigate how a work has been corrupted. This type of water- marking is referred to as a telltale watermarking. Thus, the users are allowed to make context-dependant decisions on the validity of the images received. However, the total capacity5 of the system, given by the mark's length Nw, is not specified. 
^5 The capacity of a watermarking scheme is defined as the amount of information that is to be embedded in the host.

In addition, no strategy is proposed to deal with a combination of malicious tampering and incidental distortion for the choice of Δ or T.

In the same line of thought, Yu et al. developed, in [88], a digital image authentication procedure that allows for the detection of malicious tampering while staying robust to incidental distortion introduced by compression. As in [39], they embed a binary watermark in the wavelet transform domain. Once again, the insertion is done by the even or odd quantization of selected wavelet coefficients. Quantization-based watermarking is the simplest protocol because it requires the least storage of information. It is, however, very sensitive to image modification. For this reason, the authors propose to make the embedded watermark more robust by quantizing the mean value of weighted magnitudes of wavelet coefficients. The quantization of regions of wavelet coefficients is performed using a predetermined function Q. The same function is used in the blind detection process as well, to privately retrieve the mark by reversed quantization, that is, determining the parity (in terms of quantization level) of the mean value of the WP coefficients. In order to distinguish malicious tampering from incidental distortion, the amount of modification on wavelet coefficients introduced by incidental versus malicious tampering is modeled as Gaussian distributions with small versus large variance. The probability of watermark error due to incidental alterations is smaller than for malicious tampering because they produce a comparatively smaller variance difference with the embedded marks. To state the validity of possibly tampered images, a tamper response function, TRF(x_l(i*, j*), x_l(i, j)) (Equation 4.2), is defined for each decomposition level; it compares the original quantization values x_l(i, j) with the wavelet coefficients x_l(i*, j*) of the possibly tampered image.

The TRF allows for the estimation of tampering depth. Furthermore, the computation of the chessboard distance among altered coefficients permits the mapping of the tamper response. This serves as the basis for the decision rules to measure the malevolence of attacks. The integration of the tamper response at each scale of the wavelet decomposition allows for the discrimination of malicious tampering from incidental ones. This grants a certain degree of robustness to the system as the method is able to blindly authenticate JPEG compressed images. In spite of this, the authors do not explicate the degree to which the image can be compressed, and never explain how the quantization parameters are chosen.

The main flaw with the two techniques described above is that they both involve post-processing operations to determine the validity of the content. In [39], the user has to set a threshold below which a mark can be considered authentic, while in [88], the tampering distribution has to be examined. Furthermore, in [88], the users might have to determine the tampering manually at each scale if the tampered area is too small, or if there are many small unconnected tampered regions. In fact, both systems in themselves are not robust to JPEG compression, and only the detection processes allow this specific operation to go unnoticed. In conclusion, truly robust automated image authentication techniques in the wavelet domain have yet to be developed.
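Before moving to our own scheme, the sketch below illustrates the parity (even/odd) quantization idea that [39] and [88] share, together with the tamper assessment measure of Equation (4.1). It is a minimal stand-in rather than either author's exact algorithm: a coefficient (or a region mean) is pushed to the nearest even or odd multiple of Δ to encode a key bit, the bit is recovered blindly from the parity of the quantization level, and the TAF counts the fraction of mismatched bits. The step size, key length and seed are illustrative.

```python
import numpy as np

def embed_bit(value, bit, delta):
    """Round a coefficient (or region mean) to an even multiple of delta for a 0
    and to an odd multiple for a 1 (even/odd quantization)."""
    q = int(np.floor(value / delta))
    if q % 2 != bit:                 # wrong parity: move to the neighbouring level
        q += 1
    return q * delta

def extract_bit(value, delta):
    """Blind extraction: the parity of the quantization level carries the bit."""
    return int(np.floor(value / delta + 0.5)) % 2

def taf(w, w_hat):
    """Tamper assessment function of Eq. (4.1): fraction of mismatched key bits."""
    return float(np.mean(np.asarray(w) ^ np.asarray(w_hat)))

rng = np.random.default_rng(3)
delta, key = 0.28, rng.integers(0, 2, 64)
coeffs = rng.laplace(scale=2.0, size=64)                    # selected wavelet coefficients
marked = np.array([embed_bit(c, b, delta) for c, b in zip(coeffs, key)])
extracted = np.array([extract_bit(c, delta) for c in marked])
print(taf(key, extracted))                                  # 0.0: no tampering detected
```

Any manipulation that moves a marked coefficient across a quantization level flips the corresponding extracted bit, which is what raises the TAF above the decision threshold T.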
Figure 4.2: Our Classification of Image Authentication Techniques

4.5 Our WP-Based Image Authentication

The last two techniques presented in Subsection 4.4.2 protect digital images from malicious tampering and unauthorized processing, while allowing the compression of images with small compression ratios. However, these techniques require the user to determine the malevolence of an attack. In addition, they necessitate a certain degree of interaction in both the embedding and decoding procedures. On the other hand, our technique overcomes both these drawbacks as the accepted attacks are predetermined prior to the embedding process. Furthermore, our embedding procedure is adaptive since it is designed to automatically take maximum advantage of the characteristics of the human visual system (HVS). In addition, we propose to further improve the frequency resolution of standard discrete wavelet transform by the use of wavelet packets. The use of WP leads to narrower frequency bands at higher frequencies, and assures the capture of images' salient points [57]. Moreover, it allows for much higher precision and flexibility in the selection of the bands to be used in the embedding process.

Our motivation for developing a semi-fragile authentication system that does not require the post-processing of the tamper detection in the recovery process comes from the fact that, in our mind, the original owner or creator of the work, not the end user, should decide the extent to which an alteration is judged acceptable. From this, we believe that techniques including user interaction in the detection process have a serious security flaw that could prevent their commercial use. This is why we introduce a novel technique for the content authentication of digital images, which is able to detect and localize malicious image alterations while offering a certain degree of robustness to image compression. In this section, we go over the details of the embedding and decoding procedures, highlighting principally the quantization and evaluation steps of each process, respectively. The experimental results obtained, as well as the comparison with a commercially available (and spatial-based) technique, are presented in the next chapter.

4.5.1 Embedding Process

The starting assumption of our approach is that any modification to an image leads to changes in the corresponding wavelet coefficients and embedded watermark [88].
As explained, small modifications in the wavelet coefficients do not change the image significantly, while minor changes in the image alter the coefficients locally, but noticeably. This characteristic is a good premise for watermark invisibility and fragility. In fact, this is the first reason why we have chosen the wavelet domain for our embedding procedure. The main steps of the technique developed are presented here, along with the specific advantages of wavelet packets. Our embedding scheme is summarized in Figure 4.7.

1. An author's identification key of 64 bits is randomly generated. Although this is an arbitrary length, 64 bits are enough to grant uniqueness of the key and protect individual creators and/or owners. The key is stored and kept secret. Only the owner, or other authorized parties, has access to the binary information contained in it.

2. The first four bits in the author's key are used to select the wavelet decomposition (wavelet function and number of decomposition levels) applied to the image. As previously stated, one of the advantages of WT is the great flexibility offered by the multitude of basis functions available. In the present application, it increases the security of our scheme since it is impossible for a would-be pirate to know which specific wavelet domain has been used for the embedding [52]. The selected number of decomposition levels had to be chosen so as to allow good frequency resolution, and to yield a sufficiently large number of coefficients per band for embedding. Moreover, the computational cost for analysis had to be kept reasonable. We also had to select mother wavelets in order to have finite (compact) support and improved frequency resolution. Generally speaking, the use of wavelet packets adds another degree of freedom because it allows selecting frequency independently of the scale [77]. A specific advantage for our application is that it leads to decomposition bands of the same size. This is important because it simplifies the frequency-space relation. Another important point is to choose the wavelets to achieve optimal spatial and frequency resolutions. Much work has yet to be done in the domain of wavelet function design or selection for particular applications. This is a field of research in itself, and the investigation of the effects of wavelet function selection on the final image would be an interesting subject for another research project.

As far as we are concerned, we use two families of compactly supported wavelets: Daubechies and Coiflets. Both are formed with asymmetrical functions, which allow for perfect reconstruction and show good compressibility because of their smoothness^6. From this, and although the use of 4 bits from the authentication key leads to 16 different possibilities for the selection of decomposition parameters, we have limited our investigation to 4 levels of decomposition using Daubechies 12 or 16 and Coiflets 12, 18, 24 or 30. The choice of 4 levels in the decomposition yields the minimal number of frequency bands necessary for the embedding (see Steps 4 and 5), thus minimizing the computational cost. The selected wavelets have less space localization than the simple Haar wavelet, and hence, yield better frequency resolution. Although they yield worse spatial localization, we have experimentally found them to be more suitable to the intended purpose since the embedded mark is truly spread over the image's content.
As an indicator, the discrete filter functions used are presented in Figures B.5 and B.6 in the appendix.

^6 The smoothness is associated with the number of vanishing moments, which increases with the order of the function used [24].

3. WP decomposition of the image is performed based on the specification extracted from the key in Step 2. In our case, we use a special implementation of the 2-D wavelet decomposition in Matlab: Wavekit^7. This toolbox decomposes the image into multidimensional wavelet packets using successive filter banks and allows for the visualization of its levels of detail (Figure 4.3).

^7 http://www.math.rutgers.edu/~ojanen/wavekit/

Figure 4.3: Two Levels of Daubechies 12 and Coiflets 30 Wavelet Packet Decomposition and Associated Original Images

4. The specific wavelet packet coefficients where the mark will be embedded are obtained. Our system identifies 64 groups of K + 1 bands. These groups are formed by one principal band surrounded by K secondary bands. The principal band is always located in the top-left corner of the group, that is, it corresponds to the lowest frequencies of the group. The first principal band is selected to be the LL or approximation band, which corresponds to the lowest frequencies of the entire decomposition. Then, the other 63 principal bands are evenly distributed in the wavelet packet decomposition to cover the entire frequency content^8. In each principal band, our system picks N subbands (regions of M wavelet packet coefficients). These subbands are spread along each principal band to cover the spatial content globally at all scales. The position of these regions is fixed in each band for any given decomposition. However, the subbands are shifted (by one subband size) from one main band to the next (see Figure 4.5) to cover the entire image content (see Figure 4.6(a)). Each group of bands (a combination of one principal band and K secondary bands) is used in Step 5 for the embedding of one authentication bit from the author's key. For this reason, N regions of wavelet packet coefficients, or subbands, are established in the secondary bands as well. The regions belonging to a secondary band correspond exactly to the same spatial areas as the WPC regions of the matching principal band. A coefficients selection example is shown in Figure 4.4 for four groups of four bands, each having four subbands of four coefficients (that is, K = 3, N = 4 and M = 4).

^8 This is shown by the computation of the FFT of the inverse WP transform of the selected coefficients (Figure 4.6(b)).

Figure 4.4: Coefficients Selection Approach (Step 4)

The selection of 64 principal bands with their secondary bands allows for the embedding of the 64-bit author's identification key. This identification step is important because we have to make sure the entire frequency spectrum is covered, as well as the entire image (in the spatial domain), in order to be able to assure authentication of the entire content. Furthermore, since we need to make sure that the images' characteristics are globally protected at each scale, the selected regions have to be spread in space, and their number should be sufficient to cover the image completely.
In this context, the application of a four-level WP decomposition and the use of three secondary bands (K = 3) for each of the sixty-four principal bands, with eight regions (N = 8) of four coefficients (M = 4) per band, is adequate for the protection of (256 by 256) images. A wavelet packets' domain mapping of coefficients selected for embedding is presented in Figure 4.5. In order to demonstrate that our selection process fulfils the above-mentioned requirements (global space and frequency covering), Figure 4.6 presents the mapping of the selected embedding coefficients in the spatial and frequency domains.

Figure 4.5: Selected Coefficients in the WP (Coiflets 12) domain

5. The author's secret key is used one last time for its embedding. As mentioned, each of the 64 bits is uniquely embedded in a group of one principal and K secondary bands. First, the means Mean(i, j) of all the selected regions of WPC (in the principal and secondary bands) are individually computed. Then, the original quantization levels q(i, j) are obtained (Equation 4.3) based on an optimal quantization step Δ (see Subsection 4.5.2). Afterwards, each bit of the author's identification key is inserted in the WP domain by the modification of the (individual) mean of the WPC regions belonging to each selected group of bands (one principal and K secondary).

Figure 4.6: Spatial (a) and Frequency (b) Mapping of Selected Coefficients

Rounding the mean to an even quantization level embeds a zero, while rounding the mean to an odd quantization level embeds a one. This is done by rounding the obtained quantization levels q(i, j) to the nearest even/odd quantization levels (to form q'(i, j)) and then adjusting the mean of the WPC regions to the computed values (as demonstrated in Equations 4.4 and 4.5). The advantage of using this quantization technique is that the embedded information is modified as a function of the host. In contrast, an adversary can forge a fragile watermark more easily if the embedded pattern is not dependent on the cover work [20]. Therefore, quantization-based approaches increase the security of authentication systems by assuring the uniqueness of the resulting embedded mark.
In addition, as we do not use an integer-to-integer wavelet decomposition, the intensity values of the watermarked image that will be produced in 6. are not guaranteed to be integers. Therefore, the use of quantization assures that the level of modification introduced in the image is important enough to be kept by the discretization of the pixel values done when the image is saved.

q(i,j) = ⌊ Mean(i,j) / Δ ⌋    (4.3)

For key[n] = 0:  q'(i,j) = q(i,j) if q(i,j) is even,  q(i,j) + 1 if q(i,j) is odd.
For key[n] = 1:  q'(i,j) = q(i,j) + 1 if q(i,j) is even,  q(i,j) if q(i,j) is odd.    (4.4)

Mean'(i,j) = q'(i,j) · Δ    (4.5)

In our embedding process, an optimal step Δ is used for the rounding of the mean of the defined regions belonging to the 64 principal bands. On the other hand, Δ/2 is used for the regions belonging to the 64 · K secondary bands. This is done to minimize the introduced distortion. Subsection 4.5.2 presents the procedure to obtain the optimal Δ for a Laplacian distribution of unitary variance (σ² = 1). As the WPC of the selected bands do not have unitary variance, we have to weight each quantization step as a function of the distribution of the particular embedding band in order for it to truly represent the optimal quantization step. To achieve this, we compute the power (σ²) contained in each of the selected bands, and use it as the modulating factor. In this way, our system takes advantage of the human visual system's characteristics by giving more weight to marks embedded in regions with more detail, for which the HVS is less sensitive [37] (see Figure 5.9). In fact, our system takes only implicit advantage of the characteristics of the visual system as it does not, contrary to what others have done [6], weight the embedded marks as a function of the sensitivity of the human eye at each frequency. However, this is shortly shown to be more than sufficient to assure the invisibility of the marks.

6. Finally, we apply the appropriate wavelet packet synthesis bank on the available coefficients, some modified and some not, to reconstruct the visual data and form the watermarked image. As shown in Figure 4.7, the image produced is visually identical to the original unmarked image. (More details are perceptible by comparing Figure 5.8 with Figure 5.7.)

Figure 4.7: Embedding Scheme Developed ("0" means even rounding, "1" means odd rounding)

4.5.2 Optimal Quantization Step

As we have just explained, our watermarking algorithm uses the rounding of WP coefficients. The quantization step Δ should be chosen so as to maximize the embedding weight, while minimizing the distortion introduced. Thus, the choice of the optimal quantization step is of high importance. As we have not made explicit use of the HVS, we need to assure that minimal error is produced by quantization. In terms of watermarking, it means that we want to minimize the mean squared quantization error, which is as follows:

MSQE = \sigma_q^2 = 2\sum_{i=1}^{M/2-1}\int_{(i-1)\Delta}^{i\Delta}\left(x-\tfrac{2i-1}{2}\,\Delta\right)^2 f_x(x)\,dx \;+\; 2\int_{(M/2-1)\Delta}^{\infty}\left(x-\tfrac{M-1}{2}\,\Delta\right)^2 f_x(x)\,dx    (4.6)
Figure 4.8: Input/Output Relation in the Quantization Process ("0" = zero embedded, "1" = one embedded)

In Equation 4.6, f_x(x) is the probability distribution function (pdf) of the variable x, and M is the number of quantization levels to be used. Based on the generic quantization scheme of Figure 4.8 [65], the minimization to achieve as a function of Δ is the following:

\frac{\partial\sigma_q^2}{\partial\Delta} = -\sum_{i=1}^{M/2-1}(2i-1)\int_{(i-1)\Delta}^{i\Delta}\left(x-\tfrac{2i-1}{2}\,\Delta\right) f_x(x)\,dx \;-\; (M-1)\int_{(M/2-1)\Delta}^{\infty}\left(x-\tfrac{M-1}{2}\,\Delta\right) f_x(x)\,dx = 0    (4.7)

It has been shown that wavelet coefficients have Laplacian probability distribution functions [49]. In order to emphasize that this still holds for WP coefficients, we present, in Figure 4.9, the distribution associated with a high frequency band of a four-level decomposition of an image using Coiflets 30, and the one associated with a low frequency band of a four-level decomposition using the Daubechies 12 basis (five different images are used for the creation of each figure).

Fortunately, the problem of minimization has already been solved for this type of distribution [2]. It means that the optimal quantization steps are already available as a function of the number of quantization levels. In compression applications, the size of the alphabet, that is, the number of quantization levels, is selected to reflect the compression ratio desired. Here, it is the main parameter that controls the fragility of the embedded mark. Therefore, the steps used can be selected to reflect the degree of protection that is necessary to achieve. Of course, the visibility of the watermark also has to be kept in mind. As far as fragile watermarking is concerned, the visibility concern is similar to the fragility concern: the greater the number of quantization levels used, the less distortion is introduced, and the more fragile the mark is. Typically, we use 32 quantization levels, as we have found that this value yields good perceptual transparency and authentication capabilities (the values presented in Table 4.1 always have to be modulated by the variance of the band used, as explained in the previous section). Therefore, the optimal normalized quantization step Δ used in our experiments is 0.2799.

Table 4.1: Optimum Step Sizes for Laplacian Distribution with σ² = 1 (from [65])

Alphabet size    Step size    SNR (dB)
 2               1.414         3.00
 4               1.0873        7.05
 6               0.8707        9.56
 8               0.7309       11.39
10               0.6334       12.81
12               0.5613       13.98
14               0.5055       14.98
16               0.4609       15.84
32               0.2799       20.46

4.5.3 Watermark Decoding Process

At the other end of the communication channel, or after the image has been stored, the watermarked content needs to be authenticated. We developed a decoding procedure in order to extract the embedded mark, decide on the authenticity of the received image and localize the tampering if needed. The first four steps (steps 1. to 4.) of our decoding procedure are identical to the embedding ones. The author's unique key is used in order to decompose the image in levels of detail according to specific parameters. The regions of WP coefficients used in the embedding are selected. This allows us to recover a verification key without any use of the original unmarked image (see step 5.). Then, the extracted mark is examined (steps 6. and 7.), and the areas with watermark errors are labelled as tampered areas (step 8.).

5.
The 64 · N regions of wavelet packet coefficients belonging to the principal bands, and the 64 · K · N regions belonging to the secondary bands, are examined. First, their mean Mean(i,j) is computed. Then, using the knowledge of the optimal quantization steps, we extract a verification sequence by rounding to the nearest quantization level: an even quantization level meaning zero and an odd quantization level meaning one (see Equations 4.8 and 4.9). We call this step inverse quantization even if it does not involve the removal of the effects of quantization. The principal bands' quantization subbands are first scanned. In the same way, the regions from the secondary bands are also examined to verify the authenticity of each of them. As the total number of regions of embedding is 64 · (K + 1) · N, this extraction procedure yields a verification sequence (key_verification) of 64 · (K + 1) · N bits. This binary sequence is, however, not used directly for tampering assessment, as we chose to evaluate the embedding regions in the WP domain by intraband and interband comparisons.

q(i,j) = round[ Mean(i,j) / Δ ]    (4.8)

key_verification[n] = 0 if q(i,j) is even,  key_verification[n] = 1 if q(i,j) is odd    (4.9)

Figure 4.10: Intraband/Interband Verification Scheme (steps 6. and 7.): intraband verification within a principal band, interband verification across the four adjacent bands of the WP decomposition

6. Intraband comparison: associations are made between WPC regions belonging to the same principal band to decide if the image has suffered from any frequency tampering. Basically, we verify if the bits extracted from a given principal embedding band are of the same parity and if they conform to the corresponding originally embedded bit.

7. Interband comparison: the verification is now performed throughout the WPC regions associated with the same spatial area to decide whether the image has been spatially altered or not. In this step we verify if all the bits obtained from the K + 1 WPC regions (from one principal band and K secondary bands) belonging to a given spatial region are of the same parity as the initially inserted bit.

Figure 4.11: Decoding Scheme Developed

Intraband and interband verifications remove the need for post-processing operations by verifying the conformity of the embedded mark within a level of detail and along different bands for the same spatial region. In that way, they permit a quick decision on the spot or region of tampering in the image, and on the type of tampering, which can be either frequency alterations or modifications of pixel values.

The rules of detection are quite straightforward and based on experimentations with the embedding of 64 bits in 64 main bands, having 8 subbands of 4 coefficients, and 3 corresponding secondary bands each. First, we state the authenticity of a given spatial region by ensuring that a minimum number of embedded marks (here three regions out of four) corresponding to it are untouched; a short sketch of this verification is given below.
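A minimal MATLAB sketch of this check follows; it is not the thesis code, and the array layout is assumed for illustration only.

% The bit recovered from a region is the parity of the quantization level
% nearest to its mean (Equations 4.8 and 4.9); a region "matches" when that
% parity equals the corresponding bit of the author's key.
extractBit = @(regionMean, delta) mod(round(regionMean / delta), 2);

% For one spatial area, matches is assumed to be a (K+1) x 1 logical vector:
% matches(1) refers to the principal-band region and matches(2:K+1) to the K
% secondary ones.  The three-out-of-four rule stated above then reads:
K = 3;
spatialRegionGenuine = (matches(1) && sum(matches(2:K+1)) >= 2) || ...
                       (~matches(1) && all(matches(2:K+1)));
% Analogous counting thresholds, given in the text that follows, are applied
% within each principal band (intraband rule) and to the whole image.

The example that follows in the text states the same rule in words.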
For example, if the WPC region belonging to the principal band is authentic, then only two of the secondary regions have to be authentic too. However, if the principal region is not authentic, then all the secondary regions have to be genuine in order for the mark to be considered valuable. Obviously, a similar approach is adopted in the intraband verification; the comparisons are now performed within the same principal decomposition band. The entire frequency band is declared authentic if at least seven out of eight regions are still marked with the original key. Otherwise, the band is marked as frequency tampered. In addition, we include an overall detection rule that requires 504 regions of embedding (out of a total of 512) extracted from the principal bands to have the same quantization level parity as originally embedded for the image to be considered genuine (this is considering the use of 64 principal bands with 8 regions of embedding each, §4.5.1). As each bit is embedded in 8 regions of a unique principal band, requesting 504 authentic ones assures the detection of one single different bit. Of course, other sets of rules can be defined in order to fully reflect the degree of protection desired. For that reason, they can be considered as the detection parameters to be chosen for each application.

8. Finally, based on the results of steps 5., 6. and 7., a decision is made on the authenticity. If it is decided that the image has been tampered with, the altered frequency or spatial region(s) is (are) identified. Localization is useful because the knowledge of when or where a work has been altered can be used to understand the motive of tampering, the possible candidate adversaries, and whether the alteration is legitimate. Localization of the tampering can be considered as a special case of telltale watermarking. To determine whether or not an attack is malicious or acceptable, a basic rule of thumb is to consider the conclusions that might be drawn from the images' use. If the distortions introduced do not change these conclusions, then the attack is considered not malicious [20]. In our case, high-quality JPEG and JPEG-2000 compression operations are considered as the only legitimate image processing operations, as they are used for storage purposes but do not alter the visual content. Techniques destined to extend the use of our system to medium quality JPEG compression are presented in Section 5.5. Nevertheless, the system, as proposed, shows enough robustness to be invariant to commonly used high quality compression standards.

4.6 Summary

This chapter was exclusively dedicated to digital data authentication procedures. First, we laid the basis of such systems, and grouped previously proposed approaches. Then, we used this classification to highlight the most important requirements of content authentication schemes. In addition to being able to determine if digital content has been tampered with, authentication schemes must allow for the evaluation of where and how an image has been altered. In addition, they must include security features preventing the reference mark from being unnoticeably forged or manipulated. Then, we presented previous approaches working in the spatial, DCT and WT domains. We also highlighted problems with each technique, and explained how we overcame them in our digital watermarking image authentication method.
We proposed to embed a secret author's identification key by rounding the mean of regions of wavelet packet coefficients to even or odd quantization levels. We developed an adaptive quantization procedure that makes use of the knowledge of the (Laplacian) distribution of WPC to maximize the mark embedding weight, while guaranteeing the invisibility of the watermark. Finally, we introduced an interband/intraband verification procedure, and explained how it is used in our decoding procedure to extract the watermark present in the image and decide if any tampering has occurred either in space or frequency. These procedures make obsolete the use of any post-detection operations employed to judge the overall tampering. Experimental results obtained with the proposed technique are presented in the following chapter, along with its comparison with a commercially available software and strategies to augment its robustness to JPEG compression.

Chapter 5

Experimental Results

"A rock pile ceases to be a rock pile the moment a single man contemplates it, bearing within him the image of a cathedral." -Antoine De Saint-Exupery, writer and poet (1900-1944)

5.1 Introduction

This chapter explains the experimental results, which verify the capabilities of our watermarking scheme. In order to do this, we use real and computer generated square-size images. We create a set of seven hundred watermarked images and test their visual quality and content authenticity. (In fact, we create 720 watermarked versions of 16 original images, all shown in Figures 5.1 and 5.2, using 45 different embedding signatures.) Since we used the Wavekit software for Matlab to perform the wavelet packet decomposition, our system is currently limited to square-size images. In reality, input images must have dimensions 2^n by 2^n to allow for the computation of the two-dimensional WT. This is due to the factor-2 subsampling operation performed after each filtering step, and the fact that there is no interpolation executed between the decomposition's scales. In future developments, however, we can design our own implementation of WP in order to avoid this limitation. As stated earlier, the ultimate goal of our work is to lay the basis of a watermarking scheme for image authentication, and not to develop a new wavelet analysis tool for Matlab. Moreover, optimization certainly has to be made before the commercial exploitation of our system is considered. This definitely involves the use of an instrument other than Matlab to implement the functions for wavelet packet decomposition. At any rate, we use Wavekit here, and the images presented should make the capacities of our wavelet packets-based watermarking system clear.

In the first section, we confirm the invisibility of the embedded marks, as well as the authentication capabilities of our system. Then, we prove its ability to detect and localize tampering, even in the presence of compression. Several wavelet functions are considered and their impact on the authentication procedure is commented upon. In Section 5.4, we compare our system with a commercially available, spatial-based watermarking tool in terms of the visibility of the marks, tampering localization aptitude and resistance to attacks. Finally, different strategies meant to increase the robustness of our system are examined in the last section.
Figure 5.2: Test Images (the named images include Sensin, Sf, Synthetic House and Yogi)

5.2 Embedding, Decoding and Visibility

First, we have made sure that our embedding system does not introduce visual artefacts in images. We have first measured the visual quality of the marked images by qualitative observations. However, in order to produce more objective results we have also used the Peak Signal-to-Noise Ratio (PSNR) defined in Equation 5.1, which measures the difference between an original image I(i,j) and its modified version I'(i,j). With the set of images produced, we have obtained an average PSNR of 42.47 dB. This is above the usually tolerated degradation level of 40 dB. Figures 5.3 to 5.8 clearly show that the vast majority of the watermarked images are not perceptually different from the unprotected ones (visual differences between the watermarked image and its original result from saving and/or printing operations) and that the differences are, in fact, more important in the regions of detail (see Figure 5.9). Figure 5.10 summarizes the results obtained with four different images. For clarity, not all images tested are included. Figure B.7 is shown in the appendix to complete the one presented here.

PSNR (dB) = 10 · log10( 255² / MSE ), where MSE is the mean squared difference between I(i,j) and I'(i,j)    (5.1)

Although Daubechies functions, in general, seem to yield slightly larger PSNR, the choice of the wavelet function does not clearly influence the visual quality of watermarked images (Table 5.1). In fact, Coiflets 12 yields the best results of all in terms of PSNR. The length of the mother wavelet seems to have a greater influence on the visual quality of marked images (a shorter function means smaller distorted spatial regions), but the results are not yet absolute enough.

Figure 5.3: Original Barbara Image
Figure 5.4: Watermarked Barbara Image (Coiflets 12 with PSNR = 41.76 dB)
Figure 5.6: Watermarked Barbara Image (Daubechies 12 with PSNR = 42.72 dB)
Figure 5.9: Difference between the Original and the Watermarked Airplane Images (the grayscale has been magnified for visualization, black regions referring to large differences)

Table 5.1: Average PSNR for Different Wavelet Functions

Wavelet Function    Average PSNR (dB)
Coiflets 12         43.40
Coiflets 18         42.05
Coiflets 24         41.78
Coiflets 30         41.98
Daubechies 12       43.25
Daubechies 16       43.25
Overall             42.47

The characteristics of the original images, however, greatly affect the ability of the embedded mark to be unnoticed. For example, it is difficult for our algorithm to find adequate regions of embedding in the Synthetic House or Yogi images, as they do not include much detail. In fact, as they are computer generated, they contain several regions of uniform intensity. Therefore, small perturbations are quickly noticeable by the human eye. On the other hand, the Omaha image is easy to mark imperceptibly since it contains a lot of detail (to compare the characteristics of Synthetic House and Omaha, see Figure 5.2). However, as two images of different perceptual quality can have the same PSNR, the PSNR does not seem to measure the absolute visual quality of images. Nevertheless, the peak signal-to-noise ratio is the current standard of comparison amongst watermarking techniques. Moreover, a commonly accepted distortion quantification technique that fully takes into consideration the characteristics of the HVS has yet to be developed. For these reasons, we have adopted the PSNR in our measurements. From these, a general rule
can be extracted: our system is more suitable for photographic-like gray-scale images since they have more detail in which to hide a watermark.

Figure 5.10: PSNR Values for Different Embedding Keys (curves for the Omaha, Airport, Synthetic House and Sena images, with the threshold line shown)

Another important factor is to test the ability of our system to authenticate genuine (not tampered) images. Therefore, while testing the invisibility of the marks, we also tested the capacity of the embedded marks to be detected by our intra/interband verification approach. We had to ensure that the percentage of false negatives (the number of times where authentic images are declared tampered) and the number of false positives (the number of times where unmarked, wrongly marked or tampered images are declared genuine) are both kept to a minimum.

The latter is the easiest to prove since our system is designed to avoid this kind of situation. The embedding of a binary mark, here a sequence of 64 zeros and ones, limits the average percentage of random positive matches to 50% for images with no watermark. As our detection process requires 98.4% of the extracted mark (504 regions of embedding out of a total of 512, see §4.5.3) to be identical to the original one for the image to be considered genuine, the number of images falsely declared valid is inherently low. Theoretically, false positives are possible. However, in practice, we have not encountered any in the seven hundred images tested. From this, we can comment on the low false positive rate and the capacity of our system to reject unmarked or wrongly marked images.

On the other hand, more experiments are needed to confirm the authentication ability of our scheme. Using the same set of 720 images, we applied the authentication procedure. We found that, even if some low-level marks are sometimes destroyed by the simple discretization of the pixel values (i.e. saving the image), our intra/interband verification approach is able to declare the image authentic 99.87% of the time (see Figure 5.11; again, not all the images are represented, but more results are shown in B.8). Once again, results show that the choice of the wavelet function in the image decomposition does not appreciably influence the detection rates. These range from 99.70% of marked regions unchanged by the discretization of pixel values (done naturally to store the images in the bitmap format) with Coiflets 18 wavelets to 99.97% with Daubechies 16 wavelets (see Table 5.2). Then again, the characteristics of the original images influence the detection capacities.

Figure 5.11: Detection Rates Achieved for Authentic Images (curves for the Omaha, Airport, Synthetic House and Sena images, with the detection threshold shown)

For example, the well known and highly detailed Barbara, Peppers and Airplane images, which are easy to watermark, all yielded a 100% detection rate, while the less textured Clock image is harder to authenticate. Nevertheless, the results unmistakably prove that the decoding method proposed in Subsection 4.5.3 permits the authentication of untampered images.

In this section, we have made explicit that our WP-based system is able to embed a mark imperceptibly in an image. Furthermore, we have demonstrated that the marks are detectable with the proposed extraction procedure, and that genuine images can be authenticated easily.
Finally, these results have been shown to be independent of the wavelet function chosen for the decomposition, at least amongst the functions tested. Now, we have to make sure altered images can also be identified, and their tampering highlighted.

Table 5.2: Average Detection Rate for Different Wavelet Functions

Wavelet Function    Average Detection (% of valid regions)
Coiflets 12         99.90
Coiflets 18         99.70
Coiflets 24         99.85
Coiflets 30         99.82
Daubechies 12       99.87
Daubechies 16       99.97
Overall             99.87

5.3 Tampering Detection

An important aspect of our system is its ability to localize image tampering. For that reason, we have tampered with previously watermarked images and ran the verification process. This was done to make sure our system is able to detect and highlight the doctoring, which is the malicious modification of an image with the intention of adding or removing information. Figure 5.12 shows the tampering done on the previously watermarked Barbara image by the addition of a second bookshelf to the right of the existing one, as well as the tampering detection obtained with our WP-based system. In addition, as we wanted our system to be robust to high quality JPEG compression, we have compressed the tampered images and ran the verification process again. Figure 5.13 shows the results obtained for the Barbara image. From these, we found that the ability of our system to detect and localize tampering is good, even in the presence of JPEG compression. To complete our evaluation of the ability of the authentication scheme to detect frequency tampering, we have marked the Baboon image and tampered with it by low pass filtering (a 3 by 3 Gaussian filter was applied to the entire image). This operation, although not perceptible, gets rid of small detail in the image. The results obtained for the frequency localization are, lastly, presented in Figure 5.14.

Figure 5.13: Compressed (3:1) Tampered Watermarked Image (Coiflets 12) and Detection of Spatial Tampering

As shown, the spatial tampering recognition does not pinpoint specific pixels, but defines regions of highly probable tampering. This is due to the following: first, the spatial correspondence between WP coefficients and pixels is not one-to-one since the wavelet and scaling functions are not of unitary length (the functions actually span over more than one pixel), and second, we use clusters of WP coefficients. Nevertheless, we have found that 5 by 5 pixel doctoring (in 256 by 256 images) can be detected with our authentication method, but that it is more reliable in detecting and accurately localizing modifications larger than 10 by 10 pixels. Furthermore, as shown in Figure 5.14, frequency tampering is identified in terms of altered bands, with the white ones being the tampered bands. Figure 5.14(f) clearly shows that the low frequency bands (top-left corner) are untouched, while tampered bands become more plentiful in the higher frequency bands, which corresponds to the modification of the frequency spectrum of the watermarked image (Figures 5.14(d) and (e)). It is also possible to highlight the corresponding most severely altered spatial regions. Accordingly, these results guarantee the ability of our scheme to detect malicious tampering meant to destroy spatial detail or spectral information in high quality images.
This is an important step towards telltale watermarking, as it gives a better understanding of how an image has been corrupted by examining how a known embedded watermark has been modified. In addition, our system is able to withstand a certain degree of JPEG and JPEG-2000 compression, up to a ratio of 3:1 for the former, and 2.23:1 for the latter. This, as will be discussed, forms an advantage compared to available schemes. In summary, as previously shown in [56], our system is truly able to detect malicious tampering, both in the spatial and frequency domains, even in the presence of high quality compression.

To finish this section, we highlight general observations. First, we have found that, for 256 by 256 grey images, the use of a 4-level decomposition increases the robustness to JPEG compression; hence the use of 4 levels in all the results presented. This was anticipated since the use of only 4 levels allows deeper mark embedding in the most significant parts of images. As the presence of fewer decomposition levels generates fewer decomposition bands, it also worsens the frequency resolution. This is, however, compensated by better spatial resolution resulting from the wavelets' fundamental principles (Figures 2.8 and 2.12). In the same line of thought, decompositions based on Coiflets functions yield better authentication and more accurate frequency and spatial localization of tampering. Intrinsic characteristics of Coiflets wavelets, such as the highest number of vanishing moments (both for ψ(n) and φ(n)) and the shorter effective length of the functions, may allow for better control of the effects of WPC quantization on the resulting images and, therefore, give more assurance that embedded marks will stay after the discretization of the images' intensity. All the same, in order to complete our evaluation, we need to see how our authentication approach compares with other available tools.

Figure 5.14: Original (a), Watermarked (b) and LP (Watermarked) Baboon Images (c); Frequency Spectra of the Watermarked (d) and LPF Images (e); Frequency Detection of Tampering with our WP-based Approach (f)

5.4 Comparison with Eikonamark

As copyright protection is the most common application of digital watermarking, some efforts have been put forth for the development of testing benchmarks for robust watermarking schemes. Stirmark (http://www.cl.cam.ac.uk/~fapp2/watermarking/stirmark/), Certimark (http://www.igd.fraunhofer.de/igd-a8/projects/certimark/) and Optimark (http://poseidon.csd.auth.gr/optimark/) are examples of systems that try to solve problems associated with the evaluation and comparison of watermarking systems. Stirmark is probably the most widely accepted, as it was the first to be developed, as early as 1997. However, Optimark solves the complexity constraint by allowing for performance assessment by a unique quality factor. By the use of a unique score, it grades the quality of a watermarking technique. It is, therefore, a promising candidate for future reference. Nevertheless, all those benchmarks are meant to compare robust watermarking schemes only. In the absence of evaluation standards for fragile watermarking systems, we need to compare our scheme with previously proposed ones on a one-on-one basis. For that reason, we have chosen a commercially available watermarking software called Eikonamark. It is a crude implementation of [5], and comes from the Alpha-tec corporation [82].
Using only the image authentication capabilities of Eikonamark (which also offers robust embedding for copyright protection), we compared the quality of the produced images, the ability to detect tampering, and finally, the resistance to collage attacks achieved by this software with the results obtained with our system.

5.4.1 Image Quality and Tampering Detection

As the first comparison step, we produced Eikonamark watermarked (Eikonamarked) images and compared them with the ones obtained with our novel approach. Figures 5.15 to 5.18 show original images in comparison with their Eikonamarked versions. From the comparison of the average PSNR value obtained with Eikonamark (38.42 dB) with the one obtained with our system (42.62 dB), it is clear that the commercial software degrades the images more than our method does.

Figure 5.15: Original Barbara Image
Figure 5.16: Eikonamarked Barbara Image (PSNR = 38.51 dB)
Figure 5.18: Eikonamarked Cameraman Image (PSNR = 38.37 dB)

Another important characteristic is the ability of the systems to correctly identify modified regions in marked images. To assess this for Eikonamark, we have performed the same doctoring tests as the ones presented in Section 5.3.

Figure 5.19: Tampered Eikonamarked Barbara Image and Detection with Eikonamark

The tampering detection quality achieved with Eikonamark (Figure 5.19) is comparable with the results obtained with our system (shown in Figure 5.12). However, as Eikonamark is spatially based, it yields a slightly better delimitation of the modified areas. On the other hand, we found that it is possible for doctored regions to go unnoticed in several cases. First, if the tampering of a 256 by 256 gray-scale image is smaller than 10 by 10 pixels, we have found that Eikonamark is not able to detect the tampering. At a size of 10 by 10 pixels, the doctoring is detected in the sense that the authentication key is not 100% found, but the tampering cannot be localized. In fact, if the size of the region of tampering is 10 by X (or X by 10), where X < 30, the watermark is not noticeably broken, and the modified area cannot be spatially pinpointed. Moreover, Eikonamark might generate a somewhat better detection of spatial tampering for medium-size corrupted areas, but it does not allow for the localization of frequency tampering. In addition, we have found the system to be highly susceptible to high quality JPEG compression, as shown in Figure 5.20.

Figure 5.20: Compressed (3:1) Eikonamarked Barbara Image and Authenticity Detection

5.4.2 Resistance to Collage Attacks

Since the goal of authentication methods is to detect any unlawful modification or tampering, it is of utmost importance to be able to uncover any kind of alteration. Forged attacks are often problematical since they are designed to bypass implemented protections. In that sense, collage attacks (see Subsection 4.4.1) are easy to realize and have been proven effective at defeating authentication procedures. For that reason, the last aspect of our comparison considers the capacity of the marked images to resist collage attacks, or more precisely, the ability of the authentication processes to detect them. For our system and Eikonamark, we first obtained two watermarked images. Then, we produced a tampered image by using parts of the two authentic images; a sketch of this mixing operation is given below. Finally, tamper detection results are presented, both for Eikonamark and our WP-based approach.
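The mixing itself is simple to perform. The following MATLAB sketch shows one way to build such a collage; the file names and block coordinates are illustrative only.

% Collage attack used for the comparison of Figures 5.21 and 5.22: a block of
% one authentic (gray-scale) watermarked image replaces the same block in
% another authentic watermarked image of the same size.
A = imread('barbara_marked.bmp');        % hypothetical file names
B = imread('cameraman_marked.bmp');      % both 256 x 256, both genuinely marked
rows = 1:64;  cols = 1:64;               % e.g. the bookshelf area in the top-left corner
collage = A;
collage(rows, cols) = B(rows, cols);     % mixed content built from two authentic images
imwrite(collage, 'collage.bmp');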
Figure 5.21: Eikonamarked Barbara (a) and Cameraman (b) Images with the Mixed Version (c), notice the disappearance of books from the top left corner, and the Tampering Detection Result with Eikonamark (d)

Figure 5.22: Watermarked Barbara (a) and Cameraman (b) Images with the Mixed Version (c), and the Tampering Detection Result with our WP-based Approach (d)

From Figure 5.21, it is obvious that Eikonamark is easy to defeat by the use of a collage attack. Even if the attack is made quite apparent for visualization reasons, the spatial-based approach is not able to detect the combination of the original images. Of course, this is a major security flaw as collage attacks are easily implemented and are quite effective at removing (or adding) important visual information. By comparison, our system is able to declare the image tampered. As shown, our WP-based approach is not able to locate the changes due to the high level of modifications detected. In fact, in this case, the localization of the doctoring is not of utmost importance as the image inspected is formed by the combination of two genuine images. Therefore, no region can be considered more or less tampered than the others. It is the image, as a whole, that needs to be considered unauthentic. From this, the results obtained are obviously satisfactory, as we have shown that the proposed method is more secure than the commercially available Eikonamark software. In addition, as stated earlier, straightforward spatial embedding eases search attacks, while embedding in a wavelet domain, unknown to potential attackers, prevents it or, at least, makes it substantially more lengthy and difficult.

5.4.3 Summary of Comparisons

To sum up, we have demonstrated that our system introduces less visual distortion when embedding marks in gray-scale images. We have shown that the quality of spatial tampering detection is equivalent for both systems, with a slight advantage, as far as localization of large tampered regions is concerned, for Eikonamark. However, our system includes the recognition of frequency tampering in addition to spatial tampering. Moreover, it surpasses Eikonamark in its capacity to tolerate image processing operations since it already includes robustness to high quality JPEG compression. Furthermore, our system is able to detect collage attacks. The only advantage of Eikonamark is that it is less computationally expensive. This is mainly due to the nature of the two systems (one is spatial-based while the other works in a transform domain), but also to the tool used in our implementation, which is not fully optimized. In conclusion, our system outperforms Eikonamark in most of the aspects investigated. With minor changes and optimization, it can certainly be developed into commercial software.

5.5 Robustness to JPEG Compression

The first goal of our project is, indeed, to develop a watermarking scheme for image authentication that can withstand a certain degree of image compression. As already shown, the system proposed in the previous chapter can tolerate a reasonable degree of image compression. Generally speaking, JPEG recommends a quality factor between 75 and 95 for a compressed image to be visually indistinguishable from the original one, and between 50 and 75 to be merely acceptable.
In applications where small image detail has to be kept intact, such as identification photos, medical images or video security systems, we need to make sure that images are visually unchanged by compression. For that reason, a quality factor of 95 is the lowest tolerable in these contexts. (The quality factor controls the compression ratio by determining the quantization table used in the quantization of the DCT coefficients: if QF < 50 then k = 50/QF, otherwise k = (200 − 2·QF)/100, where k is the multiplication factor for the quantization table. As an indication, a QF of 92 results in a 3:1 compression ratio; see http://www.faqs.org/faqs/jpeg-faq.) In other cases, for which the visual content of images is important but not vital, more substantial compression gains might be valuable in order to permit efficient storage of the data.

In this work, we want to add an additional module to our authentication scheme in order to allow for a higher compression ratio when needed. Since we always keep in mind that the image's visual content must not be changed by the storage operation, our goal is to allow compression up to a quality factor of 85. According to [88], in quantization-based watermarking techniques, the robustness of an embedded mark can be improved by either enlarging the quantization step (Δ) or reducing the amount of modification caused by image processing. The amount of distortion produced by compression cannot be changed, as the operations involved are standardized. On the other hand, augmenting the quantization interval involves increasing the distortion introduced, thus violating the watermarking invisibility requirement. Moreover, we have found the increase of the quantization step not to be very effective as far as enhancing the overall robustness to JPEG (or JPEG-2000) compression. In short, this means that additional techniques have to be developed if one wants to consider medium quality JPEG compression as an acceptable alteration of the work which our authentication scheme should also protect. In order to do so, we have experimented with different procedures that we describe in the following subsections.

5.5.1 Predistortion in the Spatial and Wavelet Domains

In [67], Smith and Comiskey introduce a spread spectrum information hiding technique. Their system embeds information in digital images by making small modifications to a large number of pixels. They use a pseudo-random carrier to code and distribute the information over the entire image. In order to increase the resistance of the mark to JPEG compression, they propose a predistortion technique. The original direct sequence carrier is compressed and uncompressed prior to the modulation and demodulation operations in order to compensate for the distortion from JPEG. Their idea is to use the compression routine to filter out, in advance, all the energy that would otherwise be lost later by the compression of the marked image. In light of this, the authors argue that:

"Tricks analogous to this are probably possible whenever the information hider has a model of the type of distortion that will be applied."

From this, it is reasonable to assume that predistortion techniques can be used to augment the robustness of our authentication scheme to JPEG compression. Consequently, we have experimented with different predistortion-related approaches that we explain below. All are derived from experimentations and observations rather than fundamental compression theories.
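As a concrete illustration of that general idea (and not of one of the tactics evaluated below), a carrier or mark pattern can be passed through the distortion it is expected to undergo before it is embedded. The following MATLAB sketch assumes a hypothetical zero-mean pattern and simply round-trips it through JPEG at the target quality:

% Predistortion in the spirit of [67] (illustrative only): keep only the part
% of the pattern that survives JPEG compression at the quality factor that
% the watermarked image will later be subjected to.
pattern = 4 * randn(256);                              % hypothetical carrier pattern
imwrite(uint8(pattern + 128), 'tmp.jpg', 'Quality', 85);
survivingPattern = double(imread('tmp.jpg')) - 128;    % pre-distorted carrier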
The first and most simple tactic we consider is to correct the distortion introduced by compression using post-watermarking operations in the spatial domain. By inspection, we found that the difference between a watermarked image and its JPEG compressed version (QF = 85) is, perceptually speaking, the same as the difference between the original (unmarked) image and its compressed version. Therefore, this should mean that the compression affects the watermarked image in the same way as the unmarked one. However, it also indicates that the watermark does not alter the image significantly enough to be kept by medium quality JPEG compression (QF smaller than 90). From these observations, we have tried several approaches. First, we have simply tried to compress the image prior to the watermark embedding. Since medium quality JPEG removes low energy components in images, pre-compression assures that the marks must be embedded in higher energy WP coefficients. It does not, however, guarantee that the mark values have enough energy to sustain post-embedding compression. For this reason, the first approach is not successful. Therefore, we have tried a second pre-distortion method in the spatial domain. Our idea is to magnify the spatial differences (between the original and marked images) introduced by watermarking to over-compensate for the compression to be performed. We wanted to make the perturbation introduced significant enough to be kept by the JPEG quantization procedure, and consequently, kept in the compressed version of the watermarked image. This tactic did not, however, produce a significant improvement of the robustness of our system without introducing visual distortion to test images. From this, we conclude that we need to experiment with other pre-distortion techniques to see if any can serve the present purpose.

Figure 5.23: WP Regions of a 2-Level Lena Image Decomposition with Coiflets 24 and Daubechies 16 that are Unaltered by JPEG Compression (QF = 85); the unaltered regions are represented in white

The second approach considered uses the wavelet packets' domain. First, we found that the quantization step Δ computed for each embedding band stays unchanged between a marked image saved in bitmap and the same one saved in JPEG (QF = 85). This confirms that the overall energy of each band is unchanged by compression, and explains the degree of robustness already achieved by our authentication technique. Then, we found that some coefficients of the WP decomposition stay unaltered by compression (with a quality factor of 85). Furthermore, it seems that these regions are similar for the same level of different wavelet decompositions (see Figure 5.23). Accordingly, it means that the watermark perturbation introduced by compression comes from the modification of separate individual coefficients and not from an overall alteration. (In fact, this was our main motivation in the use of means of regions of WP coefficients for our embedding.) We first tried to compensate for the distortion by diminishing the quantization alphabet size, thus enlarging the quantization steps. As stated earlier, this did not allow for the improvement of the resistance of our authentication watermark to JPEG compression without affecting the visual quality of the work at hand. Another tactic is to make the embedding regions larger, that is, to take the mean of more WPC at once.
Despite the fact that it may augment the system's robustness to some extent, it would also decrease our ability to localize image tampering, a compromise which is unacceptable.

As a result, we considered the strategy previously evaluated in the spatial domain instead: we have to overcompensate the perturbation in the embedding step in order for a post-compressed image to contain the appropriate mark. Compressing the watermarked image, and then extracting the available key, accomplished this. From a comparison of the mark obtained with the embedded one, it is possible to see where the compression causes modification of the mean of the WP coefficient regions. Then, for these regions only, we tried to correct the alterations prior to compression by introducing as much distortion in the other direction (see Figure 5.24) in order to counteract its effects. This did not, however, produce the intended results, because the compression is highly dependent on the frequency content of the images, which is modified by the introduction of overly quantized coefficients; but it led to another technique to increase the robustness of our scheme.

Figure 5.24: Overcompensation in the WP Domain (desired quantized value, compressed value and original value shown against the quantization levels 0, Δ/2, Δ, 3Δ/2, 2Δ, 5Δ/2)

In the same line of thought, we implemented a sequential watermarking technique. Initially, the original image is marked using our previously presented scheme (Figure 4.7). Afterwards, the image is compressed and the watermark is extracted and compared with the original. Then, modifications by shifting in the WP domain are performed on the unauthentic marks. This is done to obtain regions in each band for which the mean is not affected by compression. Subsequently, the same embedding system is applied recursively in order to insert the modified marks (Figure 5.25).

Figure 5.25: Recursive Embedding Scheme

Theoretically, the recursive embedding system shows great promise. In practice, however, problems arise. First, the implementation of the recursive technique is computationally too costly. Even if each embedding is not lengthy (around 8 to 12 seconds), its repetition becomes intolerable. Secondly, it is extremely difficult to keep track of the modified versus authentic coefficients, as well as of the areas of embedding. Even though it is possible to do so in the embedding process, we have not been able to come up with an efficient manner to perform the recovery in the decoding process. Furthermore, because of the shifting of WP regions, it is never certain that the image to protect is fully spatially covered. Finally, as the lossy part of the JPEG process is its quantization performed in the DCT domain, it is never guaranteed that unmodified regions are found, or that areas that are unmodified in one iteration stay untouched in the following ones.

All the techniques presented lead us to believe that predistortion techniques cannot be used in the proposed scheme. It seems that the tactic proven to improve the robustness of a spread-spectrum spatial domain watermarking method [67] cannot enhance the resistance of wavelet domain watermarking schemes. From this, we have to conclude that our system cannot be made more resistant to JPEG compression than intrinsically possible, unless a completely different authentication approach is developed.
5.6 Summary

In this chapter, we demonstrated the image authentication capabilities of our system. First, we showed that minimal visual perturbation was introduced by the insertion of a secret author identification mark in the WP domain. Furthermore, our intra/interband verification technique was proven to allow good spatial and frequency localization of tampering without requiring any reference images. In the same way, we have found the use of the wavelet packets-based embedding domain to maximize the robustness of the marks, which allows our system to work in the presence of high quality JPEG compression. This was also shown to be a major advantage of our method as compared to Eikonamark, a commercially available watermarking tool, and so were its ability to localize frequency tampering and its robustness to collage attacks. Finally, multiple strategies intended to augment the robustness of our system and extend its use to medium quality JPEG compressed images were presented. However, none were found to improve the overall performance of our watermarking system.

To sum up, our WP-based digital watermarking system was shown to yield excellent results in terms of its authentication capabilities, the quality of the watermarked images produced, its robustness to high quality JPEG compression and its resistance to collage attacks.

Chapter 6

Conclusions and Future Research

"The only true wisdom is in knowing you know nothing." -Socrates, philosopher

6.1 Overview

The protection of visual content is becoming an important issue as the use of digital images increases. In this context, this thesis has studied the utilization of watermarking for the content protection of digital images. In summary, the first part of the thesis gives an overview of wavelet transforms, while the most important part is dedicated to watermarking technologies. The bases of digital watermarking and content authentication methods are laid down. The emphasis is, however, mainly put on the implementation of a wavelet packets-based digital watermarking system.

The main contribution of the thesis is the development of a novel, semi-fragile watermarking technique for image authentication. We elaborate a secure watermarking technique for which the specific domain of embedding is known only by the author. For the embedding process, we formulate an optimal wavelet packet coefficients quantization protocol that takes maximum advantage of the host image's characteristics. We also create an authentication procedure to detect unauthorized tampering with images performed either in the frequency or spatial domains. This interband/intraband procedure verifies that certain regions in certain bands are not tampered with. In addition to our contribution to watermarking for image authentication, we introduce an easy description of wavelets and their implications in signal and image processing. We also, for the first time in the field, summarize and categorize the different watermarking techniques, and use this classification to summarize previously proposed techniques. The main concepts we review and develop are summarized one last time in the following.

6.2 Digital Watermarking and Content Authentication

Digital watermarking allows for the imperceptible insertion of information into multimedia data. The supplementary information, called a watermark, is embedded into the cover work through its slight modification.
Watermarking has attracted a lot of attention in recent years, and several applications have been found for it (e.g. copyright protection, fingerprinting or copy protection) and reviewed in this thesis. From this, we divide watermarking techniques into two general approaches in terms of the capacity of the embedded mark to resist alteration of the host: robust versus fragile embedding. Although more attention is given to the first category in the literature, this thesis is particularly focused on the protection of information contained in digital media, and hence, on the development of a fragile watermarking scheme. To be precise, we proposed to protect a digital image's visual content by the embedding of a semi-fragile watermark in the wavelet packets' domain.

Different conceptual approaches are considered to protect the authenticity of digital media. This yields the introduction of several semi-fragile watermarking methods. However, each of the previous techniques has flaws that must be circumvented. In particular, spatial-based authentication techniques lack the ability to identify tampering performed on the spectral content of images. Furthermore, they are sensitive to search and collage attacks. On the other hand, techniques working in the wavelet domain are robust to attacks, and allow for the identification of frequency tampering, as the WT allows both spatial and frequency localization of alterations. However, standard wavelet-based techniques previously introduced ask for some user interaction in the decoding process to decide on the severity of tampering. This represents a security flaw that endangers their commercialization.

6.2.1 Our Wavelet Packets-Based Authentication Scheme

This thesis introduces a novel technique for digital image authentication based on semi-fragile watermarking. Using the wavelet packets' domain for the watermark embedding, our technique overcomes the above mentioned problems arising from the use of previously proposed methods. This novel approach embeds an author's unique (secret) binary identification key (of 64 bits) in an image to allow the image's authentication. First, the secret key is used as an input parameter for the selection of the specific wavelet packet decomposition. This characteristic improves the overall security of our system, as a randomly generated key, known only by the author, controls the information about the embedding domain. Then, our authentication scheme embeds the author's key in some regions in the WP domain. The coefficients in the regions are rounded so as not to affect the visual quality of the picture. To determine the optimal quantization achievable (i.e. find the quantization step that grants detectability to the mark while minimizing the mean square quantization error in the WP distribution), an adaptive quantization procedure that makes use of the knowledge of the (Laplacian) distribution of WP coefficients is introduced. Then, the 64-bit author's key is embedded by the rounding of selected wavelet packet coefficients to even/odd quantization levels. This novel procedure permits us to maximize the embedding weight, while minimizing the distortion introduced. Finally, the watermarked image is obtained by the computation of the inverse wavelet packet transform on the quantized coefficients.

The authentication of the image content is performed blindly using the author's secret key. First, the WP decomposition is computed on the image to be verified and the mark is extracted.
Then, interband/intraband verification procedures are developed to complete the watermark decoding. Basically, the authenticity of the mark is verified across frequency and space to decide if any tampering has occurred in either frequency or space. These verification measures make obsolete the use of any post-detection operations for judging the overall incidence of tampering. Furthermore, this allows for the differentiation of malicious tampering versus small image alterations introduced by JPEG compression on a genuine image. In that sense, our method includes robustness in the context of high quality image compression, while allowing image tampering detection and localization.

6.2.2 Review of Results

Experimental results demonstrate the capability of our WP-based watermarking approach to authenticate digital images. First, the visual quality of the watermarked images produced shows the ability of our system to embed a secret mark in an image, while keeping the level of distortion introduced to a minimum. We also confirm that the mark is detectable with the proposed extraction procedure and that genuine images can be authenticated easily. In addition, the false negative detection rate is kept at a minimum. As well, the watermarking technique is proven to allow for accurate detection and good localization of image tampering performed either in the spatial domain (e.g. addition or removal of objects) or in the frequency domain (e.g. filtering), even in the presence of high quality JPEG or JPEG-2000 compression. Furthermore, these results are shown to be independent of the wavelet decomposition chosen.

Afterwards, the WP-based approach is compared with Eikonamark, a commercially available spatial-based watermarking tool, in terms of watermarked image quality, tampering detection ability and resistance to collage attacks. Our system is shown to yield better watermarked image quality than Eikonamark, while spatial tampering detection is found to be comparable for both systems. Eikonamark, however, did not allow the localization of frequency tampering, nor did it tolerate high quality JPEG compression. Furthermore, our system clearly outperforms Eikonamark in terms of its resistance to collage attacks. These are obviously satisfactory results as they show that the proposed method is more robust and more secure than this commercially available software.

To complete our investigation, we experimented with many different predistortion strategies in order to extend the use of our authentication system to medium quality JPEG compressed images. Unfortunately, neither the techniques working in the spatial domain, nor those performing predistortion of the watermark in the WP domain, can assure the resistance of the mark to medium quality JPEG compression without introducing unacceptable visual distortion. This leads us to conclude that other strategies would have to be considered in order to allow image authentication schemes to be robust enough for medium quality JPEG compression.

In summary, our WP-based digital watermarking system gives terrific results. First, the interband/intraband decoding technique we developed yields outstanding authentication, tampering detection and localization capabilities. Second, the optimal quantization approach favored produces watermarked images of excellent visual quality. Third, the control of the embedding domain by an author's secret (and unique) key enhances the global security of our scheme.
Last, the overall system has been shown to perform better than a commercial software package in its robustness to JPEG compression and its resistance to collage attacks.

6.3 Future Research

This thesis clearly improves on existing image authentication techniques and introduces new concepts for the use of watermarking. Nonetheless, there are still many aspects that can be further investigated, and through which the overall image authentication technology can be improved.

First, the optimization of the embedding process for particular types of images may augment the embedding capacity (the amount of information carried by a host). As certain kinds of images possess particular traits, our system might be able to take advantage of those special characteristics. In the same line of thought, the overall performance of our system may be further enhanced by the use of color images, since their capacity to accept an invisible mark is greater than that of gray-scale images, due to the presence of chrominance information in addition to the luminance. Coding techniques can also be used to increase the embedding capacity. The addition of an error control coding module [51], to augment the reliability of the information carried, may achieve this goal.

As we have seen, wavelet packet decomposition is not limited to two-dimensional signals. Therefore, the concepts developed in the context of image authentication may be adapted for the use of our WP-based authentication scheme on other media, such as text, audio or video.

Finally, future work may also include increasing the mark's resistance to JPEG and/or JPEG-2000 compression by extracting compression-invariant image characteristics in the wavelet domain and using them in the embedding process. In [55], the resistance of the watermark to JPEG compression is obtained by ensuring that the distortion caused by compression on an image's DCT coefficients equals the corresponding DCT coefficients of the spatially embedded watermark signal. As JPEG is based on the discrete cosine transform while our system works in the wavelet packets' domain, increasing our scheme's resistance to JPEG compression with a similar technique would definitely require deep modification of the proposed scheme. On the other hand, this tactic can probably be applied much more simply in the wavelet domain in order to grant our system more resistance to JPEG-2000, which is wavelet-based.

6.4 Closing Remarks

Semi-fragile watermarking techniques provide an effective means of protecting the content of digital media. Furthermore, their use on digital images allows for the detection and localization of unauthorized tampering, while permitting the efficient storage of visual information. These combined characteristics are of primary importance in applications such as courtroom evidence or medical imaging, for which the information contained in images is of utmost importance, while the number of images available requires proficient storage techniques. In this context, the use of semi-fragile watermarking techniques clearly augments the value of digital images. For all these reasons, the development of certification systems for digital data will become an increasingly important issue in the future. Thus, the introduction of our wavelet packets-based digital watermarking technique for image authentication is a small step in the advancement of overall digital security.

Bibliography

[1] M.D. Adams.
Reversible wavelet transforms and their application to embedded image compression. Master's thesis, University of Victoria, Victoria, B.C., 1999.

[2] W.C. Adams and C.E. Giesler. Quantizing characteristics for signals having Laplacian amplitude probability density function. IEEE Transactions on Communications, 26(8):1295-1297, 1978.

[3] Ashok Ambardar. Analog and Digital Signal Processing. PWS-Kent Publishing Company, Division of Wadsworth, Inc., 20 Park Plaza, Boston, MA 02116, USA, 1995.

[4] M. Barni, C.I. Podilchuk, F. Bartolini, and E.J. Delp. Watermark embedding: Hiding a signal within a cover image. Special Issue of IEEE Communication Magazine on Digital Watermarking for Copyright Protection: A Communication Perspective, 39(8):102-108, August 2001.

[5] F. Bartolini, A. Tefas, M. Barni, and I. Pitas. Image authentication techniques for surveillance applications. Proceedings of the IEEE, 89(10):1403-1418, October 2001.

[6] S. Bhattacharjee and M. Kutter. Compression tolerant image authentication. In IEEE International Conference on Image Processing (ICIP'1998), volume I, pages 435-439, October 1998.

[7] F.M. Boland, J.J.K. O Ruanaidh, and C. Dautzenberg. Watermarking digital images for copyright protection. In IEE Conference on Image Processing and its Applications, volume 410, pages 326-330, July 1995.

[8] J. Brassil, S. Low, N. Maxemchuk, and L. O'Gorman. Electronic marking and identification techniques to discourage document copying. In Proceedings of Infocom '94, pages 1278-1287, June 1994.

[9] J. Brassil, S. Low, N.F. Maxemchuk, and L. O'Gorman. Hiding information in document images. CISS '95, 1995.

[10] C. Busch, W. Funk, and S. Wolthusen. Digital watermarking: From concepts to real-time video applications. Computer Graphics and Applications, 16:25-35, January 1999.

[11] Q. Cheng and J. Sorensen. Spread spectrum signaling for speech watermarking. In IEEE International Conference on Acoustics Speech and Signal Processing, volume 3, pages 1337-1340, May 2001.

[12] Ronald R. Coifman, Yves Meyer, Stephen R. Quake, and Mladen Victor Wickerhauser. Signal processing and compression with wavelet packets. Preprint, Yale University, New Haven, April 1990.

[13] Ronald R. Coifman, Yves Meyer, and Mladen Victor Wickerhauser. Wavelet analysis and signal processing. In Mary Beth Ruskai, Gregory Beylkin, Ronald Coifman, Ingrid Daubechies, Stephane Mallat, Yves Meyer, and Louise Raphael, editors, Wavelets and Their Applications, pages 153-178. Jones and Bartlett, Boston, 1992.

[14] R.R. Coifman and Y. Meyer. Orthonormal wave packet bases. Technical report, Dept. of Mathematics, Yale University, 1990. Preprint.

[15] I.J. Cox, J. Kilian, T. Leighton, and T. Shamoon. Secure spread spectrum watermarking for multimedia. Technical Report 95-10, NEC Research Institute, 1995.

[16] I.J. Cox, J. Kilian, T. Leighton, and T. Shamoon. Secure spread spectrum watermarking for images, audio, and video. In IEEE International Conference on Image Processing (ICIP'96), volume III, pages 243-246, 1996.

[17] I.J. Cox, J. Kilian, T. Leighton, and T. Shamoon. Secure spread spectrum watermarking for images, audio, and video. IEEE Transactions on Image Processing, 6(12):1673-1687, December 1997.

[18] I.J. Cox and M.L. Miller. Electronic watermarking: the first 50 years. In IEEE Fourth Workshop on Multimedia Signal Processing, pages 225-230, 2001.

[19] I.J. Cox, M.L. Miller, and J.A. Bloom. Watermarking applications and their properties.
In IEEE International Conference on Information Technology, pages 6-10, 2000.

[20] I.J. Cox, M.L. Miller, and J.A. Bloom. Digital Watermarking. Morgan Kaufmann Publishers, 2929 Campus Drive, Suite 260, San Mateo, CA 94403, USA, 2002.

[21] I.J. Cox, M.L. Miller, and A.L. McKellips. Watermarking as communications with side information. Proceedings of the IEEE, 87(7):1127-1141, July 1999.

[22] A. Croisier, D. Esteban, and C. Galand. Perfect channel splitting by use of interpolation/decimation/tree decomposition techniques. In International Conference on Information Science and Systems, pages 443-446, 1976.

[23] I. Daubechies. Orthonormal bases of compactly supported wavelets. Communications on Pure and Applied Mathematics, 41:909-996, November 1988.

[24] I. Daubechies. Ten Lectures on Wavelets. SIAM, Philadelphia, PA, 1992. Notes from the 1990 CBMS-NSF Conference on Wavelets and Applications at Lowell, MA.

[25] S. Decker. Engineering considerations in commercial watermarking. Special Issue of IEEE Communication Magazine on Digital Watermarking for Copyright Protection: A Communication Perspective, 39(8):128-133, August 2001.

[26] O. D'Souza. Digital and analogue watermarking of video recordings. Technical Report 71-76, Chubb Vision Australia, 1997.

[27] M. Ejima and A. Miyazaki. A wavelet-based watermarking for digital images and video. In IEEE International Conference on Image Processing (ICIP'2000), volume III, pages 678-681, Vancouver, BC, Canada, September 2000.

[28] C. Fei, D. Kundur, and R. Kwong. The choice of watermark domain in the presence of compression. In IEEE International Conference on Information Technology: Coding and Computing, pages 79-84, 2001.

[29] David J. Fleet and David J. Heeger. Embedding invisible information in color images. In IEEE Signal Processing Society 1997 International Conference on Image Processing (ICIP'97), volume III, pages 532-535, Santa Barbara, California, October 1997.

[30] A. Grossman and J. Morlet. Decomposition of functions into wavelets of constant shape. SIAM Journal on Mathematical Analysis, 15(4):723-736, 1984.

[31] Alfred Haar. Zur Theorie der orthogonalen Funktionensysteme. Mathematische Annalen, 69:331-371, 1910. In German.

[32] F. Hartung and M. Kutter. Multimedia watermarking techniques. Proceedings of the IEEE, 87(7):1079-1107, July 1999.

[33] Frank Hartung and Bernd Girod. Fast public-key watermarking of compressed video. In IEEE Signal Processing Society 1997 International Conference on Image Processing (ICIP'97), volume III, pages 528-531, Santa Barbara, California, October 1997.

[34] E.F. Hembrooke. Identification of sound and like signals, 1961. United States Patent number 3,004,104.

[35] C.T. Hsu and J.L. Wu. DCT-based watermarking for video. IEEE Transactions on Consumer Electronics, 4(1):206-216, February 1998.

[36] D. Hunter. Handmade Paper and its Watermarks: A Bibliography. B. Franklin, New York, 1962.

[37] N. Jayant, J. Johnston, and R. Safranek. Signal compression based on models of human perception. Proceedings of the IEEE, 81(10):1385-1422, October 1993.

[38] N. Komatsu and H. Tominaga. Authentication system using concealed images in telematics, 1988.

[39] D. Kundur and D. Hatzinakos. Digital watermarking for telltale tamper proofing and authentication. Proceedings of the IEEE, 87(7):1167-1180, July 1999.

[40] G.C. Langelaar and R.L. Lagendijk. Optimal differential energy watermarking of DCT encoded images and video. IEEE Transactions on Image Processing, 10(1):148-158, January 2001.

[41] C.
Lin and S. Chang. Semi-fragile watermarking for authenticating JPEG visual content. In SPIE Security and Watermarking of Multimedia Content II, pages 140-151, San Jose, CA, January 2000.

[42] C.-Y. Lin, M. Wu, J.A. Bloom, I.J. Cox, M.L. Miller, and Y.M. Lui. Rotation, scale, and translation resilient watermarking for images. IEEE Transactions on Image Processing, 10(5):767-782, October 2001.

[43] J. Liu, K. Najarian, and E. El-Kwae. Comparative evaluation of wavelet-based digital image watermarking. In International Conference on Acoustics Speech and Signal Processing (ICASSP'2001), volume III, pages 211-214, 2001.

[44] S.H. Low, N.F. Maxemchuk, J.T. Brassil, and L. O'Gorman. Document marking and identification using both line and word shifting. In Infocom '95, Boston, MA, April 1995.

[45] C.-S. Lu, S.-K. Huang, C.-J. Sze, and H.-Y. Mark Liao. Cocktail watermarking for digital image protection. IEEE Transactions on Multimedia, 2(4):209-224, December 2000.

[46] C.-S. Lu and H.-Y.M. Liao. Multipurpose watermarking for image authentication and protection. IEEE Transactions on Image Processing, 10(10):1579-1592, October 2001.

[47] A. Lumini and D. Maio. Blind watermarking system for digital images in the wavelet domain. In SPIE International Symposium on Electronic Imaging, Security and Watermarking of Multimedia Contents II, pages 524-535, January 2000.

[48] S. Mallat. Multiresolution approximation and wavelet orthonormal bases of L^2(R). Trans. Amer. Math. Soc., 315:69-87, September 1989.

[49] S. Mallat. A theory for multiresolution signal decomposition: The wavelet representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(7):674-693, 1989.

[50] J.R. Hernandez Martin and M. Kutter. Information retrieval in digital watermarking. Special Issue of IEEE Communication Magazine on Digital Watermarking for Copyright Protection: A Communication Perspective, 39(8):110-116, August 2001.

[51] L.M. Marvel and C.T. Retter. The use of side information in image steganography. In IEEE International Symposium on Information Theory and Its Applications (ISITA'2000), November 2000.

[52] P. Meerwald and A. Uhl. Watermark security via wavelet filter parametrization. In IEEE International Conference on Image Processing (ICIP'2001), volume II, pages 1027-1030, October 2001.

[53] Yves Meyer. Ondelettes et Operateurs. Hermann, Paris, France, 1990. In two French volumes.

[54] F. Mintzer, G.W. Braudaway, and A.E. Bell. Opportunities for watermarking standards. Communications of the ACM, 41(7):57-64, July 1998.

[55] N. Nikolaidis and I. Pitas. Robust image watermarking in the spatial domain. Signal Processing, 66(3):385-403, 1998.

[56] A.H. Paquet and R.K. Ward. Wavelet-based digital watermarking for image authentication. In IEEE Canadian Conference on Electrical and Computer Engineering, volume I, pages 879-884, Winnipeg, Manitoba, May 2002.

[57] A.H. Paquet, S. Zahir, and R.K. Ward. Wavelet packets-based image retrieval. In IEEE International Conference on Acoustics Speech and Signal Processing, volume IV, pages 3640-3643, May 2002.

[58] F.A.P. Petitcolas, R.J. Anderson, and M.G. Kuhn. Information hiding - a survey. Proceedings of the IEEE, 87(7):1062-1078, July 1999.

[59] C.I. Podilchuk and E.J. Delp. Digital watermarking: Algorithms and applications. IEEE Signal Processing Magazine, 18(4):33-46, July 2001.

[60] M.P. Queluz and P. Lamy. Spatial watermark for image verification.
In SPIE Conference on Security and Watermarking of Multimedia Contents II, volume 3971, pages 120-130, January 2000.

[61] E.A. Robinson. Random Wavelets and Cybernetic Systems. Griffin and Co., London, 1962.

[62] J. Romberg, H. Choi, R. Baraniuk, and N. Kingsbury. Multiscale classification using complex wavelets and hidden Markov tree models. In IEEE International Conference on Image Processing (ICIP'2000), volume II, pages 371-374, Vancouver, BC, Canada, September 2000.

[63] J.J.K. O Ruanaidh, W.J. Dowling, and F.M. Boland. Watermarking digital images for copyright protection. IEE Proceedings on Visual Image Signal Processing - Special Section, 143(4):250-256, August 1996.

[64] Joseph J.K. O Ruanaidh and Thierry Pun. Rotation, translation and scale invariant digital image watermarking. In IEEE Signal Processing Society 1997 International Conference on Image Processing (ICIP'97), volume III, pages 536-539, Santa Barbara, California, October 1997.

[65] Khalid Sayood. Introduction to Data Compression. Morgan Kaufmann Publishers, 2929 Campus Drive, Suite 260, San Mateo, CA 94403, USA, 1996.

[66] Eero P. Simoncelli and Edward H. Adelson. Non-separable extensions of quadrature mirror filters to multiple dimensions. Proceedings of the IEEE: Special Issue on Multi-dimensional Signal Processing, 78(4):652-664, April 1990.

[67] J.R. Smith and B.O. Comiskey. Modulation and information hiding in images. In Workshop on Information Hiding, Springer-Verlag Lecture Notes in Computer Science, volume 1174, May 1996.

[68] Eric J. Stollnitz, Tony D. DeRose, and David H. Salesin. Wavelets for Computer Graphics: Theory and Applications. Morgan Kaufmann, San Francisco, CA, 1996.

[69] Gilbert Strang and Truong Nguyen. Wavelets and Filter Banks. Wellesley-Cambridge Press, Wellesley, MA, 1996.

[70] M.D. Swanson, B. Chau, B.Z. Zhu, and A.H. Tewfik. Multiresolution video watermark using perceptual models and scene segmentation. In IEEE International Conference on Image Processing (ICIP'97), volume III, pages 558-561, 1997.

[71] Mitchell D. Swanson, Bin Zhu, and Ahmed H. Tewfik. Transparent robust image watermarking. In 1996 SPIE Conference on Visual Communications and Image Processing, volume III, pages 211-214, 1996.

[72] M.A. Tefas and I. Pitas. Image authentication based on chaotic mixing. In IEEE International Symposium on Circuits and Systems (ISCAS'2000), volume I, pages 216-219, May 2000.

[73] Ahmed H. Tewfik. Special issue on digital watermarking. IEEE Signal Processing Magazine, 17(9), September 2000.

[74] A.Z. Tirkel, C.F. Osborne, and R.G. van Schyndel. Image watermarking - a spread spectrum technique. In IEEE 4th International Symposium on Spread Spectrum Techniques and Applications, volume II, pages 785-789, 1996.

[75] M.-J. Tsai, K.-Y. Yu, and Y.-Z. Chen. Joint wavelet and spatial transformation for digital watermarking. IEEE Transactions on Consumer Electronics, 46(1):241-245, February 2000.

[76] M.-J. Tsai, K.-Y. Yu, and Y.-Z. Chen. Wavelet packet and adaptive spatial transformation of watermark for digital image authentication. In IEEE International Conference on Image Processing (ICIP'2000), volume I, pages 450-453, Vancouver, BC, Canada, 2000.

[77] J. Vehel and A. Manoury. Wavelet packet based digital watermarking. In IEEE International Conference on Image Processing (ICIP'2000), volume III, pages 417-420, Vancouver, BC, Canada, 2000.

[78] R. Venkatesan and M.H. Jakubowski. Image watermarking with better resilience.
In IEEE International Conference on Image Processing (ICIP'2000), volume I, pages 403-406, Vancouver, BC, Canada, September 2000.

[79] M. Vetterli. Multidimensional subband coding: Some theory and algorithms. Signal Processing, 6(2):97-112, April 1984.

[80] M. Vetterli and J. Kovacevic. Wavelets and Subband Coding. Prentice Hall, 1995.

[81] G. Voyatzis and I. Pitas. The use of watermarks in the protection of digital multimedia products. Proceedings of the IEEE, 87(7):1197-1207, July 1999.

[82] AlphaTec Watermarking. http://www.alphatecltd.com/watermarking/.

[83] K.K. Wong, C.H. Tse, K.S. Ng, T.H. Lee, and L.M. Cheng. Adaptive watermarking. IEEE Transactions on Consumer Electronics, 43(4):1003-1009, November 1997.

[84] Ping Wah Wong. A public key watermark for image verification and authentication. In IEEE International Conference on Image Processing (ICIP'1998), volume I, pages 455-459, October 1998.

[85] M. Wu and B. Liu. Watermarking for image authentication. In IEEE International Conference on Image Processing (ICIP'1998), volume II, pages 437-441, October 1998.

[86] X.-G. Xia, C. Boncelet, and G. Arce. A multiresolution watermark for digital images. In IEEE Signal Processing Society 1997 International Conference on Image Processing (ICIP'97), volume III, pages 548-551, Santa Barbara, California, October 1997.

[87] M.M. Yeung and F. Mintzer. An invisible watermarking technique for image verification. In IEEE International Conference on Image Processing (ICIP'1997), volume II, pages 680-683, 1997.

[88] G.J. Yu, C.-S. Lu, H.-Y.M. Liao, and J.-P. Sheu. Mean quantization blind watermarking for image authentication. In IEEE International Conference on Image Processing (ICIP'2000), volume III, pages 706-709, Vancouver, BC, Canada, 2000.

Appendix A

In this brief appendix, we present some mathematical equations, proofs and explanations to support statements and theories developed throughout the thesis.

A.1 Fourier Analysis

The Fourier series of a continuous signal of period T is given by the following:

x_p(t) = \sum_{k=1}^{\infty} \left[ a_k \cos(k\omega_0 t) + b_k \sin(k\omega_0 t) \right]    (A.1)

where the expansion coefficients are given as follows:

a_k = \frac{2}{T} \int_T x(t) \cos(k\omega_0 t) \, dt    (A.2)

b_k = \frac{2}{T} \int_T x(t) \sin(k\omega_0 t) \, dt    (A.3)

The Fourier transform of a discrete signal x[n] is computed with the following:

\mathcal{F}\{x[n]\} = X(\omega) = \sum_{n=-\infty}^{\infty} x[n] \exp(-j\omega n)    (A.4)

and the original signal can be reconstructed using the following:

x[n] = \frac{1}{2\pi} \int_{2\pi} X(\omega) \exp(j\omega n) \, d\omega    (A.5)

A.2 Orthonormality of the Haar Basis

To make sure that the Haar basis functions given in Equation 2.4 form an orthonormal basis for signals from the space l_2(\mathbb{Z}), we need to prove the following:

1. \{\varphi_k\}_{k \in \mathbb{Z}} is an orthonormal family.
2. \{\varphi_k\}_{k \in \mathbb{Z}} is complete.

First, if we consider 1, we need to show that \langle \varphi_k, \varphi_l \rangle = \delta[k - l]. If we take k as even, that is, k = 2i, the basis functions overlap only if l = 2i or l = 2i + 1. For those two cases, we have the following:

\langle \varphi_{2i}, \varphi_{2i+1} \rangle = \varphi_{2i}[2i] \cdot \varphi_{2i+1}[2i] + \varphi_{2i}[2i+1] \cdot \varphi_{2i+1}[2i+1] = 0

The same argument can be made for odd k's, and thus orthogonality of the basis is proven. Now, to prove 2, we need to show that any signal belonging to l_2(\mathbb{Z}) can be expanded using the Haar basis. This is equivalent to demonstrating that there exists no signal x[n] with \|x\| > 0 that has a zero expansion, i.e., with \|\langle \varphi_k, x \rangle\| = 0 for all k. To prove 2, we demonstrate that the opposite is impossible, as follows.
\|\langle \varphi_k, x \rangle\| = 0 \;\Leftrightarrow\; \|\langle \varphi_k, x \rangle\|^2 = 0 \;\Leftrightarrow\; \sum_k |\langle \varphi_k[n], x[n] \rangle|^2 = 0    (A.6)

Since the summands |\langle \varphi_k[n], x[n] \rangle|^2 are non-negative, Equation A.6 holds only if

X[k] = \langle \varphi_k[n], x[n] \rangle = 0 for all k.

First, if we take k as even and consider X[2k] = 0, the inverse transform yields x[2k] = -x[2k+1] for all k. Then, the inverse transform of X[2k+1] = 0 for odd k's gives x[2k] = x[2k+1] for all k. Therefore, the only way both conditions can be satisfied is when x[2k] = x[2k+1] = 0, which is in contradiction with our first premise. Consequently, there is no sequence x[n] with \|x\| > 0 such that \|X\| = 0, and this proves completeness of the Haar basis \{\varphi_k\}. In conclusion, the Haar basis functions form an orthonormal basis, as required.

A.3 Conditions on the Filters H_i(z) and F_i(z)

Problems linked with the appropriate choice of filters need to be solved in order for the signal reconstructed from analysis/synthesis using filter banks to be an equivalent (in fact, a delayed version) of the original signal. Since the analysis filters H_0 and H_1 are not ideal brick-wall filters, their responses overlap, thereby creating aliasing. The synthesis filters must be designed with this problem in mind. Thus, the conditions that the filters must satisfy can be split in two, as follows:

1. Alias cancellation: choosing F_0(z) = H_1(-z) and F_1(z) = -H_0(-z) gives

F_0(z) H_0(-z) + F_1(z) H_1(-z) = 0    (A.7)

2. Perfect reconstruction:

F_0(z) H_0(z) - F_0(-z) H_0(-z) = 2 z^{-l}    (A.8)

These can be summarized using the following matrix notation:

\begin{bmatrix} F_0(z) & F_1(z) \end{bmatrix} \begin{bmatrix} H_0(z) & H_0(-z) \\ H_1(z) & H_1(-z) \end{bmatrix} = \begin{bmatrix} 2 z^{-l} & 0 \end{bmatrix}    (A.9)

A numerical check of these conditions for the Haar filter pair is sketched after Section A.5 below.

A.4 Definition of Multiresolution

A multiresolution analysis consists of the following sequence of embedded closed subspaces:

\cdots \subset V_2 \subset V_1 \subset V_0 \subset V_{-1} \subset V_{-2} \subset \cdots    (A.10)

such that

1. Upward completeness: \bigcup_{m \in \mathbb{Z}} V_m = L^2(\mathbb{R})    (A.11)

2. Downward completeness: \bigcap_{m \in \mathbb{Z}} V_m = \{0\}    (A.12)

3. Scale invariance: f(t) \in V_m \Leftrightarrow f(2^m t) \in V_0    (A.13)

4. Shift invariance: f(t) \in V_0 \Leftrightarrow f(t - n) \in V_0 for all n \in \mathbb{Z}    (A.14)

5. Existence of a basis: there exists \varphi \in V_0 such that \{\varphi(t - n) \mid n \in \mathbb{Z}\} is an orthonormal basis for V_0    (A.15)

A.5 Steps towards Multiresolution

Here are the main steps leading to (and necessary for) a multiresolution application of the wavelet decomposition. From [69], we have the following:

1. An increasing sequence of scaling subspaces V_j
2. Wavelet subspaces W_j such that V_j + W_j = V_{j-1}
3. The dilation requirement: f(t) in V_j corresponds to f(2t) in V_{j-1}
4. The basis \varphi(t - k) for V_0 and \psi(t - k) for W_0
5. The basis \varphi(2^j t - k) for V_j and \psi(2^j t - k) for W_j
6. The basis of all wavelets \psi(2^j t - k) spans the whole space L^2(\mathbb{R})
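As a numerical illustration of the filter conditions of Section A.3, the following sketch checks alias cancellation and perfect reconstruction for the Haar filter pair. It is an illustration only: the orthonormal normalization of the filters and the use of coefficient convolutions to form the polynomial products are choices made for this example rather than material taken from the thesis.

    import numpy as np

    # Haar analysis filters (orthonormal normalization assumed for this example)
    h0 = np.array([1.0, 1.0]) / np.sqrt(2)    # low-pass H0(z)
    h1 = np.array([1.0, -1.0]) / np.sqrt(2)   # high-pass H1(z)

    def at_minus_z(h):
        # Coefficients of H(-z): multiply h[n] by (-1)^n
        return h * ((-1.0) ** np.arange(len(h)))

    # Synthesis filters chosen as F0(z) = H1(-z) and F1(z) = -H0(-z), the alias-cancelling choice
    f0 = at_minus_z(h1)
    f1 = -at_minus_z(h0)

    # Products of transfer functions correspond to convolutions of coefficient sequences.
    # Alias term F0(z)H0(-z) + F1(z)H1(-z) should vanish identically (Equation A.7).
    alias_term = np.convolve(f0, at_minus_z(h0)) + np.convolve(f1, at_minus_z(h1))
    print(np.round(alias_term, 12))   # -> [0. 0. 0.]

    # Distortion term F0(z)H0(z) + F1(z)H1(z), equal to the left-hand side of Equation A.8
    # under this choice of synthesis filters, should reduce to a pure delay 2z^{-l}.
    pr_term = np.convolve(f0, h0) + np.convolve(f1, h1)
    print(np.round(pr_term, 12))      # -> [0. 2. 0.], i.e. 2z^{-1}

Longer orthogonal filters, such as the Daubechies and Coiflet filters shown in Appendix B, can be checked in the same way; only the lag of the pure delay changes.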
A.6 Erasable Watermarking

From [20], the main steps towards erasable watermarking are:

1. All the information in the work is used to compute a signature.
2. The signature is embedded in the mark in an erasable (invertible) manner.
3. The recipient of the work extracts and records the embedded signature.
4. The watermarked signature is erased from the cover work. At this point, the work should be identical to the original one.
5. In the same way as in 1, a signature is computed from the work and compared with the signature extracted in 3.
6. The authenticity decision is based on the similarities/differences between the extracted (3) and computed (5) signatures.

An erasable watermark can be removed from its associated cover work to obtain an exact copy of the original unwatermarked work. It is, however, impossible to make an erasable mark that can be embedded in 100% of digital content. This is due to the fact that digital works are represented with a finite number of bits, so there is a fixed (although very large) number of possible works. For example, a 256x256 8-bit image contains 256 x 256 x 8 = 524,288 bits, so there are 2^524288 possible such images. Erasability requires that the original work can be recovered from the watermarked work, which in turn requires the mapping between original and watermarked works to be one-to-one. This means that the 2^524288 possible original images must map onto exactly 2^524288 distinct watermarked images. If the original and watermarked works are represented with the same number of bits, it follows that all possible works must be considered watermarked. Therefore, the only way to achieve 100% effectiveness is to allow for a 100% false positive rate, which goes against an important requirement of watermarking schemes. For more detail on the subject, see Chapter 10 of [20].

Appendix B

This second appendix presents figures that we thought were interesting, as well as highly informative, for the readers, but not absolutely essential to the core of our work.

[Figures B.1-B.3: only label fragments survive in this text version, including "Audio or Combined Media", "Broadcast Monitoring" and "Other Transforms".]

[Figure B.4: Decoding/Detection Procedure used for Digital Watermarking (labels include "Theoretical Approach" and "Without Original Data (public)").]

[Figure B.5: Discrete Filters used in our Implementation (Daubechies 12 and 16 low-pass (F0) and high-pass (F1) filters).]

[Figure B.6: Discrete Filters used in our Implementation (Coiflets 18, 24 and 30 low-pass (F0) and high-pass (F1) filters).]

