A TRANSMISSION SYSTEM FOR DPCM ENCODED MONOCHROME IMAGES OVER A NOISY CHANNEL

by SINCLAIR CHI

B.A.Sc. (E.E.), University of British Columbia, 1996

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE STUDIES, DEPARTMENT OF ELECTRICAL AND COMPUTER ENGINEERING

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA, June 1998. © Sinclair Chi, 1998

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Electrical and Computer Engineering, The University of British Columbia, Vancouver, Canada

Abstract

Due to the high correlation of neighbouring pixel values of a digital image, DPCM picture transmission has been a well investigated area of image compression. DPCM has found its way into standards such as JPEG and MPEG, which have revolutionized the multimedia industry. However, because of the high noise sensitivity of DPCM compressed images, techniques must be developed to reduce the disturbing effects of channel errors. Hence, a DPCM image transmission system over a binary symmetric channel is studied, along with techniques used to improve picture quality. Two concepts which are the foci of the investigation are joint source/channel decoding (JSCD) and post-processing.

The DPCM encoded image usually exhibits some residual redundancy. The encoded image can thus be modeled as a first-order Markov process, and a bit error rate minimizing decoder takes advantage of the source statistics to correct errors. This is a type of constrained joint source/channel decoding, and does not use added redundancy in the form of FEC to improve picture quality. Usually, side information consisting of source statistics needs to be transmitted to the decoder. It is found that an iterative JSCD system which needs no side information has the same error performance as a system which requires error-free transmission of the source statistics, but has a larger decoding delay. Furthermore, the JSCD system was generalized to one using a second-order Markov model, and the improvement was found to be negligible.

JSCD decoding at a very high probability of error does not correct all of the channel errors, and some very noticeable errors remain. Post-processing techniques have been applied to corrupted DPCM images, and thus it is proposed that similar techniques be applied to JSCD DPCM images. A thresholding technique is developed to locate the errors and then correct the erroneous symbols. Symbol correction methods which make use of source statistics are compared to a correction technique based solely on the analysis of the decoded DPCM image. Subjective and objective results are presented with the use of the output images and signal-to-noise ratio graphs. These measures demonstrate that the proposed system of a concatenated source/channel decoder followed by a post-processor improves a noise corrupted DPCM image quite effectively.
Table of Contents

Abstract
List of Tables
List of Figures
Acknowledgment
Chapter 1 Introduction
  1.1 Motivations and Objectives
  1.2 Thesis Outline
Chapter 2 DPCM Image Compression
  2.1 DPCM System
  2.2 Prediction Coefficient
    2.2.1 Chang Donaldson Prediction Coefficient
  2.3 Quantization
    2.3.1 Uniform Quantizer
    2.3.2 Nonuniform Pdf Optimized Quantizer
  2.4 Simulation and Numerical Results
Chapter 3 Joint Source Channel Decoding
  3.1 Viterbi Decoding Using Residual Redundancy
    3.1.1 Training Sequence Sensitivity
    3.1.2 Iterative JSCD
    3.1.3 Code Word Mapping
  3.2 JSCD Using a Second-Order Markov Model
Chapter 4 Post Processing Techniques
  4.1 Error Patterns
  4.2 Streak Detection Techniques
    4.2.1 Clipped Reconstructed Sample Values
    4.2.2 Thresholding Techniques
    4.2.3 Modified Iterative Lippman Algorithm
  4.3 Symbol Correction Techniques
    4.3.1 MAPRI Symbol
    4.3.2 MAPRI Transition
    4.3.3 Minimum Squared Error
  4.4 Simulation and Discussion of Results
    4.4.1 Performance at Low BER
Chapter 5 Conclusion and Future Work
Bibliography

List of Tables

Table 3.1 Code Word Mapping for 2-Bit DPCM System
Table 3.2 Code Word Mapping for 3-Bit DPCM System

List of Figures

Figure 2.1 Histogram of Lenna Image and Histogram of Pixel-to-Pixel Differences of Lenna
Figure 2.2 Differential Pulse Code Modulation System
Figure 2.3 Differential Pulse Code Modulation System with Predictor
Figure 2.4 2-Bit DPCM System Using Lloyd-Max Quantization
Figure 2.5 3-Bit DPCM System Using Lloyd-Max Quantization
Figure 2.6 Comparison of 2-Bit and 3-Bit DPCM Systems
Figure 3.1 Basic Elements of a Digital Communications System
Figure 3.2 Trellis Diagram for 2-Bit DPCM + JSCD System
Figure 3.3 2-Bit DPCM + JSCD System for Classical and Reoptimized Prediction Coefficients
Figure 3.4 3-Bit DPCM + JSCD System for Classical and Reoptimized Prediction Coefficients
Figure 3.5 2-Bit DPCM + JSCD Systems for BER = 0.05
Figure 3.6 3-Bit DPCM + JSCD Systems for BER = 0.05
Figure 3.7 Images Used for Training Sequence
Figure 3.8 Training Sequence Sensitivity for 2-Bit DPCM + JSCD System
Figure 3.9 Training Sequence Sensitivity for 3-Bit DPCM + JSCD System
Figure 3.10 Performance of Iterative JSCD for a 3-Bit DPCM + JSCD System
Figure 3.11 Iterative JSCD for 3-Bit DPCM + JSCD System at BER = 0.04
Figure 3.12 SNR Curves for Mapping Schemes for 2-Bit DPCM + JSCD Systems
Figure 3.13 SNR Curves for Mapping Schemes for 3-Bit DPCM + JSCD Systems
Figure 3.14 Mapping Schemes for 2-Bit DPCM + JSCD System
Figure 3.15 Mapping Schemes for 3-Bit DPCM + JSCD System
Figure 3.16 Second-Order Modeling of Encoded Image
Figure 3.17 Second-Order JSCD Performance for 2-Bit DPCM System
Figure 3.18 Second-Order JSCD Performance for 3-Bit DPCM System
Figure 3.19 Subjective Comparison of First-Order and Second-Order JSCD Systems
Figure 4.1 DPCM/JSCD Post Processing System
Figure 4.2 Sample DPCM Transmission Error Patterns
Figure 4.3 Sliding Window for Lippman's Streak Detection
Figure 4.4 Lippman Streak Detection for Probability of Error = 0.001
Figure 4.5 Flow Chart for Modified Lippman Algorithm
Figure 4.6 BER of 2-Bit JSCD + Post Processed Images
Figure 4.7 BER of 3-Bit JSCD + Post Processed Images
Figure 4.8 Symbol Correction Schemes for 2-Bit DPCM + JSCD Systems
Figure 4.9 Subjective Comparisons for Symbol Choice Techniques of a 2-Bit DPCM + JSCD System
Figure 4.10 Symbol Correction Schemes for 3-Bit DPCM + JSCD Systems
Figure 4.11 Subjective Comparisons for Symbol Choice Techniques of a 3-Bit DPCM + JSCD System
Figure 4.12 MSE Error Correction Performance at Low BER for a 3-Bit System
Figure 4.13 Streak Detected Images for BER = 0.006 for 3-Bit DPCM System

Acknowledgment

I would like to express my sincerest gratitude to Dr. Samir Kallel for his invaluable advice, guidance and encouragement during the course of the research. I would also like to thank Dr. Robert Link for his careful proofreading and for his helpful insights. Finally, I am grateful for the constant love and support of my family and friends. This work was funded by NSERC, BC ASI, and Mobile Data Solutions Inc. of Richmond, BC.

Chapter 1 Introduction

The area of wireless communications has exploded at an astonishing rate, and today wireless networks offer more services than conventional voice telephony. Utility companies can now provide reliable voice and data services via networks such as cellular digital packet data (CDPD) networks. However, the cost of image transmission over digital wireless networks is still quite high due to the large amount of data contained in digital images. Reducing the size of the data file representing an image is known as image compression. A compression technique known as differential pulse code modulation (DPCM) is one means of source coding images. It is a successful technique that has found its way into standards such as JPEG and MPEG [1]. Unfortunately, DPCM systems are very sensitive to noise, and most communication channels can never be considered to be error free. Errors in DPCM encoded data propagate during decoding and cause streaks in the images which are visually very disturbing. Therefore channel errors must be mitigated in DPCM pictures.

Several techniques have been developed to increase the picture quality of corrupted DPCM images. Conventional error correction techniques, such as BCH codes, have been studied [2], [3]. However, forward error correction (FEC) codes add as much as 25 per cent redundancy to give acceptable performance, even over a binary symmetric channel (BSC) with an error rate of less than 0.01. One alternative technique, developed by Ngan and Steele [4], uses PCM updates to locate transmission errors. PCM updates allow the decoder to calculate the difference between the PCM update and the DPCM output.
If this difference exceeds a certain threshold, then the pixels are replaced by neighbouring pixels which are assumed to be error free. This method also requires side information and increases the bit rate by a significant amount.

Another thresholding technique, developed by Lippmann [5], relies on the identification of error patterns. Once an erroneous pixel is located, its effect is mitigated by replacing the erroneous pixel with the corresponding pixel in the previous line. This method depends on the fact that corresponding pixels in adjacent scan lines are highly correlated.

Other techniques utilize the source statistics of the image to help correct transmission errors. When the compression and error correction of data are treated together, the approach is generally known as joint source/channel coding; applied at the receiver, it is joint source/channel decoding (JSCD). Sayood and Borkenhagen [6] have developed a method which corrects errors without the aid of an explicit channel coder. It is based on a Viterbi-like decoding procedure in which code word combinations that are more likely than others are favoured by a relative amount which depends on the amount of channel noise. Source statistics are obtained by modeling the image as a first-order Markov process. Emani and Miller [7] have also developed techniques similar to those of [6]. They introduce symbol-by-symbol JSCD decoders for first-order and second-order Markov models, and show that significant improvement can be gained by the use of second-order models over first-order ones. However, we have found that although the method of [6] can be modified to use a second-order model, and although this leads to better results by objective measure, the generalization fails by subjective measure. Furthermore, for the post-processing techniques that we developed, the first-order Markov model system, besides being simpler, is actually superior to the second-order model.

1.1 Motivations and Objectives

The goal is to design the best possible DPCM image system where robustness to noise is the primary design criterion. Several techniques have been introduced which combat the effects of transmission errors, and this thesis will introduce a design which is a hybrid of some of these techniques. Having reproduced the results obtained in [6], we found that the JSCD decoded images at high bit error rates can be further improved. The first-order JSCD systems leave many errors uncorrected, and these uncorrected errors still disturb the eye in the form of streaks across the image. Hence, we propose a cascaded system which uses a JSCD decoder directly followed by a post-processor. The post-processor has two objectives. The first is to locate the DPCM data samples which have been received erroneously, by examining the decoded image for streaks. The second is to choose the best possible candidate to replace each erroneous sample. The goal, once again, is to achieve the cleanest image possible, not only by objective measure but, more importantly, by subjective measure as well.

1.2 Thesis Outline

In Chapter 2, background knowledge of the DPCM system is introduced. Key components such as the prediction coefficient and the quantizer are explained, and their effects on the overall system are introduced. The well known classical design procedure for a DPCM encoder needs to be modified when the encoded data is to be transmitted over a noisy channel. In Chapter 3, the design of JSCD systems is introduced.
This chapter consists primarily of a review of known results; however, we do investigate several generalizations and modifications of the known system, and choose the one most appropriate for our overall system. The use of first-order and second-order Markov models is explored. Furthermore, code word mapping and its effects on JSCD systems are investigated. Chapter 4 introduces post-processing techniques which attempt to correct errors that were not corrected by the JSCD methods of Chapter 3. We introduce a post-processing algorithm which uses techniques built upon those found in [5] to locate errors, and uses newly developed techniques to correct them. A final chapter provides a summarizing discussion and conclusions.

Chapter 2 DPCM Image Compression

Digital images contain much redundancy from sample to sample, and this redundancy is more prevalent in low-detail images than in high-detail ones. It allows samples to be predicted from previously received samples, and hence compression can be achieved by the removal of such repetition. Most raw grayscale digital images are represented by samples of 8 bits per pixel, covering a range of levels from 0 to 255. Figure 2.1 shows that the standard monochrome Lenna image contains levels over a considerably broad range. However, if the difference of each pixel from the previous one is taken, a large percentage of the values fall within a smaller range, which can then be encoded with fewer bits. Figure 2.1 also shows the histogram of the pixel-to-pixel differences of the Lenna image. Encoding the difference of sample values rather than the actual samples has led to a method of image compression known as Differential Pulse Code Modulation (DPCM), which is a component of the JPEG standard.

Figure 2.1 Histogram of Lenna Image and Histogram of Pixel-to-Pixel Differences of Lenna

2.1 DPCM System

DPCM is a form of lossy compression; in other words, there is some loss of information, and the data cannot be reconstructed exactly. Both the lossiness and the compression stem from quantization, a process by which a set of source output values is represented by a much smaller set of code words. Figure 2.2 is a block diagram depicting the first-order differential encoding system.

Figure 2.2 Differential Pulse Code Modulation System

Consider a sequence of pixel values $\{x_n\}$. The difference sequence $\{d_n\}$ is obtained by taking the difference $d_n = x_n - \hat{x}_{n-1}$, where $\hat{x}_{n-1}$ is the previous reconstructed value. The difference sequence is then quantized to give $\{\hat{d}_n\} = \{Q[d_n]\} = \{d_n + q_n\}$, where $q_n$ is the quantization error. At the receiver, the reconstructed sequence $\{\hat{x}_n\}$ is obtained by adding the quantized difference to the previous reconstructed value:

$$\hat{x}_n = \hat{x}_{n-1} + \hat{d}_n. \qquad (2.1)$$

For better DPCM performance, the difference value $d_n$ should be as small as possible, which gives less quantization noise. This can be achieved if the sample $x_n$ is predicted by some function of the past values of the reconstructed sequence. Figure 2.3 shows the delay block being replaced by a predictor block. Clearly, the two major components of DPCM are the predictor and the quantizer, so the following sections look at how each affects the performance of DPCM.

Figure 2.3 Differential Pulse Code Modulation System with Predictor
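To make the closed-loop structure concrete, here is a minimal sketch of a single-tap DPCM encoder/decoder pair of the kind shown in Figures 2.2 and 2.3. It is an illustration under our own naming, not the simulation code used for the results below; the quantizer is left as a pluggable function so that either the uniform or the Lloyd-Max design of Section 2.3 can be dropped in.

```python
import numpy as np

def dpcm_encode(row, quantize, a=1.0):
    """Encode one scan line. `quantize` maps a difference to
    (codeword_index, reconstructed_difference). `a` is the single-tap
    prediction coefficient; a = 1.0 gives the plain previous-sample
    predictor of Figure 2.2."""
    x_hat = 0.0                        # decoder state mirrored in the encoder
    indices = []
    for x in row:
        d = x - a * x_hat              # prediction error
        idx, d_hat = quantize(d)       # d_hat = d + q (quantization error q)
        indices.append(idx)
        x_hat = a * x_hat + d_hat      # same update the decoder will make
    return indices

def dpcm_decode(indices, levels, a=1.0):
    """Reconstruct a scan line from code word indices (equation (2.1))."""
    x_hat, out = 0.0, []
    for idx in indices:
        x_hat = a * x_hat + levels[idx]
        out.append(x_hat)
    return np.clip(np.array(out), 0, 255)   # pixel range clipping
```

Because the encoder tracks exactly the state the decoder will have, quantization errors do not accumulate on a clean channel; a channel error, by contrast, desynchronizes the two states, which is the streaking mechanism studied in Chapter 4.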
2.2 Prediction Coefficient

The performance gain of DPCM systems is attributed to the reduction of the variance and dynamic range of the difference sequence. The difference can be minimized if predictive techniques are used to remove the mutual redundancy between neighbouring pixels, which amounts to minimizing the variance of the difference sequence, defined as

$$\sigma_d^2 = E[(x_n - p_n)^2] \qquad (2.2)$$

where $E[\cdot]$ is the expectation operator. The predictor outputs $p_n$ are formulated from a function $f(\cdot)$ which gives

$$p_n = f(\hat{x}_{n-1}, \hat{x}_{n-2}, \ldots, \hat{x}_0). \qquad (2.3)$$

The selection of $f(\cdot)$ which minimizes $\sigma_d^2$ is desired, but an explicit solution is difficult to achieve due to a coupling effect [8]: the choice of $f(\cdot)$ affects $\sigma_d^2$ through (2.2) and (2.3), which has an effect on the reconstruction values $\hat{x}_n$, which in turn has an effect on the selection of $f(\cdot)$. However, in most cases an assumption known as the fine quantization assumption is used: the quantizer step sizes are assumed to be so small that $\hat{x}_n$ and $x_n$ are indistinguishable. Therefore (2.3) becomes

$$p_n = f(x_{n-1}, x_{n-2}, \ldots, x_0). \qquad (2.4)$$

We now make another assumption, that the source is a stationary process, in which case the function which minimizes $\sigma_d^2$ is the conditional expectation $E[x_n \,|\, x_{n-1}, x_{n-2}, \ldots, x_{n-N}]$. The problem is further simplified by requiring the prediction function to be linear, which gives

$$p_n = \sum_{i=1}^{N} a_i x_{n-i} \qquad (2.5)$$

where $N$ is the order of the predictor. With the help of the fine quantization assumption, the predictor design problem is to find $\{a_i\}$ so as to minimize $\sigma_d^2$, which can be written as

$$\sigma_d^2 = E\left[\left(x_n - \sum_{i=1}^{N} a_i x_{n-i}\right)^2\right]. \qquad (2.6)$$

To solve the minimization we take the derivative of $\sigma_d^2$ with respect to each $a_i$ and equate it to zero, which gives the following $N$ equations:

$$\sum_{i=1}^{N} a_i R_{xx}(i-j) = R_{xx}(j), \quad j = 1, \ldots, N, \qquad (2.7)$$

where $R_{xx}(k)$ is the autocorrelation function of $x_n$:

$$R_{xx}(k) = E[x_n x_{n+k}]. \qquad (2.8)$$

If there are $M$ points of data, the autocorrelation can be estimated by the following average:

$$R_{xx}(k) = \frac{1}{M-k} \sum_{i=1}^{M-k} x_i x_{i+k}. \qquad (2.9)$$

In our simulations we used a single-tap predictor; therefore only one prediction coefficient needs to be solved for, and the solution simplifies to

$$a_1 = \frac{R_{xx}(1)}{R_{xx}(0)}. \qquad (2.10)$$

For the standard 512 x 512 monochrome Lenna image, a prediction coefficient $a_1 = 0.99573$ was obtained. Of course, prediction coefficients are image dependent and would have to be recalculated for each different image, which in the first-order case is not too computationally intensive. The choice of the prediction coefficient has some interesting effects. A very good predictor will minimize the variance of the difference sequence, which allows for the greatest compression gain and reconstruction quality. However, under noisy conditions where some symbols may be received in error, the same predictor may not be ideal. Due to the error propagating nature of DPCM images, the "classical" prediction coefficient is not best suited for the transmission of DPCM images over noisy channels.

2.2.1 Chang Donaldson Prediction Coefficient

The output signal distortion is caused by the sum of three factors: quantization errors, channel transmission errors, and mutual errors arising from the coupling of quantization and channel errors. Thus there was a need to study the effects of the quantizer and predictor over noisy digital channels.
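As a concrete illustration of (2.9) and (2.10), the sketch below estimates the single-tap coefficient $a_1$ from a grayscale image, taking lags along scan lines only. This is our own minimal reconstruction of the computation, not the thesis code; applied to the 512 x 512 Lenna image it should give a value close to the reported $a_1 = 0.99573$.

```python
import numpy as np

def single_tap_coefficient(image):
    """Estimate a1 = Rxx(1) / Rxx(0) from the rows of a grayscale image,
    using the sample autocorrelation of equation (2.9)."""
    x = image.astype(np.float64)
    r1 = np.mean(x[:, :-1] * x[:, 1:])   # lag-1 products within each row
    r0 = np.mean(x * x)                  # lag-0 (power)
    return r1 / r0
```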
Chang and Donaldson have studied the effects of channel transmission errors and have shown that the signal-to-noise ratio (SNR) of a well-designed DPCM system outperforms a PCM system operating on the same noisy digital channel [9]. Their work led to an understanding of the dependence of the SNR on the predictor coefficient value, the number of quantization levels, and the channel error probability. When the channel is very noisy, the optimal prediction coefficient becomes

$$a_{opt} = \frac{1 - \sqrt{1 - a_1^2}}{a_1} \qquad (2.11)$$

where $a_1$ is the normalized first autocorrelation coefficient (2.10). For the Lenna image we get a reoptimized predictor coefficient of 0.91158, which is less than $a_1$. With the smaller predictor coefficient, transmission errors do not propagate as far as they would in the classical prediction case, because the error continuously undergoes attenuation by the prediction coefficient: the smaller the prediction coefficient, the greater the attenuation.

2.3 Quantization

It was mentioned that the quantizer is a key component of the DPCM system, and rightfully so, because quantization is the source of the lossy compression. Quantization is the means of representing a large, possibly infinite, set of data values by a much smaller set. The process is simple: at the encoder, the range of the source values is divided into a given number of intervals, and each interval is associated with one code word. The decoder, upon receiving a code word, maps it to a reconstruction value, normally the midpoint of the interval used at the encoder. This leads to the premier design problem of quantizers: the choice of the intervals and/or reconstruction levels. Choosing the reconstruction levels correctly is important, because every quantization has an associated quantization error, the difference between the quantizer input and output, also referred to as quantizer distortion or quantization noise. A poorly designed quantizer will therefore result in a poorly reconstructed image. There are two main types of scalar quantizers: uniform quantizers and non-uniform quantizers.

2.3.1 Uniform Quantizer

The uniform quantizer is the simplest and most commonly used quantizer. As the name suggests, the reconstruction levels are uniformly distributed across a range, and the intervals are all the same size, except possibly for the two outer intervals. An $n$-bit quantizer has $2^n$ intervals and reconstruction levels. For a source which produces a uniform distribution over the interval $[-X, X]$, the step size of each level is

$$\Delta = \frac{2X}{2^n}. \qquad (2.12)$$

For sources which do not produce uniform distributions, a uniform quantizer is not a good choice. From the histogram of Figure 2.1, it is obvious that to quantize the image efficiently, the quantizer of choice would be a non-uniform one.

2.3.2 Nonuniform Pdf Optimized Quantizer

For sources which have a nonuniform distribution with a high density within a narrow range, small step sizes are needed in that range to reduce quantizer distortion; smaller intervals in those regions force larger intervals in other regions. For an $M$-level quantizer, let the set of decision boundaries be $\{t_i\}_{i=0}^{M}$ and the reconstruction levels be $\{r_i\}_{i=1}^{M}$. Then the quantizer $Q$ maps data values to quantizer levels by the rule

$$Q(x) = r_i \quad \text{iff} \quad t_{i-1} < x \le t_i. \qquad (2.13)$$

The design goal is to choose the boundaries and levels which minimize the mean squared quantization error

$$\sigma_q^2 = \sum_{i=1}^{M} \int_{t_{i-1}}^{t_i} (x - r_i)^2 f_X(x)\, dx, \qquad (2.14)$$

where $f_X(x)$ is the pdf of the random variable $X$. Equation (2.14) is minimized by taking the derivative with respect to $r_j$, equating it to zero, and solving for $r_j$ to give

$$r_j = \frac{\int_{t_{j-1}}^{t_j} x f_X(x)\, dx}{\int_{t_{j-1}}^{t_j} f_X(x)\, dx}. \qquad (2.15)$$

Taking the derivative of Equation (2.14) with respect to $t_j$ gives

$$t_j = \frac{r_j + r_{j+1}}{2}, \qquad (2.16)$$

which turns out to be the midpoint of the two adjacent reconstruction levels. Solving equations (2.15) and (2.16) gives the reconstruction levels and decision boundaries which minimize the mean squared quantization error. However, these are nonlinear equations which must be solved simultaneously. The Lloyd-Max algorithm solves the two equations iteratively [10]. The resulting quantizer has some interesting properties:

• The mean values of the input and output are equal.
• The variance of the Lloyd-Max quantizer output is always less than or equal to the variance of the input.
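The Lloyd-Max algorithm of [10] alternates between the centroid condition (2.15) and the midpoint condition (2.16). The sketch below is a minimal empirical variant, working on a sample of difference values rather than an analytic pdf; the initialization and fixed iteration count are our own choices, not those of the thesis.

```python
import numpy as np

def lloyd_max(samples, n_bits, iters=50):
    """Design a 2**n_bits level pdf-optimized quantizer from data.
    Returns (reconstruction levels, decision boundaries)."""
    m = 2 ** n_bits
    levels = np.linspace(samples.min(), samples.max(), m)  # initial guess
    for _ in range(iters):
        bounds = (levels[:-1] + levels[1:]) / 2   # (2.16): midpoints
        cells = np.digitize(samples, bounds)      # assign samples to cells
        for i in range(m):                        # (2.15): cell centroids
            cell = samples[cells == i]
            if cell.size:
                levels[i] = cell.mean()
        levels.sort()
    return levels, (levels[:-1] + levels[1:]) / 2
```

A `quantize` function for the DPCM sketch of Section 2.1 then just digitizes a difference against the boundaries and returns the index and its level.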
Equation (2.14) is minimized by taking the derivative with respect to r •, equating to zero, and solving for r • to give Chapter 2 DPCM Image Compression 13 ix (2.15) \ fxis)dx Now taking the derivative of Equation (2.15) with respect to tj gives (2.16) which turns out to be the midpoint of the two adjacent reconstruction levels. Solving for equations (2.15) and (2.16) will give the reconstruction levels and decision boundaries which will minimize the mean squared quantization error. However, these are nonlinear equations which must be solved simultaneously. The Lloyd-Max algorithm is a method which will solve the two equations iteratively [10]. The resulting quantizer has some interesting properties. • The mean values of the input and output are equal. • The variance of the Lloyd-Max quantizer output is always less than or equal to the 2.4 Simulation and Numerical Results The quality of reconstructed digital images can be measured by both subjective and objective means, and the most commonly used objective performance measure is the signal-to-noise ratio (SNR) which is defined as follows variance of the input. Chapter 2 DPCM Image Compression 14 SNR = lOlog 1 0 (2.17) where xn is the original source pixel value and xn is the reconstructed pixel value. The following results are obtained for the Lenna image using a row by row scan. We reinitialized the D P C M system after each row because it helps to combat the effects of propagat-ing errors and achieves a higher quality reconstructed picture. The source of the channel errors was simulated by the use of a binary symmetric channel with probability of error rate pe. Two systems are investigated: the two bit system and the three bit system. m DC z CO 40.0 30.0 20.0 10.0 0.Q "A- --A- -A-• • R D P C M A -A D P C M -A- - "A A -'0.00 0.02 0.04 0.06 0.08 Probabi l i ty of Error Figure 2.4 2-Bit DPCM System Using Lloyd Max Quantization -A 0.10 Chapter 2 DPCM Image Compression 15 40.0 J0.00 0.02 0.04 0.06 0.08 0.10 Probability of Error Figure 2.5 3-Bit D P C M System Using Lloyd Max Quantization The graphs of Figures 2.4 and 2.5 will provide a basis upon which the performance of the JSCD and post-processing system will be evaluated. The top curves are the systems utilizing Chang and Donaldson prediction coefficients or renamed as Reoptimized D P C M (RDPCM). a) 2-Bit D P C M b) 3-Bit D P C M Prob. of Error=0.0, SNR=25.38 dB Prob. of Error=0.0, SNR=31.95 dB Chapter 2 DPCM Image Compression 16 e) 2-Bit D P C M Prob. of Error=0.05, SNR=7.68 dB f) 3-Bit D P C M Prob. of Error=0.05, SNR=6.16 dB Figure 2.6 Comparison of 2-Bit and 3-Bit D P C M Systems Chapter 2 DPCM Image Compression 17 The images corresponding to the graphs are shown in Figures 2.6. It is evident that the reoptimized DPCM system using the Chang-Donaldson prediction coefficient in noisy conditions is beneficial. The streaks are much shorter than the classical case and the images are far more recognizable. At a binary symmetric channel bit error rate of 0.05, there is a SNR improvement of as much as 8 dB for the two bit case and approximately a 7 dB gain for the three bit case. Chapter 3 Joint Source Channel Decoding D P C M systems are known to be very sensitive to noise which is almost unavoidable in most communication channels. The previous chapter showed the effects of an error which results in a streak possibly propagating all the way to the end of a scan line. Even at very low channel bit error rates, the affects are unpleasing to the eye. 
We will be considering the use of joint source channel coding techniques to improve the error performance. A simple digital communication system consists of nine basic elements (see Figure 3.1) [11]. In order, they are: an information source, source encoder, channel encoder, digital modulator, channel, digital demodulator, channel decoder, source decoder, and finally the information sink. The source encoder has the responsibility of removing redundant information and hence compressing the data. The channel encoder's duty is to protect the data, usually by introducing some redundancy, from the effects of noise encountered during transmission through the channel. Though conventional source encoders and channel encoders are separate entities, similar performance and reduced complexity can be achieved by the use of joint source channel (de)coders (JSCD) [12]. Sayood and Borkenhagen [13] have categorized JSCD systems into three classes: joint source channel coders, a true integration of both the source and channel coders; concatenated source channel coders, where a fixed bit rate is allocated between a source coder followed by a channel coder; and constrained joint source channel coders, which involve a modification of a given source coder and/or decoder according to noise conditions. This chapter will primarily focus on the last of these classes.

Figure 3.1 Basic Elements of a Digital Communications System

3.1 Viterbi Decoding Using Residual Redundancy

A source coder ideally removes all redundancy from the source information and outputs an uncorrelated sequence. In reality, however, a source coder cannot remove all redundant information, due to its inability to model the source precisely. What remains is known as residual redundancy. In our case, the DPCM encoder assumes that the image is an autoregressive (AR) process, which may be close to the truth but is not a perfect match. This leads to a source encoder output sequence containing some structure which can be utilized to combat channel errors. The DPCM encoder outputs a given set of code words, and if there is any redundancy or structure in the source output at all, some code word combinations occur more frequently than others. The decoder can then use knowledge of the code-word-to-code-word transition probabilities to correct some errors, inhibiting highly improbable transitions and encouraging more probable code word combinations.

For the source output, or channel input sequence, let the symbol alphabet be denoted by $A = \{a_0, a_1, \ldots, a_{Q-1}\}$. Let the channel input and output sequences be $s = \{s_0, s_1, \ldots, s_{K-1}\}$ and $\hat{s} = \{\hat{s}_0, \hat{s}_1, \ldots, \hat{s}_{K-1}\}$ respectively. The optimum receiver must minimize the probability of making an error, which is

$$P(E) = \sum_{\hat{s}} P(E|\hat{s})\, P(\hat{s}), \qquad (3.1)$$

where $P(\hat{s})$ is independent of the receiver's operation. Therefore, to minimize (3.1) one must minimize

$$P(E|\hat{s}) = P(\tilde{s} \neq s \,|\, \hat{s}), \qquad (3.2)$$

which is the same as maximizing $P(\tilde{s} = s \,|\, \hat{s})$, where $\tilde{s}$ is the decoder's choice of output sequence. From Bayes' rule, maximizing the previous expression is equivalent to maximizing

$$\frac{P(\hat{s}|s)\, P(s)}{P(\hat{s})}. \qquad (3.3)$$

Furthermore, if the channel is assumed to be memoryless, then

$$P(\hat{s}|s) = \prod_i P(\hat{s}_i | s_i), \qquad (3.4)$$

and if the source is modeled as an $M$th-order Markov sequence,

$$P(s) = \prod_i P(s_i | s_{i-1}, s_{i-2}, \ldots, s_{i-M}). \qquad (3.5)$$

If (3.4) and (3.5) are substituted into (3.3), and the term $P(\hat{s})$, which does not depend on the candidate sequence, is dropped, the expression to be maximized becomes

$$\prod_i P(\hat{s}_i | s_i)\, P(s_i | s_{i-1}, \ldots, s_{i-M}). \qquad (3.6)$$

This product can be maximized over all candidate sequences by a Viterbi search on a trellis whose states are the DPCM code words (Figure 3.2 shows the trellis for the 2-bit system).
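To illustrate the decoding rule (3.6) for a first-order model ($M = 1$), the sketch below runs a Viterbi search in the log domain, scoring each trellis branch with $\log P(\hat{s}_i|s_i) + \log P(s_i|s_{i-1})$. It is our minimal reconstruction of the idea in [6], not the thesis simulation code; it assumes an $n$-bit code word alphabet, a BSC with crossover probability $p$, and a transition matrix `trans` estimated from training images.

```python
import numpy as np

def jscd_viterbi(received, trans, p, n_bits):
    """received: sequence of symbol indices off the BSC.
    trans[j, k] = P(s_i = k | s_{i-1} = j). Returns the most likely
    transmitted symbol sequence under metric (3.6)."""
    q = 2 ** n_bits
    # On a BSC, P(r | k) depends only on the Hamming distance of the words.
    ham = np.array([[bin(r ^ k).count("1") for k in range(q)]
                    for r in range(q)])
    log_ch = ham * np.log(p) + (n_bits - ham) * np.log(1.0 - p)
    log_tr = np.log(trans + 1e-12)

    metric = log_ch[received[0]].copy()       # flat prior on the first symbol
    back = []
    for r in received[1:]:
        cand = metric[:, None] + log_tr + log_ch[r][None, :]
        back.append(cand.argmax(axis=0))      # best predecessor per state
        metric = cand.max(axis=0)
    path = [int(metric.argmax())]             # trace back the survivor path
    for b in reversed(back):
        path.append(int(b[path[-1]]))
    return path[::-1]
```

Improbable code word transitions thus carry a large negative source term, which is how the residual redundancy pulls noisy symbols back toward likely sequences.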
3.2 JSCD Using a Second-Order Markov Model

For a second-order model of the encoded image, each symbol is conditioned on the previous symbol in the same scan line and on the decoded symbol directly above it, so the source term becomes

$$P(s_{r,c} \,|\, s_{r,c-1}, \tilde{s}_{r-1,c}), \qquad (3.9)$$

where $r$ is the row number, $c$ is the column number, and $\tilde{s}$ is a previously decoded symbol. Figure 3.16 depicts the second-order modeling arrangement used. Unfortunately, there is a cost associated with increasing the order: as the order increases, the number of possible state transitions increases exponentially, and thus the computation and complexity also increase.

Figure 3.16 Second-Order Modeling of Encoded Image (the symbol at position (r, c) is conditioned on its left neighbour at (r, c-1) and its upper neighbour at (r-1, c))

For the two bit case, the gain of the second-order model over the first-order case is minimal. As seen in Figure 3.17, there is about a 0.5 dB increase over the JSCD O(1) case at high BERs and virtually no improvement at other BERs. In the three bit case (Figure 3.18), however, there is a continuous improvement over the entire range. The second-order JSCD system achieves a gain of 2.95 dB at a BER of 0.02 and as much as 3.61 dB at 0.10 over the DPCM corrupted image. However, this is not a drastic improvement over the first-order model: it amounts to less than a 1 dB improvement throughout, which hardly makes a subjective impact on the image at all.

From a quick glance at Figure 3.19 one can see that there is barely a difference between the first-order and second-order systems. If the images are compared carefully, however, there is a slight difference. The first-order images appear sharper; there is more clarity around highly detailed areas such as the eyes. On the other hand, the second-order system does seem to have fewer streaks, which would account for its slightly higher SNR. In the area of Lenna's hat, it is evident that the second-order model creates a blur of streaks, or multiple-width streaks. This occurs because unlikely transitions in the vertical direction are also corrected by JSCD O(2): if a streak causes an abrupt vertical transition, the second-order system may introduce another streak, which is what has happened in the cases shown.

Figure 3.17 Second-Order JSCD Performance for 2-Bit DPCM System (SNR in dB vs. probability of error for DPCM, JSCD O(1), and JSCD O(2))

Figure 3.18 Second-Order JSCD Performance for 3-Bit DPCM System (SNR in dB vs. probability of error for DPCM, JSCD O(1), and JSCD O(2))

Figure 3.19 Subjective Comparison of First-Order and Second-Order JSCD Systems: a) JSCD O(1), BER = 0.06, SNR = 16.05 dB; b) JSCD O(2), BER = 0.06, SNR = 16.94 dB
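A second-order model of the form (3.9) can be estimated from training images by counting. The sketch below shows one plausible layout for the conditional table an O(2) decoder would use; the smoothing and array organization are our own assumptions, as the thesis does not list its estimation code.

```python
import numpy as np

def second_order_stats(symbols, q):
    """symbols: 2-D array of DPCM code word indices for a training image.
    Returns table[l, u, k] approximating equation (3.9),
    P(s_{r,c} = k | s_{r,c-1} = l, s_{r-1,c} = u)."""
    counts = np.ones((q, q, q))                 # Laplace smoothing
    rows, cols = symbols.shape
    for r in range(1, rows):
        for c in range(1, cols):
            counts[symbols[r, c - 1], symbols[r - 1, c], symbols[r, c]] += 1
    return counts / counts.sum(axis=2, keepdims=True)
```

The table has $Q^3$ entries instead of $Q^2$, reflecting the exponential growth in state transitions noted above.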
Although the second-order Markov model approach does not give a great improvement, there are other approaches which outperform the first-order system. The Modified MAP (MMAP) and Maximal SNR (MSNR) decoders in [7] showed significant gains. The MSNR is a minimum-cost decoder where the cost is simply defined as the squared difference of the quantizer values. Link and Kallel [15] have also developed a system which uses second-order statistics, increasing the trellis by one dimension to make optimal use of the image redundancy in both the vertical and horizontal directions. The two-dimensional trellis used in their technique, referred to as "sheet decoding", outperforms the MMAP and MSNR techniques by both objective and subjective measure.

We find it best to post-process the first-order JSCD images, as opposed to higher-order modeled images, because the streaks left by the first-order system are uncorrelated, single-width streaks. They are isolated and more easily detectable than the correlated streaks found in second-order Markov modeled JSCD images.

Chapter 4 Post Processing Techniques

Image processing techniques are frequently used to improve the appearance of a digital image. Sometimes an image may have to be brightened, or perhaps the contrast increased. In our case we would like to improve the clarity of the image by further removing the streaks which JSCD was unable to correct. As can be seen from Figures 3.5 and 3.6, many residual streaks, dark and light, remain disturbing to the eye. If these streaks can be accurately located, methods can be applied to correct the data causing them. This chapter investigates such streak detection algorithms and methods for the removal of detected streaks. First the characteristics of the streaks are investigated. Once it is understood how streaks occur and appear, detection algorithms are studied and designed. The latter sections deal with symbol correction techniques. The entire system is shown in Figure 4.1.

Figure 4.1 DPCM/JSCD Post Processing System

4.1 Error Patterns

In Chapter 2 the effects of transmission errors on DPCM images were described, and two types of error patterns were identified. These error patterns depend on the sign of the transmission error. If $\tilde{x}_i$ is the pixel value of the streak and $x_i$ is the correct value, the transmission error can be described by $\Delta x_i = \tilde{x}_i - x_i$. The streak ends at some position $m$ at which $\Delta x_m$ falls below some threshold, or at the end of the scan line. An error with a positive magnitude causes a streak brighter than the surrounding pixels, and similarly an error with a negative magnitude causes a darker streak. Figure 4.2 demonstrates the two types of streaks caused by transmission errors. However, the two cases do not need to be treated separately. Generally, all streaks fade, and the length of an error streak depends on the magnitude of the error. The error causes the symbol to represent a different quantizer level; as the separation between the correct and incorrect quantizer levels widens, the magnitude of $\Delta x_i$ grows.

Prediction coefficients also have an effect on the length of the streaks. For first-order DPCM systems, the value which gets quantized is $x_i - a_1 \hat{x}_{i-1}$, where $a_1$ is the single-tap prediction coefficient. Therefore $\Delta x_i$ is attenuated by the prediction coefficient, and after $n$ pixels the error caused by the channel error alone (neglecting quantization error) becomes

$$\Delta x_{i+n} = a_1^n \Delta x_i. \qquad (4.1)$$

Prediction coefficients closer to one cause longer, less attenuated streaks. When the probability of error gets high enough that several errors may occur within one scan line, abruptly terminated short streaks may appear, due to cancellation of errors. On the other hand, adjacent errors may occur which cause double-width streaks or longer-than-normal streaks.
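Equation (4.1) makes the benefit of the reoptimized coefficient easy to quantify. As a small worked example with our own illustrative numbers (an initial error of 60 gray levels and a visibility threshold of 4), the following computes the streak length for the two coefficients reported for Lenna:

```python
import math

def streak_length(delta0, a1, threshold=4.0):
    """Pixels until |delta0| * a1**n decays below `threshold` (eq. (4.1))."""
    return math.ceil(math.log(threshold / abs(delta0)) / math.log(a1))

print(streak_length(60, 0.99573))   # classical coefficient: ~633 pixels
print(streak_length(60, 0.91158))   # reoptimized coefficient: ~30 pixels
```

With the classical coefficient such a streak would typically run off the end of a 512-pixel scan line, while the reoptimized coefficient confines it to a few tens of pixels.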
So the difficulty of the problem lies in detecting the described error patterns, and there are many different approaches to this problem. The following sections introduce the detection algorithms.

Figure 4.2 Sample DPCM Transmission Error Patterns (a bright streak with Δx_i > 0 and a dark streak with Δx_i < 0, decaying from i = 0 to m)

4.2 Streak Detection Techniques

4.2.1 Clipped Reconstructed Sample Values

Because pixel values usually lie between 0 and 255, it makes sense for the decoder to use a decoding boundary of the same range. Thus, if a transmission error causes some reconstructed values to fall outside of the range, the values are clipped to the nearest boundary and the transmission error is curtailed [5]. Not only can this approach be utilized to mitigate the effects of transmission errors, it can be used in a similar fashion to detect streaks. If an image, prior to transmission, is known not to contain any extreme pixel values of 0 or 255, then a reconstructed image which contains such values undoubtedly contains errors at or near those locations. Our chosen image, Lenna, has no pixels which take on the value 0 or 255 (Figure 2.1). Thus, before any post-processing begins, a search for pixel values of 0 and 255 within the DPCM decoded image could take place as a first pass. The thresholding method described in the following section catches most of these errors, however, so to save computations our system simply bypasses the search for extreme values.

4.2.2 Thresholding Techniques

From the error profiles shown in Figure 4.2 it makes sense to use an algorithm encompassing some sort of thresholding: if adjacent pixels differ by an amount much larger than a given threshold, an error may have occurred. Lippman [5] has developed such a technique for channel error correction in DPCM picture transmission. Based on the premise that most images have high line-to-line pixel correlations, a thresholding technique can be used to detect errors. For a given decoded DPCM scan line, a window of pixels of size $W$ is considered. Referring to Figure 4.3, the pixels within the window of the current line II are compared to the adjacent pixels of the previous line I and the next line III. This window of three lines slides pixel by pixel along each row, computing two sets of gradients: the differences between the pixels of the current line II and line I, and the differences between the pixels of line II and line III. Figure 4.3 illustrates the window of pixels and the gradients calculated.

Figure 4.3 Sliding Window for Lippman's Streak Detection (a window of size W spanning the previous line I, the current line II to be corrected, and the next line III)

This algorithm is analogous to searching for edges along both sides of the current line. The gradients are compared to threshold constants: Lippman proposes a constant $K_1$ for the very first pixel gradient $g_{p1}$ and $K_2$ for the remaining gradients. If all gradient magnitudes exceed the corresponding thresholds, $g_{p1} > K_1$ and $g_{pi} > K_2$ for all $i$ such that $2 \le i \le W$, then a streak is declared to be detected, the initial sample of the current line window is deemed the beginning of the streak, and the corresponding symbol is declared to be received in error.
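The following sketch implements this detection rule as we read it: a streak start is declared at a pixel when every gradient magnitude over the window, against both line I above and line III below, exceeds its threshold. It is a plausible reconstruction rather than Lippman's published code.

```python
import numpy as np

def detect_streak_start(img, r, c, W=10, K1=12, K2=10):
    """True if a streak appears to begin at (r, c) of the decoded image:
    the first gradient exceeds K1 and the remaining W-1 exceed K2, on
    both the previous and the next scan line."""
    if r < 1 or r >= img.shape[0] - 1 or c + W > img.shape[1]:
        return False                       # window must fit inside the image
    win = img[r, c:c + W].astype(int)
    for other in (img[r - 1, c:c + W], img[r + 1, c:c + W]):  # lines I, III
        g = np.abs(win - other.astype(int))
        if not (g[0] > K1 and np.all(g[1:] > K2)):
            return False
    return True
```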
For Lippman's simulations, values of $K_1 = 12$ and $K_2 = 10$ with a window size of $W = 10$ were used. These values are purely experimental and may prove to work well for some images and not so well for others. A rule of thumb for choosing the threshold values is to choose low values, so that most streaks, including faint ones, have a chance of being detected, but not too low, otherwise too many false alarms occur when dealing with highly detailed images. Thus, when dealing with the transmission of many different images, it is difficult to arrive at a set of parameters which is suitable for all cases.

We tried Lippman's algorithm on the Lenna image. However, a window size of 8, as opposed to 10, was found to give better performance, because the Chang-Donaldson prediction coefficient causes the streaks to be much shorter. For these particular simulations, it is assumed that once a streak has been detected the symbol is automatically corrected. This would not be feasible in real cases, and so correction techniques are discussed and evaluated in the latter sections of this chapter.

From Figure 4.4 one can see that the algorithm works well with a 3-bit system. The streaks in the 2-bit system are very faint and difficult to detect, even to the eye. The streaks in the 3-bit system, however, are far more pronounced and easily distinguishable, so the effect of the Lippman algorithm is far more impressive: Figure 4.4d) shows a fairly clean image compared to the streak-filled image of Figure 4.4c).

Not all the streaks were detected. The algorithm fails when the streaks are too faint and the gradients fall below the threshold values. It will also fail if the streaks are much shorter than the window size; furthermore, in areas of high detail a streak may go undetected. Lippman reports good results at low probabilities of error, and we have shown reasonable results at a rate of $10^{-3}$, but we would like to implement this for even higher error rates of up to $10^{-1}$. At these higher rates far more streaks will overlap, perhaps some cancelling one another. Thus there will be far more streaks shorter than the window size, meaning far more undetected streaks. As well, there will be many more faint streaks adding to the distortion of the image and making the detection of sharper streaks that much harder. The Lippman algorithm also requires a set of input threshold parameters. Hence, a modified Lippman algorithm which addresses these problems is presented in the next section.

Figure 4.4 Lippman Streak Detection for Probability of Error = 0.001: a) 2-bit DPCM; b) 2-bit DPCM + Lippman streak detection; c) 3-bit DPCM; d) 3-bit DPCM + Lippman streak detection

4.2.3 Modified Iterative Lippman Algorithm

The problem with the Lippman algorithm is that not enough streaks are detected, and at high error rates we would like to correct far more streaks than the original algorithm is capable of. Also, proper thresholding parameters must be chosen, as the detection performance is very sensitive to the thresholds. A modified version of the Lippman algorithm has been developed to tackle these problems. To correct a greater portion of errors, it uses the channel statistics to predict the approximate number of errors per scan line, so that a fixed number of symbol corrections can be attempted per line. Because the JSCD system uses knowledge of the channel, the same information can be passed on to the post-processing system. The algorithm uses the following steps (a code sketch follows below):

• From the start of the line, reset the target counter and use the Lippman window technique to locate all possible streaks with a starting set of thresholds.
• If one or more streaks are detected, attempt to correct the streak which has the largest gradient, and redecode the row.
• If no streaks are detected, lower the thresholds by one unit.
• If the thresholds fall below the lower bound, use a smaller window size.
• If the counter meets the target value, go to the next line and repeat the process.

The flow chart in Figure 4.5 shows the order and logic by which these steps are executed.

Figure 4.5 Flow Chart for Modified Lippman Algorithm

For the simulations we took the Gray coded JSCD images and applied the modified Lippman algorithm to them. A window size of 8 was used for $W_1$, and for the second pass of each scan line a size of 4 pixels was used for $W_2$. Only starting threshold values are needed, and as long as they are relatively high, small differences in the initial values make little or no difference in streak detection performance. The only other parameter needed is the target number of symbols to fix per row. We chose a percentage of the possible errors per scan line. Assuming the JSCD decoder corrects some errors, not all errors will remain, and the actual BER arriving at the post-processor will be lower. Furthermore, because Gray codes cause faint streaks more often than high-transition streaks, the thresholding algorithm will miss a majority of streaks. Hence the target value must not be set so high that non-errors get corrected, but must not be set so low that the number of errors corrected is insignificant. We chose 50 per cent of the possible errors per scan line. For a scan line of 512 pixels at a BER of 0.01, the approximate number of errors is 10, so the modified algorithm attempts to fix at most 5 symbol errors per scan line, but need not always correct 5 symbols on every line. Figures 4.6 and 4.7 give an idea of how many streaks get detected at high bit error rates. Once again, all detected streaks are corrected with the proper known symbols, and hence the curves are optimistic upper limits.
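A compact version of the per-line iteration described above and in Figure 4.5 might look as follows. It reuses `detect_streak_start()` from Section 4.2.2 and leaves the symbol correction step abstract (Section 4.3 compares candidates for it); the control flow is our paraphrase of the flow chart, not the author's code.

```python
def modified_lippman_row(img, symbols, r, correct_symbol, pe, n_bits,
                         w1=8, w2=4, k_init=(30, 28), k_min=4):
    """Process one scan line: repeatedly detect the strongest streak and
    let `correct_symbol` fix it (redecoding the row in place), until the
    target of 50% of the expected symbol errors is met."""
    cols = img.shape[1]
    target = max(1, int(0.5 * cols * n_bits * pe))  # half the expected errors
    fixed, W, (K1, K2) = 0, w1, k_init
    while fixed < target:
        starts = [c for c in range(cols)
                  if detect_streak_start(img, r, c, W, K1, K2)]
        if starts:                         # fix the streak with largest gradient
            c = max(starts,
                    key=lambda c: abs(int(img[r, c]) - int(img[r - 1, c])))
            correct_symbol(symbols, img, r, c)
            fixed += 1
        elif K1 > k_min:                   # nothing found: relax thresholds
            K1, K2 = K1 - 1, K2 - 1
        elif W == w1:                      # thresholds exhausted: shorter window
            W, (K1, K2) = w2, k_init
        else:
            break                          # give up on this line
    return fixed
```

For a 512-pixel line of the 2-bit system at a BER of 0.01 the target is 5 attempted corrections, matching the figure quoted above.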
Because the JSCD system uses knowledge of the channel, the same information could be passed on the post-processing system. The algorithm uses the following steps: • From a start of the line reset the target counter and use the Lippman window technique to locate all possible streaks with a starting set of thresholds. • If a streak or streaks are detected then attempt to correct the streak which has the larg-est gradient and redecode the row. • If no streaks are detected lower the thresholds by one unit. • If thresholds are below the lower bound then use a smaller window size. • If the counter meets the target value then go to next line and redo process. The flow chart in Figure 4.5 shows the order and logic by which these steps are executed. er 4 Post Processing Techniques 50 Reset counter and reset window size to W"i Look for streaks using Lippman algorithm. Are there any streaks? Yes No Attempt to correct streak with largest gradient and increment counter Counter > than target number? Yes Lower thresholds by one. Go to next line. Yes Lower window size to W 2 and reset thresholds Go to next line. Figure 4.5 Flow Chart for Modified Lippman Algorithm Chapter 4 Post Processing Techniques 51 For the simulations we took the Gray coded JSCD images and applied the modified Lippman algorithm to the images. A window size of 8 was used for W\ and for the second pass of each scan line a size of 4 pixels was used for W2 • Only starting threshold values are needed and as long as they are relatively high, small differences in initial values make little or no difference at all in performance of streak detection. The only other parameter needed is the target value of symbols to fix per row. We chose a percentage of the possible errors per scan line. Assuming the JSCD decoder corrects some errors, not all errors will have remained and the actual BER arriving at the post-processor will be less. Furthermore, due to the nature of Gray codes causing faint streaks more often than high transition streaks, the thresholding algorithm will miss a majority of streaks. Hence the target value must be set at a value which will not be too high where non errors will be misappropriately get corrected; but must not be set too low such that the number of errors corrected will be insignificant. We chose a percentage of 50 per cent of possible errors per scan line. So for a scan line of 512 pixels at a BER of 0.01, the approximate number of errors would be 10. Then the modified algorithm attempts to fix at most 5 symbol errors per scan line, but does not need to always correct at total of 5 symbols every line. Figures 4.6 and 4.7 gives an idea of how many streaks get detected at high bit error rates. Once again all streaks detected are corrected with the proper known symbols and hence the curves are optimistic upper limits. Chapter 4 Post Processing Techniques 52 0.10 Figure 4.6 BER of 2 Bit JSCD + Post Processed Images 0.10 Figure 4.7 BER of 3-Bit JSCD + Post Processed Images At a high probability of error=0.10 the two bit system reduces the amount of errors by 13.8% whereas the 3 Bit system reduces the BER by 9.3%. Furthermore, looking closely at the lower error rates of the 2 Bit system, one can see that the streak detection algorithm improves the image at points where JSCD does not contribute to any error correction at all. 
At high error rates it can be seen that much of the gain in the three bit system comes from JSCD, whereas at low error rates most of the gain is due to streak detection and correction.

4.3 Symbol Correction Techniques

Once a streak has been detected, a technique which eliminates or reduces the streak must be implemented. One such technique is to paint over the streak with neighbouring pixel values, on the assumption that the picture has relatively high pixel-to-pixel correlations. At high error rates this procedure becomes undesirable, because streaks which fall above or below other streaks occur more frequently. Hence, we propose to attack the problem by correcting the DPCM code word and redecoding, as opposed to estimating the decoded pixel values. The JSCD technique that we used was based on decoding sequences of symbols, but now we are redecoding with an approach which looks at a single symbol at a time. The following sections describe different methods of attempting to choose the correct symbol: the MAPRI Symbol, MAPRI Transition, and Minimum Squared Error (MSE) correction schemes.

4.3.1 MAPRI Symbol

In this case, the symbol in error at position $i$ is replaced with the symbol $s_i$ which maximizes the a priori probability of being correct. Therefore the term which must be maximized is

$$P(\hat{s}_i | s_i)\, P(s_i). \qquad (4.2)$$

The first probability term is again related to the channel conditions, and the second term is simply the probability of occurrence of each symbol. If the symbol which maximizes (4.2) is the same as the deemed erroneous symbol, we choose the symbol which gives the second highest value of (4.2). This method has greater potential with the two bit system, simply because there are fewer symbols and the probability of choosing the correct symbol is much higher than for the three bit system. For the Lenna image, the most frequent code word of our two bit system amounts to 53 per cent of all code words; the next most frequent symbol occurs 36 per cent of the time.

4.3.2 MAPRI Transition

This procedure also relies on a priori information about the source. However, instead of choosing the most likely code word, it selects the most likely code word transition. Hence, the metric which must be maximized is

$$P(s_i | s_{i-1}). \qquad (4.3)$$

Once a streak has been detected and the starting symbol in error has been determined, the method requires knowledge of the symbol preceding the start. Assuming the previous symbol is correct, it chooses the most likely state transition and replaces the symbol in error. Evidently, the method relies heavily on the performance of streak detection: if a streak has been detected but the symbol in error has not been pinpointed, then a correct symbol may be changed, which may make things worse. Hence, we developed two schemes to compare.

Scheme I: Force the symbol which is assumed to be in error to be changed. If the most likely transition happens to terminate with the same symbol as received, then choose the second most likely transition. Consequently, the corrected symbol will always differ from the received symbol.

Scheme II: If the previous symbol and the symbol in question happen to form the most likely transition, then do not attempt to make any correction on the latter symbol: keep the received symbol and carry on looking for other streaks.
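A sketch of the two a priori selection rules follows. Since equations (4.2) and (4.3) above are partly reconstructed from a garbled source, treat the exact metrics as our reading of them; `p_sym` and `trans` are the symbol and transition statistics already gathered for JSCD, and `log_ch` is the BSC channel table from the Chapter 3 sketch.

```python
import numpy as np

def mapri_symbol(received, p_sym, log_ch):
    """(4.2): argmax of P(received | s) * P(s); if that is the received
    symbol itself, take the runner-up so the symbol always changes."""
    order = np.argsort(log_ch[received] + np.log(p_sym))[::-1]
    return int(order[0]) if order[0] != received else int(order[1])

def mapri_transition(prev_sym, received, trans, force_change=True):
    """(4.3): most likely transition out of prev_sym. force_change=True
    is Scheme I; False is Scheme II (keep the received symbol whenever
    it already completes the most likely transition)."""
    order = np.argsort(trans[prev_sym])[::-1]
    best = int(order[0])
    if best == received:
        return int(order[1]) if force_change else received
    return best
```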
4.3.3 Minimum Squared Error The previous methods utilize a priori information about the source. However, in this method the decoded DPCM image is used to decide the best symbol. Using the window mentioned in section 4.2.2 the best candidate is chosen by selecting the codeword which gives the minimum squared error. So in the two bit case, the codeword in error is replaced by one of the four code words and the row is redecoded. Of course the true error can not be calculated, but as in the streak detection method a window of pixels are taken and the error is assumed to be the differ-ence of the pixel in the current row and the corresponding pixel in the previous row within the same column. So the squared error calculation is as follows w 2 X (*r ,c + i - * r - l , c + i) (4.1) I = 0 where r is the row, c is the column and W is the length of the window. As in the MAPRI Transition Scheme II the original symbol received is allowed to be the best candidate as long as it minimizes the squared error. Because of the dependence of the pixels from the previous row, this method works well when streaks are not lieing on top of one another. Another drawback to this algorithm is that it requires the most amount of computations and hence takes the longest time to perform. However, this method appears to perform the best in terms of objective signal to noise ratios which is revealed in the following section. Chapter 4 Post Processing Techniques 56 4.4 Simulation and Discussion of Results The four different symbol correction techniques were tested on the JSCD decoded Lenna image using the modified Lippman algorithm described in section 4.2.3. Window sizes of 8 and 4 were used and initial values for the thresholds were = 30 and K2 = 28 . The window sizes were chosen so that long streaks are first detected and then shorter streaks which are cancelled by a following error are detected. The window size can also be related to the prediction coefficient since the predictor has an effect on the length of the streaks. We find that the higher the prediction coefficient is the longer the window should be. A simple relation which can be used as a rule of thumb, but does not necessarily give the optimal window size for any given image, would be to choose the window size of integer value W\ such that ax 1 = 0.5 (4.1) where ax is the single tap prediction coefficient. The rule for the second window size is to simply choose a window which is half the size of W, rounded to the nearest integer. For the simulations, Gray codes were used on the Reoptimized DPCM systems for the two bit and three bit cases. Furthermore, the same random number seed used for the binary symmetric channel was used for all four techniques so each could be compared fairly against one another. Figures 4.8 and 4.10 depict the SNR curves for the two, bit system and three bit system respec-tively. Chapter 4 Post Processing Techniques 57 © D P C M • J S C D x MAPRI Symbol •O MAPRI Trans. Scheme I A MAPRI Trans. Scheme II * M S E J i I i I '"'"O.OO 0.02 0.04 0.06 0.08 0.10 Probability of Error Figure 4.8 Symbol Correction Schemes For 2-Bit DPCM + JSCD Systems For the two bit system, we can see that the MSE technique performs the best achieving a gain of 2.3 dB at a BER of 0.02. It is also evident that the MAPRI Symbol and both MAPRI Transitions techniques perform nearly the same. At very high BERs of 0.07 and over, all schemes seem to breakdown and have very poor gains of only 0.8 dB. 
This can be explained by that fact that there are too many overlapping and overlying streaks and streak detection becomes much more difficult. From Figure 4.9 one can see subjectively the images are in agreement with the SNR curves. Figure 4.9b) depicting the MSE image is much cleaner than the other images.What remains are very faint streaks which are less annoying that the darker streaks which have been removed. ou.u 25.0 C Q rr co 20.0 15.0 i n n Chapter 4 Post Processing Techniques 58 c) MAPRI Symbol d) MAPRI Transitions Scheme II SNR = 22.17 dB SNR = 22.11 dB Figure 4.9 Subjective Comparisons for Symbol Choice Techniques of a 2-Bit DPCM + JSCD System Chapter 4 Post Processing Techniques 59 35.0 30.0 25.0 CQ DC Z CO 20.0 15.0 10.0 T ' 1 1 1 — G © D P C M • • J S C D x x MAPRI Symbol O O MAPRI Trans. Scheme I A A MAPRI Trans. Scheme II * * M S E 0.00 0.02 0.08 0.10 0.04 0.06 Probability of Error Figure 4.10 Symbol Correction Schemes For 3-Bit DPCM+ JSCD System For the three bit case, the MSE also out performs all other schemes. It achieves a gain over the JSCD decoded image of 2.69 dB at a BER of 0.01. This translates to a gain of 4.56 dB over the DPCM decoded image. Even at very high BER's of up to 0.10 the gain is 1.61dB. From the curves we can see that opposite to that of the JSCD gain over DPCM, the MSE gain attenuates as it approaches higher probability of errors. This simply can be explained by that fact that there are too many streaks which overlap one another thereby making error correction much more difficult. From Figure 4.1 ld) it is evident that the algorithm cleans the corrupted image quite nicely in the areas of very low detail. However, within the areas of high detail and sharp gray level transitions, there remains some artifacts. As in the case of the 2-bit system the MAPRI schemes do not differ Chapter 4 Post Processing Techniques 60 Figure 4.11 Subjective Comparisons for Symbol Choice Techniques of a 3-Bit DPCM + JSCD System Chapter 4 Post Processing Techniques 61 by a great deal and the images of Figure 4.11 clearly shows little difference at all. These schemes also appear to work well in areas of low detail and have trouble within areas of sharper transitions. For both the two and three bit systems the MSE correction method is the best choice subjectively and objectively. 4.4.1 Performance at Low BER The curves of Figure 4.8 and 4.10 show that the system works well for very low BER's of less than 0.02. In fact, the gains of the streak detection schemes from the MSE scheme are greater than the gains achieved by JSCD. Therefore, a system designed for low BERs is better off without the JSCD decoder because it gives only a small improvement at the price of a significant increase in complexity. 35.0 o- o D P C M rr co CO T3 30.0 h 25.0 h 20.0 8 1 i i i i i i . i i i — i — i — i — i — i 1 — i — i — i .000 0.004 0.008 0.012 0.016 Probability of Error Figure 4.12 M S E Error Correction Performance at Low BER for a 3-Bit System 0.020 Chapter 4 Post Processing Techniques 62 In Figure 4.12 we have replotted the SNR curves for the three bit system at a low BER of less than 0.02. Also, we have added a curve which represents the modified Lippman algorithm applied only to a corrupted DPCM image. Surprisingly, the streak detection performed on the corrupted DPCM image performs better than the streak detection performed on the JSCD image. The gain is quite small, but the amount of signal processing time that is reduced is quite signifi-cant. 
4.4.1 Performance at Low BER

The curves of Figures 4.8 and 4.10 show that the system works well at very low BERs of less than 0.02. In fact, the gain contributed by the streak detection and MSE correction stage is greater than the gain achieved by JSCD. Therefore, a system designed for low BERs is better off without the JSCD decoder, which gives only a small improvement at the price of a significant increase in complexity.

[Figure 4.12: MSE Error Correction Performance at Low BER for a 3-Bit System. SNR (dB) versus probability of error (0.000 to 0.020); legend includes DPCM.]

In Figure 4.12 we have replotted the SNR curves for the three bit system at BERs below 0.02. We have also added a curve representing the modified Lippman algorithm applied directly to a corrupted DPCM image. Surprisingly, streak detection performed on the corrupted DPCM image does better than streak detection performed on the JSCD image. The gain is quite small, but the reduction in signal processing time is quite significant. Although the SNR curve is higher for the case without JSCD, Figure 4.13 shows that the subjective quality is better when JSCD has taken place. This occurs because JSCD removes many of the errors, including overlapping streaks, so the detection algorithm does not have to deal with as many double or triple width streaks. It is for these special cases that the modified Lippman detection algorithm fails. Otherwise, most single width streaks are detected and corrected, leaving a reasonably clear picture.

[Figure 4.13: Streak Detected Images for BER = 0.006 for a 3-Bit DPCM System. Panels: c) DPCM + modified Lippman, SNR = 27.40 dB; d) JSCD + modified Lippman, SNR = 26.59 dB.]
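To close the chapter, the windowed detection rule at the heart of these comparisons can be sketched as below. The exact test is defined in section 4.2.2; the all-pixels-over-threshold form used here, and the function name, are our assumptions for illustration.

    import numpy as np

    def detect_streaks(img, W, K):
        # Flag candidate streak starts row by row: a column qualifies when
        # every one of the next W pixels differs from the pixel directly
        # above it by more than K.  (One plausible reading of the windowed
        # row-difference test of section 4.2.2.)
        rows, cols = img.shape
        starts = []
        for r in range(1, rows):
            diff = np.abs(img[r].astype(int) - img[r - 1].astype(int))
            for c in range(cols - W):
                if np.all(diff[c:c + W] > K):
                    starts.append((r, c))
        return starts

Running the detector twice, first with (W, K) = (8, 30) and then with (4, 28), mirrors the two-pass settings of section 4.4; each flagged position is handed to a symbol corrector such as the MSE scheme rather than simply painted over.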
Chapter 5 Conclusion and Future Work

The design of a DPCM system which combats channel errors without the addition of redundancy has been investigated. Our proposed system incorporates three components: the DPCM encoder/decoder, the joint source/channel decoder, and the post-processor.

Using the standard monochrome image, Lenna, a DPCM encoder was implemented with a Lloyd Max quantizer. A binary symmetric channel then injects errors into the encoded image, and once the corrupted image has been decoded, the classical DPCM system is compared to the reoptimized DPCM system using the Chang Donaldson prediction coefficient. At a bit error rate of 0.1 we find the reoptimized DPCM system to outperform the classical system by about 7 dB for the two bit system and by over 6 dB for the three bit system.

The joint source/channel decoders were also studied. Assuming the system has knowledge of the channel characteristics and of the source statistics, the JSCD decoder can correct a significant portion of the channel errors. These are bit error rate minimizing decoders which use Markov statistics within a trellis to choose the most likely sequence of quantizer values. Previous studies have found that JSCD can improve a corrupted image significantly; at very high error rates, an almost indistinguishable image is turned into something recognizable. We find that JSCD works far better for the three bit system than for the two bit system, giving a gain of as much as 3 dB in the three bit case. We also find that the system is quite sensitive to the training sequence used. However, an iterative approach gives similar error performance without the cost of having to transmit side information; the trade-off is a longer decoding delay in exchange for less transmitted data. Furthermore, having studied and implemented different code word mapping schemes, we find that the Gray code works best for both bit systems, achieving the highest SNR and the highest subjective quality. The JSCD system was also modified to incorporate second-order Markov statistics, but the gain achieved over the first-order systems is not worth the increase in complexity.

Instead, we propose the use of post-processing techniques to correct the errors which JSCD was unable to correct. Our post-processing system incorporates an iterative post-processing technique. It is a modification of a simple thresholding technique developed by Lippmann [5]. It attempts to locate a streak using a sliding window, and instead of painting over the corrupted pixels within the window, our system goes back to the DPCM encoded image and attempts to correct the symbol. Different symbol correction techniques were investigated, and it was evident that the MSE scheme outperformed the MAP Symbol and MAP Transition schemes. The MSE scheme relies heavily on image redundancy at the DPCM decoded level, whereas the MAP schemes rely on redundancy at the DPCM encoded level. In all cases, post-processing gives a consistent improvement for both the two bit and three bit systems. A gain of over 4 dB is seen for the three bit system with the MSE correction scheme. More importantly, the images are much cleaner, with the very disturbing dark and bright streaks removed.

In conclusion, the DPCM/JSCD post-processed system achieves its goal of producing an image far cleaner and more appealing than the noise corrupted image. No channel coding was used; only the residual redundancy captured by a first-order Markov model was exploited. By objective and subjective measures, our system has been shown to mitigate the effects of channel errors.

Further research in this area could be conducted. The system could be extended from monochrome images to colour. The techniques described herein could also be applied to JPEG images and MPEG video sequences. Furthermore, similar techniques could be developed for images that have been source coded by methods other than DPCM. The JSCD techniques described herein required only that the source code words have a fixed length; examples of such source coders include trellis coded quantization and vector quantization. However, the post-processing techniques that we have developed are very specific to DPCM.
Bibliography

[1] K.R. Rao and J.J. Hwang, Techniques and Standards for Image, Video, and Audio Coding. NJ: Prentice Hall, 1996.
[2] R. Lippmann, "Influence of channel errors on DPCM picture coding," ACTA Electron., vol. 19, no. 4, pp. 289-294, 1976.
[3] P. Stammnitz, "Error protection of 34 Mbit/s DPCM encoded TV signals with multiple error correcting BCH codes," in Proc. 1980 Zurich Sem. Digital Commun., 1980, pp. G4.1-G4.6.
[4] K.N. Ngan and R. Steele, "Enhancement of PCM and DPCM images corrupted by transmission errors," IEEE Trans. Commun., vol. COM-30, pp. 257-264, 1982.
[5] R. Lippmann, "A technique for channel error correction in differential PCM picture transmission," in Proc. Int. Conf. Commun., Seattle, WA, 1973, pp. 48/12-48/18.
[6] K. Sayood and J.C. Borkenhagen, "Use of residual redundancy in the design of joint source/channel coders," IEEE Trans. Commun., vol. 39, no. 6, pp. 838-846, 1991.
[7] S. Emami and S. Miller, "DPCM picture transmission over noisy channels with the aid of a Markov model," IEEE Trans. Image Processing, vol. 4, no. 11, pp. 1473-1481, 1995.
[8] K. Sayood, Introduction to Data Compression. San Francisco, CA: Morgan Kaufmann Publishers, Inc., 1996.
[9] K.Y. Chang and R.W. Donaldson, "Analysis, optimization, and sensitivity study of differential PCM systems operating on noisy communication channels," IEEE Trans. Commun., vol. COM-20, pp. 338-350, Jun. 1972.
[10] S.P. Lloyd, "Least squares quantization in PCM," IEEE Trans. Inform. Theory, vol. IT-28, pp. 127-135, Mar. 1982.
[11] J.G. Proakis, Digital Communications. NY: McGraw-Hill, Inc., 1995.
[12] J.L. Massey, "Joint source and channel coding," in Communication Systems and Random Process Theory, J.K. Skwirzynski, Ed. The Netherlands: Sijthoff and Noordhoff, 1978, pp. 279-293.
[13] K. Sayood, F. Liu, and J.D. Gibson, "A constrained joint source/channel coder design," IEEE J. Select. Areas Commun., vol. 12, no. 9, Dec. 1994.
[14] A.J. Viterbi and J.K. Omura, Principles of Digital Communication and Coding. New York: McGraw-Hill, 1979.
[15] R. Link and S. Kallel, "Optimal use of Markov models for DPCM picture transmission over noisy channels," submitted to IEEE Trans. Commun., Nov. 1997.