Scalable Parallel VLSI Architectures and Algorithms for Digital Signal and Video Processing

Ayman Ibrahim Elnaggar
B.Sc. Electrical Engineering, Cairo University, 1984

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES (ELECTRICAL ENGINEERING)

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
April 1997
© Ayman Ibrahim Elnaggar, 1997

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Abstract

Over the past few years, the demand for high-speed Digital Signal Processing (DSP) has increased dramatically. New applications in real-time multimedia communications, video processing, satellite broadcasting, and radar signal processing demand major performance improvements at several levels: algorithmic, architectural, and implementation. Although a basis of comparison of the various DSP algorithms is the number of multiplications and additions they require, for VLSI implementations other factors such as area of interconnect, I/O bandwidth, complexity of control, power dissipation, design modularity, and layout regularity are also important.

This thesis proposes efficient and cost-effective techniques for mapping highly parallel DSP and video computations onto VLSI architectures. The main focus of this research is on developing new recursive formulations for a class of multidimensional DSP applications that allow the generation of larger parallel computations by combining the results of smaller-size computations of the same dimension. This is useful for both hardware and software solutions, in which a very efficient smaller-size core has been developed and a larger computation is required. The proposed design methodology can be used to derive regular and modular architectures for DSP. Regularity and modularity are essential factors for facilitating design automation and implementation of parallel organizations in maturing application-specific integrated circuit (ASIC) and emerging programmable logic environments. The proposed methodology targets multi-dimensional convolution and multi-dimensional transforms. A key result of this thesis is the proposal of a number of novel algorithms that can implement a large multi-dimensional transform (or convolution) from a single parallel stage of smaller-size multi-dimensional transforms (or convolutions).

Our approach is based on extensive parallelization of several classes of DSP computations and mapping these computations onto parallel hardware. Our methodology employs tensor product (Kronecker product) formulations and permutation matrices as the main tools for expressing DSP algorithms in a parallel form. The effort in modularizing the resulting architectures involves both the computational sections as well as the interconnection sections.
Mapping tools then manipulate such formulations into suitable recursive expressions which can be mapped efficiently onto modular parallel architectures.

Table of Contents

Abstract  ii
List of Tables  ix
List of Figures  x
Acknowledgment  xii
CHAPTER 1  Introduction  1
1.1 Motivation for the Thesis Topic  1
1.1.1 The Need for Dedicated Hardware  1
1.1.2 The Need for Parallel Processing  3
1.2 Research Contributions  4
1.3 Previous Work  7
1.4 Thesis Overview  8
CHAPTER 2  Preliminaries and Background  10
2.1 VLSI Model of Computation  10
2.2 Tensor Product  10
2.3 Permutation Matrices  13
2.3.1 The Replicated Dilated Shuffle (RDS) Permutations  13
2.3.2 Network Folding for Limited I/O  18
2.4 Important Properties of the Tensor Product and Permutation Matrices  19
CHAPTER 3  One-Dimensional Convolution  22
3.1 Toom's Linear Convolution Algorithm  23
3.2 The Proposed Recursive Algorithm  25
3.2.1 A Recursive Formulation for A  25
3.2.2 A Recursive Formulation for B  29
3.2.3 A Parallel Recursive Form for Toom's Algorithm  31
3.2.4 The Computational Complexity of the Proposed Architectures  33
3.3 Chapter Summary  35
CHAPTER 4  Multi-Dimensional Convolution  38
4.1 Overview  38
4.2 The One-Dimensional Shuffle-Free Algorithm  40
4.3 The Two-Dimensional Convolution Algorithm  44
4.3.1 Reducing the Complexity of the 2-d Convolution Algorithm  46
4.3.2 The Computational Complexity of the 2-d Convolution Algorithm  51
4.4 Multi-Dimensional Convolution  52
4.5 Chapter Summary  53
CHAPTER 5  Multi-Dimensional DCT  55
5.1 DCT for Video Processing  55
5.2 Previous Related Work  57
5.3 Overview of the DCT  59
5.4 The One-Dimensional DCT in a Tensor-Product Form  61
5.5 Truly Two-Dimensional Recursive Architectures for the DCT  65
5.6 Computation Complexity of the 2-d DCT  72
5.7 The Multi-Dimensional Recursive Architecture of the DCT  73
5.8 Chapter Summary  75
CHAPTER 6  A General Methodology for Separable Transforms  76
6.1 The Discrete Fourier Transform (DFT)  76
6.1.1 The 1-d DFT  76
6.1.2 The 2-d DFT  77
6.2 The Walsh-Hadamard Transform (WHT)  78
6.2.1 The 1-d WHT  78
6.2.2 The 2-d WHT  80
6.3 A General Framework for Multidimensional Recursive DSP Transforms  81
6.3.1 The One-Dimensional Case  81
6.3.2 The Multi-Dimensional Case  82
CHAPTER 7  Conclusions and Future Work  84
7.1 Conclusions  84
7.2 Future Work  86
Bibliography  87
Appendix A  A Complete Characterization of the Matrix R(a)  95
Appendix B  The Verilog HDL Files and Schematics of the 16-point Convolution  99
B.1 The Verilog Files of A(3)  99
B.1.1 Half Adder Verilog File  99
B.1.2 Full Adder Verilog File  99
B.1.3 Flip Flop Verilog File  99
B.1.4 Serial Adder Verilog File  100
B.1.5 Serial-Parallel Multiplier Verilog File  100
B.1.6 A(1) Verilog File  101
B.1.7 A(2) Verilog File  101
B.1.8 A(3) Verilog File  102
B.2 The Verilog Files of B(3)  103
B.2.1 B(1) Verilog File  103
B.2.2 B(2) Verilog File  103
B.2.3 B(3) Verilog File  104
B.3 The Verilog Files Combining A(3), D(3), and B(3)  105
B.3.1 D(3) with A(3) Verilog File  105
B.3.2 A(3), D(3), and B(3) Verilog File  107
B.4 The Verilog Files to Generate the 16-point Convolution from the 8-point  107
B.4.1 Q(4) Verilog File  107
B.4.2 RS(4) Verilog File  108
B.4.3 The 16-point Convolution Verilog File  110
Appendix C  The Schematics of the DCT Architecture  112
C.1 The Schematic of the Full Adder  112
C.2 The Schematic of the Half Adder  113
C.3 The Schematic of $F_4$  114
C.4 The Schematic of $Q_4$  115
C.5 The Schematic of $Q_4$  116
C.6 The Schematic of $R_4$  117
C.7 The Schematic of $R_4$  118
C.8 The Schematic of the Serial Adder  119
C.9 The Schematic of the Serial-Parallel Multiplier  120
C.10 The Schematic of the Serial Subtractor  121
C.11 The Schematic of the 2-d 2 x 2 DCT  122
C.12 The Schematic of the 1-d 2-Point DCT  123
C.13 The Schematic of the 2-d 4 x 4 DCT  124
Appendix D  List of Acronyms  125

List of Tables

Table 1  The number of MOPS in the H.261 compression  2
Table 2  Comparison of the number of additions required for performing convolution  36
Table 3  Comparison of the number of multiplications required for performing convolution  36
Table 4  Comparison of the number of multiplications required for the 2-d convolution (kernel of dimension 8 x 8)  52
Table 5  The distribution of the computational load in MPEG-1 decoding  56
Table 6  Comparison of the number of multiplications required for the 2-d DCT  72
Table 7  The summary of pre- and post-processing formulae  82

List of Figures

Figure 1  The permutation $P_{4,2}X_4$  14
Figure 2  The permutation $P_{4,2}^2X_8$  15
Figure 3  The RDS permutation of $P_{16,8}$  17
Figure 4  Example of $S_8(64,4)$  19
Figure 5  Bit-serial and 2-bit-at-a-time (2-BAAT) block-transpose network for Q = 4 and w = 6  20
Figure 6  The recursive decomposition of the proposed convolution algorithm  23
Figure 7  The direct realization of A  26
Figure 8  The modification of the A matrix (numbers above arrows denote the number of operands passed from one stage to another)  27
Figure 9  The recursive realization of A(16)  30
Figure 10  The recursive realization of the matrix B(16)  32
Figure 11  The modified recursive architecture of Toom's algorithm  34
Figure 12  The 2-d recursive convolution architecture  40
Figure 13  The permutation-free recursive realization of the 8-point convolution  42
Figure 14  The permutation-free multiplexed recursive realization of the 8-point convolution  43
Figure 15  The 1-d permutation-free recursive convolution architecture  43
Figure 16  The complete realization of Q  49
Figure 17  The multiplexed realization of Q  50
Figure 18  The proposed recursive architecture for computing the 2-d DCT  59
Figure 19  The 1-d DCT recursive architecture  63
Figure 20  The realization of $R_4$  63
Figure 21  The realization of $Q_4$, where $C_1 = 2\cos(\pi/8)$ and $C_2 = 2\cos(5\pi/8)$  65
Figure 22  The direct realization of the 2-d DCT for a 4 x 4 input image (m = n = 4)  67
Figure 23  The realization of R for a 4 x 4 input image (n = 4)  70
Figure 24  The realization of Q for a 4 x 4 input image (n = 4)  71
Figure 25  The complete recursive realization of the 2-d DCT for a 4 x 4 input image (n = 4)  71
Figure 26  The direct realization of equation 133 for the case $n_1 = 2$, $n_2 = 3$, $n_3 = 2$  74
Figure 27  The realization of $F_4$  77
Figure 28  The adder circuit used in the WHT  79
Figure 29  The 16-point WHT  80
Figure 30  The realization of R(2)  97
Figure 31  The complete realization of $R_4$  98

Acknowledgment

I would like to thank my supervisors, Dr. Hussein Alnuweiri and Dr. Mabo Ito, who have been a continuing source of guidance, support, and research interaction. I would also like to thank Dr. Peter Lawrence for encouraging us to patent our work; we have filed two patents so far. This work was supported in part by the GREAT award from the British Columbia Science Council and Seanix Technology Inc. I would like to thank Mr. Ken Ho, the R&D manager of Seanix Technology Inc., for sharing his knowledge with me. My thanks go to my mother; without her blessings this thesis wouldn't have been possible. To the memory of my late father, who first taught me how to read and write; may his soul rest in peace in Heaven. To my role model, my brother Ashraf. To my colleagues and friends, for their support and encouragement. To my wife, Maha, whose love and understanding made me able to accomplish this work. To my lovely kids, Sara and Moustafa, for the joy they brought to my life and for their low-key noise while I was studying at home. Finally, I thank God for everything.

1 Introduction

Single-processor cores are the computational engines for applications such as disk controllers, cellular telephones, and consumer devices. Even greater speed and more challenging applications can be addressed with the power of highly parallel multiprocessor cores. Moreover, significant performance improvements can be achieved with algorithms that are tailored to multiprocessor implementations. These performance requirements can be achieved by employing parallel processing at all levels. Very Large Scale Integration (VLSI) technology supports and provides a good avenue for parallelism.

1.1 Motivation for the Thesis Topic

Digital signal and video processing techniques are increasingly finding application in almost all areas of electrical and computer engineering. With the reduction in the cost of integrated circuit technology, and the increasing sophistication of design and prototyping environments, the time is now opportune to study the design synthesis of VLSI architectures for digital signal and video processing. Moreover, with the recent development of video standards (JPEG, MPEG, H.261, etc.) and the introduction of new applications such as videophone and conferencing systems, multimedia, and satellite broadcasting, there is an increasing demand for efficient hardware designs for such applications. In the following, we briefly discuss the basic needs of DSP and video applications.

1.1.1 The Need for Dedicated Hardware

While microprocessors and general-purpose digital signal processors (DSPs) share a number of common features, important differences exist. DSPs are primarily designed for real-time number-crunching applications, with special features such as:

1. Hardwired arithmetic circuits to perform multiply-accumulate operations, the fundamental operation for most DSP algorithms.
2. Multiple-access memory, which enables the processor to load multiple operands simultaneously within an instruction.

3. Special on-chip peripherals and I/O interfaces that enable the processor to interface efficiently with other system components, such as analog-to-digital converters and local memory.

Although these features contribute a great deal to the high performance of DSPs, many video and image processing systems require a throughput that cannot be sustained by DSPs. As an example, the number of million operations per second (MOPS) required for video compression using an encoder with the H.261 compression/decompression standard on a sequence of 30 frames per second (fps) is shown in Table 1 [37].

Compression task | MOPS
RGB to YCbCr conversion | 27
Motion estimation (25 searches in a 16 x 16 region) | 608
Inter-/intraframe coding | 40
Loop filtering | 55
Pixel prediction | 18
2-dimensional Discrete Cosine Transform | 60
Quantization, zig-zag scanning | 44
Entropy coding | 17
Frame reconstruction | 99
Total | 968

Table 1 The number of MOPS in the H.261 compression.

Table 1 shows that the encoder requires almost 1,000 MOPS of processing power, which is outside the capabilities of current general-purpose processors or DSPs.
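As a quick sanity check on the budget in Table 1, the per-task figures do sum to the quoted total. A throwaway Python check (task names abbreviated from the table):

```python
h261_mops = {
    "RGB to YCbCr": 27, "motion estimation": 608, "inter/intra coding": 40,
    "loop filtering": 55, "pixel prediction": 18, "2-d DCT": 60,
    "quantization + zig-zag": 44, "entropy coding": 17, "frame reconstruction": 99,
}
assert sum(h261_mops.values()) == 968  # ~1,000 MOPS for one 30 fps H.261 encoder
```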
ASICs are often faster and smaller than DSPs. Speed, size, and power advantages are gained when area is optimized as a result of a single-chip implementation that eliminates some of the external interconnects. Although DSPs allow flexibility in implementing a variety of algorithms, they incur higher design and manufacturing costs, since additional hardware is required for program control. In addition, DSPs require software development tools for targeted applications. Without such tools, writing application software can be difficult, no matter how strong the processor's performance is. Although several vendors provide development tools, including high-level language compilers, the expense of software development cannot be neglected and has to be considered when deciding which alternative, ASICs or DSPs, is to be used for a specific application.

It is worth noting that studies by market research organizations [17] show a growth in DSP core-based ASICs, also called application-specific standard products (ASSPs), and a decline in the sales of general-purpose DSPs. Core-based ASICs constituted 35% of the nearly $2 billion DSP market in 1995, and it is projected that their share will grow to 47% of the $10 billion market in the year 2000. Thus core-based ASICs will compete with custom ASICs and general-purpose DSPs in the near future. General-purpose DSPs are expected to see their market share drop from 50% in 1995 to 26% in 2000. The remainder of the market will be taken by custom ASIC solutions, which are projected to take 27% of the market in 2000.

1.1.2 The Need for Parallel Processing

As the gain in device performance from advancement in processing technology is diminishing, incorporating higher concurrency within a chip will play an increasingly important role in the extraction of higher VLSI performance (through a more efficient use of multiple chip resources). Recent advances in VLSI technology and computer-aided design (CAD) methodologies have made the implementation of highly concurrent computing systems on a single chip a reality. Indeed, several forms of parallelism are incorporated at various levels of most contemporary computing systems.

For example, the MPEG-1 standard for motion video requires the two-dimensional Discrete Cosine Transform (DCT) to be computed over each 8 x 8 block within a frame size of 352 x 240 in a video stream processed at a real-time rate of 30 fps; that is, (352 x 240)/64 = 1,320 blocks per frame, or 39,600 2-d DCTs per second. Clearly, such a computationally intensive task must be implemented in pipelined or multi-processor architectures to meet the real-time requirements. However, the key to exploiting the power of a parallel architecture is the proper selection and mapping of algorithms. Specifically, significant performance gains can be achieved with algorithms that are tailored to multiprocessor implementations. For example, on a highly parallel array processor, an n-point Fourier transform can be evaluated in O(log n) time [64], as opposed to a single-processor implementation that takes O(n log n) time at best. Fully exploiting the power of parallel processing requires the proper mapping of algorithms onto the target architecture. For example, the original matrix-vector multiply formulation of the Fourier transform fits a VLSI parallel processing system more efficiently than does the Fast Fourier Transform (FFT) formulation that was designed for standard single processors [67]. The common goal of the often difficult algorithm-mapping process is to minimize the inter-processor dependencies while maximizing the use of the computational resources to achieve a given speedup.

1.2 Research Contributions

The above considerations motivated us to develop high-performance platforms for video and signal processing, with particular emphasis on mapping algorithms onto highly concurrent VLSI architectures and programmable logic devices. An efficient algorithm-architecture mapping is essential for achieving high performance. In general, direct implementation of parallel algorithms demands a tighter coupling between algorithm specifications and logic synthesis. This thesis proposes efficient and cost-effective techniques for mapping highly concurrent digital signal and video computations onto parallel VLSI architectures. The main objective of the research is to develop design methodologies for deriving architectures that can be embedded in regular and modular structures. Regularity and modularity are essential factors for facilitating design automation and implementation of parallel organizations in maturing and emerging programmable logic and ASIC environments.

A specific contribution of this research is the derivation of new recursive formulations for several classes of multidimensional DSP computations (convolution, Discrete Cosine Transform (DCT), Discrete Fourier Transform (DFT), Walsh-Hadamard Transform (WHT)) that allow the generation of a larger VLSI architecture by combining the computation of a specific number of identical smaller-size computations of the same dimension. This is useful for both hardware and software solutions, in which a very efficient smaller-size core has been developed and a larger computation is required. Our methodology employs tensor product (Kronecker product) formulations and permutation matrices as the main tools for expressing DSP algorithms. Mapping tools then manipulate such formulations into suitable recursive expressions which can be mapped efficiently onto parallel VLSI architectures. Although tensor product techniques have been investigated before [46], [45], our methodology is, as far as we know, the first to derive truly recursive formulae for multidimensional convolution and the DCT.
It should be also emphasized that our partitioning and combining method does not make any assumption about  1 . Introduction  6  how the core computations are computed. Indeed, any suitable computation method can be used, which makes the proposed method very flexible and realizable over a wide range of hardware and software platforms. Our techniques can contribute to the design of a core of high performance circuits for a class of DSP and video computations. On the one hand, customizability is improved over DSPS, and on the other hand, the drawbacks of full ASIC designs, such as long design turnaround time, limited design flexibility, and restricted usability, are avoided. For a particular targeted design, the VLSI designer will have the flexibility of choosing from a library of pre-designed building blocks according to certain limiting specifications (e.g. available silicon area, data rate, power consumption, I/O, memory, throughput). An entire design can be defined as a macro-block which can be manipulated by macro-cell placement and routing routines in semi-custom ASICs or full-custom environments. Alternatively, the design can be manipulated to fit on programmable logic devices, such as FPGAs. In this case, another set of limitations must be taken into account including a limited number of global wiring channels among logic blocks, and the degree of utilization of available logic and I/O blocks. Another benefit is that by using a technology-independent definition, the same design can by synthesized using a number of different ASIC technologies. This would allow the creation of a vendor-independent DSP function library (e.g. a macro-block library), which can be implemented on the best available silicon. Although originally conceived as a hardware solution, our method is potentially very useful for RISC processors, superpipelines, and in commercial DSP chips. Because the partitioning strategy results in a core of data-independent computations, such computations can be overlapped in superpipelines, or they can be executed concurrently on multiple functional units in a DSP chip. The method is also well suited for optimizing software  1 . Introduction  1  pipelines in RISC processors.  1.3 Previous Work This section provides a brief survey of related previous work. More detailed comparisons between our approach and other previously proposed approaches will be given in the appropriate places throughout the thesis. Most of the previous work was focused on speeding up DSP computations by reducing the number of multiplications and additions [2], [4], [5], [31], [43], [49], [50], [53], [58], [77]. Although these algorithms achieve a substantial reduction in complexity, they do not address practical design issues, such as regularity, modularity, simple control, and communication complexity. Consequently, these algorithms produce complex structures which are not suitable for direct VLSI implementation [3], [48], [61]. From architectural point of view, some of the common approaches are: 1. Distributed arithmetic architectures with memory lookup tables [43], [77]. This approach has some drawbacks: a. Many input buffers are required to preserve the input data. b.  The latency can be too long.  c. The pipelined performance depends on the word length. 2.  Nested schemes (overlap-and-add) [4], [53] where, a composite size n-point DSP algorithm (n — rs) is decomposed into smaller algorithms of sizes r and s. 
Although this approach is very computationally efficient, it trades off performance for irregular architectures that compromise the modularity and scalability of VLSI implementations, especially for larger problem sizes.

3. Pipelined architectures [2], [50], and systolic arrays [5], [31], [49], [58]. These approaches speed up the computations by employing direct strategies for parallelizing serial DSP algorithms. Such techniques offer limited speedup and are not efficient for achieving a high degree of parallelism. Moreover, these approaches are not practical for VLSI implementation due to their high control and computation-scheduling overhead.

Tensor products have been previously used for matrix calculus [33], [52], and for the design and implementation of block recursive numerical algorithms such as the FFT [6] and convolution [13]. The tensor product formulations of these algorithms have been used to generate parallel and vector forms for shared-memory, and recently for distributed-memory, multiprocessors [63], [69].

1.4 Thesis Overview

The rest of the thesis is organized as follows. Chapter 2 reviews the basic concepts of the tensor product and permutation matrices. In Chapter 3, we introduce the 1-d recursive formulation of Toom's convolution algorithm. Also in this chapter, we compare the computational complexity of our proposed algorithm with other approaches. In Chapter 4, we introduce alternative constructions of the 1-d convolution algorithm that optimize the number of arithmetic units through multiplexing. Also in this chapter, we extend the derivation to the 2-d and the multi-dimensional case. In Chapter 5, we introduce the 1-d DCT algorithm in a tensor product form. Also in this chapter, we derive the truly recursive 2-d DCT algorithm and extend the derivation to the multidimensional DCT. The hardware realizations of the different modules are also given in this chapter. Chapter 6 applies our methodology to the DFT and WHT. Also in this chapter we propose a general framework to derive recursive formulations for multi-dimensional separable transforms. Finally, Chapter 7 gives conclusions and possible extensions of the research.

2 Preliminaries and Background

2.1 VLSI Model of Computation

The VLSI model of computation in this thesis assumes a basic VLSI model in which computational nodes have fixed fan-in, and they perform a single primitive arithmetic/logic operation on their inputs in one time unit. Modularity refers to a property of VLSI networks in which the architecture is decomposed into distinct building blocks (modules) which can be combined through composition to construct larger architectures. The communication among such blocks is done strictly through prespecified input/output ports. Regularity is a topological property of VLSI networks which implies that a given primitive interconnection pattern (among nodes) is repeated according to a fixed pattern. Recursivity refers to the property of defining an architecture for a problem of size n from lower-order (smaller-size) architectures (of size n/2 in our case). Scalability refers to a resource-growth property of VLSI networks in which the hardware (memory plus interconnect) resources allocated to each input remain constant or increase slowly as the number of ports increases.
2.2 Tensor Product

Tensor products (or Kronecker products) have proven to be useful in providing unified decomposable matrix formulations for multidimensional transforms, convolutions, matrix multiplication, and other fundamental computations [6], [13], [24], [25], [27], [62], [69]. Recently, Tolimieri et al. [46] and Van Loan [45] have shown that, when coupled with permutation matrices, tensor products provide a unifying framework for describing and programming a wide range of fast algorithms for various transforms. They have also described techniques for manipulating tensor-product formulations of multidimensional transforms into forms suited to parallel processing machines and vector processing machines. The tensor product decompositions given in [6], [13], [62], [69], although useful for extracting parallelism, are not necessarily suitable for VLSI implementation. In VLSI, issues such as communication complexity and I/O dominate the cost of implementation. These issues are the main motivation behind our work.

The tensor product is a binary matrix operator which provides a mechanism for multiplying two matrices to form a single larger matrix. Once the tensor-product factorization of a large matrix has been derived, the symmetries identified in the factorization can be used to derive efficient algorithms for a large number of matrix operations. It should be noted that direct implementations of parallel algorithms in silicon demand a tighter coupling between algorithm specifications and logic synthesis. Such coupling can be achieved using tensor product decompositions with a few additional enhancements.

Let $A_{r,s}$ and $B_{m,n}$ be two arbitrary matrices of dimension $r \times s$ and $m \times n$, respectively, such that

$$A_{r,s} = \begin{bmatrix} a_{0,0} & a_{0,1} & \cdots & a_{0,s-1} \\ \vdots & & & \vdots \\ a_{r-1,0} & \cdots & \cdots & a_{r-1,s-1} \end{bmatrix}, \qquad B_{m,n} = \begin{bmatrix} b_{0,0} & \cdots & b_{0,n-1} \\ \vdots & & \vdots \\ b_{m-1,0} & \cdots & b_{m-1,n-1} \end{bmatrix} \qquad (1)$$

The tensor product of $A$ and $B$ is a matrix $C = A \otimes B$ of dimension $rm \times sn$ defined by

$$C = \begin{bmatrix} a_{0,0}B & a_{0,1}B & \cdots & a_{0,s-1}B \\ \vdots & & & \vdots \\ a_{r-1,0}B & \cdots & \cdots & a_{r-1,s-1}B \end{bmatrix} \qquad (2)$$

Throughout the thesis, a square $r \times r$ matrix $A$ will be denoted by $A_r$, while a rectangular $m \times n$ matrix $B$ will be written as $B_{m,n}$. It is possible to think of the tensor product as a matrix decomposition. Given a large matrix $C$, it could be expressed in terms of two smaller matrices $A$ and $B$, called the tensor matrices, in the form

$$C = A \otimes B \qquad (3)$$

Two particular forms of tensor products have been identified in [45], [46] for use with parallel and pipelined (vector) machines. Our focus in this research will be on the parallel form described by the following tensor product,

$$C_n = I_s \otimes B_r = \begin{bmatrix} B_r & & & \\ & B_r & & \\ & & \ddots & \\ & & & B_r \end{bmatrix} \qquad (4)$$

where $I_s$ is the identity matrix of order $s$. The resulting matrix $C_n$ is block-diagonal of order $n = rs$. Consider applying the matrix $C_n$ to an input vector $x_n$ of size $n = 100$. Direct implementation of the matrix-vector multiply would require $n^2 = 10{,}000$ multiplications, while using the parallel form of $C_n$ as defined in equation 4 (assuming $s = r = 10$) would require only $s(r^2) = 1{,}000$ multiplications, for a computational saving of 90%.
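The block-diagonal structure of equation 4 is easy to see numerically. The following is a minimal NumPy sketch (sizes are illustrative, not from the thesis) that builds the parallel form with a Kronecker product and checks the operation-count argument by applying $I_s \otimes B_r$ blockwise:

```python
import numpy as np

r, s = 10, 10
n = r * s
B = np.random.rand(r, r)
x = np.random.rand(n)

# parallel form: C = I_s (x) B_r is block-diagonal with s copies of B
C = np.kron(np.eye(s), B)

# blockwise application: s independent r-point products, s*r^2 = 1,000 mults
y_blocks = np.concatenate([B @ x[i * r:(i + 1) * r] for i in range(s)])

assert np.allclose(C @ x, y_blocks)  # same result as the n^2 = 10,000-mult product
```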
Another interesting property of the tensor product is the inversion property: if $A$ and $B$ are non-singular matrices, then so is $C = A \otimes B$, and

$$C^{-1} = (A \otimes B)^{-1} = A^{-1} \otimes B^{-1} \qquad (5)$$

The dimensions of $A$ and $B$ are usually much smaller than those of $C$, and thus inverting these smaller matrices results in a large computational saving compared to inverting $C$ directly.

2.3 Permutation Matrices

When combined with permutation matrices, the tensor product technique has many more degrees of freedom than most other techniques. This extra freedom is used not just to describe the required operation, but also to specify data partitioning and data flow efficiently. A permutation matrix $P$ is a special square matrix whose entries are zeroes and ones, such that each row or column of $P$ has a single 1 entry. If $n = rs$ then $P_{n,s}$ is an $n \times n$ binary matrix specifying an $n/s$-shuffle (or $s$-stride) permutation. The effect of the permutation matrix $P_{n,s}$ on an input vector $X_n$ of length $n$ is to shuffle the elements of $X_n$ by grouping together all the $r$ elements separated by distance $s$. Thus, the first $r$ elements will be $x_0, x_s, x_{2s}, \ldots, x_{(r-1)s}$, the next $r$ elements are $x_1, x_{1+s}, x_{1+2s}, \ldots, x_{1+(r-1)s}$, and so on. The shuffling operation of stride permutations can be easily implemented as an addressing operation on many computers. In this case, stride permutations can be realized at virtually no computational cost. In parallel architectures, such permutations demand additional wiring area.

2.3.1 The Replicated Dilated Shuffle (RDS) Permutations

For the purposes of this thesis, the class of Replicated Dilated Shuffle (RDS) permutations needs to be introduced. This class of shuffle permutations is very effective for network partitioning and for deriving modular VLSI interconnection networks for convolution, as will be shown in the next chapter.

A shuffle permutation with dilation $d$ operates on subvectors of size $d$ rather than single elements. For example, the permutation $P_{4,2}^2$ operates on subvectors of size two. To illustrate this effect, consider the two operations $P_{4,2}X_4$ and $P_{4,2}^2X_8$, shown in Fig. 1 and Fig. 2, respectively:

$$P_{4,2}X_4 = \begin{bmatrix} 1&0&0&0 \\ 0&0&1&0 \\ 0&1&0&0 \\ 0&0&0&1 \end{bmatrix} \begin{bmatrix} x_0\\x_1\\x_2\\x_3 \end{bmatrix} = \begin{bmatrix} x_0\\x_2\\x_1\\x_3 \end{bmatrix} \qquad (6)$$

$$P_{4,2}^2X_8 = \begin{bmatrix} I_2&0&0&0 \\ 0&0&I_2&0 \\ 0&I_2&0&0 \\ 0&0&0&I_2 \end{bmatrix} \begin{bmatrix} X_0\\X_1\\X_2\\X_3 \end{bmatrix} = \begin{bmatrix} X_0\\X_2\\X_1\\X_3 \end{bmatrix} \qquad (7)$$

where $I_2$ is the identity matrix of size 2, and

$$X_i = \begin{bmatrix} x_{2i} \\ x_{2i+1} \end{bmatrix}$$

Fig. 1 The permutation $P_{4,2}X_4$

Fig. 2 The permutation $P_{4,2}^2X_8$

In general, $P_{n,s}^d$ can be obtained by replacing each 1 entry in $P_{n,s}$ by the $d \times d$ identity matrix $I_d$ and each 0 entry by the $d \times d$ zero matrix. The main power of RDS permutations stems from the fact that standard shuffle permutations can be constructed from smaller RDS permutations in a simple recursive fashion. This is a very useful property for enabling the modularization of VLSI networks that employ shuffle networks. Indeed, we can rewrite equation 7 in the form

$$P_{4,2}^2 = P_{4,2} \otimes I_2 \qquad (8)$$

For example, the standard perfect shuffle permutation on 16 elements, $P_{16,8}$, can be specified in terms of RDS permutations as follows (see Fig. 3 also):

$$P_{16,8} = (I_4 \otimes P_{4,2} \otimes I_1)(I_2 \otimes P_{4,2} \otimes I_2)(I_1 \otimes P_{4,2} \otimes I_4) = \prod_{i=0}^{2} \left( I_{2^{2-i}} \otimes P_{4,2} \otimes I_{2^i} \right) \qquad (9)$$

An RDS permutation has the general form $(I_r \otimes P_{n,s} \otimes I_d)$. If $r = 1$, the permutation represents a dilated shuffle permutation only (i.e., $P_{n,s} \otimes I_d$), and if $d = 1$, it represents a replicated shuffle permutation only (i.e., $I_r \otimes P_{n,s}$).
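In code, a stride permutation is a small indexing exercise, and the dilation structure of an RDS permutation falls out of the Kronecker product directly. A minimal sketch (NumPy; the helper name stride_perm is ours, not the thesis's):

```python
import numpy as np

def stride_perm(n, s):
    """n x n matrix of the s-stride permutation P_{n,s} (n = r*s):
    the output lists the r elements at distance s, for each of the s offsets."""
    P, r = np.zeros((n, n), dtype=int), n // s
    for j in range(n):
        P[(j % s) * r + j // s, j] = 1
    return P

x = np.arange(8)
print(stride_perm(8, 4) @ x)   # [0 4 1 5 2 6 3 7]: the perfect shuffle P_{8,4}

# dilated shuffle (equation 8): P_{4,2} with dilation 2 moves size-2 subvectors
P42_dil2 = np.kron(stride_perm(4, 2), np.eye(2, dtype=int))
print(P42_dil2 @ x)            # [0 1 4 5 2 3 6 7]: subvectors X0 X2 X1 X3

# equation 9: P_{16,8} as a cascade of three RDS stages (I_r (x) P_{4,2} (x) I_d)
I = lambda k: np.eye(k, dtype=int)
cascade = (np.kron(np.kron(I(4), stride_perm(4, 2)), I(1))
           @ np.kron(np.kron(I(2), stride_perm(4, 2)), I(2))
           @ np.kron(np.kron(I(1), stride_perm(4, 2)), I(4)))
assert (cascade == stride_perm(16, 8)).all()
```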
The total size of the permutation is $r \times d \times n$, where $r$ is called the replication factor and $d$ is called the dilation factor.

Fig. 3 The RDS permutation of $P_{16,8}$

2.3.2 Network Folding for Limited I/O

Network folding is a very effective technique for designing parametrized permutation networks that are adaptable to I/O limitations [64]. This section reviews an index-mapping scheme that results in simple folded networks for permutation networks [24], [64]. The main goal is to map a stride-permutation network of size $N = 2^n$ (i.e., with $N$ I/O terminals) into an equivalent network with $N/Q$ I/O terminals. This mapping will be called folding, and $Q$ will be called the I/O reduction factor. A folded stride-permutation network has $N/Q$ I/O terminals. Thus, bits from at most $N/Q$ elements can be input or output at a time through the folded network. The index-mapping procedure views the $N$-element input or output vectors as a matrix of $Q$ columns with $N/Q$ elements per column. Specifically, let $M = N/Q$; then an $N$-element vector $V_N$ can be arranged into an $M \times Q$ matrix $V_{M \times Q}$ as follows:

$$V_{M \times Q} = \begin{bmatrix} v_0 & v_M & \cdots & v_{N-M} \\ v_1 & v_{M+1} & \cdots & v_{N-M+1} \\ \vdots & & & \vdots \\ v_{M-1} & v_{2M-1} & \cdots & v_{N-1} \end{bmatrix} \qquad (10)$$

In other words, column $i$, $0 \le i \le Q-1$, of the above matrix contains the elements with indices $i\frac{N}{Q}, i\frac{N}{Q}+1, \cdots, (i+1)\frac{N}{Q}-1$. The mapping (permutation) of elements from the input vector to the output vector can now be expressed as a mapping of elements from the input matrix to the output matrix. For a stride permutation, each index in the output matrix is obtained by performing the specified bit-permute operations on the bit representation of the corresponding index (i.e., the index at the same position) in the input matrix. An example of an $S_8(64,4)$ network derived by our method is shown in Fig. 4, where $S_{N/N_1}(N,Q)$ denotes a folded $N/N_1$-shuffle network of size $N$ (with reduction factor $Q$). Note that the folded network of Fig. 4 consists of $N/Q^2$ sub-networks, each of which is a $Q \times Q$ transpose permutation circuit. Fig. 5 shows bit-serial and 2-bit-at-a-time (2-BAAT) versions of the transpose circuit with a word length of $w = 6$ bits.

Fig. 4 Example of $S_8(64,4)$

Fig. 5 Bit-serial and 2-bit-at-a-time (2-BAAT) block-transpose network for Q = 4 and w = 6

2.4 Important Properties of the Tensor Product and Permutation Matrices

Some of the tensor product and permutation matrix properties that will be used throughout this thesis are [45], [46]:

1. Distributive property:
$$AB \otimes CD = (A \otimes C)(B \otimes D) \qquad (11)$$

2. Associative property:
$$(A \otimes B) \otimes C = A \otimes (B \otimes C) \qquad (12)$$

3. Parallel operations: if $n = n_1 n_2$ then
$$A_{n_1} \otimes B_{n_2} = P_{n,n_1}(I_{n_2} \otimes A_{n_1})\,P_{n,n_2}(I_{n_1} \otimes B_{n_2}) \qquad (13)$$

4. If $n = n_1 n_2 n_3$, then the stride permutation factors as
$$P_{n,n_2} = (P_{n_1 n_2,\,n_2} \otimes I_{n_3})(I_{n_1} \otimes P_{n_2 n_3,\,n_2}) \qquad (14)$$

5. If $n = n_1 n_2 n_3$ then
$$P_{n,n_1} P_{n,n_2} = P_{n,n_1 n_2} \qquad (15)$$

6. If $n = n_1 n_2$ then
$$P_{n,n_1}^{-1} = P_{n,n_1}^{T} = P_{n,n_2} \qquad (16)$$

7. For a non-square matrix $A_{m,n}$, we have
$$(A_{m,n} \otimes I_c) = P_{mc,m}(I_c \otimes A_{m,n})P_{nc,c} \qquad (17)$$
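The properties above are easy to sanity-check numerically. A short sketch (sizes illustrative; stride_perm as in the earlier sketch) verifying properties 11 and 13:

```python
import numpy as np

def stride_perm(n, s):
    """s-stride permutation matrix P_{n,s}."""
    P, r = np.zeros((n, n), dtype=int), n // s
    for j in range(n):
        P[(j % s) * r + j // s, j] = 1
    return P

n1, n2 = 3, 4
n = n1 * n2
A, B = np.random.rand(n1, n1), np.random.rand(n2, n2)
C, D = np.random.rand(n1, n1), np.random.rand(n2, n2)

# property 11: AB (x) CD = (A (x) C)(B (x) D)
assert np.allclose(np.kron(A @ B, C @ D), np.kron(A, C) @ np.kron(B, D))

# property 13: A (x) B = P_{n,n1} (I_{n2} (x) A) P_{n,n2} (I_{n1} (x) B)
rhs = (stride_perm(n, n1) @ np.kron(np.eye(n2), A)
       @ stride_perm(n, n2) @ np.kron(np.eye(n1), B))
assert np.allclose(np.kron(A, B), rhs)
```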
3 One-Dimensional Convolution

In this chapter we present a new recursive formulation of Toom's convolution algorithm. Our formulation allows the generation of higher-order convolution computations from three lower-order convolution computations [22], [28]. Efficient parallel architectures for implementing this method are also presented.

Convolution is a very important operation in most DSP and image processing applications. Thus, an abundance of approaches have been suggested to achieve high-speed processing of convolution algorithms and to design efficient convolution architectures, including pipelined architectures [2], [50], systolic arrays [5], [49], [58], distributed arithmetic [77], and nesting schemes [4], [53]. Most previous methods speed up the computation of linear convolution by employing direct strategies for parallelizing serial convolution. Such techniques offer limited speedup and are not efficient for achieving a high degree of parallelism. Moreover, for large problem sizes, these algorithms are not practical for VLSI implementation due to their high control and computation-scheduling overhead. Previous work presented in [7], [53] reduces the complexity of the convolution by decreasing the number of multiplications and additions; however, the resulting architectures are irregular and may not be suited to VLSI implementation. Some efficient VLSI implementations for computing convolution have been reported in [11], [56].

Our linear convolution algorithm decomposes the convolution computation recursively into three cascaded stages, as shown in Fig. 6. The second stage is constructed recursively from three parallel blocks, each realizing a lower-order convolution. The first and the third stages serve as "glue" computations that combine the three lower-order convolution blocks to construct the higher-order convolution. In Fig. 6, C(n) denotes a convolution on n points, while C(n/2) denotes a convolution on n/2 points. This formulation is useful for both hardware and software solutions, in which a very efficient smaller-size 1-d linear convolution core has been developed [4], [53] and a larger convolution computation is required. In the following, we start by reviewing some fundamentals and then proceed to describe our algorithm and the resulting architectures.

Fig. 6 The recursive decomposition of the proposed convolution algorithm: n inputs (n = 2^a) enter a pre-additions stage (stage 1), feed three parallel C(n/2) blocks (stage 2), and a post-additions stage (stage 3) produces the 2n-1 outputs of C(n).

3.1 Toom's Linear Convolution Algorithm

Since our approach is based on a parallelized version of Toom's algorithm, we will briefly review Toom's convolution algorithm [69].
[69] presented an efficient tensor a  product formulation for Toom's radix-2 convolution algorithm which has the following "parallel" form, C * = C(a)  =  2  D  = diag  BDA  (19) Ah , r  where a-l k=0  B —  JJ  (20) {h"-  1  ® R <*-  k  2  {Pi{2 - r -\), i{h - r^-\ a  kJ  1  l  a  ® B)P ( a-fc+l_J)  kJ  3 2  k=i 1 11 A  =  1  0  1  (2«-*+l_l)))  ,  (21)  00 |  1  >  (22) B  1  0  0  -1 0  1  -1  0  1  25  3 . One-Dimensional Convolution  and  R c-k 2  is a special matrix of zeros and ones.  Since in practical applications the elements of the vector .h (used in equation 19 to n  calculate the matrix D) are known a priori, this matrix-vector multiplication can be precomputed. Moreover, since D is a diagonal matrix of order 3 , applying the D matrix a  implies performing independent element-wise multiplications, which all can be done in parallel. Consequently, the focus in this chapter will be on deriving modular algorithms and architectures for the two matrices A and B.  3.2 The Proposed Recursive Algorithm  Our main goal is to compute the vector Y -i = C(a)X , 2n  n  where n = 2 and C(a) was a  defined by equation 19. The next subsections present new recursive formulae for A and B, that direcdy lead to modular recursive structures for computing convolution.  3.2.1 A Recursive Formulation for A  The matrix A is defined as  a-l  A  =  J^J  P^a-k  2* ^a-k^I^a-S-k  k 2  ® A)  P^a-k-l  k+l  2  k  2  (23)  A;=0  The direct realization of A is shown in Fig. 7 Applying property 15 to the matrix A and  26  3 . One-Dimensional Convolution  A  Fig. 7 The direct realization of A  noting that the permutations in two adjacent stages can be grouped together into a single  27  3 . One-Dimensional Convolution  permutation, equation 20 can be simplified to a-l  A(n) — J^J  (/3a-!-*:  2  k  ® A) P ^ a - k - i  2 +i fc  3  t  a  -  k  ~  1  2  (24)  k  k=0  where A{n) denotes the matrix A for a given n — 2 . Thus A(n) can be computed by the a  cascaded product of a stages of double matrix products instead of the triple matrix product in equation 20 as shown in Fig. 8 for the case a = 4 (n = 16) [26].  54  f— u 1—Ti 24 1  36  —  p  1 '1  • P  16  i)  A(16)  Fig. 8 The modification of the A matrix (numbers above arrows denotes the number of operands passed from one stage to another)  Our main goal is to manipulate the expression in equation 24 into a recursive modular form. The last stage (k = 0) of the computation in equation 24, can be expressed as  3 . One-Dimensional Convolution  28  follows [22], [(J «-i <g> A ) P a - i 3  =  2,3—i]  3  ®  ^ ) ( ^ 3  ®  ^ 3 —  2  2,3«-0( 6,3 P  ® / 3 —  2  )  (25)  = (h ® ( ^ 3 — ® A)P o-2 3—3 ) (^6,3 ® ^ 3 — ) Similarly, the stage before the last, k = 1, can be expressed as 2  2  3  [(/30.-2 <8> A)P a-2 2 , 3 2  2  a  2  3  2)  (26)  2] •  Combining equations 25 and 26, the last two stages can be expressed as  J^J (^o-l-A 2 ® -A) P^a — k — l 2 + i ~ ~ 2 = k  k  1  a  k  1  k  (27)  k=0  ((7 <8> 3  ( / 3  a  -  2  ® A)P -2 a-2)(P 3 3Q  2)3  7 a-2))((7 -2 <g> A)P a-2 2 a-2 2) •  ®  6]  3a  3  2  2  3  ]3  Note that (P  6 ) 3  <g> Pja-2 )(/ a-2 3  ® A)  2  = (I a-2 <g> A ) ( P 3  2  6 j 3  <g> 7 3 a -3 2 )  = [J <g> (7 a-3 ® A ) ] ( P 3  3  2  (28)  <g> 7 a-3 )  6 ] 3  3  2  and P6,3 <8> / 3 - 2 = P 3 - 2 2 , 6 ( ^ 3 - 2 <S> P 6 , 3 ) P 3 - 2 , 3 - 2 Q  3  Q  2  Q  3  a  2  2  a  3  (29) =  [^3 ® P 3 - 2 , 3 - 2 ] A - 2 2 , 3 - 2 Q  3  2  Q  3  a  2  a  .  
2  By substituting equations 28 and 29 in 27 we have l  [(-^o- -* 1  [-^3  <8>  [(-^3 Q  2* ®  2  A)P a-k-i 3  ® A)P c-2 3  fc l - -l fc] =  2  +  2,3 - ] a  2  ]3Q  fc  2  [(^3 - 2 a  3  ® A ) P a - 3 2 a-3 ]] (P a-2 2  3  ]3  2  3  2 , 3 2  Q  "  2  2) •  A simple induction proof can show that applying equation equations 28 and 29 repeatedly to the rest of the stages, k — 2, 3, • • •, a — 2, except the first stage (k = a — 1), results in the following formula for A{n) A(n) = [ ( / ® A ( n / 2 ) ) ( P a - 2 a - 2 3 ) 3  2  l 3 )  k = 0,1, • • • , a - 2.  a  _  1  ]  [ ( /  2  - l  <g> A ) P « a - l ] 2  | 2  k—a— 1  ,  (31)  3 . One-Dimensional Convolution  29  where the term in the left square bracket represents the last (a — 1) stages and can be realized using three parallel blocks of A(n/2) computations preceded by (a — 1) cascaded stages of the permutation  The term inside the right square brackets represents  P - 3,2 - 3a  1  a  2  2  a single computation stage that must be performed at the beginning of the computation of A(n). The matrix computation A{n) can be written more concisely as A(n) = [ ( /  3  ®l(n/2))(P  2 Q  - 3 1  [(J «-l  -2 ) ~ ]  ) 2 a  a  3  1  2  A)P a a-  <g>  2  >2  (32) = (/3<8>A(ra/2))<3„  ,  where Q = (P c-1 3,2—23)^ n  1  2  [(/ c-l ® A)P a a-l] 2  2  (33)  )2  The matrix A(n) can be realized recursively from smaller computational modules as shown in Fig. 9 for the case a = 4 (i.e. for n — 2 = 16 point convolution). The labels on the a  connections in Fig. 9 indicate the  of each link, defined as the number of elements  bandwidth  that can simultaneously traverse the link. Observe that, we have drawn our networks such that data flows from right to left. We chose this convention to show the direct correspondence between the derived algorithms and the proposed VLSI networks. Therefore, the first (or right most) stage of the computation of A{n) in Fig. 9 is the stage for which k — a - 1, and the last (or left most) stage is the one for which k = 0. 3.2.2 A Recursive Formulation for B  From equation 21, stage k consists of the parallel tensor products (/ fc-i 3  (8) P -'=(-P3(2 -' + -l),3(^2 -' + -l ® B)P '2 - +^-l) ,(2 - + -l))) a  a  ;  1  a  ;  1  a  2  k  a  k  1  3  which can be realized by 3  fc_1  parallel blocks each computing the matrix product  P a-J=(P - a-fc + l_ ^3(/2a-'= + l - l <8> P)P ( a-fc + l_l) (2«-* + l-l)) 2  3( 2  (34)  1  3  2  ;  (35)  3 . One-Dimensional Convolution  30  A (4) QI 4  l * ( 2 ) l ^ V(4) |^(2)U2  Q]  4 4  4  V(4)  81  Km sua-  G,  ITS | A (2)|^_Q  sea-  A (8)  "A (4)  4  7Si(4)  4 x 4  76  A (2)1 A (4)  4 x 4  s i a - ^ (4)  Se2~  16  A(8)  ^(4)  SM-  Q  %(4)  I A (2)| ]A (2)  I  8  A(16)  A (8)  A (4)  Fig. 9 The recursive realization of A(16)  (A complete characterization of the distribution of l's and 0's in R c-k = R(a) is given in 2  Appendix A). It should be observed that the tensor decomposition V - P 3 ( 2 - + - l ) , z { h - ^ - \ ® -#)^3(2 - + -l), ( 2 ° - ' + - l ) ) a  f c  1  a  k  a  fc  1  [  1  (36)  3 . One-Dimensional  31  Convolution  of equation 35 is similar to the decomposition of the matrix A(n) discussed in section 3.2.1 with the exception that the core matrix A is replaced by the matrix B defined in equation 22. By applying tensor product manipulations similar to those applied to the matrix A(n), the matrix B(n) can be expressed in the following form B(n) = [ i 2 a - f c ( P ( 2 a _ ) (/2«»-l ® 5 ) P ( a _ ) ( o , _ ) ) ] " v ' 2  3  1  i  3  3  2  1  )  2  1  k=l a  J J (/ k=2  3  (g> i? 
a-fc(P ( a-fc + l _  ® I k-2  2  3  3  2  1  ) 3(/ o-fc +l _ i  2  1  (g) 5)P ( -fc + l _ 3  2Q  1  ) (2«-* + l-l))) (  [P -*(P ( _i) (/2«-l <8> -B)-P3(2 -l),(2 -l))] a  2Q  3  ( ^ ® IT (^3 3  2Q  fc  -  a  i3  2  ® R2 ~ (Pz(2 '- + -l),3(h - + -l a  k  ,:  k  1  a  k  ® •S)A(2 - + -l)  1  a  k  1  Jt=2  V  = [i2 a- ( P 2  fc  3 ( 2  a  1  » - i ) , ( / - - i ® 5 ) P 2 _ i ) , ( 2 « - i ) ) ] (^3 ® £ ( n / 2 ) ) 3  2  = P ( j ® P(n/2)) n  , (2 -*+ -l))) } /  3(  Q  ,  3  (37) where P  = [P «-*(^3(2«-l), 3(-^2«-l ® P)-P3(2 -l),(2 -l))] Q  n  a  2  C ^) 3  Equation 37 shows that B(n) can be realized by recursively combining three parallel blocks for computing B(n/2) followed by a single R block as shown in Fig. 10 for the case n  n = 16 [28]. 3.2.3 A Parallel Recursive form for Toom's Algorithm  By substituting equations 32 and 37 in 19, we can rewrite Toom's convolution algorithm in the form C(n) = R (/ ®B(n-l n  3  j)5{n) ( J <g> A(n - 1)) Q 3  n  (39)  3 . One-Dimensional Convolution  32  3 . One-Dimensional  33  Convolution  be partitioned arbitrarily into any size blocks. Therefore we have D(n) = h® D(n/2)  (40)  By substituting equation 40 in equation 39 and applying property 11, we have C(n) = R ( / <g> 5(n/2)) (7 <g> 5(n/2)) (/ <g> A(ra/2)) Q„ n  3  3  3  = Rn [h ® (P(n/2)5(n/2)2(n/2)) ] Q  (41)  n  = i2„(/ <8)C(n/2))Q 3  n  Note that Q and i?„ represent pre-addition and post-addition stages, and serve as glue n  structures that combine three identical lower order convolution architectures (C(n/2)) to construct the higher order architecture ( C ( n ) ) . The complete recursive modular structures for Toom's convolution algorithm is shown in Fig. 11 for the case n = 2 — 2 = 16. a  4  3.2.4 The Computational Complexity of the Proposed Architectures The number of adders to compute the matrix A(n) is equal to the coefficient of the identity matrix in equation 24 and is given by a—1  a—l  3 -* .2 a  _1  fc  =3  3~ .2  a _ 1  k=0  k  k  k=0  = 3*" £  (2/3)*  1  (42)  k=0  = 3 (l-(2/3) ) a  = (3  a  -2 )  a  a  The number of adders needed to compute the matrix B{n) is the sum of the number of adders to compute the matrix R(n) plus the adders that compute the matrix ( • ^ 3 ( 2 « - ' = + - l ) , 3 ( ^ 2 - ' + - l ®-5)P ( a-*:+i_ ) (2 -'=+ -l)) >  1  Q  s  1  Q  3  2  1  )  1  (43)  Fig. 11 The modified recursive architecture of Toom's algorithm  3 . One-Dimensional Convolution  35  Since each of the 3-input adders used in computing the matrix B can be realized by two 2-input adders, the number of adders required to compute equation 26 is given by 2  — l ) . From Appendix A, the number of adders used to realize the matrix  3 ^ (2 ~ _1  a  k+1  k=i a  R(a) is given by 2 ^ 3 (2 ~ — l ) . Therefore, the number of adders needed to compute _ k=i the matrix B(n) is given by 2 [ * (2 ~ - l) + 3 (2 - - l ) k=i fc_1  a  k  3  = a "  _1  a  k+1  fc_1  a  k  £ ( § ) ' - ! £ » '  1  *=i  = 3x2  (44)  ~ jb=i  a + 1  (Qy-l)-2(3 -l) B  = 4x3 -6x2 +2 Hence, the total number of adders used to compute the convolution of size n = 2 is given by (3 - 2 ) + (4 x 3 - 6 x 2 + 2) = 5 x 3 - 7 x 2 + 2 Q ,  a  a  a  a  a  a  a  = 5x3  l o g 2 n  a  - 7n + 2  (45)  The number of multipliers required is equal to the number of multiplications to compute the matrix D(n) and is equal to 3 § . Tables 2 and 3 show the number of additions and lo  2n  multiplications, required by the proposed method and compare these numbers to the direct method and the FFT method [32]. 
Method | Number of additions | n = 8 | n = 256 | n = 1024
Direct | $n(n-1)$ | 56 | 65,280 | 1.047x10^6
FFT | $12n\log_2 2n$ | 384 | 27,648 | 0.135x10^6
Proposed (based on Toom's algorithm) | $5 \times 3^{\log_2 n} - 7n + 2$ | 81 | 31,015 | 0.288x10^6

Table 2 Comparison of the number of additions required for performing convolution.

Method | Number of multiplications | n = 8 | n = 256 | n = 1024
Direct | $n^2$ | 64 | 65,536 | 1.048x10^6
FFT | $12n\log_2 2n + 8n$ | 448 | 29,696 | 0.143x10^6
Proposed (based on Toom's algorithm) | $3^{\log_2 n}$ | 27 | 6,561 | 59,049

Table 3 Comparison of the number of multiplications required for performing convolution.

These tables show a significant reduction in the number of multipliers of the proposed algorithm over the direct and FFT methods. For example, although the number of adders in the proposed design is roughly twice that of the FFT, the number of multipliers is less than half of the FFT multipliers for the case n = 1024.

3.3 Chapter Summary

In this chapter, we presented a class of highly modular parallel VLSI structures for 1-d linear convolution based on a modified version of Toom's algorithm. The proposed algorithm allows the generation of higher-order convolution architectures from three lower-order convolutions in a parallel realization (with a single stage of multipliers), with the addition of two glue stages of pre- and post-processing (adders only). Equation 41 is the key result of this chapter, showing that an n-point convolution C(n) can be constructed from a parallel stage of three (n/2)-point convolutions C(n/2). As part of testing and verifying the functionality of our computational blocks, Verilog HDL code has been developed to simulate our method. The Verilog files of the different building blocks that recursively generate the 16-point convolution (C(16)) from three identical 8-point convolutions (C(8)), the pre-processing block (Q), and the post-processing block (R) are given in Appendix B. The circuits generated by the Verilog code were mapped and tested on an FPGA-based transformable computer. The results are reported in [1].
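To make the recursion of equation 41 concrete in software form, the following Python sketch mirrors the structure C(n) = R_n (I_3 (x) C(n/2)) Q_n: three independent half-size convolutions (the parallel core stage) glued by pre-additions and post-additions. It is a functional, Karatsuba-style rendering of the same splitting, not a transcription of the thesis's A, B, and D matrices (the block ordering here is our own):

```python
import numpy as np

def conv(h, x):
    """Recursive linear convolution of two length-n sequences (n a power of 2)."""
    n = len(x)
    if n == 1:                      # base case: the multiplier stage D
        return np.array([h[0] * x[0]])
    m = n // 2
    hl, hh = h[:m], h[m:]           # low/high halves
    xl, xh = x[:m], x[m:]
    # pre-additions (the Q stage) feed three independent half-size cores
    p0 = conv(hl, xl)               # C(n/2) block 1
    p1 = conv(hh, xh)               # C(n/2) block 2
    p2 = conv(hl + hh, xl + xh)     # C(n/2) block 3
    # post-additions (the R stage) combine the three partial results
    y = np.zeros(2 * n - 1)
    y[:2 * m - 1] += p0
    y[m:3 * m - 1] += p2 - p0 - p1
    y[n:] += p1
    return y

h, x = np.random.rand(8), np.random.rand(8)
assert np.allclose(conv(h, x), np.convolve(h, x))
```

As in the analysis above, the recursion performs exactly $3^{\log_2 n}$ scalar multiplications (the base cases), matching the multiplier counts of Table 3.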
4 Multi-Dimensional Convolution

This chapter generalizes the results of the previous chapter by presenting a novel recursive algorithm for generating a higher-order multi-dimensional (m-d) convolution by combining the computation of $3^m$ identical lower-order (smaller-size) convolution computations of the same dimension m. We also show that, with careful design, the number of lower-order convolutions in the VLSI implementation can be reduced from $3^m$ to less than $\frac{2}{3}(3^m)$ [23], [30]. Our methodology is based on a non-trivial generalization of the 1-d convolution algorithm presented in the previous chapter.

4.1 Overview

Two-dimensional (2-d) convolutions have found many applications, such as image processing and synthetic aperture radar (SAR) processing. In real-time image processing, 2-d convolution requires very high computation power. For example, a convolution with a 16 x 16 kernel, a frame size of 512 x 512 pixels, and a frame rate of 25 fps results in 1.7 billion multiply-and-accumulate operations per second. Additionally, in many applications two or more convolutions with different coefficient masks have to be applied in parallel to the same video signal, e.g., for edge detection and pattern recognition [56]. Therefore, it is necessary to perform the computationally intensive 2-d convolution in very efficient hardware. One of the major problems with conventional FFT techniques for computing 2-d convolutions is that a matrix transpose operation must be performed. Moreover, the FFT approach requires the prior acquisition of the whole input sequence, which may cause extra latency. Therefore, several authors have suggested approaches to improve the efficiency of the 2-d convolution, such as block processing [72], systolic architectures [56], [49], and look-up tables [54].

The main objective of this chapter is to derive recursive formulations for the 2-d convolution which are useful for the true modularization and parallelization of the resulting computation. The main result reported shows that a large 2-d convolution computation can be decomposed recursively into three cascaded stages, as shown in Fig. 12. The center stage is constructed recursively from 9 parallel (data-independent) blocks, each realizing a smaller-size 2-d convolution. The post-addition and pre-addition stages serve as "glue" computations that combine the 9 lower-order convolution blocks to construct the higher-order convolution. We also show that the proposed algorithm can be extended such that an m-d convolution can be constructed from $3^m$ smaller-size m-d convolutions. We also show, using an alternative (permutation-free) construction, that the number of core convolution blocks can be reduced to less than $\frac{2}{3}(3^m)$. The main features of our construction are modularity and high concurrency. The proposed architectures have very small depth and contain only a single stage of multipliers, while all other stages contain adders only.

The majority of previous approaches focused on expressing an m-d convolution in terms of m consecutive stages of 1-d convolutions. However, it should be mentioned that the multirate filter-bank approach proposed in [7], [36], [71], and its generalization [51], implement a given convolution using shorter convolutions running at slower speeds. Although these approaches are computationally efficient, the number of small convolutions depends on the aperiodic convolution algorithm used. Moreover, these approaches are mostly suited to symmetric 2-d n x n sequences, but not to sequences of size m x n (where m is different from n). In addition, the subsampling and upsampling processes proposed in [7], [51], [71] impose additional requirements on hardware speed and analog-to-digital (A/D) conversion.

Fig. 12 The 2-d recursive convolution architecture: an n1 x n2 input enters a pre-additions stage (stage 1), feeds 9 parallel C(n1/2, n2/2) blocks (stage 2), and a post-additions stage (stage 3) produces the output of C(n1, n2).

4.2 The One-Dimensional Shuffle-Free Algorithm

We will show how we can remove the shuffle permutations from equations 33 and 37, which represent the pre-additions and the post-additions, respectively. In this case, the effect of the shuffle permutations is implicitly embedded in the tensor expression. Even though the resulting circuits are topologically equivalent to the ones derived previously in Chapter 3, removing the shuffle permutations from the tensor formulations can simplify data movement and also simplify the specification of such circuits in high-level hardware description languages. Thus, the resulting shuffle-free networks are suited for implementation using CAD tools that employ automatic partitioning, placement, and routing [26].
Thus, the resulting shuffle-free networks are suited for implementation using CAD tools that employ automatic partitioning, placement, and routing [26].

First, recall that the \alpha - 1 permutations in the term \left(P_{3\cdot2^{\alpha-1},\,3\cdot2^{\alpha-2}}\right)^{\alpha-1} contained in equation 33 can be simplified by iteratively applying property 15, which yields

\left(P_{3\cdot2^{\alpha-1},\,3\cdot2^{\alpha-2}}\right)^{\alpha-1} = P_{3\cdot2^{\alpha-1},\,3}\,.    (46)

Also, substituting I_{2^{\alpha-1}} for B in property 13 and applying it to equation 33 results in

Q_n = P_{3\cdot2^{\alpha-1},\,3}\left(I_{2^{\alpha-1}} \otimes A\right)P_{2^{\alpha},\,2^{\alpha-1}} = A \otimes I_{2^{\alpha-1}}\,.    (47)

Note that the above expression for Q_n does not include any (explicit) permutations. Similarly, substituting I_{2^{\alpha}-1} for A in property 13 and applying it to 37 gives

R_n = R(\alpha)\left(P_{3(2^{\alpha}-1),\,3}\left(I_{2^{\alpha}-1} \otimes B\right)P_{3(2^{\alpha}-1),\,2^{\alpha}-1}\right) = R(\alpha)\left(B \otimes I_{2^{\alpha}-1}\right)\,.    (48)

In Appendix A, it is shown that R(\alpha) can be represented as a combination of identity matrices I_{2^{\alpha}-1} only, and hence the representation of R_n contains no permutations. The permutation-free recursive realization of the 8-point convolution using three 4-point convolutions is shown in Fig. 13.

Fig. 13 The permutation-free recursive realization of the 8-point convolution.

We assume that each addition requires one unit of time. A careful scrutiny of the realization shown in Fig. 13 reveals that the data movement through the computational stages encounters different amounts of delay. In particular, the computations involved in the Q_8 matrix affect only the middle 4-point convolution in the center stage. Thus, the top and the bottom 4-point convolutions can be computed one addition cycle ahead of the middle 4-point convolution. This means that only two 4-point convolvers are needed at a time. Therefore, through the use of multiplexers, either the top or the bottom 4-point convolver can be removed from the architecture without affecting the speed of the computations, as shown in Fig. 14.

Similarly, the realization of R_8 can be modified such that the first stage of three-input adders (realizing B \otimes I_7) can be replaced with two-input adders. This is because the outputs from the second stage of the demultiplexer (realized by the seven horizontal arrows with shaded background in Fig. 13) are now moved to a second stage of three-input adders (realizing R(3)), as shown in Fig. 14, instead of the two-input adders used previously. The permutation-free 1-d recursive convolution architecture is shown in Fig. 15.

Fig. 14 The permutation-free multiplexed recursive realization of the 8-point convolution.

Fig. 15 The 1-d permutation-free recursive convolution architecture.
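Because stride permutations such as P_{n,2} and the commutation rule referred to above as property 13 carry all of the routing in these formulations, a small numerical check is reassuring. The sketch below is our own illustration (the helper name `stride_perm` is ours, and the exact property numbering is assumed from context): it builds P_{n,s} as an explicit 0-1 matrix and verifies the commutation identity A \otimes B = P_{mn,m}(B \otimes A)P_{mn,n} for square A and B.

```python
import numpy as np

def stride_perm(n, s):
    """The stride-by-s permutation matrix P_{n,s} (s must divide n):
    it gathers x[0::s], x[1::s], ..., x[s-1::s] in that order."""
    P = np.zeros((n, n))
    for i in range(n):
        # input element i = (i % s) + (i // s)*s goes to output slot below
        P[(i % s) * (n // s) + i // s, i] = 1
    return P

# Commutation (property 13): A (x) B = P_{mn,m} (B (x) A) P_{mn,n}
m, n = 2, 3
A, B = np.random.rand(m, m), np.random.rand(n, n)
lhs = np.kron(A, B)
rhs = stride_perm(m * n, m) @ np.kron(B, A) @ stride_perm(m * n, n)
assert np.allclose(lhs, rhs)
```

The same helper realizes all of the P_{N,s} factors appearing in the equations of this chapter, since every shuffle used here is a stride permutation of this form.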
4.3 The Two-Dimensional Convolution Algorithm

Let n_1 = 2^{\alpha_1} and n_2 = 2^{\alpha_2}. Pratt [59] has shown that for an n_1 x n_2 input data image, the two-dimensional convolution output is given by

p = C_{n_1,n_2}\,f\,,    (49)

where C_{n_1,n_2} is the 2-d convolution transform matrix, and p and f are the output and input column-scanned vectors, respectively, of size n = n_1 n_2 each. Pratt has also shown that, for separable transforms, the matrix C_{n_1,n_2} can be decomposed into the tensor product

C_{n_1,n_2} = C(n_1) \otimes C(n_2)\,,    (50)

where C(n_1) and C(n_2) represent row and column 1-d convolution operators on f, respectively, as defined by equation 41. From equations 49 and 50, we can write the 2-d convolution in the form

q = \left(C(n_1) \otimes C(n_2)\right)f\,.    (51)

Using equation 41, the 2-d convolution algorithm can be expressed as a function of 1-d convolutions as follows:

q = \left(\left[R_{n_1}\left(I_3 \otimes C(n_1/2)\right)Q_{n_1}\right] \otimes \left[R_{n_2}\left(I_3 \otimes C(n_2/2)\right)Q_{n_2}\right]\right)f\,.    (52)

Applying property 11 leads to

q = \left[\left(R_{n_1} \otimes R_{n_2}\right)\left(\left(I_3 \otimes C(n_1/2)\right) \otimes \left(I_3 \otimes C(n_2/2)\right)\right)\left(Q_{n_1} \otimes Q_{n_2}\right)\right]f = \bar{R}\,\bar{C}_{n_1,n_2}\,\bar{Q}\,f\,,    (53)

where

\bar{R} = \left(R_{n_1} \otimes R_{n_2}\right),\quad \bar{C}_{n_1,n_2} = \left(\left(I_3 \otimes C(n_1/2)\right) \otimes \left(I_3 \otimes C(n_2/2)\right)\right),\quad \bar{Q} = \left(Q_{n_1} \otimes Q_{n_2}\right).    (54)

Note that the matrix \bar{C}_{n_1,n_2} contains the 1-d convolution matrices C(n_1/2) and C(n_2/2) in a tensor product expression. Our next task is to manipulate \bar{C}_{n_1,n_2} to derive a more efficient expression. We start by applying property 12, which gives

\bar{C}_{n_1,n_2} = \left(\left(I_3 \otimes C(n_1/2) \otimes I_3\right) \otimes C(n_2/2)\right)\,.    (55)

Applying property 14 yields

\bar{C}_{n_1,n_2} = \left(P_{9(2^{\alpha_1}-1),\,3(2^{\alpha_1}-1)}\left(I_9 \otimes C(2^{\alpha_1-1})\right)P_{9\cdot2^{\alpha_1-1},\,3}\right) \otimes C(2^{\alpha_2-1})\,.    (56)

Since the convolution matrix C(n/2) is of dimension [(n-1) x (n/2)], we can write C(n_2/2) as

C(n_2/2) = I_{n_2-1}\,C(n_2/2)\,I_{n_2/2}\,.    (57)

Substituting 57 in 56 and applying property 11,

\bar{C}_{n_1,n_2} = \left(P_{9(n_1-1),\,3(n_1-1)} \otimes I_{n_2-1}\right)\left(I_9 \otimes \left(C(n_1/2) \otimes C(n_2/2)\right)\right)\left(P_{9(n_1/2),\,3} \otimes I_{n_2/2}\right)\,.    (58)

Now, substituting equation 58 in equation 53 gives

q = R\left(I_9 \otimes C_{n_1/2,\,n_2/2}\right)Q\,f\,,    (59)

where

C_{n_1/2,\,n_2/2} = C(n_1/2) \otimes C(n_2/2)    (60)

is the lower order 2-d convolution matrix for an n_1/2 x n_2/2 input image, and

Q = \left(P_{9(n_1/2),\,3} \otimes I_{n_2/2}\right)\left(Q_{n_1} \otimes Q_{n_2}\right)    (61)

and

R = \left(R_{n_1} \otimes R_{n_2}\right)\left(P_{9(n_1-1),\,3(n_1-1)} \otimes I_{n_2-1}\right)    (62)

are the new 2-d pre- and post-additions, respectively. Equation 59 represents the recursive 2-d convolution algorithm. In this case we use 9 of the lower-order (n_1/2 x n_2/2) convolutions in parallel to generate the higher order (n_1 x n_2) convolution, as was shown in Fig. 12.
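Equations 49-51 can be checked numerically for a separable kernel before proceeding. The following sketch is our own illustration (`conv_matrix` is a hypothetical helper, and SciPy is assumed to be available): it builds C(n_1) \otimes C(n_2) from explicit 1-d convolution matrices and compares the result with an ordinary full 2-d convolution of the column-scanned image.

```python
import numpy as np
from scipy.signal import convolve2d

def conv_matrix(h, n):
    """(2n-1) x n matrix C such that C @ x == np.convolve(h, x)
    for a fixed length-n kernel h."""
    C = np.zeros((2 * n - 1, n))
    for j in range(n):
        C[j:j + n, j] = h
    return C

n1 = n2 = 4
h1, h2 = np.random.rand(n1), np.random.rand(n2)   # separable kernel outer(h2, h1)
F = np.random.rand(n2, n1)                        # input image

# Equation 51: q = (C(n1) (x) C(n2)) f, with f the column-scanned image.
C = np.kron(conv_matrix(h1, n1), conv_matrix(h2, n2))
q = C @ F.flatten(order='F')

# Reference: ordinary full 2-d convolution with the rank-1 kernel.
P_ref = convolve2d(np.outer(h2, h1), F)
assert np.allclose(q, P_ref.flatten(order='F'))
```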
4.3.1 Reducing the Complexity of the 2-d Convolution Algorithm

Now we will further manipulate equations 61 and 62 so that the 2-d convolution can be computed using only five of the lower-order convolutions. From 47 and 58, we can rewrite Q as

Q = \left(P_{9(n_1/2),\,3} \otimes I_{n_2/2}\right)\left(Q_{n_1} \otimes Q_{n_2}\right) = \left(P_{9(n_1/2),\,3} \otimes I_{n_2/2}\right)\left[\left(A \otimes I_{n_1/2}\right) \otimes \left(A \otimes I_{n_2/2}\right)\right]\,.    (63)

Also, from properties 11 and 12, we have

Q = \left[P_{9n_1/2,\,3}\left(A \otimes I_{n_1/2} \otimes A\right)\right] \otimes I_{n_2/2} = \left[P_{9n_1/2,\,3}\left(A \otimes I_{3n_1/2}\right)\left(I_{n_1} \otimes A\right)\right] \otimes I_{n_2/2}\,.    (64)

Using property 13, the parallel form (I_{n_1} \otimes A) in 64 can be modified to the vector form

\left(I_{n_1} \otimes A\right) = P_{3n_1,\,n_1}\left(A \otimes I_{n_1}\right)P_{2n_1,\,2}\,.    (65)

Also, from property 13, we can write the vector form (A \otimes I_{3n_1/2}) in a parallel form as

\left(A \otimes I_{3n_1/2}\right) = P_{9n_1/2,\,3}\left(I_{3n_1/2} \otimes A\right)P_{3n_1,\,3n_1/2}\,.    (66)

Substituting 65 and 66 in 64 and applying property 16, we have

P_{9n_1/2,\,3}\,P_{9n_1/2,\,3n_1/2} = I_{9n_1/2}\,,\qquad P_{3n_1,\,3}\,P_{3n_1,\,n_1} = I_{3n_1}\,.    (67)

From equations 65, 66, and 67, we can write Q in the form

Q = \left[\left(I_3 \otimes \left(A \otimes I_{n_1/2}\right)\right)\left(A \otimes I_{n_1}\right)P_{2n_1,\,2}\right] \otimes I_{n_2/2} = \left[\left(I_3 \otimes Q_{n_1}\right)\left(A \otimes I_{n_1}\right)P_{2n_1,\,2}\right] \otimes I_{n_2/2}\,.    (68)

The vector form of 68 can be modified to the parallel form using property 13, and hence we can write Q in the final form

Q = P_{9(n_1/2)(n_2/2),\,9(n_1/2)}\left\{I_{n_2/2} \otimes \left[\left(I_3 \otimes Q_{n_1}\right)\left(A \otimes I_{n_1}\right)P_{2n_1,\,2}\right]\right\}P_{n_1n_2,\,n_2/2}\,,    (69)

where the two outer permutations can be further decomposed (properties 15-17) into the dilated forms realized in Fig. 16.

The complete realization of Q is shown in Fig. 16, where one can observe that the computations involved in the upper permutations (realized by the white rectangle) are one addition-time period ahead of those involved in the middle permutations (realized by the gray rectangle). Therefore, both computations can be multiplexed, thus reducing the number of outputs from 9(n_1/2 x n_2/2) to 6(n_1/2 x n_2/2) and the number of Q_{n_1} blocks from 3(n_2/2) to 2(n_2/2), as shown in Fig. 17. Moreover, since the computations within the lower permutations (realized by the black-and-white dotted rectangle) are similar to those of the 1-d convolution explained in section 4.2 and shown in Fig. 13, their computations can also be multiplexed in a way similar to that of Fig. 14, thus reducing their output from 3(n_1/2 x n_2/2) to 2(n_1/2 x n_2/2). Consequently, the total number of multiplexed outputs from Q is reduced to 5(n_1/2 x n_2/2). Note that this is equivalent to reducing the number of lower-order 2-d convolutions in the core from 9 to 5, a computational saving of 44%. Note also that, in the multiplexed architecture, the original speed of computation is not affected. Additional multiplexing stages can be inserted to further reduce the number of computations, but this will affect the speed of the original (non-multiplexed) algorithm.

Fig. 16 The complete realization of Q.

Fig. 17 The multiplexed realization of Q.

A multiplexed version of the computation of the matrix R can also be derived using a similar procedure. Thus, from equations 48 and 62, we can write R in the form
R = \left(R_{n_1} \otimes R_{n_2}\right)\left(P_{9(n_1-1),\,3(n_1-1)} \otimes I_{n_2-1}\right) = \left[\left(R(\alpha_1)\left(B \otimes I_{n_1-1}\right)\right) \otimes \left(R(\alpha_2)\left(B \otimes I_{n_2-1}\right)\right)\right]\left(P_{9(n_1-1),\,3(n_1-1)} \otimes I_{n_2-1}\right)\,,    (70)

which, using property 11, can be modified to

R = \left(R(\alpha_1) \otimes R(\alpha_2)\right)\left(\left(B \otimes I_{n_1-1}\right) \otimes \left(B \otimes I_{n_2-1}\right)\right)\left(P_{9(n_1-1),\,3(n_1-1)} \otimes I_{n_2-1}\right)\,.    (71)

Applying property 12, we have

R = \left(R(\alpha_1) \otimes R(\alpha_2)\right)\left[\left(\left(B \otimes I_{n_1-1} \otimes B\right)P_{9(n_1-1),\,3(n_1-1)}\right) \otimes I_{n_2-1}\right] = \left(R(\alpha_1) \otimes R(\alpha_2)\right)\left[\left(\left(B \otimes I_{3(n_1-1)}\right)\left(I_{3(n_1-1)} \otimes B\right)P_{9(n_1-1),\,3(n_1-1)}\right) \otimes I_{n_2-1}\right]\,,    (72)

which, using property 13, can be reformulated in the parallel form

R = \left(R(\alpha_1) \otimes R(\alpha_2)\right)P_{9(n_1-1)(n_2-1),\,9(n_1-1)}\left[I_{n_2-1} \otimes \left(\left(B \otimes I_{3(n_1-1)}\right)\left(I_{3(n_1-1)} \otimes B\right)P_{9(n_1-1),\,3(n_1-1)}\right)\right]P_{9(n_1-1)(n_2-1),\,n_2-1}\,.    (73)

Using an analysis similar to that used earlier in deriving R_n, the three-input adders can be moved from the stage (B \otimes I_{3(n_1-1)}) to replace the two-input adders at the stage (R(\alpha_1) \otimes R(\alpha_2)) of the multiplexed realization, thus reducing the number of B blocks from 3(n_1 - 1)(n_2 - 1) to 2(n_1 - 1)(n_2 - 1).

4.3.2 The Computational Complexity of the 2-d Convolution Algorithm

Let MU(n x n) be the number of multiplications needed to compute the (n x n) 2-d convolution. From equation 59,

MU(n \times n) = 9\,MU(n/2 \times n/2)\,.    (74)

This number is reduced to

MU(n \times n) = 5\,MU(n/2 \times n/2)    (75)

using our multiplexed forms, as explained in section 4.3.1. It should be mentioned that the computational complexity of our proposed algorithm depends on the computations involved in the core, MU(n/2 x n/2).

It is possible to use our proposed convolution algorithm for the design of digital filters by zero-padding both inputs to make them the same size and eliminate aliasing. However, in most practical situations the image to be filtered is considerably larger in size (such as 512 x 512) than the support of the filter impulse response (e.g., 21 x 21). As a result, evaluating the output using the zero-padding method will be computationally intensive. Sectioning schemes may be used to overcome this problem. The topics of digital filter design and sectioning schemes for digital filters are beyond the scope of this thesis; more details can be found in [65].

The number of multiplications in our proposed algorithms based on [53] as a core, compared to the direct method and the FFT method, is shown in Table 4 [65]. The table shows a significant reduction in the number of multipliers of the proposed multiplexed algorithm over the direct and FFT methods.

n x n | The Direct Multiplication | The FFT | The proposed 2-d algorithm | The proposed 2-d multiplexed algorithm
8 x 8 | 162 | 96 | 104 | 58
64 x 64 | 1147 | 144 | 140 | 78
128 x 128 | 4246 | 156 | 148 | 83

Table 4 Comparison of the number of multiplications required for the 2-d convolution (kernel of dimension 8 x 8).
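The effect of recurrences 74 and 75 on the operation count is easy to tabulate. The short sketch below is our own illustration (the function name and its arguments are hypothetical, and the core cost is left abstract): with a single halving level it reproduces the 1 - 5/9 ≈ 44% saving quoted in section 4.3.1, and it shows how the saving compounds as the recursion deepens.

```python
def mu(n, core_n=8, core_cost=1, branch=9):
    """Recurrences (74)-(75): each doubling of the image size costs
    `branch` copies of the half-size 2-d convolution, down to a core of
    size core_n whose cost depends on the algorithm chosen for the core."""
    cost, size = core_cost, core_n
    while size < n:
        cost *= branch
        size *= 2
    return cost

# Saving of the multiplexed form (5 cores) over the plain form (9 cores);
# at one level this is 1 - 5/9 = 0.444..., and it grows with depth.
for n in (16, 32, 64):
    print(n, 1 - mu(n, branch=5) / mu(n, branch=9))
```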
4.4 Multi-Dimensional Convolution

We can extend the derivation of the 2-d convolution algorithm in section 4.3 to the m-d case, for any m > 2. From equations 48, 49, and 50, we can write the m-d convolution in the form

q = \left(C(n_1) \otimes C(n_2) \otimes \cdots \otimes C(n_m)\right)f = \left(\bigotimes_{i=1}^{m} C(n_i)\right)f\,.    (76)

Using properties 11-17 and following the derivation steps in section 4.3, we can write equation 76 in the form

q = R\left(I_{3^m} \otimes \left(\bigotimes_{i=1}^{m} C(n_i/2)\right)\right)Q\,f\,,    (77)

where

Q = \left(\prod_{i=1}^{m-1}\left(P_{u_1,\,u_2} \otimes I_{u_3}\right)\right)\left(\bigotimes_{i=1}^{m} Q_{n_i}\right)    (78)

and

R = \left(\bigotimes_{i=1}^{m} R_{n_i}\right)\left(\prod_{i=1}^{m-1}\left(P_{w_1,\,w_2} \otimes I_{w_3}\right)\right)\,,    (79)

Q_{n_i} and R_{n_i} are defined in equations 47 and 48, respectively, and the dimension parameters u_1, u_2, u_3, w_1, w_2, and w_3 of each dilated stride permutation (equations 80-83) are products of powers of 2 and 3 determined by the operand sizes n_1, ..., n_m, exactly as in the 2-d derivation of section 4.3.

Equation 77 represents the recursive m-d convolution computation. In this case, we use 3^m of the lower-order architectures in parallel. Analysis similar to that used in deriving Q and R in section 4.3 can be applied here, and multiplexed formulations can likewise be derived for both Q and R to reduce the number of lower-order convolution architectures used to less than (2/3)(3^m).

4.5 Chapter Summary

In this chapter, we showed that a higher-order m-dimensional convolution can be generated recursively by combining the computation of 3^m identical lower-order (smaller size) convolution computations of the same dimension. We also showed that, with careful design, this number can be reduced to less than (2/3)(3^m). The proposed networks have very small depth and contain only a single stage of multipliers, while all other stages contain adders only.

5 Multi-Dimensional DCT

In this chapter, we present a novel recursive algorithm for generating a higher-order m-dimensional DCT by combining the computation of 2^m identical lower-order (smaller size) DCT architectures [21]. Basically, a multi-dimensional DCT computation can be constructed from exactly one stage of smaller DCT computations of the same dimension. This is useful for both hardware and software solutions, in which a very efficient smaller size m-d DCT core has been developed and a larger DCT computation is required.

5.1 DCT for Video Processing

The Discrete Cosine Transform (DCT) is a fundamental computation that arises in many important image and video processing applications [20]. Because it results in a high image compression ratio [37], DCT-based image coding became the basis for most contemporary image and video compression standards such as MPEG, JPEG, and H.261 [9], [10]. The basic computation of a DCT-based system is the transformation of an 8 x 8 image block from the spatial domain to the DCT domain. The choice of the DCT in the standards is motivated by the many benefits it offers [20]:

1. For highly correlated image data, the DCT compaction efficiency is close to that obtained with the optimum transform, namely the KLT (Karhunen-Loeve transform).

2. The DCT is an orthogonal transform. Thus, the computation of the forward and the inverse DCT are nearly the same, and from a hardware implementation viewpoint, the same computation unit can be used for both the forward and the inverse DCT.

3. An important property of the 2-d DCT and IDCT is separability. This implies that the 2-d DCT can be obtained by first performing a 1-d DCT on the rows followed by a 1-d DCT on the columns.
From an implementation viewpoint, this row-column approach may also simplify the hardware requirements at the expense of a slight increase in the overall operation count. This property can be extended to the general m-d DCT.

4. The DCT computations can be performed in real arithmetic with fast algorithms that require fewer operations than the original matrix-vector computations. Fast algorithms are desirable from both a hardware and a software implementation viewpoint. These fast algorithms also tend to be parallelizable and thus can be efficiently implemented on parallel architectures, as will be shown throughout this chapter.

The MPEG-1 standard for motion video requires the 2-d DCT to be computed over each 8 x 8 image block within a frame size of 352 x 240 pixels in a video stream processed at a real-time rate of 30 fps. These computations consume 33% of the encoder computational load, and around 22% of the decoder load is consumed by the IDCT, as shown in Table 5 [37]. In the JPEG standard for still image compression, the computation of the DCT demands close to 45% of the overall encoder computation time. Clearly, such a computationally intensive task must be implemented in fast hardware to meet real-time requirements.

Decoding Function | Load (%)
Bit stream header parsing | 0.44
Huffman decoding and inverse quantization | 19
8 x 8 IDCT | 22.1
Motion compensation | 38.64
Color transformation and display | 19.82

Table 5 The distribution of the computational load in MPEG-1 decoding.

5.2 Previous Related Work

Although a significant amount of work has appeared on computing the recursive 1-d DCT [18], [38], [57], [66], [69], [76], much fewer results are available on truly recursive 2-d algorithms [16], [47], [60]. The work presented in [60] is based on concatenating columns (or rows) of the 2-d input to reduce it to a 1-d array. The transform matrix is then factorized in block form, but only part of the factorization is truly recursive, which makes the algorithm less than optimal in efficiency and not modular in structure [76]. Although the algorithm presented in [16] has an improved recursive architecture, it works directly on the 2-d data and therefore has an irregular structure.

Hardware implementations of the DCT are abundant, including systolic arrays [31], [68], [75], parallel architectures based on distributed arithmetic [34], [41], bit-serial conventional row-column decomposition [8], [12], [43], [74], and full custom VLSI architectures [44]. Other efficient approaches that work directly on the 2-d data to decrease the number of multiplications and additions are based on direct polynomial transformations [14], [15], [70], extensive manipulation of sparse matrices [42], [73], or the use of 1-d DCT's [39], [40]. It should be noted that although these approaches are very computationally efficient (for example, the algorithm presented in [73] requires only 94 multiplications for an 8 x 8 DCT and 54 multiplications for the scaled 8 x 8 DCT), they trade off this performance for irregular architectures that may not be suitable for VLSI implementation, especially for larger problem sizes. In VLSI, the complexity of data movement has a significant impact on wire area and subsequently on the overall performance of the VLSI layout and delay.

In relation to the above, this chapter presents a new recursive formulation for the 2-d DCT which results in a new class of scalable parallel architectures for computing the transform.
Our main contribution is the derivation of a truly 2-d recursive formulation for the DCT which is useful for the modularization and parallelization of the resulting computation. The main result reported in this chapter shows that a large 2-d DCT computation can be decomposed into a single parallel stage of four smaller 2-d DCT's with one additional pre-processing stage and one post-processing stage, as shown in Fig. 18. Our methodology can be generalized such that an m-dimensional DCT can be constructed using 2^m smaller size m-d DCT's.

The majority of previous approaches focused on expressing an m-d DCT in terms of m consecutive stages of 1-d DCT's. It should be mentioned that the 2-d DCT algorithm based on tensor product decomposition previously proposed in [55] scales the individual components constituting the 1-d DCT to obtain the 2-d DCT; thus, it is not a truly 2-d decomposition. Also, the recursive 2-d DCT algorithm presented in [47] is a generalization of the result reported in [18] to the 2-d case. Compared to the algorithm proposed in this chapter, the 2-d method proposed in [47] requires multiplications in both the pre- and post-processing stages, while our algorithm requires multiplications in the pre-processing stage only. Moreover, while our algorithm uses smaller-size DCT's to compute the large DCT, the method in [47] employs smaller blocks that are not a complete DCT computation. Thus, our algorithm is truly recursive, modular in structure, and can be generalized easily to the multi-dimensional case. Modularity in our case means that any suitable "small" DCT algorithm can be used in the core computations.

One immediate outcome of our results is the true "scalability" of the DCT computation. Basically, a multi-dimensional DCT computation can be constructed from exactly one stage of smaller DCT computations of the same dimension. This is useful for both hardware and software solutions, in which a very efficient smaller size m-d DCT core has been developed [14], [40], [73], and a larger DCT computation is required. In such a situation, our approach can use these algorithms as the core stage in computing the larger size DCT by simply adding the pre- and post-processing stages. Regularity and modularity are essential factors for facilitating design automation and the implementation of parallel circuits in maturing and emerging programmable logic devices and ASIC environments, and our techniques contribute significantly to such design approaches.

Fig. 18 The proposed recursive architecture for computing the 2-d DCT.

5.3 Overview of the DCT

From the definition of the DCT [76], for an input vector {x_0, x_1, x_2, ..., x_{n-1}}, the DCT output vector {X_0, X_1, X_2, ..., X_{n-1}} is given by the relation

X_k = \sigma_k \sum_{i=0}^{n-1} x_i \cos\left(\frac{\pi(2i+1)k}{2n}\right)\,,    (84)

where \sigma_0 = \frac{1}{\sqrt{2}} and \sigma_k = 1 for k \geq 1. In matrix form, the DCT can be written as

X = T_n\,x\,,    (85)

where X and x are the output and input vectors of length n each, and T_n is an n x n coefficient matrix.
For example, T_4 has the form

T_4 = \begin{bmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \cos\frac{\pi}{8} & \cos\frac{3\pi}{8} & \cos\frac{5\pi}{8} & \cos\frac{7\pi}{8} \\ \cos\frac{\pi}{4} & \cos\frac{3\pi}{4} & \cos\frac{5\pi}{4} & \cos\frac{7\pi}{4} \\ \cos\frac{3\pi}{8} & \cos\frac{9\pi}{8} & \cos\frac{15\pi}{8} & \cos\frac{21\pi}{8} \end{bmatrix}\,.    (86)

Based on the 1-d recursive algorithm proposed by Hou [18], Cvetkovic et al. [57] presented a modified algorithm that has a smaller number of adders and no shift registers, and can be represented by the following matrix-vector product:

\hat{X} = \begin{bmatrix} X_p \\ X_r \end{bmatrix} = \begin{bmatrix} T_m & T_m \\ L_m T_m C_m & -L_m T_m C_m \end{bmatrix}\hat{x}\,,\qquad m = \frac{n}{2}\,,    (87)

where L_m is the m x m lower bidiagonal matrix with 1-entries on the main diagonal and the first subdiagonal (see equation 105 for L_2),

C_m = \mathrm{diag}\left(\frac{1}{2\cos\phi_M}\right)\,,    (88)

\phi_M = \frac{(4M+1)\pi}{2n}\,,\qquad M = 0, 1, 2, \ldots, m-1\,,    (89)

\hat{X} = \begin{bmatrix} X_p \\ X_r \end{bmatrix} = \left[X_0\; X_2\; X_4\; \cdots\; X_{N-2}\;\; X_1\; X_3\; X_5\; \cdots\; X_{N-1}\right]^T\,,    (90)

\hat{x} = \begin{bmatrix} x_p \\ x_r \end{bmatrix} = \left[x_0\; x_2\; \cdots\; x_{N-2}\;\; x_{N-1}\; x_{N-3}\; \cdots\; x_1\right]^T\,.    (91)
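Equations 84-86 translate directly into a few lines of code. The sketch below is our own illustration (`dct_matrix` is a hypothetical helper, and SciPy is assumed to be available): it builds T_n with the scaling used in equation 84, confirms that \sqrt{2/n}\,T_n is the orthonormal DCT-II (benefit 2 of section 5.1), and checks it against a library DCT.

```python
import numpy as np
from scipy.fft import dct

def dct_matrix(n):
    """T_n from equations 84-85: sigma_0 = 1/sqrt(2), sigma_k = 1 otherwise
    (the global sqrt(2/n) scale factor is omitted, as in equation 86)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    T = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    T[0, :] *= 1 / np.sqrt(2)
    return T

n, x = 8, np.random.rand(8)
T = dct_matrix(n)
# Up to the sqrt(2/n) scale, T_n is the orthonormal DCT-II.
assert np.allclose(np.sqrt(2 / n) * T @ x, dct(x, norm='ortho'))
assert np.allclose((2 / n) * T @ T.T, np.eye(n))   # orthogonality
```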
5.4 The One-Dimensional DCT in a Tensor-Product Form

The vector \hat{X} defined by equation 90 can be expressed as follows:

\hat{X} = P_{n,2}\,X\,.    (92)

Also, the vector \hat{x} defined by equation 91 can be written as

\hat{x} = \left(I_{n/2} \oplus J_{n/2}\right)P_{n,2}\,x\,,    (93)

where I_{n/2} is the identity matrix of dimension n/2, \oplus is the direct sum operator defined in [46], and J_{n/2} is the exchange matrix (opposite diagonal matrix) of order n/2,

J_{n/2} = \begin{bmatrix} 0 & 0 & \cdots & 0 & 1 \\ 0 & 0 & \cdots & 1 & 0 \\ \vdots & & & & \vdots \\ 0 & 1 & \cdots & 0 & 0 \\ 1 & 0 & \cdots & 0 & 0 \end{bmatrix}\,,    (94)

and

V_n = \left(I_{n/2} \oplus J_{n/2}\right)P_{n,2}\,.    (95)

From equations 85, 92, and 93, and denoting by \tilde{T}_n the block matrix of equation 87, it follows that

\hat{T}_n = \tilde{T}_n\,V_n\,,    (96)

and from equations 87 and 92, it follows that

\hat{T}_n = P_{n,2}\,T_n\,.    (97)

From property 16 and substituting equation 96 in 97, we obtain

T_n = P_{n,n/2}\,\tilde{T}_n\,V_n\,,    (98)

which can be decomposed into the following matrix product:

T_n = P_{n,n/2}\begin{bmatrix} I_{n/2} & 0 \\ 0 & L_{n/2}\end{bmatrix}\begin{bmatrix} \tilde{T}_{n/2} & 0 \\ 0 & \tilde{T}_{n/2}\end{bmatrix}\begin{bmatrix} I_{n/2} & 0 \\ 0 & C_{n/2}\end{bmatrix}\begin{bmatrix} I_{n/2} & I_{n/2} \\ I_{n/2} & -I_{n/2}\end{bmatrix}V_n\,,    (99)

or in the tensor product form

T_n = P_{n,n/2}\left(I_{n/2} \oplus L_{n/2}\right)\left(I_2 \otimes \tilde{T}_{n/2}\right)\left(I_{n/2} \oplus C_{n/2}\right)\left(F_2 \otimes I_{n/2}\right)V_n\,.    (100)

Applying equation 96 to the half-size blocks (\tilde{T}_{n/2} = T_{n/2}V_{n/2}^{-1}) yields

T_n = P_{n,n/2}\left(I_{n/2} \oplus L_{n/2}\right)\left(I_2 \otimes T_{n/2}V_{n/2}^{-1}\right)\left(I_{n/2} \oplus C_{n/2}\right)\left(F_2 \otimes I_{n/2}\right)V_n\,,    (101)

which can be transformed, using property 11, into the simpler form

T_n = P_{n,n/2}\left(I_{n/2} \oplus L_{n/2}\right)\left(I_2 \otimes T_{n/2}\right)\left(I_2 \otimes V_{n/2}^{-1}\right)\left(I_{n/2} \oplus C_{n/2}\right)\left(F_2 \otimes I_{n/2}\right)V_n = R_n\left(I_2 \otimes T_{n/2}\right)Q_n\,,    (102)

where

R_n = P_{n,n/2}\left(I_{n/2} \oplus L_{n/2}\right)\,,    (103)

Q_n = \left(I_2 \otimes V_{n/2}^{-1}\right)\left(I_{n/2} \oplus C_{n/2}\right)\left(F_2 \otimes I_{n/2}\right)V_n\,.    (104)

Equation 102 represents the recursive tensor product formulation for the 1-d DCT, in which the core computation (I_2 \otimes T_{n/2}) consists of a parallel stage of two identical smaller DCT computations, each of size n/2, and Q_n and R_n serve as pre- and post-processing (glue) structures that combine the two lower-order DCT modules to construct the higher order DCT of size n, as shown in Fig. 19. It should be noted that the computation of R_n involves additions only, while the computation of Q_n involves additions and multiplications. For example, the realization of R_4 for a 4-point input (n = 4) is shown in Fig. 20, where

L_2 = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}\,.    (105)

Fig. 19 The 1-d DCT recursive architecture.

Fig. 20 The realization of R_4.

The computational structure of Q_n can be simplified by further manipulation of V_n as defined in equation 95. Since the three matrices I_{n/2}, J_{n/2}, and P_{n,2} are unitary, we can write V_{n/2}^{-1} as

V_{n/2}^{-1} = \left[\left(I_{n/4} \oplus J_{n/4}\right)P_{n/2,2}\right]^{-1} = P_{n/2,n/4}\left(I_{n/4} \oplus J_{n/4}\right)\,.    (106)

Using property 13, the term (F_2 \otimes I_{n/2}) in equation 104 can be converted to the following parallel form:

F_2 \otimes I_{n/2} = P_{n,2}\left(I_{n/2} \otimes F_2\right)P_{n,n/2}\,.    (107)

Thus, for a 4-point input vector, Q_4 has the form

Q_4 = \left[I_2 \otimes V_2^{-1}\right]\left(I_2 \oplus C_2\right)P_{4,2}\left(I_2 \otimes F_2\right)P_{4,2}V_4\,,    (108)

where V_4 = (I_2 \oplus J_2)P_{4,2} and V_2^{-1} = I_2. Therefore,

Q_4 = \left(I_2 \oplus C_2\right)P_{4,2}\left(I_2 \otimes F_2\right)P_{4,2}\left(I_2 \oplus J_2\right)P_{4,2}\,.    (109)

The realization of Q_4 for a 4-point input (n = 4) is shown in Fig. 21, where, from equation 89, C_0 = \frac{1}{2\cos(\pi/8)} and C_1 = \frac{1}{2\cos(5\pi/8)}.

Similar results, albeit in different formulations, have been reported previously for computing the 1-d DCT [18], [38], [57], [69]. However, the tensor product form proposed in equation 102 can be translated directly into hardware realization, as shown in Figs. 19, 20, and 21.

Fig. 21 The realization of Q_4, where C_0 = \frac{1}{2\cos(\pi/8)} and C_1 = \frac{1}{2\cos(5\pi/8)}.

5.5 Truly Two-Dimensional Recursive Architectures for the DCT

This section develops two recursive methodologies for realizing 2-d DCT computations from smaller DCT computations. First, we present a direct method for realizing the 2-d DCT using the conventional row-column decomposition of the 1-d DCT in a tensor product form. The second methodology provides a truly 2-d recursive realization. The resulting structures employ one stage of smaller 2-d DCT's to realize the large 2-d DCT. This method provides significant advantages over other architectures reported previously and, as far as we know, this is the first truly 2-d recursive formulation of the DCT.

The 2-d DCT is defined as [20]

X(u,v) = \sigma_u\sigma_v\sum_{m=0}^{n_1-1}\sum_{n=0}^{n_2-1} x(m,n)\,\cos\frac{(2m+1)u\pi}{2n_1}\,\cos\frac{(2n+1)v\pi}{2n_2}\,,\qquad u = 0,\ldots,n_1-1\,,\quad v = 0,\ldots,n_2-1\,,    (110)

where \sigma_l = \frac{1}{\sqrt{2}} for l = 0 and \sigma_l = 1 for l > 0. Since the DCT matrix is separable [10], the 2-d DCT for an image of dimension n_1 x n_2 can be computed by a stage of n_2 parallel 1-d DCT computations on n_1 points each, followed by another stage of n_1 parallel 1-d DCT computations on n_2 points each. This can be represented by the matrix-vector form

X = T_{n_1,n_2}\,x\,,    (111)

where T_{n_1,n_2} is the 2-d DCT transform matrix for an n_1 x n_2 image, and X and x are the output and input column-scanned vectors, respectively. For separable transforms, the matrix T_{n_1,n_2} can be represented by the tensor product

T_{n_1,n_2} = T_{n_1} \otimes T_{n_2}\,,    (112)

where T_{n_1} and T_{n_2} are the row and column 1-d DCT operators on x, respectively. By substituting equation 112 in equation 111, we have

X = \left(T_{n_1} \otimes T_{n_2}\right)x\,,    (113)

which, by using property 13, can be expressed in a parallel form as

X = P_{n_1n_2,\,n_1}\left(I_{n_2} \otimes T_{n_1}\right)P_{n_1n_2,\,n_2}\left(I_{n_1} \otimes T_{n_2}\right)x\,.    (114)

The direct realization of equation 114 for a 4 x 4 (n_1 = n_2 = 4) input image is shown in Fig. 22.

Fig. 22 The direct realization of the 2-d DCT for a 4 x 4 input image (n_1 = n_2 = 4).
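As a quick numerical check on the separability identities just used (equations 112-114), the following sketch (our own illustration; `dct_matrix` is the hypothetical helper from the section 5.3 sketch, repeated here so the block is self-contained) verifies that (T_{n_1} \otimes T_{n_2}) applied to the column-scanned image agrees with the row-column computation.

```python
import numpy as np

def dct_matrix(n):
    """1-d DCT matrix T_n (equation 85), thesis scaling (no sqrt(2/n))."""
    k, i = np.arange(n)[:, None], np.arange(n)[None, :]
    T = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    T[0, :] *= 1 / np.sqrt(2)
    return T

n1 = n2 = 4
T1, T2 = dct_matrix(n1), dct_matrix(n2)
F = np.random.rand(n2, n1)                       # input image

# Equation 113: X = (T_n1 (x) T_n2) x, with x the column-scanned image ...
lhs = np.kron(T1, T2) @ F.flatten(order='F')
# ... equals the row-column form: 1-d DCTs on the columns, then on the rows.
rhs = (T2 @ F @ T1.T).flatten(order='F')
assert np.allclose(lhs, rhs)
```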
Now we will derive a truly 2-d recursive formulation of the DCT by further manipulation of equations 113 and 114. Substituting equation 102 in equation 113, we have

X = \left(\left[R_{n_1}\left(I_2 \otimes T_{n_1/2}\right)Q_{n_1}\right] \otimes \left[R_{n_2}\left(I_2 \otimes T_{n_2/2}\right)Q_{n_2}\right]\right)x\,.    (115)

From property 11, we can write X as

X = \left[\left(R_{n_1} \otimes R_{n_2}\right)\left(\left(I_2 \otimes T_{n_1/2}\right) \otimes \left(I_2 \otimes T_{n_2/2}\right)\right)\left(Q_{n_1} \otimes Q_{n_2}\right)\right]x = \left[\bar{R}_{n_1,n_2}\,\bar{T}_{n_1,n_2}\,\bar{Q}_{n_1,n_2}\right]x\,,    (116)

where

\bar{R}_{n_1,n_2} = \left(R_{n_1} \otimes R_{n_2}\right)\,,    (117)

\bar{T}_{n_1,n_2} = \left(\left(I_2 \otimes T_{n_1/2}\right) \otimes \left(I_2 \otimes T_{n_2/2}\right)\right)\,,    (118)

\bar{Q}_{n_1,n_2} = \left(Q_{n_1} \otimes Q_{n_2}\right)\,.    (119)

Now, from property 12, we can write equation 118 in the form

\bar{T}_{n_1,n_2} = \left(\left(I_2 \otimes T_{n_1/2} \otimes I_2\right) \otimes T_{n_2/2}\right)\,.    (120)

Applying properties 11 and 14 to equation 120, we have

\bar{T}_{n_1,n_2} = \left(P_{2n_1,n_1}\left(I_4 \otimes T_{n_1/2}\right)P_{2n_1,2}\right) \otimes T_{n_2/2} = \left(P_{2n_1,n_1} \otimes I_{n_2/2}\right)\left(I_4 \otimes T_{n_1/2,\,n_2/2}\right)\left(P_{2n_1,2} \otimes I_{n_2/2}\right)\,,    (121)

where

T_{n_1/2,\,n_2/2} = T_{n_1/2} \otimes T_{n_2/2}\,.    (122)

Finally, substituting equation 121 in equation 116, we have

X = R_{n_1,n_2}\left(I_4 \otimes T_{n_1/2,\,n_2/2}\right)Q_{n_1,n_2}\,x\,,    (123)

where

Q_{n_1,n_2} = \left(P_{2n_1,2} \otimes I_{n_2/2}\right)\bar{Q}_{n_1,n_2}    (124)

and

R_{n_1,n_2} = \bar{R}_{n_1,n_2}\left(P_{2n_1,n_1} \otimes I_{n_2/2}\right)\,.    (125)

Equation 123 represents the truly recursive two-dimensional DCT, in which Q_{n_1,n_2} and R_{n_1,n_2} are the pre- and post-processing glue structures, respectively, that combine 2^2 identical lower-order 2-d DCT modules (T_{n_1/2,n_2/2}), each of size n_1/2 x n_2/2, in parallel to construct the higher order 2-d DCT of size n_1 x n_2.

In the remaining part of this section, we will further manipulate equations 124 and 125 to obtain parallel forms for Q_{n_1,n_2} and R_{n_1,n_2} that are more suitable for direct VLSI realization. For simplicity, let n_1 = n_2 = n; then we can rewrite equation 125 as

R_{n,n} = \left(R_n \otimes R_n\right)\left(P_{2n,n} \otimes I_{n/2}\right)\,.    (126)

Applying property 13, we get

R_n \otimes R_n = P_{n^2,n}\left(I_n \otimes R_n\right)P_{n^2,n}\left(I_n \otimes R_n\right)\,.    (127)

Substituting equation 127 in equation 126, we have

R_{n,n} = \left[P_{n^2,n}\left(I_n \otimes R_n\right)P_{n^2,n}\left(I_n \otimes R_n\right)\right]\left(P_{2n,n} \otimes I_{n/2}\right) = A_n^2\left(P_{2n,n} \otimes I_{n/2}\right)\,,    (128)

where

A_n = P_{n^2,n}\left(I_n \otimes R_n\right)\,.    (129)

Equation 128 represents the post-addition stage, in which the term (P_{2n,n} \otimes I_{n/2}) represents the permutation P_{2n,n} dilated by n/2 [28]. The second term, A_n^2, represents the cascade of two identical stages, each realized by n parallel copies of R_n followed by the permutation P_{n^2,n}. The realization of R for a 4 x 4 input image (n = 4) is shown in Fig. 23, while the realization of R_4 was shown in Fig. 20.

Fig. 23 The realization of R for a 4 x 4 input image (n = 4).

Similarly, Q_{n,n} can be written in the form

Q_{n,n} = \left(P_{2n,2} \otimes I_{n/2}\right)\left(Q_n \otimes Q_n\right) = \left(P_{2n,2} \otimes I_{n/2}\right)B_n^2\,,    (130)

where

B_n = P_{n^2,n}\left(I_n \otimes Q_n\right)\,.    (131)

Equation 130 represents the pre-processing stage, in which B_n^2 represents the cascade of two identical stages, each realized by n parallel copies of Q_n followed by the permutation P_{n^2,n}. The term (P_{2n,2} \otimes I_{n/2}) represents the permutation P_{2n,2} dilated by n/2 [28]. The realization of Q for a 4 x 4 input image (n = 4) is shown in Fig. 24, while the realization of Q_4 was shown in Fig. 21. The complete recursive realization of the 2-d DCT for a 4 x 4 input image (n = 4) is shown in Fig. 25.

Fig. 24 The realization of Q for a 4 x 4 input image (n = 4).
Fig. 25 The complete recursive realization of the 2-d DCT for a 4 x 4 input image (n = 4).

5.6 Computation Complexity of the 2-d DCT

Let MU(n x n) be the number of multiplications needed to compute the (n x n) 2-d DCT. From equation 123,

MU(n \times n) = 4\,MU(n/2 \times n/2) + n^2\,.    (132)

It should be mentioned that the computational complexity of our proposed algorithm depends on the computations involved in the core, MU(n/2 x n/2). The number of multiplications in our proposed algorithm based on [40] as a core, compared to other algorithms, is shown in Table 6.

n x n | Row-Column [38] | C. Ma [47] | M. Vetterli [70] | P. Z. Lee [19] | The proposed algorithm based on [40] as a core
4 x 4 | 32 | 24 | 16 | 20 | 16
8 x 8 | 192 | 144 | 104 | 128 | 128
16 x 16 | 1024 | 768 | 568 | 704 | 768
32 x 32 | 5120 | 3804 | 2840 | 3584 | 3584

Table 6 Comparison of the number of multiplications required for the 2-d DCT.

In general, a basis of comparison of the various DCT algorithms is the number of multiplications and additions they require. However, for a VLSI implementation other factors are also important, including the complexity of control logic, regularity, modularity, size, power, and complexity of interconnect. As shown in Table 6, the number of multiplications in the proposed algorithm has been improved over the conventional row-column algorithm [38] and is comparable to those of [19] and [47]. It should be emphasized again that our goal is to have a regular and modular algorithm that can be mapped efficiently onto VLSI architectures. Polynomial transforms [70] require fewer multiplications but have an irregular structure and complex interconnection schemes among processing elements. Hence, those algorithms may be used for a software implementation on a general purpose processor or on a programmable DSP processor, but are seldom the basis for dedicated DCT processors in video codecs [37].
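Recurrence 132 is simple to evaluate. The sketch below is our own illustration (the function name is hypothetical; the 16-multiplication cost of the 4 x 4 core of [40] is taken from Table 6): it reproduces the 8 x 8 and 16 x 16 entries of the proposed-algorithm column.

```python
def mu_dct(n, core_n=4, core_cost=16):
    """Equation 132: MU(n x n) = 4 MU(n/2 x n/2) + n^2, with a 4x4 core
    assumed to need 16 multiplications (Table 6, based on [40])."""
    if n == core_n:
        return core_cost
    return 4 * mu_dct(n // 2) + n * n

print(mu_dct(8), mu_dct(16))   # 128 and 768, as in Table 6
```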
5.7 The Multi-Dimensional Recursive Architecture of the DCT

The multi-dimensional DCT is very important for many video processing applications. The 3-d DCT, in particular, has many applications in 3-d video compression, such as the newly proposed multiple-component JPEG [35]. We can extend the 2-d DCT derivation in section 5.5 to the multi-dimensional case. From the separability property of the DCT, the m-dimensional DCT has the form

X = \left(T_{n_1} \otimes T_{n_2} \otimes \cdots \otimes T_{n_m}\right)x\,,    (133)

where T_{n_i} is the 1-d DCT coefficient matrix as defined in equation 85. The direct realization of equation 133 for the case n_1 = n_2 = n_3 = 2 (the core of the recursive 3-d DCT) is shown in Fig. 26.

Fig. 26 The direct realization of equation 133 for the case n_1 = n_2 = n_3 = 2.

Using properties 11-17 of the tensor product factorization and by following the steps in section 5.5, we can write equation 133 in the form

X = R\left(I_{2^m} \otimes \left(\bigotimes_{i=1}^{m} T_{n_i/2}\right)\right)Q\,x\,,    (134)

where \bigotimes_{i=1}^{m} T_{n_i/2} is the lower order m-dimensional DCT, with the tensor product over an index range denoting

\bigotimes_{i=1}^{v} Z_i = \left(Z_1 \otimes Z_2 \otimes \cdots \otimes Z_v\right)\,.    (143)

The glue structures Q and R (equations 135 and 136) are built, exactly as in the 2-d derivation, from the one-dimensional pre- and post-processing stages Q_i and R_i defined in equations 104 and 103, respectively, combined through products of dilated stride permutations; the dimension parameters of these permutations (equations 137-142) are products of the operand sizes n_1, ..., n_m and their halves. Equation 134 extends our results by showing that a large m-d DCT can be computed from a single parallel stage of smaller m-d DCT's, in addition to the pre- and post-processing glue structures defined by equations 135 and 136.

5.8 Chapter Summary

In this chapter, we presented a class of highly modular parallel VLSI structures for the m-d DCT. The proposed algorithm allows the generation of higher order DCT architectures from 2^m lower order DCT's in a parallel realization, with the addition of two glue stages of pre- and post-processing. As part of testing and verifying the functionality of our computational blocks, a 4 x 4 DCT architecture has been developed. The schematics of the different parts of the design are given in Appendix C. The circuits generated were mapped and tested on an FPGA-based transformable computer. The results are reported in [29].

6 A General Methodology for Separable Transforms

In this chapter, we extend our methodology to other separable transforms. Two important transforms are formulated in recursive tensor product forms: the Discrete Fourier Transform (DFT) and the Walsh-Hadamard Transform (WHT). A general framework that summarizes our methodology and that can be applied to any separable transform is also given in this chapter.

6.1 The Discrete Fourier Transform (DFT)

6.1.1 The 1-d DFT

The tensor product formulation of the 1-d n-point DFT is defined by [46]

F_n = \left(F_2 \otimes I_{n/2}\right)T_n\left(I_2 \otimes F_{n/2}\right)P_{n,2}\,,    (144)

where F_2 is the 2-point DFT,

T_n = \left(I_{n/2} \oplus D_{n/2}\right)\,,    (145)

\oplus is the direct sum operator defined in [46], and D_n is the twiddle factor matrix defined by

D_n = \mathrm{diag}\left(1,\,\omega,\,\omega^2,\,\ldots,\,\omega^{n-1}\right)\,,    (146)

where \omega = e^{-2\pi i/n}. We can rewrite equation 144 as

F_n = R_n\left(I_2 \otimes F_{n/2}\right)Q_n\,,    (147)

where

R_n = \left(F_2 \otimes I_{n/2}\right)T_n\,,    (148)

Q_n = P_{n,2}\,.    (149)

Equation 147 represents the recursive formulation of the 1-d DFT in tensor product form, where R_n and Q_n work as glue structures that combine two of the lower order DFT's (F_{n/2}) to construct F_n. The realization of F_4 is shown in Fig. 27.

Fig. 27 The realization of F_4.
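Equation 147 is the familiar radix-2 splitting of the DFT, so it can be verified numerically in a few lines. The sketch below is our own illustration (helper names are hypothetical; the stride permutation helper from the chapter 4 sketch is repeated so the block is self-contained): it assembles R_n(I_2 \otimes F_{n/2})Q_n from equations 145-149 and compares it against the n-point DFT matrix.

```python
import numpy as np

def dft(n):
    """n-point DFT matrix F_n with entries omega^(jk), omega = e^(-2*pi*i/n)."""
    w = np.exp(-2j * np.pi / n)
    k = np.arange(n)
    return w ** np.outer(k, k)

def stride_perm(n, s):
    """Stride-by-s permutation matrix P_{n,s} (gathers x[0::s], x[1::s], ...)."""
    P = np.zeros((n, n))
    for i in range(n):
        P[(i % s) * (n // s) + i // s, i] = 1
    return P

n = 8; m = n // 2
D = np.diag(np.exp(-2j * np.pi * np.arange(m) / n))      # twiddle factors D_{n/2}
T = np.block([[np.eye(m), np.zeros((m, m))],
              [np.zeros((m, m)), D]])                    # T_n = I_{n/2} (+) D_{n/2}
R = np.kron(dft(2), np.eye(m)) @ T                       # equation 148
Q = stride_perm(n, 2)                                    # equation 149
assert np.allclose(dft(n), R @ np.kron(np.eye(2), dft(m)) @ Q)
```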
6.1.2 The 2-d DFT

Since the DFT is a separable transform, we can write the 2-d DFT in the form

F_{n_1,n_2} = F_{n_1} \otimes F_{n_2}\,,    (150)

where F_{n_1} and F_{n_2} represent row and column 1-d DFT's, respectively. By substituting 147 in 150, we have

F_{n_1,n_2} = \left[R_{n_1}\left(I_2 \otimes F_{n_1/2}\right)Q_{n_1}\right] \otimes \left[R_{n_2}\left(I_2 \otimes F_{n_2/2}\right)Q_{n_2}\right]\,,    (151)

which, using properties 11, 12, and 16, can be written as

F_{n_1,n_2} = \left(R_{n_1} \otimes R_{n_2}\right)\left(\left(I_2 \otimes F_{n_1/2}\right) \otimes \left(I_2 \otimes F_{n_2/2}\right)\right)\left(Q_{n_1} \otimes Q_{n_2}\right)
= \left(R_{n_1} \otimes R_{n_2}\right)\left(P_{2n_1,n_1} \otimes I_{n_2/2}\right)\left(I_4 \otimes F_{n_1/2,\,n_2/2}\right)\left(P_{2n_1,2} \otimes I_{n_2/2}\right)\left(Q_{n_1} \otimes Q_{n_2}\right)
= R\left(I_4 \otimes F_{n_1/2,\,n_2/2}\right)Q\,,    (152)

where

R = \left(R_{n_1} \otimes R_{n_2}\right)\left(P_{2n_1,n_1} \otimes I_{n_2/2}\right)\,,    (153)

Q = \left(P_{2n_1,2} \otimes I_{n_2/2}\right)\left(Q_{n_1} \otimes Q_{n_2}\right)\,.    (154)

Equation 152 represents the recursive formulation of the 2-d DFT in tensor product form. R and Q work as glue structures that combine four of the lower order DFT's (F_{n_1/2,n_2/2}) to construct F_{n_1,n_2}. It should be noted from equations 153 and 154 that the pre-processing stage, Q, has no computation stages and can be realized by shuffle permutations only (since Q_{n_i} = P_{n_i,2}), while the post-processing stage, R, has two stages of additions and two stages of multiplications.

6.2 The Walsh-Hadamard Transform (WHT)

6.2.1 The 1-d WHT

For n = 2^\alpha, the Walsh-Hadamard transform (WHT), W_n, is defined as

W_n = W_{2^\alpha} = W_2 \otimes W_2 \otimes \cdots \otimes W_2 = W_2 \otimes W_{2^{\alpha-1}}\,,    (155)

where

W_2 = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}\,,    (156)

which can be realized as shown in Fig. 28.

Fig. 28 The adder circuit used in the WHT: inputs x_i and x_{i+1} produce x_i + x_{i+1} and x_i - x_{i+1}.

Using property 13, we can write equation 155 in the form

W_{2^\alpha} = \prod_{i=1}^{\alpha} P_{2^\alpha,2}\left(I_{2^{\alpha-1}} \otimes W_2\right)\,,    (157)

which can be realized by \alpha similar stages of the computation P_{2^\alpha,2}(I_{2^{\alpha-1}} \otimes W_2). Alternatively, we can realize equation 157 by one block of P_{2^\alpha,2}(I_{2^{\alpha-1}} \otimes W_2) and take the output after \alpha iterations, as shown in Fig. 29 for the case n = 2^4 = 16.

Fig. 29 The 16-point WHT.

We can modify equation 155 to the recursive form using property 11 as

W_n = W_2 \otimes W_{n/2} = \left(I_2 \otimes W_{n/2}\right)\left(W_2 \otimes I_{n/2}\right) = \left(I_2 \otimes W_{n/2}\right)Q_n\,,    (158)

where

Q_n = W_2 \otimes I_{n/2}\,.    (159)

Equation 158 represents the recursive formulation of the 1-d WHT. In this case we use two of the lower order WHT's in parallel, in addition to one pre-processing stage.

6.2.2 The 2-d WHT

Since the WHT is a separable transform, we can write the 2-d WHT in the form

W_{n_1,n_2} = W_{n_1} \otimes W_{n_2}\,.    (160)

Substituting equation 158 in equation 160 and applying properties 11-17, we have

W_{n_1,n_2} = \left[\left(I_2 \otimes W_{n_1/2}\right)Q_{n_1}\right] \otimes \left[\left(I_2 \otimes W_{n_2/2}\right)Q_{n_2}\right]
= \left[\left(I_2 \otimes W_{n_1/2}\right) \otimes \left(I_2 \otimes W_{n_2/2}\right)\right]\left[Q_{n_1} \otimes Q_{n_2}\right]
= \left[\left(P_{2n_1,n_1} \otimes I_{n_2/2}\right)\left(I_4 \otimes W_{n_1/2,\,n_2/2}\right)\left(P_{2n_1,2} \otimes I_{n_2/2}\right)\right]\left[Q_{n_1} \otimes Q_{n_2}\right]
= R\left(I_4 \otimes W_{n_1/2,\,n_2/2}\right)Q\,,    (161)

where

Q = \left(P_{2n_1,2} \otimes I_{n_2/2}\right)\left(Q_{n_1} \otimes Q_{n_2}\right)    (162)

and

R = \left(P_{2n_1,n_1} \otimes I_{n_2/2}\right)    (163)

are the 2-d pre- and post-processing, respectively. It should be noted that equation 161 is the recursive formulation that generates the higher order 2-d WHT from the lower order 2-d ones. However, from the definition of the 1-d WHT given in equation 155, we can alternatively write the 2-d WHT in equation 160 as

W_{n_1,n_2} = W_{n_1} \otimes W_{n_2} = W_2 \otimes W_{n_1/2} \otimes W_{n_2} = W_2 \otimes \left(W_{n_1/2} \otimes W_{n_2}\right) = W_{n_1 \times n_2}\,,    (164)

which means that the 2-d WHT on an n_1 x n_2 input is equivalent to the 1-d WHT on a 1-d input of size n_1 n_2.
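Equations 155-164 can be confirmed directly. The sketch below is our own illustration (the function name `wht` is hypothetical): it builds W_n from the Kronecker definition, checks the recursive form of equation 158 (a single pre-processing stage Q_n = W_2 \otimes I_{n/2} feeding two half-size WHT's in parallel), and checks the 2-d/1-d equivalence of equation 164.

```python
import numpy as np

W2 = np.array([[1, 1], [1, -1]])

def wht(n):
    """W_n = W_2 (x) W_{n/2} (equation 155), n a power of two."""
    return W2 if n == 2 else np.kron(W2, wht(n // 2))

# Equation 158: W_n = (I_2 (x) W_{n/2}) (W_2 (x) I_{n/2}).
n = 16
Q = np.kron(W2, np.eye(n // 2))
assert np.allclose(wht(n), np.kron(np.eye(2), wht(n // 2)) @ Q)

# Equation 164: the 2-d WHT equals the 1-d WHT of the column-scanned input.
assert np.allclose(np.kron(wht(4), wht(4)), wht(16))
```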
6.3 A General Framework for Multidimensional Recursive DSP Transforms

In this section, we present a general framework for deriving recursive formulations of multi-dimensional DSP transforms.

6.3.1 The One-Dimensional Case

Consider a 1-d DSP algorithm in the matrix-vector form

Y_m = T_{m,n}\,X_n\,,    (165)

where T_{m,n} is the transform matrix, and X_n and Y_m are the input vector of size n and the output vector of size m, respectively. Then, using the sparse matrix factorization approach [4], the matrix T_{m,n} can be factorized so that

T_{m,n} = T_1 T_2 \cdots T_k\,,    (166)

where each of the matrices T_1, T_2, ..., T_k is sparse. Sparseness implies that either most of the elements of the matrix are zero or the matrix is in block diagonal form. By applying property 13 of the tensor product to the block diagonal matrices of equation 166, we have

T_{m,n} = \left(R_i\right)\left(I_j \otimes T_{m/2,\,n/2}\right)\left(Q_k\right)\,,    (167)

where Q_k and R_i represent the pre- and post-processing glue structures that combine j parallel blocks of the lower-order transform T_{m/2,n/2}. For example, j = 3 for the linear convolution transform matrix, while j = 2 for the DCT, DFT, and WHT. The values of Q_k and R_i for the 1-d transforms derived throughout the thesis are summarized in Table 7.

Transform | Q_k | R_i
Convolution | A \otimes I_{2^{\alpha-1}} | R(\alpha)\left(B \otimes I_{2^{\alpha}-1}\right)
DCT | \left(I_2 \otimes V_{n/2}^{-1}\right)\left(I_{n/2} \oplus C_{n/2}\right)\left(F_2 \otimes I_{n/2}\right)V_n | P_{n,n/2}\left(I_{n/2} \oplus L_{n/2}\right)
DFT | P_{n,2} | \left(F_2 \otimes I_{n/2}\right)T_n
WHT | W_2 \otimes I_{n/2} | I_n

Table 7 The summary of pre- and post-processing formulae.

6.3.2 The Multi-Dimensional Case

Step 1: For an n_1 x n_2 input data image (X_{n_1n_2}) and a separable 2-d transform (T_{n_1,n_2}), we can write the 2-d output in the form

Y_{n_1n_2} = T_{n_1,n_2}\,X_{n_1n_2}\,,    (168)

where X_{n_1n_2} and Y_{n_1n_2} are the input and output column-scanned vectors, respectively. For separable matrices, the 2-d transform T_{n_1,n_2} can be written in the tensor product form [59]

T_{n_1,n_2} = T_{n_1} \otimes T_{n_2}\,,    (169)

where T_{n_1} and T_{n_2} are the row and column 1-d transforms, respectively, as defined in equation 167.

Step 2: We replace T_{n_1} and T_{n_2} by their corresponding values from equation 167 and apply the tensor product properties 11-17 to derive the 2-d recursive form

T_{n_1,n_2} = R_{n_1,n_2}\left(I_{j^2} \otimes T_{n_1/2,\,n_2/2}\right)Q_{n_1,n_2}\,,    (170)

where Q_{n_1,n_2} and R_{n_1,n_2} are the 2-d pre- and post-processing glue structures, respectively, that combine j^2 copies of the 2-d lower-order transform of dimension n_1/2 x n_2/2 (T_{n_1/2,n_2/2}), as explained in detail throughout the thesis for the convolution, DCT, DFT, and WHT.

Step 3: For multi-dimensional transforms, steps 1 and 2 can be extended to the general form

\bigotimes_{i=1}^{m} T_{n_i} = R\left(I_{j^m} \otimes \left(\bigotimes_{i=1}^{m} T_{n_i/2}\right)\right)Q\,,    (171)

where Q and R are the m-d pre- and post-processing glue structures, and \bigotimes_{i=1}^{m} T_{n_i/2} is the lower-order m-d transform.
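The framework of equation 167 can be exercised generically: given any transform with known glue stages, one level of scaling up is a single matrix product. The sketch below is our own illustration (`scale_up` is a hypothetical helper), instantiated with the WHT row of Table 7 because its glue stages are the simplest.

```python
import numpy as np

def scale_up(T_small, Q, R, j):
    """Equation 167 as a matrix identity: T_large = R (I_j (x) T_small) Q,
    i.e. j parallel copies of the small transform plus glue stages Q and R."""
    return R @ np.kron(np.eye(j), T_small) @ Q

# WHT instance of Table 7: j = 2, Q_k = W_2 (x) I_{n/2}, R_i = I_n.
W2 = np.array([[1., 1.], [1., -1.]])
W4 = scale_up(W2, Q=np.kron(W2, np.eye(2)), R=np.eye(4), j=2)
assert np.allclose(W4, np.kron(W2, W2))
```

The same one-liner applies to the convolution, DCT, and DFT rows of Table 7 once their Q_k and R_i matrices are assembled from the corresponding equations.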
7 Conclusions and Future Work

7.1 Conclusions

This thesis proposed a general approach for developing scalable VLSI architectures for several fundamental computations in signal and video processing, such as multi-dimensional convolutions and discrete transforms. One advantage of this approach is the close relationship between the algorithm formulation and the structure of the underlying hardware, which simplifies the often difficult algorithm mapping process.

The major contribution of this thesis is the novel formulations proposed for several parallel DSP computations, such as convolution and multi-dimensional transforms. Unlike previously proposed methods, our formulations are truly recursive in any dimension. In contrast, the widely used row-column approach for multi-dimensional transforms relies on decomposing the computation into a series of 1-d computations and then applying a recursive definition to each stage.

Chapters 3 and 4 of the thesis developed a new algorithm for multi-dimensional linear convolution based on Toom's algorithm. Although similar work has been done for the 1-dimensional case by Granata et al. [69], their method cannot derive a computational structure that employs smaller-size convolutions in the core stage, as is the case with our method. Our formulation also maintains the original feature of Toom's algorithm, which involves a single stage of multiplications in the computation. For the DCT, Chapter 5 provided a similar formulation that utilizes a single parallel stage of multi-dimensional DCT computational blocks to construct a larger DCT computation of the same dimension.

A potential benefit of this research is that it can contribute to a design methodology that results in parallel computations which can be embedded in regular and modular hardware structures. Regularity and modularity are essential factors for facilitating design automation and implementation of parallel organizations in VLSI. Indeed, our recursive formulations for a class of fundamental multi-dimensional DSP computations allow the generation of a larger VLSI architecture by combining the computations realized by a single parallel stage of identical smaller size VLSI blocks. Such an approach leads to truly scalable performance for large computations.

Our methodology emphasized the highly parallel approach for VLSI architectures. The proposed recursive approach is also useful for software-based solutions, in which a very efficient code has been developed for a smaller size problem and a larger computation is required. However, this point is open for further investigation. The merits of our proposed architectures can be summarized as follows:

1. A multi-dimensional (m-d) convolution of size n can be constructed from a single parallel stage of 3^m m-d convolutions, each of size n/2, with the addition of two glue stages of pre- and post-processing.

2. A multi-dimensional (m-d) DCT of size n can be constructed from a single parallel stage of 2^m m-d DCT's, each of size n/2, with the addition of two glue stages of pre- and post-processing.

3. The methodology can be extended to derive similar recursive formulations for the DFT, the WHT, and a large number of other separable transforms.

4. They provide a recursive design methodology for multi-dimensional DSP and video processing. The same design can be reused as a core for different-size computations. A user can freely choose a very fast core and recursively build larger architectures using only additional pre- and post-processing stages. Fully pipelined versions of the proposed architectures allow for the full utilization of the arithmetic and data movement resources. Moreover, the area can be reduced dramatically by network folding [24], [64] and by using scheduled pipelines.
This approach is well suited for automated synthesis and can lead to significant savings in design and verification time.

7.2 Future Work

The work of this thesis can be extended in a number of ways, a few of which are suggested below.

1. Developing new mapping techniques. The past research has concentrated on using the tensor-product decomposition as the primary tool for developing efficient mapping techniques. However, other types of matrix decomposition are known.

2. Developing our mapping methodology into an automatic tool for high-level synthesis of parallel circuits for DSP.

3. Applying the mapping techniques to solve other classes of problems in DSP and related areas.

Bibliography

[1] H. A. Chow ; A. Elnaggar ; H. M. Alnuweiri. "Highly Parallel Processing on the Virtual Computer". SPIE'95, Field Programmable Gate Arrays (FPGAs) for Fast Board Development and Reconfigurable Computing, 1995.

[2] T. Lu ; M. R. Azimi-Sadjadi. "Interleaved Pipeline Structures for Two-dimensional Recursive Filtering". IEEE Trans. on Circuits and Systems for Video Technology, 3(1), 1993.

[3] G. A. Jullien ; M. A. Bayoumi. "A Review of VLSI Technologies in Digital Signal Processing". Proc. of IEEE Midwest Symp. on Circuits and Systems, 1992.

[4] R. E. Blahut. "Fast Algorithms for Digital Signal Processing". Addison-Wesley Publishing Company, 1987.

[5] X. Liu ; L. T. Bruton. "High-Speed Systolic Ladder Structures for MD Recursive Digital Filters". IEEE Trans. on Signal Processing, 44(4), 1996.

[6] H. V. Sorensen ; C. A. Katz ; C. S. Burrus. "Efficient FFT Algorithms for DSP Processing Using Tensor Product Decompositions". Proc. of ICASSP'90, 1990.

[7] Z. J. Mou ; P. Duhamel. "A Unified Approach to the Fast FIR Filtering Algorithms". Proc. of ICASSP'88, 1988.

[8] S. Cucchi ; M. Fratti. "A Novel Architecture for VLSI Implementation of the 2-D DCT/IDCT". Proc. of ICASSP'92, 1992.

[9] D. L. Gall. "MPEG: A Video Compression Standard for Multimedia Applications". Communications of the ACM, 34(4), 1991.

[10] P. Pirsch ; N. Demassieux ; W. Gehrke. "VLSI Architectures for Video Compression - A Survey". Proceedings of the IEEE, 83(2), 1995.

[11] A. Giri ; V. Visvanathan ; S. K. Nandy ; S. K. Ghoshal. "High Speed Digital Filtering on SRAM-Based FPGAs". Proc. of the 7th Int. Conf. on VLSI Design, 1994.

[12] M. T. Sun ; T. C. Chen ; A. M. Gottlieb. "VLSI Implementation of a 16x16 Discrete Cosine Transform". IEEE Trans. on Circuits and Systems, 36(4), 1989.

[13] D. Soudris ; V. Paliouras ; T. Stouraitis ; A. Skavantzos ; C. Goutis. "Systematic Design of Full Adder-Based Architectures for Convolution". Proc. of ICASSP'93, 1993.

[14] M. Vetterli ; P. Duhamel ; C. Guillemot. "Trade-offs in the Computation of Mono- and Multi-Dimensional DCT's". Proc. of ICASSP'88, 1988.

[15] P. Duhamel ; C. Guillemot. "Polynomial Transform Computation of the 2-D DCT". Proc. of ICASSP'90, 1990.

[16] M. A. Haque. "A Two-Dimensional Fast Cosine Transform". IEEE Trans. on ASSP, 33(6), 1985.

[17] M. S. B. Romdhane ; V. K. Madisetti ; J. W. Hines. "Quick-Turnaround ASIC Design in VHDL: Core-Based Behavioral Synthesis". Kluwer Academic Publishers, 1996.

[18] H. S. Hou. "A Fast Recursive Algorithm for Computing the Discrete Cosine Transform". IEEE Trans. on ASSP, 35(10), 1987.

[19] P. Z. Lee ; F.-Y. Huang. "Restructured Recursive DCT and DST Algorithms". IEEE Trans. on Signal Processing, 42(7), 1994.

[20] K. R. Rao ; J. J. Hwang. "Techniques and Standards for Image, Video and Audio Coding". Prentice Hall PTR, 1996.
" A New Multi-Dimensional Recursive algorithm for Computing the Discrete Cosine Transform". Accepted for publication The IEEE Trans, on Circuits and Systems for Video Technology.  in  Bibliography  89  [22] A. Elnaggar ; H. M . Alnuweiri ; M . R. Ito. " A New Tensor Product Formulation for Toom's Convolution Algorithm". Accepted for publication in The IEEE Trans, on Signal Processing.  [23] A. Elnaggar ; H. M . Alnuweiri; M . R. Ito. "Highly Parallel Recursive Multi-Dimensional Algorithms and Architectures for Digital Filtering and Convolution". In review, IEEE Trans, on Circuits and Systems-Part II: Analog and Digital Signal  Processing.  [24] A. Elnaggar ; H. M . Alnuweiri ; M . R. Ito. "Mapping Tensor Products Onto VLSI Networks with Reduced I/O". Proc. of IEEE Fourth Great Lakes Symposium on VLSI, 1994. [25] A. Elnaggar ; H. M . Alnuweiri ; M . R. Ito. "Scalable ParaUel VLSI Networks for Digital Signal Processing". Proc. of ICECS '94, IEEE First Int. Conf. on Electronics, and Systems,  Circuits  1994.  [26] A. Elnaggar ; H. M . Alnuweiri; M . R. Ito. "Efficient Modular Interconnection Networks for Digital Filtering and Convolution". Proc. of IEEE Pacific Rim Conference On Communications,  Computers, Visualization, and Signal Processing,  1995.  [27] A. Elnaggar ; H. M . Alnuweiri ; M . R. Ito. "Highly Parallel VLSI Architectures for Linear Convolution". Proc. of IS CAS'95, 1995. [28] A. Elnaggar ; H. M . Alnuweiri; M. R. Ito. "New Tensor Product Factorization for Toom's Algorithm". Proc. of IEEE International Conference on Digital Signal Processing, 1995. [29] A. Elnaggar ; H. M . Alnuweiri ; M . R. Ito. "Embedding Large Multidimensional DSP Computations in Reconfigurable Logic". Proc. of SPIE East '96 Symposium, High Speed Computing, Digital Signal Processing, and Filtering for FPGAs,  1996.  [30] A. Elnaggar ; H. M . Alnuweiri ; M . R. Ito. " A Novel Parallel Algorithm for  Bibliography  90  Multi-Dimensional Digital Filtering and Convolution". Accepted for publication in The CCECE'97-  The 1997 Canadain Conference on Electrical and Computer  Engineering,  May 25-28, St. John's, Canada, 1997. [31] C. Chakrabarti ; J. Jaja. "Optimal systolic Designs for Computing DHT and DCT". VLSI Signal Processing  III,  1988.  [32] E. C. Ifeachor ; B. W. Jervis. "Digital Signal Processing - A Practical  Approach".  Addison-Wesley Publishing Company, 1993. [33] R. A. Horn ; C. R. Johnson. "Topics in Matrix Analysis".  Cambridge University Press,  1991. [34] K. Shin ; H. Jeon ; Y . Kang. "An Efficient VLSI Implementation of Vector-Radix 2-D DCT Using Mesh-Connected 2-D Array". Proc. ofISCAS'94,  1994.  [35] J. H. Kasner. "Multiple-Componenet JPEG". ISO/IEC TTC 1/SC 29/WG1 - Technical Proposal,  1996.  [36] S. Muramatsu ; H. Kiya. "Multidimensional Parallel Processing Methods for Rational Lattice Alteration". Proc. of ISCAS'95,  1995.  [37] V. Bhaskaran ; K. Konstantinides. "Image and Video Compression Standards: and Architectures".  Algorithms  Kluwer Academic Publishers, 1996.  [38] B. G. Lee. "A New Algorithm to Compute the Discrete Cosine Transform". IEEE Trans, on ASSP,  32(6), 1984.  [39] N . I. Cho ; S. U. Lee. "Fast Algorithm and Implementation of 2-D DCT". IEEE Trans, on Circuits and Systems,  38(3), 1991.  [40] N. I. Cho ; S. U. Lee. "A Fast 4x4 Algorithm for the Recursive 2-D DCT". IEEE Trans, on Signal Processing,  40(9), 1992.  Bibliography  91  [41] W. Liebsch. "Parallel Architecture for VLSI Implementation of a 2-Dimensional Discrete Cosine Transform for Image Coding". Proc. 
"Recursive Algorithm for the Discrete Cosine Transform with General Lengths". IEEE Electronic Letters, 30(9), 1994. [67] Toby Skinner. "Harness Multiprocessing Power For DSP Systems". Electronic Magazine,  Design  February, 1994.  [68] H. Lim ; E. E. Swartzlander. " A Systolic Array for 2-D DFT and 2-D DCT". Proc. of Int'l Conf. on Application  Specific Array Processors,  1994.  [69] J. Granata ; M. Conner ; R. Tolimieri. "Recursive Fast Algorithms and the Role of the Tensor Product". IEEE Trans, on Signal Processing, 40(12), 1992. [70] M. Vetterli. "Fast 2-D Discrete Cosine Transform". Proc. ofICASSP'85,  1985.  [71] M. Vetterh. "Running FIR and IIR Filtering Using Multirate Filter Banks". IEEE Trans, on ASSP,  36(5), 1988.  [72] T. Wang ; C. L. Wang. " A New Two-Dimensional Block Adaptive FIR Filtering Algorithm". Proc. of ICASSP'94,  1994.  [73] E. Feig ; S. Winograd. "Fast Algorithms for the Discrete Cosine Transform". IEEE Trans, on Signal Processing,  40(9), 1992.  Bibliography  94  [74] F. A. McGovern ; R. F. Woods ; M . Yan. "Novel VLSI Implementation of 8x8 point 2-D DCT". IEEE Electronic Letters, 30(8), 1994. [75] M . H. Lee ; Y . Yasuda. "New 2-D Systolic Array Algorithm for DCT/DST". IEEE Electronic  Letters,  [76] K. R. Rao  Applications".  25(25), 1989.  ; P. Yip. "Discrete  Cosine  Transform:  Algorithms,  Advantages,  and  Academic Press, 1990.  [77] S. Zohar. " A VLSI Implementation of a Correlator/Digital Filter Based on Distributed Arithmetic". IEEE Trans, on ASSP, 37(1), 1989.  95  Appendix A. A Complete Characterization of the Matrix R(a)  Appendix A . A Complete Characterization of the Matrix R(a) The matrix R(a) has the following properties : 1- The number of 1-entries per row is either 1 or 2 ; all other entries are zeros. 2- The number of 1-entries per column is exactly 1; all other entries are zeros. 3- The number of rows that have two 1-entries is 2(2" - 1). It should be noted that this number is the same as the number of adders required to realize the effect of the matrix R(a) on its input vector. Using these properties, we can derive modular realizations for the matrix R(a) at different stages. For a convolution of size n = 2 , the coordinates of the 1-entries are [ift™ ) a  + j , i(2 - 1) + j] where  -1  a  i = 0 , 1 , 2.  (172) j = 0 , 1 , • • • , (2« - 2) .  Let a - 1 = j3. then, the coordinates for the 1-entries become [i.2@ + j, i ( 2  /3+1  — l) + j]  i = 0, 1 , 2. (173) j = 0, I,--- , ( 2 ^ - 2 ) . Now, observe that if i = 0 while j varies over its entire range, the set of coordinates for the 1-entries is given by  M = 4-i which describes an identity matrix that occupies rows 0 to 2  (174)  /3+1  - 2 and columns 0 to 2  /3+1  - 2 of the matrix  R(a). Similarly when i = 1 and j varies over its entire range, the set of coordinates for the 1-entries is given by [(2^+i) (2^-l) I  + i  ] jg) _ =  1  (175)  1  which describes another identity matrix I ?-n-i placed in rows 2P to (3.2  0+1  2  to ( 2  /3+2  - 2) and columns (2 / 3 + 1 - l)  - 3). Finally, when i = 2 and j varies over its entire range, then the set of coordinates for the  1-entries is given by [(2^+ +i),2(2^+ -l)+i] =I%_ 1  (176)  1  X  which describes the third identity matrix J / + -i placed in rows 2 9  2  to (3.2"+ -4). 1  1  /3+1  to (2  f}+2  - 2) and columns ( 2 / J + 2 - 2)  Appendix A. A Complete Characterization of the Matrix  R(a)  96  Observe that while the three identity matrices occupy distinct columns of the matrix R(a), they overlap in their row occupancy. 
Observe that while the three identity matrices occupy distinct columns of the matrix R(a), they overlap in their row occupancy. Specifically, it is easy to see that the last $(2^{\beta} - 1)$ rows of $I^{(1)}$ and the first $(2^{\beta} - 1)$ rows of $I^{(2)}$ both occupy rows $2^{\beta}$ to $(2^{\beta+1} - 2)$ of the matrix R(a). Also, the last $(2^{\beta} - 1)$ rows of $I^{(2)}$ and the first $(2^{\beta} - 1)$ rows of $I^{(3)}$ both occupy rows $2^{\beta+1}$ to $(3 \cdot 2^{\beta} - 2)$ of the matrix R(a). Finally, notice that there is no overlap between the row coordinates of $I^{(1)}$ and $I^{(3)}$, which means that each of the overlap rows identified above contains exactly two 1-entries, and every remaining row contains exactly one 1-entry.

Using these properties, we can derive modular realizations of the matrix R(a) for different convolution sizes. For a convolution of size 2 ($a = 1$), the matrix R(1) has dimension $(2^2 - 1) \times 3(2^1 - 1) = 3 \times 3$ and has the form

$$R(1) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad (177)$$

and the number of rows that have two 1-entries is $2(2^0 - 1) = 0$. It should be noted that the number of rows that have two 1-entries is the same as the number of adders required to realize the effect of the matrix R(a) on its input vector. It is clear that the matrix R(1) in this case is the identity matrix $I_3$ and has no effect on the input vector.

Similarly, for a convolution of size 4 ($a = 2$), R(2) is a $7 \times 9$ matrix with the following form,

$$R(2) = \begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\end{bmatrix}, \qquad (178)$$

and the number of rows that have two 1-entries is $2(2^1 - 1) = 2$. The adder network induced by this matrix is shown in Fig. 30. The complete realization of R4 represented in (14) is given in Fig. 31.

[Fig. 30: The realization of R(2). Schematic not reproduced.]
[Fig. 31: The complete realization of R4. Schematic not reproduced.]

Appendix B. The Verilog HDL Files and Schematics of the 16-point Convolution

B.1 The Verilog Files of A(3)

B.1.1 Half Adder Verilog File

module halfadder(sum, carry, a, b);
  input a, b;
  output sum, carry;
  and ad1(carry, a, b);
  xor xo1(sum, a, b);
endmodule

B.1.2 Full Adder Verilog File

module fulladder(sum, carry, a, b, c);
  input a, b, c;
  output sum, carry;
  halfadder hf1(sum1, carry1, a, b),
            hf2(sum, carry2, sum1, c);
  or or1(carry, carry1, carry2);
endmodule

B.1.3 Flip Flop Verilog File

module flop(d, clk, reset, q);
  input d, clk, reset;
  output q;
  nand nd1(a, d, clk, reset),
       nd2(b, nd, clk),
       nd3(dd, c, b, reset),
       nd4(e, c, nclk),
       nd5(f, dd, nclk),
       nd6(qb, q, f, reset);
  nand nd7(c, a, dd),
       nd8(q, e, qb);
  not inv1(nd, d),
      inv2(nclk, clk);
endmodule

B.1.4 Serial Adder Verilog File

module serialadder(sum, a, b, clk, reset);
  input a, b, clk, reset;
  output sum;
  fulladder fd1(s, c, a, b, cc);
  flop fp1(c, clk, reset, cc),
       fp2(s, clk, reset, sum);
endmodule
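The serial adder registers its carry in one flip-flop and feeds it back into the full adder, so operands enter least-significant bit first, one bit per clock, and the sum stream emerges with one cycle of latency. The following testbench is a sketch of ours rather than one of the thesis files (the module name serialadder_tb is hypothetical, and we assume a rising-edge flip-flop with an active-low reset, which is how we read the NAND netlist of B.1.3):

module serialadder_tb;
  reg a, b, clk, reset;
  wire sum;
  reg [4:0] opa, opb;    // five bit-slots: four data bits plus one for the latency
  integer i;

  serialadder dut(sum, a, b, clk, reset);

  always #5 clk = ~clk;  // free-running clock

  initial begin
    clk = 0; reset = 0; a = 0; b = 0;
    opa = 5'b00011;      // a = 3, sent LSB first
    opb = 5'b00001;      // b = 1, sent LSB first
    #12 reset = 1;       // release the (assumed active-low) reset
    for (i = 0; i < 5; i = i + 1) begin
      a = opa[i]; b = opb[i];    // present one operand bit per clock period
      @(posedge clk); #1;
      $display("slot %0d: sum bit = %b", i, sum);
    end
    $finish;             // expected sum stream: 0, 0, 1, 0, 0 (i.e. 3 + 1 = 4, LSB first)
  end
endmodule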
B.1.5 Serial-Parallel Multiplier Verilog File

module serparmult(serout, b0, serin, parin1, parin2, parin3, parin4,
                  parin5, parin6, parin7, parin0, clk, reset);
  output serout;
  input b0, serin, parin1, parin2, parin3, parin4, parin5, parin6, parin7,
        parin0, clk, reset;
  serialadder se1(s1, a1, b0, clk, reset),
              se2(s2, a2, s1, clk, reset),
              se3(s3, a3, s2, clk, reset),
              se4(s4, a4, s3, clk, reset),
              se5(s5, a5, s4, clk, reset),
              se6(s6, a6, s5, clk, reset),
              se7(s7, a7, s6, clk, reset),
              se8(serout, a8, s7, clk, reset);
  and an1(a1, parin7, serin),
      an2(a2, parin6, serin),
      an3(a3, parin5, serin),
      an4(a4, parin4, serin),
      an5(a5, parin3, serin),
      an6(a6, parin2, serin),
      an7(a7, parin1, serin),
      an8(a8, parin0, serin);
endmodule

B.1.6 A(1) Verilog File

module a_one(a0, a1, a2, x0, x1, clk, reset);
  output a0, a1, a2;
  input x0, x1, clk, reset;
  serialadder sa1(a1, x0, x1, clk, reset);
  flop fl1(x0, clk, reset, a0),
       fl2(x1, clk, reset, a2);
endmodule

B.1.7 A(2) Verilog File

module a_two(a, x, clk, reset);
  output [8:0] a;
  input clk, reset;
  input [3:0] x;
  a_one aa1(a00, a01, a02, x[0], x[2], clk, reset),
        aa2(a10, a11, a12, x[1], x[3], clk, reset),
        aa3(a[0], a[1], a[2], a00, a10, clk, reset),
        aa4(a[3], a[4], a[5], a01, a11, clk, reset),
        aa5(a[6], a[7], a[8], a02, a12, clk, reset);
endmodule

B.1.8 A(3) Verilog File

module a_three(a, x, clk, reset);
  output [26:0] a;
  input clk, reset;
  input [7:0] x;
  wire [3:0] w0, w1, w2;
  a_one aa1(w0[0], w1[0], w2[0], x[0], x[4], clk, reset),
        aa2(w0[1], w1[1], w2[1], x[1], x[5], clk, reset),
        aa3(w0[2], w1[2], w2[2], x[2], x[6], clk, reset),
        aa4(w0[3], w1[3], w2[3], x[3], x[7], clk, reset);
  a_two ab1(a[8:0], w0, clk, reset),
        ab2(a[17:9], w1, clk, reset),
        ab3(a[26:18], w2, clk, reset);
endmodule

B.2 The Verilog Files of B(3)

B.2.1 B(1) Verilog File

module b_one(b0, b1, b2, x0, x1, x2, clk, rest, one);
  output b0, b1, b2;
  input x0, x1, x2, clk, rest, one;
  flop fp1(x1, clk, rest, x11),
       fp2(x11, clk, rest, x111),
       fp3(one, clk, rest, ones),
       fp4(x0, clk, rest, x01),
       fp5(x01, clk, rest, x02),
       fp6(x02, clk, rest, b0),
       fp7(x2, clk, rest, x21),
       fp8(x21, clk, rest, x22),
       fp9(x22, clk, rest, b2);
  serialadder sad1(s1, x0, x2, clk, rest),
              sad2(s2, invs1, ones, clk, rest),
              sad3(b1, x111, s2, clk, rest);
  not inv1(invs1, s1);
endmodule

B.2.2 B(2) Verilog File

module b_two(b, x, clk, rest, one);
  output [6:0] b;
  input [8:0] x;
  input clk, rest, one;
  b_one b1(b10, b11, b12, x[0], x[1], x[2], clk, rest, one),
        b2(b20, b21, b22, x[3], x[4], x[5], clk, rest, one),
        b3(b30, b31, b32, x[6], x[7], x[8], clk, rest, one),
        b4(out0, bb0, bb6, b10, b20, b30, clk, rest, one),
        b5(out1, out3, out5, b11, b21, b31, clk, rest, one),
        b6(ba0, ba1, out6, b12, b22, b32, clk, rest, one);
  serialadder sad1(b[2], ba0, bb0, clk, rest),
              sad2(b[4], ba1, bb6, clk, rest);
  flop fp1(out0, clk, rest, b[0]),
       fp2(out1, clk, rest, b[1]),
       fp3(out3, clk, rest, b[3]),
       fp4(out5, clk, rest, b[5]),
       fp5(out6, clk, rest, b[6]);
endmodule
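A remark on the b_one cell of B.2.1 (our reading of the netlist, not a statement reproduced from the thesis text): the inverter and the constant stream one realize a bit-serial two's-complement subtraction. The serial adder sad1 forms $x_0 + x_2$, the inverter and sad2 produce $\overline{(x_0 + x_2)} + 1 = -(x_0 + x_2)$ (with one supplying the +1 in the appropriate bit slot), and sad3 adds the suitably delayed middle stream, so that

$$b_1 \;=\; x_1 - (x_0 + x_2),$$

while $b_0$ and $b_2$ are simply $x_0$ and $x_2$ delayed by three clock cycles to match the latency along the $b_1$ path. This is the familiar post-addition of the 2-point convolution recursion, which recovers the middle output from the three subproducts.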
B.2.3 B(3) Verilog File

module b_three(b, x, clk, rest, one);
  output [14:0] b;
  input [26:0] x;
  input clk, rest, one;
  wire [6:0] b21, b22, b23;
  b_two b221(b21[6:0], x[8:0], clk, rest, one),
        b222(b22[6:0], x[17:9], clk, rest, one),
        b223(b23[6:0], x[26:18], clk, rest, one);
  b_one bb1(bb10, bb11, bb12, b21[0], b22[0], b23[0], clk, rest, one),
        bb2(bb20, bb21, bb22, b21[1], b22[1], b23[1], clk, rest, one),
        bb3(bb30, bb31, bb32, b21[2], b22[2], b23[2], clk, rest, one),
        bb4(bb40, bb41, bb42, b21[3], b22[3], b23[3], clk, rest, one),
        bb5(bb50, bb51, bb52, b21[4], b22[4], b23[4], clk, rest, one),
        bb6(bb60, bb61, bb62, b21[5], b22[5], b23[5], clk, rest, one),
        bb7(bb70, bb71, bb72, b21[6], b22[6], b23[6], clk, rest, one);
  serialadder sad1(b[4], bb50, bb11, clk, rest),
              sad2(b[5], bb60, bb21, clk, rest),
              sad3(b[6], bb70, bb31, clk, rest),
              sad4(b[8], bb51, bb12, clk, rest),
              sad5(b[9], bb61, bb22, clk, rest),
              sad6(b[10], bb71, bb32, clk, rest);
  flop fp1(bb10, clk, rest, b[0]),
       fp2(bb20, clk, rest, b[1]),
       fp3(bb30, clk, rest, b[2]),
       fp4(bb40, clk, rest, b[3]),
       fp5(bb41, clk, rest, b[7]),
       fp6(bb42, clk, rest, b[11]),
       fp7(bb52, clk, rest, b[12]),
       fp8(bb62, clk, rest, b[13]),
       fp9(bb72, clk, rest, b[14]);
endmodule

B.3 The Verilog Files Combining A(3), D(3), and B(3)

B.3.1 D(3) with A(3) Verilog File

module a_d_three(d, h0, h1, h2, h3, h4, h5, h6, h7, h8, h9, h10, h11, h12, h13,
                 h14, h15, h16, h17, h18, h19, h20, h21, h22, h23, h24, h25, h26,
                 x, b0, clk, reset);
  output [26:0] d;
  input clk, reset, b0;
  input [7:0] x;
  input [10:0] h0, h1, h2, h3, h4, h5, h6, h7, h8, h9, h10, h11, h12, h13,
               h14, h15, h16, h17, h18, h19, h20, h21, h22, h23, h24, h25, h26;
  wire [26:0] a;
  a_three aa(a, x, clk, reset);
  parsermult p0(d[0], b0, a[0], h0, clk, reset),
             p1(d[1], b0, a[1], h1, clk, reset),
             p2(d[2], b0, a[2], h2, clk, reset),
             p3(d[3], b0, a[3], h3, clk, reset),
             p4(d[4], b0, a[4], h4, clk, reset),
             p5(d[5], b0, a[5], h5, clk, reset),
             p6(d[6], b0, a[6], h6, clk, reset),
             p7(d[7], b0, a[7], h7, clk, reset),
             p8(d[8], b0, a[8], h8, clk, reset),
             p9(d[9], b0, a[9], h9, clk, reset),
             p10(d[10], b0, a[10], h10, clk, reset),
             p11(d[11], b0, a[11], h11, clk, reset),
             p12(d[12], b0, a[12], h12, clk, reset),
             p13(d[13], b0, a[13], h13, clk, reset),
             p14(d[14], b0, a[14], h14, clk, reset),
             p15(d[15], b0, a[15], h15, clk, reset),
             p16(d[16], b0, a[16], h16, clk, reset),
             p17(d[17], b0, a[17], h17, clk, reset),
             p18(d[18], b0, a[18], h18, clk, reset),
             p19(d[19], b0, a[19], h19, clk, reset),
             p20(d[20], b0, a[20], h20, clk, reset),
             p21(d[21], b0, a[21], h21, clk, reset),
             p22(d[22], b0, a[22], h22, clk, reset),
             p23(d[23], b0, a[23], h23, clk, reset),
             p24(d[24], b0, a[24], h24, clk, reset),
             p25(d[25], b0, a[25], h25, clk, reset),
             p26(d[26], b0, a[26], h26, clk, reset);
endmodule

B.3.2 A(3), D(3), and B(3) Verilog File

module conv_eight(c, h0, h1, h2, h3, h4, h5, h6, h7, h8, h9, h10, h11, h12, h13,
                  h14, h15, h16, h17, h18, h19, h20, h21, h22, h23, h24, h25, h26,
                  x, b0, clk, reset, one);
  input [7:0] x;
  input [10:0] h0, h1, h2, h3, h4, h5, h6, h7, h8, h9, h10, h11, h12, h13,
               h14, h15, h16, h17, h18, h19, h20, h21, h22, h23, h24, h25, h26;
  input clk, reset, b0, one;
  output [14:0] c;
  wire [26:0] d;
  a_d_three ad(d, h0, h1, h2, h3, h4, h5, h6, h7, h8, h9, h10, h11, h12, h13,
               h14, h15, h16, h17, h18, h19, h20, h21, h22, h23, h24, h25, h26,
               x, b0, clk, reset);
  b_three b(c, d, clk, reset, one);
endmodule
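Reading the module hierarchy above, conv_eight realizes the 8-point convolver as the three-stage product suggested by the recursive formulation. The summary below is our reading of the netlists (using the matrix names of the section headings), not an equation reproduced from the thesis body:

$$c \;=\; B(3)\, D(3)\, A(3)\, x, \qquad A(n) \;=\; \bigl(I_3 \otimes A(n-1)\bigr)\bigl(A(1) \otimes I_{2^{n-1}}\bigr), \qquad A(1) \;=\; \begin{bmatrix} 1 & 0 \\ 1 & 1 \\ 0 & 1 \end{bmatrix}.$$

Here A(3) (module a_three) expands the 8 input streams into $3^3 = 27$ streams, D(3) multiplies them pointwise by the 27 coefficient streams h0–h26, and B(3) (module b_three) folds the 27 products back into the $2 \cdot 8 - 1 = 15$ convolution outputs.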
B.4 The Verilog Files to Generate the 16-point Convolution from the 8-point

B.4.1 Q(4) Verilog File

module q_four(a, x, clk, reset);
  output [23:0] a;
  input clk, reset;
  input [15:0] x;
  a_one a0(a[0], a[8], a[16], x[0], x[8], clk, reset),
        a1(a[1], a[9], a[17], x[1], x[9], clk, reset),
        a2(a[2], a[10], a[18], x[2], x[10], clk, reset),
        a3(a[3], a[11], a[19], x[3], x[11], clk, reset),
        a4(a[4], a[12], a[20], x[4], x[12], clk, reset),
        a5(a[5], a[13], a[21], x[5], x[13], clk, reset),
        a6(a[6], a[14], a[22], x[6], x[14], clk, reset),
        a7(a[7], a[15], a[23], x[7], x[15], clk, reset);
endmodule
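In the same notation (again our reading of the netlist), q_four applies one a_one slice to each of the eight pairs $(x_i, x_{i+8})$, i.e. it realizes the pre-addition stage

$$Q(4) \;=\; A(1) \otimes I_8: \qquad a[0{:}7] = x[0{:}7], \quad a[8{:}15] = x[0{:}7] + x[8{:}15], \quad a[16{:}23] = x[8{:}15],$$

which splits the 16-point problem into the three half-size subproblems solved by the three conv_eight instances in B.4.3 below.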
B.4.2 RS(4) Verilog File

module r_s_four_one(conv16, x, clk, reset, one);
  output [30:0] conv16;
  input [44:0] x;
  input clk, reset, one;
  // we'll use 15 b_one cells, (8+8+1) = 17 flip-flops, and (7+7) = 14 serial adders
  b_one b0(b00, b01, b02, x[0], x[15], x[30], clk, reset, one),
        b1(b10, b11, b12, x[1], x[16], x[31], clk, reset, one),
        b2(b20, b21, b22, x[2], x[17], x[32], clk, reset, one),
        b3(b30, b31, b32, x[3], x[18], x[33], clk, reset, one),
        b4(b40, b41, b42, x[4], x[19], x[34], clk, reset, one),
        b5(b50, b51, b52, x[5], x[20], x[35], clk, reset, one),
        b6(b60, b61, b62, x[6], x[21], x[36], clk, reset, one),
        b7(b70, b71, b72, x[7], x[22], x[37], clk, reset, one),
        b8(b80, b81, b82, x[8], x[23], x[38], clk, reset, one),
        b9(b90, b91, b92, x[9], x[24], x[39], clk, reset, one),
        b1_0(b100, b101, b102, x[10], x[25], x[40], clk, reset, one),
        b1_1(b110, b111, b112, x[11], x[26], x[41], clk, reset, one),
        b1_2(b120, b121, b122, x[12], x[27], x[42], clk, reset, one),
        b1_3(b130, b131, b132, x[13], x[28], x[43], clk, reset, one),
        b1_4(b140, b141, b142, x[14], x[29], x[44], clk, reset, one);
  flop f0(b00, clk, reset, conv16[0]),
       f1(b10, clk, reset, conv16[1]),
       f2(b20, clk, reset, conv16[2]),
       f3(b30, clk, reset, conv16[3]),
       f4(b40, clk, reset, conv16[4]),
       f5(b50, clk, reset, conv16[5]),
       f6(b60, clk, reset, conv16[6]),
       f7(b70, clk, reset, conv16[7]),
       f8(b71, clk, reset, conv16[15]),
       f9(b72, clk, reset, conv16[23]),
       f10(b82, clk, reset, conv16[24]),
       f11(b92, clk, reset, conv16[25]),
       f12(b102, clk, reset, conv16[26]),
       f13(b112, clk, reset, conv16[27]),
       f14(b122, clk, reset, conv16[28]),
       f15(b132, clk, reset, conv16[29]),
       f16(b142, clk, reset, conv16[30]);
  serialadder s0(conv16[8], b01, b80, clk, reset),
              s1(conv16[9], b11, b90, clk, reset),
              s2(conv16[10], b21, b100, clk, reset),
              s3(conv16[11], b31, b110, clk, reset),
              s4(conv16[12], b41, b120, clk, reset),
              s5(conv16[13], b51, b130, clk, reset),
              s6(conv16[14], b61, b140, clk, reset),
              s7(conv16[16], b81, b02, clk, reset),
              s8(conv16[17], b91, b12, clk, reset),
              s9(conv16[18], b101, b22, clk, reset),
              s10(conv16[19], b111, b32, clk, reset),
              s11(conv16[20], b121, b42, clk, reset),
              s12(conv16[21], b131, b52, clk, reset),
              s13(conv16[22], b141, b62, clk, reset);
endmodule

B.4.3 The 16-point Convolution Verilog File

module conv_sixteen(c, h0, h1, h2, h3, h4, h5, h6, h7, h8, h9, h10, h11, h12,
                    h13, h14, h15, h16, h17, h18, h19, h20, h21, h22, h23, h24,
                    h25, h26, h27, h28, h29, h30, h31, h32, h33, h34, h35, h36,
                    h37, h38, h39, h40, h41, h42, h43, h44, h45, h46, h47, h48,
                    h49, h50, h51, h52, h53, h54, h55, h56, h57, h58, h59, h60,
                    h61, h62, h63, h64, h65, h66, h67, h68, h69, h70, h71, h72,
                    h73, h74, h75, h76, h77, h78, h79, h80,
                    x, b0, clk, reset, one);
  input [15:0] x;
  input [10:0] h0, h1, h2, h3, h4, h5, h6, h7, h8, h9, h10, h11, h12, h13, h14,
               h15, h16, h17, h18, h19, h20, h21, h22, h23, h24, h25, h26, h27,
               h28, h29, h30, h31, h32, h33, h34, h35, h36, h37, h38, h39, h40,
               h41, h42, h43, h44, h45, h46, h47, h48, h49, h50, h51, h52, h53,
               h54, h55, h56, h57, h58, h59, h60, h61, h62, h63, h64, h65, h66,
               h67, h68, h69, h70, h71, h72, h73, h74, h75, h76, h77, h78, h79, h80;
  input clk, reset, b0, one;
  output [30:0] c;
  wire [23:0] a;
  wire [44:0] cc;
  conv_eight c1(cc[14:0], h0, h1, h2, h3, h4, h5, h6, h7, h8, h9, h10, h11, h12,
                h13, h14, h15, h16, h17, h18, h19, h20, h21, h22, h23, h24, h25,
                h26, a[7:0], b0, clk, reset, one),
             c2(cc[29:15], h27, h28, h29, h30, h31, h32, h33, h34, h35, h36, h37,
                h38, h39, h40, h41, h42, h43, h44, h45, h46, h47, h48, h49, h50,
                h51, h52, h53, a[15:8], b0, clk, reset, one),
             c3(cc[44:30], h54, h55, h56, h57, h58, h59, h60, h61, h62, h63, h64,
                h65, h66, h67, h68, h69, h70, h71, h72, h73, h74, h75, h76, h77,
                h78, h79, h80, a[23:16], b0, clk, reset, one);
  q_four q(a, x, clk, reset);
  r_s_four_one rs(c, cc, clk, reset, one);
endmodule
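For completeness, a top-level harness sketch (ours, not a thesis file; the testbench name is hypothetical, the coefficient values are illustrative, and the bit-serial framing of x, b0, and one is not modeled faithfully here) shows how the 8-point core is wired up; conv_sixteen is driven identically, with its 81 coefficient ports h0–h80 feeding the three conv_eight instances:

module conv_eight_tb;
  reg  [7:0]  x;           // eight bit-serial input streams, LSB first
  reg  [10:0] h [0:26];    // the 27 parallel coefficient words
  reg  b0, clk, reset, one;
  wire [14:0] c;           // fifteen bit-serial output streams
  integer k;

  conv_eight dut(c,
      h[0],  h[1],  h[2],  h[3],  h[4],  h[5],  h[6],  h[7],  h[8],
      h[9],  h[10], h[11], h[12], h[13], h[14], h[15], h[16], h[17],
      h[18], h[19], h[20], h[21], h[22], h[23], h[24], h[25], h[26],
      x, b0, clk, reset, one);

  always #5 clk = ~clk;    // free-running clock

  initial begin
    clk = 0; reset = 0; b0 = 0; one = 0; x = 8'h00;
    for (k = 0; k < 27; k = k + 1) h[k] = 11'd0;
    h[0] = 11'd1;          // illustrative coefficient value only
    #12 reset = 1;         // release the (assumed active-low) reset
    repeat (64) @(posedge clk);   // stream idle bits through the pipeline
    $finish;
  end
endmodule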
Appendix C. The Schematics of the DCT Architecture

[This appendix consists of schematic drawings; the drawings themselves did not survive text extraction, and only the figure captions are listed below.]

C.1 The Schematic of the Full Adder
C.2 The Schematic of the Half Adder
C.3 The Schematic of F2
C.4 The Schematic of Q4
C.5 The Schematic of C4
C.6 The Schematic of R4
C.7 The Schematic of R4
C.8 The Schematic of the Serial Adder
C.9 The Schematic of the Serial-Parallel Multiplier
C.10 The Schematic of the Serial Subtractor
C.11 The Schematic of the 2-d 2 x 2 DCT
C.12 The Schematic of the 1-d 2-Point DCT
C.13 The Schematic of the 2-d 4 x 4 DCT

Appendix D. List of Acronyms

A/D      Analog-to-Digital
ASIC     Application Specific Integrated Circuits
ASSP     Application Specific Standard Products
BAAT     Bit-At-A-Time
CAD      Computer Aided Design
CD       Compact Disc
CLB      Configurable Logic Block
Codec    Encoder/Decoder
DCT      Discrete Cosine Transform
DFT      Discrete Fourier Transform
DSP      Digital Signal Processing
DVD      Digital Video Disc
FFT      Fast Fourier Transform
FPGA     Field Programmable Gate Arrays
fps      Frames Per Second
HDL      Hardware Description Language
IDCT     Inverse Discrete Cosine Transform
I/O      Inputs/Outputs
JPEG     Joint Photographic Experts Group
KLT      Karhunen-Loeve Transform
LAN      Local Area Networks
MOPS     Million Operations Per Second
MPEG     Moving Picture Experts Group
m-d      Multi-Dimensional
1-d      One-Dimensional
PC       Personal Computers
RDS      Replicated Dilated Shuffle Permutations
RGB      Red-Green-Blue
RISC     Reduced Instruction Set Computers
ROM      Read Only Memory
SAR      Synthetic Aperture Radar
2-d      Two-Dimensional
VHDL     Very-high-speed-integrated-circuit Hardware Description Language
VLSI     Very Large Scale Integration
WAN      Wide Area Networks
WHT      Walsh-Hadamard Transform
