Low Dimensional Search for Efficient Texture Synthesis

by

Fred Kimberley

B.Sc., The University of British Columbia, 2002

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in The Faculty of Graduate Studies (Department of Computer Science)

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA

October, 2004

© Fred Kimberley, 2004

Abstract

Current texture synthesis algorithms rely upon high dimensional approximate nearest neighbour (ANN) searches to determine the best pixel to use at the current position. The feature vectors used in the ANN search are typically between 100 and 300 dimensions. A large amount of research has examined how to reduce the number of feature vectors that need to be searched, but very little has been done to speed up the actual comparisons.
We present two new texture synthesis algorithms that use an order of magnitude fewer dimensions during the ANN search. In addition, we construct and make use of an error texture that further reduces the time spent comparing two feature vectors.

Contents

Abstract ii
Contents iii
List of Tables vi
List of Figures vii
Acknowledgements ix
1 Introduction 1
  1.1 Motivation 1
  1.2 Overview 2
2 Previous Work 4
  2.1 Pixel-Based Methods 4
    2.1.1 Cluster-Based Probabilistic Model 5
    2.1.2 Steerable Pyramids 7
    2.1.3 Parzen Estimation 7
    2.1.4 Multiresolution Sampling 8
    2.1.5 Non-Parametric Sampling 9
    2.1.6 Fixed Neighbourhood Search 9
    2.1.7 Coherence Based 11
    2.1.8 Non-Hierarchical Synthesis 11
    2.1.9 Variable Neighbourhood Search 11
    2.1.10 Image Analogies 12
    2.1.11 Jump Maps 13
    2.1.12 Progressively Variant Textures 13
  2.2 Patch-Based Methods 14
    2.2.1 Chaos Mosaic 15
    2.2.2 Image Quilting 15
    2.2.3 Patch Based Sampling 16
    2.2.4 Hybrid Texture Synthesis 17
    2.2.5 Non-Periodic Tilings 17
    2.2.6 Graphcuts 19
    2.2.7 Feature Matching and Deformation 20
3 Local and Global Feature Synthesis Algorithm 23
  3.1 Obtaining the Texton Mask 23
  3.2 Texton Mask Synthesis 24
    3.2.1 Patch Selection 24
    3.2.2 Patch Warping 25
  3.3 Colour Synthesis 28
    3.3.1 Low Dimensional Feature Vector 28
    3.3.2 Distance Function 28
    3.3.3 Search Techniques 30
4 Entropy Minimizing Synthesis Algorithm 33
  4.1 Discretizing the Texture 33
    4.1.1 Vector Quantization 33
    4.1.2 Feature Vectors 34
  4.2 Choosing the Dimensions to Keep 34
    4.2.1 Entropy and Information Gain 35
  4.3 Synthesizing the Texture 40
5 Results 41
  5.1 Local and Global Synthesis 41
    5.1.1 Texton Mask Synthesis 41
    5.1.2 Colour Synthesis 42
  5.2 Entropy Minimizing Synthesis 47
6 Conclusion 58
  6.1 Future Work 59
    6.1.1 Texton Masks 59
    6.1.2 Entropy Minimizing Synthesis 60
A Effects of High Dimensionality on Nearest Neighbour Search 61
  A.1 Sparsity of Data 61
  A.2 Volumes of Hyperspheres 62
B Synthesis Times 64
Bibliography 68

List of Tables

4.1 Conditional entropies 38
B.1 Texton mask synthesis times 64
B.2 Colour synthesis times for hash-based search 65
B.3 Colour synthesis times for k-coherence search 65
B.4 Candidate neighbourhood selection times 66
B.5 Synthesis times using information gain 67

List of Figures

2.1 Popat and Picard's causal neighbourhood 6
2.2 L-shaped causal neighbourhood 10
2.3 Pixel orders for variable neighbourhood search 12
2.4 Minimum error boundary cut 16
2.5 Wang tile creation 18
2.6 Wang tile packing 20
2.7 Boundary edge selection using Graphcuts 21
3.1 Texton mask feature vector 26
3.2 Patch warping 27
4.1 Naive neighbourhood ordering using information gain 37
4.2 Avoiding blocks of repeated pixels 39
5.1 Automated texton selection 43
5.2 Texton mask synthesis 44
5.3 Texton mask synthesis 45
5.4 Effect of number of generations in order independent search 47
5.5 Robustness of the distance-based hashing 48
5.6 Colour synthesis failures 49
5.7 Colour synthesis with k-coherence search 50
5.8 Colour synthesis results with distance-based hashing 51
5.9 Effect of number of textons on synthesis results 53
5.10 Effect of number of textons on low frequency textures 54
5.11 Synthesis of natural textures 55
5.12 Synthesis of low frequency textures 56
5.13 Synthesis of highly structured textures 57

Acknowledgements

I would like to thank my supervisor for encouraging me to work on this topic. His insights have been invaluable during this work. I would also like to thank Jim Little for his many helpful comments and suggestions. Thanks to my proofreader Ciaran Llachlan Leavitt for taking the time to correct my many mistakes. Thanks to Michael Cohen, Alexei Efros and Vivek Kwatra for their permission to reproduce images from their publications.
I would like to thank everyone around the lab who has made the last two years such an enjoyable experience. Last, but definitely not least, I would like to thank my family for their continuous support throughout my education. I couldn't have done it without them.

Chapter 1

Introduction

Begin at the beginning and go on till you come to the end; then stop. —Charles Dodgson

1.1 Motivation

The quest for realistic computer generated images has led to the development of many techniques. One of these techniques, used in almost every modern graphics application, is texture mapping [8]. Texture mapping provides an efficient way to include detail on the surfaces of objects in the scene. The effectiveness of texture mapping is limited by its reliance on the image to be mapped. If an image is mapped onto an object in the scene that is bigger (in texture coordinates) than the original texture, then the texture must be tiled. In this case care must be taken that the original texture is continuous across its boundaries. If it is not, then a seam will appear on the rendered object. Even when the original texture can be tiled, tiling often leads to a noticeable, regular repetition of features. One possible solution is to use larger textures, but larger textures present other disadvantages.

Textures can be captured from nature, created by hand, or procedurally defined [32]. Textures captured from real objects can vary widely depending on lighting conditions and the camera angle. Visual artifacts can be seen when the lighting in the scene does not match the lighting conditions from when the texture was captured. Having an artist create a texture is a time-consuming process. Also, any textures that are stored take up space in memory. Even on modern systems there is a limit to how much memory can be used to store textures.
Procedural textures can be used to create textures of any size without having to store the entire texture. Most procedural textures are limited to a specific class of textures, such as animal spotting patterns [39]. It can also be difficult to select the proper parameters to generate the texture you want. Texture synthesis was developed to address these problems. Most modern texture synthesis algorithms can synthesize tileable textures even when they are given a nontileable source texture. Also, because texture synthesis algorithms create arbitrarily large textures, regular feature repetition does not cause a problem. Texture synthesis can be applied to many different types of textures and the results will be similar to the source texture. The parameters used in texture synthesis algorithms usually provide an intuitive speed/quality trade-off.

Most modern texture synthesis algorithms rely on an approximate nearest neighbour (ANN) search to find the best patch to synthesize at the current location. Calculating nearest neighbours in high dimensions is computationally expensive. The usefulness of finding the nearest neighbour (NN) also becomes questionable as the dimensionality increases [6][19]. Some synthesis algorithms have dealt with this by removing NN search entirely from the runtime portion of the algorithm [46][21]. Although this works for many textures, it results in noticeable artifacts for some classes of textures. Principal components analysis (PCA) is a relatively fast method to reduce the dimensionality but often retains the instability found in high dimensional searches [1]. To avoid some of these problems we reduce the dimensionality of the ANN search while still comparing enough information.

1.2 Overview

In this thesis we explore two new algorithms for texture synthesis. The goal of both algorithms is to use as few dimensions as possible when performing an ANN search. The main contributions of our approaches are:

• Separation of local and global features: First a texton mask [47] is synthesized (Section 3.2). Global information is taken from the texton mask and uses only 5-9 dimensions (Section 3.3.1). The local information is then supplied using a small causal neighbourhood. An error metric for using the combined local and global information is developed (Section 3.3.2). Search structures that take advantage of this separation are then developed (Section 3.3.3).

• A method for identifying and removing the least important dimensions: The image is clustered into n unique pixel values (Section 4.1). Feature vectors are constructed using this clustered representation. The dimensions in the feature vectors are compared to determine which provide the most information gain (Section 4.2). This process is repeated until the desired number of dimensions is reached. These dimensions are the only ones used in the ANN search.

Chapter 2 provides an overview of related work. Chapters 3 and 4 provide a detailed explanation of the two algorithms. Chapter 5 shows the results that are obtained from them, and Chapter 6 discusses possible improvements and extensions to the algorithms.

Chapter 2

Previous Work

You can know the name of a bird in all the languages of the world, but when you're finished, you'll know absolutely nothing whatever about the bird... So let's look at the bird and see what it's doing - that's what counts. —Richard Feynman

Several approaches to texture synthesis have been developed in the past few years. Most recent texture synthesis methods consist of finding the best matching patch from the original texture and placing it in the current position in the synthesized texture. After placing the patch, some form of correction is often used to fix areas that don't match up properly with the existing patches. Pixel-based methods can be regarded as a special case where the patch size is 1 x 1.
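To make this general framework concrete, here is a minimal brute-force sketch of its 1 x 1 patch (pixel-based) special case. This is illustrative only and not code from the thesis or any of the surveyed papers: the greyscale representation, the neighbourhood width `w`, and all helper names are our own assumptions.

```python
import random

def l_neighbourhood(img, x, y, w):
    """Gather the L-shaped causal neighbourhood of (x, y): pixels to the
    left on the current row plus the w rows above, clipped at the borders."""
    feats = []
    for dy in range(-w, 1):
        for dx in range(-w, w + 1):
            if dy == 0 and dx >= 0:
                break  # in scanline order, only these pixels already exist
            yy, xx = y + dy, x + dx
            if 0 <= yy < len(img) and 0 <= xx < len(img[0]):
                feats.append(img[yy][xx])
    return feats

def synthesize(src, out_w, out_h, w=2, seed=0):
    """Scanline pixel-based synthesis: each output pixel copies the source
    pixel whose causal neighbourhood matches best in the L2 sense.
    Brute force, no ANN acceleration; greyscale values only."""
    rng = random.Random(seed)
    h, sw = len(src), len(src[0])
    out = [[src[rng.randrange(h)][rng.randrange(sw)] for _ in range(out_w)]
           for _ in range(out_h)]
    for y in range(out_h):
        for x in range(out_w):
            target = l_neighbourhood(out, x, y, w)
            best_val, best_d = out[y][x], float("inf")
            for sy in range(h):
                for sx in range(sw):
                    cand = l_neighbourhood(src, sx, sy, w)
                    n = min(len(cand), len(target))
                    if n == 0:
                        continue  # the very first pixel keeps its random value
                    d = sum((a - b) ** 2 for a, b in zip(cand[:n], target[:n]))
                    if d < best_d:
                        best_val, best_d = src[sy][sx], d
            out[y][x] = best_val
    return out
```

Most of the methods surveyed in this chapter can be read as replacing either the inner brute-force loop (TSVQ, coherence candidates, jump maps) or the unit that is copied (patches instead of single pixels).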
The main differences between these methods are how they select the patch to be placed, the synthesis order, and how they correct the patch boundaries. In this section, I will concentrate on methods which follow this general framework. This will be followed by a brief discussion of methods that do not follow this framework.

2.1 Pixel-Based Methods

Pixel-based texture synthesis methods have had a lot of recent success with certain types of textures. In general, pixel-based methods model texture as a Markov random field (MRF) [11]. MRFs have two important properties for texture synthesis: stationarity and locality. Locality means that the probability of the current pixel having a given value depends on only a finite number of surrounding pixels. Mathematically, given an image I, a pixel p ∈ I and a neighbourhood ω(p) ⊂ I, the value of p is independent of I \ ω(p). Stationarity implies that the probability distribution function (PDF) is the same for all p ∈ I. Consequently the size and shape of the neighbourhood ω(p) is the same for all p ∈ I.

Pixel-based methods all create some type of feature vector to describe the neighbourhood. This feature vector is then used to determine the value of the current pixel, either deterministically or nondeterministically. Unless otherwise stated, the feature vectors used for all of the following pixel-based methods are composed of a concatenation of RGB triples for all of the pixels in the neighbourhood.

2.1.1 Cluster-Based Probabilistic Model

Popat and Picard [33] use a semi-parametric model to synthesize textures. They note that it is infeasible to calculate the probability density function (PDF) of a high-dimensional vector unless it is broken down into a product of conditional PDFs. However, this fails to capture the interdependence of the vector elements. As a result, they use a probability mass function (PMF) p(x) and restrict it to be of a form where the chain rule can be applied:
p(x) = p(x_1, ..., x_N) = p(x_1) p(x_2 | x_1) p(x_3 | x_1, x_2) ... p(x_N | x_1, ..., x_{N-1})

Initially the data is clustered into M clusters using the Linde-Buzo-Gray (LBG) algorithm [25]. The PMF p(x) is composed of a weighted sum of M component PMFs p_m(x), each centred on a different cluster:

p(x) = Σ_{m=1}^{M} w_m p_m(x),

where w_m > 0 and Σ_{m=1}^{M} w_m = 1. To ensure p(x) can be factored by the chain rule, the component PMFs are restricted to the form

p_m(x) = Π_n f_{m,n}(x_n),

where the f_{m,n} are discretized Gaussians.

A causal neighbourhood is used to predict the value of the current pixel according to the restricted PMF. A causal neighbourhood is a neighbourhood where all of the pixels in the neighbourhood have already been synthesized; a noncausal neighbourhood contains pixels that have not been synthesized. The shape of a causal neighbourhood often depends on the synthesis order used.

To reduce the dimensionality, a hierarchical method is proposed. Once the previous level has been synthesized, the values of every fourth pixel are given by the pixel values of the previous level. These pixels are not synthesized with the current level but they are added to the causal neighbourhood as in Figure 2.1.

Figure 2.1: The causal neighbourhood combined with pixels synthesized at the previous level in Popat and Picard's hierarchical texture synthesis. Pixels with dots are in the neighbourhood, pixels with an X are the pixel to be synthesized and grey pixels were synthesized at the previous level.
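The restricted mixture PMF above can be illustrated numerically. The sketch below is not the authors' implementation: continuous Gaussians stand in for the discretized f_{m,n}, and the cluster centres and weights are invented for the example.

```python
import math

def gaussian(x, mu, sigma=1.0):
    """1-D Gaussian density; a stand-in for a discretized f_{m,n}."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def mixture_pmf(x, centres, weights, sigma=1.0):
    """p(x) = sum_m w_m * prod_n f_{m,n}(x_n): each component is centred
    on a cluster centre and factors across dimensions, so the chain rule
    applies to every component."""
    total = 0.0
    for centre, w in zip(centres, weights):
        component = 1.0
        for x_n, mu_n in zip(x, centre):
            component *= gaussian(x_n, mu_n, sigma)
        total += w * component
    return total

# Illustrative clusters: neighbourhood vectors near (0, 0) or (10, 10).
centres = [(0.0, 0.0), (10.0, 10.0)]
weights = [0.5, 0.5]
```

A vector lying on a cluster centre scores far higher than one halfway between the clusters, which is what drives the prediction of the current pixel value from its causal neighbourhood.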
They take advantage of this fact by forcing the synthesized texture to have matching histograms of filter outputs with the original texture. The algorithm starts by initializing the synthesized image to random noise. The algorithm then iteratively alternates between forcing the histogram of the synthesized image to match the histogram of the original image and matching the histograms of each of the pyramid subbands. Although there is no proof of convergence, in practice the algorithm converges within a few iterations. Allowing the algorithm to continue for excessive iterations introduces artifacts.

2.1.3 Parzen Estimation

Paget and Longstaff [29] were able to obtain accurate texture synthesis by using a nonparametric MRF model. Given a noncausal neighbourhood for the current pixel, the local conditional probability density function (LCPDF) is computed to determine the probabilities of the current pixel value. The LCPDF is obtained by using the Parzen-window density estimator [31] to perform density estimation on the multi-dimensional histogram created by the neighbourhood data. The texture is then synthesized using relaxation. The Parzen-window density estimator is a nonparametric density estimation technique that gathers information from neighbours within a fixed distance of the current point. Relaxation techniques modify the image one pixel at a time until it converges with a probability given by the PDF of the MRF. Two well known stochastic relaxation algorithms are the Gibbs sampler and the Metropolis algorithm [12][15]. The relaxation is performed using a deterministic algorithm called the iterative conditional modes (ICM) algorithm [7]. A multiscale approach similar to the one used by Popat and Picard [33] is used to reduce the neighbourhood size.

Paget and Longstaff later adapted their algorithm to make use of the strong MRF property [30]. A strong MRF is an MRF where every sub-neighbourhood of the defined neighbourhood is Markovian. The LCPDF for a strong MRF can be written as a product over cliques,

p(x | x_1, ..., x_n) = Π_{C ⊆ ω(p)} p(x | x_r, r ∈ C),

where each C ⊆ ω(p) is a clique. Cliques are subsets of the neighbourhood where every pixel in the clique is a neighbour to every other pixel in the clique. This new formulation for the LCPDF alleviates some of the problems with having sparse data to calculate high dimensional PDFs.

2.1.4 Multiresolution Sampling

DeBonet [9] starts with the assumption that, below some resolution, textures contain regions that differ by less than some discrimination threshold. Below this threshold, similar regions can be permuted without altering the perceived characteristics of the texture. Initially an analysis pyramid is constructed. The analysis pyramid consists of a Laplacian pyramid and the responses to a filter bank of oriented first and second order Gaussian derivatives. Synthesis is done using a multiresolution approach, beginning at the lowest resolution level. The synthesis is initialized with the top level of the synthesized pyramid consisting of a tiled copy of the top level of the analysis image (a single pixel). Every level of the synthesis pyramid is constructed using information from the equivalent level of the analysis pyramid. For a given pixel location at the current level, a feature vector composed of the filter responses for its parent pixels at all of the higher levels is constructed. A candidate set is then composed of all of the pixels in the analysis pyramid that are similar enough to the current pixel. To measure similarity, a vector of thresholds is used. If the distance between any feature in the analysis pixel feature vector and the synthesized feature vector exceeds the corresponding threshold, then the pixel is not added to the candidate set. A separate vector of thresholds is used for every level of the pyramid. Once the candidate set has been calculated, uniform random sampling within the candidate set is used to determine the value of the current pixel to be synthesized.

2.1.5 Non-Parametric Sampling

Efros and Leung [14] model the input texture as a Markov random field (MRF). Given an image I and a pixel p ∈ I_synth, they let ω(p) ⊂ I_synth be a square patch of the synthesized image with width w and centred at p. Because MRFs are local, the value of p is independent of I \ ω(p) given ω(p). The closest match

ω_best = argmin_{ω ⊂ I} d(ω(p), ω)

for the current pixel is found. A set

Ω(p) = {ω ⊂ I : d(ω(p), ω) ≤ (1 + ε) d(ω(p), ω_best)}

is then created. The values of the centre pixels in Ω(p) are used to create a histogram. This histogram is then sampled uniformly to find the value of the synthesized pixel. The synthesis is initialized with a 3 x 3 patch taken from the original image I. The distance function is the L2 norm multiplied by a two-dimensional Gaussian. In general, not all of the pixels contained in ω(p) are known when pixel p is being synthesized. In this case, only the known values of ω(p) are used and the error is normalized by the number of known pixels. To synthesize the whole image, pixels are synthesized in a spiral order starting at the 3 x 3 seed patch.

2.1.6 Fixed Neighbourhood Search

Wei and Levoy [42] developed a technique that is very similar to Efros and Leung's [14]. While Efros and Leung only used the locality property of MRFs, Wei and Levoy also use the fact that MRFs are stationary. This means that the neighbourhood does not change from pixel to pixel. Wei and Levoy use an L-shaped causal neighbourhood ω(p) as shown in Figure 2.2. The pixels contained in this neighbourhood can all be assumed to have been synthesized because the synthesis progresses in scanline order. The synthesized image is initialized to random values.
The pixel values in the neighbourhoods are concatenated to form a high-dimensional feature vector. To make the synthesis more efficient, the feature vectors are preprocessed using tree-structured vector quantization (TSVQ). This process reduces the number of pixels to be searched to approximately 10% of the original pixels. It also provides an efficient search structure so that the nearest neighbour is found in O(log n) time, where n is the number of pixels remaining after TSVQ. The nearest neighbour is determined using the L2 norm and is used as the value of the synthesized pixel.

Figure 2.2: A 5 x 5, 2-level causal neighbourhood used by Wei and Levoy. At the current level the neighbourhood is L-shaped so that all pixels in the neighbourhood have been synthesized when synthesis is performed in scanline order. The lower resolution level uses a square neighbourhood because the entire lower resolution image has already been synthesized.

To further speed up their algorithm, Wei and Levoy incorporated a multiresolution pyramid. Synthesis is performed one level at a time, starting with the level with the lowest resolution. Except for the top level, each level of the pyramid uses pixel values from the current level and from the level above it. At the current level the feature vector is composed of pixels from the L-shaped neighbourhood of the current level and a square shaped neighbourhood from the previous level. This allows smaller neighbourhoods to be used because large-scale features are synthesized at lower levels. These differences make Wei and Levoy's algorithm about two orders of magnitude faster than Efros and Leung's.

2.1.7 Coherence Based

Ashikhmin [5] noted that the nearest neighbour to a pixel in the synthesized image often came from a location in the source image that was near the source locations of pixels in its neighbourhood.
Along with the synthesized image, a coherence map that stores the source locations of the pixels in the synthesized image is maintained. Each pixel in the L-shaped neighbourhood ω(p) adds an appropriately shifted pixel to the candidate set. This results in a candidate set with only O(w) pixels to be checked, which implies that the number of pixels checked is constant with respect to the source image size. This algorithm works especially well for images with high frequencies, which tend to be blurred by Wei and Levoy's algorithm [42].

2.1.8 Non-Hierarchical Synthesis

The typical scanline order does not work well for all textures. An image of a plant with branches coming off of a stem will not be synthesized well; if the image is synthesized from the bottom up, the branchings are more likely to be correct. Harrison [16] noted this fact and developed an algorithm that determines an optimal synthesis order for each texture. The locations and values of previously synthesized pixels constrain surrounding pixels, and the most constrained pixel can be synthesized most accurately. The entropy for every pixel in the image is calculated and stored. The entropies are modified with a small amount of noise to avoid having pixels with the same entropy. The pixel with the lowest entropy in the image is synthesized. After the pixel is synthesized, the entropies of pixels in its neighbourhood are updated.

2.1.9 Variable Neighbourhood Search

ANN search tends to lead to features with blurred edges in the synthesized image. Tran and Datta [37] suggest that the blurring is caused by using large neighbourhoods in areas where they are unnecessary. They reduce the blurring by using a variable neighbourhood search. The neighbourhood begins as a 5 x 5 L-shaped noncausal neighbourhood. The average RGB value of the neighbourhood is calculated.
If the difference between the RGB pixel values and the average value exceeds a user defined threshold, A, the ANN is found using the current neighbourhood. If the threshold is not exceeded, the next pixel is added to the neighbourhood. The pixel ordering is shown in Figure 2.3. The neighbourhood has a maximum size of 21 x 21. To accelerate the ANN search a kd-tree is used.

Figure 2.3: The pixel ordering used in variable neighbourhood search. The 3 x 3, 5 x 5 and 7 x 7 neighbourhoods and their orders are shown.

2.1.10 Image Analogies

Hertzmann et al. [18] combined Ashikhmin's work with Wei and Levoy's to create a higher quality synthesis algorithm. To synthesize a pixel, the nearest neighbours using an approximate nearest neighbour (ANN) search and a coherence based search are both computed. To encourage verbatim copying, a coherence parameter κ is selected by the user. The synthesized pixel is given the value of the coherence search result unless the distance to the ANN result is less than κ times the distance to the coherence based result. Higher values of κ result in more coherent images. In practice κ is typically in the range 0.5 < κ < 5. Although this algorithm takes significantly longer than Ashikhmin's algorithm, it does a better job at hiding the edges between coherent patches.

2.1.11 Jump Maps

Jump maps [46] reduce run-time processing to an absolute minimum. Most of the run-time computation spent during texture synthesis is spent on ANN calculations. In the jump map approach, no ANN computation is done at run-time. Instead, up to three ANNs are computed as a preprocess. Each of these ANNs is assigned a probability based on its similarity to the current pixel and then the probabilities are normalized. These normalized probabilities are placed into the jump map.
At run-time one of the previously synthesized adjacent pixels is randomly selected with equal probability. A random number in the range [0, n] is selected, where n is a user defined parameter that controls the patch size. Higher values of n result in fewer jumps and more verbatim copying. If the random number is less than 1, the jump map is accessed and one of the ANNs is chosen based on their probabilities. To avoid artifacts, a small Gaussian patch (3 x 3 or 5 x 5) is blended into the synthesized image instead of a single pixel.

Jump maps provided the first real-time pixel based texture synthesis algorithm. However, the method produces noticeable artifacts in textures with low frequency content. It is also very dependent on the synthesis order. In practice, a Hilbert curve is used.

2.1.12 Progressively Variant Textures

The progressively variant textures algorithm [47] was developed to synthesize nonhomogeneous textures. Nonhomogeneous textures are those where the size and shape of features vary as a function of spatial location throughout the texture. The key to the algorithm is a user specified texton mask. Textons are the fundamental elements in preattentive texture perception [48][22]. The exact definition of what a texton is is still unclear. Julesz proposed that textons can be elongated blobs, terminators, and crossings. Shum suggests that 2D textons may be image patches that can be obtained using simple clustering techniques [35]. The texton mask is a coarse segmentation of the image based on this definition of a texton. The texton mask corresponds to features in the source texture and usually consists of only 2 or 3 colours. In addition to the texton mask, the user also specifies an orientation field and transition function for both the original texture and for the synthesized texture. To synthesize a target pixel p, first the neighbourhood must be constructed.
This neighbourhood is oriented according to the orientation field vector at the current point and then scaled according to the magnitude of the vector. The already synthesized pixels are then resampled to determine the neighbourhood. The texton mask at the current pixel is then synthesized using a nearest neighbour search in the L2 norm. Next, the pixel value is determined by finding the pixel that minimizes

dist(N_c(v), N_c(p)) + dist(N_m(v), N_m(p)),

where N_c(p) is the neighbourhood around pixel p in the colour image, N_m(p) is the neighbourhood around pixel p in the texton mask, p is the target pixel and v is the current candidate pixel.

Because of the rotation, synthesis quality is affected by synthesis order. The synthesis order used is the same as in [38]. For synthesis of images a scanline order is used. When synthesizing the texture directly onto a geometric model, an orientation field is specified and is used to provide local coordinates. The synthesis proceeds to the next vertex in scanline order using this local coordinate frame.

2.2 Patch-Based Methods

Patch-based texture synthesis techniques enforce local statistics by directly copying whole blocks, B, of the original texture. Due to the reduced number of calculations required to synthesize an image, patch-based methods are typically significantly faster than pixel-based methods. In fact, several real-time patch based algorithms have been developed. The major difference between most patch-based algorithms is how they deal with patch boundaries. Features in the texture are not guaranteed to match across a patch boundary. Figure 2.4
In the cat map, the pixel at location [xl ,yl) is mapped to (xl+1,yl+1) by the equations xl+l = (xl + yl)mod n yi+1 = [xl + 2i/)mod n where the source texture is n x n. Although the cat map results in visually stochastic, tileable images, it does not sufficiently preserve local statistics for texture synthesis. To preserve local features the cat map is applied to blocks of texture instead of individual pixels. The algorithm starts with a simple tiling of the source texture to obtain the desired output texture size. For each tile in the output texture a random block is selected. These random blocks not only have random location, but random width and height as well. Several iterations of the cat map are applied to all of the random blocks. Edge mismatches are dealt with by blending the edges of the blocks with the tiled image. 2.2.2 Image Quilting The image quilting algorithm [13] attempts to place patches in an intelligent manner. Synthesis is performed in scanline order one patch at a time. Each patch to be synthesized has a small overlap with the previously synthesized patches. The best patch is selected as the patch which minimizes the Li norm for the pixels in the overlap region. Once the best patch has been selected, dynamic programming is used to find the minimum error boundary cut (MEBC) through the overlap region. The M E B C is the cut through the overlap region Chapter 2. Previous Work Hi Figure 2.4: Given the source texture (a) patch based methods construct output textures (b), (c), and (d) by placing patches and then correcting edge mismatches, (b) is synthesized by randomly placing patches and the patch edges are clearly visible, (c) finds the best match and places it but edge discontinuities are still visible, (d) finds the best match and then uses the M E B C to correct artifacts at patch boundaries. The image is from [13] and used here with permission from the authors. 2.2.3 Patch based Sampling Liang et al. 
[24] developed a patch-based algorithm that is very similar to image quilting. The two main differences between their algorithm and image quilting are the boundary fixing and the patch selection algorithm. The overlap region is corrected by blending with the previously synthesized patches as in [45]. The current patch is selected by using a novel approximate nearest neighbour (ANN) algorithm.

The ANN search is sped up by combining three methods. First, Principal Component Analysis (PCA) is applied to the data to reduce the dimensionality of the problem. Second, a kd-tree is used to find the high-dimensional ANN after PCA. Finally, a quad-tree pyramid (QTP) is used to find a small set of initial candidates. The QTP is similar to a Gaussian pyramid except that at every level, 4 higher level images are computed. After these techniques are applied to speed up the ANN search the algorithm is able to achieve real-time results.

2.2.4 Hybrid Texture Synthesis

Hybrid texture synthesis [28] was developed by Nealen and Alexa to improve the quality of the tile overlap. After the current tile has been found and added to the synthesized image the quality of the overlap region is examined. Pixels that differ beyond a threshold T are removed. If the entire overlap region exceeds a threshold then it is resynthesized with patches one quarter the size of the current patch. This procedure is performed recursively until an acceptable patch is found or individual pixels are used. The removed pixels in the boundary region are then replaced using individual pixel synthesis. Because the search neighbourhood cannot be precomputed, ANN search structures cannot be used to speed up the search. A full brute force search over the input texture would be very time consuming. Instead the best match is found in the Fourier domain using an O(n log n) time algorithm. Additionally, the colour channels are weighted with the vector W_{R,G,B} = {0.299, 0.587, 0.114}. This weighting corresponds to the luminance component of the YIQ colour space.

2.2.5 Non-Periodic Tilings

Cohen et al. [21] used Wang tiles to construct a nonperiodic tiling of texture patches. A Wang tile is a square tile where each edge is assigned a colour. A valid tiling requires that all shared edges between two tiles have matching colours. There exist sets of tiles that can be shown to create strictly aperiodic tilings of any size. However, these tilings may require large amounts of backtracking and often do not result in visually stochastic tilings. The authors use a set of tiles that does not guarantee aperiodicity. Tiles are placed in scanline order. As a result only the top and left sides of the new tile are required to match. In the tile sets that the authors construct, there are two matching tiles for any combination of top and left sides. One of the two matching tiles is randomly selected as the current tile.

Tiles are constructed so that no blending is required at runtime. A diamond shaped block is randomly selected for each colour in the tiling. The blocks are placed in each Wang tile so that the outer edges of the tile have the appropriate colours. The block edges within the Wang tiles are blended using MEBC. If the error for the set of Wang tiles exceeds a user specified threshold, then the coloured blocks are discarded and reselected. Because of how the blocks are constructed, edges with the same colour are guaranteed to match without any blending.

Figure 2.5: Wang tiles are created by choosing four diamond shaped regions from the original texture. They are then placed together to form a larger diamond and MEBC is used on the overlap regions. A square shaped tile is taken from the large diamond to form a single Wang tile. (b) shows the arrangements used to make an 8 tile set. The image is from [21] and used here with permission from the authors.
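The scanline placement rule described above can be sketched with a small hypothetical tile set; the tile encoding as (top, right, bottom, left) edge colours and the particular 8-tile construction here are illustrative, not the authors' actual set:

```python
import random

# Hypothetical 8-tile Wang set: each tile is (top, right, bottom, left) with two
# edge colours; by construction every (top, left) pair has exactly two matches.
TILES = [(t, r, (t + 1) % 2, l) for t in (0, 1) for l in (0, 1) for r in (0, 1)]

def place_tiles(rows, cols, seed=0):
    """Scanline placement: only the top and left edges of a new tile must match."""
    rng = random.Random(seed)
    grid = [[None] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            top = grid[y - 1][x][2] if y > 0 else rng.randrange(2)   # bottom edge of tile above
            left = grid[y][x - 1][1] if x > 0 else rng.randrange(2)  # right edge of tile to the left
            matches = [t for t in TILES if t[0] == top and t[3] == left]
            grid[y][x] = rng.choice(matches)  # two matches exist; pick one at random
    return grid
```

Because a matching tile always exists for any (top, left) pair, placement never needs to backtrack, which is what makes the scanline order viable.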
A graphics hardware implementation using Wang tiles has also been developed [41]. Because graphics hardware does not provide any guarantees about the order in which the tiles will be synthesized, the algorithm should provide a method for determining the four edge colours without knowing the four adjacent tiles. A hash table based on the cat map [45] is used to determine the four edge colours. The construction of the hash table guarantees that the edges of the patch will match adjacent patches. If the final rendered texture is n x n tiles, the hash table should contain at least n entries to prevent repetition.

The hardware implementation of Wang tiles uses all possible edge combinations. To store the tiles, a 1-dimensional packing is developed that uses all possible edge combinations and ensures that all tiles match adjacent tiles. This packing is then applied in both dimensions to construct the final 2-dimensional packing. This packing is important to avoid boundary artifacts caused by mipmap filtering.

2.2.6 Graphcuts

Kwatra et al. [23] represent the texture to be synthesized as a graph and reformulate the problem of finding the best seam into a min-cut problem. The min-cut problem has been well studied and several algorithms exist for solving it. Given two texture patches A and B, a graph is constructed in which every pixel in the overlap region is a node. Additional nodes are added for A and B, and edges between them and some of the pixels are given an infinite cost. This constrains these nodes to come from one of the two original patches. A cost function is defined as

M(s, t, A, B) = (||A(s) - B(s)|| + ||A(t) - B(t)||) / (||G_A^d(s)|| + ||G_A^d(t)|| + ||G_B^d(s)|| + ||G_B^d(t)||),

where s and t are adjacent pixels in the overlap region, A(s) and B(s) are the colours of the pixel at location s in the old and new patches, G_A^d(s) is the gradient in patch A along direction d, and || · || is an appropriate norm.
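Evaluating this cost for one pair of adjacent overlap pixels is a single arithmetic expression; a minimal sketch, assuming scalar intensities and precomputed gradient magnitudes (the small epsilon guarding against division by zero is an implementation detail not stated in the source):

```python
def seam_cost(a_s, b_s, a_t, b_t, grads, eps=1e-6):
    """Gradient-weighted seam cost M(s, t, A, B) between adjacent pixels s and t.

    a_s, b_s, a_t, b_t: pixel values at s and t in patches A and B (floats).
    grads: the four gradient magnitudes ||G_A^d(s)||, ||G_A^d(t)||,
           ||G_B^d(s)||, ||G_B^d(t)|| along the direction d of the edge (s, t).
    """
    numerator = abs(a_s - b_s) + abs(a_t - b_t)
    return numerator / (sum(grads) + eps)  # high gradients make a seam cheaper
```

Dividing by the gradient magnitudes makes seams cheap where the textures are already busy, which is exactly where a cut is least visible.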
The seam is chosen using a solution to the min-cut problem as in Figure 2.7. When a new patch is added near an old seam, the old seam is taken into account and may be altered. Three separate algorithms are proposed for selecting a patch and placing it in the output image. The important feature in all of these algorithms is that the patch can be placed anywhere, even on top of a previously synthesized area. This can be desirable for covering bad seams.

Figure 2.6: Wang tile packing. The left image shows an example of a 1-dimensional packing using two edge colours. The right image shows a 2-dimensional packing using 2 horizontal and 3 vertical edge colours.

2.2.7 Feature Matching and Deformation

Wu and Yu [44] make use of a feature map to help reduce discontinuities across patch boundaries. A feature map is a binary image that contains the set of curvilinear features in the image. The feature map is obtained using a bilateral sharpening filter followed by a two pass edge detection algorithm. The first pass finds strong edges and the second pass finds connections between the edges. Finally, a thinning algorithm is applied to ensure the features have only one pixel width.

Figure 2.7: The left hand image shows the overlap region between two patches. The right side shows the graph formulation and the minimum cost cut. The image is from [23] and used here with permission from the authors.

An error metric between two feature map patches is required for the synthesis. The distance function should account for both the distance between the features and the orientation of the features. Two features that cross but are perpendicular to each other are not a good match. The distance between two features is

f_dist(f_1, f_2) = ||f_1 - f_2||^2 + τ ||v_1 - v_2||^2,

where v_1 and v_2 are the tangents at f_1 and f_2, and τ is a weight indicating the importance of the tangent.
The function W(f^synth) returns the feature f^patch in the current patch that minimizes f_dist(f^synth, f^patch). In general, the overlapping region of a patch to be synthesized will contain more than one feature. The function B_f(f^patch) returns a count of the number of features f^synth that are mapped to f^patch. Given the set of already synthesized features, {f_1^synth, f_2^synth, ..., f_n^synth}, and the set of features in the patch currently being compared, {f_1^patch, f_2^patch, ..., f_m^patch}, the distance is

dist(f, g) = (1/n) Σ_i f_dist(f_i^synth, W(f_i^synth)) + β (1/m) Σ_j |B_f(f_j^patch) - 1|,

where β is a weight between the location/orientation and bijectivity terms. Bijectivity is then enforced by removing extraneous features from the overlap region.

The feature map and the final image are synthesized at the same time using a scanline order. The weighted sum of the feature map error and the L_2 norm of the colour information is used to select the best patch. Once the best matching patch has been selected it is warped so that the features are consistent across patch boundaries. The warping function moves the points in the overlap region so that the features in the newly synthesized feature map patch match the already synthesized features. At the same time it is desirable that the features stay fixed along the bottom and right hand edges of the synthesized patch. The warp is obtained using scattered data interpolation. First a thin-plate spline [10] warping is attempted. Thin-plate splines satisfy the warp constraints with the minimum bending energy. Unfortunately, they require a matrix inversion to calculate, and the matrix is singular for many sets of constraints. If the matrix is singular, Shepard's method [20] is used instead of thin-plate splines. Shepard's method is an iterative algorithm for scattered data interpolation. Once the warping function has been determined it is applied to both the feature map and the colour information.
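The thesis uses the iterative variant of Shepard's method from [20]; as an illustration of the underlying idea, here is the classic, non-iterative inverse-distance-weighted form of Shepard interpolation (function name and the power parameter p are illustrative, not taken from the source):

```python
def shepard(points, values, q, p=2.0):
    """Inverse-distance-weighted (Shepard) interpolation of scattered constraints.

    points: list of (x, y) constraint locations; values: the scalar value
    (e.g. a displacement component) at each point; q: the query location.
    """
    num = 0.0
    den = 0.0
    for (x, y), v in zip(points, values):
        d2 = (q[0] - x) ** 2 + (q[1] - y) ** 2
        if d2 == 0.0:
            return v  # query coincides with a constraint point: interpolate exactly
        w = d2 ** (-p / 2.0)  # weight falls off as 1 / distance^p
        num += w * v
        den += w
    return num / den
```

Unlike thin-plate splines, this needs no linear solve, which is why it is a safe fallback when the TPS system is singular.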
Warping takes a considerable amount of time and the algorithm is only able to synthesize a 256 x 256 pixel output texture in about 2 minutes on a 2 GHz Pentium IV. This is considerably longer than many previous patch-based methods.

Chapter 3
Local and Global Feature Synthesis Algorithm

Make everything as simple as possible, but not simpler. —Albert Einstein

The texture synthesis algorithm proposed here can be broken down into three main stages: obtaining a texton mask, synthesizing the texton mask, and synthesizing the final texture using the texton mask. These stages are described in detail here.

3.1 Obtaining the Texton Mask

Before synthesis can begin, a texton mask must be created for the input texture. The texton mask can be created manually [47]. However, manual construction of the texton mask is a time consuming process and, in general, synthesis results are not overly sensitive to the texton mask used [47]. Because of this, a fast technique that creates reasonable texton masks without user intervention is desirable.

The k-means algorithm is used to separate the image into k textons, where k is specified by the user. For most applications, values of 2 or 3 are sufficient for reasonable synthesis results. The k-means algorithm is a simple iterative clustering algorithm. It is initialized by choosing k random pixels as the initial centroids. Every pixel in the image is then assigned to the cluster with the closest centroid. Once all of the pixels have been assigned, the centroids are recalculated and the process continues until convergence. If the algorithm does not converge within a set number of iterations then the current centroids at that point in time are used. In practice the algorithm almost always converges. Clustering is performed on the 3-dimensional RGB triples of the pixel values.
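The clustering step above can be sketched as a plain k-means on RGB triples; initializing the centroids from distinct pixel values is a robustness simplification for this sketch, not necessarily what the thesis implementation does:

```python
import random

def kmeans_textons(pixels, k=2, iters=20, seed=1):
    """Cluster RGB triples into k textons (a sketch of the texton-mask step)."""
    rng = random.Random(seed)
    centroids = rng.sample(sorted(set(pixels)), k)  # assumes >= k distinct colours
    labels = [0] * len(pixels)
    for _ in range(iters):
        # assignment step: each pixel joins the nearest centroid (squared L2)
        for i, p in enumerate(pixels):
            labels[i] = min(range(k),
                            key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centroids[j])))
        # update step: centroid = mean of assigned pixels; empty clusters keep theirs
        new = []
        for j in range(k):
            members = [p for p, lab in zip(pixels, labels) if lab == j]
            new.append(tuple(sum(ch) / len(members) for ch in zip(*members)) if members
                       else centroids[j])
        if new == centroids:  # converged: no centroid moved
            break
        centroids = new
    return labels, centroids
```

The per-pixel labels are exactly the texton mask; the centroids themselves are discarded after clustering.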
Clustering can also be performed in other colour spaces such as HSV; however, this does not significantly affect the results in most cases.

3.2 Texton Mask Synthesis

The texton mask is synthesized using a patch-based technique. A technique based on image warping is used to ensure that features are aligned at patch boundaries. A special feature vector is used for both selection and warping. Because the texton masks have a sparse set of possible values, the feature vectors only encode edges. Each entry in the feature vector is a triple consisting of the current texton, the horizontal or vertical offset within the patch, and the normal direction. To construct a feature vector for a candidate patch, start on a pixel above the upper left corner of the patch. Continue one pixel at a time to the right until an edge is encountered. Add an entry to the feature vector for the pixels on the left and right sides of the edge. Continue with this until the right hand edge of the patch is reached. A similar process is used for constructing feature vectors for the left hand side of the candidate patch. Because feature edges are sparse for many textures, the feature vectors have a relatively low dimensionality.

3.2.1 Patch Selection

Because we are using an image warping technique to align patch boundaries, it is important that the features located at patch boundaries be bijective. That is, the feature vectors for the previously synthesized patches and the patch to be synthesized must both have the same textons in the same order. If this is not the case then there is no warping that can ensure that the features match at the edges. Each candidate patch has two feature vectors: one for the left side and one for the top. To ensure bijectivity, patches are rejected immediately if the sequence of textons in any feature vector does not match the sequence of textons in the already synthesized patches.
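The rejection test reduces to comparing texton sequences; a minimal sketch, assuming feature-vector entries are (texton, offset, normal) triples as in Figure 3.1:

```python
def bijective_match(synth_fv, cand_fv):
    """Accept a candidate patch only if its feature vector lists the same
    textons in the same order as the already-synthesized boundary.

    Each feature-vector entry is assumed to be a (texton, offset, normal) triple;
    only the texton sequence matters for the bijectivity test.
    """
    if len(synth_fv) != len(cand_fv):
        return False  # different number of edge crossings: no warp can align them
    return all(a[0] == b[0] for a, b in zip(synth_fv, cand_fv))
```

Offsets and normals are deliberately ignored here: they only enter later, through the distance function and the warp.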
The distance between two feature vectors is calculated using Equation 3.1:

dist(v^new, v^old) = Σ_i ( ||o_i^new - o_i^old||^2 + τ ||n_i^new - n_i^old||^2 ),   (3.1)

where v^new and v^old are the feature vectors, o_i is the ith offset, n_i is the ith normal, and τ is a weight usually set to 0.1.

Patches are synthesized in scanline order. The first patch is randomly selected. For patches along the top and left sides, both feature vectors are taken from the only adjacent synthesized patch. If a point is reached during the synthesis where a bijective match cannot be found, the process is restarted using a different random starting point.

3.2.2 Patch Warping

The patch warping should meet several criteria. The positions of the pixels along the bottom and right edges of the patch should not change. The positions of the texton edges immediately above the patch and to the left of the patch should match the features of the adjacent previously synthesized patches.

The first method that was examined for the warping was thin-plate splines (TPS). TPS simulate the bending of a thin sheet of metal that is bent to fit the constraints. TPS have the minimum bending energy of any warping function that can fit the constraints. To calculate the interpolation from TPS a linear system must be solved. For this problem the linear system was frequently ill-conditioned. In fact, it is not uncommon for the matrix to be singular.

Left Side Feature Vector: {1, 5, 160}, {0, 6, 340}, {0, 11, 160}, {1, 12, 340}

Figure 3.1: A feature vector is constructed for both the top and left edges of the texton mask patch to be synthesized. The feature vectors are composed of triples containing the texton value, the offset between the current texton boundary and the edge of the patch, and the tangent of the texton boundary.

To maintain a relatively high speed we use a bilinear warping function.
For every pixel in the patch, the horizontal and vertical offsets from the warping are computed independently and then combined to determine the source pixel in the original patch. To calculate the horizontal offset, the patch is divided vertically by a series of lines going from the point (o_i^new, -1) to the point (o_i^old, P) for a patch of size P x P. The closest line on each side of the current pixel is found. Here the left and right edges of the patch are considered vertical lines. The y-coordinate of the source pixel is

y_source = (m_l P + b_l) + ((y_dest - m_l x_dest - b_l)(m_r P + b_r - m_l P - b_l)) / (m_r x_dest + b_r - m_l x_dest - b_l),   (3.2)

where (x_dest, y_dest) are the coordinates of the current pixel within the patch, and m_l, m_r, b_l, and b_r are calculated from the equations of the lines immediately left and right of the current pixel. Equation 3.2 can also be used to calculate x_source by appropriately switching the x and y values in the coordinates.

Figure 3.2: The texton mask patches are warped during synthesis. The x-coordinate of the pixel in the unwarped patch that gets mapped to the current position, (x_dest, y_dest), is calculated by Equation 3.2.

3.3 Colour Synthesis

Once we have the texton mask, the final texture can be synthesized. A low-dimensional feature vector containing information from the texton mask and from the synthesized colour values is used to synthesize the final texture. As a result of how the information is stored, a variety of search structures can be used to speed up the search.

3.3.1 Low Dimensional Feature Vector

The feature vector used to synthesize the final texture consists of two parts. The first portion of the feature vector uses data taken from the synthesized texton mask. The first element in the feature vector contains the current texton.
The next four values contain the distance from the current pixel to the nearest texton edge directly above, below, left and right of the current pixel. For efficiency, these four values are computed and stored in a distance map before colour synthesis begins. The next four dimensions contain the texton values on the other side of the four edges from the previous four dimensions. These last four dimensions may be omitted when the texton mask contains only two texton values. At most, the texton mask contributes 9 dimensions to the feature vector.

Because the texton mask captures the largest features in the texture, only local colour information is needed. In many cases a 3 x 3 causal neighbourhood is sufficient to produce reasonable results. If a scanline order is used then this results in a 21 dimensional feature vector. If a random order is used then there is a 33 dimensional feature vector. This is much lower than traditional techniques; a typical 9 x 9 x 2 causal neighbourhood as used by Wei and Levoy [42] has a 195 dimension feature vector.

3.3.2 Distance Function

Because the feature vector contains several different types of information, an L_k norm alone would not be useful for calculating the distance between two feature vectors. The distance values and the colour values can be easily compared using weighted L_k norms. However, it does not make sense to compare texton values using an L_k norm because there is no ordering information between textons. Instead a delta function is used. The full distance equation is

dist(u, v) = τ (1 - δ_{u_1,v_1}) + λ Σ_{i=2}^{5} ||u_i - v_i||_{k_1} + κ Σ_{i=6}^{9} (1 - δ_{u_i,v_i}) + Σ_{i=10}^{d} w(i) ||u_i - v_i||_{k_2},   (3.3)

where τ, λ, and κ are constant weights and w(i) is a weight function such that the combined weight of a single pixel's colour channels sums to one. w(i) may be a constant or it may weight the colour terms unevenly as in [28].
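Assuming the layout of Equation 3.3 (the current texton, then four edge distances, then four cross-edge textons, then colour channels, here 0-indexed), the distance can be sketched as follows; the default weights and norms are illustrative values, not the thesis's tuned constants:

```python
def feature_distance(u, v, tau=float('inf'), lam=10.0, kappa=10.0,
                     w=lambda i: 1.0 / 3.0, k1=1, k2=1):
    """Sketch of the mixed distance of Eq. 3.3.

    Layout assumed: u[0] current texton, u[1:5] edge distances,
    u[5:9] cross-edge textons, u[9:] colour channels.
    """
    d = 0.0 if u[0] == v[0] else tau                                  # delta on current texton
    d += lam * sum(abs(a - b) ** k1 for a, b in zip(u[1:5], v[1:5]))  # edge distances, L_k1
    d += kappa * sum(0.0 if a == b else 1.0
                     for a, b in zip(u[5:9], v[5:9]))                 # cross-edge textons, delta
    d += sum(w(i) * abs(a - b) ** k2
             for i, (a, b) in enumerate(zip(u[9:], v[9:])))           # colours, weighted L_k2
    return d
```

With tau left at infinity, any candidate from a different texton gets an infinite distance, which is what lets the search be restricted to same-texton pixels.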
The correct choice for the constants τ, λ, and κ is important for the algorithm to work effectively. Because it is very important that the final colour value belongs to the same texton as the current texton, τ is set extremely high. Often τ is set to infinity. This guarantees that the final pixel will come from the same texton and can simplify the search. λ and κ are also set fairly high relative to the individual colour weights. By keeping these values high, the large scale global information is preserved.

The choice of L_k norms is also important. It has been shown that in high dimensions the choice of k can affect the meaningfulness of the result of a nearest neighbour search [2]. Higher values of k provide poor contrast between the nearest and farthest neighbours as the dimensionality increases. For high dimensions, the relative contrast given by an L_k norm with a smaller value of k is more likely to dominate the contrast of an L_k norm with a higher k value. Although we have reduced the dimensionality significantly, this behaviour can be empirically shown to some degree for dimensionalities as low as 2. For dimensionalities of around 10-20, this behaviour applies over 95% of the time when the L_1 and L_2 norms are compared [2]. To account for this behaviour we usually use the L_1 norm in our distance calculations. The behaviour of nearest neighbour search in high dimensions is discussed in detail in Appendix A.

3.3.3 Search Techniques

The unique structure of the feature vector lends itself to specialized search structures. For example, if τ = ∞ then we can create one small search structure for every possible texton value instead of having one large search structure. For all of the search structures described here it can be assumed that the search space has already been split into sets with the same texton values.
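The per-texton split is a single bucketing pass over the source pixels; a sketch, assuming candidates are represented as (position, feature) pairs with the texton in the first feature slot:

```python
from collections import defaultdict

def split_by_texton(candidates):
    """Partition candidate pixels into one search set per texton value.

    Usable when tau = infinity, since cross-texton matches then cost infinity
    and can never be nearest neighbours.
    """
    sets = defaultdict(list)
    for pos, feature in candidates:
        sets[feature[0]].append((pos, feature))  # feature[0] is the current texton
    return sets
```

Each per-texton set can then back its own search structure (sequential scan, hashing, or k-coherence) independently.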
Sequential Scan

At first glance a full systematic search seems like a hopelessly naive algorithm. However, it has been empirically shown that many exact nearest neighbour search structures are outperformed by a sequential scan for dimensionalities as low as 10 [40]. Due to the low dimensionality of our method compared to other modern methods, we can perform a full search in a relatively short amount of time. Additionally, when τ = ∞ we only need to perform a sequential scan over approximately 1/n of the search space, where n is the number of unique texton values.

Distance-Based Hashing

Since the value of λ is relatively high compared to the colour weights, it makes sense to split the search space based on the distances to the nearest texton edge. It is very likely that the nearest neighbour will match at least one of the edge distances exactly. This likelihood can be increased by using the L_2 norm for the distances since it penalizes large differences. A separate hash table is constructed for each of the four distance dimensions. Each hash table contains one hash bucket for each possible distance value. In practice, there are usually significantly fewer possible distance values than the width or height of the source texture. Each hash bucket is capable of holding enough pixels to store the entire original source image. During search, for each distance dimension, the appropriate hash bucket is chosen. Within the buckets a systematic search is performed. Although this results in some repeated distance calculations for pixels that match more than one distance, the number of extra calculations is reasonably small.

The space complexity of distance-based hashing is O(nm), where n is the number of pixels in the source image and m = max(height_source, width_source). As a result the space complexity can vary between O(n√n) and O(n^2). The worst case time complexity is O(n) per synthesized pixel.
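The four-table scheme can be sketched as follows; the (position, feature) candidate representation and the feature layout (edge distances in slots 1-4) are assumptions carried over from earlier sketches, and the duplicate-skipping detail is an implementation choice, not something the thesis specifies:

```python
from collections import defaultdict

def build_distance_hashes(candidates):
    """One hash table per edge-distance dimension, keyed on the exact distance.

    Candidates are (position, feature) pairs; feature[1:5] are the four
    distances to the nearest texton edge (up, down, left, right).
    """
    tables = [defaultdict(list) for _ in range(4)]
    for cand in candidates:
        for dim in range(4):
            tables[dim][cand[1][1 + dim]].append(cand)
    return tables

def query(tables, target, dist):
    """Scan only the buckets whose key matches one of the target's edge distances."""
    best, best_d = None, float('inf')
    seen = set()
    for dim in range(4):
        for cand in tables[dim].get(target[1 + dim], []):
            if id(cand) in seen:
                continue  # already scored via another matching dimension
            seen.add(id(cand))
            d = dist(cand[1], target)
            if d < best_d:
                best, best_d = cand, d
    return best
```

Note this is approximate: a true nearest neighbour that matches none of the four edge distances exactly is never examined, which is the trade-off the text describes.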
k-coherence Search

k-coherence search is based on Ashikhmin's synthesis method for natural textures [5], which can be viewed as the special case where k = 1. k-coherence was first used by Tong et al. [36] to synthesize bidirectional texture functions (BTFs). Every pixel p_0 in the synthesized image corresponds to some pixel p_1 in the source image. When synthesizing a pixel p_s we look at all pixels p_0 in the neighbourhood of p_s. For each p_0 ∈ N(p_s) there is some translation ψ such that ψ(loc(p_0)) = loc(p_s). We then apply ψ to each pixel p_1 corresponding to a pixel p_0 ∈ N(p_s) to obtain a set of shifted pixels p̂_1. For every pixel p_1 in the source image we build a coherence set C(p_1) of the k - 1 pixels that are most similar. The complete candidate set C(p_s) consists of all of the pixels p̂_1 and all of the pixels in their coherence sets C(p̂_1). Mathematically this can be described as

C(p_s) = { p | p ∈ {p̂_1} ∪ C(p̂_1), p̂_1 = ψ(p_1), ψ(loc(p_0)) = loc(p_s), ∀ p_0 ∈ N(p_s) }.

Because the noncausal neighbourhoods used in the feature vector are very small (typically 3 x 3), a larger neighbourhood N(p_s) is used to determine the candidate set C(p_s). Typically a 5 x 5 causal neighbourhood is sufficient to produce reasonable results.

Properly selecting which pixels belong in the coherence set C(p_1) is important for this search method to work effectively. Selection of the coherence sets is a preprocess, so we do not need to use the low-dimensional feature vector to find the k - 1 nearest neighbours. We use a feature vector composed of RGB triples from a square neighbourhood around the current point. The size of this neighbourhood varies depending on the characteristics of the source texture. Typically a 9 x 9 neighbourhood is sufficient. This results in a 243 dimensional feature vector.
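The candidate-set construction can be sketched as follows; the dictionary-based bookkeeping (an output-to-source map and precomputed coherence sets) is an assumption for this sketch, since the thesis does not specify data structures:

```python
def candidate_set(p_s, source_of, coherence, neighbourhood):
    """Build the k-coherence candidate set C(p_s) for output pixel p_s.

    source_of: maps an output position to its source position (p_0 -> p_1).
    coherence: maps a source position to its k-1 most similar source positions.
    neighbourhood: already-synthesized output positions around p_s, i.e. N(p_s).
    """
    candidates = set()
    for p0 in neighbourhood:
        # psi is the translation taking loc(p0) to loc(p_s); apply it to p0's source pixel
        shift = (p_s[0] - p0[0], p_s[1] - p0[1])
        src = source_of[p0]
        p1_hat = (src[0] + shift[0], src[1] + shift[1])
        candidates.add(p1_hat)
        candidates.update(coherence.get(p1_hat, []))  # plus its coherence set
    return candidates
```

The full distance function then only needs to be evaluated on this small set, which is what gives the method its constant per-pixel cost.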
Because the coherence-set feature vector has such a high dimensionality, we use the L_1 norm to compute these distances. It is typical for pixels located near each other in the source texture to appear in the same coherence set. We want to prevent this to encourage diversity in the synthesized texture. As in [46], we prevent pixels that are within a certain radius r of pixels in the coherence set from being added to the coherence set. r is typically set to 5.

k-coherence search has a space complexity of O(kn) and a time complexity of O(km), where the input and output images contain n and m pixels respectively. For fixed values of k, the time complexity is constant per synthesized pixel and the space complexity increases linearly with the number of input pixels n. These are both desirable properties for a texture synthesis algorithm.

Chapter 4
Entropy Minimizing Synthesis Algorithm

Only entropy comes easy. —Anton Chekhov

The second texture synthesis algorithm uses a more traditional approach. The main contributions of this algorithm are:

• Automatically selecting a causal neighbourhood for each texture.
• Doing most of the computation for the distance calculations as a preprocess.

4.1 Discretizing the Texture

For the later steps of the algorithm to work properly, the texture must be discretized into a small number of values. For an n x m texture we use vector quantization (VQ) to find at most min(m, n) vectors. There are many choices of which VQ algorithm and pixel representation to use.

4.1.1 Vector Quantization

We use the k-means algorithm for VQ. The k-means algorithm begins by randomly selecting m pixels and setting the initial cluster centroids to their feature vectors. Then for every pixel in the image the distance between its feature vector and every cluster centroid is calculated. The pixel is assigned to the cluster with the closest centroid.
Once all pixels have been assigned to a cluster, the centroids are recalculated. The new centroid is the mean of all the feature vectors for pixels assigned to the cluster. The process repeats until either no pixels change clusters or a maximum number of iterations is exceeded.

At some point in the algorithm, a cluster may have no pixels assigned to it. When this happens, the cluster keeps its previous centroid even though it is empty. The cluster will often become nonempty again in future iterations. Depending on the texture and the choice of feature vectors it may not be possible to assign m unique centroids. In this case, we divide the number of clusters in half. We repeat this until we can assign all of the clusters a unique initial starting value.

4.1.2 Feature Vectors

We can use any data in the image to construct the feature vectors as long as there is a good error metric for them. The easiest feature vector to use is the RGB triple of each pixel with an L_k norm as the distance metric. Because this is being done as a preprocess, it makes sense to perform a little extra computation and use a perceptually linear colour space such as CIE L*a*b*. Simpler feature vectors, such as only the luminance value per pixel, can also be used. Additional information such as the horizontal and vertical gradients per colour channel can also be included. Finally, values from surrounding pixels can be included in the feature vector and combined with complicated error metrics that include masking effects [26].

4.2 Choosing the Dimensions to Keep

Using the assumption that textures can be represented as MRFs, we know that every pixel value can be computed by examining a finite set of each pixel's neighbours. Wei and Levoy [42] took advantage of this by using an L-shaped causal neighbourhood of pixels around the current pixel. This neighbourhood must be
large enough to contain information about the feature being synthesized and the relationship between features. For example, if a wire mesh is being synthesized the neighbourhood must be at least as large as the space between wires. Otherwise the mesh may become irregular, with many wires close together in some areas and large gaps in others. This requirement leads to large neighbourhoods and consequently high dimensional feature vectors.

If we are using an n dimensional feature vector to synthesize a texture, can we achieve a result with similar quality using an n - 1 dimensional feature vector? In many common cases the answer is yes. The colour channels in an RGB triple do not vary independently for most textures. Adjacent pixels often have similar RGB values. This provides a large amount of redundancy that can be exploited. From this point on, instead of rejecting dimensions from the neighbourhood, we will call the standard causal neighbourhood the candidate neighbourhood and add candidate dimensions to the actual neighbourhood.

4.2.1 Entropy and Information Gain

To decide which dimensions to add to the neighbourhood we need a metric to determine how much information about the current pixel each dimension contributes. Information gain provides a simple way to do this. Storing a data point, X, requires H(X) bits. H(X) is the entropy of X and is calculated by

H(X) = - Σ_{j=1}^{m} p_j log_2 p_j,   (4.1)

where m is the number of possible values X can have and p_j is the probability that X takes the jth value. In most applications data points do not exist by themselves. Previous data points are often related to the current data point in some fashion. Given the value of a previous data point, Y, we can compute the conditional entropy
H(X|Y) = - Σ_{j=1}^{m} Σ_{i=1}^{m} P(X = v_i, Y = v_j) log_2 P(X = v_i | Y = v_j)
       = - Σ_{j=1}^{m} P(Y = v_j) Σ_{i=1}^{m} P(X = v_i | Y = v_j) log_2 P(X = v_i | Y = v_j)
       = Σ_{j=1}^{m} P(Y = v_j) H(X | Y = v_j),   (4.2)

where v_j is the jth possible value for Y and P(Y = v_j) is the probability that Y = v_j. The difference between H(X) and H(X|Y) is the information gain

IG(X|Y) = H(X) - H(X|Y).   (4.3)

Applying this to texture synthesis, we want to keep the dimensions that provide the largest information gain. To simplify the calculations we use the clusters from the VQ instead of using actual pixel values. This also means that instead of choosing an individual colour channel within a pixel to add, we are choosing the whole pixel.

A naive implementation would simply calculate the information gain for each pixel in the candidate neighbourhood and add them in order of decreasing information gain until the number of dimensions in the neighbourhood reaches a threshold. In practice this results in pixels that are close to the target pixel being included while farther pixels are left out. Figure 4.1 shows the order that pixels are added to the neighbourhood for some textures using this approach. This means that we lose the information about interfeature distances that the larger neighbourhood provided and we are not fully exploiting the redundancy between pixels.

To build the neighbourhood we start by finding the pixel Y_1 that results in the lowest conditional entropy H(X|Y_1) and add it to the neighbourhood. Then we find and add the pixel Y_2 that minimizes the conditional entropy H(X|Y_1, Y_2). We repeat this until the conditional entropy is equal to zero, H(X|Y_1, Y_2, ..., Y_n) = 0. Now we restart the process using the next best conditional entropy H(X|Y_{n+1}) > H(X|Y_1). If the best pixel has already been
Figure 4.1: The top images are two example textures. The middle images show a neighbourhood for each texture with the neighbours ordered by decreasing information gain. The bottom images show multiresolution neighbourhoods ordered by decreasing information gain.

    Dim #   H(X|Y_n)    Dim #   H(X|Y_n)
    0       4.86652     12      4.5736
    1       4.8833      13      4.58375
    2       4.90284     14      4.5859
    3       4.93099     15      4.57048
    4       4.92979     16      4.52477
    5       4.8721      17      4.59278
    6       4.88349     18      4.49937
    7       4.87238     19      4.54705
    8       4.92225     20      4.56063
    9       4.92542
    10      4.86394
    11      4.87569

Table 4.1: The conditional entropies from the green scales texture using a {5 x 5, 2} causal neighbourhood. The dimensions on the right are taken from the lower resolution level of a Gaussian pyramid while those on the left are from the current level. The dimension numbering is the same as that used in Figure 2.2.

added to the neighbourhood it is still used to calculate joint conditional entropies but is not added to the neighbourhood again. The process continues until the threshold for the number of dimensions is reached.

For a given pixel, the conditional entropies obtained using pixels from higher levels in the multiresolution pyramid are often significantly lower than the conditional entropies using other pixels at the current level. Table 4.1 shows the conditional entropies H(X|Y) for every pixel in a {5 x 5, 2} causal neighbourhood. When the number of pixels being compared is less than the number of pixels in the neighbourhood that come from the previous level then it is possible that all of the pixels being compared come from the previous level. When this occurs the algorithm is unable to distinguish between the pixels at the current
level that contribute to a single pixel at the lower resolution level. This leads to 2 x 2 blocks of pixels that all have the same source pixel in the original image. To correct this, a check is made to ensure that at least two of the pixels in the candidate neighbourhood came from the current level. If there are not at least two pixels from the current level, the candidate dimension with the highest conditional entropy is removed from the candidate neighbourhood and replaced with the dimension from the current level with the lowest conditional entropy. This process is repeated until there are two candidate dimensions from the current level.

Figure 4.2: The right hand side shows the 2 x 2 blocks of repeated pixels that occur when only dimensions from the lower resolution neighbourhood are used in the synthesis. The left hand image shows the result when the algorithm is forced to choose some dimensions from the current level. The top of the image shows the pixels contributing to the feature vectors.

4.3 Synthesizing the Texture

A k-coherence search is used to provide a constant time search for approximate nearest neighbours. To further speed up the ANN search an approximate error is used instead of the actual error. The approximate error is computed using the cluster centroids that were obtained by VQ. For every pair of centroids the distance between them is computed and stored in an error texture. The number of errors that need to be stored is O(n) where n is the number of pixels in the source image. The approximate error for a candidate is calculated by accumulating the approximate errors for each pixel in the final neighbourhood. Although this introduces some inaccuracy into the search, it allows arbitrarily computationally expensive error metrics to be used with a constant cost at runtime.
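The entropy-based dimension selection of Section 4.2.1 and the precomputed error table described above can be sketched as follows. This is a minimal illustration rather than the thesis implementation: the function names, the data layout (one VQ cluster label per sample, candidate dimensions as columns), and the squared-L2 centroid distance are our own assumptions.

```python
import numpy as np

def entropy(x):
    """H(X) = -sum_j p_j log2 p_j (equation 4.1); x holds cluster labels."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def cond_entropy(x, ys):
    """H(X|Y) (equation 4.2), where Y is the joint value of the columns
    of ys (shape: samples x already-conditioned dimensions)."""
    n = len(x)
    keys = [tuple(row) for row in ys]
    h = 0.0
    for key in set(keys):
        xs = [xi for xi, k in zip(x, keys) if k == key]
        p_y = len(xs) / n
        _, counts = np.unique(xs, return_counts=True)
        p = counts / counts.sum()
        h -= p_y * float((p * np.log2(p)).sum())
    return h

def choose_dimensions(x, cand, budget):
    """Greedy selection: repeatedly add the candidate dimension with the
    lowest joint conditional entropy; when the entropy reaches zero,
    restart the chain with the next best unchosen dimension. A dimension
    already in the neighbourhood may still condition later entropies but
    is never added twice."""
    chosen, cond = [], []
    while len(chosen) < budget:
        pool = [d for d in range(cand.shape[1])
                if d not in cond and (cond or d not in chosen)]
        if not pool:
            break
        scores = {d: cond_entropy(x, cand[:, cond + [d]]) for d in pool}
        best = min(pool, key=scores.get)
        cond.append(best)
        if best not in chosen:
            chosen.append(best)
        if scores[best] == 0.0:
            cond = []  # chain fully determines X; restart with the next best
    return chosen

def error_table(centroids):
    """Squared-L2 distance between every pair of VQ centroids,
    precomputed so the runtime error is a table lookup per pixel."""
    diff = centroids[:, None, :] - centroids[None, :, :]
    return (diff * diff).sum(axis=-1)
```

On a toy input where one candidate column reproduces the target labels exactly, the greedy pass picks that column first (conditional entropy zero), restarts, and then falls back to the next most informative column.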
To minimize the number of dimensions needed to accurately synthesize textures, a multiresolution Gaussian pyramid is used.

Chapter 5

Results

Results! Why, man, I have gotten a lot of results. I know several thousand things that won't work. —Thomas Edison

All of the results shown here were obtained using a 1.8 GHz Pentium IV CPU with 512 MB of RAM.

5.1 Local and Global Synthesis

5.1.1 Texton Mask Synthesis

It takes between 0.5 and 9 seconds to determine the initial texton mask. The amount of time required depends on the size of the source image and the desired number of textons. The time spent synthesizing the texton mask varies widely depending on the initial texton mask. When 32 x 32 patches are used the feature vectors typically have between 6 and 12 triples for two texton values. Textures with larger feature vectors take longer to synthesize because they need to restart more often to maintain bijectivity. As the output size grows it becomes more likely that the synthesis process will need to restart, so the time required to synthesize a texton mask grows rapidly with the size of the output. Table B.1 shows the synthesis times for several texton masks. The enforcement of bijectivity means that most candidate patches are rejected very early in the comparison process. This helps keep the time from growing quickly with the size of the input.

Texton mask synthesis has a few weaknesses. The algorithm for determining the textons does not always divide the images in ways that are appropriate for the human visual system. The segmentation is usually acceptable when only two texton values are used, but its flaws become more noticeable when the number of texton values is increased, as shown in Figure 5.1. In many cases, even when the texton mask does not divide the image well, the final synthesis result is still acceptable. Highly structured texton masks are often poorly synthesized.
The best matching patch at a given step in the procedure might require significant warping. This affects the structure of the final image and is visually quite noticeable. Additionally, straight edges in texton masks can become bent across patch boundaries or even curved within a single patch (Figure 5.3). Horizontal and vertical edges in texton masks are often warped to noticeably different angles.

Texton masks containing more than two texton values, or with large numbers of transitions between textons, typically take longer to synthesize. The requirement for bijectivity across patch boundaries results in an increased number of restarts in these cases. When synthesizing larger textures the randomized restart process prevents some texton masks from being synthesized.

5.1.2 Colour Synthesis

The results obtained using the exhaustive search are the same as those obtained from the distance-based hash search. The hash method examines far fewer pixels and achieves its results more quickly than the full search, so no exhaustive search results are included here.

The distance-based search method is robust when it is given poorly synthesized texton masks. In general the maximum distance between textons in the texton mask does not grow with the size of the input texture, so the time taken by this method grows linearly with the input size.

k-coherence search is faster than the distance-based search because it compares fewer potential matches per synthesized pixel. The difference increases

Figure 5.1: The results of automated texton selection. The overall structure is captured well and the major features are visible for all of the images. In (c) the highlights from the red section of the image are clustered with the green texton. Smooth transitions like those in (d) and (e) are not captured well by the texton mask.
The addition of a third texton value to (d), (e) and (f) does not add much information about the features. In (f), the third texton does not differentiate between the green patches and the yellow patches.

Figure 5.2: Results of texton mask synthesis. (a) contains some visible patch boundaries; (b) shows a poor distortion in the lower left corner.

Figure 5.3: Results of texton mask synthesis. The horizontal and vertical edges in (b) and (c) are distorted by the patch warping. Although the original texton mask for (c) only had straight edges, the synthesized mask has some curved edges as a result of patch warping.

with larger source textures because the synthesis time for k-coherence search is unaffected by the size of the source image. The results using k-coherence search typically have numerous obvious artifacts that can be removed by using multiple passes of the synthesis algorithm. This makes k-coherence search ideal for order independent search [43]. The texton mask provides global information about the texture, making multiresolution pyramids unnecessary even when using order independent search. The time spent using order independent search is usually comparable to that obtained by Wei and Levoy [43]. Our algorithm performs far worse when the texton mask synthesis requires several restarts.

The results using texton masks with k-coherence search show different artifacts from Wei and Levoy's when not enough generations are used. When only one generation is used Wei and Levoy's results have errors similar to De Bonet's [9]. Our results show the global structure after the first generation but are blocky, similar to what would be expected if a lower resolution level of a Gaussian pyramid were scaled to the current size. Figure 5.4 shows the importance of using enough generations in the synthesis.
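As a rough illustration of how a k-coherence search gathers its candidates (a generic sketch of the published technique, not the thesis code; the array layout, helper names, and squared-L2 feature distance are our own assumptions):

```python
import numpy as np

def build_similarity_sets(features, k):
    """Analysis phase: for every source pixel, store the k source pixels
    with the most similar neighbourhood feature vectors."""
    n = len(features)
    sets = np.zeros((n, k), dtype=int)
    for i in range(n):
        d = ((features - features[i]) ** 2).sum(axis=1)
        sets[i] = np.argsort(d)[:k]  # the pixel itself sits at distance 0
    return sets

def coherence_candidates(out_src, y, x, sim, src_shape, offsets):
    """Synthesis phase: each already-synthesized causal neighbour of the
    output pixel (y, x) proposes the source pixels similar to its own
    source location, shifted so that the neighbour would line up."""
    h, w = src_shape
    cands = set()
    for dy, dx in offsets:           # causal offsets, e.g. (-1, 0), (0, -1)
        s = out_src[y + dy, x + dx]  # flat source index of the neighbour
        for t in sim[s]:
            sy, sx = divmod(int(t), w)
            sy, sx = sy - dy, sx - dx  # candidate for the centre pixel
            if 0 <= sy < h and 0 <= sx < w:
                cands.add(sy * w + sx)
    return cands
```

The best candidate is then chosen by comparing neighbourhood errors; k trades search cost against quality, which is why the results above vary with k and with the number of generations.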
The k-coherence search is not as robust as distance-based hashing when a poorly synthesized texton mask is used. This is a result of the larger number of pixels searched by the distance-based hashing. Figure 5.5 shows a comparison of the two methods when a poorly synthesized texton mask is used.

Some failures of the colour synthesis algorithms are shown in Figure 5.6. Textures that contain smooth transitions of a single hue, such as water, a sky with light clouds, or flames, are synthesized poorly. The texton mask cannot break up the image in a fine enough manner and, as a result, sharp edges appear in the synthesized image instead of smooth transitions. Another common problem is a 'speckled' effect: the distance portion of the error calculation overrides the local colour information, leading to a lack of coherence in the reproduced colours. This leads to high frequency noise in the final image.

Figure 5.4: The original texture and its texton mask are shown at the left. The synthesized texton mask is shown next to them. The results of k-coherence search using order independent texture synthesis (2 and 3 iterations) are shown on the right.

5.2 Entropy Minimizing Synthesis

Using information gain to reduce the dimensionality results in very efficient synthesis, and the results are generally of very high quality. Selecting the dimensions for the candidate neighbourhood typically takes between 30 seconds and 10 minutes. This depends on the number of pixels required in the candidate neighbourhood, the size of the source image, and the number of textons the image is divided into. The complexity of calculating the conditional entropy grows exponentially with the number of pixels already known. Textures that require large numbers of pixels to be known before the conditional entropy becomes

Figure 5.5: The original texture and its texton mask are shown at the left. The synthesized texton mask is shown next to them.
The results of k-coherence search and distance-based hashing are shown on the right. The k-coherence search shows extensive problems with colour mismatches in areas where the synthesized texton mask is very different from the original texton mask.

zero spend more time calculating the pixels in the conditional neighbourhood. Table B.4 shows some examples of the time required to calculate the candidate neighbourhood.

Table B.5 shows that the number of textons used does not affect the final synthesis time. The number of textons used affects the time required to choose the dimensions that are kept and results in different candidate neighbourhoods being selected. The number of textons used also affects the error calculation: the more textons are used, the more accurate the error calculation will be. Figures 5.9 and 5.10 show the effect of the number of textons on the final synthesis result. Unless otherwise specified, all of the results in this section use the L2 norm in the RGB colour space.

The reduced neighbourhood search is able to accurately synthesize a wide range of textures. When artifacts are present they are similar to those caused by fixed neighbourhood search [43]. When a very small number of dimensions (~4-6) or a very small number of textons (~8) is used, artifacts appear that are similar to those obtained using too small a neighbourhood with other synthesis techniques.

The reduced neighbourhood search has poor results on some low frequency

Figure 5.6: Examples of colour synthesis failures. The textures that contain various shades of a single colour are synthesized poorly. The smooth transitions are lost in the caustics image. The green scales image lacks the checkerboard pattern from the original image. The rock and brick textures have the correct overall structure but the colours are poorly reproduced. All of the images were synthesized using k-coherence search with 3 generations and k = 2.
Similar results are obtained using distance-based hashing for these examples. The texton masks are shown in Figures 5.2 and 5.3.

Figure 5.7: Examples of colour synthesis using k-coherence search. Although the synthesized texton masks are visibly quite different from the originals, the details and colour are accurately reproduced. All of the images were synthesized using k-coherence search with 3 generations and k = 2.

Figure 5.8: The left two columns show the source textures and their texton masks. The right columns show the synthesized texton masks and the results of using distance-based hashing. Even when the texton mask is distorted the colour synthesis results are acceptable.

textures and some highly structured textures. In some low frequency examples the artifacts are similar to those caused by jump maps [46]. The noticeable patch discontinuities are not as frequent as they are when jump maps are used, and changing some of the parameters often removes the jump map type artifacts. The artifacts resulting from using information gain to reduce the neighbourhood size are more like the artifacts caused by jump maps [46] than artifacts caused by using too small a neighbourhood. Unlike [46], our method does not create obvious seams when synthesizing low frequency textures. Artifacts caused by using a small neighbourhood are not present when the candidate neighbourhood is chosen from a sufficiently large neighbourhood, unless a very small number of dimensions is used (~4-6).

Highly structured textures, that is, textures with regularly spaced features, are synthesized poorly. In cases where the features are highly detailed the synthesis algorithm often does not have enough dimensions in the search to accurately calculate the best visual match. As a result the synthesized image contains high frequency noise in some of these areas.
Figure 5.9: The number of textons used in the synthesis has a large impact on the quality of the results. When only 8 textons are used the image is very blurry. When 16 textons are used the image is still slightly blurred. At 32 textons the image is clear but there is an uneven distribution of black and red berries. The image is clear and the distribution of black and red berries matches the input image when 64 textons are used.

Figure 5.10: The number of textons used in the synthesis also affects low frequency textures. When too few textons are used the interfeature distances are not maintained and the features are not accurately reproduced. When too many textons are used the images show slight discontinuities at patch boundaries.

Figure 5.11: Natural textures are synthesized using k-coherence search with k = 1. The quality of synthesis is similar to Ashikhmin's results. The images are all synthesized using 32 textons except for the rocks, which use 64 textons. The circuit texture used 8 dimensions, the rocks used 16 dimensions, the grass used 12 dimensions and the flowers used 24 dimensions.

Figure 5.12: Low frequency texture synthesis. The artifacts that are present in the images are consistent with fixed-neighbourhood search results. The wavy pink lines have some obvious patch edges that are similar to what would be seen using jump maps. The green scales texture was synthesized using a {5 x 5, 2} neighbourhood at every level except the top, which used a {5 x 5, 1} neighbourhood.
Figure 5.13: Texture synthesis on highly structured examples. The images in the left column are the original textures. The middle column contains synthesis results with k = 1. The right column contains synthesis results with k = 2. The artifacts when k = 1 are similar to the artifacts generated by Ashikhmin's method. The artifacts when k = 2 are similar to those generated by fixed neighbourhood search, except for the checkerboard pattern. The artifacts in the checkerboard are similar to jump map artifacts.

Chapter 6

Conclusion

A conclusion is simply the place where someone got tired of thinking. —Arthur Bloch

Splitting the synthesis into local and global features works well on textures where there are a small number of well defined features. In these cases the texton mask is able to segment the image in a reasonable manner. Texton masks are useful if a particular pattern is desired but the storage costs for the full texture are too large. A texton mask containing the pattern can be stored using only 1-2 bits per pixel.
A small colour image corresponding to a patch in the texton mask would provide the colour information and the remaining colour would be synthesized. Another possible application is a sketching interface. An artist could start with a small image showing some sky with clouds, quickly sketch out some cloud boundaries in an image, and allow the program to fill in the details of the image.

Many of the dimensions in a standard search neighbourhood can be removed without having much impact on the quality of the results obtained. Using approximate distances does not have a large impact on the results obtained by ANN search when the approximation is good enough. These two modifications result in a significant speed up over fixed-neighbourhood search.

6.1 Future Work

6.1.1 Texton Masks

The texton mask synthesis currently restarts whenever it cannot find a bijective match at the current position. Consequently, a lot of work is redone, especially for larger synthesized textures where this problem is more common. Instead of restarting the search from a random starting point, the last synthesized patch could be undone and resynthesized using a slightly worse fit. This process could be used to go back several steps if necessary, so that only some of the previous work is redone.

A method based on hybrid texture synthesis [28] could also be used for synthesizing the texton mask. The resynthesis step would ensure that textons match properly across patch boundaries. However, hybrid texture synthesis was designed to maximize image quality without concern for speed. Using it to synthesize the texton mask would result in a rendering time measured in minutes instead of seconds.

Other patch-based synthesis methods would not be appropriate for texton mask synthesis. The texton mask consists of a very small number of distinct values.
Any algorithm that blends adjacent patch edges together would be unusable for this application because it would create values that do not exist in the original texton mask. MEBC would restrict the synthesized texton mask to the same set of textons, but it cannot adequately handle patch boundaries where feature edges do not line up well.

The current VQ algorithm for generating the initial texton mask could be replaced by a better image segmentation algorithm. There are many image segmentation algorithms from computer vision that would work [4]. Interactive image segmentation [34] may also be used to achieve higher quality results.

6.1.2 Entropy Minimizing Synthesis

A more sophisticated VQ algorithm may improve the synthesis results. There are many VQ algorithms used for image coding that could be applied to this problem [27]. Additionally, implementing a more sophisticated error metric may improve the quality of the results.

Other methods for deciding which dimensions to use in the ANN search may work better than VQ followed by information gain. Using the covariance matrix is one possibility: it works directly on the data and would not require VQ.

Information gain is used in machine learning to construct decision trees. Using our pixel clustering, a constant number of two level decision trees could be constructed using O(n) space, where n is the number of pixels in the image. Individually these trees would not be able to reduce the entropy sufficiently to produce an accurate result, but the results from the trees could be combined using weighted averaging or a 'voting' system to select the value for the synthesized pixel.

There are several obvious extensions to the entropy minimizing synthesis algorithm. A third dimension can be added to the neighbourhood to allow for video texture synthesis. It is likely that the number of dimensions kept would have to increase for this application.
A hardware implementation using order independent neighbourhoods [43] is also possible.

Animation synthesis is another possible extension: m frames of animation can be thought of as an m x 1 texture. Each pixel would be an n-tuple containing all of the joint angles (or another appropriate representation) for the entire frame. The error texture makes the synthesis algorithm independent of the number of dimensions per pixel. Special care would be required to select the clusters

Appendix A

Effects of High Dimensionality on Nearest Neighbour Search

Real-World problems are often 'high-dimensional', that is, are described by large numbers of dependent variables. Algorithms must be specifically designed to function well in such high-dimensional spaces. —David Rogers

High dimensional spaces have very different properties from low dimensional spaces. As a result our intuition about how objects behave does not scale well to higher dimensions. Common distance metrics and algorithms were developed for low-dimensional spaces and do not always generalize well to high dimensionalities. We will examine some of the properties of high dimensional spaces and their effects on nearest neighbour search.

A.1 Sparsity of Data

To sample a d dimensional space uniformly with n samples per dimension requires n^d samples. For texture synthesis we begin with a constant number of samples (the number of pixels in the source image) and cannot increase this to suit the dimensionality of the search space. Given a constant number of data points m there are only m^(1/d) data points per dimension. For a 64 x 64 texture in a 100 dimensional space this corresponds to fewer than 2 samples per dimension. This implies that the space is poorly sampled and the nearest neighbour may be quite far away from a randomly chosen query point.
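The 64 x 64 example above is easy to check numerically (a quick illustration, not code from the thesis):

```python
# With m data points in d dimensions, uniform coverage gives only
# m ** (1 / d) samples along each axis.
m = 64 * 64        # pixels in a 64 x 64 source texture
d = 100            # a typical feature-vector dimensionality
samples_per_dim = m ** (1 / d)
print(round(samples_per_dim, 3))   # 1.087, far fewer than 2 per dimension
```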
A.2 Volumes of Hyperspheres

The volume of a d dimensional hypersphere of radius R is given by

    V_d(R) = \int_0^R S_d r^{d-1} dr = \frac{S_d R^d}{d}

where S_d is the surface area of a unit hypersphere. The surface area of a unit hypersphere is given by

    S_d = \frac{2 \pi^{d/2}}{\Gamma(d/2)}.

The gamma function is

    \Gamma(m) = 2 \int_0^\infty e^{-r^2} r^{2m-1} \, dr.

Using special forms of \Gamma(d/2) when d is an integer allows S_d to be rewritten as

    S_d = \frac{2^{(d+1)/2} \pi^{(d-1)/2}}{(d-2)!!}    (d odd)
    S_d = \frac{2 \pi^{d/2}}{(d/2 - 1)!}               (d even).

The double factorial n!! is computed as

    n!! = n (n-2) ... 5 * 3 * 1    (n > 0 odd)
    n!! = n (n-2) ... 6 * 4 * 2    (n > 0 even)
    n!! = 1                        (n = -1 or 0).

S_d increases as d increases until d = 7. After this point S_d decreases as d increases. The d that maximizes S_d can be computed numerically as approximately d = 7.257.

Beyer et al. [6] proved the following theorem. If

    \lim_{d \to \infty} \mathrm{Var}\left( \frac{\|X_d\|}{E[\|X_d\|]} \right) = 0

then for every \epsilon > 0

    \lim_{d \to \infty} P[Dmax_d \le (1 + \epsilon) Dmin_d] = 1.

This result implies that, for many distributions, in high dimensions all of the data points will be roughly equidistant from any query point. Hinneburg et al. [19] later proved the following: let the distance function \|\cdot\| be an L_k metric. Then for an arbitrary data distribution

    \lim_{d \to \infty} E\left[ \frac{Dmax^k_d - Dmin^k_d}{d^{1/k - 1/2}} \right] = C_k,

where C_k is a constant depending on k. This shows that the difference between the farthest and closest points to the query point increases at a rate dependent on the L_k norm and independent of the data distribution. For L_k norms where k > 2, Dmax^k_d - Dmin^k_d converges to 0 with increasing dimensionality. It has been shown empirically that L_k norms with lower values of k are better able to discriminate between the nearest and farthest neighbours [2].

The inability to discriminate well between nearest and farthest neighbours has several implications. Branch and bound techniques become less useful as the dimensionality increases. Arya et al.
[3] showed that the optimal ANN search for any Minkowski metric and fixed dimensionality can be found in O(d \lceil 1 + 6d/\epsilon \rceil^d \log n) time. As dimensionality increases, all tree based ANN search techniques can be beaten by a simple linear scan of the data.

The nearest neighbour also becomes less meaningful as dimensionality increases. If a small perturbation of the query point can make the nearest neighbour into the farthest neighbour, then how useful is finding the nearest neighbour?

Appendix B

Synthesis Times

Absolutely nothing should be concluded from these figures except that no conclusion can be drawn from them. —Joseph L. Brothers

All of the results shown here were obtained using a 1.8 GHz Pentium IV CPU with 512 MB of RAM.

    source texture   input size   # of textons   output size   time (s)
    red and green    64 x 64      2              192 x 192     0.88
    green scales     64 x 64      2              192 x 192     0.16
    green scales     64 x 64      2              256 x 256     0.25
    green scales     64 x 64      3              128 x 128     0.69
    green scales     64 x 64      3              192 x 192     23.63
    green scales     64 x 64      3 (manual)     128 x 128     0.14
    green scales     64 x 64      3 (manual)     192 x 192     —
    water            128 x 128    2              192 x 192     0.49
    water            128 x 128    3              192 x 192     31.27
    kpt4             258 x 258    2              192 x 192     1.10
    kpt4             258 x 258    2              256 x 256     1.82

Table B.1: The time taken to synthesize the texton mask grows slowly with increasing input size. The time grows rapidly with increasing output size due to the bijectivity constraint. The synthesis algorithm was unable to synthesize all of the textures at the desired sizes.

    source texture   input size   # of textons   output size   time (s)
    red and green    64 x 64      2              192 x 192     1.33
    green scales     64 x 64      2              256 x 256     3.10
    green scales     64 x 64      3 (manual)     128 x 128     0.33
    green scales     64 x 64      3              128 x 128     0.66
    green scales     64 x 64      3              192 x 192     1.44
    water            128 x 128    2              192 x 192     9.88
    water            128 x 128    3              192 x 192     5.51

Table B.2: The times required to synthesize the colour components of a texture using hash-based search.
The synthesis time increases dramatically when the size of the input texture is increased.

    source texture   # of textons   output size   # of iterations   k   time (s)
    red and green    2              192 x 192     3                 1   1.04
    red and green    2              192 x 192     3                 2   1.14
    red and green    2              192 x 192     3                 3   1.24
    green scales     2              256 x 256     1                 2   0.52
    green scales     2              256 x 256     2                 2   1.13
    green scales     2              256 x 256     3                 2   1.93
    green scales     3              128 x 128     3                 2   0.38
    green scales     3              192 x 192     3                 2   0.87

Table B.3: The times required to synthesize the colour components of a texture using k-coherence order independent search. The time required to synthesize an image is proportional to the number of pixels, k, and the number of iterations used. The time decreases with increasing numbers of texton values.

    source texture   # of pixels   # of means   size of input   neighbourhood calculation time (s)
    green scales     12            64           64 x 64         28.17
    green scales     16            64           64 x 64         35.26
    green scales     20            64           64 x 64         58.30
    green scales     24            64           64 x 64         124.90
    green scales     24            64           64 x 64         895.88
    green scales     20            32           64 x 64         30.91
    green scales     20            16           64 x 64         14.85
    red and green    12            64           64 x 64         53.26
    red and green    16            64           64 x 64         82.85
    red and green    20            64           64 x 64         109.05
    S16_m            20            64           128 x 128       213.14

Table B.4: Time required to select the candidate neighbourhood. All of the results were obtained using four level Gaussian pyramids with neighbourhood sizes of {3 x 3, 1}, {5 x 5, 2}, {7 x 7, 2}, {9 x 9, 2}.
    source texture   # of pixels   # of means   output size   k   synthesis time (s)
    red and green    12            64           192 x 192     2   0.17
    red and green    16            64           192 x 192     2   0.16
    red and green    20            64           192 x 192     2   0.17
    green scales     12            64           192 x 192     2   0.16
    green scales     16            64           192 x 192     2   0.17
    green scales     20            64           192 x 192     2   0.18
    green scales     24            64           192 x 192     2   0.19
    green scales     65            64           192 x 192     2   0.27
    green scales     16            32           192 x 192     2   0.17
    green scales     20            32           192 x 192     2   0.18
    green scales     20            64           256 x 256     2   0.32
    S16_m            16            64           256 x 256     2   0.38
    S16_m            16            64           256 x 256     1   0.20
    S16_m            20            64           256 x 256     1   0.21
    pebbles          16            32           192 x 192     1   0.12
    flowers          24            32           256 x 256     1   0.28
    grass            12            32           256 x 256     1   0.20
    kpt4             20            128          512 x 512     1   0.36

Table B.5: Synthesis times using the candidate neighbourhoods determined by maximizing the information gain. All of the results were obtained using four level Gaussian pyramids with neighbourhood sizes of {3 x 3, 1}, {5 x 5, 2}, {7 x 7, 2}, {9 x 9, 2}.

Bibliography

[1] Charu C. Aggarwal. On the effects of dimensionality reduction on high dimensional similarity search. In Proceedings of the twentieth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, pages 256-266. ACM Press, 2001.

[2] Charu C. Aggarwal, Alexander Hinneburg, and Daniel A. Keim. On the surprising behavior of distance metrics in high dimensional space. Lecture Notes in Computer Science, 1973:420-434, 2001.

[3] Sunil Arya, David M. Mount, Nathan S. Netanyahu, Ruth Silverman, and Angela Y. Wu. An optimal algorithm for approximate nearest neighbor searching in fixed dimensions. Journal of the ACM, 45(6):891-923, 1998.

[4] Tetsuo Asano, Danny Z. Chen, Naoki Katoh, and Takeshi Tokuyama. Polynomial-time solutions to image segmentation. In Proceedings of the seventh annual ACM-SIAM symposium on Discrete algorithms, pages 104-113. Society for Industrial and Applied Mathematics, 1996.

[5] Michael Ashikhmin. Synthesizing natural textures. In Symposium on Interactive 3D Graphics, pages 217-226, 2001.
[6] Kevin Beyer, Jonathan Goldstein, Raghu Ramakrishnan, and Uri Shaft. When is "nearest neighbour" meaningful? Lecture Notes in Computer Science, 1540:217-235, 1999.

[7] Julian E. Besag. On the statistical analysis of dirty pictures. Journal of the Royal Statistical Society, Series B, 48:259-302, 1986.

[8] James F. Blinn and Martin E. Newell. Texture and reflection in computer generated images. Communications of the ACM, pages 542-547, 1976.

[9] Jeremy S. De Bonet. Multiresolution sampling procedure for analysis and synthesis of texture images. In Computer Graphics, pages 361-368. ACM SIGGRAPH, 1997.

[10] Fred L. Bookstein. Principal warps: Thin-plate splines and the decomposition of deformations. IEEE Trans. Pattern Anal. Mach. Intell., 11(6):567-585, 1989.

[11] Rama Chellappa and Anil Jain. Markov Random Fields: Theory and Application. Academic Press, 1993.

[12] Richard C. Dubes and Anil K. Jain. Random field models in image analysis. Journal of Applied Statistics, 16(2):131-164, 1989.

[13] Alexei A. Efros and William T. Freeman. Image quilting for texture synthesis and transfer. In Eugene Fiume, editor, SIGGRAPH 2001, Computer Graphics Proceedings, pages 341-346. ACM Press / ACM SIGGRAPH, 2001.

[14] Alexei A. Efros and Thomas K. Leung. Texture synthesis by non-parametric sampling. In IEEE International Conference on Computer Vision, pages 1033-1038, Corfu, Greece, September 1999.

[15] Stuart Geman and Donald Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6(6):721-741, 1984.

[16] Paul Harrison. A non-hierarchical procedure for re-synthesis of complex textures. In V. Skala, editor, WSCG 2001 Conference Proceedings, 2001.

[17] David J. Heeger and James R. Bergen. Pyramid-based texture analysis/synthesis. In SIGGRAPH, pages 229-238, 1995.

[18] Aaron Hertzmann, Charles E.
Jacobs, Nuria Oliver, Brian Curless, and David H. Salesin. Image analogies. In Eugene Fiume, editor, SIGGRAPH 2001, Computer Graphics Proceedings, pages 327-340. ACM Press / ACM SIGGRAPH, 2001.

[19] Alexander Hinneburg, Charu C. Aggarwal, and Daniel A. Keim. What is the nearest neighbour in high dimensional spaces? In The VLDB Journal, pages 506-515, 2000.

[20] Josef Hoschek, Dieter Lasser, and Larry L. Schumaker. Fundamentals of computer aided geometric design. A. K. Peters, Ltd., 1993.

[21] Michael F. Cohen, Jonathan Shade, Stefan Hiller, and Oliver Deussen. Wang tiles for image and texture generation. In John Hart, editor, SIGGRAPH 2003, Computer Graphics Proceedings, pages 287-294. ACM Press / ACM SIGGRAPH, 2003.

[22] Bela Julesz. Dialogues on Perception. MIT Press, Cambridge, Mass.; London, Eng., 1995.

[23] Vivek Kwatra, Arno Schodl, Irfan Essa, Greg Turk, and Aaron Bobick. Graphcut textures: image and video synthesis using graph cuts. In John Hart, editor, SIGGRAPH 2003, Computer Graphics Proceedings, pages 277-286. ACM Press / ACM SIGGRAPH, 2003.

[24] Lin Liang, Ce Liu, Ying-Qing Xu, Baining Guo, and Heung-Yeung Shum. Real-time texture synthesis by patch-based sampling. ACM Trans. Graph., 20(3):127-150, 2001.

[25] Yosef Linde, Andres Buzo, and Robert M. Gray. An algorithm for vector quantizer design. IEEE Transactions on Communications, 28(1):84-95, 1980.

[26] Karol Myszkowski, Takehiro Tawara, Hiroyuki Akamine, and Hans-Peter Seidel. Perception-guided global illumination solution for animation rendering. In ACM SIGGRAPH, pages 221-230, 2001.

[27] Nasser M. Nasrabadi and Robert A. King. Image coding using vector quantization: a review. IEEE Transactions on Communications, 36(8):957-971, 1988.

[28] Andrew Nealen and Marc Alexa. Hybrid texture synthesis. In Eurographics Symposium on Rendering 2003, pages 97-105, 2003.

[29] Rupert Paget and Dennis Longstaff. Texture synthesis via a nonparametric Markov random field, 1995.

[30] Rupert Paget and Dennis Longstaff.
A nonparametric multiscale Markov random field model for synthesising natural textures. In Fourth International Symposium on Signal Processing and its Applications, volume 2, pages 744-747, 1996.

[31] Emanuel Parzen. On the estimation of a probability density function and the mode. Annals of Mathematical Statistics, 33:1065-1076, 1962.

[32] Ken Perlin. An image synthesizer. In Proceedings of the 12th annual conference on computer graphics and interactive techniques, volume 19(3), pages 287-296, July 1985.

[33] Kris Popat and Rosalind Picard. Novel cluster-based probability model for texture synthesis, classification, and compression, 1993.

[34] Carsten Rother, Vladimir Kolmogorov, and Andrew Blake. "GrabCut": interactive foreground extraction using iterated graph cuts. ACM Trans. Graph., 23(3):309-314, 2004.

[35] Heung-Yeung Shum. In search of textons. In Proceedings of the Shape Modeling International 2003, page 197. IEEE Computer Society, 2003.

[36] Xin Tong, Jingdan Zhang, Ligang Liu, Xi Wang, Baining Guo, and Heung-Yeung Shum. Synthesis of bidirectional texture functions on arbitrary surfaces. In Proceedings of the 29th annual conference on Computer graphics and interactive techniques, pages 665-672. ACM Press, 2002.

[37] Minh Tran and Amitava Datta. Synthesising textures using variable neighbourhood searching. In Proc. 7th International Conference on Digital Image Computing: Techniques and Applications, pages 643-652. CSIRO Publishing, 2003.

[38] Greg Turk. Texture synthesis on surfaces. In Eugene Fiume, editor, SIGGRAPH 2001, Computer Graphics Proceedings, pages 347-354. ACM Press / ACM SIGGRAPH, 2001.

[39] Marcelo Walter, Alain Fournier, and Mark Reimers. Clonal mosaic model for the synthesis of mammalian coat pattern. In Graphics Interface '98, 1998.

[40] Roger Weber, Hans-Jorg Schek, and Stephen Blott. A quantitative analysis and performance study for similarity-search methods in high-dimensional spaces. In Proc. 24th Int. Conf.
Very Large Data Bases, VLDB, pages 194-205, 1998.

[41] Li-Yi Wei. Tile-based texture mapping on graphics hardware. In Graphics Hardware 2004, pages 55-63, August 2004.

[42] Li-Yi Wei and Marc Levoy. Fast texture synthesis using tree-structured vector quantization. In Kurt Akeley, editor, SIGGRAPH 2000, Computer Graphics Proceedings, pages 479-488. ACM Press / ACM SIGGRAPH / Addison Wesley Longman, 2000.

[43] Li-Yi Wei. Texture synthesis by fixed neighborhood searching. PhD thesis, Stanford University, 2002.

[44] Qing Wu and Yizhou Yu. Feature matching and deformation for texture synthesis. In SIGGRAPH 2004, Computer Graphics Proceedings. ACM Press / ACM SIGGRAPH, 2004.

[45] Ying-Qing Xu, Baining Guo, and Heung-Yeung Shum. Chaos mosaic: Fast and memory efficient texture synthesis. Microsoft Research Technical Report MSR-TR-2000-32, April 2000.

[46] Steve Zelinka and Michael Garland. Towards real-time texture synthesis with the jump map. In Paul Debevec and Simon Gibson, editors, Proceedings of the Thirteenth Eurographics Workshop on Rendering Techniques, pages 99-104. Eurographics Association, 2002.

[47] Jingdan Zhang, Kun Zhou, Luiz Velho, Baining Guo, and Heung-Yeung Shum. Synthesis of progressively variant textures on arbitrary surfaces. In John Hart, editor, SIGGRAPH 2003, Computer Graphics Proceedings, pages 295-302. ACM Press / ACM SIGGRAPH, 2003.

[48] Song Chun Zhu, Cheng en Guo, Ying Nian Wu, and Yizhou Wang. What are textons? In Proceedings of the 7th European Conference on Computer Vision - Part IV, pages 793-807. Springer-Verlag, 2002.
Title: Low dimensional search for efficient texture synthesis
Creator: Kimberley, Fred
Date Issued: 2004
Degree: Master of Science - MSc
Program: Computer Science
Degree Grantor: University of British Columbia
DOI: 10.14288/1.0051742
URI: http://hdl.handle.net/2429/15561