UBC Theses and Dissertations
Filtering volumetric data Buchanan, John W. 1994

FILTERING VOLUMETRIC DATA

By
John W. Buchanan
B.Sc., University of Windsor, 1986
M.Sc., University of Toronto, 1988

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES (DEPARTMENT OF COMPUTER SCIENCE)

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
January 1994
© John W. Buchanan, 1994

In presenting this thesis in partial fulfillment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

(Department of Computer Science)
The University of British Columbia
2075 Wesbrook Place
Vancouver, Canada
V6T 1Z1

Date:

Abstract

The display of volumetric data is a problem of increasing importance. The display of this data is being studied in texture mapping and volume rendering applications. The goal of texture mapping is to add variation to the surfaces that is not caused by the geometric models of the objects. The goal of volume rendering is to display the data so that the study of this data is made easier.

Three-dimensional texture mapping requires the use of filtering not only to reduce aliasing artifacts but also to compute the texture value which is to be used for the display.
Study of two-dimensional texture map filtering techniques led to a number of techniques which were extended to three dimensions: namely clamping, elliptical weighted average (EWA) filters, and a pyramidal scheme known as NIL maps (NIL stands for nodus in largo, the rough translation of which is "knot large").

The use of three-dimensional textures is not a straightforward extension of the use of two-dimensional textures. Where two-dimensional textures are usually discrete arrays of texture samples which are applied to the surface of objects, three-dimensional textures are usually procedural textures which can be applied on the surface of an object, throughout the object, or in volumes near the object. We studied the three-dimensional extensions of clamping, EWA filters, and NIL maps for filtering these textures. In addition to these three techniques a direct evaluation technique based on quadrature methods is presented. The performance of these four techniques is compared using a variety of criteria, and recommendations are made regarding their use.

There are several techniques for volume rendering which can be formulated as filtering operations. By altering these display filters, different views of the data can be generated. We modified the NIL map filtering technique for use as a filter-prototyping tool. This extension incorporated transfer functions into the NIL map technique. This allows the manipulation of the transfer functions without requiring the re-computation of the NIL maps.
The use of NIL maps as a filter-prototyping tool is illustrated with a series of examples.

Contents

Abstract ii
Contents iv
List of Tables vii
List of Images viii
List of Figures x
Acknowledgments xi
Dedication xii

1 Introduction 1
1-1 Overview 1
1-2 Three-dimensional textures 6
1-3 Volume rendering 7
1-3.1 Transfer functions 9
1-3.2 Fourier-based methods 10
1-4 Filtering 11
1-4.1 Colour textures 16
1-4.2 Filtering semantics 17
1-4.3 Reconstruction filters 17
1-4.4 Filter approximation requirements 18
1-5 Thesis Goals 19
1-6 A Road-map to this thesis 20

2 Definitions 23

3 Related work 30
3-1 Two-dimensional textures 30
3-2 Three-dimensional textures 40
3-3 Volume rendering 43
3-4 Wrapup 46

4 Filtering techniques 49
4-1 Clamping 49
4-2 Direct evaluation 52
4-3 Elliptical Weighted Average filtering 56
4-4 NIL maps 59
4-4.1 Normalization 66
4-4.2 Weighting the levels of the NIL maps 67
4-4.3 Transfer functions 69
4-4.4 Procedural textures 71
4-4.5 Cosine textures 72
4-4.6 Cosine textures with NIL maps 73
4-4.7 Three dimensions 78
4-4.8 Trigger points 79
4-5 Wrapup 80

5 Comparison and application of the techniques 81
5-1 Three-dimensional texture filter comparison 81
5-2 Filter technique evaluation criteria 82
5-3 Clamping 85
5-4 Direct evaluation 90
5-5 EWA filters 95
5-6 NIL maps 98
5-7 Wrapup 105

6 Filters for volume rendering 109
6-0.1 Data set for examples 110
6-1 Conventional display 111
6-2 Slicing the data 118
6-3 Maximum revisited 120
6-4 Comments on NIL maps for volume rendering 123
6-5 Wrapup 128

7 Conclusions 129
7-0.1 Three-dimensional texture map filtering 129
7-0.2 Filters for volume rendering 131
7-0.3 NIL maps 132
7-1 Contributions 133
7-2 Future work 134

Notation 137
Glossary 139
Bibliography 141

A NIL innards 150
A-1 Speeding up the computation of Cj, for discrete textures 151
A-1.1 Sample and hold 151
A-1.2 Trilinear interpolation 157
A-1.3 Three dimensions 159
A-1.4 Negative levels for tri-linear interpolation 164
A-2 Non-symmetric cases 165
A-2.1 Non-integral powers of 2 in one dimension 165
A-2.2 Rectangular three-dimensional textures 168

B High level view of NIL code 170
C Motion blur filter 173

List of Tables

5.1 MSE and SNR of clamping methods 89
5.2 Comparison of run times for filtering techniques. The parenthesized numbers in the fourth column indicate which technique was used to compute the MSE and SNR 91
5.3 Parameters for the displays of the marble block in Plate 5.6 96
5.4 EWA times and performances 96
5.5 NIL map pre-processing times for sample cosine texture (4 x 4 x 4) 97
5.6 NIL times and comparisons 100
5.7 Filters approximated 106
5.8 Texture class allowed 107
5.9 Pre-processing cost 107
5.10 Evaluation cost 108
5.11 Visual evaluation 108
6.1 NIL map volume rendering timings (min) 115
6.2 NIL map execution time breakdown 126

List of Images

1.1 Three-dimensional (left) and two-dimensional texture (right) on a unit sphere.
These procedural textures have similar definitions in two and three space 2
1.2 Geometric objects approximating an iso-surface 3
1.3 Direct display of volumetric data using Sabella's method 4
4.1 A Gaussian centered in texture space is not centered in screen space 59
4.2 Texture defined by cosine series with I = 4, J = 4, and K = 4 74
5.1 Point sampled cosine texture 84
5.2 Step function used for clamping 86
5.3 Quadratic function used for clamping 86
5.4 Direct evaluation of Box filter 92
5.5 Direct evaluation of Gaussian filter 92
5.6 Marble blocks, with box, Bartlett, and volume filters 94
5.7 EWA filter. Max samples = 3 97
5.8 EWA filter. Max samples = 11 97
5.9 NIL map filter. M=1, tol=5 99
5.10 NIL map filter. M=4, tol=5 99
5.11 NIL map filter. M=1, tol=2 99
5.12 NIL map filter. M=4, tol=2 99
5.13 Texture for motion blur example 104
5.14 Motion blur caused by a rotation of π/24 104
5.15 Motion blur caused by a rotation of π/12 104
5.16 Motion blur caused by a rotation of π/6 104
6.1 Side view of data set displayed using Sabella's technique 113
6.2 Front view of data set displayed using Sabella's technique 114
6.3 Constant patch approximation using tolerance = 2, and trigger = 4, 8, 16, 32. The images are ordered left to right and top to bottom 115
6.4 Linear patch approximation using tolerance = 2, and trigger = 4, 8, 16, 32. The images are ordered left to right and top to bottom 116
6.5 Quadratic patch approximation using the 2 x 2 x 2 level of the NIL map 117
6.6 Linear patch approximation of general filter for 128 x 128 x 21 data set. The number of trigger points is 512. The tolerance is 3 117
6.7 Direct rendering with 2, 4, 8, 16 samples per ray 118
6.8 Thick slice of the data 119
6.9 Thin slice of the data 120
6.10 Search and display of volumetric data 121
6.11 Sample and filter display. Po = 0.14, side view 123
6.12 Sample and filter display. Po = 0.14, front view 124
6.13 Sample and filter display. Po = 0.18, side view 124
6.14 Sample and filter display.
Po = 0.18, front view 125
6.15 Wind-passage generated using Marching Cubes technique with threshold = 0.18 125

List of Figures

1.1 The general graphics pipeline augmented with a texture pipeline 5
1.2 Transfer functions 8
1.3 An illustration of aliasing resulting from bad sampling 12
1.4 Non-uniform sampling in computer graphics 13
1.5 Normal distribution 16
1.6 Filtered normal distribution 16
1.7 Box reconstruction filter 18
1.8 Linear reconstruction filter 18
2.9 Pyramidal representation of two-dimensional data 28
3.10 Texture distortions due to perspective projection, transformation, and occlusion 33
4.11 Clamping function C = max(0, 1 - (f/fmax)^2) when fmax = 3.2 51
4.12 Box enclosing projection of circular pixel onto tangent plane 53
4.13 Box enclosing projection of circular pixel onto plane parallel to viewing plane 54
4.14 Elliptical weighted average filter evaluation 56
4.15 Figure showing that the center of a circle does not project to the center of the ellipse 57
4.16 Error associated with centering a Gaussian. This ratio was computed on the image used in the next chapter 58
4.17 Pyramid data structure for a one-dimensional NIL map 62
4.18 Filter that spans the whole texture 63
4.19 Generation of NIL map hierarchy 64
4.20 Narrow filters result in different approximating hierarchies 65
4.21 Narrow filter showing the need for negative levels 66
4.22 Approximation hierarchy at two levels 68
6.23 Transfer functions used in NIL map examples 112
A.24 Building NIL maps with samples where n ≠ 2^m 166

Acknowledgments

Many people helped.

Alain, from a guy with a Texan accent to a mentor and a friend, I enjoyed the walk. Mary Jane, much more than a partner, a friend. Pierre, never did miss an opportunity to argue. Jeremy, many questions, few answers. Roy, always the right question. Ruhamah, always ready for a game of tag. Michal, charcoal eyes. George and Anne, family now. Michal, Teresa, Laura, and Anna, brats one and all.

Additional thanks to: Bob, for delivering the thesis to many places.
Bob and Victoria, for taking me to a tragedy before my defense. Grace, for all her work. Peter, for his salsa recipe. The members of the imager lab, for making working in the lab so interesting.

The work reported in this thesis was strongly supported by my committee. I am very grateful to them for their support and encouragement.

Supervisory committee
Kellogg Booth, Computer Science
David Forsey, Computer Science
Alain Fournier (research supervisor), Computer Science
Maria Klawe, Computer Science
David Lowe, Computer Science
Gunther Schrack, Electrical Engineering

University examiners
William Casselman, Mathematics
James Little, Computer Science

External examiner
Jane Wilhelms, University of California, Santa Cruz

This work was partially supported by grants from the Natural Sciences and Engineering Research Council (NSERC) of Canada.

Dedication

Alfred John Buchanan
20 Feb 1935 - 14 Mar 1977

It was one of those gifts that we loved so much. Dad had brought us some scraps from the print shop. This time it was a whole stack of cards measuring 5cm by 8cm. David and I had decided to make a periscope; we knew we needed a tube and mirrors at each end. After making the tube 5cm by 5cm we wanted to buy the mirrors for the ends. But what size should these mirrors be? We did not have a clue, so off we went to see Dad. After carefully listening to our dilemma he pulled out a pencil, wrote some numbers on paper, and told us that we wanted 5cm by 7cm mirrors. You know what? They worked perfectly.

This was magic, pure and simple, and I wanted it. How could someone get the measurements of a piece of glass off a piece of paper? In that moment you taught me more than you will ever know. Thanks for the magic, Dad.

Chapter 1

Introduction

A man who carries a cat by the tail learns something he can learn in no other way.
Mark Twain

1-1 Overview

The objects we encounter in everyday life are usually far more complex than the objects that we can model and display using computer graphics.
A great deal of this complexity is due to the detail of texture found on the surfaces of these objects. These textures are usually small enough that modeling them with computer graphics primitives is not feasible. Texture mapping is an answer to the problem of incorporating this kind of texture into the images that we generate. Initially texture mapping was restricted to two-dimensional textures. By digitizing the surface characteristics of different objects in the real world we were able to wrap these textures onto our objects. Two-dimensional texture mapping was not a sufficient tool for modeling many of the textures that we encounter, because textures are defined throughout the material from which the object is made. Three-dimensional textures were introduced as a tool with which to simulate these solid textures. In Plate 1.1 we see examples of two-dimensional and three-dimensional textures applied to a sphere.

The study of three-dimensional textures has so far concentrated mainly on the modeling of textures. The resulting procedural models have provided us with a rich class of three-dimensional textures.

Plate 1.1: Three-dimensional (left) and two-dimensional texture (right) on a unit sphere. These procedural textures have similar definitions in two and three space.

In most cases the incorporation of a three-dimensional texture map into display (rendering) systems is a simple process that typically requires less work than two-dimensional textures. These textures are then computed by passing the shading parameters for a surface point to a procedural texture engine. Because the textures are procedurally defined they can be made to alter any of the variables in the shading equation.
Even though the original three-dimensional textures were applied to the surfaces of objects, there have been a number of systems that have extended three-dimensional textures to include textures that are defined near the surfaces of the objects. When these volumetric textures are used, the display method must be adapted to compute the texture throughout the region in which it is defined. In a sense we can view these regions of texture as small volumetric data sets.

Volume rendering is an area of study that has received much attention of late. The research in this area is driven by a variety of applications that generate complex three-dimensional data sets. Examples of sources for data are medical imaging (CAT scans, MRI scans), geology (seismological surveys), and simulations (fluid mechanics, stress analysis). Methods for displaying volumetric data may be categorized into two classes, surface extraction and direct display. Surface extraction methods use some property of the data to generate traditional computer graphics primitives, such as triangles or quadrilaterals. These objects are then displayed using traditional computer graphics techniques. Direct display techniques attempt to display the data without using intermediate geometric primitives.

Plate 1.2: Geometric objects representing a volumetric data set. This data set is a 256 x 256 x 21 8-bit MRI scan of the region between the collar bone and the bridge of the nose. The 230,000 triangles used to generate this image were constructed using the marching cubes technique. The iso-surface was generated using a low threshold of 0.31 and a high threshold of 0.60.

The techniques for the direct display of the data can be split into two groups, pixel-based and voxel-based. Pixel-based volume renderers calculate the image pixel by pixel, usually by traveling through the data along the line of sight through each pixel. In most cases this is approximated using a ray casting technique that computes the shading along the ray. Voxel-based volume rendering techniques compute the image by projecting individual voxels onto the screen and updating the affected pixels. If occlusion is important then the order in which the voxels are processed matters: they must be processed relative to the viewing direction.

Plate 1.3: Direct display of volumetric data. This display of the MRI data set was constructed using Sabella's volume rendering technique.

The display of volumetric data, whether in texture mapping or volume rendering, will continue to be an interesting area of study. Texture mapping has been shown to be very useful as part of the computer graphics pipeline. Increasing computational power makes the study of the display of larger and more complex data sets possible.

The computer graphics pipeline [Fole90, Newm79] has proven useful as a conceptual framework. Even though texture mapping is a subset of the final step of the pipeline (see Figure 1.1), we can think of the texture mapping process as itself being a pipeline. The modeling stage consists of developing the procedural models. The transformation stage is typically a set of simple transforms to map the object into the texture space. The display stage is where the texture is evaluated or sampled. Most of the research in three-dimensional texture mapping has concentrated on the modeling of textures.

Figure 1.1: The general graphics pipeline (Modeling, Geometry, Display) augmented with a texture pipeline.

The process of displaying volumetric data also fits into a pipeline model. Given a set of data we make a choice of which model will be used for the display.
The transformation stage allows the user to set the viewing parameters to view a particular aspect of the data. It is only natural that the first two steps be simple, because the modeling required for the display of a data set consists of choosing a display model and the viewing transformations. This typically results in a view of the data set where the data set almost fills the entire viewing screen.

Whenever a signal is sampled there is the possibility that this sampling will be inadequate. When the sampling of the signal is inadequate, a variety of effects known as aliasing can appear in the reconstructed signal. This problem is exacerbated in computer graphics because we are required to sample objects and their associated textures in a non-uniform manner. The solution to this problem is well known, and requires the filtering of the textures to remove the frequencies that are causing the problems. The cost of evaluating these filtered samples is high. This high cost stimulated the search for efficient approximations to two-dimensional filters. An overview of the two-dimensional texture filtering literature is presented in Chapter 3.

1-2 Three-dimensional textures

Texture mapping has allowed us to generate images of rich visual complexity [Catm74]. Initially two-dimensional textures were used to modulate the surface characteristics of the objects. This technique was soon extended to allow the perturbation of the normals on the surface [Blin78a], and to modulate the transparency of surfaces [Gard84]. Using two-dimensional texture maps, textured objects were easy to model and display. A problem with this technique was that sculpted objects were difficult to display. Three-dimensional textures [Gard84, Gard85, Grin84, Perl85, Peac85, Four86] were introduced partly to address this weakness of two-dimensional textures. Instead of requiring a complex mapping from the surface of the object to the texture space, a simpler set of affine transforms proved sufficient in most cases.
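As a concrete sketch of this idea, the following maps a surface point into texture space with a single affine transform and evaluates a simple cosine-based solid texture there. The function names, the particular transform, and the texture itself are illustrative assumptions, not code from the thesis.

```python
import numpy as np

def world_to_texture(p, scale=2.0, offset=(0.5, 0.5, 0.5)):
    """Hypothetical affine map from object/world space into texture space."""
    return scale * np.asarray(p, dtype=float) + np.asarray(offset)

def solid_texture(p):
    """A toy procedural solid texture: a single cosine term in each axis.
    Any point (x, y, z) indexes directly into the solid texture, so the
    same definition serves points on the surface, inside the object, or
    in a volume near the surface -- no (u, v) parametrization is needed."""
    x, y, z = world_to_texture(p)
    return 0.5 + 0.5 * np.cos(2 * np.pi * x) * np.cos(2 * np.pi * y) * np.cos(2 * np.pi * z)
```

Because the texture is a function rather than a stored array, it can be evaluated at whatever points the renderer needs.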
Over this three-dimensional texture space a procedural model of the texture was defined. This model of the texture was then sampled at the surface points of the object being displayed. Because these textures were procedurally defined, any of the shading parameters could be altered by this procedural texture engine. This is in contrast to two-dimensional texture mapping, where most of the textures are discrete. The relative difficulty of acquiring three-dimensional data for the optical characteristics of solid materials and the cost of storing sufficiently high resolution data may account for this choice.

Early work on three-dimensional textures concentrated on texturing the surface of the object. More recent three-dimensional texture mapping systems have placed texture in a neighbourhood near the surface of the object [Perl89, Kaji89, Lewi89, Eber90]. These textures model objects with high frequency or fuzzy surfaces, such as fur. These textures require the computation of the texture throughout the texture region instead of at a single point.

1-3 Volume rendering

Volumetric data display is proving to be a useful tool in a variety of areas. In medical imaging it has allowed better diagnoses to be made. In geology it has given us a better understanding of underground structure. The study of methods for displaying volumetric data has primarily concentrated on scalar volumetric data. The methods developed to display this data can be divided into two classes. The first set of methods displays the volumetric data by first fitting geometric objects to the data [Artz81a, Artz81b, Artz80, Artz79a, Artz79b, Chen85, Herm83, Herm82, Herm80, Wilh90a, Wilh90b, Lore87, Shir90, Galy91, Clin88]. These geometric objects or primitives are then passed to a traditional display system. We will refer to this set of techniques as geometric volume rendering. The second set of methods displays the data without fitting geometric objects to the data.
This set of methods assumes that the data represents density samples taken throughout the volume. Displays of this volume are then generated by simulating the transport of light through the volume. These techniques have been called direct volume rendering. In the ensuing discussion we will refer to these techniques simply as volume rendering, because geometric volume rendering techniques lie outside the scope of this manuscript.

Figure 1.2: Density distribution, probability functions, and transfer functions for CAT scans. Panels: the original histogram, the constituents' distributions (air, soft tissue, bone), and the material assignments. The linear approximations on the bottom are the transfer functions used by Drebin et al. [Dreb88].

Two approaches to volume rendering are being studied, voxel-based and pixel-based. Voxel-based techniques [West90, Wilh91, Laur91, Dreb88, Upso88, Max90] calculate a volume of influence around the voxel and then project this volume onto the screen; this process is also known as voxel splatting. The pixels that lie in the projection of the voxel are updated as required. If occlusion is a desired property of the display process then the processing of the voxels must be done in an order determined by the viewing direction. The direction of this processing adds another label to these techniques; thus we have back-to-front and front-to-back voxel-based volume rendering. The manner in which a voxel is displayed depends on the shape of the voxel, the distribution of density through the voxel, and the intent of the method.

Pixel-based volume rendering techniques [Levo90a, Levo90d, Levo90b, Levo90c, Sabe88, Levo88, Upso88, Novi90] typically cast a ray from the eye through the pixel into the scene, computing the integral of the densities/intensities along the ray.
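This per-ray integral is commonly approximated by compositing discrete samples from the eye into the volume. The following is a minimal sketch under a simple emission-absorption model; the function names and the caller-supplied colour and opacity mappings are assumptions for illustration, not any particular published renderer.

```python
def cast_ray(densities, colour_of, alpha_of, threshold=0.99):
    """Front-to-back compositing of density samples along one ray.
    densities: samples ordered from the eye into the volume.
    colour_of, alpha_of: map a density sample to colour and opacity."""
    C, A = 0.0, 0.0                      # accumulated colour and opacity
    for d in densities:
        a = alpha_of(d)
        C += (1.0 - A) * a * colour_of(d)
        A += (1.0 - A) * a
        if A >= threshold:               # ray is effectively opaque: stop early
            break
    return C, A
```

The early exit illustrates why pixel-based methods can stop a ray once an accumulated quantity passes a pre-set threshold.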
The resulting intensities are used to determine the colour of the pixel.

Voxel-based techniques have the advantage that they do not need to have all the data in main memory at the same time, but they have the disadvantage that they may have to process all of the data. Pixel-based techniques have the advantage that they can finish processing along a line of sight when one of the calculated quantities passes a pre-set threshold; however, these techniques require that all of the data, or a major portion of it [Novi90], be in main memory. One could argue that as memory sizes and compute power increase this will not be a problem, but it is probably the case that no matter how large memory becomes, there will always be a larger data set that needs to be studied.

1-3.1 Transfer functions

Different aspects of the data can be highlighted by concurrently displaying separate transformations of the data. This separation is often done by means of procedurally defined transfer functions. These functions will either map a scalar data set into another scalar data set or into a multi-dimensional data set. By carefully tailoring these transfer functions, different properties of the data can be highlighted. An example of transfer functions is presented by Drebin et al. [Dreb88], where transfer functions were used to segment the data into three data sets corresponding to fatty tissue, muscle, and bone. These transfer functions and the related density distribution curves are presented in Figure 1.2.

To date most of the volume rendering systems presented point sample the data. Pixel-based techniques point sample each pixel and then use sample points along the ray generated from the pixel. Voxel-based techniques generate an approximation of the projection of the voxel onto the screen. The pixels that lie in this projection of the voxel are then updated according to the shading/display model being used.
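The transfer functions of Section 1-3.1 can be prototyped as piecewise-linear tables, in the spirit of the linear approximations in Figure 1.2. A small sketch; the breakpoints below are invented purely for illustration and do not come from the thesis or from [Dreb88].

```python
import numpy as np

def transfer(density, xs, ys):
    """Piecewise-linear transfer function: maps a scalar density to a
    material weight (or opacity) via breakpoints (xs, ys)."""
    return float(np.interp(density, xs, ys))

# Hypothetical 'bone' channel: no weight below density 0.4, full weight above 0.7.
bone = lambda d: transfer(d, [0.0, 0.4, 0.7, 1.0], [0.0, 0.0, 1.0, 1.0])
```

Because the table is data rather than code, the function can be re-tailored to highlight a different property of the data without touching the renderer.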
A few systems [Upso88, West90] deal with approximations to a three-dimensional integral over the volume of the voxel that is cast onto the pixel. Unfortunately these systems rely on point sampling to generate their approximations to the integrals. In this thesis we present a study of the filtering requirements of volume rendering and propose the use of three-dimensional NIL maps for approximating these filters.

1-3.2 Fourier-based methods

Levoy [Levo92] recently presented a technique for volume rendering that uses the Fourier projection-slice theorem. The data is first transformed into the Fourier or frequency domain. Given the orientation of the desired display, a corresponding slice of the Fourier transform is sampled. The inverse two-dimensional Fourier transform of this slice yields an orthogonal display of the data. Most volume rendering techniques require the processing of O(n^3) voxels per image. In contrast, the Fourier-based technique requires the processing of O(n^2) pixels for the construction of the sample slice. The cost of transforming this slice back from the spectral domain is O(n^2 log n). The pre-processing step costs O(n^3 log n). This means that the total cost per view is O(n^2 log n). This technique works well but allows little control of the display model, since it is fixed.

In order to compute the slice an interpolating filter of width W is used. The cost of evaluating each element of the slice is W^3. This implies that the cost of evaluating the spectrum slice is actually O(W^3 n^2). Work in this area continues with encouraging results [Tots93, Malz93].

1-4 Filtering

Given a signal T(t) that is defined over some interval [a, b], we wish to store a sampled or digitized version of the signal in such a way that we can later reconstruct the signal. We know from signal processing theory that the frequency of the sampling grid used must be greater than twice the highest frequency in the signal¹; this is known as the Nyquist frequency.
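The Nyquist limit can be checked numerically: a 5 Hz sine sampled at only 6 Hz produces exactly the same samples as a 1 Hz sine of opposite sign, so the two signals cannot be distinguished after sampling. The particular frequencies here are arbitrary choices for illustration.

```python
import numpy as np

fs = 6.0                                     # sampling rate, below the Nyquist rate for 5 Hz
t = np.arange(32) / fs                       # uniform sample times
high = np.sin(2 * np.pi * 5.0 * t)           # 5 Hz signal, undersampled
alias = np.sin(2 * np.pi * (5.0 - fs) * t)   # its -1 Hz alias
assert np.allclose(high, alias)              # the sample sets are identical
```

Any reconstruction filter applied to these samples will therefore produce the low-frequency alias, not the original 5 Hz signal.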
In Figure 1.3 we present an illustration of the problem that occurs when a bad sample set is used to reconstruct a signal. The circles depict a sample set with a frequency lower than the Nyquist frequency, and the squares a sample set with a frequency that is higher than the Nyquist frequency. The reconstructions that result from applying the ideal reconstruction filter to these sample sets are illustrated underneath.

¹For a good overview of signal theory see [Rose76].

Figure 1.3: Two reconstructions of a sampled signal. The first reconstruction (R1) results from using an ideal (sinc) reconstruction filter on the samples defined by the circles. The second reconstruction (R2) is generated from the samples defined by the squares. Because the first sample set sampled below the Nyquist frequency it is impossible to correctly reconstruct the original signal.

The erroneous signal that results from the first sample set is called an aliased signal. When we are sampling signals with more complex frequency spectra we must either sample at or above the Nyquist frequency or remove the high frequency components that are causing the aliasing. This latter process is called filtering.

These sampling problems are compounded in computer graphics since we must often sample our signal using non-uniform sample grids. A simple example of this situation is presented in Figure 1.4. In this figure we see a view of a square that is being displayed using a perspective transform. Because the picture elements or pixels on the viewing plane are regularly spaced, we sample the square based on this grid. On the left of the illustration we see the texture displayed with the sample points highlighted on it. We notice that these samples are not regularly distributed throughout texture space even though the samples are uniformly distributed on the viewing plane.
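The widening of the texture-space spacing can be reproduced with a toy perspective model in which a screen coordinate s (shrinking toward the horizon) sees the ground plane at texture position 1/s. The camera setup is an assumption made purely for illustration.

```python
import numpy as np

# Uniformly spaced screen samples, from the bottom of the screen (s = 1)
# toward the horizon (s -> 0); each sees the ground plane at distance 1/s.
s = np.linspace(1.0, 0.1, 10)     # uniform spacing on the viewing plane
u = 1.0 / s                       # corresponding texture-space positions
gaps = np.diff(u)                 # texture-space distance between samples
```

Even though the screen samples are evenly spaced, the texture-space gaps grow monotonically toward the horizon, which is exactly the non-uniform sampling shown in Figure 1.4.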
If the square is substituted with a large plane, the spacing between the texture sample points can become arbitrarily large near the horizon.

Figure 1.4: Non-uniform sampling in computer graphics. The uniform grid of samples generated on the screen does not generate a uniform grid of samples on the texture.

For a given texture the resulting sampling frequency may be lower than the texture's Nyquist frequency. We may choose to take more samples to ensure that the Nyquist frequency is satisfied. By averaging these samples a representative value can be computed. In order to compute this average we must determine an area of the texture space from which the samples are to be taken. This is usually accomplished by using some profile or perimeter of a pixel, such as a circle. The texture elements or texels within this projected pixel are then averaged to generate the filtered sample.

Once we have developed this filtering process we can start to consider more sophisticated filters. The Bartlett filter and the Gaussian filter have proven useful in the computer graphics literature. These filters have been used for anti-aliasing. The use of filters in computer graphics is not restricted to anti-aliasing, however. By tailoring different filters we can compute different effects, such as motion blur and depth of field. In both of these situations the filters are not removing aliasing frequencies but are being used to compute the value of the texture. In the case of motion blur the filter is deformed, or spread out, along the path in which the surface point is traveling. By averaging the texture under this filter we can produce motion blur effects. In a similar way the blurriness of the filter can be defined as a function of the distance from the focal plane. In this way a depth of field effect can be computed.
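A motion-blur filter of this kind can be sketched by box-averaging the texture along the path the surface point sweeps during the exposure. The function name, the straight-line path, and the sample count are all assumptions for illustration.

```python
import numpy as np

def motion_blurred_sample(texture, p0, p1, n=16):
    """Average a texture along the path swept from p0 to p1 during the
    exposure: a box filter stretched along the direction of motion."""
    ts = (np.arange(n) + 0.5) / n            # midpoints of n sub-intervals
    pts = [(1 - t) * np.asarray(p0, float) + t * np.asarray(p1, float) for t in ts]
    return float(np.mean([texture(p) for p in pts]))
```

A depth-of-field effect would follow the same pattern, except that the extent of the averaging region would grow with distance from the focal plane rather than with the motion path.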
Because the cost of directly computing the convolution of these filters is high, research has focused on finding reasonable approximations to these filters.

In order to apply a two-dimensional texture map to an object in a meaningful way we must develop a u, v parametrization of the surface. This parametrization is then used to index into the two-dimensional texture. Constructing these parametrizations becomes increasingly difficult as the complexity of the object increases. As the complexity of the u, v parametrization increases, so does the complexity of the filter shape required for anti-aliasing. Fortunately there has been some work done on constant-cost approximations to these filters [Will83, Four88a, Gots93]. One of the most successful of these approximations uses pyramidal representations of the data to approximate the filter [Four88a]. The constant cost is a result of approximating the filter at different levels of the pyramid, thus requiring a fixed number of samples to be taken from the data.

Suppose we wish to texture an object in such a way that it appears as though it were carved out of some solid material. Three-dimensional textures provide adequate tools for this task, since the coordinates on the surface of an object can be used to index into the solid texture. This carved-out-of effect is difficult if not impossible to accomplish using two-dimensional textures.

With two-dimensional textures much of the complication of the filter's shape was due to the complex parametrizations used. With three-dimensional texture maps we do not have to concern ourselves with this aspect, since the transformations used to map the texture onto the object are not that complex. Instead, we now have to worry about the model of the material that is used for the object. If the material is opaque the texture is only visible on the surface, and if the material is translucent then the texture is important in a region or volume near the surface of the object.
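The pyramidal idea behind such constant-cost schemes can be sketched in one dimension: pre-average the texture into levels, then answer a filtered query by reading from the level whose cells are about as wide as the filter. This is a simplified sketch in the spirit of mip-mapping, not the NIL map algorithm itself, and the names are invented.

```python
import numpy as np

def build_pyramid(tex):
    """Each level halves the resolution by averaging pairs of texels
    (a box prefilter); the texture length is assumed a power of two."""
    levels = [np.asarray(tex, dtype=float)]
    while len(levels[-1]) > 1:
        a = levels[-1]
        levels.append(0.5 * (a[0::2] + a[1::2]))
    return levels

def filtered_lookup(levels, x, width):
    """Approximate a box filter of the given width with a single read,
    taken from the level whose cells are about that wide."""
    level = min(int(round(np.log2(max(width, 1)))), len(levels) - 1)
    return levels[level][int(x) >> level]

pyr = build_pyramid(np.arange(8, dtype=float))   # 8-texel test texture
```

Whatever the filter width, the lookup touches a fixed number of pre-averaged values, which is the source of the constant cost.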
This means that if we are attempting to filter the texture on an object that is made of an opaque material we must tailor the filter to concentrate on the texture at the surface. On the other hand, when we wish to filter textures on objects that have translucent properties we must use a filter that processes a volume near the surface of the object.²

Current implementations of three-dimensional textures allow any of the variables of the shading equation to be altered. This raises a number of filtering issues. Consider the case where it is not the colour of the object that is altered, but the normal of the surface (bump mapping). The bumpy or rough surface that is produced cannot be filtered using the traditional filtering approach since the normal is not a linear component of the shading equation. Even though the usual filters cannot be used on these normals, the filters do provide a means of removing the aliasing [Blin78c, Blin78a]. A simple example serves here to illustrate how using standard filters removes aliasing for bump mapping, but does not do it in a "correct" way. Consider the surface with the distribution of normals detailed in Figure 1.5. If these normals are replaced with their average, the resulting normal distribution bears no resemblance to the original distribution (Figure 1.6). There is ongoing work in the area of filtering normal distributions [Four93].

Normal distribution
Figure 1.5

Filtered normal distribution
Figure 1.6

1-4.1 Colour textures

In this thesis we will be dealing with those elements of the texture space that can be 'averaged', i.e. variables that are linear.

²Note that in the case of translucent textures the texture may have to be evaluated throughout the whole volume along the viewing direction.
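The bump-mapping example above can be made concrete with a small numeric demonstration (hypothetical code, using two-dimensional Lambertian shading) of why normals fall outside this 'averageable' class: averaging the shading of two normals and shading the averaged normal give quite different answers.

```python
import math

def lambert(n, l):
    """Lambertian shading: the clamped dot product of normal and light."""
    return max(0.0, n[0] * l[0] + n[1] * l[1])

def normalize(v):
    m = math.hypot(v[0], v[1])
    return (v[0] / m, v[1] / m)

light = (0.0, 1.0)
# Two surface normals tilted 60 degrees to either side of the light.
n1 = (math.sin(math.radians(60)), math.cos(math.radians(60)))
n2 = (-n1[0], n1[1])

# Filtering the shaded values: each normal shades to cos(60) = 0.5.
avg_of_shading = 0.5 * (lambert(n1, light) + lambert(n2, light))

# Filtering the normals first: the averaged normal points at the light.
n_avg = normalize(((n1[0] + n2[0]) / 2, (n1[1] + n2[1]) / 2))
shading_of_avg = lambert(n_avg, light)
```

Here avg_of_shading is 0.5 while shading_of_avg is 1.0: replacing a distribution of normals by its average incorrectly brightens the surface.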
When dealing with three-dimensional textures we will primarily concern ourselves with textures that alter the colour of the object.

1-4.2 Filtering semantics

In its most general form a filter is a weighting function F(t) applied to another function T(t). There are different reasons for filtering. Anti-aliasing was the original reason for this work. It soon became apparent that the use of filters for three-dimensional textures provided a far richer set of operations than simply anti-aliasing. In both three-dimensional texture mapping and volume rendering there are a large number of possible filters that can be applied to the data. We will not attempt to enumerate these filters in this thesis; rather, we will select some examples of filters and show how their application can be evaluated using a set of approximating techniques. These techniques are developed and evaluated in Chapters 4 and 5 of this dissertation. Chapter 6 discusses the use of filters for volume rendering.

1-4.3 Reconstruction filters

The reconstruction of the data is another area where there is potential for aliasing or reconstruction artifacts to appear. For the most part the two reconstruction filters that have been used are the box filter and the tri-linear interpolation filter. Figures 1.7 and 1.8 illustrate these reconstruction filters in one dimension for a particular sample set. When the box reconstruction filter is used the image often looks as though the volume is made up of uniform cubes. This effect is somewhat diminished when tri-linear interpolation is used. To date there is no discussion of using higher-order reconstruction filters in volume rendering applications. Because most three-dimensional textures are procedurally defined, the issue of reconstruction filters has not been so relevant.

Box filter reconstruction of a one-dimensional signal.
The bold vertical lines indicate the position and values of the sample set.
Figure 1.7

Linear interpolation reconstruction of the signal used in the previous figure.
Figure 1.8

1-4.4 Filter approximation requirements

In some sense two-dimensional filtering is a much simpler task than the filtering of three-dimensional textures. The initial motivation for this work was to provide filters for anti-aliasing of three-dimensional textures. As we started to look at three-dimensional textures it became obvious that filters could be used for more than simple anti-aliasing. For some of the newer volumetric textures the display of the texture required an averaging over a region near the surface of the object. This calculation can easily be formulated as a filtering operation. Similar filters can be used in the display of volumetric data. Different displays of the volumetric data can thus be generated by designing different display filters.

This leaves two possible avenues of research: filter design and filter evaluation or approximation. Considering the path by which we arrived at this point, it is natural that we chose to concentrate on filter approximation techniques. Our hope is that the results in this thesis will allow further investigation of filter design with applications in volume rendering and texture mapping. In order to support further study into filter design we will base our analysis primarily on the flexibility of the filter approximating technique.
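In one dimension the two reconstruction filters of Section 1-4.3 reduce to nearest-sample selection and linear interpolation; a minimal sketch (hypothetical function names, with samples at integer positions):

```python
def box_reconstruct(samples, t):
    """Box-filter reconstruction: snap to the nearest sample."""
    i = min(len(samples) - 1, max(0, int(round(t))))
    return samples[i]

def linear_reconstruct(samples, t):
    """Linear interpolation, the 1-D analogue of tri-linear filtering."""
    i = min(len(samples) - 2, max(0, int(t)))
    f = t - i
    return (1.0 - f) * samples[i] + f * samples[i + 1]

samples = [0.0, 1.0, 0.0, 1.0]
box_val = box_reconstruct(samples, 1.25)     # piecewise-constant result
lin_val = linear_reconstruct(samples, 1.25)  # blends samples 1 and 2
```

The piecewise-constant result is what produces the 'uniform cubes' appearance in volume renderings; linear interpolation smooths it at slightly higher cost.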
Other evaluation criteria might include pre-processing costs, filter evaluation cost, fidelity measures, and visual effects.

1-5 Thesis Goals

The main contributions of this thesis can be summarized as follows:

• An overview of the three-dimensional texture mapping filtering issues is provided. We study the relationship between two-dimensional computer graphics sampling and filtering problems and the related three-dimensional problems.

• The usefulness of filtering is shown to be greater than that of simply anti-aliasing. We show how some of the display problems in volumetric texture display and volume rendering can be formulated as filtering problems.

• The evaluation of these filters is costly. We develop three techniques for approximating the evaluation of filters. These are:

— Direct convolution evaluation using numerical quadrature.

— Elliptical Weighted Average (EWA) filters. This technique is developed as an extension of the similarly named two-dimensional technique presented by Greene and Heckbert [Gree86].

— NIL maps. This technique is also developed as an extension of the corresponding two-dimensional technique presented by Fournier and Fiume [Four88a].

• These three techniques are studied along with a fourth technique from the literature, known as clamping [Nort82, Perl85]. We compare their flexibility, performance, and domain of application. Examples of their application to texture mapping and volume rendering are also included.

• We present the results of an initial investigation into the use of filters for volume rendering.

1-6 A Road-map to this thesis

The chapters that follow are:

• Chapter 2
Definitions
An overview and precise definition of the terms, concepts, and formulas used throughout the dissertation.

• Chapter 3
Related work
This work was influenced by the two-dimensional texture mapping, the three-dimensional texture mapping, and the volume rendering literature.
In this chapter we present an overview of the literature from these three areas.

• Chapter 4
Filtering techniques
Three filtering techniques were developed for volumetric data: direct evaluation by quadrature, EWA filters, and NIL maps. These techniques are presented with discussions of their possible implementation. A fourth technique from the literature, known as clamping, is also presented briefly.

• Chapter 5
Comparison and application of the techniques
The four techniques are evaluated with regard to their application to texture mapping and volume rendering. We present example images illustrating the application of these techniques to texture mapping.

• Chapter 6
Filters for volume rendering
Three examples are used to illustrate the use of filters for volume rendering.

• Chapter 7
Conclusions
Summary, conclusions, and recommendations of the dissertation.

• Appendix A
Implementation details of NIL maps
A variety of speed-ups for the technique are presented.

• Appendix B
High-level view of NIL code
The code at the heart of the NIL map implementation for volume rendering.

• Appendix C
Code required to make a NIL motion blur filter
An example of the C code required for the implementation of a three-dimensional motion blur filter.

Chapter 2
Definitions

It is not necessary to understand things in order to argue about them.
Pierre de Beaumarchais

Definition 1 Volumetric Data:
Volumetric data is a discrete data set or a continuous function defined over a volume V ⊂ R³.

Definition 2 Discrete Volumetric Data:
A discrete set of samples of volumetric data acquired by some sampling process over a finite volume.

Definition 3 Procedural Volumetric Data:
A continuously defined volumetric data set which is defined algorithmically.

Three-dimensional texture maps have been implemented as procedural textures.
Most of the acquired volumetric data used in volume rendering is discrete.

Definition 4 Texture:
A texture is a map from a geometric space to a texture space.

Examples of textures include colour textures (a map to (red, green, blue)) and normal perturbation or bump maps (a map to (nx, ny, nz)). In practice two-dimensional texture maps have been discrete and three-dimensional textures have been procedural.

Definition 5 Frame Buffer:
A portion of main memory dedicated to the storage of an image.

Usually a frame buffer is associated with a display device. If this is the case there is an implicit mapping from the frame buffer values to the intensities displayed on the display device.

Definition 6 Pixel:
The smallest addressable element of a frame buffer.

As such, the strict definition of a pixel is a numerical value representing the colour at a point in an image. There is some confusion inherent in this term, partly due to the confusion between an image stored in a frame buffer and an image displayed on a physical screen. When an image is displayed on a screen the pixel values of the frame buffer are converted into an intensity setting over an area of the screen. Sometimes pixels have been defined as the smallest addressable elements on a display device.

The linking of a pixel to a physical display device is not without its merits. We can use a model of the "physical pixels" as an indication of the area of the viewing screen over which we must integrate the image display. In this sense pixels may have a shape. We must emphasize, however, that the shape of the pixel model bears little if any resemblance to the physical shape of pixels on a display device. For further discussion on this topic please see [Lyon89, Naim89].

Definition 7 Texel:
A texel is a texture element.

This term is used when referring to discrete textures.
In three dimensions the texel may be either a volume over which a procedural texture is defined [Kaji89] or a sample of a procedural texture.

Definition 8 Voxel:
A voxel is an element of a discrete volumetric data set.

Notice that this definition is independent of the volume which the sample is thought to represent. This volume has been tailored according to the display technique. Some people consider it a rectangular parallelepiped of uniform density [Wilh91], others consider it a rectangular parallelepiped whose density is defined by a tri-linear interpolation of its eight corner points [Upso88], and others have taken the volume to be a spherical or elliptical volume enclosing the sample point [West90].

Definition 9 Filter:
Consider a signal T(t) defined on (−∞, ∞). A filter is any function F(t) such that the integral

I = ∫_{−∞}^{∞} T(t) F(t) dt

is well defined.

Definition 10 Space invariant filter:
A space invariant filter is a function which is translated and applied to a signal. The filtering operation is then a function of the centre of the filter t₀:

I(t₀) = ∫_{−∞}^{∞} T(t) F(t − t₀) dt.

This operation is called the convolution of the filter with the signal at t₀. In two dimensions this convolution is defined by

I(u₀, v₀) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} T(u, v) F(u − u₀, v − v₀) du dv,

and in three dimensions by

I(u₀, v₀, w₀) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} ∫_{−∞}^{∞} T(u, v, w) F(u − u₀, v − v₀, w − w₀) du dv dw.

Definition 11 Ray tracing:
The process of intersecting a line defined by an origin and a point on the viewing plane with geometric objects.

Typically this process is used for rendering or displaying objects. The viewing plane is discretized according to the size of the frame buffer being used. The rays are used to approximate the path which the light arriving at a point on the viewing plane has followed. This is typically done backwards, i.e.
proceeding from the viewer's position out into the geometric definition of the scene.

Definition 12 Ray marching:
The process of stepping along a ray and sampling some function at each step.

These samples may be used to find a property of the function or may be incorporated into an overall computation which yields a single value. In the case that an object is defined by an implicit surface F(x, y, z) = 0, the ray marching technique can be used to find the surface. If the object being displayed has a volumetric texture near the surface, the ray marching technique can be used to approximate the transport of light through this texture volume, thus yielding a colour for the display of the pixel in question.

Definition 13 Voxel splatting:
The process of projecting the volume representation of a voxel onto a viewing plane. When the area of the image screen is found, the affected pixels within this area are updated.

Definition 14 Pyramidal data structure:
A pyramidal data structure is a hierarchical data structure [Tani75, Rose75]. The data structure is constructed in such a way that for a one-dimensional data set each level requires half the storage of the next lower level. This provides a pyramid of representations for the data.

This technique has been used primarily in the vision literature. The first application of this storage technique serves as an example to illustrate both the data structure and how the levels provide multi-resolution representations of the data.

Definition 15 MIP map:
Multum In Parvo (many things in a small place)
A MIP map [Will83] is a pyramidal data structure for representing two-dimensional textures. Each level contains texels which are the average of the four texels in the level directly below them. If the original image is of resolution x = y = 2^p then the height of the MIP map is p. In this case the single texel on the (p + 1)th level is an average of the whole texture.
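A minimal sketch of this construction (hypothetical code; a square greyscale texture of power-of-two resolution, stored as nested lists):

```python
def mip_levels(texture):
    """Build a MIP pyramid: each level holds the averages of 2x2 blocks
    of texels from the level below, ending in a single texel."""
    levels = [texture]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        n = len(prev) // 2
        levels.append([[(prev[2 * y][2 * x] + prev[2 * y][2 * x + 1] +
                         prev[2 * y + 1][2 * x] + prev[2 * y + 1][2 * x + 1]) / 4.0
                        for x in range(n)] for y in range(n)])
    return levels

tex = [[float((x + y) % 2) for x in range(4)] for y in range(4)]
pyramid = mip_levels(tex)   # three levels: 4x4, 2x2, 1x1
```

The single texel at the top of the pyramid holds the average of the whole texture.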
This scheme is illustrated in Figure 2.9.

Pyramidal representation of two-dimensional data
Figure 2.9

Definition 16 NIL map:
Nodus In Largo (Knot large)
A NIL map [Four88a] is a pyramidal data structure in which a set of basis functions have been pre-integrated with a texture. These pre-integrated basis functions can then be used to approximate the convolution of filters with a texture.

Definition 17 EWA filters:
Elliptical weighted average filtering is a technique which takes advantage of the radial symmetry of a Gaussian filter so that the filter evaluation can be accomplished using a simple table lookup. If an affine transformation was used to generate the elliptical area or volume over which the filter is to be evaluated, this technique can be used.

Definition 18 Transfer functions:
A set of functions which map a scalar data set into one or more different scalar sets.

In this dissertation the term transfer function indicates a function which maps a density distribution into another density distribution. The use of these functions is important in the volume rendering literature since it allows users to easily segment the data they are studying. These segments of the data are often mapped into other scalar data sets. These new data sets can then be displayed concurrently or separately.

Chapter 3
Related work

He who boasts of his descent praises the deeds of another.
Seneca

The work reported here was motivated by a variety of sources. The images and ideas of three-dimensional texture mapping motivated the research into this area. Many of the details that remain open to study in three-dimensional texture mapping have been extensively studied in two-dimensional texture mapping.
Two techniques for the filtering of two-dimensional textures stand out when this literature is read with the idea of extending the techniques to three-dimensional textures, namely EWA filters and NIL maps. The initial study of three-dimensional texture filtering showed that in many cases the problems being studied were similar to those being studied in volume rendering. In this section we present an overview of the relevant literature from two-dimensional texture mapping, three-dimensional texture mapping, and volume rendering.

3-1 Two-dimensional textures

Computer graphics has shown that the display of shaded polygons is easily done. Fortunately for us, the real world is much richer than this.¹ This richness is due partly to the texture details on many of the objects we see. In order to generate comparable images we must develop techniques for adding this complexity to objects. One approach would be to generate geometric models of the texture and use these models to compute the texture components of the image. The work on anisotropic reflection [Ward92, Poul90, Poul89, Kaji85, Bren70] is an example of such an approach.

Two-dimensional colour textures were the first tool with which texture information could be added to geometric objects. Using scanned textures, objects can be mapped with these textures. In carpentry, veneering is often used to enhance the look of furniture built from inferior wood.² When one encounters furniture that has been veneered it is quite easy to spot the discontinuities in the texture. Two-dimensional texture mapping suffers a similar problem.

The mapping from the surface of the object to the two-dimensional texture space is accomplished by generating a two-dimensional parametrization of the surface.

¹This assertion is one that the post-modern architects seem bent on nullifying. Their frenzied desire to populate our cities with buildings that look like their simple computer graphics models still continues.
When we need the texture parameters for a point (x, y, z) on the surface we evaluate the corresponding parameters (u, v) and use them to index into the texture. As an example of such a mapping we can use the sphere. For a given point on its surface (x, y, z) we have the corresponding spherical coordinates (r, θ, φ). Because we are interested in texturing the surface of the object we can ignore r and use θ and φ as the texture parameters. If we have a texture defined over the [0, 1] × [0, 1] square in u, v space we have to find a mapping from [0, 2π] × [0, π] to [0, 1] × [0, 1]. Thus the map

u(x, y, z) = θ(x, y, z) / 2π

²Even though the primary application of the veneer process is to disguise, another similar process is marquetry. In this process small, usually geometric, shapes are glued to a surface to produce strikingly beautiful patterns. 'Marquetry taken to an extreme' would be a fitting description of intarsia. This process uses marquetry to generate images of impressive complexity. These processes are hopefully a better analogy for the two-dimensional texture mapping process.
This means that the only remainingoption is to compute the direct convolution of the texture with the filter. This usuallyinvolves computing the integral of the filter centered at u0,v0 with the texture over someintegral area A.I= fj T(u, v)F(u — u0, v —v0)dudvFigure 3.10 shows a simple computer graphics scene. The textured objects in thisscene are the two checkered planes that are mapped with the same texture. By pickingthree pixels we illustrate some of the problems of finding the appropriate filter to applyto the texture. For simplicity’s sake let us assume that the area on the viewing planeover which we want to compute the texture is a circle. The first pixel circle is near32s-RELATED WORKthe bottom of the screen and when it is transformed into texture space it suffers littledistortion. The second pixel is near the top left of the screen and its projection intotextllre space introduces a strong distortion caused by the perspective transformation.The third pixel contains two objects, the spout of the teapot and the checkered boardbehind the teapot. The projection of this pixel into texture space should take into accountthe amount of the texture that is obstructed by the teapot’s spout. The solution to theproblem of occlusion has usually been addressed by subdividing the pixel into smallerregions until we can assume that only one object covers each sub-pixel. An average ofthese sub-pixels is then used as the final pixel colour [Fiurn83a, Fium83b, Carp84]. If itwere possible to tailor the filter so that the weights of the filter in the occluded regionswere set to zero then this occlusion problem would be addressed. Thus we have a complexrE::J-----Q---Texture distortions due to perspective projection, transformation, and occlusion.Figure 3.10set of areas over which the convolution of a filter and the texture must be computed. 
Asthe scene complexities increase so the complexity of the shape of these filters will also333-RELATED WORKincrease.Once we have found the area of integration we must compute the integral. The costof evaluating these integrals motivated the search for filter approximating methods.Catmull [Catm74, Catm75] is generally credited with introducing two-dimensionaltexture mapping to computer graphics. Because the texture was tied to hi-cubic parametric patches the (u, v) parameters of the patch were used to index into the texture.3Aliasing was reduced by filtering the patch segments with a box filter placed over thepixel. In addition to this box filter the texture was pre-filtered to remove excessivelyhigh frequencies. A pyramid filter was proposed by Blinn and Newell [Blin78b, Blin78aj.Using the quadrilateral approximation to the projection of a square pixel into texture,a pyramid filter was computed as a weighted average of the texels under the distortedpyramid. Feibush, Levoy, and Cook [Feib8Ob] proposed a rather complex method forapproximating a Bartlett filter or triangle filter. First the bounding rectangle of thepixel is projected into the texture space. The texels in the resulting quadrilateral areprojected back to the screen and a weighted sum of the texels that project into the circular representation of the pixel is performed. A modification to this method is presentedby Gangnet, Perny, and Coueignoux [Gang82, Pern82]. In their method sample pointson the screen are projected into the texture and used in the weighted sum. In mostcircumstances these two techniques perform essentially the same filtering operation, theprojection of screen samples into texture space requires the evaluation of the texture atpoints between texels. If the cost of evaluating the reconstruction filter for the texture3This parametrization of the surface does allow textures to be applied to the surfaces of the objects. 
Thereis a problem in that the parametrization induced by a particular basis may not be a desirable parametrizationof the surface.343-RELATED WORKis high then Gagnet’s technique will be costly. On the other hand there may be situations where the somewhat superior sampling technique employed by Gangnet producesa better result.A different approach was proposed by Norton, Rockwood, and Skolmoski [Nort82l.They approximate a box filter in the Fourier domain (sinc filter) by the first two terms inits power series expansion 1 —x2/6. The textures that can be filtered with this techniqueare those textures that can be expressed as T(x, y) = A + F(x, y), where A is a constantterm and F(x, y) has a simple Fourier representation. The clamping is applied to the F()component of the signal depending on the area of the texture map that is to be filtered.The high cost of computing these approximations motivated the study of pre-filteringtechniques for approximating filters. Dungan, Stenger, and Sutty [Dung7Sj used pyramidal data structures [Tani75, Rose75] to allow the use of pre-filtered images in texturemapping. They generated a filtered pyramid of the image using a box filter to generate the different levels. Based on the area that the filter covers in texture space theyselect a level in the pyramid and use the appropriate texel in this level for the filter.Williams [Will83] suggests using a tn-linear interpolation scheme on the same pyramidwhere the sample point now lies between levels. Bi-linear interpolation is used on eachlevel to determine two values of the texture. Linear interpolation between levels is usedto determine the final value of the texture that is to be used.A variant of this pyramidal approach is to use a summed area table as suggested byCrow [CrowS4] and also by Ferrari and Sklansky [Ferr84, Ferr85]. This technique allowsthe approximation of rectangular area filters to be computed. 
The method proposed353-RELATED WORKby Ferrari and Sklansky also allows arbitrary rectilinear polygons to be used as filters.Glassner further extended this method to allow arbitrary quadrilaterals to be approximated [G1as86]. The tables for this approach are constructed by pre-integrating in u andin v. An extension to this idea was proposed by Heckbert [Heck86]. His method relieson the identitydf / if(x) * g(x) = *‘j,) g(x)dx”With Heckbert’s technique filters are approximated by axis-aligned B-splines.Greene and Heckbert {Gree86l use the radial symmetry of the Gaussian filter in theirapproximation technique for Gaussian filters. Assuming that pixels are circular theirprojection onto a plane is either an ellipse, a parabola, or a hyperbola. When the projection of the pixel is an ellipse the resulting elliptical Gaussian is easily found by thecombination of a scale and rotation. The projection of the filter can easily be computedwhen the circular representation of the pixel projects to an ellipse. For each texel in thebounding box of this ellipse the distance from the centre of the ellipse is evaluated. Thevalue of the filtered sample is then the weighted sum of all the texels inside the ellipse.If the filter being approximated is a radially symmetric filter, then the values of the filtercan be pre-computed in one-dimension. The weight of the filter is then a simple tablelook up instead of the computation of the filter weight, which may be costly. Discussionsof the generalization of EWA filters to three dimensions are presented in Chapter 4.A study of conformal mapping with an application to texture mapping by Fiume,Fournier, and Canale [Fium87] motivated the study of space variant filters. NIL mapsare another use of pyramidal data structures proposed by Fournier and Fiume [Foiir88aj.363-RELATED WORKIn their approach the filter is approximated by a set of parametric patches4. 
By analyzing the integrals of the filter approximation with the texture it was found that the integrals of the basis functions with the texture are independent of the shape of the filter. Because these integrals are independent of the filter they can be pre-computed. The approximation to the integral of the filter with the texture is then computed by finding the control points for the patches that are to approximate the filter. These control points are then used as weights for the pre-integrated basis functions. If a hierarchical approach is used for approximating the filter, the cost of this approximation is no longer dependent on the size of the filter but on the number of patches in the hierarchy that approximates the filter. In Chapter 5 we present a more detailed description of NIL maps and their extension to three dimensions.

A method derived from NIL maps has recently been proposed by Gotsman [Gots93]. Instead of using parametric spline patches to approximate the filters, they construct a set of basis functions using singular value decomposition. By restricting the class of filters that they are going to approximate they are able to generate basis functions that better approximate this class of filters. They illustrate their method by developing a set of basis functions for a Gaussian filter applied to ellipses.

Most of the work in two-dimensional texture mapping has been concerned with the development of fast approximations to the required filtering operations. Little consideration has been paid to the properties of the filters. In their paper, Mitchell and Netravali [Mitc88] present a class of cubic reconstruction filters with a discussion of the tradeoffs

⁴In their implementation they used constant, bi-linear, bi-quadratic, and bi-cubic Catmull-Rom patches as an example.
They point out that there is no restriction on the class of patches used so long as they can be defined by a reasonably 'nice' set of basis functions.

involved in using these filters for reconstruction of a signal. The class of filters they studied is parameterized by two parameters. Over this two-dimensional space they classify the filters based on the observed visual characteristics of these filters. The main characteristics used for this classification were ringing, blurring, and anisotropy. Based on this visual classification they present a map of these filters over the two-dimensional space defined by the parameters. The map divides fairly simply according to the visual characteristics they were looking for.

One of the points they make in their paper is that it is difficult to measure objectively the performance of filters in computer graphics. They rank their filters using several subjective visual properties. This is probably the best measure we may have for filters in computer graphics.

Unfortunately this is the only work of which the author is aware in which the properties of filters in the context of computer graphics have been studied. This kind of analysis is more popular in the vision field, where filters are used for a variety of purposes such as edge detection [Rose76, Cann86], texture segmentation [Bovi87], and detection of optical flow [Adel85, Heeg87, Heeg88].

There has been some work in the development of texture models for two-dimensional texture mapping. Feibush and Greenberg [Feib80a] showed how artificial textures could be used in the context of architectural design. Schweitzer [Schw83] showed how artificial textures could be added to objects to aid in their understanding. The textures were tied to geometric properties of the objects such as curvature or normal orientation. Using statistical models for texture, Gagalowicz et al.
[Benn89, Gaga88, Gaga87, Gaga86, Ma86, Gaga83] developed a texture mapping technique that relies on these statistical models. First a white noise texture is mapped onto the textured regions of the image. These textured regions are then adjusted so that they have the same statistical properties as those of the texture model. Because the process is performed on the image after the rendering of the geometric objects has occurred, there is no simple way of ensuring that the same point on an object receives the same texture in two different images.

Another approach to two-dimensional texture mapping is to use these techniques to display textures that are related to a dynamic process. By altering the texture, van Wijk [Wijk91] was able to animate two-dimensional data sets in a variety of ways. Reaction diffusion [Turk91, Witk91] is a model that has been proposed for the growth of textures. Examples of textures that this process is able to model include the fur markings on many animals such as zebras, tigers, and giraffes. The reaction diffusion process is used to produce two-dimensional textures that can then be mapped onto the objects using a two-dimensional parametrization. Even though reaction diffusion textures can produce striking images, it is not clear how to set up the parameters to produce a particular texture. Generating new textures using this technique can involve a search in a large parameter space.

Research in filtering two-dimensional texture maps has concentrated on finding approximations to the convolution of the filter with the texture. This convolution is necessary because the shape of the filter changes throughout the scene. A variety of schemes have been proposed for this approximation. Of these schemes two approaches stand out: elliptical weighted average (EWA) filters [Gree86] and NIL maps [Four88a]. EWA filters allow the use of the Gaussian or other radially symmetric filters with little memory overhead.
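A sketch of that lookup scheme (hypothetical code; the elliptical footprint is given by the implicit coefficients A, B, C of the quadratic form A·du² + B·du·dv + C·dv², and the Gaussian weights come from a pre-computed one-dimensional table indexed by that form):

```python
import math

TABLE_SIZE = 256
MAX_Q = 9.0   # support: texels with A*du^2 + B*du*dv + C*dv^2 < MAX_Q
WEIGHTS = [math.exp(-0.5 * (i / TABLE_SIZE) * MAX_Q) for i in range(TABLE_SIZE)]

def ewa_sample(texture, cx, cy, A, B, C):
    """Weighted average over an elliptical footprint; each filter weight
    is a table lookup rather than an exponential evaluation."""
    h, w = len(texture), len(texture[0])
    acc = wsum = 0.0
    for y in range(h):
        for x in range(w):
            du, dv = x - cx, y - cy
            q = A * du * du + B * du * dv + C * dv * dv
            if q < MAX_Q:
                wgt = WEIGHTS[int(q / MAX_Q * TABLE_SIZE)]
                acc += wgt * texture[y][x]
                wsum += wgt
    return acc / wsum if wsum else 0.0

flat = [[0.75] * 8 for _ in range(8)]
val = ewa_sample(flat, 3.5, 3.5, 1.0, 0.0, 1.0)   # a circular footprint
```

A production version would scan only the bounding box of the ellipse and update the quadratic form incrementally along each scanline.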
The cost of evaluating the filter is dependent on the size of the filter in texture space. When it is known that a radially symmetric filter is to be used, this choice seems to be the appropriate one. When a more complex filter is required we may not be able to use this technique. NIL maps, on the other hand, allow us to approximate arbitrary filters. The cost of evaluating one of these approximations is not dependent on the size or shape of the filter, but rather on the quality of the approximation required. In contrast to EWA filters, the NIL map filtering approximation requires a considerable amount of memory for the storage of the NIL maps. In the next chapter we discuss the extension of these two filtering techniques to three dimensions.

3-2 Three-dimensional textures

The first references to three-dimensional textures were made by Fournier and Amanatides [Grin84], Gardner [Gard84, Gard86], Perlin [Perl85] and Peachey [Peac85]. Fournier and Amanatides mentioned it in another context. Gardner used an approximation to the Fourier series to model a variety of the effects seen in clouds. His textures were used to modulate the translucence of planes or ellipsoids. These objects were then grouped together to approximate clouds. Perlin [Perl85] used solid textures as part of his rendering package. Peachey [Peac85] presented the idea of solid textures with the goal of removing some of the parametrization problems associated with two-dimensional textures. Perlin's work was largely based on using a band-limited noise function for the modeling of textures. By applying a variety of transformations to these noise functions he was able to generate different textures. Because many of Perlin's textures are developed from band-limited noise functions, a clamping [Nort82] approach can be used to ensure that no aliasing frequencies are introduced to the texture while it is being generated.
The cutoff frequency for the clamping of the signal is computed as a function of the size of the projected pixel. Peachey recognized that, even though the introduction of these solid textures removed some of the aliasing problems that were due to the problems of two-dimensional parametrizations, the use of three-dimensional textures introduced another source of potential aliasing problems.

Lewis [Lewi89] proposed a different way of generating the noise function and its associated transformations. In addition to a new method for generating the noise function he, Perlin and Hoffert [Perl89], and Kajiya and Kay [Kaji89] showed how textures could be applied in regions or volumes near the surface of the object.

Kajiya and Kay developed a volumetric model for fur. This fur texel was then mapped onto the surface of the object. When a ray intersected the object the texture was computed by first calculating the entry and exit points of the ray into the texel volume. Between these two points ray marching was used to incrementally compute and accumulate the shading. The resulting value was used as the value of the pixel. Their shading model included interference and scattering.

Perlin and Hoffert, and Lewis generated deformed objects using three-dimensional textures. By embedding their objects in a solid density function they were able to use a thresholding of this function to define a surface. The object was then displayed using a ray marching technique. In this case the ray would not be computing the shading at each point, but rather looking for the point at which the pre-defined threshold was reached. When this threshold was found the shading was computed as if a surface was present at that point. This allowed them to generate objects with highly complex surfaces. These volumetric textures clearly blur[5] the distinction between textures and objects.

Greene [Gree89] presented a volume-based technique for growing plants.
When automata were placed in a voxel grid with the correct rules, interesting plant-like objects were generated. The resulting volumetric objects were displayed using volumetric display methods, either surface extraction or direct display.

Ebert and Parent [Eber90] presented a system that allows the rendering of standard geometrical objects along with gaseous objects. The gaseous objects are modeled using Perlin's turbulence function. In order to display these hybrid scenes they used a variation of the A-buffer scan line rendering technique [Fium83a, Fium83b, Carp84].

In previous work the author [Buch91] made the case for the filtering of three-dimensional textures. Using Simpson's mid-point adaptive quadrature rules over rectangular parallelepiped regions provided a means for evaluating the required integrals [Burd81]. The cost of evaluating this integral can be high,[6] mainly due to the adaptive nature of the quadrature method used and the variety of shapes that the filter can assume. The cost of evaluating the integral is directly proportional to the volume of the rectangular parallelepiped over which the integral is being evaluated and the fourth derivative of the product of the filter with the texture function.

[5] Pun intended.
[6] Simpson's quadratic quadrature rule in one dimension requires a minimum of 5 samples to be taken. In a three-dimensional implementation this means that a minimum of 125 samples must be taken.

To date there are two filtering options for three-dimensional textures: clamping, as proposed by Perlin, and filtering over rectangular volumes, as proposed by the author. In the situations where we know the Fourier spectra of the signal, clamping the signal may suffice; however, most procedural textures are defined directly rather than by their Fourier spectrum. If we could find the Fourier transform of these textures then clamping would be feasible.
The class of functions that is available in most procedural texture engines is not a set of functions for which nice[7] Fourier transforms exist. An example of this is the step function, whose Fourier transform has infinite support. Because the procedural implementations of these textures usually allow the introduction of arbitrarily high frequencies, sampling these textures and finding their Fourier transform are not usually options. The method proposed by the author allows the filtering of arbitrary three-dimensional textures, but in many situations this filtering becomes prohibitive in cost.

3-3 Volume rendering

The increase in available volumetric data and the decrease in the cost of computation power has stimulated research into the field of volume rendering. The display of this data has been studied along two directions, surface extraction and direct display.

The premise of surface extraction methods is that there is some property of the data that can be directly related to a surface in the data. In medical imaging it is obvious that the surface of a bone corresponds to a discontinuity in the density function[8]. Using this density discontinuity it is possible to extract surfaces that approximate the surface of the bones in the data. This approach has been followed by a variety of people [Artz81a, Chri78, Fuch77, Lore87]. The complexity of the generated geometric primitives depends on the reconstruction filter that is applied to the data. If the simple sample and hold filter is used, the resulting signal is made up of rectangular voxels of uniform density. If the reconstruction is more complex then the procedure for generating the geometric objects that represent the surface becomes more complicated. This complication is due to the fact that a continuous signal must now be sampled and the surface reconstructed from these samples.

[7] As in most cases, "nice" means "easy to evaluate".
[8] CAT scans exhibit this property.
Fortunately[5] the other reconstruction filter that is used is the tri-linear interpolation filter. Because this results in voxels with a tri-linear distribution of density throughout them it is simple to estimate the location of an iso-valued surface (iso-surface).

The direct rendering of volumetric data is accomplished by imposing a physical model onto the data [West90, Wilh91, Laur91, Dreb88, Upso88, Max90, Levo90a, Levo90d, Levo90b, Levo90c, Sabe88, Levo88, Novi90]. This model usually has no relationship with the physical object from which the data was obtained. It also has little to do with the process by which the data was obtained. The display of the data is accomplished using the properties of this display model. Most display techniques use a varying density gas model. The density of the gas is related to the data set. Because most of the data being displayed is scalar, this relationship is usually a linear one. If the data is not scalar then either a more complex display model must be found or the different dimensions of the data can be mapped into different scalar volumes that can then be displayed using the gas model.

[5] The term 'fortunately' here is used in the context of the complexity of the resulting code. The truth of the matter is that the issues of reconstruction filters and their associated properties in the context of volume rendering need to be studied. Trousset and Schmitt [Trou87] indicate that there is a problem in using a simple reconstruction filter for the evaluation of the gradient. Rather than introducing a higher order filter they simply extend the 3 x 3 x 3 box filter to a 5 x 5 x 5 box filter.
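As noted above, a tri-linearly reconstructed voxel varies linearly along each axis-aligned edge, so locating an iso-surface crossing on an edge reduces to inverting a linear function. The following sketch is illustrative only (the function names are hypothetical, not from the thesis):

```python
def trilinear(c, x, y, z):
    """Tri-linearly interpolate the eight corner densities c[i][j][k]
    (i, j, k in {0, 1}) at fractional position (x, y, z) in a unit voxel."""
    return (c[0][0][0]*(1-x)*(1-y)*(1-z) + c[1][0][0]*x*(1-y)*(1-z)
          + c[0][1][0]*(1-x)*y*(1-z)     + c[0][0][1]*(1-x)*(1-y)*z
          + c[1][1][0]*x*y*(1-z)         + c[1][0][1]*x*(1-y)*z
          + c[0][1][1]*(1-x)*y*z         + c[1][1][1]*x*y*z)

def edge_crossing(d0, d1, iso):
    """Parameter t in [0, 1] at which the density reaches `iso` along a
    voxel edge with endpoint densities d0 and d1 (linear along edges)."""
    return (iso - d0) / (d1 - d0)
```

Because the distribution is linear on each edge, the crossing position is exact, which is what makes iso-surface extraction from tri-linear voxels simple.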
Kajiya and Von Herzen [Kaji84] suggested

I = \int_{t_0}^{t_1} e^{-\tau \int_{t_0}^{t} \rho(x(s), y(s), z(s))\, ds} \left[ \sum_{L} I_L(x(t), y(t), z(t))\, p(\cos \theta) \right] \rho(x(t), y(t), z(t))\, dt

as an approximation to the amount of light that arrives at a point on the screen, where x(t), y(t), z(t) are the equations that define the ray along which this integral is being computed and I_L is the integral representing the light that arrives from light source L to points in the data x(t), y(t), z(t). \theta is the angle between the viewing direction and the direction of the ray. The integrals I_L need only be computed when the light locations are changed. \tau is a user parameter that controls the strength of the scattering effect.

If the integrals towards the light are dropped, as suggested by Sabella [Sabe88], then we have

I = \int_{t_0}^{t_1} e^{-\tau \int_{t_0}^{t} \rho(x(s), y(s), z(s))\, ds}\, \rho(x(t), y(t), z(t))\, dt \quad (3.1)

Integrating this equation over a pixel area on the viewing plane defined by (u_0, u_1) \times (v_0, v_1) gives

\int_{u_0}^{u_1} \int_{v_0}^{v_1} I\, du\, dv = \int_{u_0}^{u_1} \int_{v_0}^{v_1} \int_{t_0}^{t_1} e^{-\tau \int_{t_0}^{t} \rho(x(s), y(s), z(s))\, ds}\, \rho(x(t), y(t), z(t))\, dt\, du\, dv \quad (3.2)

Because the display model we use is arbitrary we can approximate e^{-\tau \int_{t_0}^{t} \rho(x(s), y(s), z(s))\, ds} by e^{-\tau (t - t_0)}. When this approximation is used the computation of the shading can be viewed as the convolution of an exponential decay filter with the data set:

\int_{u_0}^{u_1} \int_{v_0}^{v_1} I\, du\, dv = \int_{u_0}^{u_1} \int_{v_0}^{v_1} \int_{t_0}^{t_1} e^{-\tau (t - t_0)}\, \rho(x(t), y(t), z(t))\, dt\, du\, dv \quad (3.3)

This shows how the volume rendering problem can be formulated as a filtering operation. As in texture mapping, the cost of evaluating these convolutions is high, but if the approximation methods we develop for these filters are flexible enough there is no reason to restrict our choice of filters. We may wish to query some local property at some point in the data. If this query can be formulated as a filter operation then we may be able to use one of the approximation methods to evaluate it.

Thus we would like to provide users with a system that allows two modes of interaction, global and local. In the general or global inquiry stage the user is interested in finding global properties of the data.
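The inner integral of Equation 3.3 above — an exponential decay filter applied to the density along the ray — can be sketched as a simple midpoint ray march. This is an illustrative sketch with a hypothetical function name, not the thesis implementation:

```python
import math

def ray_integral(density, t0, t1, tau, n=1000):
    """Approximate the integral of e^{-tau (t - t0)} * density(t) over
    [t0, t1] with a midpoint Riemann sum: the exponential-decay 'filter'
    convolved with the data along one viewing ray (cf. Eq. 3.3)."""
    dt = (t1 - t0) / n
    total = 0.0
    for k in range(n):
        t = t0 + (k + 0.5) * dt          # midpoint of the k-th step
        total += math.exp(-tau * (t - t0)) * density(t) * dt
    return total
```

For a constant density the sum converges to rho * (1 - e^{-tau L}) / tau, the analytic value, which is a convenient sanity check on the decay filter.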
The local query mode may involve the search for a particular property near some point in the data. The filters used in these scenarios will be different. These different filters will then be approximated using similar evaluation techniques.

3-4 Wrapup

Adding textures to objects allows us to easily enhance the display of computer graphics models. Whether we choose to use two-dimensional textures or three-dimensional textures there is a set of filtering issues that must be addressed. In two dimensions we must filter the texture with a space variant filter. Much of the research in two-dimensional texture mapping has studied approximations to the evaluation of filters. Two of these filtering techniques were selected for further study, namely EWA filters and NIL maps.

Three-dimensional textures extended texture mapping so that objects could be textured with solid textures. The filtering of three-dimensional textures has not been studied much except for an application of the clamping technique. Most of the textures that are provided by three-dimensional texture mapping are procedural. Despite the increase in memory sizes, the cost of storing textures of sufficiently high resolution will continue to be prohibitive for the next few years; therefore filtering techniques proposed for three-dimensional textures must allow for the use of procedurally defined textures.

In conclusion we wish to develop filtering methods for three-dimensional texture mapping that:

• Allow arbitrarily shaped and scaled filters to be used.
• Handle three-dimensional procedural textures.

Volume rendering has studied a variety of approaches for displaying volumetric data. Of interest here is the set of volume rendering techniques that use ray tracing for the display of the data. These techniques are remarkably similar to the techniques used for the display of textures defined in volumes near the surface of objects. Transfer functions are useful tools for volume rendering.
This means that it is important that any filtering techniques used for volume rendering not preclude the use of transfer functions.

Thus we require filters for volume rendering that:

• Allow approximation of arbitrarily shaped and scaled filters.
• Incorporate transfer functions.

In the next chapter we describe four filtering techniques. Each of these techniques has applications in texture map filtering, and one of the techniques has applications in the volume rendering area. The evaluation and comparison of the filtering techniques is presented in Chapter 5.

Chapter 4
Filtering techniques

In this chapter we present the details of four filtering techniques for volumetric data. These techniques are clamping, direct evaluation, EWA filters, and NIL maps. Clamping is the application of the two-dimensional filtering technique of the same name [Nort82] to three-dimensional textures as suggested by Perlin [Perl85]. The direct evaluation technique [Buch91] uses Simpson's mid-point adaptive and Gaussian cubic quadrature rules for approximating the integral of the filter applied to the texture. The implementation of EWA filters for the filtering of three-dimensional textures is an extension of the two-dimensional EWA filters presented by Greene and Heckbert [Gree86]. NIL maps are an extension of the two-dimensional NIL maps presented by Fournier and Fiume [Four88a]. In addition to the extension to three dimensions it is demonstrated how NIL maps can be used with procedural textures and transfer functions.

4-1 Clamping

In their paper Norton et al. [Nort82] showed how the filtering of textures could be approximated using a Fourier representation of the texture. Given a complex-valued[1]

[1] The signal need not be complex-valued. As is often the practice, complex variables are being used here to develop the theory.
In practice all operations will be restricted to real-valued signals.

texture defined by

I(x, y) = \sum_{k,l} c_{k,l}\, e^{i(kx + ly)}

they show that the application of a box filter over a parallelogram centered at (x_0, y_0) and defined by

(x_0, y_0) + s(x_1, y_1) + t(x_2, y_2) \quad \text{with} \quad -1 < s, t < +1

can be approximated in the Fourier domain by

e^{i(kx_0 + ly_0)} \left( 1 - \frac{(kx_1 + ly_1)^2}{6} - \frac{(kx_2 + ly_2)^2}{6} \right) = e^{i(kx_0 + ly_0)}\, C_0(x_1, x_2, y_1, y_2, k, l)

where C_0 is the approximation to the Fourier representation of the box filter generated by using the first two terms of the power series expansion of the sinc function. Norton et al. further simplified this approximation by truncating this quadratic function so that it never became negative. The clamping function C is then defined by

C(x_1, x_2, y_1, y_2, k, l) = \begin{cases} 1 - r\left( (kx_1 + ly_1)^2 + (kx_2 + ly_2)^2 \right) & \text{if } 1 - r\left( (kx_1 + ly_1)^2 + (kx_2 + ly_2)^2 \right) > 0 \\ 0 & \text{otherwise} \end{cases}

where r is a spread constant that can be used to control the width of the filter.

Perlin [Perl85] suggested a variant of the above method for filtering three-dimensional textures. The maximum allowable frequency is computed from the size of the projected pixel in texture space. By comparing the frequency of a texture component with this maximum frequency it is possible to exclude texture components that may introduce aliasing. The use of this step function for a clamping function can introduce discontinuities in the resulting image. We can use a simple variant of the quadratic approximation suggested by Norton et al. to reduce the impact of these discontinuities.

Given a pixel area on a viewing screen we project it onto the tangent plane of the object. From the size of this projection we compute the maximum allowable frequency f_max. The clamping function we use is then

C(f) = \max\left( 0, 1 - \left( \frac{f}{f_{max}} \right)^2 \right).

Clamping function C(f) = max(0, 1 - (f/f_max)^2) when f_max = 3.2
Figure 4.11

This has the effect of gradually removing the aliasing frequencies before they are truncated.
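For a texture given as a sum of sinusoidal components, this clamping amounts to scaling each component's amplitude by C(f) before summing. A minimal sketch, assuming a (frequency, amplitude) component list and hypothetical function names:

```python
import math

def clamp_weight(f, f_max):
    """Quadratic clamping function C(f) = max(0, 1 - (f/f_max)^2)."""
    return max(0.0, 1.0 - (f / f_max) ** 2)

def clamped_texture(components, f_max, x):
    """Evaluate a sum-of-sinusoids texture, attenuating each (frequency,
    amplitude) component by the clamping function before summation."""
    return sum(amp * clamp_weight(f, f_max) * math.sin(f * x)
               for f, amp in components)
```

Components whose frequency exceeds f_max contribute nothing, while components just below f_max fade out smoothly rather than switching off abruptly — which is exactly the discontinuity-reducing behaviour described above.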
In Figure 4.11 we show the clamping function that results when f_max = 3.2.

4-2 Direct evaluation

We developed a system for approximating three-dimensional filters using numerical integration [Buch91]. Two volumes of integration were developed, one aligned with the tangent plane of the object and the other aligned along the line of sight. The volume of integration aligned with the tangent plane is useful for evaluating the texture on objects with high opacity. The volume of integration aligned with the ray is useful for objects whose material has low opacity.

For those materials that have high opacity we require the evaluation of a filter over the surface of the object. The area over which this integral is to be evaluated is the projection of the pixel onto the surface. This area can be approximated by the projection of the pixel onto the tangent plane of the object. In order to reduce the aliasing that may be introduced by sampling the texture on the tangent plane we evaluate the filter over a volume that encloses this projection. This volume is computed by finding the rectangle in the tangent plane that encloses the ellipse[2]. This ellipse is the projection of the circular representation of the pixel onto the tangent plane. This rectangle is then extruded in a direction normal to the tangent plane. This is illustrated in Figure 4.12.

The volume of integration aligned with the ray is constructed in a similar fashion. The pixel is projected onto a plane perpendicular to the viewing ray. The center of this projected pixel lies at the intersection of the viewing ray and the object. The bounding rectangle of this circle is then extruded into the object. The depth of this extrusion

[2] In most situations the projection of the circular pixel onto the tangent plane will be an ellipse.
In the rare cases where the circle projects to either a parabola or a hyperbola we can either use a large ellipse to approximate the projection or use a pre-computed average for the filtered texture sample. If the DC, or average, component of the texture is known then this could be used as the average of the texture.

Box enclosing projection of circular pixel onto tangent plane.
Figure 4.12

is controlled by the user. By varying this depth parameter (f), materials of differing opacities can be modeled and displayed.

Once a volume of integration V is defined we must evaluate the integral of the filter F(u, v, w) and the texture T(u, v, w) over this volume:

I = \int\!\!\int\!\!\int_V F(u, v, w)\, T(u, v, w)\, du\, dv\, dw

Evaluating this integral analytically is often not feasible, thus we must resort to numerical integration methods.

Box enclosing projection of circular pixel onto plane parallel to viewing plane.
Figure 4.13

Two numerical integration or quadrature methods were studied. These are adaptive Simpson's quadrature and fixed-point cubic Gaussian quadrature rules [Burd81]. When an integral volume is relatively thin in one or more of its dimensions it is appropriate to use the Gaussian rule for evaluating the integral along that direction. For the integral volumes that are aligned with the surface of the object we have found that a hybrid method consisting of Simpson's adaptive mid-point method in the two dimensions over the plane and a cubic Gaussian quadrature normal to this plane works quite well; in this case the minimum number of samples taken is 5 x 5 x 3 = 75. This hybrid method works well because in most applications of this filter the dimension of the box normal to the plane is much smaller than the other two dimensions. Because this is not the case when we consider the integral volumes aligned with the ray, we have found that we need to use Simpson's adaptive quadrature rule in all three dimensions.
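A fixed-cost version of this hybrid rule — composite Simpson with 5 samples in each of two dimensions and 3-point Gauss-Legendre in the thin third dimension, 75 samples in total — can be sketched as follows. This is an illustrative, non-adaptive sketch with hypothetical names, not the adaptive implementation described above:

```python
import math

def gauss3(f, a, b):
    """3-point Gauss-Legendre rule on [a, b] (nodes 0, +/-sqrt(3/5);
    weights 8/9, 5/9); exact for cubics."""
    mid, half = 0.5 * (a + b), 0.5 * (b - a)
    r = math.sqrt(3.0 / 5.0)
    return half * (5 * f(mid - half * r) + 8 * f(mid) + 5 * f(mid + half * r)) / 9

def simpson5(f, a, b):
    """Composite Simpson's rule with 5 samples on [a, b]."""
    h = (b - a) / 4
    return h / 3 * (f(a) + 4 * f(a + h) + 2 * f(a + 2 * h) + 4 * f(a + 3 * h) + f(b))

def hybrid_integral(g, box):
    """Integrate g(u, v, w) over box = ((u0,u1),(v0,v1),(w0,w1)):
    Simpson in u and v, Gauss in the (thin) w direction — 5*5*3 samples."""
    (u0, u1), (v0, v1), (w0, w1) = box
    return simpson5(
        lambda u: simpson5(
            lambda v: gauss3(lambda w: g(u, v, w), w0, w1), v0, v1), u0, u1)
```

Both component rules are exact for cubics, so the tensor-product rule reproduces low-order integrands exactly; bounding the cost this way is the "fixed-cost quadrature rule" alternative mentioned below.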
When the full adaptive method is used the minimum number of samples taken is 5 x 5 x 5 = 125.

The cost of using this approximation to the filter is dependent on the quadrature rule chosen. Simpson's adaptive rule uses an estimate of the fourth derivative of the function being integrated. This means that the cost of evaluating these three-dimensional filters is proportional to the magnitude of the fourth derivative of the product of the filter and the texture. If we wish to bound the cost of this approximation we can either restrict the level of subdivision that we allow Simpson's rule to take or we can use a fixed-cost quadrature rule.

4-3 Elliptical Weighted Average filtering

In a system designed for distorting images for later projection on an OMNIMAX(TM) screen, Greene and Heckbert [Gree86] approximate the convolution of a truncated Gaussian with textures over arbitrarily oriented ellipses. These ellipses are the projection of circular pixels into the texture space. The Gaussian filter is centered over the resulting ellipse and its weights are evaluated using the radial symmetry of the Gaussian filter.

Elliptical weighted average filter evaluation.
Figure 4.14

For each texel that lies within the truncated Gaussian the weight of the filter at that position is computed by first evaluating the square of the radial distance ||Q||^2 = du^2 + dv^2 of the point from the center of the ellipse. This squared radial distance is then used to index into the pre-computed one-dimensional Gaussian filter. By using this pre-computed array of filter values the cost of evaluating the Gaussian filter is reduced to a table look-up operation. The use of this filtering technique allows the approximation of radially symmetric filters centered in the projected ellipse. When the perspective projection is being used the center of a circle does not project to the centre of the ellipse.

If the center of the circle projected to the center of the ellipse then A and B would be equal. This diagram shows that this is not the case.
Figure 4.15
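The table look-up evaluation described above can be sketched as below. For brevity the sketch assumes an axis-aligned ellipse (Greene and Heckbert handle arbitrarily oriented ellipses), and the names and the Gaussian width alpha are illustrative assumptions:

```python
import math

def gaussian_table(alpha=2.0, n=1000):
    """Pre-computed 1-D Gaussian e^{-alpha r^2} indexed by the squared
    radius r^2 in [0, 1); one table look-up replaces an exp() per texel."""
    return [math.exp(-alpha * k / n) for k in range(n)]

def ewa_filter(texture, centre, axes_inv, table):
    """Weighted average of the texels inside the ellipse ||Q||^2 < 1, where
    Q = (du, dv) is the texel offset scaled by the inverse semi-axes
    axes_inv = (1/a, 1/b); weights come from the pre-computed table."""
    (cu, cv), (ia, ib) = centre, axes_inv
    num = den = 0.0
    for u in range(len(texture)):
        for v in range(len(texture[0])):
            du, dv = (u - cu) * ia, (v - cv) * ib
            r2 = du * du + dv * dv
            if r2 < 1.0:                      # inside the truncated Gaussian
                w = table[int(r2 * len(table))]
                num += w * texture[u][v]
                den += w
    return num / den
```

The per-texel work is one squared-distance evaluation and one array index, which is why the cost of EWA grows with the area of the ellipse in texture space but not with the complexity of the filter itself.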
The problem is illustrated in Figure 4.15. In Figure 4.16 we see the error associated with centering the Gaussian filter as the filter projection approaches the horizon in the display of the image in Plate 4.2. This error is only significant near the horizon. Another example of this problem can be seen in Plate 4.1. In this plate we present two views of three ellipses. The first view has the eye positioned so that these ellipses project to circles of the same size on the screen. As can be seen from the second view, which shows an orthographic view of the ellipses, the texture on the ellipses is centered where the projected centre of the texture is not at the centre of the circles in the first view.

Error associated with centering a Gaussian. This ratio was computed on the image used in the next chapter.
Figure 4.16

For three-dimensional texture mapping there are two extensions of this technique that we wish to explore. The first is to apply this technique to the projection of the pixel onto the tangent plane of the object. The other option is to extend this technique to three dimensions so that the filtering is performed over an ellipsoidal volume near the surface of the object. In this case the third axis of the ellipsoid will be defined by the same parameter that defined the normal dimension of the integral volume for objects made of opaque materials. When procedural textures are being used there are no restrictions on the positions of the sample points. This allows us to distribute the sample points in the bounding box aligned with the ellipsoid, rather than in the bounding box aligned with the texture. The number of points used to approximate a filter depends on the size of the filter. Because these ellipses can become quite large it is necessary to limit the number of sample points.

A Gaussian centered in texture space is not centered in screen space.
Plate 4.1

The filtered texture sample is then a weighted sum of texture samples. The texture is sampled at the sample point locations and the weight of this sample point is computed by a look-up of the pre-computed Gaussian.

4-4 NIL maps

In 1988 Fournier and Fiume [Four88a] presented an algorithm that used a pyramidal representation of the data to compute the filtering of two-dimensional textures in constant time (with respect to filter shape and size). An overview of NIL maps is presented, followed by discussions of the extensions of this technique to three dimensions, to include transfer functions, and to handle procedural textures.

Consider a signal T(t) defined over some region [a, b]. We wish to evaluate the convolution of a filter F(t) centered at t_0. This is evaluated using the following integral:

I(t_0) = \int_{-\infty}^{\infty} F(t - t_0)\, T(t)\, dt \quad (4.1)

If we assume that the signal is identically 0 outside of the region [a, b] the integral becomes:

I(t_0) = \int_{a}^{b} F(t - t_0)\, T(t)\, dt \quad (4.2)

Now let us approximate the filter F as K curve segments over the interval [a, b]. Without loss of generality we can assume that the segments are of unit width, that is b - a = K. If this is not the case it is a simple matter to introduce a change of variables so that b - a = K is satisfied. Each curve segment is defined by a set of basis functions B_0, B_1, ..., B_{M-1} so that the filter is approximated by the union of these K segments:

F(t - t_0) = \bigcup_{m=0}^{K-1} \sum_{i=0}^{M-1} b_i^m(t_0, F)\, B_i(t - m) \quad (4.3)

Note the unusual notation for the control points of the curves. We have indicated that the control points are dependent on both the filter F and its location t_0. For simplicity's sake we will drop the b_i^m(t_0, F) notation in favor of the simpler b_i^m notation. Substituting
Substituting604-FILTERING TECHNIQUESEquation 4.3 into Equation 4.2 we get:K1M—11(t0) j Umo bB(t — m)T(t)dt (4.4)Because the integral is being evaluated over these K segments of the curve we can evaluatethis integral as K distinct integrals. So, separating the integrals and introducing a changeof variable t —* t + rn we have:K—i 1 M—11(t0) f bB(t)T(m + t)dt. (4.5)m=O 0 i=0Because the b coefficients are independent of t:K—i M—1 11(t0) = b J B(t)T(m + t)dt. (4.6)m0 i=0 0This means that the NIL cells G can be pre-computed.=B(t)T(m + t)dt (4.7)The approximation to the integral 4.2 is then:K-i M-11(t0) = bC (4.8)m=0 i=0Unfortunately the cost of evaluating this approximation is still dependent on thewidth of the filter. A solution to this problem is to use our knowledge of the filter’s61)4-FILTERING TECHNIQUESproperties to guide the approximation of the integral.The first step in developing this constant-cost approximation is to compute a pyramidal NIL map. Let’s use-y to indicate the level of the NIL map, where y = 0 indicates weare on the lowest level of the representation, and increasing means we are generatinglevels whose NIL cells cover increasingly larger portions of the texture. For the timebeing let us also assume that K = 2 for some integer p. We deal with the case where Kis not an integral power of 2 in Appendix A.These new eve1s of the pyramid are defined by:= 101 B(t)T(27m+27t)dt. (4.9)The pyramid that this produces is illustrated in figure 4.17. Consider a filter thatI IL1 1 1 100 1 2 2iI I ci0 0y ( I& G CIPyramid data structure for a one-dimensional NIL mapFigure 4.17is relatively smooth and spans the whole texture; such a filter is illustrated in Figure4.18. Let us lay down eight points of interest, or trigger points, where we want theapproximation of the integral to focus. Because the filter spans the whole texture we624-FILTERING TECHNIQUESdistribute these trigger points uniformly through the texture. 
We now place these triggerpoints at the level of the NIL pyramid that encloses them, in this case it is the top level.We will subdivide this NIL cell if there are more than two trigger points in it. Thissubdivision proceeds until only two or less trigger points lie in a NIL cell. The resultingsubdivision is shown by the positions of the trigger points. The dotted line shows theresulting approximation to the filter if the basis B, chosen is linear. Pseudo code forthis subdivision algorithm is presented in Figure 4.19.3Cl2ClA I I ICl I I IbFilter that spans the whole texture.Figure 4.18In Figure 4.20 we illustrate a filter that has non-zero weights in a localized regionaround t. Because we know that the filter is localized in the area around t0 we increasethe number of the trigger points in this region. Performing the subdivision as indicated3Note that this approximation is a virtual approximation to the filter function. The values of the controlpoints are used for the approximation of the integral of the filter with the texture and not for evaluating thecurve.F(t - t0)634-FILTERING TECHNIQUESFirst find the smallest cell that encloses the trigger point]cell 4=Find_Lowest _Enclosing_NIL_Cell (trigger_points)PROCEDURE Fin&Nil_Hierarchy (points ,cell, to1erance,ymjm)Check to see if tolerance satisfiedIF ( number_of_points_in_cell(cell, points)< tolerance ) thenRETURN (cell )END ifCheck to see if maximum subdivision reachedIF ( -y(cell)= ) thenRETURN ( cell )END ifSplit the cell into its eight children and process themhierarchy ‘nu11FOR CHILD IN CHILDREN(CELL) DOhierarchy hierarchy + Find_NiLHierarchy(points,child,tol)END forRETURN (hierarchy )END FindNi1JIierarchyGeneration of NIL map hierarchyFigure 4.19643CCL0Using information about the shape of the filter we are able to place the trigger pointscloser together. 
By positioning the trigger points in this manner the resulting hierarchyis better suited to approximate this filter.Figure 4.20Another example of the use of this technique is presented in Figure 4.21. In this figurewe see a highly localized filter whose approximation requires the use of negative levelsof the NIL map. Using these negative levels gives a better approximation of the integralbecause they provide a better fit to the filter curve. Again we defer the details of theimplementation of these negative levels to Appendix A.Fournier and Fiume [Four88bj showed that in the case of two-dimensional texturemaps the number of NIL cells chosen in this manner was linear in the number of triggerpoints and the tolerance. The proof of this in three dimensions is presented in the nextchapter.4-FILTERING TECHNIQUESyields a different hierarchy. This hierarchy focuses the evaluation of the filter in theregion of interest, that was highlighted with a higher density of trigger points.I. zzzAzzzzAzzzzzzzzzF(t — t)=aT(t)=to t=b653C2CL0C—1C4-FILTERING TECHNIQUESNarrow filter showing the need for negative levels.Figure 4.21Using NIL maps as indicated allows the approximation of a range of filters in timeindependent of their width or shape. We wish to impose one further condition on thismethod. Given a filter whose integral over the interval [a, b] is K= f F(t)dt and aconstant texture T(t) = T0 ‘cit E [a, b] the integralbecomesb1(t0)= f T(t)F(t —t0)dtb1(t0) = T0j F(t —t0)d = T0K(4.10)(4.11)i_________________________i/2 I_____JI L---l I I I— t0)t=aT(t)t=b4-4.1 Normalization664-FILTERING TECHNIQUESbecause the texture is constant. We shall require that the approximation of this convolution by any NIL map approximation be equal to the quantity T0K. Because the textureis constant over the [a, b] interval the NIL map coefficients are simply the integralof the basis functions multiplied by the constant K since:f B(t)T(27m+ 2t)dt = T0 f B(t)dt. 
(4.12)

So for a particular approximating NIL hierarchy H for the filter F(t) we require

    I_H = T₀ K = S Σ_{p∈H} Σ_{i=0}^{M−1} b_i^p T₀ ∫₀¹ B_i(t) dt.    (4.13)

Because T₀ is constant we have

    I_H = T₀ K = S T₀ Σ_{p∈H} Σ_{i=0}^{M−1} b_i^p ∫₀¹ B_i(t) dt.    (4.14)

Thus the scale factor S is

    S = K / ( Σ_{p∈H} Σ_{i=0}^{M−1} b_i^p B_i ),    (4.15)

where the quantities B_i = ∫₀¹ B_i(t) dt can be pre-computed. In many situations the filters are normalized so that ∫ F(t − t₀) dt = K = 1. In this case the scale factor is simply

    S = 1 / ( Σ_{p∈H} Σ_{i=0}^{M−1} b_i^p B_i ).    (4.16)

4-4.2 Weighting the levels of the NIL maps

Figure 4.22: Approximation hierarchy at two levels.

Another normalization problem is introduced when we use a hierarchy of NIL cells to approximate the filter. In Figure 4.22 we illustrate an approximation to a filter with three NIL cells: one cell C¹ at the higher level spanning half of the approximation region, and two cells C² and C³ at the lower level, each spanning a quarter of it. Consider the situation when we set the filter F(t) = 1 over the approximation region and the texture T(t) = 1. The approximation to this filter is then computed as

    I = ( Σ_{i=0}^{M−1} b_i¹ c_i¹ + Σ_{i=0}^{M−1} b_i² c_i² + Σ_{i=0}^{M−1} b_i³ c_i³ )
        / ( Σ_{i=0}^{M−1} b_i¹ B_i + Σ_{i=0}^{M−1} b_i² B_i + Σ_{i=0}^{M−1} b_i³ B_i ).

But since T(t) = 1 we have c_i^p = B_i for every cell, and the numerator equals the denominator. At first glance this appears to produce the correct result. However, the weights that are assigned to the different portions of the filter approximation are equal. This means that the contribution of the NIL cell that spans half the approximation (C¹) is weighted equally with those that span a quarter of the approximation (C² and C³). This weighting of the NIL cells can cause a problem.
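The normalization condition above can be made concrete with a small sketch. This is not the thesis implementation: the patch layout, the Gaussian control weights, and the helper names are hypothetical; only the cubic Lagrange basis integrals B_i = (1/8, 3/8, 3/8, 1/8) come from the basis used in this chapter.

```python
import math

# Integrals of the cubic Lagrange basis functions over [0, 1]
# (the Newton-Cotes 3/8-rule weights): B_i = int_0^1 B_i(t) dt.
B_INT = [1.0 / 8.0, 3.0 / 8.0, 3.0 / 8.0, 1.0 / 8.0]

def scale_factor(patches, K=1.0):
    """S = K / (sum_p sum_i b_i^p B_i), as in Equation 4.15."""
    denom = sum(b * Bi for weights in patches
                for b, Bi in zip(weights, B_INT))
    return K / denom

def approximate_integral(patches, coeffs, K=1.0):
    """I = S sum_p sum_i b_i^p c_i^p, the NIL map approximation."""
    S = scale_factor(patches, K)
    return S * sum(b * c for ws, cs in zip(patches, coeffs)
                   for b, c in zip(ws, cs))

# Hypothetical filter control weights: a Gaussian sampled on two patches.
gauss = lambda t: math.exp(-t * t)
patches = [[gauss(-1.0 + i / 3.0) for i in range(4)],
           [gauss(i / 3.0) for i in range(4)]]

# For a constant texture T(t) = T0, every coefficient is c_i = T0 * B_i,
# so the normalized approximation must return exactly T0 * K.
T0 = 0.7
coeffs = [[T0 * Bi for Bi in B_INT] for _ in patches]
I = approximate_integral(patches, coeffs, K=1.0)
```

For a constant texture every coefficient c_i^p reduces to T₀∫B_i, so the scaled sum returns T₀K regardless of the filter weights, which is exactly the condition Equation 4.13 imposes.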
A simple example is a texture that is zero over the interval [0, 1/2) and one elsewhere:

    T(t) = 0,  t ∈ [0, 1/2),
    T(t) = 1,  t ∈ [1/2, 1].

In this case the equally weighted sum

    I = ( Σ_i b_i¹ c_i¹ + Σ_i b_i² c_i² + Σ_i b_i³ c_i³ ) / ( Σ_i b_i¹ B_i + Σ_i b_i² B_i + Σ_i b_i³ B_i )

no longer produces the correct value, because the half-width cell contributes on the same footing as the quarter-width cells. The solution is to weight the contributions of the NIL cells according to their height in the NIL map; the quantity 2^{γ_p} accomplishes this. The integral approximation now becomes

    I = ( Σ_p 2^{γ_p} Σ_i b_i^p c_i^p ) / ( Σ_p 2^{γ_p} Σ_i b_i^p B_i ).

4-4.3 Transfer functions

Manipulation of the transfer functions allows us to generate different views of the model that we are displaying. This is particularly true in volume rendering: by altering the transfer functions we can highlight or hide different aspects of the data. In this section we show how the re-initialization of NIL maps can be avoided by using a class of transfer functions defined by a basis set.

Equation 4.9 defines the NIL cells:

    c_i^{γ,m} = ∫₀¹ B_i(t) T(2^γ m + 2^γ t) dt,    (4.17)

where T() is the texture function. Consider the case where we wish to evaluate the NIL map for a transformed representation of this texture. In the following discussion we use T_F() to indicate a transfer function that maps a scalar-valued texture T() into another scalar-valued texture T_F(T()). The NIL map entries are then computed using a modified version of Equation 4.17:

    ĉ_i^{γ,m} = ∫₀¹ B_i(t) T_F(T(2^γ m + 2^γ t)) dt.    (4.18)

Following the pattern of NIL maps we can select a set of basis functions that will be used to generate our transfer functions T_F().⁴ Using this notation the transfer function is defined by

    T_F(t) = Σ_{l=0}^{N−1} b_l^F B_l^F(t).    (4.19)

Substituting Equation 4.19 into Equation 4.18 we get

    ĉ_i^{γ,m} = ∫₀¹ B_i(t) Σ_{l=0}^{N−1} b_l^F B_l^F(T(2^γ m + 2^γ t)) dt.    (4.20)

Noticing that the b_l^F coefficients are independent of t:

    ĉ_i^{γ,m} = Σ_{l=0}^{N−1} b_l^F ∫₀¹ B_i(t) B_l^F(T(2^γ m + 2^γ t)) dt.    (4.21)

These integrals can be pre-computed since they are independent of the transfer function.⁴

⁴This does not need to be a basis set; it can be any function set.
The space spanned by these functions then defines the transfer functions available to the user.

The pre-computed integrals are

    D_{il}^{γ,m} = ∫₀¹ B_i(t) B_l^F(T(2^γ m + 2^γ t)) dt.    (4.22)

Remembering the original NIL map equation for a set S of selected cells,

    I = S Σ_{p∈S} Σ_{i=0}^{M−1} b_i^p c_i^p,    (4.23)

this becomes

    I = S Σ_{p∈S} Σ_{i=0}^{M−1} Σ_{l=0}^{N−1} b_i^p b_l^F D_{il}^p    (4.24)

for the display using the transfer function T_F.

Laur and Hanrahan [Laur91] used a three-dimensional MIP map to speed up the display of volumetric data. Because they compute a MIP map for each of the red, green, blue, and alpha channels they have to recompute the MIP map each time the transfer functions are edited. Using the above technique with M = 1 and N = 3 would require the same amount of memory as the MIP map proposed by Laur and Hanrahan. This corresponds to the use of a one-dimensional basis to approximate the filter and a three-dimensional basis to model the transfer functions.

4-4.4 Procedural textures

NIL maps can be extended to allow the filtering of procedural textures. Consider a class of textures defined by some basis functions X₀(t), X₁(t), ..., X_{R−1}(t). A texture T is then defined by

    T(t) = Σ_{r=0}^{R−1} x_r X_r(t).    (4.25)

Recall that

    c_i = ∫₀¹ T(t) B_i(t) dt
        = ∫₀¹ Σ_{r=0}^{R−1} x_r X_r(t) B_i(t) dt
        = Σ_{r=0}^{R−1} x_r ∫₀¹ X_r(t) B_i(t) dt.

Defining

    X_{ri} = ∫₀¹ X_r(t) B_i(t) dt,

the integral I is approximated by

    I ≈ S Σ_p Σ_{i=0}^{M−1} b_i^p Σ_{r=0}^{R−1} x_r X_{ri}^{γ_p},  with  X_{ri}^γ = ∫₀¹ X_r(2^γ t) B_i(t) dt.    (4.26)

If the signal T is periodic, T(t + 1) = T(t), we need only store one NIL cell per positive level of the NIL map. If negative levels are required then these can be computed as needed.

4-4.5 Cosine textures

In order to show the application of NIL maps to a procedural texture we have chosen to implement a set of procedural textures defined by a truncated cosine series. These
These724-FILTERING TECHNIQUEStextures serve to illustrate the application of NIL maps to procedural textures and alsogive us a texture set that can be filtered by all four filtering techniques presented in thischapter.The texture TQu, v, w) is a periodic texture with unit period such that T(u + n, v +in, w + o) = T(u, v, w), with n, in, o being integers. Given a set of texture coefficielltsTk the texture at a point (u, v, w) is given byI—i J—1 K—iT(u, v, z) = S T cos(27riiL) cos(2irjv) cos(27rkw) + 0.5i=O j=O k=OThe scate factor S is defined by1= I—i J—1 K—i mL.i0 L.j=O L..,k=O 1jkthe addition of 0.5 is to ensure that the texture values lie in [0, 1]. If these texturecoefficients Tk are obtained by applying the cosine transform on a real data set this 0.5term in not required. An example of a texture generated in this manner is seen in plate4.2.4-4.6 Cosine textures with NIL mapsUsing these cosine textures with NIL maps results in a succinct analytical definition of theNIL maps. Consider the integral of a texture basis function cos(2irrt) with a polynomialbasis function for the filter B(t) = a3t + a2t + a1t + a0. The integral of the product of734-FILTERING TECHNIQUESFTexture defined by cosine series with I 4, J = 4, and K = 4Plate 4.2these two functions Xrj is thenj cos(2nt) [a3t + a2t +ait + ao] dt1= a3j t3cos(2Krt)dt+aft2cos(2Krt)dt+alf tcos(2Krt)dt+aof cos(27rrt)dt— (l2K2rt— 6) cos(27rrt) (4K2rt3— 6t) sin(2irrt) 1— a3164r + 83r2t cos(2Krt) (4K2rt— 2) sin(2vrrt) 1+a24ir2r + 8K3rcos(2Krt) tsin(2Krt) 1a12 24irr 2Trrsin(2irrt) 1+ao2irr3 1=a322 +a2222 (4.27)744—FILTERING TECHNIQUESIn our implementation of NIL maps we use Lagrange polynomials as the basis of ourfilters. We chose these polynomials because the points they interpolate are all in theinterval [0,1].B0(t) jt3+9t2_t+1B1(1) = — 4512 + 91B2(1) = + 1812 —B3(i) = — 9t2 + t. 
(4.28)

With this set of basis functions the NIL map values are:

    X_{r0} = 9/(8π²r²)
    X_{r1} = −9/(8π²r²)
    X_{r2} = −9/(8π²r²)
    X_{r3} = 9/(8π²r²).

Incorporating the 2^γ factor into the above equations gives a definition of the positive levels of the NIL map:

    X_{r0}^γ = 9/(8π²(2^γ r)²)
    X_{r1}^γ = −9/(8π²(2^γ r)²)
    X_{r2}^γ = −9/(8π²(2^γ r)²)
    X_{r3}^γ = 9/(8π²(2^γ r)²).

Other basis functions can be used for NIL maps; in particular the Catmull-Rom cubic basis integrated with the cosine basis yields:

    X_{r0}^γ = 1/(8π²(2^γ r)²)
    X_{r1}^γ = −1/(8π²(2^γ r)²)
    X_{r2}^γ = −1/(8π²(2^γ r)²)
    X_{r3}^γ = 1/(8π²(2^γ r)²).

For a particular texture defined by {x₀, x₁, x₂, ..., x_{R−1}} we can compute the NIL map. The periodicity of this texture ensures that we only have to store a single NIL cell per level. For most applications we have found that a NIL map with depth γ_max = 10 is sufficient. If any filter covers more than 2¹⁰ texels then using the DC, or average, component of the texture is appropriate.

The negative NIL levels for these textures are also defined analytically. Define for convenience the following indefinite integrals:

    ξ³_{r,m}(t) = ∫ t³ cos(2π(r2^γ)(t + m)) dt
                = (12π²(r2^γ)²t² − 6) cos(2π(r2^γ)(t + m)) / (16π⁴(r2^γ)⁴)
                  + (4π²(r2^γ)²t³ − 6t) sin(2π(r2^γ)(t + m)) / (8π³(r2^γ)³)

    ξ²_{r,m}(t) = ∫ t² cos(2π(r2^γ)(t + m)) dt
                = 2t cos(2π(r2^γ)(t + m)) / (4π²(r2^γ)²)
                  + (4π²(r2^γ)²t² − 2) sin(2π(r2^γ)(t + m)) / (8π³(r2^γ)³)

    ξ¹_{r,m}(t) = ∫ t cos(2π(r2^γ)(t + m)) dt
                = cos(2π(r2^γ)(t + m)) / (4π²(r2^γ)²) + t sin(2π(r2^γ)(t + m)) / (2π(r2^γ))

    ξ⁰_{r,m}(t) = ∫ cos(2π(r2^γ)(t + m)) dt = sin(2π(r2^γ)(t + m)) / (2π(r2^γ)).

The negative levels of the NIL map are then defined by

    X_{ri}^{γ,m} = a₃ [ξ³_{r,m}(t)]₀¹ + a₂ [ξ²_{r,m}(t)]₀¹ + a₁ [ξ¹_{r,m}(t)]₀¹ + a₀ [ξ⁰_{r,m}(t)]₀¹.

This allows us to define arbitrarily deep NIL maps. Pre-computation of these levels incurs a heavy memory cost; but because they are procedurally defined, and the cost of evaluating these NIL cells is roughly comparable to the cost of evaluating the filter basis functions, it makes sense to evaluate these NIL cells as they are needed.
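The closed forms above are easy to check numerically. The sketch below is not the thesis code: the helper names are hypothetical and a plain trapezoidal rule stands in for the analytic integration; it compares X_ri = ∫₀¹ B_i(t) cos(2πrt) dt against the stated values ±9/(8π²r²) for the cubic Lagrange basis.

```python
import math

# Interpolation nodes of the cubic Lagrange basis on [0, 1].
NODES = [0.0, 1.0 / 3.0, 2.0 / 3.0, 1.0]

def B(i, t):
    """Cubic Lagrange basis function B_i, equal to 1 at NODES[i]."""
    prod = 1.0
    for j, tj in enumerate(NODES):
        if j != i:
            prod *= (t - tj) / (NODES[i] - tj)
    return prod

def X_analytic(r, i):
    """Closed-form X_ri: +9/(8 pi^2 r^2) for i = 0, 3 and
    -9/(8 pi^2 r^2) for i = 1, 2."""
    sign = 1.0 if i in (0, 3) else -1.0
    return sign * 9.0 / (8.0 * math.pi ** 2 * r * r)

def X_numeric(r, i, n=20000):
    """Trapezoidal estimate of int_0^1 B_i(t) cos(2 pi r t) dt."""
    f = lambda t: B(i, t) * math.cos(2.0 * math.pi * r * t)
    h = 1.0 / n
    total = 0.5 * (f(0.0) + f(1.0))
    for k in range(1, n):
        total += f(k * h)
    return total * h
```

The agreement holds for any integer frequency r, since the sin terms of the antiderivatives vanish at both endpoints and the cos terms cancel pairwise, leaving only the a₃ and a₂ contributions of Equation 4.27.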
If enough memory is available, these may be stored in case they are needed later on.

Using this definition for NIL maps we can define a general NIL map for a class of textures defined by a basis set {X_r}. Once a particular texture has been defined, the generality of these NIL maps is no longer needed: the c_i coefficients can then be evaluated from the X_{ri} coefficients.

4-4.7 Three dimensions

The extension of NIL maps to three dimensions is straightforward. The resulting NIL cells are c_{ijk}^{γ,m,n,o}, D_{ijkl}^{γ,m,n,o}, X_{r_u r_v r_w ijk}^γ, and the scale factor S:

    c_{ijk}^{γ,m,n,o} = ∫₀¹∫₀¹∫₀¹ B_i(x) B_j(y) B_k(z) T(2^γ(m + x), 2^γ(n + y), 2^γ(o + z)) dx dy dz    (4.29)

    D_{ijkl}^{γ,m,n,o} = ∫₀¹∫₀¹∫₀¹ B_i(x) B_j(y) B_k(z) B_l^F(T(2^γ(m + x), 2^γ(n + y), 2^γ(o + z))) dx dy dz    (4.30)

    X_{r_u r_v r_w ijk}^γ = ∫₀¹∫₀¹∫₀¹ X_{r_u}(2^γ u) X_{r_v}(2^γ v) X_{r_w}(2^γ w) B_i(u) B_j(v) B_k(w) du dv dw,    (4.31)

or

    X_{r_u r_v r_w ijk}^γ = ∫₀¹ X_{r_u}(2^γ u) B_i(u) du · ∫₀¹ X_{r_v}(2^γ v) B_j(v) dv · ∫₀¹ X_{r_w}(2^γ w) B_k(w) dw
                          = X_{r_u i}^γ X_{r_v j}^γ X_{r_w k}^γ.

The scaling variable S is

    S = K / ( Σ_p Σ_{i=0}^{M−1} Σ_{j=0}^{M−1} Σ_{k=0}^{M−1} b_{ijk}^p B_{ijk} )    (4.32)

with

    B_{ijk} = ∫₀¹∫₀¹∫₀¹ B_i(u) B_j(v) B_k(w) du dv dw.    (4.33)

4-4.8 Trigger points

As discussed previously, the placement of trigger points is a critical part of the NIL map technique. This is particularly true when we are considering issues of practical efficiency. In order to study the usefulness of NIL maps in both volume rendering and texture mapping we have chosen to use a fairly simple algorithm for laying down the trigger points. In a particular application where we have a more restricted set of filters that we wish to approximate, we can use the properties of these filters to guide the placement of the trigger points. This possibility was pointed out by Fournier and Fiume and was explored by Lansdale in his M.Sc. thesis [Lans91].

Trigger points for texture mapping

The direct evaluation filter that we presented earlier provides a volume in which to place the trigger points. The user controls the number of trigger points that are uniformly distributed throughout this volume.
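A minimal sketch of this uniform placement, together with the count-driven subdivision of Figure 4.19, might look as follows. The cell representation and function names are hypothetical; a cell is split into its eight children whenever it holds more than `tol` trigger points, down to a minimum cell size standing in for the γ_min cutoff.

```python
from itertools import product

def uniform_triggers(lo, hi, n):
    """Place n^3 trigger points uniformly through the box [lo, hi]."""
    step = [(h - l) / n for l, h in zip(lo, hi)]
    return [tuple(l + (k + 0.5) * s for l, s, k in zip(lo, step, idx))
            for idx in product(range(n), repeat=3)]

def subdivide(cell, points, tol, min_size):
    """Return the leaf cells of the hierarchy: any cell holding more
    than tol trigger points is split into its eight children."""
    (ox, oy, oz), size = cell
    inside = [p for p in points
              if ox <= p[0] < ox + size
              and oy <= p[1] < oy + size
              and oz <= p[2] < oz + size]
    if len(inside) <= tol or size <= min_size:
        return [cell]
    half = size / 2.0
    leaves = []
    for dx, dy, dz in product((0.0, half), repeat=3):
        child = ((ox + dx, oy + dy, oz + dz), half)
        leaves += subdivide(child, inside, tol, min_size)
    return leaves

# 4^3 trigger points with a tolerance of 8: the root splits once,
# leaving eight cells of eight points each.
points = uniform_triggers((0.0, 0.0, 0.0), (1.0, 1.0, 1.0), 4)
leaves = subdivide(((0.0, 0.0, 0.0), 1.0), points, 8, 1.0 / 64.0)
```

Lowering the tolerance drives the subdivision deeper, which mirrors the trade-off discussed next: more, smaller cells give a finer approximation of the filter at a higher evaluation cost.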
We have found that using a large number of trigger points (on the order of 64 = 4³) and setting a fairly large tolerance for subdivision (on the order of 4-8) produces a better hierarchy for approximating the filter than when we use a small number of trigger points and a low tolerance for subdivision. In the next chapter we illustrate the flexibility of NIL maps by showing how to model a filter that simulates motion blur by using NIL maps. This motion blur is implemented by modifying the trigger placement code and by defining the appropriate filter. The code for both the trigger placement and the filter function definition is presented in Appendix B.

Trigger points for volume rendering

The distribution of trigger points throughout the volume extruded from the pixel uses two user-defined variables, width and steps. The width parameter determines the number of trigger points to be placed per slice. The steps parameter determines the number of slices to be placed through a unit length of the volume. Thus the number of trigger points placed for a particular pixel is width² × steps × length, and the maximum number of trigger points placed is width² × steps times the maximum length of the volume.

Again we have found that using a larger number of trigger points and a higher value for the subdivision tolerance induces a better hierarchy with which to approximate the filter.

4-5 Wrapup

In this chapter we have presented four filtering techniques: clamping, direct evaluation, EWA, and NIL maps. Each of these techniques has its strengths and weaknesses. An evaluation of their characteristics is presented in the next chapter.

Chapter 5

Comparison and application of the techniques

    Si operatur, operatur.
    Bob Lewis

The four filter evaluation techniques presented in the previous chapter have applications to three-dimensional texture mapping. Each of these techniques has its strengths and weaknesses.
In order to use a particular technique we must be able to assess its usefulness. Because these techniques are quite different it would be overly simplistic to use a single evaluation criterion. In this chapter we present an evaluation scheme consisting of several criteria. It is our hope that the results will allow the best filter approximating technique to be chosen.

5-1 Three-dimensional texture filter comparison

Mitchell and Netravali [Mitc88] presented a study of a class of cubic filters defined by two parameters. Their evaluation criteria used visual characteristics (ringing, blurring, anisotropy) to measure the performance of this family of reconstruction filters. They mentioned that it is difficult to come up with objective measures for filters. Rather, they suggested that the properties of filters should be analyzed, documented, and made available so that the user can select a filter depending on the application. In some sense the same can be said about the evaluation of filter approximating techniques. In order to
How does the performance of thefilter depend on the number of voxels or texels that are covered by the filter?• Class of textures:This criterion studies the textures to which the technique is applicable. There arethree classes of three-dimensional textures that we wish to check the techniquesagainst.— General procedural:A procedural texture with no restrictions on the functions from which it is825—COMPARISON AND APPLICATION OF THE TECHNIQUESconstructed.— Procedural defined by a basis:The basis may be further restricted to a frequency basis. Any restrictionsimposed on this basis by the technique are also explored.— Discrete:A texture defined by a volumetric sample set. These textures may be empirically or procedurally defined.• Fidelity measures:The image compression literature uses several fidelity measures for image data.The two main measures used are the mean square error (MSE), and the signal tonoise ratio (SNR). In an overview of the compression literature Jam [Jain8l] definesthese two terms. The MSE of two images {u,} and {u} of resolution N >< M isdefined as1 NMe5= NM — ZL,)2.The signal to noise ratio for an image whose pixel values range from 0 to 255 isthen defined in decibels by(255’2SNR = 10 log10 2‘ (db).emsA quick scan of recent literature in the field of image compression [Wu91, Mark9l,Xue9l, Pent9l, Tilt9l] confirms that these two error metrics are still being used.Slight variations of these measures can be found in [Prat78].835-COMPARISON AND APPLICATION OF THE TECHNIQUESPoint sampled rendering of the cosine texture used for evaluation purposes. This imageexhibits a large amount of aliasing in the upper portion.Plate 5.1We will attempt to evaluate the performance of the clamping, EWA, and NIL maptechniques using the direct evaluation filters as a comparison. Clamping will becompared with the direct evaluation of a box filter, EWA filters and NIL maps willbe compared with the direct evaluation of a Gaussian filter. 
The image that will beused for these comparisons is presented in Plate 5.1. The resolution of the imagewas chosen to be 100 by 100 because this increases the pixel size relative to thescene, thus increasing the aliasing artifacts.• Visual quality:Unfortunately the computational metrics used above are not directly related to ourperception of the images so we must also perform a subjective evaluation. We willperform this evaluation using the following criteria.845-COMPARISON AND APPLICATION OF THE TECHNIQUES— Directional artifacts:Does the technique introduce any directional artifacts?— Discontinuities:Does the technique introduce any discontinuities to the texture?• Cost of the technique:— Pre-processing cost: The cost of any required pre-processing.— Evaluation cost: The cost of evaluating one filter application.5-3 Clamping• Class of filters:The original clamping technique [Nort82] focused on finding a quick approximationto a box filter. They achieved this by requiring that the frequency components ofthe texture be known. Once the frequency components for a texture are known theclass of filters that can be approximated with this technique is quite large. The onlyrequirement placed on the filters to be approximated by such a clamping techniqueis that they have a well defined Fourier transform. The filtering quality is thencontrolled by the order of the polynomial that is used for the approximation of thetransformed filter.855-COMPARISON AND APPLICATION OF THE TECHNIQUESStep function used for clamping. Quadratic function used for clamping.Plate 5.2 Plate 5.3• Class of textures:Procedural textures for which the frequency distribution of the signal is known.This does not mean that we must know the exact Fourier distribution of the signal,but we must have a good idea of the frequency distribution of the signal and howthese frequency components are put together. This was the method used by Perlin[Per185] for anti-aliasing his turbulence function. 
He modeled turbulence by addingsuccessively higher frequency band-limited noise functions to the signal. By usingthe size of the projection of the pixel the computation could be stopped when itwas determined that the next band-limited noise function would cause aliasing.• Fidelity measures:The clamping methods we evaluated were compared to a direct evaluation of the865-COMPARISON AND APPLICATION OF THE TECHNIQUESbox filter. The box filter was computed using the direct evaluation technique withan error tolerance of 0.004• Visual quality:Two particular implementations of this technique were studied, the first a directextension of the original quadratic clamping technique as suggested by Norton etal. [Nort82], the second a step function as implemented by Perlin {Per185J. As canbe seen from the image in Plate 5.2 the use of a step function for clamping resultsin visible discontinuities in the image. Using the quadratic function (Plate 5.3)reduces the effect of the discontinuity, however, the discontinuities are still visible.• Cost of the technique:— Pre-processing cost:There is no pre-processing required if the frequency information of the textureis known. If this information is not available the cost of pre-processing is thecost of finding the frequency information. For a discrete texture of size N3 thiswould incur a cost of O(N3 log N) if a fast Fourier transform or some otherrelated transform is used.— Evaluation cost:The computation of a texture defined by its frequency spectrum requires theweighed sum of the appropriate basis functions. In the case of the cosine875-COMPARISON AND APPLICATION OF THE TECHNIQUEStextures the computation isI—i J—i K—iT(u,v,z) = S> TkX(u)X(v)Xk(w)i=O j=O k=OThe introduction of the above computation to include the clamping functionresults in1—i J—i K—iT(u,v,z) = Si=O jO k=Owhere C(i,j, k) is the clamping function. The additional cost incurred by thetechnique is then I x J x K invocations of the clamping function. 
Thus thetotal cost for evaluating the clamped texture isI x J x K(cost(X) + cost(clamp)).When the cost of evaluating the texture basis functions is higher than the costof evaluating the clamping function the cost incurred by clamping is low. Forcosine textures the ratio of the cost of the cosine function to the cost of theclamping function ranges between 26:1 and 6.6:1’. If the clamping function isknown to be zero above a certain frequency then this can be used to ensurethat no unnecessary basis functions are evaluated. In most situations thisremoval of higher frequency components results in a decrease in computation‘On a Silicon Graphics 300 series workstation the ratio is 20:i. On a SUN SPARCstation SLC the ratio is26:1. On a IBM RS/6000 560 the ratio is 6.6:1.885-COMPARISON AND APPLICATION OF THE TECHNIQUEScost that outweighs the cost of evaluating the clamping function. In Table5.1 we present the timings for evaluating the test image using point sampling,step clamping and quadratic clamping. The third and fourth columns of thistable contain the MSE andSNR of the image relative to the image that resultsfrom a direct evaluation of the box filter.Even though we could argue that the quadratic clamping function produces a betterpicture, its MSE is higher than that of the image produced by the step clamping function.This is not surprising because the quadratic filter starts clamping down on a frequencycomponent long before it needs to be filtered out. This excessive filtering causes the risein the MSE.If the textures are defined in terms of their Fourier spectra and the filtering requiredis simple, anti-aliasing this technique should be considered. The discontinuities presentedin this example illustrate the problems of choosing an inadequate clamping function. 
Bytailoring a clamping function to a particular application it is possible to use this techniquequite effectively.[ Technique ] Time (min:sec) MSE SNRPoint sampled 0:11 115 27.5Step clamped 0:8 115 27.5Quadratic clamped 0:9 224 24.6Direct evaluation 13:29——MSE and SNR of clamping methodsTable 5.1895-COMPARISON AND APPLICATION OF THE TECHNIQUES5-4 Direct evaluation• Class of filters:Arbitrary filters defined over rectangular parallelepipeds2. This restriction is inplace because of the quadrature methods used. Because there is no restriction onthe size of these rectangular parallelepipeds it is possible to compute complex filtersby extending the volume of integration until it encloses all of the interesting partsof the filter. Even though this approach is costly it does provide us with a methodfor evaluating these filters with some degree of confidence. The images computedin this manner can then be used to test the accuracy of other filtering techniques.• Class of textures:The direct evaluation of filters using quadrature rules imposes no restriction on theclass of textures that can be filtered.• Fidelity measures:Because the images computed with the adaptive quadrature rule are being used asa measure of the goodness of the other techniques, we computed these images usinga tolerance of 0.004 Using this tolerance and Simpson’s quadrature rule wecomputed the image using the box filter and the Gaussian filter. In addition tothis, one image was computed using the Gaussian quadrature rule with a Gaussianfilter.2j is possible to use quadrature rules to evaluate integrals over non-rectangular volumes. Details of thiscan be found in [Burd8l]. 
The construction of these non-rectangular integration volumes might require thatspecialized code be developed for different filters.905-COMPARISON AND APPLICATION OF THE TECHNIQUESIn Table 5.2 we present a comprehensive set of data resulting from the applicationof the different filtering techniques to our test image.Technique Time (min:sec) MSE SNR (dB)1 Point sampled 0:11 115(4) 27.52 Step clamped 0:8 115(4) 27.53 Quadratic clamped 0:9 224 24.64 Direct Simpson’s (box) 13:29——5 Direct Gaussian (box) 3:51 2.18(4) 44.76 Direct Simpson’s (Gauss) 13:31 ——7 NIL, M=1,Tol=10 1:04 6416(6) 10.28 NIL, M=1,Tol=5 1:38 255(6) 24.09 NIL, M=1,Tol=2 1:12 165(6) 26.010 NIL, M=4,Tol=10 5:57 238 (6) 24.311 NIL, M=4,Tol=5 13:35 138 (6) 26.712 NIL, M=4,Tol2 17:42 117 (6) 27.513 EWA 3d (3) 1:33 196(6) 25.214 EWA 3d (5) 4:47 193(6) 25.315 EWA 3d (7) 8:38 193(6) 25.316 EWA 3d (11) 24:18 192(6) 25.3Comparison of run times for filtering techniques. The parenthesized numbers in thefourth column indicate which technique was used to compute the MSE and SNR.Table 5.2• Visual quality:By using an adaptive quadrature rule we can directly evaluate the effects of usingdifferent filters on a particular image. Plates 5.5 and 5.4 show the results of usinga box filter and a Gaussian filterfilter!Gaussian on our test image.915-COMPARISON AND APPLICATION OF THE TECHNIQUESDirect evaluation of Box filter Direct evaluation of Gaussian filterPlate 5.4 Plate 5.5In Plate 5.6 we present the results of three classes of filters being applied to amarble block. The first column shows the results obtained with box filters alignedwith the surface of the object. In the second column Bartlett filters are applied tothe same object. The lower images were generated with increasingly wider filters.The last column shows the results of using a box filter aligned with the ray. 
In thiscolumn it is the depth of the filter that is being increased.• Cost of the technique:— Pre-processing cost:The technique itself does not require any pre-processing of the texture. If thequadrature rule chosen requires any pre-processing this will be the total preprocessing cost required. In both the case of Simpson’s adaptive quadrature925-COMPARISON AND APPLICATION OF THE TECHNIQUESand Gaussian quadrature no pre-processing is required.— Evaluation cost:The evaluation cost is dependent on the quadrature rule chosen.With Simpson’s adaptive quadrature rule used in all three dimensions theminimum number of texture evaluations is 125 (5 x 5 x 5). The number of actualevaluations performed depends on the fourth derivative of the product of thefilter with the texture. This dependence exists because Simpson’s adaptive rulestops when its estimate of the fourth derivative falls below a predeterminedtolerance.Gaussian quadrature provides the best approximation to the integral for a fixednumber of function evaluations3. In our implementation we used the cubicGaussian quadrature rule. In this case the number of texture evaluations is 27= (3 x 3 x 3). A direct comparison of the run times of the various algorithmsis presented in Table 5.2.This technique allows the accurate computation of arbitrary filters. By using anadaptive quadrature rule we ensure that the computation of the filter is performed to apre-determined tolerance. This expensive computation allows us to generate high qualityimages when expense is not a concern. In the study of other filtering techniques we havefound this technique useful as a means of comparison. It is also possible to use a fixed costquadrature rule for the evaluation of these filters. Because in some sense the Gaussian3Gaussian quadrature optimizes the evaluation of the integral by carefully choosing the position of the samplepoints. 
The Gaussian quadrature rule chosen here is the one which results from using the Legendre polynomials [Burd81].

Plate 5.6: Marble blocks, with box, Bartlett, and volume filters.

quadrature rule is optimal, it makes sense to use this super-sampling technique rather than other super-sampling techniques.

5-5 EWA filters

• Class of filters:
Discrete approximations to truncated two-dimensional and three-dimensional radially symmetric filters. Greene and Heckbert [Gree86] used the Gaussian filter in their presentation of the EWA technique. The extensions presented here approximate a two-dimensional Gaussian filter applied on the tangent plane of the object, and a three-dimensional Gaussian applied in a volume near the surface of the object.

• Class of textures:
All textures, whether procedural or discrete.

• Fidelity measures:
In Table 5.4 we present the cost and resulting quality of using the EWA filtering technique. In order to illustrate the cost of this technique we present the results of using this technique with a maximum of 3³, 5³, 7³, and 11³ sample points. As can be seen from the results in Table 5.4, increasing the sampling of this technique does not significantly improve its performance.

[Table 5.4, giving the EWA timings and error measures, is rotated in the source scan and is not recoverable here.]

• Visual quality:
In areas of the image where the sampling rate is insufficient there is still some
The cost of the pre-processing is n evaluations of the filter and n memory locations for the results, where n is the sizeof the linear array being used to store the pre-computed filter values.Technique Pre-processing time (min:sec)NIL M=4 0:6.8NIL M=1 0:0.1NIL map pre-processing times for sample cosine texture (4 x 4 >< 4).Table 5.5975-COMPARISON AND APPLICATION OF THE TECHNIQUES— Evaluation cost:The cost of evaluating an EWA sample is Q(n3), where n is the number ofsamples taken along one of the axes of the ellipsoid.Given a box which encloses the ellipsoid centered at Po we compute the numberof samples, n3, which must be taken in this box. The result is then computedas a weighted sum of the texture over these n3 points. Thus the cost of asingle evaluation of an EWA filter is I(n3).The use of a pre-computed filter for the evaluation of these radially symmetric functions removes the cost of approximating the filters at each step. This approximation ofthe filter performs well when the volume over which the filter is being evaluated is small.When the filter volume is large the number of samples required makes this approximationvery costly. This high cost requires us to limit the number of samples taken per filter.The simplicity of the technique makes it attractive, however, this simplicity comes at theprice of restricting the class of filters that can be used.5-6 NIL maps• Class of filters:Arbitrary filters. Approximation quality depends both on the order of the approximating patches and the number of patches chosen to represent the filter.• Class of textures:Procedural textures defined by a set of basis functions or discrete textures.98z S ct I. 0 z S cI 0z S I. 0 z S 0C cJD 0 z act I’ CDcccc5-COMPARISON AND APPLICATION OF THE TECHNIQUES• Fidelity measures:The results of approximating a Gaussian filter with NIL maps are presented inTable 5.6. In order to give an overview of the performance of the performance ofNIL maps we present several images. 
The first three were computed using constantpatch NIL maps (M=1), and the remaining three images were computed with tn-cubic patches (M=4). The number of trigger points for all of the images is constant(53). The number of trigger points allowed per NIL cell is shown in column 2.[ Technique Tolerance Time (min:sec) MSE SNR (dB)Point sampled— 0:15.2NIL, M=1 10 1:04 6416(4) 10.2NIL, M=1 5 1:38 255(4) 24.0NIL, M=1 2 1:12 165(4) 26.0NIL, M=4 10 5:57 238 (6) 24.3NIL, M=4 5 13:35 138 (6) 26.7NIL, M=4 2 17:42 117 (6) 27.5Direct Simpson’s (Gauss)— 13:31——NIL times and comparisons.Table 5.6• Visual quality:A number of anomalies were seen in the images computed using NIL maps. Manyof these are attributable to an inadequate sampling of the filter function. Whenmany of the control point weights are small the scaling value is also small. Smallvariations in this value can cause large variations in the computed value becausethe scaling factor is the denominator. An extreme example of this effect can be1005-COMPARISON AND APPLICATION OF THE TECHNIQUESseen in Plate 5.9, where the scaling factor evaluated to zero in a few spots (thewhite pixels). By forcing the subdivision to proceed further these gross artifactsare removed. This is shown in Plate 5.11.When the one-dimensional, or constant, NIL maps are used a blockiness is introduced. This is not completely unexpected because we are approximating the filterwith constant patches.Technique Cost:— Pre-processing cost:For textures defined by a set of basis functions the storage cost for each positivelevel is M3R . 
For a particular texture this can be reduced to M³ by pre-computing the C_ijk terms.

For a discrete texture with resolution x = y = z = 2^γmax the storage cost per positive level γ of the NIL map is M³(2^(γmax−γ))³. Thus the total cost for storing the positive levels of the NIL map is M³(8^γmax − 1)/7, which is less than M³x³/7.

— Evaluation cost:
In order to approximate a filter with NIL maps two steps must be followed. The first generates the hierarchy and the second evaluates the approximation to the filter by computing the weights for the control points of the patches.

The number of patches which can be generated for an approximation depends on the number of trigger points N, the tolerance tol, and the minimum depth γmin. In fact the cost of generating the hierarchy is O(N/tol), and the size of the hierarchy is also O(N/tol). The proof of this is a simple extension of the proof presented by Fournier and Fiume [Four88a] in their original NIL map paper.

The smallest number of patches for an approximating hierarchy will be generated when tol > N and all of the trigger points lie in one NIL cell. The largest number of patches which can be produced occurs when the subdivision starts at γmax and proceeds to the lowest level γmin. In order for this to occur we must have tol + 1 trigger points clustered together in such a way that they force a subdivision at each level. Because each subdivision caused by this cluster of trigger points removes one patch and adds eight patches, the number of patches generated by this cluster is at most 7(γmax − γmin). Notice that no other set of tol + 1 trigger points can add as many patches to the approximating hierarchy, because the first subdivision is already attributed to the first tol + 1 trigger points. We can thus use 7(γmax − γmin) as a coarse upper bound for the number of patches which each tol + 1 trigger points will generate. This results in an upper bound of

7(γmax − γmin) N/(tol + 1)

patches for the approximating hierarchy.
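As a sanity check on the counting argument above, the following toy sketch (illustrative only; the function name and level numbers are assumptions, not thesis code) counts the patches one trigger cluster can add when it forces a subdivision at every level:

```python
def max_patches_from_cluster(gamma_max, gamma_min):
    """Count the patches added when one trigger cluster forces a
    subdivision at every level from gamma_max down to gamma_min:
    each subdivision removes 1 patch and adds 8, a net gain of 7."""
    patches = 1                      # start with a single NIL cell
    for _ in range(gamma_max - gamma_min):
        patches += 8 - 1             # one cell replaced by its 8 children
    return patches - 1               # patches *added* beyond the original
```

The result matches the 7(γmax − γmin) bound used in the text: each of the (γmax − γmin) subdivisions contributes a net seven patches.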
Thus the number of patches approximating a filter is O(N/tol). The cost of constructing this hierarchy is also O(N/tol). This follows because each patch produced in the above process is only generated once.

The approximation of the filter is then computed using the following sum:

I = Σ_patches Σ_{i=0}^{M−1} Σ_{j=0}^{M−1} Σ_{k=0}^{M−1} C_ijk W_ijk,

where, for each patch, the C_ijk are the pre-computed NIL map control points and the W_ijk are the corresponding filter weights. The cost of evaluating the filter is then O((N/tol) M³).

Thus we see that the cost of evaluating a single filter with NIL maps is not dependent on the filter's size, position, or shape, but rather is dependent on tolerance parameters set by the user.

One could argue that the use of NIL maps for the approximation of a Gaussian filter is somewhat excessive. A better example of the power of NIL maps is the case of motion blur filters.

First let us assume that a Gaussian filter is being used as the filter over the area on the viewing plane which maps onto a pixel. If the texture is moving across the screen in a linear fashion then the required filter is a Gaussian extruded along a line. If the texture is on a plane which is rotating about the origin then the filter can be approximated by a Gaussian stretched along the corresponding arc. In order to model this filter with NIL maps we lay down the trigger points near this arc. The hierarchy is found using these trigger points. The weights for the control points are taken from the rotated filter. The implementation of this filter required that less than 50 lines of C code be written. Appendix B shows the code written for this example.

In Plate 5.13 we present a point-sampled view of a textured plane. If this texture is rotated about the origin and the exposure time of a camera is large enough motion blur will occur.
This motion blur occurs because the texture moves relative to the viewing plane. Motion blur can be modeled by spreading a filter over the area of the texture which passed in front of a pixel area. Using NIL maps we were able to quickly model and implement such a motion blur filter. In Plates 5.14, 5.15, and 5.16 we see the plane displayed with motion blur filters. The rotation used in these Plates is π/24, π/12, and π/6 respectively.

Texture for motion blur example
Plate 5.13

Motion blur caused by a rotation of π/24
Plate 5.14

Motion blur caused by a rotation of π/12
Plate 5.15

Motion blur caused by a rotation of π/6
Plate 5.16

5-7 Wrapup

The four filtering techniques compared here allow the evaluation of three-dimensional filters. Each of these techniques has its strengths and weaknesses. In Tables 5.7 - 5.11 we present a summary of our evaluation.

Clamping provides the lowest cost method for the removal of aliasing frequencies from an image. The artifacts which this technique can introduce and the requirement that the texture be expressed in terms of its frequency spectrum make this technique somewhat limited in application. EWA filters allow the evaluation of radially symmetric filters without the overhead of computing an expensive filter. When this technique is used with expensive textures the high cost of evaluating large filters can force the user to place an upper bound on the number of samples taken. If this is the case, EWA filters will not perform well when the filters grow large. By using a pyramidal data structure NIL maps provide a constant cost (per pixel) filter evaluation technique. The quality of the approximation is controlled by the order of the approximating basis and the number of patches generated to approximate the filter. Even though the per-pixel cost of NIL maps is fixed it is still quite high. This makes NIL maps impractical for the
This makes NIL maps impractical for the1055-COMPARISON AND APPLICATION OF THE TECHNIQUESTechnique [ Filter class 1Clamping Polynomial approximation in Fourier space.Direct Arbitrary filters over rectangular volumes.EWA-2 fixed Radially symmetricEWA-2 adaptive filters, centered in ellipse orEWA-3 fixed ellipsoid.EWA-3 adaptiveNIL Maps Arbitrary filters approximatedby patches.Filters approximatedTable 5.7evaluation of simple filters such as elliptical Gaussians. However, the ease with whichnew filters can be developed makes NIL maps a powerful tool in the study of filtersfor three-dimensional texture mapping. As expected the direct evaluation of the filtersusing adaptive quadrature rules yield the best results. The evaluation of odd shapedfilters (such as motion blur) may required the evaluation of the integrals to be overlarge volumes or over irregular volumes. Computing these integrals is expensive, but thecontrol over the quality afforded by this technique may make it the only option in someapplications.1065-COMPARISON AND APPLICATION OF THE TECHNIQUESTechnique Texture classClamping Procedural with frequency information.Discrete with frequency information.Direct Procedural and discrete.EWA-2 fixed Procedural and discrete.EWA-2 adaptiveEWA-3 fixedEWA-3 adaptiveNIL Maps Procedural defined by a set of basis functions.Texture class allowedTable 5.8Technique Pre-processing costClamping Negligible if frequency known, otherwisecost of finding frequency information.Direct None.EWA Evaluation of n samples of one-dimensional filter.Storage is n floating point numbers.(This is why radial symmetry required.)NIL Maps O(M3x1og2(x))x: Resolution of data setPre-processing costTable 5.91075-COMPARISON AND APPLICATION OF THE TECHNIQUESTechnique Evaluation costClamping FixedDirect Fixed FixedDirect adaptive o(F)EWA-3 0(n3) (n x n x n) samples in boxNIL Maps Storage 0Storage 0 r1M3(M: Order of approximating patches.)(Nj:Number of trigger points.)(tol: Tolerance.)Evaluation costTable 
5.10

Technique    Visual evaluation
Clamping     Abrupt changes possible
Direct       Depends on filter used
EWA          Aliasing still possible when sampling restricted.
NIL Maps     Truncated filter can lead to normalization problems
             (scale factor goes to zero).
             For M=1, discrete blocks aligned with axis.

Visual evaluation
Table 5.11

Chapter 6
Filters for volume rendering

The goal of volume rendering is to display objects and structures that are not normally visible. Because of this it is impossible to apply a reality measure to the resulting images. This means we are free to choose an arbitrary display model, such as a medium that absorbs light, or a medium that absorbs and emits light. Once this display model is chosen we must ensure that we display the data set accurately according to the model chosen. By choosing new display models and display methods new tools can be developed to display volumetric data.[1]

As we have shown in Chapter 3, the evaluation of these volumetric filters is quite costly. Not only is this evaluation costly, it may also be the case that the implementation of a filter requires a large amount of specialized code to be written. In this chapter we present three examples to show that by using NIL maps we were able to prototype volume rendering filters. The first set of examples shows how NIL maps can be used to approximate traditional volume rendering techniques. The second example shows how slices of the data can be extracted using a Gaussian filter approximated by NIL maps. The third example shows how NIL maps were used to find a technique for highlighting the

[1] The relationship between the display model and the display technique for volume rendering is similar to the relationship that exists between illumination models and shading techniques. The illumination model is used to model the interaction of light with a surface element and the shading technique is used to evaluate the illumination model.
Usually the shading technique introduces a simplification to the illumination model. It is in this sense that the display technique in volume rendering applications is used to evaluate the display model.

wind-passage in our example MRI data set. This example uses a modification of a current volume rendering technique [Sabe88] to select a sample position. Once the point of interest is found a filter is evaluated in its neighbourhood. The value resulting from the application of this filter is used as the pixel intensity. In particular, if a surface-detection filter is used, the surface of the wind-passage is more clearly displayed.

6-0.1 Data set for examples

The examples in this chapter all use a 128 x 128 x 21 sub-sampled version of a 256 x 256 x 21, 8-bit-per-voxel MRI (Magnetic Resonance Imaging) data set. This smaller subset of the data was used due to the high memory overhead of the NIL map filtering technique.

This data set was obtained as part of a sleep apnea study at the UBC Faculty of Dentistry. Hannam[2] defines sleep apnea as follows:

Sleep apnea is a disorder caused by an upper airway obstruction during sleep. It is not evident in the waking state. The soft tissues of the upper airway change shape and result in a constriction of the airway. Diagnosis of the disorder is currently made by a combination of bio-medical imaging and polysomnographic readings made in a sleep disorder clinic. Treatment includes surgical correction of soft tissues, the use of intra-oral appliances to control tongue posture during sleep, and positive airway maintenance with continuous airflow.

Image resolution

The images in Plates 6.1, 6.2, and 6.15 were computed at a resolution of 640 x 480.
The remaining plates were computed at a resolution of 320 x 240.

[2] Dr. Alan Hannam, Professor of Oral Biology, Faculty of Dentistry, University of British Columbia, personal communication.

6-1 Conventional display

Previously we showed (in Chapter 3) that the display of volumetric data under a particular display model can be formulated as

I = ∫_{u0}^{u1} ∫_{v0}^{v1} I du dv = ∫_{u0}^{u1} ∫_{v0}^{v1} ∫_{ta}^{tb} e^(−τ(t − ta)) ρ(x(t), y(t), z(t)) dt du dv,   (6.1)

where τ is a user parameter that controls the strength of the scattering effect. This equation models a medium where the scattering effect is only dependent on the optical depth. As we noted this is the evaluation of the application of a filter to the volumetric data. In this case the filter is defined as the product of an exponential decay filter (e^(−τ(t − t0))) and a box filter applied over the pixel area on the viewing screen. By rewriting this equation as

I = ∫_{u0}^{u1} ∫_{v0}^{v1} ∫_{ta}^{tb} F(u, v) e^(−τ(t − ta)) ρ(x(u, v, t), y(u, v, t), z(u, v, t)) dt du dv,

we see how this three-dimensional filter can be defined as the product of a two-dimensional filter and a one-dimensional filter.

In the following examples we use NIL maps to display this data set using a Gaussian screen filter and an exponential decay filter along the line of sight. In Plates 6.4, 6.5, and 6.6 we use the transfer functions illustrated in Figure 6.23.

In Plates 6.1 and 6.2 we see the image displayed using Sabella's [Sabe88] method using a super-sampling rate of 16 rays per pixel and 256 sample points per ray. The 320 x 240 versions of these images required 15 minutes of CPU time.

Transfer functions used in NIL map examples. These transfer functions are defined in terms of Lagrange polynomials.
Figure 6.23

Side view of the data set used in examples. This 128 x 128 x 21 8-bit MRI data set is used throughout the chapter.
This image was computed using Sabella's [Sabe88] technique. The value of the τ parameter in this image is 3. The red channel contains the computed intensity, and the green channel contains the maximum value encountered along the ray.
Plate 6.1

Front view of data set used in examples.
Plate 6.2

In Plate 6.3 we present the display of the data from the same point of view using a constant patch approximation. The tolerance for subdivision in these images was set to 2 and the number of trigger points used was 4, 8, 16, and 32. This means that the level of approximation was 2 x 2 x 2, 4 x 4 x 4, 8 x 8 x 8, and 16 x 16 x 16, respectively. Plate 6.4 presents the display using a linear patch approximation. These two sets of images (Plates 6.3 and 6.4) illustrate the results which can be generated using a very coarse approximation to the filters with NIL maps. The high cost of evaluating these images discouraged further investigation in this direction. The techniques which directly evaluate the display filters using ray-marching are much better suited to the task than NIL maps are.

Constant patch approximation using tolerance 2, and trigger 4, 8, 16, 32. The images are ordered left to right and top to bottom.
Plate 6.3

Linear patch approximation using tolerance = 2, and trigger = 4, 8, 16, 32. The images are ordered left to right and top to bottom.
Plate 6.4

              Trigger points
Patch order   2     4     8     16
0             2     3     5     10
1             4     7     13    32
2             13    20    40    —

NIL map volume rendering timings (min).
Table 6.1

In Plate 6.5 we show the data set displayed using a quadratic patch approximation to the filter. Notice that for a 2 x 2 x 2 representation of the data there is quite a bit of detail visible.
In contrast lowering the sampling rate on the direct evaluation techniquewould yield poor results. The display of the data using 2, 4, 8, and 16 sample pointswith direct evaluation is presented in Plate 6.7. Even though the image in Plate 6.5provides a good abstract display of the data for a 2 x 2 x 2 data set. It is hard to seewhere such a simple display of a volumetric data set would prove useful. In table 6.1 wepresent the time required to produce the NIL map renderings. By increasing the numberof approximating patches a fairly good display (Plate 6.6) of this data set was generated.As expected NIL maps do not perform particularly well in this application. The high1166—FILTERS FOR VOLUME RENDER1NQuadratic patch approximation using the 2 x 2 x 2 level of the NIL mapPlate 6.5Linear patch approximation of general filter for 128 x 128 x 21 data set. The number oftrigger points is 512. The tolerance is 3.Plate 6.61176-FILTERS FOR VOLTJME RENDERINGoverhead cost associated with finding the approximation to the filter is the main cause ofthis. If a smaller representation of the data is required then the use of the top levels ofthe NIL map will provide the required abstraction. This property of NIL maps is moreuseful in texture mapping applications than it is in volume rendering applications.6-2 Slicing the dataOften a slice of the data is required. This slice is a two-dimensional sample set takenfrom the original data set. The position and orientation of these slices is arbitrary. Byusing NIL maps we show how a filtered slice of the data can be constructed. The slice isspecified by specifying a distance from the eye. For each pixel a Gaussian reconstructionfilter is placed at the pre-defined point along the ray that goes through the centre of theDirect rendering with 2,4,8,16 samples per ray.Plate 6.71186-FILTERS FOR VOLUME RENDERINGpixel. The resulting image is the slice. 
By varying the width of the filter we can obtain slices that represent different widths of the data. In Plates 6.8 and 6.9 we illustrate slices obtained with a slice width of 0.1 and 0.01 respectively.

Another possible application is the extraction of the slices as required in the Fourier domain rendering of the data [Tots93, Levo92, Malz93]. In this approach a slice of the data must be extracted from the Fourier representation of the volume. The possibility of using NIL maps for this slice extraction is currently being investigated. The uniform size of the reconstruction filter seems to indicate that the pyramidal aspect of NIL maps will not be as important as when variable size filters are being approximated.

Thick slice of the data
Plate 6.8

Thin slice of the data
Plate 6.9

6-3 Maximum revisited

The initial display of the data with NIL maps did not really display the wind-passage as well as expected. Using the slice method we could extract a variety of slices to highlight the constriction in the wind-passage. In this section we show how the combination of a simple search technique with a surface-detection filter can be used to highlight the wind-passage.

One of the simplest tools for the display of volumetric data is to compute the maximum along a line of sight and use this as the intensity of the resulting pixel [Sabe88]. Unfortunately this technique does not help us in the task of displaying the wind-passage for the sleep apnea study, as can be seen in Plates 6.1 and 6.2. We can use a variation of this technique to position a filter that highlights surfaces in the volumetric data. The density near the wind-passage lies roughly between 0.14 and 0.18.[3]

Search and display of volumetric data. The search density (ρ0) is 0.14.
The outline of the wind-passage is faintly visible.
Plate 6.10

Given a user defined search density we choose the position of the filter by searching along the ray for the point with the density closest to this search density. If we use these densities as the intensity values for the pixels[4] an image with almost uniform intensity is produced. If one looks at the image carefully the outline of the wind-passage is faintly visible. By positioning a difference of Gaussians (DOG) filter at this point we were able to clearly highlight the wind-passage. In Plates 6.11-6.14 we illustrate the results obtained using this combination of techniques.

The wind-passage outline is quite easy to find in the images produced with a search density of 0.14. In these Plates (6.11 and 6.12) there appears to be a complete constriction of the wind-passage. Fortunately for the patient this does not represent his/her wind-passage. By ranging the search density from 0.14 to 0.19 we were able to get a good idea of the shape of the wind-passage. This process of finding the correct search density can be interactively controlled. In Plates 6.13 and 6.14 we see a front and a side view of the data set corresponding to a search density of 0.18. If we compare this to the image in Plate 6.15[5] we see that we have a close match in the shape of the wind-passage. The irregularities in these images (Plates 6.11-6.14) are due to the simple search method and not to the filter.

[3] The original data is an 8 bit unsigned data set. In the current NIL map implementation this data is stored in a floating point representation with the density in the range [0,1].
[4] Either the density values or a function of the density values can be used. In either case the image contains little useful information.
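The search step of this hybrid technique can be sketched as follows (a minimal illustration; the ray parameterization, the step count, and the DOG widths are assumptions rather than the thesis implementation):

```python
import math

def search_position(density, t_a, t_b, rho0, steps=256):
    """March along the ray and return the parameter whose density is
    closest to the search density rho0; the display filter is then
    centred at this point."""
    best_t, best_err = t_a, float("inf")
    for s in range(steps):
        t = t_a + (s + 0.5) * (t_b - t_a) / steps
        err = abs(density(t) - rho0)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def dog_weight(r, s1=1.0, s2=1.6):
    """Difference-of-Gaussians profile: a centre Gaussian of width s1
    minus a wider surround of width s2; summing density samples with
    these weights responds strongly near a density transition."""
    return (math.exp(-0.5 * (r / s1) ** 2) / s1
            - math.exp(-0.5 * (r / s2) ** 2) / s2)
```

Centring the DOG at the returned position and applying it to the neighbourhood gives a large response near a surface and a small one in homogeneous regions, which is how the wind-passage boundary is highlighted.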
When our simple search method finds a location which is not near the wind-passage[6] the application of a DOG filter will produce a low response.

The images presented in this section are the original images that were produced during the initial implementation and experimentation step. Because the intention of this chapter is to illustrate how NIL maps can be used in the investigation of filters we did not spend any additional time trying to improve the pictures.

It is interesting to note that the compact shape of the filters allowed a much faster approximation of the filter to be used. Typically nine NIL cells were used to approximate each filter. This means that the computation of each of these images required about five minutes.

The examples in this chapter illustrate how NIL maps can be used as a tool for the investigation of volume rendering filters. The three examples presented in this chapter were implemented by altering the trigger point placement code and the filter-weight evaluation code. In particular, the implementation of the last technique was done in the course of an afternoon.

[5] This image was obtained using a Marching Cubes implementation [Lore87]. The iso-surface is defined by thresholding the data at a density of 0.18.
[6] Or the ear passages.

Side view of data set. Display of the data using the sample and filter display. The search density (ρ0) is 0.14, and the display filter is a difference of Gaussians (DOG) filter.
Plate 6.11

6-4 Comments on NIL maps for volume rendering

NIL maps have allowed us to implement quickly and experiment with a number of filters. The three examples presented here highlight the main disadvantages and advantages of NIL maps in the context of volume rendering. The cost of using NIL maps as a conventional volume rendering technique is high. This high cost is primarily due to the cost of evaluating the approximating hierarchy.
In the current implementation a major portion of the execution time (approximately 87%) is spent finding the approximating hierarchy and the weights of the control points. In Table 6.2 we present a typical set of statistics gathered by standard UNIX profiling tools.[7]

Front view of data set. The search density (ρ0) is 0.14.
Plate 6.12

Side view of data set. The search density (ρ0) is 0.18.
Plate 6.13

Front view of data set. The search density (ρ0) is 0.18.
Plate 6.14

Iso-surface generated using Marching Cubes technique. The threshold used to generate this was 0.18.
Plate 6.15

Pre-processing   Subdivision   Control point   Placement of     Patch integration
                               evaluation      trigger points
—                64.73         23.22           2.11             1.59

Percentage of time spent by the different NIL map components
Table 6.2

This problem is partially addressed if we store the filter-approximating hierarchy for each pixel. If a new display of the data set is required and the current hierarchies are still valid this will significantly reduce the cost of the display. In particular this is true when transfer functions are being altered.

If the view of a data set is changed then we have no choice but to recompute the hierarchy for each pixel. However, there are situations where this hierarchy need not be completely recomputed. These include transfer function manipulation, tolerance increase/decrease, and slight changes in viewing position. When either the number of trigger points or the tolerance that sets the subdivision is altered, the hierarchy that approximates a filter can be modified by further subdividing it or by collapsing NIL cells into higher representations. In the case of a slight change in the point of view the old hierarchies may be examined in order to determine their validity.

There are a number of situations where we may wish to extract a slice from the data. NIL maps can be used to approximate the interpolation filters.
This allows higher quality filters such as Gaussian filters to be used. The hierarchical aspect of NIL maps does not seem to be so important for this application.

[7] Pixie and prof on a Silicon Graphics workstation.

The main advantage of NIL maps is seen in the last example presented above. The ability to rapidly implement filters encourages research into alternate filters for volume rendering. Once an interesting filter is found it can be implemented in a more efficient manner. We believe this to be one of the more important contributions of NIL maps for volume rendering.

There is one small remaining problem with the current implementation of NIL maps. If we know that the filter can adequately be approximated using a linear approximation, but we have computed the NIL map using a higher order approximation, we must either re-compute the NIL map using the appropriate basis or use a higher order polynomial to approximate the filter. Both of these alternatives can be avoided if we choose a polynomial basis set {B_i(t)} with the property that the degree of the polynomial B_i(t) is i. Two examples of such a basis are the power basis

B0(t) = 1
B1(t) = t
B2(t) = t²
B3(t) = t³

and the Chebyshev polynomials

B0(t) = 1
B1(t) = t
B2(t) = 2t² − 1
B3(t) = 4t³ − 3t.

When a basis with the above property is chosen we can evaluate a lower order approximation without having to compute the M³ control points. If such a basis is chosen for the NIL map representation we must perform a control point transformation between the approximating basis functions and the internal basis functions.

6-5 Wrapup

The display of volumetric data using filters is and will continue to be an important tool for the study of these data-sets. By developing new filters as display tools we are able to provide more sophisticated tools for this study. In order to develop these display filters we need a tool with which filters can be quickly developed and tested. NIL maps is such a tool.
The use of NIL maps for the evaluation of these filters allows preliminary study into the use of filters for volume rendering without requiring large amounts of 'specialized' filter code to be written. Considered in this way the use of NIL maps is a middle step towards the development of more complex filters for the display of volumetric data. The flexibility of the NIL maps technique comes with a high cost. This suggests that filters found using NIL maps as an exploratory tool need to be implemented directly if they are to be used in a production system. The use of NIL maps allows us to delay this specialized code development step until we are sure of the usefulness of a particular filter.

Chapter 7
Conclusions

The end

This dissertation studied the display of volumetric data in computer graphics. In both texture mapping and volume rendering this display can benefit from the application of three-dimensional filters. These filters range from simple anti-aliasing filters to more complex filters designed to produce a particular effect. The evaluation of these filters requires the computation of an expensive triple integral. This high cost motivated the search for approximating techniques for filter evaluation.

Four filter evaluation techniques were presented: direct evaluation, NIL maps, EWA filters, and clamping. Direct evaluation uses numerical quadrature rules to evaluate the integrals. NIL maps and EWA filters are extensions of their two-dimensional counterparts proposed by Fournier and Fiume [Four88a] and Greene and Heckbert [Gree86] respectively. The use of clamping [Nort82] for anti-aliasing three-dimensional textures was proposed by Perlin [Perl85].

7-0.1 Three-dimensional texture map filtering

The implementation and application of these four filtering techniques allowed us to compare their relative performance for texture mapping. The evaluation criteria included a variety of objective and subjective items.
Based on this evaluation it is difficult to present any of these techniques as the 'best' one. Each of these techniques has clear advantages and disadvantages. One possible ranking is based on the generality of the techniques.

Direct evaluation places no constraints on the class of textures that can be used. The class of filters that can be evaluated using direct evaluation is limited only by the quadrature rule chosen. In most cases this means that the filter is evaluated over a rectangular parallelepiped. Even though this approach is quite costly, it does provide us with the ability to evaluate these filters within a pre-determined tolerance.

The development and study of new filters with NIL maps is easily done. This flexibility allows the quick prototyping of filters for both texture mapping and volume rendering. Once interesting filters are developed it is probably wise to implement them using more efficient methods. It is also possible to use a different basis set for approximating the filters, as was done in two dimensions for Gaussian filters by Gotsman [Gots93].

EWA filters are restricted to the evaluation of radially symmetric filters over ellipsoidal volumes. This restriction allows the use of a pre-computed one-dimensional sample of the filter. This technique is useful when the cost of evaluating the filter function is high compared to the texture evaluation cost. When EWA filters are being considered two issues arise. First, EWA filters approximate a filter centered in the ellipsoid, which, as we showed, can introduce quite a large error when the perspective distortion is large. Second, the fixed point Gaussian quadrature rule provided a better approximation than EWA filters do. If EWA filters are being used for procedural textures then the sample points can be placed at the same positions as the sample points generated by the fixed point quadrature rule.
This is not an option when discrete textures are being used unless a reconstruction filter is being used to generate a continuous signal. In this case the reconstructed signal can be treated in the same manner as a procedural texture.

The least general of these techniques is clamping. It requires some information about the Fourier spectra of the texture. The filter is approximated by a low order polynomial approximation to its Fourier transform. In most of the applications of clamping that we have encountered the cost of evaluating a filtered sample is lower than that of evaluating an un-filtered sample. If simple anti-aliasing is required and the cost of more sophisticated filtering cannot be afforded, then clamping is an option that should be considered.

7-0.2 Filters for volume rendering

The display of volumetric data allows us to see that which is not normally visible. In other areas of computer graphics we can apply some sort of reality measure; this is not an option for volume rendering. Rather than striving to produce pictures of a certain realistic quality, volume rendering research has concentrated on producing a variety of display tools with which volumetric data sets can be studied. The use of filters for volume rendering extends this set of tools in a significant way.

A popular display model for volumetric data is to consider the data measurements as the local density of a volume of gas that either absorbs or emits light. By simplifying this model it is possible to formulate the display of these data sets as a filtering operation. This investigation was prompted by the similarities between the display of volumetric textures near the surfaces of objects and the direct display of volumetric data. The initial investigation of filters for volume rendering was done using NIL maps as the filter evaluation method.
By using this method we were able to quickly study different filters for volume rendering.

This research led to the incorporation of filters into a more conventional display technique. In this hybrid method a ray marching method is used to locate a point of interest. The display of this point alone does not provide much information; however, the neighborhood of the point may be interesting. By placing a filter at this point and using the result of its application a more interesting display was generated.

The simplicity of this approach and the results achieved thus far highlight its potential. The properties of the data that need to be highlighted differ between volume rendering applications. By allowing the display filters to be designed by the users we hope to provide a more sophisticated set of tools for the display, study, and analysis of volumetric data sets.

7-0.3 NIL maps

NIL maps provide a general filter evaluation tool for volume rendering and texture mapping. The overhead of the technique is quite high, but it allows for the rapid prototyping of three-dimensional filters. In texture mapping it allows a wide range of procedural textures to be used. The only limitation is that the textures be defined by a finite (hopefully small) set of functions. In volume rendering NIL maps incorporate transfer functions. These transfer functions are also restricted to be defined by a set of basis functions.

The ease with which filters can be implemented and studied using this technique makes it an attractive one for the preliminary study of filters in both of these applications. Once a particular filter is selected for a task it is probably the case that a more efficient evaluation scheme can be found.

7-1 Contributions

The contributions of this thesis are:

• An overview of the computer graphics literature relevant to the filtering of volumetric data.
This included an overview of the two-dimensional filtering literature from which two techniques were selected for extension to three dimensions.

• It was shown that the display of three-dimensional textures, the anti-aliasing of three-dimensional textures, and the display of volumetric data can all be formulated as filtering operations.

• Three new filtering techniques for three-dimensional textures were developed:

— Direct evaluation: The development of a three-dimensional filter evaluation technique based on Simpson’s adaptive quadrature rule. It allows the accurate evaluation of three-dimensional filters to within a pre-defined tolerance.

— NIL maps: This technique is an extension of the two-dimensional version. The extension includes methods for procedural textures and transfer functions. The flexibility of this technique makes it a good method for the rapid prototyping of filters both in the context of three-dimensional texture mapping and volume rendering.

— EWA filters: Again, an extension of the two-dimensional version. This technique is easy to implement but is restricted in the class of filters that it can approximate. The performance of this technique is easily matched by a fixed point quadrature rule. If the cost of evaluating the filter is high we could still use the EWA technique to reduce the cost of computing the filter-weight function.

• These three filter evaluation techniques were evaluated in the context of three-dimensional texture mapping. A comprehensive set of criteria was used that allows the correct technique to be chosen depending on the application.

• Examples of the application of filters to volume rendering were presented. In particular the combination of filtering and other display techniques was shown to produce an innovative display of the data.

7-2 Future work

In this thesis we have provided a set of tools for approximating the evaluation of three-dimensional filters when they are applied to volumetric data.
A number of issues arise out of this research:

• If we wish to use filters for volume rendering it is reasonable to allow the user to tailor these filters for a particular task. The definition and manipulation of these filters is not an intuitive task. Several questions arise:

— In what ways can these filters be defined and controlled? The mathematical definition of the filter does not provide an intuitive feel for the function (no pun intended) of the filter. Other methods for describing or defining these filters must be found.

— How can these filters be displayed? In one and two dimensions we can display filters by plotting their functions; this is not so easy for three-dimensional filters because their display requires the display of a surface in four dimensions.

— What queries of the data can be formulated as a filter operation? By combining a ray marching search method with a filter display we were able to display more information about the neighborhood of a point of interest. The application of filters to both the search step and the display step merits investigation. There are many properties of the data, such as surfaces defined by a sudden density change, whose location can be determined by the application of a filter.

• The range of textures that we can generate using procedural textures is far richer than the colour textures we have studied. As we indicated earlier there is research on the filtering of such textures in two dimensions [Four93]. The extension of these techniques to three-dimensional textures should prove to be an interesting and challenging study.

• We have shown how many volumetric data display techniques can be formulated as a filtering operation. A study of the various uses and/or sources of volumetric data may highlight other situations where a filtering approach will help in the display of this data.

• There are a variety of properties of the data that are used in volume rendering that cannot be expressed as a linear filtering operation.
One such example is the maximum density along a viewing ray. This property has been used extensively for the display of volumetric data. An investigation of such problems from the point of view of hierarchical data structures and/or pyramidal data structures may yield interesting results.

Notation

$B_i(t)$: One-dimensional basis function.

$C_i^{\gamma,m}$: NIL map cell from level $\gamma$ derived from basis function $B_i(t)$. The superscript $m$ indicates the cell's offset in the $\gamma$ level of the NIL map.

$C_{i,l}^{\gamma,m}$: NIL map cell from the $\gamma$ level which incorporates a transfer function basis function. The subscript $l$ indicates which transfer function basis function ($B_l(t)$) the NIL cell is derived from. The superscript $m$ indicates the cell's offset in the $\gamma$ level of the NIL map.

$I$: Place holder for the integral result.

$T(t)$: Texture function.

$\tau(t)$: Transfer function; the subscript indicates what channel the data is mapped into. In the context of NIL maps this transfer function is defined in terms of a basis set $B_l(t)$:
$$\tau(t) = \sum_{l=0}^{N-1} b_l B_l(t)$$

$\tilde{T}(t)$: Signal resulting from a reconstruction of a discrete data set.

$X_r(t)$: Procedural texture basis function. The texture function is then defined by
$$T(t) = \sum_{r=0}^{R-1} x_r X_r(t)$$

$u, v, w$: Texture space coordinates.

$C_{i,r}^{\gamma,m}$: NIL map cell from the $\gamma$ level which incorporates a procedural texture basis function. The subscript $r$ indicates which texture basis function ($X_r(t)$) the NIL cell is derived from.

$x, y, z$: Object space coordinates.

Glossary

Aliasing: Introducing a signal component that should not be there.

CAT: Computer Aided Tomography.

DC: Constant term in a signal. In a Fourier transform of a signal the DC component is the coefficient of the zeroth term.

Discrete texture: A discrete representation (usually sampled) of a texture.

EWA: Elliptical Weighted Average filters. A filter approximation technique designed to accelerate the computation of radially symmetric filters.
Primarily of interest when a Gaussian is being used.

Filter: A weighting function.

Modelling: Building an abstract (usually mathematical) representation.

MRI: Magnetic Resonance Imaging.

NIL map: A pyramidal data structure used for storing pre-integrated basis functions. NIL stands for nodus in largo. Roughly translated this means knot large.

Pixel: Picture element, or frame buffer element.

Procedural texture: A continuously defined texture implemented in some programming language.

Reconstruction: Generating a continuously defined signal from a discrete sample set.

Rendering: Using a model and a display technique to assign values to pixels in a frame buffer.

Sampling: Taking a finite number of samples of a signal.

Texture: A high frequency (usually) surface characteristic.

Texel: Texture element.

Transfer function: A function $\tau(t)$ which maps a scalar data-set or scalar function into another domain.

Voxel: Volume element.

Bibliography

[Adel85] Edward H. Adelson and James R. Bergen. “Spatiotemporal energy models for the perception of motion”. J. Opt. Soc. Am. A, Vol. 2, No. 2, 1985.
[Artz79a] E. Artzy. “Display of three-dimensional information in computed tomography”. Computer Graphics and Image Processing, Vol. 9, pp. 196–198, February 1979.
[Artz79b] E. Artzy, G. Frieder, G.T. Herman, and H.K. Liu. “A system for three-dimensional dynamic display of organs from computed tomograms”. Proc. The Sixth Conference on Computer Applications in Radiology and Computer-Aided Analysis of Radiological Images, June 1979.
[Artz80] E. Artzy, G. Frieder, and G.T. Herman. “The theory, design, implementation, and evaluation of a three-dimensional surface detection algorithm”. Computer Graphics (SIGGRAPH ‘80 Proceedings), Vol. 14, No. 3, pp. 2–9, July 1980.
[Artz81a] E. Artzy, G. Frieder, and G.T. Herman. “The theory, design, implementation and evaluation of a three-dimensional surface detection algorithm”. Computer Graphics and Image Processing, Vol. 15, pp. 1–24, January 1981.
[Artz81b] E. Artzy and G.T.
Herman. “Boundary detection in 3-dimensions with a medical application”. Computer Graphics, Vol. 15, No. 2, pp. 92–123, March 1981.
[Benn89] Chakib Bennis and Andre Gagalowicz. “2D macroscopic texture synthesis”. Computer Graphics Forum, Vol. 8, No. 4, pp. 291–300, December 1989.
[Blin78a] J.F. Blinn. Computer Display of Curved Surfaces. Ph.D. Thesis, University of Utah, 1978.
[Blin78b] J.F. Blinn. “A scan line algorithm for displaying parametrically defined surfaces”. Computer Graphics (Special SIGGRAPH ‘78 Issue, preliminary papers), pp. 1–7, August 1978.
[Blin78c] J.F. Blinn. “Simulation of wrinkled surfaces”. Computer Graphics (SIGGRAPH ‘78 Proceedings), Vol. 12, No. 3, pp. 286–292, August 1978.
[Bovi87] A. C. Bovik, M. Clark, and W. S. Geisler. “Computational texture analysis using localized spatial filtering”. IEEE Workshop on Computer Vision, November 1987.
[Bren70] B. Brennan and W.R. Bandeen. “Anisotropic reflectance characteristics of natural earth surfaces”. Applied Optics, Vol. 9, No. 2, February 1970.
[Buch91] John Buchanan. “The filtering of 3d textures”. Proceedings of Graphics Interface ‘91, pp. 53–60, June 1991.
[Burd81] R. Burden, J. Faires, and A. Reynolds. Numerical Analysis, Second edition. PWS Publishers, Boston, Massachusetts, 1981.
[Cann86] J. Canny. “A computational approach to edge detection”. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 8, pp. 679–698, 1986.
[Carp84] Loren Carpenter. “The A-buffer, an antialiased hidden surface method”. Computer Graphics (SIGGRAPH ‘84 Proceedings), Vol. 18, No. 3, pp. 103–108, July 1984.
[Catm74] Edwin E. Catmull. A Subdivision Algorithm for Computer Display of Curved Surfaces. Ph.D. Thesis, University of Utah, December 1974.
[Catm75] Edwin E. Catmull. “Computer display of curved surfaces”. Proceedings of the IEEE Conference on Computer Graphics, Pattern Recognition, and Data Structure, pp. 11–17, May 1975.
[Chen85] Lih-Shyang Chen, G.T. Herman, R.A. Reynolds, and J.K. Udupa.
“Surface shading in the cuberille environment”. IEEE Computer Graphics and Applications, Vol. 5, No. 12, pp. 33–43, December 1985.
[Chri78] H.N. Christiansen and T.W. Sederberg. “Conversion of complex contour line definitions into polygonal element mosaics”. Computer Graphics (SIGGRAPH ‘78 Proceedings), Vol. 12, No. 3, pp. 187–192, August 1978.
[Clin88] Harvey E. Cline, William E. Lorensen, Sigwalt Ludke, Carl R. Crawford, and Bruce C. Teeter. “Two algorithms for the reconstruction of surfaces from tomograms”. Medical Physics, Vol. 15, No. 3, pp. 320–327, June 1988.
[Crow84] F.C. Crow. “Summed-area tables for texture mapping”. Computer Graphics (SIGGRAPH ‘84 Proceedings), Vol. 18, No. 3, pp. 207–212, July 1984.
[Dreb88] Robert A. Drebin, Loren Carpenter, and Pat Hanrahan. “Volume rendering”. Computer Graphics (SIGGRAPH ‘88 Proceedings), Vol. 22, No. 4, pp. 65–74, August 1988.
[Dung78] W. Dungan, A. Stenger, and C. Sutty. “Texture tiles consideration for raster graphics”. Computer Graphics (SIGGRAPH ‘78 Proceedings), Vol. 12, No. 3, pp. 130–134, August 1978.
[Eber90] David S. Ebert and Richard E. Parent. “Rendering and animation of gaseous phenomena by combining fast volume and scanline A-buffer techniques”. Computer Graphics (SIGGRAPH ‘90 Proceedings), Vol. 24, No. 4, pp. 357–366, August 1990.
[Feib80a] E. Feibush and D.P. Greenberg. “Texture rendering system for architectural design”. Computer-Aided Design, Vol. 12, pp. 67–71, March 1980.
[Feib80b] E.A. Feibush, M. Levoy, and R.L. Cook. “Synthetic texturing using digital filters”. Computer Graphics (SIGGRAPH ‘80 Proceedings), Vol. 14, No. 3, pp. 294–301, July 1980.
[Ferr84] Leonard A. Ferrari and Jack Sklansky. “A fast recursive algorithm for binary-valued two-dimensional filters”. Computer Vision, Graphics, and Image Processing, Vol. 26, No. 3, June 1984.
[Ferr85] Leonard A. Ferrari and Jack Sklansky. “A note on Duhamel integrals and running average filters”. Computer Vision, Graphics, and Image Processing, Vol.
29, March 1985.
[Fium83a] E. Fiume, A. Fournier, and L. Rudolph. “A parallel scan conversion algorithm with anti-aliasing for a general purpose ultracomputer”. Computer Graphics (SIGGRAPH ‘83 Proceedings), Vol. 17, No. 3, pp. 141–150, July 1983.
[Fium83b] E. Fiume, Alain Fournier, and Larry Rudolph. “A parallel scan conversion algorithm with anti-aliasing for a general-purpose ultracomputer: Preliminary report”. Proceedings of Graphics Interface ‘83, pp. 11–21, May 1983.
[Fium87] E. Fiume, A. Fournier, and V. Canale. “Conformal texture mapping”. Eurographics ‘87, pp. 53–64, August 1987.
[Fole90] J.D. Foley, A. van Dam, Steven K. Feiner, and John F. Hughes. Computer Graphics: Principles and Practice. Addison-Wesley Publishing Company, second edition, 1990.
[Four86] Alain Fournier and David A. Grindal. “The stochastic modelling of trees”. Proceedings of Graphics Interface ‘86, pp. 164–172, May 1986.
[Four88a] Alain Fournier and Eugene Fiume. “Constant-time filtering with space-variant kernels”. Computer Graphics (SIGGRAPH ‘88 Proceedings), Vol. 22, No. 4, pp. 229–238, August 1988.
[Four88b] Alain Fournier and Donald Fussell. “On the power of the frame buffer”. ACM Transactions on Graphics, Vol. 7, No. 2, pp. 103–128, April 1988.
[Four93] A. Fournier. “Filtering normal maps and creating multiple surfaces”. Work in progress, 1993.
[Fuch77] H. Fuchs, Z.M. Kedem, and S.P. Uselton. “Optimal surface reconstruction from planar contours”. Communications of the ACM, Vol. 20, pp. 693–702, October 1977.
[Gaga83] Andre Gagalowicz. Vers un Modele de Textures. Ph.D. Thesis, Universite Pierre et Marie Curie, Paris VI, 1983.
[Gaga86] A. Gagalowicz and Song-Di-Ma. “Model driven synthesis of natural textures for 3-D scenes”. Computers and Graphics, Vol. 10, No. 2, pp. 161–170, 1986.
[Gaga87] Andre Gagalowicz. “Texture modelling applications”. The Visual Computer, Vol. 3, No. 4, pp. 186–200, December 1987.
[Gaga88] Andre Gagalowicz and Song D. Ma. “Animation of stochastic model-based 3d textures”.
Eurographics ‘88, pp. 313–326, September 1988.
[Galy91] Tinsley A. Galyean and John F. Hughes. “Sculpting: An interactive volumetric modeling technique”. Computer Graphics (SIGGRAPH ‘91 Proceedings), Vol. 25, No. 4, pp. 267–274, July 1991.
[Gang82] M. Gangnet, D. Perny, and P. Coueignoux. “Perspective mapping of planar textures”. Eurographics ‘82, pp. 57–70, 1982.
[Gard84] Geoffrey Y. Gardner. “Simulation of natural scenes using textured quadric surfaces”. Computer Graphics (SIGGRAPH ‘84 Proceedings), Vol. 18, No. 3, pp. 11–20, July 1984.
[Gard85] G.Y. Gardner. “Visual simulation of clouds”. Computer Graphics (SIGGRAPH ‘85 Proceedings), Vol. 19, No. 3, pp. 297–303, July 1985.
[Glas86] A. Glassner. “Adaptive precision in texture mapping”. Computer Graphics (SIGGRAPH ‘86 Proceedings), Vol. 20, No. 4, pp. 297–306, August 1986.
[Gots93] Craig Gotsman. “Constant-time filtering by singular value decomposition”. Eurographics Workshop on Rendering, pp. 145–155, 1993.
[Gree86] N. Greene and P. S. Heckbert. “Creating raster omnimax images from multiple perspective views using the elliptical weighted average filter”. IEEE Computer Graphics and Applications, Vol. 6, No. 6, pp. 21–27, June 1986.
[Gree89] Ned Greene. “Voxel space automata: Modeling with stochastic growth processes in voxel space”. Computer Graphics (SIGGRAPH ‘89 Proceedings), Vol. 23, No. 3, pp. 175–184, July 1989.
[Grin84] D. Grindal. “The Stochastic Creation of Tree Images”. M.Sc. Thesis, Department of Computer Science, University of Toronto, 1984.
[Heck86] P. S. Heckbert. “Filtering by repeated integration”. Computer Graphics (SIGGRAPH ‘86 Proceedings), Vol. 20, No. 4, pp. 315–321, August 1986.
[Heeg87] D. J. Heeger. “Model for the extraction of image flow”. J. Opt. Soc. Am. A, Vol. 4, No. 8, 1987.
[Heeg88] D. J. Heeger. “Optical flow using spatiotemporal filters”. International Journal of Computer Vision, 1988.
[Herm80] G.T. Herman. “Surfaces of objects in discrete three-dimensional space”.
Computer Graphics 80, Proc. of a Conference at Brighton, pp. 287–300, August 1980.
[Herm82] G.T. Herman, R.A. Reynolds, and J.K. Udupa. “Computer techniques for the representation of three-dimensional data on a two-dimensional display”. Proc. SPIE Int. Soc. Opt. Eng., Vol. 367, pp. 3–14, 1982.
[Herm83] G.T. Herman and J.K. Udupa. “Display of 3-D digital images: computational foundations and medical applications”. IEEE Computer Graphics and Applications, Vol. 3, No. 5, pp. 39–46, August 1983.
[Jain81] Anil K. Jain. “Image data compression: A review”. Proceedings of the IEEE, Vol. 69, No. 3, pp. 349–389, March 1981.
[Kaji84] James T. Kajiya and Brian P. Von Herzen. “Ray tracing volume densities”. Computer Graphics (SIGGRAPH ‘84 Proceedings), Vol. 18, No. 3, pp. 165–174, July 1984.
[Kaji85] J.T. Kajiya. “Anisotropic reflection models”. Computer Graphics (SIGGRAPH ‘85 Proceedings), Vol. 19, No. 3, pp. 15–21, July 1985.
[Kaji89] James T. Kajiya and Timothy L. Kay. “Rendering fur with three dimensional textures”. Computer Graphics (SIGGRAPH ‘89 Proceedings), Vol. 23, No. 3, pp. 271–280, July 1989.
[Lans91] Robert C. Lansdale. “Texture Mapping and Resampling for Computer Graphics”. M.Sc. Thesis, Department of Computer Science, University of Toronto, January 1991.
[Laur91] David Laur and Pat Hanrahan. “Hierarchical splatting: A progressive refinement algorithm for volume rendering”. Computer Graphics (SIGGRAPH ‘91 Proceedings), Vol. 25, No. 4, pp. 285–288, July 1991.
[Levo88] Marc Levoy. “Display of surfaces from volume data”. IEEE Computer Graphics and Applications, Vol. 8, No. 3, pp. 29–37, May 1988.
[Levo90a] Marc Levoy. “Efficient ray tracing of volume data”. ACM Transactions on Graphics, Vol. 9, No. 3, pp. 245–261, July 1990.
[Levo90b] Marc Levoy. “A hybrid ray tracer for rendering polygon and volume data”. IEEE Computer Graphics and Applications, Vol. 10, No. 2, pp. 33–40, March 1990.
[Levo90c] Marc Levoy. “Volume rendering by adaptive refinement”. The Visual Computer, Vol.
6, No. 1, pp. 2–7, February 1990.
[Levo90d] Marc Levoy and Ross Whitaker. “Gaze-directed volume rendering”. Computer Graphics (1990 Symposium on Interactive 3D Graphics), Vol. 24, No. 2, pp. 217–223, March 1990.
[Levo92] Marc Levoy. “Volume rendering using the Fourier projection-slice theorem”. Proceedings of Graphics Interface ‘92, pp. 61–69, May 1992.
[Lewi89] J. P. Lewis. “Algorithms for solid noise synthesis”. Computer Graphics (SIGGRAPH ‘89 Proceedings), Vol. 23, No. 3, pp. 263–270, July 1989.
[Lore87] William E. Lorensen and Harvey E. Cline. “Marching cubes: A high resolution 3D surface construction algorithm”. Computer Graphics (SIGGRAPH ‘87 Proceedings), Vol. 21, No. 4, pp. 163–169, July 1987.
[Lyon89] Kic P. Lyons and Joyce E. Farrel. “Linear systems analysis of CRT displays”. Society for Information Display 89 Digest, Vol. 20, pp. 220–223, 1989.
[Ma86] Song De Ma and A. Gagalowicz. “Determination of local coordinate systems for texture synthesis on 3-D surfaces”. Computers and Graphics, Vol. 10, No. 2, pp. 171–176, 1986.
[Malz93] Tom Malzbender. “Fourier volume rendering”. ACM Transactions on Graphics, Vol. 12, No. 3, July 1993.
[Mark91] Tassos Markas and John Reif. “Image compression methods with distortion controlled capabilities”. Data Compression Conference, pp. 93–102, April 8-11 1991.
[Max90] Nelson Max, Pat Hanrahan, and Roger Crawfis. “Area and volume coherence for efficient visualization of 3D scalar functions”. Computer Graphics (San Diego Workshop on Volume Visualization), Vol. 24, No. 5, pp. 27–33, November 1990.
[Mitc88] Don P. Mitchell and Arun N. Netravali. “Reconstruction filters in computer graphics”. Computer Graphics (SIGGRAPH ‘88 Proceedings), Vol. 22, No. 4, pp. 221–228, August 1988.
[Naim89] Avi C. Naiman and Walter Makous. “Information transmission for grayscale edges”. Society for Information Display 91 Digest, Vol. 22, pp. 109–112, May 1989.
[Newm79] William M. Newman and Robert F. Sproull. Principles of Interactive Computer Graphics.
McGraw-Hill, second edition, 1979.
[Nort82] A. Norton, A.P. Rockwood, and P.T. Skolmoski. “Clamping: A method of antialiasing textured surfaces by bandwidth limiting in object space”. Computer Graphics (SIGGRAPH ‘82 Proceedings), Vol. 16, No. 3, pp. 1–8, July 1982.
[Novi90] Kevin L. Novins, Francois X. Sillion, and Donald P. Greenberg. “An efficient method for volume rendering using perspective projection”. Computer Graphics (San Diego Workshop on Volume Visualization), Vol. 24, No. 5, pp. 95–102, November 1990.
[Peac85] D.R. Peachey. “Solid texturing of complex surfaces”. Computer Graphics (SIGGRAPH ‘85 Proceedings), Vol. 19, No. 3, pp. 279–286, July 1985.
[Pent91] Alex Pentland and Bradley Horowitz. “A practical approach to fractal-based image compression”. Data Compression Conference, pp. 176–185, April 8-11 1991.
[Perl85] K. Perlin. “An image synthesizer”. Computer Graphics (SIGGRAPH ‘85 Proceedings), Vol. 19, No. 3, pp. 287–296, July 1985.
[Perl89] Ken Perlin and Eric M. Hoffert. “Hypertexture”. Computer Graphics (SIGGRAPH ‘89 Proceedings), Vol. 23, No. 3, pp. 253–262, July 1989.
[Pern82] D. Perny, M. Gangnet, and P. Coueignoux. “Perspective mapping of planar textures”. Computer Graphics, Vol. 16, No. 1, pp. 70–100, May 1982.
[Poul89] Pierre Poulin. “Anisotropic Reflection Models”. M.Sc. Thesis, Department of Computer Science, University of Toronto, January 1989.
[Poul90] Pierre Poulin and Alain Fournier. “A model for anisotropic reflection”. Computer Graphics (SIGGRAPH ‘90 Proceedings), Vol. 24, No. 4, pp. 273–282, August 1990.
[Prat78] W.K. Pratt. Digital Image Processing. Wiley-Interscience, 1978.
[Rose75] A. Rosenfeld. Multiresolution Image Processing and Analysis. Springer-Verlag, Berlin, 1975.
[Rose76] A. Rosenfeld and A. C. Kak. Digital Picture Processing. Computer Science and Applied Mathematics. Academic Press, 1976.
[Sabe88] Paolo Sabella. “A rendering algorithm for visualizing 3D scalar fields”. Computer Graphics (SIGGRAPH ‘88 Proceedings), Vol. 22, No.
4, pp. 51–58, August 1988.
[Schw83] D. Schweitzer. “Artificial texturing: An aid to surface visualization”. Computer Graphics (SIGGRAPH ‘83 Proceedings), Vol. 17, No. 3, pp. 23–29, July 1983.
[Shir90] Peter Shirley and Allan Tuchman. “A polygonal approximation to direct scalar volume rendering”. Computer Graphics (San Diego Workshop on Volume Visualization), Vol. 24, No. 5, pp. 63–70, November 1990.
[Tani75] S.L. Tanimoto and Theo Pavlidis. “A hierarchical data structure for picture processing”. Computer Vision, Graphics, and Image Processing, Vol. 4, No. 2, June 1975.
[Tilt91] James C. Tilton, Deasso Han, and M Manohar. “Compression experiments with AVHRR data”. Data Compression Conference, pp. 411–420, April 8-11 1991.
[Tots93] Takashi Totsuka and Marc Levoy. “Frequency domain volume rendering”. Computer Graphics (SIGGRAPH ‘93 Proceedings), pp. 271–278, August 1993.
[Trou87] Yves Trousset and Francis Schmitt. “Active ray tracing for 3d medical imaging”. Eurographics ‘87, pp. 139–150, August 1987.
[Turk91] Greg Turk. “Generating textures on arbitrary surfaces using reaction-diffusion”. Computer Graphics (SIGGRAPH ‘91 Proceedings), Vol. 25, No. 4, pp. 289–298, July 1991.
[Upso88] Craig Upson and Michael Keeler. “V-BUFFER: Visible volume rendering”. Computer Graphics (SIGGRAPH ‘88 Proceedings), Vol. 22, No. 4, pp. 59–64, August 1988.
[Ward92] Gregory J. Ward. “Measuring and modeling anisotropic reflection”. Computer Graphics (SIGGRAPH ‘92 Proceedings), Vol. 26, No. 2, pp. 265–272, July 1992.
[West90] Lee Westover. “Footprint evaluation for volume rendering”. Computer Graphics (SIGGRAPH ‘90 Proceedings), Vol. 24, No. 4, pp. 367–376, August 1990.
[Wijk91] Jarke J. van Wijk. “Spot noise”. Computer Graphics (SIGGRAPH ‘91 Proceedings), Vol. 25, No. 4, pp. 309–318, July 1991.
[Wilh90a] Jane Wilhelms and Allen Van Gelder. “Octrees for faster isosurface generation (extended abstract)”. Computer Graphics (San Diego Workshop on Volume Visualization), Vol. 24, No. 5, pp.
57–62, November 1990.
[Wilh90b] Jane Wilhelms and Allen Van Gelder. “Topological considerations in isosurface generation (extended abstract)”. Computer Graphics (San Diego Workshop on Volume Visualization), Vol. 24, No. 5, pp. 79–86, November 1990.
[Wilh91] Jane Wilhelms and Allen Van Gelder. “A coherent projection approach for direct volume rendering”. Computer Graphics (SIGGRAPH ‘91 Proceedings), Vol. 25, No. 4, pp. 275–284, July 1991.
[Will83] L. Williams. “Pyramidal parametrics”. Computer Graphics (SIGGRAPH ‘83 Proceedings), Vol. 17, No. 3, pp. 1–11, July 1983.
[Witk91] Andrew Witkin and Michael Kass. “Reaction-diffusion textures”. Computer Graphics (SIGGRAPH ‘91 Proceedings), Vol. 25, No. 4, pp. 299–308, July 1991.
[Wu91] Xiaolin Wu and Chengfu Yao. “Image coding by adaptive tree-structured segmentation”. Data Compression Conference, pp. 73–82, April 8-11 1991.
[Xue91] Kefu Xue and James M. Crissey. “An iteratively interpolative vector quantization algorithm for image data compression”. Data Compression Conference, pp. 139–148, April 8-11 1991.

Appendix A

NIL innards

Get it running!

In Chapter 4 we overviewed the NIL map filter approximating technique. This overview was followed by a discussion of the extension of NIL maps to allow the use of transfer functions and procedural textures defined by a texture basis function set.

In one dimension we showed that the evaluation of the convolution of a filter $F(t)$ positioned at $t_0$ and a texture signal $T(t)$,
$$I(t_0) = \int F(t - t_0) T(t) \, dt \qquad (A.1)$$
could be approximated by first approximating the filter by a set of curves. This set of curves is piece-wise defined by basis functions $B_i(t)$. If these basis functions are pre-integrated with the texture function,
$$C_i^m = \int_0^1 B_i(t) T(m + t) \, dt \qquad (A.2)$$
the above convolution can be approximated by
$$I(t_0) \approx \sum_{m=0}^{K-1} \sum_{i=0}^{M-1} c_i^m C_i^m \qquad (A.3)$$
The $C_i^m$ coefficients are known as the NIL map cell entries.
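To make equations A.2 and A.3 concrete in one dimension, here is a small sketch for a sample-and-hold texture. With a single constant basis function, a box filter spanning K texels simply has cell coefficients 1/K. The helper names are mine, not the thesis's.

```python
import numpy as np

def nil_cells(texture, basis_fns, n=4096):
    """Equation A.2: C_i^m = integral_0^1 B_i(t) T(m+t) dt for a
    sample-and-hold texture, where T(m+t) = texture[m] on [0, 1)."""
    t = (np.arange(n) + 0.5) / n                         # midpoint samples
    Bbar = np.array([b(t).mean() for b in basis_fns])    # integral of each basis
    # With T constant on each unit cell, C[m, i] = texture[m] * Bbar[i].
    return np.outer(np.asarray(texture, dtype=float), Bbar)

def filter_with_cells(cells, coeffs):
    """Equation A.3: I(t0) ~= sum_m sum_i c_i^m C_i^m, where coeffs holds
    the piecewise filter-approximation coefficients c_i^m."""
    return float(np.sum(coeffs * cells))
```

For example, a box filter covering four texels of `[1, 3, 5, 7]` (coefficients of 0.25 on the constant basis in each cell) reproduces the mean of those texels.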
By spreading the basis functions and integrating these with successively larger intervals of the texture a pyramidal NIL map can be constructed. In this case the NIL map cells are defined by
$$C_i^{\gamma,m} = \int_0^1 B_i(t) T(2^\gamma m + 2^\gamma t) \, dt \qquad (A.4)$$
where $\gamma$ indicates the level of the pyramid to which the NIL cell belongs. In this appendix we discuss some of the optimizations which were employed in our implementation of NIL maps.

A-1 Speeding up the computation of $C_{ijk}^{\gamma,(m,n,o)}$ for discrete textures

The process that we are trying to approximate is that of the convolution of a three-dimensional filter with a continuous three-dimensional signal. Unfortunately, in many of our applications we do not have a continuous three-dimensional signal; instead we have a sampled version of the signal. Before we can apply any filters to this signal we must reconstruct it. There are many reconstruction filters that could be chosen; however, in practice one of two reconstruction methods is used: sample and hold or tri-linear interpolation. When we know the reconstruction filter we can speed up the computation of the NIL maps.

A-1.1 Sample and hold

The sample and hold reconstruction is computationally the simplest of the reconstruction filters available. The value at a particular point is found by applying the floor function to each of the coordinates. In the following discussion we will use $\tilde{T}(x, y, z)$ to denote the reconstructed texture and $T(x, y, z)$ to denote the discrete texture:
$$\tilde{T}(x, y, z) = T(\lfloor x \rfloor, \lfloor y \rfloor, \lfloor z \rfloor)$$
Using this reconstruction filter affords us two optimizations. The first is in the computation of the different levels of the NIL maps. The second allows the inclusion of an arbitrary number of negative levels in the NIL map without any storage cost.

Speeding up the computation of $C_{ijk}^{\gamma,(m,n,o)}$

First let us look at this problem in one dimension.
The texture $T$ is sampled at $\Delta t = 1$ intervals (we can assume WLOG that $\Delta t = 1$; if this is not so a change of variables will suffice to ensure the condition), so the equation for $C_i^{0,m}$ is
$$C_i^{0,m} = \int_0^1 T(m + t) B_i(t) \, dt \qquad (A.5)$$
Since
$$T(m + t) = T(m), \qquad t \in [0, 1)$$
we rewrite A.5:
$$C_i^{0,m} = T(m) \int_0^1 B_i(t) \, dt$$
Now in general for any level $\gamma$ we have
$$\int_0^1 B_i(t) T(2^\gamma m + 2^\gamma t) \, dt = \sum_{a=0}^{2^\gamma - 1} \int_{a/2^\gamma}^{(a+1)/2^\gamma} T(2^\gamma m + a) B_i(t) \, dt \qquad (A.6)$$
so that
$$C_i^{\gamma,m} = \sum_{a=0}^{2^\gamma - 1} \int_{a/2^\gamma}^{(a+1)/2^\gamma} T(2^\gamma m + a) B_i(t) \, dt \qquad (A.7)$$
Since the texture function is now constant on every integration interval we rewrite
$$C_i^{\gamma,m} = \sum_{a=0}^{2^\gamma - 1} T(2^\gamma m + a) \int_{a/2^\gamma}^{(a+1)/2^\gamma} B_i(t) \, dt \qquad (A.8)$$
Following this argument in three dimensions we find that for a texture $T(u, v, w)$
$$C_{ijk}^{\gamma,(m,n,o)} = \sum_{a=0}^{2^\gamma - 1} \sum_{b=0}^{2^\gamma - 1} \sum_{c=0}^{2^\gamma - 1} \int_{a/2^\gamma}^{(a+1)/2^\gamma} \int_{b/2^\gamma}^{(b+1)/2^\gamma} \int_{c/2^\gamma}^{(c+1)/2^\gamma} \tilde{T}(2^\gamma m + a, 2^\gamma n + b, 2^\gamma o + c) B_i(u) B_j(v) B_k(w) \, du \, dv \, dw \qquad (A.9)$$
When we examine equations A.9 and A.7 we notice that the integrals are always performed between constant sections of the texture and some fixed interval of the basis functions. This means that we can rewrite A.9 as follows:
$$C_{ijk}^{\gamma,(m,n,o)} = \sum_{a=0}^{2^\gamma - 1} \sum_{b=0}^{2^\gamma - 1} \sum_{c=0}^{2^\gamma - 1} T(2^\gamma m + a, 2^\gamma n + b, 2^\gamma o + c) \int_{a/2^\gamma}^{(a+1)/2^\gamma} \int_{b/2^\gamma}^{(b+1)/2^\gamma} \int_{c/2^\gamma}^{(c+1)/2^\gamma} B_i(u) B_j(v) B_k(w) \, du \, dv \, dw \qquad (A.10)$$
Thus the computation of the $C_{ijk}^{\gamma,(m,n,o)}$ values should be done by precomputing for each level $\gamma$ the various integrals of the form
$$B_{abc}^\gamma = \int_{a/2^\gamma}^{(a+1)/2^\gamma} \int_{b/2^\gamma}^{(b+1)/2^\gamma} \int_{c/2^\gamma}^{(c+1)/2^\gamma} B_i(u) B_j(v) B_k(w) \, du \, dv \, dw, \qquad a, b, c \in [0, 2^\gamma) \qquad (A.11)$$
This is a separable integration so we can rewrite
$$B_{abc}^\gamma = \int_{a/2^\gamma}^{(a+1)/2^\gamma} B_i(u) \, du \int_{b/2^\gamma}^{(b+1)/2^\gamma} B_j(v) \, dv \int_{c/2^\gamma}^{(c+1)/2^\gamma} B_k(w) \, dw \qquad (A.12)$$
Using these values equation A.10 becomes
$$C_{ijk}^{\gamma,(m,n,o)} = \sum_{a=0}^{2^\gamma - 1} \sum_{b=0}^{2^\gamma - 1} \sum_{c=0}^{2^\gamma - 1} T(2^\gamma m + a, 2^\gamma n + b, 2^\gamma o + c) \, B_{abc}^\gamma \qquad (A.13)$$
From this discussion two methods suggest themselves for helping the computation of $C_{ijk}^{\gamma,(m,n,o)}$: one memory intensive and the other less so.

The first method requires the computation of all the values for $B_{abc}^\gamma$ for each individual level $\gamma$. Since these arrays increase exponentially with the depth of the NIL map we must allocate enough memory to store the highest level of the $B_{abc}^\gamma$ coefficients.
The memory required at this level is
$$n^3 \times M^3$$
where the dimension of the data is $n \times n \times n$. Noticing that these integrals are symmetric over any permutation of the indices $a, b, c$, we can make further computation and storage optimizations, reducing the storage to
$$\frac{n(n+1)(2n+1)}{6} \times M^3$$
This can be shown by examining the levels of the array. If we set the first index to 1 then we see that $[1, b, c] = [1, c, b]$ is the symmetry that defines the first level. The storage required for this level is therefore
$$\frac{n(n+1)}{2}$$
We no longer have to consider any triple that contains a 1 in it. Setting the first index to 2 and again using $[2, b, c] = [2, c, b]$ the storage required is
$$\frac{n(n+1)}{2} - 1 = \frac{n(n+1)}{2} - \frac{2(2-1)}{2}$$
Now setting the first index to $i$ and noticing that we now do not consider any triple containing a number less than $i$, the storage for level $i$ becomes
$$\frac{n(n+1)}{2} - \frac{(i-1)i}{2}$$
where the $\frac{(i-1)i}{2}$ factor accounts for those triples that contain a value less than $i$. The total required memory is the sum
$$\mathrm{Storage} = \sum_{i=1}^{n} \left[ \frac{n(n+1)}{2} - \frac{(i-1)i}{2} \right]$$
Using the identities
Since we are using the sample and hold filter it is a simple matter to use negative levels for the NIL map. This is accomplished by noticing that the value of the texture over any of these negative NIL map entries is the same as that of the texture over the parent NIL map entry at level 0. Thus if we need to use one of these negative-level NIL map entries we simply scale the value of the parent NIL map entry:

    C_{mno}^{−γ} = parent(C^0)                                               (A.17)

A-1.2 Trilinear interpolation

When the reconstruction filter is a linear interpolation filter we can write the value of the texture as

    T(m + t) = (1 − t) T(m) + t T(m + 1)                                     (A.18)

Using the above in equation A.5 we get

    C_i^0 = ∫_0^1 T(m + t) B_i(t) dt
          = ∫_0^1 ( (1 − t) T(m) + t T(m + 1) ) B_i(t) dt                    (A.19)

This integral can be separated:

    C_i^0 = T(m) ∫_0^1 B_i(t) dt − T(m) ∫_0^1 t B_i(t) dt + T(m + 1) ∫_0^1 t B_i(t) dt

Noticing that the integrals ∫ B_i(t) dt and ∫ t B_i(t) dt are independent of the sampled data, we define the quantities

    B_i = ∫_0^1 B_i(t) dt                                                    (A.20)

and

    ᵗB_i = ∫_0^1 t B_i(t) dt                                                 (A.21)

Extending this to fit into the NIL map scheme, these quantities can be defined for arbitrary levels γ:

    ᵞB_i^a = ∫_{t=a/2^γ}^{(a+1)/2^γ} B_i(t) dt
    ᵞᵗB_i^a = ∫_{t=a/2^γ}^{(a+1)/2^γ} (2^γ t − a) B_i(t) dt                  (A.22)

So that the computation of C_i^γ becomes

    C_i^γ = Σ_{a=0}^{2^γ−1} [ T(2^γ m + a) ᵞB_i^a − T(2^γ m + a) ᵞᵗB_i^a + T(2^γ m + a + 1) ᵞᵗB_i^a ]

A-1.3 Three dimensions

For a given texture T sampled on a grid we can express the reconstructed signal T as a weighted sum of the eight corners of the voxel:

    T(m+u, n+v, o+w) = (1−w)[ (1−v)[ (1−u) T(m,n,o)   + u T(m+1,n,o)   ]
                            +    v[ (1−u) T(m,n+1,o)  + u T(m+1,n+1,o) ] ]
                     +    w[ (1−v)[ (1−u) T(m,n,o+1)  + u T(m+1,n,o+1) ]
                            +    v[ (1−u) T(m,n+1,o+1)+ u T(m+1,n+1,o+1)] ]

Or, expanded and grouped by the monomials in u, v, w:

    T(m+u, n+v, o+w) =
        T(m,n,o)
      + u  [ T(m+1,n,o) − T(m,n,o) ]
      + v  [ T(m,n+1,o) − T(m,n,o) ]
      + w  [ T(m,n,o+1) − T(m,n,o) ]
      + uv [ T(m+1,n+1,o) − T(m+1,n,o) − T(m,n+1,o) + T(m,n,o) ]
      + uw [ T(m+1,n,o+1) − T(m+1,n,o) − T(m,n,o+1) + T(m,n,o) ]
      + vw [ T(m,n+1,o+1) − T(m,n+1,o) − T(m,n,o+1) + T(m,n,o) ]
      + uvw[ T(m+1,n+1,o+1) − T(m+1,n+1,o) − T(m+1,n,o+1) − T(m,n+1,o+1)
             + T(m+1,n,o) + T(m,n+1,o) + T(m,n,o+1) − T(m,n,o) ]

The calculation of C_{ijk} then becomes the integral of this expansion against the basis product B_i(u) B_j(v) B_k(w) over the unit cube, one integral per term. Defining some shorthand notation,

    B_{ijk}    = ∫_0^1 ∫_0^1 ∫_0^1      B_i(u) B_j(v) B_k(w) du dv dw
    ᵘB_{ijk}   = ∫_0^1 ∫_0^1 ∫_0^1 u    B_i(u) B_j(v) B_k(w) du dv dw
    ᵛB_{ijk}   = ∫_0^1 ∫_0^1 ∫_0^1 v    B_i(u) B_j(v) B_k(w) du dv dw
    ʷB_{ijk}   = ∫_0^1 ∫_0^1 ∫_0^1 w    B_i(u) B_j(v) B_k(w) du dv dw
    ᵘᵛB_{ijk}  = ∫_0^1 ∫_0^1 ∫_0^1 uv   B_i(u) B_j(v) B_k(w) du dv dw
    ᵘʷB_{ijk}  = ∫_0^1 ∫_0^1 ∫_0^1 uw   B_i(u) B_j(v) B_k(w) du dv dw
    ᵛʷB_{ijk}  = ∫_0^1 ∫_0^1 ∫_0^1 vw   B_i(u) B_j(v) B_k(w) du dv dw
    ᵘᵛʷB_{ijk} = ∫_0^1 ∫_0^1 ∫_0^1 uvw  B_i(u) B_j(v) B_k(w) du dv dw

and, because the texture values are constant over the integrals, collecting the terms of the expansion gives C_{ijk} as a weighted sum of the eight corner values (equation A.23).

It is simple to show that

    ᵛB_{ijk} = ᵘB_{jik},    ʷB_{ijk} = ᵘB_{kji},
    ᵘʷB_{ijk} = ᵘᵛB_{ikj},  ᵛʷB_{ijk} = ᵘᵛB_{jki}

Using these identities in equation A.23 and rearranging by corner yields

    C_{ijk} = T(m,n,o)       [ B_{ijk} − ᵘB_{ijk} − ᵘB_{jik} − ᵘB_{kji} + ᵘᵛB_{ijk} + ᵘᵛB_{ikj} + ᵘᵛB_{jki} − ᵘᵛʷB_{ijk} ]
            + T(m+1,n,o)     [ ᵘB_{ijk} − ᵘᵛB_{ijk} − ᵘᵛB_{ikj} + ᵘᵛʷB_{ijk} ]
            + T(m,n+1,o)     [ ᵘB_{jik} − ᵘᵛB_{ijk} − ᵘᵛB_{jki} + ᵘᵛʷB_{ijk} ]
            + T(m,n,o+1)     [ ᵘB_{kji} − ᵘᵛB_{ikj} − ᵘᵛB_{jki} + ᵘᵛʷB_{ijk} ]
            + T(m+1,n+1,o)   [ ᵘᵛB_{ijk} − ᵘᵛʷB_{ijk} ]
            + T(m+1,n,o+1)   [ ᵘᵛB_{ikj} − ᵘᵛʷB_{ijk} ]
            + T(m,n+1,o+1)   [ ᵘᵛB_{jki} − ᵘᵛʷB_{ijk} ]
            + T(m+1,n+1,o+1) [ ᵘᵛʷB_{ijk} ]                                  (A.26)

So defining

    B̂^{000}_{ijk} = B_{ijk} − ᵘB_{ijk} − ᵘB_{jik} − ᵘB_{kji} + ᵘᵛB_{ijk} + ᵘᵛB_{ikj} + ᵘᵛB_{jki} − ᵘᵛʷB_{ijk}
    B̂^{100}_{ijk} = ᵘB_{ijk} − ᵘᵛB_{ijk} − ᵘᵛB_{ikj} + ᵘᵛʷB_{ijk}
    B̂^{010}_{ijk} = ᵘB_{jik} − ᵘᵛB_{ijk} − ᵘᵛB_{jki} + ᵘᵛʷB_{ijk}
    B̂^{001}_{ijk} = ᵘB_{kji} − ᵘᵛB_{ikj} − ᵘᵛB_{jki} + ᵘᵛʷB_{ijk}
    B̂^{110}_{ijk} = ᵘᵛB_{ijk} − ᵘᵛʷB_{ijk}
    B̂^{101}_{ijk} = ᵘᵛB_{ikj} − ᵘᵛʷB_{ijk}
    B̂^{011}_{ijk} = ᵘᵛB_{jki} − ᵘᵛʷB_{ijk}
    B̂^{111}_{ijk} = ᵘᵛʷB_{ijk}                                              (A.27)

the computation of C^0 becomes

    C^{0,mno}_{ijk} = Σ_{x=0}^{1} Σ_{y=0}^{1} Σ_{z=0}^{1} B̂^{xyz}_{ijk} T(m+x, n+y, o+z)   (A.28)

These integrals B̂^{xyz}_{ijk} can be precomputed.
In the case of sample and hold we found that precomputing these quantities reduced the precomputation cost somewhat, because the integrals of the basis functions were not being evaluated more than once. The memory cost associated with this was equivalent to the bottom-most level of the NIL map. For tri-linear interpolation the memory cost for the precomputed integrals is going to be eight times that of the bottom-most level of the NIL map, so we evaluated the higher levels of the NIL maps using the following definition:

    C^{γ,mno}_{ijk} = Σ_{a=0}^{2^γ−1} Σ_{b=0}^{2^γ−1} Σ_{c=0}^{2^γ−1}
        ∫_{u=a/2^γ}^{(a+1)/2^γ} ∫_{v=b/2^γ}^{(b+1)/2^γ} ∫_{w=c/2^γ}^{(c+1)/2^γ}
        T(2^γ m + a + u, 2^γ n + b + v, 2^γ o + c + w)
        B_i(u) B_j(v) B_k(w) du dv dw                                        (A.29)

A-1.4 Negative levels for tri-linear interpolation

The negative levels for the NIL map in this case are computed by using an altered version of equation A.26. In the following, T(m^p, n^p, o^p) refers to the texture value in the lower corner of the parent cell, and references of the form T(m^p + x, n^p + y, o^p + z) refer to the interpolated texture value evaluated at that point. Each interpolated corner value is weighted by the corresponding bracketed combination of equation A.26, that is, by the B̂^{xyz}_{ijk} of equation A.27:

    C^{−γ,mno}_{ijk} = Σ_{x=0}^{1} Σ_{y=0}^{1} Σ_{z=0}^{1}
        B̂^{xyz}_{ijk} T(m^p + (m̄ + x)/2^γ, n^p + (n̄ + y)/2^γ, o^p + (ō + z)/2^γ)   (A.30)

where (m̄, n̄, ō) locates the negative-level cell within its parent. Using this definition of the negative levels we can compute them on the fly at the cost of 8 texture evaluations and the above sum.

A-2 Non-symmetric cases

In the preceding discussion we have assumed that the dimensions of the digitized data are of the form 2^p for some integer p. We have also assumed that in the three-dimensional case x_res = y_res = z_res. In this section we present solutions for these cases. First, we present a solution to the problem of not having the dimension of the data be an integral power of 2.
Second, we extend this solution to solve the problem of not having three-dimensional data of equal dimensions.

A-2.1 Non-integral powers of 2 in one dimension

Since the algorithm that we are using depends on halving the dimensions of the data for each level of the NIL maps, it becomes a problem when we try to use this technique on data whose dimension is not a power of two. One of the solutions is to reconstruct the signal and resample it. If we do not wish to do this we must extend the NIL map technique to handle arbitrarily sized data as efficiently as possible. The technique proposed here allows the use of the NIL map technique on such data. We first extend the data structure that we used for storing the NIL map values by adding a bit value, valid, that indicates whether the current entry in the NIL map is valid or not.

    Building NIL maps with n samples where n ≠ 2^p
    Figure A.24

The construction of the NIL maps now requires a parity check on the size of each level. If the level is odd we add a NIL map cell and mark it as invalid. The dimension of the next level is then half the size of this extended level. Each of the NIL cells on this level is evaluated unless one (or both) of its children is invalid; if this is the case then we simply mark this cell as invalid. The parity of this level is then checked and an additional cell is added if necessary. This process continues until we have constructed a complete NIL map. Such a NIL map is illustrated in Figure A.24, which corresponds to a NIL map constructed from five initial samples. The lowest level (γ = 0) has had one NIL cell added to it. The second level (γ = 1) has two invalid cells. The leftmost invalid cell is invalid because one of its children is invalid. The rightmost invalid cell is invalid because it was added to ensure that this level had an even number of cells.
In Figure A.24, the third level has one invalid cell because both of its children are invalid, and the fourth level has one invalid cell because one of its children is invalid.

The extra storage cost incurred by the above alterations is at most ⌈log₂(n)⌉ cells, because at each level we can add at most one additional cell and there are at most ⌈log₂(n)⌉ levels in this NIL map.

The process of generating the approximating hierarchy must now be altered slightly. In the subdivision process that we described earlier we subdivided a NIL cell if it contained more trigger points than allowed by the tolerance. The new subdivision process is similar to this, except that when an invalid patch is encountered it is automatically subdivided regardless of how many trigger points it contains. In this manner we ensure that the subdivision produces a hierarchy of valid patches. This process will require some additional subdivisions to be performed.

The upper bound on the number of additional subdivisions is easily derived. Consider the case where a single trigger point is positioned so that it lies in an invalid NIL cell at each level of the NIL map except the lowest. This trigger point will cause at most ⌈log₂(n)⌉ subdivisions, and no other trigger point will cause these subdivisions. This implies that using the above extension to NIL maps will require at most ⌈log₂(n)⌉ additional subdivisions.

A-2.2 Rectangular three-dimensional textures

In this section we explore the case of a texture with dimensions x_res = 2^{p_x} ≤ y_res = 2^{p_y} ≤ z_res = 2^{p_z}. We wish to develop a NIL map representation of these textures such that for some level γ there is only one entry in the corresponding level of the NIL map. If we blindly apply the NIL map algorithm we find that at level γ = p_x the number of NIL map entries in the x direction is one, but because p_x ≤ p_y ≤ p_z we still have 2^{p_y − p_x} and 2^{p_z − p_x} NIL map entries in the y and z directions. We wish to continue building the NIL map until there is one NIL map entry in the top level of the NIL map.
This means we must continue building for p_z − p_x levels. This is simply a matter of keeping track of these special levels. We must, however, modify the equations we use for the calculation of the ᵞB^{abc}_{ijk}, C^{γ,mno}_{ijk}, and B̂^{mno}_{ijk} coefficients. The calculation of ᵞB^{abc}_{ijk}, detailed in equation A.12, becomes

    ᵞB^{abc}_{ijk} = (1/2^{p_x − min(p_x,γ)}) ∫_{u=a/2^{min(p_x,γ)}}^{(a+1)/2^{min(p_x,γ)}} B_i(u) du
                   × (1/2^{p_y − min(p_y,γ)}) ∫_{v=b/2^{min(p_y,γ)}}^{(b+1)/2^{min(p_y,γ)}} B_j(v) dv
                   × (1/2^{p_z − min(p_z,γ)}) ∫_{w=c/2^{min(p_z,γ)}}^{(c+1)/2^{min(p_z,γ)}} B_k(w) dw      (A.31)

    a ∈ [0, 2^{min(p_x,γ)}), b ∈ [0, 2^{min(p_y,γ)}), c ∈ [0, 2^{min(p_z,γ)})

Equation A.13 then becomes

    C^{γ,mno}_{ijk} = Σ_{a=0}^{2^{min(p_x,γ)}−1} Σ_{b=0}^{2^{min(p_y,γ)}−1} Σ_{c=0}^{2^{min(p_z,γ)}−1}
        T(2^{min(p_x,γ)} m + a, 2^{min(p_y,γ)} n + b, 2^{min(p_z,γ)} o + c) ᵞB^{abc}_{ijk}                  (A.32)

and similarly the computation of B̂^{mno}_{ijk} becomes

    B̂^{γ,mno}_{ijk} = Σ_{a=0}^{2^{min(p_x,γ)}−1} Σ_{b=0}^{2^{min(p_y,γ)}−1} Σ_{c=0}^{2^{min(p_z,γ)}−1}
        T(2^{min(p_x,γ)} m + a, 2^{min(p_y,γ)} n + b, 2^{min(p_z,γ)} o + c) ᵞB^{abc}_{ijk}                  (A.33)

Appendix B

High level view of NIL code

In this appendix we present the listing of the loop from our NIL map implementation for volume rendering. Much of the book-keeping code has been removed to present a cleaner view of the code. This code listing shows the inner loops which compute the pixel intensities.

    /*
     * For each pixel on the screen calculate the ray which it
     * generates.  Use this ray to lay the trigger points and/or find
     * the closest density.
     */
    for (pix_j = bottomY; pix_j <= topY; pix_j++) { /* for J */
        for (pix_i = leftX; pix_i <= rightX; pix_i++) { /* for I */
            ray = GenerateRay((float)pix_i, (float)pix_j, &up, &right);

            /*
             * Get the entry and exit points for the volume.  Use the
             * voxel traversal stuff for going through the data and
             * collect the parameters.
             */
            hits = get_entry_exit_points(ray, &v1, &v2, &t1, &t2);
            if (hits == 1) {
                hits = get_entry_exit_points(ray, &v1, &v2, &t1, &t2);
                hits = 0;
            }

            /* Hit detected */
            if (hits != 0) {
                trigger_points = lay_down_the_sample_points(v1, v2, t1, t2,
                    up, right, ray, selected_trigger_points,
                    slices_required, samples_per_slice, length);

                box = find_the_box_which_encloses_the_points(
                    selected_trigger_points, trigger_points);

                number_of_patches = find_the_required_patches(selected_patches,
                    box, selected_trigger_points, trigger_points, maxgamma);

                /*
                 * First find the control weights of the control points
                 * for all the patches.  Use the sum of their weights to
                 * normalize the display.
                 */
                for (i = 0; i < number_of_patches; i++) {
                    scale += find_the_weight_of_the_control_points(
                        &selected_patches[i], maxgamma, x, y, pix_i, pix_j,
                        volume_bottom, volume_dx, volume_dy, volume_dz,
                        center);
                }

                for (i = 0; i < number_of_patches; i++) {
                    patch_total = find_the_integral_under_the_patch(
                        &selected_patches[i]);
                    total += patch_total;
                }

                total = total / scale;
                pixel = UNIT_TO_CHAR(total);
                (*current_frame_buffer->setpixel)(pix_i, pix_j, pixel);
            } /* hits != 0 */
        } /* for I */
    } /* for J */
Appendix C

Motion blur filter C code

The addition of motion blur to the texture map filtering code required the addition of the following code.

    /*
     * Return the value of the filter at point.
     * input:
     *   point                 Vector containing the control point.
     *   Box_center_length     Radial distance of centre of filter.
     *   Box_width_for_rotate  Width of filter.
     */
    double Filter(vector point, double Box_center_length, double Box_width_for_rotate)
    {
        double l = sqrt(point.x * point.x + point.y * point.y);
        double Q = (Box_center_length - l) * (Box_center_length - l);

        return exp(-Q / Box_width_for_rotate);
    }

    /*
     * Place points around the arc near which the filter is defined.
     * input:
     *   cB           Texture coordinate for centre of pixel projection.
     *   a            Long axis of projected ellipse.
     *   points[]     Array of trigger points.
     *   nil_samples  User parameter indicating how many samples.
     *   rotation     Angle of rotation in radians.
     */
    int Lay_Down_Rotate_Points(vector cB, double a, vector points[],
                               int nil_samples, double rotation)
    {
        double rot_delta = rotation / (double)nil_samples;
        double angle;
        double start_angle;
        double tmp_length;
        double offset = rot_delta / nil_samples;
        int number_of_points = 0;
        int i, j;
        vector X;

        start_angle = atan2(cB.y, cB.x);
        Box_center_length = sqrt(cB.y * cB.y + cB.x * cB.x);
        Box_width_for_rotate = a;

        for (i = 0; i < nil_samples; i++) {
            angle = start_angle + i * rot_delta;
            for (j = 0; j < nil_samples; j++) {
                tmp_length = Box_center_length - a + 2.0 * a / nil_samples * j;
                X.x = tmp_length * cos(angle + j * offset);
                X.y = tmp_length * sin(angle + j * offset);

                X.z = -0.01;
                points[number_of_points++] = X;

                X.z = 0.0;
                points[number_of_points++] = X;

                X.z = 0.01;
                points[number_of_points++] = X;
            }
        }
        return number_of_points;
    }

Index

adaptive quadrature, 91
aliased signal, 12
aliasing, 34, 41, 84, 86
anisotropic reflection, 31
anisotropy, 38, 81
anti-aliasing, 14, 41, 89, 129, 131, 133
architects, 30
automata, 42
band-limited noise, 86
Bartlett filter, (See filter, Bartlett)
basis set, 69, 77
blurring, 38, 81
box filter, 50
CAT, 2
Catmull-Rom, 37
clamping, 20, 47, 49, 105, 129, 131
  implementation, 49
  technique evaluation, 85
clouds, 40
computer graphics pipeline, 4
conformal mapping, 36
convolution, 14, 26, 39, 46, 150
cosine textures, 72, 73, 88
DC, 76
direct evaluation, 52, 84, 105, 129, 130, 133
  implementation, 52
  technique evaluation, 90
discrete texture, 83, 101, 151
discrete volumetric data, 23
  definition, 23
display filters, 19, 132
display model, 44
DOG, (See filter, difference of Gaussians)
ellipse, 57
ellipsoid, 58, 98
EWA filters, 29, 30, 36, 39, 47, 49, 56, 84, 105, 129, 130, 133
  definition, 29
  implementation, 56
  technique evaluation, 95
fidelity measures, 83
filter, 25
  anti-aliasing, 6, 14, 17
  approximating, 34
  approximating hierarchy, 126
  approximating techniques, 17
  approximations, 6
  Bartlett, 14, 34, 92
  box, 35, 82, 84, 90, 91
  bump maps, 16
  clamping, 42
  code development, 128
  constant cost, 14
  convolution, 32, 33
  definition, 25
  depth of field, 14
  design, 19, 33
  difference of Gaussians, 121
  direct evaluation, 42
  DOG, 121
  edge detection, 38
  evaluation, 19, 129
  evaluation cost, 14
  EWA, (See EWA filters)
  Gaussian, 14, 29, 36, 37, 56, 57, 82, 90, 91, 95, 100, 111, 118, 127, 130
  implementation, 109
  motion-blur, 14
  NIL map approximation, 10, 116
  NIL maps, (See NIL maps)
  optical flow, 38
  prototyping, 109
  pyramid, 34
  rapid-prototyping, 123
  reconstruction, 11, 17, 44
  semantics, 17
  sinc, 35
  space invariant, 26
    definition, 26
  space-variant, 34
  surface-detection, 110, 120
  three-dimensional, 17
  two-dimensional, 32
  visual properties, 38
filters, space variant, 32
Fourier, 10, 32, 35, 43, 50, 85, 86, 89, 131
frame buffer, 24
  definition, 24
fur, 41
Gaussian filter, (See filter, Gaussian)
geology, 7
image fidelity, 82
intarsia, 31
integral, 154
iso-surface, 44, 122
Lagrange polynomials, 75
marble, 92
marching cubes, 122
mean square error, 83
medical imaging, 7
MIP map, 28, 71
  definition, 28
motion blur, 103, 173
MRI, 2
MSE, 83, 89
NIL maps, 10, 28, 30, 36, 37, 39, 47, 49, 59, 60, 84, 100, 105, 129, 131-133
  definition, 28
  high level view of code, 170
  implementation, 59
  implementation details, 150
  motion blur code, 173
  technique evaluation, 98
  volume rendering, 111
noise, 39-41, 86
  band-limited, 41
normalization, 68
numerical integration, (See quadrature)
Nyquist frequency, 11
occlusion, 4, 9, 33
OMNIMAX™, 56
optimal, 95
parametrization, 14, 31, 32
perspective, 12, 33
pixel, 24, 32, 52, 82
  definition, 24
pixel-based, 3, 9
point sample, 10
procedural, 5
procedural texture, 43, 71, 82, 86
procedural volumetric data, 23
  definition, 23
prototyping, 130
pyramidal, 14, 59
pyramidal data structure, 27
  definition, 27
quadrature, 42, 52, 55, 90, 134
  Gaussian, 55, 90, 93, 130
  Simpson's, 42, 55, 90, 93, 133
radially symmetric filters, 57
ray marching, 27, 41, 132
  definition, 27
ray tracing, 26
  definition, 26
reconstruction, 81
reconstruction filter, 18, 130, 157
ringing, 38, 81
sample and hold, 44, 151
sampling, 32, 97
signal to noise ratio, 83
singularity, 32
sleep apnea, 110
SNR, 83, 89
solid textures, 1, 47
splatting, 9
step function, 87
super-sampling, 95
surface extraction, 43
symmetric filters, 95
texel, 13, 25, 56
  definition, 25
texture, 23
  colour, 17
  definition, 23
  reaction diffusion, 39
  solid, 15, 40
  three-dimensional, 6, 15, 30
texture mapping, 129
texture models, 38
three-dimensional texture, 40
thresholding, 41
transfer functions, 9, 29, 47, 69
  definition, 29
trigger point, 122
trigger points, 62, 79, 102, 126
trilinear interpolation, 157
turbulence, 86
two-dimensional texture mapping, 32
veneering, 31
visual characteristics, 38
volume rendering, 2, 30, 43, 131
  direct display, 3
  surface extraction, 3
volumetric data, 23, 43, 44, 129, 133
  definition, 23
voxel, 25, 82
  definition, 25
voxel splatting, 9, 27
  definition, 27
voxel-based, 4, 8
white noise, 39
