BRDF Acquisition with Basis Illumination

by Shruthi Achutha

B.E., Visweshariah Technological University, 2003

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF
Master of Science
in
The Faculty of Graduate Studies
(Computer Science)

The University of British Columbia
July 2006

© Shruthi Achutha 2006

Abstract

To create a realistic image we need to characterize how light reflects off a surface. In optics terminology, the complete Bidirectional Reflectance Distribution Function is needed. At a given point on a surface the BRDF is a function of two directions, one toward the light and one toward the viewer. Any device for measuring the 4D reflectance data has to obtain measurements over the hemisphere of incident and exitant directions, which can be quite tedious if done using the brute-force approach. In this thesis, we describe an efficient image-based acquisition setup that has no moving parts and can acquire reflectance data in a matter of a few minutes. We make this possible by using curved reflective surfaces that eliminate the need to move either the camera or the light source. The acquisition speedup mostly comes from the way we optically sample the BRDF data into a suitable basis. This also saves us a postprocess compression of the data. We then encode the data into a compact form that is suitable for use in various rendering systems.

Contents

Abstract
Contents
List of Tables
List of Figures
Acknowledgements
1 Introduction
  1.1 The Bidirectional Reflectance Distribution Function
  1.2 Approaches to modeling BRDFs
  1.3 Thesis Organization
2 Related Work
  2.1 Analytical Reflectance Models
  2.2 BRDF Acquisition
  2.3 BRDF Representation and Storage
3 Physical Setup and Design
  3.1 Apparatus
  3.2 Design
    3.2.1 Final Design
    3.2.2 Design Validation
  3.3 Basis Functions
    3.3.1 Basis Validation
  3.4 Fabrication
4 Acquisition and Results
  4.1 Calibration
    4.1.1 Physical Calibration
    4.1.2 Reflectance Calibration
  4.2 Acquisition
    4.2.1 Preprocessing Basis Images
    4.2.2 Data Capture
    4.2.3 Postprocessing Measured Data
  4.3 Rendering and Evaluation
  4.4 Discussion
5 Conclusions and Future Work
Bibliography
A Appendix: Radiometric Terms

List of Tables

3.1 Design Parameters

List of Figures

1.1 BRDF Geometry
2.1 A typical gonioreflectometer
2.2 BRDF capture using a camera
3.1 Physical setup of our acquisition bench
3.2 Snapshot of a working prototype of our BRDF acquisition setup
3.3 Parabolic mirror and mounting beam
3.4 Role of the beam splitter
3.5 Depicting various parameters in the design of the optical components
3.6 Depicting the design process
3.7 Path of the extreme rays through the projector aperture
3.8 The 2D prototype used for design validation
3.9 Angular plots of the zonal basis functions
3.10 Depicting the processing pipeline for zonal basis validation
3.11 Profile templates for parabola and dome surface verification
4.1 Projector Calibration
4.2 Camera Calibration
4.3 18% diffuse gray card used as the reflectance standard
4.4 Images of the gray card captured by the camera
4.5 Acquisition Pipeline
4.6 An overview of the preprocessing process
4.7 An overview of the postprocessing pipeline
4.8 Buddha model rendered in Eucalyptus Grove environment

Acknowledgements

I would like to thank my supervisor, Wolfgang Heidrich, and my project partner, Abhijeet Ghosh, for all their ideas, guidance and encouragement. It was a pleasure to work with them. I want to thank my family for their loving support: my mother Tara Achutha, my father A. K. Achutha and my brother Guru Pradeep. I also want to thank my friends here at UBC, including Shantanu, Mugdha, Tanaya, Hagit, Dhruti and all the others, for making my time during the graduate program such an enjoyable experience and for being with me at various stages of my thesis writing in particular and my Masters program in general. I would also like to thank everyone in the PSM and Imager Lab for their help on various occasions. Finally, I would like to thank everyone in the Department of Computer Science and at UBC who has helped me directly or indirectly in making my graduate experience successful.

Chapter 1

Introduction

The appearance of an object's material goes a long way in determining what we perceive that object to be. This affects how we might interact with the object, how much value we attach to it, how we use it, and various other factors. In art and architecture, materials and colors are chosen so as to create a specific look or effect. For the same reason, synthesizing realistic images of objects and scenes has been one of the major goals of computer graphics. We could use skilled artistry to create highly detailed and realistic models. The other option is to capture the appearance of real-world materials and digitize it. The applications of such digitized data range from image synthesis, e-commerce and digital libraries to cultural heritage.

To create such a realistic image we need to characterize how light reflects off a surface. In optics terminology, the complete Bidirectional Reflectance Distribution Function described by Nicodemus et al. [27] is needed. At a given point on a surface the BRDF is a function of two directions, one toward the light and one toward the viewer. The characteristics of the BRDF determine what "type" of material the viewer thinks the displayed object is composed of, so the choice of BRDF model and its parameters is important. There are a variety of basic strategies for modeling BRDFs. This thesis describes a novel design that we built to capture the reflectance of a material. Section 1.1 gives a more precise definition of the BRDF, Section 1.2 gives a brief description of the strategies used to model the BRDF of a material, and Section 1.3 gives an outline of how the rest of the thesis is organized.
Figure 1.1: BRDF Geometry

1.1 The Bidirectional Reflectance Distribution Function

As light flows through a medium and reaches an opaque surface, part of it gets absorbed by the surface and the rest gets reflected. How much of the incoming light gets reflected depends on the material properties, including its composition and structure. The simplest approach to characterizing the surface reflectance of a material is to describe how light arriving at a point from an infinitesimal solid angle gets reflected in all directions. We can then model the look of any material by integrating over the incident distribution. The bidirectional reflectance distribution function, or BRDF, takes this approach to describing the reflectance of a surface. The geometry of the bidirectional reflection process is depicted in Figure 1.1. Light arriving at a differential surface dA from a direction (\theta_i, \phi_i) through a solid angle d\omega_i is reflected in the direction (\theta_r, \phi_r) within a cone of solid angle d\omega_r. The BRDF is defined as the ratio of the directionally reflected radiance to the directionally incident irradiance:

f_{r,\lambda}(\omega_i, \omega_r) = \frac{dL_{\lambda,r}(\omega_r)}{dE_{\lambda,i}(\omega_i)}    (1.1)

Here dE_{\lambda,i} is the incident spectral irradiance (i.e., the incident flux of a given wavelength per unit area of the surface) and dL_{\lambda,r} is the reflected spectral radiance (i.e., the reflected flux of a given wavelength per unit area per unit solid angle). Since the BRDF definition above includes a division by the solid angle (which has units of steradians [sr]), the units of a BRDF are inverse steradians [1/sr]. We can drop the wavelength notation for simplicity; in a BRDF capture system, each measurement is integrated over a finite bandwidth of radiation. The BRDF then becomes

f_r(\omega_i, \omega_r) = \frac{dL_r(\omega_r)}{dE_i(\omega_i)}    (1.2)

This model of reflectance assumes that the surface is homogeneous. The BRDF is a function of four variables: two specify the incoming light direction and two specify the outgoing light direction. Note that the BRDF is reciprocal, i.e., if the incoming and outgoing directions are reversed, the function still has the same value. Also, when there is no scattering along the ray, the radiance does not vary along the direction of propagation, allowing us to measure the reflected radiance at any distance from the reflecting surface.

1.2 Approaches to modeling BRDFs

In this section we provide a brief description of the strategies used to model the BRDF of a material. They are covered in more detail in the next chapter.

Early attempts to characterize realistic reflection involved deriving a mathematical representation for light transport. These include physically inspired analytical reflection models and empirical models that provide closed-form expressions for the reflectance function. Such models provide only an approximation to the reflectance of real-world materials, and most of them describe only a particular subclass of materials. Over the years these models have evolved to include many complex physical phenomena, but obtaining intuitive material parameters for them remains a difficult task.

Another approach that is similar in spirit is the measure-and-fit approach. This involves measuring the BRDF for different combinations of incident and exitant directions and then fitting the measured data to an analytical model using some form of optimization.
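To make the measure-and-fit idea concrete, here is a minimal sketch (not part of this thesis) that fits a simplified, isotropic Phong-style lobe to a handful of hypothetical BRDF samples using non-linear least squares. The data values, the single-angle parameterization and the parameter names are illustrative assumptions only.

```python
import numpy as np
from scipy.optimize import curve_fit

def phong_lobe(cos_alpha, kd, ks, n):
    # Diffuse term plus a specular lobe; cos_alpha is the cosine of the angle
    # between the view direction and the mirror-reflected light direction.
    return kd / np.pi + ks * np.maximum(cos_alpha, 0.0) ** n

# Hypothetical measurements for a few (incident, exitant) direction pairs.
cos_alpha = np.array([0.99, 0.95, 0.90, 0.80, 0.60, 0.30, 0.05])
measured  = np.array([2.10, 1.25, 0.74, 0.33, 0.14, 0.09, 0.08])

(kd, ks, n), _ = curve_fit(phong_lobe, cos_alpha, measured, p0=(0.2, 1.0, 10.0))
print(f"fitted kd={kd:.3f}  ks={ks:.3f}  exponent n={n:.1f}")
```

In practice the optimization runs over the full 4D direction space and a richer analytical model, which is exactly where the difficulties discussed next arise.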
There are several problems with this approach including salient features of measured data being eliminated due to modeling errors in the analytical model and the choice of the optimization parameter being non-intuitive. The other alternative is to actually measure the reflectance data from real world materials. The earliest attempts in this direction was the construction of a device called gonioreflectometer.  Here a sample is placed at the center of the device, a light source  and a photometer are moved about the hemisphere above the sample and measurements of the reflectance are taken every few degrees. Such devices measure a single radiance value at a time rendering the acquisition process very time-consuming. Advances in digital photography and high dynamic range imaging techniques have led to efforts involving image-based BRDF acquisition techniques. There has been considerable development in this area. But most of them require highly controlled lighting environment with multiple light sources over the hemisphere of incoming directions and mechanical parts for moving the camera and/or the test sample. Such data acquisitions, though a cheaper and faster alternative to gonioreflectometers, can easily take upto several hours to obtain a dense sample set. Also, the huge amount of data obtained needs to be processed in order to remove noise and to compact them in a way so that they can be used in real-time rendering applications. In this thesis, we describe an efficient image-based acquisition setup that has no moving parts and can acquire reflectance data in a matter of a few minutes. We make this possible by using curved reflective surfaces that eliminate the need to move either the camera or the light source. The acquisition speedup mostly comes from the way we optically sample the BRDF data into a suitable basis. This also saves us the postprocess compression of data. We then encode the data into a compact form that is suitable for use in various rendering systems.  Chapter 1. Introduction  1.3  5  Thesis Organization  This thesis describes the design, principles and fabrication of a working prototype of our optical design for efficient image based BRDF acquisition. The next chapter, Chapter 2 provides information regarding the related work in the field of BRDF acquisition and representation. Chapter 3 provides a discussion of the mechanical design and physical setup of the optical bench, the geometrical optics involved in measurement and a brief description of the basis functions that was developed and used for efficient acquisition. Chapter 4 contains details of calibration, acquisition and some preliminary measurement results of various materials. Chapter 5 presents the conclusions and future work. A brief description of various radiometric terms is provided in the appendix. Credits The concept and design of the acquisition setup is a joint work in collaboration with Abhijeet Ghosh and Wolfgang Heidrich. I worked on creating the geometry of the various optical components, validating the design and basis functions. The basis functions were developed by Abhijeet. The various calibrations were done again in collaboration with Abhijeet. I worked on the various stages involved in the acquisition while Abhijeet worked on rendering the acquired data.  6  Chapter 2  Related Work Before we present our proposed algorithm for the acquisition we give an overview of the relevant previous work in this chapter. 
We begin with a review of various analytical BRDF models that give a mathematical representation for light reflection. Then we review various BRDF acquisition techniques used in computer graphics. Finally, we discuss the various basis representations used for BRDFs.

2.1 Analytical Reflectance Models

Early attempts at approximating surface reflectance provided a compact mathematical representation based on a small number of parameters. The parameters are either obtained by fitting to measured data or by manual tweaking to approximate the desired material. These analytical models mostly fall into two categories: (1) empirical models that are not based on the underlying physics, but provide a class of functions that can be used to approximate reflectance; and (2) physics-based models that take into account the physical properties of the material while modeling a specific phenomenon or class of material.

Empirical Models

One of the earliest models, which is still widely used today, was proposed by Phong [28]. The model is a sum of a diffuse component and a cosine-weighted specular lobe. It can be expressed as

f_r(l, v) = \rho_d + \rho_s \frac{(v \cdot r)^q}{n \cdot l}    (2.1)

where l is the normalized vector towards the light; r is the reflected light direction; q is the specular reflection exponent; and \rho_d and \rho_s are the diffuse and specular coefficients. This model is based on ad hoc observation of reflectance and is neither energy conserving nor reciprocal. Blinn [3] adapted the model for a more physically accurate reflection by computing the specular component from the halfway vector h:

f_r(l, v) = \rho_d + \rho_s \frac{(n \cdot h)^q}{n \cdot l}    (2.2)

Lafortune [18] presented a more elaborate model based on the Phong model that accounts for off-specular peaks, retro-reflection and anisotropy:

f_r(l, v) = \frac{\rho_d}{\pi} + \sum_i \left[ C_{x,i} l_x v_x + C_{y,i} l_y v_y + C_{z,i} l_z v_z \right]^{n_i}    (2.3)

Ward [35] proposed a model for anisotropic reflection based on an elliptical Gaussian distribution of normals. It is both energy conserving and reciprocal. It can be expressed as

f_r(l, v) = \frac{\rho_d}{\pi} + \rho_s \frac{1}{\sqrt{\cos\theta_i \cos\theta_o}} \cdot \frac{\exp\left[ -\tan^2\delta \left( \frac{\cos^2\phi}{\alpha_x^2} + \frac{\sin^2\phi}{\alpha_y^2} \right) \right]}{4\pi \alpha_x \alpha_y}    (2.4)

where \delta is the angle between the half vector and the normal; \phi is the azimuth angle of the half vector projected into the surface plane; and \alpha_x, \alpha_y are the standard deviations of the surface slope in the x and y directions, respectively. This model does not account for Fresnel effects or retroreflection.

Physically-based Models

These models were initially developed by applied physicists. The Torrance-Sparrow model [34] derives the specular component by assuming the reflecting surface to be composed of microfacets following a Gaussian distribution. It includes a Fresnel term for off-specularity and also accounts for shadowing and masking with respect to the microfacet distribution. Ashikhmin [2] proposed an expression for the shadowing and masking for an arbitrary distribution of microfacet normals. Poulin and Fournier [29]
Numerous other models of reflection have been developed in computer graphics and vision communities.  2.2  BRDF Acquisition  Despite the complexity of recent analytical models, there are still some situations where they do not capture the reflectance of some real world materials. Also, the parameters required by these models are not always easily obtainable. Measurement is the most straightforward approach to obtain BRDF data for a broad class of materials. This section presents a survey of such measurement techniques.  Gonioreflectometers A device which measures BRDF is called a gonioreflectometer (gonio refers to the capability to measure data in multiple directions). A typical arrangement as shown in  Chapter 2. Related Work  9  Figure 2.2: Setup to simultaneously capture reflectance in all outgoing directions  Figure 2.1 consists of a photometer that moves with respect to a surface sample which in turn moves with respect to a light source. There are mechanical elements to ensure the four degrees of freedom required to measure the complete reflectance function. The main problem with using a gonioreflectometer is its cost which can be attributed to its inefficiency; it measures a single value at a time and hence a dense set of measurements can take a large amount of time. Also, it requires a highly controlled environment to prevent data corruption by noise. Dana et al [7] developed such an equipment for measuring Bidirectional Texture Functions (BTFs). Their system comprised of a robot, lamp, PC, photometer and a video camera. The planar sample and camera are moved to obtain 205 different measurements over the entire hemisphere of directions. They compiled the data for 60 different materials into a publicly available CUReT database [5]. This sparse data set requires a function fitting to arrive at a useful model.  Chapter 2. Related Work  10  Image Based B R D F Acquisition Ward's imaging gonioreflectometer  [35] as shown in Figure 2.2 was a significantly fast  and inexpensive measurement device. He used a semisilvered hemispherical mirror, a C C D camera with a fisheye lens and a movable collimated light source to capture the reflectance data. This setup enables every outgoing direction to be measured with a single image, thereby eliminating the need for 2 degrees of movement out of 4 that is required for BRDF measurement. This greatly reduces the acquisition time. The drawback of the design is the difficulty in measuring BRDF values near grazing angles. Since most specular materials are specular in the grazing directions this setup cannot be used to measure highly specular BRDFs. Marschner et al. [23] constructed an efficient BRDF measurement device based on two cameras, a light source and a spherical test sample of homogenous material. Their method works by taking images of the sample under illumination from an orbiting light source and generates densely spaced samples. When the measured sample set is sparse, they can be fit to an analytical BRDF model using some optimization strategy. Lensch et al. [20] presented a clustering procedure to model spatially varying BRDFs andfiteach cluster to a Lafortune model [18]. They then used principal  component analysis to compute basis BRDFs for material  clusters. Dana [6] built a BRDF/BTF measurement device that used the approach of curved mirrors to remove the need for hemispherical positioning of the camera and light source. Simple planar translations are used to vary the illumination direction. 
The device consists of optical components such as a beam splitter, concave parabolic mirror, C C D camera and translation stages. It allows multiple viewing directions to be measured simultaneously. This arrangement is very similar to our design. But as we will describe later, we can capture a much wider range of directions. Matusik et al [25] presented a radically different approach to modeling BRDFs. They proposed a generative data-driven reflectance model for isotropic BRDFs based on acquired reflectance data. They acquired BRDF data for a large representative set of materials using a device similar to that built by Marschner et al [23]. They used  Chapter 2. Related Work  11  linear and non-linear dimensionality reduction techniques to obtain a low-dimensional manifold that characterizes the BRDFs. They let users define intuitive parameters for navigating within BRDF models and each of this movement in the low-dimensional space produces a novel but valid BRDF. The main drawback of this approach is its size. Han and Perlin [13] developed a unique BTF measuring device using a kaleidoscope that could simultaneously illuminate and image a sample from various directions using a single camera and light source. The advantage includes low-cost, no moving parts, insltu measurement capability and portability. This can be used with data-driven surface models. The number of simultaneous captures is definitely the main advantage, but this is still much smaller that what we can capture with our setup. Recently Ngan et al [26] built a new setup for high resolution acquisition of anisotropic BRDFs. Their setup is similar to Lu et al [21] , but instead of a spherical sample they use strips of sample material on a rotating cylinder, obtained from planar samples of the material at various orientations. The acquisition takes upto 16 hours for each sample. They sample isotropic and anisotropic data with intervals of 1 and 2 degrees respectively, and about 85% and 25% of the sample set bins have measured data.  2.3  BRDF Representation and Storage  The reflectance data directly obtained through measurements is unwieldy to store due to its high dimensionality and size. Also, there could be inherent noise in the data which makes it unsuitable for direct rendering. Researchers have tackled this problem in a number of ways. Some have resorted to representing the data in some basis; whereas others have tried fitting an analytical function to approximate the data. In this section we will look at some of the ways in which measured data can be represented using basis functions.  Chapter 2. Related Work  12  Spherical Harmonics Spherical Harmonics have been a pretty popular basis for representing BRDF data [30, 36]. They are the 2D analogue of Fourier series on a sphere. The BRDF function is approximated by a finite number of terms of the spherical harmonic series defined as  /(e,(t>) = ir= £L-< V / , ( e ^ ) , 0  where  m  (2-5)  is the spherical harmonic basis function of order and degree m, and Q  m  is  some constant. The main problem in using spherical harmonics to represent BRDFs is that a large number of terms is required to represent asymmetric and high frequency features. This is due to the fact that BRDF is a function defined on a hemisphere whereas spherical harmonic basis functions are defined on the whole sphere.  Spherical Wavelets Wavelets are suitable for representing functions containing high frequency content in some regions and low frequency in others. Schroeder et al. 
[31] extended wavelets to spherical domain to efficiently represent spherical functions. They used these spherical wavelets to represent a 2D slice of BRDF by keeping the viewing direction constant. Lalonde [19] proposed solutions to representing the four-dimensional BRDF with wavelets. The main problem with wavelets is that when a small number of wavelet coefficients are used to represent a smooth function, it can lead to aliasing problems. This affects the quality of rendering adversely.  Zernike Polynomials The approximation described above are sphere-based, i.e., they represent a hemispherical function as a special case of a spherical function. The other approach is to map points on a hemisphere onto a disk. Keondrink [17] takes this approach of representing BRDFs using orthonormal basis functions on the unit disk. They project the Zemike polynomial functions onto a hemisphere and use tensor-products of these functions to represent functions defined on a pair of hemispheres. The CUReT BRDF database  Chapter 2. Related Work  13  represents measured BRDFs using these basis functions. Unfortunately, there are problems associated even with this approach. The truncation of high-frequency coefficients is likely to cause "ringing". Further, the evaluation of BRDFs at a particular incident and exitant directions requires computation time that is proportional to the number of nonzero coefficients and hence the problem of high storage and computation costs. Moreover, rotation matrices are not available for them.  Hemispherical Basis Makhotkin's hemispherical harmonics [22], derived from shifted adjoint Jacobi polynomials, can be computed easily using a simple recurrence relation. But they lacked rotation matrices. Gautron et al [10] proposed a novel hemispherical basis derived from associated Legendre polynomials. The hemispherical functions have to be converted to spherical harmonics for arbitrary rotations. Although Hemispherical functions provide a natural basis for the representation of BRDF data, they are not suitable for our, purpose. This is because our measurement space comprises of a hemispherical region with a 9° hole at the top. Ghosh et al [11] have developed a novel basis for this zone of directions on the hemisphere where we actually capture the BRDF data. This zonal basis is a generalization to the hemispherical basis. They also provide a method to project zonal basis data onto spherical harmonic basis. This allows data captured by our system to be easily integrated into existing rendering algorithms.  14  Chapter 3  Physical Setup and Design A BRDF is a 4D function, where for each incoming illumination direction, we have to measure the outgoing reflectance in every direction over the hemisphere. Figure 2.1 shows the setup that would be required. The light source illuminates aflatsample from a single direction and the photometer measures the reflectance distribution over the hemisphere sequentially. This is repeated for each incoming direction over the hemisphere. Sampling this high dimensional space sequentially is impractical and hence the need to look at approaches to measure multiple points simultaneously. Now suppose we place a mirrored dome over the sample and photograph it as shown in Figure 2.2. We can simultaneously capture all outgoing directions in a single photograph using a fisheye lens camera [35]. This speeds up the capture process hugely. However, there still remains the issue of point sampling every incoming direction. 
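To get a feel for the numbers involved, and to preview the gain from the basis illumination introduced next, the following back-of-the-envelope count compares the three strategies. The 1-degree resolution and the basis order are illustrative assumptions, not figures measured in this chapter.

```python
# Rough count of captures needed per material (illustrative assumptions).
samples_per_hemisphere = 90 * 360             # ~1 sample per degree in (theta, phi)

brute_force   = samples_per_hemisphere ** 2   # one radiance value per (incoming, outgoing) pair
photo_per_dir = samples_per_hemisphere        # one photo captures all outgoing directions at once
basis_images  = (6 + 1) ** 2                  # e.g. all basis functions up to order 6

print(f"{brute_force:.2e} point measurements")   # ~1.05e+09
print(f"{photo_per_dir} photographs")            # 32400
print(f"{basis_images} basis photographs")       # 49
```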
If we think about the problem from a mathematical perspective, the incoming illumination can be thought of as a function which can be expressed as a linear combination of a set of basis functions defined over the hemisphere. To enable this kind of basis illumination we introduced a parabolic mirror as shown in Figure 3.1. The basis image from the projector illuminates the parabola and the geometric optics work in a way that the sample gets illuminated from a wide range of directions over the hemisphere. Our acquisition setup is efficient and is less error prone since it has no moving parts. Everything including the camera, the projector (light source), sample material and mirrored components remain fixed. By projecting and capturing the response to a set of basis functions, we achieve the effect of providing the four degrees of freedom required by any BRDF measurement device. We optically sample the illumination and  Chapter 3. Physical Setup and Design  ^ ^ \  * Camera  • Sample Material  Figure 3.1: Physical setup of our acquisition bench.  15  Chapter 3. Physical Setup and Design  16  reflectance data as a part of the capture procedure. This in turn has added benefits in terms of noise reduction, avoiding data redundancy and post processing. Our setup consists of mirrored components (dome and parabola) that cover approximately 90% of incident and exitant directions over the hemisphere; a camera which is positioned in a way as to capture the full zone of reflecting directions measurable by the system; and a projector that acts as the light source, projecting a sequence of basis images that cover the range of incident directions. The set of basis functions that are projected can be used to approximate any illumination environment. When we project a particular basis function onto the sample, what we capture is the response of the sample to that particular basis function. Once we have the response of the sample to a suitable number of basis illumination, we have enough information to compute the basis coefficients, which in turn can be used to compute the sample's response to any illumination environment. This acquisition process turns out to be orders of magnitude faster than point sampling and measuring for every incident direction at resolutions that we achieve (i.e., atleast 1 measurement per degree). Also, what we capture is a low pass filtered version of the reflectance data which is compact and doesn't require any further application of data compression techniques. The amount of averaging that happens depends on the maximum order of the basis images for which we acquire data. This can be varied depending on the specific type of BRDF we are acquiring. A novel basis was developed over the zone of directions where we can project and acquire data. We also carried out extensive simulations to design, optimize and validate our setup. In the sections that follow, we will describe these simulation tests, the fabrication process" and the setup and working of our optical bench. We will also provide a brief overview of the zonal harmonics.  3.1  Apparatus  Figure 3.2 shows a snapshot of our BRDF measurement device. Here is a detailed description of the various components.  Chapter 3. Physical Setup and Design  Figure 3.2: Snapshot of a working prototype of our B R D F acquisition setup.  Figure 3.3: Parabolic mirror and mounting beam shown seperately. In the actual setup, the parabola is mounted on the beam. 
The beam can be inserted at the centre of the dome and supported at the notches on the outer surface of the dome.  17  Chapter 3. Physical Setup and Design  18  T3  1  4  Figure 3.4: Left: Beam splitter reflects the light from the projector onto the parabola. Right: Light from the parabola passes through to the camera.  1. The mirrored components, i.e., the dome and the parabola were custom designed and manufactured at our lab. Their design and fabrication will be discussed in detail in later sections of this chapter. 2. DLP Projector (BenQ PB6210) that acts as the source of light by projecting basis images onto the parabola. The parabola reflects the image onto the dome which in turn projects light onto the sample material from a wide range of directions. Resolution of the projector is 1024X768 and it has a 2000 Lumens peak illumination intensity. 3. Prosilica EC 1350C is a firewire machine vision camera that captures images of the parabola which actually reflects light coming from the dome which in turn is illuminated by the reflectance of the sample material. Resolution of the camera is 1360 x 1024 and it can acquire 12-bits per color channel. 4. A beam splitter is used so as to reflect the light from the projector onto the parabola and at the same time to allow light reflected from the parabola to pass through so that it can be captured by the camera behind it as shown in Figure 3.4. 5. The sample material is mounted onto a cylinder that can be inserted into the base plate attached to the base of the dome. Height of the cylinder can be adjusted  Chapter 3. Physical Setup and Design  19  Camera/ Projector  y  Beamsplitter  -cp  Mirrored Surface Parabola Opaque Surtace  Sample  Figure 3.5: Depicting various parameters in the design of the optical components.  based on the material thickness so as to ensure that the top surface of the material is at the right position for optimum focus. 6. A lens is used to focus the projector at the required distance onto the parabolic mirror. 7. A narrow suspension beam shown in Figure 3.3 supports the parabola at the right height. It spans the width of the dome and is supported by a notch in the dome. The supporting clamps, beams and plates are the remaining components in the system.  3.2  Design  The mirrored components specification and other design parameters were obtained by carrying out simulations of the geometric optics involved. We simulated the camera and projector by thin lens optics, considering various optical parameters like focal distance, aperture size , etc. We wrote a ray tracer taking into account the camera and projector resolutions and modeled everything in 2D since the optics follow automatically to 3D due to symmetry around the optical axis.  Chapter 3. Physical Setup and Design  20  Figure 3.6: Depicting the design process.  The first step in the design process was to parameterize the 2D parabola, its distances from the sample, projector and camera. Due to Helmholtz reciprocity, the camera and projector can be positioned interchangeably without affecting the results of the simulation. So we simply modeled them both at the same position as shown in Figure 3.5. The beam splitter takes care of this in the actual setup. Given these parameters we iteratively built the polyline (which eventually described the dome in 3D) by Euler integration. Figure 3.6 shows the steps in detail. 
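In outline, the iterative construction that Figure 3.6 depicts, and that the next paragraphs walk through, can be sketched as follows. This is a simplified 2D sketch rather than the actual design simulator: it assumes the rays reflected off the parabola are already available as (origin, unit direction) pairs ordered from the grazing ray upward, and that the sample point lies at the origin.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def build_dome_profile(parabola_rays, sample=np.zeros(2)):
    # parabola_rays: (origin, unit direction) pairs for rays already reflected
    # off the parabolic mirror, ordered from the grazing ray upward.
    p, d = parabola_rays[0]
    t = (sample[1] - p[1]) / d[1]        # first point: intersect the horizontal
    point, d_prev = p + t * d, d         # line through the sample surface
    profile = [point]
    for p, d in parabola_rays[1:]:
        w = normalize(sample - point)    # desired outgoing direction: toward the sample
        n = normalize(w - d_prev)        # mirror normal = bisector that reflects d_prev onto w
        t = np.dot(point - p, n) / np.dot(d, n)   # intersect the next ray with that mirror line
        point, d_prev = p + t * d, d
        profile.append(point)
    return profile                       # polyline; revolved about the optical axis it forms the dome
```

Each returned point starts a planar mirror segment whose normal bisects the incoming ray and the direction toward the sample, which is the Euler-style step described next.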
The first point is given by the intersection of the reflected ray of the ray incident tangential to the lowest point on the parabolic mirror and the horizontal line through the surface of the sample material. Let a be the angle between the horizontal line and the reflected ray. Consider placing a planar mirror perpendicular to the bisector of angle a. The ray CP\ gets reflected along P\D\ by the parabolic mirror and at D\ the planar mirror reflects it along D\ O. This ray would arrive at the sample surface at grazing angle.  Chapter 3. Physical Setup and Design  21  Consider the next ray CPi. This when reflected by the parabolic mirror intersects the planar mirror at point Di- As described above, the angle bisector between this ray P2D2 and D2O gives the surface normal of the next planar mirror segment starting at D2. This process continues with the incident rays getting successively closer to normal incidence, until they get occluded by the parabolic mirror as shown in Figure 3.6. That is when the iteration concludes. What we now have is a set of points that when connected by mirrored segments generate reflections such that every ray originating from the projector, incident on the parabola (from various directions) gets reflected onto the mirror segments and again gets reflected to finally converge to a point on the sample material. The set of points generated from the simulation happen to resemble an arc which when rotated about the optical axis, takes the shape of a dome. The various geometric parameters of this design were manually optimized according to the following design goals: • Maximize the range of measurable directions.  1  • Maximize the number of measurements (pixels) possible per direction. • Convenient spacing of various components. • Robust to minor miscalibration errors of the optical components. • Robust to minor fabrication errors of the dome and parabola.  3.2.1  F i n a l Design  We carried out numerous simulations and finally came up with a design that was a good tradeoff between the various geometric constraints and robust to errors due to misalignments and miscalibrations. Table 3.1 lists the design parameters. With these parameters we were able to project about 100 pixels between the vertex and tangent of the parabola and hence obtain about 1 pixel/degree measurement.  Chapter 3. Physical Setup and Design  22  9° to 90°  Measurable 9 range Camera(COP) - parabola distance d  27 cm  Sample - parabola distance d  13.5 cm  Parabola vertical extent d  2.25 cm  Parabola tangent angle %  20°  Dome dimensions  11" x l l " x 10"  cp  0  p  Table 3.1: Design Parameters  3.2.2  Design Validation  In this section we will discuss in detail the steps that we carried out to ensure that our design was optimal and robust before we went ahead with the manufacturing. Our design validation process involved 2 major steps: • Software simulations of real camera and projector optics with finite apertures taking into account minor misalignments of various parts. • Physical validation of the optics process by manufacturing a 2D prototype of the setup.  Geometric Optics We extended our raytracer to simulate camera and projector as thin lens devices. We assumed an aperture of size 0.5 cm and traced the left and right extreme rays for every ray that we had used in our pin-hole camera design as shown in Figure 3.7. The left and right rays were incident at a slightly different angles on the parabola and hence arrived at different angles on the sample material. 
Our objective was to minimize the average of these angular offsets, a; and a , over all the rays sampled from the projector. r  We had to test for various design parameters (as described in the previous section) and also the focal lengths to obtain a good balance. We had a manufacturing constraint with respect to the dome size imposed by our rapid prototyping machine. To test the robustness of our design we simulated the optics by adding minor misalignments for various components as follows:  Chapter 3. Physical Setup and Design  Figure 3.7: Tracing the path of the leftmost and the rightmost rays through the projector aperture. Here focal distance is the average of the total path length of the ray starting from the projector until it reaches the sample surface after reflections at the parabola and dome.  23  Chapter 3. Physical Setup and Design  24  Figure 3.8: The 2 D prototype used for design validation  • Horizontal and vertical offsets for the camera. • Horizontal and vertical offsets for the projector. • Horizontal offset for the parabola. • Horizontal offset for the dome This caused the rays to either not converge at all or converge at a point away from the sample surface. We had to look for a design that produced minimal convergence error, given minor misalignments. It was a difficult optimization process and we proceeded by prioritizing the design parameters, fixing the desirable range for most of them and used a systematic trial and error process to find the rest.  2D Prototype Design In addition to carrying out software simulations to validate our design, we also built a 2D prototype of a scaled down version of the final design as shown in Figure 3.8. We built the prototype in our lab using a STRATASYS Vantage i rapid prototyping machine. We used a reflective foil for the mirrored parts and sample surface. Using a laser beam we were able to recreate the geometric optics that we had simulated thus validating our design.  25  Chapter 3. Physical Setup and Design  3.3  Basis Functions  Abhijeet et al [11] developed a set of basis functions similar to the Spherical Harmonic basis, to sample the reflectance over the measurable range of directions. In this section, we will describe the orthonormal zonal basis and also discuss a way to convert them from the zonal space to spherical space. The zonal basis were derived from shifted Associated Legendre Polynomials (ALPs). These zonal basis (ZB) functions Z'"(Q, <()) , where m e {0,...,/}, are orthogonal over the interval [a, P] x [0, 2JT]. They can be constructed from shifted ALPs P'" defined over the interval i f  [a, b] where  a = J^sinSc/S = cosP, and (3.1)  b = f£ smBd& /2  = cos a .  Given K'", the zonal normalization constant, (2/+l)(/-|w|)!  KT = \Ll 2n(b-a)(l-w, + \m\)\ fL  (3-2)  '  the zonal basis function Z™(0, (j)) can be defined as V^^'"cos(m(|))^ (cos0) m  zT(e,4>) =  V2Kpsin{-m<b)Pf (cosQ) m  ifm>0  ifm<0.  (3.3)  rfp}(cosQ) Our reflectance acquisition setup can measure data in the interval 9 £ [7t/20,7i/2], i.e. 9 ° , . . . ,90° from the surface normal. Thus, the zonal basis functions are orthogonal in the interval [TI/20, 7t/2] x [0, 2TC] where a and b that define the shifted ALPs Pj" are  a = cos$ = cosn/2 = 0, and (3.4)  b = cosa = cosn/20. Figure 3.9 gives the angular plots of the first few zonal basis functions, as copied from [11]. For a more detailed discussion please refer to [11].  Chapter 3. Physical Setup and Design  x f 7.-1  Zj  Zj-  3  A  A  i Z?  
AA  Zj  Figure 3.9: The plots of zonal basis functions Z  m ;  26  Zj  defined over the measurement space  [7i/20, JX/2] x [0, 2n], for / < 2 [copied from [11]].  Projecting zonal basis to spherical harmonics We need a way to project zonal basis into spherical harmonics for the following reasons, • SH basis works well with rendering algorithms since it supports 3D rotations. • There is a cap of directions near the surface normal where we cannot measure reflectance data and hence need a way to extrapolate the measured data.  Dual Basis The SH functions are not orthogonal over the zone of measured directions. So, they first define a set of dual basis functions that are orthogonal to SH over the zone. Let ZB be defined over the space [a, b]. Given F™, the primal SH bases, they define Yp, the corresponding dual bases over the zone as follows,  1  if / = p and m = q  (3.5)  [ Y, ?^dw m  Ja  0  otherwise  27  Chapter 3. Physical Setup and Design  Basis Image"... Generatoi  A  Input Illumination Zonal Basisllmages  [  BRDF Model', Model Parameters -.Materia'rSpecific Parameters 1  (Output! Reflectance in Zonal Basis  Projector-TofGameia Ray Tracer:  Zonal-BasisfeSpherical Harmonics • , Transformation Module  Output Reflectance in Spherical Basis !  Reflectance Data / * —  1  Data Extrapolation;  Model to Render Rendering: Systeml  ^Rendered Image . • ' ^  Environment Map (-  Figure 3.10: Depicting the processing pipeline for zonal basis validation.  and  ^ = 1^'.  (3.6)  r,s  The zonal coefficients ZJ" are then transformed by a basis change matrix C to obtain the corresponding SH coefficient fp. The elements of this matrix are given by Ct?=  [ Z"Y«d<o. h  (3.7)  Ja  3.3.1  Basis Validation  We took a systematic and thorough approach to test the zonal basis functions. Figure 3.10 gives theflowchartof the basis validation process. What follows is a brief description of the various modules, 1. The basis image generator outputs a set of images corresponding to the first few  Chapter 3. Physical Setup and Design  28  zonal harmonics. This depends on the material that we would like to simulate, higher the specularity or anisotropy, higher is the order of harmonics required. For a typical matte material 4  th  order harmonics would suffice whereas for a  highly anisotropic material like velvet or specular material like metal, it may go upto 6 order. ,h  2. The BRDF simulator basically implements various analytical BRDF models as follows, • Anisotropic Ward Model [35] • Ashikmin Shirley Microfacet Model [2] • Cook Torrance Model [4] • Ashikmin Shirley Phong BRDF Model [1] Given the choice of BRDF model, the model parameters and the material specific parameter values, the simulator generates BRDF value for any pair of incident and exitant directions. 3. The projector-to-camera ray tracer simulates the ray optics for every projector pixel in each basis image and uses the BRDF simulator to determine the value for every pixel as captured by the camera. Hence, for every basis image it generates the corresponding reflectance map as would have been generated by the camera. These reflectance values are in zonal basis. 4. The zonal to SH transformation module converts the reflectance values for zonal basis to spherical basis using the transformation matrices as described by equation 3.7. 5. This encoded data is then extrapolated in the region of the missing zone. 
The resulting data is used in a physically based ray tracer to render a given model in an illumination environment using the material selected in step 2.  Chapter 3. Physical Setup and Design  29  Figure 3.11: Profile templates for parabola and dome surface verification.  3.4  Fabrication  The 3D geometry for the dome, parabola and support structures were mostly developed using software we wrote for the purpose. We used SolidWorks 3D C A D [32] package for some minor geometry editing. We built these parts using a STRATASYS Vantage i rapid prototyping machine. The vantage machine builds objects in ABS plastic material. These parts required about five days of build time in total. It works is as follows: the program that controls the machine first slices the geometry into a number of layers, each of a particular thickness. Within each layer it generates toolpaths that describe the movement of the printhead that deposits the plastic and support material. Within each layer the build precision of the print head is very high, thus enabling high precision geometric details in the radial direction. But across layers this precision drops to about 7/1000'' of an inch. This causes tiny grooves of about 0.2mm thickness on a smooth 1  surface across layers. This was a big disadvantage to us since we required the parts to be built with optical precision for our acquisition setup. This meant that the parts had to be carefully polished to obtain a smooth surface for mirroring. We opted for a local commercial service to do the polishing and mirroring. Here is a brief description of the process involved. 1. The grooves on the curved surfaces were leveled by wet sanding with successively fine grid sand papers.  Chapter 3. Physical Setup and Design  30  2. It was then painted with black base paint. 3. This was followed by a coat of polyurethene based automotive primer. 4. A coat of polyurethene based clearcoat was then applied. This is a hardening agent. 5. An automotive grade polishing compound was then used to polish the surface. This was the final stage of polishing during which the polished surface was constantly measured using a profile template that we built for testing surface accuracy. These templates are shown in Figure 3.11. 6. MirraChrome [33] paint was used for mirroring the surface. MirraChrome plating provides 95% reflectivity of that provided by true chrome plating and is much thicker than other mirroring mechanisms such as vacuum deposition. Thus it is better suited for our purpose as it smoothes out minor surface inaccuracies in addition to providing a reflective surface. The thickness added to the surface by various paint coatings was about 5 — 6 mils and was accounted for when building the parts.  31  Chapter 4  Acquisition and Results Once the apparatus was set up, we carried out various calibration procedures. This involved camera and projector calibrations and reflectance calibration each of which will be discussed in detail in section 4.1. The acquisition process then involved some initial processing of basis data and of course the actual measurement. The measured data was then post processed to encode it in a format suitable for rendering. The measured reflectance data was then tested by rendering some scenes using a physically based ray tracer. The acquisition, post-processing and rendering procedures will be explained in detail in sections 4.2 and 4.3, respectively. A brief discussion of fabrication problems is given in section 4.4.  
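Both the basis images that are projected and the per-pixel responses used throughout the remainder of this chapter are expressed in the zonal basis of Equation 3.3. As a reference point, here is a small sketch of how such a basis image could be tabulated; it assumes the shifted associated Legendre polynomials are ordinary ALPs with cos θ linearly remapped from [a, b] to [-1, 1], which may differ in detail from the construction in [11].

```python
import numpy as np
from math import cos, factorial, pi
from scipy.special import lpmv

ALPHA, BETA = pi / 20.0, pi / 2.0       # measurable zone: theta in [9 deg, 90 deg]
A, B = cos(BETA), cos(ALPHA)            # corresponding cos(theta) interval [a, b]

def zonal_basis(l, m, theta, phi):
    # Evaluate Z_l^m(theta, phi) over the zone (cf. Equation 3.3).
    am = abs(m)
    k = np.sqrt((2 * l + 1) * factorial(l - am) /
                (2 * pi * (B - A) * factorial(l + am)))
    u = (2.0 * np.cos(theta) - A - B) / (B - A)     # remap cos(theta) to [-1, 1]
    p = lpmv(am, l, u)                              # associated Legendre polynomial
    if m > 0:
        return np.sqrt(2.0) * k * np.cos(m * phi) * p
    if m < 0:
        return np.sqrt(2.0) * k * np.sin(am * phi) * p
    return k * p

# Rasterize one signed, floating-point basis image on a latitude-longitude grid
# (cf. the preprocessing in Section 4.2.1).
theta = np.linspace(ALPHA, BETA, 256)
phi = np.linspace(0.0, 2.0 * pi, 512)
T, P = np.meshgrid(theta, phi, indexing="ij")
image = zonal_basis(2, 1, T, P)
```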
4.1  Calibration  As in any data capture setup, we carried out certain calibration steps in order to minimize measurement errors and to standardize the captured data. Our calibration procedures involved not only physical alignment of the optical devices, but also photometric calibration in order to obtain the relative scaling factors for our captured data with respect to a known reflectance standard.  4.1.1  Physical Calibration  This calibration procedure involved alignment of the camera and projector to the optical axis. We began by calibrating the projector to the optical axis and followed this up with camera calibration since it involved using the projector. We will discuss each of these  Chapter 4. Acquisition and Results  32  Figure 4.1: Aligning the projector to the optical axis. Left: Pictorial depiction of the calibration setup with the crosses and the backplate. Right: Image of the crosses.  calibration procedures separately.  Projector C a l i b r a t i o n The initial alignment of the projector was obtained by fixing 2 calibration crosses in a position such that when light is projected on them in such a way that the shadows of the 2 crosses on the backplate coincide, the projector is aligned to the optical axis. This is evident from Figure 4.1. The crosses were designed in a way that when fixed on the mounting plate, they are at the same height from the base as is the optical axis. They were manufactured in ABS plastic using the rapid prototyping machine. To obtain the correspondence between the projector pixels and points on the parabola where these pixels get projected, we generated an image with a bright circle in the centre. We then projected this image onto the parabola and covered the base of the dome with a semitransparent film marked at the central optical point. The projected circle formed a circular image in the film after multiple reflections. We moved the circle in  Chapter 4. Acquisition and Results  33  Figure 4.2: Camera calibration. Left: Image of the crosses captured by the camera when it is aligned to the optical axis. Right: Parabola image captured to recover the camera pixel-to-parabola points correspondence. Note the checkerboard pattern in the background.  the image until it was symmetric with respect to the point marked on the film.  We  then adjusted the radius of the circle until the reflected rays converged to a point on the film. This happens when the circle is projected accurately to match the circumference of the parabola base. The position and size of the circle in the image gave us the required correspondence between projector pixels and points on the parabola. We used this information in generating the projected basis images for data capture.  Camera Calibration In order to align the camera to the optical axis of the dome, we suspended a cross at the centre of the dome using a suspension beam in the same way as we suspend the parabolic mirror. We put a second cross at the centre of the base plate covering the dome.  We adjusted the camera until the 2 crosses overlapped symmetrically in the  captured image as shown in Figure 4.2. We also adjusted the camera focus until the first cross was in focus. This is the required focal length as determined during optics simulations. Next we determined the correspondence between the camera pixels and parabola points in a way similar to the projector case. We pasted a checkerboard pattern on the base plate covering the dome. 
We then projected the image with a bright circle (as obtained above after projector calibration) and captured the image of the parabola using the camera. The location and size of the parabola in the captured image gave us the  Chapter 4. Acquisition and Results  34  Figure 4.3: 18% diffuse gray card used as the reflectance standard  Figure 4.4: Images of the gray card captured by the camera at exposure times 62.5 ms, 125 ms and 250 ms. required correspondence. This information is used in post-processing basis response images captured by the camera as described by the acquisition process.  4.1.2  Reflectance Calibration  Due to inaccuracies in optical process, the measured reflectance values will not be exact but some scaled version of the actual value. Our objective to carry out reflectance calibration is to obtain the scaling factors (at each pixel) by measuring the reflectance of a known standard material. We chose an 18% diffuse gray card for this purpose. It has a uniform reflectance of 0 18  /r,0.!8(0),) = — 7t  COS0,-.  (4.1)  Ramoorthi et al [30] have shown that Lambertian diffuse reflectance can be encoded completely using the first 2 orders of spherical harmonics basis function. So we projected upto 2  nd  order zonal harmonic basis images and captured the correspond-  ing responses for the diffuse gray card. After projecting these measured coefficients  Chapter 4. Acquisition and Results  35  Basis Images  • Preprocessing'.  Projector Basis Images  Sample Material  Data Capture  Captured Response Images  ,  Post processing • Reflectance Data  Figure 4.5: A n overview of the acquisition pipeline.  into spherical harmonics, we recovered the hemispherical reflectance of the gray card (fr,gray)- From this, we computed the scale factors (A,) for each co,- e Q as follows  x  =  f r ^ 2 d  (  4  2  )  fr,grciyi®i) The gray card and some of the captured responses are shown in Figures 4.3 and 4.4.  4.2  Acquisition  In this section we will discuss the preprocessing stage, data capture and postprocessing stage given by the flowchart in Figure 4.5. Prior to the actual measurement, the basis images are processed so as to convert then into a format suitable for projection. Then the projection and data capture is carried out. In the postprocessing stage, the captured data is processed so as to convert them into a format suitable for rendering. Each of these stages will be discussed in detail.  Chapter 4. Acquisition and Results  ••TV Basis Images.; ; * (.pfm files)  36  i.'bftainimaxpixelr:. .'amdnglajliimages;'  Scale, each image.by;i max pixel - > r  ,' Convert 32-bit float" values to 8-bit integers  V Sepaiate positives <»and negative values.!-  i '\  Projector Images ' ? ( ppm files)  Figure 4.6: A n overview of the preprocessing process.  4.2.1  Preprocessing Basis Images  Basis Images are 2D latitude-longitude maps generated using the mathematical expression given by equation 3.3, where each pixel gives the value of the basis function in one particular direction in the zone. These are floating point values including both positive and negative numbers. For the projector to be able to project these images, they have to be processed as shown in Figure 4.6. The maximum pixel value amongst all images is used to scale pixels in every image. Then the 32-bit floating point values are converted into 8-bit integers. The negative and positive value pixels are stored separately as positive valued images with zeros in the remaining positions. 
Later in the post-processing pipeline the responses of the negative image is subtracted from that of the corresponding positive image. The images are now in a format that can be projected.  4.2.2  Data Capture  Our data capture procedure involves projecting illumination in the form of zonal basis function defined over the space [TC/20, K/2] x [0, In] obtained after preprocessing and imaging the response of a sample to this illumination. The captured reflectance data gives the coefficients of the zonal basis. The captured response image corresponds to a  Chapter 4. Acquisition and Results  i Crop,& HDR'/Or • ^generation jhd^gen)J*|  Captured Images < , I '1 ' (jpg file's) ,f *  r  *."'• ''Subtract negatjve images from' • •> .corresponding positive images &. '. ' ,' .Convertto.'pfm fr-.i','- . -  *"*• Basis^esponse^lmages ,) •. in SphencalBasis";(.pfm.files)ij 1  Basis Response-Images . jihrZortaf Basis' (.pfm files);  37  .HDR Images ( exr files)  ZonaMc-Sphericali. "': Transformation .-"t  I* —  Figure 4.7: An overview of the postprocessing pipeline,  sampling of the outgoing directions. The optical process can be summarized as rK/2  rlK  z/"(e<j>) = 0!  0  / / JO  /,.(eo,«i»o,e-,(|>)cos01-zf(el-,(|>I-)sine/de,d<t»,. I  /  (4.3)  Jn/20  This approach is similar to that of Kautz et al [ 16]. For every basis input, we acquire the response image at multiple exposures (mostly 3) in order to generate high dynamic range (HDR) data [8].  4.2.3  Postprocessing Measured Data  The acquired images have to undergo some processing before we can get them in a format suitable for rendering as shown in Figure 4.7. The images are first cropped according to the correspondence parameters obtained during calibration. H D R Image Generation The multiple exposures are then combined using HDRgen [9], a software used to generate high dynamic range images. The camera response curve required by hdrgen is obtained before camera calibration by taking 6-7 images of a scene at different exposures. Once, we have the HDR images, the images corresponding to the initial negative basis images are subtracted from the response for the corresponding positive basis image to obtain the basis response images encoded in zonal basis. Basis Projection and Extrapolation  Chapter 4. Acquisition and Results  38  The response images are then projected onto the spherical basis using the basis transformation matrix C, as described in the previous chapter. Spherical harmonics is also used to extrapolate data in the missing zone. Thus, what we have at this stage is SH coefficients fp for every tabulated exitant direction (0 , <|)). O  4.3  0  Rendering and Evaluation  During rendering the reflectance data is reconstructed from the tabulated SH coefficients. The reflected radiance in the viewing direction L is computed as follows: R  L (QoA>) r  =  / / (0 ,<t)„,e,-,(t),)cose,L/(co/)y(co/)rfco  =  /n/r,(6 ,fe)(e/,0/)^(<a/)V(a)/)d(o  n  r  o  o  (4.4)  We acquired reflectance data for various materials including red velvet, red giftwrapping paper, golden brown chocolate box cover and glossy dark brown resin material. We used the Physically Based Ray Tracer [24] for rendering some models using our acquired data. Figure 4.8 gives snapshots of the Buddha model rendered in the Eucalyptus Grove environment. These were acquired and rendered by using upto 4thorder Spherical Harmonics. The images appear noisy only because the environment map has been point-sampled and then the model has been rendered using ray-tracing. 
4.4 Discussion

Even though we were able to acquire some preliminary data, the precision of the curved mirrors was not good enough to carry out extensive acquisitions. The errors accumulated during the multiple polishing and painting procedures caused artifacts, mostly in the form of variable bumpiness on the surface. Due to the coupling effect of the two mirrors, noise reduction techniques do not help in recovering the right results from the acquired data. If we were to obtain the curved components using CNC machining or electroplating, the surface prior to mirroring would be smooth and would not have the grooves present in the plastic models printed on our rapid prototyping machine. This would avoid the multiple polishing and painting procedures. In fact, electroplating is one of the methods employed to manufacture industry-standard mirrors. The results that we obtained, though not very accurate, still provide a proof of concept and design. With the new optical components, we would be able to recreate the optics that we simulated in our design process and hence acquire reflectance data for a wide range of materials.

Chapter 5

Conclusions and Future Work

To conclude, the main contributions of our work include:

• A new image-based BRDF acquisition setup that is efficient and has no moving parts.

• A set of zonal basis functions that are orthogonal over the zone of data capture, which enables us to encode both the input illumination and the captured response. This speeds up the acquisition process in addition to avoiding data redundancy.

The setup that we currently have provides us with a working proof of concept and design. What we have provided is a novel optical design that is not only efficient but also easily extensible. With a few minor changes we can use the setup in a number of different ways. For example, by adding two degrees of freedom, we could acquire spatially varying BRDFs with our setup. Next we discuss the proposed fabrication process and possible future work.

As discussed in the previous chapter, we are looking at alternatives for fabricating precise mirror components (the dome and the parabolic mirrors), namely electroforming or machining stainless steel components, followed by reflective coating and polishing. We also plan to use the Mitsubishi PocketProjector instead of the DLP projector we are now using, since it offers various advantages including ease of mounting and the shorter focus that we need for our setup. With our new optical setup, there are a few things we would like to do, including:

• Acquire reflectance data for an extensive set of materials and create a publicly available database. We plan to capture materials like fabrics, paper, metals and various interesting BRDFs that otherwise cannot be modeled well using analytical models.

• Test the capability of our basis acquisition setup and determine its limits in terms of the types of materials we can measure.
With the current design and basis functions we might not be able to acquire some high-frequency BRDFs. We would like to see how far we can go in terms of capture capability.

• Explore alternative basis functions for highly specular materials. As discussed above, we might have to look for alternative basis functions to be able to capture high specularity or anisotropy in BRDF data. This might entail using a basis that depends on the type of BRDF data being acquired.

• Point-sample reflection data and follow up with an analytical fitting procedure.

Bibliography

[1] M. Ashikhmin and P. Shirley. An anisotropic Phong BRDF model. J. Graph. Tools, 5(2):25-32, 2000.

[2] M. Ashikhmin, S. Premoze, and P. Shirley. A microfacet-based BRDF generator. In SIGGRAPH '00: Proceedings of the 27th annual conference on Computer graphics and interactive techniques, pages 65-74, 2000.

[3] J. F. Blinn. Models of light reflection for computer synthesized pictures. In Computer Graphics (SIGGRAPH '77 Proceedings), pages 192-198, 1977.

[4] R. L. Cook and K. E. Torrance. A reflectance model for computer graphics. ACM Transactions on Graphics, 1(1):7-24, 1982.

[5] K. Dana. CUReT: Columbia-Utrecht reflectance and texture database. Web page. http://www.cs.columbia.edu/CAVE/curet/.

[6] K. Dana. BRDF/BTF measurement device. In ICCV, pages 460-466, 2001.

[7] K. J. Dana, B. van Ginneken, S. K. Nayar, and J. J. Koenderink. Reflectance and texture of real-world surfaces. ACM Transactions on Graphics, 18(1):1-34, 1999.

[8] P. Debevec and J. Malik. Recovering high dynamic range radiance maps from photographs. In Proc. of ACM SIGGRAPH '97, pages 369-378, 1997.

[9] G. J. Ward. HDRgen software.

[10] P. Gautron, J. Křivánek, S. N. Pattanaik, and K. Bouatouch. A novel hemispherical basis for accurate and efficient rendering. In Eurographics Symposium on Rendering, pages 321-330, June 2004.

[11] A. Ghosh and W. Heidrich. An orthogonal basis for spherical zones. Technical Report TR-2006-12, UBC Computer Science, 2006.

[12] A. Glassner. Principles of Digital Image Synthesis. Morgan Kaufmann Publishers, 1995.

[13] J. Y. Han and K. Perlin. Measuring bidirectional texture reflectance with a kaleidoscope. ACM Transactions on Graphics, 22(3):741-748, 2003.

[14] X. D. He, P. O. Heynen, R. L. Phillips, K. E. Torrance, D. H. Salesin, and D. P. Greenberg. A fast and accurate light reflection model. In SIGGRAPH '92: Proceedings of the 19th annual conference on Computer graphics and interactive techniques, pages 253-254, 1992.

[15] X. D. He, K. E. Torrance, F. X. Sillion, and D. P. Greenberg. A comprehensive physical model for light reflection. In SIGGRAPH '91: Proceedings of the 18th annual conference on Computer graphics and interactive techniques, pages 175-186, 1991.

[16] J. Kautz, P.-P. Sloan, and J. Snyder. Fast arbitrary BRDF shading for low-frequency lighting using spherical harmonics. In Eurographics Workshop on Rendering, pages 291-296, 2002.

[17] J. J. Koenderink, A. J. van Doorn, and M. Stavridi. Bidirectional reflection distribution function expressed in terms of surface scattering modes. In ECCV '96: 4th European Conference on Computer Vision, volume 2, pages 28-39, 1996.

[18] E. Lafortune, S. C. Foo, K. Torrance, and D. Greenberg. Non-linear approximation of reflectance functions. In Proc. of ACM SIGGRAPH '97, pages 117-126, August 1997.

[19] P. Lalonde and A. Fournier. A wavelet representation of reflectance functions.
IEEE Transactions on Visualization and Computer Graphics, 3(4):329-336, 1997.

[20] H. Lensch, J. Kautz, M. Goesele, W. Heidrich, and H.-P. Seidel. Image-based reconstruction of spatially varying materials. In Eurographics Workshop on Rendering, pages 104-115, 2001.

[21] R. Lu, A. Kappers, and J. J. Koenderink. Optical properties (bidirectional reflectance distribution functions) of shot fabric. Applied Optics, 39(31):5785-5795, Nov 2000.

[22] O. A. Makhotkin. Analysis of radiative transfer between surfaces by hemispherical harmonics. Journal of Quantitative Spectroscopy and Radiative Transfer, 56(6):869-879, 1996.

[23] S. Marschner, S. Westin, E. Lafortune, and K. Torrance. Image-based measurement of the bidirectional reflection distribution function. Applied Optics, 39(16):2592-2600, 2000.

[24] M. Pharr and G. Humphreys. Physically Based Rendering. http://pbrt.org.

[25] W. Matusik, H. Pfister, M. Brand, and L. McMillan. A data-driven reflectance model. ACM Trans. Graph., 22(3):759-769, 2003.

[26] A. Ngan, F. Durand, and W. Matusik. Experimental analysis of BRDF models. In Proceedings of the Eurographics Symposium on Rendering, pages 117-126, 2005.

[27] F. E. Nicodemus, J. C. Richmond, J. J. Hsia, I. W. Ginsberg, and T. Limperis. Geometric considerations and nomenclature for reflectance. NBS Monograph, 160, 1977.

[28] Bui-Tuong Phong. Illumination for computer generated pictures. In Communications of the ACM, pages 311-317, 1975.

[29] P. Poulin and A. Fournier. A model for anisotropic reflection. In Computer Graphics (SIGGRAPH '90 Proceedings), pages 273-281, 1990.

[30] R. Ramamoorthi and P. Hanrahan. Frequency space environment map rendering. In Proc. of ACM SIGGRAPH '02, pages 517-526, 2002.

[31] P. Schröder and W. Sweldens. Spherical wavelets: Efficiently representing functions on the sphere. In Computer Graphics 29, Annual Conference Series, pages 161-172, 1995.

[32] SolidWorks Corporation. SolidWorks Student Edition. http://www.solidworks.com/pages/products/edu/studenteditionsoftware.html.

[33] The Alsa Corporation. MirraChrome. http://www.alsacorp.com/chrome.htm.

[34] K. E. Torrance and E. M. Sparrow. Theory for off-specular reflection from roughened surfaces. Journal of the Optical Society of America, 57(9):1105-1114, 1967.

[35] G. J. Ward. Measuring and modeling anisotropic reflection. In SIGGRAPH '92: Proceedings of the 19th annual conference on Computer graphics and interactive techniques, pages 265-272, 1992.

[36] S. Westin, J. Arvo, and K. Torrance. Predicting reflectance functions from complex surfaces. In Computer Graphics 26, Annual Conference Series, pages 255-264, 1992.

Appendix A

Radiometric Terms

In computer graphics, the interaction of light with matter is often modeled geometrically using ray optics. In this section we provide a brief description of some of the radiometric terms used in this thesis.

Radiant Energy, denoted by Q, is the most basic radiometric unit, measured in Joules [J]. For a photon of wavelength \lambda, the particle model of light gives the energy Q in terms of Planck's constant h and the speed of light in vacuum c as

    Q = \frac{hc}{\lambda}.    (A.1)

Radiant Flux (or Radiant Power), denoted by \Phi, is the energy flowing through a surface per unit time and is measured in Watts [W],

    \Phi = \frac{dQ}{dt}.    (A.2)

Radiant Flux Area Density, denoted by u, is a measure of energy flow given by radiant flux per unit area, measured in [W/m^2],

    u = \frac{d\Phi}{dA}.    (A.3)
If the energy flow is toward the surface, it is referred to as irradiance (denoted by E), and if the energy flow is away from the surface, it is referred to as radiosity or radiant exitance (denoted by B).

Intensity, denoted by I, is the measure of flux with respect to solid angle instead of area and is measured in [W/sr],

    I = \frac{d\Phi}{d\omega}.    (A.4)

This is useful in describing point light sources, since their area goes to zero.

Radiance, denoted by L, is a measure of radiant flux per unit projected area per unit solid angle. Its unit is [W/m^2 sr],

    L = \frac{d^2\Phi}{dA \, d\omega \, \cos\theta},    (A.5)

where \theta is the angle between the normal N of the surface area element dA and the direction of the flux. The cosine term represents the foreshortening with respect to the flux direction. Spectral radiance is the radiance per unit wavelength interval and is measured in [W/m^3 sr].

For a more detailed description please refer to Glassner [12].
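As a small numerical illustration of these definitions, the hypothetical Python snippet below evaluates Equation A.1 and a finite-patch reading of Equation A.5. It is not part of the measurement pipeline; the helper names are made up for this example and the physical constants are standard values.

    import math

    H = 6.62607015e-34   # Planck's constant [J s]
    C = 2.99792458e8     # speed of light in vacuum [m/s]

    def photon_energy(wavelength_m):
        """Radiant energy of a single photon, Q = h c / lambda (Equation A.1)."""
        return H * C / wavelength_m

    def patch_radiance(flux_w, area_m2, solid_angle_sr, theta_rad):
        """Average radiance of a small flat patch, a finite-difference reading of
        Equation A.5: L ~ Phi / (A * omega * cos(theta))."""
        return flux_w / (area_m2 * solid_angle_sr * math.cos(theta_rad))

    # A 550 nm photon carries roughly 3.6e-19 J of energy.
    print(photon_energy(550e-9))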
