UBC Theses and Dissertations


Shading and inverse shading from direct illumination Poulin, Pierre 1993

Full Text


SHADING AND INVERSE SHADING FROM DIRECT ILLUMINATION

By Pierre Poulin
M.Sc., University of Toronto, 1989
B.Sc., Université Laval, 1986

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES, DEPARTMENT OF COMPUTER SCIENCE

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
December 1993
© Pierre Poulin, 1993

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Computer Science
The University of British Columbia
Vancouver, Canada

Abstract

An understanding of light and its interaction with matter is essential to produce images. As the modeling of light sources, light transport and light reflection improves, it becomes possible to render images with increasing realism. The central motivation behind this thesis is to improve realism in computer graphics images through more accurate local shading models, and to assist the user in obtaining the desired lighting effects with these more complex models.

The first part of the thesis addresses the problem of rendering surfaces illuminated by extended (linear and area) light sources. To compute the light reflected by a surface element in a given direction, one needs to determine the unoccluded regions (shadowing) of each light source and then to compute the light reflection (shading) from each of these regions. Traditionally, point light sources are distributed on the lights to approximate both the shadowing and the shading.
Instead, an efficient analytical solution is developed for the shading. Shadowing from extended light sources is a fairly expensive process. To give some insight into the complexity of computing shadows, some properties of shadows and algorithms are presented. To reduce the cost of computing shadows from linear light sources, two acceleration schemes, extended from ray tracing, are introduced and evaluated.

The second part of this thesis addresses the problem of achieving a desired shading effect by building up systems of constraints. This approach is called inverse shading. It allows a user to obtain a desired lighting effect by interacting directly with the scene model. An interactive modeling system has been built to study the introduction of rendering issues into the modeling process itself. Two rendering aspects are considered: shading and shadows. By specifying a highlight position and size, the unique direction of a directional light source as well as the surface glossiness are determined. Interactively moving this highlight on the surface redefines the light direction, while changing its size redefines the surface glossiness. By manipulating shadow volumes (the volume within which a particular light cannot be completely seen), the light source definition and position can be modified. By assigning colours to various points in a 3D scene, information about the colour of a surface as well as other surface characteristics can be deduced. This relatively new concept of defining the causes by manipulating the effects is expected to have a major impact on the design of future computer graphics modeling systems. It begins a new family of tools for the design of more intuitive user interfaces.

Table of Contents

Abstract
Acknowledgement
1 Introduction
  1.1 Light Sources
  1.2 Lighting Design
  1.3 Organisation
2 Local Illumination
  2.1 Quantitative Measures of Light
  2.2 Human Visual System
  2.3 Perception and Realism
  2.4 Local Shading in Computer Graphics
    2.4.1 Spectral Curves
    2.4.2 Wave Propagation
    2.4.3 Average Approximations of BRDF
    2.4.4 More Sophisticated Physical Models
  2.5 Failures of Current Reflection Models
3 Shading with Extended Light Sources
  3.1 Concepts and Related Work
    3.1.1 Directional Light
    3.1.2 Point Light
    3.1.3 Linear Light
    3.1.4 Area Light
  3.2 Linear Light
    3.2.1 Our Solution
      3.2.1.1 Diffuse Integral
      3.2.1.2 Specular Integral
    3.2.2 Extending the Shading Model
    3.2.3 Results of Shading with Linear Light
  3.3 Area Light
    3.3.1 The Diffuse Integral
    3.3.2 The Specular Integral
  3.4 Conclusion
4 Shadows
  4.1 Definitions
  4.2 Properties of Shadows
    4.2.1 Directional and Point Light
    4.2.2 Linear Light
      4.2.2.1 One Light and One Blocker
      4.2.2.2 One Light and Many Blockers
    4.2.3 Area Light
      4.2.3.1 One Light and One Blocker
      4.2.3.2 One Light and Many Blockers
  4.3 Shadowing with Linear Light
    4.3.1 Primitive-based Shadowing
    4.3.2 Light Triangle 3D Scan Conversion
    4.3.3 Linear Light Buffer
    4.3.4 Results of Shadowing with Linear Light
    4.3.5 Conclusion on Shadowing with Linear Light
  4.4 Shadowing with Area Light Sources
    4.4.1 Sampling the Light
    4.4.2 Extended Rays
    4.4.3 Characterising Regions
  4.5 Conclusion
5 Lighting Design by Shading and Shadows
  5.1 A Parallel with Computer Vision
  5.2 Defining and Manipulating Light Sources
    5.2.1 Lights from Diffuse Reflections
    5.2.2 Lights from Highlights
    5.2.3 Lights from Shadows
    5.2.4 Results
  5.3 Defining Surface Characteristics
    5.3.1 Systems
    5.3.2 Painting Scenario
    5.3.3 Painting: Solving a System of Equations
    5.3.4 Painting: An Optimisation Problem
    5.3.5 Painting: A Fitting Problem
    5.3.6 Results
    5.3.7 Inverse Shading in Global Illumination
    5.3.8 Extensions
  5.4 Conclusion
6 Conclusion
Bibliography

Acknowledgement

As I am writing these lines in the middle of the night, it is hard to believe this part of my life is going to be over soon.
It seems just like yesterday I was this poor Québécois, driving through Canada in my old Lynx (may it rust in peace), so eager to discover...

My supervisor, Alain Fournier, is by far the most predominant figure in my graduate years. His guidance, enthusiasm, friendship, generosity and strength are remarkable. He taught me so much more than what could ever appear in a thesis. It is with pride that I will always remain "one of Alain's students". Kelly Booth and Dave Forsey with their "almost Dr. Poulin" treated me as an "almost junior faculty" and I really appreciated their support.

I thank the members of my supervisory committee: Uri Ascher, David Forsey, Gunther Schrack, Jack Snoeyink, and Robert Woodham, for their comments and support. Also Kellogg Booth, Dale Rolfsen and Tomoyuki Nishita for their role on the examination committee.

My friend, Juancho Buchanan, deserves special thanks. Together, "We came, we saw, we conquered!" His friendship means a lot to me. Even so far apart, we will still share. For instance, there are these -40°C's. So many other friends made these years so enjoyable... We started a graphics lab from scratch with the great support of Maria Klawe. This lab very quickly exploded with many great people. Everybody contributed one way or another to this thesis, but to name a few: Atjeng Gunawan, Chris Romanzin, Chris Healey, Vishwa Ranjan, Peter Cahoon, and Rob Scharein. Bob Lewis helped satisfy the library requirements while I was back in Montréal. Without him, the thesis might have never made it. Then there are the cool guys (Chris, Deb and Tien), the kids (Gene, Torre, Larry, Bill, Kevin, et cie), the system guys (and ze solo numerical analyst guy), the staff people, the softball, soccer and ballhockey teams, the babies from the next graphics generations, the old generation of DGPers (Andrew, George, Johnny, Eugene, Marc, ...) and so many others with whom I spent long hours chatting.
So many...

Mikio Shinya lit up many of my original thoughts on inverse shading during a great visit of his lab at NTT, Japan. Michael Gleicher and Paul Heckbert discussed some of their ideas with me. Pat Hanrahan, Michael Cohen and the graphics students of Princeton helped me on some of my latest work while on leave for my pre-postdoc.

Deborah Wilson, Caroline Houle, Marie-Claire Forgue, Adrienne Drobnies, and Ying Li kept the sun shining under the cloudy days. They made me laugh and they listened to me. Ariel Fournier delighted me with her childhood life. Precious moments of my days...

Finally, the last and most important thanks go to my family, whom I love very much and am so proud of.

Chapter 1

Introduction

Computer graphics creates images to provide information or to induce emotions. These images are made to be looked at. Because we live in a 3D world, it is often efficient to draw from our experience of our world to convey information. Although realism is not always necessary, or even the best way to convey the information, it can help in many situations. Sometimes, realism is the application. For instance, when computer generated objects are merged with real objects to produce a realistic combined image, the rendered objects must be indistinguishable from the real ones to achieve the desired effect.

What we see originates from the interaction of light with our environment. Emitted light is absorbed, reflected and refracted by the objects in the environment. The spectral and spatial distributions of the light can be altered at every step in the paths taken by the light. These changes in the nature of light allow us to see in our world.

In order to graphically simulate parts of our world, understanding light and its interaction with matter is essential. The better we can approximate light's behaviour, the closer we can get to capturing the richness of our world in realistic computer graphics images.
The factors influencing the way light is reflected include the nature of light itself and the nature and geometry of the surfaces of objects in an environment.

Physics explains light as both a massless particle of energy (photon) and an electromagnetic wave. When a photon strikes an atom on a surface, its energy is absorbed and parts of it may be reemitted as light. Fortunately for us, this atomic rendering is not necessary to accurately approximate the behaviour of light in most applications. In fact, in many applications, geometric optics provides a sufficient basis to correctly render most surfaces.

Unfortunately, while geometric optics simplifies the nature of the light, it does not reduce the complexity of our world enough. Take a single tree as an example. Light is reflected from every branch and leaf. But it is also reflected differently on the various parts of a single leaf because of the changing concentrations of materials in the leaf. To properly render the reflection from one leaf, one would need to model these various parts. Our tree is made of thousands of different leaves. A forest is made of thousands of different trees. A forest is also made of grass, flowers, hills and many other complex objects. Very quickly, this hierarchy of objects and details explodes in an overwhelming complexity. Because we are only interested in the light reflection from these objects, it becomes important to find reflection models that can approximate the reflections for the levels of complexity that an application requires.

To reduce the complexity of reflection models, we can also study the medium on which the images will ultimately be reproduced. Computer graphics images are rendered to be viewed on CRT monitors, prints or through projections. Most of these media can reproduce only a subset of the visible colour space (the gamut of the device). So we have to think of realism of rendered 3D scenes as photorealism within the gamut of the device.
The limitations of the display system as well as computational costs can force us to reevaluate our simulation of light to acceptable approximations, where acceptable is determined by the application for which the image is computed.

1.1 Light Sources

One simplification often used in current reflection models applies to the light emitter. Representing a light as an infinitely distant source (directional light) reduces the problem of illumination to a series of parallel light rays. Representing a light as a 3D point from which the light rays emanate (point light) allows for some simplifications, but not as many. In fact, for many applications where all surfaces together form a small solid angle as seen from a distant light, a directional light is very efficient and quite adequate. When the light emits equally in every direction and when the light forms a small solid angle from each surface element, a point light is very efficient. However, our world is largely made of higher dimensional light emitters, so when the small solid angle assumption is broken, we must compute the reflection originating from such a higher dimensional light.

The first part of this thesis is concerned with improving the shading of surfaces illuminated by extended (linear and area) light sources. In many rendering systems, linear and area light sources are modeled with a series of point light sources. This is done because point light sources are simple to use and can approximate the light emission from a light source of any shape. Unfortunately, the large number of points necessary to approximate a light¹ can require considerable computing resources to produce realistic looking results. In this thesis, a general analytic solution for shading surfaces illuminated by a linear light source is proposed. A linear light approximates a light that is long and narrow. Fluorescent and neon tubular lights are two examples of such light sources in our world.
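As a concrete illustration of the point-sampling approximation described above, the sketch below shades a single surface element with N point lights distributed along a linear source; as N grows, the estimate converges toward the exact integral. The scene setup, the unshadowed Lambertian model and the inverse-square falloff are my own illustrative assumptions, not the thesis's algorithm.

```python
import math

def diffuse_from_linear_light(p, n, a, b, samples, intensity=1.0, kd=1.0):
    """Approximate the diffuse reflection at point p (unit normal n) from a
    linear light running from a to b, using `samples` point lights along it."""
    total = 0.0
    for i in range(samples):
        t = (i + 0.5) / samples                       # midpoint of each segment
        lx = [a[k] + t * (b[k] - a[k]) for k in range(3)]
        d = [lx[k] - p[k] for k in range(3)]          # towards the sample light
        r2 = sum(c * c for c in d)
        r = math.sqrt(r2)
        cos_i = max(0.0, sum(n[k] * d[k] for k in range(3)) / r)
        total += kd * (intensity / samples) * cos_i / r2   # inverse-square falloff
    return total

# Doubling the sample count refines the estimate towards the true integral.
for s in (1, 4, 16, 256):
    print(s, diffuse_from_linear_light((0, 0, 0), (0, 0, 1),
                                       (-1, 0, 1), (1, 0, 1), s))
```

For this simple unit-length setup the one-sample estimate is noticeably off, which is exactly why many point samples (and hence much computation) are needed for convincing soft shading.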
A solution for the diffuse component of reflection is derived for linear lights. By assuming a simple specular reflection component (Phong), inexpensive yet convincing images are produced with the use of a polynomial approximation for the specular reflection curve. An extension of the solution for shading with linear light to shading with area light is also presented. The introduction of these illuminants and their integration in a commonly used reflection model broaden the range of realistic scenes that can be efficiently rendered.

¹ Such an approximation is based on the solid angle the light forms from the surface element to shade.

The presence of shadows forms another aspect of realism in computer graphics imagery. Shadows can provide additional information about the shape of an object, its nature (if it exhibits some degree of transparency), its relative position with respect to other objects and light sources, and even about the light sources themselves. Shadowing from a directional or a point light source is a binary decision: a point in a scene is or is not illuminated by such a source. Shadowing from an extended light source is more complicated because a point in a scene can be illuminated by the entire source, not illuminated at all, or be partially illuminated. The shading then depends on the portion of the light that is visible from the point being shaded. Determining this portion can be computationally expensive. We present some properties of shadows from extended lights and show how this information has been used to more efficiently compute shadows. This should help to put in perspective the complexity of correctly computing shadows. In the particular case of linear light sources, it becomes possible to reduce the average cost of computing shadows. Two new acceleration schemes are evaluated. They both take advantage of the 2D nature of the shading problem of a point illuminated by a linear light source.
Each technique processes information about the geometry of a scene in order to test only a subset of the objects for shadowing.

The increase in cost due to analytically solving the light reflection from extended lights and computing their shadows is substantial compared to a simple directional or point light source. However, this increase is usually a fraction of the cost of using a series of point light sources to simulate the same extended light to the degree to which the shading effects can be considered realistic.

1.2 Lighting Design

Better reflection models and illuminants can greatly improve the realism of computer generated images. However, a model is only as good as the designer using it. It is therefore essential to provide a designer with a set of intuitive parameters that allow one to create the desired lighting effects. Unfortunately, when the number of parameters and their inter-relationships grow, a designer has to understand all of them in order to create the final look of a picture. For instance, once the geometry of a scene has been built, the designer has to position the lights and assign them intensities in order to cast a shadow here or get a highlight there. The designer must also determine values for all of the parameters of every surface so that each highlight will have the proper size and so that the surface colours will approximate well enough what the designer had in mind when building the scene. Changing the intensity of one light can affect the shading of every surface. Moving a light can make a highlight disappear, cast undesirable shadows, or completely over-saturate the colour of a surface. With current modeling systems, the designer must either compute in her head the inverse shading in order to obtain a certain effect or attempt to achieve the desired effect through trial and error. Even for a simple scene, this process is very difficult.
Typically, the designer will have to iterate between adjusting the values of the parameters and rendering the entire scene. Depending on the experience of the designer (i.e. how well she can perform mental inverse shading) and the time required to perform the rendering, this iteration will continue until all the desired effects are achieved or until the frustrated designer decides to stop and accept the current image as partially satisfying. The second part of this thesis addresses the problem of achieving a desired shading effect. A system of constraints is employed to bring the user a desired effect by letting her interact directly with specific features in the scene model, rather than indirectly, as is currently done in most computer graphics modeling systems.

Instead of observing the results of lighting only after the rendering stage, we introduce some rendering considerations directly into the modeling process. In our system, a user interacts directly with the computer graphics model. Constrained by a system of equations, the user can alter the shading of a surface, leaving the remaining unknowns to be solved automatically by the system. We investigate two rendering effects: shading and shadows. The user can control the direction of a directional light by selecting a point on a surface. If this point represents the point of highest diffuse reflected intensity, it identifies a unique direction for the light. If instead it represents the point of highest specular reflected intensity, it gives another direction to the light. The specular intensity corresponds to the highlight region. The user can specify a size for this highlight and, by doing so, it becomes possible to identify a unique value for the surface roughness² that would create the highlight. The highlight can be moved interactively to specify a new light direction.
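The highlight manipulation described here can be sketched as a small inverse problem: the chosen highlight point fixes the light direction through mirror reflection of the viewing direction, and the chosen highlight size fixes a Phong-style roughness exponent. The half-intensity convention, names and numbers below are my own illustrative assumptions, not the thesis's system.

```python
import math

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def light_from_highlight(n, v):
    """Directional light that places the peak of a specular highlight at a
    surface point with unit normal n, viewed from unit direction v:
    the light is the mirror of the view direction, l = 2(n.v)n - v."""
    ndv = sum(a * b for a, b in zip(n, v))
    return normalize(tuple(2.0 * ndv * a - b for a, b in zip(n, v)))

def exponent_from_size(alpha, falloff=0.5):
    """Phong exponent e such that cos(alpha)**e == falloff, i.e. the highlight
    intensity drops to `falloff` at angular radius alpha (radians)."""
    return math.log(falloff) / math.log(math.cos(alpha))

l = light_from_highlight((0.0, 0.0, 1.0), normalize((1.0, 0.0, 1.0)))
e = exponent_from_size(math.radians(10))
```

Dragging the highlight re-runs `light_from_highlight` with the new point's normal; enlarging the highlight lowers the exponent (a glossier, broader lobe), shrinking it raises the exponent.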
The size of the highlight can be changed as well, which determines a new value for the roughness coefficient.

Another technique to define and manipulate light consists of manipulating the shadow volume (the semi-infinite volume within which points cannot completely see the light) of a point, linear or area light source. In this way, the shadow is interactively modified and automatically specifies a light that would produce the shadow.

Finally, the user can paint colours on a surface and the modeler will attempt to determine appropriate surface characteristics for which the colour points would retain their colour (within a certain tolerance) when the surface is finally shaded. The under-constrained systems that result from these manipulations are solved via a nonlinear constrained optimisation, while the over-constrained systems are solved via a weighted least-squares approximation with penalty functions to constrain the values of some surface parameters. Surface attributes such as surface colour, quantity of ambient, diffuse, and specular reflections and the ratio of dielectric and conductor properties are therefore automatically determined as the user adds colour points and interactively moves them on the surface.

² The surface roughness in Phong and Blinn shading models determines the surface glossiness. This term will be defined in the next chapter.

This relatively new concept of defining the causes by manipulating the effects is expected to have a significant impact on the design of future computer graphics modeling systems. It leads to a new family of tools for the design of more intuitive user interfaces by freeing the designer from solving mentally many of the inverse shading problems.

1.3 Organisation

The thesis is organised as follows. In the next chapter, light and the radiometric terms used to measure it are explained with an example.
Then the light actually captured by the human visual system is described, and we briefly discuss the perception of the visual signal and how realism is related to perception. Finally, we present and analyse various reflection models previously introduced in computer graphics.

In Chapter 3, we review the formulations for computing the reflection from directional, point, linear and area lights. We then present our analytical solution for shading with a linear light and how our solution has been derived to shade with a polygonal light.

Because extended lights can produce shadows with both umbra and penumbra regions, we present in Chapter 4 some properties of these regions for both linear and polygonal lights. Two acceleration schemes for shadowing with linear lights are then presented and evaluated. Algorithms for shadowing with polygonal lights are also presented.

In the last chapter, we investigate how to help a lighting designer efficiently obtain a desired shading effect by indirectly controlling the lights via the position of points of maximum diffuse reflection, highlights and shadows. We also discuss a system in which the designer interactively specifies surface characteristics by simply applying colour points on a surface.

Chapter 2

Local Illumination

Light illuminates objects and makes them visible to humans. This role of light in our daily life is the predominant factor in our appraisal of realism in computer graphics imagery. It is therefore imperative to understand how light interacts with surfaces and how our visual system perceives the results of this interaction. This chapter starts by giving a brief introduction about light and its interaction with matter. The next section will briefly describe how this interaction is perceived by the human visual system. Once this base is established, we enter into the world of computer graphics. In reality, light interaction with matter is very complex.
Simplifying this reality is our only practical approach to simulating light interaction in computer generated pictures. We review the various illumination models used in computer graphics and show how far the underlying assumptions depart from realism. Unfortunately, the trade-off with realism is not only a question of computational cost, but also of ease of use. We conclude this chapter with a discussion of ease of use, computational load and realism to justify our choice of illumination models in subsequent chapters.

Before delving into the nature of light, we must clarify a few points. In computer graphics, we always consider the light reflection off surfaces. In reality light can penetrate the surface and interact with the atoms within the surface. However, for simplicity's sake and because a very important factor governing the reflection depends on the orientation of the surface, we model light reflection only as reflection from the surface.

A surface element is defined as an infinitesimal portion of a surface. Sometimes, we also refer to a surface element as a point on the surface that has exactly the same properties as the surface element. There is a surface normal at the point and the radiance at the point represents the radiance at the surface element. This analogy will often simplify our explanations.

Finally, there is the distinction between global illumination and local illumination. Consider a single illuminated surface element. Light comes into contact with the surface element. Some portion of this incoming light is absorbed, another portion is directly reflected, while yet another portion interacts with the surface (due to interreflections between surface bumps or interactions with colour pigments within the surface) before being reflected back into the environment. This whole phenomenon is called local reflection. The radiance of this surface element once projected onto a pixel produces the local shading.
If light comes only from light sources, we are dealing with local illumination or direct illumination. However, this phenomenon does not describe the full behaviour of light. Once light reflects off a surface, it can illuminate other surfaces and reflect from one surface to another an infinity of times. This phenomenon is called global illumination because the reflected light is potentially a function of every surface in a scene. This distinction is a matter of scale. In fact, the interreflections between micro-facets of a surface can be considered as a global illumination phenomenon at the scale of the micro-facets. To not confuse these concepts any further, we restrict our definitions of local and global illumination to the geometric level at which the objects are defined.

In this thesis, we concentrate our efforts on local illumination issues. The results, however, can have a direct impact on global illumination because local light reflection is an underlying part of global illumination.

2.1 Quantitative Measures of Light

In this section, we will look at light as a physical phenomenon measured independently of the observer. In order to simulate the behaviour of light, it is important to understand how light propagates and how it is measured.

Light has the dual nature of a massless particle of energy (photon) traveling as an electromagnetic wave. As photons hit a surface, some energy is absorbed by the atoms at the surface and is re-emitted later in the form of radiation. If the re-emitted radiation is within the narrow band between 380 nm and 770 nm, this radiation is within the visual range of the human visual system. It is then called light.

Radiometry is concerned with measuring radiant energy as a function of the wavelength λ. The following definitions of the most relevant radiometric terms are from Nicodemus et al. [nico77].
The characters in parentheses ( ) represent the symbols used to refer to these concepts, while characters in square brackets [ ] represent their units in the Système International (SI). Figure 2.1 defines the polar coordinates used throughout the thesis.

Figure 2.1: Polar coordinate systems

Radiant Energy (Q) [Joules - J]
The radiant energy is the energy propagated in the form of electromagnetic waves or streams of particles (quanta or photons).

Radiant Power or Flux (Φ) [Watts - W]
The radiant power or flux is the radiant energy emitted, transferred, or received through a surface, in a unit time interval.

    Φ = dQ/dt

Radiant Exitance (M) [Watts per square meter - W/m²]
The radiant exitance at a point of a surface is the quotient of the radiant power emitted by an infinitesimal surface element containing the point, over the area of that surface element.

    M = dΦ/dA

Irradiance (E) [Watts per square meter - W/m²]
The irradiance at a point of a surface is the quotient of the radiant power incident on an infinitesimal surface element containing the point, over the area of that surface element.

    E = dΦ/dA

Radiant Intensity (I) [Watts per steradian - W/sr]
The radiant intensity of a source in a given direction is the quotient of the radiant power emitted by the source in an infinitesimal element of solid angle containing the given direction, over the element of solid angle.

    I = dΦ/dω
Radiance (L) [Watts per steradian per square meter - W/(m² sr)]
The radiance in a given direction, at a point on the surface of a source or a receiver, or at a point on the path of a beam, is the quotient of the radiant power leaving, arriving at, or passing through an element of surface at this point and propagated in directions defined by an elementary cone containing the given direction, over the product of the solid angle of the cone and the area of the orthogonal projection of the surface element on a plane perpendicular to the given direction.

    L = d²Φ / (dA cosθ dω)

BRDF (f_r) [inverse steradian - sr⁻¹]
The bidirectional reflectance-distribution function (BRDF) is defined to be the fraction of the incident radiation coming from the direction (θ_i, φ_i) that is re-emitted in the direction (θ_r, φ_r). The BRDF is expressed as f_r(θ_i, φ_i; θ_r, φ_r).

    f_r = dL_r / dE_i

2.2 Human Visual System

Illumination concerns us only so long as we can see its effects in our environment. Although this is not primordial in the framework of this thesis, it is important to never forget that the end result of our light simulation is the transmission of information to a viewer. Understanding the human visual system can help in simplifying the illumination computations while retaining the properties of most significance in the human visual system.

Light goes through the human eye and is detected by two types of receptor cells: the rods and the cones. The rods are the most sensitive to light and are responsible for our vision under dim illumination (night vision). The cones are responsible for most of our daylight vision and for our vision of colour. Each receptor cell has a different sensitivity to light of a particular wavelength. The spectral luminous efficiency curves of each receptor cell resemble a Gaussian distribution with its peak at 555 nm and 507 nm for the cones and the rods, respectively.
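The radiometric definitions of Section 2.1 can be tied together with a small numeric example; the isotropic point source, the Lambertian reflector and the specific numbers below are my own illustrative assumptions, not values from the thesis.

```python
import math

# A hypothetical isotropic point source emitting 100 W of radiant power.
power = 100.0                                  # radiant power Phi [W]
intensity = power / (4.0 * math.pi)            # I = dPhi/domega [W/sr], isotropic

# Irradiance on a surface element at distance r whose normal is tilted
# theta away from the source direction (inverse-square law plus cosine).
r, theta = 2.0, math.radians(30)
irradiance = intensity * math.cos(theta) / r ** 2    # E [W/m^2]

# An ideal Lambertian reflector of reflectance rho re-emits that irradiance
# with a direction-independent radiance L = rho * E / pi [W/(m^2 sr)].
rho = 0.5
radiance = rho * irradiance / math.pi
```

Each line exercises one definition in turn: flux to intensity (division by the full 4π steradians), intensity to irradiance (solid-angle-to-area conversion at distance r), and irradiance to radiance (the Lambertian 1/π factor).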
These spectral luminous efficiency curves measure the response of the human visual system to light as a function of the wavelength.

Photometry studies light measurements as experienced by the human visual system. Human eye characteristics vary from one human to another, and even from one eye to the other in the same person. However, many photometric measurements have been performed on many persons and a set of accepted standard values exists.

We conclude this brief incursion into the properties of the human visual system by stating that for each radiometric quantity, there exists a photometric one [wysz82]. Therefore it is always possible to convert a radiometric value into its equivalent photometric one by convolution with the appropriate spectral luminous efficiency curve. To avoid any confusion in the remainder of this thesis, we always use radiometric terms.

2.3 Perception and Realism

Of all the light being reflected in our environment, only the very small portion impinged upon our visual system is of potential visual significance. Somehow, the human brain interprets the spectral composition and spatial distribution of the signals it receives from the visual system. It is unclear how the information is extracted from the visual signals. Moreover, this information can be interpreted in highly different ways from one person to another.

To produce realistic images (a photorealistic experience), an artist must face several dilemmas. An image conveys conflicting information because of its duality as a scene with 3D visual cues and as a flat 2D representation of this scene. Pictures generally occupy only a small portion of our field of view. They are computed from a single point of view and do not allow for different depths of field. They are also subject to illumination from where they are observed.
Moreover, the display media currently used exhibit a much smaller dynamic range than what our visual system transmits to us of our world.[1] So it is likely that the visual experience of looking at reality will be different from looking at a 2D representation of this same reality. The knowledge accumulated by the viewer will lead to different perceptions of this reality.

Nevertheless, constructing an image closer to reality provides us with the possibility of reaching more people who have experienced similar structures that will lead to similar interpretations. For instance, hidden surface removal in line drawings removes some information about the occluded structures, but the disambiguation of the spatial relationships often makes an image much easier to interpret. Similarly, the gradient of the shading on a surface can reveal important information about the shape of the surface. Additional cues like shadows, textures, reflections, transparencies, light interreflections, material properties, and motion blur are all examples of contributions that make the experience of looking at a computer graphics image more real because they simulate what we experience in our world. Therefore, even though we cannot quantify the realism within an image, we can judge whether the presence or absence of certain features in an image contributes to making the image closer to our experience of our world.

[1] Barbour and Meyer [barb92] describe some of these dilemmas in more detail and explain some techniques that can be used to remedy some of these limitations. They base their work on a large body of literature in psychology, photography, art, optics and computer graphics. The interested reader is referred to their paper and its references for a more exhaustive enumeration and treatment of pictures as representations of reality.
It is this unscientific feeling we will be relying on when arguing that a technique leads to more convincing photorealistic images.

2.4 Local Shading in Computer Graphics

For many years now, simple local illumination models such as those introduced by Phong [phon75], Blinn [blin77] and Cook [cook82] have been successfully used to render 3D scenes with a certain degree of realism. Improvements to these basic models have been proposed, but somehow they have had a limited impact within the 3D graphics community. In this section, we analyse some of the proposed improvements and try to determine why they are not widely used today. With this background, we establish some criteria that should be considered when developing a new reflection model. These criteria help us justify our choices of local illumination models in subsequent chapters.

As we already know, shading in computer graphics is an essential part of realism. Shading carries information about the geometry of objects, the orientation of lights and the nature of surfaces. In the late 1970's and early 1980's, most of the currently used local illumination models were introduced into computer graphics. Work by Phong [phon75], Blinn [blin77] and Cook [cook82] paved the way to well accepted and commonly used illumination models. We can summarise the various formulations into a single generalised equation:

L_{pixel} = f_{ra} L_{ia} \pi + \sum_{i=1}^{m} \int_{\omega_i} f_{rd}(\theta_i, \phi_i; \theta_r, \phi_r) L_i \cos\theta_i \, d\omega_i + \sum_{i=1}^{m} \int_{\omega_i} k_s F_s(\theta_i, \phi_i; \theta_r, \phi_r) L_i \, d\omega_i    (2.1)

L_{pixel} is the radiance reflected by the surface element in the direction of the pixel of interest. Therefore, if more than one surface element reflects light in the direction of a given pixel, the total radiance in this pixel direction will be a weighted average of the radiance of all these surface elements.
The weight of a given surface element corresponds to the visible area this surface element projects onto the pixel.

The reflected radiance from m light sources is subdivided into three terms: ambient, diffuse and specular. Figure 2.2 shows how a combination of each of these three reflections can provide a more complete surface shading.

Figure 2.2: Ambient, Diffuse and Specular Terms

The first term corresponds to the ambient reflection. In early reflection models, interreflection between surfaces was very crudely approximated by this term. It assumes that the indirect light comes to the surface element equally from every direction in the hemisphere above it. This means that a surface element shaded with only ambient reflection will always reflect the same radiance whatever its orientation and its position (as long as it is visible through a pixel). L_{ia} is the ambient radiance of a scene. It is assumed constant at any point on a surface. If we integrate this ambient radiance over the entire hemisphere above the surface element, we get

\int_0^{2\pi} \int_0^{\pi/2} L_{ia} \cos\theta \sin\theta \, d\theta \, d\phi = L_{ia} \pi

f_{ra} is the ambient BRDF associated with the surface element. Normally, there should be only one BRDF f_r associated with a surface. Dividing it into a summation of three functions f_{ra}, f_{rd} and f_{rs} for the three types of reflection provides the possibility to simplify each one individually. Usually in computer graphics, k_a replaces f_{ra}.

The second term captures the diffuse reflection. Diffuse reflection occurs when light hits an ideal diffuse surface (also known as a Lambertian surface). A diffuse surface element will appear equally bright from whatever angle it is viewed. This phenomenon is a function of \cos\theta_i = (\vec{N} \cdot \vec{L}), i.e., it is proportional to the inner product between the surface element normal (\vec{N}) and the light direction (\vec{L}) from the surface element.[2] L_i (not vectorised) is the light source radiance in the direction of the surface element and d\omega_i is the solid angle formed by the light from the surface element. These values must be computed and summed for each of the m lights in the scene. For a perfectly diffuse surface, it turns out that f_{rd} is constant and therefore

L_{pixel} = f_{rd} \int_{\omega_i} L_i \cos\theta_i \, d\omega_i = f_{rd} E_i

which means that the diffuse reflected radiance is proportional to the irradiance on the surface. Usually in computer graphics, f_{rd} is simply called k_d. Surfaces covered by matte paint or chalk dust are two examples of real surfaces that radiate light mostly in a diffuse way. Two models can explain the diffuse reflection. In one, light reflects from a surface made of a multitude of mirror-like micro-facets before leaving the surface. In the other model, light penetrates the surface and is scattered internally onto colour pigments before emerging out of the surface. Figure 2.3 illustrates these two models.

[2] Please note that all through this document, vectors are identified by a capital letter crowned by an arrow (\vec{A}) and are all assumed normalised unless otherwise specified.

Figure 2.3: Two models for diffuse reflection

The third and last term simulates the specular reflection. Some surfaces reflect light mostly around one direction. This reflection is associated with what is usually perceived as a highlight. For a perfect mirror surface, light coming from direction (\theta_i, \phi_i) will reflect in direction (\theta_i, \phi_i + \pi), where \phi_i is the azimuthal angle of the light direction. To capture this behaviour, we need to use the Dirac delta function \delta(\theta_i - \theta_r)\,\delta(\phi_i - (\phi_r + \pi)), which is non-zero (i.e., 1) only when \theta_i = \theta_r and \phi_i = \phi_r + \pi.
The reflected radiance L_r(\theta_r, \phi_r) in the perfectly reflected direction must therefore satisfy

L_r(\theta_r, \phi_r) = \int_0^{2\pi} \int_0^{\pi/2} f_{rs} L_i(\theta_i, \phi_i) \cos\theta_i \sin\theta_i \, d\theta_i \, d\phi_i = L_i(\theta_r, \phi_r + \pi).

This is achieved when f_{rs} is expressed as

f_{rs} = \frac{\delta(\theta_i - \theta_r)}{\cos\theta_i \sin\theta_i} \, \delta(\phi_i - (\phi_r + \pi)) = \frac{\delta(\cos\theta_i - \cos\theta_r)}{\cos\theta_i} \, \delta(\phi_i - (\phi_r + \pi)).

Phong [phon75] observed that for a mirror, the specular direction corresponds to the perfectly reflected light direction (\vec{R}_L) around the normal at the surface. See Figure 2.4. Only when a viewer (\vec{E}) is positioned along (\vec{R}_L) should she see the specular reflection. However, most real surfaces are non-ideal specular reflectors and some light is also reflected slightly off axis from the ideal reflected light direction \vec{R}_L. A possible explanation for this phenomenon is that a real surface is never perfectly flat but contains microscopic deformations. To simulate the decay of radiance reflected off axis, Phong controlled the specular term by raising the inner product between the eye direction (\vec{E}) and the perfectly reflected light direction (\vec{R}_L) to a certain power n. In this model, n is a positive real number which is a function of the surface roughness. Later, Blinn [blin77] incorporated various distributions, D, of these microscopic deformations. Blinn also included in his model other surface parameters like geometric attenuation, G, due to self-blocking, and the Fresnel coefficient, F, for grazing angle effects. To incorporate these factors, F_s is expressed as

F_s = \frac{DGF}{\cos\theta_r}.

Figure 2.4: General Reflection

The factor (\vec{N} \cdot \vec{H})^n provides a distribution approximating the specular reflection model proposed by Phong, although with sounder physical bases. Here, \vec{H} is the bisector direction between the eye \vec{E} and the light \vec{L} directions.

The reader familiar with the traditional nomenclature of the reflection models in computer graphics will notice that Equation 2.1 departs from it in many symbols, although the equivalent graphics terms can easily be substituted. The connection between f_r and the standard graphics formulation [lewi93] corresponds to

f_r(\theta_i, \phi_i; \theta_r, \phi_r) = f_{rd} + \frac{k_s F_s}{\cos\theta_i}

We do so because of the need to use proper radiometric values and units. This can help avoid inconsistencies when going from one reflection model to another, whether that model is developed in computer graphics, computer vision or optics. In this thesis, the radiometric terms are used.

Several improvements to the basic reflection model described above have been proposed. The parameters controlling the specular reflection have also been applied to refraction [hall83] [hall89]. Even energy conservation formulations have been presented for many of the popular models [lewi93]. Each of these improvements was justified as a method to render scenes with a higher degree of realism. We will examine in more detail some other improvements in the next sections.

2.4.1 Spectral curves

Many of the terms in Equation 2.1, reflection and emission, are actually functions of the wavelength \lambda. Much effort has been devoted to representing and efficiently computing this wavelength dependency [cook82] [meye88] [yuan88] [glas89] [musg89] [gart90] [raso91] [meye91] [borg91] [peer93]. Although physical accuracy requires that computations be carried out over the entire visible spectrum, most of the renderers in use today simply sample this spectrum in the red, green and blue regions. This is done for several reasons. There is no generally accepted representation for the spectral curves. While surface reflectance spectral curves are rather smooth [raso91], light emittance spectral curves can be noisy and exhibit strong, narrow peaks (e.g., fluorescent light) [wysz82]. One can always use a very accurate representation, but at the cost of an increase in computation and storage. The unavailability of spectral data makes it unappealing to any neophyte in colour science.
Meyer and Hale [meye91] measured the spectral properties (reflectance, emittance, absorption and transmittance) of some materials and proposed to build a spectral database for surface rendering. Such a database would certainly help in spreading the use of spectral curves.

2.4.2 Wave Propagation

One can also treat light propagation as a wave. Moravec [mora81] proposes to simulate this form of light transport by using wave fronts rather than light rays. However, computing these wave fronts as they progress through the environment and reflect off surfaces is a very expensive process. Some of the effects produced can fortunately be simulated using the simpler ray model for light propagation. This simple model has even been used to simulate the interference of reflected light [flrb85] [smit89] [dias91]. Wolff and Kurlander [wolf90] present a model for ray tracing surfaces lit by polarised light.

2.4.3 Average Approximations of BRDF

The experimental approach to reflectance accuracy involves measuring the light reflected in all directions for all possible directions of the incoming light. A BRDF is defined over infinitesimal intervals and therefore cannot be measured directly. However, average values over finite intervals can be measured and used as an approximation to a BRDF. Such approximations have been measured and tabulated [krin47] for various earth surfaces (wheat fields, ocean and forest) as viewed from an airplane. Cabral et al. [cabr87] compute reflectance tables for computer generated height fields.[3] They consider shadowing and masking effects over a triangular mesh representation of the height field, but they neglect interreflections between surface elements. They discretise the reflectance hemisphere (the hemisphere above the height field) into bins in order to accelerate the computations. These measurements should lead to very accurate reflectance functions if the sampling is done at a sufficiently fine discretisation.
Unfortunately, the obvious drawback of this approach lies in the huge amount of data needed. At a 5° precision in both \theta and \phi, we have a four dimensional table with more than 1.5 million entries. And all of this for only a single surface.

[3] A height field is defined as a function defined over a surface.

To reduce the amount of data, Westin et al. [west92] use samples to define spherical harmonics, as do Sillion et al. [sill91b] to represent BRDFs for global illumination. Both techniques conserve energy (important in global illumination) and their representation is suitable for filtering between geometric levels of detail. Unfortunately, for irregular BRDFs with long and narrow spikes, even spherical harmonics require a very large number of coefficients to accurately approximate the BRDFs. Westin et al. give a few tricks to reduce the number of coefficients but never provide any kind of description of the order of magnitude of storage their representation requires.

Fournier [four92a] [four92b] introduces the concept of a distribution of surface normals to capture reflections from the microscopic level (Phong, Blinn, Cook) to the macroscopic (geometric) level. A distribution of surface normals is represented by a few directions (main normals) around which the micro-facets of a surface are mostly oriented. The distribution of the normals around each direction is controlled by a Gaussian function. Fournier establishes a parallel between the standard BRDFs and these distributions. He uses them to produce simple definitions for complex surfaces and explains how to render the surfaces efficiently. His approach leads to important savings because only a two-dimensional distribution of surface normals needs to be stored instead of a four-dimensional table.

Becker and Max [beck93] also allow surfaces to be viewed at different geometric levels (BRDF, bump map and displacement map).
A smooth transition is provided between the BRDF, the redistributed bump map (corrected to produce the same average reflected radiance as the other levels) and the displacement map. Their technique still requires the use of several large tables.

Ward [ward92a] [ward92b] introduces a different shading model to address the issue of data size. He designed a new physical device to simplify the measurement of BRDFs. An external light illuminates the surface being measured. The light reflects onto a hemispherical reflective dome covering the surface. A CCD camera, positioned in the dome, measures the reflected radiance on the dome. Ward thereby reduces the measurements to only one array of intensities for each different light direction (a reduction from 4D to 2D). One drawback is that his device lacks accuracy when the light or its reflection is at an angle larger than 60° to the surface normal of the sample. The radiances measured are currently restricted to values between 0 and 255 (CCD resolution). Ward observed in his measurements that the reconstructed BRDFs could be approximated by two elliptical Gaussians, perpendicular to each other. He therefore reduces to only a few parameters the huge tables of Cabral et al., the large number of spherical harmonics of Westin et al., or the distribution of normals of Fournier. One can, however, wonder how many surfaces his model can represent with sufficient visual accuracy.

2.4.4 More Sophisticated Physical Models

He et al. [he91] introduce a more sophisticated reflection model based on a more complete description of surface characteristics. To derive their model, they introduce several variables that capture certain previously neglected aspects of actual reflection. Their model includes polarisation of light; height, slope and statistical correlation of the surface bumps; and a notion of effective roughness.
Although their model has been demonstrated to be close to actual surface reflectance for certain materials, the number of parameters is disconcerting. The situation is further complicated because each variable may be dependent on the others and the effect of any one parameter on the final shading is not obvious.

Hanrahan and Krueger [hanr93] propose a model to simulate reflection off layered surfaces. Their model replaces the diffuse term approximated by Lambert's cosine law (\vec{N} \cdot \vec{L}) by a much more complete model where light penetrates the surface and is scattered between the various layers before being reflected off the surface. They present applications of their model to rendering human skin and tree leaves.

All the values for the variables in the models of He et al. and Hanrahan and Krueger could be provided in a special library of surfaces for photorealistic rendering. However, measuring all these values is a very difficult task requiring much more equipment, time and expertise than most computer graphics laboratories can afford.

2.5 Failures of Current Reflection Models

In the previous sections, we reviewed some proposed improvements to the basic reflection models. None of these improvements really succeeded in making any of the commonly used reflection models obsolete. They each contribute to the creation of more realistic renderings of surfaces, but somehow the improvements were not worth the costs involved. We identify a few important qualities necessary for a reflection model to be accepted. First, the model must produce realistic looking surface shadings and it must be consistent. Varying the direction of the light sources should create continuous changes in the shading (no unexpected discontinuities). The model must be easy to use and intuitive. The model must also be flexible, so that it produces a wide variety of shading effects.
Finally, it should be reasonably fast to compute, it should be memory efficient and it should be relatively simple to implement.

Naturally, choosing a shading model is also a function of the application domain. In global illumination, conservation of energy is very important, while in the entertainment business, achieving a desired effect is the only issue that matters.

A new reflection model recently proposed by Schlick [schl93] seems to fulfill many of the criteria listed above. His model is an interesting compromise between theoretical accuracy and simplicity of use. It is controlled by only a few intuitive parameters. Moreover, his parameters can be associated with measurable physical surface characteristics. While it is still too early to evaluate the impact this approach will have on the models currently used, Schlick's new model is a step forward in a very interesting direction.

Because we would like the techniques developed in this thesis to be accepted and widely used by the graphics community, we concentrate our efforts around Phong's and Blinn's shading models. As we have seen, this is not a physically accurate model, but it has proven to satisfy a wide variety of applications. Equation 2.1 can be rewritten with the specular reflection from Blinn as

L_{pixel} = f_{ra} L_{ia} \pi + \sum_{i=1}^{m} \int_{\omega_i} f_{rd} (\vec{N} \cdot \vec{L}_i) L_i \, d\omega_i + \sum_{i=1}^{m} \int_{\omega_i} k_s F_s'(\theta_i; \theta_r, \phi_r) (\vec{N} \cdot \vec{H}_i)^n L_i \, d\omega_i    (2.2)

In the case of Blinn's full formulation,

F_s' = \frac{GF}{\cos\theta_r}.

In the general formulation of Phong, the Fresnel term, F, and the geometric attenuation function, G, are not considered and F_s' is taken to be constant. We will write it as F_s. Equation 2.2 is the shading equation on which we base our techniques. This formulation is quite popular.
However, it is our hope that the techniques developed in this thesis will somehow transcend the shading model used.

Chapter 3

Shading with Extended Light Sources

In the production of computer graphics images, after designing the 3D geometry of a scene the artist must choose the number, position, and emitted radiance of the illuminants. Of equal importance are the values assigned to the various reflection properties of each surface. Together, the illuminants and the characteristics of the surfaces determine the final appearance of the scene once it is rendered.

In the previous chapter, we reviewed some of the reflection models proposed to simulate the optical properties of a wide variety of surfaces. In this chapter, we look at the various types of light sources used in computer graphics and integrate the properties of lights (in radiometric terms) into the basic reflection models of Phong and Blinn.

One of the requirements for generating realistic images is the capability to simulate a wide variety of light sources. Small changes in the emitted radiance of illuminants can significantly affect the appearance of a scene. To get the desired shading effect, an artist may add dozens of lights of different types. In computer graphics, the two most commonly used light sources are the directional light and the point light. They are simple to use and the irradiance they produce on a surface is relatively easy to compute. However, they are of limited use in the simulation of the irradiance patterns of real lights such as fluorescent or neon tube lights. Neither do they capture the shading gradients of area lights. Both types of lights are part of our everyday existence and so, to be complete, computer rendered scenes should be able to handle these types of illuminants.

After reviewing the models developed to simulate these extended lights, we present our analytical solution to shading with a linear light.
Our model allows us to simulate the emission of a neon tubular light where each point in the light radiates light equally in every direction. This model is extended to simulate fluorescent tubular lights where each point on the surface of the tube radiates light mostly in the direction perpendicular to the main axis of the tube. We also show how our solution was used by Tanaka and Takahashi [tana91b] to simulate light emitted from polygonal lights.

3.1 Concepts and Related Work

In the next sections, we describe the related work for each type of light source.

3.1.1 Directional Light

The simplest light source in use in computer graphics is the directional light source. It models parallel light rays coming from an infinitely distant light source of infinite power. Assume that a directional light shining from (\theta_0, \phi_0) produces an irradiance E_0 on a surface oriented perpendicularly to the light direction. The irradiance in this case is expressed as

E_0 = \int_0^{2\pi} \int_0^{\pi/2} L_i \cos\theta_i \sin\theta_i \, d\theta_i \, d\phi_i.

Because the surface is perpendicular to the light direction, \cos\theta_i = 1. Obviously, the radiance L_i of this light is zero except in the light direction. We use the Dirac delta function \delta(\theta_i - \theta_0)\,\delta(\phi_i - \phi_0) to ensure that the light radiance L_i is non-zero only in the direction (\theta_0, \phi_0). The normalisation factor to express E_0 as a function of L_i is simply

E_0 = \int_0^{2\pi} \int_0^{\pi/2} \frac{\delta(\theta_i - \theta_0)\,\delta(\phi_i - \phi_0)}{\sin\theta_0} L_i \sin\theta_i \, d\theta_i \, d\phi_i = \int_0^{2\pi} \int_0^{\pi/2} \delta(\cos\theta_i - \cos\theta_0)\,\delta(\phi_i - \phi_0) L_i \sin\theta_i \, d\theta_i \, d\phi_i

By integrating, we can determine the radiance of the directional light that would produce an irradiance E_0 on a surface oriented perpendicularly to the light direction as

L_i = E_0 \frac{\delta(\theta_i - \theta_0)\,\delta(\phi_i - \phi_0)}{\sin\theta_i} = E_0 \, \delta(\cos\theta_i - \cos\theta_0)\,\delta(\phi_i - \phi_0)

We can use this radiance in the reflection model of Equation 2.2 described in the previous chapter.
To simplify the equations, we do not consider in this section the ambient reflection, which can be added directly to any formula because it does not depend on the light definition or position.

The radiance of a surface element in the direction of a given pixel while illuminated by a directional light is

L_{pixel} = k_d \int_{\omega_i} (\vec{N} \cdot \vec{L}) L_i \, d\omega_i + k_s \int_{\omega_i} F_s (\vec{N} \cdot \vec{H})^n L_i \, d\omega_i
          = k_d \int_0^{2\pi} \int_0^{\pi/2} (\vec{N} \cdot \vec{L}) E_0 \, \delta(\cos\theta_i - \cos\theta_0)\,\delta(\phi_i - \phi_0) \sin\theta_i \, d\theta_i \, d\phi_i + k_s \int_0^{2\pi} \int_0^{\pi/2} F_s (\vec{N} \cdot \vec{H})^n E_0 \, \delta(\cos\theta_i - \cos\theta_0)\,\delta(\phi_i - \phi_0) \sin\theta_i \, d\theta_i \, d\phi_i

The radiance of the surface element becomes simply

L_{pixel} = k_d (\vec{N} \cdot \vec{L}) E_0 +    (3.3)
            k_s F_s (\vec{N} \cdot \vec{H})^n E_0.    (3.4)

There are several reasons for the popularity of the directional light model. First, the vector \vec{L} that points to the light source is constant and does not have to be computed at each point to shade. This vector, in fact, defines the directional light source. Because the vector \vec{L} is constant, the shading computation for polygons is simplified. Thus many computations can be removed from the inner loops of the shading computations [phon75]. The radiance of the light source is constant and independent of the position of an object in space. Therefore, to make a surface radiance twice its value, one can simply double the light radiance.

3.1.2 Point Light

The second most popular light source is the point light source, which is defined geometrically by a point in space radiating light equally in every direction. Such a point light source emits light over the entire sphere.
Its radiant intensity is

I = \frac{\Phi}{4\pi}.

The solid angle subtended by a surface element dA as seen from the point light source corresponds to

d\omega = \frac{dA \cos\theta}{r^2}

where \theta is the angle between the surface element normal and the direction to the point light source and r is the distance between this surface element and the point light source. The irradiance of the point light source onto a surface element is computed as

E = \frac{I \, d\omega}{dA} = \frac{\Phi \cos\theta}{4\pi r^2}

The radiance of a surface element in the direction of a given pixel illuminated by a point light is

L_{pixel} = k_d \frac{(\vec{N} \cdot \vec{L}) \Phi}{4\pi r^2} +    (3.5)
            k_s F_s \frac{(\vec{N} \cdot \vec{H})^n \Phi}{4\pi r^2}.    (3.6)

The directional and point light sources are generally included in most rendering systems due to their simplicity. In fact, much graphics hardware implements only these two types of light sources. Modifications to the radiant intensity distribution of point light sources can give the designer more creative freedom. Warn [warn83] attempts to create more realistic light sources by simulating the lights used by photographers. His extensions include making the radiant intensity of the point light sources a function of direction to produce spotlights and providing flaps that can cut off the light in certain directions. Verbeck and Greenberg [verb84] and Nishita et al. [nish85] extend Warn's work by providing more sophisticated lighting design tools. They allow the user to specify the shape of the radiant intensity distribution by drawing goniometric diagrams.[1]

3.1.3 Linear Light

Linear light sources simulating tubular lights (neon and fluorescent) open a new dimension of shading effects. We express the radiant flux of a linear light in terms of watts per unit length (W/m). Assume that every point on a linear light radiates light equally in every direction. The irradiance of a linear light source onto a surface element is extended from the formula developed for a point light source.
It corresponds to

E = \int_\ell \frac{\Phi_l \cos\theta}{4\pi r^2} \, dl = \frac{\Phi_l}{4\pi} \int_\ell \frac{\cos\theta}{r^2} \, dl

where \ell is the length of the linear light and \Phi_l is assumed constant over every part of the light.

Figure 3.5: Irradiance of a linear light

The radiance of a surface element in the direction of a given pixel illuminated by a linear light is

L_{pixel} = k_d \int_\ell \frac{(\vec{N} \cdot \vec{L}) \Phi_l}{4\pi r^2} \, dl +    (3.7)
            k_s F_s \int_\ell \frac{(\vec{N} \cdot \vec{H})^n \Phi_l}{4\pi r^2} \, dl.    (3.8)

[1] Goniometric diagrams specify relative intensity as a function of direction. See the IES Lighting Handbook [kauf81] for a good source of lighting definitions.

Linear light sources are modeled by Verbeck and Greenberg [verb84] using a series of collinear point light sources. By only sampling the radiant intensity of the linear light at discrete positions, artifacts in the form of discrete regions may appear in the diffuse reflection, in the specular reflection, and in the shadows. These artifacts are mostly apparent on surfaces with sharp specular highlights and in the shadows cast from this linear light. To alleviate these sampling problems, it is always possible to add more point light sources. However, this comes at an important increase in cost because each new point light contributes a very significant portion of the entire rendering process. Moreover, it is very difficult to estimate the best number of point light sources that would be cost effective while still producing images exempt from major shading artifacts. This number depends on the distribution of objects around the linear light in the scene, the type of surfaces (diffuse vs. highly specular), the viewer position, the positions of the shadows and the contrasts in the final image. It is conceivable that for each frame of an animation sequence, the number of point light sources needed to approximate a simple linear light could vary. Since many of these factors are geometric or are a function of some mathematical model, one could develop some heuristics to estimate the number of point light sources by which a linear light could be approximated with limited shading artifacts, but there is always the concern that changing the number of point light sources might produce temporal aliasing. However, as will be argued later in this chapter, the analytical solution proposed will provide perfect shadings at a relatively low cost.

Nishita et al. [nish85] simulate a fluorescent light where each point on the light emits light as a cosine function around the direction perpendicular to the linear light. They derive an analytic solution for the diffuse reflection when the linear light source is parallel or perpendicular to the surface. For the general case, where the surface element is neither parallel nor perpendicular to the linear light, they transform the light and the surface element to a new coordinate system (2D) where they perform a numerical integration via increments. Multiple point light sources are used to approximate the specular integral, with possibly the exact same problems that can appear in Verbeck and Greenberg [verb84] (i.e., discrete regions in the highlight). In subsequent work, Nishita et al. [nish92] simulate linear lights in a similar way to their earlier work [nish85], although the diffuse integral is performed over a long thin rectangle instead of a line. The specular integral is still approximated by point sampling the line as before [nish85]. However, by assuming that the linear lights all reside on a plane (the ceiling of a classroom, for instance), they present techniques to speed up shadow computation using a scanline approach.

Picott [pico92] simulates a linear light where each point radiates light equally in every direction. He solves the diffuse reflection analytically but replaces the entire specular integral by a single sample point. This point on the linear light represents the point of highest specular reflected radiance from the point being shaded for the eye position under consideration. This crude approximation of the specular integral can produce satisfactory results as long as the roughness coefficient of the surface is very high (the intensity curve then exhibits a single narrow peak) and as long as the angle formed by the end points of the linear light at the point being shaded is relatively small. Breaking any of these assumptions leads to aliasing artifacts in the highlight regions. To reduce this aliasing, Picott proposes to sample the linear light around the point of highest specular reflected intensity. This sampling is done according to the roughness coefficient of the surface. However, as will be demonstrated with our formulation, computing this entire integral analytically costs the equivalent of just a few samples but avoids sampling artifacts altogether.

Bao and Peng [bao93] present analytic solutions for both diffuse and specular reflections of linear lights. They derive their solutions in Cartesian 3D space. The analytic solution for the specular reflection is reduced to evaluating a series of m(m+1) simple polynomials of degree up to m, where m is a function of the surface roughness n. This solution is valid only if n is an odd integer. This limitation on n mainly affects the rough surfaces for which n is small. As the surface becomes smoother (a large value for n), the specular cosine lobe behaves more and more like a delta function and the requirement that n be an odd integer is not as restrictive.
However, for large n, the computational cost of evaluating m(m+1) polynomials of degree up to m can quickly become prohibitively expensive.

3.1.4 Area Light

Nishita and Nakamae [nish83] were the first in computer graphics to present a solution to the diffuse integral for polygonal and polyhedral light sources. By assuming a Lambertian light (each light element emits light equally in every direction), the radiance falling on a surface element is proportional to the solid angle subtended by the light. For a polygonal light, this corresponds to:

L_pixel = k_d (L / 2π) Σ_{i=1}^{m} β_i cos δ_i    (3.9)

where P is a point on the surface element being shaded
k_d is the proportion of diffuse reflection of the surface
L is the light radiance
m is the number of vertices of the polygonal light
β_i is the angle formed by two adjacent light vertices
δ_i is the dihedral angle between the plane at P and the plane of the face formed by the segment on the light and P.

These variables are illustrated in Figure 3.6.

Figure 3.6: The geometry describing diffuse shading from an area light

A parallel can be drawn between the irradiance and the radiosity, which is also called radiant exitance. If we neglect the presence of the radiance, the surface BRDF and the diffuse coefficient, the formula of Equation 3.9 is what is called in global illumination the form-factor between a surface element and a polygon. The concept of form-factor is the basis of most global illumination algorithms in use today.

Some techniques that simulate linear light sources have also been adapted to deal with the specular integral of area light sources. It is fairly simple to see how the approaches of Verbeck and Greenberg [verb84] and Picott [pico92] can be extended to area light sources. In fact, several other techniques have addressed various issues in sampling the radiance of area light sources [cook84b] [shir91]. Houle [houl91] represents her area lights as radiance
direction distributions and uses a pyramidal approach to filter the light radiance.

Bao and Peng [bao93] also derived a solution for shading with an area light source. An alternate solution to Equation 3.9 is developed to compute the diffuse reflection. To approximate the specular reflection, they use a Taylor expansion of their formula with only a few terms. A better solution is obtained by adding more terms from the Taylor series.

Some algorithms have been proposed to treat specular surfaces within the radiosity algorithm. These techniques can be applied to compute the reflected radiance off a surface illuminated by a polygonal light. They have the advantage of conserving energy but unfortunately rely on discretisations of every surface. Immel et al. [imme86] add discrete directions to each patch and compute the radiosity solution with these directions. Sillion et al. [sill91a] represent the reflected radiance with spherical harmonics at discrete points on each patch, assuming that the radiance remains uniform within neighbouring points. Rushmeier and Torrance [rush90] and Hall and Rushmeier [hall93] assume that few surfaces in a scene are specular and thus reduce the number of form-factors required for treating specular surfaces. Aupperle and Hanrahan [aupp93] automatically subdivide all patches according to the three-point transport. They use a variation of the Torrance-Sparrow [torr67] specular term.

Other rendering techniques have also been reported to simulate different types of area light sources. In cone tracing [aman84], Amanatides renders images illuminated by spherical lights. In pencil tracing [shin87], Shinya et al. introduce paraxial approximation theory to ray tracing, substituting a pencil of rays for a single ray, thus keeping information about the pencil's concentration or expansion. With this concept, it is possible to render scenes illuminated by area light sources.
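The Nishita–Nakamae sum of Equation 3.9 above translates directly into a short routine. The sketch below is illustrative Python (the function and variable names are ours, not the thesis's); it evaluates the geometric part of Equation 3.9, the sum of β_i cos δ_i over the light's edges, which is also the point-to-polygon form-factor up to a factor of 2π:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def norm(a):
    l = math.sqrt(dot(a, a))
    return (a[0]/l, a[1]/l, a[2]/l)

def diffuse_polygon_factor(P, N, light_verts):
    """Sum of beta_i * cos(delta_i) over the light's edges
    (Equation 3.9 without the k_d * L / 2pi prefactor)."""
    total = 0.0
    m = len(light_verts)
    for i in range(m):
        # unit vectors from P to two consecutive light vertices
        a = norm(tuple(v - p for v, p in zip(light_verts[i], P)))
        b = norm(tuple(v - p for v, p in zip(light_verts[(i + 1) % m], P)))
        # beta_i: angle subtended at P by this light edge
        beta = math.acos(max(-1.0, min(1.0, dot(a, b))))
        # normal of the plane through P and the edge; its dot with N gives cos(delta_i)
        gamma = norm(cross(a, b))
        total += beta * dot(gamma, N)
    return total
```

A handy sanity check: for a huge square light directly overhead, the form-factor tends to 1, so the sum tends to 2π.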
Unfortunately, the inclusion of these specialised rendering techniques into more conventional techniques requires extensive revisions of the algorithms that are not always possible or desirable.

3.2 Linear Light

3.2.1 Our Solution

As discussed in the previous sections, some attempts to simulate linear light sources have been proposed, but no general solution other than sampling (apart from the later development by Bao and Peng [bao93]) had ever been proposed for approximating the specular integral. In the next two sections, we develop a formulation of the problem that lends itself to a simple solution for both diffuse and specular terms.

Diffuse Integral

The problem of shading a surface illuminated by a linear light source can be considered in 2D. We transform the coordinate system such that P, the point representing the surface element to be shaded, is at the origin and the light source lies on the plane Z = 0. The diffuse integral of Equation 3.7 can be expressed in this system as

k_d ∫_0^ℓ ((N_P · L_l) / r_l^2) dl

where ℓ is the length of the linear light and N_P is the projection of N onto the plane Z = 0.

Consider an infinitely small segment dl on the light source. The projected length of dl in the direction of P is dq = dl cos β, where β is the angle between the direction perpendicular to dl and the direction from dl to P. On the other hand, consider a circle of radius r_1 centered at the origin. The length of the arc subtending dφ is dq = r_1 dφ. By combining these two equations, we can express dl as a function of dφ as

dl = (r_1 dφ) / cos β.

We can replace the integral along the length of the light source by an integral along the angle subtended by the light source at P. If γ is the angle made by joining the two end points of the light source to P (two light vectors) and θ is the angle between N_P and the closest light vector², the integral becomes

k_d ∫_0^γ (cos(θ + φ) / r_1^2) (r_1 / cos β) dφ.

Figure 3.7 illustrates this situation.
Expressing r_1 as a function of β gives r_1 = d / cos β, where d is the distance between the supporting line of the linear light source and P. This formulation is valid for modeling a linear light source as an infinity of point light sources where each point on the light source radiates light equally in all directions. The final diffuse integral is simply

(k_d / d) ∫_θ^{θ+γ} cos φ dφ.    (3.10)

Specular Integral

By substituting the popular Phong specular term (R_E · L)^n for the Lambertian formulation (N_P · L) in the diffuse term,³ we obtain the following expression for the specular integral

(k_s / d) ∫_θ^{θ+γ} cos^n φ dφ.    (3.11)

² If N_P is within γ, we can take either end point's light vector. In such a case, θ is negative.
³ (R_E · L)^n replaces (N_P · L) and R_P replaces N_P.

Figure 3.7: Integrating over the angle

It is important to note that if P is along the supporting line of the light source, the formulation of our integral is indeterminate, as γ and d are both zero. In this situation, the integral can easily be handled as a special case. Take the original formulation to be

∫_0^ℓ ((N · L) / r_l^2) dl.

Since P is on the supporting line of the linear light, (N · L) is constant over the entire integral unless P is actually right on the linear light. An analytical solution

∫_0^ℓ (N · L) (1 / (l + b)^2) dl = (N · L) ℓ / (b (ℓ + b))

exists, where b is the distance to the closest end point of the linear light. If P is on the linear light, the solution to this integral is infinity.

The integral for the diffuse term in Equation 3.10 has an exact solution that is easily evaluated. However, the integral for the specular term is harder to integrate exactly in an efficient manner.

An alternative is to approximate cos^n φ between 0 and π/2. Figure 3.8 illustrates a few examples of cos^n φ for different values of n. This family of curves is relatively smooth, monotonically decreasing, and can efficiently be approximated by polynomials, which in turn are simple to integrate analytically. Several approximation techniques can be used.
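As a concrete illustration of such a polynomial fit, the sketch below is illustrative Python (our own function names; it uses the standard Chebyshev–Lobatto series construction rather than reproducing the thesis pseudo-code of Figure 3.9 verbatim). It approximates cos^n φ on [0, x_M] with a degree-6 series, which for n = 64 stays well within typical shading tolerances:

```python
import math

def cheby(degree, x):
    # Chebyshev polynomial T_degree(x) by the three-term recurrence
    if degree == 0:
        return 1.0
    if degree == 1:
        return x
    return 2.0 * x * cheby(degree - 1, x) - cheby(degree - 2, x)

def chebyshev_coeffs(f, x_max, degree):
    """Chebyshev series coefficients b[0..degree] for f on [0, x_max],
    sampled at the Chebyshev-Lobatto points with trapezoidal weights."""
    N = degree
    b = []
    for i in range(N + 1):
        s = 0.0
        for k in range(N + 1):
            x_cheby = math.cos(k * math.pi / N)
            x = x_max * (x_cheby + 1.0) / 2.0     # map [-1, 1] onto [0, x_max]
            w = 0.5 if k in (0, N) else 1.0       # halve the end-point samples
            s += w * f(x) * cheby(i, x_cheby)
        b.append(2.0 / N * s)
    return b

def eval_series(b, x, x_max):
    """Evaluate the Chebyshev series at x in [0, x_max]."""
    u = 2.0 * x / x_max - 1.0
    N = len(b) - 1
    s = 0.5 * b[0] + 0.5 * b[N] * cheby(N, u)     # half-weight first and last terms
    for i in range(1, N):
        s += b[i] * cheby(i, u)
    return s
```

In a renderer this fit would be done once per roughness coefficient n, as a preprocessing step, exactly as the chapter describes.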
For a similar problem, Poulin and Fournier [poul90b] studied the use of Chebyshev polynomial approximation, piece-wise splines and the Parzen window. Because the formulation of Phong specular reflection is just an approximation based on no physical surface properties other than observation, we can approximate this function by various other functions with a similar behaviour. Blinn [blin77] reported various such functions. Bishop and Weimer [bish86] use a truncated Taylor expansion of the Phong function.

Figure 3.8: cos^n φ for (right to left) n = 1, 10, 100, and 1000, and φ ∈ [0, π/2]

The pseudo-code of Figure 3.9 computes the Chebyshev polynomial approximating the function cos^n φ, where φ ∈ [0, x_M]. The array a[0..D(a)] will contain the coefficients of the polynomial approximating the curve. Note that this pseudo-code has been provided here to explain how to compute the approximation. Tables can easily be used to speed up many of these computations.

In order to reduce the degree of the polynomial approximating the function, we cut off the function cos^n φ at the angle where the function is less than a chosen ε. Looking at the curves of Figure 3.8, one can observe that as n becomes larger, the area under the curve tends to be very small for angles larger than a certain angle x_M. We can compute, for given n and ε, the value of x_M such that

x_M = cos^{-1}(ε^{1/n}).

When the curve is evaluated at an angle higher than this value of x_M, it is considered
Approximate(n, x_M, D(a), a[])
/* n is the degree of the function cos^n φ */
/* x_M is the maximum angle of the approximation */
/* D(a) is the degree of the polynomial approximation */
/* a[] contains the coefficients of the polynomial approximation */
for i = 0 to D(a)
    sum = 0
    for k = 0 to D(a)
        x_cheby = cos(k π / D(a))
        x = x_M × (x_cheby + 1) / 2
        if (k = 0 or k = D(a))
            sum = sum + 0.5 cos^n(x) × Cheby(i, x_cheby)
        else
            sum = sum + cos^n(x) × Cheby(i, x_cheby)
    endfor k
    b[i] = 2/D(a) × sum
endfor i
for i = 0 to D(a)
    if (i = 0 or i = D(a))
        a[i] = 0.5 b[i] × Cheby(i, x_cheby)
    else
        a[i] = b[i] × Cheby(i, x_cheby)
endfor i

Cheby(degree, x)
    if (degree = 0) return (1)
    if (degree = 1) return (x)
    return (2x Cheby(degree − 1, x) − Cheby(degree − 2, x))

Figure 3.9: Pseudo-code for the Chebyshev approximation

zero. Choosing a value for ε depends on the light radiance, the surface characteristics, the geometry of the scene, the contrasts in the final rendering and other variables. In practice, we have found that a polynomial of degree 6 approximates the cos^n φ curve within our chosen ε, and we never observed shading artifacts in any of the images we rendered with this value for ε. The degree of the polynomial can be reduced further if ε is larger. Once, as preprocessing, and for each of the coefficients n in a scene, the Chebyshev polynomials are computed and stored. They are used in the rendering stage to efficiently approximate the specular integral.

3.2.2 Extending the Shading Model

If each point on a linear light source does not radiate light equally in every direction, a new model can be developed from the previous one. One such model assumes that a point radiates light with a cosine distribution from the directions perpendicular to the linear light source [nish85]. The modified diffuse integral is simply rewritten as
k_d ∫_0^ℓ ((N_P · L_l) / r_l^2) cos β dl.

β is expressed as (φ_0 − φ), where φ_0 is the angle between N_P and the vector formed by P and the closest point on the supporting line of the linear light. These variables are shown in Figure 3.7. The diffuse integral becomes

(k_d / d) ∫_θ^{θ+γ} cos φ cos(φ_0 − φ) dφ

which can be integrated analytically. The specular integral is solved as before via the Chebyshev polynomial approximation.

If a point on the linear light emits even less in the directions further away from the direction perpendicular to the light, we can replace the previous distribution by cos^m β instead. The same polynomial approximation technique can be used for this distribution. In the case of the specular intensity, we evaluate the product of two polynomials:

(k_s / d) ∫_θ^{θ+γ} cos^n φ cos^m(φ_0 − φ) dφ.

Just like the extensions of Warn [warn83] for point light sources, the same types of extensions can be applied to the current shading model for linear light sources. For instance, one can restrict the emission of every point on the linear light source with flaps at a given angle η. This is done by making certain that the limits of the integrals, θ and θ + γ, are always confined within the flaps' range [φ_0 − η, φ_0 + η].

Other extensions can be applied to the linear light source as a whole instead of applying functions to each point individually. It is easy to apply flaps to the entire light. One only needs to determine if the point being shaded is within the illuminated section and, if so, the shading is taken care of by the functions established previously. Similarly, a function more complicated than flaps could be applied radially to the linear light source. Figure 3.10 illustrates these extensions of light emissions.

Figure 3.10: The extended shading model: a) point radiates equally in every direction; b) point radiates following a cos^m β distribution; c) point radiates only within flaps; d) general function over the emission of the entire light
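The extended diffuse integral above does integrate in closed form: with cos a cos b = (cos(a−b) + cos(a+b))/2, an antiderivative of cos φ cos(φ_0 − φ) is (φ cos φ_0 + sin(2φ − φ_0)/2)/2. The sketch below (illustrative Python, our own names) checks this closed form against numerical quadrature:

```python
import math

def diffuse_cosine_light(theta, gamma, phi0, d):
    """Closed form of (1/d) * integral from theta to theta+gamma of
    cos(phi) * cos(phi0 - phi) dphi, via the product-to-sum identity."""
    def F(phi):
        # antiderivative of cos(phi) * cos(phi0 - phi)
        return 0.5 * (phi * math.cos(phi0) + 0.5 * math.sin(2.0 * phi - phi0))
    return (F(theta + gamma) - F(theta)) / d
```

The cos^n φ cos^m(φ_0 − φ) specular variant has no such elementary form in general, which is why the chapter falls back on the product of two polynomial approximations.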
3.2.3 Results of Shading with Linear Light

Replacing a linear light source by a series of point light sources is, as we said earlier, prone to sampling problems. The problems can appear in both the shading and the shadowing. Table I investigates the relative cost of substituting a series of point light sources for a linear light source as described in this paper. Figures 3.11 and 3.12 illustrate a typical scene we tested.

A linear light source of length ten is positioned five units above a primitive (square, cube, sphere, disk, patch) located one unit above a plane. Each primitive has been scaled so its shadow covers a similar area. The specular coefficient n for (R_E · L)^n is 64 for the primitive and 32 for the plane. The relative timings are given from the implementation in our local ray tracer [aman92]. The timings are normalised by the time required to render the same scene with a linear light source. For instance, rendering a sphere (ray casting) in our scene without any light source takes only 31% of the time required to shade it with a linear light source. If point light sources are used, 11% must be added for each light. Therefore, in our benchmark, approximating a linear light source by more than seven point light sources would be more expensive. If shadows are considered, this number increases to nine. Figure 3.11 was computed with seven point light sources and Figure 3.12 with our approximate linear light source.

Figure 3.11: Sphere under 7 point lights
Figure 3.12: Sphere under a linear light

Table I: Linear Light versus Multiple Point Lights (l is the number of point light sources)

                 Shading Only                Shading and Shadowing
Primitive    Function        Equivalent    Function        Equivalent
Square       0.39 + 0.14 l      4.36       0.16 + 0.10 l      8.35
Cube         0.34 + 0.11 l      6.00       0.15 + 0.10 l      8.50
Sphere       0.31 + 0.11 l      6.27       0.16 + 0.10 l      8.40
Disk         0.40 + 0.14 l      4.29       0.09 + 0.06 l     16.28
Patch        0.50 + 0.08 l      6.25       0.06 + 0.04 l     23.50
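The trade-off measured in Table I can be reproduced in miniature for the diffuse term alone. Equation 3.10 integrates to (sin(θ+γ) − sin θ)/d, and the sketch below (illustrative Python, with our own simplified geometry and names: a light on the line y = d, shaded point at the origin, normal (0, 1)) confirms that this closed form is exactly the limit of many point lights:

```python
import math

def diffuse_linear_analytic(a, b, d):
    """(1/d) * integral of cos(phi) dphi over the angle subtended by the
    light segment from x = a to x = b on the line y = d (Equation 3.10)."""
    phi_a = math.atan2(a, d)   # signed angle from the normal to one end point
    phi_b = math.atan2(b, d)
    return (math.sin(phi_b) - math.sin(phi_a)) / d

def diffuse_linear_sampled(a, b, d, samples=20000):
    """Approximate the same light by many isotropic point sources."""
    total, dl = 0.0, (b - a) / samples
    for i in range(samples):
        x = a + (i + 0.5) * dl
        r2 = x * x + d * d
        cos_theta = d / math.sqrt(r2)   # (N . L) with N = (0, 1)
        total += cos_theta / r2 * dl    # inverse-square point-light contribution
    return total
```

A handful of point lights already lands close to the analytic value, but only the closed form is free of banding for every light length and viewing geometry.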
On a 24-bit frame buffer display, the same scene was rendered with as many as 41 point light sources and Mach bands within the penumbra region are still visible. Unfortunately, due to dithering, the two images incorporated here do not give full credit to our new shading algorithm.

For a patch primitive, considering both shading and shadowing, the number of point light sources equivalent in rendering time to a linear light source is as high as 24 because we rely on a distributed ray tracing approach (described in the next chapter) to determine the projection of the patch onto the light source.

From our observations in Table I and our experience with rendering scenes with linear light sources, shading a surface illuminated by a linear light is equivalent in cost to shading the same surface illuminated by fewer than seven point light sources. However, our analytical solution is valid for any length of linear light source, in arbitrary position relative to the point being shaded, and for any diffuse, specular and roughness coefficients. Our solution avoids the discretisation problems prevalent in all previous sampling approaches.

Although we have considered only timings to compare a number of point light sources, we argue that our solution has an even stronger advantage over sampling by the peace of mind it gives to a designer. Choosing the right number of samples depends on the lights' positions, the surfaces' positions, the surfaces' characteristics, the shadows cast and the viewing position. Not all of these can be fully automated, so it is possible that during an animation sequence several frames will exhibit some shading artifacts. Rerendering these frames with more samples is a cost that should be factored in, but that is difficult to evaluate.
For these reasons we believe our analytical solution to shading with linear light sources is an important contribution to shading with extended light sources.

Shadowing increases the complexity of rendering scenes illuminated by linear light sources. We will show in the next chapter that our technique, especially after the introduction of some algorithmic speedups, is still quite competitive.

Figure 3.13 shows an image of a desk illuminated by a single linear light source. The soft shadows add a more realistic look to the scene. Figure 3.14 shows a typical cafeteria illuminated by 12 linear light sources (2 per white rectangle lying on the ceiling). Notice how, by simply increasing the dimension of the light sources to linear, the scene gives an impression closer to what is usually achieved at a much higher computing cost by radiosity algorithms.

In the next section, we present how our analytical solution to shading with linear light sources has been extended to shading with polygonal light sources.

3.3 Area Light

As described in the previous sections, the full shading of a surface illuminated by an area light source had never been completely computed before. The diffuse reflection could

Figure 3.13: A desk under a linear light

be computed analytically, but the specular reflection was only approximated by point sampling.

3.3.1 The Diffuse Integral

Tanaka and Takahashi [tana91b] extend our formulation of the specular term from linear light sources to polygonal light sources by incorporating some of their results on highlighting rounded edges [tana90] [tana91a].
By assuming a Lambertian distributed polygonal light, they use the solution proposed by Nishita and Nakamae [nish83] to compute the diffuse reflection:

L_pixel = k_d (L / 2π) Σ_{j=1}^{m} β_j cos δ_j

Please refer to Section 3.1.4 and Figure 3.6 for more information about the diffuse solution.

3.3.2 The Specular Integral

The development for the specular integral is more complex. The use of the specular reflection from Phong is important due to its property that the specular reflectivity is rotationally symmetric around the perfectly reflected eye direction R_E. As we did, Tanaka and Takahashi transformed the standard Cartesian formulation into polar coordinates around this vector. In this form, the specular integral corresponds to

L_pixel = k_s F_s L ∬_{Ω(w_1, …, w_m)} cos^n θ sin θ dθ dφ    (3.12)

where n is the Phong roughness coefficient
θ_i is the angle between R_E and w_i, a vector pointing towards a vertex i of the polygonal light
w_i is the projection of vertex i onto the unit sphere
φ is the rotational angle around R_E
Ω(w_1, …, w_m) is the projected light polygon onto the unit sphere.

Look at the area light from point P on the surface element being shaded. Assume for simplicity that the up vector (Z = 1) is R_E. Look first at a light triangle defined by a point along R_E and two vertices of the polygonal light source. Because we deal with a Lambertian light, the radiance of this light is the same as the radiance of a light projected onto the unit sphere centered at P. Figure 3.15 illustrates several variables used in the rest of this section.

Figure 3.15: Integrating the specular term

The solid angle is defined by three great circles, two passing by Z = 1 and one passing by w_i and w_{i+1}. This latter great circle has a single point R closest to Z = 1, at an angle we call α. We use the projection of R onto the plane Z = 0 to define the origin (β = 0,
or (X = 1, Y = 0, Z = 0)).

Assume the integral is evaluated from w_i (β_A on Z = 0) to w_{i+1} (β_B on Z = 0). Solving the double integral leads to

L_pixel = k_s F_s L [(β_B − β_A) − (H(α, β_B) − H(α, β_A))]

where

H(α, β) = ∫_0^β (cos^2 φ / (tan^2 α + cos^2 φ))^{(n+1)/2} dφ.

Like our function cos^n φ, the integrand of H(α, β) decreases monotonically between 0 and π/2, and Tanaka and Takahashi used a Chebyshev polynomial approximation of this integrand to integrate H(α, β).

To integrate over the entire polygonal light, each edge is integrated in counter-clockwise order as seen from P and the result of each integral is multiplied by a function

F(w_i, w_{i+1}) = 1 if (w_i, w_{i+1}) is counter-clockwise, −1 otherwise.

Each integral is then summed to provide the final specularly reflected radiance.

3.4 Conclusion

In this chapter, we presented an analytical solution for shading surfaces illuminated by linear light sources and explained how our solution was extended to shade surfaces illuminated by area light sources. Rendering with these illuminants is more expensive than rendering with just the same number of directional and point light sources. However, the increase in realism justifies the added complexity. Moreover, the increase in rendering cost is not as significant as increasing the number of point light sources to produce a similar lighting effect.

The next chapter addresses in more detail the shadowing computation as the light source dimensions and the scene complexity increase.

Chapter 4

Shadows

In previous chapters, we talked about shading without paying much attention to another phenomenon created by the presence of light sources: shadows. A shadow is produced when a surface element acts as a blocker of some of the light emitted by another surface element. The blocker stops some of the radiant energy along its path. The energy is then absorbed or redirected. The volume of points for which at least a portion of the light is occluded by this blocker is called a shadow volume.
This shadow is of interest to us when the radiance on a third surface element has to be computed, since we must then characterise the illumination on this surface element. This is achieved by intersecting this surface element with the scene shadow volumes. A shadow is cast on a surface when this surface intersects a shadow volume. This intersection defines a cast shadow. Shadows can provide important information about the shape of objects, their relative positions and the nature of their surfaces (transparency). They can also reveal some information about light locations and shapes. While shadows can also be produced by secondary lights (i.e. a surface reflecting light can act as an indirect light), we will restrict our discussion in this chapter to shadows produced by an emitting light source.

The presence of shadows can reduce the cost of computing shading information. After all, when a point is considered completely in shadow of a light, the direct contribution from this light is zero. Unfortunately, determining if a point is completely, partially or not in shadow is a fairly expensive process that often more than offsets the savings in shading.

We will consider in this chapter only shadows produced by opaque blockers. Semi-transparent material blocking light at certain wavelengths can be considered as an opaque blocker if we deal with the shadows on a wavelength basis. However, shadows produced by refractive surfaces are more complicated, since they can redirect light depending on their geometry, their index of refraction and the wavelengths of the light. Many of the properties of shadows studied in this chapter do not apply to refractive blockers.

In the following sections, we will first give a few definitions of concepts linked to shadows. Then we will go through a series of properties to establish a foundation for our understanding of what is involved in correctly computing shadows.
This will give us some insight into the effectiveness of some shadowing algorithms. With this in mind, we will present our shadowing algorithm for scenes illuminated by linear light sources and two culling algorithms to reduce the number of occluding candidates. Finally, we will review and discuss various algorithms developed to determine the shadows from area light sources.

4.1 Definitions

The umbra volume of a light source formed by an occluding object is the set of points such that every point of the light source is blocked by at least a point of the occluding object.

Note that the umbra volume so defined is generally a volume, which can be finite, infinite or null. The umbra cast on another object is the set of points of the object (restricted to its surface for an opaque object) that also belong to the umbra volume as previously defined.

The penumbra volume of a light source formed by an occluding object is the set of points such that at least one point of the light source is blocked by at least one point of the occluding object while at least one point of the light source is still visible.

Note that the penumbra volume is generally a volume or a set of disconnected volumes. If a penumbra volume is formed by a light and a single blocker, this penumbra is always infinite.

If the light source is a direction (point at infinity) or a point, there exists no penumbra volume, as a point is either blocked or not blocked by a point of the occluding object. The points blocked therefore belong to the umbra volume.

The shadow is the volume formed by the union of both volumes, the umbra and the penumbra volumes. Each point not in shadow is illuminated by the entire light source.

In the next section, we will look at some theoretical characteristics of shadows. We will start by studying the shadows in the plane (2D) because, fundamentally, shadows produced by a linear light source need to consider only the visible portion of a line segment viewed from a surface element to shade.
We represent this surface element by a single point on its surface for simplicity. Also, some concepts and properties are simpler to illustrate and understand in the plane and can be generalised to 3D space.

4.2 Properties of Shadows

4.2.1 Directional and Point Light

As already mentioned in the previous section, a directional light and a point light source cannot generate a penumbra volume from a blocker. A point is or is not in shadow. The umbra generated by one or more blockers is the union of every umbra volume, where the volumes are not necessarily connected. This is illustrated in 2D by Figure 4.16.

For a survey of shadowing algorithms for directional and point light sources, we refer the reader to the survey of Woo et al. [woo90].

Figure 4.16: Umbra region in 2D

4.2.2 Linear Light

In the plane, a linear light is a line segment and a blocker is an opaque line segment (or a set of line segments) resulting from the intersection of a 3D primitive with the plane defined by the linear light source and the point to shade. The following properties apply to a single line segment acting as a linear light and a single line segment acting as a blocker.

One Light and One Blocker

Consider as input to the 2D shadow determination problem a blocker line segment and a line segment defined as a linear light source. Figure 4.17 shows a light segment L and a blocker segment B used in the next definitions. All the definitions are valid as long as the supporting lines of the blocker and of the light do not intersect in the light segment or in the blocker segment. Otherwise, the results are applicable on both sides individually if the light or the blocker is split at their intersection point into two separate segments.

Let b_i be a vertex of the blocker segment B. Consider the two lines defined by this
Figure 4.17: Extremal lines

vertex and the two light vertices. The line with the smallest angle α with the segment B is called the minimum blocker extremal line (Minb) and its associated vertex is called the minimum blocker extremal vertex. The maximum blocker extremal line (Maxb) and its associated vertex, the maximum blocker extremal vertex, are linked to the other vertex of the light L. Each of these lines subdivides the plane into two regions. The minimum blocker extremal line divides the plane such that the light segment is on one side and the blocker segment is on the other. We define the positive side (Minb)^+ as the one containing the light segment. The maximum blocker extremal line divides the plane such that both the light and blocker segments are on one side of the line. We define this side as the negative side (Maxb)^-. We also define sides for the regions divided by the supporting lines of the light segment and the blocker segment. The positive side of the light supporting line (L^+) contains the blocker segment and the positive side of the blocker supporting line (B^+) contains the light segment.

Similar definitions are used [camp91] for extremal planes of convex polygonal lights and blockers. Campbell and Fussell prove many properties of shadows in 3D with these definitions. Since our work is mainly based on theirs, we will simply enumerate some properties of shadows in the plane. When the proofs can easily be extended from the ones of Campbell and Fussell, we will omit them and refer the reader to the original report [camp91].

Shadow Properties in the Plane:

1. No point on the positive side of either minimum blocker extremal line or on the positive side of the blocker B may be occluded by B. This represents the illuminated region and is expressed in set operations as B^+ ∪ (Minb_1)^+ ∪ (Minb_2)^+.

2.
Any point in the area enclosed by the negative sides of both minimum blocker extremal lines and the negative side of blocker B is at least partially occluded by B. This represents the shadow region and is expressed in set operations as B^- ∩ (Minb_1)^- ∩ (Minb_2)^-.

3. Any point on the positive side of either maximum blocker extremal line receives some light from L while still in shadow. This represents the penumbra region and is expressed in set operations as [B^- ∩ (Minb_1)^- ∩ (Minb_2)^-] ∩ [(Maxb_1)^+ ∪ (Maxb_2)^+].

4. Any point in the area formed by the intersection of the negative sides of both maximum blocker extremal lines and the negative side of blocker B does not receive any light from L. This represents the umbra region and is expressed in set operations as B^- ∩ (Maxb_1)^- ∩ (Maxb_2)^-.

5. A single linear light and a single blocker can generate 0 or 1 umbra region, but not more than 1 umbra region.

Proof:
Figure 4.18 illustrates cases with 0 and 1 umbra region. Recall the set definition of an umbra: B^- ∩ (Maxb_1)^- ∩ (Maxb_2)^-, where each region is bounded by an oriented line. Look first at (Maxb_1)^- ∩ (Maxb_2)^-. If the two lines are parallel, they define three regions and at most one region can be on the negative side of both lines. Otherwise the two lines intersect and, once again, only one region can be on both negative sides. Figure 4.19 shows all the possible cases. Note that for intersecting lines, all the cases are identical by rotation. The supporting line of B can only divide this region into two, leading to only a single umbra region.

Figure 4.19: One umbra (a: parallel lines; b: intersecting lines)

6. A single linear light and a single blocker can generate 0, 1 or 2 penumbra region(s), but not more than 2 penumbra regions.

Proof:
Figure 4.20 illustrates cases with 0, 1 and 2 penumbra regions. Recall the set definition of a penumbra: [B^- ∩ (Minb_1)^- ∩ (Minb_2)^-] ∩ [(Maxb_1)^+ ∪ (Maxb_2)^+]. From the previous proof, we can deduce that [B^- ∩ (Minb_1)^- ∩ (Minb_2)^-] determines a maximum of one region.
Call this region A. Then we have A ∩ [(Maxb1)⁺ ∪ (Maxb2)⁺], which is equivalent to [A ∩ (Maxb1)⁺] ∪ [A ∩ (Maxb2)⁺]. Since each intersection leads to a maximum of a single region, there may not be more than 2 penumbra regions.

Figure 4.18: Linear light umbra region in 2D (0 umbra, 1 umbra)

Figure 4.20: Linear light penumbra region in 2D (0, 1 and 2 penumbrae)

One Light and Many Blockers

The union of shadow regions defines a set of points such that each point belongs to at least one shadow region.

1. The union of the shadow regions from two blockers forms an umbra region that is equal to or larger than the union of the individual umbra regions. This union forms a penumbra region that is equal to or smaller than the union of the individual penumbra regions.

Proof: A point in the umbra of one blocker and in the penumbra of another blocker is in the umbra of the union, since the first blocker completely occludes the light. A point in two penumbra regions can be in the umbra or the penumbra of the union, since the portion of the light visible from this point can be completely occluded by the other blocker. This explains the possibly smaller penumbra and larger umbra of the union of shadow regions.

To illustrate this situation, consider Figure 4.21. We label Ui and Pi the umbra and penumbra regions, respectively, formed by blocker Bi, i ∈ {1, 2}. The region hashed and labelled U12 is in the penumbrae of both blockers B1 and B2, but together B1 and B2 completely occlude the light L, and therefore this region is in the umbra of the union of the shadows (P1 ∪ P2 ∪ U1 ∪ U2).

If two blockers are connected, the two intersecting penumbra regions formed at the connecting vertex are identical except that the orientations of the minimum and maximum blocker extremal lines are in opposite directions. Since there is no gap between these two blockers (i.e.
no plane separating the two blockers can intersect the light), this entire region is simply converted into umbra.

When the number of light sources is increased, the shadows must be computed for each light source individually.

Figure 4.21: Smaller penumbra and larger umbra

The properties enumerated in this section usually extend to 3D for convex lights and convex polygonal blockers. The next section looks at shadows in 3D. It will show that in general, the addition of an extra dimension introduces a substantial increase in shadow complexity.

4.2.3 Area Light

Figure 4.22: Extremal planes

One important difference when going from line segments in the plane to polygons in 3D is the notion of concavity and convexity. A line segment can only be convex, while in 3D a polygon can be convex or concave. Most of the properties in this section assume convex lights and convex blockers. These results are applicable to concave lights and blockers when generalisable to a union of convex polygons.

4.2.3.1 One Light and One Blocker

Most of the following results have been extracted from work by Campbell and Fussell [camp91].

Consider as input to the 3D shadow determination problem a convex blocker polygon and an area light source defined as a convex polygon. Figure 4.22 shows a triangular light L and a rectangular blocker B used in the next four definitions.

Let (bi, bi+1) be any edge of the blocking polygon B. Consider all the planes defined by this edge and all the vertices of light L. The plane with the smallest angle with the supporting plane of B is called the minimum blocker extremal plane and its associated vertex is called the minimum blocker extremal vertex. Similarly for the light, let (li, li+1) be any edge of the light L. Consider all the planes defined by this edge and the vertices of blocker B.
The plane with the smallest angle with the supporting plane of L is called the minimum source extremal plane and its associated vertex is called the minimum source extremal vertex. The positive side of the minimum blocker and source extremal planes contains the light polygon. The maximum blocker and source extremal planes and vertices are the corresponding concepts associated with the largest angle with the supporting plane of B instead. The negative side of the maximum blocker and source extremal planes contains both the blocker and light polygons.

Campbell and Fussell start from these definitions to prove the following:

1. No point on the positive side of a minimum blocker or source extremal plane can be occluded by B.

2. The occlusion volume (umbra and penumbra) is the volume resulting from the intersection of the negative side of the blocker B with the negative sides of the minimum blocker extremal planes from every edge of B and the negative sides of the minimum source extremal planes from every edge of L. This is equivalent to saying that, for a convex light source defined by vertices and a convex blocker, the shadow volume is the convex hull of the umbra volumes created by the blocker from each vertex of the light source.

3. The umbra volume corresponds to the intersection of the negative side of the blocker B with the negative sides of the maximum blocker extremal planes from every edge of B. In other terms, for a convex light source defined by vertices and a convex blocker, the umbra volume is the intersection of the umbra volumes created by the blocker from each vertex of the light source.

4.2.3.2 One Light and Many Blockers

The previous definitions can be applied individually to each convex light and each convex blocker to characterise each volume as illuminated or in shadow (umbra or penumbra) of a given blocker. With the same arguments as we used in the previous section on shadows of segments in the plane, we can also add that:

1.
The penumbra created by a collection of convex blockers Bi illuminated by a convex light source L is at most the union of the penumbrae created by each blocker Bi.

2. The umbra created by a collection of convex blockers Bi illuminated by a convex light source L is at least the union of the umbrae created by each blocker Bi, and at most the union of the umbrae and of the pair-wise intersections of the penumbrae.

These volumes have been used in many algorithms [nish83] [camp90] [chin92] [poul92b] [bao93] to speed up shadow determination. These volumes are used by these algorithms to characterise the visibility of a light source from a surface element to shade. Ultimately the algorithm can identify the blocker, or even the edges of the blocker, that must be reprojected onto the light source to determine the visible portion of the light illuminating the surface element to shade.

Much work in radiosity has lately paid attention to the discontinuities in shading produced by shadows [heck91b] [heck92b] [heck92a] [camp91] [sale92] [lisc92] [lisc93] [tell92] [stew93] in order to produce surface meshes more suitable for proper radiance interpolation. The surfaces are intersected with the volumes. For a region on a surface that resides within a single volume, the radiance is sampled at a few surface elements and the radiance function is reconstructed for the entire region.

If the radiance over a light source is smooth (C∞), the radiance over a planar receiver will also be smooth unless a change occurs in the visibility of the light source. Discontinuities in the radiance function are introduced by the presence of shadows. Heckbert [heck91b] identifies three types of radiance discontinuities (D0, D1 and D2) on a receiver illuminated by a light source with a smooth radiance. A discontinuity is D^k at x if the radiance function is C^(k-1) continuous at x but not C^k.
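A toy flatland example illustrates a D1 discontinuity: the visible fraction of a linear light, as seen from a receiver point sliding under a parallel blocker, is continuous but its derivative jumps at the penumbra boundaries. The coordinates below are hypothetical and for illustration only:

```python
# Flatland sketch (hypothetical setup): light along y=2 spanning x in [0,2],
# blocker along y=1 spanning x in [0,2], receiver points along y=0.
# A light point u is hidden from receiver point (x, 0) exactly when the
# sightline crosses y=1 inside the blocker span, i.e. (x+u)/2 in [b0, b1].

def visible_fraction(x, light=(0.0, 2.0), blocker=(0.0, 2.0)):
    lo = max(light[0], 2.0 * blocker[0] - x)   # start of hidden part of light
    hi = min(light[1], 2.0 * blocker[1] - x)   # end of hidden part of light
    hidden = max(0.0, hi - lo)
    return 1.0 - hidden / (light[1] - light[0])

for x in (-3.0, -1.0, 1.0, 3.0, 5.0):
    print(x, visible_fraction(x))
# fractions: 1.0, 0.5, 0.0, 0.5, 1.0 -- the function ramps linearly through
# the penumbra, so it is continuous but its slope jumps (D1) at x = -2, 0, 2, 4
```

The radiance reconstructed from such a visibility function is C0 but not C1 at the penumbra boundaries, which is exactly the D1 case of the definition above.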
If the radiance on the light source is smooth, discontinuities of the radiance on the receiver will occur at special visual events in space, that is, when light vertices or edges and blocker edges align.

In Campbell and Fussell's report [camp91], only minimum and maximum extremal planes are considered to determine the illuminated, umbra and penumbra region(s) of a single light and a single blocker. However, to produce accurate meshes within shadow regions, all discontinuities are of interest.

These regions have a similar interpretation in computing the aspect graph of polygonal scenes in computer vision [gigu90] [gigu91]. In an aspect graph, the projections of semantically identical lines (i.e. defining the visibility of a face) are regrouped for different views of a scene. Stewart and Ghali [stew93] construct a subset of the aspect graph: the L-aspect graph, where L is a polygonal light, determines the views in space such that the visible light polygon keeps the same structure.

A D0 discontinuity occurs at the boundary of a shadow volume created by a directional or a point light source. It can also occur with a polygonal light in a situation such as illustrated in Figure 4.23. The blocker B intersects the rectangular receiver. The radiance on one side of the receiver is non-zero while the radiance on the other side is zero. A scene made of m edges and illuminated only by polygonal lights can produce Θ(m²) D0 discontinuities [lisc92], as Θ(m) edges can intersect O(m) faces.

Figure 4.23: A D0 discontinuity

D1 and D2 discontinuities occur at edge-vertex (EV) and edge-edge-edge (EEE) events, when the blocker does not intersect the receiver but lies between the light and the receiver. An EV event produces a D1 or a D2 discontinuity. Figure 4.24 shows the change of visibility of a light.

Figure 4.24: EV event (a: D2, b: D1)

In a), when a vertex of light L and an edge of blocker B align, the light becomes visible, producing a D2 discontinuity. Afterwards, the visible area of the light grows quadratically.
In b), two edges of both light L and blocker B are parallel. When these edges align, light L becomes visible, producing a D1 discontinuity. Afterwards, the visible area of the light grows linearly. A scene made of m edges can produce O(m²) EV events [heck91a]. An EEE event generally produces a D2 discontinuity. The envelope of such an event is defined by a quadric ruled surface, as it is determined by a set of lines adjacent to all three edges at the same time. Figure 4.25 shows the change of visibility of a light L when a light edge and two edges of blockers B1 and B2 align. Afterwards, the visible area of the light grows quadratically. A scene made of m edges can produce O(m³) EEE events [heck91a].

Figure 4.25: EEE event

In radiosity, where a receiver with discontinuities can itself be considered as a secondary light source, a discontinuity D^k can propagate discontinuities D^(k+1) and D^(k+2) on other receivers. Since in the framework of this thesis we deal only with local illumination, we can concern ourselves only with D0, D1 and D2 discontinuities.

It is interesting to note that a linear light in a 3D environment can also produce D0, D1 and D2 discontinuities. In flatland [heck91a], a linear light can introduce only D0 and D1 discontinuities.

When the number of area light sources is increased, the shadows must be computed for each light source individually. Then, to determine the radiance at a given location, one must find the regions within which the surface element being shaded resides and must determine the portion of each light reaching the surface element.

4.3 Shadowing with Linear Light

In the previous chapter, we presented an analytical solution to shading surfaces illuminated by a linear light. This solution provides a good approximation of real illuminants like fluorescent and neon tubular lights. The proposed shading therefore gives a better impression of realism.
However, the shadows cast by objects illuminated by a linear light must also be properly computed, in order not to destroy but rather accentuate this realism.

These shadows can be generated by approximating the linear light by a series of collinear point lights located along the linear light. Unfortunately, they will be prone to the same sampling and rasterisation problems we faced for the shading. Instead of producing a smooth gradient in the penumbrae along the direction of a linear light, discrete regions could appear, thus destroying the perception of a single linear light.

Nishita et al. [nish85] describe an algorithm to compute the shadows produced by linear lights in a scene made exclusively of convex polygons. Given a linear light source and a polygon casting a shadow on a planar surface, the polygon vertices are projected onto the supporting plane of the surface, taking as center of projection alternately each of the two end points of the light. Once the contour lines of the shadows from both end points are determined, the convex hull of these shadow polygons is computed and stored as the entire shadow region. The intersection of the contour lines is also stored as the umbra region. Later on during the scanline process, if a surface element is inside a penumbra region, the polygons producing this penumbra are projected back onto the light source and the shading is computed for the segments of the light source that are visible from the surface element to shade. Otherwise, if the surface element is illuminated, the full shading is computed without any extra shadowing test. Finally, if the surface element is in an umbra region, only the ambient reflection is considered.

Later, Nishita et al. [nish92] observed that some optimisation can be achieved for scenes with many aligned fluorescent lights on a ceiling.
Rather than classifying each shadow region [nish85], they take advantage of the fact that several linear lights (segments) might lie along a single line, and apply a scanline rendering technique to determine which portions of each linear light illuminate the surface element to shade. The shading is then computed for these visible linear light segments.

4.3.1 Primitive-based Shadowing

Instead of determining each shadow region, we decided to compute the visible portion of the linear light source from the point being shaded. In that sense, this approach to shadowing is basic, but it allows primitives other than polygons. When a surface element centered at point P has to be shaded, the light source is first cut by the tangent plane at P, because no light can shine on P from an angle of more than π/2 from the normal at P. If a portion of the light source is visible from P, a triangle (the light triangle) is formed by joining the end points of the linear light and P. If an object intersects this triangle, its intersection casts a shadow on P. This intersection is projected onto the light and the shading is computed only for the visible segments of the light. In the case of the specular integral, the visible segments must also be cut such that, for each segment i, θi < π/2 and θi + γi < π/2 in Equation 3.11 (i.e. Re · L > 0).

Figure 4.26: Visibility of the Light Source

Let O be an object that may cast a shadow on P. The first test consists in intersecting the plane supporting the light triangle (the light plane) with object O. If the computations involved in intersecting O with the light plane are expensive, the simpler bounding volume of the object can be used instead to quickly reject the objects trivially not intersecting the plane. If it is determined that the object does not intersect the light plane, then object O does not cast any shadow on P. Otherwise, the light triangle is transformed into the object coordinate system, where more accurate tests can be performed.
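The first step, cutting the light by the tangent plane at P, can be sketched as a standard segment-against-plane clip. The vector helpers below are hypothetical, not the thesis code:

```python
# Clip the linear light (l0, l1) against the tangent plane at p with normal n:
# only the part of the light with n . (X - p) > 0 can contribute to P.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lerp(a, b, t):
    return tuple(x + t * (y - x) for x, y in zip(a, b))

def clip_light(p, n, l0, l1, eps=1e-9):
    """Return the sub-segment of the light above the tangent plane at p,
    or None if the whole light is below the horizon of the surface element."""
    d0 = dot(n, tuple(a - b for a, b in zip(l0, p)))
    d1 = dot(n, tuple(a - b for a, b in zip(l1, p)))
    if d0 <= eps and d1 <= eps:
        return None                  # light entirely below the tangent plane
    if d0 >= -eps and d1 >= -eps:
        return (l0, l1)              # light entirely above the tangent plane
    t = d0 / (d0 - d1)               # parameter where the light crosses the plane
    m = lerp(l0, l1, t)
    return (m, l1) if d1 > 0 else (l0, m)

# Surface element at the origin with normal +Z; the light crosses z = 0.
seg = clip_light((0, 0, 0), (0, 0, 1), (-1, 0, -1), (-1, 0, 3))
print(seg)   # ((-1.0, 0.0, 0.0), (-1, 0, 3))
```

Only the returned sub-segment then participates in forming the light triangle and in the shading integral.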
Afterwards, if indeed the object O intersects the light plane, it is transformed into such a position that the light lies on the X axis with one end at the origin, and the intersection point P lies on the plane z = 0 on the positive side of Y (see Figure 4.26). This involves only a series of rotations and translations, leaving the basic definition (shape) of the primitive unchanged. The remaining work consists in identifying the intersection between a primitive and the plane z = 0 and projecting in 2D the result of this intersection onto the X axis.

As an example, take a sphere that has been transformed by a series of scaling and shearing operations. The light triangle is first transformed into the sphere coordinate system, where it is easier to determine if the sphere intersects the light plane. The intersection between the plane and a sphere is either null, a point or a circle. In the first two cases, no shadow is produced. For a circle, the two tangent points from P or the intersection between the circle and the X axis define the visible segments of the light source. Again, Figure 4.26 illustrates these two cases.

For simple objects like polygons, the transformation into the object coordinate system is not required, and the transformation matrix bringing the light triangle onto the plane z = 0 and the linear light along the X axis needs to be computed only once per intersection point P. For other more complicated objects like cubic patches, finding the intersection between a light triangle and the primitive can be a very difficult task. Rather than not treating the shadowing of these primitives in the presence of a linear light, we offer an approximate solution based on distributed ray tracing. Rays are shot from P towards the linear light and intersected only with this primitive. The number of rays is determined as a function of the angle made when joining P with the two end points of the light.
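In this canonical frame, the bookkeeping of occluded projections reduces to one-dimensional interval arithmetic on the light segment. The helper below is a hypothetical sketch of that step, not the thesis code:

```python
# The light spans [0, light_len] on the X axis. Each blocker, once intersected
# with the light plane and projected, yields an occluded (start, end) interval;
# the shading is then computed only over the remaining visible sub-segments.

def visible_segments(light_len, occluded):
    """Subtract a list of occluded (start, end) intervals from [0, light_len]."""
    events = sorted((max(0.0, s), min(light_len, e))
                    for s, e in occluded if e > 0.0 and s < light_len)
    visible, cursor = [], 0.0
    for s, e in events:
        if s > cursor:
            visible.append((cursor, s))   # gap before this occluder is lit
        cursor = max(cursor, e)
    if cursor < light_len:
        visible.append((cursor, light_len))
    return visible

# Two blockers project onto [1, 2] and [1.5, 4] of a light of length 5.
print(visible_segments(5.0, [(1.0, 2.0), (1.5, 4.0)]))
# [(0.0, 1.0), (4.0, 5.0)]
```

An empty result means the whole light is hidden, at which point the shadowing process can stop immediately.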
In a first pass, a regular set of rays is shot to roughly determine the edges of the intersection with the light plane. In a second pass, additional rays are used to refine the projection edges. At this stage, some random jittering is added to the ray directions, substituting aliasing by noise, which is very effective within the penumbra region. This technique has the advantage of being rather general to approximate the visibility of the linear light from P. In fact, we rely on this approach for every primitive difficult to intersect with a plane. Unfortunately, it is not flawless, as the edges of the intersection are basically determined from a limited set of rays and ultimately it has the same problems as any sampling technique. Although the results so far have been visually pleasing, we advocate the proper intersection of each primitive with the light plane. For instance, Fournier and Buchanan [four92c] subdivide a Bézier cubic patch into bilinear patches. These can efficiently be used instead of the patch itself for shadowing computation.

It is important to note here that as soon as we know that the whole light is hidden, the shadowing process is stopped. The intersection/projection scheme described in this section can be relatively expensive. It is therefore essential to eliminate the objects not intersecting the light triangle as quickly as possible. The next two sections describe two algorithms that we implemented to reduce the number of objects that are candidates for casting a shadow on the surface element to shade.

4.3.2 Light Triangle 3D Scan Conversion

We implemented our shading and shadowing algorithms in our local ray tracer. Our first acceleration scheme for shadowing is an extension of a standard ray tracing acceleration scheme. To reduce the number of objects that a ray must test intersection with, we subdivide the scene into regular cubes, also called voxels. Each voxel contains a list of objects intersecting this particular voxel.
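Building such a grid can be sketched as below. The conservative bounding-box insertion is an illustrative assumption (the thesis does not specify the insertion test); object names and the scene bounds are hypothetical:

```python
# Assign each object to every voxel its axis-aligned bounding box overlaps.
# Conservative: a voxel may list an object whose true surface misses it.

def build_grid(res, scene_min, scene_max, objects):
    """objects: list of (name, aabb_min, aabb_max) in scene coordinates."""
    cell = [(scene_max[i] - scene_min[i]) / res for i in range(3)]
    grid = {}
    for name, lo, hi in objects:
        # Range of voxel indices overlapped by the object's bounding box.
        i0 = [max(0, int((lo[i] - scene_min[i]) / cell[i])) for i in range(3)]
        i1 = [min(res - 1, int((hi[i] - scene_min[i]) / cell[i])) for i in range(3)]
        for x in range(i0[0], i1[0] + 1):
            for y in range(i0[1], i1[1] + 1):
                for z in range(i0[2], i1[2] + 1):
                    grid.setdefault((x, y, z), []).append(name)
    return grid

g = build_grid(5, (0, 0, 0), (10, 10, 10),
               [("sphere", (1, 1, 1), (3, 3, 3))])
print(len(g))   # the sphere's box overlaps the 8 voxels with indices 0..1
```

During shadowing, only the objects listed in voxels touched by the light triangle need the full intersection/projection test.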
When a ray traverses a voxel, only the objects intersecting this voxel have to be tested. We use the regular grid traversal described in Amanatides and Woo [aman87]. Once an intersection point is found, a light triangle is formed. This light triangle is scan converted in the 3D grid of the scene. The objects intersecting a voxel also intersected by the light triangle are gathered. This possibly smaller set of objects is then tested and projected onto the light source as described in the previous section.

Although we implemented this technique with regular grid traversal, it can easily be adapted to various other space subdivisions like irregular grids, octrees, kd-trees and hierarchies of bounding volumes. While the scan conversion of light triangles through these structures might prove to be more complex, it might further reduce the number of structure elements visited and the number of candidates.

4.3.3 Linear Light Buffer

Figure 4.27: The Linear Light Buffer

In our second scheme, a modified light buffer [hain86] for linear light sources has been implemented. In the traditional light buffer, a cube is placed around a point light source. As a preprocessing step, each object in the scene is projected onto the faces of the cube. Each face is subdivided into regions and each region contains a list of objects projecting onto it. Each list can be ordered by the distance to the light. When a surface element to shade has to be determined in shadow or illuminated, it is projected onto the light buffer and only the objects in its region need to be tested. These objects are tested in order from the light until we find an intersection or until the distance to the light of the next object is larger than the distance from the surface element to the light.

Our linear light buffer is represented by an infinitely long cylinder whose axis is oriented along the light. This cylinder is subdivided radially into sections.
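Locating a point in such a buffer can be sketched as below, assuming the canonical frame of Section 4.3.1 (light along the X axis from 0 to its length) and a left/center/right split along the light axis. This is an illustrative sketch, not the thesis code:

```python
import math

# Map a point to its cell of the linear light buffer: an angular section of
# the cylinder around the light's axis, plus a left/center/right region.

def buffer_cell(p, light_len, n_sections):
    """p is in the canonical frame: light on the X axis from 0 to light_len."""
    angle = math.atan2(p[2], p[1]) % (2.0 * math.pi)   # angle around the axis
    section = int(angle / (2.0 * math.pi) * n_sections) % n_sections
    if p[0] < 0.0:
        region = "left"
    elif p[0] > light_len:
        region = "right"
    else:
        region = "center"
    return section, region

print(buffer_cell((0.5, 1.0, 0.0), 1.0, 36))   # (0, 'center')
print(buffer_cell((2.0, 1.0, -1.0), 1.0, 36))  # angle 7*pi/4 -> (31, 'right')
```

Each cell would then carry the precomputed list of candidate blockers, so shading a point requires only this constant-time lookup before the per-object tests.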
Each section has an ordered list of the objects contained (at least in part) within the limits of its angle. To improve the performance of the linear light buffer, we also divide it into three regions: the left, center and right side of the light source. This is illustrated in Figure 4.27. When an intersection point P is found, it is located within an angular section of the linear light buffer. If it is positioned in the center region, only the objects at least partly in this angular section and in the center region need to be tested. If P is in the left region, only the objects in the left and center regions need to be tested, and similarly for the right region.

4.3.4 Results of Shadowing with Linear Light

In the scan conversion of the light triangle in the 3D grid, no additional memory is needed. No preprocessing is required, but the scan conversion process for each point to shade can be rather expensive. Tables II and III illustrate our results for two test scenes.¹ Figures 4.28 and 4.29 show the rendered versions of those two scenes.

Figure 4.28: Tetrahedron    Figure 4.29: Sphereflakes

¹These two scenes, the sphereflakes and the tetrahedron, have been suggested by Haines [hain87] in an attempt to help benchmarking rendering techniques. The sphereflakes scene consists of 91 spheres defined recursively and a square. The tetrahedron scene is represented by 1024 tetrahedra recursively positioned in a pyramid-like structure. To test our algorithms, a linear light source is located above each scene.

In Table II, the rendering times have been normalised by the time required to render the same scene with a grid of 5³ voxels and without using the scan conversion of light triangles. In the tetrahedron scene, scan converting the light triangles into a grid resolution of 5³ is 25% more expensive than no scan conversion.
This is due to the facts that just a few voxels contain lots of small triangles, and that identifying if a triangle intersects the light plane is a relatively cheap process. However, when the grid resolution increases, each light triangle is scan converted in smaller voxels with fewer candidates per voxel. The number of candidate triangles is greatly reduced, which offsets the extra cost of scan conversion. The rendering cost is then reduced by as much as 76%. Notice that increasing the grid resolution does not always result in a speed up. In the tetrahedron scene, a grid of 50³ is less cost effective than 25³. The sphereflakes scene is more problematic and shows some instability with the algorithm. The bottom square is much larger than the spheres, which are agglomerated in the center of the scene. Because of this situation, many spheres are divided among just a few voxels. The scan conversion is highly sensitive to variations in the grid resolution, as demonstrated by its unpredictability.

In the linear light buffer, the pointers to the objects for each region within each section of each linear light buffer need to be allocated. The assignment of an object to the appropriate sections and regions is done in preprocessing. During rendering, a surface element needs only a simple projection to determine the right section and region. An adaptive subdivision of the angular sections could be added to the linear light buffer, but this has not been done in our current implementation.

In Table III, the usefulness of the linear light buffer is evaluated. All the rendering times are normalised by the rendering time using no linear light buffer (a single section per linear light buffer) and no grid. It is interesting to note that, unlike the scan conversion of the light triangle, this algorithm is very stable for both scenes. As the grid resolution increases, fewer objects are tested for intersection for each ray, thus reducing the whole rendering time.
As the number of sections in the linear light buffer increases, the rendering time decreases steadily.

Table II: Light Triangle

                              Grid Resolution
Database      Light Triangle    5³      10³     15³     20³     25³     50³
tetrahedron   Inactive        1.0000  0.8926  0.8719  0.8657  0.8615  0.8642
              Active          1.2532  0.5239  0.2925  0.2689  0.2385  0.3740
sphereflakes  Inactive        1.0000  0.9059  0.9022  0.8863  0.8742  0.8629
              Active          1.2342  0.7969  0.9722  0.8245  0.8696  1.5742

Table III: Linear Light Buffer

                       Linear Light Buffer Resolution
Database      Grid       1       24      36      72      144     360     720
tetrahedron   1³       1.0000  0.9732  0.9508  0.9425  0.9236  0.9120  0.9127
              5³       0.1429  0.0455  0.0410  0.0359  0.0333  0.0318  0.0314
              10³      0.1275  0.0303  0.0256  0.0206  0.0180  0.0166  0.0160
              15³      0.1246  0.0418  0.0227  0.0177  0.0151  0.0138  0.0132
              20³      0.1237  0.0384  0.0223  0.0168  0.0142  0.0128  0.0123
              25³      0.1231  0.0341  0.0214  0.0163  0.0138  0.0122  0.0117
              50³      0.1235  0.0534  0.0209  0.0159  0.0133  0.0118  0.0114
sphereflakes  1³       1.0000  0.6656  0.6291  0.5751  0.5505  0.5332  0.5281
              5³       1.0270  0.6834  0.6449  0.5916  0.5678  0.5471  0.5497
              10³      0.9304  0.5809  0.5446  0.4876  0.4676  0.4479  0.4466
              15³      0.9266  0.5782  0.5392  0.4858  0.4644  0.4449  0.4411
              20³      0.9103  0.5606  0.5229  0.4796  0.4481  0.4280  0.4258
              25³      0.8979  0.5486  0.5112  0.4688  0.4347  0.4156  0.4130
              50³      0.8863  0.5366  0.4974  0.4579  0.4212  0.4031  0.3990

4.3.5 Conclusion on Shadowing with Linear Light

On a simple test scene, we observed that for a few primitives, shading and shadowing with a linear light source is equivalent in computation to using 10 point light sources. However, using so few point light sources results in pictures showing discretisation artifacts within the shadow regions. For primitives as complicated as patches, the number of point light sources equivalent in computation is around 25.

An algorithm is introduced to handle correct shadowing with more complex primitives than polygons. This algorithm adds more flexibility in the primitives casting shadows from a linear light source, at the cost of more expensive intersection computations.
In order to reduce the additional cost of using this shadowing algorithm, we studied two techniques based on ray tracing acceleration techniques. The scan conversion algorithm has the advantage of requiring no additional memory. However, the scan conversion process is rather expensive and it is difficult to evaluate the dimension of the grid subdivision that would provide a good speed up of the shadowing calculations. The linear light buffer requires additional memory for each linear light source. However, its stability and the speed up observed made us choose this method for rendering many of our scenes.

For certain primitives, determining the visible portion of the linear light source is very complicated. A general approach based on shooting rays to determine the visible portion of the light is used. With such a process, the cost of shadow determination is very high and the tradeoff of both culling algorithms over no culling becomes even more worthwhile.

4.4 Shadowing with Area Light Sources

4.4.1 Sampling the Light

Most of the algorithms proposed to compute shadows from area light sources are based on sampling the light source to determine the visible part from the point to shade. Cook et al. [cook84a] use distributed ray tracing, where a set of rays is sent to the area light to sample the light radiance reaching the surface element to shade. Various techniques have been proposed to improve the sampling pattern of the light source [mitc91] [shir91]. Kok and Jansen [kok91] [kok92] devise some criteria to decide adaptively how many rays should be sent to the area light source. Ward [ward91] estimates the importance of the contribution of each light before sending samples. The low contributors are statistically approximated without shooting any ray. Similarly, Chen et al. [chen91] treat a light source only when its radiosity has a significant impact on the global illumination.

Cohen et al. [cohe85] propose the hemicube approach.
A hemicube consists of five faces regularly subdivided into pixels. The hemicube can be positioned at the center of an area light source and therefore act just like the light buffer of Haines and Greenberg [hain86], with the exception that in the hemicube, only the closest object is kept for each hemicube pixel. Therefore, an object is assumed to cover a hemicube pixel entirely. The hemicube has the advantage that it is simple and can make direct use of current hardware Z-buffers. However, it suffers because of its regular subdivision. It can present strong aliasing and can miss features (shadows) if its resolution is not fine enough. Max and Troutman [max93] show how the hemicube can be improved upon by allowing a variation of the resolutions of the pixels on each face and by rotating the hemicube about the surface normal. Meyer [meye90] uses the hemicube information before shooting rays towards an area light. Pietrek [piet93] combines hemicube, ray casting and analytical techniques to quickly and accurately compute form factors. He also gives some criteria to decide which approach should be used in which situation, based on the information contained in adjacent pixels in the hemicube. Sillion and Puech [sill89] replace the hemicube by a single plane with surface elements of varying size according to the solid angle they subtend.

All these techniques suffer from sampling that is done at the center of a primitive (generally a polygon) to represent the shadowing over the entire surface.

4.4.2 Extended Rays

Instead of using sampling rays, Amanatides [aman84] extends the concept of a ray to a cone. By doing so, he can compute the shadows cast from a spherical light source. A single cone is sent to the light and the non-obstructed portions of the cone determine the portion of the light shining on a point.
To reduce the cost of intersecting a cone with manyobjects, Genetti and Gordon [gene93] fit a cone over the light and recursively subdividethis cone into smaller cones until each cone is either considered fully illuminated, inshadow, or until the subdivision reaches a certain threshold.Heckbert and Hanrahan [heck84l describe beam tracing where a polygonal ray is senttowards a polygonal light to determine the visible part of the light. Polygonal objectsclip the beam as it progresses towards the light.4.4.3 Characterising RegionsAs we saw earlier, a shadow introduces discontinuities in the radiance function over areceiver. Baum et al. [baum9l] use D° discontinuities produced at the intersection ofpolygons supported by non-parallel planes to create better meshes for radiosity computation. D° discontinuities also occur in the shadow (umbra) from directional and pointlight sources. The shadow volumes [crow77} from a point light source have been represented by a binary space partion (BSP) tree by Chin and Feiner [chin89j. Campbell andFussell [camp9O] create meshes for shadow discontinuities in radiosity by distributing aseries of point light sources on a polygon emitter and generating the meshes (BSP) foreach point light. This technique can lead to a large number of subdivisions as each lightmight need to be approximated by a large number of point light sources.Nishita and Nakamae [nish83] compute the shadow with its umbra and penumbra(D°, D’ and D2) regions cast by a convex polygonal blocker when illuminated by aconvex polygonal light. They project every edge of blockers from every vertex of thelight. The shadow is determined as the convex hull (if both light and blocker are convexpolygons) of the shadow volumes from each light vertex while the umbra correspondsto the intersection of these shadow volumes. The reflected radiance is computed pointChapter 4. Shadows 79by point at the rendering stage. 
If a point on a polygon is fully lit by the light, the radiance is computed directly without any further shadow test. If this point lies in an umbra region, only the ambient contribution is added. If the point is in the penumbra of one or more polygons, the polygon(s) casting this shadow must be projected back onto the light, and the radiance is computed only for the visible portion of the light. When the information about which polygon is casting a given penumbra is not provided, some techniques allow fast culling of the polygons [hain91] [woo93]. These techniques rely mostly on the plane equations formed by linking the point to shade to the edges of the light.

Bao and Peng [bao93] use a technique similar to [nish83] but rely on a BSP structure to order the polygons casting shadows on the others. The light source is subdivided if it lies on both sides of the supporting plane of a scene polygon. Instead of considering only the planes formed by the scene edges and the light vertices, Chin and Feiner [chin92] also compute the planes formed by the light edges and the scene vertices. They thus avoid the use of the Weiler-Atherton [weil77] [athe78] clipping algorithm and construct similar umbra and penumbra BSP trees for each light. This latest approach has also been used by Campbell and Fussell [camp91], with more theoretical assertions about the boundary of each region.

This kind of approach characterises regions in space as illuminated, in umbra, or in penumbra of a single blocker. However, it does not characterise regions that would be in umbra as a result of the combined action of two or more blockers. Therefore all the polygons for which a point is in penumbra must be projected back onto the light source to determine the portion of the light visible from the point to shade. Obviously, this can be an expensive process. As an example, assume a finely meshed object lies under an area light.
Individually, every facet of the object produces a very small umbra, but together they can cast a large umbra.

Campbell and Fussell [camp91] observe that the radiance within each penumbra region is not linear, and that simple linear interpolation of a few sample points does not provide the proper values for each point in a given region. They propose to subdivide each region according to the radiance function. The maximum radiance points are found by numerical optimisation techniques and each region is further subdivided accordingly. Drettakis and Fiume [dret93] show how a few sample points can be used to reconstruct the radiance function in the absence of shadows.

Instead of characterising only the contour of the penumbra and umbra volumes, other researchers have studied the effect of the discontinuities within the penumbra regions. Most of the algorithms start by finding all the critical curves formed by a vertex of the light and an edge of the scene, or by a vertex of the scene and an edge of the light. Heckbert [heck92a] finds the end points of each critical line lying on a given polygon, identifies the intersections between these line segments, and subdivides each region with a Delaunay triangulation. A parametric surface is finally fit to each triangle to represent the radiance over the region. Lischinski et al. [lisc92] use the same scheme but represent the scene with a 3D BSP tree and store the critical curves (lines) on each polygon in a 2D BSP tree. A quadratic interpolating surface is fitted to the radiance, and the contributions of lights and reflectors (forming different meshes) are summed over a global mesh. Salesin et al. [sale92] present a solution that fits piecewise cubic interpolants to the radiance, thus preserving C1 continuity. Finally, Lischinski et al. [lisc93] merge their discontinuity meshing algorithm with the hierarchical radiosity of Hanrahan et al.
[hanr91], allowing for a fast solution of complex environments that produces images of higher quality due to the presence of more proper radiance discontinuities.

In all the previous approaches, the discontinuities produced by three edges are not computed. Teller [tell92] presents an algorithm to compute EV and EEE discontinuities in an antipenumbra by converting every segment into the Plücker coordinate system. In this 5D system, a directed line corresponds to a half-space delimited by a plane in Plücker coordinates. The set of all directed segments forming a hole defines a convex polytope. The intersection of an edge of the polytope with the Plücker quadric surface determines an end line of a discontinuity. The intersection of a face of the polytope with the Plücker surface defines a VE (planar) or an EEE (quadratic ruled surface) discontinuity. Stewart and Ghali [stew93] use a plane-sweep approach starting at the light source. The intersections of scene edges and faces with the plane are recorded and processed in order, thereby constructing the aspect graph as seen from any point on the light.

4.5 Conclusion

Shadows provide important information and therefore play an essential role in our interpretation of the world. However, as we saw in this chapter, properly and efficiently computing realistic shadows in computer graphics is an expensive process.

Characterising each shadow region in a scene has the advantage that it can be done once as preprocessing, and it can lead to important savings in a static environment. It can also allow computation of the radiance function over an entire region. However, this process requires computing all the regions, even if some shadows are not visible. These regions must be stored, and queries about points in these regions have to be answered efficiently.
Moreover, as the dimension of the light increases, the complexity of the regions grows, making the characterisation of regions much more difficult.

Sampling the light on a need-to-know basis is a simple process. However, it is done at the risk of missing some blockers, and it can be expensive for large lights in complex environments. To avoid missing blockers, the blockers can be projected back onto the light, thus revealing the portion of the light as seen from the point to shade. This is done at the increased cost of performing the light clipping.

Intermediate techniques have been used to characterise regions as being in the shadow (umbra or penumbra) of individual blockers or fully illuminated. These techniques have been popular as they often offer a nice compromise between computing the shadows at every point and characterising every region.

In our approach to shadowing with linear light sources, we chose to recompute the intersection of a light triangle with the scene for each point to shade. This provided us with a simple algorithm, adapted to each type of primitive, in order to render soft shadows. Two culling techniques have also been developed to reduce the number of objects potentially casting a shadow on the point to shade. This decision seemed the most appropriate in a ray tracing renderer. However, this decision must be made for each application domain.

In the next chapter, we will see how the shadow regions, as well as the shading on the surfaces, can be used to specify the illumination and surface characteristics within the modeling process.

Chapter 5

Lighting Design by Shading and Shadows

In the previous chapters, we reviewed various reflection models to compute the shading of many surfaces. We also presented algorithms to compute the shading and shadows of scenes illuminated by linear and polygonal lights. All these models and techniques provide tools for a user to create images of scenes with a higher degree of realism.
However, the user must understand the reflection models and their relationships with the illuminants very well in order to produce the right shading effects in the scene she is designing.

In most modeling systems, lights are created and manipulated just like any geometric object. A user must then assign to each light an emitted power as a function of wavelength. Similarly, the user must provide values for the various surface parameters. At each step, the only way to observe the results of the changes is by rendering the scene. Depending on the scene complexity, the rendering technique, and the quality needed for the intermediate image, a resulting image can be produced in times ranging from a fraction of a second to hours. With this feedback, the user can return to the modeler and again change the parameters to create better shading effects. This process is repeated until the image corresponds to what the user wants, or until the user finally decides, often after many frustrating iterations, that the image is as good as it is going to get.

The task is not simple because the user must mentally perform an inverse shading. She may know which shading effect she wants on a surface, but she must determine the values for each surface parameter, and each light position and emitting power, that will translate into the desired shading.

In this chapter, we describe how we incorporate shading and shadow information into the modeling process itself. By creating and directly altering these shading effects, the lights and surfaces are indirectly specified. Our modeler thus helps to determine where a light should be to produce a shadow here or a highlight there.
It can also provide values for surface parameters to create a highlight of a certain size, or to fit selected colours at surface points.

Our approach reduces the number of inverse shading tasks that a user must perform. It should lead to important savings in the number of modeling/rendering iterations. It also frees the user from knowing all the specifics of shading models and can prevent this user from having to enter magic numbers. Any user, whether experienced or not, should benefit from the extra information provided by our modeler.

Our new process does not preclude previous ways of defining and positioning light sources or specifying reflection parameters. In fact, one can still move the light sources directly, just like any other object in the scene, or specify surface characteristics by entering values for various shading parameters. Our techniques thus offer a new approach that enhances the process of lighting design and surface shading.

In the next sections, we quickly review some computer vision techniques for extracting information from shading features in an image. We see how difficult some of these problems are because of a lack of information (constraints). In a modeler, the viewing parameters and the exact scene geometry are known. Because of this, many problems become much simpler to solve. We show how diffuse and specular reflections can be used to determine the geometric location of a light. We also describe a technique to define and position a light from the shadows it casts. Finally, we demonstrate how the modeler can fit the "best" values of the shading parameters to colour points applied on a surface.

5.1 A Parallel with Computer Vision

Computer graphics builds models to simulate reality. Once built, the user renders these models onto an image. In this sense, computer graphics models the causes (the interaction of light with matter) and renders the effects (colours) as an image.
By way of contrast, computer vision is interested in analysing an image. It tries to isolate certain effects in an image in order to estimate the causes. While the two processes might seem to go in totally opposite directions, it is interesting to consider how advances in one might actually help the other.

An important research area in computer vision is analysing shading information to extract object shape, light orientation and surface characteristics from an image. There is an extensive literature on recovering shape from shading.1 By integrating the shading gradient, it becomes possible to infer a surface that would produce the shading. The technique has several limitations because boundary conditions are often unavailable. In fact, it is difficult to determine what portion of a pixel colour is due to the surface at a boundary, because we do not know the pixel coverage. Real surfaces are not generally ideal diffuse reflectors, and interreflections between surfaces reduce the accuracy of the reconstructed surface. Much research still involves producing more general solutions, although shape from shading is a very difficult problem to solve without relying on additional assumptions.

1 For a collection of important results in this area, see [horn88] and [wolf92].

Finding the light source direction from shading is a complementary problem to shape from shading. By assuming a scene made of diffuse surfaces with a uniform isotropic distribution of surface normals, Pentland [pent82] applies a least-squares regression to the pixel intensities and gradients to determine the most likely light direction. However, when his assumptions are violated, it becomes very difficult to rely on his technique.

Highlight information has been used to determine light direction or local shape orientation. Babu et al.
[babu85] study contours of constant intensity in an image to determine the orientation of planar surfaces under the illumination of a directional light source. Buchanan [buch87] fits ellipses to the highlights to obtain the same information for planar surfaces illuminated by point light sources. One important requirement of most of these algorithms is the accurate identification of the highlight area. This area is often detected by some kind of thresholding technique. The unfortunate situation with thresholding is that different threshold values can lead to relatively different shapes of the highlight and, therefore, to different shape and light recovery. Other techniques, such as the use of polarisation [wolf91], are promising, although they require the presence of polarising lenses on the cameras capturing the scene.

Much useful information can be extracted from the shadow areas in an image [walt75] [shaf85]. These areas provide additional information on the shape (profile) of the object casting a shadow, and even on the shape of the object on which the shadow is cast. Moreover, they provide information on the direction and the shape of the light sources. However, shadows of dim lights will produce only very small changes in shading gradients. Similarly, the presence of several lights (possibly extended lights) and interreflections will create many shadows for each object. This amalgam of overlapping shadows makes the situation very difficult to interpret. Detecting shadows can be done in a manner similar to edge detection, by applying various edge enhancing filters. For extended lights, the shadow edges are soft and the shadows must be detected based on changes in the gradients of the surface shading. Gershon [gers87] uses gradients in colour space to determine whether a region corresponds to a shadow region or simply to a change of material.

While modeling a scene, a user has access to important information unavailable to computer vision, i.e.
the geometry of the scene and the viewing parameters. To better understand a 3D scene, the user can move her view point around, display several views of the same scene at the same time, move objects, and remove hidden surfaces, all in real time. However, very few current applications use information about shading and shadows to improve the efficiency of the modeling step. In this chapter, we investigate how to define and manipulate light sources and how to specify surface shading parameters.

5.2 Defining and Manipulating Light Sources

With the advent of high performance graphics hardware, it becomes possible to interactively create and manipulate more complex models with a higher degree of realism. The simple wireframe models of yesterday can now be replaced by illuminated, smooth shaded, depth buffered, textured (filtered) and antialiased polygons, while still retaining real time interaction with the models.2 Hanrahan and Haeberli [hanr90] demonstrate with their system how current graphics hardware can be used to paint textures and various other surface parameters (transparency, perturbation of surface normals, etc.) directly onto surfaces in a fully interactive system. This increase in rendering power allows us to better investigate light definition and manipulation from shading and shadows, as well as the specification of surface characteristics by their shading effects.

In the next three sections, we describe how light sources can be defined and manipulated from diffuse reflections, highlights and shadows. The advantages of each technique are demonstrated and their respective limitations are explained, to give a better insight into the efficiency of each technique.

5.2.1 Lights from Diffuse Reflections

A diffuse surface appears equally bright from every angle.
For most surfaces, the shading gradient provided by this model offers an important clue to understanding the shape of a surface. Recall the diffuse contribution to the pixel radiance of a surface element illuminated by a directional light (Equation 3.3):

L_pixel = k_d (N . L) E_0.

In this equation, we assume that all the terms are constant except for the light direction L. N is the surface normal at a given point.

This formulation tells us that for a given point on the surface specified as the maximum intensity of the diffuse reflection function, a unique directional light source can be determined when (N . L) = 1, which occurs at

L = N.

The term maximum intensity is not strictly correct if we think of it in the context of a complete shading model with diffuse and specular reflections. However, if everything in a scene is fixed except for the direction of the light, this maximum intensity will find the direction of the light that produces the maximum irradiance on the surface element. It is interesting to note that other surface elements on this surface might also reach this maximum irradiance, but they will never surpass it.

The diffuse reflection as expressed above is independent of the viewer position. It is therefore possible to change the viewer position while the diffuse shading information remains the same. Unfortunately, this technique is limited to specifying only a direction along which a light source lies, unless additional constraints are added to the system. Such constraints can exist in a modeling system. For instance, the user can constrain the light to reside on a plane (such as a ceiling in an indoor scene). By adding a point of diffuse maximum intensity, a direction is established.

2 As a point of reference, the Silicon Graphics RealityEngine [akel93] is reported to have performance exceeding one million antialiased, texture-mapped triangles per second.
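Turning that established direction into a light position on the constraint plane reduces to a standard ray-plane intersection. The sketch below is a minimal, hypothetical version (function and parameter names are illustrative); it returns None when the ray misses the plane.

```python
def ray_plane(origin, direction, p0, n, eps=1e-9):
    """Intersect the ray origin + t*direction (t >= 0) with the plane
    through point p0 with normal n.  Returns the intersection point (a
    candidate light position), or None when the ray is parallel to the
    plane or points away from it."""
    denom = sum(d*c for d, c in zip(direction, n))
    if abs(denom) < eps:           # ray parallel to the plane: no hit
        return None
    t = sum((a - b)*c for a, b, c in zip(p0, origin, n)) / denom
    if t < 0:                      # plane is behind the ray origin
        return None
    return tuple(o + t*d for o, d in zip(origin, direction))
```

For a surface point at the origin with the established direction straight up and a ceiling plane at height 5, this places the light at (0, 0, 5).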
The intersection of this direction ray and the light plane3 can be used to define a point light source or a vertex of an extended (linear or polygonal) light source.

3 Note that there might not be any intersection.

While a point light defined by this technique still maximises the above diffuse expression, it does not correspond to maximising the diffuse radiance in the direction of the pixel for a surface illuminated by a point light. Instead, one needs to maximise the diffuse contribution of Equation 3.5:

L_pixel = k_d (N . L) Φ / (4 π r²).

If all the terms are kept constant except for the light position, the maximum intensity of the above equation occurs at

max (N . L) / r².

Without any additional constraint, this maximum occurs directly on the surface element. In many indoor scenes, illuminants have a fixed definition (a point light, a linear light of a certain length, or a polygonal light of a certain size) and thus must lie on the ceiling plane. By constraining the point light to lie on a plane, the maximisation can be expressed in Cartesian coordinates (see Figure 5.30) as

max over (x_p, y_p) of (x_p sin α − d cos α) / (d² + x_p² + y_p²)^(3/2),

where x_p and y_p are the only variables. The local maximum can be obtained with a nonlinear optimisation algorithm.

This time the surface element is not necessarily the portion of the surface with the largest irradiance. However, this point light does provide the maximum irradiance on the surface element while still lying on the plane. A similar approach can be applied to linear and polygonal illuminants if, for instance, the light can only be translated on the plane.

Figure 5.30: Maximising

5.2.2 Lights from Highlights

Specular reflection models light that is reflected mostly around one direction. The light reflected this way corresponds to what we consider to be a highlight.
Recall the specular contribution to the pixel radiance of a surface element illuminated by a directional light (Equation 3.3):

L_pixel = k_s (N . H)^n E_0,

where N is the surface normal at a given point, H is the bisector vector of the eye direction and the light direction, and n is the surface roughness coefficient.

In this equation, we assume that all of the terms are kept constant except for the light direction L (therefore H varies). This formulation tells us that for a given point on the surface, specified as the maximum intensity of the highlight, a unique directional light source can be determined when (N . H)^n = 1, which occurs when (N . H) = 1, and therefore

L = 2 (N . E) N − E,

where E is the eye direction.

Once again, we interpret the term maximum intensity to mean maximising the radiance function given above.

This simple relationship between the maximum intensity of the highlight and the light direction has been used in the past. Hanrahan and Haeberli [hanr90] mention how they specify a light direction by dragging the point of maximum intensity of a highlight on a sphere. This technique has also been implemented in two other modelers. A light modeler developed in 1983 at the New York Institute of Technology by Paul Heckbert manipulated highlights on a sphere. A light editor written by Richard Chuang around 1985 at Pacific Data Images was used to position highlights at specific locations on flying logos. A similar approach at LucasFilm was used to get a glare on a sword at a crucial moment in the movie Young Sherlock Holmes.

Our technique extends the basic approach by indirectly and interactively determining the surface roughness coefficient n in relation to the size of the highlight.

Once the maximum intensity point of the highlight has been selected, the user drags the cursor away from this point. At a new position on the surface, the surface normal is computed. This new point is used to determine the boundary of the highlight, i.e.
where the specular term of the radiance reaches a preselected threshold t. To satisfy this threshold, n, the only free variable in (N . H)^n = t, is easily computed as

n = log t / log (N . H).     (5.13)

While only these two points on a surface are necessary to orient a directional light source and establish the surface roughness coefficient, they give very little information about the complete shape of the highlight. To approximate the contour of the highlight, the pixel with the maximum intensity is used as a seed point, and the neighbouring pixels covered by this surface are visited in a boundary fill fashion [fole90] until pixels on both sides of the threshold are identified or until the boundary of the visible surface is found.

With this boundary fill algorithm, the second point is not guaranteed to be enclosed by the contour of the highlight determined from the original seed point. Figure 5.31 represents a shape where the point of maximum intensity and the threshold point would define two separate highlights from a unique directional light.

Figure 5.31: Peanut with two highlights from one light

If this happens, the second point is also used as a seed. Unfortunately, unless each pixel covered by this surface is visited, some of the other highlights possibly produced by this light on this surface might be missed. If the position of every highlight is necessary, every pixel covered by the surface must be checked. This is done only when requested by the user, because it can lead to a considerable increase in computation time, thus reducing the interaction speed.

Instead of using a screen based algorithm to trace the highlight boundary, we could have followed the boundary curve directly on the surface via numerical methods. This provides a more accurate definition of the highlight contour.
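The two screen-space steps just described, solving Equation 5.13 for n and growing the highlight region by boundary fill, can be sketched as follows. This is an illustrative simplification (names are hypothetical): the frame buffer is stood in for by a 2D grid of unit normals, and the fill is 4-connected.

```python
import math
from collections import deque

def roughness_from_threshold(n_dot_h, t=0.5):
    """Equation 5.13: the exponent n for which (N.H)^n equals the
    threshold t at the user-picked boundary point."""
    return math.log(t) / math.log(n_dot_h)

def highlight_mask(normals, H, n, seed, t=0.5):
    """Boundary fill of the highlight: starting at the seed pixel, grow the
    4-connected region whose specular term (N.H)^n stays above t.
    `normals` is a 2D grid of unit normals, a stand-in for the frame
    buffer of surface normals mentioned in the text."""
    h, w = len(normals), len(normals[0])
    inside = lambda i, j: max(sum(a*b for a, b in zip(normals[i][j], H)), 0.0)**n >= t
    mask, queue = set(), deque([seed])
    while queue:
        i, j = queue.popleft()
        if (i, j) in mask or not (0 <= i < h and 0 <= j < w) or not inside(i, j):
            continue
        mask.add((i, j))
        queue.extend([(i-1, j), (i+1, j), (i, j-1), (i, j+1)])
    return mask
```

On a 5x5 grid whose central 3x3 block faces the halfway vector, seeding at the centre recovers exactly that 3x3 region.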
However, because of its simplicity, speed and generality (it is applicable to any rasterisable object), the boundary fill algorithm proved quite satisfactory in our application.

When n has already been determined for a given surface, care must be taken to keep a unique value for n. If another highlight is created on this surface, as soon as the point with the maximum intensity is selected, the contour of this new highlight is computed with the previous value for n. However, this value for n and the position of the highlights are not fixed, and they can be interactively changed because some information is kept in a temporary frame buffer. In this frame buffer, each previously visited pixel contains information about its surface normal. The contour can therefore be scaled down (i.e. a smaller highlight defined by a larger value for n) very efficiently. If the contour is increased, only the unvisited pixels need to have their surface normals determined. It is also possible to move the highlight on the surface. This process is more expensive if the highlight is moved to a completely different location on the surface, as many surface normals might need to be computed. On some graphics hardware, information on the surface normals can be obtained directly from the hardware, allowing for even faster highlight manipulation.

The contour of the highlight that we define is an approximate representation of the highlight. In the fully rendered surface, this highlight may appear different because of the diffuse reflection contribution and the k_s factor that controls how much of the specular contribution translates into radiance. The highlight can also appear different if a blocker resides between a point on the highlight and the light direction. Shadow rays can be shot towards the light to determine the presence of blockers and to modify the boundary fill algorithm with this information.
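The efficiency of rescaling the contour follows from that per-pixel cache. As a hypothetical variant of the scheme above, if the scalar N.H is cached for each visited pixel (rather than the full normal), changing the roughness n reduces to re-thresholding the cache, with no new fill or normal computation:

```python
def rescale_highlight(cached_ndoth, n_new, t=0.5):
    """Re-threshold a cache of per-pixel N.H values (an assumed variant of
    the temporary frame buffer described in the text) for a new roughness
    n_new: a pixel stays in the highlight when (N.H)^n_new >= t."""
    return {p for p, ndoth in cached_ndoth.items()
            if ndoth > 0.0 and ndoth**n_new >= t}
```

Raising n shrinks the set of pixels that pass the threshold, which matches the text's observation that a larger n defines a smaller highlight.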
Figure 5.32 shows a highlight produced by a directional light source over a patch of the teapot. The white segment within the highlight region represents the point of maximum intensity. This segment points towards the light direction.

Figure 5.32: Creating a light by its highlight

The highlight information is dependent on the eye position. Therefore, if the eye is moved, every highlight in the scene must be recomputed. The points of maximum intensity will no longer be valid, and consequently every surface must be revisited to recover every highlight. This expensive process should be avoided as much as possible. This also means that a highlight computed in one window would have a different definition in another window for which a different projection is used. To avoid confusion and too much computing time, we decided to remove all highlight information when the viewing parameters are changed. The light definitions and surface roughnesses are kept with the scene description. The highlights are recomputed only upon request from the user.

Another limitation of using highlight information to describe a light source is that a highlight determines only a direction. We therefore need more constraints to determine other types of light source. Some of these constraints were described in the previous section, where the diffuse reflection approach was discussed; they are applicable here. Similar to the diffuse reflection used to manipulate a light, if a point light, a linear light or a polygonal light is constrained to lie on a given plane, the location of the light is controlled. This is solved with a nonlinear optimisation algorithm.

To represent highlights created by extended light sources, the contribution of each vertex of the light is not sufficient to determine the shape of the complete highlight (Figure 5.33).
To display this information, the boundary fill algorithm has to compute the specular integral for a linear light (Equation 3.11) or a polygonal light (Equation 3.12) for each pixel visited. Such integrals are rather expensive to compute. In order to achieve real time, cheaper approximations based on precomputed tables help reduce the cost. We did not investigate this approach in this thesis, but relied solely on the partial information provided by the light's vertices.

Figure 5.33: Incomplete highlight information

Highlight information can be very useful to specify a directional light source and a surface roughness coefficient. With extra constraints, it can even be used to define and control point, linear and polygonal light sources. Shadow information can also be used to define and manipulate light sources. The next section describes how this can be done.

5.2.3 Lights from Shadows

As seen in the previous chapter, shadows provide very important clues to understanding the geometry of a scene. In the context of this thesis, we use the fact that shadows can reveal important information about the nature of a light source. We define light sources by manipulating their shadow volumes.4

4 A shadow volume formed by a single object and a directional or a point light is the 3D volume within which every point is in the shadow of the object [crow77] [berg86]. For extended light sources (linear, polygonal), the shadow volume is the 3D volume within which every point is at least partly in the shadow of the object.

These shadow volumes have the advantage of depending only on the positions of the lights and the objects. Therefore, as opposed to the manipulation of lights by their highlights, the eye position can change without altering the description of the shadows; the shadows are consistent for every projection. This
allows multiple windows to be opened with different orthographic and perspective projections, as is common in most modeling systems.

Some systems have been reported to display shadows in real time. Blinn [blin88] describes how an object scaled by zero in one dimension and coloured with a uniform darker shade can be used to simulate a fake shadow approximating a shadow from a directional light source. Most of the graphics hardware producing shadows in real time relies on shadow volumes [crow77]. In their extension of shadow volumes to objects defined as unions, intersections and differences of other objects (CSG objects), Jansen and van der Zalm [jans91] display shadows from directional and point light sources. Chin and Feiner include shadow polygons in the BSP tree representation of a scene. They display shadows from point light sources [chin89] and polygonal light sources [chin92]. Segal et al. [sega92] use textures created from the light source to simulate shadows.

While all of these techniques can display shadows in real time or near real time, none defines or manipulates lights from shadows. Some work in 3D user interfaces [hern92] describes how the fake shadows of Blinn [blin88] are projected onto three orthogonal planes (or fake mirrors when the object is fully shaded) to aid in transforming a model. This is part of a toolkit to manipulate an object from its fake shadow or fake reflection. This work is not concerned with dealing with the light sources and realism.

The shadow volume created by an object illuminated by a directional light source consists of a sweep of the object silhouette in the direction the light source shines.
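Both the fake shadows above and the sweep of a silhouette reduce to projecting points along the light direction until they reach a receiving plane. A minimal sketch, assuming for illustration that the receiver is the ground plane z = 0 (the general ray-plane case works the same way):

```python
def project_to_ground(v, L):
    """Project vertex v onto the plane z = 0 along the directional light L,
    in the spirit of Blinn-style fake shadows.  L must not be parallel to
    the plane (L[2] != 0)."""
    t = v[2] / L[2]
    return (v[0] - t*L[0], v[1] - t*L[1], 0.0)
```

Projecting every silhouette vertex this way yields the footprint of the swept shadow volume on the ground; a light shining straight down drops each vertex directly beneath itself.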
This silhouette can be analytically determined for simple primitives, computed for moderately complicated objects with algorithms like the one of Bonfigliolo [bonf86], sampled by studying the variation of surface normals at the vertices of a tessellated object, or sampled using the information in a Z-buffer orthographic projection of this object.

Specifying the direction of a directional light is simply a question of choosing two arbitrary, although different, points in the scene. The second point is considered to be along the shadow cast by the first one. Figure 5.34 shows a cylinder illuminated by a directional light source. The white point on the top of the cylinder represents the first point selected by the user. The line segment originates from this point and intersects in 3D the plane underneath, where the cursor points. For some primitives, computing the exact silhouette can be an expensive process. Depending on the application domain, an approximation of the shadow volume can be sufficient, allowing for real time computation and display while still providing important information about the real shadows. In the case of the cylinder of Figure 5.34, each polygon vertex forming the cylinder is simply projected in the direction the light shines.

In our modeler, we use four different representations for the shadow volumes. The first one consists of (1) simple line segments originating from points on the blocker (not necessarily all on the silhouette). This representation is fast and allows a user to easily select a specific line segment. The backfacing faces of the shadow volumes and the objects behind the shadow volumes are still visible. The shadow volumes can also be displayed as (2) opaque or (3) semi-transparent volumes. It is then easier to see the shadow volumes as 3D objects and to determine the parts of surfaces that intersect the shadow volume. It also allows a user to easily select a frontfacing face of the shadow volume.
The last representation uses (4) the graphics hardware to display only the intersection of the shadow volumes with the objects in the scene. The shadow volumes therefore disappear and only the shadows that they produce are displayed. This is how we see shadows, but it does not allow for a simple manipulation of the volumes creating these shadows.

To move a shadow volume once it is defined, a user needs to select a point on the shadow volume. The point on the object casting this shadow is then identified. By dragging the cursor to a new location, a new direction is computed, the direction of a directional light source.

We use the fact that a directional light source can be interpreted as a point light source located at infinity to explain how we create a point light source from a shadow volume cast from a directional light. Figure 5.35 illustrates the process of going from a directional light source (Figure 5.35a) to a point light source (Figure 5.35b) by modifying its shadow volume.

A point sn1 on the shadow volume is selected. The point sn2 on the silhouette casting a shadow on the point sn1 is identified. This shadow segment [sn1, sn2] will now be considered as nailed (not moving) and the point light source will reside on the supporting line of this segment. By selecting another point s1 on the shadow volume, the point s2 casting this shadow on this point is identified. The nailed segment [sn1, sn2] and the point s2 define a plane (sn1-sn2-s2). By moving the cursor away from s1, a point s' on this plane is located. s' is considered on the shadow cast by s2. The two supporting lines of segments [sn1, sn2] and [s2, s'] intersect at a unique point pl. The point light source is therefore moved to pl, as shown in Figure 5.35b.

Once a point light source is created, it can be manipulated in the scene by manipulating its shadow volume.
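Locating the light position amounts to intersecting two coplanar lines in 3D. A minimal sketch (the names are ours; it assumes the two lines are coplanar and not parallel, which the construction above guarantees):

```python
def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def _dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def intersect_coplanar_lines(p1, d1, p2, d2):
    """Intersection of two coplanar, non-parallel 3D lines p1 + t*d1 and p2 + s*d2.

    From p1 + t*d1 = p2 + s*d2, crossing both sides with d2 gives
    t = ((p2 - p1) x d2) . (d1 x d2) / |d1 x d2|^2.
    """
    n = _cross(d1, d2)
    w = (p2[0] - p1[0], p2[1] - p1[1], p2[2] - p1[2])
    t = _dot(_cross(w, d2), n) / _dot(n, n)
    return (p1[0] + t * d1[0], p1[1] + t * d1[1], p1[2] + t * d1[2])
```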
Figure 5.34: Creating a directional light by its shadow

Figure 5.35: Going from a directional light source to a point light source

This can be done by fixing any shadow segment as previously described, or, if no shadow segment is nailed, by adding a new constraint to the system. One such constraint is that the distance d from the point light pl to the point s2 casting a shadow is constant. Combinations of these two actions are sufficient to position almost any point light source in a scene.

In some rare configurations of a scene, some positions might not be accessible by manipulating only these shadow volumes. For instance, assume that a scene is made of a single flat polygon and of a directional light parallel to the plane of the polygon. In such a situation, the light would never be able to escape the plane of the polygon. To avoid this kind of situation, the scene is bounded by a large box and shadows cast on these walls can always be manipulated to avoid the problem above.

It is important to note that the point s2 might not lie on the boundary of the shadow volume while the point light source is moved around. However, the real shadow volume is always displayed, so the user has a direct view of the altered shadow.

To create extended light sources like linear or polygonal lights, new point light sources are needed to define the vertices of the light source. The shadow volumes of each light vertex are handled just like any point light source. For polygonal light sources with more than three vertices, each light vertex must reside on the light plane. This extra constraint is added to the system to create the shadow volume of a new light vertex because only a direction (two points) is necessary to determine the 3D location of the light vertex. As we saw in the previous chapter, shadows of extended light sources are formed by the umbra and penumbra volumes.
Several techniques can be used to compute these volumes. In our implementation, we compute the umbra as the intersection of each vertex shadow volume (one shadow volume per light vertex); the penumbra is the difference between the whole shadow and the umbra. Some problems occur when neither the blocker nor the light is limited to being convex. It can be shown, however, that if both the light and the blocker are divided into convex elements, the whole shadow cast by this blocker from this light is the union in 3D of all the shadow volume convex hulls:

    From now on, assume a polygonal convex light and a convex object.
    For each (convex light element)
        For each (convex blocker element)
            Compute the convex hull of the shadow
            volumes created by these two elements
    Compute the 3D union of all these convex hulls

To compute efficiently the shadow volume (umbra and penumbra) cast by a convex object illuminated by a convex light, we do the following. We assume the blocker does not intersect the light plane. We first find a plane separating the light from the blocker. If there is no such plane, the light intersects the blocker (remember that the light and the blocker are both convex). The portion of the light inside the blocker is simply disregarded. Finding a separating plane can be achieved in O(n log n) where n is the number of faces in the light and the blocker. Then this plane is displaced away from the light until the light and blocker are on the same side of the plane. For each light vertex and blocker vertex, their supporting line is intersected with the plane. A simple 2D convex hull algorithm (we used the algorithm of Graham reported in Sedgewick's book [sedg90]) is then applied to these points. The points on this 2D convex hull are associated with lines in 3D. Each line belongs to a vertex shadow volume.
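The 2D convex hull step can be written compactly. The sketch below uses a monotone-chain scan rather than the exact Graham scan from Sedgewick's book used in our implementation, so it is illustrative only:

```python
def convex_hull(points):
    """Return the convex hull of 2D points in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for seq, out in ((pts, lower), (reversed(pts), upper)):
        for p in seq:
            while len(out) >= 2 and cross(out[-2], out[-1], p) <= 0:
                out.pop()
            out.append(p)
    # Drop the last point of each half because it repeats in the other half.
    return lower[:-1] + upper[:-1]
```

A square with an interior point, for example, yields only the four corners.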
The 3D convex hull of these vertex shadow volumes is delimited by planes supported by the same lines of the vertex shadow volumes as the ones obtained by the 2D convex hull on the separating plane.

Computing the umbra region (i.e., the intersection of each vertex shadow volume) cannot generally be performed in 2D. Figure 5.36 shows an example where using only the information in the 2D projection plane would fail to identify the umbra region (shown hatched).

Figure 5.36: Hatched umbra region is undetected in the projection domain

To compute the umbra volume, we intersect each shadow polygon^5 of a light-vertex shadow volume with every other vertex shadow volume formed by the other vertices of a single light. This process can be very expensive as it is O((ps)^2) where p is the number of vertices of the light and s is the number of shadow polygons forming the shadow volume. Algorithms to compute the intersection of the supporting planes of each light-vertex shadow volume can be used. This can be achieved in O(n log n) [prep85] where n is the number of supporting planes of each shadow polygon.

Although all these techniques give a good impression of the final shadow, they do not incorporate umbra volumes resulting from the union of penumbra volumes from different blockers.

^5 The silhouette of the object can be discretised. Each point casts its shadow in one direction. Two consecutive points on this silhouette and their shadow directions define a quadrilateral with two of its vertices at infinity.
The various algorithms provided by Teller [tell92] and Stewart and Ghali [stew93] could be used to identify all the umbra volumes, but accurately computing these extra volumes would come at the cost of a much slower general interaction.

5.2.4 Results

A modeler has been implemented in order to test the techniques presented in this chapter. The modeler operates on primitives such as conics (sphere, disk, cone, cylinder), squares, cubes, triangular meshes and Bézier patches. Figure 5.37 shows a global view of the modeler itself.

The code is written in C under GL of Silicon Graphics. The graphics hardware allows for real time surface removal, Phong shading, transparency and shadow volumes, which is very useful to model scenes and to create and manipulate shadow volumes. Unfortunately, this real time shading can lead to some minor difficulties when creating highlights, because the threshold t must be adjusted to the hardware shading implementation.

Figures 5.38 to 5.40 show a cone under a triangular light source. At first, no convex hull is applied. In this image (Figure 5.38), it is easier to associate each shadow with a light vertex. Once the convex hull is applied (Figure 5.39), only the segments on the silhouette of the penumbra are displayed. Notice the umbra region just under the cone, within the penumbra volume. The shape of the umbra was difficult to extract from the previous representation. In Figure 5.40, the umbra and penumbra volumes are filled with a semi-transparent mask. All these static images do not give full credit to these representations in the modeling process. As soon as the point of view is rotated, the lines provide more 3D information.

In the first part of this chapter, we described how shading effects like diffuse reflection, highlights and shadows can be used to define and manipulate light sources. These effects can be rendered in real time with existing graphics hardware. However, we also used
some simple line drawing representations (highlight contour, shadow volume sampled as line segments) in order to ease the manipulation of these effects.

Figure 5.38: Cone under a triangular light: No convex hull

Figure 5.39: Cone under a triangular light: Convex hull applied

Figure 5.40: Cone under a triangular light: Convex hull with filled shadows

Obviously, these representations do not provide the same perceptual information as a fully rendered image. In the next sections, we will present a system to interactively determine surface characteristics as they appear in the final image. This is achieved by painting colours on the surfaces. The system finds the best values for the various surface parameters in order to produce these colours at their exact locations. We saw how the surface roughness could be determined by defining the size of a highlight, but we could not specify the final look of the highlight. In fact, while the final shape of the highlight follows its representation, if the surface has a very low specular contribution, this entire highlight might be invisible in the fully shaded image. This new painting system will allow us to control the colour of this highlight.

5.3 Defining Surface Characteristics

When drawing an image, an artist first sketches rough line drawings of her ideas. Once satisfied with the story, she starts refining the line drawings by adding more details and correcting the first drafts.
Finally, when the line drawings express her feelings well enough, the artist is ready for the last step, selecting the right colours and colouring.

While these steps are not mutually exclusive, we can generalise the process as:

• rough line drawings to convey spatial relationships
• correction and fine line drawings to provide fuller details in the image
• colour selection and filling for the final picture.

With the advent of computer graphics, new tools were developed, opening a new era for cartoon animation and special effects in cinema. These tools have been widely used in logo-type animations, but their impact was also recognised in cartoon computer animations like Tony de Peltrie by Lachapelle-Bergeron-Robidoux-Langlois, Luxo Jr., Red's Dream, Tin Toy (Academy Award) and Knick Knack from Pixar, and in Locomotion from Pacific Data Images. These tools also opened new possibilities for special effects, as in sequences from The Wrath of Khan by Lucasfilm and in sequences from The Abyss, Terminator II and Jurassic Park by Industrial Light and Magic.

Unfortunately, even though many software packages are now available, they still require a good knowledge of their shading techniques in order for a user to produce the exact results one originally thought of. This is true in each step of the drawing process described above, but becomes particularly true as the later steps are reached. Getting a certain shading is very important in fully computer generated images. The appearance of a metallic surface is different from that of plastic. A user must therefore be able to manipulate the surface parameters to approximate well enough the reflection from the real surfaces.
While a user has the possibility to alter the shading of the surrounding surfaces in order to improve the overall look of a final image, it becomes much more difficult to use the same tricks in special effects, which normally require merging computer generated objects with real objects. When two surfaces, a real one and a computer generated one, are next to each other and must look the same, it is essential that the simulated shading visually approximates the real shading of the surface. Within the limits of our simple reflection models, it becomes very difficult to find the "best" approximation.

A typical surface in computer graphics has several attributes. They include surface colour, ratio of diffuse versus specular reflection, surface roughness, transparency, index of refraction, etc. The number of attributes depends on the rendering system, the object representation and the desired effects, and on whether the shading, reflection, refraction and shadowing models are physically based or pure hacks.

To evoke the right surface in painting, an artist must mix basic colours and apply them to specific locations on the canvas. A painter has the freedom to apply any colour at any given point in a scene. The realism of the painting relies on the skills of the artist in reproducing with colours what she perceives of the shading effects. Creating the right surface in computer graphics is difficult because of the possibly large number of parameters involved to provide at least the flexibility to create some basic shading effects. It is similar to painting in the sense that colours can be used to set certain parameters. It is different because mathematical models are given to approximate the behaviour of light and an artist is limited by their rules.

In the next sections, we will investigate how our knowledge of current shading properties can be used to free a computer graphics artist from knowing every surface parameter in order to produce a given shading effect on a surface.
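For illustration only, such a set of attributes could be grouped as follows (the field names and defaults are ours, not those of any particular renderer):

```python
from dataclasses import dataclass

@dataclass
class SurfaceAttributes:
    """Illustrative bundle of the surface parameters discussed above."""
    colour: tuple = (1.0, 1.0, 1.0)   # base surface colour (r, g, b)
    k_ambient: float = 0.1            # ambient reflection coefficient
    k_diffuse: float = 0.6            # diffuse reflection coefficient
    k_specular: float = 0.3           # specular reflection coefficient
    roughness: float = 0.2            # controls the spread of highlights
    transparency: float = 0.0         # 0 = opaque, 1 = fully transparent
    refraction_index: float = 1.0     # index of refraction
```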
The system in itself is relatively simple to understand. An artist selects points on a surface and assigns colours to each of these points. A unique value for each shading parameter is provided and the remaining parts of the surface are shaded according to the values of these parameters.

5.3.1 Painting Systems

Since we present our system as a variation on a painting system, we describe in this section the evolution and state of the art of computer painting systems.

Since the beginning of computers, computer scientists have been interested in the large benefits computers can offer for interactively creating pictures. They are fast, precise, allow one to undo previous commands and automate often very monotonous tasks. Sutherland in his Ph.D. thesis [suth63] presents many innovative ideas on how computers could be used as a drafting tool. Although his system basically deals only with line drawing representations of objects, many of his concepts have been extended to today's painting systems.

The typical painting system is 2D as it tries to simulate the general environment of real painters. The pencils are substituted by a mouse, a tablet or a light pen, the canvas by a computer screen, the paint by a palette of colours, the brushes by various shapes for the cursor, the eraser by a cursor removing colours, etc. The strong analogy between the tools of the artists and the tools offered in painting systems explains the general simplicity of the painting systems. They can also offer characteristics unique to computers. For example, it becomes possible to undo a command, remove paint, fill regions with a colour or a pattern, or replace a colour by another with basic commands. Smith [smit82] describes many standard painting system features. To give a look and feel closer to real paint and brushes, some systems use information about brush definition and movement [hobb85] [fish85] [stra86].
Others simulate textures closer to real paint [guo91] [smal91] [cock92].

The use of digitised images in a computer extends the power of painting systems. One can capture any image, apply various transformations to it (scaling and filtering are just two examples of a large class of image processing algorithms) and incorporate it in a painting. Haeberli [haeb90] presents such a painting system, which samples a given (possibly real) image as the brush moves over it and redraws each region according to a painting style. He can simulate styles like cubism and impressionism. Cabral and Leedom [cabr93] use vector fields to produce similar effects on images.

Although 2D painting systems are the most popular, it is easy to see how researchers extended them to 2½D (like in cel-based animation) by using an α-channel to composite several images together. By filtering and shading the edges in a 2D image, Williams [will91] creates an impression of pseudo-3D. He also presents a model for painting in 3D [will90] where the brush (cursor) is moved in the 3D space to apply paint (colour) to 3D volume elements.

Hanrahan and Haeberli [hanr90] describe a highly interactive 3D painting program. Based on the advances in workstation graphics hardware, they show how they achieve in real time the painting of various surface attributes directly onto the 3D surfaces. Examples of attributes they use include light and surface colours (ambient, diffuse and specular), surface roughness, bump and displacement mappings, as well as some investigations on transparency. While their technique is a good step forward toward more user-friendly mapping of surface attributes, it still requires the user to execute all the work of choosing colours and properties via standard techniques (entering numbers or moving sliders). This forces the user to understand the impact of the various shading parameters to obtain the desired effect.
Seeing the variation of shading directly in the image as a particular slider moves is helpful, but if one does not know which sliders to move to which positions, finding the right combination for a given shading effect can be a frustrating task.

In the next section, we will present a new kind of painting system. This system covers the middle ground between a traditional painting system, where the user has full control of which colour is assigned to which pixel, and shading parameter selection, where the user is given a shading effect by directly changing values for the various shading parameters. In this new system, a user does not have to understand the underlying mathematical shading model but can still control the shading to reach her goal. In the next sections, we will present the interface of our painting system and then explain the numerical methods used to provide the "best" values for the surface parameters.

5.3.2 Painting Scenario

The interface of our painting system is relatively simple. A user selects colours and applies them to points in the 3D scene. A colour point is represented by a small disk aligned on the surface along the normal at this point. The center of the disk indicates the 3D location of the colour point. The disk is painted with the selected colour but is not shaded. It is surrounded by a small ring of opposite colour [naim85] to easily detect and manipulate it on a surface. These colour points can be moved on the surface or deleted, their colour can be modified and the size of each disk can be scaled up or down. A larger disk is convenient to see properly the colour of each point and to manipulate it. Since colour is context sensitive, it is important to be able to enlarge the colour disk in order to better perceive this colour. A smaller disk occludes less of the surface underneath, allowing one to better see the shading gradient around the colour point. This can be important to see small highlights.
Figure 5.41 shows some colour points (represented by their disks) on a surface.

Figure 5.41: Colour points on a surface

Our system is similar to the system presented by Hanrahan and Haeberli but limited only to colours on selected points on surfaces. However, it goes a step beyond by trying to fit the "best" values to the shading parameters of this surface. By best values, we mean that the system attempts to optimise certain functions (under-constrained system) or to fit values (over-constrained system) so that the coloured points will remain as close as possible to their assigned colours when the full rendering is completed.

To select colours, we extended a basic colour tool provided by Haeberli. Three sliders indicate the proportion of each parameter in a given colour space. Five colour spaces are currently available: RGB, CMY, HSV, HLS and YIQ.^6 It is possible to convert each parameter from any of these colour spaces to any other. The colour corresponding to the positions of the three sliders is displayed in the rightmost rectangle. Figure 5.42 shows the colour tool. This tool allows one to sample the colour of any pixel on the screen. This is a very useful feature to approximate the shading of real objects. Assume a real image is displayed on the monitor. The shape of the real object can be approximated in the modeler. Then, the colour of strategic points on the real surface can be selected and applied to the same points on the model in order to approximate the shading of the real surface.

^6 It is not clear which colour space is the most appropriate in our context. We found the HLS colour space to be useful for points within mostly the diffuse region of a surface. However, because of our frequent use of the RGB colour space in other applications, we more often rely on this space.

Figure 5.42: Colour tool of Haeberli
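Conversions between such colour spaces are standard. As an illustration (using Python's stdlib colorsys rather than the tool's own code), RGB and HSV map back and forth like this:

```python
import colorsys

# Pure red in RGB maps to hue 0, full saturation, full value in HSV.
h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)
print(h, s, v)          # 0.0 1.0 1.0

# The conversion round-trips back to RGB.
r, g, b = colorsys.hsv_to_rgb(h, s, v)
print(r, g, b)          # 1.0 0.0 0.0
```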
The colour tool has been extended to provide a range for a selected colour to relax the constraints (this is explained in the next section). It has also been modified to communicate directly with our modeler.

However useful our basic colour tool is, it could still be improved upon. Schwarz et al. [schw84] [schw87] compare various colour spaces and colour selection tasks. Their results should be integrated in our colour tool. However, as this goes beyond the scope of this thesis, we simply refer the reader to [wysz82] and [hall89] for more information about this topic.

For some combinations of colour points, no values for the surface attributes will satisfy every constraint. When this happens, the system performs a rough and/or a finer investigation of alterations to the latest colour. Colours for which a converging solution can be found are indicated in our colour tool. The user can then select one of these new colours or decide to move the point to another location on the surface and let the system estimate whether this new position leads to a possible solution.

Now that we have a better feel of how the user interface of our painting system operates, we will present in the next sections the numerical algorithms used to find the "best" values for the surface parameters as determined by the colour points. We will see how each colour point can be interpreted as a system of equations, a constraint in an optimisation problem or a sample in a fitting problem.

5.3.3 Painting: Solving a System of Equations

Assume one has a 3D scene with all the geometry known and all the light sources positioned with their power fixed. This is a situation that often occurs in scene design. First the designer builds the geometry with temporary surface attributes. These attributes are mainly used to differentiate the various objects in a scene. They are subsequently modified once the scene is closer to completion. In indoor scenes, it is also often the case
where the light sources have fixed positions and power within the scene geometry (light bulbs of 100 Watts hanging at precise locations from the ceiling). The lights can also be positioned with the techniques we described in the previous sections.

When a surface element is shaded, the radiance reflected to the pixel is a function of the scene geometry and of the surface parameters. The general reflection model of Equation 2.2 applied to one directional light is

    L_pixel = k_a L_ia + k_d ∫ (N·L) L_i dω + k_s ∫ F_s (H·N)^n L_i dω

We can express this function in each colour channel (r, g and b)^7 as

    L_pixel(r) = k_a(r) L_ia(r) + k_d(r) ∫ L_i(r) (N·L) dω + k_s(r) ∫ F_s(r) L_i(r) (H·N)^n dω
    L_pixel(g) = k_a(g) L_ia(g) + k_d(g) ∫ L_i(g) (N·L) dω + k_s(g) ∫ F_s(g) L_i(g) (H·N)^n dω        (5.14)
    L_pixel(b) = k_a(b) L_ia(b) + k_d(b) ∫ L_i(b) (N·L) dω + k_s(b) ∫ F_s(b) L_i(b) (H·N)^n dω

If we consider k_a(λ) L_ia(λ) and F_s(λ) as constants, then we have a reflection model which approximates the hardware SGI GL reflection model for a directional light.

^7 A colour channel consists of the convolution of the light spectrum with a wavelength response curve.

If the value for each variable is known, the reflected radiance is easily computed. In our painting system, we attempt to solve the inverse problem. Therefore our system must find values for the surface characteristics that would satisfy the colour points if the full shading were performed. In the above reflection model, for a given point, all the surface attributes are independent in red, green and blue. Therefore, without loss of generality, we can consider solving the problem in the red channel.
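A single channel of this simplified model can be evaluated directly. A minimal sketch (the helper name and argument layout are ours; one directional light, with F_s constant and the ambient product k_a·L_ia folded into one term, as in the text):

```python
def shade_channel(ka_Lia, kd, ks, Fs, Li, n_dot_l, h_dot_n, n):
    """One colour channel of the simplified directional-light model (Eq. 5.14).

    ka_Lia  : ambient term k_a * L_ia, treated as a constant
    n_dot_l : cosine between surface normal and light direction
    h_dot_n : cosine between halfway vector and normal, raised to exponent n
    """
    diffuse = kd * Li * max(n_dot_l, 0.0)                 # back-facing light clamps to 0
    specular = ks * Fs * Li * (max(h_dot_n, 0.0) ** n)
    return ka_Lia + diffuse + specular
```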
The approach is identical for the green and blue channels.

For a given colour point, the known values in the diffuse reflection can be summed over all m lights as

    L_d(r) = Σ_{i=1..m} ∫ (N·L_i) L_i(r) dω_i        for N·L_i > 0

and similarly for the specular reflection:

    L_s(r) = Σ_{i=1..m} ∫ F_s (H_i·N)^n L_i(r) dω_i        for H_i·N > 0 and N·L_i > 0.

Generally speaking, L_d(r) and L_s(r) can be computed for any other type of light source, whether it is a point light, a linear light or an area light.

Each colour point contributes a new equation in each of the three channels. If there are as many independent equations as variables, the system of equations can be solved and values identified for each surface attribute. Looking at Equation 5.14, if F_s is constant, we can handle these terms as single variables (k_a(r), k_d(r), k_s(r)), assuming they capture both the surface colour and the proportion of ambient, diffuse and specular reflection, respectively. With the shading model of Equation 5.14, a system of three variables can be determined with only three colour points in the following form:

    [ 1  L_d1  L_s1 ] [ k_a ]   [ L_1 ]
    [ 1  L_d2  L_s2 ] [ k_d ] = [ L_2 ]
    [ 1  L_d3  L_s3 ] [ k_s ]   [ L_3 ]

and if a solution to this system exists, a unique value for k_a, k_d and k_s can be computed. Unfortunately, it is unlikely that a user will be able to provide the exact colours that would lead to a solution. We need to transform the problem so that it leads to a solution. The next two sections will present two interpretations of this problem and their solutions.

5.3.4 Painting: An Optimisation Problem

We can look instead at this inverse problem as an optimisation problem. In this scheme, each colour point is given as a volume in the colour space of acceptable colours, thus introducing two constraints. Additional constraints are associated with the variables in the shading model. For instance, no variable can be negative. Moreover, some shading models can put an upper limit on the values of some of their parameters.
The combination of all the constraints on a surface can lead to no solution, a unique solution (rare) or an infinity of solutions. In the latter case, to provide the user with a unique solution, we need to minimise (or maximise) an objective function. Several objective functions are possible and each can lead to a different behaviour of the system. It is also possible to use different objective functions depending on the number of colour points and their locations. For instance, one can decide to maximise the ambient term for the first colour point on a surface and to maximise the diffuse term when the next colour point is added. Or one can decide to use a series of objective functions, like first maximising the diffuse term, bounding the value within a small range of this local maximum and then maximising the specular term.

In our current system, the choice of objective functions is based on the number of colour points and their location. Generally, we proceed as follows. We first minimise the ambient term, bound this minimum and then maximise the diffuse term. This way, shadow areas will be as dark as the user wants and the user will have the most control over the diffuse reflection. We only change the objective function if the first colour point is located in a shadow area. Then the diffuse reflection is not influenced by this colour and maximising it would lead to the maximum value allowable for the diffuse reflection. So instead, we minimise the diffuse term.

We found that this combination of objectives leads to a behaviour of our system that is intuitive and simple to understand and use. However, it is possible to personalise the behaviour of the system. We can provide the user with a library of objective functions. The user can then interactively select the objectives and combine them.
She could also decide which objectives to use depending on the number of points and their locations. The user would then need to understand the three types of reflections, but nothing more about the specifics of the shading model.

The shading model of Equation 5.14 can be solved by a simple constrained linear optimisation algorithm. These algorithms have the advantage of converging to the global minimum. However, most of the even slightly more sophisticated reflection models will often introduce nonlinear constraints.

Consider the following reflection model used in our local ray tracer [aman92]:

    L_pixel(r) = k_a S(r) + k_d S(r) L_d(r) + k_s L_s(r) [M S(r) + (1 − M)]
    L_pixel(g) = k_a S(g) + k_d S(g) L_d(g) + k_s L_s(g) [M S(g) + (1 − M)]        (5.15)
    L_pixel(b) = k_a S(b) + k_d S(b) L_d(b) + k_s L_s(b) [M S(b) + (1 − M)]

In this model, S(λ) is a simple function of the surface colour. This model simulates a linear dielectric-conductor ratio (M ∈ [0, 1]) for the specular reflection. When M = 1, the specular reflection is a function of the light colour only. It is more complex than the previous reflection model. Also notice that the coefficients of ambient, diffuse and specular reflection (k_a, k_d, k_s) are the same for all three primaries. Therefore we cannot treat the solution independently in each channel. The constraints in this model are nonlinear. Some variables could be separated to form a system of independent equations, but at the cost of introducing several quadratic terms. We preferred not to do so in order to experiment with the more general problem of solving an optimisation problem with nonlinear constraints.

We input these constraints and their gradients into FSQP [zhou93], a general algorithm to solve constrained nonlinear optimisation problems. FSQP is based on Sequential Quadratic Programming with two types of line searches in the solution domain: a monotonic search along an arc and a nonmonotonic search along a straight line.
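FSQP itself is not sketched here, but the shape of the problem — keep each colour point's rendered value inside its acceptable range, then optimise one reflection term at a time — can be illustrated with a deliberately crude grid search (the function, its arguments and numbers are ours, purely for illustration):

```python
def fit_ka_kd(La, Ld, lo, hi, steps=200):
    """Toy stand-in for the constrained solve: find (ka, kd) in [0,1]^2 with
    lo <= ka*La + kd*Ld <= hi, minimising ka first and then maximising kd
    (the objective ordering described in the text)."""
    best = None
    for i in range(steps + 1):
        ka = i / steps
        for j in range(steps + 1):
            kd = j / steps
            value = ka * La + kd * Ld        # rendered value at the colour point
            if lo <= value <= hi:
                key = (ka, -kd)              # smaller key = smaller ka, then larger kd
                if best is None or key < best[0]:
                    best = (key, (ka, kd))
    return None if best is None else best[1]
```

With an acceptable range [0.5, 0.6] and unit ambient and diffuse factors, the search keeps ka at 0 and pushes kd to the top of the range; an unreachable range yields no solution, mirroring the infeasible cases discussed above.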
The interestedreader should refer to the original report [zhou93] for a detailed version of the algorithmsused. This algorithm performs very well (locally superlinear convergence) for a smoothobjective function and smooth constraints. Minimising the ambient reflection or maximising the diffuse reflection generally satisfies the smoothness desired for an objectivefunctioll. The specular reflection can generate a solution within a very narrow domain.If the initial guess is infeasible for some constraints, FSQP will attempt to find a feasiblestarting value but can fail to find one. If this happens, no optimisation ca be performed.To avoid this situation, we developed careful initial guesses based on our knowledge ofthe domains of each variable and within which region each colour point resides. Considerthe following example with only two colour points, one in shadow and one in the diffusearea. Assume ka = 1 and suppose we want to study the domain of possible values for kd.We can use the colours to determine the limits such thatDi(r) < kaS(T) + kd S(r) Ld(r) < D(r)Di(r) < Aj(r)(1 + A(r) x) + kd Ai(r)(1 + /A(r) . x) Ld(r) < D(r)(5.16)where Di(r) is the lower colour red for the diffuse colour pointDt(r) is the top colour red for the diffuse colour pointAi(r) is the lower colour red for the ambient colour pointAi(r)(1 + A(r)) is the top colour red for the diffuse colour pointx e [0, 1]Looking at the variation on kd, we obtainI Di(r) \\ / I D(r) ‘\ /+ A(r) . x)- ) / d(T) - d - Ai(r)(1 + A(r) . x)- ) / d(T)On a graph, this expression is represented in Figure 5.43. kdj(r) represents the lowestvalue for kd for the red channel while kdt(r) is the top one. Each channel (r, g, b)Chapter 5. Lighting Design by Shading and Shadows 119xkdl(r) kdl kdt kd(r)kFigure 5.43: Domain of kdcreates a trapezoid. The intersection of these three trapezoids determines the limits forkd e [kdt, kdt] and also for S(c) = Aj(c)(1 + zA(c) . 
x) with x ∈ [x_min, x_max], where c is a colour channel.

When a colour point in a specular region is added, the same kind of analysis, although involving more parameters, must be performed to determine an initial guess that satisfies the constraints.

When FSQP cannot generate a valid initial guess and our investigation of each variable's boundaries fails to identify a feasible one, there is always the possibility that no solution exists that satisfies all the constraints. When this happens, the system shows which constraints are violated (based on our study of the domain of each variable) and the user can modify the colour point, its boundaries or its position to allow for a different search for an initial guess.

When the colour points define more constraints than variables, the new constraints can only contribute to narrowing down the range in which to search for a solution. This can be achieved in a simple manner by reducing the acceptable range of the colour points already located on this surface. In our experience, adding more colour points often only
contributes new constraints that can be violated, and therefore eliminates the possibility of providing a solution.

5.3.5 Painting: A Fitting Problem

We can interpret the colour points differently when more points than variables are input in the system. This becomes a typical problem of fitting the "best" approximation for each variable subject to nonlinear constraints. We use a nonlinear least-squares fitting algorithm, lmder, taken from MINPACK. lmder is based on the Levenberg-Marquardt algorithm.

In a typical least-squares fitting, each variable is free. However, in our shading models, each variable is constrained within a certain domain. To retain this concept of boundaries, we introduce penalty functions. A variable f(x) is replaced by:

    f(x) · e^(b_l − x)    if x < b_l
    f(x)                  if b_l ≤ x ≤ b_u
    f(x) · e^(x − b_u)
if b_u < x

where b_l and b_u are the lower and upper boundaries on x, respectively. Any steep function could replace the exponential here. The steeper the function, the higher the penalty will be if a variable goes beyond its domain.

We also modify our system to provide different weights for different kinds of colour points. From the shading Equations 5.14 and 5.15, one can observe that a colour point within a shadow region on a surface can directly influence very few variables. Therefore we can increase by a large factor (100 in our system) the weight of these points. This ensures that the points in the other regions do not push the ambient term outside its limits. Similarly, the diffuse reflection is usually well controlled by one or two colour points. A smaller factor (10 in our system) keeps their contributions important,
although less than the ambient ones.
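A hedged sketch of this penalised, weighted fit: SciPy's least_squares with method='lm' wraps MINPACK's Levenberg-Marquardt routines (the family lmder belongs to). The one-channel model, the toy colour points and the weights below are illustrative, not data from the thesis.

```python
import numpy as np
from scipy.optimize import least_squares

def penalty(x, bl=0.0, bu=1.0):
    """Multiplicative penalty: 1 inside [bl, bu], steep exponential growth outside."""
    if x < bl:
        return np.exp(bl - x)
    if x > bu:
        return np.exp(x - bu)
    return 1.0

def shade(p, S, Ld, Ls):
    """One channel of a simple ambient+diffuse+specular model; p = (ka, kd, ks).
    Each term is scaled by its variable's penalty, mirroring f(x) -> f(x) e^(...)."""
    ka, kd, ks = p
    return (ka * penalty(ka) * S
            + kd * penalty(kd) * S * Ld
            + ks * penalty(ks) * Ls)

# (target colour, weight, S, Ld, Ls): shadow points weighted 100, diffuse 10, highlight 1.
data = [
    (0.15, 100.0, 0.6, 0.0, 0.0),   # shadow region
    (0.55,  10.0, 0.6, 0.8, 0.0),   # diffuse region
    (0.90,   1.0, 0.6, 0.9, 0.9),   # highlight region
]

def residuals(p):
    # sqrt(w) so that the squared residual carries weight w in the fit.
    return [np.sqrt(w) * (shade(p, S, Ld, Ls) - t) for t, w, S, Ld, Ls in data]

fit = least_squares(residuals, x0=[0.3, 0.5, 0.5], method='lm')
print(fit.x)
```

With three residuals and three unknowns this toy system has an exact interior solution, so the penalties stay at 1; they only engage when the solver wanders outside [0, 1].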
Finally, in most cases, the designer uses more colour points to finely control the variation of colours in the highlights. Each colour point in the highlight region keeps its unit contribution. By choosing appropriate weights, it is possible to approximate the behaviour of the objective functions. If done properly, a user will not notice the passage from an optimisation problem to a least-squares fitting one.

However, unlike the optimisation algorithm, the least-squares fitting does not guarantee that every colour point will retain its colour in the final rendering. The weights and penalties do help to keep the final colours as close as possible to the original ones. This technique offers a nice alternative to the optimisation when the user is not able to find a feasible initial guess, or when the behaviour of the objective function does not correspond exactly to what the user expects of the final shading.

5.3.6 Results

A large advantage of this combination of optimisation and least-squares fitting is that most of its use is invisible to the user. By this we mean that the user does not need to know how many variables there are in the reflection model, nor their contributions to the final shading of a surface. In fact, several different shading models could be used within the modeler, and the user would never have to request one over another. The system could adapt itself to the demands of the user when she insists on certain colour points being placed at specific locations.

This advantage can also be interpreted as a disadvantage. A poor choice of objective functions in the optimisation part, or a poor combination of weights in the least-squares fitting, could lead to strange behaviours of the system that the user could not understand because everything is hidden.

We found however that, for the functions and weights we used, the behaviour of our
system appeared intuitive and predictable, and led quickly to the desired shading. If the first colour points are placed mainly in the shadowed and diffuse regions of a surface, the user gets strong control over the final shading with only a few points. Once these aspects of a surface are satisfactory, the user can finely tune the look of the highlights with more colour points. In many of our test scenes, we found that three to four colour points often provide enough control to quickly produce a close approximation to the right values for the surface parameters.

It is important to provide the user with adequate feedback when colour points are added or moved on the surface. The real-time hardware rendering with the SGI shading model allows one to see the surface change directly as the colour points are altered. Unfortunately, this is not possible for more sophisticated shading models unless they can somehow be converted to or approximated by the SGI shading model. By keeping some information (3D location, surface normal, illuminant irradiance) for each pixel covered by the surface the user is currently working on, the entire surface can be reshaded efficiently when the user needs it.

5.3.7 Inverse Shading in Global Illumination

The problem of inverse shading has only recently been addressed, in two approaches each applied to a different facet of the problem that we address in this thesis. Schoeneman et al. [scho93] describe how time consuming, tedious and often counter-intuitive selecting lights and surface reflectances can be in a system solving global illumination in a diffuse environment. This is true even for experienced users. They apply inverse shading to identify the lights' intensities and colours that would closely match a target provided by a user.
The user paints colours onto the scene with a tool like a spray can. By assuming only ideal diffuse surfaces and treating only the direct illumination, they devise clever incremental updates of the matrices and vectors that are then used in a constrained least-squares solver. This way, they can handle efficiently scenes with as many as 19,000 polygons and 12 lights. Even though they increase the weight of the painted scene vertices compared to the unpainted ones, their system cannot guarantee that the final colour will be close to the painted one, because it is also a function of the surface reflectance associated with each surface element. Also, since only the lights are modified, it is likely that other surfaces will have their appearance changed after each application of paint.

Kawai et al. [kawa93] also apply inverse shading to a radiosity solver. They use unconstrained nonlinear optimisation (Broyden-Fletcher-Goldfarb-Shanno) to find a local minimum of a possibly complex objective function. It includes physical terms (light source emission, directionality and distribution, as well as element reflectivity) and terms based on human perception (impression of clearness, pleasantness, and privacy based on the scene brightness). In order to reduce the number of free variables, the user must select the active ones and impose constraints on them. Their system can provide complete results within a minute or two on a scene of a small conference room. They report however that their system can require "unintuitive tweaking" when the psychophysical properties of lighting are not accounted for.

5.3.8 Extensions

In our experience, and as indicated by Kawai et al. [kawa93], adding more variables to solve for usually has the effect of enlarging the domain of possible solutions.
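As the search space grows, objective functions of this kind tend to develop several local minima; a common remedy is a multi-start search, running an unconstrained quasi-Newton (BFGS) minimiser, as in Kawai et al.'s setting, from several random initial guesses and keeping the best result. The objective below is a standard non-convex test function used purely as a stand-in, not their perceptual objective.

```python
import numpy as np
from scipy.optimize import minimize

def objective(p):
    """Illustrative non-convex stand-in with several local minima (Himmelblau)."""
    x, y = p
    return (x**2 + y - 11)**2 + (x + y**2 - 7)**2

rng = np.random.default_rng(0)
best = None
for _ in range(8):  # multi-start: several random initial guesses
    res = minimize(objective, rng.uniform(-5, 5, size=2), method='BFGS')
    if best is None or res.fun < best.fun:
        best = res
print(best.x, best.fun)
```

Each restart converges to one nearby minimum; keeping the lowest of the eight is what lets the search escape a poor basin.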
Unfortunately, depending on the behaviour of the new variables, they can introduce more local minima. When this happens, it becomes more difficult to determine whether a given local minimum corresponds to a visually satisfying result for the objective function. Some techniques are available to start searches at various locations and therefore examine different local minima. This is in fact what Kawai et al. [kawa93] do.

We believe that some additional variables could be considered to extend our current solution. The first one is the roughness coefficient. We did not consider it because it was already available through the way we define and manipulate the highlights. Other variables from more sophisticated reflection models could be investigated. Some examples include transparency, anisotropy, diffraction, polarisation, layered surfaces, etc. Similarly to Schoeneman et al. [scho93] and Kawai et al. [kawa93], we could also consider altering the lights' intensities and colours and applying our techniques in a radiosity solver. We could even attempt to define and manipulate the lights from the colour points.

However, with all these possibly more complex shading effects and with a growing number of free variables, developing efficient objective functions that keep the behaviour of our system intuitive and powerful becomes a task as difficult as solving for the constraints.

5.4 Conclusion

In this chapter, we investigated using lighting effects, i.e. shading, highlights and shadows, to define the lights themselves and specify their location, as well as to identify the surface shading parameters. We showed some inherent limitations of these approaches, but also demonstrated a powerful new technique. This technique allows a user to manipulate interactively shading, highlights and shadows, which can be very important when designing a scene. In previous modeling systems, these effects were not under the direct control of the user.
Therefore a user needed to iterate between rendering the entire scene and modifying the lights. This process can be expensive, depending on the quality of the rendering required. Incorporating shading, highlights and shadows in the modeling process adds more information about the geometry of the scene and its illumination, which should help the user to better understand the scene even before rendering it.

Our system, although simple, gives direct information to the user on the lighting effects she is constructing during the modeling process. These lighting effects are the objects being directly manipulated. This direct manipulation is crucial because getting the desired shading effect by manipulating the causes is generally more difficult than manipulating the effects themselves.

We foresee that, as graphics hardware improves and as CPUs become faster, more and more effects available only at the rendering stage will become an inherent part of the modeling stage itself. Real-time Phong shading is now common with high-end modelers. These improvements will lead us to investigate more intuitive ways of defining and controlling these special effects. Although the separation between computer graphics and computer vision is still strong, we believe this will lead to more computer vision in computer graphics, for greater benefits of realism, and potentially more computer graphics in computer vision, for better scene analysis of natural phenomena.

Chapter 6

Conclusion

What we see is the result of interaction between light and matter. The intensities reaching our eyes are interpreted to help us understand the world we live in.
We can identify objects, their surface properties, their relative positions in 3D space, their motion, etc. By improving our understanding of light and its interaction with matter, we can also improve our understanding of the matter itself.

3D computer graphics is one technique used to represent a 3D synthetic world by projecting this world onto an array of pixels. The simulation of light transport and its interaction with virtual objects determines the colour of each pixel. One active area of research in computer graphics consists in simulating our real world as accurately as possible. This search for realism is important because our visual system was developed to understand the world we live in. So by producing realistic computer-generated pictures, we can expect people to better understand an artificial world.

In this thesis, we focused our research on one aspect of realism: local illumination. In local illumination, we are interested in shading a surface directly illuminated by light sources. Typically in computer graphics, directional and point light sources have been used as illuminants. They are satisfactory for distant or small lights, but lack realism and flexibility when a higher-dimensional light is needed. For the few people using linear or polygonal light sources, only diffuse reflection was computed analytically, while specular reflection was simply approximated via sampling. Assuming the popular Phong specular expression, we presented an analytical solution to the diffuse as well as to the specular reflections from a linear light source. Our solution is suitable for various extensions of light emission and has also been extended by Tanaka and Takahashi [tana91b] for computing the reflection from a polygonal light source.

The increase in realism in the shading of surfaces illuminated by linear and area light sources comes unfortunately with a higher complexity for computing shadows.
We gathered several properties of shadows to form a theoretical foundation. These results help us to better understand the cost of correctly computing shadows. With this in mind, we presented two algorithms to reduce the number of possibly occluding candidates for a linear light source, and discussed the algorithms currently used for shadowing with polygonal lights.

While the first part of this thesis addressed mainly rendering issues, the second part approached the generation of realistic shading from a modeling standpoint. When modeling a scene, a designer traditionally moves the objects in the virtual world, moves the lights, and inputs the surface characteristics by assigning values to various shading parameters. Unfortunately, all along this process, little feedback is given until the end of the rendering process. As the rendering process becomes more expensive due to the desire to achieve stronger realism, the number of iterations between modeling and rendering increases, often leading to frustrating situations. By knowing the shading model well, an experienced designer can reduce this number of iterations, but this knowledge will not always be applicable to systems with different shading models.

In the second part of this thesis, we presented a shift from traditional modeling systems. We introduced rendering issues directly in the modeling process. It therefore becomes possible to indirectly alter the causes by manipulating the effects. We investigated defining and positioning light sources by moving on a surface a point of highest diffuse or specular intensity, or by moving the shadow volumes cast by an object. We also showed how the specular exponent can be specified by selecting the size of a highlight, and how various surface properties can be identified by painting colours onto points on a surface.
With all these techniques, a designer should achieve a given shading effect inmuch less time than previously and this, without any special knowledge of the shadingmodel.Creating the right picture with today’s computer rendering technology is an art.The designer must face intrinsic limitations due to the modeling process, the renderingalgorithms and, very important but too often neglected, the user interface used as anintermediary between what is wanted and how to do it. This research alleviates theburden of the rendering task by extending the types of light sources available and byintroducing rendering issues directly in the modeling process. We expect these results tohave a direct impact on tomorrow’s rendering technology.Bibliography[akel93] Kurt Akeley. “RealityEngine graphics”. Computer Graphics (SIGGRAPH‘93 Proceedings), pp. 109—116, August 1993.[aman84] John Amanatides. “Ray Tracing with Cones”. Computer Graphics (SIGGRAPH ‘8 Proceedings), Vol. 18, No. 3, pp. 129—135, July 1984.[aman87] John Amanatides and Andrew Woo. “A fast voxel traversal algorithm for raytracing”. Eurographics ‘87 pp. 3—10, August 1987.[aman92] John Amanatides, John Buchanan, Pierre Poulin, and Andrew Woo. “OptikUsers’ Manual — Version 2.6”. Technical Report Imager 1992—1, Universityof British Columbia, August 1992.[athe78] P. Atherton, K. Weiler, and D. Greenberg. “Polygon Shadow Generation”.Computer Graphics (SIGGRAPH ‘78 Proceedings), Vol. 12, No. 3, pp. 275—281, August 1978.[aupp93] Larry Aupperle and Pat Hanrahan. “A hierarchical illumination algorithm forsurfaces with glossy reflection”. Computer Graphics (SIGGRAPH ‘93 Proceedings), pp. 155—162, August 1993.[babu85] Mohan D.R. Babu, Chia-Hoang Lee, and Azriel Rosenfeld. “DeterminingPlane Orientation from Specular Reflectance”. Pattern Recognition, Vol. 18,No. 1, pp. 53—62, January 1985.[bao93] Hujun Bao and Qunsheng Peng. “Shading models for linear and area lightsources”. Computers and Graphics, Vol. 17, No. 2, pp. 
137—145, March/April1993.[barb92] Christopher G. Barbour and Gary W. Meyer. “Visual cues and pictorial limitations for computer generated photo-realistic images”. The Visual Computer,Vol. 9, No. 3, pp. 151—165, December 1992.[baum9l] Daniel R. Baum, Stephen Mann, Kevin P. Smith, and James M. Winget.“Making radiosity usable: Automatic preprocessing and meshing techniquesfor the generation of accurate radiosity solutions”. Computer Graphics (SIGGRAPH ‘91 Proceedings), Vol. 25, No. 4, pp. 51—60, July 1991.131Bibliography 132[beck93] Barry G. Becker and Nelson L. Max. “Smooth transitions between bumprendering algorithms”. Computer Graphics (SIGGRAPH ‘93 Proceedings),pp. 183—190, August 1993.[berg86] P. Bergeron. “A General Version of Crow’s Shadow Volumes”. IEEE Computer Graphics and Applications, Vol. 6, No. 9, pp. 17—28, September 1986.[bish86j G. Bishop and D.M. Weimer. “Fast Phong Shading”. Computer Graphics(SIGGRAPH ‘86 Proceedings), Vol. 20, No. 4, pp. 103—106, August 1986.[blin77] James F. Blinn. “Models of Light Reflection For Computer Synthesized Pictures”. Computer Graphics (SIGGRAPH ‘77 Proceedings), Vol. 11, No. 2,pp. 192—198, July 1977.[blin88] James F. Blinn. “Jim Blinn’s Corner: Me and my (fake) shadow”. IEEEComputer Graphics and Applications, Vol. 8, No. 1, pp. 82—86, January 1988.[bonf86j L. Bonfigliolo. “An Algorithm for Silhouette of Curved Surfaces based onGraphical Relations”. Computer-Aided Design, Vol. 18, No. 2, pp. 95—101,March 1986.[borg9l] Carlos F. Borges. “Trichromatic approximation for computer graphics illumination models”. Computer Graphics (SIGGRAPH ‘91 Proceedings), Vol. 25,No. 4, pp. 101—104, July 1991.[buch87j Craig Stuart Buchanan. “Determining Surface Orientation from SpecularHighlights”. M.Sc. Thesis, Department of Computer Science, University ofToronto, August 1987.[cabr87j Brian Cabral, Nelson Max, and Rebecca Springmeyer. “Bidirectional Reflection Functions from Surface Bump Maps”. 
Computer Graphics (SIGGRAPH‘87 Proceedings), Vol. 21, No. 4, pp. 273—281, July 1987.[cabr93] Brian Cabral and Leith Casey Leedom. “Imaging vector fields using lineintegral convolution”. Computer Graphics (SIGGRAPH ‘93 Proceedings), pp.263—270, August 1993.[camp9O] A.T. Campbell and Donald S. Fussell. “Adaptive Mesh Generation for GlobalDiffuse Illumination”. Computer Graphics (SIGGRAPH ‘90 Proceedings),Vol. 24, No. 4, pp. 155—164, August 1990.[camp9lj A.T. Campbell and Donald S. Fussell. “An analytic approach to illuminationwith area light sources”. Technical Report TR-91-25, Department of Computer Science, University of Texas at Austin, August 1991.Bibliography 133[chen9l] Shenchang Eric Chen, Holly E. Rushmeier, Gavin Miller, and DouglassTurner. “A progressive multi-pass method for global illumination”. Computer Graphics (SIGGRAPH ‘91 Proceedings), Vol. 25, No. 4, pp. 165—174,July 1991.[chin89] Norman Chin and Steven Feiner. “Near Real-Time Shadow Generation Using BSP Trees”. Computer Graphics (SIGGRAPH ‘89 Proceedings), Vol. 23,No. 3, pp. 99—106, July 1989.[chin92] Norman Chin and Steven Feiner. “Fast object-precision shadow generation forarea light sources using BSP trees”. Computer Graphics Special Issue (1992Symposium on Interactive 3D Graphics), Vol. 26, pp. 21—30, March 1992.[cock92] Tunde Cockshott, John Patterson, and David England. “Modelling the texture of paint”. Computer Graphics Forum (EUROGRAPHICS ‘92 Proceedings), Vol. 11, No. 3, pp. 217—226, September 1992.[cohe85] M.F. Cohen and D.P. Greenberg. “The Hemi-Cube: A Radiosity For ComplexEnvironments”. Computer Graphics (SIGGRAPH ‘85 Proceedings), Vol. 19,No. 3, pp. 31—40, July 1985.[cohe93} Michael F. Cohen and John R. Wallace. Radiosity and realistic image synthesis. Academic Press, 1993.[cook82] R.L. Cook and K.E. Torrance. “A Reflectance Model for Computer Graphics”.ACM Transactions on Graphics, Vol. 1, No. 1, pp. 7—24, January 1982.[cook84al R.L. Cook. “Shade trees”. 
Computer Graphics (SIGGRAPH ‘84 Proceedings),Vol. 18, No. 3, pp. 223—231, July 1984.[cook84bj Robert L. Cook, Thomas Porter, and Loren Carpenter. “Distributed RayTracing”. Computer Graphics (SIGGRAPH ‘84 Proceedings), Vol. 18, No. 3,pp. 137—145, July 1984.[crow77] Franklin C. Crow. “Shadow Algorithms for Computer Graphics”. ComputerGraphics (SIGGRAPH ‘77 Proceedings), Vol. 11, No. 2, pp. 242—248, July1977.[dias9l] Maria Lurdes Dias. “Ray tracing interference color”. IEEE Computer Graphics and Applications, Vol. 11, No. 2, pp. 54—60, March 1991.[dret93] George Drettakis and Eugene Fiume. “Accurate and consistent reconstruction of illumination functions using structured sampling”. Computer GraphicsForum (EUROGRAPHICS ‘93 Proceedings), Vol. 12, No. 3, September 1993.Bibliography 134[firb85] P.A. Firby and D.J. Stone. “Interference in Computer Graphics”. ComputerGraphics Forum (Eurographics ‘85 Proceedings), Vol. 4, No. 3, pp. 209—216,September 1985.[fish85] Kenneth P. Fishkin and Brian A. Barsky. “Algorithms for brush movement”.The Visual Computer, Vol. 1, No. 4, pp. 221—230, December 1985.[fole9o] J.D. Foley, A. van Dam, Steven K. Feiner, and John F. Hughes. ComputerGraphics: Principles and Practice. Addison-Wesley Publishing Company, second edition, 1990.[four92a] Alain Fournier. “Filtering Normal Maps and Creating Multiple Surfaces”.Technical Report Imager 92/1, Imager, Computer Science, University ofBritish Columbia, 1992.[four92b] Alain Fournier. “Normal distribution functions and multiple surfaces”.Graphics Interface ‘92 Workshop on Local illumination, pp. 45—52, May 1992.[four92cj Alain Fournier and John Buchanan. “Chebyshev polynomials for boxing andintersections”. To be submitted to ACM ‘ftansactions on Graphics, 1992.[gart9o] Judith A. Gartaganis and John Tartar. “Wave-based spectrum rendering”.Technical Report TR 90-13, Department of Computer Science, University ofAlberta, May 1990.[gene93] Jon Genetti and Dan Gordon. 
“Ray tracing with adaptive supersampling inobject space”. Proceedings of Graphics Interface ‘93, pp. 70—77, May 1993.[gers87] Ron Gershon. The use of color in computational vision. Ph.D. thesis, Dept.of Computer Science, University of Toronto, 1987.[gigu9o] Ziv Gigus and Jitendra Malik. “Computing the aspect graph for line drawingsof polyhedral objects”. IEEE Transactions on Pattern Analysis and MachineIntelligence, Vol. 12, No. 2, pp. 113—122, February 1990.[gigu9l] Ziv Gigus, John Canny, and Raimund Seidel. “Efficient computing and representing aspect graphs of polyhedral objects”. IEEE Transactions on PatternAnalysis and Machine Intelligence, Vol. 13, No. 6, pp. 542—551, June 1991.[glas89} Andrew S. Glassner. “How to Derive a Spectrum from an ROB Triplet”.IEEE Computer Graphics and Applications, Vol. 9, No. 4, pp. 95—99, July1989.[guo9lj Qinglian Guo and T.L. Kunii. “Modeling the diffuse painting of sumie”. IFIPModeling in Computer Graphics, 1991.Bibliography 135[haeb90] Paul Haeberli. “Paint By Numbers: Abstract Image Representations”. Computer Graphics (SIGGRAPH ‘90 Proceedings), Vol. 24, No. 4, pp. 207—214,August 1990.[hain86] Eric A. Haines and Donald P. Greenberg. “The Light Buffer: A Ray TracerShadow Testing Accelerator”. IEEE Computer Graphics and Applications,Vol. 6, No. 9, pp. 6—16, September 1986.[hain87] Eric Haines. “A Proposal for Standard Graphics Environments”. IEEE Computer Graphics and Applications, Vol. 7, No. 11, pp. 3—5, November 1987.[hain9l] Eric Haines and John Wallace. “Shaft culling for efficient ray-traced radiosity”. Eurographics Workshop on Rendering, 1991.[hall83J R.A. Hall and D.P. Greenberg. “A Testbed for Realistic Image Synthesis”.IEEE Computer Graphics and Applications, Vol. 3, No. 8, pp. 10—20, November 1983.[ha1189] R. Hall. Illumination and Color in Computer Generated Imagery. Springer-Verlag, 1989.[hall93] David E. Hall and Holly E. Rushmeier. “Improved explicit radiosity methodfor calculating non-Lambertian reflections”. 
The Visual Computer, Vol. 9,No. 5, pp. 278—288, March 1993.[hanr90] Pat Hanrahan and Paul Haeberli. “Direct WYSIWYG Painting and Texturingon 3D Shapes”. Computer Graphics (SIGGRAPH ‘90 Proceedings), Vol. 24,No. 4, pp. 215—223, August 1990.[hanr9lj Pat Hanrahan, David Salzman, and Larry Aupperle. “A rapid hierarchical radiosity algorithm”. Computer Graphics (SIGGRAPH ‘91 Proceedings),Vol. 25, No. 4, pp. 197—206, July 1991.[hanr93} Pat Hanrahan and Wolfgang Krueger. “Reflection from layered surfaces dueto subsurface scattering”. Computer Graphics (SIGGRAPH ‘99 Proceedings),pp. 165—174, August 1993.[he9l] Xiao D. He, Kenneth E. Torrance, Francois X. Sillion, and Donald P. Greenberg. “A comprehensive physical model for light reflection”. Computer Graphics (SIGGRAPH ‘91 Proceedings), Vol. 25, No. 4, pp. 175—186, July 1991.[heck84] Paul S. Heckbert and Pat Hanrahan. “Beam Tracing Polygonal Objects”.Computer Graphics (SIGGRAPH ‘8 Proceedings), Vol. 18, No. 3, pp. 119—127, July 1984.Bibliography 136[heck9la} Paul S. Heckbert. Simulating global illumination using adaptive meshing.Ph.D. Thesis, Computer Science Division (EECS), University of California,Berkeley, June 1991.[heck9lb] Paul S. Heckbert and James M. Winget. “Finite element methods for globalillumination”. Technical Report UCB/CSD 91/643, Computer Science Division (EECS), University of California, July 1991.[heck92a] Paul Heckbert. “Discontinuity meshing for radiosity”. Eurographics Workshop on Rendering, pp. 203—216, 1992.[heck92b] Paul S. Heckbert. “Radiosity in flatland”. Computer Graphics Forum (EUROGRAPHICS ‘92 Proceedings), Vol. 11, No. 3, pp. 181—192, September 1992.[hern92j Kenneth P. Herndon, Robert C. Zeleznik, Daniel C. Robbins, D. BrookshireConner, Scott S. Snibbe, and Andries van Dam. “Interactive shadows”. Symposium on User Interface Software and Technology, pp. 1—6, November 1992.[hobb85] John Douglas Hobby. Digitized Trajectories. Ph.D. Thesis, Stanford University, 1985.[horn88] Berthold K. P. 
Horn and M.J. Brooks, editors. Shape from Shading. MIT Press, 1988.
[houl91] Caroline Houle. "Light Source Modelling". M.Sc. Thesis, Department of Computer Science, University of Toronto, 1991.
[imme86] D.S. Immel, M.F. Cohen, and D.P. Greenberg. "A Radiosity Method for Non-Diffuse Environments". Computer Graphics (SIGGRAPH '86 Proceedings), Vol. 20, No. 4, pp. 133–142, August 1986.
[jans91] Frederik W. Jansen and Arno N.T. van der Zalm. "A shadow algorithm for CSG". Computers and Graphics, Vol. 15, No. 2, pp. 237–247, 1991.
[kauf81] J.E. Kaufman and H. Haynes. IES Lighting Handbook. Illuminating Engineering Society of North America, 1981.
[kawa93] John K. Kawai, James S. Painter, and Michael F. Cohen. "Radioptimization: Goal based rendering". Computer Graphics (SIGGRAPH '93 Proceedings), pp. 147–154, August 1993.
[kok91] Arjan Kok and Frederik Jansen. "Source selection for the direct lighting component in global illumination". Eurographics Workshop on Rendering, 1991.
[kok92] Arjan J.F. Kok and Frederik W. Jansen. "Adaptive sampling of area light sources in ray tracing including diffuse interreflection". Computer Graphics Forum (EUROGRAPHICS '92 Proceedings), Vol. 11, No. 3, pp. 289–298, September 1992.
[krin47] E.L. Krinov. Spectral reflectance properties of natural formations. Laboratoria Aerometodov, Akad. Nauk SSSR, Moscow, 1947.
[lewi93] Robert Lewis. "Making shaders more physically plausible". Eurographics Workshop on Rendering, pp. 47–62, 1993.
[lisc92] Dani Lischinski, Filippo Tampieri, and Donald P. Greenberg. "Discontinuity meshing for accurate radiosity". IEEE Computer Graphics and Applications, Vol. 12, No. 6, pp. 25–39, November 1992.
[lisc93] Dani Lischinski, Filippo Tampieri, and Donald P. Greenberg. "Combining hierarchical radiosity and discontinuity meshing". Computer Graphics (SIGGRAPH '93 Proceedings), pp. 199–208, August 1993.
[max93] Nelson Max and Roy Troutman. "Optimal hemicube sampling". Eurographics Workshop on Rendering, pp.
185–200, 1993.
[meye88] G.W. Meyer. "Wavelength Selection for Synthetic Image Generation". Computer Vision, Graphics and Image Processing, Vol. 41, pp. 57–79, 1988.
[meye90] Urs Meyer. "Hemi-Cube Ray-Tracing: A Method for Generating Soft Shadows". Eurographics '90, pp. 365–376, September 1990.
[meye91] Gary Meyer and Richard Hale. "A Spectral Database for Realistic Image Synthesis". Proceedings of Graphics Interface '91, pp. 47–52, June 1991.
[mitc91] Don P. Mitchell. "Spectrally optimal sampling for distribution ray tracing". Computer Graphics (SIGGRAPH '91 Proceedings), Vol. 25, No. 4, pp. 157–164, July 1991.
[mora81] Hans P. Moravec. "3D Graphics and the Wave Theory". Computer Graphics (SIGGRAPH '81 Proceedings), Vol. 15, No. 3, pp. 289–296, August 1981.
[musg89] F. Kenton Musgrave. "Prisms and rainbows: a dispersion model for computer graphics". Proceedings of Graphics Interface '89, pp. 227–234, June 1989.
[naim85] Avi Naiman. "Color spaces and color contrast". The Visual Computer, Vol. 1, No. 3, pp. 194–201, November 1985.
[nico77] F.E. Nicodemus, J.C. Richmond, J.J. Hsia, I.W. Ginsberg, and T. Limperis. "Geometrical Considerations and Nomenclature for Reflectance", October 1977.
[nish83] Tomoyuki Nishita and Eihachiro Nakamae. "Half-Tone Representation of 3-D Objects Illuminated by Area or Polyhedron Sources". Proc. of IEEE Computer Society's Seventh International Computer Software and Applications Conference (COMPSAC 83), pp. 237–242, November 1983.
[nish85] T. Nishita, I. Okamura, and E. Nakamae. "Shading Models for Point and Linear Sources". ACM Transactions on Graphics, Vol. 4, No. 2, pp. 124–146, April 1985.
[nish92] T. Nishita, S. Takita, and E. Nakamae. "A Shading Model of Parallel Cylindrical Light Sources". Visual Computing (Proceedings of CG International '92), 1992.
[peer93] Mark S. Peercy. "Linear color representations for full spectral rendering". Computer Graphics (SIGGRAPH '93 Proceedings), pp. 191–198, August 1993.
[pent82] Alex P. Pentland.
"Finding the illuminant direction". Journal of the Optical Society of America, Vol. 72, No. 4, pp. 448–455, April 1982.
[phon75] Bui-T. Phong. "Illumination for Computer Generated Pictures". Communications of the ACM, Vol. 18, No. 6, pp. 311–317, June 1975.
[pico92] Kevin P. Picott. "Extensions of the linear and area lighting models". IEEE Computer Graphics and Applications, Vol. 12, No. 2, pp. 31–38, March 1992.
[piet93] Georg Pietrek. "Fast calculation of accurate form factors". Eurographics Workshop on Rendering, pp. 201–220, 1993.
[poul90a] Pierre Poulin and John Amanatides. "Shading and Shadowing with Linear Light Sources". Eurographics '90, pp. 377–386, September 1990.
[poul90b] Pierre Poulin and Alain Fournier. "A Model for Anisotropic Reflection". Computer Graphics (SIGGRAPH '90 Proceedings), Vol. 24, No. 4, pp. 273–282, August 1990.
[poul91a] Pierre Poulin. "An extended shading model for linear light sources". Proceedings of the 1991 Western Computer Graphics Symposium, pp. 31–35, April 1991.
[poul91b] Pierre Poulin and John Amanatides. "Shading and shadowing with linear light sources". Computers and Graphics, Vol. 15, No. 2, pp. 259–265, 1991.
[poul92a] Pierre Poulin. "Separate functions for local illumination". Graphics Interface '92 Workshop on Local Illumination, pp. 37–43, May 1992.
[poul92b] Pierre Poulin and Alain Fournier. "Lights from highlights and shadows". Computer Graphics Special Issue (1992 Symposium on Interactive 3D Graphics), Vol. 26, pp. 31–38, March 1992.
[poul92c] Pierre Poulin and Alain Fournier. "Lights from highlights and shadows". Proceedings of the 1992 Western Computer Graphics Symposium, pp. 141–145, April 1992.
[prep85] F.P. Preparata and M.I. Shamos. Computational Geometry: An Introduction. Springer-Verlag, Berlin, Germany, 1985.
[raso91] Maria Raso and Alain Fournier. "A Piecewise Polynomial Approach to Shading Using Spectral Distributions". Proceedings of Graphics Interface '91, pp. 40–46, June 1991.
[rush90] Holly E.
Rushmeier and Kenneth E. Torrance. "Extending the Radiosity Method to Include Specularly Reflecting and Translucent Materials". ACM Transactions on Graphics, Vol. 9, No. 1, pp. 1–27, January 1990.
[sale92] D. Salesin, D. Lischinski, and T. DeRose. "Reconstructing illumination functions with selected discontinuities". Eurographics Workshop on Rendering, pp. 99–112, 1992.
[schl93] Christophe Schlick. "A customizable reflectance model for everyday rendering". Eurographics Workshop on Rendering, pp. 73–83, 1993.
[scho93] Chris Schoeneman, Julie Dorsey, Brian Smits, James Arvo, and Donald Greenberg. "Painting with light". Computer Graphics (SIGGRAPH '93 Proceedings), pp. 143–146, August 1993.
[schw84] M.W. Schwarz, J.C. Beatty, W.B. Cowan, and J.F. Gentleman. "Towards an effective user interface for interactive colour manipulation". Graphics Interface '84 Proceedings, pp. 187–196, 1984.
[schw87] Michael W. Schwarz, William B. Cowan, and John C. Beatty. "An experimental comparison of RGB, YIQ, LAB, HSV, and opponent color models". ACM Transactions on Graphics, Vol. 6, No. 2, pp. 123–158, April 1987.
[sedg90] Robert Sedgewick. Algorithms in C. Addison-Wesley, 1990.
[sega92] Mark Segal, Carl Korobkin, Rolf van Widenfelt, Jim Foran, and Paul Haeberli. "Fast shadows and lighting effects using texture mapping". Computer Graphics (SIGGRAPH '92 Proceedings), Vol. 26, No. 2, pp. 249–252, July 1992.
[shaf85] Steven A. Shafer. Shadows and Silhouettes in Computer Vision. Kluwer Academic Publishers, 1985.
[shin87] Mikio Shinya, Tokiichiro Takahashi, and Seiichiro Naito. "Principles and Applications of Pencil Tracing". Computer Graphics (SIGGRAPH '87 Proceedings), Vol. 21, No. 4, pp. 45–54, July 1987.
[shir91] Peter S. Shirley. Physically Based Lighting Calculations for Computer Graphics. Ph.D. Thesis, Computer Science Department, University of Illinois at Urbana-Champaign, 1991.
[sill89] Francois Sillion and Claude Puech.
"A General Two-Pass Method Integrating Specular and Diffuse Reflection". Computer Graphics (SIGGRAPH '89 Proceedings), Vol. 23, No. 3, pp. 335–344, July 1989.
[sill91a] Francois Sillion. "The state of the art in physically-based rendering and its impact on future applications". Eurographics Workshop on Rendering, 1991.
[sill91b] Francois X. Sillion, James R. Arvo, Stephen H. Westin, and Donald P. Greenberg. "A global illumination solution for general reflectance distributions". Computer Graphics (SIGGRAPH '91 Proceedings), Vol. 25, No. 4, pp. 187–196, July 1991.
[smal91] David Small. "Simulating Watercolor by Modeling Diffusion, Pigment, and Paper Fibers". Proceedings of SPIE '91, February 1991.
[smit82] Alvy Ray Smith. "Paint". Tutorial: Computer Graphics, pp. 501–515, 1982.
[smit89] B.E. Smits and G.M. Meyer. "Newton colors: Simulating interference phenomena in realistic image synthesis". Eurographics Workshop on Photosimulation, Realism and Physics in Computer Graphics, 1989.
[stew93] A. James Stewart and Sharif Ghali. "An output sensitive algorithm for the computation of shadow boundaries". Proceedings of the Fifth Canadian Conference on Computational Geometry, pp. 291–296, August 1993.
[stra86] S. Strassmann. "Hairy Brushes". Computer Graphics (SIGGRAPH '86 Proceedings), Vol. 20, No. 4, pp. 225–232, August 1986.
[suth63] I.E. Sutherland. Sketchpad: A man-machine graphical communication system. SJCC, Spartan Books, 1963.
[tana90] Toshimitsu Tanaka and Tokiichiro Takahashi. "Cross Scanline Algorithm". Eurographics '90, pp. 63–74, September 1990.
[tana91a] Toshimitsu Tanaka and Tokiichiro Takahashi. "Precise rendering method for edge highlighting". Scientific Visualization of Physical Phenomena (Proceedings of CG International '91), pp. 283–298, 1991.
[tana91b] Toshimitsu Tanaka and Tokiichiro Takahashi. "Shading with Area Light Sources". Eurographics '91, pp. 235–246, September 1991.
[tell92] Seth J. Teller. "Computing the antipenumbra of an area light source".
Computer Graphics (SIGGRAPH '92 Proceedings), Vol. 26, No. 2, pp. 139–148, July 1992.
[torr67] K.E. Torrance and E.M. Sparrow. "Theory for Off-Specular Reflection from Roughened Surfaces". Journal of the Optical Society of America, Vol. 57, No. 9, 1967.
[verb84] C.P. Verbeck and D.P. Greenberg. "A Comprehensive Light-Source Description for Computer Graphics". IEEE Computer Graphics and Applications, Vol. 4, No. 7, pp. 66–75, July 1984.
[walt75] David Waltz. "Understanding Line Drawings of Scenes with Shadows". The Psychology of Computer Vision, pp. 19–91. McGraw-Hill, New York, 1975.
[ward91] Gregory Ward. "Adaptive shadow testing for ray tracing". Eurographics Workshop on Rendering, 1991.
[ward92a] Gregory J. Ward. "Measuring and modeling anisotropic reflection". Computer Graphics (SIGGRAPH '92 Proceedings), Vol. 26, No. 2, pp. 265–272, July 1992.
[ward92b] Gregory J. Ward. "Towards more practical reflectance measurements and models". Graphics Interface '92 Workshop on Local Illumination, pp. 15–21, May 1992.
[warn83] D.R. Warn. "Lighting Controls for Synthetic Images". Computer Graphics (SIGGRAPH '83 Proceedings), Vol. 17, No. 3, pp. 13–21, July 1983.
[weil77] K. Weiler and P. Atherton. "Hidden surface removal using polygon area sorting". Computer Graphics (SIGGRAPH '77 Proceedings), Vol. 11, No. 2, pp. 214–222, July 1977.
[west92] Stephen H. Westin, James R. Arvo, and Kenneth E. Torrance. "Predicting reflectance functions from complex surfaces". Computer Graphics (SIGGRAPH '92 Proceedings), Vol. 26, No. 2, pp. 255–264, July 1992.
[will90] Lance Williams. "3D Paint". Computer Graphics (1990 Symposium on Interactive 3D Graphics), Vol. 24, No. 2, pp. 225–233, March 1990.
[will91] Lance Williams. "Shading in Two Dimensions". Proceedings of Graphics Interface '91, pp. 143–151, June 1991.
[wolf90] Lawrence B. Wolff and David J. Kurlander. "Ray Tracing with Polarization Parameters". IEEE Computer Graphics and Applications, Vol. 10, No. 6, pp. 44–55, November 1990.
[wolf91] Lawrence B.
Wolff. Polarization Methods in Computer Vision. Ph.D. thesis, Columbia University, 1991.
[wolf92] Lawrence Wolff, Steven A. Shafer, and Glenn Healey, editors. Radiometry. Physics-Based Vision: Principles and Practice. Jones and Bartlett, 1992.
[woo90] Andrew Woo, Pierre Poulin, and Alain Fournier. "A Survey of Shadow Algorithms". IEEE Computer Graphics and Applications, Vol. 10, No. 6, pp. 13–32, November 1990.
[woo93] Andrew Woo. "Efficient shadow computations in ray tracing". IEEE Computer Graphics and Applications, Vol. 13, No. 5, pp. 78–83, September 1993.
[wysz82] G. Wyszecki and W.S. Stiles. Color Science: Concepts and Methods, Quantitative Data and Formulae. Wiley, 1982.
[yuan88] Ying Yuan, Tosiyasu L. Kunii, Naota Inamoto, and Lining Sun. "Gemstone Fire: adaptive dispersive ray tracing of polygons". The Visual Computer, Vol. 4, No. 5, pp. 259–270, November 1988.
[zhou93] Jian L. Zhou and Andre L. Tits. "User's guide for FSQP version 3.2: A Fortran code for solving constrained nonlinear (minimax) optimization problems, generating iterates satisfying all inequality and linear constraints". Technical Report TR-92-107r2, Electrical Engineering Department and Institute for Systems Research, University of Maryland at College Park, March 1993.

