UBC Theses and Dissertations
Shade from shading. Liu, Lili. 1994.


Full Text

SHADE FROM SHADING

By Lili Liu
M.Sc., System Engineering, Tianjin University, 1991
B.Sc., Computer Science, Tianjin University, 1989

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF
THE REQUIREMENTS FOR THE DEGREE OF
MASTER OF SCIENCE
in
THE FACULTY OF GRADUATE STUDIES
DEPARTMENT OF COMPUTER SCIENCE

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
August 1994
© Lili Liu, 1994

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Abstract

The use of both computer generated images and real video images can be made much more effective by merging them, ideally in real time. This motivation is at the basis of Computer Augmented Reality (CAR), which involves both elements of computer graphics and computer vision. This thesis is concerned with an important aspect of CAR: to obtain geometric information about the light sources and about the surface normals from the image pixel values, and use that information to shade the computer generated objects and reshade the real objects when necessary.

To acquire light source direction and surface orientation from a single image in the absence of prior knowledge about the geometry of the scene, we assume that the changes in surface orientation are isotropically distributed. This is exactly true for all convex objects bounded entirely by gradually occluding contours, and approximately true over all scenes. This thesis develops an improved method of estimating light source direction and local surface orientation from shading information extracted from pixel values under such assumptions. First and second derivatives of intensity at each pixel are used to compute these estimates, and a weighted sum of all estimated illuminant directions is used for the whole image.

We tested our algorithm with different kinds of images: synthetic images and real video images, images with various non-planar shapes, and images with different (non-diffuse) surface reflections. We found that our illuminant direction estimator is able to produce useful results in all cases, and that our surface orientation estimator is able to give useful information in many cases.
The main use for such information in the context of CAR is to reshade objects in the real images according to new lighting information, and our tests show that our method is effective in such cases, even when both the light direction and the surface normals have been estimated from the image.

Table of Contents

1 Introduction
  1.1 Motivation and Goal
  1.2 Related Work
  1.3 Overview of Thesis
2 Analysis of Previous Methods
  2.1 Analysis of Pentland's Method
    2.1.1 Image Model
    2.1.2 Point-by-point Method
    2.1.3 Regional Constraint Method
  2.2 Analysis of Lee & Rosenfeld's Method
    2.2.1 Light Direction Detection
    2.2.2 Surface Normal in Light Source Coordinate System
    2.2.3 Surface Normal in Viewer's Coordinate System
  2.3 Limitation of Previous Algorithms
3 A New Improved Method
  3.1 General Principles
  3.2 Estimating the Light Source Tilt τL
  3.3 Estimating the Light Source Slant σL - Solution I
  3.4 Another Slant Estimator & Surface Orientation Estimator
    3.4.1 Light Source Slant σL - Solution II
    3.4.2 Surface Normal
  3.5 Confidence Interval of Evaluated Parameters
4 Performance Evaluation
  4.1 Implementation Issues
    4.1.1 Evaluation by Small Regions
    4.1.2 Average of Source Direction in Different Regions
    4.1.3 Variable Expressions and Their Range
  4.2 Evaluation of Light Direction with Synthetic Sphere Images
    4.2.1 Experiments with One Set of Synthetic Images
    4.2.2 Robustness of Algorithms
  4.3 Light Direction Evaluation with Real Images
  4.4 Light Direction Evaluation with Textured Objects
  4.5 Evaluation of Results on Surface Orientation
  4.6 Multiple Light Sources
5 Reshading
  5.1 Reshading Algorithm
    5.1.1 Shading Model
    5.1.2 Self-Shadowing
    5.1.3 Filtering
  5.2 Experimental Results
    5.2.1 Reshading with Known Object Geometry and Estimated Light Direction
    5.2.2 Reshading with Estimated Object Geometry and Known Light Direction
    5.2.3 Reshading with Estimated Object Geometry and Estimated Light Direction
6 Conclusion and Future Work
  6.1 Conclusion
  6.2 Future Work
Bibliography

Chapter 1

Introduction

1.1 Motivation and Goal

This thesis is part of a research effort concerned with the merging of real video images (RVI) and computer generated images (CGI), which is known as Computer Augmented Reality (CAR) [8].

In the past twenty years, computer graphics has made great strides towards producing realistic images. Improved hardware and software have led to increased realism in modeling shape and rendering lighting effects. But neither the hardware nor the software has developed to the level of producing realistic images of our everyday environment in real time (real time in this context means about 5 to 60 images per second).

On the other hand, real video images are not sufficient for many applications. Sometimes it is necessary to insert computer-modeled objects into a real, existing environment, or real objects (such as persons) into a computer-modeled environment.

By merging computer generated images and real video images in real time, the effectiveness of both sources of images can be improved.

Many research issues are involved in reaching that goal. They can be divided into two parts: geometric issues and illumination issues. The geometric part can in turn be divided into two problems: viewing parameters and visibility. The problem of viewing parameters is about setting up common viewing parameters between the RVI and the CGI.
Usually, the method used is to extract the viewing parameters from the RVI andsetting synthetic camera position accordingly in CCI. The problem of visibility involvesdetermining mutual visibility while compositing the two images. In computer graphics,the Z-buffer algorithm is usually used to solve this problem and they can also be donewith RVI if a real-time depth-map is provided.The illumination issues includes both local illumination and global illumination of RVIby computer generated light sources, and of CGI by light sources in the real environment.It is the Common illumination problem. Local illumination involves computing reflectedlight from light sources directly illuminating the point being shaded. Global illuminationinvolves computing light that is indirectly reflected and transmitted from the light sourcesto the point being shaded. Other illumination problems, such as shadows, reflectionsand transparencies fall somewhere between these two categories. In local illumination,many models have been developed for giving reasonable realistic pictures [7]. In globalillumination, radiosity-based methods are the most commonly used [6] [5]. To accomplishany of that with RVI involves two major steps:• to identify the directions and characteristics of the lights in the RVI in order toilluminate objects in the CCI• to acquire the surface normals in the RVI in order to shade it according to CGIlightsThe information so acquired will also be used to “unshade” the RVI object prior toreshading.The specific goal of this thesis is to extract light direction and surface normal informationfrom the pixel values alone, without prior geometry information on the lights or theChapter 1. Introduction 3objects. The purpose is not to build a model of the scene, but to get enough informationto restore the current shading of the real objects and compute new shading under newlighting conditions.1.2 Related WorkSource detection and shape detection are problems traditionally studied in ComputerVision, while local and global illumination fall into the realm of Computer Graphics.In Computer Vision, there has been extensive work on methods of deriving 3D surfaceshape from stereopsis, motion, texture and shading cues, and there has also been somework on methods of deriving light sources from shadow, texture and shading information.Many shape detection methods assume source information as known a priori, while somesource detection methods assume shape information as a known priori.“Shape from shading” was initially developed at MIT by Horn and his students. Sincethen, there has been extensive work in this area by researchers in Computer Vision,Mathematics and Psychology [17] [9] [15].The direct way to extract shape from shading is to assume that the reflectance map isknown, that is, illumination direction, surface reflectivity, and illuminant strength, areknown a priori. The indirectly way, instead of assuming reflectance map as known, assumes the surface normals distribute isotropically in the whole image. The last approachwas initially developed by Pentland [15], who tried to determine shape from shadinglocally, in cases where the illumination direction, illuminant strength, and surface reflectivity are not known. In this kind of approach, light from shading, which is to determinethe illuminant direction, has to be solved first.In our context, light source information is not assumed to be known, so the reflectanceChapter 1. 
Introduction 4map approach requires assumptions that are too strong to serve our purpose.According to Pentland [141, “the direction of illumination is required to be known in orderto obtain accurate three-dimensional surface shape from two-dimensional shading becausechanges in image intensity are primarily a function of changes in surface orientationrelative to the illuminant. For example, small changes in surface orientation parallelto the illuminant direction can cause large changes in image intensity, whereas largechanges in surface orientation that occur in a direction perpendicular to the directionof illuminant will not change image intensity at all. Thus the illuminant direction mustbe known before one can determine what a particular change in image intensity impliesabout the changes in surface orientation”.For our purpose, light from shading is important not so much because it is importantfor understanding shape from shading, but mainly because the knowledge of lightingconditions in the RVI is necessary to illuminate objects in the CGI and to “unshade”objects in RVI. A reliable light detecting algorithm is very important to get a satisfactorymerging of real and computer generated images.It is possible to recover the direction of the light sources from just intensity informationwhen certain assumptions are made.Pentland made a basic assumption about the objects surface curvature, which is thatthe changes in surface orientations are isotropically distributed. Lee and Rosenfeld [12]set up their theory based on the same assumption, but used statistics of the images todetect light direction and estimated surface orientation using the light source coordinatesystem. A third potentially relevant paper addressing shape from shading without apriori light information is from Brooks and Horn [3]. They avoided the assumption ofapproximating the surface as spheric points (points on a sphere). They used an iterativeChapter 1. Introduction 5algorithm which alternately estimates the surface shape and the light source direction.Since they did not show that their iterative scheme can converge towards the correct lightsource and surface orientation, and experimental results of the full estimation of bothlight source and surface orientation were not given, their method has not been consideredhere along with Pentland’s and Lee & Rosenfeld’s.Since reshading is the ultimate goal, we need an illumination model to compute results.Here we are not going to present a new shading model, but apply a widely used localillumination model to produce a graphics generated image using the light direction andsurface orientation estimated from the above algorithms.In the late 1970’s and early 1980’s, due to the work by Phong [16] and Blinn [1], localshading models were introduced to computer graphics. These simple local shading modelsare widely accepted and successfully used to render 3D scenes with a certain degree ofrealism. We can summarize the various formulations into a single generalized equation:L = IakaA + IAkd(NL) +k8(iV.f[)”LA is the radiance in the viewer’s direction. The first term describes ambient reflection,an artificial concept including uncountless far light source. The second term representsdiffuse or Lambertian reflection and the third term represents specular reflection.Since in general a full spectrum is considered, all terms that are wavelength-dependentare subscripted with ). 
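As an illustration only (not part of the thesis), the following short Python sketch evaluates this generalized equation for a single wavelength band; the vectors N, L and V, the half-vector between L and V, and the coefficients ka, kd, ks and the exponent n are the quantities defined in the next paragraph. Clamping the dot products at zero, so that back-facing points receive no direct contribution, is an added assumption.

    import numpy as np

    def normalize(v):
        # Scale a vector to unit length.
        return v / np.linalg.norm(v)

    def local_radiance(N, L, V, Ia, Il, ka, kd, ks, n):
        # Ambient + Lambertian diffuse + specular terms of the local model,
        # evaluated for one wavelength band.
        N, L, V = normalize(N), normalize(L), normalize(V)
        H = normalize(L + V)                 # bisector of light and view directions
        diffuse = max(np.dot(N, L), 0.0)     # N.L, clamped at zero
        specular = max(np.dot(N, H), 0.0) ** n
        return Ia * ka + Il * (kd * diffuse + ks * specular)

    # Example: a surface facing the viewer, light at slant 40 degrees.
    s = np.radians(40.0)
    print(local_radiance(np.array([0.0, 0.0, 1.0]),
                         np.array([np.sin(s), 0.0, np.cos(s)]),
                         np.array([0.0, 0.0, 1.0]),
                         Ia=0.2, Il=1.0, ka=0.3, kd=0.6, ks=0.3, n=20))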
N is the surface normal at the point being shaded, L is theilluminant direction, is viewpoint direction, and R is the bisector of L and 7. ‘a,\,’pAare light source radiance (W/m2xsr). kaA,kdA and k8A are ambient-reflection-coefficient,diffuse-reflection-coefficient and specular-reflection-coefficient respectively.The radiance caused by diffuse reflection is determined by the angle between the surfacenormal and the light source, but is not dependent on the view point. Specular reflectionChapter 1. Introduction 6is a function of the direction of view point. In practice, we will not use the specular termin our reshading, but it would be easy to do so, and it is included here for completeness.1.3 Overview of ThesisThe rest of the thesis is organized as follows:In Chapter 2, we analyze Pentland’s and Lee & Rosenfeld’s light from shading andshape from shading algorithms. The algorithms are closely related to our goal.In Chapter 3, we presents the concepts and formula of our proposed light from shadingand shape from shading method. Some implementation issues are also discussed.In Chapter 4, we evaluate the performance of the three algorithms by comparing estimated light direction and surface normals with real values with various objects.In Chapter 5, we reshade objects with the estimated light direction and estimatedsurface orientation and compare the results obtained from all three algorithms.In Chapter 6, we present our conclusions and discuss extensions and future work.Chapter 2Analysis of Previous Methods2.1 Analysis of Pentland’s MethodInstead of using shape from shading techniques assuming knowledge of the scene sothat surface orientation can be estimated by propagating constraints from boundaryconditions, Pentland applied a purely local analysis to an unknown scene.According to Bruss [4] and Brooks [2], local methods will not lead to unique results,since singular points and occluding boundaries provide strong constraints, which are notavailable to a method that only considers shading in a small region of the image.Pentland agreed that local shading information is not sufficient to fully determine surfaceshape. However, he argued that local shading information is sufficient to determine thestructure of a simpler class of shapes: the set of Lambertian spherical points, and manygeneral surface shapes can be locally approximated by these simpler shapes.Pentland presented two local shading analysis methods: point-by-point and regional inference. In the point-by-point method, six parameters including two for surface normaland two for illuminant direction are derived from six constraints provided by six independent measurements including intensity(I), first derivatives of intensity on X and Yaxes(I, Ii,), second derivatives of intensity on x and y(I, I and Ifl). In the regionalconstraint method, regions are considered rather than a single point. There are manypossibilities for obtaining a good estimate of the mean value of particular parameters by7Chapter 2. Analysis of Previous Methods 8using inferences about the distribution of image data within the region. After obtainingan estimate for the mean value of a parameter, other parameters can be obtained.In the following sections, Pentland’s image generation model is discussed first since othermethods discussed in this thesis are all based on a similar image generation model. 
ThenPentland’s two local shading analysis methods are discussed.2.1.1 Image ModelThis theory assumes a simple model of image formation: there is a distant point sourceilluminant in direction L, a patch of surface (assumed to be a Lambertian surface) withsurface normal N, and a viewer in direction V (Figure 2.1).Figure 2.1: A simple model of image formationThe surface normal N, the viewer’s direction V, and the illuminant direction L are allunit vectors in Cartesian three-space. So only two parameters are needed to express eachof them, the third component being determined by the constraint that they have unitmagnitude.Any unit vector can be specified by two angles uniquely in a chosen coordinate system:tilt and slant. Slant is the angle between Z axis and the unit vector. Tilt is the anglebetween the projection of the unit vector on the XY plane and the X axis. In FigureChapter 2. Analysis of Previous Methods 92.2, tilt and slant for unit vector V are shown.xYOV’: projection of vector V on Xy p1aFigure 2.2: Any unit vector can be identified by two angles: tilt and slantThere are three parts to our model of image generation: the illumination of the surface,the reflection of light falling upon the surface back into space and eventually onto image,and the projection from the surface of the object onto the image plane.Considering the three parts of image generation, we obtain the formula:I p\(N.L) (2.1)where p is the portion of incident light that is reflected and is the amount of light (flux)per unit area arriving the surface.Pentland’s method uses the first derivative of I instead of I itself to be the input information.If we stay within a relatively small, homogeneous area of the image, it is reasonable toassume that the albedo and illumination are constant. So we have:zVtiltrdl = p\(diV.L)Chapter 2. Analysis of Previous Methods 10Thus the change in image intensity dl depends on the change in the surface normal dNrelative to the illuminant.2.1.2 Point-by-point MethodAssume that at each point in an image we can obtain six independent measurements:intensity(I), two first derivatives of intensity along the X and Y axis (Is, Is,) and threesecond derivatives of intensity along X and Y (I,I and I,), where I =,I,, == I=and I, = Therefore, at most six image-formationparameters can be determined.In Pentland’s theory, all surface points are assumed to be spherical points, that is thesurface can be approximated locally by a sphere. Six parameters are required to specifythe image intensity in the neighborhood of a spherical point on a Lambertian surface:two for surface normal (tilt TN and slant crN), one for radius of curvature R, two for theilluminant direction (tilt TL and slant oL), and one for the product of surface albedo pand illuminant strength A.’Pentland’s solutions for the six parameters are as follows:—(I, —— Ivy)2 + 4Itan(TN)= 21Here the lower sign applies when the source is within 7r/2 of the eye direction, while thetop sign applies when the source is beyond 7r/2 of eye direction.The surface slope (ow) is related to the ratio of maximum to minimum second directionalderivative._________________2(I + Iyy)±( — Iyy)2 + 41yC05 aJ =(lxx +— Ivy)2 + 4Iy‘We use Pentland’s notation in this section. ) corresponds to I in the previous chapter and p toChapter 2. 
Analysis of Previous Methods 11or(Irx + Iyy) — — Iyy)2 + 4ICOSJN =2 ‘i I _12V xyThe assumption of local sphericity is inappropriate when II — I <0.The formula for illumination direction L(ll, 12, 13) is:Iii2R+IxRpA—I2R+ IRl2= XpAI?72R313= YXpAwhere:X = COS rNsincYN7 = sin TNSiflUN= 1— x2 _72R= 22 ((xix + 7Iy)±(xIx +71)2—4IIi2)(A)2 = ( IxyliR + IrR)2 + (_R + IR)2 +2.1.3 Regional Constraint methodLight from Shading1. Assumptions about the surface curvature in the imagesIn formula 2.1, image intensity(I) is not only determined by the illumination position and characteristics(L), but also by the shape of the 3-D objects(N). I is knownChapter 2. Analysis of Previous Methods 12from the image while L and N are unknown. It is an underconstrained problem.However, L can be estimated by adding constraint by making an assumption aboutthe surface curvature of objects in the image. As Pentland said, “one assumptionthat is sufficient to disentangle L and surface curvature N is that changes in surface orientation are isotropically distributed. This condition js true of images ofconvex objects bounded by smooth occluding contour and is true on average overall scenes”.There is an important concept involved in the assumption: image direction. Animage direction is a 2D direction in the image plane, and can be expressed by the2D unit vector. There are two ways to specify a 2D unit vector. One way is theangle between the vector and X axis. The other way is by the two projections ofthe vector on the X and Y axes. See figure 2.3.Y_______________///::ç”direction(dx.dy)Figure 2.3: Two ways to express an image direction V :(dx,dy) or 0 = tan1(dy/dx)Assuming that surface orientation, when considered as a random variable over allpossible scenes, is isotropically distributed, we can derive that dN, the change inN, is also isotropically distributed. Thus the sum of diV over all image points andChapter 2. Analysis of Previous Methods 13image directions about those points has an expected value of zero:E(EdN)=Owhere x, y are the image point coordinates, and represents the direction in theimage.Under the assumption that within a region the expected value of the sum ofdN(dxN, dyN, dzN) over all directions is zero, then along any one image direction 0dxN is proportional to cos 0, dyN is proportional to sinO, and dzN is zero. Thus:E(dI) = p,\{E(dN).L] = p\(dNxL + dyNyL + dzNzL) = p\(dNxL + dyNyL)where dxN and dyN are the mean values of dxN and duN as observed along thatimage direction and E(dzN) = 0.2. Light from Shading AlgorithmIf a differential dr is introduced, which may be thought as the expected magnitudeof dN, as E(dNI), then:dxdr = dxNdydr = dyNwhere (dx,dy) is an image direction.Thus:E(dI) = p\dr(xLdx + yLdy)Letting XL = p.\xLdr and YL p/\yLdr, then:Chapter 2. Analysis of Previous Methods 14E(dI) XLdX + LdyThe least-squares regression model can be set up to obtain 1L , YL and further theratio of unknown XL and YL.Letting dI be the average of dl over the region along image direction (dxi, dy)(Figure 2.3), then the least-squares regression model is ([14]):d11 dx1 dy1d12 dx2 dy2 ( Xy\YLdf dx dyThe tilt of the illumination direction is given by:_1YL 1YLTL=tan (—)=tan (-—)XL XLFrom this least-squares regression, the confidence intervals for 1L, YL and so theconfidence interval for tilt can be estimated. The confidence interval can be usedto evaluate the robustness of the tilt estimated statistically. 
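The regression just described can be made concrete with a small sketch. The Python code below is an illustration under assumed choices of image directions and finite differences, not the thesis's implementation: it computes the mean first derivative of intensity along a set of image directions and fits the two coefficients whose ratio gives the illuminant tilt.

    import numpy as np

    def estimate_tilt(image, angles_deg=range(0, 180, 15)):
        # Regress the mean directional derivative E(dI) against the image
        # direction (dx, dy); the fitted (xL, yL) give the illuminant tilt.
        gy, gx = np.gradient(image.astype(float))       # dI/dy, dI/dx per pixel
        directions, mean_dI = [], []
        for a in angles_deg:
            dx, dy = np.cos(np.radians(a)), np.sin(np.radians(a))
            dI = gx * dx + gy * dy                       # derivative along (dx, dy)
            directions.append([dx, dy])
            mean_dI.append(dI.mean())                    # E(dI) for this direction
        (xL, yL), *_ = np.linalg.lstsq(np.array(directions),
                                       np.array(mean_dI), rcond=None)
        return np.degrees(np.arctan2(yL, xL))            # tilt = atan(yL / xL)

    # Typical use: pass a small image region; the residuals of the same fit
    # are what the confidence interval discussed above is built from.
    # tilt_deg = estimate_tilt(region)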
For single light sourceimages, the mean light direction tilt can be derived by the weighted sum of tiltsin all small regions. The weight we use is the inverse of the standard deviation,which means the larger the confidence interval, the smaller the weight, and the lessreliable the tilt obtained in the region.To derive the slant, we need one more assumption:Var(dXN) = Var(dyN) = Var(dzN) = dr2Chapter 2. Analysis of Previous Methods 15which is true for sphere model, then we obtain the following formula, derived byPentland:E(d12)— E(dI)2 =p2fdrLetting k = p\.dr, then the illumination direction (XL, Vi, zL) is:XLXL=ZL[1 k2And the slant of the illumination is cos’ zL.Surface Normal from ShadingSince the tilt of the surface normal estimated by the point-by-point method is fairlyrobust [14], the tilt given by the regional constraint method is just the average of thetilts estimated at all the points in the region by the point-by-point method.The slant of the surface normal estimated by the point-by-point method, however, depends critically on having equal curvature and exact knowledge of the surface tilt. A morerobust estimate of surface slant can be derived by estimating mean surface curvature inthe region first.1Zir/V21 1V I R2As Pentland said, “A good estimate of mean R within an image region can be made byapplying the constraint that resulting ZN must satisfy the inequality lzNO, that is,the visible surface must fWcing the viewer. We assume that the estimated value of Rholds throughout the region, and thus obtain an estimate of intra-regional slant.”Chapter 2. Analysis of Previous Methods 162.2 Analysis of Lee & Rosenfeld’s MethodLee and Rosenfeld [12] estimated shape from shading using a light source coordinatesystem. They identify light direction and surface normal at each point according to thefollowing steps:1. compute the illumination direction L;2. compute the surface normal N in light source coordinate system;3. use the coordinate transformation to get the surface normal in the viewer’s system.Since the illumination direction at a point is derived by evaluating the distribution ofintensity within its neighboring region, Lee & Rosenfeld’s method can be classified as aregional constraint method.2.2.1 Light Direction DetectionLee & Rosenfeld used a statistics approach to estimate the illumination direction. LikePentland, they also assume isotropic distribution of the surface orientation. From theassumption and some statistical theories, the probability density function for the slant isgiven by: (sino)/2ir (see [12] for details).For the illumination direction L, let 11, 12, 13 be the projection of the unit vector L onthe X, Y and Z axes. In terms of tilt and slant of L, 11, 12 and 13 can be expressed as:11 = sin(JL) cos(TL)12 = sin(oL) Sifl(TL)13 = cos(JL)Chapter 2. 
Analysis of Previous Methods 17From the surface normal N (tilt TN and slant ow), and the formula I = )pN.L, we get:I = Ap(—llsinJNcosTN —l2sinJN•sinrN — l3coso)The first derivative of the image intensity is then:I = (—l1 + l3tan JNC0STN)I, = —l2 + l3tan JNSiIITN)From the assumption that all points can be approximated as spherical points, R is theradius of the sphere on the point.Solution for tilt TLFrom the expected value of both quantities:E{I} = (—l1+l3E{tanuN.cosrN})2ir ir/2= —(—l1+l3j j tanaN.cosrN(sin2u)/22rth7dr);11RE{I} = (—l2+l3E{tancrN.sinrN})2ir ir/2= —(—l2 + 13j j tan cTNsm TN.(sin2cr)/2lrduclT)- ---l2Rso we obtain, TL = tarr1(12/l1) = tanSolution for slant crLCase I: illuminated points with the same tilt as the illuminant direction.At such points, ‘rL = TN, the shading equation becomes:I=p)(N.L)Chapter 2. Analysis of Previous Methods 18= p.)(sin(JN)cos(TN)sin(JL)cos(rL) +sin(oN)sin(TN)sin(JL)sin(TL) +cos(oN)cos(oL))= p)(sin(crN)sin(JL)cos(rL— TN) + cos(ow)cos(cTL))= p(sin(JN) sin(0L) cos 0 + cOS(0N) cos(orj)= pAcos(ow—UL)Because of the shading model we used, the surface orientation at illuminated pointshaving the same tilt as the light source direction on the object is limited to a regionsmaller than —ir/2 to T/2. According to Lee and Rosenfeld, the range of 0N is from—7r/2 + cYL to ir/2. So the sampling distribution of the slant here is: cos crN/(1 + cos crL).Taking the expected value of I:E{I} = pAE{cos(JL— 0N)}cosuN= pA j cos(crL— JN) daNl+coscrL=, (ir—cTL) cos L + sin aLp 2(1+coscrL)Taking the expected value of 12:E{12} = p2fE{cos(aL — 0N)}2 212 2 cosapr= í f I cos (°-L — N) duNl+cosaL— 2 + 4 cos L + cos 2o— pJ 6(1 + cosuL)= -j—(1+cosJL)From the above two equations, the slant of the light source can be obtained. Note thatChapter 2. Analysis of Previous Methods 19the expectation of I and 12 are taken only over the points with the same tilt as theillumination direction. To use the equations, those points have to be identified.Case II: illuminated points with general tilt.In this case, the tilt of the surface normal at such points is not fixed. The samplingdistribution for slant of surface normal differs in the 2D case from that found in 1D caseas above. The new distribution is: sin 2uN/(1 + cosWith a derivation similar to that of the previous case, Lee & Rosenfeld obtain:E{I} = ( — JL) COS L + Slfl 0L (2.2)3ir(1 + coscTL)E{12} = + cos JL) (2.3)The expected values of equations 2.2 and 2.3 are taken over the whole image. In ourexperiments with Lee & Rosenfeld’s method, we use this set of equations to obtain0L instead of the equations for illuminated points with the same tilt as the illuminantdirection. The advantage of the latter method is that it does not require that theseparticular points be identified.2.2.2 Surface Normal in Light Source Coordinate System1. Tilt of the surface normal in the light source coordinate systemDefining:I cosocosrL COSLSiflTLITT TT T_ i lIT T’\Tk’1x,y) I Ik-z,ly)\ —slnrL COSTL )Lee and Rosenfeld proved that the tilt of the surface normal in the light sourcedirection coordinate system is: tan’ (II,/II)Chapter 2. Analysis of Previous Methods 202. Slant of the surface normal in the light source coordinate systemConsider again the image irradiance equation: I = pA(N.L). The surface slantviewed from the light source coordinate system will be the cos1(N.L). Let us notethe slant of surface normal in this system as 0.One method to estimate (JV.L) is based on a local estimation of pA. cos 0 in lightsource coordinate system can be computed from I/(pA). 
To estimate pA at pixelP, let Q be a point near P in the gradient direction at P and let R be a point onthe opposite side of P:Ip = pA(Np.L) = pAcos0IQ = pA(N.L) = pAcos(0 + &)JR = pA(NRL) = pA cos(0 — z2)where L and L2 are positive small angles. We can approximate= . So:I = pA cos(0 + L) = pA(cos 0 cos — sin 9 sin t)= pAcos(0—= pA(cos0cosL + sin0sint)Therefore:1R+IQ = 2pAcos9cosL 2IpcosL— I = 2pA sin 0 sin Jso:‘R + IQCOSLh =2Ipand(\)2= ( 0)2 + (pA cos 9)2_____r 2= “2sinL +IPChapter 2. Analysis of Previous Methods 21so:T2 IT/ ip ‘R-’Q i- 2A)= 1 2 1/1 1 \ip UiR+iQ)/i)According to [11] the process can be iterated since p\ can be viewed as the brightness for a suitable light direction. The observed intensities is corrected by dividingthem by cos 0: I/cos 0 = pX. Three iterations are enough in many cases. Finally,we obtain:cosO = Ip/p.\2.2.3 Surface Normal in Viewer’s Coordinate SystemLet the coordinates in terms of light direction (having the Z-axis in the illuminationdirection) be (x’, y’, z’), the coordinates in terms of the viewer’s (having Z-axis in theviewer direction) be (x, y, z).One way of deriving (x’,y’,z’) from (x,y,z) is as follows: (1) rotate by the tilt angle TLcounterclockwise around the Z axis (2) rotate by slant angle TL counterclockwise aroundY axis to obtain the light coordinate system.In summary, the transformation between the two coordinate systems is:cosJL•cosrL cosoLsinrL —sinrLfx,y,Z)— —slnrL cosr 0 X,y,ZjsrnuL•cosrL sint7L•sinrL cosrJ,or:cos o• COS TL — sin TL sin o,• cos TLT ‘ ‘(x, y, z) = sin TL• cos L cos TL smc7L• Sfl TL (x , y , z)Sifl 0 COSTLChapter 2. Analysis of Previous Methods 222.3 Limitation of previous algorithmsWe tested both of these algorithms with many simple and complex real and computergenerated images.We will present more results of these tests in Chapter 4 for comparing different algorithms’performance. The relevant conclusion for now is that both algorithms give very goodestimation for one parameter of the light source direction, the tilt, but not good at allfor the other parameter, the slant (see Figures 2.4 and 2.5).100ideal tilt .etimatorPentland’s tilt .atimatop>Leeiflosenfeld’a tilt eati006040 /200 20 (0 60 00 100Figure 2.4: Relation between estimated tilt (vertical axis) and real tilt (horizontal axis)with a set of sphere imagesFor surface normals, both algorithms can give usable results in some cases. But theestimated surface orientation for the whole image is generally too bad to be used for ourpurpose.Given these limitations, we still need an algorithm with better performance in estimatingslant of light source direction, and a better performance in estimating surface orientation.Chapter 2. Analysis of Previous Methods 23100 -0060402020 40 60 00 000Figure 2.5: Relation between estimated slant (vertical axis) andaxis) with a set of sphere imagesreal slant (horizontalcy=O ‘r=40;a=40 ‘r=120;a=60sourceimagesPentland’ SsolutionLee’ssolutionideal slant as tiaatorPentland’a slant estiaato,—-—Lenisosenfeld’ slants, tinet’7’—-----7’--7”Figure 2.6: Shade from shading performanceChapter 3A new improved method3.1 General PrinciplesOur algorithm will also be based on the assumption that surface orientation is isotropically distributed.Let us first discuss the range of surface normals and their derivatives. The coordinatesystem used is the viewer’s coordinate system. The angle between any vector and the Zaxis is from 0 to ir. 
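Since every direction in what follows is expressed by its tilt and slant, a tiny conversion helper may be useful for experimenting with the formulas in this chapter. It is purely illustrative and simply restates the definitions given earlier: slant is the angle from the Z axis, and tilt is the angle between the XY projection and the X axis.

    import numpy as np

    def tilt_slant(v):
        # Return (tilt, slant) in degrees for a 3D vector v.
        x, y, z = v / np.linalg.norm(v)
        return np.degrees(np.arctan2(y, x)), np.degrees(np.arccos(np.clip(z, -1.0, 1.0)))

    def from_tilt_slant(tilt_deg, slant_deg):
        # Unit vector with the given tilt and slant, e.g. a light direction
        # L = (sin(slant) cos(tilt), sin(slant) sin(tilt), cos(slant)).
        t, s = np.radians(tilt_deg), np.radians(slant_deg)
        return np.array([np.sin(s) * np.cos(t), np.sin(s) * np.sin(t), np.cos(s)])

    print(tilt_slant(from_tilt_slant(120.0, 40.0)))   # approximately (120.0, 40.0)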
All vectors with the same angle with the Z axis form the outsidesurface of a cone (Figure 3.7).z ..i.NFigure 3.7: The range of surface normals relative to Z axisAssuming we are looking down the Z axis orthographically, then all visible surface normalsare within a ir/2 angle from the Z axis. When the angle from the surface normal is larger24Chapter 3. A new improved method 25than 7r/2, the surface will not be seen in the image.The surface normal vector is unique at any particular surface point, but the first orsecond derivative of the surface normal is not unique. The first derivative of the surfacenormal in all directions form a tangent plane perpendicular to the surface normal at thispoint. However, in our context, all first and second derivatives of surface normals aremeasured along a given direction and derivative of surface normal along a given directionis unique.From the analysis in the previous chapter, we know that, under the assumption of Lambertian surface, the first derivative of image intensity along a particular image direction(dx,dy) satisfies the following formula:dl = pA(dN.L) (3.4)Since dl is measured as the difference in intensity between two adjacent pixels along aparticular image direction, dN should be measured as the difference of surface normalsbetween the same two pixels. Therefore, every dN used in our theory is along a particularimage direction. For a certain surface point, its dN along a certain image direction isunique. Its range of angle from the Z axis is between 0 and r.The projection of dN on the image lies on the image direction that dN is measured along.For dN along image direction 0, the tilt of the dN is 0.We define the second derivative of surface normal as:dNp—dNQ —lim = d2NpdNpdNQ0dNp — 0dNqwhere dNp and dNQ are the first derivative of surface normals on two adjacent pixels Pand Q. d2Np, as it is defined in the formula, is along image direction from image pointChapter 3. A new improved method 26P to Q.’In our algorithm, we use the d]Vp÷i and dNp that are along the same image direction asd2Np in the formula. The reason is just to make the tilt of dNp1,dNp the same as thetilt ofd2Np, which simplifies the formula. Under the assumption that all objects in ourimages are convex objects, the second derivative of surface normal could be 7r/2 to r inthe direction from the Z axis (see Figure 3.8).The second derivative of surface normal at a given point along a given direction is unique.NINimage directionFigure 3.8: Range of surface normal and its derivativesSo we havesin cYN > 0 ; cos N > 0sin 7jj > 0 ; cosodN > 0 or < 0sin Od2N > 0 ; cos ud2N < 0Replacing equation 3.4 with functions of tilt and slant, we obtain:dl = pAl(sin dN cos TdN sin L cos TL + sin0j!N sin TdN sin oL sin TL + cos 0dN cos UL)where 1 is the magnitude of vector L, and dN is a unit vector.1As we discussed before, unit vector is characterized by its tilt and slant. For dN along imagedirection 0, its tilt is fixed to be 0. Therefore, dN along image direction 0 is determined only by its slantvalue.Chapter 3. A new improved method 27Assume dN is along image direction 8, TdN = 0.dl = p)d(sin dN cos 0 sin L cos TL + sin0dN sin 0 sin 0L Sill TL + COS tdN COS 0L)Since cos(0—TL) = cosOcosrL +sinOsinrL,dl = p)l(sinodN sinoLcos(TL —0) + cosodN cos LYL) (3.5)°dN is a angle between 0 and ir. Since we assume that surface normals are distributedisotropically, the distribution of dN in the range from 0 to 7T should be uniform. 
Thesum of cos dN along isotropically distributed objects should be 0:E(cos(odN)) = 0 (3.6)E(dI), the expected value dl along a particular image direction given M measurementsof dl along the image direction 8, is thereforeE(dI) = p.)J(E(sin udN) cos 0 sin o, cos TL + E(sin odN) sin 0 sin 0L sin TL)= pl(E(sin JdN) sin aL(coS 0 cos TL + sin 8 sin TL))= p\1E(sin dN) sin L cos(0 — TL)Letting dI be any of M measurements of the first derivative of intensity along imagedirection 0 (0 = tarr’ (dy/dx)), and E(dI) be the mean of M measurements of dI onM number of pixels along image direction 8, then our formula becomes:d11 = p\l(sin dN sin UL cos(TL— O) + cos dN cos o-L) (3.7)E(d11)= pklE(sin odN) sin oi. cos(0 — TL) (3.8)Chapter 3. A new improved method 28(dIj)pi — (d11)p= pAl((sinJdNp+1— Sifl7dN)SiflJL cos(TL — O)+(cosudNp+1— COSadN)COScTL)p)\l((cosudN(d))sinoLcos(rL— 8) + (—sinudN)(dcr)cosJL)where d =t7dNp,— dNpLet idI = (dI)p1— (dI)p, which is the difference of dl on P and its closest neighbouralong image direction O.Letting d2I be second derivative of intensity along image direction O, similar to d2N’sdefinition, we have:urn(dI)p÷1 — (dI)p= (d2I)pdNp+l dNp° JdNp,—*JdNpwhere (dIjpi and (dI)p are measured along the same image direction as (d2I)p.For any pixel, we haveLdI = p)J((cos udN(du)) sin oL cos(TL—— sin cJdN(dcT) cos ‘7L) (3.9)When du is small, LdI(d2I)xdo-.So at any pixel, the second derivative of intensity as defined above should bed2I = pAl(cosudNsinoLcos(rL—Oj) — sino’dNcoscrL) (3.10)For the same reason, cos dN satisfies equation 3.6, sin dN > 0 for all surface normalsand sin oj,, cos o are constants for all points in a fixed image.The expected value of LdI is then:E(z.dI) = pAlE(du)(E(cos CTdN) sin 0L cos(TL—— B(sin cTdN) cos= p)JE(do’)(O— E(sinudN)cosuL)Chapter 3. A new improved method 29E(dI) = —pAlE(d)E(sin cTdN) cos UL (3.11)and:E(d21)= —p)lE(sinJdN)cosuL (3.12)3.2 Estimating the Light Source Tilt TLFrom formula 3.7:dI = p.)J(sin dN Sifl uL COs(TL Oi) + cos 0dN cos aL)When TL = O, that is, when the dl is measured along the tilt of light source, cos(rL—S) =1. When TLOj, cos(TL — 0) < 1. dI will be largest when the image direction (dx:, dy:)overlaps with the tilt of the light source.For the same reason, E(dI) is largest when (dxi, dy) is the same as the source tilt (TL).Equation 3.8 can be written as: (let Edlmax = p)JE(sin udN) sin uL)E(dI) = EdIma cos(0 — TL)= EdImar(cos O cos TL + sin 0 sin TL)Ed11 cos 0 sin &iEd12 COS 02 Slfl 02 ( TLEdImaxk. sin TLEdImasEdJ cos9 sin0If image direction 0 to 0 is uniformly distributed between 0 to 27r, the matrix formulacan be simplified asEdImaSflTL — E(dI)sin(0) 3 13EdImaxCOSTL — :E(dIj)cos(0j)Chapter 3. A new improved method 30The left side of above formula can be simplified as tan rjj,:E(d11)sin(9:)tanrL= E(dI)cos(O) (3.14)The light direction has the same tilt as the vector obtained by adding vectors from E(d11)to E(dI).3.3 Estimating the Light Source Slant 0L - solution ITaking the ratio of 3.8 and 3.11, we obtain:E(dI:)— pXlE(sinJdN)sinJLcos(O— TL) (3 15E(/dI)— p;k1E(du:)E(sin 0dN) cos L— tancTLcos(O—rL) (316)— E(dcr:)Image direction 8 is known and Ti, is determined as before. If E(du2) is a function of TLor 0i, or both, L is solvable.From equation 3.12 , do = dNp+i — 0dNp, the difference of the direction of the firstderivatives of surface normal at two adjacent pixels P and P+1. The term E(du) is theexpected value of dcr over the entire image. 
Since the first derivative of surface normalis distributed isotropically over the whole image, dcr is distributed isotropically as well.E(dcr) , the average difference of surface normal first derivatives on two neighboringpoints of the whole image, is mainly determined by the resolution of the image. Under theassumption of isotropically distributed surface normals, E(da) is only weakly dependenton the objects orientation. Is E(da) related to the light direction? If E(d) is theunweighted average of du for all image points, E(do) would not be related to the lightdirection. But in our method, E(do’) is a weighted average of dcr for all image points,with the weight directly related to the light source direction.Chapter 3. A new improved method 31In any image, some image points may not receive light directly from the light source(s),but instead are illuminated by indirect light that reflects or refracts from other objects.These points are not taken into account by this or by any previous methods presented.Furthermore, for image points corresponding to background regions and the boundarybetween two objects, the intensity distribution around those points has little relationwith the light source direction. At these points, since the distribution of intensity isirregular for reason of reflection, or change of material, the intensity variance is relativelylarge compared to other points directly shaded by light source(s) and on only one surface.In the above equations, the expected value of dIi, &I and dci in a region are weightedaverage of dIi, &I and dci around all points of the region, where the weight is thereciprocal of intensity variance at each point. For points which are not shaded directlyand solely by light source(s), the weight is small. It is the surface points that are directlyshaded by light source(s) that determine the value of E(dci).When the light source direction is different, different part of surface is directly illuminated, that means, surface points with different dci value are being directly shaded. Wefound a predictable pattern for convex objects. As L gets larger, surface points withlarger dci value are directly shaded by the light source. Since the weight for those pointsbeing directly shaded are larger when calculating E(du), E(da) increases as L increases.To illustrate the pattern, we will use a sphere as an example. First of all, we illustratethat: dci= cidNp+i — dN on two adjacent pixels P+1 and P is larger if cidN is larger onthe two points. In Figure 3.9, assume the distance on the image between P1 and P2 isthe same as that between P3 and P4. Since the slant of the surface normal at P1 and P2is larger than that at P3 and P4, the distance on the sphere surface between P1 and P2is greater than that between P3 and P4, and therefore dci between P1 and P2 is largerChapter 3. A new improved method 32than the value between P3 and P4. For a general surface, the larger the surface normalson the two neighboring image points at the same distance on the image, the larger thedistance between them on the object surface, and the larger the du between them. Forpoints on the sphere boundary, do is largest. The closer the points are to the center ofthe sphere, the smaller dcr becomes.Figure 3.9: The slant of surface normals on P1 and P2 are larger than that of P3 andP4, so the dcT is larger between P1 and P2 than dcr between P3 and P4.Assuming that we look down the Z axis, when the light source slant oj. 
= 00 (along theZ axis), all points on the image are directly shaded by the light source. As 0L becomeslarger and larger, the area directly shaded by the light source in the image moves fartherand farther from the projection of sphere center. Since the points directly being shadedby the light source have a larger weight when calculating the expected value of do , anddg on those points is larger when the light source slant is larger, therefore E(do-) is largerwhen L gets larger.When the tilt of the light source direction changes, the position of the area being directlyChapter 3. A new improved method 33shaded will change, but not its size or distance from the projection of the sphere center.Therefore, E(d) is not affected by TL.To confirm the above analysis and find a quantitative relation, a set of computer generatedsphere images (lOOxlOO) with a known light source direction and known surface normalat each point are used to plot the relation between E(dcr) and different 0L (tilt heldconstant), and relation between E(do) and different TL (slant held constant), respectively.Figure 3.10: E(do) as a function of 0L with the first set of sphere imagesFrom the two Figures, we see E(da) is much more related to 0L than TL. E(d) is almostindependent of TL when 0L <= 700 and a little related to TL when aL is larger than 700.To confirm the result, we experimented with another set of computer generated imageswith different levels of ambient light.2.521.510.5010 20 30 40 50 60 70 80Chapter 3. A new improved method 342.5 I I I Islant = 10slant = 20/-.slant=3O5-lant = 40——-slaht.---slant= 60----/ slant = 70slant=801.5 -10.5 — ________0 I20 40 60 80 100 120 140 160Figure 3.11: E(du) as a function of TL with the first set of sphere imagesChapter 3. A new improved method 352.221.81.61.41.210.80.60.40.210 20 30 40 50 60 70 80Figure 3.12: E(da) as a function of UL with the second set of imagesChapter 3. A new improved method 362.2 I I I I I-1antslant = 20slant = 30slant=40slant= 50slant=60-----slant=7016- slant=801.41.2 - - - - -------0 8--0.6 -:_ ::.——---0.4 -0.2 I I I I I20 40 60 80 100 120 140 160Figure 3.13: E(d) as a function of TL with the second set of sphere imagesChapter 3. A new improved method 37We therefore will ignore the dependence of E(duL) on TL. To find the function E(d)in terms of UL, we average all the curves plotted between the two terms for each set ofimages (see Figure 3.14).1.81.61.41.210.80.60.40.2Figure 3.14: E(dcr) as a function of 0LThe average of the two curves is used as the function E(do) of crj, in our algorithm.Combining the two terms related to oL in 3.16, we obtain:E(dI)= —A(oL)cos(O — TL)E (zXdI) (3.17)where A(0L) =___The relation of A(JL) to UL is plotted in Figure 3.15.In the method, we use a polyline based on the above data to represent the function curve.10 20 30 40 50 60 70 80Chapter 3. A new improved method 383.5 I I I I I I I IA as a function of slant of light direction —3.2.521.510.50 I I I I I0 10 20 30 40 50 60 70 80 90Figure 3.15: A as a function ofChapter 3. 
A new improved method 39I A(i * 100) when L = i * 10°=linearfunction when i * 10° <JL < (i + 1) * 100where i=0,1,2,3,4,5,6,7,8.The values of A(i * 100) are obtained from experiments with images having oL = i * 100.They are 0 (UL = 0°), 0.65 (oL = 10°), 0.74 (JL = 20°), 0.96 (JL = 30°), 1.27 (crL = 40°),1.57 (crL = 50°), 1.86 (UL = 60°), 2.45 (cYL = 70°), and 3.24 (JL = 80°).In summary, solution I consists of, given j} which are computed from the image,finding the value of o.k, which in the right side of equation 3.17 gives the value closest tothe left side of the equation.3.4 Another Slant Estimator & Surface Orientation Estimator3.4.1 Light Source Slant oj. - solution IIFrom equations 3.7 and 3.8, we have:dI— E(dI:) = p)J((sin dN — E(sin cTdN)) sin 0L cos(TL—0) + cos dN cos cTL) (3.18)EdI1— E(dIjI = p)J. EI((sin0dN — E(sin UdN)) 5fl L cos(TL — O) + cos dN cos oL)I(3.19)Compared to cos dN cos L, (sin dN — E(sin dN)) Sfl T1, cos(TL — O) may not be smallenough to be ignored.Since sin adN — E(sin odN) is evaluated in a small region, the difference between 0dN andsin’(E(sincrdN)) is not large, so:dl: — E(d12) pl(cosodN(dJdN)sino’L cos(TL — 0) + cosJdN coscrL)Chapter 3. A new improved method 40= p.\lcos dN COSJL((dUdN)tafluL cos(TL — O) + 1)where (dJdN) = odN — sin’(E(sinadN)), and:Id1 — E(dI)I pAlIcosodNIcosJL(dudN)tanLcos(TL—9J + iiTaking the expected value of the quantity above, we have:E(IdI—E(dI)I) = p)l.EIcosJdNIcosuL.EI((dadN)tanuLcos(TL—O)+l)ILet F = EI((dudN)tanoLcos(TL—8) + 1)j, we obtain:EIdI — E(dI)I = pAl.E(coso-dNI)cosJL.F (3.20)F is a function of JL,TL, and surface normal derivatives.We can plot F as a function of cr (with the same tilt), and as a function of TL (withthe same slant) by using a set of computer generated images with known light sourcedirection and known surface orientations (see Figures 3.16 and 3.17).The plots show that F is well correlated to UL, and little dependent on TL.By further analyzing the formula, we can confirm the experimental results. All quantitiesdiscussed in this chapter are actually evaluated in many small regions of the imagerespectively and their weighted average is used as the estimation for the whole image(implementation details will be given in the next chapter).The relative magnitude of the two terms (dudN) tan gL cos(TL — 0:) and 1 decide thevalue of I(dJdN) tan L cos(TL — 0) + ii. From the assumption of an isotropically distributed adN, the chances of (dJdN) being positive and negative are the same and itsmagnitude is distributed uniformly around its average dJdN in small regions. If the magnitude of the first term is relatively small compared to the second term (1), the value ofChapter 3. A new improved method 412.6 I I I Itilt = 20 —tilt = 40tilt=60tilt = 80—---/,tilt = 100 __7L/tilt = 120tilt = 1402 tilt 160‘In’,-,1.8 /J;’’1.20.8 ..— --.‘—0.6 I I I10 20 30 40 50 60 70 80Figure 3.16: F as a function of LChapter 3. A new improved method 422.6 I I I I Islant = 10slant = 202.4 slant.- 30slapt = 40----s&nt= 50 -.2.2.6lant = 60--.slant=7O-‘ slant=8021.81.61.4 -1.2 —1 ----0.8 =0.6 I I20 40 60 80 100 120 140 160Figure 3.17: F as a function of TLChapter 3. A new improved method 43I((dadN) tan u, cos(TL—O) + 1) is mainly determined by the second term, in this case aconstant. 
Otherwise, the larger the magnitude of the first term, the larger((dodN)tanoLcos(rL—O) + 1)1 can be.From experiments with images of sphere (resolution 100x100), the largest difference ofdN across either the X direction (100 points) or the Y direction (100 points) is r.The average dJdN is approximately (0.03) between two neighboring pixels in the image, and f(0.3) between two pixels 10 pixels apart. The largest value of cos(TL —0:) can be is 1. Only tan 0L could be as large as infinity. From the formula, thenEI((dcrdN) tan 0L cos(TL—O) + 1)1 is much less dependent on iL and surface orientationthan on L.From the above analysis, we reach a conclusion similar as suggested from the plots. F ismainly a function of 0L and is relatively independent of TL. So F f(0L).Including this results in equation 3.8 and 3.20:E(sinudN)sinoLcos(S,—rL) — E(d11)•F 321E(IcosJdNI)cosL— E(IdI—E(dI)I) . )Using a similar derivation, we have the following results from equation 3.12:Ed2I— E(d2I)I = p\l(da)(EIcosJdNsinJLcos(rL—O)— (sinodN— E(sinodN))cosoLI)p&l(du)(E cosudN siflJL cos(TL — O) — (cos crdN(dudN))coscTLl)= pXl(du)Ecos odNI sin crL[Elcos(rL—Oj) — (dudN)/tanJLI]= p)l(du)(E cos dNI) sin JLGwhere G = El Cos(TL — O) — (dodN)/tanoLI.The relation of G to the slant is shown in Figure 3.18 and to the tilt is shown in Figure3.19.Chapter 3. A new improved method4.543.532.521.510.540 50 60 8044tilt = 20 —tilt 40tilt = 60tilt = 80tilt = 100tilt = 120tilt = 140tilt = 16010 20 30 70Figure 3.18: G as a function of ULChapter 3. A new improved method 45slant = 8032.5215 — —L —--rz--=1 -—..—-—.. ---=---—.-0.5 I I I20 40 60 80 100 120 140 160Figure 3.19: G as a function of TLChapter 3. A new improved method 46Combined with equation 3.12, we have:E(sin odN) cos o-L— E(d2I) 3 22E(IcosodNI)sinoL— E(IdI—E(d) )From the ratio of equations 3.21 and 3.22, we obtain:E(sin‘7clN) sin 0L cos(O — TL) E( cos 0dNI) sin crLE(I cos odNI) cos cr E(sinodN) cos 0L— E(dI) E(Id2I—E(d)I)FE(IdI, — E(dI)) E(d21) GSince the left side of above equation is actually tan2(crL).cos(O,— TL), we have:2 E(dI) E(Id2I—E(&I)I) 1 Ftan (TL)= E(1d11 — E(dI)) E(d2I) cos(O — TL)G (3.23)Since F and G are all dependent on oL and little dependent on TL, is also a functionof cij. The plot is shown in Figure 3.20.is mainly dependent on L. It is little dependent on TL when cYL < 70 and a littledependent on TL when L > 70. To simplify the function, we ignore its small dependenceon tilt and focus on its major dependence on slant.Averaging the plot functions between & and OL when TL is different, we obtain an averagefunction between and 0L for all tilts. To confirm the relationship, we performed thesame experiments on another set of computer generated images with different ambientlight. The plot between and oL for two sets of images are shown below.It seems the relation between & and L is similar even for images with different ambientlight.Compare all terms in equation 3.23 which are related to L, we obtain the final formulato solve L for solution II:E(dI) E(1d21, — E(&I)I) 1B(TL)= E(IdI— E(dI)I) E(d21) cos(O —TL) (3.24)Chapter 3. A new improved method 473.532.521.510.500010 20 30 40 50 60 70Figure 3.20: F/G as a function of oj,Chapter 3. A new improved method 482.5 I I I Ifirst set of images—/second set of image -/;0 d07 80Figure 3.21: F/G as a function of oj.Chapter 3. 
A new improved method 49where B(JL) = tan2(JL)/(F/G).B(o-L) as a function of UL is shown in Figure 3.22.16 I I I I I IB as a function of slant of light direction —14121086420 I I I I I I0 10 20 30 40 50 60 70 80 90Figure 3.22: B as a function of oi,In our experiments, we use a polyline from above data to represent the curve and it isdefined as:I B(i * 100) when L = i * 100B(JL) =1. linearfunction when i * 100 <UL < (i + 1) * 100where i=0,1,2,3,4,5,6,7,8.The values of B(i*10°) are obtained from the experiments with images having oL = i*10°.They are 0 (JL = 0°), 0.19 (oL = 10°), 0.41 (o-L = 20°), 0.68 (JL = 30°), 1.29 (JL = 40°),1.89 (crL = 50°), 3.57 (UL = 60°), 7•47 (o- = 70°), and 14.7 (o-L = 80°).Chapter 3. A new improved method 50In chapter 4 and 5, we apply the functions plotted in Figure 3.15 and Figure 3.22 tomore complex images, and find we still can get pretty good slant estimator.So far, we have light source tilt estimator 3.14, and two slant estimators 3.17 and 3.24.3.4.2 Surface NormalMultiplying equations 3.21 and 3.22, we obtain:E(sin cYdN) Sfl OL cos(O — TL) E(sin crdN) cos (3 25)B(I cos dNI) cos L E(I cos udNI) sin c1 cos(O — TL)— E(dL) E(d21) F 0 3 26—Simplifying the left side of the equation and representing terms in the right side that arerelated to L as C(crij, we obtain:E(sinudN) 2 E(dI) E(d2I)E(I CO5JdND = E(IdI — E(dIJI) E(Id2I— E(d2I)I)’ (3.27)where C(UL) = (F.G).Except for C(0L), other terms in the right side of above equation can be obtained fromthe source image. L can be determined as before and C(0L) as a function of 0L can beplotted from the previous discussion from F and 0 (Figure 3.23).Like functions A and B, C(0L) is a polyline. The values of C(i * 10°) (where i=0,1...8)are 0 when UL = 00, 0.65 when o = 100, 0.74 when cYL = 200, 0.96 when o-L = 300), 1.27when UL = 400, 1.57 when L = 500, 1.86 when crL = 600, 2.45 when o-L = 700, and 3.24when cYL = 800.So the left side of equation is solvable. That means, we can obtain E(I tan UdN) for eachimage direction O.For each point in the image, if we apply the above formula in a small region centered atChapter 3. A new improved method 514.5 I I I I I I IC as a function of slant of light direct on —43.532.521.51 I I I I I I I0 10 20 30 40 50 60 70 80 goFigure 3.23: C as a function of LChapter 3. A new improved method 52a point, E(I tan cIdNI) , the average tan odNI in the small region, can be thought as theapproximate evaluation of I tan adN at this point.If we can further obtain 0dN from tan dN for each image direction, then the tangentvector of the surface normal along each image direction O is known (tilt=O, slant=odN).The surface normal at each point, which should be perpendicular to all its tangents, issolvable then.There are two odN for each tan dN I: one is between 0 and 7r/2, one is between 7r/2 andir. To decide which solution is the one suitable for this point in the image, we have tolook at its adjacent pixels. Surface normals changes continuously among adjacent pixels,and udN along a certain image direction change continuously among the adjacent pixelsas well. Considering dN along image direction 8j for a set of points lying on the samedirection O, for convex objects, if we check the points one by one in the direction of O,0dN along O for the points will be larger and larger. The relationship is obvious fromFigure 3.24. P1 to P5 are a set of points whose projection on the image lie on the imagedirection O = 00 which is parallel to the X axis. 
3.5 Confidence Interval of Evaluated Parameters

Confidence interval for the tilt

From the tilt estimator, we have:

$$\begin{pmatrix} E(dI_1)/\cos\theta_1 \\ E(dI_2)/\cos\theta_2 \\ \vdots \\ E(dI_n)/\cos\theta_n \end{pmatrix} = \begin{pmatrix} 1 & \tan\theta_1 \\ 1 & \tan\theta_2 \\ \vdots & \vdots \\ 1 & \tan\theta_n \end{pmatrix} \begin{pmatrix} \cos\tau_L\, E(dI_{\max}) \\ \sin\tau_L\, E(dI_{\max}) \end{pmatrix}$$

This has the form Y = Xb, where Y is an n×1 matrix, X an n×2 matrix and b a 2×1 matrix. The tilt is derived from the parameters b0 = cos τ_L E(dI_max) and b1 = sin τ_L E(dI_max), so we first have to determine the confidence intervals of b0 and b1. Both depend on Y, the measured first derivatives of intensity along the different directions, and on the tangent values of those directions.

We begin with the variance σ² of the Y_i. An unbiased estimate of σ² is [10]:

$$s^2 = \frac{\sum_{i=1}^{n}(Y_i - \hat{Y}_i)^2}{n-2}$$

where n is the total number of samples and Ŷ_i the fitted value.

We then determine the variances of the parameters b0 and b1. From the least-squares estimator, b1 is a linear combination of the Y_i, so:

$$\sigma^2\{b_1\} = \frac{\sigma^2}{\sum_{i=1}^{n}(X_i-\bar{X})^2}$$

Replacing σ² with its unbiased estimate s², we obtain an unbiased estimator of σ²{b1}:

$$s^2\{b_1\} = \frac{s^2}{\sum_{i=1}^{n}(X_i-\bar{X})^2}$$

The confidence interval for b1 is:

$$b_1 \pm t(1-\alpha/2;\; n-2)\, s\{b_1\} \qquad (3.28)$$

From the symmetry of the t distribution, t(α/2; n−2) = −t(1−α/2; n−2). Here t(α/2; n−2) denotes the (α/2) percentile of the t distribution with n−2 degrees of freedom.

For b0,

$$\sigma^2\{b_0\} = \sigma^2\left[\frac{1}{n} + \frac{\bar{X}^2}{\sum_{i=1}^{n}(X_i-\bar{X})^2}\right]$$

and the unbiased estimate of σ²{b0} is:

$$s^2\{b_0\} = s^2\left[\frac{1}{n} + \frac{\bar{X}^2}{\sum_{i=1}^{n}(X_i-\bar{X})^2}\right]$$

Similarly, the confidence interval for b0 is:

$$b_0 \pm t(1-\alpha/2;\; n-2)\, s\{b_0\} \qquad (3.29)$$

The above applies to the general case of n independent samples (X_i, Y_i), 1 ≤ i ≤ n. There is, however, a bias in our samples: we measure dI at many points along each direction, and the mean of dI along each direction is what enters the least-squares model used to evaluate the tilt. That is, for each X_i (corresponding to direction i) there are m measurements Y_ij (1 ≤ j ≤ m), and the pair (X_i, Ȳ_i), 1 ≤ i ≤ n, is used as one sample in the least-squares model. The formulas for the unbiased estimates of the variance of Y, b0 and b1 are modified as follows:

$$s^2\{Y\} = \frac{\sum_{i=1}^{n}\sum_{j=1}^{m}(Y_{ij} - \bar{Y}_i)^2}{nm-1} \qquad (3.30)$$

$$s^2\{b_0\} = \left[\frac{1}{nm} + \frac{\bar{X}^2}{m\sum_{i=1}^{n}(X_i-\bar{X})^2}\right]s^2\{Y\} \qquad (3.31)$$

$$s^2\{b_1\} = \frac{s^2\{Y\}}{m\sum_{i=1}^{n}(X_i-\bar{X})^2} \qquad (3.32)$$

The confidence intervals for b0 and b1 are then as given by equations 3.29 and 3.28.
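The single-sample case of these formulas (equations 3.28 and 3.29) can be sketched as follows; the replicated-measurement variant of equations 3.30 to 3.32 only changes the variance estimates. The function name and the use of scipy for the t quantile are our choices, and directions with θ_i near 90°, where tan θ_i blows up, are assumed to be excluded from the fit.

```python
import numpy as np
from scipy.stats import t as student_t

def tilt_with_confidence(thetas, y, alpha=0.05):
    """Fit y_i = b0 + b1*tan(theta_i) by least squares (y_i = E(dI_i)/cos(theta_i))
    and return the tilt estimate atan2(b1, b0) with confidence intervals for b0, b1."""
    x = np.tan(np.asarray(thetas, dtype=float))     # directions near 90 deg excluded upstream
    y = np.asarray(y, dtype=float)
    X = np.column_stack([np.ones_like(x), x])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    n = len(y)                                      # needs n > 2 directions
    resid = y - X @ b
    s2 = float(resid @ resid) / (n - 2)             # unbiased residual variance
    sxx = float(np.sum((x - x.mean()) ** 2))
    s_b1 = np.sqrt(s2 / sxx)
    s_b0 = np.sqrt(s2 * (1.0 / n + x.mean() ** 2 / sxx))
    tq = student_t.ppf(1.0 - alpha / 2.0, n - 2)    # t quantile with n-2 degrees of freedom
    tilt = np.arctan2(b[1], b[0])
    return tilt, (b[0] - tq * s_b0, b[0] + tq * s_b0), (b[1] - tq * s_b1, b[1] + tq * s_b1)
```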
Since the tilt is derived from the ratio of b1 and b0, a confidence interval for the tilt can be computed: the lower and upper bounds of the tilt are the smallest and largest values of tan⁻¹(b1/b0) obtained while b0 and b1 stay within their own confidence intervals. The confidence level, however, is different from that of either b0 or b1. If the confidence probabilities are B0 for b0 and B1 for b1, the probability that b0 and b1 are both within their confidence intervals is B0×B1, and in that event tan⁻¹(b1/b0) is guaranteed to lie in the interval estimated above. So a conservative estimate of the confidence interval for the tilt is between the smallest and largest values of tan⁻¹(b1/b0) with b0 and b1 within their own confidence intervals, and the confidence level for the tilt is the product of the confidence levels of b0 and b1.

The larger the confidence interval, the less reliable the estimated parameter. The variance reflects how smoothly the pixel intensities change along a direction or within a region: the less smoothly the intensity changes within a region, the higher the variance and the larger the confidence interval. The confidence interval can therefore help predict whether a region contains object boundaries or belongs to the background, and could thus help segment the image.

Confidence interval for the slant

From the slant estimators 3.17 and 3.24, and assuming that τ_L does not affect the slant, the measured quantities A(σ_L) and B(σ_L) are, like b0 and b1, linear combinations of the Y_i, so s²{A} and s²{B} are obtained from s²{Y} using the squares of the corresponding combination coefficients (3.33). The confidence intervals for A and B are:

$$A \pm t(1-\alpha/2;\; n-2)\, s\{A\}, \qquad B \pm t(1-\alpha/2;\; n-2)\, s\{B\}$$

As for the tilt, a conservative estimate of the confidence interval for the slant is between the smallest and largest σ_L for which A(σ_L) or B(σ_L) lies within its own confidence interval, and the confidence level is derived from that of A(σ_L) or B(σ_L).

Chapter 4

Performance Evaluation

4.1 Implementation Issues

As mentioned before, the method for estimating the light source direction is applied to many small regions of the image, and a weighted average of the results from all the small regions is used as the final estimated light source direction for the whole image. In the following, we discuss the reasons for using small regions, how to divide the image into small regions, and how to choose the weights used to average the results computed in the small regions.

4.1.1 Evaluation by Small Regions

We cannot reliably evaluate the surface normal and the light source direction at a point by using only the information at that point. From his light-from-shading algorithm, Pentland concluded (as is obvious) that results are more reliable if the intensity distribution over a region is used instead of intensity values at a couple of points.

From our experiments, we also notice that the size of the region affects the results. If the region is too small, it is not much better than a couple of points: statistically, results will not be reliable if they are not based on sufficient information. On the other hand, if the region is too large, it is more likely to include some background and boundaries between objects. In that case the intensities can vary quite suddenly because of different surface textures and different object curvatures, and the intensity distribution can be affected more by the scene composition than by the light direction or the object surface orientation. Scene composition is unknown in most cases, and we want to avoid an elaborate segmentation scheme at this point of our research.

A suitable region size will allow the regions to contain only one part of one object in most cases, so that the surface normal changes smoothly in all directions. Then the change of the intensity distribution in different directions is mainly due to the light source direction and the surface orientation, which makes the determination of light from shading and normal from shading possible.
What is a suitable size for the small regions? We tried different region sizes on different images and compared the estimated tilt of the light source direction (the tilt is used rather than the slant because it is more sensitive to the region size). The relation between region size and estimated tilt error is shown in Figure 4.25.

[Figure 4.25: Estimated tilt error (vertical axis) as a function of the linear region size (horizontal axis) for sphere (206×283), ellipsoid (152×356), peanut (162×406) and pillow (205×393) images.]

For digitized video images with a resolution of about 640×480, usually containing several objects, a suitable small region size ranges from 10×10 to 80×80. There is no definite answer based on the size of the region alone; usually, such regions will contain only one part of an object. The number of pixels contained in such windows ranges from 100 to 6400, which is large enough to satisfy statistical reliability requirements.

There are two more reasons to use small regions to evaluate the light source direction:

1. Multiple light sources

All the theory discussed above applies only to a single light source. A small region makes it more likely that only one light source dominates the shading. For an image with multiple light sources, we can assume a single light source in each small region and apply our method there. From the distribution of the light source directions estimated in the different regions, we can tell how many lights are likely shading the whole image. An example is given later with an image having two light sources.

2. Directional light source

In all the above algorithms, one of the common assumptions is a directional light source. In actual experiments, it is more likely to have a light source at a finite distance. Over the whole image the light source may not be far enough away to be treated as directional, but within a small region its direction may be constant enough for it to be treated as a directional light source.

In our experiments, two ways of dividing an image are used; a sketch of both is given below. The first is simply to divide the image horizontally and vertically into a grid of small regions; each region is referenced by its row and column number in the grid. If the image resolution is ImageXRes×ImageYRes and the region size is sizeX×sizeY, there are ImageXRes/sizeX regions across and ImageYRes/sizeY regions down. Regions obtained this way do not overlap.

The other way to divide the image is based on each pixel: for any pixel of the image, there is a region centred at that pixel which includes all the pixels within a distance of half the region size. These regions overlap, and this essentially defines a filter kernel that averages the results over each pixel.

To estimate surface normals, the second method is used; to estimate the light source direction, both methods are used.
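A minimal sketch of the two subdivision schemes follows; the helper names (grid_regions, window_at) are ours, and the border handling (clipping the window at the image edge) is an assumption, since the thesis does not specify it.

```python
import numpy as np

def grid_regions(image, size_x, size_y):
    """First scheme: non-overlapping grid of small regions, indexed by (row, column)."""
    h, w = image.shape
    for r in range(0, h - size_y + 1, size_y):
        for c in range(0, w - size_x + 1, size_x):
            yield (r // size_y, c // size_x), image[r:r + size_y, c:c + size_x]

def window_at(image, row, col, size=11):
    """Second scheme: overlapping region centred at a pixel, clipped at the image border."""
    h, w = image.shape
    half = size // 2
    return image[max(0, row - half):min(h, row + half + 1),
                 max(0, col - half):min(w, col + half + 1)]
```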
The light directions evaluated in the individual regions are averaged with a weight that differs from region to region. The weight used is the inverse of the standard deviation (confidence interval) of the estimated tilt and slant in each region. Our tilt estimator uses a least-squares regression model, and the confidence intervals of the estimated parameters are therefore derived from the least-squares computation, as described in Section 3.5.

4.1.2 Average of Source Direction in Different Regions

To obtain the light source direction for the whole image (the averages of τ_L and σ_L), the weighted average of the τ_L and σ_L estimated in every region i is used:

$$\bar{\tau}_L = \frac{\sum_i \tau_{L,i}/C_i}{\sum_i 1/C_i}, \qquad \bar{\sigma}_L = \frac{\sum_i \sigma_{L,i}/C_i}{\sum_i 1/C_i}$$

where C_i is the confidence interval for region i.
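A direct transcription of this weighting is sketched below (the function name is ours). Note that the angles are averaged directly, as in the formula; tilts that straddle the ±180° wrap-around would need extra care.

```python
import numpy as np

def average_direction(tilts, slants, intervals):
    """Confidence-weighted average of the per-region estimates, with weights 1/C_i.
    Regions with very large confidence intervals (background, object boundaries)
    contribute almost nothing to the result."""
    w = 1.0 / np.asarray(intervals, dtype=float)
    w = w / w.sum()
    return (float(np.dot(w, np.asarray(tilts, dtype=float))),
            float(np.dot(w, np.asarray(slants, dtype=float))))
```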
4.1.3 Variable Expression and Their Range

The two vectors, surface normal and light source direction, are both expressed by two angles, tilt and slant. For visualization, we represent an angle by a vector in the image plane whose value is the angle between that vector and the X axis of the image plane.

For both the surface normal and the light source direction, the tilt ranges from −π to π, while the slant ranges from 0 to π/2. These ranges are further restricted. For the light source direction L, if σ_L > π/2, the surface facing the viewer is not illuminated directly by the light source; the intensities we see are then reflected or refracted by other objects, the intensity distribution is not directly related to the light source direction, and this situation is beyond the scope of this thesis. For a surface normal N, if σ_N > π/2, the viewer does not see the surface.

An example of the visualization of the estimated light source direction (τ_L and σ_L) is shown in Figure 4.26. The source image is a sphere image of resolution 100×100 with light direction τ_L = 60° and σ_L = 50°. The background intensity of each region shows the estimated confidence interval of the region: the darker the background of a region, the larger the confidence interval and the less reliable the estimated light direction. In every region of the figures of the second row, the tilt, slant and confidence interval of the tilt for the corresponding region of the source image are shown. The four corner regions include the background of the source image; notice that their confidence intervals are very large, which means the light directions computed there are very unreliable.

From the tilt distribution over the regions, we find that the estimated tilts tend to point towards the locally brightest area. From the tilt estimation theory, the estimated tilt in each region is along the direction in which the intensity changes most, which is different from the real tilt for most regions. It is only by averaging the estimated tilts over all the regions that we obtain a "real" estimate of the tilt.

[Figure 4.26: Source image, and tilt and slant estimates in all regions.]

There are two kinds of performance evaluation for the problem we consider. The first is to compare the estimated light direction or surface normal with the real values: for synthetic images these are obtained from the software that generated them, and for real images they can be calculated from the positions of the light and the scene, or guessed by people directly from the images. The other method is to use the estimated light direction for the whole image and the surface normals at each point to shade the region in the image, and to compare the new reshaded image with the original image. The first method tests the accuracy of the results; the second explores how useful these results are for the task of merging computer generated images with real images. We call the second method reshading. It is actually the next step, after light and normal detection, towards merging real and computer generated images, and we discuss it in Chapter 5. In this chapter, we evaluate the performance of Pentland's, Lee & Rosenfeld's and our algorithms in estimating the light source direction and the surface orientation.

The images we use include computer generated and real images of spheres and of more complex non-planar models such as ellipsoids, peanut and pillow shapes, and a toy. We also experimented with objects with more realistic reflection behaviour, namely woven materials of silk, velvet and denim.

4.2 Evaluation of Light Direction with Synthetic Sphere Images

We produced synthetic images with a local ray-tracer, optik, using Lambertian surface reflection and a single directional source. The resolution of these images is 100×100. The light source direction is estimated in each 10×10 region produced by uniformly dividing the image with a grid, and the directions estimated in all the regions are averaged, weighted by their confidence intervals, to give the light source direction for the whole image.

4.2.1 Experiments with One Set of Synthetic Images

Light-from-shading results obtained with Pentland's, Lee & Rosenfeld's and our algorithms on the same set of synthetic sphere images are shown in Tables 4.1 to 4.4. The tilt and slant of the light source used to generate the synthetic images are shown in the first row and first column of each table; each entry is the estimated σ_L\τ_L for the image generated with the real σ_L and τ_L of its row and column.

1. Pentland's method (Table 4.1)

Table 4.1: Pentland's estimated illumination direction (estimated σ_L\τ_L)

σ_L\τ_L   0      20     40     60     80     100    120    140    160
0       25\98
10      26\4   28\23  25\39  25\65  26\82  26\96  26\116 25\136 25\156
20      28\5   28\27  27\50  29\62  29\81  29\99  26\118 25\137 25\158
30      28\1   29\20  30\45  30\61  29\80  29\98  28\119 27\139 25\158
40      29\0   30\20  31\43  30\62  29\80  28\98  27\118 25\139 23\159
50      26\0   27\19  29\41  28\63  25\81  24\98  24\118 22\138 19\161
60      23\0   25\19  25\40  24\62  23\81  22\98  21\117 19\139 18\161
70      22\0   23\19  23\40  23\61  22\81  21\98  20\118 18\139 17\160
80      22\0   23\19  23\40  23\60  22\80  21\99  20\119 18\139 17\160
90      25\0   24\20  24\40  24\60  24\80  23\100 22\120 20\140 18\160

The average error on the tilt is 1.7° and the average error on the slant is 30°. From our experiments, Pentland's method is a very good tilt estimator but not a good slant estimator, which agrees with Lee's experiments [12]; the slant estimates for all the images are around 30°.

2. Lee & Rosenfeld's method (Table 4.2)
Table 4.2: Lee & Rosenfeld's estimated illumination direction (estimated σ_L\τ_L)

σ_L\τ_L   0      20     40     60     80     100    120    140    160
0       39\101
10      35\1   37\19  37\43  35\63  36\84  34\100 31\117 35\137 39\160
20      42\5   43\26  41\51  43\62  45\80  42\100 39\118 42\138 39\160
30      45\0   47\20  48\44  49\62  44\80  44\99  43\118 42\138 43\159
40      53\0   55\20  56\44  56\62  54\81  51\99  48\118 49\138 49\160
50      62\0   62\19  59\41  60\62  62\81  59\98  58\118 55\139 55\161
60      67\0   66\18  63\40  64\62  67\82  65\98  62\117 60\139 63\161
70      68\0   69\19  69\40  69\61  68\81  68\98  67\118 66\139 67\161
80      77\0   77\19  77\40  77\60  76\80  77\99  76\119 76\140 75\160
90      84\0   84\20  85\40  85\60  84\80  84\100 84\120 83\140 82\160

The average error on the tilt for this set of experiments is 1.8° and the average error on the slant is 14°. Lee & Rosenfeld's algorithm is as good a tilt estimator as Pentland's and a better slant estimator. We will show later that the slant results are seriously affected by the absolute intensity values, which does not make for a robust algorithm.

3. Our method (Tables 4.3 and 4.4)

The results of applying our source direction tilt estimator and the slant estimators presented in Chapter 3 are shown in Table 4.3 (solution I) and Table 4.4 (solution II). The tilt estimator is the same in both tables; slant estimator solution I is used for the first table and solution II for the second. For solution I, the average error on the tilt is 1.3° and the average error on the slant is 5°; for solution II, the average error on the tilt is still 1.3° and the average error on the slant is 4.9°.

Table 4.3: Illumination direction with our estimator, solution I (estimated σ_L\τ_L)

σ_L\τ_L   0      20     40     60     80     100    120    140    160
0       6\101
10      7\1    15\20  8\43   12\63  13\84  8\100  17\117 9\138  25\160
20      16\5   22\26  23\51  22\62  18\80  21\99  18\118 20\138 30\160
30      29\0   31\20  31\44  31\62  32\80  33\99  32\118 30\138 33\159
40      38\0   39\20  41\44  40\62  43\81  44\98  43\118 40\139 38\159
50      51\0   52\19  51\41  52\62  50\81  54\98  53\118 51\139 43\161
60      64\0   65\18  63\40  64\62  64\81  65\98  63\117 57\139 55\161
70      72\0   77\19  75\39  77\61  74\81  74\98  68\118 66\139 64\160
80      83\0   86\19  86\40  86\60  82\80  81\99  72\119 77\139 75\160
90      69\0   76\20  79\40  79\60  66\80  70\100 60\120 61\140 71\160

Table 4.4: Illumination direction with our estimator, solution II (estimated σ_L\τ_L)

σ_L\τ_L   0      20     40     60     80     100    120    140    160
0       11\101
10      11\1   12\20  13\43  12\63  9\84   12\100 14\117 15\138 12\160
20      12\5   12\26  12\51  25\62  12\80  16\99  21\118 20\138 23\160
30      34\0   36\20  23\44  33\62  38\80  39\99  35\118 35\138 38\159
40      44\0   43\20  41\44  44\62  45\81  45\98  44\118 44\139 43\159
50      52\0   51\19  51\41  52\62  50\81  50\98  52\117 52\139 51\161
60      60\0   59\19  59\40  60\62  61\81  61\98  60\117 59\139 57\161
70      68\0   69\19  68\39  70\61  69\81  68\98  69\118 66\139 67\160
80      81\0   84\19  82\40  84\60  84\80  83\99  84\119 79\139 80\160
90      79\0   82\20  83\40  80\60  81\80  81\100 81\120 80\140 77\160

To compare the performance of the three algorithms, we plot the relation between the estimated light source direction (tilt and slant) and the real light source direction used to generate the images. For images generated with the same slant and different tilts (same row, different columns of the tables above), the average of the estimated slants is compared with the real slant; similarly, the average estimated tilt of all images generated with the same tilt and different slants is compared with the real tilt (see Figures 4.27 and 4.28).
This confirms that all the methods are good tilt estimators, while only our method gives a reasonable and consistent estimate of the slant for this type of image (which is the easiest case).

[Figure 4.27: Estimated tilt versus real tilt for the first set of images (ideal estimator, Pentland's, Lee & Rosenfeld's and ours).]

[Figure 4.28: Estimated slant versus real slant for the first set of images.]

4.2.2 Robustness of Algorithms

We continue with another set of experiments on similar synthetic images with less ambient light. As for the first set of synthetic sphere images, we plot the relation between the estimated light source direction and the real light source direction used to generate the images.

[Figure 4.29: Estimated tilt versus real tilt for the second set of images.]

[Figure 4.30: Estimated slant versus real slant for the second set of images.]

As with the first set of computer generated sphere images, all three methods give good results for the tilt. Pentland's slant estimator is as bad as before, and our slant estimator (both solution I and solution II) is as good as before. The difference is that Lee & Rosenfeld's slant estimation is worse. Why is that so?

In Lee's algorithm, the tilt of the light source is determined by the ratio of the average first derivatives of intensity along the X and Y axis directions:

$$\tau_L = \tan^{-1}(l_2/l_1) = \tan^{-1}\!\big(E\{I_y\}/E\{I_x\}\big)$$

The slant is obtained from two equations:

$$E\{I\} = \frac{4\rho\lambda\,\big[(\pi-\sigma_L)\cos\sigma_L + \sin\sigma_L\big]}{3\pi\,(1+\cos\sigma_L)}, \qquad E\{I^2\} = \frac{\rho^2\lambda^2\,(1+\cos\sigma_L)}{4}$$

Dividing the square of the first equation by the second, we obtain:

$$\frac{(E\{I\})^2}{E\{I^2\}} = \frac{64\,\big[(\pi-\sigma_L)\cos\sigma_L + \sin\sigma_L\big]^2}{9\pi^2\,(1+\cos\sigma_L)^3}$$

So σ_L is determined by (E{I})²/E{I²}. The relation between (E{I})²/E{I²} and σ_L defined by this function is plotted in Figure 4.31; a small change in (E{I})²/E{I²} makes σ_L quite different.

[Figure 4.31: (E{I})²/E{I²} as a function of σ_L.]

In the above two sets of images, the ambient component of the shading is different, so each pixel's intensity is different while the intensity distribution is the same: the intensity of every corresponding pixel of the second set is smaller by a constant value M > 0, so that E{I} becomes E{I} − M and E{I²} becomes E{(I − M)²}, which changes the ratio (E{I})²/E{I²}. Referring to the curve of Figure 4.31, this explains why the estimated slants are so different for the second set, even though the scenes are identical.

The tilt estimation is not affected by this change of intensity, since it is a function of the first derivative of intensity, which is not affected by the ambient light. In both Pentland's and our algorithms, the first and second derivatives of intensity are used instead of the absolute value of the intensity, which makes these two algorithms more robust in this case.
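This sensitivity is easy to reproduce numerically. The sketch below builds a synthetic Lambertian sphere (our own helper, not the optik images used in the thesis), adds a constant offset to every pixel, and prints both (E{I})²/E{I²} and a derivative-based statistic: the former drifts with the offset while the latter does not.

```python
import numpy as np

def lambertian_sphere(n=100, slant_deg=40.0, tilt_deg=60.0):
    """Synthetic image of a unit sphere under a single directional source."""
    s, t = np.deg2rad(slant_deg), np.deg2rad(tilt_deg)
    L = np.array([np.sin(s) * np.cos(t), np.sin(s) * np.sin(t), np.cos(s)])
    y, x = np.mgrid[-1.0:1.0:n * 1j, -1.0:1.0:n * 1j]
    z2 = 1.0 - x ** 2 - y ** 2
    img, inside = np.zeros((n, n)), z2 > 0
    img[inside] = np.clip(x[inside] * L[0] + y[inside] * L[1]
                          + np.sqrt(z2[inside]) * L[2], 0.0, None)
    return img, inside

img, mask = lambertian_sphere()
for offset in (0.0, 0.1, 0.2):                           # constant added to every pixel
    I = img[mask] + offset
    ratio = I.mean() ** 2 / np.mean(I ** 2)              # drives Lee & Rosenfeld's slant
    grad = np.mean(np.abs(np.diff(img + offset, axis=1)))  # derivative-based statistic
    print(f"offset={offset:.1f}   (E I)^2 / E I^2 = {ratio:.3f}   E|dI| = {grad:.4f}")
```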
We realize that no algorithm can be totally robust, since the intensity and the derivatives of intensity are not determined solely by the light source direction: many other factors affect the intensity while the light source direction stays constant. However, a light direction estimator which relies on the absolute value of the intensity will certainly not be robust enough.

For similar reasons, when ρλ increases or decreases, a robust algorithm should produce the same or nearly the same estimated slant and tilt. From the formulas used, all three algorithms are robust in this sense.

It is easy to understand why the tilt is easier to estimate than the slant: the image is a 2D information set, and it is much easier to derive 2D information (the tilt) than 3D information (the slant) from an image.

Comparing the three slant estimators, Pentland's algorithm clearly does not reflect the real slant value: images with slants ranging from 0° to 90° all produce a similar estimate. Lee's algorithm can sometimes discriminate better, but the result is affected by the absolute intensity of the images. Our slant algorithm reflects the real slant very well, especially when the real slant is less than 80°, and the above experiments show that it is stable when all the pixel intensities of the image are scaled up or down.

4.3 Light Direction Evaluation with Real Images

As we apply our algorithms to real images, to more and more complex objects and to different textures, the Lambertian and spherical assumptions are less and less likely to hold, but we still obtain useful results.

The images of the sphere, ellipsoid, peanut, pillow and toy models were taken in the Laboratory for Computational Intelligence at UBC (see Figure 4.32). The surfaces were made as close to diffuse as real objects can be. Six images were taken of each object under six different light directions. The light source was a green collimated source, nearly directional and of nearly constant luminance over the illuminated field. The "real" slant and tilt used in the following evaluation were calculated by measuring the positions of the light source and of the object relative to the camera.

To compare the performance of Pentland's, Lee & Rosenfeld's and our algorithms, both tables and plots are used. The tables give the estimated light source directions for each image; every entry is a light source direction in the format slant\tilt. The first column is the real light source direction measured in the scene where the images were taken; the second to fifth columns are the directions estimated by Pentland's method, Lee & Rosenfeld's method, our first solution and our second solution. Since the tilt estimators of all the algorithms are fairly good, we only plot the relation between estimated slant and real slant.

As shown before, the slant estimated by Lee & Rosenfeld's method is much affected by the amount of ambient light in the images. Since the following images were taken with almost no ambient light, the slant values estimated by Lee's algorithm are expected to be larger.

The evaluations were conducted with a sphere (Table 4.5 and Figure 4.33), an ellipsoid (Table 4.6 and Figure 4.34), a peanut shape (Table 4.7 and Figure 4.35), a pillow shape (Table 4.8 and Figure 4.36) and a toy (Table 4.9).

From all the tables and plots, it is clear that our slant algorithm performs better on images of real objects. Although the toy object is more complex, the results are as good as for the previous examples; note that more light angles would be useful for a more convincing case with the toy.
[Figure 4.32: Samples of the real images used: ellipsoid, peanut, sphere and toy.]

Table 4.5: Estimated illumination direction from the real sphere images (slant\tilt)

real σ_L\τ_L   Pentland   L&R       solution I   solution II
0\-            50\-178    58\104    20\104       15\104
33\-180        36\-167    60\-169   43\-170      43\-175
40\16          40\14      63\11     47\11        48\11
45\-180        35\-178    49\-179   48\-175      50\-175
54\19          32\22      68\14     55\14        55\14
65\-180        28\-175    69\-175   66\-175      60\-175

[Figure 4.33: Estimated slant versus real slant for Pentland's, Lee & Rosenfeld's and our algorithms on the real sphere images.]

Table 4.6: Estimated illumination direction from the real images of the ellipsoid (slant\tilt)

real σ_L\τ_L   Pentland   L&R       solution I   solution II
0\-            39\20      53\78     10\78        12\78
33\-180        29\-177    65\-177   45\-177      38\-177
40\16          31\5       65\5      50\5         50\5
45\-180        26\-176    57\-176   51\-176      48\-176
54\19          29\-176    65\-177   56\-177      51\-177
65\-180        22\-175    69\-175   72\-175      62\-177

[Figure 4.34: Estimated slant versus real slant for the ellipsoid images.]

Table 4.7: Estimated illumination direction from the real images of the peanut shape (slant\tilt)

real σ_L\τ_L   Pentland   L&R       solution I   solution II
0\-            39\135     60\127    14\127       13\127
33\-180        33\-179    63\-179   45\-179      40\-179
40\16          34\6       67\6      47\6         47\6
45\-180        35\-178    71\-178   53\-178      49\-178
54\19          36\8       70\8      59\8         54\8
65\-180        31\-178    74\-178   66\-178      59\-178

[Figure 4.35: Estimated slant versus real slant for the peanut images.]

Table 4.8: Estimated illumination direction from the real images of the pillow shape (slant\tilt)

real σ_L\τ_L   Pentland   L&R       solution I   solution II
0\-            43\139     54\139    17\139       14\139
33\-180        32\-178    59\-178   42\-178      39\-178
40\16          32\11      63\9      45\9         46\9
45\-180        31\-177    68\-177   48\-177      47\-177
54\19          31\11      66\11     56\11        53\11
65\-180        22\-176    64\-176   63\-176      57\-176

[Figure 4.36: Estimated slant versus real slant for the pillow images.]

Table 4.9: Estimated illumination direction from the real images of a toy (slant\tilt)

real σ_L\τ_L   Pentland   L&R       solution I   solution II
0\-            42\-35     80\-5     10\-5        11\-5
30\52          38\63      78\51     24\51        31\51
28\128         39\131     80\128    24\128       31\128

4.4 Light Direction Evaluation with Textured Objects

Texture is also a very important factor in the shading of a surface, and it is important to see how it affects our algorithm.¹ We use two different light sources to obtain two sets of images; each set contains images of pieces of silk, velvet and denim under the same light source. To make the light source more directional, we used a slide projector. The source images are shown in Figures 4.37 and 4.38, and the light directions estimated with Pentland's, Lee & Rosenfeld's and our algorithms are shown in Table 4.10. The slant estimation is as accurate as in the previous examples.

¹In our context, "texture" means non-uniform pixel intensities in the image, whether due to surface geometry or to reflectance variations.
Since there are only two different slant values, we plot two points instead of a line for each algorithm (see Figures 4.39, 4.40 and 4.41).

[Figure 4.37: First set of images of silk, velvet and denim, shaded under a light with τ_L = -170° and σ_L = 32°.]

[Figure 4.38: Second set of images of silk, velvet and denim, shaded under a light with τ_L = -45° and σ_L = 25°.]

Table 4.10: Estimated illumination direction from the cloth images (slant\tilt)

real σ_L\τ_L   material   Pentland   L&R       solution I   solution II
32\-170        silk       44\-175    8\-175    25\-175      31\-175
32\-170        velvet     46\-163    0\-170    27\-170      30\-170
32\-170        denim      42\-161    56\-161   27\-161      33\-161
25\-45         silk       45\-78     17\-79    27\-79       25\-79
25\-45         velvet     51\-79     18\-79    27\-79       24\-79
25\-45         denim      41\-79     17\-79    27\-79       25\-79

[Figure 4.39: Estimated slant versus real slant for the silk images.]

[Figure 4.40: Estimated slant versus real slant for the velvet images.]

[Figure 4.41: Estimated slant versus real slant for the denim images.]

So far, we have applied Pentland's, Lee & Rosenfeld's and our light-from-shading algorithms to many different images. All the algorithms give very good estimates of the light source tilt, while only ours gives accurate and consistent slant estimates.

4.5 Evaluation of Results on Surface Orientation

In this section we compare the surface normal estimators of Pentland's, Lee's and our algorithms. Pentland's shape-from-shading is relatively independent of his light-from-shading algorithm. Lee's shape-from-shading algorithm is based on his light-from-shading results, so his surface orientation estimates are directly affected by the accuracy of the light detection. As in Pentland's case, our shape-from-shading algorithm (formula 3.27) is not based on the light-from-shading results.

For the three synthetic sphere images (see Figure 4.42), the surface orientation at corresponding pixels should be the same. Lee's estimates for the three images are quite different because of the differences in light direction; Pentland's and our shape-from-shading results change as well, but not as much as Lee's (see Figure 4.44 for the evaluation of Lee's algorithm, Figure 4.43 for Pentland's and Figure 4.45 for ours). For clarity, only the estimated surface normal (black vector) and the true surface normal (red vector) at the middle point of each region are drawn in the following figures.
[Figure 4.42: Three synthetic sphere images with three different light sources. Left: σ_L = 0°; middle: τ_L = 40° and σ_L = 40°; right: τ_L = 120° and σ_L = 60°.]

[Figure 4.43: Evaluation of the surface normals (tilt: first row, slant: second row) for the three sphere images with Pentland's algorithm.]

[Figure 4.44: Evaluation of the surface normals (tilt: first row, slant: second row) for the three sphere images with Lee's algorithm.]

[Figure 4.45: Evaluation of the surface normals (tilt: first row, slant: second row) for the three sphere images with our algorithm.]

In regions without black vectors, the surface normal is uncomputable. Almost all regions have computable surface normals with our algorithm, which is not true of the other two algorithms. Considering only the computable surface normals, Table 4.11 compares the average estimation errors on tilt and slant for the different algorithms; σ_NE denotes the average error on σ_N and τ_NE the average error on τ_N.

Table 4.11: Comparison of the averaged errors on tilt and slant for the different methods.

It is hard to appreciate the methods' general performance by comparing the red and black vectors in all the regions; the significance of the results is made clearer by reshading, as seen in the next chapter.

4.6 Multiple Light Sources

We mentioned that we might be able to detect the presence of multiple light sources by examining the shading in small regions. In each small region, one light source is estimated from local information; by examining the estimated tilt values over all the regions, we can tell how many light sources might be present. If there are multiple light sources, there should be several focus points towards which the estimated tilts of adjacent regions point.

We use an image of a volleyball illuminated by two light sources (Figure 4.46) and show the tilt and slant estimated in all regions (Figure 4.47).

[Figure 4.46: Image of a volleyball illuminated by two light sources.]

[Figure 4.47: Tilt and slant estimated in all regions of the volleyball image.]

In the tilt distribution, there are two points towards which the estimated tilt values tend to point: one at the lower right of the image, the other at the very upper left. Accordingly, we can guess that there may be two light sources illuminating the image. This only gives an indication of the possibilities; we have not developed a method to automatically identify those points and therefore the light sources.

Chapter 5

Reshading

As stated before, the process of merging the shading of computer generated images and real images involves two problems. One is to calculate the light source direction from the real images and shade the computer generated objects; the other is to compute the orientation of the surfaces in the real images and shade the real objects with the computer generated light sources.

In this thesis, we use the estimated light direction to reshade the objects in the real image that was used for the light estimation.
If the reshaded image, with the same objects and the estimated light source, is similar to the original real image, we believe that shading other kinds of objects with the estimated light will give a visually consistent result. In this sense, reshading is also another way to evaluate the light-from-shading and shape-from-shading algorithms.

In the last chapter, we evaluated the three light-from-shading algorithms on synthetic and real images, simple and complex objects, Lambertian and non-Lambertian surfaces, and single and multiple light sources. We observed that all three algorithms have very good tilt estimators in all cases; that Pentland's slant estimator is little related to the real slant value; and that Lee & Rosenfeld's slant estimator is affected by the ambient light, the brighter the image the smaller the estimated slant value.

In this chapter, we continue the performance evaluation using three operations:

1. Reshade computer generated objects with the light source estimated from the real video images. The object geometry is known, while the light source is estimated.

2. Reshade an object from a real image with a computer generated light source. The object geometry is unknown and estimated from the real image, while the light source is known.

3. Reproduce a real image with the light and surface orientation estimated from that image. Both the light direction and the object geometry are estimated from the real image, without assuming anything a priori.

5.1 Reshading Algorithm

5.1.1 Shading Model

The shading model we use is very simple: it includes only two components, ambient and diffuse. Since we assumed a Lambertian surface when detecting the light from the image, the reshading of the Lambertian objects does not consider specular reflection.¹

Our shading model is:²

$$I = I_a K_a + I_d K_d\,(N \cdot L) \qquad (5.35)$$

The first term accounts for ambient illumination. I_a is the intensity of the ambient light, assumed to be constant for all objects. K_a is the fraction of ambient light reflected by an object's surface; it ranges from 0 to 1, is a material property, and should be obtained empirically for the material. Ambient reflection is independent of the geometry of the surface and of the direction of the light source: the assumption is that ambient light comes from every direction, and the surface reflects the same radiance whatever its orientation, as long as it is visible.

The second term accounts for diffuse reflection. The radiance reflected by a surface through diffuse reflection is proportional to the inner product of the surface normal N and the direction L from the surface to the light source. I_d is the intensity of the light source; the material's diffuse-reflection coefficient K_d is a constant between 0 and 1 related to the material properties. Diffuse reflection is independent of the viewer position.

¹Of course, specular reflection can be added if the application needs it.
²The left hand side of this equation is expressed as an intensity, that is, in pixel values, as opposed to the equation given in Chapter 1, which uses radiance. We only have access to pixel values, but we can assume that the two are proportional (see [8]).
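A minimal sketch of this model, assuming per-pixel unit normals and a unit light vector stored in NumPy arrays, is given below; the function name is ours. Clamping negative N·L to zero anticipates the self-shadowing term S introduced in the next subsection.

```python
import numpy as np

def reshade(normals, light, Ia_Ka, Id_Kd):
    """Evaluate I = Ia*Ka + Id*Kd*(N.L) at every pixel.
    normals: (H, W, 3) array of unit normals; light: unit vector towards the source;
    Ia_Ka, Id_Kd: the two combined coefficients fitted from the source image.
    Negative N.L is clamped to zero, which plays the role of the self-shadow
    switch S described next."""
    ndotl = np.clip(normals @ np.asarray(light, dtype=float), 0.0, None)
    return Ia_Ka + Id_Kd * ndotl
```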
When there are multiple light sources, a point must beclassified relative for each light source. In our case, we assume there is only one lightsource to each small region in the image.If N.L < 0, the surface normal is more than ir/2 from the light direction, and the pointis in shadow.When a point is in the shadow, the illumination model must be adjusted to take this intoaccount. For a point light source or a directional light source, the addition of shadow toChapter 5. Reshading 96above shading model yields:I = IK + SIdKd(]V.L) (5.36)where:1 0 if this point is in shadows=( 1 if this point is not in shadowNote that the points in shadow are still illuminated by the ambient light.The two parameters ‘aKa and IdKd are derived from the source image. For sourceimages considered in this chapter, the light directions are known. There is one equation for each pixel in the source image is the normal can be determined at that pixel.Only two unknown parameters laKa and IdKd are involved in these equations. It is anoverconstrained problem and least-squares fitting is used.5.1.3 FilteringA simple filtering method is used to produce reshaded images. Filtering should be donewith surface normals, but the filtering of vectors is a problem in itself. So instead wefilter the intensity. Basically, the intensity at each pixel is computed as the weightedaverage of intensities of the pixels in a small neighbourhood.For pixel {m, n}, its intensity is computed as follows:‘m,n= i1‘m+i,n+j X Wm+i,n+jwhere Wm+i,n+j is the weight for pixel {m + i, n + j}.31t is clear from the equations that ‘a can not be separated from Ka, and neither can Id from Kd,but they need not be for reshading purposes.Chapter 5. Reshading 97The weight selected for this experiment is:0.25 ifi=Oandj=0Wm+i,n+j 0.125 if either i0 or jL00.0625 if i0 and j05.2 Experimental Results5.2.1 Reshading with Known Objects Geometry and Estimated Light DirectionIn the following, we reshade the three synthetic sphere images and a set of real images ofthe sphere, peanut and pillow models (see Figures 5.48, 5.49 and 5.50). In these figures,the first row shows the three original (computer generated or real) images of the objectilluminated under three different light sources. The second, third and fourth row imagesshow the new images using light direction estimated from Pentland’s, Lee & Rosenfeld’sand our algorithms respectively.The surface geometry for the peanut model and pillow model in polar coordinates are([13]):p = 1 + 3cos2op = 4 + 3sin2crsin(20)where 8 and u are the tilt and slant.Looking at the reshaded images with different objects and light source directions, it isobvious that oniy our algorithm gives a reliable estimation in all cases.Note that the slant error is relatively larger when real slant is 00 or 900. Since 00 and 900are the two extremes for slant, the slant estimated in all regions can only be all largerChapter 5. Reshading 98a=O t=40 a=30 t=90 a=45 t=120 a=60sourceimagesPentland’ssolutionLee’ssolutionoursolution Ioursolution IIFigure 5.48: Reshaded images with light source estimated by Pentland’s, Lee & Rosenfeld’s and our algorithms from synthetic images of spheresChapter 5. Reshading 99cy=O ‘=1 80;a=33 t=1 9;a=54sourceimagesPentland’ ssolutionLee’ssolutionOursolutionlOursolution IIFigure 5.49: Reshaded images with light source estimated by Pentland’s, Lee & Rosenfeld’s and our algorithms from real video images of the peanut modelChapter 5. 
5.2 Experimental Results

5.2.1 Reshading with Known Object Geometry and Estimated Light Direction

In the following, we reshade the three synthetic sphere images and a set of real images of the sphere, peanut and pillow models (see Figures 5.48, 5.49 and 5.50). In these figures, the first row shows the original (computer generated or real) images of the object illuminated under three different light sources; the second, third and fourth rows show the new images obtained with the light directions estimated by Pentland's, Lee & Rosenfeld's and our algorithms respectively.

The surface geometry of the peanut and pillow models, in polar coordinates, is ([13]):

ρ = 1 + 3 cos²σ (peanut),  ρ = 4 + 3 sin²σ sin(2θ) (pillow)

where θ and σ are the tilt and slant.

Looking at the reshaded images for the different objects and light source directions, it is obvious that only our algorithm gives a reliable estimate in all cases. Note that the slant error is relatively larger when the real slant is 0° or 90°: since 0° and 90° are the two extremes of the slant, the slants estimated in the individual regions can only all be larger than the real slant (when it is 0°) or all smaller than it (when it is 90°), so the average is biased in these cases.

[Figure 5.48: Reshaded images with the light source estimated by Pentland's, Lee & Rosenfeld's and our algorithms from synthetic images of spheres.]

[Figure 5.49: Reshaded images with the light source estimated by Pentland's, Lee & Rosenfeld's and our algorithms from real video images of the peanut model.]

[Figure 5.50: Reshaded images with the light source estimated by Pentland's, Lee & Rosenfeld's and our algorithms from real video images of the pillow model.]

5.2.2 Reshading with Estimated Object Geometry and Known Light Direction

Figures 5.51, 5.52 and 5.53 show images reshaded with the object orientations estimated by the different algorithms and with known light source directions. Comparing with the source images in the first row, we can clearly see the performance of the different normal-from-shading algorithms.

Our solution is the only one with which we can at least identify the objects' contours and positions, while the other two solutions tell little about the objects. For all three methods, the performance of shape from shading is not as good as that of light from shading. In some of the peanut and pillow reshaded images the object contours are not well identified; this problem could be solved within CAR, since segmentation methods can be used to identify objects. Figure 5.54 shows results on more complex objects using only our normal-from-shading algorithm.

[Figure 5.51: Reshaded images with surface normals estimated by Pentland's, Lee & Rosenfeld's and our algorithms from synthetic images of spheres, without and with 3×3 filtering.]

[Figure 5.52: Reshaded images with surface normals estimated by Pentland's, Lee & Rosenfeld's and our algorithms from real video images of the peanut model.]

[Figure 5.53: Reshaded images with surface normals estimated by Pentland's, Lee & Rosenfeld's and our algorithms from real video images of the pillow model.]

[Figure 5.54: Reshaded images with surface normals estimated by our algorithm from real video images of more complex objects.]

5.2.3 Reshading with Estimated Object Geometry and Estimated Light Direction

In Figures 5.55, 5.56 and 5.57, we show reshaded images with both estimated surface orientation and estimated light direction. We call these fully reshaded images, to distinguish them from the reshadings above, which used either known surface normals or known light source directions.

Since our light-from-shading algorithm is rather accurate, the reshading results with estimated normals and estimated light direction are very similar to the results of the last
section, where the known light direction was used. For Pentland's and Lee & Rosenfeld's algorithms, the reshaded images with both estimated shape and estimated light direction are as bad as when the light direction is known.

[Figure 5.55: Reshaded images with both object orientation and light source estimated by Pentland's, Lee & Rosenfeld's and our algorithms from real video images of the sphere model.]

[Figure 5.56: Reshaded images with both object orientation and light source estimated by Pentland's, Lee & Rosenfeld's and our algorithms from real video images of the peanut model.]

[Figure 5.57: Reshaded images with both object orientation and light source estimated by Pentland's, Lee & Rosenfeld's and our algorithms from real video images of the pillow model.]

Chapter 6

Conclusion and Future Work

6.1 Conclusion

We have analyzed two methods to retrieve the light direction and the surface normals from image intensities. Following this analysis, we derived our own methods and conducted experiments to test them and compare them with the previous ones.

1. Light from shading

From the results given in chapters 4 and 5, we conclude that our algorithm is better (especially for the slant of the light direction) than the two previous algorithms we used as a starting point. All three methods are good tilt estimators. Pentland's slant estimator tends to give a constant slant estimate; its average slant error in our experiments is more than 30°. Lee & Rosenfeld's slant estimator is affected by the absolute value of the intensity: for images with little ambient light, its estimated slant tends to be too large, and its average slant error is 17° in one case tested and more than 30° in the other. Our two slant estimators give an average slant error of around 5° for all the experiments presented in this thesis.

2. Surface normal from shading

Given the reshaded images shown in Chapter 5, it is clear that our algorithm for computing surface normals from shading is better in terms of preserving object shape from video images.

For objects of different shapes with approximately Lambertian reflectance, our algorithm can suggest the shape of the objects after reshading, whatever the orientation of the objects in the image. The distribution of intensity in most reshaded images approximately conforms to the light source directions shown in their source images. However, the intensity distributions in the reshaded images are not as smooth as in the original images, especially for images with a large σ_L or with more complex objects.

3. Reshading

The shading model used considers only ambient and diffuse reflection. This model is adequate to evaluate the performance of our algorithm by comparison with the original images.

Over the various images we used (synthetic images, real video images, images of the sphere, peanut, pillow and toy models, images with Lambertian surfaces and images of surfaces with complex reflection behaviour), we found that the light-from-shading algorithm gave useful results in all cases. The determination of surface normals with our algorithm is more limited and does not work well with complex objects or objects with complex reflection behaviour. This is not too surprising, given the assumptions that had to be made to determine the normals and the fact that no segmentation at all was used in this work.
6.2 Future Work

This work can be pursued further in several directions in order to deal with more complex situations.

1. Light from shading

In the case of multiple light sources, our algorithm can guess the number of light sources by examining the distribution of estimated tilts over the whole image. This could be developed into a method that detects the number of light sources automatically.

2. Surface normal from shading

Our algorithm does not work well for objects with complex reflection behaviour. If we could find methods to remove the effect of the complex reflectance on the object surfaces and produce an intermediate image with the same objects but simple reflectance, such as Lambertian surfaces, then we might obtain normals from shading from these images and add the effect of the complex reflectance back when reshading.

3. Shading/reshading

First, specular reflectance should be taken into account to improve the reshading results. Second, to deal with more complex scenes, new steps are needed to solve other problems, such as visibility. Finally, the consistency of our results on moving images has not been checked, but this is an important problem to address in practice.

Bibliography

[1] Blinn, J. F. Models of Light Reflection for Computer Synthesized Pictures. Computer Graphics (SIGGRAPH '77 Proceedings) 11, 2 (July 1977), 192-198.
[2] Brooks, M. J. Shape from Shading Discretely. Ph.D. thesis, Essex University, 1985.
[3] Brooks, M. J., and Horn, B. K. P. Shape and Source from Shading. In Shape from Shading, B. K. P. Horn and M. J. Brooks, Eds. MIT Press, 1989, pp. 53-68.
[4] Bruss, A. Shape from Shading and Bounding Contour. Ph.D. thesis, MIT, 1981.
[5] Cohen, M. F., Chen, S. E., Wallace, J. R., and Greenberg, D. P. A Progressive Refinement Approach to Fast Radiosity Image Generation. Computer Graphics (SIGGRAPH '88 Proceedings) 22, 4 (August 1988), 75-84.
[6] Foley, J., and van Dam, A. Fundamentals of Interactive Computer Graphics. Addison-Wesley, 1982.
[7] Foley, J., van Dam, A., Feiner, S. K., and Hughes, J. F. Computer Graphics: Principles and Practice, second ed. Addison-Wesley, 1990.
[8] Fournier, A., Gunawan, A. S., and Romanzin, C. Common Illumination Between Real and Computer Generated Scenes. In Proceedings of Graphics Interface '93 (May 1993), pp. 254-262.
[9] Horn, B. K. P., and Brooks, M. J. The Variational Approach to Shape from Shading. In Shape from Shading, B. K. P. Horn and M. J. Brooks, Eds. MIT Press, 1989, pp. 173-214.
[10] Neter, J., Wasserman, W., and Kutner, M. H. Applied Linear Statistical Models. 1983.
[11] Lee, C.-H., and Rosenfeld, A. Albedo Estimation for Scene Segmentation. Pattern Recognition Letters 1 (1983), 155-160.
[12] Lee, C.-H., and Rosenfeld, A. Improved Methods of Estimating Shape from Shading Using the Light Source Coordinate System. Artificial Intelligence 26 (1985), 125-143.
[13] Li, Y. Orientation-Based Representations of Shape and Attitude Determination. Technical Report 93-12, University of British Columbia, 1993.
[14] Pentland, A. P. Finding the Illuminant Direction. Journal of the Optical Society of America 72, 4 (April 1982), 448-455.
[15] Pentland, A. P. The Visual Inference of Shape: Computation from Local Features. Ph.D. thesis, Department of Psychology, MIT, 1982.
[16] Phong, B.-T. Illumination for Computer Generated Pictures. Communications of the ACM 18, 6 (June 1975), 311-317.
[17] Woodham, R. J. Photometric Method for Determining Surface Orientation from Multiple Images. In Shape from Shading, B. K. P. Horn and M. J. Brooks, Eds. MIT Press, 1989, pp. 139-140.