"Science, Faculty of"@en . "Computer Science, Department of"@en . "DSpace"@en . "UBCV"@en . "Damberg, Gerwin"@en . "2017-07-24T22:44:19Z"@en . "2017"@en . "Doctor of Philosophy - PhD"@en . "University of British Columbia"@en . "Cinema projectors need to compete with home theater displays in terms\r\nof image quality. High frame rate and high spatial resolution as well as\r\nstereoscopic 3D are common features today, but even the most advanced cinema\r\nprojectors lack in-scene contrast and more importantly high peak luminance,\r\nboth of which are essential perceptual attributes for images to look\r\nrealistic. At the same time studies on HDR image statistics suggest\r\nthat the average image intensity in a controlled ambient viewing\r\nenvironment such as cinema can be as low as 1% for cinematic HDR\r\ncontent and does not often exceed 18%, middle gray in photography. Traditional\r\nprojection systems form images and colours by blocking the source light from a lamp, \r\ntherefore attenuating on average between 99% and 82% of light before it reaches the screen. This\r\ninefficient use of light poses significant challenges for achieving\r\nhigher peak brightness levels.\r\nWe propose a new projector architecture built around\r\ncommercially available components, in which light can be steered to\r\nform images. The gain in system efficiency significantly reduces the total cost of ownership of a projector (fewer components and lower operating cost) and at the same time increases peak luminance and improves black level beyond what is practically\r\nachievable with incumbent projector technologies. At the heart of\r\nthis computational display technology is a new projector hardware\r\ndesign using phase-modulation in combination with new optimization\r\nalgorithms for real-time phase retrieval. Based on this concept we propose and design a full featured projector prototype.\r\nTo allow for display of legacy SDR as well as high brightness HDR content on light steering projectors we derive perceptually motivated, calibrated tone mapping and colour appearance models. \r\nWe develop a calibrated optical forward model of the projector hardware and analyse the impact of content mapping parameters and algorithm choices on (light) power requirements."@en . "https://circle.library.ubc.ca/rest/handle/2429/62421?expand=metadata"@en . "Computational Projection DisplayTowards Efficient High Brightness Projection in CinemabyGerwin DambergDipl.-Ing (FH), University of Applied Sciences, Karlsruhe, 2006A THESIS SUBMITTED IN PARTIAL FULFILLMENT OFTHE REQUIREMENTS FOR THE DEGREE OFDOCTOR OF PHILOSOPHYinThe Faculty of Graduate and Postdoctoral Studies(Computer Science)THE UNIVERSITY OF BRITISH COLUMBIA(Vancouver)July 2017\u00C2\u00A9 Gerwin Damberg 2017AbstractCinema projectors need to compete with home theater displays in termsof image quality. High frame rate and high spatial resolution as well asstereoscopic 3D are common features today, but even the most advancedcinema projectors lack in-scene contrast and more importantly high peakluminance, both of which are essential perceptual attributes for images tolook realistic. 
At the same time studies on High Dynamic Range (HDR)image statistics suggest that the average image intensity in a controlled am-bient viewing environment such as cinema can be as low as 1% for cinematicHDR content and does not often exceed 18%, middle gray in photography.Traditional projection systems form images and colours by blocking thesource light from a lamp, therefore attenuating on average between 99%and 82% of light before it reaches the screen. This inefficient use of lightposes significant challenges for achieving higher peak brightness levels. Wepropose a new projector architecture built around commercially availablecomponents, in which light can be steered to form images. The gain insystem efficiency significantly reduces the total cost of ownership of a pro-jector (fewer components and lower operating cost) and at the same timeincreases peak luminance and improves black level beyond what is prac-tically achievable with incumbent projector technologies. At the heart ofthis computational display technology is a new projector hardware designusing phase-modulation in combination with new optimization algorithmsfor real-time phase retrieval. Based on this concept we propose and designa full featured projector prototype. To allow for display of legacy StandardDynamic Range (SDR) as well as high brightness HDR content on lightsteering projectors we derive perceptually motivated, calibrated tone map-ping and colour appearance models. We develop a calibrated optical forwardmodel of the projector hardware and analyse the impact of content mappingparameters and algorithm choices on (light) power requirements.iiLay SummaryMovies look more appealing when they are brighter. In cinema, where theviewing environment is dark, small and bright highlights such as light sourcesand reflections or effects such as explosions are particularly important fordirectors to tell a story. Today\u00E2\u0080\u0099s cinema projectors are limited in their abilityto produce bright images. Why? Projection screens are large; projectorlight sources are expensive and hard to cool and projectors are inefficient informing images: by blocking or wasting light not used in darker regions ofan image.This work analyses the problem and provides a solution: steering lightthat is not needed in dark areas of an image into bright areas improvesthe brightness of projectors dramatically and also allows for more visibledetail in dark parts of a scene. We discuss and build several prototypes thatshow how this concept can be implemented with up to 20 times brighterhighlights.iiiPrefaceAll publications that have resulted from or are related to the research pre-sented in this work, along with the relative contributions of the collaboratorsare listed herein.1 High Dynamic Range Projection Systems - SID 2007 (Ref.: [19])Authors: Gerwin Damberg, Helge Seetzen, Greg Ward, Wolfgang Hei-drich, Lorne WhiteheadContribution: As one of the early researchers at BrightSide Technolo-gies Inc., a university start-up company, the author performed all ofthe research work discussed in the paper, including the image analysis,algorithms, and prototype work. The author wrote the first draft ofthe paper with select input from the co-authors. 
The high level con-cept of dual (amplitude) modulation in projectors was analogous to theLight Emitting Diode (LED) back light in the BrightSide HDR TV.Some of today\u00E2\u0080\u0099s high-end cinema (Premium Large Format for Cinema(PLF)) projectors are based on the general architecture proposed inthis paper.The publication text is not included in this thesis, but results aresummarized in Chapter 2.2 Comparing Signal Detection Between Novel High-LuminanceHDR and Standard Medical LCD Displays - Journal of DisplayTechnology 2008 (Ref.: [106])Authors: M. Dylan Tisdall, Gerwin Damberg, Paul Wighton, NhiNguyen, Yan Tan, M. Stella Atkins, Hiroe Li, Helge SeetzenContribution: The research work described in the paper was jointlyperformed by Dr. Tisdall and the author whose work was focussed onthe display hardware and image processing algorithms for the study,whereas Dr. Tisdall\u00E2\u0080\u0099s work was focussed on preparing the MagneticResonance Imaging (MRI) image data and the study stimuli. The userstudy was executed jointly. The display prototype was built by theauthor. This user study in the field of medical imaging was suggestedby Dr. Atkins and coordinated by the author.ivPrefaceThe publication text is not included in this thesis, but results aresummarized in Section 2.3 A High Bit Depth Digital Imaging Pipeline for Vision Re-search - APGV poster 2011 (Ref.: [58])Authors: Timo Kunkel, Gerwin Damberg, Lewis JohnsonContribution: The poster publication was initiated by Dr. Kunkelbased on joint work with the author. The author\u00E2\u0080\u0099s contribution arethe development of Matlab scripts to incorporate the display experi-mentally into the PsychoPhysics Toolbox for vision research using theHigh Definition Serial Digital Interface (HD-SDI). Dr. Kunkel and theauthor prepared the poster. Mr. Johnson assisted in porting the SerialDigital Interface, SMPTE 292M (SDI)-card driver code.This poster was presented by Dr. Kunkel at the Applied Perception inGraphics and Visualization (APGV) symposium in 2011. A modifiedversion of the text is incorporated in Section 4 as it ties together theauthor\u00E2\u0080\u0099s work on tone mapping and colour appearance (Section 3) andtoday\u00E2\u0080\u0099s Society of Motion Picture & Television Engineers (SMPTE)HDR encoding and transmission standards. The research work on theuniversal mapping function referenced in Section 4.2 was initiated bythe author and its final form was developed and published by AndersBallestad and Andrey Kostin (of Dolby Canada) and further modifiedin the course of adoption in published video encoding standards byother Dolby employees.4 Calibrated Image Appearance Reproduction - Siggraph Asia2012 (Ref.: [93])Authors: Erik Reinhard, Tania Pouli, Timo Kunkel, Ben Long, An-ders Ballestad, Gerwin DambergContribution: As a Senior Research Engineer at Dolby Canada Re-search, the author\u00E2\u0080\u0099s responsibilities included establishing new Univer-sity relationships and guiding industry-relevant research work. Thispaper was a close collaboration between the author\u00E2\u0080\u0099s research groupat Dolby and the University of Bristol, UK. The author suggested andcoordinated the work on combined colour appearance and tone map-ping that is described in this paper and together with the first author,Dr. 
Reinhard formulated the need for a simple Colour AppearanceModel (CAM) in the form of a tone mapping operator that functionsover a large luminance range and omits the backward step of classicalCAMs: the forward-only colour appearance model. Dr. Reinhard de-vPrefacerived the first concept of the model. Both Dr. Kunkel and the authorimplemented and evaluated early Matlab versions of the model. Dr.Kunkel focussed part of his PhD work on a variant of the model. Dr.Reinhard and Dr. Pouli later added the local adaptation parts of themodel and wrote the first draft of the joint paper.A version of this paper is included in Chapter 3 of this document.5 State of the Art in Computational Fabrication and Display -Eurographics 2013 (Ref.: [47])Authors: Matthias Hullin, Ivo Ihrke, Wolfgang Heidrich, Tim Weyrich,Gerwin Damberg, Martin FuchsContribution: In this high level State of the Art paper, the authorprovided a draft of the section on novel display devices.The publication text is not included in this thesis, but parts are sum-marized in Section 2.6 Efficient Freeform Lens Optimization for Computational Caus-tic Displays - Optics Express 2015 (Ref.: [18])Authors: Gerwin Damberg and Wolfgang HeidrichContributions: All of the research work and prototyping was per-formed by the author. The research topic as well as the approachwas jointly suggested by the author and Dr. Heidrich. The physicallenses were designed by the author and 3D printed and polished byDr. Heide who had access to a suitable 3D printer at the time.A version of this publication is included in Sections 6.3 to 6.5 of thisdocument.7 High Brightness HDR Projection Using Dynamic FreeformLensing - Transactions on Graphics 2015 - presented at SIGGRAPH2016 (Ref.: [17])Authors: Gerwin Damberg, James Gregson, Wolfgang HeidrichContribution: The author performed all experimental work, developedthe basic algorithm framework, implemented the image statistics workand wrote the first draft of the paper. Dr. Gregson later developed andimplemented a real-time version of the algorithm which was includedin the paper and is described in Section 6.6.3. Dr. Ballestad mappedthe HDR images used in the HDR power survey. Dr. Ballestad andthe author jointly initiated the image statistic research.Parts of the paper are incorporated into Section 6.6 and into Sec-tion 5.3 of this document.viPreface8 Temporal Considerations and Algorithm Architecture for LightSteering Projectors - MTT Innovation Incorporated (MTT) Inter-nal Reports 2016Authors: James Gregson, Eric Kozak and Gerwin DambergContributions: The work related to the RGB prototype was performedby the research team including the author (CTO) at MTT InnovationInc., a UBC collaboration partner and demonstrated internally and atSIGGRAPH [15, 16]. The overall colourimetric calibration approachwas provided by the author. Dr. James Gregson developed and im-plemented the specific calibration routines including capturing anddigitizing the Point Spread Function (PSF) and non-steered compo-nents. Versions of the demo code were written by the author, RaveenKumaran and James Gregson, who also provided the first draft of thealgorithm write-up. The initial prototype hardware was conceptual-ized and prototyped by the author and significantly refined by theMTT research team, notably Raveen Kumaran, Johannes Minor, ErikKozak and James Gregson. 
The temporal synchronization schemeswere jointly developed and documented by the author and Eric Kozak.viiPrefacePrototype Demonstrations9 HDR Projector With Improved Contrast - SIGGRAPH Emerg-ing Technologies 2008 (invited)(Ref.: [20])Authors: Gerwin Damberg, Peter Longhurst, Michael Kang10 Light Steering Projector Monochromatic Proof of Concept -SIGGRAPH Emerging Technologies 2014 (Ref.: [15])Authors: Gerwin Damberg, Anders Ballestad, Erik Kozak, JohannesMinor, Raveen Kumaran11 High Brightness HDR Projection Using Dynamic Phase Mod-ulation - SIGGRAPH Emerging Technologies 2015 (invited)(Ref.: [16])Authors: Gerwin Damberg, Anders Ballestad, Erik Kozak, JohannesMinor, Raveen Kumaran, James Gregson, Wolfgang HeidrichPatents and Patent Applications12 Dynamic Freeform Lensing With Applications To High Dy-namic Range Projection - Provisional Patent Application US62007341Inventors: Gerwin Damberg, Wolfgang Heidrich13 Efficient, Dynamic, High Contrast Lensing with ApplicationsTo Imaging, Illumination and Projection - PCT patent applica-tion CA2015050515Inventors: Gerwin Damberg, James Gregson, Wolfgang Heidrich14 Light Detection, Color Appearance Models, and ModifyingDynamic Range for Image Display - US WO 2010/132237Inventors: Gerwin Damberg, Erik Reinhard, Timo Kunkel, AndersBallestad15 Image Processing and Displaying Methods for Devices thatImplement Color Appearance Models - US WO 2010/083493Inventors: Timo Kunkel, Erik Reinhard, Gerwin DambergviiiTable of ContentsAbstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iiLay Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iiiPreface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ivTable of Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . ixList of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiiList of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiiiList of Acronyms . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvAcknowledgements . . . . . . . . . . . . . . . . . . . . . . . . . . . xxDedication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi1 Introduction and Structure of the Thesis . . . . . . . . . . . 11.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11.2 Contributions and Outline of the Dissertation . . . . . . . . . 22 Background and Related Work . . . . . . . . . . . . . . . . . 42.1 Spatial Light Modulators . . . . . . . . . . . . . . . . . . . . 42.1.1 Spatial Resolution . . . . . . . . . . . . . . . . . . . . 42.1.2 Types of Spatial Light Modulators . . . . . . . . . . . 52.1.3 Amplitude and Phase-Only LCoS Modulators . . . . . 72.2 Contrast Metrics . . . . . . . . . . . . . . . . . . . . . . . . . 92.2.1 Sequential Projector Contrast . . . . . . . . . . . . . . 92.2.2 In-scene Projector Contrast . . . . . . . . . . . . . . . 102.3 Dual Modulation Projection Displays . . . . . . . . . . . . . . 112.4 Holographic Displays . . . . . . . . . . . . . . . . . . . . . . . 12ixTABLE OF CONTENTS2.5 Freeform Lenses . . . . . . . . . . . . . . . . . . . . . . . . . . 122.6 Tone Mapping Operators and Colour Appearance Models . . 133 Visual Perception and Colour Appearance . . . . . . . . . . 153.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153.2 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . 163.3 Our Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183.3.1 Input Parameters . . . . . . . . . . . . . . . 
. . . . . . 193.3.2 Pupil Size . . . . . . . . . . . . . . . . . . . . . . . . . 203.3.3 Bleaching . . . . . . . . . . . . . . . . . . . . . . . . . 213.3.4 Photoreceptor Response . . . . . . . . . . . . . . . . . 213.3.5 Final Mapping Function . . . . . . . . . . . . . . . . . 223.3.6 The Hunt and Stevens effects . . . . . . . . . . . . . . 253.3.7 Post-Processing . . . . . . . . . . . . . . . . . . . . . . 263.4 Local Lightness Perception . . . . . . . . . . . . . . . . . . . 273.5 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313.5.1 Scene Reproduction . . . . . . . . . . . . . . . . . . . 313.5.2 Video Reproduction . . . . . . . . . . . . . . . . . . . 353.5.3 Appearance Prediction . . . . . . . . . . . . . . . . . . 373.5.4 Limitations . . . . . . . . . . . . . . . . . . . . . . . . 383.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394 Digital Imaging Pipelines . . . . . . . . . . . . . . . . . . . . . 404.1 Digital Imaging Pipeline for Vision Research . . . . . . . . . 404.2 Perceptually Optimal Quantization . . . . . . . . . . . . . . . 445 Image Statistics and Power Requirements . . . . . . . . . . 465.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465.1.1 Light Steering Efficiency . . . . . . . . . . . . . . . . . 465.1.2 Component Efficiency . . . . . . . . . . . . . . . . . . 485.1.3 Narrowband Light sources . . . . . . . . . . . . . . . . 495.2 Full Light Steering and Hybrid Light Steering Architectures . 495.3 Average Luminance of HDR Images in Cinema . . . . . . . . 515.3.1 Methodology . . . . . . . . . . . . . . . . . . . . . . . 525.3.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . 536 Freeform Lensing and HDR Projector Proof of Concept . 556.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 556.2 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . 566.3 Phase Modulation Image Formation . . . . . . . . . . . . . . 56xTABLE OF CONTENTS6.4 Optimization Problem . . . . . . . . . . . . . . . . . . . . . . 586.5 Simulation Results . . . . . . . . . . . . . . . . . . . . . . . . 606.5.1 Ray Tracer Simulation . . . . . . . . . . . . . . . . . . 616.5.2 Physical Optics Simulation . . . . . . . . . . . . . . . 616.5.3 Refractive 3D Printed Lens Results . . . . . . . . . . . 656.5.4 Static Phase Plates . . . . . . . . . . . . . . . . . . . . 656.6 Dynamic Lensing in Projection Systems . . . . . . . . . . . . 676.6.1 Monochromatic Prototype . . . . . . . . . . . . . . . . 696.6.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . 716.6.3 Real-time Freeform Lensing . . . . . . . . . . . . . . . 766.6.4 Limitations . . . . . . . . . . . . . . . . . . . . . . . . 816.6.5 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . 827 Improved RGB Projector Prototype . . . . . . . . . . . . . . 847.1 Architecture of the RGB Projector Prototype . . . . . . . . . 857.2 Temporal Considerations in HDR Projection Displays . . . . 887.3 Colourimetric Calibration of the Projector . . . . . . . . . . . 967.3.1 Light Steering Image Formation Model . . . . . . . . . 977.3.2 Optical Model . . . . . . . . . . . . . . . . . . . . . . 997.3.3 High-Level Algorithm . . . . . . . . . . . . . . . . . . 997.3.4 Input Transformation . . . . . . . . . . . . . . . . . . 1007.3.5 Content Mapping . . . . . . . . . . . . . . . . . . . . . 1027.3.6 Forward Model . . . . . . . . . . . . . . . . . . . . . . 1047.3.7 Phase Pattern Computation . . . . . . . . . . . . . . . 
1057.3.8 Amplitude Pattern Generation . . . . . . . . . . . . . 1077.4 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1088 Discussion and Conclusion . . . . . . . . . . . . . . . . . . . . 1108.1 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1108.2 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . 1128.3 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114xiList of Tables5.1 Conventional and Proposed Projector Power Requirements . 536.1 Basic Algorithm Run Times . . . . . . . . . . . . . . . . . . . 606.2 Luminance Measurements of Prototype Results . . . . . . . . 746.3 Runtimes FFT Algorithm . . . . . . . . . . . . . . . . . . . . 797.1 RGB High Power Prototype Features . . . . . . . . . . . . . . 847.2 Pulse Durations Within the Projector . . . . . . . . . . . . . 957.3 Projector Chromaticity Coordinates . . . . . . . . . . . . . . 101xiiList of Figures1.1 Scope of the Thesis . . . . . . . . . . . . . . . . . . . . . . . . 23.1 Image Appearance Model Contrast Assessment . . . . . . . . 173.2 An Overview of the Blocks of Our Model . . . . . . . . . . . 183.3 Mapping Function Plotted Against Colour Matching Datasets 263.4 Stevens Effect Within our Model . . . . . . . . . . . . . . . . 273.5 Median Cut Algorithm Results . . . . . . . . . . . . . . . . . 283.6 Example of Global and Local Rendering . . . . . . . . . . . . 313.7 Visual Comparison of Model Results Against Other Algorithms 333.8 Comparison of our Model Against Other Appearance Models 353.9 Results With Display and Environment Variations . . . . . . 363.10 Effects of Change in Adapting White Point . . . . . . . . . . 363.11 Performance as Predictive Appearance Model . . . . . . . . . 384.1 Components Within the Vision Research Pipeline . . . . . . . 414.2 Simulated and Measured Intensity . . . . . . . . . . . . . . . 425.1 SDR and HDR Luminance Range on Log Scale . . . . . . . . 475.2 Theoretical and Measured Steering Efficiency . . . . . . . . . 485.3 Full Light Steering Architecture . . . . . . . . . . . . . . . . . 505.4 Hybrid Light Steering Architecture . . . . . . . . . . . . . . . 505.5 Splitting Scheme for Hybrid Architecture . . . . . . . . . . . 515.6 Example Images From HDR Survey . . . . . . . . . . . . . . 525.7 Relative Power Requirements of Different Projectors . . . . . 546.1 Geometry for the Image Formation Model . . . . . . . . . . . 576.2 Intensity Change Due to Distortion . . . . . . . . . . . . . . . 586.3 Algorithm Progression for Six Iterations . . . . . . . . . . . . 616.4 LuxRender Ray Tracer Results . . . . . . . . . . . . . . . . . 626.5 Test Pattern and Resulting Lens Height Field . . . . . . . . . 636.6 Spectra of LED and Colour Matching Function . . . . . . . . 636.7 Wave Optics Simulation . . . . . . . . . . . . . . . . . . . . . 64xiiiLIST OF FIGURES6.8 3D-Printed Refractive Lenses . . . . . . . . . . . . . . . . . . 646.9 Geometry for Refraction in a Freeform Lens . . . . . . . . . . 656.10 Static Phase Plate Results . . . . . . . . . . . . . . . . . . . . 676.11 Phase Wrapping . . . . . . . . . . . . . . . . . . . . . . . . . 696.12 Monochromatic Proof of Concept . . . . . . . . . . . . . . . . 706.13 Phase Pattern and Intermediate Light Fields . . . . . . . . . 716.14 System Diagram of the HDR Projector Architecture . . . . . 726.15 Result Photos With Luminance and Contrast Measurements . 756.16 Mirror-Padding the Input Image . . . . . . . 
. . . . . . . . . 806.17 LuxRender Simulations . . . . . . . . . . . . . . . . . . . . . 817.1 RGB Light Steering Prototype . . . . . . . . . . . . . . . . . 857.2 Chromaticity Comparison . . . . . . . . . . . . . . . . . . . . 877.3 Typical Relative Rise and Fall Times . . . . . . . . . . . . . . 907.4 Sub-Frame Phase Response . . . . . . . . . . . . . . . . . . . 917.5 DMD Timing . . . . . . . . . . . . . . . . . . . . . . . . . . . 927.6 Slow, Asynchronous Light Pulses . . . . . . . . . . . . . . . . 937.7 Fast, Asynchronous Light Pulses . . . . . . . . . . . . . . . . 937.8 Slow, Synchronous Light Pulses . . . . . . . . . . . . . . . . . 947.9 Phase LCoS, DMD and Laser Pulse Timing Diagram . . . . . 957.10 Prototype using a DMD Amplitude Modulator . . . . . . . . 967.11 RGB Prototype Optical Blocks . . . . . . . . . . . . . . . . . 977.12 Unsteered Light . . . . . . . . . . . . . . . . . . . . . . . . . . 987.13 Full-screen white pattern . . . . . . . . . . . . . . . . . . . . . 997.14 High-Level Algorithm Blocks . . . . . . . . . . . . . . . . . . 1007.15 Input Transformations . . . . . . . . . . . . . . . . . . . . . . 1017.16 Content Mapping Algorithm Block . . . . . . . . . . . . . . . 1037.17 Splitting Function . . . . . . . . . . . . . . . . . . . . . . . . 1047.18 Forward Model Algorithm Block . . . . . . . . . . . . . . . . 1057.19 Point Spread Function . . . . . . . . . . . . . . . . . . . . . . 1057.20 Phase Pattern Computation Block . . . . . . . . . . . . . . . 1067.21 Amplitude Pattern Generation Block . . . . . . . . . . . . . . 1077.22 Photo of New Light Steering Prototype in Action . . . . . . . 109xivList of AcronymsA | C | D | E | F | G | H | I | J | L | M | O | P | Q | R | S | T | U | VAALL Average Luminance Level . . . . . . . . . . . 47, 49, 52, 53, 55, 82ANSI American National Standards Institute . . . . . . . . . . . 10, 11APGV Applied Perception in Graphics and Visualization . . . . . . . vAPL Average Picture Level . . . . . . . . . . . . . . . . . . . . . . . . 47AR Augmented Reality . . . . . . . . . . . . . . . . . . . . . . . . . 111ASIC Application-Specific Integrated Circuit . . . . . . . . 7, 79, 91, 95CCAM Colour Appearance Model . . . . . . . . . . . . . . . . . v, 13, 16CIE Commission International de l\u00E2\u0080\u0099E\u00C2\u00B4clairage . . . . . . . . . . . . 13, 99CPU Central Processing Unit . . . . . . . . . . . . . . . . . . . . . . 78CRT Cathode Ray Tube . . . . . . . . . . . . . . . . . . . . . . . 41\u00E2\u0080\u009343CUDA NVIDIA\u00E2\u0080\u0099s Compute Unified Device Architecture . . . . . . . 78cuFFT NVIDIA\u00E2\u0080\u0099s CUDA Fast Fourier Transform Library . . . . . 78, 79CW Continuous Wave . . . . . . . . . . . . . . . . . . . . . . . . . . . 95DD-ILA Direct Drive Image Light Amplifier . . . . . . . . . . . . . . . 9DAC Digital-to-Analog Converter . . . . . . . . . . . . . . . . . . . . 94xvList of AcronymsDC Direct Current . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12DCI Digital Cinema Initiative . . . . . . . . . . . . . . . . . . . . 46, 88DCI2K Digital Cinema Initiative 2K standard . . . . . . . . . . . . . 4DCI4K Digital Cinema Initiative 4K standard . . . . . . . . . . . . . 4DLP Digital Light Processor . . . . . . . . . . . . . . . . . . . . 6, 9, 92DMD Digital Micromirror Device . . . . . . . 5\u00E2\u0080\u00937, 84, 86, 88, 89, 95, 96DP DisplayPort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41DPSS Diode-Pumped Solid-State . . . . . . . . . . . . . . . . . . . . 
85DVI Digital Visual Interface . . . . . . . . . . . . . . . . . . . . . . . 41EEMI Electromagnetic Interference . . . . . . . . . . . . . . . . . . . . 94EOTF Electro-Optical Transfer Function . . . . . . . . . . . . . . . . 3Ff# f-number . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86, 87FFT Fast Fourier Transform . . . . . . . . . . . . . . . . . . . . . 78, 79FPGA Field Programmable Gate Array . . . . . . . . . . . . . 79, 90, 91fps Frames per Second . . . . . . . . . . . . . . . . . . . . . . . . . . . 92FSW Full Screen White . . . . . . . . . . . . . . . . . . . . . . 5, 46, 52GGPU Graphics Processing Unit . . . . . . . . . . . . . . . . . . . 78, 79HHD High Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4HD-SDI High Definition Serial Digital Interface . . . . . . . . . . v, 42xviList of AcronymsHDMI High-Definition Multimedia Interface . . . . . . . . . . . . . . 41HDR High Dynamic Range . ii, iv\u00E2\u0080\u0093vi, 1\u00E2\u0080\u00933, 11, 15, 16, 20, 34, 36, 40, 42,44\u00E2\u0080\u009347, 49, 51\u00E2\u0080\u009356, 72, 74, 82, 83, 110, 111, 113HTPS High Temperature Poly-Silicon . . . . . . . . . . . . . . . . . 5, 7HVS Human Visual System . . . . . . . . . . . 40, 41, 43, 46, 111\u00E2\u0080\u0093113IIR Infrared . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90JJND Just Noticeable Difference . . . . . . . . . . . . . . . . . . . . . 44LLBE Light Budget Estimator . . . . . . . . . . . . . . . . . . . . . . . 46LC Liquid Crystal . . . . . . . . . . . . . . . . . . . . . . . . 5\u00E2\u0080\u00938, 89, 91LCD Liquid Crystal Display . . . . . . . . . . . . . . . 5, 6, 43, 68, 110LCoS Liquid Crystal on Silicon . . . . . . . . 5\u00E2\u0080\u00939, 68, 69, 72, 86, 89, 95LED Light Emitting Diode . . iv, 10, 11, 43, 49, 61\u00E2\u0080\u009364, 69, 81, 110, 111LUT Look Up Table . . . . . . . . . . . . . . . . . . . . . . 101, 104, 107MMEMS Microelectromechanical System . . . . . . . . . . . . . . 5, 9, 68MRI Magnetic Resonance Imaging . . . . . . . . . . . . . . . . . . . . ivMTT MTT Innovation Incorporated . . . . . . . . . . . . . . . . . vii, 84OOLED Organic Light Emitting Diode . . . . . . . . . . . . . . . 43, 111OS Operating System . . . . . . . . . . . . . . . . . . . . . . . . . . . 41xviiList of AcronymsPPBS Polarizing Beam Splitter . . . . . . . . . . . . . . . . . . . . . . 7PC Personal Computer . . . . . . . . . . . . . . . . . . . . . . . . . . 40PCM Pulse Code Modulation . . . . . . . . . . . . . . . . . . . . . . 91PLF Premium Large Format for Cinema . . . . . . . . . . . . . iv, 110PQ Perceptual Quantization . . . . . . . . . . . . . . . . . . 45, 99\u00E2\u0080\u0093101PSF Point Spread Function . . . . . . . . vii, 47, 52, 71, 82, 88, 104, 105PWM Pulse Width Modulation . . . . . . . . . . . . . . . 88, 89, 94\u00E2\u0080\u009396QQMR Quasi-Minimal Residual Method . . . . . . . . . . . . . . . . . 78RRGB Additive Colour Red Green Blue . . . . . 42, 82, 97\u00E2\u0080\u009399, 101, 105SSDI Serial Digital Interface, SMPTE 292M . . . . . . . . . . . v, 41, 42SDR Standard Dynamic Range . . . . . . . . . . . . . . . . . . . ii, 111SLM Spatial Light Modulator 4, 5, 7, 11, 12, 55, 56, 65, 68\u00E2\u0080\u009371, 73, 75, 81,82, 88, 95, 97, 108SMPTE Society of Motion Picture & Television Engineers . . v, 42, 45SXRD Silicon X-tal Reflective Display . . . . . . . . . . . . . . . . . 9TTFT Thin Film Transistor . . . . . . . . . . . . . . . . . . . . . . . . 5TI Texas Instruments Inc. . . . . . . . . . . . . . . . . . . . . . . 
5, 9, 92TV Television . . . . . . . . . . . . . . . . . . . . . . 6, 42, 45, 110, 111UxviiiList of AcronymsUCS Uniform Chromaticity Scale . . . . . . . . . . . . . . . . . . . . 87UHD Ultra-High-Definition . . . . . . . . . . . . . . . . . . . . . . . . 4VVGA Video Graphics Array . . . . . . . . . . . . . . . . . . . . . . . 41VR Virtual Reality . . . . . . . . . . . . . . . . . . . . . . . . . . . 111xixAcknowledgementsI would like to thank the many people including past and present colleagues,friends and family that have in one way or another supported me throughoutmy studies at UBC and during my work in the start-up MTT InnovationInc.My research supervisor Dr. Wolfgang Heidrich has been a great inspi-ration throughout my graduate studies by providing just the right amountof guidance. He is one of the smartest people that I have gotten to knowin this field and he made it possible for me in the first place to combineacademic research with innovative and practical technology development atthe start-up. I would further like to thank Roger Miller at the UniversityLiaison Office for working with Wolfgang and me on the technology transferof some of my research results at UBC to MTT.I am thankful for many fruitful discussions with my lab mates James,Felix, Matthias, Evan, Lei, Brad, Shuochen and Qiang as well as with myacademic friends and collaborators Karol, Erik, Timo, Greg and everyoneelse I had the pleasure of meeting over the years.None of the momentum of maturing the light steering technology couldhave been maintained without the hard working and fun co-workers at MTT:my co-founders Anders and Eric as well as Johannes, Raveen, James, Sam,Eric, Andra, Ronan, Thomas and everyone else I got to meet and work withduring this exciting time.Finally I want to thank my parents for their continuous support andencouragement throughout the years.xxTo Astrid for her incredible love, patience and support during my work andstudies; to Johannes, Philip and Mathea for encouraging me with playfulenthusiasm to take enough breaks from the same.xxiChapter 1Introduction and Structureof the Thesis1.1 IntroductionDisplay technologies have evolved increasingly fast over the past decades, al-ways with the goal of providing the most realistic looking images, given costand technology constraints. Advances in new display materials, faster andlower power semi-conductors, solid state illumination technologies as wellas more powerful computational hardware have motivated the emergence ofa new research discipline: computational displays. This field in computerscience lies at the intersection of display optics, mathematical analysis andphysical modelling, efficient computational processing, and, most impor-tantly, visual perception. Computational displays aim to provide a visualexperience beyond the capabilities of traditional systems by adding compu-tational power to the display architecture.While a variety of research concepts from the field have made their wayinto commercial display products, large screen projectors have predomi-nantly been carved out from practical innovations in computational display.For realistic image appearance, arguably the most important visual prop-erty of a display system is the range and number of light levels and coloursthat can be displayed. 
Unfortunately, increasing this range significantly inprojectors is prohibitively expensive, because peak luminance scales linearlywith display power and light source cost, while brightness perception ofluminance values is near logarithmic.This thesis introduces our analysis and understanding of light require-ments in the cinema production pipeline from capture and production on areference display, encoding and distribution to display on a projector and fi-nally light perception by the audience. We explore tone mapping and colourappearance in HDR image reproduction. We propose a new mathematicalframework and a simple model to accurately map HDR images between areference display and the projector and explore means to transmit video data11.2. Contributions and Outline of the Dissertationfrom source to display in a bit-efficient manner. Based on this perceptualunderstanding we propose and prototype a new, steerable light source andprojector architecture based on phase modulation to efficiently achieve therequired luminance and colour range for life-like images in cinema. Finally,we assemble and analyse HDR image statistics for theatrical high brightnessHDR content. We work under the hypothesis that a much more optimizedprojector design can be derived based on understanding the light require-ments for large screen cinema.1.2 Contributions and Outline of the DissertationThe scope of work presented herein aims at addressing the remaining bot-tlenecks in the current HDR pipeline in cinema including content creation(colour and tone mapping), content delivery and display (high brightnessHDR projection) and finally the understanding of visual perception of lightlevels by the observer. Figure 1.1 provides a high-level overview of the scopeof the Dissertation.Figure 1.1: Scope of the proposed work to address the major bottlenecksin the current cinematic pipeline to enable HDR. While HDR content cap-ture/creation falls outside the scope of the work, we do perform statisticalanalysis on scene referred (camera captured) and display referred (colourcorrected for presentation in a cinema) HDR content in order to better un-derstand the requirements for an ideal HDR projector.21.2. Contributions and Outline of the DissertationThe following overview outlines the remaining chapters of the thesis.Chapter 2. Background and Related Work. We review basic con-cepts of high contrast projection together with the state of the art in com-putational displays as it relates to light steering, colour appearance modelsand tone reproduction operators.Chapter 3. Visual Perception and Colour Appearance. Thischapter introduces a new, combined colour appearance model and tone map-ping operator that aids in mapping HDR content for arbitrary displays andviewing environments. The model, in a modified form was adapted in theSMPTE ST 2084:2014 standard that discusses the HDR Electro-OpticalTransfer Function (EOTF) for reference displays.Chapter 4. Digital Imaging Pipelines and Signal Quantization.In this chapter we discuss experiments related to utilizing video links withhigher bandwidth to allow for artifact free psycho-physical vision experi-ments as well as encoding schemes that allow bit efficient quantization ofvideo signals that represent the luminance range of modern HDR displays.Chapter 5. Image Statistics and Power Requirements. In thischapter we discuss light steering projector architectures suitable for cinemataking into consideration cost, power levels and performance. 
We propose anew hybrid architecture consisting of a steered and a non-steered light pathbased on our analysis of HDR image data.Chapter 6. Freeform Lensing and HDR Projector Proof ofConcept. A new phase retrieval algorithm based on freeform lensing aswell as our work on a new light steering HDR projector architecture con-cept is introduced.Chapter 7. RGB Projector Prototype. We discuss the research anddevelopment that lead to a full featured HDR projector prototype, includingsuitable RGB laser light sources, temporal considerations of optically pulsedcomponents in the light path and the optical forward model with algorithmimplementations to control the system.Chapter 8. Discussion and Conclusion. The contributions of thisthesis are summarized and future directions of research are indicated.3Chapter 2Background and RelatedWorkOur research draws from and builds on a number of different fields of relatedwork, including display technologies and algorithms for freeform lens design,human visual perception, colour appearance and image tone reproductionoperators. The following is a brief description of the state of the art in theserelated fields.2.1 Spatial Light ModulatorsThroughout this work we make use of Spatial Light Modulators (SLMs) forprojection systems. In modern projectors these SLMs typically consist ofpixelated micro displays in which each pixel can be electronically addressedto cause modulation of light. This section provides an overview of the keycharacteristics of some of the different types of common light modulators.2.1.1 Spatial Resolution and Pixel DimensionsToday, typical micro display spatial resolutions for consumer applicationsprovide 1920 \u00C3\u0097 1080 pixels (High Definition (HD)) or 3840 \u00C3\u0097 2160 pixels(Ultra-High-Definition (UHD)) of spatial resolution [110]. In digital cin-ema, to accommodate a larger horizontal to vertical aspect ratio, the cor-responding resolutions are 2048 \u00C3\u0097 1080 (Digital Cinema Initiative 2K stan-dard (DCI2K)) and 4096 \u00C3\u0097 2160 (Digital Cinema Initiative 4K standard(DCI4K)) respectively [23]. Today, the dimension of a typical individualpixel can be between 1\u00C2\u00B5m and 8\u00C2\u00B5m in a projector [50]. There is a trendtowards smaller micro displays, higher pixel count and smaller pixel pitch.Low power applications (e.g. mobile projectors and near-to-eye projectors)can utilize small micro displays. Higher power applications (e.g. confer-ence room, large venue or cinema projectors) tend to require larger microdisplays to manage heat dissipation and accommodate higher power lightsources with higher divergence of the light within the optical system [108].42.1. Spatial Light ModulatorsThe micro displays that were used in this work have a spatial resolution of1920 \u00C3\u0097 1080 and a pixel pitch of 8\u00C2\u00B5m leading to active area display dimen-sions of 15.36 mm\u00C3\u0097 8.64 mm [42].2.1.2 Types of Spatial Light ModulatorsConceptually, image formation in traditional projectors begins with a fullwhite screen (Full Screen White (FSW)) created by a light source that uni-formly illuminates a SLM (with the help of beam homogenization optics).Then, to create colours and tones of the image, light is attenuated on aper-pixel basis. A number of micro display technologies exist for this pur-pose. 
The three commercially most successful technologies are transmissiveLiquid Crystal Display (LCD) or High Temperature Poly-Silicon (HTPS),reflective Digital Micromirror Device (DMD) and reflective Liquid Crystalon Silicon (LCoS). A comprehensive discussion of the nuances of each of thetechnologies is provided in [2] and [38].Transmissive Liquid Crystal Displays. In the early days of digitalprojectors most systems were based on transmissive LCD technology whichrequires linearly polarized light at the input (e.g. with a polarization filterfollowing the light source). The LCD can rotate the state of the polarizationof light on a per-pixel basis (by applying an electric field across a cell filledwith Liquid Crystal (LC) material). The per-pixel (polarization) modulatedlight then passes through another linear polarization filter (typically at 90\u00C2\u00B0to the orientation of the first filter). Light at pixels with rotated polarizationpasses through the second polarization filter. Light at pixels with unchangedpolarization does not pass the polarization filter (it gets absorbed). Lightthat has been partially rotated will partially pass the second filter and getpartially absorbed by the second polarization filter. Switching the individualpixels at video rates requires active electronic components at each pixel, ThinFilm Transistors (TFTs) as well as the required wiring to control them.These components within the clear or transmissive area of the display causea limited pixel fill-factor for HTPS-based micro displays as well as a visiblepixel structure on the projection screen (often referred to as the screen dooreffect).Reflective Digital Micromirror Devices. In the late 1980\u00E2\u0080\u0099s Texas In-struments Inc. (TI) developed the DMD technology, consisting of a Micro-electromechanical System (MEMS)-based pixelated array of micro mirrorsthat can tilt into two distinct positions (typically across the diagonal of each52.1. Spatial Light Modulatorssquare pixel mirror) and either reflect incoming light towards a projectionscreen or into a light dump (heat sink) to absorb the light. The typicalangle between the two states of each mirror is between \u00C2\u00B110\u00C2\u00B0 and \u00C2\u00B114\u00C2\u00B0. Alarge angle between the two mirror states enables a higher contrast in theresulting image due to higher separation of the incoming and outgoing il-lumination beams. Given the binary nature of the display device DMDscan only form binary image patterns. However, the resulting binary imagescan be updated at significantly higher speeds (between 20kHz and 80kHz)compared to for example LCD (typically 30Hz to 240Hz) and thus greyscale can be achieved with a binary/digital modulator. Benefits of DMDsinclude a relatively high pixel fill factor, the high degree of repeatabilitydue to the digital drive scheme and well-defined mirror states, high reflec-tivity, and good power handling. Most cinema projectors are based on threeDMD chips (one for each of the red, green and blue colour channels) andtypical out-of-lens optical power is between 20, 000 and 60, 000 lumens. Adraw back of DMD technology is the relatively high cost per device. In theconsumer market projectors containing three DMDs are sparse. 
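To illustrate the grey-scale point above (how a strictly binary modulator can render intermediate tones), the following sketch decomposes an 8-bit frame into binary bit-planes whose display durations are weighted by powers of two, so that the time-averaged light seen by the eye reproduces the original grey level. This is a simplified model for illustration only and not the actual drive scheme used in commercial DMD controllers.

```python
import numpy as np

def bit_planes(frame8, bits=8):
    """Split an 8-bit greyscale frame into binary bit-planes, most significant bit first."""
    return [((frame8 >> b) & 1).astype(int) for b in range(bits - 1, -1, -1)]

def temporal_average(planes):
    """Time-average the binary planes with display durations proportional to their bit weight."""
    bits = len(planes)
    weights = [2 ** (bits - 1 - i) for i in range(bits)]   # the MSB plane is shown the longest
    return sum(w * p for w, p in zip(weights, planes)) / float(sum(weights))

frame = np.arange(256, dtype=np.uint8).reshape(16, 16)      # synthetic 8-bit ramp image
planes = bit_planes(frame)
recovered = temporal_average(planes) * 255.0
print(np.allclose(recovered, frame))                         # True: the eye-integrated grey matches the input
```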
Lower costsingle chip Digital Light Processor (DLP) projectors on the other hand usefield sequential colour schemes which can cause image artifacts (or imagedeficiencies) related to colour break-up of the sequential colour image fields(also referred to as the rainbow effect) [2, 105].Reflective Liquid Crystal on Silicon Displays. LCoS technology wasdeveloped to overcome some of the shortcomings of transmissive LCD pro-jectors as well as the high cost of DLP technology predominantly in the rear-projection Television (TV) market. However with the rapid introduction oflarge, flat panel TVs based on plasma and LCD technology in the early2000s there was less of an addressable market for the technology. A numberof large companies in the semiconductor field (such as Intel, Thompson andPhilips) as well as many small companies discontinued their investmentsinto the development of LCoS technology. Other companies (such as SONYand JVC) pivoted and continued development of the technology for nichemarkets for example high-end home theatre projectors and cinema.In LCoS technology a LC layer is sandwiched between a cover glass that actsas a global electrode and a silicon backplane with individually addressableelectrodes. Similar to transmissive LCD technology a voltage across the LCcell rotates the crystals and with that changes the state of polarization ofincoming light. Linearly polarized light initially passes through the LC layerand reflects off the backplane before passing through the LC layer a second62.1. Spatial Light Modulatorstime to exit the display. Due to the light passing through the LC layertwice, the gap thickness of the LC cell can be designed to be small which inturn allows for a fast (polarization) switching speed and high frame rates.Instead of a transmissive linear polarization filter, LCoS-based devices em-ploy a Polarizing Beam Splitter (PBS) positioned at 45\u00C2\u00B0 to the surface ofthe micro display. A PBS passes polarized light of one linear polarization(P-polarized light for parallel) and reflects (at 90\u00C2\u00B0) light of the other lin-ear polarization (S-polarized light for German senkrecht = perpendicular).LCoS devices can be produced at lower cost compared to DMDs. There areno moving parts and the backplane manufacturing process is comparable tothe process used to produce semiconductors (e.g. Application-Specific Inte-grated Circuits (ASICs)). Other benefits include the high switching speed, ahigh pixel fill factor (93% and higher: all pixel switching circuitry is hiddenbehind the reflective electrodes) and high contrast optical systems that canbe designed around the micro display [2].Of the three technologies introduced here LCoS technology provides thehighest native contrast (e.g. 50, 000 : 1) compared to DMD (e.g. 2, 000 : 1)and HTPS (e.g. 1, 000 : 1).2.1.3 Amplitude and Phase-Only LCoS ModulatorsThe previous section discusses typical projection SLMs that attenuate lighton a per-pixel basis to form images. In this thesis we introduce the useof programmable lenses to create a more efficient projection system (seeChapter 6). Similar to static lenses (for example a spherical glass lens) wewould like to achieve a wavefront distortion effect to focus or spread light.The change of the state of polarization in a traditional LCoS display displayalways comes coupled with a retardation of the phase (in other words adistortion of the wavefront) of light. 
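The coupling between polarization modulation and phase retardation can be made concrete with the idealized textbook relation for a variable retarder between crossed polarizers, which is effectively the configuration the PBS creates: with the LC director at 45° to the input polarization, the transmitted intensity varies as sin²(δ/2) with the total retardation δ. The sketch below evaluates this relation; it is an idealized model, not a measured response of the devices used in this work.

```python
import numpy as np

def transmitted_intensity(retardation):
    """Idealized response of an LC cell between crossed polarizers (director at 45 degrees):
    relative transmitted intensity as a function of the total phase retardation in radians."""
    return np.sin(retardation / 2.0) ** 2

# Sweep the retardation from zero to one full wave (2 pi).
for delta in np.linspace(0.0, 2.0 * np.pi, 9):
    print("retardation %.2f rad -> relative intensity %.2f" % (delta, transmitted_intensity(delta)))
```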
It is possible to design the LC material and the alignment layer of an LCoS device in such a way that the display can be operated in phase-only mode [31, 41, 61-63, 113]. Incoming light should be linearly polarized, but the state of polarization will not change upon reflection off the micro display. In this configuration no PBS is required and hence light is not absorbed. Similar to the example of a static glass lens, this dynamically addressable phase retardation is accomplished by adjusting the effective refractive index along the light path in the LC material at each pixel. The change in refractive index is possible due to the use of a LC material with non-zero birefringence. In a birefringent material the refractive index depends on the polarization of the light that passes through it. Birefringence can be quantified as the difference between the refractive indices of a material and is defined as:

\Delta n_{LC} = n_e - n_o \qquad (2.1)

where n_e is the extraordinary refractive index for incident light parallel to the preferred orientation direction of the LC material (the director) and n_o is the ordinary refractive index for light perpendicular to the director. A typical LC has a positive birefringence between 0.05 and 0.45 (see [21, 54, 114]). The amount of phase retardation of light as it passes through the LC material is given by:

\delta_{LC} = \frac{2\pi \, \Delta n \, d}{\lambda} \qquad (2.2)

where d is the thickness of the LC material and λ is the wavelength of incident light. Given that an LCoS device is reflective and light passes through the LC material twice, the total phase retardation can be expressed as:

\delta_{LCoS} = \frac{2\pi \, \Delta n \, 2d}{\lambda} \qquad (2.3)

For programmable lens and holographic applications it is often desirable to allow for a maximum phase retardation of exactly one wavelength (2π), for example for phase-wrapping of a phase function (see for example Figure 6.11). In this case the thickness of the LC device should be:

d = \frac{\lambda}{2 \, \Delta n} \qquad (2.4)

For instance, a material with Δn = 0.2 and green light at λ = 532 nm would call for a cell thickness of roughly 1.3 µm. For broadband light, device and material properties can be optimized around a center wavelength λ_c. Light with a shorter or longer wavelength than λ_c will refract more or less, which can cause wavelength-dependent image magnification or demagnification, resulting in chromatic aberration (see for example Figure 6.7). Finally, in phase-only LCoS devices the pixel pitch plays an important role. A small pixel pitch is generally desirable as it allows for larger diffraction angles. However, given that the LC material thickness remains constant for a given desired phase retardation, a small pixel pitch can cause inter-pixel crosstalk (i.e. a mixed phase response) between individual pixels, which can be challenging to predict. We refer the interested reader to [117] and [98] for further details on phase-only LCoS devices and topics related to polarization engineering for projection applications.

2.2 Contrast Metrics

When comparing the contrast performance between different projectors it is important to differentiate between sequential contrast and in-scene contrast.
The sequential contrast is a good indication of the native micro-display contrast, while the in-scene contrast is a much more meaningfulcontrast measure when judging the image quality of natural images as ittakes into account the entire system\u00E2\u0080\u0099s optics including for example scatterin the projection lens.2.2.1 Sequential Projector ContrastAs the name suggests the sequential contrast is typically calculated froma set of sequential light measurements of the maximum (full white screen)and minimum (full black screen) amount of light that can be displayed bythe projector. The sequential contrast serves as a good approximation ofthe native micro-display contrast, provided the projector light source is notglobally dimmed/attenuated (see Section 2.2.1). While early video projec-tors based on liquid crystal micro displays were limited to on the order of500:1 sequential contrast, today\u00E2\u0080\u0099s projectors based on MEMS based devices(for example TI\u00E2\u0080\u0099s DLP technology) or LCoS displays such as Sony\u00E2\u0080\u0099s SiliconX-tal Reflective Display (SXRD) branded technology achieve on the order of1,000:1 to 6,000:1 sequential contrast. JVC continues to push the envelopeachieving on the order of 30,000:1 sequential contrast using their flavourof LCoS technology called Direct Drive Image Light Amplifier (D-ILA) [9].The sequential contrast is an important data point in understanding a pro-jector, but it is less relevant when predicting projector contrast performancefor the reproduction of natural images.Dynamic Irises and Global Light Source Dimming. The sequentialcontrast of a projector can be increased when the light source is dimmedor turned off entirely. High-pressure discharge lamps that are common inmost projectors can neither be electrically dimmed by a large amount nor92.2. Contrast Metricscan they be dimmed rapidly (e.g. at video frame rates). In order to globallymodulate the lamp intensity, one or more dynamic irises are controlled in acontent-dependant fashion. Similarly, with solid state light sources replac-ing lamps in more recent projector designs, an LED or laser array\u00E2\u0080\u0099s globalintensity can be dimmed electronically based on content. In overall brightscenes, the iris will be fully open (or the solid state light source driven atfull intensity), whereas dark scenes will cause the light source to be off ordark for better (=darker) black level. The dynamic iris feature helps boostthe sequential contrast metric for test patters, but provides little visual ben-efit for any but the darkest images - provided no image feature (includingsingle white pixels) requires the peak luminance level. In this case, theiris will either remain fully open, which leads to an elevated black level, orthe luminance of the bright image feature will be reduced by the amountthat the light source is dimmed [45, 59]. In product marketing material thequoted sequential contrast numbers often exceed 1,000,000:1, in this contexta rather meaningless measure for real image content.2.2.2 In-scene Projector ContrastFor natural images, the in-scene contrast is a much more meaningful con-trast measure. Here, a bright and a dark feature are measured simulta-neously within the same image. There are several common test patterns,for example the American National Standards Institute (ANSI) 4x4 checkerboard contrast consisting of a grid of alternating white and black rectan-gles. 
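A minimal sketch of how such a measurement is typically reduced to a single number: generate the 4 × 4 checkerboard, take a luminance reading at the centre of each rectangle, and divide the mean of the white rectangles by the mean of the black ones. The measured values below are hypothetical placeholders chosen only to show the calculation.

```python
import numpy as np

def checkerboard(rows=4, cols=4):
    """4 x 4 ANSI-style test pattern: 1 marks a white rectangle, 0 a black one."""
    return np.indices((rows, cols)).sum(axis=0) % 2

def ansi_contrast(luminance):
    """In-scene (ANSI) contrast: mean luminance of the white rectangles over the mean of the black ones."""
    pattern = checkerboard(*luminance.shape)
    return luminance[pattern == 1].mean() / luminance[pattern == 0].mean()

# Hypothetical luminance readings (cd/m^2) at the centre of each rectangle; scatter in
# the optics lifts the black rectangles far above the projector's sequential black level.
measured = np.where(checkerboard() == 1, 48.0, 0.096)
print("ANSI contrast ~ %d:1" % round(ansi_contrast(measured)))
```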
The in-scene contrast, or simultaneous contrast, takes into account theentire optical path within the projector. The native micro display contrast,inter-pixel light scatter, scatter within the light path caused by reflectionsoff of optical elements, undesired divergence of light within the projectoras well as the optical quality of the projection lens all affect the in-scenecontrast.Aside from the native contrast of the micro display, the in-scene contrastof a projector is largely affected by the overall amount of light in an image(the mean luminance of an image). This is due to scattered light withinthe projector that raises the achievable black level more in brighter imagescompared to darker ones. In the case of for example a small white featurecentered on a black background, a high in-scene contrast which is comparableto the sequential contrast can be measured, whereas a larger bright featuretowards the edge of the same black image would cause a reduced in-scenecontrast. In the example of the JVC projector (30,000:1 sequential contrast),the in-scene contrast for a dark image might be close to 30,000:1, whereas102.3. Dual Modulation Projection Displaysthe in-scene contrast for a bright scene (e.g. ANSI 4x4 checker board, with50% mean luminance) would be closer to 500:1 due to scatter in the opticspast the micro display. We refer to Section 5.3 for a first estimate of averageimage intensity for HDR cinema and note that for overall dark scenes a blacklevel of 100,000\u00C3\u0097 or less below the peak luminance can be appreciated bythe observer.To compare image quality of different HDR projectors the in-scene con-trast of a test pattern with mean luminance comparable to target imagecontent, or a selection of real images can be used. The capability of a pro-jector to reproduce bright highlights is important for both, overall dark andoverall bright scenes [13, 14, 118].2.3 Dual Modulation Projection DisplaysOver the last two decades, there have been several different proposals toimplement dual modulation approaches in display applications to increasein-scene contrast. The availability of large flat panel TVs and high powerLEDs led to an adoption of dual modulation techniques in consumer elec-tronics [101]. Similar concepts to increase contrast in projectors includescreens with spatially varying reflectivity (either statically [7] or dynami-cally [102]), and arrays of hundreds or even thousands of primitive projec-tors [102], proposed as means to increase on-screen luminance. Few of theseconcepts have made it past the research stage and small-scale prototypes.One exception are dual modulation projector designs using two traditionalamplitude SLMs in sequence [8, 19, 20, 59]. These systems are typicallyintended for specialty applications requiring good black level and limitedpeak luminance. The low optical efficiency of amplitude SLMs results inboth a low light intensity on screen and high power consumption, all at sig-nificantly increased system cost. Nevertheless dual amplitude attenuatingprojectors are being deployed not only in planetariums, training and sim-ulation applications, but recently also in high end, premium large format,cinema.To alleviate the problem of inefficient image formation, Hoskinson etal. 
[44, 46] introduced the notion of light reallocation using 2D arrays of tip-tilt mirrors in the light path of a small DLP projector, whereby the first modulator does not actually absorb much light, but moves it around within the image plane, so it can be reallocated from dark image regions to bright ones, essentially creating moving, bright spots of approximately constant size on the amplitude modulator. Hoskinson and co-authors used a continuously tilting micro-mirror array to achieve this light reallocation. Unfortunately such mirror arrays are not easy to control accurately (achieving predictable tilt-angles for a given drive signal) and are still only available as research prototypes at low spatial resolution (7 × 4 pixels in their work).

2.4 Holographic Displays

Holographic image formation models (e.g. [65]) were adapted to create digital holograms [39] quite early in the history of phase SLMs. Holographic projection systems have been proposed in many flavours for research and speciality applications including projectors [10]. Some projection systems use diffraction patterns addressed on a phase SLM in combination with temporally and spatially coherent light for image generation. The challenges in holography for projectors lie in achieving sufficiently good image quality, the limited diffraction efficiency, often due to binary phase modulators [10], and the requirement for a Fourier lens, often resulting in an undesired, bright Direct Current (DC) spot within the active image area (zero-order diffraction, which is hard to eliminate completely).

2.5 Freeform Lenses

Recently there has been increased interest in freeform lens design, both for general lighting applications (e.g. [74]) and for goal-based caustics [6, 47]. In the latter application, we can distinguish between discrete optimization methods that work on a pixelated version of the problem (e.g. [83, 84, 116]), and those that optimize for continuous surfaces without obvious pixel structures (e.g. [33, 53, 87, 100, 115]). The current state of the art methods define an optimization problem on the gradients of the lens surface, which then have to be integrated up into a height field. This leads to a tension between satisfying a data term (the target caustic image) and maintaining the integrability of the gradient field.

Lens and Phase Function Equivalence. The effects of phase delays introduced by a smooth phase function can be related to an equivalent, physical refractive lens under the paraxial approximation, which can be derived using either geometric optics or from the Huygens principle. The paraxial approximation is widely used in optics and holds when sin θ ≈ θ. For the projection system considered in this thesis, |θ| ≤ 12°, which corresponds to redirecting light from one side of the image to the other. The error in the paraxial approximation is less than 1% for this case (at θ = 12° ≈ 0.209 rad, sin θ ≈ 0.208, a relative error of about 0.7%), which makes optimizing directly for the phase surface possible.

2.6 Tone Mapping Operators and Colour Appearance Models

Image reproduction has a history dating back to the start of photography, leading to significant advances in print reproduction [1], colour reproduction [48] and digital image reproduction [91]. The aim of image reproduction is to offer imagery for human consumption that looks natural, realistic as well as appealing and, above all, correct.
It has long been understood that human visual perception plays a key role, and should therefore be taken into account in any image and video reproduction system [48].

CAMs, for instance, focus on the perception of various attributes of colour, including hue, lightness and colourfulness [27]. Of particular importance for our work is lightness, which in colour science is loosely defined as the impression of how much light a patch of colour appears to emit, relative to a patch of colour perceived as white. In psychology, lightness is extensively studied as well, and it is found that its perception depends crucially on the presence of light sources, their intensities, distances as well as sizes ([37] and references therein). It is therefore a complex spatial phenomenon, not yet extensively explored in either colour appearance modeling or high dynamic range imaging.

The CIECAM02 colour appearance model is the most recent industry standard, adopted by the Commission Internationale de l'Éclairage (CIE) [76]. CAMs can be used to adapt the tristimulus value of a patch of colour for observation under different viewing conditions. This is commonly achieved by operating the model in reverse, substituting parameters describing the new viewing environment. Although these models are accurate in their prediction of colour appearance, they are validated only under certain conditions, specifically for a limited range of illumination levels. One reason is that these models are derived from psychophysical data collected for a limited range of luminances [69]. A second reason is that, when applying a CAM in both forward and reverse mode, significant discrepancies in lighting conditions are difficult to account for [92].

This can be remedied by deriving spatially varying colour appearance models, known as image appearance models [29, 56, 86]. These models are often closely related to CIECAM02, but include a spatially varying component to better deal with dynamic range mismatches.

In the area of dynamic range reduction, image appearance models are the exception in that their design naturally includes colour management. Tone reproduction operators are specifically designed to deal with large differences between the dynamic range of an image and that of the target display. Global operators compress the image by applying a single function to each pixel [24, 32, 109, 112]. The shape of this function is often sigmoidal [90, 94] or close to sigmoidal (see Chapter 7 of Reinhard et al. [95] for a discussion). Note that such response functions are known to describe photoreceptor output well [111].

Local operators add a spatially varying compressive function to take pixel neighbourhoods into consideration. This often takes the form of a Gaussian filter to approximate locally adaptive processes [12, 94], may use stacks of band-pass filters [67], or possibly employ edge-preserving smoothing operators that help minimize haloing artifacts [25]. They can also serve the purpose of local contrast management [30]. To our knowledge, only one tone reproduction operator is spatially variant by taking inspiration from lightness perception [55]. This method segments the image into separate regions, called frameworks, which each have a significantly different average luminance.
This allows each framework to be treated semi-independently. Our model is also inspired by lightness perception, although we embed our technique into a colour management algorithm rather than a tone reproduction operator, allowing us to simultaneously manage lightness perception, dynamic range as well as colour appearance.

Chapter 3

Visual Perception and Colour Appearance

Managing the appearance of images across different display environments is a difficult problem, exacerbated by the proliferation of high dynamic range imaging technologies. Tone reproduction is often limited to luminance adjustment and is rarely calibrated against psychophysical data, while colour appearance modeling addresses colour reproduction in a calibrated manner, albeit over a limited luminance range. Only a few image appearance models bridge the gap, borrowing ideas from both areas. Our take on scene reproduction reduces computational complexity with respect to the state-of-the-art, and adds a spatially varying model of lightness perception. The predictive capabilities of the model are validated against all psychophysical data known to us, and visual comparisons show accurate and robust reproduction for challenging high dynamic range scenes.

3.1 Introduction

Traditional imaging pipelines are designed around the abilities of conventional capture and display devices, and therefore do not handle dynamic range beyond what can be represented with a single byte per pixel per colour channel. Light in the world around us cannot be well represented by such highly quantized values. This has led to the development of a collection of capture, processing and display technologies that are collectively termed HDR imaging [72, 95]. In particular, capture and display hardware technologies are rapidly maturing [103, 107], opening up new opportunities in entertainment and broadcasting, but bringing new demands on encoding, storage and especially on display.

For instance, dynamic range reduction [77, 95] or expansion operators [5] address luminance mismatches between image and display, and may take display capabilities into account [71]. Many of these techniques offer sophisticated mechanisms and models to handle the extensive luminance range of HDR, often inspired by aspects of the visual system. More often than not, however, tone reproduction operators treat colour as a separate modality, and with much less precision. Usually, post-processing steps are applied to reduce the saturation of colours [70, 99]. Although this may lead to satisfying results in many cases, they are not sufficient for accurately modelling the appearance of colours under different conditions. For instance, a colour timer preparing a movie for the cinema will need to take into account the specific viewing conditions likely to occur (dark room, large display, varying viewing distances). On the other hand, when preparing the same movie for home viewing, to ensure that the viewer will have the same experience as the cinema goer, different viewing conditions are considered.

CAMs may be employed to predict the appearance of a given colour under different conditions (which are specified as inputs to the algorithm) [27]. As opposed to tonemapping, most CAMs are designed with a focus on colour and less so on dynamic range, making them less appropriate for dealing with HDR data as they offer little compression.
Although CAMs offer high predictive power for single patches of colour, spatial relations in an image can greatly affect the appearance of colours, requiring spatially varying image appearance models instead [56].

Another mismatch between existing tonemapping and colour appearance solutions is the treatment of the viewing environment. In the former, display capabilities are rarely taken into account, albeit with one notable exception [71]. In CAMs, on the other hand, the room illumination is taken into account in a symmetric manner to the original scene environment. Mixed adaptation to both room and display is not considered in any model that we are aware of.

3.2 Contributions

We therefore propose a novel, fully calibrated model for reproducing the appearance of images under a wide range of different scene and display/viewing conditions. As an example, the image in Figure 3.1 is processed for several different viewing and display conditions, leading to images that, when seen under the corresponding conditions, will have the same appearance. Algorithmically, the novelty of our approach is that, rather than matching the input to the visual system between scene and view environment, our algorithm crucially achieves relative computational simplicity by matching an intermediate state of visual processing, namely photoreceptor responses. We borrow from lightness perception to implement a spatially varying response to the input, leading to a natural reproduction of scene appearance.

Figure 3.1: Our appearance reproduction model faithfully reproduces colour, taking into account different viewing environments and display types, as shown here for different combinations of viewing environment and display. The plot shows the image histogram (grey) as well as the input/output mapping (red, green and blue) for this particular image. The right panel shows that contrast is accurately reproduced for most pixels (shown grey), as determined by the dynamic range independent image quality metric [3].

The algorithm is uniquely matched against all psychophysical corresponding colour and colour appearance data known to us. This allows us to prepare images as well as video for observation under known viewing conditions, as well as predict appearance correlates. We believe that this algorithm brings together best practices from research in colour appearance modeling, tone reproduction research as well as knowledge from human lightness perception, leading not only to calibrated, but also visually pleasing results. In summary, we offer the following contributions:

• We describe a calibrated and extensively evaluated global model that can match image appearance over a wide range of conditions and displays.

• We derive correlates for appearance characterization using a significantly simpler formulation than any existing models and achieve comparable predictive performance.

• We propose a novel formulation for modelling local aspects of lightness perception, which allows us to take display size, viewing distance and many other factors into account.

In Section 3.3 we describe our main model and in Section 3.4 we show how it is extended to include a spatially varying notion of lightness perception. The model is evaluated in the context of several applications in Section 3.5, while conclusions are drawn in Section 3.6.
Figure 3.2: An overview of the steps of our model.

3.3 Our Model

We work under the hypothesis that it would be possible to match the output of photoreceptors, and possibly further stages of visual processing, to obtain a visual match. The aim is therefore to derive a model (a flow chart is shown in Figure 3.2) to process images such that, when presented on a known display device under known illumination, they will elicit a perceptual response identical to the original scene environment in which the image was captured. This means that in addition to the input image (a), which we assume to be specified in calibrated photometric units¹ (cd/m²), the model requires a set of parameters that characterizes the environment in which the image was taken (e), as well as a characterisation of the display device (f2) and the viewing environment (f1). These parameters are compatible with those used in other colour and image appearance models. The output of the model is then a new image (g2) which can be post-processed according to a desired rendering intent (h) and displayed on the specified display (i), generating the correct percept.

¹ The model produces plausible results even if the input image is not calibrated, although the resulting output image in that case is also not necessarily calibrated.

Additionally, it is possible to calculate a set of appearance correlates (g1) which describe how pixels in the input image would be perceived in the environment in which the image was captured. These are computed on the output of the global model, and this is discussed in Section 3.5.3. These correlates allow the model to serve as a standard colour appearance model.

We model the pathway that light takes from entering the eye until the computation of the neural response generated by the photoreceptors. This includes modulating the pupil size (Figure 3.2(b)), accounting for bleaching (c), as well as subsequent modeling of the neural response of the three cone types (d). Although tone reproduction and colour appearance models often omit the effects of pupil size and bleaching, we have found that including these allows the model to accurately predict photoreceptor output with a straightforward model. The non-linear response is modulated by a spatially varying measure of scene adaptation (e), which is inspired by lightness perception and takes into account size, distance and strength of the light sources in the scene (Section 3.4). Before describing each stage, we begin by discussing the input parameters and their derivation.

3.3.1 Input Parameters

The first and most important input to the algorithm is an image I with N pixels I_n = (X_n, Y_n, Z_n) that ideally would be linear and specified in absolute values. We assume that the image is given in the CIE XYZ colour space, and that the Y channel is in cd/m². Like all colour appearance models, the scene in which the photograph was taken needs to be characterised. This is typically done by specifying an adapting luminance L_a,s as well as the white point of the dominant illuminant, specified as a CIE XYZ tristimulus value W_a,s = (X_W,s, Y_W,s, Z_W,s) (with the Y channel normalized to 100)². These parameters can be estimated from the image, and although it would be possible to derive one set of parameters for the entire image, we believe that for successful appearance reproduction it would be better to estimate these parameters in a spatially varying manner.
This will allow us to take proximity of light sources, their size and intensity into account, thereby effectively approximating human lightness perception. This estimation technique is described in Section 3.4.

Following CIECAM02 we compute the degree of adaptation D_s, an interpolant which models to what extent the visual system is adapted [76]:

D_s = 1 − (1/3.6) exp((−L_a,s − 42) / 92).    (3.1)

In our calculations we also need a notion of the maximum values that occur in the image. However, directly measuring the maximum value will not lead to a robust algorithm. Instead, we compute the maximum scene luminance by taking the 90th percentile, giving L_a,max,s. This value was chosen such that small but extremely bright regions (e.g. the sun) would not bias further computations. The associated maximum white point is W_a,max,s = W_a,s L_a,max,s / L_a,s, a scaled version of the log average white point.

² Note that we use 's' in subscripts to denote scene referred parameters, 'd' to refer to display parameters, and 'v' to refer to the viewing environment in which the display is located. If these identifiers are omitted, the variable applies equally to all scene, display and viewing conditions.

In most colour appearance models, each of the input parameters has a counterpart that describes the display environment. However, the current and future range of display capabilities means that it may not be sufficient to only describe the viewing environment: humans may adapt to both the viewing environment as well as the display itself, an issue particularly important for HDR displays. We therefore specify two counterparts for each scene-related parameter: one describing the display and one describing the viewing environment. These parameters should be specified by the user, whereby we note that we typically use the display white point for W_a,d = (X_W,d, Y_W,d, Z_W,d), and measure a white piece of paper using a chromameter to derive the viewing white point W_a,v = (X_W,v, Y_W,v, Z_W,v). We then infer the following parameters: L_a,d = Y_W,d, L_a,v = Y_W,v, L_a,max,d = 5 L_a,d, L_a,max,v = 5 L_a,v and finally, Y_b,d = Y_b,v = 20 and D_d = D_v = 1.

In the viewing environment, the room illumination as well as the display may contribute to the state of adaptation of the viewer. Although partial adaptation is currently not a fully understood mechanism, we assume that the viewing and display illuminants contribute relative to their intensities, as well as the proportion of the field of view taken up by the display. As a result, we combine the room and display parameters into a unified set of parameters. This is achieved by measuring the visual angle α of the display relative to the entire visual field, and weighting L_a,d and L_a,v according to this fraction. This leads to a new L_a,v defined as α L_a,d + (1 − α) L_a,v. We weight all other display and viewing parameters in the same manner, reducing the number of viewing/display related parameters by half.

Finally, we convert the image as well as all tristimulus values to LMS cone space by means of the Hunt-Pointer-Estévez transform [27]. In the following we describe the model of human vision that we employ using scene referred parameters. An exact counterpart can be assumed for the viewing environment.
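As a side note, the parameter estimation described above is simple enough to state in a few lines. The sketch below is our own Python/NumPy illustration (function names and the example numbers are not from the thesis): it evaluates the degree of adaptation of Equation (3.1), estimates the adapting luminance as the geometric mean and the robust maximum via the 90th percentile, and blends display and room adapting luminances using the display's share of the visual field.

```python
import numpy as np

def degree_of_adaptation(L_a):
    """Degree of adaptation, Eq. (3.1) (CIECAM02-style interpolant)."""
    return 1.0 - (1.0 / 3.6) * np.exp((-L_a - 42.0) / 92.0)

def estimate_scene_parameters(Y):
    """Estimate global scene parameters from a luminance image Y in cd/m^2:
    the adapting luminance as the geometric mean (log average) and the
    robust maximum luminance as the 90th percentile."""
    L_a_s = np.exp(np.mean(np.log(np.maximum(Y, 1e-6))))
    L_a_max_s = np.percentile(Y, 90)
    return L_a_s, L_a_max_s, degree_of_adaptation(L_a_s)

def mixed_adapting_luminance(L_a_d, L_a_v, alpha):
    """Combine display and room adapting luminances, weighted by alpha, the
    visual angle of the display relative to the entire visual field."""
    return alpha * L_a_d + (1.0 - alpha) * L_a_v

# Illustrative values only: a synthetic HDR frame, and a 60 cd/m^2 laptop
# display whose visual angle covers roughly 30% of the field of view in a
# bright (800 cd/m^2) room.
Y = np.random.uniform(0.05, 4000.0, size=(1080, 1920))
L_a_s, L_a_max_s, D_s = estimate_scene_parameters(Y)
L_a_view = mixed_adapting_luminance(L_a_d=60.0, L_a_v=800.0, alpha=0.3)
```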
The output of these two models is then combined into a single algorithm that transforms the image such that appearance is preserved.

3.3.2 Pupil Size

The pupil size is normally interpreted as a function of the adapting luminance L_a,s and thus could be seen as providing an optimal aperture for the given environment. However, it could also be interpreted as a protective device, constricting the pupil in bright environments to minimise the risk of damage to the retina. In that case, pupil area would be a function of the maximum adapting luminance L_a,max,s, used in the formulation for pupil area A_s [75]:

A_s = π (2.45 − 1.5 tanh(0.4 ln(L_a,max,s + 1)))^2 mm².    (3.2)

The input image as well as all the white points and adapting luminances can be converted to trolands by multiplying by pupil area A_s, yielding retinal illuminance.

3.3.3 Bleaching

The opsins within the photoreceptors can change state if hit by too much light, stopping them temporarily from being able to transduce light. The probability that a photoreceptor is able to function is given by [43]:

p(W) = 4.3 / (4.3 + ln(W)).    (3.3)

The vector notation used here is to indicate that we compute the effect of bleaching for each of the three colour channels separately. We compute factors for W_a,s, W_a,max,s and W_a,v. This means that the effective retinal illuminance for these two white points is:

W^e_a,s = W_a,s ⊙ p(W_a,s) A_s    (3.4)
W^e_a,max,s = W_a,max,s ⊙ p(W_a,max,s) A_s.    (3.5)

While we could compute the effective retinal illuminance for the image as well, it would be computationally more efficient to compute a factor f_s = A_s p(W_a,s) that is used to account for retinal illuminance in the final mapping function introduced in Section 3.3.5.

3.3.4 Photoreceptor Response

We choose a model of photoreceptor behavior that is similar to existing models as they have been applied to tone reproduction as well as colour appearance modelling, i.e. we use a model that is a variant of the well-known Michaelis-Menten equation³:

V_s = V_max,s I / (I + σ_s V_max,s / f_s)    (3.6)

where V_s is the neural response due to stimulus strength I, the semi-saturation constant is given by σ_s and the maximum response is given by V_max,s. The factor f_s accounts for bleaching and pupil size, as discussed in Section 3.3.3. The main difference with the standard sigmoidal form is that the semi-saturation constant is multiplied by V_max,s. This allows us to match scene and viewing environments in a novel and interesting way, as shown in the following section. Moreover, this equation can be rewritten as

V_s = I / (I V_max,s^−1 + σ_s / f_s),    (3.7)

which shows that the output of this system is not normalized, but can vary according to the choice of V_max,s.

³ Often the Naka-Rushton equation is used instead, which has the same form, albeit with the I and σ_s terms exponentiated with n ∈ [0, 1]. In CIECAM02 n is 0.42, whereas Kim et al. [51] use n = 0.73. There is significant debate as to what value would be optimal, although measurements have shown that for the normal retina we have n = 1 ([34], page 17); hence our choice to omit the exponent.

Most models of human vision choose V_max,s to be constant. However, it has been shown that this value should vary according to overall illumination [43]. In particular, the maximum neural response reduces somewhat in bright environments.
We surmise that its value is related to the maximum effective retinal illuminance, so that we can compute V_max,s as:

V_max,s = k ((θ + W^e_a,max,s) / θ)^−0.5.    (3.8)

Although the literature does not specify values for k and θ, by optimising against a set of corresponding colour datasets [66] we have found that k = 34 and θ = 67 yield consistently good results.

The semi-saturation constant σ_s models the neural mechanism of adaptation, and as such is a function of both the adapting luminance as well as the adapting white point. Following Kunkel and Reinhard [57], we find that interpolating between these values according to the degree of adaptation D_s produces an appropriate triplet of semi-saturation constants σ_s:

σ_s = D_s W^e_a,s + (1 − D_s) A_s L_a,s,    (3.9)

where L_a,s = L_a,s (1, 1, 1) is a scaled identity vector. The neural response for the scene environment V_s is now considered to be the quantity that gives rise to all further perceptual effects. It is therefore desirable to match this neural response across viewing conditions, which we discuss next.

3.3.5 Final Mapping Function

We assume that the neural response of the scene environment V_s needs to be recreated in a potentially different viewing environment. If we were to display the image in a given room on a given display, this would elicit a neural response V_v. We aim to modify the input image such that, when observed in this viewing environment, the neural responses are matched.

The modification of the input image to the display image can be written as L_d = g(I). In general there is no guarantee that any mapping g() leads to an appearance match. Alternatively, one could require that V_v(L_d) = V_s(I), effecting a direct appearance match, for whichever functional form of V was chosen. Solving this equation for the tone reproduction operator L_d leads to a solution that encodes the forward and backward steps as seen in all colour appearance models and some tone reproduction operators. If the photoreceptor model was chosen as a sigmoidal compressive function, however, then it can be shown that the functional form of the tone reproduction operator follows a power law [92], which is fine for images of medium dynamic range but not sufficiently compressive for high dynamic range imagery.

On the other hand, sigmoidal operators that omit the backward step (e.g. [94]) have been shown to produce plausible images [64], despite being theoretically incorrect. The latter stems from the fact that sigmoidal compression produces perceived values which are then displayed as if they were luminances. As a result, the observer's visual system effectively perceives these values a second time.

We can close this gap between theory and practice by noting that the precise form of our neural response computation allows us to match V_v and V_s in parts rather than as a single equation. We will show how this key idea leads to a desirable result in the following.

We begin by choosing a specific form for our mapping function g(), such that display values L_d are computed from the input I as follows:

L_d = L_max I / (I + τ L_max) = I / (I L_max^−1 + τ).    (3.10)

Note that our computational model has exactly the same functional form as the assumed photoreceptor model of (3.6). Suppose we map the input image with this function and display the result; then the neural response of
an observer's photoreceptors would be:

V_v = V_max,v (L_max I / (I + τ L_max)) / (L_max I / (I + τ L_max) + (σ_v / f_v) V_max,v)    (3.11)
    = (V_max,v / (1 + (σ_v / f_v)(V_max,v / L_max))) · I / (I + τ (σ_v / f_v) V_max,v / (1 + (σ_v / f_v)(V_max,v / L_max)))    (3.12)
    = c_1 I / (I + c_2).    (3.13)

We note that (3.13) has once again the same form as (3.6). This is crucially important, as equating V_v to V_s can now be achieved by equating the constants c_1 and c_2, leading to two equations from which we can solve our two unknowns L_max and τ:

V_max,s I / (I + (σ_s / f_s) V_max,s) = (V_max,v / (1 + (σ_v / f_v)(V_max,v / L_max))) · I / (I + τ (σ_v / f_v) V_max,v / (1 + (σ_v / f_v)(V_max,v / L_max))).    (3.14)

The two parts c_1 and c_2 correspond to the following two equations:

V_max,s = V_max,v / (1 + (σ_v / f_v)(V_max,v / L_max))    (3.15)
(σ_s / f_s) V_max,s = τ (σ_v / f_v) V_max,v / (1 + (σ_v / f_v)(V_max,v / L_max)).    (3.16)

Solving this system of equations for L_max and τ gives:

L_max = (σ_v / f_v) · 1 / (1/V_max,s − 1/V_max,v)    (3.17)
τ = (σ_s / f_s)(f_v / σ_v).    (3.18)

In practice, we would implement the right-hand side of (3.10), and therefore need L_max^−1, which also has the mathematical advantage of approaching zero when the scene and viewing conditions are identical (and thereby V_max,s = V_max,v). Our method is formally not suitable to handle scene and display environments whereby the maximum scene luminance is smaller than the maximum luminance in the viewing environment. In that case we have V_max,s > V_max,v (due to (3.8)), resulting in L_max < 0. Bearing in mind that the range of values attained by V_max is small, we therefore introduce a straightforward variation to L_max^−1 that will produce plausible images even when this condition is not met:

L_max^−1 = (f_v / σ_v) |1/V_max,s − 1/V_max,v|.    (3.19)

An important observation is that our mapping function (3.10) reduces to the identity operator in case the scene and viewing environments are the same. This means that for images which do not require compression for the given viewing environment, the operator leaves pixels unaltered.

The algorithm calculates in cone response space three fully independent channels, which means that it is a strict von Kries model [27]. Although most colour appearance models compute chromatic adaptation in a separate sharpened colour space as a preprocess, we have folded chromatic adaptation into the computation of three different semi-saturation constants, a technique first shown to be viable by Kunkel and Reinhard [57].

As a result, the model can be directly evaluated against corresponding colour datasets. Such psychophysical datasets are obtained by showing observers a patch of colour under a specific illuminant. The patch is then shown under a different illuminant, and the observer is asked to adjust the tristimulus value of the patch until it appears identical. We evaluate the model against the CSAJ, Kuo and Luo, Lam and Rigg, Helson et al., Breneman, and the Braun and Fairchild datasets [66], as shown in Figure 3.3.
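For reference, the complete global mapping that is evaluated here reduces to a few operations per pixel. The following sketch is our own Python/NumPy illustration (the example parameter values are hypothetical, not taken from the thesis): it computes τ and L_max^−1 from Equations (3.18) and (3.19) and applies the mapping of Equation (3.10) per colour channel.

```python
import numpy as np

def appearance_mapping(I, V_max_s, V_max_v, sigma_s, sigma_v, f_s, f_v):
    """Global mapping of Eq. (3.10), with tau and 1/L_max chosen per Eqs. (3.18)
    and (3.19) so that photoreceptor responses in the scene and viewing
    environments are matched. All arguments are per colour channel."""
    tau = (sigma_s / f_s) * (f_v / sigma_v)                              # Eq. (3.18)
    L_max_inv = (f_v / sigma_v) * np.abs(1.0 / V_max_s - 1.0 / V_max_v)  # Eq. (3.19)
    return I / (I * L_max_inv + tau)                                     # Eq. (3.10)

# Illustrative (made-up) parameters for one channel: a bright scene mapped to a
# dimmer viewing environment. When both environments coincide, tau = 1 and
# L_max_inv = 0, so the mapping is the identity, as noted in the text.
I = np.array([0.5, 10.0, 200.0, 4000.0])     # input stimulus values
Ld = appearance_mapping(I, V_max_s=28.0, V_max_v=33.0,
                        sigma_s=60.0, sigma_v=9.0, f_s=2.5, f_v=7.0)
print(Ld)
```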
For the three colour channels, the root mean square error is 2.0183, 1.6863 and 2.181 (overall 1.97), which is within 8% of CIECAM02 (but note that we achieve this performance without requiring a separate chromatic adaptation step).

Figure 3.3: The output of (3.10) for each of its channels plotted against seven different corresponding colour datasets.

3.3.6 The Hunt and Stevens Effects

Two important colour appearance phenomena need to be taken into account, as they play a particularly important (yet mostly unexplored) role in high dynamic range imaging. The first is the Hunt effect, which predicts that the perception of saturation covaries with scene luminance [27]. This effect is implicitly handled in our model by defining independent compressive curves for each of the three colour channels.

The Stevens effect predicts that the perception of contrast co-varies with scene luminance [27]. We model this effect by re-introducing some of the contrast that was lost by applying (3.10). A reliable way to do this is by employing a bilateral filter-based unsharp mask [88]. Here, we use a bilateral filter B_ρspace,ρL with spatial constant ρ_space = 0.1 d_disp max(1, L_a,s / L_a,v), where d_disp is the display diagonal. The intensity constant is ρ_L = 0.2 ΔL, where ΔL is the scene's luminance range. The unsharp mask U is then computed as U = Y_d − B_ρspace,ρL(Y_d), where Y_d is the luminance channel derived from L_d, the output of (3.10), by conversion to Yxy space. The contrast enhanced image luminance is then given by Y'_d = Y_d + 0.3 U. The computation is carried out in Yxy space for convenience, but could be carried out directly in LMS space.

Figure 3.4: An image without (left) and with (right) inclusion of our model of the Stevens effect, showing that the perception of contrast is maintained better in the right image.

3.3.7 Post-Processing

The image obtained by executing (3.10) will create a visual match in the chosen viewing environment, under the condition that the display range is sufficient. In essence the image is prepared for a particular viewing condition, as opposed to a particular display device. As a result, pixels may be mapped to values outside the display's range. This would occur when the display is dim relative to the viewing environment (e.g. when taking a laptop outside into the sun).

In such cases it is inevitable that some information is lost. To decide how to display an image, we introduce two rendering intents that post-process the image according to the wishes of the observer. The most precise way to display pixels is to simply clamp all pixels that are above the maximum display value. We call this rendering intent physical. It maps as many pixels as accurately as possible. Alternatively we can normalise the image to the display range, a rendering intent we call linear. This produces plausible images despite mapping pixels to values that are potentially either too high or too low.

Note that such issues would not occur if the observer chose the illumination in the viewing environment to be appropriately matched to the display capabilities.
As such, these rendering intents are created to allow the display of images under less than ideal circumstances.

Finally, as usual, the image should be gamma corrected prior to display to account for display non-linearities.

3.4 Local Lightness Perception

The model described so far affects all pixels in the image in the same way, as it produces a single tone curve per channel. Although this is sufficient for many images, in more extreme scenarios, such as the image shown in Figure 3.1, the resulting tone curve is appropriate for neither the bright nor the darker parts of the image. A better solution would be to adapt the curve to take into account local trends in the image.

In the human retina, lateral interconnections allow such adaptation to occur. Examples are horizontal cells that modify photoreceptor output [60] as well as various amacrine cell types. These cells form neural substrates which compute some form of spatial adaptation. Many other lateral, forward, and backward interconnections occur further along the visual pathway. These together give rise to various perceptual phenomena, one of which is the notion of lightness.

Studies of lightness perception have shown that the size of a region in a scene, in addition to distance and strength, plays a crucial role in how that part of the scene is perceived. Gilchrist et al. [36] formalised these effects in the following rule, known as the Area Rule: "In a simple display, when the darker of the two regions has the greater relative area, as the darker region grows in area, its lightness value goes up in direct proportion. At the same time the lighter region first appears white, then a fluorescent white and finally, self-luminous."

With this in mind, our model of lightness perception considers luminance, distance as well as region size. The first issue then is naturally the detection of appropriate regions in the input image. Although some form of segmentation could be used [55], this would be a costly operation and would require an additional cleanup step.

For our solution, we draw inspiration from importance sampling techniques and specifically the median cut algorithm [22], which recursively subdivides the image into R regions r of equal luminous energy for a given number of iterations. A light source is then placed in the centre of each region (or in a different position, such as a centroid determined according to the energy distribution within that region) and its colour is computed as the region's average pixel value (see Figure 3.5).

Figure 3.5: A visualization of the median cut algorithm [22], creating regions which can be represented with a single point acting as its representative light source.

This method is both robust and efficient, subdividing the image into regions of approximately equal energy. These regions can naturally serve as neighborhoods over which the scene parameters L_a,s, W_a,s and D_s are estimated locally. In addition, the sizes of the different regions are implicitly computed in the subdivision process. For instance, Figure 3.5 shows that approximately half the regions are allocated to light parts of the scene, but these are much smaller than their darker counterparts. This encodes the relation between light and dark parts of the image and allows us to approximate the area rule. We have found that 7 levels of recursion (128 regions) create regions that represent their local areas well.
Subdividing into more regions only creates more computations without further benefit, while using fewer regions gracefully degrades the result, to obtain a global operator in the limit of using the image as a single region.

We restrict the minimum size of a region to 10 pixels to ensure that small, bright regions such as highlights are not over-represented in the estimation of adaptation levels. We position the virtual light source representing each region at its center. For each region r ∈ R, the pixels within that region are used to compute local adaptation levels L_a,s,r (computed as the region's geometric mean), degree of adaptation D_s,r (Equation (3.1)) and effective retinal illuminance W^e_a,s,r (Equation (3.4)). Note that the pupil size used for the computation of W^e_a,s,r is computed based on global scene parameters, as the viewer will be observing the scene as a whole (Equation (3.2)). This is also the case for V_max,s (and consequently L_max^−1) as it represents a global scaling to the maximum response of the photoreceptors (Equations (3.8) and (3.19)).

The semi-saturation value that drives the compressive part of our model is then computed for each pixel based on its distance to each of the virtual light sources. This computation also takes into account the size of the display and the viewing distance, effectively including visual angle as a factor for determining the contribution of each light source for a particular pixel. Thus, for any image pixel I_n, a weight w_n,r ∈ w_n is computed per region r, based on the pixel distance d_n,r between the pixel and the center of each region r:

w_n,r = (1 − cos(π/2 − d_n,r / (d_diag ω)))^4.    (3.20)

Here, d_diag is the display diagonal in pixels, which is used to normalize the pixel distances for each region, and ω = θ_D / 120° is the ratio between the visual angle covered by the display itself and the full (binocular) visual field of the viewer, which we set to 120° [104]. The visual angle θ_D of the display is computed based on its physical diagonal size d_size and the expected viewing distance d_view as θ_D = 2 tan^−1(0.5 d_size / d_view). The weights w_n are finally normalized so that they sum to 1 for each pixel. Pixels are then assigned their own values for adaptation luminance, adaptation white point and degree of adaptation. For instance, the adapting luminance is:

L_a,s,n = Σ_{r=1}^{R} w_n,r L_a,s,r.    (3.21)

The per pixel degree of adaptation D_s,n and white points W^e_a,s,n are computed analogously. A semi-saturation triplet σ_s,n is then computed for each pixel I_n using a spatially varying version of (3.9):

σ_s,n = D_s,n W^e_a,s,n + (1 − D_s,n) A_s L_a,s,n,    (3.22)

where L_a,s,n = L_a,s,n (1, 1, 1). With this per pixel parameter the image is then processed using the model described in Section 3.3.5.

Our approach allows us to model spatial aspects of lightness perception and preserves sharp edges in the image without introducing artefacts. Spatial modulations are overall very low frequency and their intensity depends on the specific viewing conditions.
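The per-pixel blending just described amounts to a smooth, display-aware interpolation of the per-region estimates. The sketch below is our own Python/NumPy illustration of Equations (3.20) and (3.21); the array shapes, the clamp on the cosine argument and the small weight floor are our assumptions added for robustness and are not part of the thesis.

```python
import numpy as np

def display_fraction(d_size, d_view, full_field_deg=120.0):
    """omega = theta_D / 120 degrees, with theta_D the visual angle of the display
    computed from its physical diagonal d_size and the viewing distance d_view."""
    theta_D = np.degrees(2.0 * np.arctan(0.5 * d_size / d_view))
    return theta_D / full_field_deg

def per_pixel_adapting_luminance(dists, L_a_regions, d_diag, omega):
    """Blend per-region adapting luminances into a per-pixel value.
    dists: shape (R, H, W), the distance in pixels of every pixel to each region
    centre; L_a_regions: per-region adapting luminances, shape (R,)."""
    # Eq. (3.20); the angle is clamped to [0, pi/2] and the weights floored at a
    # tiny positive value in this sketch so the falloff stays monotonic and the
    # per-pixel normalisation never divides by zero (our own guards).
    angle = np.clip(np.pi / 2.0 - dists / (d_diag * omega), 0.0, np.pi / 2.0)
    w = np.maximum((1.0 - np.cos(angle)) ** 4, 1e-6)
    w /= np.sum(w, axis=0, keepdims=True)           # normalise weights per pixel
    # Eq. (3.21): weighted sum of the per-region estimates.
    return np.sum(w * L_a_regions[:, None, None], axis=0)

# Illustrative use: a 15 inch display viewed from 24 inches, three regions on a
# full-HD image (diagonal of roughly 2202 pixels).
omega = display_fraction(d_size=15.0, d_view=24.0)
dists = np.random.uniform(0.0, 2202.0, size=(3, 1080, 1920))
L_a_n = per_pixel_adapting_luminance(dists, np.array([5.0, 80.0, 900.0]),
                                     d_diag=2202.0, omega=omega)
```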
If the image is intended to be viewed on a laptop monitor in typical office conditions, it will only fill a relatively small part of the viewer's field of view, and other light sources are likely to play a role in their overall adaptation, meaning that all parts of the image will be perceived in a similar way. On the other hand, if the target viewing condition is a cinema, the surrounding illumination is likely to be extremely low and the screen very large. Consequently, the displayed image will largely determine the adaptation of the viewer. If the original scene was much brighter than the display or projector can handle, effects due to lightness perception that would be apparent if the viewer were in the original environment need to be simulated in the compressed output to match the scene appearance.

Figure 3.6: This scene was rendered with our algorithm, using a global set of parameters (left) and per pixel parameters (right), approximating lightness perception.

3.5 Applications

In this section we describe three usage scenarios for which our model is applicable. The first is what we call scene reproduction, whereby (un-)calibrated images are prepared for viewing in a specific viewing environment. The second is video reproduction, demonstrating the use of our technique on high dynamic range video, and finally we present our take on appearance prediction, which requires the calculation of appearance correlates. In these sections we compare our results to the state-of-the-art in tone reproduction, colour appearance and image appearance models.

3.5.1 Scene Reproduction

The main aim of our work is to reproduce scene appearance across a variety of viewing conditions. With this we mean to reproduce the overall look and feel of a scene. This is not the same as reproducing a photograph of a scene, which would produce an altogether different look. An example demonstrating this subtle but important difference is shown in Figure 3.6. The left image was created with our algorithm, although we have set L_a,s and W_a,s to the image's global geometric mean, rather than computing these values per pixel, as our standard algorithm would do (right image). We note that the left image looks more like a photograph of a scene than like the scene itself. In our opinion the right image is a perceptually more correct representation of the scene. In particular, note the bark on the tree in the right image. The part of the tree trunk shown against the sky is both darker and less rich in detail than the bottom part of the tree trunk. This is indeed how such a scene appears as a result of human lightness perception, although this is rarely captured in photographs.

It is important to note that we intend to match appearances, rather than maximize image attributes. This means that our aim is not to produce the most punchy and contrast-rich image. We are, however, aiming to produce well-balanced images with the correct distribution of intensities, contrasts and colours. Consider Figure 3.7, where we compare our result with a set of state-of-the-art colour appearance models and tone reproduction operators. This well-known image is difficult to reproduce accurately due to its large dynamic range, as well as its strong colour cast. Notice that in our scene reproduction (top left) the colours are reproduced accurately.
The marble of the stairs can be clearly differentiated from the rest of the floor, the gold on the walls appears as gold, and the wood on the ceiling looks like wood.

Figure 3.7: A visual comparison of our results with a variety of colour appearance, image appearance and tone reproduction operators (panels: Our model, CIECAM02, iCAM06, Kunkel 09, Kim et al. 09, Reinhard et al. 02, Mantiuk 08). The mapped image is shown in the top row, and the bottom row shows the difference between the input high dynamic range image and the output image, according to the dynamic range independent quality assessment metric [3]. This metric shows where contrasts have changed above visible threshold (see text).

The dynamic range independent quality assessment metric [3] shows that comparing input with our output yields minimal distortions (bottom left in Figure 3.7). Our result was created using default parameters that describe an average laptop in an office environment⁴. The encoding of colour in the quality metric images is as follows: green pixels show loss of contrast, red indicates amplification of contrast, and blue indicates a contrast reversal. Grey shows image areas without artifacts.

⁴ Display: L_a,d = 60 cd/m², L_a,max,d = 191, W_a,d = (55, 60, 65), W_a,max,d = (172, 191, 190); Viewing environment: L_a,v = 800 cd/m², L_a,max,v = 7010, W_a,v = (760, 800, 871), W_a,max,v = (6662, 7010, 7632).

The same figure shows renderings and quality assessments for a set of state-of-the-art colour appearance models and tone reproduction operators. These include CIECAM02 [76], Kunkel's [57] and Kim's [51] colour appearance models, the iCAM06 image appearance model [56], and Mantiuk's display adaptive operator [71] as well as the photographic operator [94]. Where possible, we have used default parameters to create these images. For CIECAM02 this was not possible, requiring hand tuning to find the parameters that best represent the scene.

Each of these algorithms creates images with a very different appearance. Characteristic for tone reproduction operators, the display adaptive and photographic operators do not reproduce colours appropriately, making the scene overly yellow. As expected, the colour and image appearance models do better in that respect, but tend to make the image too flat; the quality metric shows significant loss of contrast for these results. They also make the image either a little too light or too dark. It may be that default parameters are not optimal for this image, although it is more likely, at least for CIECAM02 and Kunkel's method, that the inverse step largely undoes the compression effected by the forward step. This is the reason that we have taken a different approach, having a single compressive step that takes both scene and viewing parameters into account.

A further comparison with Mantiuk et al. [71], iCAM06 [56] and Kim et al. [51] is given in Figure 3.8 for a set of four calibrated images from the HDR survey [26]. All algorithms in this figure take the display environment into account, which was set to an average laptop display, assumed to have a D65 white point. Thus, this figure is best viewed on a D65 display, ideally under a D65 illuminant, zoomed in such that each panel takes the full size of the display. The viewing distance should then be about 30 cm. Note that our method produces even results without over-saturating the images (a problem common in tone reproduction [70]). Our algorithm produces
sufficient contrast to show both colour checkers in the second image from the top with good fidelity. The early sunrise scene shows a dark sky, as expected of a scene captured at 6:19 AM. The night scene produces a well balanced result. Here, the car appears relatively dark as it is depicted against a light store front, simulating the effects of lightness perception.

Figure 3.8: A visual comparison of our output with several models that take the display environment into account (panels: Our method, Mantiuk 08, iCAM 06, Kim et al.). The left column shows the response curves for our method (log-linear plots), showing how they adapt to the image histogram (shown in grey). As each channel is processed independently, strong colour casts are automatically removed, as shown in the top image.

Finally, Figures 3.1 and 3.9 show the effect of preparing an image for different display and viewing environments. Figure 3.9 shows an image prepared for an average laptop (parameters as before) as well as an iPhone screen⁵. Although the differences are subtle, they do allow the snow to be reproduced accurately in each case.

⁵ Display: L_a,d = 100 cd/m², L_a,max,d = 250, W_a,d = (91, 100, 110), W_a,max,d = (227, 250, 276); Viewing environment: L_a,v = 2000 cd/m², L_a,max,v = 5440, W_a,v = (1907, 2000, 1907), W_a,max,v = (5135, 5440, 5472).

Figure 3.9: An image prepared for display on a laptop (left) and an iPhone screen (right), taking into account differences in size, display properties, as well as viewing environments.

3.5.2 Video Reproduction

The proposed model is suited for managing the appearance of video content in addition to images. We have experimented with a range of high dynamic range video segments [107]. A series of frames from a video processed with our model is shown in Figure 3.10. For this video, we simulated a temporally varying illumination environment by using a strongly coloured border (ranging from CIE D75 to CIE A) to which the observer will adapt. Using the colour of this border as input to the algorithm, we prepared the video for observation with this border on a display with a 15 inch diagonal screen and for a viewing distance of 15 inches. Note that this allows the overall appearance of the video to co-vary with the border, thus conveying the correct appearance.

Figure 3.10: A series of frames from an HDR video processed with the proposed model. The adapting white point is progressively changed in the video to demonstrate how the image is modified to simulate adaptation to the varying white point.

Although the method cannot offer theoretical guarantees of temporal stability, we note that, in practice, the algorithm did not introduce flicker in the videos we have tested. If temporal stability needs to be guaranteed, it should be possible to apply a leaky integrator to σ_s,n [52].

3.5.3 Appearance Prediction

The model can be used to derive appearance correlates, including lightness, hue and colourfulness, that describe colour under specific viewing conditions [27]. In colour science, such predictions are highly valuable as they allow inferences to be made as to how humans perceive colour in context. In our model, computing appearance correlates requires specifying the scene environment and resetting the viewing and display environments. The latter is done by setting the following parameters: L_a,v = 50, Y_b,v = 20, L_a,max,v = L_a,v 100 / Y_b,v, W_a,v = (50, 50, 50) and W_a,max,v = W_a,v L_a,max,v / L_a,v.
The scene environment should be measured, for instance with a photometer, leading to a specification of the scene parameters L_a,s and W_a,s. Colour appearance models normally also specify a relative background Y_b,s, which we set to 20. This allows us to compute the maximum adapting luminance as L_a,max,s = L_a,s 100 / Y_b,s.

Under these conditions, the correlate of lightness J is computed as the ratio of the achromatic signal and achromatic white:

J = 121 ((L_a,s / (L_a,s + 168)) (L_d · w_j) / (L_d,W · w_j))^(0.29 z),    (3.23)

where L_d,W is the result of passing the scene white point W_a,s through (3.10), z = 1.48 + (Y_b,s L_a,s / 100)^0.5 and w_j is a vector of weights given by (7.53, 6.93, 1.43). Following other appearance models [51, 57, 76], the correlate of hue H is computed on the basis of opponent signals a and b:

H = (180/π) tan^−1((L_d · b_h) / (L_d · a_h)),    (3.24)

where a_h = (12.59, −14.15, 1.68) and b_h = (3.38, −1.37, −2.62). Finally, the correlate of colourfulness M can be computed with a method inspired by Kunkel and Reinhard [57]:

M = 93 (((L_d · a_m)^2 + (L_d · b_m)^2) / (10^−3 + max(0, L_d · d_m)))^0.73 √J,    (3.25)

where a_m = (2.10, −2.84, 0.82), b_m = (2.63, −2.26, −0.40) and d_m = (24.46, 0.93, 11.01).

This relatively straightforward model can be directly compared against psychophysical colour appearance datasets. There are two such datasets that are relevant to colour appearance modeling and high dynamic range imaging, which are the LUTCHI dataset [69] as well as the Kim et al. dataset [51].

Figure 3.11: The performance as a predictive appearance model is confirmed in this comparison against three other state-of-the-art colour appearance models, for three appearance correlates and two psychophysical datasets (RMS error; left four bars: LUTCHI, right four bars: Kim et al.).

These sets are the result of psychophysical experimentation in which human observers assess patches of colour as presented under specific viewing conditions. These conditions are described by parameters that are compatible with the input parameters to our algorithm.

The three relevant appearance correlates measured in these datasets are lightness, hue and colourfulness. Averaging over all different observation conditions in each of the datasets, we obtain the results as shown in Figure 3.11. We compare our results for the two datasets against three other colour appearance models, namely CIECAM02 [76], Kim et al. [51] and Kunkel and Reinhard [57]. For each of the three colour appearance correlates we find that our results are comparable to or better than the other models, which we deem the current state-of-the-art. The model can thus serve as a predictive colour appearance model, even over the extended dynamic range of Kim's dataset. This is achieved while obviating the need for an inverse step, and therefore comes with the benefit of relative computational simplicity.

3.5.4 Limitations

Our method is generally robust and achieves calibrated results when tested against colour appearance datasets as well as corresponding colour datasets. For images we use an estimation technique to derive input parameters. Part of this estimation involves finding the 90th percentile of the image to robustly infer these parameters.
While this works well in most cases, of the more than 100 calibrated images we have tested our algorithm against [26], we have found that in fewer than 3% of cases the image is reproduced somewhat too dark. For those images, choosing a higher percentile would fix this problem, although estimating this parameter automatically for now remains an open problem.

3.6 Conclusions

Drawing inspiration from colour appearance modeling, tone reproduction research as well as studies of lightness perception, we present an appearance reproduction algorithm which prepares images and video for display under specific, measured viewing conditions. Importantly, we take the room illumination as well as the characteristics of the display device into account, leading to precise appearance reproduction. The model requires input parameters describing the viewing environment and the display, but rather than leaving its settings as guesswork, the correct inputs can be obtained through a few direct measurements of the environment and display.

For calibrated scenes it would be possible to measure the scene parameters and supply these to the algorithm. However, we have found that a straightforward estimation technique, embedded into a model of lightness perception and implemented by means of the median cut algorithm, is able to robustly set all scene-referred parameters. The model also returns plausible imagery in case the input image is not calibrated.

To ensure correct appearance reproduction, the algorithm was successfully validated against seven corresponding colour datasets, the LUTCHI colour appearance dataset as well as Kim et al.'s high dynamic range colour appearance dataset. Combining a spatial model of lightness perception with validated colour management has led to an algorithm that allows images and videos to be appreciated as scenes, rather than photographs.

Chapter 4

Digital Imaging Pipelines and Signal Quantization

This chapter first discusses our experimental work to enable psychophysical vision experiments with a bus between Personal Computer (PC) and display that is sufficiently wide to avoid visible banding artifacts by using professional display hardware. We then introduce our early work on a universal tone mapping function and how it leads towards a perceptually meaningful luminance encoding scheme for video signals in today's HDR standards.

4.1 Experiments With A High Bit Depth Digital Imaging Pipeline for Vision Research

In order to achieve accurate results in user studies in the fields of Psychophysics, Experimental Psychology, Ophthalmology and clinical studies, there are high demands on the imaging pipeline presenting the stimuli in an experiment (as illustrated in Figure 4.1). For example, display stability and repeatability, both short term and long term, are crucial when conducting research leading to robust results. Further important factors are the perceptual limits of a graphics pipeline. Here, two important elements are the achievable dynamic range and the colour gamut, which would ideally approximate or exceed the capabilities of the Human Visual System (HVS). In an optimal solution, those stimulus dimensions would be displayed with continuous intensity levels between their respective extrema (e.g. from dark to light) when presenting them to participants.
Figure 4.1: Components within the imaging pipeline for vision science.

When using conventional display devices, the dynamic range as well as the colour gamut is usually limited and much narrower than the capabilities of the HVS. Also, signals representing those dimensions in the digital domain need to be quantized, which is referred to as bit depth. Unfortunately, common graphics hardware is only capable of outputting a maximum of 8 bits per channel, which results in 256 interval steps from the device's black to white level. This is often not sufficient, especially when using wide dynamic ranges and colour gamuts, leading e.g. to false contour artifacts (banding). Therefore there is demand for 10 or even 12 bit systems allowing for 1024 or 4096 steps, respectively. To achieve higher bit depths (approximating continuous levels), researchers have been using analog Cathode Ray Tube (CRT) displays that are connected via Video Graphics Array (VGA) to specialized research graphics cards. However, it is getting more difficult to come by high-end analog monitors. Therefore, a digital successor is desirable. Such a successor would offer a fully digital pipeline from software to display, which can be calibrated throughout.

There are three major elements in an imaging pipeline targeted towards vision science: the software to control the study, the display interface and the actual display device.

The Matlab technical computing software package as well as the PsychToolbox are popular environments to create experiments. Currently, Matlab and PsychToolbox only support 8 bit output via Digital Visual Interface (DVI), High-Definition Multimedia Interface (HDMI) or DisplayPort (DP) on a standard graphics card. The DP specifications include the option to output 10 bits, but so far this is not reliable at the Operating System (OS) level. Although there exist software modules that allow greater than 8 bit output on specialized hardware, they usually rely on non-standardized hardware, increasing the complexity of experimental designs. The SDI is particularly suitable for image bit depths greater than 8 bit (see interconnect in Figure 4.1). It is a robust and thoroughly tested professional broadcast interface, which currently supports up to 12 bits per channel with resolutions of up to 2K and support for 3D (SMPTE 424M). One limitation of the standard is the maximum refresh rate of 60 Hz, which makes this interface less favourable for perceptual experiments requiring high frame rates. SDI interface cards for desktop and laptop computers have become reasonably affordable in recent years. Manufacturers of such cards include Nvidia, Matrox and Black Magic Design, to name a few.

A further crucial component is the display. Aside from CRT displays, the vision science community has been using various high end display systems such as SpectraView Reference 2690 and HP DreamColour monitors. Also, the BrightSide DR37-P HDR display has been used in many studies. If precision and accuracy are crucial to an experimental design, suitable display devices can be sourced in the professional film and broadcast industry. The wide colour gamut, dark black levels, high colour accuracy and temporal stability required for colour grading of motion pictures and TV series make these displays useful for vision research.
Many of these professional displays support a high bit depth video input: HD-SDI.

Figure 4.2: Simulated and measured intensity using varying bit-depths.

We have prototyped a software module to directly send 16 bit integer Additive Colour Red Green Blue (RGB) matrices via HD-SDI to a display. As HD-SDI currently only supports 12 bits, the least significant bits are truncated. The RGB matrices in this approach represent drive values for the display. Thus, the module is display device independent and allows for display calibration and characterizations tailored to a given monitor, e.g. by using gamma functions or lookup tables. A comparison of gray ramps with varying bit-depths is given in Figure 4.2. With such a set-up, bit-depth, colour gamut and dynamic range limitations can be alleviated, forming a powerful imaging pipeline for psychophysics.

At a high level, the work in this section merely constitutes the exercise of sending a discretized video signal over a more capable bus (HD-SDI instead of DVI) to a display. Optimizing the way a signal is discretized for transmission over a bus with finite bandwidth is a different and possibly harder problem. We reference gamma encoding above, which has been the default encoding for consumer and professional display devices for many years, due to the nature of the CRT phosphor response and the HVS's perception of luminance values in the range of a CRT display (approximately 0.5 to 100 cd/m2). A breadth of literature discusses the topic (for example see [89]). Gamma encoding can be interpreted as a simple power function applied to the normalized linear input signal, often using an exponent of γ = 2.2 for consumer or γ = 2.6 for cinema applications. This encoding scheme historically worked well for two reasons: first, the HVS's response to luminance and the inverse response of the CRT material properties matched well (by accident), and secondly, the luminance range and contrast capabilities of most displays were comparable, and thus a relative interpretation (i.e. 0 and 1 mapping to the minimum and the maximum display luminance) with a shared encoding scheme was feasible for a large variety of display devices. Today, there is a larger variety of display devices (e.g. Plasma, LCD, Organic Light Emitting Diodes (OLEDs) or direct view LEDs). Many displays natively do not match the gamma approximation very well any more (for instance, direct view LED walls exhibit light output proportional to drive current). Furthermore, the variation in luminance capabilities between different display types has increased, and hence a single (relative) drive scheme that maps 0 to the minimum and 1 to the maximum display luminance is not ideal. Finally, the overall peak luminance of most displays today is significantly higher (often 1000 cd/m2 and more) than the previous CRT target peak luminance of 100 cd/m2, and thus the gamma exponent is not a good approximation of the HVS's response within this new luminance range. Two aspects of our work in Chapter 3 are relevant for video encoding for new displays: first, a video signal should represent absolute luminance and colour information rather than map to the relative minimum and maximum luminance and chromaticity coordinates of the display, and secondly, a power function such as the gamma curve is no longer a good perceptual match when modelling the HVS's response to the (larger) luminance range of modern displays.
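To make the banding argument concrete, the following NumPy sketch gamma-encodes a luminance ramp, quantizes it at a given bit depth and reports the largest relative luminance step between adjacent code values after decoding. The assumed residual black level and the use of a rough 1-2% Weber-fraction visibility threshold are illustrative assumptions rather than measurements from this work.

```python
import numpy as np

def encode_gamma(L, gamma=2.2):
    """Gamma-encode a normalized (0..1) linear luminance signal."""
    return np.clip(L, 0.0, 1.0) ** (1.0 / gamma)

def decode_gamma(V, gamma=2.2):
    """Decode a gamma-encoded signal back to normalized linear luminance."""
    return np.clip(V, 0.0, 1.0) ** gamma

def max_relative_step(bits, gamma=2.2, L_black=1e-4):
    """Largest relative luminance jump between adjacent code values,
    evaluated on the decoded ramp above an assumed residual black level."""
    codes = np.arange(2 ** bits) / (2 ** bits - 1)
    L = decode_gamma(codes, gamma) + L_black
    return np.max(np.diff(L) / L[:-1])

for bits in (8, 10, 12):
    print(bits, "bits:", f"{max_relative_step(bits):.2%}", "max relative step")
```

Near black, the 8 bit steps are far larger than such a threshold, while higher bit depths reduce but do not equalize the step sizes across the range; this is one motivation for the perceptually derived encoding discussed next.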
A better approach is to use curves inspired by the photoreceptor response, which are briefly introduced in Section 4.2.

4.2 Perceptually Optimal Luminance Quantization

The work in Chapter 3, especially the model developed to predict lightness perception, inspired the development of a practical and perceptually accurate tone mapping operator for use in the studio post-production environment and for mapping of HDR content to consumer displays. The final, universal mapping function is modelled on a modified version of the Naka-Rushton cone response equation [78] and takes the following form:

L_out = (c_1 + c_2 L^n) / (1 + c_3 L^n),    (4.1)

where L is the input (scene) absolute luminance, L_out is the output target display luminance, and the constants c_1, c_2, and c_3 are calculated based on absolute luminance levels for:

- L_min: minimum scene luminance
- L_as: scene adaptation level
- L_max: maximum scene luminance
- L_dmin: minimum target (display) luminance
- L_at: target adaptation level
- L_dmax: maximum target luminance
- n: contrast or slope of the resulting mapping function, typically 1

A perceptually optimal video signal is encoded in a non-linear fashion, i.e. the relationship between code words (used to store and transmit pixels) and the light levels that those code words represent is non-linear. For traditional display systems that operate within a limited luminance and contrast range, an exponential (or gamma) encoding or a logarithmic encoding was suitable. For HDR content, in which absolute luminance information across a large brightness range is required, neither gamma nor logarithmic encoding is ideal when the available number of bits is limited. A curve that approximately follows Just Noticeable Differences (JNDs) between neighbouring luminance levels across the entire range would be ideal. A simple approximation of such a curve is central to the colour appearance model in Chapter 3 and was important in Ballestad and Kostin's follow-up work towards Equation 4.1 [4], which eventually led to a perceptually optimal discretization function, also known as Perceptual Quantization (PQ), published in the SMPTE ST 2084:2014 standard (High Dynamic Range Electro-Optical Transfer Function of Mastering Reference Displays), where it is described as:

N = ( (c_1 + c_2 L^{m_1}) / (1 + c_3 L^{m_1}) )^{m_2},    (4.2)

where:

- m_1 = 2610/4096 · 1/4
- m_2 = 2523/4096 · 128
- c_1 = 3424/4096
- c_2 = 2413/4096 · 32
- c_3 = 2392/4096 · 32,

and where L is the input luminance, normalized such that L = 1 corresponds to an absolute luminance of 10,000 cd/m2, and N is the signal between 0 and 1 ready for quantization to, for example, the 12 bit per colour channel range.
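The PQ curve of Equation 4.2 is straightforward to evaluate directly. The following NumPy sketch encodes absolute luminance to the PQ signal and back using the constants listed above, and quantizes a few example levels to 12 bit code words; the function names and example values are ours and purely illustrative.

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants as given in Equation 4.2
M1 = 2610 / 4096 / 4      # ~0.1593
M2 = 2523 / 4096 * 128    # 78.84375
C1 = 3424 / 4096          # 0.8359375
C2 = 2413 / 4096 * 32     # 18.8515625
C3 = 2392 / 4096 * 32     # 18.6875

def pq_encode(L_cdm2):
    """Absolute luminance in cd/m^2 -> PQ signal N in [0, 1]."""
    Y = np.clip(np.asarray(L_cdm2, dtype=np.float64) / 10000.0, 0.0, 1.0)
    return ((C1 + C2 * Y**M1) / (1.0 + C3 * Y**M1)) ** M2

def pq_decode(N):
    """PQ signal N in [0, 1] -> absolute luminance in cd/m^2."""
    N = np.clip(np.asarray(N, dtype=np.float64), 0.0, 1.0)
    p = N ** (1.0 / M2)
    return 10000.0 * np.maximum(p - C1, 0.0) / (C2 - C3 * p)

# Example: quantize a few luminance levels to 12 bit code words
L = np.array([0.001, 0.1, 1.0, 48.0, 100.0, 1000.0, 10000.0])
codes = np.round(pq_encode(L) * (2**12 - 1)).astype(int)
print(codes)
```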
The interested reader is referred to [73] and [79] for the reasoning that led to the standard, as well as to [35] and [68] for related proposals on efficient encoding of colour difference signals.

The general form of Equation 4.1 is used in Chapter 5 to adjust image luminance levels, in particular to adapt the average luminance level of video content mastered for HDR TVs to our light steering HDR projector in the cinema environment.

Chapter 5
Image Statistics and Power Requirements

The research in this chapter outlines our approach to determining an ideal projector configuration based on an optical model of the new HDR projector architecture and image statistics of target HDR content. We operate under the hypothesis that the traditionally conflicting goals of reproducing high brightness, high contrast HDR content and minimizing power and cost (light source) requirements for such a system can be balanced by understanding content characteristics. The work makes use of an optical simulation and cost model of several projector architectures, the Light Budget Estimator (LBE), in combination with statistical analysis of a variety of representative theatrical HDR content.

5.1 Introduction

Lamp power in a traditional (light attenuating) projection system scales linearly with the desired peak luminance on screen, whereas the HVS's brightness perception of these luminance values is non-linear (near-logarithmic), see Chapter 3. The projector architecture in Chapter 6 was developed to enable image features that can be reproduced at luminance levels significantly higher than FSW. Figure 5.1 compares, on a logarithmic scale, today's standardized luminance range in cinema (see Digital Cinema Initiative (DCI) specifications: peak luminance white is defined at 14 fL = 48 cd/m2) with an example of the luminance range that is achievable in a projector architecture using light steering.

5.1.1 Light Steering Efficiency

In an ideal light steering projector, all the available source light is used to form the image. In such an architecture the power required to form an image is equal to the average (mean) luminance, or light power, of the target image. Two terms that are commonly used in industry and relate to this mean luminance are the Average Picture Level (APL) and the Average Luminance Level (ALL). Since the APL often refers to the average of all code words in a discretized signal, or to a percentage relative to the maximum signal value, we prefer to use the term ALL in this work. The ALL is the mean luminance of an image.

Figure 5.1: Standardized cinematic SDR luminance range (yellow), compared to an example of the HDR luminance range that could be achieved with a light steering projector architecture (green), on a logarithmic scale to visualize approximate perceptual brightness impact.

In a practical implementation, such as the prototype in Chapter 6, the ability to steer light into very small bright features is bounded, for example, by the system PSF. Figure 5.2 shows a selection of test patterns with identical minimum and maximum luminance in the source signal, but different mean luminance (power), reproduced on a light steering projector and on a traditional, amplitude attenuating projector. The solid curve shows the theoretically possible maximum peak luminance in an ideal light steering projector, which is:

L_max = 1 / L_mean.    (5.1)

The blue curve shows actual measured data on an early light steering prototype. It can be seen that the steering efficiency is not 100%.
The non-steered light is in part scattered to areas outside the projection screen. Some of the non-steered light is present on the screen and elevates the black level.

Figure 5.2: Theoretical and measured steering efficiency for test patterns with varying mean luminance, compared to a traditional projection system that uses amplitude modulation to form images.

5.1.2 Component Efficiency

Introducing additional components into the light path of an existing projection system almost always reduces the total light throughput. In the case of the newly proposed projector architecture we have to take into account the reflectivity of the phase modulator (65-80%), losses due to higher order diffraction effects off the pixel grid, as they appear on any regularly structured surface in the light path (40-50% losses), as well as additionally required optical elements such as relay lenses, a diffuser, broadband mirrors to fold the optical path and dichroic mirrors to combine or split colour channels. In an early prototype the total light efficiency of the system was measured at around 5% from source light to screen. This inefficiency is balanced in part by the large gains in peak luminance due to light steering, but is overall costly, especially for images that have a high ALL. We estimate that in an optimized system (with coatings matched to the light source wavelengths) the overall efficiency can be as high as 15-25%; however, this is still lower than in a traditional projector architecture, in which, for a full screen white test pattern, around 35-45% of the source light can reach the screen.

5.1.3 Narrowband Light Sources

In the new HDR projector architecture the light steering efficiency, and with it the maximum peak luminance, is highest for well collimated, narrowband light. Laser diodes are well suited to the application. There are, however, several pitfalls when using laser diodes. These include:

- Cost: other light sources, such as LEDs, lamps and laser+phosphor systems, are as of today still more cost-effective in terms of output lumens per dollar.
- Interobserver metamerism variations: light from narrowband light sources has a higher chance of being perceived differently by different observers compared to broadband light. This effect is less pronounced for light that is not narrowband.
- Screen speckle: small scale interference patterns from laser light can cause small spatial intensity variations on the projection screen that are disturbing. This effect is not visible for broadband light.

5.2 Full Light Steering and Hybrid Light Steering Architectures

Figure 5.3 shows at a high level the proposed system architecture introduced in Chapter 6.

Some of the negative effects of using a laser light source can be countered either by breaking up some of the laser properties (for example by introducing a larger angular diversity) or by mixing light from laser and non-laser based light sources. More interestingly, many overall bright scenes do not require a low black level, due to adaptation of the human visual system to the brighter parts of the image.

We introduce an efficient hybrid projector architecture, in which light steering narrowband light is combined with broadband non-steered light (uniformly illuminating an amplitude modulator) into one system to reduce system cost further and mitigate certain image artifacts.
Figure 5.3: Full light steering architecture with a total possible system efficiency of between 15-25%.

Figure 5.4 depicts at a high level an example of this hybrid architecture.

Figure 5.4: Hybrid architecture to increase overall system light efficiency and reduce cost, while achieving comparable image quality to a full light steering projector. In this example the light efficiency of the steered light path is 13.5% and the light efficiency of the non-steered light path is 31.5%.

Hardware parameters can include the split of source light between the steering and non-steering parts of the system, as well as the types of light source and their associated cost and spectral properties.

Software parameters can include aspects of the algorithm used to drive the system and allocate light between the steering and non-steering stages. Figure 5.5 shows an example of splitting a linear input signal between a classical projector light path and a light steering path.

Finally, a perceptually meaningful image quality metric needs to be selected and implemented to determine to what extent the desired target image colour and intensity have been faithfully reproduced.

Figure 5.5: Linear (left) and log (right) plot showing a possible split of an input signal (black) between a light steering projector (green) and a non-light steering projector (red).

In this section we discuss image statistics that were collected based on non-theatrical HDR image data (mapped to an approximate luminance range) as well as a (very) simplified system model of a full light steering projector.

5.3 Average Luminance of HDR Images in Cinema

Little high-brightness HDR video content is publicly available that has been colour graded for a theatrical viewing environment. Partially this is of course due to the current lack of sufficiently capable large screen projection systems. In this section we attempt to estimate the relative power required to reproduce HDR luminance levels up to 10 times above the current peak luminance in cinema, using colour graded HDR still images. An analysis of 104 HDR images has been performed, and the power requirements of a light steering projector as in the proposed architecture have been estimated. In this theoretical exercise it was found that a light steering projector with less power than a traditional cinema projector can directly reproduce all images up to 48 cd/m2 and almost all of the surveyed HDR images up to 480 cd/m2 without the need for additional tonal compression. Table 5.1 summarizes the results.

5.3.1 Methodology

Mark Fairchild's [28] set of 104 scene-referred HDR images was analyzed (see Figure 5.6 for examples). The images differ in dynamic range from less than 1,000:1 to over 10^9:1. Most images are outdoor scenes. While the image data represents measured, scene-referred HDR (actual scene luminance levels) and is not intended for viewing on a cinema projector, an initial guess for a cinema-suitable rendering can be established by shifting the image intensity so that the ALL approximately matches the estimated viewer adaptation level in cinema. A simple linear scaling operator, S_adaptation, was determined manually for each image.
Images were hand-tuned in a dark viewing environment on a calibrated 27″ reference monitor (Dell U2713HMt, calibration confirmed using a Photo Research Inc. PR-650 spectro-radiometer), which was set to a peak white luminance of 48 cd/m2 (D65 white point). While adjusting the intensity, the images were viewed from a distance of approximately 3-5 screen heights.

Figure 5.6: Examples from the image set that was used in this study.

Once an adequate brightness scaling factor had been determined, luminance levels above 10 times that of FSW, 480 cd/m2, were clipped. Next, the steering efficiency of the proposed projector architecture was accounted for via a system PSF approximation (in this case a somewhat conservative, large Gaussian kernel spanning effectively 81 pixels of 1920 horizontal image pixels). The mean intensity across all pixels of the resulting luminance profile serves as an approximate metric for the power requirements of a light steering projector.

Computational steps (a numerical sketch of these steps is given below):

- Compute scaled luminance: Y_s = Y_hdr × S_adaptation
- Clip Y_s to 10 × 48 cd/m2 = 480 cd/m2: Y_sc = min(480, Y_s)
- Account for steering efficiency: Y_scm = Y_sc ∗ g
- Determine the arithmetic mean luminance: Ȳ_scm = mean(Y_scm)
- Scale to the reference (48 cd/m2): P_rel = Ȳ_scm / 48 cd/m2
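The following NumPy/SciPy sketch implements these steps for a single image. The Gaussian standard deviation standing in for the system PSF and the function name are illustrative assumptions; the analysis above uses a kernel spanning roughly 81 of 1920 horizontal pixels.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def relative_power(Y_hdr, S_adaptation, L_peak=480.0, L_ref=48.0, sigma_px=20.0):
    """Estimate the relative source power needed to steer one HDR image.

    Y_hdr        : scene-referred luminance image (cd/m^2)
    S_adaptation : manually chosen linear scale factor for the cinema rendering
    L_peak       : clipping level, here 10 x 48 cd/m^2
    sigma_px     : std. dev. of the Gaussian kernel approximating the system PSF
    """
    Y_s   = Y_hdr * S_adaptation              # scale to the viewer adaptation level
    Y_sc  = np.minimum(L_peak, Y_s)           # clip to the assumed projector peak
    Y_scm = gaussian_filter(Y_sc, sigma_px)   # account for limited steering (PSF)
    return Y_scm.mean() / L_ref               # power relative to a 48 cd/m^2 FSW
```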
5.3.2 Results

Figure 5.7 shows the estimated power required to reproduce each HDR image on a light steering projector with a peak luminance identical to that of a traditional cinema projector, 48 cd/m2, and on a light steering projector with a peak luminance one order of magnitude greater than cinema reference systems: 480 cd/m2. All images can be reproduced on the 48 cd/m2 light steering architecture using only a fraction of the power (13%) of a traditional projector. More importantly, almost all images can be reproduced up to 480 cd/m2 (10x higher peak luminance) using the same or less power compared to a traditional projector.

We note that the ALL of our data set, when using the scale and clip operations described above with no further artistic colour corrections, appears higher than what might be expected from cinema-ready high brightness HDR content. We point the interested reader to [118] for a recent introduction to HDR content production, in which significantly lower ALLs (approximately 3% and less) have been reported. This in turn would suggest that the power requirements for a light steering projector architecture could be even lower (or the peak luminance and contrast higher) than proposed in our work. For our comparisons on the HDR prototype projector in Section 6.6 we select test images within a range of relatively conservative (= high) ALLs from 7% - 45% (see Table 6.2, second column).

Table 5.1: Power required to reproduce the images from the HDR data set on three different projectors (relative to a standard cinema projector in the first row).

L_peak       Steering?   P_rel (min)   P_rel (median)   P_rel (90th %tile)
48 cd/m2     no          1             1                1
48 cd/m2     yes         0.0107        0.1079           0.2595
480 cd/m2    yes         0.0107        0.1832           0.8554

Figure 5.7: Relative power required to reproduce each of 104 HDR images on three hypothetical projectors: two light steering projectors with a peak luminance of 48 cd/m2 (blue) and 480 cd/m2 (green), relative to a traditional, light blocking cinema projector with a peak luminance of 48 cd/m2 (red). The average power required to achieve identical peak luminance (48 cd/m2) is on the order of 13% of a traditional projector. More importantly, all but the very brightest images (approximately 9% of all images under test) can be reproduced up to a peak luminance of 480 cd/m2 while using less or the same amount of power.

Chapter 6
Freeform Lensing Phase Retrieval and Light Steering HDR Projector Proof of Concept

6.1 Introduction

Ideally, HDR projectors should produce both darker blacks and (much) brighter highlights while at the same time maintaining an ALL appropriate for the viewing environment. However, today's HDR projectors predominantly focus on improving black level (for example, recently demonstrated laser projection systems by Kodak, IMAX, Zeiss, Dolby, Barco, Christie and others). Improved contrast and peak luminance are vital for higher perceived image quality (brightness, colourfulness) [96]. Brightness perception of luminance levels is near-logarithmic in the photopic range. Doubling the luminance of an image feature on a projection screen (e.g. by increasing the lamp power of a traditional projector by 2×) does not result in a significant improvement in perceived brightness.

Results in [93, 97, 118] suggest that 10×, 20× or even 100× increases in peak luminance would be desirable, even if most images only require a very small percentage of pixels to be this bright (also see Section 5.3).

Such drastic improvements cannot be achieved with conventional projector designs, which use amplitude SLMs to generate images by pixel-selectively blocking light. For a typical scene, this process destroys between 82% and 99% of the light that could reach the screen, with the energy being dissipated as heat. This causes a number of engineering challenges, including excessive power consumption, thermal engineering, and cost, which ultimately limit the peak luminance in current projector designs.

We explore the use of dynamic freeform lenses in the context of light efficient, high (local) peak luminance, and high contrast (high dynamic range, HDR) projection systems. Freeform lenses, i.e. aspherical, asymmetric lenses, have recently received a lot of attention in optics as well as computer graphics. In the latter community, freeform lenses have mostly been considered under the auspices of goal-based caustics, i.e. the design of lenses that generate a specific caustic image under pre-defined illumination conditions [33, 84, 100, 115].

We implement dynamic freeform lensing on a phase-only SLM, which is combined with a conventional light blocking device such as a reflective LCD in a new type of cascaded modulation approach. The phase modulator in our approach creates a smooth, but still quite detailed "caustic" image on the amplitude modulator.
Since the caustic image merely redistributes, or "reallocates", light [46], this approach produces both a higher dynamic range as well as an improved (local) peak luminance as compared to conventional projectors.

6.2 Contributions

We offer the following contributions:

- A new approach to generate freeform lenses or goal driven caustics, using common approximations in optics to directly optimize the phase modulation pattern or lens shape of a freeform lens ([18]).
- A new dual-modulation projector design that combines one phase and one amplitude modulator for image generation and enables high brightness, high contrast images.

To our knowledge this is both the first time that practical high-resolution dynamic light redistribution has been shown using commercially available hardware, as well as the first time that phase-only SLMs have been used for dynamic freeform lensing.

6.3 Phase Modulation Image Formation

To derive the image formation model for a phase modulation display, we consider the geometric configuration shown in Figure 6.1: a lens plane and an image plane (e.g. a screen) are placed parallel to each other at focal distance f. Collimated light is incident at the lens plane from the normal direction, but a phase modulator (or lens) in the lens plane distorts the phase of the light, resulting in a curved phase function p(x), corresponding to a local deflection of the light rays.

Figure 6.1: Geometry for the image formation model. Phase modulation takes place in the lens plane, which is placed at a focal distance of f from the image plane. This results in a curvature of the wavefront, represented by a phase function p(x).

Light deflection. With the paraxial approximation sin φ ≈ φ, which is valid for small deflection angles, we obtain in 2D that

u − x = f · sin φ ≈ f · ∂p(x, y)/∂x.    (6.1)

In 3D this leads to the following equation for the mapping between x on the lens plane and u on the image plane:

u(x) = x + f · ∇p(x).    (6.2)

Intensity modulation. With the above mapping, we now need to derive the intensity change associated with this distortion. Let dx be a differential area on the lens plane, and let du = m(x) · dx be the differential area of the corresponding region on the image plane, where m(.) is a spatially varying magnification factor. The intensity on the image plane is then given as

i(u(x)) = (dx/du) i_0 = (1/m(x)) i_0,    (6.3)

where i_0 is the intensity of the collimated light incident at the lens plane. In the following we set i_0 = 1 for simplicity of notation.

Figure 6.2: Intensity change due to the distortion of a differential area dx.

The magnification factor m(.) can be expressed in terms of the derivatives of the mapping between the lens and image planes (also compare Figure 6.2):

m(x) = (∂u(x)/∂x) × (∂u(x)/∂y) ≈ 1 + f ∂²p(x)/∂x² + f ∂²p(x)/∂y² = 1 + f · ∇²p(x).    (6.4)

This yields the following expression for the intensity distribution on the image plane:

i(x + f · ∇p(x)) = 1 / (1 + f · ∇²p(x)).    (6.5)
In other words, the magnification m, and therefore the intensity i(u) on the image plane, can be directly computed from the Laplacian of the scalar phase function on the lens plane.

6.4 Optimization Problem

While it is possible to directly turn the image formation model from Equation 6.5 into an optimization problem, we found that we can achieve better convergence by first linearizing the equation with a first-order Taylor approximation, which yields

i(x + f · ∇p(x)) ≈ 1 − f · ∇²p(x),    (6.6)

where the left hand side can be interpreted as a warped image i_p(x) = i(x + f · ∇p(x)), in which the target intensity i(u) in the image plane has been warped backwards onto the lens plane using the distortion u(x) produced by a given phase function p(x).

With this parameterization, the continuous least-squares optimization problem for determining the desired phase function becomes

p̂(x) = argmin_{p(x)} ∫_x ( i_p(x) − 1 + f · ∇²p(x) )² dx.    (6.7)

This problem can be solved by iterating between updates to the phase function and updates to the warped image, as shown in Algorithm 1. The algorithm is initialized with the target image intensity. From this, the first phase pattern is computed, which in turn is used to warp the original target image intensity to provide a distorted intensity image for use in the next iteration.

Algorithm 1 Freeform lens optimization
  // Initialization
  i_p^(0)(x) = i(u)
  while not converged do
    // phase update
    p^(k)(x) = argmin_{p(x)} ∫_x ( i_p^(k−1)(x) − 1 + f · ∇²p(x) )² dx
    // image warp
    i_p^(k)(x) = i(x + f · ∇p^(k)(x))
  end while

After discretization of i(.) and p(.) into pixels, the phase update corresponds to solving a linear least squares problem with a discrete Laplace operator as the system matrix. We can solve this positive semi-definite system using a number of different algorithms, including Conjugate Gradient, BICGSTAB and Quasi Minimal Residual (QMR). The image warp corresponds to a texture mapping operation and can be implemented on a GPU.
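A compact NumPy/SciPy sketch of this iteration is given below. It solves each phase update as a Poisson problem with a DCT-based spectral solver, one of several valid choices for the least squares step (the prototype described next uses QMR), and performs the backward warp with bilinear resampling. The focal distance in pixel units, the iteration count and the function names are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import map_coordinates

def solve_poisson_neumann(b):
    """Solve lap(p) = b on a regular grid with reflective (Neumann) boundaries
    using a DCT-based spectral solver; the DC component is fixed to zero."""
    ny, nx = b.shape
    B = dctn(b, type=2, norm='ortho')
    ky = 2.0 * np.cos(np.pi * np.arange(ny) / ny) - 2.0
    kx = 2.0 * np.cos(np.pi * np.arange(nx) / nx) - 2.0
    denom = ky[:, None] + kx[None, :]
    denom[0, 0] = 1.0                      # avoid division by zero for the DC term
    P = B / denom
    P[0, 0] = 0.0
    return idctn(P, type=2, norm='ortho')

def freeform_lens(target, f=200.0, iterations=8):
    """Sketch of Algorithm 1: alternate a phase update and a backward image warp.
    'target' is the desired intensity image, normalized so that its mean is 1
    (light is only redistributed, not created)."""
    i_target = target / target.mean()
    ny, nx = i_target.shape
    yy, xx = np.mgrid[0:ny, 0:nx].astype(np.float64)
    i_p = i_target.copy()
    for _ in range(iterations):
        # phase update: least-squares fit of f * lap(p) to (1 - i_p)
        p = solve_poisson_neumann((1.0 - i_p) / f)
        # image warp: resample the target at u(x) = x + f * grad p(x)
        gy, gx = np.gradient(p)
        i_p = map_coordinates(i_target, [yy + f * gy, xx + f * gx],
                              order=1, mode='reflect')
    return p
```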
We implement a non-optimized prototype of the algorithm in the Matlab programming environment using QMR as the least squares solver. Table 6.1 shows run times for Algorithm 1 and a selection of artificial and natural test images at different resolutions. It was executed on a single core of a mobile Intel Core i7 clocked at 1.9 GHz with 8 GByte of memory.

Table 6.1: Run times of Algorithm 1 using five iterations for a set of different test images and image resolutions.

Image   Resolution   Runtime
Logo    128 × 64     2.62 s
Lena    128 × 64     2.14 s
Wave    128 × 64     1.81 s
Logo    256 × 128    4.03 s
Lena    256 × 128    4.75 s
Wave    256 × 128    3.23 s
Logo    512 × 256    9.37 s
Lena    512 × 256    10.22 s
Wave    512 × 256    5.27 s

We note that due to the continuous nature of the resulting lens surfaces, computation of the phase at resolutions as low as 128 × 64 is sufficient for applications such as structured illumination in a projector. We also note that the algorithm could, with slight modifications, be rewritten as a convolution in the Fourier domain, which would result in orders-of-magnitude shorter computation times for single-threaded CPU implementations and even further speed-ups on parallel hardware such as GPUs. With these improvements, computations at, for example, 1920 × 1080 resolution will be possible at video frame rates. In addition, both the resulting contrast of the caustic image and the sharpness (effective resolution) benefit from a higher working resolution.

The progression of this algorithm is depicted in Fig. 6.3. We show the undistorted target image, from which we optimize an initial phase function. Using this phase function, we update the target image in the lens plane by backward warping the image-plane target. This process increasingly distorts the target image for the modulator plane as the phase function converges. The backward warping step implies a non-convex objective function, but we empirically find that we achieve convergence in only a small number of iterations (5-10).

Figure 6.3: Algorithm progression for six iterations: the target i gets progressively distorted by backward warping onto the lens plane, i_p^(k), as the phase function p^(k) converges towards a solution. The 3D graphic depicts the final lens height field.

6.5 Simulation Results

We evaluate the performance of our algorithm by utilizing different simulation techniques: a common computer graphics ray tracer and a custom wavefront model based on the Huygens-Fresnel principle to simulate diffraction effects at a spectral resolution of 5 nm.

6.5.1 Ray Tracer Simulation

For the ray tracer simulation we use the LuxRender framework, an unbiased, physically-based rendering engine for the Blender tool. The setup of the simulation is quite straightforward: the freeform lens is imported as a mesh, and material properties are set to mimic a physical lens manufactured out of acrylic. A distant spot light provides approximately collimated illumination; a white surface with Lambertian reflectance properties serves as the screen. The linear, high dynamic range data output from the simulation is tone mapped for display. The results (see Fig. 6.4) visually match the target well.

6.5.2 Physical Optics Simulation

To analyze possible diffraction effects that cannot be modeled in a ray tracer based on geometric optics principles, we perform a wave optics simulation based on the Huygens-Fresnel principle. We compute a freeform lens surface for a binary test image (see Fig. 6.5) and illuminate it in simulation with light from a common 3-LED (RGB) white light source (see Fig. 6.6, dotted line) in 5 nm steps. We integrate over the spectrum using the luminous efficiency of the LED and the spectral sensitivity curves of the CIE colour matching functions (see Fig. 6.6, solid line), as well as a 3x3 transformation matrix and a 2.2 gamma to map tristimulus values to display/print RGB primaries for each LED die and for the combined white light source (see Fig. 6.7).

Figure 6.4: LuxRender simulation results of a caustic image caused by an acrylic freeform lens. The inset shows the absolute intensity difference between the simulated and the original image, where the original image is encoded in the interval [0-1], and 0 in the difference map (green) means no difference. There are three possible sources of error: reflections off the edges of the physically thick lens (vertical and horizontal lines), misalignment and scaling of the output relative to the original (manual alignment), and the nature of the light source (not perfectly collimated).
As expected, the wavefront simulation reveals chromatic aberrations within the pattern and diffraction off the edge of the modulator, which can be (partially) mitigated, for example, by computing separate lens surfaces for each of R, G and B.

Figure 6.5: Binary test pattern (left) and resulting lens height field (right) used in the wave optics simulation.

Figure 6.6: Spectra of a standard white 3-LED (RGB) source [82] (dotted graph) and the CIE standard observer colour matching functions (solid graph) used in the wave optics simulation.

Figure 6.7: Wave optics simulation for a test lens using standard white 3-LED (RGB) spectra: (a) red LED, (b) green LED, (c) blue LED, (d) combined (RGB) white LED. The simulation was performed at 5 nm intervals and mapped to an RGB colour space for print.

The phase function p(x) can be used directly to drive a digital phase modulation display. However, if instead we would like to create a refractive lens surface out of a transparent material, then this phase function needs to be converted to a geometric model of the lens shape.

Similar to other research on goal-based caustics [84], we assume a lens shape that is flat on one side, and a freeform height field h(x) on the other side (see Figure 6.9). In the (x, z) plane, the deflection angle φ is related to the incident (θ_i) and the exitant (θ_o) angles at the height field (also see Figure 6.1) as follows:

∂p(x)/∂x ≈ φ = θ_o − θ_i.    (6.8)

The analogous relationship holds in the (y, z) plane.

In addition, the lens material has a refractive index of n. Using Snell's law, and again the paraxial approximation, we obtain

1/n = sin θ_i / sin θ_o ≈ θ_i / θ_o.    (6.9)

Using Equations 6.8 and 6.9, as well as θ_i ≈ ∂h(x)/∂x, we can derive the lens shape as

h(x) = h_0 + p(x) / (n − 1),    (6.10)

where h_0 is a base thickness for the lens.

Figure 6.9: Geometry for refraction in a freeform lens defined by a height field h(x).

We note that the height is a linear function of the phase, and the refractive index n shows up only as a scalar multiplier to the phase function p(.). Since p itself is approximately linear in the focus distance f (Equation 6.5), we can see that uniform scaling of the height field and uniform changes of the refractive index simply manifest themselves as a refocusing of the lens. This also shows that it is equivalently possible to adjust the optimization procedure to directly optimize for h(.) instead of p(.). We chose to optimize for phase because we desire the use of spatial phase modulators for applications in video projectors.
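The conversion in Equation 6.10 becomes a one-liner once the phase is expressed in units of length. The sketch below assumes the optimized phase is stored in radians and first converts it to an optical path difference at a single design wavelength; the wavelength, refractive index and base thickness are illustrative values, not those of the fabricated lenses.

```python
import numpy as np

def phase_to_height(p, wavelength=532e-9, n=1.49, h0=2e-3):
    """Convert an optimized phase function to a refractive lens height field
    (Equation 6.10). 'p' is assumed to be given in radians; it is first turned
    into an optical path difference before applying h = h0 + p / (n - 1)."""
    opd = p * wavelength / (2.0 * np.pi)   # optical path difference in metres
    return h0 + opd / (n - 1.0)
```

Uniformly scaling the returned height field then corresponds to refocusing the lens, as noted above.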
Figure 6.8: Results using 3D-printed refractive lenses. The left image shows the lenses themselves, while the center and right images show the caustics generated by them: the Lena image and a Siggraph logo. Due to resolution limits on the 3D printer, the lens dimensions have been optimized for large feature scales, which results in a short focal length.

6.5.3 Refractive 3D Printed Lens Results

Figure 6.8 shows results for goal-based caustics using refractive freeform lenses generated with our method. The lenses (shown on the left of Figure 6.8) were 3D printed on an Objet Connex 260 rapid prototyping machine using VeroClear material. Afterwards, the lenses were thoroughly cleaned and the flat side was manually polished using fine grained sand paper and polishing paste. This type of 3D printer has a layer thickness of 42 µm, which limits the feature size that we can create.

As discussed above, the model can be re-scaled to achieve different focal distances. To accommodate the resolution limits of the fabrication method, we chose very short focal distances (about 1″ for the Siggraph logo and 5″ for the Lena image). Although these scales test the very limits of the paraxial approximation used in the derivation of our image formation model, the image quality is still quite good. With better fabrication methods such as injection molding, high precision milling [87], or even detailed manual polishing of a 3D printed surface, one could both improve the image quality and reduce the feature size, so that far field projection becomes feasible.

6.5.4 Static Phase Plates

We evaluate a selection of phase patterns using static phase plates with dimensions comparable to those of a phase-only SLM (approximately 12 mm x 7 mm). Figure 6.10 shows two phase plates that were manufactured on a fused silica wafer using a lithography based process, as well as the resulting light fields when illuminated with a collimated laser beam. The spatial resolution of the phase pattern is high at 12,288 x 6,912 pixels per lens (1.00 µm pixel pitch). The phase resolution was limited to 8 phase levels (4 masks in the lithography process resulted in 8 phase levels between 0 and 2π). The static phase plates were manufactured to evaluate our freeform lenses for static beam shaping applications and to test the method using high power lasers (in this case a fiber coupled laser) in which spatial coherence properties are not preserved. The laser used for the experiments (see results in Figure 6.10) is a 638 nm laser with up to 60 W optical power. A collimation lens doublet is used to expand the beam and to provide an approximately collimated (but slightly diverging) beam of light to illuminate the phase plates. Spatial coherence is partially broken up as multiple laser sources within the module are combined and coupled into a 400 µm fiber, which integrates light travelling along multiple light paths. The ability to focus light from this source is limited (this applies not only to our phase plates, but also to ordinary glass lenses), and thus the sharpness of the resulting image is affected. For the proposed application as a structured light source in a projection system, a small amount of blur in the image is acceptable, if not desirable. The ray-tracing simulations in Section 6.5.1 make use of a perfectly collimated light source and hence blur is not modeled; however, the overall geometry of the real light distribution is visually undistorted and the contrast is comparable to the simulations. For the projector prototype described in Section 6.6 we use a free space laser and achieve a similar amount of blur with a diffuser.

Figure 6.10: Static phase plates manufactured on a fused silica wafer for two test patterns (Marilyn and Align). Left: phase plates without and with laser illumination. The top phase plate reproduces Marilyn, the lower phase plate focuses the Align pattern. Right: the fiber-coupled and beam expanded laser light source as well as two phase plates mounted in the light path (out of focus in the photo) are shown in the foreground. The projected structured illumination pattern is focused on the screen. For reference, above the red projection is a printed copy of the phase pattern etched into the wafer as well as a print-out of a wavefront simulation of the expected intensity distribution on screen. The light pattern visually matches the simulation well.
6.6 Dynamic Lensing in Projection Systems

In order to apply the freeform lens concept in projection displays, we require a spatial light modulator that can manipulate the shape of the wavefront of reflected or transmitted light. We first provide a brief overview of the different technologies available for this purpose, and then describe experiments and prototypes using this technology.

There are several commercially available technologies that can rapidly manipulate the phase of an incident wavefront on a per-pixel basis. These devices include MEMS displays such as analog mirror arrays [44] or deformable mirrors for use in wavefront sensing and correction applications. The benefit of MEMS-based devices is the very fast temporal response. Disadvantages include cost and availability as well as the relatively low spatial resolution: devices with 4096 actuators currently mark the upper end of the range in this domain, leaving a resolution gap of three orders of magnitude to common projection displays.

An alternative to MEMS based mirrors is offered by liquid crystal displays, either in the form of a transmissive LCD or in a reflective configuration: LCoS. Liquid crystal displays can retard the phase of light and offer high spatial resolution. Reflective LCoS devices can update at higher switching speeds than transmissive LCDs due to the reduced cell gap, and provide a high pixel fill factor. Omitting the input/output polarizing beam splitter and careful management of the polarization of incoming light allow for operation in a phase-only mode, in which phase is retarded based on the rotation of the liquid crystals in each pixel. Although standard LCoS modules can in principle be used as phase modulators, dedicated SLMs are available that can be calibrated to shift phase by one wavelength or more, which allows for the implementation of steeper lenses that steer light more aggressively. The pixel values of the LCoS module then correspond directly to the phase function wrapped modulo one wavelength, i.e. mod(p(x), λ). For more on this topic we refer the reader to [98].

Our choice of phase SLM is a reflective LCoS chip distributed by [42]. It provides a spatial resolution of 1920 × 1080 discrete pixels at a pixel pitch of 6.4 µm, and can be updated at up to 60 Hz. Access to a look-up table allows for calibration of the modulator for different working wavelengths. The fill factor and reflectivity of the display are high compared to other technologies, at 93% and 75% respectively. The phase retardation is calibrated to between 0 and 2π, equivalent to one wavelength of light. This is sufficient to generate freeform lenses with a long focal distance. For shorter focal distances, we require more strongly curved wavefronts, which creates larger values for p(.). We can address this issue by phase wrapping, i.e. using only the fractional part of p(.) as the drive signal for the phase SLM. This results in a pattern similar to a Fresnel lens (see Figure 6.11 and also the red box in Figure 6.12, as well as the phase pattern in Figure 6.13).
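A minimal sketch of this wrapping step, assuming a phase map given in radians and an 8 bit drive interface, is shown below; a real device would additionally apply its calibrated look-up table.

```python
import numpy as np

def phase_to_slm_drive(p, levels=256):
    """Wrap a continuous phase function (radians) to the [0, 2*pi) range that a
    phase-only SLM can realize, and quantize it to discrete drive levels."""
    wrapped = np.mod(p, 2.0 * np.pi)          # keep only the fractional wave
    return np.round(wrapped / (2.0 * np.pi) * (levels - 1)).astype(np.uint8)
```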
Figure 6.11: Phase wrapping example: (a) the phase function of a spherical lens with a height of 8π, (b) a plot of the cross section of the original and the phase wrapped lens, and (c) the same lens wrapped at intervals of 2π.

6.6.1 Monochromatic Prototype

We initially analyze image results from the first (phase) modulation stage in isolation (see Figure 6.12) and later relay the resulting light profile into the second stage for amplitude attenuation.

The use of single frequency lasers causes small-scale artifacts, including screen speckle and diffraction fringes due to interference (Figure 6.13, center photo). As previously mentioned, these artifacts can be reduced below the noticeable visible threshold by using, for example, a set of lasers with different center wavelengths or broadband light sources such as LEDs and lamps. When constrained to using a narrowband light source, as in our test setup, a similar image smoothing effect can be achieved by spatially or temporally averaging the image, using for example a diffuser or commercially available continuous deformable mirrors that introduce slight angular diversity in a pseudo-random fashion at high speeds. For ease of implementation we choose to use a thin film diffuser placed in an intermediate image plane following the phase SLM. A photo of the cleaned-up intensity profiles can be seen in Figure 6.13, right.

We demonstrate a first prototype of a high brightness, high dynamic range projection system, in which we form a first image based on our dynamic lensing method and provide additional sharpness and contrast using a traditional LCoS-based amplitude modulating display.

At a high level, the light path of a traditional projection system consists of a high intensity light source and some form of beam conditioning, for example beam expansion, collimation and homogenization, colour separation and recombining optics. At the heart of a typical projector, a small SLM attenuates the amplitude of light per pixel. The resulting image is then magnified and imaged onto the projection screen.

Figure 6.12: Single modulation test setup for lasers consisting of a light source (yellow box, 532 nm DPSS laser and laser controller), beam expansion and collimation optics (orange box), the reflective phase SLM (blue), various folding mirrors and a simple projection lens to relay the image from an intermediate image plane onto the projection screen (green). The phase pattern shown on the computer screen correlates linearly to the desired phase retardation in the optical path to form the image. It has been phase-wrapped at multiples of one wavelength and can be addressed directly onto the microdisplay SLM.

We largely keep this architecture intact, but replace the uniform illumination module with both the collimated laser illumination and a phase SLM (Figure 6.14). Our lensing system is inserted between the light source and the existing SLM, and forms an approximate light distribution on a thin-film diffuser in an intermediate image plane, which is then relayed onto the image plane of the amplitude SLM.
The freeform lensing approach redistributes light from dark image regions to bright ones, thus increasing both contrast and local peak brightness, which is known to have a significant impact on visual realism [97].

We make use of the forward image formation model from our simulations for the light steering phase to predict the illumination profile present at the second, amplitude-only modulator. Given the phase function from the freeform lensing algorithm, the light distribution on the image plane is predicted using the model from Equations 6.2 and 6.4. The amount of smoothness introduced by the diffuser at the intermediate image plane can be modelled using a filter kernel that approximates the PSF; the modulation pattern required for the amplitude modulator is then obtained to introduce any missing spatial information as well as additional contrast where needed. We note that careful calibration and characterization of the entire optical system is required to optimally drive the SLMs. No significant efforts beyond careful spatial registration of the two images (the illumination profile caused by phase retardation and the amplitude modulation on the SLM) and calibration to linear increments in light intensity were performed for this work.
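A simplified version of this prediction-and-compensation step is sketched below. For brevity the warp between lens and image plane is neglected, and only the Laplacian term of Equations 6.4 and 6.5 together with a Gaussian stand-in for the diffuser PSF is used; the focal distance, PSF width and function name are illustrative assumptions rather than calibrated values from the prototype.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def amplitude_pattern(target, p, f=200.0, psf_sigma=3.0):
    """Predict the steered illumination at the amplitude SLM and derive the
    transmittance pattern that recovers the target where light is available."""
    lap = np.gradient(np.gradient(p, axis=0), axis=0) + \
          np.gradient(np.gradient(p, axis=1), axis=1)   # discrete Laplacian of p
    illum = 1.0 / (1.0 + f * lap)                       # intensity model (Eq. 6.5)
    illum = gaussian_filter(illum, psf_sigma)           # diffuser / system PSF
    return np.clip(target / np.maximum(illum, 1e-6), 0.0, 1.0)
```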
6.6.2 Results

Figure 6.13 shows the phase patterns computed by Algorithm 1 as applied to the phase modulator, with black corresponding to no phase retardation and white corresponding to a retardation of 2π. We illustrate how phase patterns with a maximum phase retardation larger than 2π can be wrapped to the maximum phase retardation of the modulator, resulting in a pattern similar to a Fresnel lens. The resulting light profile resembles the target image closely, but also contains a small amount of local, high spatial frequency noise. We make use of a patterned diffuser (0.5 degree half-angle) to integrate over these local intensity variations. The resulting light profile at the diffuser is locally smooth and still provides sufficient contrast to enhance peak luminance and lower the black level.

Figure 6.13: From left to right, correlating to positions A to C in Figure 6.14: A: phase pattern present at the phase-only LCoS modulator, B: direct image produced by the lens in the intermediary image plane (prior to the diffuser) and C: intensity distribution present at the amplitude LCoS modulator after having passed through a thin-film diffuser.

Figure 6.14: System diagram of the proposed and prototyped high brightness, HDR projector: light from an expanded and collimated laser beam is reflected off a phase-only modulator. The per-pixel amount of phase retardation resembles the height field of the dynamic lens calculated with our algorithm. The effective focal plane of this freeform lens is in-plane with an off-the-shelf, reflective projection head consisting of the polarizing beam splitter together with an LCoS microdisplay and a projection lens. Light from dark parts of the image can be used to create high luminance features, and simultaneously reduce the black level.

Figure 6.15 shows a selection of experimental results for our method. The first row of Figure 6.15 shows the phase pattern addressed onto the phase SLM. In the fourth row of Figure 6.15 we show photos of the light steering high brightness projector and compare them to what a traditional projector with the same lumen rating out of the lens would look like (second row). For the latter case we address a flat phase across the phase SLM. Rows three and five show false-colour logarithmic luminance plots on matching scales for the traditional and light steering projector systems. All photos were captured with identical camera settings and show that our method not only recovers better black levels but also allows for significantly elevated peak luminance in highlights, by redistributing light from dark regions of the image to lighter regions and thereby making better use of the available light source power. This enables high brightness, high dynamic range projection with drastically reduced power consumption when compared to dual amplitude modulation approaches.

Table 6.2: Luminance measurements of the results depicted in Figure 6.15. Multiple exposures at varying exposure times were captured (8s, 4s, 2s, 1s, 1/2s, 1/4s, 1/8s, 1/15s, 1/30s, 1/60s, 1/125s) and combined into one linear HDR file, which was then calibrated to represent actual luminance values using a luminance spot meter (Minolta LS100). The lowest accurate measurement using the Minolta LS100 is 0.001 cd/m2. We note that the relative power P_rel of the test patterns is significantly higher than what might be expected in theatrical high brightness HDR content, and thus the gain in L_peak and in contrast could be even higher.

Name (P_rel)     HDR L_peak   HDR L_black   HDR          SDR L_peak   SDR L_black   SDR        L_peak   Contrast
                 [cd/m2]      [cd/m2]       contrast     [cd/m2]      [cd/m2]       contrast   gain     gain
SG logo (7%)     701          0.001         700,900:1    46           0.01          4,272:1    15X      173X
Lena (48%)       121          0.03          4,053:1      42           0.83          50:1       3X       80X
Marilyn (25%)    407          0.03          13,008:1     41           0.63          64:1       10X      203X
Align (20%)      180          0.01          29,677:1     45           0.44          101:1      4X       292X
Einstein (15%)   348          0.001         347,700:1    44           0.01          2,996:1    8X       122X

Figure 6.15: Result photos and measurements of the HDR prototype projector (columns: SG logo, Lena, Marilyn, Align, Einstein; rows: phase of lens, SDR, log(SDR), HDR, log(HDR)). Top to bottom: Phase function of lens - the phase pattern as computed by our algorithm. SDR projector for comparison - projector with identical lamp power (out of lens) used in a traditional, light attenuating mode: a uniform light field (flat phase field) is provided to the amplitude SLM, which forms the image by blocking light. SDR luminance profile on a logarithmic scale. HDR projector - photograph of our lensing approach used to redistribute light from dark regions to bright regions, resulting in improved black levels and significantly increased highlight intensity. HDR luminance profile on a logarithmic scale.

6.6.3 Real-time Freeform Lensing

While pre-processing of video frames at non-real-time frame rates is acceptable for some applications, such as cinema and fixed installations using projectors, a real-time implementation is ultimately desired.

Optimization Using Fourier-Domain Solves. The key insight is that by mirror padding the input image, the system arising from the discretization of (∇²p)² results in periodic boundary conditions with pure Neumann boundary conditions at the nominal image edge. This is illustrated in Figure 6.16 and was also observed in earlier work by Ng et al. [80] for deblurring images, but has not been exploited for lensing. The modification allows the product (∇²p)² in the objective function, Equation 6.7, to be expressed as a convolution via the Fourier convolution theorem, since the system matrix resulting from discretizing Equation 6.7 is circulant.
This enables the use of faster Fourier-domain solves in place of slower general purpose iterative linear solvers.

We build upon the method summarized in Section 6.4 and note that for periodic boundary conditions, this problem can be solved very efficiently in Fourier space by using proximal operators [85]. Proximal methods from sparse optimization allow for regularization to be imposed without destroying the structure of the system.

The specific proximal method that we use is a non-linear variant (Algorithm 2) of the well-known proximal point method. The proximal point method is a simple fixed-point iteration defined by Equation 6.11, expressed in terms of the proximal operator, prox_{γF}(p(x)), of the objective function F(p(x)):

p^{k+1}(x) \leftarrow \mathrm{prox}_{\gamma F}\big(p^{k}(x)\big)   (6.11)

For an arbitrary convex function F(q(x)), the proximal operator prox_{γF} (defined in Equation 6.12) acts like a single step of a trust region optimization in which a value of p(x) is sought that reduces F but does not stray too far from the input argument q(x):

\mathrm{prox}_{\gamma F}(q) = \arg\min_{p} \; F(p) + \frac{\gamma}{2}\,\|p - q\|_2^2.   (6.12)

To simplify notation, we use bold lower-case letters to refer to raster images, i.e. p = p(x), noting that there is an implied discretization step. The parameter γ serves to trade off the competing objectives of minimizing F while remaining close (proximal) to q, but for strictly convex objectives it does not affect the final solution, only the number of iterations required to reach it.

For a least-squares objective F(p) = ½‖Ap − b‖²₂, the resulting proximal operator [85] is found by expanding the right hand side of Equation 6.12 and setting the gradient of the minimization term to zero. This results in Equation 6.13:

\mathrm{prox}_{\gamma F}(q) = \left(\gamma I + A^{T}A\right)^{-1}\left(\gamma q + A^{T}b\right).   (6.13)

In our case, the function F is simply the integral term from Equation 6.7. We form the proximal operator by discretizing the integral with sums over image pixels and defining A = f∇² and b = 1 − i_p(x).

Since proximal operators contain a strictly convex regularization term, ‖p − q‖²₂, the whole operator is a strictly convex function even if F is only weakly convex, as is the case for our problem. The proximal regularization improves the conditioning of our problem and can be interpreted as disappearing Tikhonov regularization [85], i.e. regularization whose effect diminishes to zero as the algorithm converges. This is helpful since the added regularization does not distort the solution.

Another benefit is that the proximal regularization does not change the structure of our problem since it only adds an identity term. This, coupled with the periodic boundary conditions obtained from mirror padding, means that all terms in Equation 6.7 can be expressed as convolutions and the proximal operator solved in the Fourier domain.
This is vastly more efficient than solving the optimization implied by the proximal operator in the spatial domain.

Denoting the forward and inverse Fourier transforms as F(·) and F⁻¹(·) respectively, complex conjugation by ∗, and performing multiplication and division point-wise, the proximal operator of Equation 6.13 can be re-expressed in the Fourier domain as Equation 6.14 for circulant matrices A, as reported in [11], who used it to solve deconvolution problems:

\mathrm{prox}_{\gamma F}(q) = \mathcal{F}^{-1}\!\left(\frac{\mathcal{F}(b)\,\mathcal{F}(A)^{*} + \gamma\,\mathcal{F}(q)}{\mathcal{F}(A)^{2} + \gamma}\right)   (6.14)

In practice, we modify Equation 6.14 slightly by the addition of a regularization parameter α. The modified proximal operator is shown in Equation 6.15:

\mathrm{prox}_{\gamma F}(q) = \mathcal{F}^{-1}\!\left(\frac{\mathcal{F}(b)\,\mathcal{F}(A)^{*} + \gamma\,\mathcal{F}(q)}{(1 + \alpha)\,\mathcal{F}(A)^{2} + \gamma}\right)   (6.15)

The constant α ≥ 0 regularizes the solution by favoring results with low curvature. This corresponds to solving a modified form of Equation 6.7 that, once discretized, imposes a penalty of (α/2)‖∇²p(x)‖² (the second term of Equation 6.16 in the continuous case):

\hat{p}(x) = \arg\min_{p(x)} \int_{x} \big(i_p(x) - 1 + f\,\nabla^2 p(x)\big)^2\,dx \;+\; \alpha \int_{x} \big(\nabla^2 p(x)\big)^2\,dx.   (6.16)

The effect of the parameter α is to favour smoother solutions than could otherwise be found. This helps to prevent the method from producing undesirable caustics in an attempt to achieve very bright highlights at the expense of image quality in darker regions. The effect of the α parameter is shown in Figure 6.17 for lens simulations.

Our final algorithm is shown in Algorithm 2 and is identical to the proximal point method except that the b image used by the proximal operator is updated at every iteration using the warping procedure from our previous work in [18]. After precomputing the Fourier transform of f∇², each iteration of the algorithm can be implemented with an image warping, some component-wise operations and a forward/inverse Fourier transform.

Implementation. The re-formulation of the algorithm results in orders of magnitude speedup when executed on a Central Processing Unit (CPU) using Fast Fourier Transform (FFT) based solvers over the Quasi-Minimal Residual Method (QMR) solver that was previously used. Typical per-frame computation times were previously on the order of 20 minutes or more [18], while the Fourier version in Algorithm 2 takes approximately 0.6 seconds at the same resolution (256 × 128) on a Core i5 desktop computer, a speedup of approximately 2,000 times. The conversion to Fourier-domain solves also results in operations that are more amenable to parallel Graphics Processing Unit (GPU) implementation. We have implemented the algorithm both in C++ and in NVIDIA's Compute Unified Device Architecture (CUDA) using NVIDIA's CUDA Fast Fourier Transform Library (cuFFT) for the forward and inverse Fourier transforms [81]. The CUDA and cuFFT version of the code yields nearly a 150 times speedup over the single-threaded CPU version when run on a GeForce 770 GPU, resulting in roughly a 300,000-fold speedup over the naive CPU version implemented using QMR.
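To make the iteration concrete, the following NumPy sketch implements the loop of Algorithm 2 with the proximal update of Equation 6.15. It is a simplified sketch, not the prototype implementation: it uses a 5-point Laplacian stencil for ∇², relies on the FFT's periodic boundary handling instead of explicit mirror padding, and all parameter values are illustrative.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def freeform_phase(target, f=1.0, gamma=1.0, alpha=0.2, iters=10):
    """Sketch of the Fourier-domain proximal-point iteration (Algorithm 2).

    target : 2D array, desired intensity, mean-normalized so that 1.0
             corresponds to a uniform flat field
    f      : focal distance scale of the paraxial model
    Returns the phase / height field p.
    """
    H, W = target.shape
    # Transfer function of A = f * Laplacian (5-point stencil at the origin)
    lap = np.zeros((H, W))
    lap[0, 0] = -4.0
    lap[0, 1] = lap[0, -1] = lap[1, 0] = lap[-1, 0] = 1.0
    FA = f * np.fft.fft2(lap)                       # F(A)
    denom = (1.0 + alpha) * (FA * np.conj(FA)) + gamma

    p = np.zeros((H, W))
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    for _ in range(iters):
        # Warp target by the current solution: i_p(x) = i(x + f * grad p(x))
        gy, gx = np.gradient(p)
        ip = map_coordinates(target, [ys + f * gy, xs + f * gx],
                             order=1, mode='reflect')
        b = 1.0 - ip                                # right hand side
        # Proximal operator evaluated in the Fourier domain (Equation 6.15)
        num = np.fft.fft2(b) * np.conj(FA) + gamma * np.fft.fft2(p)
        p = np.real(np.fft.ifft2(num / denom))
    return p
```

For example, `p = freeform_phase(img / img.mean())` on a mean-normalized target should reproduce the qualitative behaviour described above; the resulting p can then be wrapped modulo the modulator's 2π range. The production implementation replaces the NumPy FFTs with cuFFT and batches the per-frame work on the GPU.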
To our knowledge this makes the algorithm the first freeform lensing method capable of operating in real-time, see Table 6.3. This is in contrast to methods such as [100], which produce very high quality results, but have runtimes roughly five orders of magnitude higher than our GPU algorithm.

Algorithm 2 Paraxial caustics in Fourier space
  // Initialize phase surface as a constant value
  p⁰(x) ← 0
  // Initialize iteration counter and constant parameters
  A ← f∇²
  k ← 0
  while k < kmax do
      // Warp target image by current solution
      i_p^k(x) ← i(x + f∇p^k(x))
      // Initialize right hand side of least-squares problem
      b ← 1 − i_p^k(x)
      // Update the current solution by evaluating
      // the proximal operator in Equation 6.15
      p^{k+1}(x) ← prox_{γF}(p^k(x))
      // Update iteration index
      k ← k + 1
  end while
  // Return computed mapping
  return p^{kmax}(x)

Table 6.3: Runtimes of the FFT based algorithm run on a CPU and on a GPU (implemented using cuFFT) for various resolution inputs with 10 iterations of Algorithm 2.

| Algorithm | Resolution  | Runtime (ms) | FPS |
| CPU       | 256 × 128   | 600          | 1.7 |
| GPU       | 256 × 128   | 4            | 250 |
| GPU       | 480 × 270   | 14           | 71  |
| GPU       | 960 × 540   | 52           | 19  |
| GPU       | 1920 × 1080 | 212          | 4.7 |

The algorithm is well suited to hardware implementation on devices such as GPUs, Field Programmable Gate Arrays (FPGAs) or ASICs due to its use of highly parallel FFTs and component-wise operations. We run Algorithm 2 for a fixed number of iterations (typically 10 or fewer). Convergence to a solution is rapid, requiring well fewer than 10 iterations; however, for many hardware implementations it is desirable to have computation times that are independent of frame content.

Figure 6.16: (a) Padded target, (b) without padding, (c) mirrored padded. By mirror-padding the input image, pure-Neumann boundary conditions at the image edge can be achieved while retaining a Toeplitz matrix structure. This prevents distortions of the image boundary. Results were simulated with LuxRender.

Simulation Results. Using the equivalence between physical lenses and phase functions allows solid lens models to be generated for testing via geometric optics simulation (we use Blender+LuxRender). Examples are shown in Figures 6.16 and 6.17, which illustrate the effect of mirror padding and the choice of α respectively.

Figure 6.17: (a) target, (b) α = 2.0, (c) α = 0.2, (d) α = 0.02. LuxRender raytracing simulations: the smoothness parameter α penalizes strong caustics in the image that achieve high brightness but poor image quality.

6.6.4 Limitations

The architecture presented in this chapter and the resulting image quality improvements in contrast and peak luminance that can be achieved with it demonstrate the feasibility of the concept. We list some of the obvious and less obvious limitations of the implementation here that will be addressed in Chapter 7.

• The test system was built with a single, monochromatic (green) light source. For a full colour projector, at least two additional colour channels will need to be added to the system in either a parallel or a time sequential fashion. Either approach presents its own (solved) challenges.
The former with respect to alignment of red, green and bluecomponents such as SLM and dichroic mirrors and the latter withrespect to synchronization/timing and thermal limitations.\u00C2\u0088 As with any display based on narrow band or monochromatic lightsources (such as LEDs or lasers) care needs to be taken to manageundesirable properties such as observer metamerism and speckle.816.6. Dynamic Lensing in Projection Systems\u00C2\u0088 The phase SLM and the amplitude SLM need to be synchronized,ideally at the frame or subframe level. The amplitude modulator in theprototype was borrowed from a consumer projector which introducedan undesired latency in one of the modulation stages.\u00C2\u0088 In a full colour system (RGB), characterization and accurate model-ing of the optical system including the PSF is required to ensure acolourimetrically accurate projector.\u00C2\u0088 None of the relay optics or other elements were custom designed forthe prototype, which leads to light losses. Even with a more optimizedlight path, the addition of a phase SLM can reduce the overall lightthroughput. We estimate that this loss can be as high as 40-60% forthe components used in the prototype. While this might seem highwe note that even for bright images (ALL of 50% relative to the peakluminance) the gain in peak luminance exceeds what could be achievedin a traditional projector. Better suitable SLMs can further reduce theassociated light losses.\u00C2\u0088 Careful alignment of a number of elements in the light path is re-quired to achieve a uniform and predictable light profile on the phasemodulator. In our experiment, the reflective nature of the phase SLMrequired off-axis illumination that was not accounted for in the simu-lations and algorithm and which in turn leads to errors in the resultingluminance profiles. While these errors were not clearly visible in theimages projected onto the screen, the logarithmic luminance represen-tation in Figure 6.15 reveal this non-uniformity. It can be accountedfor in the lens pattern.\u00C2\u0088 Finally, the dynamic nature of the projection system with respect topeak luminance and feature size may present a challenge when colourgrading content for the display. The notion of a limited light budgetand a peak luminance that exceeds that of full screen white mightmakes sense from an HDR image statistics point of view, but wouldrequire a re-thinking in existing movie production processes.6.6.5 DiscussionWe have made two technical contributions: a simple but fast and effectivenew optimization method for freeform lenses (goal-based caustics), and anew dual-modulation design for projection displays, which uses a phase-onlyspatial light modulator as a programmable freeform lens for HDR projection.826.6. Dynamic Lensing in Projection SystemsThe new freeform lens optimization approach is based on first-order(paraxial) approximations, which hold for long focal lengths and are widelyused in optics. Under this linear model, the local deflection of light is propor-tional to the gradient of a phase modulation function, while the intensity isproportional to the Laplacian. We combine this insight with a new parame-terization of the optimization problem in the lens plane instead of the imageplane to arrive at a simple to implement method that optimizes directly forthe phase function without any additional integration steps. 
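Stated compactly, and only as a restatement of the model already implicit in Equations 6.7 and 6.16 (f denotes the focal distance of the paraxial approximation):

```latex
% Paraxial (first-order) freeform-lensing model:
% a ray entering the lens plane at x is deflected in proportion to the
% phase gradient, and the received intensity follows the phase Laplacian.
x' = x + f\,\nabla p(x), \qquad
i_p(x) \;\approx\; 1 - f\,\nabla^2 p(x)
```

The optimization therefore seeks a phase surface whose Laplacian accounts for the deviation of the warped target image i_p from a uniform flat field, which is exactly the residual minimized in Equation 6.16.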
Solved in theFourier domain, this is the first algorithmic approach for freeform lensingthat is efficient enough for on-the fly computation of video sequences.Our new dual-modulation HDR projector design finally allows us toachieve perceptually meaningful gains in peak luminance on large cinemascreens while simultaneously improving black level performance and main-taining a manageable power, light and cost budget. As such we believe thatto date the approach presents one of the most sensible proposals for com-mercial high contrast HDR projection systems and one of the most practicalways to achieve high brightness HDR imagery in cinema.83Chapter 7Improved RGB ProjectorPrototypeThis chapter describes the work related to the final prototype that wasdeveloped during the PhD term in collaboration with our industry partnerMTT. We discuss a number of limitations of the earlier proof-of-conceptwork in Chapter 6 and based on this the design decisions that led to animproved full colour RGB projector prototype.Table 7.1 provides an overview of the new features that were incorporatedinto the RGB prototype introduced in this section.Table 7.1: RGB High Power Prototype FeaturesFeature CommentFull colour R, G, and B primaries at 462nm, 520nm and 638nmHigher power Arrays of individual diodes independently collimatedIntensity control Source intensity adjustable per framePhase SLM Custom Holoeye PLUTO deviceAmplitude SLM LCoS based (higher native contrast compared to DMD)Synchronization Theoretical analysis of timing for DMD implementationsAlgorithm Calibrated forward modelPSF Measured, synthesized and incorporated into forward modelHDR Content Tone-mapped/graded for theatrical HDR systems847.1. Architecture of the RGB Projector Prototype7.1 Architecture of the RGB ProjectorPrototypeIn the following section we address the most significant proof-of-conceptlimitations discussed in Chapter 6 and discuss improvements and designdecisions for a new full colour prototype implementation which can be seenin Figure 7.1.Figure 7.1: The photos show the completed RGB light steering prototype.In the left photo the lid is removed and some of the core components arevisible including the green and blue channel light sources (A), the waterlineconnections for a chiller to maintain the laser light sources and phase mod-ulators at a controlled operating temperature (B), the combination opticsblock (C), control electronics and laser drivers (D), a top mounted spot-spectroradiometer for colour calibration (E) and a machine vision camerafor spatial uniformity calibration.Colour. Instead of the monochromatic (532nm, green Diode-Pumped Solid-State (DPSS) laser) a light source consisting of red, green and blue laserdiodes enables us to design a full colour projector. Switching from a DPSS-based laser to native laser diodes at wavelengths of 462nm, 520nm and638nm simplified the light source (much fewer optical components are re-quired for a diode laser compared to a DPSS laser). While a field-sequentialsystem in which each of a red, green and blue light field are presented insequence is in principle possible, we choose instead a parallel architecture inwhich three monochromatic light paths, including individual light sources,857.1. Architecture of the RGB Projector Prototypephase modulators and optical components (compare Figure 6.14 up untilthe diffuser) are combined into a white beam with dichroic colour filters andthen relayed into an amplitude modulating projection head. 
The range ofachievable chromaticities can be seen in the diagram in Figure 7.2 relativeto other common display and cinema primaries.Optical Power. The optical power of the proof-of-concept projector wasmeasured as 10 lumens out of lens. While the overall light levels, thepeak luminance and the screen size that could be illuminated using the lightsteering concept were impressive, a higher power system is required to showthat eventually cinema screens can be illuminated using a light steeringprojector. 100 to 200 lumens of steered light as a goal for the newprototype present a meaningful stepping stone. However the required lightsource power can not easily be achieved with existing individual laser diodes.Three laser diode properties are of critical interest: the total power of thelaser diode as well as the emitter dimensions and the divergence of lightas it is emitted. Beam expansion, collimation optics and tiling of multiplelaser diodes at the light source require a mechanical design in which theindividual components can be adjusted, some at 6 degrees of freedom.Speckle. As with any display based on narrow band or monochromaticlight sources (such as LEDs or lasers) care needs to be taken to manageundesirable properties such as inter-observer metamerism variations andspeckle. There are typically three measures that can be taken to reducethe visible speckle contrast to the observer: randomization of polarization,increasing angular diversity of light in the optical path and broadening of thelight source spectrum. Elements of all three methods can be applied to ourmethod. While the projection head utilized in our RGB prototype is basedon LCoS technology which requires linearly polarized input (see Chapter 2for reference), it is in principle possible to randomize this polarization afterthe final image has been formed either before or after the projection lens.Similarly, if the light steering method is coupled with a DMD-based pro-jector head (see Chapters 2 and 7 for reference), then polarization can berandomized within the projector light path following phase modulation andprior to the DMD amplitude modulation, since the DMD does not requirelinearly polarized light at the input. Light within our light steering systemis ideally well collimated. This translates to a high f-number (f#) opticalsystem and with that limited angular diversity. While it is important to pre-serve this high degree of collimation for the light steering part of the system,867.1. Architecture of the RGB Projector PrototypeFigure 7.2: Chromaticity diagram of the proposed projector primaries com-pared to a number of common colour standards. The CIE 1976 UniformChromaticity Scale (UCS) using the u\u00E2\u0080\u00B2 and \u00E2\u0080\u00B2v\u00E2\u0080\u00B2 as chromaticity coordinatesprovides a perceptual mostly uniform relation between individual colours (i.e.the perceived difference between colours of equal distance from each other iscomparable). The green data points represent the D65 white point which iscommon in cinema and home display systems as well as the projector laserprimaries. The chromaticity space that can be reproduced with the prototypeis significantly larger and includes both the DCI P3 space as well as the Rec.709 space. It almost entirely encompasses the Rec. 2020 colour space andexceeds it in overall area.this property of light is no longer required after an intermediate image hasbeen formed (compare Figure 6.14, diffuser). 
The f# can then be reduced,for example with a light shaping diffuser, to match the input acceptance an-gles of the following optics resulting in higher angular diversity of the beam877.2. Temporal Considerations in HDR Projection Displaysand with it less visible speckle. Moving the diffuser inside the projectorrandomizes the angular spread of light over time and further reduces visiblespeckle contrast. This can be achieved for example by rotating a diffuserdisc within the projector or by linearly displacing optical elements at ornear the diffuser. A second and effective method to reduce speckle contrastis the introduction of movement to the projection screen. A slight continu-ous displacement of the screen surface has the effect of averaging over manyangles as light from the projection lens reflects of the (non-flat) surface ofthe screen. Equally effective, but much less practical in a cinema setting, isthe movement of the observer. Finally, by employing binned and calibratedlaser diodes that, for each colour, consist of different center wavelengths, theeffective spectral band of the light source can be broadened, which reducesvisible speckle contrast. The broadening of the light source will result ina superimposed set of slightly magnified and demagnified images for eachcolour channel after steering the light, which can be modelled by a smallblur kernel and is not necessarily undesirable (compare also Figure 6.6).Synchronization. The phase SLM and the amplitude SLM, as well asa Pulse Width Modulation (PWM) dimmable laser light source need tobe synchronized, ideally at the frame or subframe level. The amplitudemodulator in the prototype stems from a consumer projector and exhibitsundesired latency of multiple frames within its built-in image processingblock that we account for. In cinema, binary DMDs are used predominantlyas amplitude SLMs in order to handle large projector light output and tocomply with certain standards (e.g. the DCI). Section 7.2 discusses newdrive schemes (timing) that aim at mitigating anticipated temporal artifacts(flicker).Calibration. In a full colour system colourimetric calibration requirescharacterizing and accurately modeling the system, including the PSF, whichdepending on the light source and optical path could potentially be depen-dent on location or even image feature sizes. We discuss our approach inSection 7.3.7.2 Temporal Considerations in HDR ProjectionDisplaysWhile parts of the prototype were built based on commodity consumer hard-ware, the use of development kits, together with customized light source887.2. Temporal Considerations in HDR Projection Displayscontrol electronics allows for better synchronization of pulses from the lightsource (laser intensity modulation), digital pulses codes required to addressthe phase modulator (bit planes within a subframe), and for example abinary primary amplitude modulator in the projector head (DMD mirrorstates). While prototyping a fully integrated solution is outside the scope ofthis work, we explore different theoretical solutions for synchronized driveschemes. This section discusses synchronization options for DMD based pro-jector architectures. As part of the prototyping work in Sections 6 and 5.1we gained a better understanding of the temporal properties of the phaseSLM hardware and the PWM intensity controlled light source, and updatedthe prototype and the models in this section based on the findings.LCoS Based Phase Modulator. 
The phase modulator we selected forthe light steering projector implementation currently uses an LCoS microdisplay with a digital back plane and no input or output polarizing filter.The backplane updates in a vertical, top to bottom scrolling fashion. Ei-ther one or two lines are updated at a time. The relatively slow response ofthe LC material relative to the fast pulse codes of the back plane make itpossible to achieve effectively a near-analogue phase response of the display.This is advantageous as synchronization to the fast, binary states of a micromirror based projection head is less of an issue compared to truly binary,fast LCoS devices (e.g. ferroelectric devices).Important for our application are:\u00C2\u0088 The overall refresh rate of the phase modulator (the refresh rate needsto be in line with the required video frame rate of the overall projector).\u00C2\u0088 The phase accuracy over the period of a frame (how reliable does thephase modulator reproduce a given phase value based on the corre-sponding drive level).\u00C2\u0088 The phase stability within a frame (to what extent are individual digi-tal pulses from the back plane measurable in the overall phase response(phase flicker) and how does this affect light steering).\u00C2\u0088 Phase drift within a sub-frame between each line update.In a calibrated LCoS device, per pixel, crystals will drift towards 2pi or 0pirespectively when an electric field is applied or removed. The response time897.2. Temporal Considerations in HDR Projection Displayscan be thought of as the time from when the electric field is applied continu-ously to the time when the crystal produces a 2pi phase shift. Response timecan be tuned with the chemical formulation of the crystal. Figure 7.3 showsan example of the rise and fall characteristics of a phase-only micro-display.If the target phase response is between 0 and 2pi, voltage can be periodicallyapplied and removed to achieve the intermediate phase retardation.Figure 7.3: Typical relative rise (left) and fall (right) times for the phasemodulator between 0 and 2pi.A circuit generating the periodic voltages has a frequency governed bythe spatial resolution of the micro display and the bit precision of phasecontrol. The crystal response time is tuned to give the most stable imagegiven the update frequency from the driving circuitry.The driving circuit used in the current prototype is an FPGA, it canchange the state of the driving on/off voltage at around 7kHz, in whichcase the pixel array is updated line by line starting at the top. The updateof one bitplane across the entire frame therefore takes 1/7, 000s (or around145, 000ns). An example of variations in phase stability and phase driftof the current system can be seen in the (preliminary) measurements inFigure 7.4 in which the crystal orientation is balanced in a half turned stateby a periodic square wave voltage being applied.The total rise response time is 8.7ms and the total fall response time iscurrently 21ms. The cell thickness for the particular panel under test wasnot customized for the application and provides sufficient phase retardationall the way into the Infrared (IR) part of the light spectrum. The cell thick-ness is hence thicker than it needs to be. For example with this particularpanel the maximum possible phase retardation for a blue laser diode can beup to 6pi and for a red laser diode can be close to 3pi. A faster responsetime can be achieved by reducing the cell thickness to provide no more than907.2. 
Temporal Considerations in HDR Projection DisplaysFigure 7.4: Sub-frame phase response at drive levels 65 (left), 75 (center)and 100 (right) out of 255. The small scale temporal ripple is what we referto as phase flicker, the deviation around the target phase value (blue line)indicates the phase stability.the maximum required phase retardation per wavelength. Furthermore thelensing (steering) function can and should account for the effective refreshrate of the phase modulator via a simple model. As a new frame arrives,the driving state of all pixels is updated on the next refresh cycle.Fast response time and high phase stability are somewhat opposing goalsalong one shared dimension of temporal control. This is because within theduration of a video frame, applying a continuous electric field early on willallow the crystals to move into the correct position quickly (after which theelectric field should be removed or only pulsed on occasionally), whereasphase stability throughout the entire video frame is best achieved by havingthe on and off states of the electric field spread relatively evenly throughoutthe duration of the video frame. Both goals could eventually be accountedfor in the lensing algorithms as well as in the underlying digital updatescheme of Pulse Code Modulation (PCM) used to map phase code words tothe optical phase response.In the future a dedicated ASIC could be developed instead of an FPGAto control the phase modulator would allow for faster update rates and lowerpart costs for volume production. In the phase stability plots in Figure 7.4one would then see more peaks and valleys during the same time period.Additionally there are options to modify or re-formulate the birefringenceproperties of the LC material used in the current phase modulator to enablea faster response time (for example 2\u00C3\u0097 improvements).917.2. Temporal Considerations in HDR Projection DisplaysDLP Technology Characteristics. DLP technology makes use of a bi-nary modulator which flips per pixel micro mirrors back and forth acrosstheir diagonals. Each mirror at any time can be either in the on state inwhich it directs light rays to the screen or an off state in which it directslight rays to an off-screen location, the so-called light dump area. A mirrorcreates grey scales by flipping back and forth rapidly. For example over thecourse of a video frame it would spend more time in an on state to rendera brighter pixel or more time in an off state for a darker pixel. Typically,each pixel requires an 8 bit (or more) grey scale drive value per frame ofvideo (usually 60 Frames per Second (fps)). Figure 7.5 shows how these greyscales are translated into mirror flips.Figure 7.5: DMD timing: high level working principle (reproduced fromTI DLP documentation depicting the conceptual bit partition in a frame foran 8-Bit colour).Whether a bit is set to 0 or 1 determines whether the mirror is flippedto the on or to the off position. The bit position determines the relativeduration of time that the mirror remains in the state. The maximum numberof flips per seconds that the mirrors can typically achieve is just under 10kHz(see 7.5; up to 80kHz have been reported for professional applications), thusfor this estimation we set the shortest period of a mirror state to 0.1ms or100, 000ns). We will refer to this as the mirror flip period equal to theperiod of b0 in the diagram in Figure 7.5.Asynchronous Light Pulses. 
If a pulsed light source is used (for ex-ample to produce light at 50 percent of the maximum level), flickering willoccur if the off and on pulses are asynchronous to the mirror flipping andthe periods of off and on significantly differ from frame to frame on a staticimage due to for example a low pulse frequency of the light state.In Figure 7.6 note how in frame 1 the light is on for 2/5 of the timeand in frame 2 the light is on 3/5 of the time due to the fact the signals areasynchronous. If the off and on light source periods are short relative to themirror flip period, the difference between off and on periods between staticframes should be drastically reduced and be imperceptible to the human927.2. Temporal Considerations in HDR Projection DisplaysFigure 7.6: Slow, asynchronous light pulses.eye. Figure 7.7 shows an example in which the light source is modulatedsignificantly faster than the shortest possible mirror flip period.Figure 7.7: Fast, asynchronous light pulses.In Figure 7.7 note that only a single minimum width mirror flip is shownwith a drive value of 1, and the light state is analogous to the pwm clockdescribed below. Also note that in this example the light is on 27/54 inframe 1 and 28/54 of frame 2.937.2. Temporal Considerations in HDR Projection DisplaysSynchronous Light Pulses. Provided the light source off and on periodsare synchronous to mirror flips (such as in Figure 7.8), there should bepractically no intensity difference between static frames and the light sourcepulse generator need only run at the period of the mirror flips, which in turncan drastically reduce the requirements of the control solution and impactElectromagnetic Interference (EMI) considerations.Figure 7.8: Slow, synchronous light pulses.When a new frame arrives, the mirror flip logic for all pixels can beupdated simultaneously via a double buffering scheme (or in blocks fromtop to bottom if desired).Laser Control Solution. After a first review of available laser driversolutions we selected the iC-Haus iC-HG [49] device to directly drive laserdiodes at high current and the option to pulse at very high frequency. Thedevice has up to 200MHz switching capability from a differential pair input.Synchronized switching of up to 100MHz of a 500mW (optical power or650mA and 2.2V electrical power per diode) 638nm laser diode array wasconfirmed during a previous project in the UBC CS PSM lab [40]. A constantvoltage input to the iC-HG sets the current limit in the on state. Wedrive the constant voltage current input with a high speed Digital-to-AnalogConverter (DAC).Combining LCoS Phase Modulator, Binary DMD-based SLM, andPulsed Lasers. Figure 7.9 shows an example of the timing for a combinedsystem. The following assumptions were used for a first estimate of theoverall system temporal response:\u00C2\u0088 For the shortest mirror flip duration, there are about 100 light statePWM clock pulses (fewer shown in the diagram below for clarity) andthe LCoS phase error will drift between, in this example, 0.1pi and\u00E2\u0088\u00920.1pi an average of 1.5 times.947.2. 
Temporal Considerations in HDR Projection Displays\u00C2\u0088 A 2\u00C3\u0097 faster (compared to the prototype) phase LCoS SLM and fastercontroller chip (ASIC) are used in these visualizations along with thefast asynchronous PWM clock drive scheme introduced above.Table 7.2 shows an exemplar update speed and the resulting pulse durationfor the different modulation elements within the projector.Table 7.2: Pulse durations of the different light modulation stages withinthe projector.Component updates shortest frame pulses pulses perper second period [ms] period [ms] per frame mirror flipDMD 10, 000 0.1 16.66 166.66 1LCoS 15, 000 0.066 16.66 250 1.5Laser 1, 000, 000 0.001 16.66 16666.66 100The 0.1pi error (drift) in phase modulation is relatively low compared tothe maximum amount of possible phase retardation of > 2pi and the fastlaser light source washes over even the fastest DMD mirror flips. Many al-ternative implementations are possible, including a Continuous Wave (CW)or constant-on laser driver, in which excess light is steered away from theactive image area.Figure 7.9: Relative timing diagram for phase LCoS, DMD and laser pulsecombination: DMD mirror flips (shortest possible period), light source pulses(fewer than actual for illustration purposes), and anticipated phase drift ofthe LCoS-based phase modulator).DMD-Based Experimental Prototype. An experimental prototypethat combines the LCoS based phase modulator, a PWM-dimmed (order957.3. Colourimetric Calibration of the Projectorof kHz) laser light source and a DMD according to the asynchronous drivescheme, was built for demonstration purposes, and while direct timing mea-surements were not taken, visible temporal artifacts were not dominantenough to be noticeable. The relatively low magnitude of the phase er-ror and the low amplitude of the phase flicker relative to its maximum of2pi aided in masking possibly present temporal artifacts. Figure 7.10 showsa DMD-based system.Figure 7.10: Prototype using a DMD amplitude modulator. The laseris coupled into the system via a fiber (red, right side of the laser safetyenclosure). Its intensity can be adjusted via a PWM drive scheme.7.3 Colourimetric Calibration of the ProjectorThe system being targeted in this subsection is a two-projector system con-sisting of a traditional non-steering projector and a light steering projector.The traditional projector is an off-the-shelf projector that uses amplitudemodulation while the light steering projector is a phase/amplitude modula-tion design.967.3. Colourimetric Calibration of the Projector7.3.1 Light Steering Image Formation ModelWorking from light source to screen, collimated laser light is relayed froma light-source to a reflective phase modulator by a series of optics. Lightleaving the phase modulator for a given colour channel is combined with lightfrom the remaining channels and relayed through a diffuser to a Philips orRGB prism. The prism splits the light into its component colours which areeach modulated by an amplitude modulator, recombined within the prismand directed into the main projection lens. Figure 7.11 shows the high levelblocks described herein.Figure 7.11: Diagram depicting the high level optical blocks within the lightpath of the prototype projector from laser light source (left) to projectionlens (right). The combination of RGB light into white light was chosen forconvenience and to accommodate commercially available projection hardwarewith pre-aligned SLMs. 
Better contrast performance can be expected from adiscrete RGB light path.This design is able to produce local intensities well above the typicalfull-screen intensity by virtue of the phase modulator, which is able to in-troduce phase variation to the incident wavefront. This allows it to functionas a programmable lens in response to a software-driven phase pattern. Thephase pattern is computed and attempts to redistribute light from the inputillumination profile to target light profile, chosen ideally to approximate anupper-envelope of intensities in the underlying target image. This redis-tributes light from dark areas to bright regions which, due to properties of977.3. Colourimetric Calibration of the Projectortypical images, tend to have large dim regions and small bright highlightsresulting in considerable focusing of spare light which can reach levels morethan 10\u00C3\u0097 of what could be achieved by a conventional projectors.A diffuser is incorporated into the optical design to reduce speckle andalso optionally acts as a low-pass filter over the light field, since the phasemodulator can introduce a number of artifacts. These include fixed texture,a component of unsteered illumination and diffraction artifacts. The diffuseris effective at removing diffraction artifacts and fixed texture but generallycannot compensate for the unsteered light(see Figure 7.12).Figure 7.12: Unsteered Component. Laser light that is not steered by thephase modulator.Typically around 10% of the light ends up in the unsteered componentand is related to the illumination incident on the phase modulator afterfiltering by the diffuser. Consequently the unsteered component is an im-portant contributor to the subsequent image formation. Measurements ofthe unsteered component can be obtained by designing a phase pattern tosteer all available light off-screen; what remains is the unsteered component(Figure 7.12).For laser diode-based systems such as in the RGB prototype the un-steered component shows individual diode beams reflected from the phasemodulator, in Figure 7.12 these show as vertical (red) and horizontal (green,blue) stripes. The difference in orientation is due to differing polarizationorientation of the diodes. Beams from the light source are polarized in thesame direction and so all stripes are oriented similarly, by design. Figure 7.13shows the full-screen white pattern once a corrective lens pattern has beenapplied to the illumination profile in Figure 7.12. In case of fiber-coupledlaser, the unsteered component would be significantly more uniform.987.3. Colourimetric Calibration of the ProjectorFigure 7.13: Normalized full-screen white pattern after a correction phasepattern has been applied to the illumination profile.7.3.2 Optical ModelThis section describes the algorithms that are used to drive the system,beginning with high-level algorithmic blocks for the overall algorithm. Keyblocks are further described in dedicated subsections. Each rectangularblock corresponds to a set of operations, parallelograms indicate data passedbetween the blocks. Solid arrows indicate known interactions between blockswhile dashed arrows indicate optional connections. Figure 7.14 shows a highlevel overview of the functional algorithm blocks.7.3.3 High-Level AlgorithmThe high-level view of the algorithm takes in the input image. 
In the RGB prototype system, this input is in the CIE 1931 XYZ colour space with PQ encoding of each channel.

In the input transformation block, image data content is linearized and converted to the working colour space of the system. The output of input transformation is a linear image expressed currently in linear RGB laser primaries.

In the content mapping block the linear image is split (or distributed) between the light steering projector and the non-steering projector, with a distinct amplitude pattern each in the case of two projectors, or a shared amplitude pattern in the case of an integrated system (one projection head with a light steering and a non-light steering light source). The output of the content mapping block is a target light field image, a target (full) image (per projector) and a power control signal. The power-control signals and target image are inter-related depending on the power-control approach that is taken.

To physically redirect light, the target light field is used as input to the phase pattern generation algorithmic block. This computes the drive parameters needed to effect light redistribution by the phase modulators.

In addition, the target light field is also used by the forward model algorithmic block, which implements a feed-forward simulation of the light steering image-formation model since, in practice, the phase modulator and subsequent optical path is unable to reproduce arbitrary target light fields exactly. The forward model produces a predicted light field image that, in combination with the target image, is used by the amplitude pattern generation block to determine the necessary amplitude patterns for both the light steering and the non-light steering block.

Figure 7.14: High-level algorithm blocks.

7.3.4 Input Transformation

The input transformation block functions primarily to transform the input image from PQ encoded images in the XYZ colour space to the colour space defined by the laser primaries.

Table 7.3: Projector Chromaticity Coordinates

| Colour | Wavelength(s) (nm) | Chromaticity x | Chromaticity y |
| Red    | 638                | 0.71679        | 0.28317        |
| Green  | 520                | 0.07430        | 0.83380        |
| Blue   | 462                | 0.14075        | 0.03357        |
| White  | 638 / 520 / 462    | 0.31271        | 0.32902        |

Figure 7.15: Input transformation block takes PQ-XYZ inputs, linearizes them and transforms them to the light steering projector colour space.

The forward and inverse transformations for the PQ encoding are given in the following equations:

L = \left(\frac{P^{1/m_2} - c_1}{c_2 - c_3\,P^{1/m_2}}\right)^{1/m_1}   (7.1)

P = \left(\frac{c_1 + c_2\,L^{m_1}}{1 + c_3\,L^{m_1}}\right)^{m_2}   (7.2)

where P and L represent PQ and linear values mapped to the range [0, 1]. These ranges should be adjusted to the nominal working range, e.g. [0, 2¹⁰ − 1] for 10 bit PQ and [0, 10000] cd/m² for L. The transformations can be implemented as a 1D Look Up Table (LUT); however, care over the sampling rate is important to resolve all regions of the curve.

For the colour transformation, the RGB projector primaries and white point (D65) are shown in Table 7.3.

To obtain RGB images in laser primaries from these, it is necessary to convert to the RGB projector primaries MTTP3.
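A minimal sketch of Equations 7.1 and 7.2, assuming the SMPTE ST 2084 constants for m1, m2 and c1 to c3 (the text above does not list their numeric values); P and L are normalized to [0, 1] as described:

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants (assumed; not listed explicitly in the text)
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_to_linear(P):
    """Equation 7.1: PQ code value P in [0,1] -> linear L in [0,1] (1.0 = 10,000 cd/m^2)."""
    Pm = np.power(np.clip(P, 0.0, 1.0), 1.0 / M2)
    return np.power(np.maximum(Pm - C1, 0.0) / (C2 - C3 * Pm), 1.0 / M1)

def linear_to_pq(L):
    """Equation 7.2: linear L in [0,1] -> PQ code value P in [0,1]."""
    Lm = np.power(np.clip(L, 0.0, 1.0), M1)
    return np.power((C1 + C2 * Lm) / (1.0 + C3 * Lm), M2)
```

The conversion from XYZ to the laser primaries MTTP3, derived next, is a separate 3×3 matrix step and is not part of this non-linearity.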
This transformation is chosen to preserve the luminance of each channel, leading to the following relationship between XYZ images:

\begin{bmatrix} Y_w\,\frac{x_w}{y_w} \\ Y_w \\ Y_w\,\frac{1 - x_w - y_w}{y_w} \end{bmatrix}
= \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix}
= \begin{bmatrix} \frac{x_r}{y_r} & \frac{x_g}{y_g} & \frac{x_b}{y_b} \\ 1 & 1 & 1 \\ \frac{1-x_r-y_r}{y_r} & \frac{1-x_g-y_g}{y_g} & \frac{1-x_b-y_b}{y_b} \end{bmatrix}
\begin{bmatrix} Y_r \\ Y_g \\ Y_b \end{bmatrix},   (7.3)

where [X_w, Y_w, Z_w]ᵀ is the luminance of the combined image and [Y_r, Y_g, Y_b]ᵀ is the luminance of each channel treated independently, under the constraint that Y_w = Y_r + Y_g + Y_b. The per-channel luminances [Y_r, Y_g, Y_b]ᵀ corresponding to Y_w = 1 can then be found by solving the system above.

The per-channel luminance values are used to define the transformation M from MTTP3 to XYZ. This transformation can be defined as follows:

\begin{bmatrix} X_r & X_g & X_b \\ Y_r & Y_g & Y_b \\ Z_r & Z_g & Z_b \end{bmatrix}
= M \begin{bmatrix} Y_r & 0 & 0 \\ 0 & Y_g & 0 \\ 0 & 0 & Y_b \end{bmatrix},   (7.4)

meaning that input images in which each channel stores the luminance of its corresponding primary should map to the chromaticity of the primary at the luminance stored in the image. The transformation can be found using

M = \begin{bmatrix} X_r & X_g & X_b \\ Y_r & Y_g & Y_b \\ Z_r & Z_g & Z_b \end{bmatrix}
\begin{bmatrix} 1/Y_r & 0 & 0 \\ 0 & 1/Y_g & 0 \\ 0 & 0 & 1/Y_b \end{bmatrix}.   (7.5)

For the chromaticities and white point listed above, this gives the following result for M:

M = \begin{bmatrix} 2.5313 & 0.0891 & 4.1927 \\ 1.0000 & 1.0000 & 1.0000 \\ 0.0001 & 0.1102 & 24.5958 \end{bmatrix}   (7.6)

Similarly, the inverse mapping from XYZ to MTTP3 is:

M^{-1} = \begin{bmatrix} 0.4064 & -0.0287 & -0.0681 \\ -0.4082 & 1.0333 & 0.0276 \\ 0.0018 & -0.0046 & 0.0405 \end{bmatrix}   (7.7)

7.3.5 Content Mapping

The content mapping block takes as input the linear input image and determines the split between the light steering and the non-light steering projectors as well as the power levels required.

Figure 7.16: Content Mapping Algorithm Block

The algorithm first checks if the input image is feasible given the system power budget. This is currently done using a power heuristic. If not, the input is tone-mapped. The resulting image (either passed through or tone-mapped) is then the target for subsequent stages. For feasible input content, the linear input and target images are identical. The target image is then used to generate a target light field image.

The split between steering and non-steering can be achieved by a number of methods. A simple one is raising the target image to an exponent (γ > 1) to determine the light steering image. More sophisticated schemes have been proposed earlier.
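As an illustration only, a sketch of this simple exponent-based split; treating the non-steering portion as the remainder is an assumption made for the sketch, not a statement of the prototype's behaviour, and the exponent value is arbitrary:

```python
import numpy as np

def simple_split(target, gamma=2.0):
    """Naive steering / non-steering split: boost highlights for the steering
    projector by raising the normalized target image to an exponent > 1."""
    target = np.clip(target, 0.0, 1.0)
    steering = np.power(target, gamma)   # emphasizes bright features
    non_steering = target - steering     # remainder (assumed) handled conventionally
    return steering, non_steering
```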
This new image does not accurately reproduce lightsteering projector data since it distorts the content to emphasize highlights.It is expected that a more accurate (and light efficient) splitting can beobtained by using the non-steering projector for most of the image formationup to 48 cd/m2, gradually phasing in the light steering projector as depictedin Figure 7.17.1037.3. Colourimetric Calibration of the ProjectorFigure 7.17: Steering and Non-Steering splitting as a function of targetluminance in cd/m2. Note the log-log scale.The split function attempts to utilize the non-steering projector for 90%of image formation up to 47 cd/m2, at which point the steering projectorbegins to take over. It is desirable to use the steering projector for a portionof the image at every pixel in order to avoid bright image features having apainted-on appearance. The splitting is 1D and could be implemented as afunction or as a LUT.7.3.6 Forward ModelThe forward model block takes as input the target light field from the con-tent mapping block and uses it to predict the output of the optical system,referred to as the predicted light field. This is necessary since not all targetlight fields are achievable and, to compute the correct amplitude patterns,it is also necessary to know how the actual light field differs from the targetlight field. This consists mainly of applying the calibrated system PSF and1047.3. Colourimetric Calibration of the Projectorthe unsteered component to the output image, taking care to account foroverall power-levels.Figure 7.18: Forward model algorithmic blockThe forward model takes the target light field as input and applies thesystem PSF to it to predict the result of the actual light field after blurringby the diffuser. An example of the PSF of the RGB projector system isshown below, tiled into a 4\u00C3\u00974 pattern:Figure 7.19: Point Spread Function applied to a test patters. Note thedifferent size and shape for red, green and blue colour channels.The resulting light field then has the effect of the unsteered componentadded. This is added after blurring since measurement of this image can onlybe accomplished after passing through the diffuser. In the current system,the fixed pattern is highly non-uniform. In a fiber-coupled system it wouldapproximate a Gaussian profile.7.3.7 Phase Pattern ComputationThe phase pattern generation block calculates the phase patterns requiredto achieve a target light field.1057.3. Colourimetric Calibration of the ProjectorFigure 7.20: Phase Pattern Computation BlockIn order to frame the image correctly on the amplitude modulator andseparate out higher diffraction orders it is necessary to pre-process the targetlight field. This involves warping by a calibrated distortion intended toalign the three channels. Currently, each point in the target image, [x, y],of dimensions W \u00C3\u0097 H is mapped to a point in the source image, [xm, ym],by a 2D cubic polynomial:xn =xW(7.8)yn =yH(7.9)b =[1, xn, y, x2n, xnyn, y2n, x3n, x2nyn, xny2n, y3n]T(7.10)xm = bT\u00CE\u00B2x (7.11)ym = bT\u00CE\u00B2y (7.12)The source image is then linearly sampled at the [xm, ym] correspondingto each destination pixel [x, y]. Normalization of the target coordinates (the[xn, yn] coordinates), allows the mapping to be computed even for resolutionmismatches between source and target images. The 10 \u00C3\u0097 1 fit parametervectors \u00CE\u00B2x and \u00CE\u00B2y are obtained from calibration. 
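This warping and resampling step (Equations 7.8 to 7.12) can be sketched as follows; the fit vectors beta_x and beta_y are calibration outputs and are simply passed in here, and the use of SciPy's linear resampler is an implementation choice for the sketch:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def cubic_basis(xn, yn):
    """2D cubic polynomial basis of Equation 7.10 for normalized coordinates."""
    return np.stack([np.ones_like(xn), xn, yn,
                     xn**2, xn * yn, yn**2,
                     xn**3, xn**2 * yn, xn * yn**2, yn**3])

def warp_target(source, W, H, beta_x, beta_y):
    """Map each destination pixel [x, y] to a source location [xm, ym]
    via the calibrated 10-parameter fits, then sample the source linearly."""
    y, x = np.mgrid[0:H, 0:W].astype(float)
    b = cubic_basis(x / W, y / H)              # Equations 7.8 - 7.10
    xm = np.tensordot(beta_x, b, axes=1)       # Equation 7.11
    ym = np.tensordot(beta_y, b, axes=1)       # Equation 7.12
    # Linear (order-1) resampling of the source image at [xm, ym]
    return map_coordinates(source, [ym, xm], order=1, mode='nearest')
```

Because the destination coordinates are normalized before evaluating the polynomial, the same fit can be applied across resolution mismatches between source and target images, as noted above.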
Once warped and re-sampled, the resulting image is circularly shifted (depending on the optical1067.3. Colourimetric Calibration of the Projectorconfiguration), clamped to the range [0.001, 1000.0]. At this point the phasecomputation algorithm is applied. The final phase pattern is then mappedto the 8 bit output range of the phase panels.7.3.8 Amplitude Pattern GenerationThe amplitude pattern generation block determines the amplitude patternfor the steering projector and the non-steering projector using the targetimage and the predicted light field as input.Figure 7.21: Amplitude Pattern Generation BlockThe algorithm first adds the non-steering illumination to the predictedlight field. This is the total light available on-screen. A common amplitudepattern is then computed for both the light steering and the non-light steer-ing amplitude modulator. The resulting image is clamped to a valid range oftransmission factors of [0, 1], (or could be tone-mapped in order to preservetexture in out-of-range regions). Any necessary LUTs are then applied to1077.4. Resultsaccount for the response of the amplitude SLMs and the pattern is thendirectly send to the projection head. In order to spatially align the steeringand the non-steering projectors, a warping is used based on calibrated pixelcorrespondences which uses the same cubic warping function as in the phasepattern generation block.7.4 ResultsFigure 7.22 shows photos comparing a cinema projector and our light steer-ing projector side-by-side, both with the same optical power out of lens,playing a video processed using the algorithm framework introduced in thissection. The contrast and peak luminance performance was confirmed to becomparable (slightly higher due to better light management within the lightpath) to the proof-of-concept work in Chapter 6.1087.4. ResultsFigure 7.22: Two scenes from the movie Avatar by 20th Century FOX(top and bottom) displayed on the light steering prototype (left) and on atraditional projector with same power out of lens (right). The light steeringprojector (left) exceeds the comparison projector contrast and peak luminancesignificantly by about a factor of 20 (light steering projector: 1,000 cd/m2;right projector: 48 cd/m2).109Chapter 8Discussion and ConclusionIn this thesis we have taken a critical look at the current HDR cinemapipeline from a perceptual point of view and have identified the most sig-nificant bottlenecks. We explored a variety of approaches to address theselimitations with new optical system designs and computational processing.8.1 DiscussionIn this section we reiterate over the contributions of the work presentedin this thesis and briefly discuss each approach in light of computationaldisplay and visual perception. A more detailed discussion of each individualtopic is included in the respective chapters.The light steering projector and the associated algorithms introduced inChapter 6 provide a practical solution to the unsolved challenge of achiev-ing perceptually meaningful contrast and peak luminance in large screencinema environments where today light source power, system cost, mechan-ical dimensions and thermal management limit the performance. Comparedto traditional cinema projectors our approach provides a visually more ap-pealing image with 20 times or more of the peak luminance capability andwith orders of magnitude darker black levels. 
Compared to the new classof high contrast laser projectors that has recently been introduced into themarket as a PLF offering our approach is attractive because it requires asignificantly lower power light source and with that also enables a lower costprojector alternative that exceeds the peak luminance of current brute-forcelaser projector offerings by about 10 times. Other high contrast high bright-ness technologies for large screens include tiled LCD TVs and direct viewLED walls. Both achieve great black level and peak luminance comparableto our approach. The difference lies in cost (every pixel of an LED wall or anLCD display has to have the capability to achieve the peak luminance, whichresults in high local power and with that high cost for LEDs and associateddriver electronics). Additionally to date it is not possible to manufacturetiled large displays without visible seams. While the gap size of seams has1108.1. Discussionimproved over the last years, the HVS remains extremely sensitive to eventhe smallest vertical and horizontal discontinuities in an image. Finally pro-jection screens, for several reasons including vandalism, existing buildinginfrastructure (power and space) and speaker placement behind perforatedscreens, are currently preferred over direct view displays in cinema. Havingsaid that, direct view LED and OLED displays will in the future provide afeasible, and from visual quality perspective, very compelling alternative toprojectors including the systems introduced in this thesis.An appearance reproduction algorithm has been introduced in Chap-ter 3 which was inspired from the fields of colour appearance modelling andtone reproduction research. While traditionally the difference in peak lu-minance, contrast and colour performance as well as screen size betweendifferent available display devices and cameras was comparable and henceone unified video signal was sufficient to represent content for all devices,today an ever increasing performance gap between the different types ofdisplays surrounding the user in their daily lives (mobile phones, tablets,computer screens, TVs, cinema, advertising displays and even Virtual Re-ality (VR) and Augmented Reality (AR) devices) also requires that videosignals account for the different properties of these displays as well as thedifferent viewing environments that these displays are being viewed in (e.g.dark in cinema, bright for mobile screens). Our model takes into accountthe characteristics of the display device as well as parameters describing theviewing environment and leads to precise appearance reproduction. Wherethese variables are unknown we have proposed a robust method to directlyapproximate the scene-referred parameters leading to plausible reproduc-tion of the image content. Both the field of colour appearance modellingand tone reproduction operators have recently received a large amount ofattention from the computer graphics research community. We have aimedto bring the two fields closer together with a new model that can handlea large dynamic range and reproduce colours accurately. 
Our research and findings related to the HVS's lightness perception have been instrumental in prototyping bit-efficient transmission of video data over existing interfaces and in preparing both SDR and HDR content for viewing on our light steering projector.

In Chapter 5, based on the initial proof-of-concept projector performance, we analysed typical image statistics to understand the implications for a meaningful hardware design and proposed two projector architectures that can enable scaling to larger screens with available electro-optical components. These findings informed the development of the new full-featured projector prototype introduced in Chapter 7. There we addressed the major practical limitations of the proof-of-concept work in Chapter 6, discussed temporal considerations as well as detailed characterization and optical modelling of the hardware, and demonstrated that the methods presented in this thesis can go beyond experimental and theoretical laboratory work and possibly make a meaningful impact in the cinema in the future.

8.2 Future Work

Throughout the work presented in this thesis we have attempted to put value on perceptual engineering by thinking about perceptually meaningful targets first, and only then finding the means to achieve them. This methodology can be risky in the sense that the solutions found present a number of new challenges that require solving; the potential gains, however, can be large. The methods and systems developed in this thesis present only a starting point for research in the new fields of computational projection display and accurate colour appearance reproduction. Taking into account the recent advances in display technologies, we look forward to further research into displays that combine optical modelling and computational processing to achieve perceptually meaningful gains. Contrast (both global and local) as well as peak luminance levels comparable to real-world scenes are important cues for depth perception in the HVS. Stereoscopic and light field displays are emerging, but need to improve in spatial resolution, contrast, luminance and practicality of implementation. Further work to close the gap between these new types of displays and our work will provide even more realistic image appearance. While we have attempted to address the major practical and engineering challenges in this research work, there remains a long path to a product that is viable in the market and robust enough to be utilized in professional environments such as cinema.

8.3 Conclusion

We have conceived a new computational projection display architecture that for the first time allows an increase in both the contrast and, more importantly, the peak luminance in a perceptually meaningful fashion, by orders of magnitude. We have introduced new algorithmic approaches to compute dynamic freeform lensing phase patterns efficiently for this computational projection display. An initial monochromatic proof-of-concept projector has demonstrated that the approach shows promise, but is limited in power
(screen size), colour (achievable chromaticity) and overall image quality. To understand these limitations and the requirements of cinema in more detail, we have developed a colour appearance model and tone mapping operator that operates in an absolute, calibrated colour space over a wide range of luminance values and takes into account the viewing environment and the HVS's adaptation to display and environment. We have explored new and existing tools to transmit high-bit-depth data between devices, and we have touched upon perceptual aspects of image signal discretization for high-brightness HDR data. Motivated by the projector power required to reproduce HDR image content faithfully, as well as by the overall efficiency of the optical components within the projector, we have proposed a new hybrid architecture that balances overall system light throughput and peak luminance. Finally, we have prototyped a full-featured projector to address the remaining limitations of the proof-of-concept projector and developed an optical model that allows colourimetrically accurate reproduction of high-brightness HDR content in a cinema environment (see Figure 7.22), with a previously unachievable peak luminance of up to 1,000 cd/m² and in-scene contrast of up to 1,000,000:1.