UBC Theses and Dissertations

Tone mapping of high dynamic range video for video gaming applications Khaldieh, Ahmad 2018

TONE MAPPING OF HIGH DYNAMIC RANGE VIDEO FOR VIDEO GAMING APPLICATIONS

by

Ahmad Khaldieh

B.E., American University of Beirut, 2014

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF

MASTER OF APPLIED SCIENCE

in

THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES
(Electrical and Computer Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA
(Vancouver)

May 2018

© Ahmad Khaldieh, 2018

The following individuals certify that they have read, and recommend to the Faculty of Graduate and Postdoctoral Studies for acceptance, a thesis/dissertation entitled:

submitted by Ahmad Khaldieh in partial fulfillment of the requirements for the degree of Master of Applied Science in Electrical and Computer Engineering.

Examining Committee:
Panos Nasiopoulos, Co-supervisor
Victor Leung, Co-supervisor
Rabab Ward, Supervisory Committee Member

Abstract

High Dynamic Range (HDR) technology is regarded as the latest revolution in digital multimedia, as it aims at capturing, distributing and displaying a range of luminance and color values that better correspond to what the human eye can perceive. Inevitably, physical-based rendering in High Dynamic Range (HDR) has recently gained a lot of interest in the video gaming industry. However, the limited availability of commercial HDR displays on one hand and the large installed base of Standard Dynamic Range (SDR) displays on the other imposed the need for techniques to efficiently display HDR content on SDR TVs. Several such techniques, known as Tone-Mapping Operators (TMOs), have been proposed, but all of them are specifically designed for natural content. As such, these TMOs fail to address the unique characteristics of HDR gaming content, causing loss of details and introducing visual artifacts such as brightness and color inconsistencies.
In this thesis, we propose an automated, low complexity and content adaptive video TMO specifically designed for video gaming applications. The proposed method uses the distribution of HDR light information in the perceptual domain and takes advantage of the unique properties of rendered HDR gaming content to calculate a global piece-wise-linear tone-mapping curve that efficiently preserves the global contrast and texture details of the original HDR scene. A unique flickering reduction method is also introduced that eliminates brightness inconsistencies caused by the tone-mapping process while successfully detecting scene changes. Subjective and objective evaluations have shown that our method outperforms existing TMOs, offering better overall visual quality for video gaming content.

Lay Summary

High Dynamic Range (HDR) content capturing, rendering and transmission has recently become available; however, the vast majority of users' displays support only a Standard Dynamic Range (SDR) of brightness and color. SDR technology fails to reproduce high quality HDR content, which raises the need for a method that can efficiently transfer HDR content to the SDR format, making it backward compatible with the large base of available SDR displays. The process of mapping HDR content to the SDR format is known as tone mapping. The majority of the existing Tone Mapping Operators (TMOs) were designed to address realistically captured HDR images, but fail to preserve the high quality of rendered HDR gaming content. In addition, they result in temporal brightness and color inconsistencies when applied to video sequences. In this thesis, we propose an automated, low complexity and content adaptive video TMO specifically designed for video gaming applications. Our method eliminates brightness inconsistencies caused by the tone-mapping process while successfully detecting scene changes.

Preface

A version of Chapter 3, sub-sections 3.2.3, 3.2.4 and 3.2.5.1, has been published as A.
Khaldieh, S. Ploumis, M. T. Pourazad, P. Nasiopoulos and V. Leung, "Tone mapping for video gaming applications," 2018 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, pp. 1-2, 2018. I was the lead investigator, responsible for all areas of research, data collection, and the majority of manuscript composition. S. Ploumis was involved in the preparation of the subjective tests conducted to evaluate the proposed method. M. T. Pourazad was involved in the early stages of research concept formation and aided with manuscript edits. P. Nasiopoulos and V. Leung were the supervisors of this project and were involved in the research concept formation and manuscript edits.

Table of Contents

Abstract .............. ii
Lay Summary .............. iv
Preface .............. v
Table of Contents .............. vi
List of Figures .............. viii
List of Abbreviations .............. xi
Acknowledgements .............. xii
Dedication ..............
xiii
Chapter 1: Introduction .............. 1
1.1 Overview .............. 1
1.2 Motivation .............. 3
1.3 Thesis Organization .............. 4
Chapter 2: Background .............. 5
2.1 High Dynamic Range (HDR) Technology .............. 5
2.1.1 Overview .............. 5
2.1.2 Rendered HDR Gaming Content vs Real-Life .............. 6
2.1.3 Perceptual Quantization .............. 8
2.2 Tone Mapping Operators .............. 10
2.2.1 Overview .............. 10
2.2.2 Flickering in Tone Mapped Video .............. 13
Chapter 3: Our Proposed Content Adaptive Tone Mapping Operator .............. 15
3.1 Introduction .............. 15
3.2 Proposed Method ..............
16
3.2.1 Overview .............. 16
3.2.2 Perceptual Encoding .............. 17
3.2.3 Histogram PQ Bins Classification .............. 18
3.2.4 Slopes Calculation .............. 20
3.2.4.1 Starting Curve .............. 20
3.2.4.2 Content Adaptive Slopes Readjustment .............. 22
3.2.4.2.1 Upper Bound of Slopes .............. 23
3.2.4.2.2 Lower Bound of Slopes .............. 26
3.2.4.2.3 Slopes Readjustment Method .............. 28
3.2.5 Scene Detection and Flickering Reduction .............. 31
3.3 Results and Discussion .............. 35
3.3.1 Subjective Evaluation .............. 35
3.3.2 Flickering Analysis .............. 44
Chapter 4: Conclusion and Future Work .............. 48
4.1 Conclusion .............. 48
4.2 Future Work ..............
49
Bibliography .............. 50

List of Figures

Figure 2.1 The dynamic range of a real-world scene along with the capabilities of the HVS, SDR and HDR capturing and display technologies .............. 5
Figure 2.2 Examples of bidirectional reflectance distribution functions (BRDFs) used in HDR rendering .............. 6
Figure 2.3 Demonstration of light caches distribution in the rendered scene .............. 7
Figure 2.4 Effect of indirect light sources on the quality of the rendered scene: without light caches (left), with light caches (right) .............. 7
Figure 2.5 Difference in distribution of light information in the histogram of a real-life captured HDR image (a) vs that of a rendered HDR image (b) .............. 8
Figure 2.6 PQ utilization of code words as a function of maximum luminance of the HDR content .............. 9
Figure 2.7 Example of the difference in visual quality between an SDR rendered scene (a) and a tone-mapped HDR rendered scene (b) .............. 10
Figure 2.8 The choice between using offline and online TMOs in post-production tone-mapping of HDR content (a) versus the constraint of using an online TMO in gaming applications (b) .............. 11
Figure 3.1 Block diagram of the proposed TMO .............. 17
Figure 3.2 Histogram of perceptually encoded light values of an input HDR image/frame .............. 18
Figure 3.3 Demonstration of the histogram bins sorted in ascending order (a) and the calculated Maximum Entropy threshold (b) .............. 19
Figure 3.4 Example of histogram bins categorization .............. 20
Figure 3.5 Starting piece-wise-linear tone-mapping function in PQ domain for input PQ HDR range between 0 and 1 and output PQ SDR range between 0.0623 and 0.5081 (default parameters) .............. 21
Figure 3.6 Demonstration
of the way the starting curve maps information at different PQ levels from the HDR range in (a) to the limited SDR range in (b) .............. 21
Figure 3.7 Demonstration of the way information at dark PQ values in the HDR range (a) is shifted towards brighter values when mapped to the SDR range (b) .............. 23
Figure 3.8 Demonstration of the way information at mid and bright PQ levels in the HDR range (a) will always be mapped to darker PQ levels in SDR (b) .............. 24
Figure 3.9 Values of the proposed slopes' upper bound at different PQ levels of the input HDR range .............. 25
Figure 3.10 Illustration of how the proposed maximum brightness constraint prevents dark PQ values in HDR (a) from being mapped to brighter PQ values in SDR (b) .............. 26
Figure 3.11 Values of the proposed slopes' lower bound at different PQ levels of the input HDR range .............. 27
Figure 3.12 Demonstration of the slopes readjustment method: starting curve (a), range detaining (b), range redistribution (c) and the final mapping curve (d) .............. 29
Figure 3.13 Demonstration of the proposed method of eliminating flickering caused by the mapping functions of two consecutive frames by limiting changes between nodes of the mapping functions to 1 JND .............. 32
Figure 3.14 Example of histograms of consecutive frames belonging to different scenes .............. 33
Figure 3.15 First frame of each HDR sequence displayed at an exposure of 2^-9 .............. 37
Figure 3.16 Visual fidelity subjective test results .............. 39
Figure 3.17 Side-by-side subjective test results .............. 42
Figure 3.18 Demonstration of the video sequence used in the flickering reduction analysis along with the scene order .............. 44
Figure 3.19 Geometric mean of the flickering test results by applying our method with scene detection (a) and without scene detection (b) .............. 45
Figure 3.20
Visual representation of the brightness incoherence introduced at a scene change by applying the proposed flickering reduction method without detecting scene changes .............. 46
Figure 3.21 Visual representation of the preservation of brightness coherence between frames while applying the proposed flickering reduction method and detecting scene changes .............. 46

List of Abbreviations

CIE  Commission Internationale de l'Eclairage
cd/m2  Candela Per Square Meter
fps  Frames Per Second
HDR  High Dynamic Range
HVS  Human Visual System
ITU-T  International Telecommunication Union - Telecommunication Standardization Sector
MPEG  Moving Picture Experts Group
JND  Just Noticeable Difference
PQ  Perceptual Quantizer
SDR  Standard Dynamic Range
SMPTE  Society of Motion Picture and Television Engineers
TMO  Tone Mapping Operator

Acknowledgements

I would like to start by expressing my most sincere gratitude to my supervisor and mentor Dr. Panos Nasiopoulos for his support through the past years. Thank you for your patience, guidance and inspiration throughout this thesis. I would also like to thank Dr. Mahsa T. Pourazad for her constant help and support during the different stages of this thesis.

I am also grateful to my lab mates and friends, Stelios Ploumis and Pedram Mohammadi, who helped me through their feedback and through their help in preparing the subjective tests to evaluate my method.

My utmost gratitude goes to my beloved parents, who always believed in me and encouraged me throughout my whole life, and who supported me throughout my years of education, both morally and financially. I am forever thankful for their unconditional love, unlimited dedication, and all the sacrifices they made for me. Without them, I would never be who I am today.
Dedication

To my beloved family.

Chapter 1: Introduction

1.1 Overview

High Dynamic Range (HDR) technology is regarded as the latest revolution in digital media and has gained a lot of interest from academia and industry. HDR can capture and reproduce a wide range of luminance values, very close to what the human eye can perceive in the real world [1]. This is a huge deviation from what the present Standard Dynamic Range (SDR) technology can offer, as it is limited to a small portion of the brightness and color range that humans can see. Inevitably, physical-based rendering in HDR has gained a lot of interest in the video gaming industry too. The old generation of game engines used an SDR rendering mode that tied the generated colors to the sRGB color space and restricted light values to a normalized range between 0 and 1, which is linearly mapped to the luminance range supported by the SDR display. However, with the advances in computer graphics rendering technology, physical-based rendering has become the new trend in game development. Having the ability to render HDR light values in the OpenEXR floating point format [2], the new generation of game engines is capable of rendering light values between 0 and 65,000 cd/m2. This has resulted in more engaging gaming content with richer colors, brighter highlights and more details in the shadows.

Due to the significant improvement in the visual Quality of Experience (QoE) that HDR offers, it is expected to replace SDR in the near future [3]. However, during this transition period, most displays will only have SDR capabilities. Considering the difference in range and quality that the two technologies offer, it is impossible to reproduce HDR content directly on SDR displays. In addition, naïve linear scaling from HDR to SDR fails to preserve the global contrast, brightness and details of the original HDR content, resulting in degradation of the overall visual quality and loss of artistic intent.
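To see why a single global scale factor fails, consider a minimal sketch (the scene luminances below are hypothetical, chosen only to illustrate the effect):

```python
import numpy as np

def naive_linear_scale(hdr_lum, sdr_peak=100.0):
    # Scale so the brightest HDR value lands exactly on the SDR peak;
    # every other value is compressed by the same factor.
    hdr_lum = np.asarray(hdr_lum, dtype=float)
    return hdr_lum * (sdr_peak / hdr_lum.max())

# Hypothetical scene: deep shadow, two mid-tones, one specular highlight.
scene = np.array([0.05, 5.0, 50.0, 10000.0])   # cd/m^2
sdr = naive_linear_scale(scene)                # -> [0.0005, 0.05, 0.5, 100.0]
# The single 10,000 cd/m^2 highlight dictates the scale factor (1/100),
# so the clearly visible 5 cd/m^2 mid-tone lands at 0.05 cd/m^2 -- below
# the ~0.1 cd/m^2 black level of a typical SDR display.
```

A single bright highlight thus crushes all shadow and mid-tone detail, which is precisely why a non-linear, content-aware mapping is needed.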
Thus, the most efficient way to achieve acceptable conversion from HDR to SDR is through tone mapping, a process that maps the large amount of HDR information to the more limited SDR range. In that respect, tone mapping aids in making sophisticated decisions regarding the tonal levels that should be preserved and the ones that should be dismissed in this conversion process [4].

Over the years, several image Tone Mapping Operators (TMOs) have been proposed. Given that HDR video technology started to mature only recently, the majority of these TMOs were designed for images. They can be divided into two categories: local and global operators. Local operators [5,6,7] tone map each pixel based on its spatial neighborhood. These operators usually deal well with edges and preserve most of the visual information. Their main drawbacks are the appearance of halo artifacts around edges [8] and their high computational cost, which makes them unlikely candidates for real-time applications. On the other hand, global operators [9,10,11] use statistics of an HDR image to compute a monotonically increasing tone-map curve for the whole image. Typical statistics include the highest and lowest luminance values, the average luminance and the luminance histogram. While these operators have low computational cost, they have been designed for natural content. As such, these TMOs fail to address the unique characteristics of HDR gaming content, resulting in an undesirable SDR image when dealing with dark scenes or highly contrasted HDR content [12].

Tone mapping of HDR videos is a relatively new research topic, as HDR video content capturing [13,14,15], rendering and distribution [16,17] have only recently become widely available. Once capturing HDR video and rendering HDR games became possible, efficient video TMOs became a priority for the entertainment and broadcasting industries.
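As an illustration of the histogram-based global operators described above, here is a hedged sketch (histogram-equalization style, not any specific published TMO) that derives a monotonically increasing tone curve from the cumulative log-luminance histogram:

```python
import numpy as np

def global_histogram_curve(lum, n_bins=64):
    # Histogram-equalization-style global tone curve: the normalized
    # cumulative histogram of log-luminance is itself a monotonically
    # increasing mapping onto [0, 1].
    log_l = np.log10(np.maximum(np.asarray(lum, dtype=float), 1e-4))
    hist, edges = np.histogram(log_l, bins=n_bins)
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]                                # normalize to [0, 1]
    centers = 0.5 * (edges[:-1] + edges[1:])
    # Each pixel is mapped through the CDF evaluated at its bin centre.
    return np.interp(log_l, centers, cdf)
```

Because the curve follows the cumulative histogram, densely populated luminance ranges receive more of the output range; the same idea reappears, in refined form, in the histogram-driven methods discussed in Chapters 2 and 3.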
However, naive application of image TMOs to video sequences introduces visual artifacts such as visual noise, ghosting and brightness/color inconsistencies [18,19,20]. Traditional and widely used flickering reduction techniques [21,22] try to smooth brightness differences between successive frames at the expense of visual quality. In other cases, despite their efficiency in reducing flickering caused by brightness inconsistencies, their application at scene changes results in altering the artistic intent of the original HDR content.

1.2 Motivation

Once physical-based rendering in HDR became possible, it resulted in more engaging gaming content with richer colors, brighter highlights and more details. Since the majority of available SDR TVs cannot directly display HDR content, designing a tone-mapping method that transforms the HDR content to the SDR format is of high importance. In the past years, several TMOs have been proposed in the literature; however, these TMOs fail to address the unique characteristics of HDR gaming content, resulting in undesirable output when dealing with dark content or scenes with high contrast. In this thesis, we focus on HDR video gaming and propose an automated, low complexity and content adaptive video TMO, which is optimized to exploit the unique properties of rendered HDR gaming content to improve both the visual quality and the overall quality of experience.

1.3 Thesis Organization

The rest of the thesis is structured as follows: Chapter 2 provides background information on Tone Mapping Operators, the unique properties of rendered HDR gaming content and the challenges of HDR video tone mapping. Chapter 3 explains in detail our proposed video Tone Mapping Operator. In addition, it presents the results of the subjective tests conducted to evaluate the proposed TMO against state-of-the-art online TMOs, along with analysis and discussion. Finally, conclusions and future work are drawn in Chapter 4.
Chapter 2: Background

2.1 High Dynamic Range (HDR) Technology

2.1.1 Overview

Unlike the existing SDR technology, HDR technology aims at capturing, distributing, and displaying a range of luminance and color values that better correspond to what the human eye can perceive. The term luminance stands for the photometric quantity of light arriving at the human eye, measured in candela per square meter (cd/m2) or nits. In the real world, our eyes can perceive a dynamic range of over 14 orders of magnitude through adaptation. An order of magnitude represents a difference of a power of ten between the highest and lowest luminance values. However, the human eye can only resolve up to five orders of magnitude at a single adaptation instance [23], as demonstrated in Fig. 2.1.

Figure 2.1 The dynamic range of a real-world scene along with the capabilities of the HVS, SDR and HDR capturing and display technologies. [1]

The large installed base of Standard Dynamic Range (SDR) TVs is capable of displaying a small range of light between 0.1 and 100 cd/m2, and up to 600 cd/m2 with some of the latest high-end displays. This is a huge deviation from the wider light range, 0.005 up to 10,000 cd/m2, and color range that High Dynamic Range (HDR) technology can offer [24]. As a result, HDR content displayed on an HDR display will have more details and a better representation of contrast and colors as compared to capturing the same content with SDR technology and displaying it on a conventional SDR display.

2.1.2 Rendered HDR Gaming Content vs Real-Life

Using physical-based rendering techniques, game engines try to mimic light propagation and color perception in real life. In rendering, surfaces are modeled statistically in terms of reflecting and refracting light in multiple directions [25], as demonstrated in Fig. 2.2.
Bidirectional Reflectance Distribution Functions (BRDFs) model the propagation of light from direct light sources and calculate the amount of light scattered from a surface point depending on the surface type, the incident light direction and the view direction.

Figure 2.2 Examples of bidirectional reflectance distribution functions (BRDFs) used in HDR rendering [25]

However, in reality, the light bouncing off surfaces will eventually contribute to the light information in the scene as an indirect source of light. Due to the high computational complexity of ray tracing techniques that track the paths of light rays bouncing off surfaces [26], real-time rendering applications employ the concept of light caches [27]. Light caches are samples distributed throughout the 3D geometry of the rendered scene that store reflected light from surfaces, demonstrated as navy blue points in Fig. 2.3. Stored light samples are then interpolated through the scene to compensate for the light information contributed by the indirect light sources. The difference in the visual quality of the rendered scene depending on whether the indirect light contribution is included or disregarded at the rendering stage is illustrated in Fig. 2.4. As we can see, accounting for indirect light sources gives the 3D objects and their shadows a more realistic look.

Figure 2.3 Demonstration of light caches distribution in the rendered scene [27]

Figure 2.4 Effect of indirect light sources on the quality of the rendered scene: without light caches (left), with light caches (right) [27]

Because of the approximations in light modeling, game engines will never be able to accurately replicate real-life textures. This shows up in the number of colors generated in a rendered scene, which is much smaller than the number of colors that can be captured in real life.
Consequently, luminance values in rendered HDR gaming scenes are concentrated in smaller areas of the histogram, as demonstrated in Fig. 2.5(b), and not spread all over the full HDR range, as is the case in real-life captured HDR content, demonstrated in Fig. 2.5(a).

Figure 2.5 Difference in distribution of light information in the histogram of a real-life captured HDR image (a) vs that of a rendered HDR image (b)

2.1.3 Perceptual Quantization

The Perceptual Quantizer (PQ) [28] is an inverse Electro-Optical Transfer Function (EOTF) designed to encode light intensities in a non-linear way with respect to the properties of the Human Visual System (HVS). Our visual system does not perceive differences between consecutive light values equally along the full HDR range. The Just Noticeable Difference (JND) threshold [29,30] is the minimum difference between two consecutive light values that makes them distinguishable to our eyes. This minimum difference threshold increases in a nonlinear way as light values increase, and any two light values whose difference falls below the corresponding JND threshold will be perceived by our eyes as one light value. As such, PQ is designed to convert light values from the physical domain to a perceptually linear domain (i.e., any variation of intensity at any brightness level is seen the same way by the human eye).

An inverse EOTF transforms the captured physical light values to code words for digital pixel representation. A desirable goal is to encode the highest amount of visual information using fewer code words. Hence, as higher perceptual linearity leads to fewer code words, the degree of perceptual linearity of the EOTF has a direct impact on its performance. The gamma encoder, standardized as BT.1886 [31], is an EOTF specifically designed for SDR technology.
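For reference, the PQ encoding discussed above has a simple closed form; the following sketch implements the SMPTE ST 2084 inverse EOTF with the exact constants from the standard:

```python
def pq_encode(luminance_nits):
    # SMPTE ST 2084 (PQ) inverse EOTF: absolute luminance in cd/m^2
    # -> perceptually uniform code value in [0, 1]. Constants are the
    # exact rationals defined in the standard.
    m1 = 2610.0 / 16384.0            # 0.1593017578125
    m2 = 2523.0 / 4096.0 * 128.0     # 78.84375
    c1 = 3424.0 / 4096.0             # 0.8359375
    c2 = 2413.0 / 4096.0 * 32.0      # 18.8515625
    c3 = 2392.0 / 4096.0 * 32.0      # 18.6875
    y = max(luminance_nits, 0.0) / 10000.0   # normalize to the 10,000-nit peak
    y_m1 = y ** m1
    return ((c1 + c2 * y_m1) / (1.0 + c3 * y_m1)) ** m2

# pq_encode(100) is ~0.508: the PQ code value of the 100 cd/m^2 SDR peak,
# matching the 0.5081 output bound quoted for the starting curve in Chapter 3.
```

Note how roughly half of the code values are spent on luminances below 100 cd/m2, which is exactly the perceptual allocation that Fig. 2.6 illustrates.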
The gamma EOTF efficiently encodes luminance values that fall into the SDR luminance range (0.1 to 100 cd/m2). However, it fails to maintain the same encoding efficiency when dealing with the wider luminance range and much larger peak values of HDR (0.005 to 10,000 cd/m2). As such, the Perceptual Quantizer (PQ), standardized as SMPTE ST 2084, emerged in 2012 as the new EOTF that delivers high encoding efficiency over the wide luminance range supported by the new HDR technology. The derivation of PQ is based on the peak sensitivity values of Barten's Contrast Sensitivity Function (CSF) [32], which models the HVS contrast detection threshold with respect to spatial frequency, background luminance and the viewing angle. The way PQ utilizes code words with respect to the supported luminance range is demonstrated in Fig. 2.6.

Figure 2.6 PQ utilization of code words as a function of maximum luminance of the HDR content [28]

2.2 Tone Mapping Operators

2.2.1 Overview

Due to the limitations in contrast and peak luminance reproduced by the widely available Standard Dynamic Range (SDR) displays, the dynamic range and color gamut of HDR gaming content have to be reduced to match the display's capabilities. This is achieved through a process known as tone mapping. Inevitably, tone mapping results in loss of information and lower visual quality in the final SDR content as compared to the originally rendered HDR content. However, as illustrated in Fig. 2.7(b), the amount of detail preserved in the tone-mapped scene is greater than what can be rendered in SDR by the older generation of game engines, as in Fig. 2.7(a).

Figure 2.7 Example of the difference in visual quality between an SDR rendered scene (a) and a tone-mapped HDR rendered scene (b)

Tone Mapping Operators are classified as "offline" and "online" TMOs. Both online and offline TMOs are used during the post-production phase of HDR content, as shown in Fig. 2.8(a).
Those TMOs do not have to be content adaptive or optimized for real-time applications, as they are manually tuned by artists through a grading process of the HDR content in post-production houses. On the other hand, only online TMOs can be used in real-time applications, such as gaming, as shown in Fig. 2.8(b).

Figure 2.8 The choice between using offline and online TMOs in post-production tone-mapping of HDR content (a) versus the constraint of using an online TMO in gaming applications (b)

One of the state-of-the-art online TMOs employed by the video gaming industry is the Real-time Automatic Global TMO by Kiser et al. [34]. This global TMO extends the photographic operator [10] with automated parameter estimation [35] for video applications. In order to utilize the available SDR range and make the TMO less prone to extreme fluctuations in the light values of the input frame, the Kiser TMO clamps the input HDR frame based on the black and white levels of the HDR light histogram. In addition, the Kiser TMO filters the estimated input parameters over time using an exponential averaging low-pass filter. This results in the elimination of global brightness flickering artifacts while delivering automated real-time performance.

The Noise-aware Global TMO by Eilertsen et al. [36] is a video TMO that aims at preserving the contrast of the original HDR content without increasing the noise present in the original content. This TMO divides the input HDR frame into a details layer and a base layer by applying a novel edge-stopping filter. This filter is designed as a unified formulation of bilateral filtering and anisotropic diffusion, and is optimized for real-time applications. A piece-wise-linear function is calculated based on the histogram of the base-layer luminance encoded in the log domain. The details layer is then scaled according to the estimated noise visibility and recombined with the tone-mapped base layer to deliver the final SDR frame.
To ensure smooth brightness changes between consecutive frames, the nodes of the piece-wise-linear function are filtered over time using a low-pass IIR filter. A local version of the above method is also presented in [36] and aims at preserving both the global and local contrast of the original HDR content. This TMO divides the HDR frame into square tiles and tone-maps each tile independently while keeping a 10% correlation between the local tile histogram and the histogram of the full HDR frame. The piece-wise-linear function of each tile is filtered independently over time to reduce brightness incoherence.

All of the described online video TMOs use different methods to reduce brightness flickering. However, it is not clear how these methods handle scene changes. Smoothing the brightness differences between frames belonging to different scenes will result in loss of visual information and loss of the original artistic intent in those frames.

2.2.2 Flickering in Tone Mapped Video

Tone mapping of HDR videos is a relatively new research topic, as HDR video content capturing, rendering and distribution have only recently become widely available. Tone mapping of HDR video differs in many aspects from processing single HDR images, due to the correlation between frames that has to be maintained. Applying image TMOs to video sequences without carefully considering temporal coherence results in visual artifacts such as ghosting and brightness flickering [18,19,20]. Brightness flickering is caused by abrupt brightness differences between consecutive frames. Even the slightest disruption may be well visible due to the sensitivity of the human visual system to temporal changes.

Local TMOs are more prone to ghosting artifacts than global TMOs. In addition, their input parameters require continuous tuning via user interaction on a frame-by-frame basis, which limits their use to offline applications only.
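The temporal node filtering described above can be sketched as a first-order IIR (exponential) low-pass over the tone-curve nodes; `alpha` here is an illustrative smoothing constant, not a value taken from any of the cited operators:

```python
import numpy as np

def smooth_curve_nodes(prev_filtered, new_nodes, alpha=0.9):
    # First-order IIR (exponential) low-pass over tone-curve nodes:
    # blend this frame's freshly computed nodes with the running
    # filtered state. Larger alpha = heavier smoothing, slower response.
    new_nodes = np.asarray(new_nodes, dtype=float)
    if prev_filtered is None:          # first frame: no history yet
        return new_nodes
    return alpha * np.asarray(prev_filtered, dtype=float) + (1.0 - alpha) * new_nodes
```

Blindly applying such a filter across a hard-cut scene change drags the previous scene's curve into the new scene, altering the artistic intent; the filter state must therefore be reset whenever a scene change is detected.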
Although both local and global TMOs are prone to brightness flickering, the problem may be more evident in the former case. Abrupt brightness changes are desirable only when they exist in the original HDR content, i.e., when they are part of the original artistic intent. Traditional and widely used flickering reduction techniques [21,22] try to smooth brightness differences between successive frames through temporal filtering. That, however, leads to reduced visual quality, as the original mapping is compromised, and special care must be taken at scene changes, where such smoothing alters the artistic intent of the original HDR content.

Scene changes can be divided into two types: abrupt (hard-cut) and gradual [33]. Hard-cut scene changes result in larger variations of brightness information between frames than gradual scene changes. Thus, it is desirable to detect hard-cut scene changes in order to preserve the original artistic intent in the tone-mapped content. Therefore, an ideal flickering reduction method should be able to detect hard-cut scene changes and distinguish them from other gradual brightness changes.

Chapter 3: Our Proposed Content Adaptive Tone Mapping Operator

3.1 Introduction

In this thesis, we propose an automated, low-complexity and content adaptive TMO specifically designed for video gaming applications. The proposed TMO follows the behavior of the human visual system (HVS) and produces tone-mapped content that best matches the appearance of the original HDR content. Our method is extended to tone mapping of HDR video sequences by using a novel flickering reduction method that eliminates brightness inconsistencies caused by the tone-mapping process while successfully detecting scene changes. We parameterize our tone-mapping curve as a piece-wise-linear function due to its low computational complexity and its good control over HDR range compression.
We start with a set of slopes that preserve contrast and details at mid-luminance levels while compressing information in highlights and shadows. In order to make our method content adaptive, we modify the slopes of the original mapping function depending on the distribution of information in the input HDR content in the perceptual domain. This is achieved by first decreasing the slopes corresponding to perceptually encoded HDR luminance values with low population, thus decreasing their allocated destination SDR range. Then, we redistribute the detained range to perceptually encoded HDR luminance values with medium/high population by increasing their slopes. In order to preserve as many details as possible while maintaining the original artistic look of the HDR content, we restrict the slope at each perceptually encoded HDR luminance level to a maximum and a minimum value. The restriction on the lower bound of each slope prevents visual information from being totally clipped. Moreover, it ensures that the minimum SDR range allocated to information at each HDR luminance level mimics the minimum sensitivity of the human visual system to details at those levels. Besides, by reserving a minimum SDR range for dark HDR luminance values, the lower bound ensures that bright HDR luminance values will not be mapped to very dark luminance values in SDR, which prevents the SDR content from being perceived as relatively too dark compared to the original bright content. On the other hand, the imposed restriction on the upper bound of each slope prevents contrast exaggeration and the creation of visual noise that was not visible in the original HDR content. In addition, it ensures that HDR luminance values will not be mapped to brighter luminance values in SDR, which prevents the SDR content from being perceived as relatively brighter than the original HDR content. The following subsections describe our tone-mapping method in detail.
3.2 Proposed Method

3.2.1 Overview

The block diagram of the proposed TMO is presented in Fig. 3.1. We start by extracting the luminance channel of the HDR image/frame and perceptually encoding it using the most recent Perceptual Quantizer (PQ) function [28], specifically designed for HDR and standardized as SMPTE ST 2084. We compute the histogram of the perceptually encoded HDR luminance channel, and then divide its bins into two categories: under-populated bins, and medium/high populated bins. The tone-mapping curve is then calculated in two steps. First, we calculate the starting curve, and then we modify its slopes depending on the histogram of the perceptually encoded luminance. Finally, we apply our flickering reduction method, which eliminates brightness inconsistencies caused by the tone-mapping process while successfully detecting scene changes.

Figure 3.1 Block diagram of the proposed TMO

3.2.2 Perceptual Encoding

First, we extract the luminance channel of the input HDR image/frame. Since our human visual system does not perceive differences between consecutive light values equally along the full HDR range, we move from the physical light domain to the perceptual domain, which is a closer representation of the way the human eye perceives those light values. This is achieved by converting the extracted luminance channel to the perceptual domain using the most recent Perceptual Quantizer (PQ) standardized as SMPTE ST 2084. Then, we calculate the histogram of the perceptually encoded light values. We propose to use a histogram with 2048 bins, as each bin's width then corresponds to the Just Noticeable Difference (JND) threshold [29,30], which is the minimum difference between two consecutive light values that makes them distinguishable to our eyes. This histogram represents the distribution of light information in the HDR image/frame along distinguishable light values, and the height of each bin indicates the number of pixels in the input HDR image/frame at that perceptual light level. Figure 3.2 shows an example of such a distribution. The histogram covers the full input HDR light range of 0.005 to 10,000 cd/m2, which corresponds to PQ values between 0 and 1, and the total number of bins corresponds to the total number of distinguishable light values in HDR.

Figure 3.2 Histogram of perceptually encoded light values of an input HDR image/frame

3.2.3 Histogram PQ Bins Classification

In order to make our tone-mapping method content adaptive and preserve the details of the original HDR content, we decrease the SDR range allocated to under-populated bins, and we redistribute the detained range to medium/high populated bins. For this reason, we classify histogram bins based on their population into two categories: under-populated bins, and medium/high populated bins. We calculate the under-populated bins threshold using the maximum entropy thresholding method [37]. Entropy is a measure of the uncertainty of the output of an experiment: the higher the entropy, the more uncertain we are about the output. In other words, we are looking for the bin with the highest uncertainty as to whether it belongs to the under-populated or the medium/high populated category. First, we sort the histogram bins of Fig. 3.2 in ascending order and then we apply maximum entropy thresholding as shown in Fig. 3.3(a). As we go from the smallest bin up to the largest bin, we calculate H0, which measures how uncertain we are that the bin belongs to the under-populated category and not to the medium/high populated category, and H1, which measures how uncertain we are that the bin belongs to the medium/high populated category and not to the under-populated category. As illustrated in Fig. 3.3(b), the under-populated bins threshold is the bin at which the sum of H0 and H1 is maximum. The final categorized histogram is shown in Fig.
3.4, in which under-populated bins are represented in violet and medium/high populated bins in green.

Figure 3.3 Demonstration of the histogram bins sorted in ascending order (a) and the calculated maximum entropy threshold (b)

3.2.4 Slopes Calculation

3.2.4.1 Starting Curve

Since the majority of HDR light information is usually concentrated in the mid-luminance range, from 1 to 250 cd/m2 [28], we start with an "initial" tone-mapping curve that preserves contrast and details of the original HDR content at mid-luminance levels while compressing information in highlights and shadows. This curve is derived by calculating the normalized HVS response in Eq. 1 at an adaptation level equal to the SDR display's maximum brightness, and then scaling it linearly to the maximum and minimum brightness of the SDR display in cd/m2, with n = 1 in our case.

HVS Response = (L_HDR)^n / ((L_HDR)^n + (L_adaptation)^n),    0.7 < n ≤ 1        (1)

where L_HDR represents the input light value of the HDR image/frame in cd/m2, and L_adaptation represents the chosen adaptation light level in cd/m2. As tone mapping is performed in the PQ domain, we map the starting curve calculated in the physical light domain to the PQ domain. We divide the curve into equal segments whose width equals one JND unit, and we calculate the linear slope of each segment.

Figure 3.4 Example of histogram bins categorization

This gives us the starting piece-wise-linear tone-mapping function shown in Fig. 3.5, for a default SDR display with a maximum brightness of 100 cd/m2 and a minimum brightness of 0.1 cd/m2, which corresponds to a PQ range between 0.0623 and 0.5081. In order to generate SDR content that best matches the capabilities of the target display, our proposed TMO takes the target display's maximum and minimum brightness and scales the SDR range (y-axis) accordingly.
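The starting curve construction can be sketched as follows. The PQ constants are the standard SMPTE ST 2084 values; the curve is Eq. 1 with n = 1 and the adaptation level at the display peak, scaled linearly to the display range and re-encoded in PQ. Function and variable names are ours, and the linear normalization of the response to the display range is our reading of the scaling step:

```python
# PQ (SMPTE ST 2084) constants
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_encode(luminance):
    """Inverse EOTF: absolute luminance in cd/m^2 -> PQ value in [0, 1]."""
    y = (luminance / 10000.0) ** M1
    return ((C1 + C2 * y) / (1.0 + C3 * y)) ** M2

def pq_decode(pq_value):
    """EOTF: PQ value in [0, 1] -> absolute luminance in cd/m^2."""
    p = pq_value ** (1.0 / M2)
    return 10000.0 * (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1.0 / M1)

def starting_curve(l_min_sdr=0.1, l_max_sdr=100.0, n_segments=2048):
    """PQ-domain nodes of the starting tone curve (Eq. 1 with n = 1,
    adaptation level = SDR display peak), one segment per JND-sized bin."""
    def response(l_hdr):                      # normalized HVS response, Eq. 1
        return l_hdr / (l_hdr + l_max_sdr)
    r_lo, r_hi = response(pq_decode(0.0)), response(pq_decode(1.0))
    nodes = []
    for k in range(n_segments + 1):
        r = response(pq_decode(k / n_segments))
        # scale the response linearly to the SDR display range, then re-encode
        l_sdr = l_min_sdr + (l_max_sdr - l_min_sdr) * (r - r_lo) / (r_hi - r_lo)
        nodes.append(pq_encode(l_sdr))
    return nodes
```

With the default display parameters, the curve's endpoints land at PQ values of approximately 0.0623 and 0.5081, matching the range quoted above; the slope of segment k is (nodes[k+1] − nodes[k]) divided by the segment width.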
These parameters can be retrieved through the HDMI connection; if they are not available, our TMO uses the default values presented above.

Figure 3.5 Starting piece-wise-linear tone-mapping function in the PQ domain for an input PQ HDR range between 0 and 1 and an output PQ SDR range between 0.0623 and 0.5081 (default parameters)

Figure 3.6 Demonstration of the way the starting curve maps information at different PQ levels from the HDR range in (a) to the limited SDR range in (b)

Figure 3.6 demonstrates how the starting slopes of our mapping function map HDR PQ values to SDR PQ values. We consider a tone-mapping example of HDR content with light values between 0.005 and 4,000 cd/m2 mapped to an SDR range between 0.1 and 100 cd/m2. This corresponds to mapping HDR PQ values between 0.0151 and 0.9026, shown in Fig. 3.6(a), to SDR PQ values between 0.0623 and 0.5081, as shown in Fig. 3.6(b). Blue bins represent information at dark PQ values, green bins represent information at mid-luminance PQ values, and orange and yellow bins represent information at bright PQ values. As illustrated in the mapping example of Fig. 3.6, information in shadows and highlights is mapped to a smaller SDR range, whereas information at mid-luminance levels is allocated a larger range in SDR.

3.2.4.2 Content Adaptive Slopes Readjustment

The starting curve provides a good set of initial slopes; however, in order to achieve the best visual quality in the tone-mapped content, we propose to adjust the slopes of our mapping function depending on the distribution of information in the histogram of the perceptually encoded HDR luminance channel. This is achieved by decreasing the SDR range allocated to under-populated bins, and redistributing the saved range to medium/high populated bins.
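The bin classification of Section 3.2.3, maximum entropy thresholding [37] over the histogram bins sorted by population, can be sketched as follows. This is our reading of the method, with H0 and H1 computed over the normalized populations below and above a candidate split, and illustrative names:

```python
import math

def classify_bins(populations):
    """Maximum entropy thresholding over bins sorted by population.
    Returns a boolean list: True = under-populated bin."""
    ranked = sorted(populations)
    total = float(sum(ranked))
    probs = [x / total for x in ranked]

    def entropy(chunk):
        s = sum(chunk)
        if s <= 0.0:
            return 0.0
        return -sum((x / s) * math.log(x / s) for x in chunk if x > 0)

    best_split, best_h = 1, -1.0
    for t in range(1, len(probs)):                    # candidate split points
        h = entropy(probs[:t]) + entropy(probs[t:])   # H0 + H1
        if h > best_h:
            best_split, best_h = t, h
    threshold = ranked[best_split - 1]   # largest "under-populated" count
    return [pop <= threshold for pop in populations]
```

The quadratic scan over split points is still cheap for 2048 bins; an optimized version would maintain cumulative sums so each candidate split is evaluated in constant time.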
In order to ensure that the proposed slopes readjustment method does not affect the original artistic look of the HDR content, we propose restrictions on the maximum and minimum values within which a slope can be changed, which are discussed in the following subsections.

3.2.4.2.1 Upper Bound of Slopes

A slope of 1 in the PQ domain preserves visual details between two consecutive PQ levels. Increasing the slope above 1 results in contrast exaggeration and the creation of visual information that did not exist in the original HDR content. Hence, we limit the slopes of our mapping function to be less than or equal to 1 in the PQ domain. However, the HDR range starts at darker luminance levels than its SDR counterpart, more specifically 0.005 cd/m2 and 0.1 cd/m2 respectively, with an offset of 0.095 cd/m2 between the two ranges. In the case of dark content, histogram bins associated with those luminance levels have higher population. Clipping all the information at dark luminance values between 0.005 cd/m2 and 0.1 cd/m2 in HDR to 0.1 cd/m2 in SDR causes loss of details, whereas allocating a slope of 1 to those bins maps them to brighter luminance values in SDR, which makes the SDR content look brighter than the original HDR. Figure 3.7 illustrates how allocating a slope of 1 to bins with dark HDR PQ values, represented by the first three dark blue bins in Fig. 3.7(a), maps this information to brighter PQ values in SDR, as shown in Fig. 3.7(b): the first three bins are shifted towards brighter PQ levels. This causes the SDR content to be perceived as brighter than the HDR content, drastically changing the artistic intent.

Figure 3.7 Demonstration of the way information at dark PQ values in the HDR range (a) is shifted towards brighter values when mapped to the SDR range (b)

This problem is not encountered in the case of normal and bright HDR content, as the information is concentrated in bins at brighter PQ levels in HDR.
Hence, allocating a slope of 1 to bins at bright PQ values will not cause loss of the artistic intent by brightening up the SDR content, as those values will always be mapped to darker luminance values in SDR, as illustrated in Fig. 3.8. This problem presents a tradeoff between preserving visual details at dark luminance levels in HDR and preserving the original artistic look in the final tone-mapped SDR content. To address this problem, we propose a new constraint that preserves details at dark luminance levels while minimizing the brightness difference between HDR and SDR in dark areas. This is achieved by restricting the luminance of mapped HDR light values in SDR. The new brightness constraint prevents an input HDR light value (L_HDR) from being mapped to a luminance value in SDR brighter than L_HDR by more than 0.095 cd/m2, which is the offset that separates the HDR and SDR ranges at dark luminance. For example, an input L_HDR = 5 cd/m2 will not be mapped to a value greater than 5.095 cd/m2 in SDR. As the SDR range is limited by the maximum brightness of the SDR display, our constraint only affects the upper bound of slopes corresponding to HDR light values that are smaller than the maximum light value in SDR. Since the mapping process is done in the PQ domain, a new upper bound for the slope between consecutive HDR light values is defined and calculated using Eq. 2 below.

Figure 3.8 Demonstration of the way information at mid and bright PQ levels in the HDR range (a) will always be mapped to darker PQ levels in SDR (b)

Slope_max(L1, L2) = [PQ(L2 + L_Offset) − PQ(L1 + L_Offset)] / [PQ(L2) − PQ(L1)]        (2)

where L1 and L2 correspond to two consecutive input HDR light values in cd/m2, and L_Offset corresponds to the 0.095 cd/m2 offset that separates the HDR and SDR ranges at dark luminance.
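Eq. 2 translates directly into code; the PQ helper below uses the standard SMPTE ST 2084 constants:

```python
# PQ (SMPTE ST 2084) constants
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq(luminance):
    """SMPTE ST 2084 inverse EOTF: cd/m^2 -> PQ value in [0, 1]."""
    y = (luminance / 10000.0) ** M1
    return ((C1 + C2 * y) / (1.0 + C3 * y)) ** M2

def slope_upper_bound(l1, l2, l_offset=0.095):
    """Eq. 2: maximum PQ-domain slope between consecutive HDR light values
    l1 < l2 (in cd/m^2), so that no value is mapped more than l_offset
    cd/m^2 brighter than its input."""
    return (pq(l2 + l_offset) - pq(l1 + l_offset)) / (pq(l2) - pq(l1))
```

Because PQ is concave, the bound is well below 1 at dark levels, where brightening by the full offset would be most visible, and approaches 1 at bright levels, matching the shape of Fig. 3.9.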
Figure 3.9 below shows the new maximum slope values of our mapping function for an SDR display with a minimum brightness of 0.1 cd/m2 and HDR content with a minimum light value of 0.005 cd/m2.

Figure 3.9 Values of the proposed slopes' upper bound at different PQ levels of the input HDR range

Figure 3.10 demonstrates how applying the maximum brightness constraint prevents information at dark PQ values in HDR, Fig. 3.10(a), from being shifted to brighter PQ values when mapped to the SDR range, as shown in Fig. 3.10(b), as opposed to the case in Fig. 3.7(b).

3.2.4.2.2 Lower Bound of Slopes

The human eye's sensitivity to changes in brightness at different luminance levels depends on the eye's adaptation to the viewing environment. We are more sensitive to changes at low luminance levels in a dark setting than we are to the same levels in a bright setting, whereas in a bright setting we are more sensitive to changes at high luminance levels than we are to the same levels in a dark setting [38]. However, there is a minimum sensitivity level below which our eyes will not go, regardless of their adaptation, at any luminance level. Therefore, we restrict the local slope at each PQ level to the minimum sensitivity of our eyes at that level. First, we calculate the normalized response of the HVS at all adaptation levels, starting from the maximum brightness of the SDR display up to the maximum brightness of the HDR content. Each HVS response curve is scaled linearly to the maximum and minimum brightness of the SDR display, and then mapped to the PQ domain.

Figure 3.10 Illustration of how the proposed maximum brightness constraint prevents dark PQ values in HDR (a) from being mapped to brighter PQ values in SDR (b)

At each PQ level, we find the minimum slope allocated by all of the mapping curves. We choose this set of slopes, shown in Fig. 3.11, as the slopes' lower bound. We exclude HVS adaptation to luminance levels below the maximum brightness of the SDR display.
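The lower-bound construction just described can be sketched as follows. The HVS curve repeats Eq. 1 with n = 1, the adaptation levels are log-spaced samples from the SDR display's peak up to the HDR peak (levels below the SDR peak are excluded, as stated above), and the sampling densities are illustrative rather than the 2048-segment resolution used in practice:

```python
# PQ (SMPTE ST 2084) constants and transfer functions
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_encode(luminance):
    y = (luminance / 10000.0) ** M1
    return ((C1 + C2 * y) / (1.0 + C3 * y)) ** M2

def pq_decode(pq_value):
    p = pq_value ** (1.0 / M2)
    return 10000.0 * (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1.0 / M1)

def hvs_curve_pq(l_adapt, l_min_sdr, l_max_sdr, n_segments):
    """PQ-domain nodes of the HVS response curve (Eq. 1, n = 1) at one
    adaptation level, scaled linearly to the SDR display range."""
    def response(l):
        return l / (l + l_adapt)
    r_lo, r_hi = response(pq_decode(0.0)), response(pq_decode(1.0))
    nodes = []
    for k in range(n_segments + 1):
        r = response(pq_decode(k / n_segments))
        l_sdr = l_min_sdr + (l_max_sdr - l_min_sdr) * (r - r_lo) / (r_hi - r_lo)
        nodes.append(pq_encode(l_sdr))
    return nodes

def lower_bound_slopes(l_min_sdr=0.1, l_max_sdr=100.0, l_max_hdr=10000.0,
                       n_segments=64, n_adapt=16):
    """Per-segment minimum slope over HVS curves at adaptation levels from
    the SDR display peak up to the HDR content peak (log-spaced samples)."""
    adapts = [l_max_sdr * (l_max_hdr / l_max_sdr) ** (j / (n_adapt - 1))
              for j in range(n_adapt)]
    width = 1.0 / n_segments
    lower = [float("inf")] * n_segments
    for l_adapt in adapts:
        nodes = hvs_curve_pq(l_adapt, l_min_sdr, l_max_sdr, n_segments)
        for k in range(n_segments):
            lower[k] = min(lower[k], (nodes[k + 1] - nodes[k]) / width)
    return lower
```

Since every adaptation curve spans the same PQ output range, the per-segment minima integrate to no more than that range, so the lower bound always leaves room for the redistribution step that follows.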
This is because in a real-life gaming setting, our eyes will not adapt to a luminance level below the maximum brightness of the display. Such a change in adaptation happens only if the player focuses on a static scene whose maximum brightness is less than the maximum brightness of the display for more than 3 minutes [39]. However, we will not face this issue in a real-life gaming setting, as the brightness of the scene changes continuously according to the game scenario. This restriction on the lower bound of each slope prevents visual information in under-populated bins from being totally clipped. In addition, by reserving a minimum SDR range for dark HDR luminance values, the lower bound ensures that bright HDR luminance values will not be mapped to very dark luminance values in SDR, which prevents the SDR content from being perceived as relatively too dark compared to the original bright content.

Figure 3.11 Values of the proposed slopes' lower bound at different PQ levels of the input HDR range

3.2.4.2.3 Slopes Readjustment Method

In order to achieve the best visual quality in the tone-mapped content, we readjust the slopes of our mapping function depending on the distribution of information in the histogram of the perceptually encoded HDR luminance channel. This is achieved by decreasing the SDR range allocated to under-populated bins, and redistributing the detained range to medium/high populated bins. The histogram of the input HDR image/frame and the starting curve are shown in Fig. 3.12(a). We start by decreasing the slopes of under-populated bins to the slope values defined by the lower bound, as shown in Fig. 3.12(b). Then, we redistribute the detained range by increasing the slopes of medium/high populated bins proportionally to their population, i.e., more range is distributed to highly populated bins than to medium populated ones.
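The redistribution step just described can be sketched as the closed-form solution of the constrained least-squares problem formalized in Eqs. 3 to 6 below. Iterating the exclusion of Eq. 6 until all remaining increases are non-negative is an implementation choice on our part; the inputs are the initial slopes S_i, the upper-bound slopes S_u, the population ratios p_k, the detained range and the bin width δ:

```python
def redistribute(s_i, s_u, p, r_detained, delta):
    """Closed-form percentage slope increase (Eq. 5) for medium/high
    populated bins, excluding bins that violate Eq. 6.
    Returns {bin index: alpha_k}."""
    active = list(range(len(s_i)))
    while True:
        if not active:
            return {}   # nothing left to redistribute
        gap = sum(s_u[k] - s_i[k] for k in active)
        inv_p = sum(1.0 / p[k] for k in active)
        shared = (gap - r_detained / delta) / inv_p   # common Lagrange term
        # Eq. 6 rearranged: keep bin k only if p_k * (S_u,k - S_i,k) >= shared
        dropped = [k for k in active if p[k] * (s_u[k] - s_i[k]) < shared]
        if not dropped:
            break
        active = [k for k in active if k not in dropped]
    return {k: s_u[k] / s_i[k] - 1.0 - shared / (s_i[k] * p[k]) for k in active}
```

For two bins with equal initial slopes, the more populated bin receives the larger increase, and the redistributed range Σ α_k δ S_i,k equals the detained range exactly, as required by the first constraint.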
Redistribution of the detained range and the final tone-mapping function are shown in Fig. 3.12(c) and Fig. 3.12(d), respectively. We model the redistribution method as a minimization problem in which we minimize the difference between the initial slopes S_i of the medium/high populated bins, defined by the starting curve, and the slopes defined by the upper bound, denoted as S_u. We define α as the percentage increase in S_i that brings it as close as possible to S_u. The convex minimization problem is shown in Eq. 3 below.

minimize  E{ ‖S_u − (1 + α)S_i‖² }        (3)
subject to:  Σ_{k=1}^{N} α_k δ S_i,k = R_detained,    α_k ≥ 0

where δ represents each bin's width and N is the number of bins in the medium/high populated category. The first constraint of the minimization problem ensures that the redistributed range does not exceed the detained range R_detained, and the second constraint ensures that the slopes of medium/high populated bins are not decreased. The minimization function in Eq. 3 can be rewritten as Eq. 4, with p_k representing the ratio of each medium/high bin's population to the total number of pixels.

E{ ‖S_u − (1 + α)S_i‖² } = Σ_{k=1}^{N} p_k (S_u,k − (1 + α_k) S_i,k)²        (4)

Figure 3.12 Demonstration of the slopes readjustment method: starting curve (a), range detaining (b), range redistribution (c), and the final mapping curve (d)

Solving the minimization problem in Eq. 3 gives the percentage increase in the slope of each medium/high populated bin, shown in Eq. 5 below.

α_k = S_u,k / S_i,k − 1 − [1 / (S_i,k p_k)] × [Σ_{l=1}^{N} (S_u,l − S_i,l) − R_detained/δ] / [Σ_{l=1}^{N} (1/p_l)]        (5)

The above solution may result in negative values of α_k, violating the second constraint in Eq. 3. Thus, we exclude medium/high populated bins whose probability does not satisfy Eq. 6 below.
p_k ≥ [Σ_{l=1}^{N} (S_u,l − S_i,l) − R_detained/δ] / [(S_u,k − S_i,k) Σ_{l=1}^{N} (1/p_l)]        (6)

Note that the ratio 1/p_l is always valid, with no zero values in the denominator, since the summation Σ_{l=1}^{N} (1/p_l) considers medium and highly populated bins only.

3.2.5 Scene Detection and Flickering Reduction

Tone mapping of HDR video differs in many aspects from processing single HDR images, due to the correlation between frames that has to be maintained. Applying image TMOs to video sequences without carefully considering temporal coherence results in visual artifacts such as brightness flickering [18,19,20]. Brightness flickering is caused by abrupt brightness changes between consecutive SDR frames. Even the slightest disruption may be well visible due to the sensitivity of the human visual system to temporal changes. These abrupt changes are desirable only when they exist in the original HDR content and are part of the original artistic intent. Common and widely used flickering reduction techniques [21,22] try to smooth brightness differences between successive frames by filtering the mapping curves temporally with a low-pass filter. This, of course, affects the efficiency of the mapping process. Besides, those techniques are not theoretically proven to eliminate flickering in all tone-mapping scenarios. In addition, limiting the changes in the tone-mapping curve across scene changes alters the artistic intent of the original HDR content: by smoothing the brightness differences between frames belonging to different scenes, the mapping curve cannot instantaneously adapt to the new scene, resulting in loss of visual information. In this section, we propose a flickering reduction method that eliminates brightness inconsistencies caused by the tone-mapping process while successfully detecting scene changes. Scene changes can be divided into two types: abrupt (hard-cut) and gradual [33].
Hard-cut scene changes result in larger variations of brightness information between frames than gradual scene changes. Thus, it is desirable to detect hard-cut scene changes, and distinguish them from other gradual brightness changes, in order to preserve the original artistic intent while applying our flickering reduction method. We propose to limit changes in the tone-mapping curve to eliminate brightness fluctuations between the original HDR scene and the tone-mapped scene. To achieve this, we limit the variation of each node of our piece-wise-linear function between consecutive frames of the same scene to 1 Just Noticeable Difference (JND) unit in the PQ domain, as illustrated in Fig. 3.13 below. This eliminates brightness inconsistencies caused by the tone-mapping process between consecutive frames of the same scene, as those variations in brightness fall below the detection threshold of temporal changes for human observers [28]. However, regulating changes in the piece-wise-linear function between frames at hard-cut scene changes would cause brightness discontinuity in those frames, altering the original artistic intent. Our method resolves this by disabling the mapping function regulation at hard-cut scene changes.

Figure 3.13 Demonstration of the proposed method of eliminating flickering caused by the mapping functions of two consecutive frames by limiting changes between nodes of the mapping functions to 1 JND

We use the difference in brightness of two consecutive frames as an indicator of a hard-cut scene change due to its accuracy at low computational cost. One indicator of the brightness difference between consecutive frames is the percentage of pixels whose PQ value changes between the two frames. This is calculated using Eq. 7 and illustrated in Fig. 3.14.
% of pixels = (Σ_{k=1}^{N} |B_k − B′_k|) / (2 × Total Number of Pixels)        (7)

where B′_k represents the bin populations of the histogram of the current frame, B_k represents the bin populations of the histogram of the previous frame, and N is the total number of histogram bins. Our experimental results show that a hard-cut scene change is always detected when the percentage of pixels whose PQ value changes exceeds 15%. On the other hand, no hard-cut scene change is detected when this percentage is below 5%. However, when the percentage of pixels whose PQ value changes is between 5% and 15%, a further step has to be taken to differentiate a gradual scene change from a hard-cut scene change.

Figure 3.14 Example of histograms of consecutive frames belonging to different scenes

In order to do that, we calculate the second-order difference, as shown in Eq. 8 below. The second-order difference provides a good indicator of the rate of change of the difference in perceptual brightness between frames. Our experimental results show that a hard-cut scene change can be detected for a second-order difference value greater than 16.

f″(t) = f(t−1) − 2f(t) + f(t+1)        (8)

where f″(t) represents the calculated second-order difference at the current frame, f(t) represents the percentage of pixels whose PQ value changed, calculated at the current frame, f(t−1) the same percentage calculated at the previous frame, and f(t+1) the same percentage calculated at the next frame. It is worth mentioning that experiments conducted over a large database of video sequences with 225 scene changes show that approximately 90% of the hard-cut scene changes correspond to a difference in perceptual brightness that exceeds 15% of the frame pixels.

3.3 Results and Discussion

We evaluated the performance of our TMO in terms of overall visual quality.
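The hard-cut detector of Section 3.2.5 can be sketched as follows, with the 5%, 15% and 16 thresholds taken from the text above; Eq. 8 is implemented as written, with the percentages on a 0 to 100 scale:

```python
def changed_pixel_fraction(hist_prev, hist_cur, total_pixels):
    """Eq. 7: percentage (0-100) of pixels whose PQ bin changed between
    two frames, estimated from their luminance histograms."""
    diff = sum(abs(b_prev - b_cur)
               for b_prev, b_cur in zip(hist_prev, hist_cur))
    return 100.0 * diff / (2.0 * total_pixels)

def is_hard_cut(f_prev, f_cur, f_next):
    """Hard-cut decision: f_* are Eq. 7 percentages at the previous,
    current and next frame. In the ambiguous 5-15% band, fall back to
    the second-order difference of Eq. 8 with threshold 16."""
    if f_cur > 15.0:
        return True
    if f_cur < 5.0:
        return False
    second_order = f_prev - 2.0 * f_cur + f_next   # Eq. 8
    return second_order > 16.0
```

Note that the histogram difference only lower-bounds the number of changed pixels, since pixels that swap bins symmetrically cancel out; this is part of why the ambiguous 5 to 15% band needs the Eq. 8 test.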
Our method is classified as an online TMO and has the potential to be implemented in real time. Without considering I/O operations, our un-optimized, single-threaded Matlab implementation requires on average 0.249579 seconds to process a color frame at 1920x1080 HD resolution, on a computer with an Intel i7 5820k CPU. We performed subjective evaluations comparing the proposed method with the online state-of-the-art TMOs [34,36]. The video sequences used in the subjective tests were generated from the free demo games provided by Epic's Unreal Engine 4, which include: SunTemple, RealisticRendering, Reflections, Blueprints, ElementalDemo and VehicleGame. The peak luminance of each scene is 4,000 cd/m2 and the HDR video frames were extracted in the OpenEXR format [2].

3.3.1 Subjective Evaluation

Since our TMO is a content adaptive online method, we compared it with the state-of-the-art online TMOs: Kiser et al. [34] (denoted as "Kiser") and Eilertsen et al. [36] applied as a global operator (denoted as "Global Eilertsen") and as a local operator (denoted as "Local Eilertsen"). Our subjective tests were performed on six video sequences named Dungan, Basement, Apartment, Statue, Pyramids and Garden. The duration of each video is 10 seconds at a frame rate of 60 frames per second. The first frame of each sequence is shown in Fig. 3.15 at an exposure of 2^-9, which gives an idea of the type of each HDR content. The three displays used in our subjective tests are one Sim2 HDR47E HDR display and two Samsung KS9800 SDR displays. The original HDR video sequences, with a maximum brightness of 4,000 cd/m2, were tone-mapped to an SDR range of 0.1 – 600 cd/m2, which corresponds to the minimum and maximum brightness of the two SDR displays used in the subjective tests.
The HDR luminance range (0 – 10,000 cd/m2) can be divided, according to [28], into four main subranges: dark luminance and shadows (0 – 1 cd/m2), normal luminance (1 – 250 cd/m2), bright luminance (250 – 1,000 cd/m2) and highlights (1,000 – 10,000 cd/m2). Both the "Dungan" and "Basement" sequences are considered dark sequences, with more than 50% of the light information in the dark luminance and shadows and less than 5% in bright luminance and highlights. The "Apartment" sequence is considered brighter than the "Dungan" and "Basement" sequences, as it has only 30% of the light information in the dark luminance and shadows and more information in the normal luminance. The "Statue" sequence is considered a normal sequence, with 90% of the light information in the normal luminance, 8% in highlights and only 2% in shadows. Finally, the "Pyramids" and "Garden" sequences are considered bright sequences. The "Pyramids" sequence has 45% of the light information in normal luminance and 65% in bright luminance and highlights. The "Garden" sequence is even brighter, with more than 75% of the light information in bright luminance and highlights and 20% in normal luminance.

Figure 3.15 First frame of each HDR sequence displayed at an exposure of 2^-9: (a) Dungan, (b) Basement, (c) Apartment, (d) Statue, (e) Pyramids, (f) Garden

We performed two independent subjective tests. The first test included side-by-side evaluation of the tone-mapped videos displayed on the SDR TVs against the original HDR videos displayed as a reference on the HDR TV [40]. The visual fidelity (brightness, color, contrast, details and artistic intent) of the tone-mapped video sequences to the reference HDR video sequences was evaluated on a 1 to 10 scale.
The order of videos in each test session was randomized, and extra care was taken so that the results of the same TMO were not displayed consecutively. We used the method proposed in [40] to detect outliers. In the second test, we evaluated the results of two TMOs at a time, displayed side by side on the two SDR TVs, while the original HDR video was displayed on the HDR TV as a reference. We followed the procedure for simultaneous paired comparison as described in [41]. In this test, each pair of videos consisted of the tone-mapped sequences produced by two of the four TMOs. The viewers had to choose between A, B, or "the same" in case they could not detect any difference between the two stimuli. The video pairs were randomized in each session. Eighteen subjects (11 males and 7 females) participated in the tests, and the average age of the subjects was 24 years. All of the subjects were non-expert viewers, with negligible experience in HDR video subjective testing. All of the subjects were tested for visual acuity and color vision as described in [40]. Each test session included the two described subjective tests, whose order was randomized between sessions. At the beginning of each session, the test procedure and evaluation task were introduced in a training session using a set of different training video sequences. Two outliers were detected and their ratings were discarded from the results.
Figure 3.16: Visual fidelity subjective test results for (a) Dungan, (b) Basement, (c) Apartment, (d) Statue, (e) Pyramids and (f) Garden, comparing the DML, Kiser, Global Eilertsen and Local Eilertsen TMOs.

Figure 3.16 depicts the results of the subjective test in terms of the visual fidelity of the tone-mapped sequences to the original HDR content for all the tested TMOs. The Mean Opinion Score (MOS) for each TMO was calculated as the average score over all subjects, with a 95% confidence interval. We observe that subjects ranked the proposed method (denoted “DML”) as the closest to the original HDR content for all the video sequences. The SDR results generated by the proposed DML TMO are closer to the original HDR than those generated by the Kiser et al. [34] TMO (denoted “Kiser”) by 41.16% on average. Moreover, the SDR results generated by the proposed DML TMO are closer to the original HDR than those generated by the Eilertsen et al. [36] global TMO (denoted “Global Eilertsen”) by 25.82% on average. The results of the proposed DML TMO and the Global Eilertsen TMO are very close for the “Pyramids” sequence, whose light information is almost equally divided between the normal and bright luminance ranges; the Global Eilertsen TMO delivers good results with this type of content. Similarly, the results of the proposed DML TMO and the Kiser TMO are very close for the “Statue” video sequence.
The majority of the light information in this scene is concentrated in the normal luminance range, with less information in the shadows and highlights, and the Kiser TMO delivers good results with this type of content. In addition, subjects noticed slight global brightness flickering in both the “Apartment” and “Garden” sequences when tone-mapped with the Global Eilertsen TMO, which affected their fidelity scores. Finally, the Eilertsen et al. [36] local TMO (denoted “Local Eilertsen”) scored low on all of the sequences due to the tonal discontinuity between tiles of the frame, which is considered one of the most disturbing visual artifacts.

The results of the “Dungan” and “Basement” sequences indicate the efficiency of the proposed upper bound on the slopes, which preserves the original artistic intent while tone-mapping dark scenes. The SDR results of all the other TMOs look much brighter than the original HDR content, which drastically alters the artistic intent. Furthermore, our TMO preserves the artistic intent in bright content as well, and delivers brighter results with more preserved details than the Eilertsen et al. [36] TMO (also a content-adaptive, histogram-based tone-mapping method) in both the “Statue” and “Garden” sequences. This is mainly for two reasons: the PQ domain and the proposed lower bound on the slopes. The PQ domain is more perceptually linear than the Log domain used in the Eilertsen et al. [36] TMO, especially in the bright luminance range. This helps us preserve information at bright luminance levels that are more visible to our eyes. In addition, by reserving a minimum SDR range for dark HDR luminance values, the proposed lower bound on the slopes ensures that normal HDR luminance values will not be mapped to very dark values in SDR. This preserves the artistic intent in bright HDR content and guarantees that the SDR content will not look too dark relative to the original HDR content.
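The PQ domain referred to above is the perceptual quantizer standardized in SMPTE ST 2084 / Rec. BT.2100 [24]. As an illustration, the forward (inverse-EOTF) encoding that maps absolute luminance into the perceptually uniform PQ signal can be written as:

```python
def pq_encode(luminance_cd_m2):
    """SMPTE ST 2084 inverse EOTF: map absolute luminance
    (0 to 10,000 cd/m^2) to a PQ code value in [0, 1]."""
    m1 = 2610 / 16384        # 0.1593017578125
    m2 = 2523 / 4096 * 128   # 78.84375
    c1 = 3424 / 4096         # 0.8359375
    c2 = 2413 / 4096 * 32    # 18.8515625
    c3 = 2392 / 4096 * 32    # 18.6875
    y = max(luminance_cd_m2, 0.0) / 10000.0
    y_m1 = y ** m1
    return ((c1 + c2 * y_m1) / (1.0 + c3 * y_m1)) ** m2
```

For example, 100 cd/m², a typical SDR reference white, maps to a PQ value of roughly 0.51, which shows how strongly PQ spends its code range on the luminance levels our eyes resolve best.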
Figure 3.17: Side-by-side subjective test results for (a) Dungan, (b) Basement, (c) Apartment, (d) Statue, (e) Pyramids and (f) Garden, showing the fraction of subjects who preferred the proposed method over Kiser, Global Eilertsen and Local Eilertsen.

Figure 3.17 depicts the results of the side-by-side subjective evaluation, using two SDR displays, while providing the original HDR video as a reference. The results are presented as the percentage of subjects who chose our method (y-axis) over each of the other three methods (x-axis), for each video sequence. The proposed TMO outperforms all of the tested TMOs on all of the video sequences. We notice that, for the majority of the video sequences, 90% – 100% of the subjects found the SDR sequence tone-mapped by our TMO to be closer to the HDR reference. The large error bars for Kiser in the “Statue” sequence and for Global Eilertsen in the “Pyramids” sequence indicate that a significant number of subjects graded the SDR videos generated by our TMO and the compared TMO as “equal”. However, the majority of the remaining subjects preferred the SDR results generated by our proposed TMO. This reconfirms the very close results of the fidelity test between the proposed TMO and Global Eilertsen for the “Pyramids” sequence, and between the proposed TMO and Kiser for the “Statue” sequence.
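The percentages in Fig. 3.17 come from tallying the A / B / “same” votes of the paired-comparison test. A trivial sketch of that bookkeeping (the vote data in the test below is hypothetical):

```python
from collections import Counter

def preference_fractions(votes):
    """Fraction of 'A', 'B' and 'same' votes collected in a
    simultaneous paired-comparison session."""
    counts = Counter(votes)
    return {k: counts.get(k, 0) / len(votes) for k in ("A", "B", "same")}
```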
It is worth mentioning that preserving the global contrast and preserving the global brightness of the HDR content are inversely related, which shows in the SDR results of the “Statue” sequence generated by the Kiser TMO and the DML TMO. The SDR result generated by the DML TMO preserves the global contrast of the HDR content better than Kiser does. However, this comes at the cost of a slightly darker global luminance, which explains the indecision of some subjects who graded both SDR results as “equally” close to the original HDR content.

3.3.2 Flickering Analysis

In this section, we evaluate the performance of our scene detection and flickering elimination methods. We created a 10-second HDR video, at a frame rate of 60 frames per second, with hard-cut scene changes by combining 120 frames (2 seconds) from each of the following video sequences, in the order shown in Fig. 3.18: Dungan, Garden, Basement, Pyramids and Apartment.

Figure 3.18: The video sequence used in the flickering reduction analysis, along with the order of the scenes.

Fig. 3.19 shows the effect of flickering elimination on preserving global brightness consistency between frames at scene changes. Fig. 3.19(a) shows the geometric mean of the SDR video sequence generated by applying the proposed DML method with scene detection (orange) against that of the original HDR sequence (blue). The geometric means of the SDR video sequence generated by applying the proposed DML method without scene detection (orange) and of the original HDR sequence (blue) are shown in Fig. 3.19(b). We observe that continued application of the proposed flickering reduction method without scene detection results in brightness inconsistencies in the tone-mapped video sequence at scene changes. This shows as an abrupt change in the geometric mean of the SDR sequence at scene cuts. Such behavior was avoided by applying the proposed flickering reduction method together with scene-change detection.

Figure 3.19: Geometric mean (in the PQ domain) of the flickering test results, applying our method with scene detection (a) and without scene detection (b).

A visual representation of the global brightness inconsistency between frames at scene changes, introduced by applying the proposed flickering reduction method without detecting scene changes, is shown in Fig. 3.20. Figure 3.21 shows the results of applying the same method while detecting scene changes.

Figure 3.20: Brightness incoherence introduced at a scene change by applying the proposed flickering reduction method without detecting scene changes.

Figure 3.21: Preservation of brightness coherence between frames by applying the proposed flickering reduction method while detecting scene changes.

At the scene-change frames between the dark scene “Dungan” and the bright scene “Garden” in Fig. 3.20, the mapping curve is first adapted to preserve the HDR visual information at low luminance levels. However, the following frames belong to a much brighter scene, where most of the HDR information lies in the middle-to-high luminance levels. Applying the flickering reduction method across scene changes prevents the curve from instantly adapting to the new scene. This results in brightness incoherence between frames at scene changes.
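The failure mode just described, and the fix evaluated here, can be sketched as a temporal filter over per-frame mapping curves that is reset whenever a scene cut is detected. This is only an illustration: the thesis follows [33] for scene detection, whereas the simplified detector below merely thresholds the jump in the PQ-domain geometric mean (the threshold and smoothing factor are our own choices):

```python
import numpy as np

def smooth_tone_curves(curves, frame_gm, cut_threshold=0.1, alpha=0.9):
    """Temporally smooth per-frame tone curves, resetting at scene cuts.

    curves:   (T, K) array, one mapping curve (K samples) per frame
    frame_gm: length-T sequence of each frame's PQ-domain geometric mean
    A geometric-mean jump larger than cut_threshold is treated as a hard
    cut, and the IIR smoothing is restarted from the new frame's curve.
    """
    curves = np.asarray(curves, dtype=np.float64)
    out = np.empty_like(curves)
    out[0] = curves[0]
    for t in range(1, len(curves)):
        if abs(frame_gm[t] - frame_gm[t - 1]) > cut_threshold:
            out[t] = curves[t]  # scene cut: adapt to the new scene instantly
        else:
            out[t] = alpha * out[t - 1] + (1 - alpha) * curves[t]
    return out
```

When the threshold fires at a cut, the new scene's curve is adopted immediately instead of being dragged toward the previous scene's curve, which is exactly the behavior contrasted in Figs. 3.20 and 3.21.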
On the other hand, by applying the flickering reduction method while detecting scene changes, we successfully avoid limiting the changes in the mapping curves at scene changes, as shown in Fig. 3.21, thus maintaining brightness consistency and the artistic intent.

In summary, our tone-mapping approach is an automated, low-complexity and content-adaptive video TMO, which uses the distribution of HDR light information in the perceptual domain and takes advantage of the unique properties of rendered HDR gaming content to efficiently preserve the global contrast and texture details of the original HDR content. In addition, our method has the potential to be implemented in real time, as our Matlab implementation requires on average 0.249579 seconds to process a color frame at 1920x1080 HD resolution. Finally, our subjective test results show that we outperform all existing state-of-the-art online TMOs in delivering the best SDR visual quality and preserving the original artistic intent.

Chapter 4: Conclusion and Future Work

4.1 Conclusion

In this thesis, we address the backward compatibility of the emerging HDR technology and its challenges in delivering the best SDR quality for video gaming applications, by investigating and improving upon the visual quality of the SDR content generated by state-of-the-art online TMOs. In Chapter 3, we proposed an automated, low-complexity and content-adaptive video TMO. The proposed method uses the distribution of HDR light information in the perceptual domain and takes advantage of the unique properties of rendered HDR gaming content to efficiently preserve the global contrast and texture details of the original HDR scene in the generated SDR scene. In addition, we proposed a flickering reduction method that eliminates the brightness inconsistencies caused by the tone-mapping process while successfully detecting scene changes. Subjective evaluations show that our approach outperforms the state-of-the-art online TMOs.
An average of 95% of the subjects preferred the SDR content generated by our TMO over that of the state-of-the-art online TMOs. In addition, statistical evaluation showed that the proposed flickering reduction method, in combination with our scene detection approach, efficiently eliminates the brightness inconsistencies caused by the tone-mapping process while successfully detecting scene changes.

4.2 Future Work

Converting HDR content to the SDR format also involves restricting the scope of HDR color values to match what SDR display technologies can support. The range of color values supported by SDR displays is described by ITU-R Recommendation BT.709 [44], more commonly known by the abbreviation BT.709. This standard uses 8 bits to cover approximately 35.9% of the full visible gamut. With the emerging HDR technology, a larger color gamut was introduced, described by ITU-R Recommendation BT.2020 [45]. BT.2020 uses 10 or 12 bits to cover 75.8% of the full visible gamut. The majority of online TMOs compress the dynamic range of the luminance channel and scale the color channels accordingly, without taking into account how each color is mapped from the larger BT.2020 gamut to the smaller BT.709 gamut, which causes some colors in SDR to look more saturated than the original colors in HDR. Several color correction methods [42, 43] have been proposed over the years; however, they are unsuitable for real-time applications due to their high computational complexity. As such, we plan to address the color correction challenge for real-time applications in the YCbCr color space, which is supported by state-of-the-art compression standards.

Bibliography

1. R. Boitard, M. T. Pourazad, P. Nasiopoulos and J. Slevinsky, "Demystifying High-Dynamic-Range Technology: A new evolution in digital media," IEEE Consumer Electronics Magazine, vol. 4, pp. 72-86, Oct. 2015.
2. OpenEXR: http://www.openexr.com/.
3. A. Chalmers and K. Debattista, "HDR video past, present and future: A perspective," Signal Processing: Image Communication, vol. 54, pp. 49-55, May 2017.
4. E. Reinhard, W. Heidrich, P. Debevec, S. Pattanaik, G. Ward, and K. Myszkowski, High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting. San Francisco, CA: Morgan Kaufmann, 2010, p. 672.
5. E. Gastal and M. Oliveira, "Domain transform for edge-aware image and video processing," ACM Transactions on Graphics, vol. 30, no. 4, p. 1, Aug. 2011.
6. F. Durand and J. Dorsey, "Fast bilateral filtering for the display of high-dynamic-range images," ACM Transactions on Graphics, vol. 21, no. 3, Jul. 2002.
7. S. Pattanaik and H. Yee, "Adaptive gain control for high dynamic range image display," in Proc. 18th Spring Conference on Computer Graphics, Budmerice, Slovakia, pp. 83-87, 2002.
8. M. Čadík, M. Wimmer, L. Neumann and A. Artusi, "Evaluation of HDR tone mapping methods using essential perceptual attributes," Computers & Graphics, vol. 32, no. 3, pp. 330-349, Jun. 2008.
9. G. Larson, H. Rushmeier and C. Piatko, "A visibility matching tone reproduction operator for high dynamic range scenes," IEEE Transactions on Visualization and Computer Graphics, vol. 3, no. 4, pp. 291-306, Oct. 1997.
10. E. Reinhard, M. Stark, P. Shirley and J. Ferwerda, "Photographic tone reproduction for digital images," ACM Transactions on Graphics, vol. 21, no. 3, Jul. 2002.
11. F. Drago, K. Myszkowski, T. Annen and N. Chiba, "Adaptive Logarithmic Mapping For Displaying High Contrast Scenes," Computer Graphics Forum, vol. 22, no. 3, pp. 419-426, 2003.
12. A. Akyüz and E. Reinhard, "Perceptual evaluation of tone-reproduction operators using the Cornsweet-Craik-O'Brien illusion," ACM Transactions on Applied Perception, vol. 4, no. 4, pp. 1-29, 2008.
13. P. E. Debevec and J. Malik, "Recovering high dynamic range radiance maps from photographs," in Proc. 24th Annu. Conf. Computer Graphics and Interactive Techniques (SIGGRAPH '97), New York, 1997, pp. 369-378.
14. S. Nayar and T. Mitsunaga, "High dynamic range imaging: Spatially varying pixel exposures," in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR 2000), 2000, pp. 472-479.
15. M. D. Tocci, C. Kiser, N. Tocci, and P. Sen, "A versatile HDR video production system," ACM Trans. Graph., vol. 30, no. 4, pp. 41:1-41:10, 2011.
16. M. Pourazad, C. Doutre, M. Azimi, and P. Nasiopoulos, "HEVC: The new gold standard for video compression: How does HEVC compare with H.264/AVC?" IEEE Consumer Electron. Mag., vol. 1, pp. 36-46, 2012.
17. D. Touze, S. Lasserre, Y. Olivier, R. Boitard, and E. Francois, "HDR video coding based on local LDR quantization," in Proc. 2nd Int. Conf. SME Workshop on HDR Imaging, pp. 1-6, 2014.
18. G. Eilertsen, R. Wanat, R. Mantiuk and J. Unger, "Evaluation of Tone Mapping Operators for HDR-Video," Computer Graphics Forum, vol. 32, no. 7, pp. 275-284, 2013.
19. R. Boitard, R. Cozot, D. Thoreau and K. Bouatouch, "Survey of temporal brightness artifacts in video tone mapping," in Proc. 2nd Int. Conf. SME Workshop on HDR Imaging (HDRi 2014), pp. 1-6, 2014.
20. G. Eilertsen, R. Mantiuk and J. Unger, "A comparative review of tone-mapping algorithms for high dynamic range video," Computer Graphics Forum, vol. 36, no. 2, pp. 565-592, 2017.
21. B. Guthier, S. Kopf, M. Eble, and W. Effelsberg, "Flicker reduction in tone mapped high dynamic range video," Color Imaging XVI: Displaying, Processing, Hardcopy, and Applications, 2011.
22. A. Koz and F. Dufaux, "Optimized tone mapping with flickering constraint for backward-compatible high dynamic range video coding," in 14th International Workshop on Image Analysis for Multimedia Interactive Services (WIAMIS), pp. 1-4, 2013.
23. K. Naka and W. Rushton, "An attempt to analyse colour reception by electrophysiology," J. Physiol., vol. 185, no. 3, pp. 556-586, 1966.
24. ITU-R Recommendation BT.2100-1, "Image parameter values for high dynamic range television for use in production and international programme exchange," International Telecommunication Union, Geneva, Jun. 2017.
25. M. Lenoch and C. Wohler, "Reflectance-based 3D shape refinement of surfaces with spatially varying BRDF properties," 9th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS), pp. 1-6, 2016.
26. I. P. Shkarofsky and S. B. Nickerson, "Computer modeling of multipath propagation: Review of ray-tracing techniques," Radio Science, vol. 17, no. 5, pp. 1133-1158, Sept.-Oct. 1982.
27. P. Sloan, J. Kautz, and J. Snyder, "Precomputed radiance transfer for real-time rendering in dynamic, low-frequency lighting environments," ACM Transactions on Graphics, pp. 527-536, 2002.
28. S. Miller, M. Nezamabadi and S. Daly, "Perceptual Signal Coding for More Efficient Usage of Bit Codes," SMPTE Motion Imaging Journal, vol. 122, no. 4, pp. 52-59, 2013.
29. P. G. J. Barten, Contrast Sensitivity of the Human Eye and its Effects on Image Quality. Bellingham, WA: SPIE Optical Engineering Press, 1999.
30. P. G. J. Barten, "Formula for the contrast sensitivity of the human eye," Proc. SPIE-IS&T, vol. 5294, pp. 231-238, Jan. 2004.
31. ITU-R Recommendation BT.1886, "Reference Electro-optical Transfer Function for Flat Panel Displays used in HDTV Studio Production," International Telecommunication Union, Geneva, Mar. 2011.
32. P. G. J. Barten, "Formula for the contrast sensitivity of the human eye," in Electronic Imaging 2004, San Jose, California, United States, 2004, pp. 231-238.
33. C.-L. Huang and B.-Y. Liao, "A robust scene-change detection method for video segmentation," IEEE Transactions on Circuits and Systems for Video Technology, vol. 11, no. 12, pp. 1281-1288, Dec. 2001.
34. C. Kiser, E. Reinhard, M. Tocci, and N. Tocci, "Real time automated tone mapping system for HDR video," in Proc. IEEE International Conference on Image Processing (ICIP), Orlando, USA, 2012.
35. E. Reinhard, "Parameter estimation for photographic tone reproduction," Journal of Graphics Tools, vol. 7, pp. 45-51, 2002.
36. G. Eilertsen, R. Mantiuk, and J. Unger, "Real-time noise aware tone mapping," ACM Transactions on Graphics, 2015.
37. W. Burger and M. Burge, Digital Image Processing, 2nd ed. London: Springer, 2016, pp. 263-266.
38. D. C. Hood, M. A. Finkelstein, and E. Buckingham, "Psychophysical tests of models of the response function," Vision Research, pp. 401-406, 1979.
39. J. Ferwerda, S. Pattanaik, P. Shirley, and D. Greenberg, "A model of visual adaptation for realistic image synthesis," in Proc. 23rd Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '96), pp. 249-258, 1996.
40. ITU-R Recommendation BT.500-11, "Methodology for the Subjective Assessment of the Quality of Television Pictures," International Telecommunication Union, Geneva, Mar. 2002.
41. J.-S. Lee, L. Goldmann and T. Ebrahimi, "A new analysis method for paired comparison and its application to 3D quality assessment," in Proc. ACM Multimedia, pp. 1281-1284, 2011.
42. T. Pouli, A. Artusi, F. Banterle, E. Reinhard, A. O. Akyüz and H. P. Seidel, "Color Correction for Tone Reproduction," in Proc. 21st IS&T Color Imaging Conference, 2013.
43. J. Kuang, G. M. Johnson and M. D. Fairchild, "iCAM06: A refined image appearance model for HDR image rendering," J. Vis. Commun. Image Represent., vol. 18, no. 5, pp. 406-414, 2007.
44. ITU-R Recommendation BT.709-3, "Parameter values for the HDTV standards for production and international programme exchange," International Telecommunication Union, 1998.
45. ITU-R Recommendation BT.2020, "Parameter values for ultra-high definition television systems for production and international programme exchange," International Telecommunication Union, 2012.
